Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis and J. van Leeuwen
2082
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Singapore Tokyo
Michael F. Insana Richard M. Leahy (Eds.)
Information Processing in Medical Imaging 17th International Conference, IPMI 2001 Davis, CA, USA, June 18–22, 2001 Proceedings
Series Editors Gerhard Goos, Karlsruhe University, Germany Juris Hartmanis, Cornell University, NY, USA Jan van Leeuwen, Utrecht University, The Netherlands Volume Editors Michael F. Insana University of California, Biomedical Engineering One Shields Avenue, Davis, CA 95616, USA E-mail:
[email protected] Richard M. Leahy University of Southern California, Signal and Image Processing Institute 3740 McClintock Avenue, Los Angeles, CA 90089-2564, USA E-mail:
[email protected] Cataloging-in-Publication Data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Information processing in medical imaging : 17th international conference ; proceedings / IPMI 2001, Davis, CA, USA, June 18 - 22, 2001. Michael F. Insana ; Richard M. Leahy (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Singapore ; Tokyo : Springer, 2001 (Lecture notes in computer science ; Vol. 2082) ISBN 3-540-42245-5
CR Subject Classification (1998): I.4, I.2.5-6, I.5, J.1, I.3 ISSN 0302-9743 ISBN 3-540-42245-5 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2001 Printed in Germany Typesetting: Camera-ready by author, data conversion by Boller Mediendesign Printed on acid-free paper SPIN: 10839299 06/3142 543210
Preface
The 17th biennial International Conference/Workshop on Information Processing in Medical Imaging (IPMI) was held June 18–22, 2001, on the campus of the University of California, Davis. Following the successful meeting in beautiful Visegrád, Hungary, this year’s conference summarized important developments in a broad range of topics regarding the acquisition, analysis, and application of information from medical images. Seventy-eight full manuscripts were submitted to the conference; of these, twenty-two were accepted as oral presentations in six sessions of three or four papers each. Thirty-two excellent submissions that could not be accommodated as oral presentations were presented as posters. Manuscripts from oral presentations were limited to 14 pages, whereas those from poster presentations were limited to 7 pages. Every effort was made to maintain those traditional features of IPMI that have made this conference a unique and exciting experience since the first in 1969. First, papers are presented in single-track sessions, followed by discussion that is unbounded with respect to the schedule. Although unlimited discussion ruins carefully planned meal schedules, many participants welcome the rich, detailed descriptions of essential techniques that often emerge from the discussions. For that reason, IPMI is often viewed as a workshop in contrast to the constrained schedules of most conferences. Second, the focus at IPMI has been to encourage the participation of young investigators, loosely described as students, postdocs, and junior faculty under 35 years of age who are presenting at IPMI for the first time. Looking back to our first encounters at IPMI in the 1980s, we co-chairs remember the challenge and thrill of having our senior colleagues probe deeply into the science and engineering that authors spent so much time advancing and refining.
Truly, this format nurtures new talent in a way that encourages the brightest investigators to engage and advance medical image science. Third, the setting and dress have always been casual, which promotes collegiality and an exchange of information unfettered by the usual formalities. This year, the conference was held on the UC Davis campus, where attendees stayed together in university housing. The casual approach helps organizers keep costs low, thus encouraging young investigator participation. Of course, the tradition of carrying on discussion into the evening over a beer, this year at Cantina del Cabo in Davis, was a pleasant experience for many. We also took Wednesday afternoon off to enjoy tours in the wine country of Northern California and dinner at the elegant Soga’s restaurant. We organizers also assumed the responsibility of looking forward by encouraging new topics, new authors, and new format elements. First, most sessions at this conference opened with a half-hour talk by a senior investigator who introduced the topics. With the diversity of topics, the depth of presentation, and a
large number of young investigators, the co-chairs thought it would be helpful to experiment with session introductions that provided a high-level review of the topic. Second, we invited a plenary speaker, Sanjiv Gambhir from UCLA, to review the exciting advances in multimodality molecular imaging. Sam’s interests involve the use of multiple imaging techniques, including X-ray CT, autoradiography, optical fluorescence imaging, and PET, to explore biochemical and physiological processes in animals and humans. These exciting new techniques include the use of molecular probes, e.g., radiolabelled antisense oligonucleotides, for in vivo imaging of gene expression with PET. The future of medical imaging will require those of us developing methodologies to extend our systems and techniques to include the molecular nanoscale, a formidable challenge indeed. Third, we were happy and surprised by many outstanding submissions in the areas of image quality assessment, molecular and diffusion tensor imaging, and fMRI/EEG/MEG approaches. These three of six session topics reflect the organizers’ and program committee’s desire to extend the topics of IPMI beyond its traditional strengths in image analysis and computer vision, while maintaining an emphasis on mathematical approaches. These changes are experimental and may not survive to become part of the IPMI tradition. Nevertheless, we hope the attendees view these experiments as reflections of the sense of adventure that characterizes IPMI’s approach to imaging research. As we write this preface, threats of rolling blackouts loom ominously throughout our state during the summer months. Perhaps the conference staff should be looking into bicycle-powered generators to run the LCD projectors and air conditioners.
Instead we have limited our preparation to hoping that California can transcend third-world status before June, while we eagerly await the scientific program and hope it can approach the exciting, enriching experiences provided to us by our conference co-chair predecessors.
March 2001
Michael F. Insana Richard M. Leahy
Acknowledgements
The XVIIth IPMI conference was made possible by the efforts of many hardworking individuals and generous organizations. First, the organizers wish to thank the Scientific Program Committee for their critical reviews that determined the content of the program. Considering they were asked to review an average of 10 full manuscripts in December near the holidays, their efforts were truly heroic. We also extend our gratitude to all authors who submitted papers to the conference, and our regrets to those we turned down, often because of time constraints. We gratefully acknowledge the assistance of the Conference and Event Services staff at UC Davis, particularly Teresa Brown who coordinated most aspects of conference logistics. Michael Insana wishes to thank Terry Griffin at UCD who helped organize communications with authors and attendees. Richard Leahy expresses his gratitude to David Shattuck, Karim Jerbi, and Evren Asma at USC for taking time from their research to provide expert assistance in compiling and checking the proceedings. Finally, we express our appreciation of financial support from the following organizations:
The Whitaker Foundation The National Institutes of Health Department of Biomedical Engineering, UC Davis Signal and Image Processing Institute, USC Anonymous Friends of Medical Imaging
François Erbsmann Prize Winners
1987 10th IPMI, Utrecht, The Netherlands
John M. Gauch, Dept. of Computer Science, University of North Carolina, Chapel Hill, NC, USA
JM Gauch, WR Oliver, SM Pizer: Multiresolution shape descriptions and their applications in medical imaging.

1989 11th IPMI, Berkeley, CA, USA
Arthur F. Gmitro, Dept. of Radiology, University of Arizona, Tucson, AZ, USA
AF Gmitro, V Tresp, V Chen, Y Snell, GR Gindi: Video-rate reconstruction of CT and MR images.

1991 12th IPMI, Wye (Kent), UK
H. Isil Bozma, Dept. of Electrical Engineering, Yale University, New Haven, CT, USA
HI Bozma, JS Duncan: Model-based recognition of multiple deformable objects using a game-theoretic framework.

1993 13th IPMI, Flagstaff, AZ, USA
Jeffrey A. Fessler, Division of Nuclear Medicine, University of Michigan, Ann Arbor, MI, USA
JA Fessler: Tomographic reconstruction using information-weighted spline smoothing.

1995 14th IPMI, Brest, France
Maurits K. Konings, Dept. of Radiology and Nuclear Medicine, University Hospital Utrecht, The Netherlands
MK Konings, WPTM Mali, MA Viergever: Design of a robust strategy to measure intravascular electrical impedance.

1997 15th IPMI, Poultney, VT, USA
David Atkinson, UMDS, Radiological Sciences, Guy’s Hospital, London, UK
D Atkinson, DLG Hill, PNR Stoyle, PE Summers, SF Keevil: An autofocus algorithm for the automatic correction of motion artifacts in MR images.

1999 16th IPMI, Visegrád, Hungary
Liana M. Lorigo, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
LM Lorigo, O Faugeras, WEL Grimson, R Keriven, R Kikinis, C-F Westin: Co-dimension 2 geodesic active contours for MRA segmentation.
Conference Committee
Chairs
Michael F. Insana, University of California, Davis, USA
Richard M. Leahy, University of Southern California, USA
Scientific Committee
Christian Barillot, INRIA/CNRS, France
Harrison H. Barrett, University of Arizona, USA
Yves Bizais, Université de Bretagne Occidentale, France
Michael Brady, Oxford University, UK
Gary Christensen, University of Iowa, USA
Alan Colchester, University of Kent, UK
D. Louis Collins, McGill University, Canada
James S. Duncan, Yale University, USA
Jeffrey A. Fessler, University of Michigan, USA
Guido Gerig, University of North Carolina, Chapel Hill, USA
Gene Gindi, State University of New York, Stony Brook, USA
David Hawkes, Guy’s Hospital, London, UK
Derek Hill, Guy’s Hospital, London, UK
Nico Karssemeijer, University Hospital Nijmegen, The Netherlands
Frithjof Kruggel, Max-Planck-Institute of Cognitive Neuroscience, Germany
Attila Kuba, Jozsef Attila University, Hungary
Nicholas Lange, McLean Hospital, Belmont, MA, USA
Kyle J. Myers, Food and Drug Administration, USA
Stephen M. Pizer, University of North Carolina, USA
Jerry L. Prince, Johns Hopkins University, USA
Martin Samal, Charles University Prague, Czech Republic
Milan Sonka, University of Iowa, USA
Chris Taylor, University of Manchester, UK
Andrew Todd-Pokropek, University College London, UK
Max A. Viergever, University Hospital Utrecht, The Netherlands
The 1999 IPMI Board
Yves Bizais, Harrison Barrett, Randy Brill, Alan Colchester, Stephen Bacharach, Frank Deconinck, Robert DiPaola, James Duncan, Michael Goris, Attila Kuba, Doug Ortendahl, Stephen Pizer, Andrew Todd-Pokropek, Max Viergever
Table of Contents
Objective Assessment of Image Quality On the Difficulty of Detecting Tumors in Mammograms . . . . . . . . . . . . . . . . . 1 Arthur E. Burgess, Francine L. Jacobson, Philip F. Judy
Objective Comparison of Quantitative Imaging Modalities Without the Use of a Gold Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 John Hoppin, Matthew Kupinski, George Kastis, Eric Clarkson, Harrison H. Barrett Theory for Estimating Human-Observer Templates in Two-Alternative Forced-Choice Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 Craig K. Abbey, Miguel P. Eckstein
Shape Modeling The Active Elastic Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 Xenophon Papademetris, E. Turan Onat, Albert J. Sinusas, Donald P. Dione, R. Todd Constable, James S. Duncan A Minimum Description Length Approach to Statistical Shape Modelling . 50 Rhodri H. Davies, Tim F. Cootes, Chris J. Taylor Multi-scale 3-D Deformable Model Segmentation Based on Medial Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 Sarang Joshi, Stephen Pizer, P. Thomas Fletcher, Andrew Thall, Gregg Tracton Automatic 3D ASM Construction via Atlas-Based Landmarking and Volumetric Elastic Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 Alejandro F. Frangi, Daniel Rueckert, Julia A. Schnabel, Wiro J. Niessen
Molecular and Diffusion Tensor Imaging A Regularization Scheme for Diffusion Tensor Magnetic Resonance Images . . . . 92 Olivier Coulon, Daniel C. Alexander, Simon R. Arridge
Distributed Anatomical Brain Connectivity Derived from Diffusion Tensor Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 Geoffrey J.M. Parker, Claudia A.M. Wheeler-Kingshott, Gareth J. Barker
Study of Connectivity in the Brain Using the Full Diffusion Tensor from MRI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 Philipp G. Batchelor, Derek L.G. Hill, Fernando Calamante, David Atkinson
Poster Session I: Registration and Structural Analysis Incorporating Image Processing in a Clinical Decision Support System . . . . 134 Paul Taylor, Eugenio Alberdi, Richard Lee, John Fox, Margarita Sordo, Andrew Todd-Pokropek Automated Estimation of Brain Volume in Multiple Sclerosis with BICCR . . . . 141 D. Louis Collins, Johan Montagnat, Alex P. Zijdenbos, Alan C. Evans, Douglas L. Arnold Automatic Image Registration for MR and Ultrasound Cardiac Images . . . 148 Caterina M. Gallippi, Gregg E. Trahey Estimating Sparse Deformation Fields Using Multiscale Bayesian Priors and 3-D Ultrasound . . . . 155 Andrew P. King, Philipp G. Batchelor, Graeme P. Penney, Jane M. Blackall, Derek L.G. Hill, David J. Hawkes Automatic Registration of Mammograms Based on Linear Structures . . . . 162 Robert Marti, Reyer Zwiggelaar, Caroline Rubin Tracking Brain Deformations in Time-Sequences of 3D US Images . . . . 169 Xavier Pennec, Pascal Cachier, Nicholas Ayache Robust Multimodal Image Registration Using Local Frequency Representations . . . . 176 Baba C. Vemuri, Jundong Liu, José L. Marroquin Steps Toward a Stereo-Camera-Guided Biomechanical Model for Brain Shift Compensation . . . . 183 Oskar Škrinjar, Colin Studholme, Arya Nabavi, James Duncan
Poster Session I: Functional Image Analysis Spatiotemporal Analysis of Functional Images Using the Fixed Effect Model . . . . 190 Jayasanka Piyaratna, Jagath C. Rajapakse Spatio-temporal Covariance Model for Medical Images Sequences: Application to Functional MRI Data . . . . 197 Habib Benali, Mélanie Pélégrini-Issac, Frithjof Kruggel
Microvascular Dynamics in the Nailfolds of Scleroderma Patients Studied Using Na-fluorescein Dye . . . . 204 Philip D. Allen, Chris J. Taylor, Ariane L. Herrick, Marina Anderson, Tonia Moore Time Curve Analysis Techniques for Dynamic Contrast MRI Studies . . . . 211 Edward V.R. Di Bella, Arkadiusz Sitek Detecting Functionally Coherent Networks in fMRI Data of the Human Brain Using Replicator Dynamics . . . . 218 Gabriele Lohmann, D. Yves von Cramon Smoothness Prior Information in Principal Component Analysis of Dynamic Image Data . . . . 225 Václav Šmídl, Miroslav Kárný, Martin Šámal, Werner Backfrieder, Zsolt Szabo Estimation of Baseline Drifts in fMRI . . . . 232 François G. Meyer, Gregory McCarthy Analyzing the Neocortical Fine-Structure . . . . 239 Frithjof Kruggel, Martina K. Brückner, Thomas Arendt, Christopher J. Wiggins, D. Yves von Cramon
fMRI/EEG/MEG Motion Correction Algorithms of the Brain Mapping Community Create Spurious Functional Activations . . . . 246 Luis Freire, Jean-François Mangin Estimability of Spatio-temporal Activation in fMRI . . . . 259 Andre Lehovich, Harrison H. Barrett, Eric W. Clarkson, Arthur F. Gmitro A New Approach to the MEG/EEG Inverse Problem for the Recovery of Cortical Phase-Synchrony . . . . 272 Olivier David, Line Garnero, Francisco J. Varela Neural Field Dynamics on the Folded Three-Dimensional Cortical Sheet and Its Forward EEG and MEG . . . . 286 Viktor K. Jirsa, Kelly J. Jantzen, Armin Fuchs, J.A. Scott Kelso
Deformable Registration A Unified Feature Registration Method for Brain Mapping . . . . . . . . . . . . . . 300 Haili Chui, Lawrence Win, Robert Schultz, James Duncan, Anand Rangarajan
Cooperation between Local and Global Approaches to Register Brain Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315 Pierre Hellier, Christian Barillot Landmark and Intensity-Based, Consistent Thin-Plate Spline Image Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329 Hans J. Johnson, Gary E. Christensen Validation of Non-rigid Registration Using Finite Element Methods . . . . . . 344 Julia A. Schnabel, Christine Tanner, Andy D. Castellano Smith, Martin O. Leach, Carmel Hayes, Andreas Degenhard, Rodney Hose, Derek L.G. Hill, David J. Hawkes
Poster Session II: Shape Analysis A Linear Time Algorithm for Computing the Euclidean Distance Transform in Arbitrary Dimensions . . . . 358 Calvin R. Maurer, Jr., Vijay Raghavan, Rensheng Qi An Elliptic Operator for Constructing Conformal Metrics in Geometric Deformable Models . . . . 365 Christopher Wyatt, Yaorong Ge Using a Linear Diagnostic Function and Non-rigid Registration to Search for Morphological Differences Between Populations: An Example Involving the Male and Female Corpus Callosum . . . . 372 David J. Pettey, James C. Gee Shape Constrained Deformable Models for 3D Medical Image Segmentation . . . . 380 Jürgen Weese, Michael Kaus, Christian Lorenz, Steven Lobregt, Roel Truyen, Vladimir Pekar Stenosis Detection Using a New Shape Space for Second Order 3D-Variations . . . . 388 Qingfen Lin, Per-Erik Danielsson Graph-Based Topology Correction for Brain Cortex Segmentation . . . . 395 Xiao Han, Chenyang Xu, Ulisses Braga-Neto, Jerry L. Prince Intuitive, Localized Analysis of Shape Variability . . . . 402 Paul Yushkevich, Stephen M. Pizer, Sarang Joshi, J.S. Marron A Sequential 3D Thinning Algorithm and Its Medical Applications . . . . 409 Kálmán Palágyi, Erich Sorantin, Emese Balogh, Attila Kuba, Csongor Halmai, Balázs Erdőhelyi, Klaus Hausegger
Poster Session II: Functional Image Analysis An Adaptive Level Set Method for Medical Image Segmentation . . . . 416 Marc Droske, Bernhard Meyer, Martin Rumpf, Carlo Schaller Partial Volume Segmentation of Cerebral MRI Scans with Mixture Model Clustering . . . . 423 Aljaž Noe, James C. Gee Nonlinear Edge Preserving Smoothing and Segmentation of 4-D Medical Images via Scale-Space Fingerprint Analysis . . . . 431 Bryan W. Reutter, V. Ralph Algazi, Ronald H. Huesman Spatio-temporal Segmentation of Active Multiple Sclerosis Lesions in Serial MRI Data . . . . 438 Daniel Welti, Guido Gerig, Ernst-Wilhelm Radü, Ludwig Kappos, Gabor Székely Time-Continuous Segmentation of Cardiac Image Sequences Using Active Appearance Motion Models . . . . 446 Boudewijn P.F. Lelieveldt, Steven C. Mitchell, Johan G. Bosch, Rob J. van der Geest, Milan Sonka, Johan H.C. Reiber Feature Enhancement in Low Quality Images with Application to Echocardiography . . . . 453 Djamal Boukerroui, J. Alison Noble, Michael Brady 3D Vascular Segmentation Using MRA Statistics and Velocity Field Information in PC-MRA . . . . 461 Albert C.S. Chung, J. Alison Noble, Paul Summers, Michael Brady Markov Random Field Models for Segmentation of PET Images . . . . 468 Jun L. Chen, Steve R. Gunn, Mark S. Nixon, Roger N. Gunn
Analysis of Brain Structure Statistical Study on Cortical Sulci of Human Brains . . . . . . . . . . . . . . . . . . . . 475 Xiaodong Tao, Xiao Han, Maryam E. Rettmann, Jerry L. Prince, Christos Davatzikos Detecting Disease-Specific Patterns of Brain Structure Using Cortical Pattern Matching and a Population-Based Probabilistic Brain Atlas . . . . . . 488 Paul M. Thompson, Michael S. Mega, Christine Vidal, Judith L. Rapoport, Arthur W. Toga Medial Models Incorporating Object Variability for 3D Shape Analysis . . . 502 Martin Styner, Guido Gerig
Deformation Analysis for Shape Based Classification . . . . . . . . . . . . . . . . . . . 517 Polina Golland, W. Eric L. Grimson, Martha E. Shenton, Ron Kikinis
Subject Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531 Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
On the Difficulty of Detecting Tumors in Mammograms Arthur E. Burgess, Francine L. Jacobson, and Philip F. Judy Radiology Department, Brigham and Women’s Hospital, 75 Francis St. Harvard Medical School, Boston MA 02115
[email protected],
[email protected],
[email protected]

Abstract. We did human observer experiments using a hybrid image technique to determine the variation of tumor contrast thresholds for detection as a function of tumor size. This was done with both mammographic backgrounds and filtered noise with the same power spectra. We obtained the very surprising result that contrast had to be increased as lesion size increased to maintain detectability. All previous investigations with white noise and with radiographic and CT imaging system noise have shown the opposite effect. We compared human results to predictions of a number of observer models and found fairly good qualitative agreement. However, we found that human performance was better than would be expected if mammographic structure were assumed to be pure noise. This disagreement can be accounted for by a simple scaling correction factor.
1 Introduction
Detectability of abnormalities in medical images is determined by a number of factors; examples are spatial resolution, image noise, lesion contrast, and patient structure. Image display and visual system capabilities are also important for human observers. The consequences of these effects can be summarized by the contrast-detail (CD) diagram, a plot of the lesion contrast needed to reach a defined detection accuracy as a function of lesion size. The same CD diagram form has been consistently found in previous work using phantoms and artificial signals in image noise: the contrast threshold decreases steadily as signal size increases. There has been very little formal study of the effect of patient structure on lesion detection [1], [2], [3]. Bochud et al. [4], [5], [6] did experiments designed to determine whether spatial variations in mammograms due to normal patient anatomical structure can be considered to be a form of image noise. They concluded that the effects of structured backgrounds are not fully described by their average power spectrum and that human observers are able to use some information contained in the phase spectrum. They estimated that the effect of anatomical structure variations was three times as important as imaging system noise for microcalcifications and 30 to 60 times as important for an 8 mm simulated nodule.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 1–11, 2001.
© Springer-Verlag Berlin Heidelberg 2001
Zheng et al. [7] reported mammogram power spectra of the form P(f) = K/f³. We used this spectrum with the frequency domain integral in equation (2) below to evaluate lesion detectability indices and found the surprising prediction of a positive CD slope: contrast thresholds would increase as lesion size increases. However, the use of the integral form of equation (2) was based on the dubious assumption that the second order statistics of mammograms are stationary. The goals of our work were to (1) measure human CD diagrams for lesion detection in mammographic backgrounds and 1/f³ filtered noise backgrounds with matched average second order statistics, (2) evaluate the performance of two models that do not require the assumption of stationarity, and (3) compare human performance with model predictions. We will describe the models, experimental methods, and results. We will show that the prediction of positive CD diagram slopes is robust: it occurs for both human and model observers with both mammogram and 1/f³ filtered noise backgrounds.
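The sign of the CD-diagram slope can be checked numerically. The sketch below is illustrative only, not the authors' code: it evaluates a discrete version of the frequency-domain detectability integral for the designer nodule profile described later in the paper, in power-law noise with P(f) = 1/f^β; the grid size and nodule radii are arbitrary assumptions.

```python
import numpy as np

def nodule(radius, n=256):
    """Designer nodule profile s(r) = (1 - (r/R)^2)^1.5 on an n x n grid."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    rho = np.hypot(x, y) / radius
    return np.where(rho <= 1, (1 - np.minimum(rho, 1) ** 2) ** 1.5, 0.0)

def threshold_contrast(radius, beta, n=256):
    """Relative contrast needed for a fixed d' in noise with P(f) = 1/f^beta."""
    S = np.fft.fftshift(np.fft.fft2(nodule(radius, n)))
    fx = np.fft.fftshift(np.fft.fftfreq(n))
    u, v = np.meshgrid(fx, fx)
    f = np.hypot(u, v)
    # Power spectrum; the DC term carries infinite power and contributes nothing
    P = np.where(f > 0, 1.0 / np.maximum(f, 1e-12) ** beta, np.inf)
    d2 = np.sum(np.abs(S) ** 2 / P)   # discrete frequency-domain integral
    return 1.0 / np.sqrt(d2)          # threshold is inversely proportional to d'

# Doubling the nodule radius raises the threshold in 1/f^3 noise (positive
# CD slope) but lowers it in white noise (the familiar negative slope).
ratio_mammo = threshold_contrast(16, 3) / threshold_contrast(8, 3)   # ~1.4
ratio_white = threshold_contrast(16, 0) / threshold_contrast(8, 0)   # ~0.5
```

In the continuum limit this scaling argument gives thresholds growing roughly as the square root of lesion size for a 1/f³ spectrum, consistent with the positive CD slope described above.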
2 Models
Many experiments (discussed in references [8] and [9]) have shown that human detection and discrimination performance can be described by models based on matched filters. The models can be described in several ways, for example using continuous images, g(x,y), signals, s(x,y), noise, n(x,y), and statistically defined backgrounds, b(x,y). For discrete images, (x,y) are pixel row and column addresses. Digital images and components can be described in column vector notation with the Np pixel values in lexicographic order [10]. In this notation, the image and component vectors are g, s, n, and b. The total image is then described by the two alternative summation forms

g(x, y) = s(x, y) + n(x, y) + b(x, y)   and   g = s + n + b.   (1)
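In array terms (a minimal, purely illustrative sketch; sizes and values are arbitrary), the two notations of equation (1) are related by lexicographic flattening:

```python
import numpy as np

n = 4
s = np.zeros((n, n)); s[1:3, 1:3] = 1.0                      # signal s(x, y)
b = np.full((n, n), 0.2)                                     # background b(x, y)
nz = 0.1 * np.random.default_rng(2).standard_normal((n, n))  # noise n(x, y)
g = s + nz + b                        # g(x, y) = s(x, y) + n(x, y) + b(x, y)
gv = g.ravel()                        # column vector g with Np = n*n entries
```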
When noise and backgrounds have stationary statistics, observer performance modeling is most convenient in the frequency domain. The signal is then described by its Fourier transform, S(u,v), and the stochastic components are described by power spectra, P_n(u,v) and P_b(u,v). The discrete models can be evaluated [11], [10] in the spatial domain with noise and background fluctuations described by covariance matrices, K_n and K_b: 2-D arrays with Np rows and Np columns. This approach has the advantage that the stationarity assumption can be avoided. Several models will be presented for the simple signal known exactly (SKE) detection task, in which the signal 2-D profile and potential locations are precisely defined and known to the observer.

2.1 Ideal Observer
The ideal observer [12], [13] uses Bayes’ theorem to combine a priori information (about the signal profile and possible locations for example) with new data optimally extracted from the image. The optimum strategy depends on task
details. For the SKE detection task, the ideal observer uses a prewhitening (PW) matched filter. The detectability index, d', for the SKE case is given in the frequency and spatial domains by

\[
(d')^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{|S(u,v)|^2}{P_n(u,v) + P_b(u,v)}\,du\,dv = \mathbf{s}^t(\mathbf{K}_n + \mathbf{K}_b)^{-1}\mathbf{s}. \qquad (2)
\]

The Fourier domain integral requires the assumption of stationarity for the noise and background second order statistics; the spatial domain vector-matrix form of the equation does not.

2.2 Channelized Observer Models
The ideal observer model is impractical to use for images with nonstationary statistics because the covariance matrix size is very large: about 4×10⁹ elements for an ensemble of 256×256 images. Fiete et al. [14] proposed an optimum linear observer model for cases where ideal observer performance calculation is impractical, which they referred to as the Hotelling model. We will use the term Fisher-Hotelling (FH), since it is similar to Fisher linear discriminant analysis. The covariance matrix size can be reduced dramatically by describing the image data near the potential signal locations by coefficients of a set of smooth basis functions centered on those locations, rather than by pixel values. One example is the difference-of-Gaussians (DOG) channel model [15]. Barrett et al. [16] suggested using Laguerre-Gauss (LG) basis functions. For isotropic signals, a radial channel count, N_c, of 6 to 8 is adequate [17]. Each channel is described by a basis vector, t_c, and the set of channels is described by a matrix, T, whose columns are the individual basis vectors. The response to the signal is r_s = T^t s. The covariance matrix of the channel responses to noise and backgrounds, K_c, has dimension N_c × N_c. We will refer to the two channelized models as FH_DOG and FH_LG. The detectability equation for this channelized FH model class is

\[
(d')^2_{\mathrm{chan}} = \mathbf{r}_s^t \mathbf{K}_c^{-1} \mathbf{r}_s. \qquad (3)
\]
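A rough numerical sketch of the channelized FH calculation follows; it is illustrative only, with the Laguerre-Gauss channel width, image size, nodule radius, and number of sampled 1/f³ backgrounds all being arbitrary assumptions rather than the authors' values.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

rng = np.random.default_rng(0)
N, NC, A = 64, 6, 16.0             # image size, radial channel count, LG width

y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r2 = (x ** 2 + y ** 2).astype(float)

# Laguerre-Gauss channels u_j(r) = exp(-pi r^2/a^2) L_j(2 pi r^2/a^2);
# matrix T holds one normalized channel per column.
T = np.empty((N * N, NC))
for j in range(NC):
    coef = np.zeros(j + 1); coef[j] = 1.0
    u = np.exp(-np.pi * r2 / A ** 2) * lagval(2 * np.pi * r2 / A ** 2, coef)
    T[:, j] = (u / np.linalg.norm(u)).ravel()

# Designer nodule signal with a radius of 10 pixels
s = np.where(r2 <= 100, (1 - np.minimum(r2, 100) / 100) ** 1.5, 0.0).ravel()

# Channel-response covariance K_c estimated from sampled 1/f^3 backgrounds
f = np.hypot(*np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N)))
filt = np.where(f > 0, np.maximum(f, 1e-12) ** -1.5, 0.0)
resp = []
for _ in range(500):
    w = rng.standard_normal((N, N))
    bg = np.real(np.fft.ifft2(np.fft.fft2(w) * filt))
    resp.append(T.T @ bg.ravel())
Kc = np.cov(np.array(resp).T)                 # N_c x N_c covariance matrix

rs = T.T @ s                                  # channel responses to the signal
d2 = rs @ np.linalg.solve(Kc, rs)             # Eq. (3)
```

Because K_c is only N_c × N_c, the sample covariance can be estimated and inverted directly, which is the practical advantage of the channelized formulation.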
3 Materials and Methods

3.1 Observer Experiments
The human observer experiments were designed to comply with the requirements of the SKE detection task, and we attempted to optimize experimental conditions to maximize human performance. We used the two-alternative forced choice (2AFC) method and hybrid digital images, with signals added to both digitized mammograms and filtered noise images. The filtered noise had a spectrum P(f) = K/f³, where f is radial frequency. An example display is shown in figure 1. During each trial, two randomly selected backgrounds were displayed side-by-side with one containing the signal. The observers selected the side they believed to contain the signal. Three experienced observers took part, two physicists and a radiologist (the authors).
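A toy simulation of such a 2AFC trial loop is sketched below; this is hypothetical code in which a simple template-matching observer stands in for the human, the backgrounds are plain white noise rather than mammograms, and all sizes and contrasts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def trial(signal, bg_pool, contrast, observer_noise=1.0):
    """One 2AFC trial: the signal is added to one of two random backgrounds."""
    b1, b2 = bg_pool[rng.choice(len(bg_pool), 2, replace=False)]
    side = int(rng.integers(2))
    pair = [b1.copy(), b2.copy()]
    pair[side] += contrast * signal
    # Template-matching "observer": cross-correlate each side with the signal
    scores = [float(np.vdot(signal, im)) + observer_noise * rng.standard_normal()
              for im in pair]
    return int(np.argmax(scores) == side)     # 1 if the correct side was chosen

n = 32
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
sig = np.exp(-(xx ** 2 + yy ** 2) / 50.0)     # toy Gaussian "lesion"
pool = rng.standard_normal((50, n, n))        # toy background ensemble
pc = np.mean([trial(sig, pool, 0.5) for _ in range(400)])
print(pc)                                     # proportion correct; 0.5 is chance
```

Sweeping the contrast and locating the level that yields a target proportion correct (the authors ran 256 trials per condition) traces out one point of a CD diagram.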
Arthur E. Burgess, Francine L. Jacobson, and Philip F. Judy
Fig. 1. Example display for 2AFC experiments. A reference copy of the lesion is shown above the mammographic backgrounds. The two possible lesion locations are surrounded by circle cues [19], which are exaggerated here for publication.
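The two synthetic ingredients described in this section, power-law filtered noise with spectrum P(f) = K/f³ and the simulated nodule profile s(r) = rect(ρ/2)(1 − ρ²)^1.5, can be mimicked as follows. This is a hedged sketch with our own grid and parameter choices, not the authors' code; the random-phase spectral shaping is one common way to produce such noise.

```python
import numpy as np

def power_law_noise(size, beta, rng):
    """Isotropic noise field with power spectrum ~ 1/f^beta (random-phase method)."""
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = np.inf                      # suppress the undefined DC term
    amp = f ** (-beta / 2.0)              # amplitude ~ f^(-beta/2) gives power ~ f^(-beta)
    phase = np.exp(2j * np.pi * rng.random((size, size)))
    field = np.fft.ifft2(amp * phase).real
    return field / field.std()            # unit-variance field

def nodule(size, radius):
    """Simulated nodule s(r) = rect(rho/2) * (1 - rho^2)^1.5, rho = r/radius."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    rho = np.hypot(x, y) / radius
    inside = rho <= 1.0                   # rect(rho/2) support
    return np.where(inside, (1.0 - np.clip(rho, 0.0, 1.0) ** 2) ** 1.5, 0.0)
```

A hybrid stimulus is then simply `power_law_noise(...) + contrast * nodule(...)`, with the contrast chosen in the amplitude units of the experiment.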
The background regions (each 61×61 mm, with 0.12 mm pixels on the mammogram and 0.29 mm pixels on the monitor) were selected from 210 digitized normal craniocaudal mammograms. A log exposure (log E) amplitude scale was used. Backgrounds were confined to the constant-thickness region of the breast to eliminate the confounding effect of the large, systematic brightness variations at the periphery.

The signals used in the experiments were a simulated nodule and 4 realistic breast lesions (1 fibroadenoma and 3 ductal carcinomas). The nodule equation was s(r) = rect(ρ/2)(1 − ρ²)^1.5, where ρ is a normalized radial distance (r/R) and R is the nodule radius. Tumor images were extracted from digitized specimen radiographs [23]. The tumors (original size 8 to 18 mm) were rescaled to fit in a common array size (256×256). During experiments the lesions were minified to sizes in the range from 4 to 128 pixels (corresponding to the range 0.5 to 15.6 mm on the mammograms).

Images were displayed on a Clinton grayscale monitor with a maximum luminance of 75 cd/m² and 1024 (V) × 1280 (H) pixels. Each observer did 256 trials for each experimental condition, in blocks of 128 trials. The 4 tumor profiles led to 4 slightly different CD diagrams. We used the average contrast threshold (across all sizes) for each lesion to adjust the 4 diagrams up or down to a common relative contrast scale.

Additional experiments were done using simulated noise. We used one tumor to determine the CD diagram in white noise with a range of 256 gray levels, a mean of 128, and a pixel standard deviation of 25.6. The purpose was to determine whether the typical CD diagram, with threshold contrast decreasing as signal size increased, would be obtained under our display conditions. We also did experiments using the simulated nodule and filtered noise with a spectrum
On the Difficulty of Detecting Tumors in Mammograms
matched to the power-law exponent (3.0) of the ensemble average estimated for the set of mammographic backgrounds. The purpose was to allow comparison of human and model results for an isotropic signal and stationary noise with known statistics.
3.2 Statistics of Mammographic Images
We did measurements in 213 square ROIs (61×61 mm). Two spectrum calculation methods were used. The first was the discrete Fourier transform method [21] with a radial Hanning window. The spectrum was averaged over angle, and the radial slice frequency dependence was P(f) = K/f³ below 1 c/mm. We also measured the radial averages of individual periodograms; the exponent distribution had a mean of 2.8 and a standard deviation of 0.35. Maximum entropy method (MEM) spectral estimates [22] were done using row and column projections to determine 1D spectral slices. The average exponents were 2.5 (std. dev. 0.3) and 3.0 (std. dev. 0.5) in the f_x and f_y directions, respectively.

We also used a spatial method [26] to evaluate second-order statistics. Pixel variance was measured over a range of circle diameters centered on each of the ROIs, and the ensemble averages of variance as a function of size were determined. For power-law noise with an exponent of 3, a plot of log(ensemble average variance) versus log(diameter) should give a slope of one. We obtained a value of 0.99. Pixel variance results for the mammographic background set and the matching filtered noise backgrounds are shown in figure 4A.

The covariance matrices for the FH_DOG and FH_LG models were determined with no signal present. The FH_DOG model had 7 channel filters, based on a viewing distance of 75 cm from the monitor and center frequency separations of one octave. The FH_LG model had 6 basis functions and a free spatial scaling parameter that was adjusted for each nodule radius to maximize d′ values. Response vectors for each image, r_g, were obtained by cross-correlating basis vectors with the image data, centered on the ROIs. Covariance matrices were calculated using the ensemble expectation value ⟨…⟩ formulation

K_c = ⟨(r_g − ⟨r_g⟩)(r_g − ⟨r_g⟩)^t⟩.    (4)
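A sketch of the first spectrum measurement: a windowed periodogram averaged over angle, followed by a log-log least-squares fit of the exponent. Assumptions of ours, not the paper's: a separable Hanning window rather than the radial one, and an arbitrary low-frequency fitting band.

```python
import numpy as np

def radial_spectrum(image):
    """Radially averaged periodogram of a square 2-D image."""
    n = image.shape[0]
    win = np.hanning(n)[:, None] * np.hanning(n)[None, :]  # separable Hanning taper
    P = np.abs(np.fft.fftshift(np.fft.fft2(image * win))) ** 2
    fy, fx = np.mgrid[:n, :n] - n // 2
    r = np.hypot(fx, fy).astype(int).ravel()
    sums = np.bincount(r, weights=P.ravel())
    counts = np.maximum(np.bincount(r), 1)
    return sums / counts                                   # mean power per radial bin

def power_law_exponent(spectrum, f_lo=2, f_hi=20):
    """Least-squares slope of log P vs log f over a band; returns beta for P ~ 1/f^beta."""
    f = np.arange(len(spectrum))[f_lo:f_hi]
    slope, _ = np.polyfit(np.log(f), np.log(spectrum[f_lo:f_hi]), 1)
    return -slope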
4 Results
The CD diagram data for human detection of the 4 extracted tumors in mammographic structure and one tumor in white noise are shown in figure 2. The upper data are for 4 different lesions in mammographic backgrounds (in log E units). Contrast thresholds increase for sizes greater than 1 mm with a positive CD diagram slope of 0.3. The lower data are for one lesion in white noise with an interpolated curve through the data as a guide to the eye (it has no theoretical significance). The results are averages for 3 observers with 256 trials for each observer per datum. Standard errors of the estimates for the data are all about 5% of mean values.
Fig. 2. Contrast thresholds for detection as a function of lesion size. [Axes: amplitude at d′ = 2 (log E units) versus lesion size (mm); upper curve: mammography background; lower curve: white noise.]
We compared human nodule detection results with the predictions of 3 observer models. Prewhitening (PW) matched filter observer model performance was calculated using numerical integration of the frequency domain version of equation (2). This model is ideal for stationary noise, as is the case in figure 3B. It is a nonideal approximation for mammograms (figure 3A), where the stationarity assumption is not valid. For mammograms we used the mean parameters (exponent = 2.83) of single-image periodograms averaged over angle. FH model performance, which does not depend on the stationarity assumption, was evaluated using equation (3).

The human and model observer nodule detection results for mammographic backgrounds are shown in figure 3A. The models give qualitatively fair agreement with human results. However, the models are incomplete, since human internal noise was not included. With human induced internal noise [18] included, the model thresholds would be about 40% higher (this will be discussed below). The best-fit regression line to human results has a slope of 0.30. The slopes are 0.40, 0.46 and 0.40 for the PW, FH_DOG and FH_LG models, respectively. Note that humans perform better than FH models at large nodule size.

Results for filtered noise backgrounds are shown in figure 3B. All model observers have better performance than humans. The best-fit regression line to human results has a slope of 0.44. The regression line slopes are 0.50, 0.51 and 0.50 for the ideal (PW), FH_DOG and FH_LG models, respectively. Since the filtered noise is known to be stationary, the performance of the PW model is ideal. Human efficiency ranges from 24 to 40%. Observer efficiency, η, for a given task is defined by η = (d′_O / d′_I)², where the subscripts indicate the observer being tested and the ideal observer. Typical efficiencies for humans have been found to be in the 30 to 60% range for a variety of simple tasks. To a first
approximation, human inefficiency is due to 2 types of internal noise: static and induced [18]. For large external noise levels, static internal noise can be neglected, and induced internal noise can be modeled by scaling the image noise and background power spectra (or covariance matrices) by a factor of (1 + ϕ), where the value of ϕ is selected (typically 0.3 to 1) to provide a fit to human results.
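The efficiency definition and the (1 + ϕ) scaling imply simple threshold arithmetic, sketched below. The √(1 + ϕ) step follows from d′² being proportional to amplitude squared divided by noise power, so scaling the noise power by (1 + ϕ) raises the amplitude needed for a fixed d′ by √(1 + ϕ); the function names are ours.

```python
def efficiency(d_obs, d_ideal):
    """Observer efficiency eta = (d'_O / d'_I)^2."""
    return (d_obs / d_ideal) ** 2

def threshold_with_internal_noise(ideal_threshold, phi):
    """Amplitude threshold after scaling noise power spectra by (1 + phi).
    Since d'^2 ~ amp^2 / noise power, threshold amplitude grows as sqrt(1 + phi)."""
    return ideal_threshold * (1.0 + phi) ** 0.5
```

With ϕ = 1 the elevation is about 41%, consistent with the roughly 40% threshold increase quoted above.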
Fig. 3. The CD diagrams for simulated nodule detection in (A) mammographic backgrounds and (B) matching power-law noise. Error bars are about the size of the symbols. PW matched filter observer model performance for mammograms is not ideal because of nonstationary statistics. [Axes in both panels: amplitude at d′ = 2 versus diameter (mm); curves: human average, PW (ideal in B), FH_DOG, and FH_LG.]
Figure 4A shows pixel variance measurements for the mammogram image set and the matching filtered noise, to demonstrate the close agreement in second-order statistics. There was equally good agreement between radial averages of ensemble power spectra. The human and FH_LG model results for the two types of backgrounds are replotted in figure 4B. This illustrates the salient point that human thresholds for 2AFC detection are very different, while those for the FH_LG model are quite similar, for the two sets of images that we tried to match on the basis of second-order statistics.
5 Discussion
In most previous experimental CD diagram measurements, thresholds decreased as signal size increased. We obtained the same result in our white noise control experiment. Our most important result is that CD diagrams for mammograms are completely different. We found that thresholds increased as lesion size increased for lesions larger than 1 mm with a positive slope of 0.3 on a log-log
Fig. 4. (A) The pixel variance measurement results ((log E)² units) for mammogram and filtered noise backgrounds with matched power spectra. (B) The CD diagrams for humans and the FH_LG model for the two types of statistically matched backgrounds. [Panel A axes: pixel variance versus ROI diameter (mm), with fitted slope 0.99; panel B axes: amplitude at d′ = 2 versus diameter (mm), for human and FH_LG results on both background types.]
plot. We obtained similar results for 1/f³ filtered noise, confirming that the positive CD diagram slope is due to the power-law statistics of mammograms. The threshold increase below 1 mm (the size range for microcalcifications) for mammograms is due to imaging system noise dominating the spectrum at high spatial frequencies.

The most important point about the model observer results is that they all give similar CD diagram slopes and differ mainly in absolute threshold values. This suggests that the question of nonstationarity of mammogram statistics is not a major issue. There is good agreement between the FH model results for the two types of basis functions (channels), LG and DOG. This is consistent with the results of Abbey [24], who found that model performance was not particularly sensitive to basis function selection. A number of other models [25] have also been found to give similar CD diagram slopes.

There is fair agreement between human and model results for simulated nodule detection in the mammographic backgrounds. However, if induced internal noise were included in the models, thresholds would be about 40% higher than human results. By contrast, the human results with power-law noise would be in good agreement with the models if realistic induced internal noise values were included. The human efficiencies of 24 to 40% for detection in filtered noise are consistent with results from previous experiments with statistically defined backgrounds.

As figure 4B demonstrates, the FH_LG model results are very similar for the two types of backgrounds, whereas human results are not. This suggests that for observer models, mammographic backgrounds can be considered to be pure random noise (i.e.
models cannot use anatomical information), while humans can make some use of anatomical information in 2AFC experiments. This difference is not a serious problem for modeling human performance, because it can be accounted for by a simple scaling process. There are also differences in the slopes of the CD diagrams to be accounted for: between humans and models, and between the two types of backgrounds. We suspect that the cause of the difference between backgrounds may be that power-law exponents vary between 2 and 4 in the set of mammographic backgrounds. This point requires further study.

Our mammographic CD diagram results were based on a collection of mammographic regions with an average power-law exponent of about 3. It has been shown [26], using the frequency domain integral form of equation (2), that for scaled signals and stationary noise the CD diagram slope, m, is related to the power-law noise exponent, β, by the linear relationship m = 0.5(β − 2). This CD slope equation is subject to the constraints that the signal has the same normalized 2D profile as its size changes and that signal energy as a function of frequency decreases rapidly enough that the integral converges. Spectral analysis of individual mammogram regions gave a range of exponents from 2 to 4, and MEM spectral analysis of smaller regions showed that the exponent varies within a mammogram. This has interesting implications for lesion detectability.

Consider the consequences of the CD diagram slope equation for detection of a growing tumor with thickness (parallel to the x-ray beam) proportional to diameter (perpendicular to the beam). As the tumor grows, its projected contrast will be determined by the local difference in x-ray attenuation and its thickness, so contrast will increase linearly with diameter as long as the composition of the tumor and the surrounding tissue do not change.
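The slope equation and the growth argument above can be combined into a simple numerical illustration. The threshold coefficient k and contrast-growth rate c below are hypothetical parameters of ours; the sketch assumes β < 4 (so the slope m < 1) and k > c (the lesion starts below threshold at unit diameter).

```python
def cd_slope(beta):
    """CD diagram slope m = 0.5 * (beta - 2) for power-law noise exponent beta."""
    return 0.5 * (beta - 2.0)

def detectable_diameter(k, c, beta):
    """Diameter where the contrast trajectory c*d crosses the threshold line k*d**m.
    Solving c*d = k*d**m gives d = (k/c)**(1/(1 - m)); requires m < 1, i.e. beta < 4."""
    m = cd_slope(beta)
    return (k / c) ** (1.0 / (1.0 - m))
```

For fixed k and c, a smaller exponent β gives a smaller detectable diameter, matching the qualitative argument in the text.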
The tumor becomes detectable, at the selected accuracy criterion, when its trajectory crosses the appropriate CD threshold line. If the tumor is in a region of low exponent, detection probability will change rapidly with size. If the exponent is large, detection probability will change slowly with size. If the spectra, P(f) = K/f^β, have similar values of K, then the lesion will be detectable at smaller sizes in mammogram regions with smaller power-law exponents.

X-ray mammography is the primary method of detecting breast cancer. However, lesions are often extremely difficult to detect in the complex and highly variable normal parenchymal patterns. Our work was designed to develop a quantitative understanding of the statistical properties of images of this breast structure and its effect on the detectability of realistic lesions. Our finding that more lesion contrast is needed as the size of the lesion increases helps explain why large lesions can be missed despite careful search of a mammogram [27]. The theoretical prediction that lesion detectability depends on the local statistical properties of patient structure is also important.

The experiments described here were done using an artificial 2AFC task with high image contrast, designed to allow comparison of human observer results with theoretical model predictions. We have obtained similar results in experiments involving search for the lesion [28]. We recognize that our investigations must be extended to more clinically realistic decision tasks. The importance of this work may increase in the future,
when mammograms are viewed on CRTs for primary interpretation. The novel CD diagram results for lesion detection in mammograms and our finding that human results can be described by observer models suggest that it may be possible to use these models to develop image processing algorithms that will help increase the accuracy of digital mammogram interpretation.
Acknowledgements

Larry Clarke and Maria Kallergi provided the mammograms. Jack Beutel digitized the specimen radiographs and provided H&D curve data. We also thank Craig Abbey, Dev Chakraborty, Kyle Myers and Robert Wagner for very helpful discussions. This research was supported by grant R01-CA58302 from the National Cancer Institute.
References

1. Revesz, G., Kundel, H.L., Graber, M.A.: The influence of structured noise on detection of radiologic abnormalities. Invest. Radiol. 9 (1974) 479–486
2. Kundel, H.L., Nodine, C.F., Thickman, D., Carmody, D., et al.: Nodule detection with and without a chest image. Invest. Radiol. 20 (1985) 94–99
3. Judy, P.F., Swensson, R.G., Nawfel, R.D., Chan, K.H.: Contrast detail curves for liver CT. Med. Phys. 19 (1992) 1167–1174
4. Bochud, F.O., Verdun, F.R., Valley, J.F., Hessler, C., et al.: The importance of anatomical noise in mammography. Proc. SPIE 3036 (1997) 74–80
5. Bochud, F.O., Abbey, C.K., Eckstein, M.P.: Further investigation of the effect of phase spectrum on visual detection in structured backgrounds. Proc. SPIE 3663 (1999) 273–281
6. Bochud, F.O., Valley, J.F., Verdun, F.R., Hessler, C., et al.: Estimate of the noisy component of anatomical backgrounds. Med. Phys. 26 (1999) 1365–1370
7. Zheng, B., Chang, Y.-H., Gur, D.: Adaptive computer-aided diagnosis scheme of digitized mammograms. Acad. Radiol. 3 (1996) 806–814
8. Burgess, A.E.: High level visual decision efficiencies. In Blakemore, C. (ed.): Vision: Coding and Efficiency. Cambridge Univ. Press, London (1990) 431–440
9. Barrett, H.H., Yao, J., Rolland, J.P., Myers, K.J.: Model observers for assessment of image quality. Proc. Nat. Acad. Sci. USA 90 (1993) 9758–9765
10. Abbey, C.K., Bochud, F.O.: Modeling visual signal detection tasks in correlated image noise with linear observer models. In Beutel, J., Kundel, H.L., van Metter, R.L. (eds.): Handbook of Medical Imaging: Physics and Psychophysics. SPIE Press, Bellingham (2000) 629–654
11. Eckstein, M.P., Abbey, C.K., Bochud, F.O.: A practical guide to model observers for visual detection in synthetic and natural noisy images. In Beutel, J., Kundel, H.L., van Metter, R.L. (eds.): Handbook of Medical Imaging: Physics and Psychophysics. SPIE Press, Bellingham (2000) 593–628
12. Wagner, R.F., Brown, D.G.: Unified SNR analysis of medical imaging systems. Phys. Med. Biol. 30 (1985) 489–518
13. Myers, K.J.: Ideal observer models of visual signal detection. In Beutel, J., Kundel, H.L., van Metter, R.L. (eds.): Handbook of Medical Imaging: Physics and Psychophysics. SPIE Press, Bellingham (2000) 558–592
14. Fiete, R.D., Barrett, H.H., Smith, W.E., Myers, K.J.: Hotelling trace criterion and its correlation with human-observer performance. J. Opt. Soc. Am. A 4 (1987) 945–953
15. Wilson, H., Bergen, J.: A four-mechanism model for threshold spatial vision. Vision Res. 19 (1979) 19–32
16. Barrett, H.H., Abbey, C.K., Gallas, B., Eckstein, M.P.: Stabilized estimates of Hotelling observer detection performance in patient-structured noise. Proc. SPIE 3340 (1998) 27–43
17. Burgess, A.E., Li, X., Abbey, C.K.: Visual signal detectability with two noise components: anomalous masking effects. J. Opt. Soc. Am. A 14 (1997) 2420–2442
18. Burgess, A.E., Colborne, B.: Visual signal detection IV: Observer inconsistency. J. Opt. Soc. Am. A 5 (1988) 617–627
19. Kundel, H.L., Nodine, C.F., Toto, L., Lauver, S.: A circle cue enhances detection of simulated masses on mammographic backgrounds. Proc. SPIE 3032 (1997) 81–84
20. Burgess, A.E., Chakraborty, S.: Producing lesions for hybrid images: extracted tumours and simulated microcalcifications. Proc. SPIE 3663 (1999) 316–322
21. Bendat, J.S., Piersol, A.G.: Random Data: Analysis and Measurement Procedures. John Wiley & Sons, New York (1986)
22. Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T.: Numerical Recipes in Fortran. 2nd edn. Cambridge Univ. Press (1992)
23. Burgess, A.E.: Mammographic structure: Data preparation and spatial statistics analysis. Proc. SPIE 3661 (1999) 642–653
24. Abbey, C.K.: Assessment of Reconstructed Images. Ph.D. dissertation, Univ. of Arizona (1998)
25. Burgess, A.E.: Evaluation of detection model performance in power-law noise. Proc. SPIE 4324 (2001) (in press)
26. Burgess, A.E., Jacobson, F.L., Judy, P.F.: On the detection of lesions in mammographic structure. Proc. SPIE 3663 (1999) 304–315
27. Rosenberg, R.D., Hunt, W.C., Williamson, M.R., et al.: Effects of age, breast density, ethnicity, and estrogen replacement therapy on screening mammographic sensitivity and cancer stage at diagnosis: review of 183,134 screening mammograms in Albuquerque, New Mexico. Radiology 209 (1998) 511–518
28. Burgess, A.E., Jacobson, F.L., Judy, P.F.: Breast parenchymal patterns: Human observer lesion detection experiments. Med. Phys. (in press)
Objective Comparison of Quantitative Imaging Modalities Without the Use of a Gold Standard

John Hoppin¹,⁴, Matthew Kupinski²,⁴, George Kastis³,⁴, Eric Clarkson²,³,⁴, and Harrison H. Barrett¹,²,³,⁴

¹ Program in Applied Mathematics, University of Arizona
² Department of Radiology, University of Arizona
³ Department of Optical Sciences, University of Arizona
⁴ Center for Gamma Ray Imaging, University of Arizona
Abstract. Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart in order to know how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known; e.g., a cardiologist does not know the true EF of a patient. Thus, researchers have often evaluated an estimation method by plotting its results against the results of another (more accepted) estimation method, which amounts to using one set of estimates as a pseudo-gold standard. In this paper, we present a maximum likelihood approach for evaluating and comparing different estimation methods without the use of a gold standard, with specific emphasis on the problem of evaluating EF estimation methods. Results of numerous simulation studies are presented; they indicate that the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x-axis.
1 Introduction
There are many approaches in the literature to assessing image quality, but there is an emerging consensus in medical imaging that any rigorous approach must specify the information desired from the image (the task) and how that information will be extracted (the observer). Broadly, tasks may be divided into classification and estimation, and the observer can be either a human or a computer algorithm. In medical applications, a classification task is to make a diagnosis, perhaps to determine the presence of a tumor or other lesion. This task is usually performed by a human observer, and task performance can be assessed by psychophysical studies and ROC (receiver operating characteristic) analysis. Scalar figures of merit such as a detectability index or the area under the ROC curve can then be used to compare imaging systems.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 12–23, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Often, however, the task is not directly a diagnosis but rather the estimation of some quantitative parameter from which a diagnosis can later be derived. An
example is the estimation of cardiac parameters such as blood flow, ventricular volume or ejection fraction (EF). For such tasks the observer is usually a computer algorithm, though often one with human intervention, for example in defining regions of interest. Task performance can be expressed in terms of the bias and variance of the estimate, perhaps combined into a mean-square error as a scalar figure of merit.

For both classification and estimation tasks, a major difficulty in objective assessment is the lack of a believable standard for the true state of the patient. In ROC analysis for a tumor-detection task, we need to know whether the tumor is really present, and for estimation of ejection fraction we need to know the actual value for each patient. In common parlance, we need a gold standard, but it is rare that we have one with real clinical images. For classification tasks, biopsy and histological analysis are usually accepted as gold standards, but even when a pathology report is available, it is subject to error; biopsy can give information on the false-positive fraction, but if a lesion is not detected on a particular study, and hence not biopsied, its contribution to the false-negative fraction will remain unknown.

Similarly, for cardiac studies, ventriculography or ultrasound might be taken as the gold standard for estimation of EF, and nuclear medicine or dynamic MRI might then be compared to the supposed standard. A very common graphical device is to plot a regression line of EFs derived from the system under study against ones derived from the standard and to report the slope, intercept and correlation coefficient for this regression. Even a cursory inspection of papers in this genre reveals major inconsistencies. In reality, no present modality can lay claim to the status of gold standard for quantitative cardiac studies. Indeed, if there were such a modality, there would be little point in trying to develop new modalities for this task.
Because of the lack of a convincing gold standard for either classification or estimation tasks, simulation studies are often substituted for clinical studies, but there is always a concern with how realistic the simulations are. Researchers who seek to improve the performance of medical imaging systems must ultimately demonstrate success on real patients. A breakthrough on the gold-standard problem was the 1990 paper by Henkelman, Kay and Bronskill on ROC analysis without knowing the true diagnosis [1]. They showed, quite surprisingly, that ROC parameters could be estimated by using two or more diagnostic tests, neither of which was accepted as the gold standard, on the same patients. Recent work by Beiden, Campbell, Meier and Wagner has clarified the statistical basis for this approach and studied its errors as a function of number of patients and modalities as well as the true ROC parameters [2]. The goal of this paper is to examine the corresponding problem for estimation tasks. For definiteness, we cast the problem in terms of estimation of cardiac ejection fraction, and we pose the following question: If a group of patients of unknown state of cardiac health is imaged by two or more modalities, and an estimate of EF is extracted for each patient for each modality, can we estimate
the bias and variance of the estimates from each modality without regarding any modality as intrinsically better than any other? Stated differently, can we plot a regression line of estimated EF vs. true EF without knowing the truth?
2 Approach
We begin with the assumption that there exists a linear relationship between the true EF and its estimated value. We describe this relationship for a given modality m and patient p using a regression line with slope a_m, intercept b_m, and noise term ε_m. We represent the true EF for a given patient by Θ_p, and an estimate of the EF made using modality m by θ_pm. The linear model is thus

θ_pm = a_m Θ_p + b_m + ε_m.    (1)
We make the following assumptions:

1. Θ_p does not vary for a given patient across modalities and is statistically independent from patient to patient.
2. The parameters a_m and b_m are characteristic of the modality and independent of the patient.
3. The error terms ε_m are statistically independent and normally distributed with zero mean and variance σ_m².

We write the probability density function for the noise ε_m as

pr({ε_m}) = ∏_{m=1}^{M} (2πσ_m²)^{−1/2} exp(−ε_m² / (2σ_m²)),    (2)
using assumption number 3 above, where M is the total number of imaging modalities. Using equ. (1), we rewrite equ. (2) as the probability of the estimated EFs for multiple modalities and a specific patient, given the linear model parameters and the true EF:

pr({θ_pm}_p | {a_m, b_m, σ_m²}, Θ_p) = ∏_{m=1}^{M} (2πσ_m²)^{−1/2} exp(−(θ_pm − a_m Θ_p − b_m)² / (2σ_m²)).    (3)

The notation {θ_pm}_p represents the estimated ejection fractions for a given patient p over M modalities. Using the following property of conditional probability,

pr(x₁, x₂) = pr(x₁ | x₂) pr(x₂),    (4)

as well as the marginal probability law,

pr(x₁) = ∫ dx₂ pr(x₁, x₂),    (5)
we write the probability of the estimated EF for a specific patient across all modalities, given the linear model parameters, as

pr({θ_pm}_p | {a_m, b_m, σ_m²}) = ∫ dΘ_p pr(Θ_p) S ∏_{m=1}^{M} exp(−(θ_pm − a_m Θ_p − b_m)² / (2σ_m²)),    (6)

where

S = ∏_{m=1}^{M} (2πσ_m²)^{−1/2}.    (7)
From assumption number 1 above, the likelihood of the linear model parameters can be expressed as

L = ∏_{p=1}^{P} [ S ∫ dΘ_p pr(Θ_p) ∏_{m=1}^{M} exp(−(θ_pm − a_m Θ_p − b_m)² / (2σ_m²)) ],    (8)

where P is the total number of patients. Upon taking the log and rewriting products as sums, we obtain

λ = ln(L) = P ln(S) + Σ_{p=1}^{P} ln [ ∫ dΘ_p pr(Θ_p) ∏_{m=1}^{M} exp(−(θ_pm − a_m Θ_p − b_m)² / (2σ_m²)) ].    (9)
It is this scalar λ, the log-likelihood, that we seek to maximize to obtain our estimates of a_m, b_m, and σ_m². These are maximum likelihood estimates of the parameters. Although pr(Θ_p) may appear to be a prior term, we are not using a maximum a posteriori approach; we are simply marginalizing over the unknown parameter Θ_p. Thus we have derived an expression for the log-likelihood of the model parameters that does not require knowledge of the true EFs Θ_p, i.e., without the use of a gold standard. This is analogous to fitting lines to data without the use of the x-axis.

Although the expression for the log-likelihood in equ. (9) does not require the true EFs Θ_p, it does require some knowledge of their distribution pr(Θ_p). We will refer to this distribution as the assumed distribution, pr_a(Θ_p), of the EFs. In this paper we investigate the effect that different choices of the assumed distribution have on estimating the linear model parameters. We first sample parameters from a true distribution, pr_t(Θ_p), and generate different estimated EFs for the different modalities by linearly mapping these values using known a_m's and b_m's, then add normal noise to these values with known σ_m's. These EF estimates form the values θ_pm, which are used to determine the estimates of the linear model parameters by optimizing equ. (9). We will look at cases in which the assumed and true distributions match, as well as cases in which they do not.
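The data-generation and likelihood-evaluation procedure just described can be sketched as follows. The parameter values are illustrative, and the integral in equ. (9) is approximated by a simple grid average over [0, 1] rather than the authors' Matlab routines.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

# Synthetic data: true EFs from a beta(1.5, 2) distribution, mapped through
# known linear models and corrupted with normal noise (values illustrative).
P, M = 100, 3
a_true = np.array([0.6, 0.7, 0.8])
b_true = np.array([-0.1, 0.0, 0.1])
s_true = np.array([0.05, 0.03, 0.08])
theta = rng.beta(1.5, 2.0, size=P)                  # never shown to the estimator
ef = a_true * theta[:, None] + b_true + rng.normal(0.0, s_true, size=(P, M))

def beta_pdf(x, nu=1.5, om=2.0):
    """Beta density on [0, 1], equ. (10)."""
    B = gamma(nu) * gamma(om) / gamma(nu + om)
    return x ** (nu - 1.0) * (1.0 - x) ** (om - 1.0) / B

def log_likelihood(a, b, s, ef, prior_pdf, n_grid=201):
    """lambda from equ. (9); the marginalization over Theta_p is approximated
    by averaging the integrand on a uniform grid over [0, 1]."""
    grid = np.linspace(0.0, 1.0, n_grid)
    w = prior_pdf(grid)
    lam = -0.5 * len(ef) * np.sum(np.log(2.0 * np.pi * s ** 2))   # P * ln(S)
    for row in ef:
        resid = row - np.outer(grid, a) - b                       # (n_grid, M)
        integrand = w * np.exp(-np.sum(resid ** 2 / (2.0 * s ** 2), axis=1))
        lam += np.log(integrand.mean() + 1e-300)
    return lam

lam_true = log_likelihood(a_true, b_true, s_true, ef, beta_pdf)
lam_off = log_likelihood(0.5 * a_true, b_true + 0.3, 3.0 * s_true, ef, beta_pdf)
```

The negated log-likelihood can then be handed to a general-purpose optimizer (the authors used a quasi-Newton method in Matlab) to obtain the maximum likelihood estimates.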
For our experiments we investigate beta distributions and truncated normal distributions as choices for both the assumed and true distributions. The beta distribution is limited to the interval [0, 1], with probability density function

pr(θ) = θ^{ν−1} (1 − θ)^{ω−1} / B(ν, ω),    (10)
where B(ν, ω) is a normalizing constant. The truncated normal distribution is given by

pr(θ) = A(µ, σ) exp(−(θ − µ)² / (2σ²)) Π(x),    (11)

where A(µ, σ) is the normalizing constant and Π(x) is a rect function that truncates the normal to [0, 1]. It should be noted that µ and σ are the mean and standard deviation of the underlying normal distribution, not necessarily of the truncated normal. Our choice of distributions bounded between 0 and 1 stems from our desire to apply these methods to the specific problem of evaluating modalities that estimate EF, a parameter bounded between 0 and 1.

Using a truncated normal for the assumed distribution in equ. (9), we find the following closed-form solution for the log-likelihood:

λ = P ln(S) + Σ_{p=1}^{P} ln { A(µ, σ) (1/2)√(π/α) exp((β² − 4αγ) / (4α)) [ erf((2α + β) / (2√α)) − erf(β / (2√α)) ] },    (12)

where

α = 1/(2σ²) + Σ_{m=1}^{M} a_m² / (2σ_m²),
β = −µ/σ² − Σ_{m=1}^{M} a_m (θ_pm − b_m) / σ_m²,
γ = µ²/(2σ²) + Σ_{m=1}^{M} (θ_pm − b_m)² / (2σ_m²).
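A sketch of the closed-form patient term in equ. (12), checked against direct numerical integration. We spell out A(µ, σ) as the usual truncated-normal normalizer on [0, 1], which equ. (11) implies but does not state explicitly; function names are ours.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def trunc_normal_A(mu, sigma):
    """A(mu, sigma): normalizer of the truncated normal on [0, 1], equ. (11)."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return 1.0 / (sqrt(2.0 * pi) * sigma * (Phi((1.0 - mu) / sigma) - Phi(-mu / sigma)))

def patient_term(ef_p, a, b, s, mu, sigma):
    """Closed-form value of one patient's marginal integral in equ. (12)."""
    alpha = 1.0 / (2.0 * sigma ** 2) + np.sum(a ** 2 / (2.0 * s ** 2))
    beta = -mu / sigma ** 2 - np.sum(a * (ef_p - b) / s ** 2)
    gam = mu ** 2 / (2.0 * sigma ** 2) + np.sum((ef_p - b) ** 2 / (2.0 * s ** 2))
    bracket = (erf((2.0 * alpha + beta) / (2.0 * sqrt(alpha)))
               - erf(beta / (2.0 * sqrt(alpha))))
    return (trunc_normal_A(mu, sigma) * 0.5 * sqrt(pi / alpha)
            * exp((beta ** 2 - 4.0 * alpha * gam) / (4.0 * alpha)) * bracket)
```

The closed form follows from completing the square in Θ_p: the combined exponent of prior and likelihood is −(αΘ² + βΘ + γ), and the integral over [0, 1] produces the difference of error functions.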
The expression for the log-likelihood with a beta assumed distribution does not easily simplify to a closed-form solution, so we used numerical integration techniques to evaluate equ. (9). We used a quasi-Newton optimization method in Matlab on a Dell Precision 620 running Linux to maximize the log-likelihood as a function of our parameters [3].

For each experiment we generated EF data for 100 patients using one of the aforementioned distributions. We then ran the optimization routine to estimate the parameters, and repeated this entire process 100 times in order to compute sample means and variances for the parameter estimates. The tables below list the true parameters used to create the patient data, as well as the sample means and standard deviations obtained in the simulations.
Objective Comparison of Quantitative Imaging Modalities

3 Results

3.1 Estimating the Linear Model Parameters for a Given Assumed Distribution
We first investigated the results of choosing the assumed distribution to be the same as the true distribution. The asymptotic properties of maximum likelihood estimates would predict that in the limit of large patient populations the estimated linear model parameters would converge to the true values [4]. The results, shown in Table 1, are consistent with this prediction. For the experiment below we have chosen ν = 1.5 and ω = 2 for the beta distribution and µ = 0.5 and σ = 0.2 for the truncated normal distribution. Figure 1 illustrates the results of an individual experiment using the truncated normal distribution.

Table 1. Values of the estimated linear model parameters using matching assumed and true distributions.

              a1          a2          a3          b1           b2          b3
True Values   0.6         0.7         0.8         -0.1         0.0         0.1
pr(Θ)=Beta    0.59±.03    0.69±.03    0.79±.05    -0.10±.02    0.00±.02    0.11±.03
pr(Θ)=Normal  0.58±.04    0.68±.04    0.78±.06    -0.09±.02    0.01±.02    0.11±.03

              σ1          σ2          σ3
True Values   0.05        0.03        0.08
pr(Θ)=Beta    0.048±.005  0.029±.009  0.079±.007
pr(Θ)=Normal  0.048±.006  0.028±.010  0.080±.007
In an attempt to understand the impact of the assumed distribution on the method, we next used a flat assumed distribution, which is in fact a special case of the beta distribution (ν = 1, ω = 1). We used the same beta and truncated normal distributions for the true distribution as were chosen in the previous experiment, namely ν = 1.5, ω = 2, µ = 0.5 and σ = 0.2. As shown in Table 2, the parameters estimated using a flat assumed distribution are clearly not as accurate as those in the experiment with matching assumed and true distributions. However, the systematic underestimation of the a_m's and the systematic overestimation of the b_m's has not affected the ordering of these parameters. In fact, the estimated parameters have been shifted by roughly the same amount. It should also be noted that the estimates of the σ_m's are still accurate. We will return to this point later in the paper.

3.2 Estimating the Linear Model Parameters and the Parameters of the Assumed Distribution
After noting the impact of the choice of the assumed distribution on the estimated parameters, it occurred to us to investigate the effect of varying this distribution. In the case of the beta distribution this was simply a case of adding
John Hoppin et al.
[Fig. 1 plots (a), (b), and (c): estimated EF θ versus true EF Θ for each modality.]
Fig. 1. The results of an experiment using 100 patients, 3 modalities, and the same true parameters as shown in Table 1. In each graph we have plotted the true ejection fraction against the estimates of the EF for three different modalities ((a), (b) and (c)). The solid line was generated using the estimated linear model parameters for each modality. The dashed lines denote the estimated standard deviations for each modality. The estimated a, b and σ for each graph are (a) 0.59, −0.07, 0.06, (b) 0.69, 0.03, 0.025 and (c) 0.83, 0.12, 0.082. Note that although we have plotted the true EF on the x-axis of each graph, this information was not used in computing the linear model parameters.
Table 2. Values of estimated linear model parameters using a flat assumed distribution (pra(Θ) = 1).

               a1          a2          a3          b1           b2          b3
True Values    0.6         0.7         0.8         -0.1         0.0         0.1
prt(Θ)=Beta    0.53±.03    0.61±.03    0.70±.05    -0.09±.02    0.02±.02    0.13±.03
prt(Θ)=Normal  0.50±.01    0.56±.03    0.64±.08    -0.05±.02    0.07±.03    0.18±.04

               σ1           σ2           σ3
True Values    0.05         0.03         0.08
prt(Θ)=Beta    0.049±.005   0.031±.009   0.079±.007
prt(Θ)=Normal  0.048±.005   0.033±.008   0.080±.007
ν and ω to the list of parameters over which we were attempting to maximize the likelihood. In similar fashion, we added µ and σ to the list of parameters for the truncated normal distribution. In the case of the beta distributions, we limited the search to the region 1 ≤ ν, ω ≤ 5, since values of ν and ω between 0 and 1 create singularities at the boundaries, an impossibility considering the nature of EF. In the case of the truncated normal distributions we limited the search to the region 0 ≤ µ ≤ 1 and 0.1 ≤ σ ≤ 10. We began by choosing the form of the assumed distribution and the true distribution to be the same, i.e. we estimated the parameters of the beta distribution while using beta-distributed data. We found that the method successfully approximated the values of all parameters, including those of the assumed distribution, as displayed in Table 3. The results of an individual experiment are displayed graphically in Fig. 2.
Table 3. Values of estimated linear model and distribution parameters with the assumed distribution and the fixed true distribution having the same form.

               a1          a2          a3
True Values    0.6         0.7         0.8
pr(Θ)=Normal   0.59±.03    0.69±.04    0.79±.04
pr(Θ)=Beta     0.60±.09    0.70±.09    0.79±.11

               b1           b2          b3
True Values    -0.1         0.0         0.1
pr(Θ)=Normal   -0.09±.03    0.01±.03    0.11±.04
pr(Θ)=Beta     -0.10±.03    0.01±.03    0.11±.04

               σ1           σ2           σ3
True Values    0.05         0.03         0.08
pr(Θ)=Normal   0.050±.002   0.029±.004   0.080±.003
pr(Θ)=Beta     0.048±.006   0.030±.011   0.080±.006

               Distribution Parameters
True Values    µ = 0.5, σ = 0.2, ν = 1.5, ω = 2.0
pr(Θ)=Normal   µ = 0.50±.03, σ = 0.20±.02
pr(Θ)=Beta     ν = 1.50±.53, ω = 2.08±.99
In the previous experiment the estimated parameters associated with both the beta and truncated normal distributions were very close to their true values. We now show, in Table 4, the results when the assumed distribution differs from the true distribution. We know from our previous experiment that when the form of the assumed and true distributions match, the correct distribution parameters are estimated (on average). However, it remains to be seen what distribution parameters will be estimated when the forms of the two distributions differ. Thus in Fig. 3 we display the true distribution as well as the assumed distribution with the mean estimates of the distribution parameters. Note that although the assumed distribution cannot equal the true distribution, it does take on a form which approximates the true distribution in an attempt to maximize the likelihood.

Table 4. Values of estimated linear model parameters using different forms of the varying assumed distribution and the fixed true distribution.

                               a1          a2          a3
True Values                    0.6         0.7         0.8
pra(Θ)=Normal/prt(Θ)=Beta      0.56±.04    0.65±.05    0.74±.06
pra(Θ)=Beta/prt(Θ)=Normal      0.66±.10    0.78±.09    0.89±.12

                               b1           b2           b3
True Values                    -0.1         0.0          0.1
pra(Θ)=Normal/prt(Θ)=Beta      -0.09±.02    0.01±.02     0.12±.03
pra(Θ)=Beta/prt(Θ)=Normal      -0.14±.06    -0.06±.06    0.03±.07

                               σ1           σ2           σ3
True Values                    0.05         0.03         0.08
pra(Θ)=Normal/prt(Θ)=Beta      0.050±.005   0.029±.004   0.080±.007
pra(Θ)=Beta/prt(Θ)=Normal      0.050±.007   0.025±.011   0.079±.009

4 Discussion and Conclusions
We have developed a method for characterizing an observer's performance in estimation tasks without the use of a gold standard. Although a gold standard is not required for this method, it is necessary to make some assumptions on the distribution of the parameter of interest (i.e., EF). We have found that when the assumed distribution matches the true distribution, the estimates of the linear model parameters are both accurate and precise. Conversely, when the assumed and true distributions do not match, we find that our linear model parameters are no longer as accurate. This led us to investigate the role of the assumed distribution in the accuracy of the linear model parameters. By optimizing both the distribution parameters and the model parameters we found that one can effectively recover both the model parameters and the form of the assumed distribution.
[Fig. 2 plots (a), (b), and (c): estimated EF θ versus true EF Θ for each modality.]
Fig. 2. The results of an experiment using 100 patients, 3 modalities, and the same true parameters as shown in Table 3. In each graph we have plotted the true ejection fraction against the estimates of the EF for three different modalities ((a), (b) and (c)). The solid line was generated using the estimated linear model parameters for each modality. The dashed lines denote the estimated standard deviations for each modality. The estimated a, b and σ for each graph are (a) 0.66, −0.11, 0.050, (b) 0.75, 0.01, 0.035 and (c) 0.86, 0.07, 0.073. Note in this study the parameters of the beta distribution were estimated along with the linear model parameters.
[Fig. 3 panels (a) and (b): probability density versus ejection fraction, comparing the estimated and true densities.]
Fig. 3. When the form of the assumed distribution does not match that of the true distribution, we see that the optimal distribution parameters are such that the form of the assumed distribution approximates the true distribution. In (a), the true distribution is a truncated normal which is approximated automatically by the method using a beta distribution (ν = 3.93, ω = 3.47). In (b), the roles are reversed, as a truncated normal automatically approximates a beta distribution (µ = 0.33, σ = 0.42).
When comparing different imaging modalities one would typically prefer the modality with the most reproducible estimates, i.e. the smallest σ. While the estimates of the slope and intercept of our linear model change according to the assumed distribution, the estimates of the σ values remain accurate. This facilitates modality comparisons without knowledge of a gold standard. While the σ's serve as a description of a modality's reproducibility, the slope and intercept values describe the systematic error (or bias) of the modality. If one is confident in these estimates, they could be employed to correct the systematic error of each modality. Another interesting result of the experiments is the successful estimation of the distribution parameters to fit the form of the true distribution. This could serve as an insight into the distribution of the true parameter for the population studied, i.e., the patient distribution of EF's. A major underlying assumption of the method proposed in this paper is that the true parameter of interest does not vary according to modality. This assumption may not be accurate in the context of estimating EF, which may vary from moment to moment with a patient's mood and breathing pattern. This assumption may be valid, however, for other estimation tasks. Another assumption we have made is the linear relationship between the true and estimated parameters of interest. More complicated non-linear models can easily be accommodated by this method; however, the integration in equ. 8 could become more costly. In the future, we would like to investigate varying true parameters, i.e. cases where the true EF for a patient varies with modality. We would also like to study the robustness of the technique to different underlying true distributions. In addition, we plan to study the fundamental mathematical properties of the method.
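As a small illustration of the bias-correction idea above, assuming the slope and intercept estimates of a modality are trusted, correcting an estimate amounts to inverting the linear model. The function name and numbers below are illustrative:

```python
def correct_systematic_error(theta_hat, a_m, b_m):
    """Map a modality's estimate back to the EF scale by inverting
    theta_hat = a_m * theta + b_m."""
    return (theta_hat - b_m) / a_m

# e.g. with the Table 1 values a_1 = 0.6, b_1 = -0.1, an observed
# estimate of 0.26 corresponds to a corrected EF of 0.6
print(correct_systematic_error(0.26, 0.6, -0.1))
```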
5 Acknowledgements
The authors thank Dr. Dennis Patton from the University of Arizona for his helpful discussions on the various modalities used to estimate ejection fractions. This work was supported by NSF grant 9977116 and NIH grants P41 RR14304, KO1 CA87017-01, and RO1 CA 52643.
References

1. Henkelman, R.M., Kay, I., Bronskill, M.J.: Receiver Operator Characteristic (ROC) Analysis without Truth. Medical Decision Making 10 (1990) 24–29
2. Beiden, S.V., Campbell, G., Meier, K.L., Wagner, R.F.: On the Problem of ROC Analysis without Truth: The EM Algorithm and the Information Matrix. In: Medical Imaging 2000: Image Perception and Performance. Proceedings of SPIE, Vol. 3981 (2000) 126–134
3. Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, New York (1995)
4. Kullback, S.: Information Theory and Statistics. Dover Publications, Mineola, New York (1968)
Theory for Estimating Human-Observer Templates in Two-Alternative Forced-Choice Experiments

Craig K. Abbey¹ and Miguel P. Eckstein¹,²

¹ Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA
² Dept. of Psychology, University of California, Santa Barbara, CA 93106, USA
[email protected] and [email protected]

Abstract. This paper presents detailed derivations of an unbiased estimate for an observer template (a set of linear pixel weights an observer uses to perform a visual task) in two-alternative forced-choice experiments. Two forms of the covariance matrix associated with the error present in this estimation method are also derived and compared on human-observer data.
1 Introduction
In medical imaging applications, an optimal imaging system produces images that allow a clinician or other observer to best perform a diagnostic task of interest.[1] Optimizing imaging systems by this principle must therefore imply numerous evaluations of human-observer performance. Unfortunately, such studies are costly and time consuming. These difficulties with human-observer studies have motivated the search for models of human-observer performance in diagnostic tasks. A predictive model of human-observer performance could be used in place of human observers as part of a general system optimization.[2] In many tasks, the goal is to detect or discriminate a spatially compact signal – such as a focal lesion – embedded in image noise. There is good evidence in this case that observers adopt a linear decision strategy described by an observer template.[3,4,5] The observer template can be thought of as the set of pixel weights that determine the observer's visual strategy for performing the task. There has been a considerable effort to find observer templates that predict human-observer performance and to understand how the templates change with the statistical properties of the images. Recently, Ahumada and coworkers have described a new approach to the problem of determining the observer template (called a "classification image" in the vision literature) in yes-no visual detection and discrimination tasks.[6,7] In this approach, the observer template is estimated directly from a human-observer study using both the trial-to-trial decisions and the noisy image stimuli that produced the decisions. The approach has been extended to the two-alternative forced-choice (2AFC) experimental paradigm by Abbey et al.[8] and used to evaluate the effects of correlated noise on the observer template.[9]

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 24–35, 2001. © Springer-Verlag Berlin Heidelberg 2001
However, there is currently very little theoretical analysis of the estimation problem at the core of direct methods to obtain observer templates. The goal of this paper is to begin filling this gap. We present a detailed derivation demonstrating that the 2AFC estimation procedure suggested by Abbey et al. is an unbiased estimate of the observer template under the assumptions of Gaussian noise and a linear detection strategy. In addition, we derive an analytical approximation for evaluating the statistical error in the estimation procedure, and we show that this approximation accurately predicts the sample error in human-observer data.
2 Theory
In a 2AFC detection task, an observer is presented with two images and asked to identify the image that contains the signal. We will denote an image generically by the vector g. We will refer to the signal-present image as g+, and to the signal-absent image as g−. The two images can be decomposed into components

g+ = b + s + n+   and   g− = b + n−,    (1)
where b is a common background, s is the signal profile, and the noise vectors, n+ and n−, correspond to the noise in the signal-present and signal-absent images respectively. We will restrict our attention to simple detection and discrimination tasks, and hence the signal profile, s, is presumed to be a fixed (nonrandom) vector. The two noise vectors are assumed to be independent multivariate Gaussian random vectors with zero mean, and known correlation structure described by the covariance matrix, K_n.

In this section, we describe how a decision is made in a given trial of a 2AFC experiment under a linear decision model described by an observer template. We then derive an unbiased estimate of the observer template, and the error associated with this estimate. The estimation procedure has been used previously to analyze 2AFC detection tasks. The derivation of the estimate and its error covariance matrix are novel to this work.

2.1 Modelling Decisions in a 2AFC Detection Task

Linear Internal Response Variables. An observer is presumed to make a decision in each trial of a 2AFC experiment by forming a scalar-valued decision variable (sometimes called a test statistic or observer response variable) for each of the two images in a given trial. The image that produces the larger decision variable is chosen by the observer as the image containing the signal. For a linear observer, a decision variable, λ, is a linear function of the image defined by

λ = w^t g + ε,    (2)
where ε is a stochastic internal noise component. The internal noise component is often assumed to be a zero-mean Gaussian random variable, independent of
g. We will follow that assumption here and denote the variance of the internal noise component by σ_ε^2. Note that ε is intended to be a composite internal noise variable and can have elements arising from multiple sources, such as the intrinsic and induced internal noise components suggested by Burgess.[10] The superscript t in Eqn (2) indicates the transpose operation, and hence w^t g is the scalar product of w and g. The vector w is the observer template that determines how the image influences the decision variable. The goal of the procedure described in this paper is to estimate this vector.

In a given trial of a 2AFC experiment, an observer is presented with both g+ and g−. From these two images, the observer forms decision variables λ+ and λ− according to Eqn (2) with corresponding internal noise components ε+ and ε−. The observer makes a correct decision if λ+ > λ−, and an incorrect decision otherwise.

The Trial Score and Figures of Merit for Performance. We will define the trial score, o (to indicate the outcome), to be 1 when a correct decision is made and 0 when an incorrect decision is made. In terms of the decision variables, λ+ and λ−, the score can be defined as o = step(λ+ − λ−), where the step function is 1 for arguments greater than 0, and 0 otherwise. The trial score can be related to the components of (2) and (1) by

o = step( w^t(b + s + n+) + ε+ − w^t(b + n−) − ε− ) = step( w^t(s + ∆n) + ∆ε ),    (3)

where ∆n = n+ − n−, and ∆ε = ε+ − ε−. For consistency with prior work,[9] we note that ∆n is equivalent to ∆g − s. For the multivariate distribution ascribed to n+ and n−, ∆n is a zero-mean Gaussian random vector with covariance matrix given by 2K_n. Likewise, ∆ε is a Gaussian random variable with variance 2σ_ε^2. The most general measure of performance in a 2AFC experiment is the proportion of correct responses, P_C.
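The decision model above is easy to simulate directly. The sketch below is a minimal illustration assuming zero-mean white pixel noise, a flat background (b = 0), no internal noise, and a hypothetical rectangular signal profile with the matched template w = s; with these particular choices the empirical proportion of correct trials should come out near 0.84:

```python
import random

rng = random.Random(1)
dim = 16
s = [0.5 if 4 <= j < 12 else 0.0 for j in range(dim)]  # hypothetical signal profile
w = s[:]                                                # matched-filter template
n_trials = 4000
correct = 0
for _ in range(n_trials):
    g_plus = [sj + rng.gauss(0.0, 1.0) for sj in s]        # signal-present image
    g_minus = [rng.gauss(0.0, 1.0) for _ in range(dim)]    # signal-absent image
    lam_plus = sum(wj * gj for wj, gj in zip(w, g_plus))   # decision variables
    lam_minus = sum(wj * gj for wj, gj in zip(w, g_minus))
    correct += 1 if lam_plus > lam_minus else 0            # trial score o
pc = correct / n_trials
print(pc)
```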
In terms of the trial score from (3), the proportion correct is defined by an expectation over both ∆n and ∆ε as

P_C = ⟨o⟩ = ⟨ step( w^t(s + ∆n) + ∆ε ) ⟩_{∆n,∆ε},    (4)

where the angular brackets, ⟨···⟩, indicate a mathematical expectation. Subscripts on these brackets are used to emphasize the variables involved in the expectation. For Gaussian distributions governing ∆n and ∆ε, the proportion correct is directly related to the detectability index d_w. In the notation defined above, the observer detectability index is defined by

d_w = w^t s / \sqrt{ w^t K_n w + σ_ε^2 }.    (5)
Note that the subscript on the detectability index indicates the dependence on the observer template w. The relationship between proportion correct and the observer detectability under the Gaussian assumptions used here is given by

P_C = Φ( d_w / \sqrt{2} ),    (6)

where Φ is the standard Gaussian cumulative distribution function (CDF),

Φ(x) = \int_{−∞}^{x} (1/\sqrt{2π}) exp( −z²/2 ) dz.    (7)

2.2 Derivation of Basic Quantities for Template Estimation
Section 2.1 described how the observer template fits into a mathematical model of how decisions are made in a 2AFC task. We now turn to deriving the basic quantities needed for an unbiased estimate of the observer template within the framework of this decision model. Abbey et al.[8] analyze the vector quantity q = (2o − 1)∆n, and state (without proof) that its mean value is related to the observer template, w. Note that the 2o − 1 term effectively weights ∆n by 1 if a correct decision is made on the trial (trial score of 1), and by −1 if an incorrect decision is made (trial score of 0). Hence we can think of q as the difference between the noise field of the image chosen by the observer as containing the signal and the noise field of the rejected image. The goal of this section is to derive the relation between q and the observer template, w.

Preliminary Results. Three scalar expectations will be necessary for the subsequent derivations. We state them here without derivations. Let z be a standard-normal random variable (zero mean and variance of unity) and let a and b be independent of z; then

⟨ step(az + b) ⟩ = Φ( b / |a| ),    (8)

⟨ Φ(az + b) ⟩ = Φ( b / \sqrt{a² + 1} ),    (9)

and

⟨ zΦ(az + b) ⟩ = ( a / \sqrt{2π(a² + 1)} ) exp( −b² / (2(a² + 1)) ),    (10)

where Φ is the standard Gaussian CDF defined in (7). The expectation in (8) is straightforward to derive using the definition of the CDF. Expectations (9) and (10) can be obtained using integration by parts.
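Expectations (8)–(10) are easy to spot-check numerically. The following minimal Monte Carlo sketch (the constants a and b are chosen arbitrarily) agrees with the three closed forms to within sampling error:

```python
import math
import random

def Phi(x):
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a, b = 1.3, 0.4
rng = random.Random(3)
n = 200000
s8 = s9 = s10 = 0.0
for _ in range(n):
    z = rng.gauss(0.0, 1.0)
    s8 += 1.0 if a * z + b > 0 else 0.0   # step(az + b)
    s9 += Phi(a * z + b)
    s10 += z * Phi(a * z + b)
# closed forms (8), (9), (10)
e8 = Phi(b / abs(a))
e9 = Phi(b / math.sqrt(a * a + 1.0))
e10 = a / math.sqrt(2.0 * math.pi * (a * a + 1.0)) \
    * math.exp(-b * b / (2.0 * (a * a + 1.0)))
```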
We also use the following basic result from probability theory. Let x and y be two random variables such that x and T(y) have the same distribution. Then the expectation of some function of x, say g(x), is equivalent to an expectation in y according to

⟨ g(x) ⟩_x = ⟨ g(T(y)) ⟩_y.    (11)
This result can be found proved in many probability theory texts.

Expectation of q. Let us consider the expectation of q = (2o − 1)∆n. The quantity of interest is

⟨q⟩ = ⟨ (2o − 1)∆n ⟩ = 2⟨ o∆n ⟩ − ⟨∆n⟩ = 2⟨ o∆n ⟩.

Note that o∆n is a vector quantity, and hence its vector-valued expectation can be thought of as a scalar expectation in each element. The second equality follows because ∆n is a zero-mean random vector, and hence ⟨∆n⟩ = 0. It is tempting to assume that the product, o∆n, will likewise be zero-mean. However, the trial score is dependent on ∆n, and hence the product will not necessarily have a mean of 0. If we substitute (3) for o, we can rewrite the expectation as

⟨q⟩ = 2⟨ step( w^t(s + ∆n) + ∆ε ) ∆n ⟩_{∆n,∆ε},

which explicitly includes the observer template and the internal-noise component. Since ∆ε is presumed to be independent of ∆n, we can compute the expectation in ∆ε from (8) (equating ∆ε with az, where a = \sqrt{2}σ_ε) as

⟨q⟩ = 2⟨ ⟨ step( w^t(s + ∆n) + ∆ε ) ⟩_{∆ε} ∆n ⟩_{∆n} = 2⟨ Φ( w^t(s + ∆n) / (\sqrt{2}σ_ε) ) ∆n ⟩_{∆n}.
The remaining expectation is only over ∆n. We compute this quantity by making two changes of variables as described below.

Since ∆n is presumed to be a zero-mean multivariate Gaussian with covariance matrix 2K_n, it has exactly the same distribution as \sqrt{2} K_n^{1/2} z̃, where z̃ is a vector of independent standard-normal random variables. Each vector element is independent, zero-mean, and has variance equal to one (i.e. K_z̃ = I). Note that K_n^{1/2} should be interpreted in the sense of a matrix square root [11] which acts on the eigenvalue spectrum of K_n. Note that K_n^{1/2} is also a symmetric nonnegative matrix and K_n^{1/2} K_n^{1/2} = K_n. From (11), we can write the expectation in ∆n as an expectation in z̃,

⟨q⟩ = 2⟨ Φ( (w^t s + \sqrt{2} w^t K_n^{1/2} z̃) / (\sqrt{2}σ_ε) ) \sqrt{2} K_n^{1/2} z̃ ⟩_{z̃} = 2^{3/2} K_n^{1/2} ⟨ Φ( (w^t s + \sqrt{2} w^t K_n^{1/2} z̃) / (\sqrt{2}σ_ε) ) z̃ ⟩_{z̃}.    (12)
A useful property of z̃ in (12) is that its distribution is invariant to unitary transformations. As a result, we can appeal to (11) and replace z̃ by Uz in (12). Let us consider a unitary matrix U such that

U^t K_n^{1/2} w = \sqrt{ w^t K_n w } e₁,    (13)

where e₁ = [1, 0, 0, 0, ···]^t. A unitary matrix that satisfies this relation can be constructed by setting the first column of the matrix equal to the unit vector

u₁ = ( 1 / \sqrt{ w^t K_n w } ) K_n^{1/2} w,    (14)
and then choosing the remaining column vectors to be an ortho-normal basis for the orthogonal complement of u₁. Note that u₁ = Ue₁. Applying (13) to (12) yields

⟨q⟩ = 2^{3/2} K_n^{1/2} ⟨ Φ( (w^t s + \sqrt{2} w^t K_n^{1/2} Uz) / (\sqrt{2}σ_ε) ) Uz ⟩_z = 2^{3/2} K_n^{1/2} U ⟨ Φ( (w^t s + \sqrt{2 w^t K_n w} z₁) / (\sqrt{2}σ_ε) ) z ⟩_z.

Note that the argument of Φ is only dependent on z₁, the first element of z (this is the result of e₁^t z). Since the elements of z are statistically independent, every element of the remaining vector-valued expectation will be zero except for the first. We can thus write

⟨q⟩ = 2^{3/2} K_n^{1/2} U c e₁,    (15)
where

c = ⟨ Φ( (w^t s + \sqrt{2 w^t K_n w} z₁) / (\sqrt{2}σ_ε) ) z₁ ⟩_{z₁}.

Application of (10) (with a = \sqrt{2 w^t K_n w}/(\sqrt{2}σ_ε) and b = w^t s/(\sqrt{2}σ_ε)) yields an expression for the value of c given by

c = ( \sqrt{ w^t K_n w } / \sqrt{ 2π( w^t K_n w + σ_ε² ) } ) exp( −(w^t s)² / (4( w^t K_n w + σ_ε² )) ).
Plugging this expression for c back into Eqn (15), and using the definition of u₁ in (14) for Ue₁, yields

⟨q⟩ = ( 2 / \sqrt{ π( w^t K_n w + σ_ε² ) } ) exp( −(w^t s)² / (4( w^t K_n w + σ_ε² )) ) K_n w.

Using the definition of d_w found in (5), we obtain

⟨q⟩ = ( 2 exp( −(d_w/2)² ) / \sqrt{ π( w^t K_n w + σ_ε² ) } ) K_n w.    (16)

Hence, the expectation of q – the difference in the noise fields weighted by the trial score – is seen to be the product of the noise covariance matrix, the observer template, and a complicated positive scalar. We have argued previously[8] that the scalar magnitude in Eqn (16) is essentially irrelevant since multiplying w by a positive constant (and adjusting the internal-noise standard deviation by the same constant) yields an equivalent decision strategy. To make this point another way, consider the elements of w in terms of their units. Elements of w take the image intensities in g+ and g− and turn them into units of the internal response variable. Since the internal response variable is unobservable, it is not clear what these units are, and hence the magnitude of w is somewhat arbitrary.

To get a scaled version of w, we need to remove the dependence on K_n from the right side of the equation. To do this, we can consider the expected value of K_n^{−1} q. The resulting expectation,

⟨ K_n^{−1} q ⟩ = K_n^{−1} ⟨q⟩ = ( 2 exp( −(d_w/2)² ) / \sqrt{ π( w^t K_n w + σ_ε² ) } ) w,    (17)

is seen to be a scaled version of the observer template.

Covariance Matrix Associated with q. In order to get an analytic expression for the error associated with the template estimate described in the next section, we will need to know the covariance of q. We define this quantity to be

K_q = ⟨ (q − ⟨q⟩)(q − ⟨q⟩)^t ⟩ = ⟨ q q^t ⟩ − ⟨q⟩⟨q⟩^t,    (18)

where the expectation is computed elementwise on the matrix quantity q q^t. From the definition of q, we see that

⟨ q q^t ⟩ = ⟨ (2o − 1)² ∆n ∆n^t ⟩ = ⟨ ∆n ∆n^t ⟩ = 2K_n,    (19)
where the second equality is a result of the fact that 2o − 1 can only assume the values 1 or −1, and hence (2o − 1)² must always be 1. The final equality follows from the definition of a covariance matrix for a zero-mean random variable. We can use (19) and the expression for ⟨q⟩ in (16) to write the covariance matrix of q in Eqn (18) as

K_q = 2K_n − ( 4 exp( −2(d_w/2)² ) / ( π( w^t K_n w + σ_ε² ) ) ) K_n w w^t K_n.    (20)

2.3 Estimation Procedures
Template Estimation. The analytic results of section 2.2 suggest procedures for estimating the observer template and obtaining the error associated with that estimate. We can replace the expected value in (17) by its sample average to form an estimate of the observer template. In the ith trial of an experiment (i = 1, ···, N_T), we can compute q_i from the trial score, o_i, and the noise-field difference, ∆n_i. The estimated observer template is

ŵ = K_n^{−1} q̄,    (21)

where q̄ is the sample average

q̄ = (1/N_T) \sum_{i=1}^{N_T} q_i = (1/N_T) \sum_{i=1}^{N_T} (2o_i − 1) ∆n_i.
Since a sample average is an unbiased estimate of its mean, ŵ is therefore an unbiased estimate of the observer template, scaled according to Eqn (17).

Estimation Error Covariance Matrix. From Eqn (21) we see that the error covariance associated with ŵ is defined by

K_ŵ = K_n^{−1} K_q̄ K_n^{−1},

where K_q̄ is the covariance matrix of the sample average q̄. The covariance matrix of q̄ is related to the covariance matrix for q by

K_q̄ = (1/N_T) K_q.    (22)

The resulting error covariance is thus

K_ŵ = (1/N_T) K_n^{−1} K_q K_n^{−1}.    (23)
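The estimation procedure (21) can be sketched in a few lines. The simulation below is a minimal illustration assuming white noise (K_n = σ²I, so K_n^{−1} is diagonal), a linear observer with a known template, and no internal noise; per Eqn (17), the sampled ŵ should come out proportional to the true w:

```python
import random

def estimate_template(trials, noise_var):
    """Template estimate w_hat = Kn^{-1} q_bar of Eq. (21) for white noise,
    where Kn = noise_var * I.  trials is a list of (o_i, dn_i) pairs: the
    0/1 trial score and the noise-field difference n+ - n-."""
    NT = len(trials)
    dim = len(trials[0][1])
    qbar = [0.0] * dim
    for o, dn in trials:
        sgn = 2 * o - 1                     # +1 if correct, -1 if incorrect
        for j in range(dim):
            qbar[j] += sgn * dn[j] / NT
    return [qj / noise_var for qj in qbar]  # Kn^{-1} q_bar

# simulate a linear observer with a known template (w = s, illustrative)
rng = random.Random(11)
dim = 8
s = [1.0, 1.0] + [0.0] * (dim - 2)
w = s[:]
trials = []
for _ in range(20000):
    n_p = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    n_m = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    dn = [p - m for p, m in zip(n_p, n_m)]
    o = 1 if sum(wj * (sj + dj) for wj, sj, dj in zip(w, s, dn)) > 0 else 0
    trials.append((o, dn))
w_hat = estimate_template(trials, 1.0)
```

The recovered w_hat has large entries only where w does, with per-element fluctuations on the order of sqrt(2/N_T) for this unit-variance noise.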
Two approaches present themselves for determining the error covariance matrix associated with ŵ. In the sample approach, we substitute the sample estimate for K_q in (22). The sample covariance is defined as

K̂_q = (1/(N_T − 1)) \sum_{i=1}^{N_T} (q_i − q̄)(q_i − q̄)^t.

The resulting covariance matrix for ŵ is given by

K̂_ŵ = (1/N_T) K_n^{−1} K̂_q K_n^{−1}.    (24)
A second approach can be derived from the analytic expression for K_q given in Eqn (20). In this case we find that

K_ŵ = (1/N_T) K_n^{−1} [ 2K_n − ( 4 exp( −2(d_w/2)² ) / ( π( w^t K_n w + σ_ε² ) ) ) K_n w w^t K_n ] K_n^{−1}
    = (2/N_T) [ K_n^{−1} − ( 2 exp( −2(d_w/2)² ) / ( π( w^t K_n w + σ_ε² ) ) ) w w^t ].

If the second term in this expression is small enough, it may be conveniently neglected, yielding the approximation

K_ŵ ≈ (2/N_T) K_n^{−1}.    (25)
The attraction of this approximation is that it is simple and independent of w.
3 Testing Sample and Approximate Errors in the Estimated Template
The template estimation procedure given in (21) has been validated previously by Abbey et al.[8] However, in that work the estimation error was not considered. As a preliminary test of the approximate error given in (25), we use the human-observer data of Abbey et al. to compare the sample and approximate methods for obtaining errors in the estimated template.

3.1 Experimental Data
The psychophysical data used here are the results of a 2,000-trial 2AFC detection task. The images are 32 × 32 pixels with a mean intensity of 128 grey-levels (GL) corresponding to a mean luminance of 16.0 cd/m² on a linearized display. Because of the small size of the images (32 × 32 pixels), the images are upsampled by a factor of 2 for display, yielding an effective pixel size of 0.60 mm. The signal
in these experiments is a centered Gaussian "bump" with a standard deviation of 3.0 pixels and an amplitude of 10.0 GL. Gaussian white noise (independent from pixel to pixel) with a common standard deviation (σ_pix) of 25.0 GL is added to each image. The images are intended to resemble those used in early work by Burgess.[3] The left side of Figure 1 shows the signal profile used in the experiments and the uniform background of a nonsignal image. An example of a noisy signal-present and signal-absent image can be found in the middle of the figure.
Fig. 1. Images and estimated templates. A and D: Mean signal-present and signal-absent profiles. B and E: Example signal-present and signal-absent images. C and F: Template estimates for the two participating observers.
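The stimulus ensemble just described can be reproduced in a few lines; the following is a minimal sketch (the function name and seed are our own, not from the original experiment code):

```python
import math
import random

def make_2afc_trial(seed=None, size=32, mean_gl=128.0, amp=10.0,
                    sig_sd=3.0, noise_sd=25.0):
    """Generate one signal-present / signal-absent image pair matching the
    stimulus description: uniform background, centered Gaussian bump,
    white Gaussian pixel noise."""
    rng = random.Random(seed)
    c = (size - 1) / 2.0
    bump = [[amp * math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sig_sd ** 2))
             for x in range(size)] for y in range(size)]
    g_plus = [[mean_gl + bump[y][x] + rng.gauss(0.0, noise_sd)
               for x in range(size)] for y in range(size)]
    g_minus = [[mean_gl + rng.gauss(0.0, noise_sd)
                for x in range(size)] for y in range(size)]
    return g_plus, g_minus
```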
Two observers participated in the psychophysical study. Observer CKA is an author of this paper and fully aware of the research goals motivating the experiment (or so he believes). Observer ECF is naive to the goals of the study and compensated for performing the experiments. Both observers have fairly extensive experience in detection tasks of this sort, and task-specific training was completed before starting the experiment.

The two images on the right side of Figure 1 are the estimated observer templates – computed according to Eqn (21) – for observers ECF and CKA. A feature of interest in these two templates is the mild negative region surrounding a bright central region. This negative surround is indicative of suppression at low spatial frequencies.

3.2 Template Error Comparison
Figure 2 shows comparisons of the sample and approximation methods for obtaining the estimator error. The plots show the variance of the elements on a horizontal slice through the middle of the estimated templates. Because the noise is white with a common variance, the approximate method in (25) predicts a common variance for the elements of the template estimate of 2/(NT σpix²) = 1.6 × 10⁻⁶. As seen in the figure, this prediction is in good agreement with the sample estimate of variance obtained from (24). Hence, it appears that Eqn (25) is a good analytic method for obtaining the estimation error covariance matrix.
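The quoted prediction can be re-computed directly from the stated experimental parameters (Eqn (25) itself is not reproduced here; this only checks the arithmetic of the stated value):

```python
# Analytic variance prediction for the template-estimate elements:
# 2 / (N_T * sigma_pix^2), with N_T = 2000 trials and sigma_pix = 25 GL.
N_T = 2000
sigma_pix = 25.0
predicted_var = 2.0 / (N_T * sigma_pix ** 2)  # 1.6e-6, as quoted in the text
```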
Fig. 2. Sample estimate and analytic approximation to the variance on a slice through the center of an observer template.
4 Summary
We have presented a detailed derivation of a method for estimating a linear observer template in 2AFC experiments. In particular, we have explicitly derived the expected value of the template estimate, as well as two methods for obtaining the error associated with this estimate. Our preliminary comparison of the two methods indicates a high degree of agreement in human observer data. It is our hope that the mathematical framework described here will serve as a starting point for further investigations into direct methods for understanding how human observers perform visual tasks relevant to medical imaging. For example, we
have begun work to extend the approach presented here to general multiple-alternative forced-choice experiments. We have also been using the analysis of estimator error presented here to derive statistical hypothesis tests on estimated templates.
Acknowledgements The authors wish to thank Francois Bochud for helpful discussions. Support: NIH RO1-53455
References
1. H.H. Barrett, “Objective assessment of image quality: Effects of quantum noise and object variability,” J. Opt. Soc. Am. A, Vol. 7, pp. 1266-1278, 1990.
2. K.J. Myers et al., “A systematic approach to the design of diagnostic systems for nuclear medicine,” in Information Processing in Medical Imaging (S.L. Bacharach, ed.), Martinus Nijhoff, Dordrecht, Netherlands: 431-444, 1986.
3. A.E. Burgess and H. Ghandeharian, “Visual signal detection. II. Signal-location identification,” J Opt Soc Am A, 1:900-905, 1984.
4. H.H. Barrett, T. Gooley, K. Girodias, J. Rolland, T. White, and J. Yao, “Linear discriminants and image quality,” in Information Processing in Medical Imaging (A.C.F. Colchester and D.J. Hawkes, Eds.), Springer-Verlag, Berlin, 458-473, 1991.
5. C.K. Abbey and H.H. Barrett, “Linear iterative reconstruction algorithms: Study of observer performance,” in Information Processing in Medical Imaging (Y. Bizais, C. Barillot, and R. Di Paola, Eds.), Kluwer Academic, Dordrecht, 65-76, 1995.
6. A.J. Ahumada, “Perceptual classification images from vernier acuity masked by noise,” Perception 26, pp. 18, 1996.
7. B.L. Beard and A.J. Ahumada, Jr., “A technique to extract relevant image features for visual tasks,” Proc. SPIE Vol. 3299, pp. 79-85, 1998.
8. C.K. Abbey and M.P. Eckstein, “Estimation of human-observer templates for 2 alternative forced choice tasks,” Proc. SPIE 3663, pp. 284-295, 1999.
9. C.K. Abbey, M.P. Eckstein, and F.O. Bochud, “Estimates of human observer templates for a simple detection task in correlated noise,” Proc. SPIE 3981, 2000.
10. A.E. Burgess and B. Colborne, “Visual signal detection. IV. Observer inconsistency,” J Opt Soc Am A, 5:617-627, 1988.
11. K.V. Mardia, J.T. Kent, and J.M. Bibby, Multivariate Analysis. Academic Press, San Diego, 1979.
The Active Elastic Model

Xenophon Papademetris1, E. Turan Onat2, Albert J. Sinusas3,4, Donald P. Dione4, R. Todd Constable1, and James S. Duncan1,3

1 Departments of Diagnostic Radiology, 2 Mechanical Engineering, 3 Electrical Engineering, and 4 Medicine, Yale University, New Haven, CT 06520-8042
[email protected] Abstract. Continuum mechanical models have been used to regularize ill-posed problems in many applications in medical imaging analysis such as image registration and left ventricular motion estimation. In this work, we present a significant extension to the common elastic model which we call the active elastic model. The active elastic model is designed to reduce bias in deformation estimation and to allow the imposition of proper priors on deformation estimation problems that contain information regarding both the expected magnitude and the expected variability of the deformation to be estimated. We test this model on the problem of left ventricular deformation estimation, and present ideas for its application in image registration and brain deformation during neurosurgery.
Continuum mechanical models have been extensively used in medical imaging applications over the last ten years, particularly within the contexts of image registration and cardiac motion estimation. More recently, similar models have been applied to the problem of brain deformation during neurosurgery. The models used have been selected either (i) because of their mathematical properties (e.g. [3,9]) or (ii) as an attempt to model the underlying physics of the situation (e.g. [11,17,21]). Such models are a specific case of the quadratic regularizers used in many computer vision applications, such as in the work of Horn[12] or in the deformable models used for segmentation (see McInerney and Terzopoulos [16] for a review). The classical elastic model is derived from the properties of elastic solids such as metals. In cases of small deformations, the linear elastic model may also be applied to model biological tissue which is more hyperelastic in nature. All linear elastic models so far used in medical imaging work are passive models. These models will produce no deformation of their own and are essentially used for smoothing and/or interpolation. Using an elastic model results in an underestimation of the deformation as the model itself biases the estimates towards zero deformation. In this paper we present work to extend these elastic models to allow for non-zero bias. We call this model the ‘active elastic model’. The active elastic model is designed to be used to solve a problem of the following form: ‘Given an input of noisy, possibly sparse, displacements find a M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 36–49, 2001. c Springer-Verlag Berlin Heidelberg 2001
dense smooth displacement field which results in a deformation which is close to a desired/expected deformation.’ This new method allows us to construct a proper prior model on the deformation that includes both a mean (the desired magnitude of the deformation) and a covariance (derived from the desired degree of smoothness). The rest of this paper reads as follows: In section 1, we review the basic mathematics of the general energy minimization framework and we compare the use of a passive and an active elastic model for estimation purposes. In section 2, we examine the problem of bias in deformation estimation and demonstrate how the active model can be used to reduce this bias. We present some preliminary results of the application of an active model to reduce the bias in left ventricular deformation estimation in section 3.1 and we conclude by discussing potential applications of this methodology in other areas such as image registration and brain deformation during neurosurgery in section 4.
1 The Energy Minimization Framework
In this section we describe a framework in which the goal is to estimate a displacement field u which is a smooth approximation of a noisy displacement field um. We will assume that um is derived from some image-based algorithm, such as the shape-based tracking algorithm[20,17], MR tagging measurements (e.g. [11]) or optical flow estimates (e.g. [12]). We can pose this problem as an approximation problem whose solution is a least-squares fit of u to um subject to some smoothness constraints and takes the form:

$$\hat{u} = \arg\min_u \int_V c(x)\,|u^m(x) - u(x)|^2\,dV + \int_V W(\alpha, u, x)\,dV \qquad (1)$$

where: u(x) = (u1, u2, u3) is the vector valued displacement field defined in the region of interest V and x is the position in space, c(x) is the spatially varying confidence in the measurements um and W(α, u, x) is a positive semi-definite regularization functional. W is solely a function of u, a model parameter vector α and the spatial position x. This approach also generalizes to the case where the input displacement field um is sparse. At those locations where no measurement um(x) exists the confidence c(x) can be set equal to zero.
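A discrete one-dimensional analogue of this functional can be minimized in closed form, which may help make the framework concrete. This is a sketch, not the authors' implementation: `smooth_fit` is a hypothetical helper, and a quadratic first-difference penalty stands in for W:

```python
import numpy as np

def smooth_fit(um, c, alpha):
    """Least-squares fit of u to noisy measurements um with confidences c,
    penalizing squared first differences of u.
    Minimizes  sum_i c_i (um_i - u_i)^2 + alpha * |D u|^2,
    a 1-D discrete analogue of Eqn (1). Setting c_i = 0 at a sample handles
    missing (sparse) measurements, as noted in the text."""
    n = len(um)
    D = np.diff(np.eye(n), axis=0)       # first-difference operator, (n-1) x n
    A = np.diag(c) + alpha * D.T @ D     # normal equations of the quadratic
    return np.linalg.solve(A, c * um)
```

With `alpha = 0` the fit reproduces the measurements; as `alpha` grows the solution is pulled toward a constant (the smoothest field).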
1.1 The Linear Elastic Model
In the early computer vision literature (e.g. [12]) the model W was generated using a regularization functional which penalized a weighted sum of the squared derivatives of the displacement field to impose a smoothness constraint. In medical imaging analysis work the classical linear elastic model is often used, especially in those cases where the problem is the estimation of a real deformation (e.g. left ventricular motion estimation [11,17].)
A common way to define solid elastic models is in terms of an internal energy function. This internal energy function must be invariant to rigid translation and rotation in order to satisfy certain theoretical guidelines (see Eringen[8] for more details.) Hence the use of any elastic model provides no constraints on the rigid component of the displacement. Additional constraints must be employed to take advantage of any other prior information regarding the magnitude of the overall translation and rotation. The classical linear elastic model[22] captures the mechanical properties of a deforming solid in terms of an internal, or strain energy function of the form:
$$W = \frac{1}{2}\,\epsilon^t C\,\epsilon \qquad (2)$$

where C is a 6 × 6 matrix representing the elastic properties of the material and ε is the strain vector. In the most commonly used case, that of isotropic, infinitesimal linear elasticity, these can be written as:

$$C^{-1} = \frac{1}{E}\begin{pmatrix} 1 & -\nu & -\nu & 0 & 0 & 0 \\ -\nu & 1 & -\nu & 0 & 0 & 0 \\ -\nu & -\nu & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2(1+\nu) & 0 & 0 \\ 0 & 0 & 0 & 0 & 2(1+\nu) & 0 \\ 0 & 0 & 0 & 0 & 0 & 2(1+\nu) \end{pmatrix}, \qquad \epsilon = \begin{pmatrix} \partial u_1/\partial x_1 \\ \partial u_2/\partial x_2 \\ \partial u_3/\partial x_3 \\ \partial u_1/\partial x_2 + \partial u_2/\partial x_1 \\ \partial u_1/\partial x_3 + \partial u_3/\partial x_1 \\ \partial u_2/\partial x_3 + \partial u_3/\partial x_2 \end{pmatrix} \qquad (3)$$
where u(x) = (u1(x), u2(x), u3(x)) is the displacement at point x = (x1, x2, x3). E is the Young’s modulus, which is a measure of the stiffness of the material, and ν is the Poisson’s ratio, which is a measure of the incompressibility. In the rest of the paper we will refer to the classical linear elastic model as the passive model to distinguish it from the active linear elastic model described in the next section.
1.2 The Active Linear Elastic Model
The classical linear elastic model described in equation (2) is a passive model. In the absence of any external force, the material will do nothing. Given no external work, equilibrium is reached at the lowest energy state, where the strain vector ε is identically equal to zero. Such a material model is not accurate in the case of actively deforming objects such as the left ventricle of the heart. In this case, a substantial part of the deformation is actively generated by the muscle and is clearly not a result of external forces. This active deformation does not produce a change in the strain energy of the material, and to account for this factor we need to modify the elastic model appropriately. With this in mind we propose the active elastic model, which takes the form:

$$W = \frac{1}{2}\,(\epsilon - \epsilon^a)^t\, C\,(\epsilon - \epsilon^a) \qquad (4)$$
The Active Elastic Model
39
where εa is the active strain component. The active strain component represents the deformation that is not a product of external forces and hence should not be penalized by the model. In the absence of external forces, the active elastic model results in a deformation equal to the one actively generated by the object. So in this sense it can deform itself, and hence it justifies the label active. Given a prior model of the active contraction, the active elastic model can also be used to generate a prediction of the position of the deforming object. This model is also appropriate in the case where it is used to regularize an image registration problem where there is no such physical notion of active deformation. Here, the active component εa can be thought of as the expected magnitude of the deformation.
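The passive and active energies of Eqns (2) and (4) can be sketched numerically. This is an illustrative numpy sketch (the function names are ours), building C by inverting the isotropic compliance matrix of Eqn (3):

```python
import numpy as np

def isotropic_C(E, nu):
    """Isotropic elasticity matrix C, obtained by inverting the compliance
    matrix C^{-1} of Eqn (3) for Young's modulus E and Poisson's ratio nu."""
    Cinv = (1.0 / E) * np.array([
        [1, -nu, -nu, 0, 0, 0],
        [-nu, 1, -nu, 0, 0, 0],
        [-nu, -nu, 1, 0, 0, 0],
        [0, 0, 0, 2 * (1 + nu), 0, 0],
        [0, 0, 0, 0, 2 * (1 + nu), 0],
        [0, 0, 0, 0, 0, 2 * (1 + nu)],
    ], dtype=float)
    return np.linalg.inv(Cinv)

def W_passive(eps, C):
    """Classical strain energy, Eqn (2): W = eps^t C eps / 2."""
    return 0.5 * eps @ C @ eps

def W_active(eps, eps_a, C):
    """Active strain energy, Eqn (4): only deviation from the active strain
    eps_a is penalized; W is zero when eps equals eps_a."""
    d = eps - eps_a
    return 0.5 * d @ C @ d
```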
1.3 The Elastic Model as a Prior Probability Density Function
The energy minimization problem described in equation (1) can also be expressed as a Bayesian maximum a-posteriori estimation problem[17]. In this case, the solution vector û is the u that maximizes a posterior probability density p(u|um). Using Bayes’ rule, we can pose this problem (at each point x) as:
$$\hat{u} = \arg\max_u\, p(u|u^m) = \arg\max_u \frac{p(u^m|u)\,p(u)}{p(u^m)} = \arg\max_u \left\{\log p(u^m|u) + \log p(u)\right\} \qquad (5)$$

by noting that p(um) is a constant once the measurements have been made. The measurement probability p(um|u) can be obtained by using a white noise model for the noise in the measurements um. The prior probability density function p(u) can be derived using an energy function (such as W) using a probability density function of the Gibbs form [10]. We note that this approach has been previously used in medical imaging problems (e.g. Christensen [3], Gee [9] and others). In the cases of the passive and the active model, this prior distribution has the form:

$$\text{Passive:}\quad \log p(u) = k_1 - \frac{\epsilon^t C\,\epsilon}{2} \qquad (6)$$
$$\text{Active:}\quad \log p(u) = k_2 - \frac{(\epsilon - \epsilon^a)^t C\,(\epsilon - \epsilon^a)}{2} \qquad (7)$$

where k1 and k2 are normalization constants. Note further that the standard multivariate normal distribution (mean = µ, covariance = Σ) has the form (k3 is similarly a normalization constant):

$$\log p(u) = k_3 - \frac{(u - \mu)^t\, \Sigma^{-1}\,(u - \mu)}{2} \qquad (8)$$
By comparing equations (6) and (7) to equation (8), we can see that in both cases the material matrix C plays a similar role to the inverse of the covariance matrix (the stiffer the material is, the greater the coupling between the displacements of neighboring points and hence the smaller the effective component of the covariance matrix), and that in the case of the active model, the active strain εa acts like the mean of the distribution. In the case of the passive model, the mean is effectively zero. Hence we can explicitly see that the active elastic model is a generalization of the passive model, by adding the possibility of having a non-zero mean.

Fig. 1. A one-dimensional example. Consider a one-dimensional object consisting of two points p1 and p2 originally a distance L apart. The body is modeled using an elastic spring of stiffness K. The body is then somehow deformed (stretched). In the deformed state, we have initial estimates of the positions of p1 and p2, shown as q1 and q2 respectively, and the confidence in these estimates is given by A. The problem can be visualized by connecting point pairs (p1, q1) and (p2, q2) with zero-length springs of effective stiffness A and points (p1, p2) with a spring of stiffness K and length L. In this case, the initial displacements are given by um = [q1 − p1, q2 − p2]t and the strain is equal to (u1 − u2)/L.
2 Bias Reduction Using the Active Elastic Model
The passive elastic model will likely underestimate the real deformation as a result of its penalization of all deformations. We proceed to illustrate the problem by means of a simple example and demonstrate how the active model can be used to reduce the bias. We also describe how the problem (or more precisely its symptoms) has been dealt with in the literature and point out some of the shortcomings in those approaches.
2.1 A Simple Example
To illustrate the concept of the active elastic model more concretely we will use the simple one-dimensional case described in figure 1. In this case the approximation functional (see equation 1) takes the form:

$$\hat{u} = \arg\min_u\; A\left(|u^m(p_1) - u(p_1)|^2 + |u^m(p_2) - u(p_2)|^2\right) + W \qquad (9)$$

We will consider two forms of W, a passive model Wpassive and an active model Wactive, which have the form:

$$W_{passive} = \frac{K}{2}\left(\frac{u_1 - u_2}{L}\right)^2, \qquad W_{active} = \frac{K}{2}\left(\frac{u_1 - u_2}{L} - \epsilon^a\right)^2 \qquad (10)$$
Note here that the active model reduces to the passive model if the value of the active strain εa is set to zero. Substituting the models defined in equation (10) into equation (9), and differentiating with respect to u, we obtain the following matrix equations (in the active case):

$$\begin{pmatrix} A + \frac{K}{L} & -\frac{K}{L} \\ -\frac{K}{L} & A + \frac{K}{L} \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} A u^m_1 + K\epsilon^a \\ A u^m_2 + K\epsilon^a \end{pmatrix} \qquad (11)$$

To simplify the math in order to make the illustration clearer, we set um(p2) = 0, u(p2) = 0. This results in the following two solutions for u(p1):¹

$$\text{Passive Model:}\quad u(p_1) = \frac{A\,u^m(p_1)}{A + K/L} \qquad (12)$$
$$\text{Active Model:}\quad u(p_1) = \frac{A\,u^m(p_1) + K\epsilon^a}{A + K/L} \qquad (13)$$

Further, we can write the expected value of u(p1), E(u(p1)), in terms of the expected value of um(p1), E(um(p1)), as:

$$\text{Passive Model:}\quad E(u(p_1)) = \frac{A}{A + K/L}\,E(u^m(p_1)) \qquad (14)$$
$$\text{Active Model:}\quad E(u(p_1)) = \frac{A}{A + K/L}\,E(u^m(p_1)) + \frac{K}{A + K/L}\,\epsilon^a \qquad (15)$$

2.2 Bias Estimation and Reduction
In the solution produced by the passive model, the expected value of u(p1) (see equation 14) will be smaller than the expected value of the measurements um(p1) as long as K > 0. Hence any estimation using the passive elastic model is biased, and will underestimate the actual deformation. Consider the case where L = 1, A = 3K. In this case, by substitution into equations (14) and (15), we get the following expressions:

$$\text{Passive:}\quad E(u(p_1)) = \frac{3}{4}E(u^m(p_1)), \qquad \text{Active:}\quad E(u(p_1)) = \frac{3}{4}E(u^m(p_1)) + \frac{1}{4}\epsilon^a$$
So by an appropriate choice of εa, derived from knowledge of the specific problem, the bias in the estimation can be significantly reduced. For example, if we had
¹ As an aside, we also note that the expressions of equations (12) and (13) can be rewritten so that the constants K and A appear only as the ratio K/A. For example, equation (12) can be rewritten as u(p1) = um(p1)/(1 + K/(AL)). Hence the absolute value of the stiffness K or the data confidence A does not enter into the problem. This can be a problem in the case of the estimation of real deformation (such as in the case of the left ventricle), as the two are measured in different units and hence make the equation inconsistent from a dimensionality viewpoint.
prior knowledge of the expected strain in this case (where ε = (u(p2) − u(p1))/L), we could use such information to set the active strain εa so as to reduce the bias. We note further that the effect of the bias is more significant where the relative confidence of the measurements (A) is low as a result of noisy data.
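The closed-form solutions (12) and (13) of the one-dimensional example are simple enough to check directly. This is a sketch with hypothetical helper names, using um(p2) = u(p2) = 0 as in the text:

```python
def u_passive(um1, A, K, L):
    """Eqn (12): passive estimate of u(p1), shrunk toward zero by K/L."""
    return A * um1 / (A + K / L)

def u_active(um1, A, K, L, eps_a):
    """Eqn (13): active estimate of u(p1); the K*eps_a term shifts the
    estimate toward the expected (active) strain instead of toward zero."""
    return (A * um1 + K * eps_a) / (A + K / L)
```

With L = 1 and A = 3K the passive estimate recovers only 3/4 of the measurement, matching the bias discussion in the text, while a well-chosen εa can remove the shortfall.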
2.3 Alternative Methods of Bias Reduction
We also note that the problem of bias has been dealt with in a number of different ways in the literature (often without actually being recognized as such).

Zero Stiffness: This ‘solution’ is used by Park et al[19], where the Young’s modulus is set to zero. In this case, temporal filtering is used for noise reduction. This eliminates the problems associated with bias; it also forfeits all the usefulness of exploiting the spatial relationships between different points in the model. The method is successful in part because the input data are very clean.

Direct Bias Correction: Sometimes further knowledge about the problem can be used to correct for some of the bias. In our earlier work [18,17] on left ventricular deformation estimation we solved the problem in a two-step fashion, for each frame in the image sequence. At each time t the problem was solved first using a formulation like that of equation (1) to produce an estimate of the position of all the points at time t + 1. Then all points that were on the endo- and epi-cardial surfaces of the heart at time t were mapped to the (pre-segmented) endo- and epi-cardial surfaces at time t + 1, using a modified nearest-neighbor approach. In this approach the bias in the radial and circumferential directions is largely accounted for, but there remains bias in the longitudinal direction (which lies parallel to the ‘major’ axis of the surface). Other methods which constrain the tracked tokens to lie on a given curve or surface fall into this category of bias correction (e.g. [14]).

The Incremental Approach: In this case the estimation problem is broken into a number of small (algorithmic) steps. This has the effect of reducing the bias, which is directly related to the magnitude of um. Consider again the simple example of figure 1 with L = 1, A = 3K as before. If the displacement um(p1) is applied in one step, we get an estimate of u(p1) = 0.75 um(p1) and a bias of 0.25 um(p1).
The incremental approach is best explained algorithmically. At each increment i ∈ (0, N) the estimate of u(p1) is defined as di(p1). Then, for any increment i we calculate di(p1) as:

$$i = 0:\quad d_0(p_1) = 0$$
$$i > 0:\quad d_i(p_1) = d_{i-1}(p_1) + 0.75\left(\frac{i}{N}\,u^m(p_1) - d_{i-1}(p_1)\right)$$
This essentially is a history-free approach, as in each step the model is only used to regularize the difference between the current input and the last step, as opposed to the whole of the input. This approach results in smaller input displacements which are closer to zero, thus resulting in a reduction of the bias. The reduction of the bias is directly related to the number of steps. In this specific case, when N = 2 the total bias is 0.16 um(p1), when N = 4 it is 0.08 um(p1), and for N = 8 it is reduced to 0.04 um(p1).

The Fluid Model: This is essentially the limiting case of the incremental approach. In the work of Christensen[3], it takes the differential form:

$$\mu\nabla^2 v + (\lambda + \mu)\nabla(\nabla\cdot v) = F \qquad (16)$$
where F is the image derived forcing function and v is the local velocity vector. The isotropic linear elasticity model can also be written in differential form by differentiating the energy functional posed in equation (1) and generating a force F by grouping together all external displacements um. This takes the form (as derived in Christensen [2]):

$$\mu\nabla^2 u + (\mu + \lambda)\nabla(\nabla\cdot u) = F \qquad (17)$$
where λ and µ are the Lamé constants, which are defined in terms of the Young’s modulus E and the Poisson’s ratio ν as[22]: λ = Eν/((1 + ν)(1 − 2ν)) and µ = E/(2(1 + ν)). If we compare equations (16) with (17) we see that they have essentially the same form, the one being in terms of the velocity v and the other in terms of the displacement u. The fluid model can be seen to be the limiting case of the incremental approach of the previous section as the step size goes to zero. This approach has the advantage of explicitly stating its assumptions properly and possibly some numerical advantages.²

Disadvantages of the Incremental/Fluid Approach: The incremental/fluid approach substantially reduces the bias, but the history of the deformation is lost at each (algorithmic) step. Hence in this way we cannot capture aspects of real materials such as progressive hardening with increased deformation (using nonlinear elastic models), as at each step the deformation is assumed to be zero. Also, the fact that the analysis is reset at the end of each step makes incorporation of temporal smoothness constraints in problems such as left ventricular motion estimation very difficult. Perhaps more fundamental in certain cases is the lack of ability of either of these approaches to encapsulate any prior information available as to the expected magnitude of the deformation, as opposed to simply its relative smoothness.

² This is perhaps the answer to the ‘controversy’ as to whether the linear elastic model is useful in the case of large deformations. If the (passive) linear elastic model is applied using the incremental approach, as is often the case, it is really a fluid model in disguise; hence it has similar large-deformation capabilities.
2.4 Relation of the Active Elastic Model to Other Methods
In this section we clarify the relationship of certain other methods in the literature which relate, or appear to relate, to the active elastic model. Any criticism of these methods is simply with respect to their application in the problem of interest of our own work. (We do note that these methods were mostly designed to solve different problems.)

The thin-plate spline: A common regularization function is the thin-plate spline model[1], which in two dimensions has the form (using u = (u1, u2) and x = (x1, x2)):

$$W(u) = \left(\frac{\partial^2 u_1}{\partial x_1^2}\right)^2 + \left(\frac{\partial^2 u_1}{\partial x_2^2}\right)^2 + \left(\frac{\partial^2 u_1}{\partial x_1 \partial x_2}\right)^2 + \left(\frac{\partial^2 u_2}{\partial x_1^2}\right)^2 + \left(\frac{\partial^2 u_2}{\partial x_2^2}\right)^2 + \left(\frac{\partial^2 u_2}{\partial x_1 \partial x_2}\right)^2$$
It can easily be shown that this function would qualify as a solid elastic model, as it is invariant to rigid translation and rotation. In fact this function is invariant to all affine transformations. Hence the bias in the estimate of the deformation in methods which utilize the thin-plate spline as a regularizer (e.g. [4]) is limited to only that component of the deformation which is not captured by an affine transform. In this respect the thin-plate spline is superior to the standard (passive) elastic regularizers, but a bias problem still remains, which in certain cases could be substantial.

The Active Shape Model: In a series of papers Cootes et al (e.g. [6,7]) presented a methodology for segmentation and registration using a point-based shape model. While this is interesting work, it does not directly relate to the active elastic model presented in this paper. The goal of the active shape model is to capture the statistical variation of the shape of a given structure/object across a number of images, whereas the goal of our work is to be able to include information regarding the expected deformation of a given object across a sequence of images.

The balloon variation of the active contour: In the balloon model of Cohen et al[5], an additional force is added to the standard snake[13] algorithm to provide for a constant expansion or contraction force. While this force does reduce the bias towards zero deformation of the underlying snake, it does so as an additional force and not as a change in the regularization model. Hence it cannot be used to capture prior information regarding the expected magnitude of the deformation, as can the active elastic model.

Non-Rigid Registration of Brain Images with Tumor Pathology: Kyriacou et al [15] presented some interesting work relating to the registration of pre- and post-tumor brain images.
To achieve an accurate registration a uniform contraction of the tumor is first used to estimate the shape of the post-tumor brain prior to the growth of the tumor. Unlike the balloon approach of Cohen[5], this uniform contraction procedure is very close in spirit to our work on the active elastic model, as in this case the tumor is shrinking under the influence of internal contraction and not as a result of an external force.
3 Experimental Results

3.1 Methodology
In this section we present some preliminary results of the application of this algorithm to left ventricular deformation estimation. The active elastic model is used to do two things: (i) isovolumic bias correction and (ii) imposition of a temporal smoothness constraint alongside the isovolumic bias correction. We bootstrap the algorithm by using the output produced by our previous work [18,17]. We label this algorithm as the ‘passive’ algorithm. In the passive algorithm, the images are segmented interactively and then initial correspondence is established using a shape-tracking approach. A dense motion field is then estimated using a passive, transversely linear elastic model, which accounts for the fiber directions in the left ventricle. The dense motion field is in turn used to calculate the deformation of the heart wall in terms of strains. We note that, although we apply bias correction in the passive algorithm (see section 2.3), bias remains in the estimate of the strain in the longitudinal direction (which lies parallel to the ‘major’ axis of the surface). The output of the ‘passive’ algorithm consists of a set of vectors $\epsilon^p(x_i, t_j)$ representing the strain estimated by the passive algorithm at position xi and time tj. Typically we divide the heart into about 800-1000 elements (i.e. i ∈ 1:1000) and use 6-9 time frames (j ∈ 1:9), resulting in a total of approximately 7000 6 × 1 vectors $\epsilon^p = [\epsilon^p_{rr}, \epsilon^p_{cc}, \epsilon^p_{ll}, \epsilon^p_{rc}, \epsilon^p_{rl}, \epsilon^p_{lc}]^t$. The components of $\epsilon^p$ are the normal strains in the radial (rr), circumferential (cc) and longitudinal (ll) directions as well as the shears between these directions (e.g. $\epsilon^p_{rc}$ is the radial-circumferential shear strain). These vectors $\epsilon^p$ are then used to generate an estimate of the active strain $\epsilon^a$
(in one of two different methods, as discussed below), and then a new set of output strains is estimated using the new ‘active’ algorithm. In this case we do not employ any additional bias correction.

A. Isovolumic Bias Correction: In this bias correction procedure, at each discrete element position xi and time tj we generate an output vector $\epsilon^a(x_i, t_j)$ by adjusting the longitudinal strain to create a new set of strain estimates $\epsilon^a$ that result in an incompressible deformation. The fractional change in volume produced under strain $\epsilon^p$ can be approximated as:

$$\delta V^p = (1 + \epsilon^p_{rr}) \times (1 + \epsilon^p_{cc}) \times (1 + \epsilon^p_{ll})$$

If we assume that most of the bias is in the longitudinal direction and that in reality the volume is preserved, we can generate an estimate of the active strain $\epsilon^a(x_i, t_j)$ by simply (i) setting $\epsilon^a(x_i, t_j) = \epsilon^p(x_i, t_j)$ and (ii) adjusting the longitudinal component of $\epsilon^a$ to correct for any divergence from the incompressibility constraint, i.e.

$$\epsilon^a_{ll} = \frac{1}{(1 + \epsilon^p_{rr}) \times (1 + \epsilon^p_{cc})} - 1$$
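The isovolumic correction can be sketched in a few lines (an illustrative sketch; the function name is ours, and the trailing −1 is what makes the corrected fractional volume change equal exactly 1):

```python
def isovolumic_correction(eps_rr, eps_cc, eps_ll):
    """Keep the radial and circumferential strains, and replace the
    longitudinal strain so that (1+e_rr)(1+e_cc)(1+e_ll) = 1, i.e. the
    deformation becomes incompressible."""
    eps_ll_a = 1.0 / ((1.0 + eps_rr) * (1.0 + eps_cc)) - 1.0
    return eps_rr, eps_cc, eps_ll_a
```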
These estimates $\epsilon^a$ are used as the mean value for the active elastic model. The variance is determined by the stiffness matrix and is the same as it was for the passive model. We label the results produced by this procedure as Active.

B. Temporal Smoothing and Isovolumic Bias Correction: In this case, before estimating the active strain component $\epsilon^a$ as above, the strain vectors $\epsilon^p(x_i, t_j)$ are smoothed by performing a temporal convolution with a one-dimensional Gaussian kernel of standard deviation σ = 1.0 in the time direction, to produce a temporally smooth set of vectors $\epsilon^s$. The $\epsilon^s$ vectors are then used instead of the un-smoothed vectors $\epsilon^p$ as the input to the isovolumic bias correction procedure described above. This combined temporal smoothing and isovolumic bias correction procedure is used to generate an estimate of the active strain $\epsilon^a$ to be used with the active elastic model. We label the results produced by this procedure as ActiveT.
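The temporal smoothing step for ActiveT can be sketched as follows (a numpy sketch, not the authors' code; reflective padding at the sequence ends is our boundary choice, which the text does not specify):

```python
import numpy as np

def temporal_smooth(strains, sigma=1.0):
    """Convolve each strain component's time series with a 1-D Gaussian
    kernel (sigma = 1.0 frames, as in the text). `strains` has shape
    (n_frames, 6), one row per time frame."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # reflect-pad in time so the first and last frames remain usable
    padded = np.pad(strains, ((radius, radius), (0, 0)), mode="reflect")
    out = np.empty_like(strains, dtype=float)
    for j in range(strains.shape[1]):
        out[:, j] = np.convolve(padded[:, j], kernel, mode="valid")
    return out
```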
3.2 Experiments
Data: We tested the new algorithms by comparing their output to results obtained using MR tagging[14] and implanted markers[18]. In the MR tagging case we used one human image sequence provided to us by Dr. Jerry Prince from Johns Hopkins University. The images were acquired using 3 orthogonal MR tagging acquisitions, and the displacements were estimated using an algorithm presented in Kerwin[14]. From these displacements we estimate the MR-tagging-derived strains. Images from one of the three acquisitions had the evidence of the tag lines removed using morphological operators, were segmented interactively, and the strains were estimated using our previous approach (Passive)[18]. In the case of implanted markers we used 8 canine image sequences with implanted markers, as described in [18]. Tests: We tested two permutations of the active algorithm. For the algorithm labeled Active in figure 3, we used as input the output of the passive algorithm after isovolumic bias correction, without any temporal smoothing. The algorithm labeled ActiveT used the output of the passive algorithm with both temporal smoothing and isovolumic bias correction. Figure 2 illustrates the output of algorithm ActiveT at four points in the cardiac cycle as applied to the MR tagging sequence. The output of the tagging method[14] at end-systole is presented for comparison. Figure 3 shows the error between the estimates of our old algorithm, labeled Passive, and the two variations of the new active algorithm (Active and ActiveT), as compared to the output of the tagging algorithm[14] and to the estimates obtained using the implanted markers. In the case of the tagging algorithm we observe an overall reduction in mean strain error from 9.9% (Passive) to 8.1% (Active) at end-systole (frame 10). In the case of the implanted markers we observe a similar reduction, from 7.2% to 6.3%.
It is also interesting to note that the MR tagging algorithm [14] produces a reduction of myocardial volume of 12% between end-diastole and end-systole, our passive algorithm an increase of approximately 14%, and both versions of the active algorithm produced small increases (< 2%), showing that the isovolumic bias correction was effective.
Fig. 2. Leftmost four columns: Circumferential, Radial and Longitudinal strain outputs of our active (ActiveT) algorithm at four points in the systolic half of the cardiac cycle. Far right column: Output of the MR tagging based algorithm [14] on the same image sequence.
Fig. 3. Absolute Strain Error vs Tag Data or Implanted Markers. Passive – passive model from [18]; Active and ActiveT represent two versions of the active algorithm, without and with temporal smoothing. We note that both active algorithms result in error reduction as compared to the passive algorithm. In the case of the tagging data we plot the absolute error in the cardiac-specific strains, whereas in the case of implanted markers we use the principal strains instead (see [18]).
4
Conclusions
The active elastic model is a generalization of the original elastic model which penalizes deformations away from a preset value, as opposed to simply penalizing all deformations. This model can be used as a prior in problems where we have prior information regarding the magnitude and the variability of the expected deformation: it can be used to construct a proper prior probability density function for the displacement field having both a mean and a covariance, as opposed to the more traditional elastic model, which has a fixed mean of zero. Cardiac deformation is an obvious application of this model, as the active strain component can be used to model the active contraction of the left ventricle in the systolic phase of the cardiac cycle. In the case of image registration, such an active model could be used to good effect where even a gross sense of the magnitude of the deformation exists a priori. For example, in Wang et al. [23], where statistical shape-based segmentation information is used to constrain an elastic model, information from the segmentation regarding the relative deformation of different structures can be used with an active elastic model to drive the elastic model towards the expected solution, thus applying 'forces' to the elastic model from within as opposed to from the outside. Another example is the case of cerebro-spinal fluid loss in neurosurgery, which results in large deformations of the ventricles not accounted for by gravitational forces [21]. In this case an active elastic model could be used to account for the expected large deformation of the ventricles (based perhaps on population statistics from intra-operative images) and hence reduce the bias in the final displacement field.
References

1. F. L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(6):567–585, 1989.
2. G. E. Christensen. Deformable Shape Models for Anatomy. Ph.D. dissertation, Washington University, St. Louis, MO, August 1994.
3. G. E. Christensen, R. D. Rabbitt, and M. I. Miller. Deformable templates using large deformation kinematics. IEEE Transactions on Image Processing, 5(10):1435–1447, 1996.
4. H. Chui, J. Rambo, R. Schultz, L. Win, J. Duncan, and A. Rangarajan. Registration of cortical anatomical structures via 3D robust point matching. In Information Processing in Medical Imaging, pages 168–181, Visegrad, Hungary, June 1999.
5. L. D. Cohen and I. Cohen. Finite element methods for active contour models and balloons for 2D and 3D images. IEEE Trans. Pattern Analysis and Machine Intelligence, 15(11):1131–1147, November 1993.
6. T. Cootes, A. Hill, C. Taylor, and J. Haslam. The use of active shape models for locating structures in medical images. In H. H. Barrett and A. F. Gmitro, editors, Information Processing in Medical Imaging, pages 33–47. LNCS 687, Springer-Verlag, Berlin, 1993.
7. T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham. Active shape models – their training and application. Comp. Vision and Image Understanding, 61(1):38–59, 1995.
8. A. C. Eringen. Mechanics of Continua. Krieger, New York, NY, 1980.
9. J. C. Gee, D. R. Haynor, L. Le Briquer, and R. K. Bajcsy. Advances in elastic matching theory and its implementation. In CVRMed-MRCAS, Grenoble, France, March 1997.
10. D. Geman and S. Geman. Stochastic relaxation, Gibbs distribution and Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.
11. E. Haber, D. N. Metaxas, and L. Axel. Motion analysis of the right ventricle from MRI images. In Medical Image Computing and Computer Aided Intervention (MICCAI), pages 177–188, Cambridge, MA, October 1998.
12. B. K. P. Horn and B. G. Schunk. Determining optical flow. Artificial Intelligence, 17:185–203, 1981.
13. M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. International Journal of Computer Vision, 1(4):321–331, 1988.
14. W. S. Kerwin and J. L. Prince. Cardiac material markers from tagged MR images. Medical Image Analysis, 2(4):339–353, 1998.
15. S. K. Kyriacou and C. Davatzikos. A biomechanical model of soft tissue deformation with applications to non-rigid registration of brain images with tumor pathology. In Medical Image Computing and Computer Assisted Intervention, pages 531–538. Springer, Berlin, 1998. LNCS 1496.
16. T. McInerney and D. Terzopoulos. Deformable models in medical image analysis: a survey. Medical Image Analysis, 1(2):91–108, 1996.
17. X. Papademetris, A. J. Sinusas, D. P. Dione, and J. S. Duncan. Estimating 3D left ventricular deformation from echocardiography. Medical Image Analysis, in press (March 2001).
18. X. Papademetris, A. J. Sinusas, D. P. Dione, R. T. Constable, and J. S. Duncan. Estimating 3D strain from 4D cine-MRI and echocardiography: In-vivo validation.
In Medical Image Computing and Computer Aided Intervention (MICCAI), Pittsburgh, U.S.A., October 2000.
19. J. Park, D. N. Metaxas, and L. Axel. Analysis of left ventricular wall motion based on volumetric deformable models and MRI-SPAMM. Medical Image Analysis, 1(1):53–71, 1996.
20. P. Shi, A. J. Sinusas, R. T. Constable, E. Ritman, and J. S. Duncan. Point-tracked quantitative analysis of left ventricular motion from 3D image sequences. IEEE Transactions on Medical Imaging, 19(1):36–50, January 2000.
21. O. Skrinjar and J. Duncan. Real time 3D brain shift compensation. In Information Processing in Medical Imaging (IPMI 99), pages 42–55, 1999.
22. A. Spencer. Continuum Mechanics. Longman, London, 1980.
23. Y. Wang and L. H. Staib. Elastic model based non-rigid registration incorporating statistical shape information. In Medical Image Computing and Computer Aided Intervention (MICCAI), pages 1162–1173. Springer, Berlin, 1998. LNCS 1496.
A Minimum Description Length Approach to Statistical Shape Modelling Rhodri H. Davies, Tim F. Cootes, and Chris J. Taylor Division of Imaging Science, Stopford Building, Oxford Road, University of Manchester, Manchester, M13 9PT, UK.
[email protected] Abstract. Statistical shape models show considerable promise as a basis for segmenting and interpreting images. One of the drawbacks of the approach is, however, the need to establish a set of dense correspondences between examples of similar structures, across a training set of images. Often this is achieved by locating a set of ‘landmarks’ manually on each of the training images, which is time-consuming and subjective for 2D images, and almost impossible for 3D images. This has led to considerable interest in the problem of building a model automatically from a set of training shapes. We extend previous work that has posed this problem as one of optimising a measure of model ‘quality’ with respect to the set of correspondences. We define model ‘quality’ in terms of the information required to code the whole set of training shapes and aim to minimise this description length. We describe a scheme for representing the dense correspondence maps between the training examples and show that a minimum description length model can be obtained by stochastic optimisation. Results are given for several different training sets of 2D boundaries, showing that the automatic method constructs better models than the manual landmarking approach. We also show that the method can be extended straightforwardly to 3D.
1
Introduction
Statistical models of shape show considerable promise as a basis for segmenting and interpreting images [5]. The basic idea is to establish, from a training set, the pattern of 'legal' variation in the shapes and spatial relationships of structures in a given class of images. Statistical analysis is used to give an efficient parameterisation of this variability, providing a compact representation of shape and allowing shape constraints to be applied effectively during image interpretation [6]. One of the main drawbacks of the approach is, however, the need, during training, to establish dense correspondence between shape boundaries over a reasonably large set of example images. It is important to establish the 'correct' correspondence, otherwise an inefficient parameterisation of shape can result, leading to difficulty in defining shape constraints. In practice, correspondence has often been established using manually defined 'landmarks'; this is both time-consuming and subjective. The problems are exacerbated when the approach is applied to 3D images.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 50–63, 2001. © Springer-Verlag Berlin Heidelberg 2001
Several previous attempts have been made to automate model building [10, 11, 16]. The problem of establishing dense correspondence over a set of training boundaries can be posed as that of defining a parameterisation for each of the training set, leading to implicit correspondence between equivalently parameterised boundary points. Two different but equally arbitrary parameterisations of the training boundaries have been proposed [2, 14], but neither of these addresses the issue of optimality. Shape 'features' (e.g. regions of high curvature) have been used to establish point correspondences, with boundary length interpolation between these points. Although this approach corresponds with human intuition, it is still not clear that it is in any sense optimal. A third approach, and that followed in this paper, is to treat finding the correct parameterisation of the training shape boundaries as an explicit optimisation problem. The optimisation approach has been described by several authors [10, 16, 4] and is discussed in more detail in Section 3. The basic idea is to find the parameterisation of the training set that yields, in some sense, the 'best' model. We have previously described an approach in which the best model is defined in terms of 'compactness', as measured by the determinant of its covariance matrix [16]. We represented the parameterisation of each of a set of training shapes explicitly, and used genetic algorithm search to optimise the model with respect to the parameterisation. Although this work showed promise, there were several problems: the objective function, although reasonably intuitive, could not be rigorously justified; the method was described for 2D shapes and could not easily be extended to 3D; and it was sometimes difficult to make the optimisation converge.
In this paper we define a new objective function with a rigorous theoretical basis and describe a new representation of correspondence/parameterisation that extends to 3D and also results in improved convergence. Our objective function is defined in an information theoretic framework. The key insight is that the ‘best’ model is that which describes the entire training set as efficiently as possible, thus we adopt a minimum description length criterion. In the remainder of the paper we outline the model-building problem, review previous attempts to automate the process, describe in detail how we construct our objective function, describe our representation of correspondence, and present experimental results for automatic model building, using genetic algorithm search to optimise the objective function.
2
Statistical Shape Models
A 2D statistical shape model is built from a training set of example outlines. Each shape, Si , can (without loss of generality) be represented by a set of (n/2) points sampled along the boundary at equal intervals, as defined by some parameterisation Φi of the boundary path. Using Procrustes analysis [9] the sets of points can be rigidly aligned to minimise the sum of squared differences between corresponding points. This allows each shape Si to be represented by an n-dimensional shape vector xi , formed by concatenating the coordinates of its
sample points, measured in a standard frame of reference. Using Principal Component analysis, each shape vector can be approximated by a linear model of the form

x = x̄ + Pb    (1)
where x̄ is the mean shape vector, the columns of P describe a set of orthogonal modes of shape variation and b is a vector of shape parameters. New examples of the class of shapes can be generated by choosing values of b within the range found in the training set. This approach can be extended easily to deal with continuous boundary functions [16], but for clarity we limit our discussion here to the discrete case. The utility of the linear model of shape shown in (1) depends on the appropriateness of the set of boundary parameterisations {Φ_i} that are chosen. An inappropriate choice can result in the need for a large set of modes (and corresponding shape parameters) to approximate the training shapes to a given accuracy and may lead to 'legal' values of b generating 'illegal' shape instances. For example, consider two models generated from a set of 17 hand outlines. Model A uses a set of parameterisations of the outlines that cause 'natural' landmarks such as the tips of the fingers to correspond. Model B uses one such correspondence but then uses a simple path length parameterisation to position the other sample points. The variances of the three most significant modes of models A and B are (1.06, 0.58, 0.30) and (2.19, 0.78, 0.54) respectively. This suggests that model A is more compact than model B. All the example shapes generated by model A using values of b within the range found in the training set are 'legal' examples of hands, whilst model B generates implausible examples - this is illustrated in Fig. 1.

Fig. 1. The first three modes of variation (±2σ) of models A and B
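The construction described above (rigid alignment followed by the linear model of (1)) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: shapes are given as (m, 2) landmark arrays with correspondences already fixed, every shape is aligned to the first rather than via full iterative Procrustes analysis, and scaling is omitted.

```python
import numpy as np

def align_pair(moving, fixed):
    """Least-squares rigid alignment (rotation + translation; no scaling)."""
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    U, _, Vt = np.linalg.svd((moving - mc).T @ (fixed - fc))
    if np.linalg.det(U @ Vt) < 0:      # exclude reflections
        U[:, -1] *= -1
    return (moving - mc) @ (U @ Vt) + fc

def build_model(shapes, t):
    """Fit x = xbar + P b. shapes: list of (m, 2) corresponding landmark
    arrays; t: number of modes. Returns mean, modes and mode variances."""
    aligned = [shapes[0]] + [align_pair(s, shapes[0]) for s in shapes[1:]]
    X = np.array([s.ravel() for s in aligned])     # (s, n) shape vectors
    xbar = X.mean(axis=0)
    # covariance eigenvectors via SVD of the centred data matrix
    _, S, Vt = np.linalg.svd(X - xbar, full_matrices=False)
    lam = S ** 2 / (len(X) - 1)                    # eigenvalues lambda_j
    return xbar, Vt[:t].T, lam[:t]
```

Shape parameters for a new aligned shape x are b = P.T @ (x - xbar), and plausible new shapes are xbar + P @ b with each b_j restricted to roughly ±2√λ_j, as in Fig. 1.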
The set of parameterisations used for model A were obtained by marking the ‘natural’ landmarks manually on each training example, then using simple
path length parameterisation to sample a fixed number of equally spaced points between them. This manual mark-up is a time-consuming and subjective process. In principle, the modelling approach extends to 3D, but in practice, manual landmarking becomes impractical.
3
Previous Work
Various authors have described attempts to automate the construction of statistical shape models from a set of training shapes. The simplest approach is to select a starting point and equally space landmarks along the boundary of each shape. This is advocated by Baumberg and Hogg [2] but, as shown in the previous section, it does not generally result in a satisfactory model. Kelemen et al [14] use spherical harmonic descriptors to parameterise their training shapes. Although it is independent of origin, this is still an arbitrary parameterisation of the boundary, which is in no obvious sense optimal. Benayoun et al [3], Kambhamettu and Goldgof [13] and Wang et al [19] all use curvature information to select landmark points. It is not, however, clear that corresponding points will always lie on regions that have the same curvature. Also, since these methods only consider pairwise correspondences, they may not find the best global solution. A more robust approach to automatic model building is to treat the task as an optimisation problem. Hill and Taylor [10] attempt this by minimising the total variance of a shape model. They choose to iteratively perform a series of local optimisations, re-building the model at each stage. This makes the approach prone to becoming trapped in local minima and consequently depends on a good initial estimate of the correct landmark positions. Rangarajan et al [17] describe a method of shape correspondence that also minimises the total model variance by simultaneously determining a set of correspondences and the similarity transformation required to register pairs of contours. Bookstein [4] describes an algorithm for landmarking sets of continuous contours represented as polygons. Points are allowed to move along the contours to minimise a bending energy term. Again, it is not obvious that the resulting model is in any useful sense optimal. 
Kotcheff and Taylor [16] describe an objective function, based on the determinant of the model covariance. This favours compact models with a small number of significant modes of variation, though no rigorous theoretical justification for this formulation is offered. They use an explicit representation of the set of shape parameterisations {Φi } and optimise the model directly with respect to {Φi } using genetic algorithm search. Their representation of {Φi } is, however, problematic and does not guarantee a diffeomorphic mapping. They correct the problem when it arises by reordering correspondences, which is workable for 2D shapes but does not extend to 3D. Although some of the results produced by their method are better than hand-generated models, the algorithm did not always converge.
4
An Information Theoretic Objective Function
We wish to define a criterion for choosing the set of parameterisations {Φ_i} that are used to construct a statistical shape model from a set of training boundaries {S_i}. Our aim is to choose {Φ_i} so as to obtain the 'best possible' model. Since we wish to obtain a compact model with good generalisation properties we define the 'best' model as that which can account for the observations (the training boundaries) in as simple a way as possible. We formalise this by stating that we wish to find the {Φ_i} that minimises the information required to code the whole training set to some accuracy δ on each of the elements of {x_i}. Note that to describe {x_i} to arbitrary accuracy would require infinite information; δ should be chosen to reflect the measurement errors involved in acquiring the training boundaries. 4.1
Description Length for a Set of Shape Vectors
Suppose we have a set {Si } of s training shapes that are parameterised using {Φi } and sampled to give a set of n-dimensional shape vectors {xi }. Following (1) we can approximate {xi } to an accuracy of δ in each of its elements using a linear shape model of the form xi = x ¯ + Pbi +ri
(2)
Where x ¯ is the mean of {xi }, P has t columns which are the t eigenvectors of the covariance matrix of {xi } corresponding to the t largest eigenvalues λj , bi is a vector of shape parameters, and ri is a vector of residuals. The elements n λj over the of ri can be shown to have zero mean and a variance of λr = n1 j=t+1
training set. The total information required to code the complete training set using this encoding is given by IT otal = IModel + sIb + sIr
(3)
where I_Model is the information required to code the model (the mean vector x̄ and the eigenvectors of the covariance matrix, P), I_b is the average information required to code each parameter vector b_i, and I_r the average information required to code each residual vector, r_i. For simplicity, we assume that the elements of the mean x̄ and the matrix P are uniformly distributed in the range [-1, 1], and that we use k_m bits per element for the mean and k_j bits per element for the j-th column of P, giving quantisation errors δ_m = 2^{-k_m} and δ_j = 2^{-k_j} respectively. Thus

I_Model = n k_m + n Σ_{j=1}^{t} k_j    (4)
The elements of b_i are assumed to be normally distributed over the training set with zero mean and variance λ_j. To code them to an accuracy δ_b, we require on average

I_b = Σ_{j=1}^{t} [k_b + 0.5 log(2πe λ_j)]    (5)
where k_b = −log(δ_b); see Appendix A for details. All logs are base 2. Similarly, to code the n elements of r_i to an accuracy of δ_r = 2^{-k_r} we require on average

I_r = n [k_r + 0.5 log(2πe λ_r)]    (6)
Substituting (4), (5) and (6) into (3) we obtain

I_Total = n k_m + n Σ_{j=1}^{t} k_j + s Σ_{j=1}^{t} [k_b + 0.5 log(2πe λ_j)] + sn [k_r + 0.5 log(2πe λ_r)]    (7)

4.2
Minimum Description Length
I_Total is a function of the quantisation parameters k_m, k_j, k_b, and k_r, which are related to δ, the overall approximation error. Since we wish ultimately to minimise I_Total with respect to {Φ_i} we need first to find the minimum with respect to the quantisation parameters. This can be found analytically, leading to an expression in terms of s, n, k, t, {λ_j} and λ_r:

I_Total = −0.5(n + nt + st) log(12αλ_r/s) + snk + 0.5(n + s) Σ_{j=1}^{t} log(λ_j) + 0.5ns log(αλ_r) + 0.5s(n + t) log(2πe) − 0.5st log(s)    (8)

where α = ns / (n(s − 1) − t(n − s)). The details of this derivation are given in Appendix B. Thus, for a fixed number of modes, t, to optimise I_Total we need to minimise
F = (n + s) Σ_{j=1}^{t} log(λ_j) + [n(s − 1) − t(n + s)] log(λ_r)    (9)
Note that this is independent of δ. Finally, the number of modes, t, should be chosen to minimise I_Total. Since t must be an integer, this can be achieved using a simple, exhaustive search. Note, however, that the average information required to code b_j, the j-th element of the vector of shape parameters b_i, is k_b + 0.5 log(2πe λ_j). This must be greater than zero, which imposes an upper bound on t such that λ_t > 12αλ_r/(2πe).
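As a concrete reading of (9), F can be evaluated directly from the eigenvalue spectrum of the shape covariance. The sketch below follows our own conventions (base-2 logs, eigenvalues sorted in descending order, λ_r as defined before (3)) and is not the authors' code:

```python
import numpy as np

def description_length_F(lam, n, s, t):
    """Objective F of Eq. (9) for a fixed number of modes t.

    lam: all n eigenvalues of the shape covariance, sorted descending;
    n: dimension of the shape vectors; s: number of training shapes.
    """
    lam_r = lam[t:].sum() / n          # residual variance lambda_r
    return ((n + s) * np.sum(np.log2(lam[:t]))
            + (n * (s - 1) - t * (n + s)) * np.log2(lam_r))
```

Selecting t itself would loop this kind of evaluation over integer t, as the exhaustive search in the text suggests, subject to the upper bound λ_t > 12αλ_r/(2πe).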
5
Representation and Optimisation of {Φ_i}

We wish to find the global optimum of F in (9) with respect to the set of shape parameterisations {Φ_i}. Our approach is to use an explicit representation of {Φ_i} coupled with a stochastic search to find the optimum. We require a representation of {Φ_i} that ensures a diffeomorphic mapping between each pair of training shapes. In 2D this can be achieved by enforcing the ordering of corresponding points around the training shapes. In 3D, however, no such ordering exists. We have developed a new method of representation that guarantees a diffeomorphic mapping without using an explicit ordering constraint. Here we describe the method for 2D shapes; Appendix C explains how it can be extended to 3D. We define a piecewise linear parameterisation for each training shape by recursively subdividing boundary intervals by inserting nodes. The position of each new node is coded as the fraction of the boundary path length between neighbouring nodes - thus by constraining the subdivision parameters to the range [0,1] we can enforce a hierarchical ordering where, at each level of the hierarchy, nodes are positioned between those already present. This is illustrated by the example in Fig. 2, which demonstrates the parameterisation of a circle.
Fig. 2. A diagram that demonstrates the parameterisation of a circle. The squares represent the landmarks that are already in place. The parameter values are: Φ_i = (Origin, 0.65(0.65(0.4, 0.8), 0.8(0.5, 0.2)))

Recursive subdivision is continued until an arbitrarily exact parameterisation is achieved. Correspondence is assumed across the whole training set between equivalent nodes in the subdivision tree. We can manipulate a set of these parameterisations {Φ_i} in order to optimise our objective function F. In practice the search space is high-dimensional with many local minima, leading us to prefer a stochastic optimisation method such as simulated annealing [15] or genetic algorithm search [8]. We chose to use a genetic algorithm to perform the experiments reported below.
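The hierarchical node-insertion scheme can be made concrete with a small sketch. The nested-list encoding below (one list of fractions per subdivision level) is our own illustrative convention, not the paper's exact tree layout; the point it demonstrates is that fractions confined to [0, 1] keep the decoded node positions ordered, which is the property that guarantees a diffeomorphic correspondence.

```python
def decode_parameterisation(levels):
    """Decode hierarchical subdivision fractions into ordered positions.

    levels[k] holds 2**k values in [0, 1]; level k splits each current
    interval of the closed boundary (the origin is fixed at 0) at the
    given fraction of its path length.
    """
    nodes = [0.0, 1.0]                     # interval endpoints (origin twice)
    for fracs in levels:
        new = [nodes[0]]
        for (a, b), f in zip(zip(nodes, nodes[1:]), fracs):
            new.extend([a + f * (b - a), b])   # new node within [a, b]
        nodes = new
    return nodes[:-1]                      # drop the duplicated origin at 1.0
```

For example, decode_parameterisation([[0.65], [0.4, 0.8]]) yields nodes at approximately [0.0, 0.26, 0.65, 0.93]: whatever fraction values the optimiser proposes, the output stays sorted, so no explicit ordering constraint is needed.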
6
Results
We present qualitative and quantitative results of applying our method to several sets of outlines of 2D biomedical objects. We also investigate how our objective function behaves around the minimum and how it selects the correct number of modes to use. 6.1
Results on 2D Outlines
We tested our method on a set of 17 hand outlines, 38 left ventricles of the heart, 24 hip prostheses and 15 outlines of the femoral articular cartilage.
Fig. 3. The first three modes of variation of the automatically generated models. Each row shows the variation (±2σ) of a mode. The top rows are the principal modes with the largest variation.
In figure 3, we show qualitative results by displaying the variation captured by the first three modes of each model (the first three elements of b varied by ±2 standard deviations over the training set). We also give quantitative results in figure 4, tabulating the value of F, the total variance and the variance explained by each mode for each of the models, comparing the automatic result with those for models built using manual landmarking and equally spaced points. The quantitative results in figure 4 show that the automatically generated models are significantly more compact than both the models built by hand and by equally-spacing points. It is interesting to note that the model produced by equally spacing landmarks on the hip prostheses is more compact than the manual model. This is because equally-spaced points suffice as there is little variation, but errors in the manual annotation add extra noise that is captured as statistical variation.

Hands
Mode  Hand Built  Equally-spaced  Automatic
1     9.34        20.74           8.44
2     5.12        7.4             4.61
3     2.41        5.13            2.1
4     1.38        3.15            1.36
5     0.67        1.71            0.44
6     0.49        1.21            0.34
VT    20.68       41.21           18.64
F     43020       44114           41304

Hip Prostheses
Mode  Hand Built  Equally-spaced  Automatic
1     4.01        3.81            3.73
2     1.24        1.04            0.98
3     0.71        0.61            0.7
4     0.63        0.54            0.55
5     0.60        0.5             0.51
6     0.51        0.48            0.48
VT    8.21        7.88            7.1
F     30401       28123           27989

Heart Ventricles
Mode  Hand Built  Equally-spaced  Automatic
1     1.89        2.48            1.97
2     1.36        1.21            1.13
3     0.57        0.70            0.66
4     0.49        0.49            0.34
5     0.17        0.32            0.18
6     0.14        0.17            0.13
VT    4.98        6.15            4.68
F     10541       11014           7348

Knee Cartilage
Mode  Hand Built  Equally-spaced  Automatic
1     6.29        6.54            5.91
2     4.10        4.42            4.82
3     2.01        2.33            2.37
4     1.86        1.87            1.65
5     1.65        1.66            1.58
6     1.28        1.24            1.43
VT    20.23       21.01           19.04
F     18023       18514           17494
Fig. 4. A quantitative comparison of each model showing the variance explained by each mode. F is the value of the objective function and VT is the total variance.
6.2
The Behaviour of F
To demonstrate the behaviour of our objective function we took some landmarks from the automatically generated hand model and added random noise to each one. Figure 5 shows a plot of F against the standard deviation of the noise. The plot shows that as the landmarks are moved further away from their original positions, the value of F increases - as expected.
6.3
Selecting the Number of Modes
We used the automatically generated heart model to show how the number of modes affects the value of the objective function. Figure 6 shows a plot of F against the number of modes used in the model. The values form a quadratic with a minimum at nine modes, which captures approximately 93% of the total variation.

Fig. 5. How noise on the landmarks affects the value of the objective function.

Fig. 6. The values of F for a model built with a different number of modes.

7

Discussion and Conclusions
We have derived an objective function that can be used to evaluate the quality of a statistical shape model. The expression we use has a theoretical grounding in Information Theory, is independent of quantisation error and, unlike other approaches [10, 16], does not involve any arbitrary parameters. The objective function includes a Σ log(λ_j) term, which is equivalent to the product of the λ_j's (and thus the determinant of the covariance matrix) as used by Kotcheff and Taylor [16], but the more complete treatment here shows that other terms are also important. As well as providing good results when used as an objective function for automatically building statistical shape models, the function may also be used to calculate the correct number of modes to use for a given model. If we implicitly optimise the number of modes, however, the number of false minima increases, which means the genetic algorithm requires a larger population to find a satisfactory solution. We have described a novel representation of correspondence that enforces a diffeomorphic mapping and is applicable in 2D and 3D. Although a formal proof
60
Rhodri H. Davies, Tim F. Cootes, and Chris J. Taylor
is beyond the scope of this paper, the representation reduces the search space, which improves the convergence properties of the search. As with any stochastic optimisation technique, our search requires a large number of function evaluations. The results in this paper typically took several hours to produce. Although this is a one-off, off-line process, it is likely to become impractical when a larger set of training shapes is used because its complexity is at least O(s²). We are currently working on finding faster ways of locating the optimum. Although we have not yet implemented the method in 3D, the results on 2D objects offer a significant improvement over those from a hand-built model. The method we have described in this paper only optimises the shape parameterisation. We also intend to consider the pose of each shape in the search, as this is likely to affect the information content of the model.
A Information Content of a Scalar

Suppose a variable x can take n discrete values x_i (i = 1 . . . n), with corresponding probabilities p_i. The entropy, or information needed to transmit a value of x, is given by

H_d = − Σ_{i=1}^{n} p_i log(p_i)    (10)

If the log is base 2, then H_d has units of bits. Now suppose x is a continuous variable with a p.d.f. p(x). The entropy is then defined as

H_c = − ∫_{−∞}^{∞} p(x) log(p(x)) dx    (11)

(10) and (11) are not, however, directly comparable since, to transmit a real number to arbitrary accuracy may require an infinite number of bits. Suppose now that we approximate the continuous case with discrete x's that can take on the values x_i = iδ (i = 1 . . . n), with probabilities given by the continuous p.d.f. p(x), thus p_i ≈ δ p(iδ). Now

H_d ≈ H_c − log δ    (12)

Therefore, if we discretise x in steps of 2^{−k}, then

H_d ≈ H_c + k    (13)
For example, if all values are in the range [0, R], then p(x) = 1/R and H_c = log(R), so H_d = k + log(R). This agrees with the value obtained from (10), as there are 2^k R possible states, all equally likely. Alternatively, if x is distributed as a Gaussian with variance σ², it can be shown that H_c = 0.5 log(2πeσ²) [18], so the number of bits required to transmit a value of x, discretised in steps of 2^{−k}, is [k + 0.5 log(2πeσ²)].
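The discretisation argument of (12)-(13) is easy to check numerically for the Gaussian case: summing −p_i log2(p_i) over a fine quantisation of a Gaussian should land close to H_c + k. A small self-contained check (our own illustration, standard library only):

```python
import math

def discrete_entropy_gaussian(sigma, k, span=12.0):
    """Entropy in bits of a unit-area Gaussian quantised in steps of 2**-k.

    Sums -p_i * log2(p_i) with p_i = step * pdf(i * step) over +/- span*sigma,
    i.e. the discrete approximation used to derive Eq. (12).
    """
    step = 2.0 ** -k
    h, x = 0.0, -span * sigma
    norm = sigma * math.sqrt(2.0 * math.pi)
    while x <= span * sigma:
        p = step * math.exp(-0.5 * (x / sigma) ** 2) / norm
        if p > 0.0:
            h -= p * math.log2(p)
        x += step
    return h

sigma, k = 1.0, 6
hd = discrete_entropy_gaussian(sigma, k)
hc = 0.5 * math.log2(2.0 * math.pi * math.e * sigma ** 2)
# hd is close to hc + k, as Eq. (13) predicts
```

For σ = 1 and k = 6 the discrete sum agrees with H_c + k ≈ 8.05 bits to within the O(δ²) error of the Riemann approximation.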
B Quantisation Effects IT otal in (7) is a function of the quantisation parameters δm , {δj }, δb and δr . Since we wish ultimately to minimise IT otal with respect to {Φi } we need first to find the minimum with respect to these parameters. First, we need to determine what quantisations δm , {δj }, δb and δr are required to achieve a quantisation error δ = 2−k in the final reconstruction. We assume that by quantising a parameter, we effectively add noise to that parameter. We have used error propagation to estimate the effects of noise on the final reconstruction. ¯ induces In our linear model (2), noise of variance σ 2 on the elements of x noise of variance σ 2 on xi . Similarly, noise of variance of σ 2 on the elements of bi can be shown to induce an average noise of variance σ 2 /2 on the elements of xi . Noise of variance σ 2 on the elements of the j th column of P induces an average noise of variance λj σ 2 on each element of xi . Quantising a value to δ induces noise with a flat distribution in [−δ/2, δ/2] ¯ , P, and bi , causes an additional and thus a variance of δ 2 /12. Thus quantising x error that must be corrected by the residual term, ri . In effect, the variance of the residual is increased from the original λr . Considering this, the variance on the elements of the residual is given by
λ′r = λr + (1/12)(δm² + (t/2n)δb² + Σ_{j=1}^{t} λj δj²) .    (14)
Using the central limit theorem, we assume that the residuals are normally distributed. Substituting λ′r for λr in (7) gives

I_Total = n km + n Σ_{j=1}^{t} kj + s Σ_{j=1}^{t} [kb + 0.5 log(2πeλj)] + sn[k + 0.5 log(2πeλ′r)] .    (15)
We can now find the minimum of I_Total with respect to δm, {δj}, δb and δr. Equating the differentials to zero, we can show that at the optimum

δj² = 2^−2kj = 12λr/(sλj) = δm²/λj ,    (16)
δm² = 2^−2km = 12λr/s ,    (17)
δb² = 2^−2kb = 12λr = sδm² .    (18)
Rhodri H. Davies, Tim F. Cootes, and Chris J. Taylor
Substituting (16), (17) and (18) into (14) gives

λ′r = αλr , where α = ns / (n(s − 1) − t(n − s)) .
C Extension to 3D

In this appendix, we describe how our representation of the parameterisation can be extended to surfaces. Our ultimate goal is to build 3D statistical shape models of biomedical objects. Our training data are parallel stacks of shape outlines, segmented from slices of 3D Magnetic Resonance Images (MRI). We can interpolate these outlines to form a triangulated surface using the algorithm of Geiger [7]. As in 2D, we define an explicit representation of the parameterisation by recursively subdividing the surface by inserting nodes. The position of each new node is defined by its position inside the triangle formed by its three neighbouring nodes. As the edges of these triangles are ambiguous on the original surface, we flatten the surface to that of a sphere using the methods of Angenent et al. or Hurdal et al. [1, 12]. These methods flatten each surface using conformal mappings. As these mappings are diffeomorphisms, each point on the original surface has a unique, corresponding point on the sphere. This allows us to subdivide the surface using spherical triangles. An example is shown in figure 7. We iterate until we have a sufficient number of landmarks on the sphere. These can then be projected onto the shape's surface using the inverse of the conformal mapping, and evaluated using our objective function.
Fig. 7. A diagram to demonstrate how each point is added to the surface. The hollow points are those that are already fixed on the surface. This is a simplification, as on the sphere, the triple of points would form a spherical triangle.
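The node-insertion step on the sphere can be sketched as follows (an illustrative reconstruction, not the authors' code; the weights w are an assumed stand-in for the stored position of the node inside its spherical triangle): the weighted combination of the three neighbouring nodes is projected back onto the unit sphere.

```python
import math

def insert_node(a, b, c, w=(1/3, 1/3, 1/3)):
    """Place a new node inside the spherical triangle (a, b, c).

    a, b, c are unit vectors on the sphere; w gives the (assumed)
    weights of the new node relative to its three neighbours. The
    weighted combination is normalised back onto the unit sphere.
    """
    p = [w[0] * a[i] + w[1] * b[i] + w[2] * c[i] for i in range(3)]
    n = math.sqrt(sum(x * x for x in p))
    return tuple(x / n for x in p)

# One subdivision step: the central node of an octant triangle.
a, b, c = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
p = insert_node(a, b, c)
print(p)  # lies on the unit sphere, symmetric in all three corners
```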
Acknowledgements

The authors would like to thank Dr. Alan Brett for his contribution to the ideas behind the work in this paper. Tim Cootes is funded under an EPSRC Advanced Fellowship Grant. Rhodri Davies would like to thank the BBSRC and AstraZeneca Pharmaceuticals for their financial support.
AstraZeneca Pharmaceuticals, Alderley Park, Macclesfield, Cheshire, UK
References

[1] Angenent, S., S. Haker, A. Tannenbaum, and R. Kikinis: On the Laplace-Beltrami operator and brain surface flattening. IEEE Trans. Medical Imaging, 1999. 18: p. 700-711.
[2] Baumberg, A. and D. Hogg: Learning flexible models from image sequences. In European Conference on Computer Vision, Stockholm, Sweden, 1994. p. 299-308.
[3] Benayoun, A., N. Ayache, and I. Cohen: Adaptive meshes and nonrigid motion computation. In International Conference on Pattern Recognition, Jerusalem, Israel, 1994.
[4] Bookstein, F.L.: Landmark methods for forms without landmarks: morphometrics of group differences in outline shape. Medical Image Analysis, 1997. 1(3): p. 225-243.
[5] Cootes, T., A. Hill, C. Taylor, and J. Haslam: The use of active shape models for locating structures in medical images. Image and Vision Computing, 1994. 12: p. 355-366.
[6] Cootes, T., C. Taylor, D. Cooper, and J. Graham: Active shape models - their training and application. Computer Vision and Image Understanding, 1995. 61: p. 38-59.
[7] Geiger, B.: Three-dimensional modelling of human organs and its application to diagnosis and surgical planning. Technical Report, INRIA, France, 1993.
[8] Goldberg, D.E.: Genetic Algorithms in Search, Optimisation and Machine Learning. Addison Wesley, 1989.
[9] Goodall, C.: Procrustes methods in the statistical analysis of shape. Journal of the Royal Statistical Society, 1991. 53(2): p. 285-339.
[10] Hill, A. and C. Taylor: Automatic landmark generation for point distribution models. In British Machine Vision Conference, Birmingham, England, 1994. BMVA Press.
[11] Hill, A. and C.J. Taylor: A framework for automatic landmark identification using a new method of non-rigid correspondence. IEEE Transactions on Pattern Analysis and Machine Intelligence, April 2000.
[12] Hurdal, M.K., P.L. Bowers, K. Stephenson, D.W.L. Sumners, K. Rehm, K. Schaper, and D.A. Rottenberg: Quasi-conformally flat mapping the human cerebellum. MICCAI'99, p. 279-286.
[13] Kambhamettu, C. and D.B. Goldgof: Point correspondence recovery in non-rigid motion. In IEEE Conference on Computer Vision and Pattern Recognition, 1992. p. 222-227.
[14] Kelemen, A., G. Szekely, and G. Gerig: Elastic model-based segmentation of 3-D neuroradiological data sets. IEEE Transactions on Medical Imaging, 1999. 18(10): p. 828-839.
[15] Kirkpatrick, S., C. Gelatt, and M. Vecchi: Optimization by simulated annealing. Science, 1983. 220: p. 671-680.
[16] Kotcheff, A.C.W. and C.J. Taylor: Automatic construction of eigenshape models by direct optimisation. Medical Image Analysis, 1998. 2: p. 303-314.
[17] Rangarajan, A., H. Chui, and F.L. Bookstein: The Softassign Procrustes matching algorithm. In 15th IPMI, 1997. p. 29-42.
[18] Therrien, C.W.: Decision Estimation and Classification. John Wiley and Sons, 1989.
[19] Wang, Y., B.S. Peterson, and L.H. Staib: Shape-based 3D surface correspondence using geodesics and local geometry. CVPR 2000, v. 2: p. 644-651.
Multi-scale 3-D Deformable Model Segmentation Based on Medial Description

Sarang Joshi, Stephen Pizer, P. Thomas Fletcher, Andrew Thall, and Gregg Tracton

Medical Image Display & Analysis Group, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514
[email protected]

Abstract. This paper presents a Bayesian multi-scale three-dimensional deformable template approach, based on a medial representation, for the segmentation and shape characterization of anatomical objects in medical imagery. Prior information about the geometry and shape of the anatomical objects under study is incorporated via the construction of exemplary templates. Anatomical variability is accommodated in the Bayesian framework by defining probabilistic transformations on these templates. The modeling approach taken in this paper for building exemplary templates and associated transformations is based on a multi-scale medial representation. The transformations defined in this framework are parameterized directly in terms of natural shape operations, such as thickening and bending, and their location. Quantitative validation results are presented for the automatic segmentation procedure developed for the extraction of the kidney parenchyma (including the renal pelvis) in subjects undergoing radiation treatment for cancer. We show that the segmentation procedure developed in this paper is efficient and accurate to within the voxel resolution of the imaging modality.
1 Introduction
Modern anatomic imaging technologies are enabling extremely detailed study of anatomy, while the development of functional imaging modalities is providing detailed in vivo information regarding physiological function. Although modern imaging modalities provide exquisite imagery of the anatomy and its function, the automatic segmentation of these images and the precise quantitative study of the biological variability they exhibit continue to pose a challenge. In this paper we present a multi-scale medial framework based on deformable templates [7], [5], [16] for the automatic extraction and analysis of the shape of anatomical objects from the brain and abdomen, imaged respectively via MRI and CT. The multi-scale deformable template approach is based on the medial axis representation of objects first proposed by Blum [3] for studying shape. The approach presented herein is an extension of the early 2D work by Pizer [13] and Fritsch [6] on deformable medial representations of objects. We adopt a Bayesian approach to incorporating prior knowledge of the anatomical variations and the variation of the imaging modalities. Following the
deformable templates paradigm, we incorporate prior information about the geometry and shape of the anatomical objects under study via the construction of exemplary templates. The infinite anatomical variability is accommodated in the Bayesian framework by defining probabilistic transformations on these templates [7]. The segmentation problem in this paradigm is that of finding the transformation S of the template that maximizes the posterior,

P(S|data) ∝ P(data|S)P(S) ,

where P(S) is the prior probability function capturing prior knowledge of the anatomy and its variability, and P(data|S) is the data likelihood function capturing the image data-to-geometry relationship. For efficiency of implementation we equivalently maximize the log-posterior, given up to an additive constant by

log P(S|data) = log P(data|S) + log P(S) .

The modeling approach taken in this paper for building exemplary templates and associated transformations is based on a multi-scale medial representation. The transformations defined in this framework are parameterized directly in terms of natural shape operations, such as thickening and bending, and their location. This multi-scale approach has many stages of scale, at each of which the geometric primitives are intuitive for that scale and have the property that their spacing is comparable to the linear measure of the size of the space (the modeling aperture) that they summarize. This leads to a spatial tolerance that successively decreases with scale level. A Markov random field approach, described in detail in [14], is used to define the energetics of the log probabilities needed for the posterior. The log probabilities at a given scale are conditioned not only on a neighborhood at that scale, but also on the result of the next larger scale. The posterior at each scale can then be separately optimized, successively decreasing the scale.
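The structure of the maximisation can be illustrated on a toy scalar problem (purely illustrative; in the paper S is a high-dimensional shape transformation, and the densities are those defined in the following sections). With a conjugate Gaussian prior and likelihood, a direct search over log P(data|s) + log P(s) can be checked against the closed-form MAP estimate:

```python
# Toy MAP estimate: scalar parameter s, Gaussian prior N(mu0, sp2),
# i.i.d. Gaussian observations with noise variance sn2 (all values assumed).
mu0, sp2 = 0.0, 4.0
sn2 = 1.0
data = [2.1, 1.9, 2.3, 2.0]

def log_posterior(s):
    # log P(data|s) + log P(s), up to an additive constant
    ll = -sum((x - s) ** 2 for x in data) / (2 * sn2)
    lp = -(s - mu0) ** 2 / (2 * sp2)
    return ll + lp

# Direct search over a fine grid ...
s_hat = max((i * 0.001 for i in range(-5000, 5000)), key=log_posterior)

# ... agrees with the closed-form MAP estimate for this conjugate model.
n = len(data)
closed = (sum(data) / sn2 + mu0 / sp2) / (n / sn2 + 1 / sp2)
print(s_hat, closed)
```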
The multi-scale nature of our approach allows the investigation of shape properties at various scales, from the coarse scale of entire body sections to the fine scale on the order of the resolution of the imaging modality. The intuitiveness derives from the ability of many of the levels of scale to describe medial properties. In addition, the size properties derived from the medial description allow the creation of natural levels of scale, each suited to shape description at that scale level. The next two sections discuss the medial representation of objects. Section 3 discusses the deformation of models to fit image data and the geometric measures used in the log-prior term measuring geometric typicality. Section 4 discusses the log-likelihood term measuring the match of a deformed model to a target image, and Section 5 gives segmentation results using this method.
2 Medial Representation of Objects
Many authors in image analysis, geometry, human vision, computer graphics, and mechanical modeling have come to the understanding that the medial relationship between points on opposite sides of a figure is an important factor in
the object's shape description. Biederman [1], Marr [11], Burbeck [4], Leyton [9], and others have produced psychophysical and neurophysiological evidence for the importance of medial relationships (in 2D projection) in human vision. The medial geometry has also been explored in 3D by Nackman [12] and Siddiqi [15], and medial axis modeling techniques have been applied by many researchers, including Bloomenthal [2], Igarashi [8], and Markosian [10]. Of these, Bloomenthal used skeletal-based soft objects; Igarashi used a medial spine in 2D to generate 3D surfaces from sketched outlines; and Markosian used implicit surfaces generated by skeletal polyhedra. Our representation, described in [Pizer 1999], expands the notion of medial relations from that of a medial atom implying boundaries, by including a width-proportional tolerance and by using a width-proportional sampling of the medial manifold in place of a continuous representation. The advantages, relative to the ideas of the medial axis descended from Blum [1967], are in representational and computational efficiency, and in stability with respect to boundary perturbation. Associating a tolerance with the boundary position provides opportunities for stages of the representation with successively smaller tolerance. Representations with large tolerance can ignore detail and focus on gross shape, and in these large-tolerance stages discrete sampling can be coarse, resulting in considerable efficiency of manipulation and presentation. Smaller-tolerance stages can focus on refinements of the larger-tolerance stages, and thus on more local aspects. The medial representation used in this paper, called the m-rep, is based on a hierarchical representation of linked figural models, defined at coarse scale by a hierarchy of figures (protrusions, indentations, neighboring figures, and included figures) which represent solid regions and their boundaries simultaneously.
The linked collection of figural components implies a fuzzy, i.e., probabilistically described, boundary position with a width-proportional tolerance. At small scale these figural boundaries are made precise by displacing a dense sampling of the m-rep implied boundary. A model for a single figure is made from a net (a mesh or a chain) of medial atoms, each atom describing not only a position and width, but also a local figural frame giving figural directions, and an object angle between opposing, corresponding positions (medial involutes) on the implied boundary. A figure can be expressed as a sequence over scale of nets, implying successively refined (smaller-tolerance) versions of the figural boundary.

2.1 Single Figure Description via m-rep
We now describe the representation of single figural forms. Our representation is based on the notion of medial involutes of Blum [1967] and starts with a parameterization of a medial atom m that locally implies opposing figural boundaries, as illustrated in Fig. 1. The medial atom m by itself implies not only two opposing sections of boundary, but also the solid region between them. Medial atoms on the interior of the medial manifold are defined as a 4-tuple m = {x, r, F, θ}, consisting of:
1. x ∈ IR³, the skeletal position;
2. r ∈ IR⁺, the local width, defined as the distance from the skeletal position to the two or more implied boundary positions;
3. F ∈ SO(3), the local frame, parameterized by (n, b, b⊥), where n is the normal to the medial manifold and b is the direction in the tangent plane of the fastest narrowing of the implied boundary sections;
4. θ ∈ [0, π/2], the object angle determining the angulation of the implied sections of boundary relative to b.

The two opposing boundary points implied by the medial atom are given by y = x + p and y = x + s, where the vectors p and s are given by

p = rR_(b,n)(θ)b ,   s = rR_(b,n)(−θ)b ,

where R_(b,n)(θ) is a rotation by θ in the (b, n) plane.
Fig. 1. A medial atom defined by the 4-tuple {x, r, F, θ} with involutes P and S perpendicular to the implied surface.
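The spoke vectors p and s can be computed directly from the atom parameters. A minimal numpy sketch (the frame vectors b and n are assumed orthonormal, and the sign convention of the rotation is an assumption of this sketch):

```python
import numpy as np

def implied_boundary_points(x, r, n, b, theta):
    """Boundary points implied by a medial atom {x, r, F, theta}.

    A rotation by +/-theta in the (b, n) plane applied to b gives the
    unit directions of the two spokes; scaling by r gives p and s.
    """
    p = r * (np.cos(theta) * b + np.sin(theta) * n)
    s = r * (np.cos(theta) * b - np.sin(theta) * n)
    return x + p, x + s

x = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])   # normal to the medial manifold
b = np.array([1.0, 0.0, 0.0])   # direction of fastest narrowing
r, theta = 2.0, np.pi / 4
yp, ys = implied_boundary_points(x, r, n, b, theta)
print(yp, ys)  # both at distance r from the skeletal position x
```

Note that the mean of the two spoke directions is b cos θ, consistent with the relation ∇r = −b cos θ used later in the text.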
For stability at the ends in image matching, medial atoms on the boundary of the medial manifold also include an extra parameter η that captures the elongation of the edge away from a spherical end cap. The end section of the medially implied boundary is a parametric curve from one involute to the other, passing through the point x + ηrb and orthogonal to b. The curve c(t), parametrized by t ∈ [−1, 1], is defined by

c(t) = x + η(t)R_(b,n)((t − 1)θ)p ,   where η(t) = (cos(tπ) + 1)(η − 1)/2 + 1 ,

with θ being the object angle.
In the above representation, x gives the central location of the solid section of figure being represented by the atom m, and the scalar r gives its local scale and size. The object angle θ and the direction b also define the gradient of the scalar field r via ∇r = −b cos θ. The scalar field r also provides a local ruler for the precise statistical analysis of the object. There are three basic types of medially defined figural segments, with corresponding medial manifolds M of dimension 2, 1, and 0 respectively: slab-like segments, where the medial manifold is two-dimensional; tube-like segments, where the medial manifold is a one-dimensional space curve; and spherical segments, where the medial manifold consists of a single point. Shown in Fig. 2 are examples of slab-like and tubular figures. In this paper we focus on slab-like segments having 2-dimensional medial manifolds discretized into a net of medial atoms. For ease of implementation we have been using a quadrilateral mesh of discretized medial atoms m^k_{i,j} ∈ M, (i, j) ∈ [1, N] × [1, M], to approximate the continuous medial manifold at a particular scale k, with tolerance and level of discretization inversely proportional to scale, the final scale having a tolerance on the order of the resolution of the imaging modality. We define a medial scale space by a sequence of successive refinements of medial nets, defined via offsets from a spline interpolation of the medial atoms from the scale above.
2.2 Spline Interpolation of Medial Atoms
Given a quadrilateral mesh of medial atoms m_{i,j}, (i, j) ∈ [1, · · · , N] × [1, · · · , M], we define a continuous medial surface via a Bézier interpolation of the discretely sampled medial atoms. The medial position x(u, v), u ∈ [i, i + 1], v ∈ [j, j + 1], is defined via a bicubic polynomial interpolation of the form

x(u, v) = Σ_{m,n=0}^{3} d_{m,n} u^m v^n ,

with the d_{m,n} chosen to satisfy the known normal/tangency and continuity conditions at the sample points x_{i,j}. Given the interpolation of the medial positions, the radius function r(u, v) is also interpolated, as a bicubic scalar field on the interpolated medial manifold, given r and ∇r at the mesh points x_{i,j}. Having interpolated r and its gradient, the frame F and the object angle θ are defined via the relationship ∇r = −b cos θ.
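Evaluating the bicubic form is straightforward once the d_{m,n} are known; the sketch below (numpy) evaluates x(u, v) for assumed example coefficients, eliding the fitting of d_{m,n} to the continuity conditions:

```python
import numpy as np

def bicubic(d, u, v):
    """Evaluate x(u, v) = sum_{m,n=0}^{3} d[m, n] * u**m * v**n.

    d has shape (4, 4, 3): one 3-D coefficient vector per (m, n) term.
    """
    um = np.array([u ** m for m in range(4)])
    vn = np.array([v ** n for n in range(4)])
    return np.einsum('m,n,mnk->k', um, vn, d)

# Assumed example coefficients: the plane x(u, v) = (u, v, 0).
d = np.zeros((4, 4, 3))
d[1, 0, 0] = 1.0   # u term in the first coordinate
d[0, 1, 1] = 1.0   # v term in the second coordinate
print(bicubic(d, 0.5, 0.25))  # -> [0.5, 0.25, 0.0]
```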
Fig. 2. The top row shows an example of a slab-like figure with a 2-dimensional medial manifold. The bottom row shows a tubular figure with a 1-dimensional medial manifold.
2.3 Figural Coordinate System
The prior (geometric typicality) measure requires geometrically consistent correspondence between boundary points in the model and those in a deformed model. The likelihood (deformed model to target image match) measure requires correspondence between template intensities at positions in 3-space relative to the model and target image intensities at positions in 3-space relative to the deformed model. Both of these correspondences are made via the medial geometry. The continuous medial manifold of a figure, defined via the spline interpolation described above, is parameterized by (u, v), with u and v taking the atom index numbers at the discrete mesh positions. A parameter t ∈ [−1, 1] designates the side of the medial manifold on which an implied boundary point lies. As described in section 2.1, t varies continuously between −1 and 1 as the implied boundary point moves around the crest of the object from one side of the medial axis to the other. For single figures, boundary correspondences are defined via the common parameterization (u, v, t). Positions in the image in the neighborhood of the implied boundary are indexed by (u, v, t, d̂), where (u, v, t) is the parameterization of the closest point on the medially implied boundary and d̂ is the signed distance (interior = negative, exterior = positive) from the boundary in multiples of the local radius r of the medial point at (u, v).
2.4 Connecting m-rep Figures into Objects
As illustrated in Fig. 3, protrusion and indentation figures combine into objects in a hierarchical fashion, with the same Boolean operators of union and difference as in Constructive Solid Geometry models, but here recognizing the tolerance of the figures. A figure may be separated from all other figures, or it may be the parent of one or more attached subfigures: protrusions and/or indentations. A subfigure on a slab or tube or sphere may be a slab or a tube. The interior of a protrusion subfigure is combined with the parent by union of their interiors, with the modification that the boundaries may smoothly blend. An indentation subfigure subtracts its interior from its parent, in the set-theoretic sense, again with smooth blending. As illustrated in Fig. 3, a slab protrusion or indentation on a figure has a segment of its medial mesh's end atoms that are at the open end of the figure and on the implied boundary of the parent, where the subfigure attaches to its parent. If the subfigure is a tube, it has a single open-end atom where the tube is attached to its parent, and a closed-end atom at the other end. We call these the hinge atoms. The remaining end atoms form the closure of that figure. We intersect the subfigure's interpolated medial mesh with the implied boundary of the parent figure. In what is presented herein we concentrate on single-figure objects.
Fig. 3. The medial mesh of a protrusion subfigure with hinge atoms, and the resulting blended implied surface.
2.5 Construction of m-rep Figures
Using the visualization and computer-aided design techniques we have developed, we have built numerous models of anatomical objects. In this paper we focus on the automatic segmentation of the kidney as imaged in CT for radiation treatment for cancer. Shown in Fig. 4 is the template m-rep model of the kidney, built from a CT of the abdomen.
Fig. 4. The m-rep model of the template kidney. The left panel shows the medial atoms and the implied surface. The right panel shows the model overlaid on the associated CT imagery.
3 Transformation of m-rep Figures
Having defined the construction of typical anatomical objects via m-rep figures, anatomical variability is accommodated by defining a cascade of transformations S^k, k = 0, · · · , N, increasing in dimensionality. These transformations are applied globally to the entire object as well as locally to individual atoms at various scales. Each transformation is applied at its own level of locality to each of the primitives appearing at that level. At each level of locality, following the Markov random field framework, a primitive is related only to immediately neighboring primitives at that level. Each level's result provides both an initial value and a prior for the primitives at the next smaller scale level. The transformation at the last (smallest) scale level is a dense displacement field applied to the boundary of the figure on the scale of the voxel resolution of the imaging modality.
3.1 Object-Level Similarity Transformation
To begin with, a similarity transformation S⁰ = (α, O, t) ∈ [(IR⁺ × SO(3)) ⋉ IR³] is defined on the scale of the entire object and is applied to the whole medial manifold M. The similarity transformation S⁰ scales, translates, and rotates all the medial atoms of the object equally, that is,

m¹_{i,j} = S⁰ ∘ m_{i,j} = {αOx_{i,j} + t, αr, O ∘ F, θ} .

Notice that the similarity transformation does not affect the object angle. As the medial representation is invariant under similarity transformations, this is
equivalent to applying the similarity transformation S⁰ to the implied boundary B of the medial mesh to yield the transformed boundary B¹. A prior is induced on the above-defined transformation based on the displacement of the implied boundary of the object. Throughout, an independent Gaussian prior on boundary displacement is used, with variance proportional to the local radius r. For the whole-object similarity transformation S⁰ the log-prior becomes

log P(S⁰) = − ∫_B ||y − S⁰ ∘ y||² / (2(σr(y))²) dy .
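Applying S⁰ atom-by-atom can be sketched as follows (numpy; the dict layout of an atom and the row convention for the frame F are assumptions of this sketch, and θ is left untouched, as in the text):

```python
import numpy as np

def apply_similarity(atom, alpha, O, t):
    """Apply the object-level similarity S0 = (alpha, O, t) to a medial atom.

    atom is a dict {x, r, F, theta}; F is a 3x3 frame matrix (rows n, b, b_perp).
    The position is rotated, scaled, and translated; r is scaled; the frame is
    rotated; the object angle theta is unchanged.
    """
    return {
        'x': alpha * O @ atom['x'] + t,
        'r': alpha * atom['r'],
        'F': atom['F'] @ O.T,      # rotate each frame row vector (row convention assumed)
        'theta': atom['theta'],
    }

# 90-degree rotation about z, doubling of scale, unit translation in x.
O = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
atom = {'x': np.array([1.0, 0.0, 0.0]), 'r': 2.0, 'F': np.eye(3), 'theta': 0.5}
out = apply_similarity(atom, alpha=2.0, O=O, t=np.array([1.0, 0.0, 0.0]))
print(out['x'], out['r'], out['theta'])  # position moved, radius scaled, angle unchanged
```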
3.2 Atom Level Transformation
Having accomplished the gross placement of the figure, attention is now focused on the subsections of the figure defined by each of the medial atoms. At this stage, local similarity transformations, as well as rotations of the local angulation, S¹_{i,j} = (α, O, t, β)_{i,j} ∈ [(IR⁺ × SO(3)) ⋉ IR³] × [−π/2, π/2], are applied to each medial atom, that is,

m²_{i,j} = S¹_{i,j} ∘ m¹_{i,j} = (α_{i,j}O_{i,j}x¹_{i,j} + t_{i,j}, α_{i,j}r¹_{i,j}, O_{i,j} ∘ F¹_{i,j}, θ¹_{i,j} + β_{i,j}) .   (1)
The resulting implied boundary is denoted B². A prior on the local atom transformations S¹_{i,j} is also induced based on the displacement of the implied boundary, with an additional Markov random field prior on the translations guaranteeing the smoothness of the medial manifold. In keeping with the level of locality, let B¹_{i,j} be the portion of the implied boundary affected by the atom m¹_{i,j}. The prior energy on the local transformation S¹_{i,j} of the atom m¹_{i,j} becomes

log P(S¹_{i,j}) = − ∫_{B¹_{i,j}} ||y − y′||² / (σr(y))² dy − Σ_{n,m=−1}^{1} ||t_{i,j} − t_{i+n,j+m}||² / ||x¹_{i,j} − x¹_{i+n,j+m}|| ,

where y′ is the corresponding position on the figural boundary implied by the transformed atom m², and t_{i,j} is the translation component of the local transformation S¹_{i,j}. Good association between points on the boundary y and the deformed boundary y′ is made using the figural coordinate system described in section 2.3: the point y′ is the point on the deformed model having the same (u, v, t) coordinates as the original point y. The integral in the above prior is implemented as a discrete sum over a set of boundary points, by defining a sampling of the (u, v, t) coordinate space and calculating the associated implied boundary before and after an atom deformation.
3.3 Dense Boundary Displacement Field Transformation
At the final stage, the implied boundary of the figure is displaced in the normal direction using a dense displacement field defined on the implied boundary B²:

y′ = y + n(y)d(y), y ∈ B² ,

yielding the boundary B³, where n(y) is the normal to the implied boundary at y ∈ B².
As with the local atom transformations, a prior is induced on the dense displacement field using a Markov random field prior, derived from the energetics associated with thin elastic membranes, to guarantee smoothness. The log-prior on the displacement field d(y) becomes

log P(d) = − ∫_{B²} |d(y)|² / (σr(y))² dy − ∫_{B²} |∇d(y)|² dy .   (2)

The above prior is implemented discretely as follows. Let y_i ∈ B², i = 1, · · · , N, be the set of discrete boundary points on the implied boundary B², and let N(y_i) be the set of neighbors of the point y_i. The discrete approximation of equation 2 becomes

− Σ_{i=1}^{N} |d(y_i)|² / (σr(y_i))² − Σ_{i=1}^{N} Σ_{j∈N(y_i)} |d(y_j) − d(y_i)|² / ||y_j − y_i|| .
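The discrete approximation is a pair of sums that can be evaluated directly. A numpy sketch over an assumed three-point boundary chain (the neighbourhood structure is supplied explicitly; this is an illustration, not the paper's implementation):

```python
import numpy as np

def log_prior_displacement(y, d, nbrs, r, sigma):
    """Discrete MRF log-prior for boundary displacements d(y_i).

    y:    (N, 3) boundary points;  d: (N,) normal displacements
    nbrs: list of neighbour index lists;  r: (N,) local radii
    """
    data_term = -np.sum(d ** 2 / (sigma * r) ** 2)
    smooth = 0.0
    for i, js in enumerate(nbrs):
        for j in js:
            smooth -= (d[j] - d[i]) ** 2 / np.linalg.norm(y[j] - y[i])
    return data_term + smooth

y = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
d = np.array([0.0, 0.1, 0.0])
nbrs = [[1], [0, 2], [1]]          # chain neighbourhood (assumed)
r = np.ones(3)
lp = log_prior_displacement(y, d, nbrs, r, sigma=1.0)
print(lp)  # more displacement or rougher fields give a lower (more negative) value
```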
4 Image Data Log-Likelihood
Having defined the transformations and the associated prior energetics, we now define the data likelihood function needed for the posterior. We define the data likelihood functions, using the object-centered coordinate system developed in section 2.3, as correlation functions between a predefined template image I_temp and the data I_data in the neighborhood of the boundary of the medially defined object B. Letting δ be the size of the collar around the object, in multiples of the local radius r, the data log-likelihood function becomes

∫_{−δ}^{δ} ∫_B I_temp(y, d̂) I_data(y′, d̂) dy dd̂ ,   (3)
where (y, d̂) ∈ IR³ is the point in the template image at distance rd̂ away from the boundary point y, and (y′, d̂) is the point in the data image at distance rd̂ away from the boundary point y′ in the transformed object B′. This association between points in the template image and the data image is made using the object coordinate system described in section 2.3: image positions in the neighborhood of the implied boundary are indexed by (u, v, t, d̂), where (u, v, t) is the parameterization, in the object-centered coordinate system, of the closest point on the medially implied boundary B, and d̂ is the signed distance (interior = negative, exterior = positive) from the boundary in multiples of the local radius r of the medial point at (u, v). In implementing the correlation defined in Eqn. 3, care must be taken in implementing the surface integral as a discrete voxel summation: the template image needs to be normalized by the determinant of the Jacobian associated with the implied model surface B. At model-building time, intensities in the template image I_temp are associated with their positions' (u, v, t, d̂) values. As the model deforms, a target image position is calculated
for each template (u, v, t, d̂) value, using the deformed model, and the intensity interpolated at that target image position is associated with the corresponding template intensity. We have been using two basic types of templates: an analytical template derived from the derivative of the Gaussian, and an empirical template learned from the example image from which the template medial model was built. Using the data likelihood defined above and the prior defined in the previous section, the log-posterior is defined as a weighted sum of the two terms, with weights chosen by the user. For optimizing the log-posterior with respect to the global object similarity transformation and the local atom-by-atom transformations, we have been using a genetic optimization algorithm. Genetic algorithms have the advantages of not being susceptible to local minima and of not requiring the computation of the derivative of the posterior with respect to the transformation parameters. For optimizing the posterior with respect to the dense displacement field d(y), we have been using a simple gradient descent algorithm.
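A discrete version of the correlation in Eqn. 3 can be sketched as follows (numpy; the intensity profiles are assumed to be pre-sampled on a common (u, v, t, d̂) grid, and the Jacobian normalization is elided):

```python
import numpy as np

def log_likelihood(template_profiles, data_profiles, dd, dy):
    """Discrete template-data correlation over the boundary collar.

    Both arrays have shape (num_boundary_points, num_depth_samples):
    rows index boundary samples (u, v, t), columns index depths d-hat.
    dd and dy are the (assumed) depth and surface quadrature weights.
    """
    return np.sum(template_profiles * data_profiles) * dd * dy

rng = np.random.default_rng(0)
template = rng.standard_normal((100, 9))
# Identical profiles (a perfectly matching image) vs. unrelated profiles.
match = log_likelihood(template, template, dd=0.1, dy=0.05)
mismatch = log_likelihood(template, rng.standard_normal((100, 9)), dd=0.1, dy=0.05)
print(match, mismatch)  # a matching image scores higher
```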
5 Results
We have been using the automatic segmentation procedure for extracting the kidney parenchyma (including the renal pelvis) in subjects undergoing radiation treatment for cancer. Results from a series of three data sets are presented. In a few seconds, the user rigidly places the template model in the subject data set. This initialization stage of the algorithm is followed by the hierarchical automatic segmentation, which takes on the order of 5 minutes to converge, depending on the data set. At the first scale level, an object similarity transformation is estimated, accommodating gross size and orientation differences between the template model kidney and the subject's kidney. Fig. 5 compares the results of the similarity transformation to the clinical hand segmentation in the axial, coronal, and sagittal views through the kidney. The yellow contour of the resulting implied boundary is overlaid for comparison with the clinical hand segmentation, shown in red. Note that the clinical hand segmentation did not include the renal pelvis, while our single-figure model of the kidney used in this study does include it. Fig. 5 also shows the improvement in the segmentation as a result of the atom deformation process, which accommodates more local object shape changes. The arrow in Fig. 6 highlights the improvement due to the final stage of the deformation, as the dense displacement field accommodates the fine-featured variation in the shape of the kidney. For quantitative comparison of the segmentations of the method with manual segmentations, we have used two metrics from VALMET, a geometric scoring package developed by Guido Gerig and Matthieu Jomier: relative overlap and mean surface distance. The relative overlap measure is defined as the ratio of the intersection of the two segmentations to their union. Although relative overlap is commonly used in the literature for scoring
Multi-scale 3-D Deformable Model Segmentation
75
Fig. 5. Axial (left), coronal (middle) and sagittal (right) slices through the subject kidney CT data set. The contours show the results of the object similarity transformation and the atom deformation. Notice the improvements in the results at the places marked.
Fig. 6. The improvement in the segmentation of the kidney after the dense displacement field deformation. The contours show the results of the atom transformation and of the dense displacement field deformation.
segmentations, it is sensitive to the size of the object and not very effective in characterizing shape differences between two segmentations. The symmetric mean surface distance $D_s$ between the boundaries of the two segmentations, computed using Euclidean distance transforms of the segmentations, is defined as follows. Let $y_i^1, i = 1, \cdots, N$ and $y_j^2, j = 1, \cdots, M$ be the boundary points of the two segmentations $B^1, B^2$; the mean surface distance is then

$$D_s(B^1, B^2) = \frac{1}{2}\left[\frac{1}{N}\sum_{i=1}^{N}\min_{j=1\cdots M}\|y_i^1 - y_j^2\| + \frac{1}{M}\sum_{j=1}^{M}\min_{i=1\cdots N}\|y_i^1 - y_j^2\|\right].$$

Table 1 summarizes the results of the study for the three data sets. The results shown above are typical of the three data sets and are from data set 613. The segmentation improves at each stage of the algorithm for all three data sets. The accuracy of the segmentation as measured via the
mean surface distance is on the order of the resolution of the data set and on average within one pixel of the hand segmentation.

Table 1. Relative overlaps and mean surface distances between the manual segmentations and the automatic segmentations at the different stages of the hierarchical procedure for the three data sets processed.

Data Set (voxel size, cm)   Scale Level                Relative Overlap   Surface Distance (cm)
613  (0.15 × 0.15 × 0.5)    Similarity transformation  0.85               0.26
                            Atom deformation           0.86               0.23
                            Field deformation          0.90               0.16
608  (0.2 × 0.2 × 0.4)      Similarity transformation  0.88               0.22
                            Atom deformation           0.89               0.19
                            Field deformation          0.93               0.14
1402 (0.15 × 0.15 × 0.3)    Similarity transformation  0.77               0.65
                            Atom deformation           0.86               0.38
                            Field deformation          0.90               0.38
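For concreteness, the two scores can be sketched in a few lines of NumPy (a minimal illustration under assumed array conventions, not the VALMET implementation):

```python
import numpy as np

def relative_overlap(a, b):
    """Ratio of the intersection of two binary segmentations to their union."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def mean_surface_distance(b1, b2):
    """Symmetric mean distance between two boundary point sets,
    given as (N, 3) and (M, 3) coordinate arrays."""
    d = np.linalg.norm(b1[:, None, :] - b2[None, :, :], axis=-1)  # all pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

A production implementation would extract the boundary points from distance transforms of the label volumes, as described above, rather than take them as given.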
6 Discussion and Conclusion
It can be seen from the quantitative analysis of the segmentations that the accuracy of the automatic segmentation, as measured via the average surface distance, is on the order of the resolution of the imaging modality. Although these results show that our current methodology can segment structures in the abdomen such as the kidney with a high level of accuracy, improvement can be expected from a change in the image template used in the data likelihood. All the results shown in this paper were generated using a Gaussian derivative template for the data likelihood. We expect that the results would be substantially improved by the use of our already implemented, but not yet tested, training image template in place of the Gaussian derivative template; it would allow a spatially varying template capturing the different gray-scale characteristics of the kidney boundaries. This model-to-image match would be further improved by a statistical model reflecting image intensity variations across a population of subjects. We have also been working on extending this framework to the deformation of objects with multiple attached sub-figures and to multiple objects, with priors induced on the transformations that reflect knowledge of the associated typical relative geometry.
7 Acknowledgement
We thank Prof. Gerig and Matthieu Jomier for the use of their scoring tool for the comparison of segmentations, as well as for many insightful discussions and comments. We would also like to thank Dr. Zhi Chen for generating
the table comparing the segmentations. We also thank Prof. Ed Chaney for providing us with the data sets and invaluable insights. This work was supported by NIH grants P01 CA47982 and R01 CA67183. This research was carried out on computers donated by Intel.
Automatic 3D ASM Construction via Atlas-Based Landmarking and Volumetric Elastic Registration

Alejandro F. Frangi1, Daniel Rueckert2, Julia A. Schnabel3, and Wiro J. Niessen1

1 Image Sciences Institute, University Medical Center Utrecht (UMC), Room E.01.334, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands, {alex,wiro}@isi.uu.nl
2 VIPG, Department of Computing, Imperial College, London, UK, [email protected]
3 CISG, Radiological Sciences, Guy’s Hospital, King’s College London, UK, [email protected]

Abstract. A novel method is introduced that allows for the generation of landmarks for three-dimensional shapes and the construction of the corresponding 3D Active Shape Models (ASM). Landmarking of a set of examples from a class of shapes is achieved by (i) construction of an atlas of the class, (ii) automatic extraction of the landmarks from the atlas, and (iii) subsequent propagation of these landmarks to each example shape via a volumetric elastic deformation procedure. This paper describes in detail the method used to generate the atlas, and the landmark extraction and propagation procedures. The technique presents some advantages over previously published methods: it can treat multiple-part structures, and it requires less restrictive assumptions about the structure’s topology. The applicability of the technique is demonstrated with two examples: CT bone data and MR brain data.
1 Introduction
Statistical models of shape variability [5], or Active Shape Models (ASM), have been successfully applied to perform segmentation and recognition tasks in two-dimensional images. In building those statistical models, a set of segmentations of the shape of interest is required, as well as a set of landmarks that can be defined in each sample shape. Manual segmentation and determining point correspondences are time-consuming and tedious tasks. This is particularly true for three-dimensional applications, where the number of slices to analyze and the number of landmarks required to describe the shape increase dramatically with respect to two-dimensional applications. This work aims at automating the landmarking procedure while still relying on the existence of a manual segmentation of the shapes. Several authors have proposed techniques to find point (landmark) correspondences, but only a few of them have indicated or investigated their applicability in

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 78–91, 2001.
© Springer-Verlag Berlin Heidelberg 2001
Automatic 3D ASM Construction
79
the field of statistical shape models. Wang et al. [16] use a surface registration technique to find 3D point correspondences based on a metric matching surface-to-surface distance, surface normals, and curvature. The authors suggest that this technique could be used to build 3D ASMs, but they do not report any results on statistical model building. Kelemen et al. [9] report on the construction of 3D ASMs of neuroradiological anatomical structures. In this method the authors use a correspondence-by-parameterization approach to establish surface landmarks: the landmark correspondence is defined in the parameter domain of an underlying spherical harmonic parameterization. Although this approach has been used to build 3D ASMs, no explicit volumetric or surface registration between shapes takes place. To our knowledge, little work has been done on the automatic construction of 3D ASMs using elastic registration [7,8,4,3]. The frameworks proposed by Brett and Taylor [4,3] are most closely related to this paper. In these approaches, each shape is first converted into a polyhedral representation. In the first approach [4], shape pairs are matched using a symmetric version of the Iterative Closest Point (ICP) algorithm of Besl and McKay [2]. Using this method, the authors were able to build 3D ASMs by automatically finding corresponding landmarks between surfaces. Surfaces are represented by means of dense triangulations that are matched via sparse triangulations (obtained by triangle decimation from the dense triangulations). The nodes of this sparse triangulation become the final landmarks. One problem acknowledged by the authors is the possibility of obtaining shape models with surface folding, due to some landmark groups (triples) being matched in different order between training examples. This is a consequence of the use of the ICP technique, which does not incorporate connectivity constraints (purely local registration).
In Brett and Taylor [3] this problem is overcome by transforming the surface to a planar domain by means of harmonic maps, where connectivity constraints can be explicitly enforced. This technique avoids invalid cross-correspondences but is only applicable to single-part shapes that are topologically isomorphic to a disk. The work by Fleute and Lavallée [7,8] is also closely related to our work. They use a multi-resolution elastic registration technique based on octree-splines; theirs is a surface-based technique that registers shapes by minimization of a distance measure. In contrast, in this work we use a free-form elastic registration technique based on maximization of normalized mutual information (a volume-based technique). In addition, we provide experiments giving empirical evidence of the convergence of the atlas generation procedure, which is not analyzed in [7,8]. In this work a technique is introduced that addresses the shortcomings of point-based registration, where no overall connectivity constraints are imposed. Our method introduces global constraints by changing the pairwise shape corresponder from a point-based registration technique to a volume-based elastic registration technique. By construction, the deformation field is enforced to be smooth, and the regularization term of the deformation will further penalize folding. In addition, our method can be applied to multiple-part shapes. The paper is organized as follows. In Section 2, our approach is described. In Section 3, results are presented that show the applicability of the method to modeling the radius in volumetric Computed Tomography (CT) data and the caudate nucleus in Magnetic Resonance Imaging (MRI); empirical evidence is given on convergence properties and reconstruction errors. Finally, Section 4 closes the paper with some conclusions and directions for future research.
2 Method

2.1 Background
Suppose that we have n shapes described as vectors, $\{x_i;\ i = 1 \cdots n\}$. Each shape consists of l 3-D landmarks, $\{p_j = (p_j^1, p_j^2, p_j^3);\ j = 1 \cdots l\}$, that represent the nodes of a surface triangulation. How to obtain those l 3-D landmarks is not a trivial issue and is precisely the topic of this paper. Each vector is of dimension 3l and is made up of the concatenation of the landmarks, i.e. $x_i = (p_1^1, p_1^2, p_1^3, p_2^1, p_2^2, p_2^3, \cdots, p_l^1, p_l^2, p_l^3)$. Moreover, it is assumed that the positions of the landmarks of all shapes are in the same coordinate system. These vectors form a distribution in a 3l-dimensional space. The goal is to approximate this distribution with a linear model of the form

$$x = \hat{x} + \Phi b \qquad (1)$$

where $\hat{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ is the average landmark vector, b is the shape parameter vector of the model, and $\Phi$ is a matrix whose columns are the principal components of the covariance matrix $S = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \hat{x})(x_i - \hat{x})^T$. The principal components of S are calculated as its eigenvectors, $\phi_i$, with corresponding eigenvalues, $\lambda_i$ (sorted so that $\lambda_i \geq \lambda_{i+1}$). If $\Phi$ contains the t eigenvectors corresponding to the largest eigenvalues, then we can approximate any shape of the training set, x, using Eqn. (1), where $\Phi = (\phi_1|\phi_2|\cdots|\phi_t)$ and b is a t-dimensional vector given by $b = \Phi^T(x - \hat{x})$. The vector b defines the shape parameters of the ASM. By varying these parameters we can generate different instances of the shape class under analysis using Eqn. (1). Under the assumption that the cloud of landmark vectors follows a multi-dimensional Gaussian distribution, the variance of the i-th parameter, $b_i$, across the training set is given by $\lambda_i$. By applying limits to the variation of $b_i$, for instance $|b_i| \leq 3\sqrt{\lambda_i}$, it can be ensured that a generated shape is similar to the shapes contained in the training class.

2.2 Overview
Ideally, a landmark is an anatomically characteristic point that can be uniquely identified on a set of shapes. However, anatomical landmarks are usually too
sparse to accurately describe a 3D shape. Therefore, we will consider pseudo-landmarks, i.e. landmarks lying on the shape’s surface and determining its geometry. In our framework, automatic landmarking is carried out by mapping the landmarks of an atlas that is representative of a set of training shapes. Let us assume that n segmented shapes (3D binary images) are available, $T_n = \{B_i\}$, where $i = 1 \cdots n$. To generate the landmarks for the n shapes, the task is to build an atlas A, landmark it, and propagate its landmarks to the n shapes (Fig. 1). In the following we describe these three steps in detail.

Fig. 1. Overview of the automatic landmarking framework. All individual data sets are matched to an atlas via a quasi-affine transformation (Ta) and an elastic transformation (Te). The landmarks in the atlas can then be copied to the individual patients. The elastic deformation is subsequently reversed. Thus, Principal Component Analysis (PCA) is carried out in a space where all shapes are aligned with the atlas (atlas-aligned coordinates). The principal modes of variation will therefore account for elastic deformations and not for pose or size changes.
Atlas Building. In the context of this paper, an atlas is an average representation of the shape of a structure, inferred from a set of training shapes $T_n$. In order to build the atlas, three issues have to be addressed: the selection of a pairwise corresponder to match two different shapes, a strategy to blend shapes that are represented as binary volumes in a common coordinate frame, and a scheme to obtain an average or mean shape with marginal bias towards a particular individual.

Pairwise shape corresponder. Given a shape $B_i$, it is matched to the atlas, A, using a quasi-affine registration algorithm with nine degrees of freedom (rigid transformation plus anisotropic scaling) adapted from [14]. This algorithm matches shapes using a criterion based on normalized mutual information [15]. Since the shapes are binary images, we have experimented with several other registration measures (sum of squared differences and cross-correlation), but normalized mutual information was found to be superior. After registration, the shape $B_i$ is expressed in the coordinate system of A, which will be referred to as the atlas-aligned coordinate system.

Shape blending. Once we have found the quasi-affine transformations that map each of the $B_i$ shapes into atlas-aligned coordinates, these shapes have to be combined to form an average shape (binary image). Let $B_i$ and $DT(B_i)$ denote the shape in atlas coordinates and its Euclidean distance transform [6], respectively, with the convention that inner points have a negative distance while outer points have a positive distance. Then, an average shape can be obtained in the distance-transformed domain by computing $DT(B_{av}) = \frac{1}{n}\sum_{i=1}^{n} DT(B_i)$. A binary representation of the shape $B_{av}$ can be obtained by thresholding the distance transform map at its zero-level set (Figure 2(a)).

Mean shape. To generate the mean shape it is necessary to register all $T_n$ shapes into a common reference frame (atlas-aligned coordinates). However, the atlas is not initially known. To solve this problem an iterative algorithm was developed. One training shape is randomly selected as the initial atlas, $A_0$, and all remaining shapes are registered to it using the pairwise shape corresponder. After this step, all shapes $T_n$ are expressed in the canonical system of $A_0$ and can be blended to generate a new atlas $A_1$. This procedure is iterated I times to reduce the effect of the initial shape. Any metric of similarity between the atlases of two consecutive iterations can be used to monitor the convergence of the procedure. The final atlas is $A_I$. This iterative algorithm is summarized in the flow diagram of Figure 2(b).
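The shape-blending step can be sketched as follows (an illustrative NumPy version with a brute-force distance transform; a real implementation would use a fast EDT, and all function names are ours):

```python
import numpy as np

def edt(mask):
    """Brute-force Euclidean distance transform: distance of every voxel to
    the nearest nonzero voxel of `mask` (fine for small illustrative grids)."""
    targets = np.argwhere(mask)
    grid = np.indices(mask.shape).reshape(mask.ndim, -1).T
    d = np.sqrt(((grid[:, None, :] - targets[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(mask.shape)

def signed_dt(binary):
    """Signed distance: negative inside the shape, positive outside,
    matching the paper's sign convention."""
    return edt(binary) - edt(1 - binary)

def blend(shapes):
    """Average the signed distance transforms of registered binary shapes
    and threshold at the zero-level set to recover a binary average shape."""
    mean_dt = np.mean([signed_dt(b) for b in shapes], axis=0)
    return (mean_dt <= 0).astype(np.uint8)
```

Averaging in the distance-transform domain, rather than averaging the binary masks directly, yields a smooth interpolation between shapes whose zero-level set is a sensible "mean" boundary.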
To check for the influence of the randomly selected training shape, atlases with different start shapes have been quantitatively compared.

Atlas Landmarking. By means of the iterative procedure of the previous subsection, a binary atlas, A, has been obtained. In order to landmark this atlas, the marching cubes algorithm [11] is used, which generates a dense triangulation of the boundary surface. This triangulation can be decimated to obtain a sparse set of nodes that preserves the geometry of the original triangulation to a desired degree of accuracy. The number of nodes in this decimated triangulation corresponds to the number of landmarks. The use of different triangle densities (decimation ratios) has been investigated to observe their influence on the statistical models generated with our technique (see the results section). The decimation strategy applied in this paper is the one proposed by Schroeder et al. [13]. Note that, as an alternative to marching cubes, an expert could manually pinpoint anatomical landmarks in the atlas. Anatomical landmarks, however, may be too sparse to accurately represent the shape of the structure. By using marching cubes, a dense and approximately even distribution of landmarks is obtained.
Automatic 3D ASM Construction
B’1
DT
DT 1 Threshold
B’n
DT
83
Bav
DT n
(a)
(b)
Fig. 2. (a) Shape-based blending of n registered binary shapes based on distance transforms (DT). By convention, the inside of the shape has negative distance and the outside positive distance. (b) Flow diagram of the iterative atlas construction algorithm.
Landmark Propagation. Once the atlas is constructed and landmarked, its landmarks can be propagated to the individual shapes. This is carried out by warping each sample binary volume into the atlas with a transformation, T = Ta + Te, that is composed of a quasi-affine (Ta) and an elastic (Te) transformation. The transformation Ta accounts for pose and size differences between the atlas and each sample volume, while Te accounts for shape differences. The global transformation is obtained using a quasi-affine registration algorithm adapted from [14]. Registration of binary volumes was carried out using normalized mutual information [15]. The elastic transformation is expressed as a volumetric free-form deformation field computed with the method of Rueckert et al. [12], which also uses normalized mutual information as a registration measure. Once the global transformation T has been found, the landmarks of the atlas can be propagated to the atlas-aligned coordinate system by applying the inverse of the elastic transformation (Te⁻¹). This process is repeated for each sample shape. As a result, a set of landmarks is obtained that describes shape variations with respect to the atlas. Since these landmarks are now in atlas-aligned coordinates, pose and size variations are explicitly eliminated from further analysis. These transformed landmarks are subsequently used as input for Principal Component Analysis (PCA), as indicated in Figure 1. Figure 1 suggests that each sample shape is warped to the atlas. In that case, the inverse of the deformation field would have to be computed to propagate the landmarks; however, this mapping does not necessarily exist, and the figure is drawn that way for conceptual simplicity only. From a computational point of view
it is more convenient to warp the atlas to each sample shape and use the direct deformation field for landmark propagation.
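Propagation then amounts to sampling the direct deformation field at each atlas landmark. A hedged sketch with trilinear interpolation follows; the array layout (a displacement vector stored per voxel) is our assumption, not the authors' code:

```python
import numpy as np

def propagate_landmarks(landmarks, displacement):
    """Move 3-D landmarks through a dense displacement field.

    landmarks:    (l, 3) array of points in voxel coordinates.
    displacement: (X, Y, Z, 3) array, the deformation sampled on the grid.
    The field is sampled at each landmark by trilinear interpolation."""
    out = np.empty_like(landmarks, dtype=float)
    for k, p in enumerate(landmarks):
        i0 = np.floor(p).astype(int)
        f = p - i0  # fractional position inside the voxel cell
        disp = np.zeros(3)
        # accumulate the 8 surrounding grid samples, weighted trilinearly
        for corner in np.ndindex(2, 2, 2):
            idx = np.minimum(i0 + corner, np.array(displacement.shape[:3]) - 1)
            w = np.prod(np.where(np.array(corner) == 1, f, 1 - f))
            disp += w * displacement[tuple(idx)]
        out[k] = p + disp
    return out
```

In the paper's pipeline the field would be the free-form deformation from warping the atlas to a sample shape, so the moved landmarks land on that sample's surface.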
3 Results

3.1 Data Sets
In order to exemplify the methodology, two case studies were analyzed. The first consists of a set of 14 manual segmentations of the head of the radius, a bone of the wrist, extracted from CT scans (voxel dimensions 1 × 1 × 2 mm³). The second is a set of 20 manual segmentations of the caudate nucleus, a deep structure of the brain, from MR scans (voxel dimensions 1 × 1 × 1.2 mm³). In building the model of the caudate nucleus, each hemisphere of the structure was treated independently. This was done because this particular two-part structure has an almost specular symmetry with respect to the sagittal plane separating the left and right brain hemispheres; such symmetry would be difficult to capture with a single quasi-affine transformation. After the landmarks of each side (sub-atlas) are extracted and propagated, Principal Component Analysis (PCA) is applied to the concatenation of the landmarks of both sides. In this way, inter-hemisphere relationships are included in the statistical analysis.

3.2 Atlas Construction
Convergence Properties. As a metric to measure convergence we have used the κ statistic [1]. This statistic measures the similarity between two binary images, $\kappa(A_m, A_{m-1})$, in a way that is independent of the structure’s volume. Figure 3 shows the evolution of the κ statistic, κ(m), as a function of the iteration number, m. The statistic ranges between 0.0 and 1.0, and a value above 0.9 is usually regarded as excellent agreement [1]. The κ(m) statistic compares the similarity between the atlases $A_m$ and $A_{m-1}$. Figure 3(a) corresponds to the atlas of the radius; two curves are shown for two different initial shapes used in the initialization procedure. Similar curves are drawn in Figure 3(b) for the left and right caudate nucleus atlases. The atlas of each subpart (left/right caudate nucleus) was obtained independently. The trend of these plots is similar to that observed for the atlas of the radius. Figure 3 indicates that after five iterations the shape of the atlas stabilizes (κ > 0.97).

Effect of Initial Shape. We investigated whether the atlases generated with the two different initializations are comparable in shape, i.e. similar up to a quasi-affine registration. This was done in the following way. For each individual shape, two quasi-affine transformations can be found that map it to each of the two atlases, A and B. Let us call these transformations $T_{A_i}$ and $T_{B_i}$, respectively. Let $T_{AB}$ be the quasi-affine transformation that maps atlas A into atlas B. In this situation, the transformation $T_i = T_{B_i}^{-1} T_{AB} T_{A_i}$ should be equal to the identity transformation, $T_I$. It is possible now to measure the average and the
Fig. 3. Convergence of the atlas construction algorithm: the κ statistic between two consecutive atlases as a function of the iteration number, for (a) the radius and (b) the caudate nucleus. Iteration zero corresponds to the reference (initial) shape used in the iterative algorithm. The κ(m) statistic compares the agreement between the atlases Am and Am−1. Curves for different initial shapes (A and B) are shown.
standard deviation of the difference $T_i - T_I$. These two measures provide the bias and dispersion introduced by using two different initial shapes to build the atlas. The results of this analysis are shown in Table 1 for each atlas and each transformation parameter. The table indicates that the deviation from an identity transformation depends on the type of shape: for the very elongated and thin structure of the caudate nucleus, the error standard deviations (SDs) are larger than for the radius. As a consequence, the influence of the initial shape on the final atlas will depend on the shape itself. Translation and rotation error SDs are below 3.3 mm and 0.1°, respectively; scaling error SDs are below 14.5%. From a practical point of view, Table 1 indicates that the atlas does indeed depend on the initial shape and that the effect depends on the class of shapes being modeled. In the applications presented in this paper, this effect is not critical. After performing a quasi-affine registration of the atlases
Table 1. Mean (standard deviation) of the error in each transformation parameter (translation, rotation and scaling) of the transformation Ti with respect to the identity transformation for the three different atlases.

Parameter  Units  Radius         Caudate (L)     Caudate (R)
tx         [mm]   -0.72 (1.68)   +1.25 (3.28)    +0.62 (1.42)
ty         [mm]   -1.20 (1.32)   -0.20 (0.71)    -0.14 (0.57)
tz         [mm]   +0.64 (1.99)   -0.25 (0.54)    +0.06 (0.17)
rx         [°]    +0.01 (0.02)   -0.01 (0.03)    +0.02 (0.03)
ry         [°]    -0.01 (0.02)   -0.04 (0.09)    +0.10 (0.05)
rz         [°]    -0.01 (0.02)   +0.01 (0.08)    -0.02 (0.06)
sx         [%]    -0.57 (1.99)   +3.45 (14.51)   -5.60 (8.20)
sy         [%]    -1.48 (1.78)   -2.12 (6.28)    -1.47 (3.92)
sz         [%]    +1.57 (6.08)   -3.22 (7.23)    -1.98 (4.12)
Fig. 4. Shape instances generated using the 3D model built from 14 data sets of the radius: the first three modes of variation, each shown at −3√λi, the mean, and +3√λi. The instances are generated by varying a single shape parameter, fixing all others at zero standard deviations from the mean shape. Each instance of the model consists of 2500 nodes.
generated with the two different initializations, the average boundary-to-boundary distance between the two atlases was 1.3 mm for the radius and 0.6 mm for the two caudate nucleus atlases. These errors are on the order of, and slightly smaller than, the respective voxel dimensions.
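For binary images, one common form of the κ agreement statistic used above to monitor convergence is the Dice coefficient; this is our assumption for illustration, and the paper's exact chance-corrected formula may differ:

```python
import numpy as np

def kappa(a, b):
    """Dice-style agreement 2|A ∩ B| / (|A| + |B|) between two binary
    images; normalized so that it does not grow with absolute volume."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

In the iterative atlas construction, `kappa(A_m, A_prev)` rising above roughly 0.97 would serve as the stopping signal described in the text.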
3.3 Point Distribution Models
Figures 4 and 5 show the mean shape models and the first three modes of variation obtained from PCA for the radius and caudate nucleus test cases, respectively. The number of mesh nodes is 2500 and 1000, respectively. In both cases there are no visible surface foldings, either in the mean shape or in the models at ±3√λi.
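Generating such mode instances from landmark vectors is a few lines of linear algebra; the sketch below (our variable names, PCA computed via an SVD of the centered data) follows Eqn. (1) and the ±3√λi convention:

```python
import numpy as np

def build_asm(X):
    """PCA shape model from an (n, 3l) matrix of landmark vectors.
    Returns the mean shape, the mode matrix Phi (columns = modes),
    and the eigenvalues of the covariance matrix."""
    x_mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
    eigvals = s ** 2 / (len(X) - 1)  # eigenvalues of the covariance S
    return x_mean, Vt.T, eigvals

def instance(x_mean, Phi, eigvals, mode, sd=3.0):
    """Shape obtained by moving `sd` standard deviations along one mode,
    all other shape parameters held at zero."""
    b = np.zeros(len(eigvals))
    b[mode] = sd * np.sqrt(eigvals[mode])
    return x_mean + Phi @ b
```

Sweeping `sd` from −3 to +3 for `mode` 0, 1, 2 reproduces the kind of panels shown in Figs. 4 and 5.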
Fig. 5. Shape instances generated using the 3D model built from 20 data sets of the caudate nucleus: the first three modes of variation, each shown at −3√λi, the mean, and +3√λi. The instances are generated by varying a single shape parameter, fixing all others at zero standard deviations from the mean shape. Each instance of the model consists of 1000 nodes.

3.4 Reconstruction Error
Figure 6 illustrates the relative shape variance explained with an increasing number of modes. Similar curves for different decimation ratios (number of model triangles) are provided. These curves are only marginally dependent on this factor. From ten modes onwards, the model captures more than 90% of the shape variance. Note the steeper slope of the curves corresponding to the caudate nucleus. Over the training set there is apparently less variability in the shape of the caudate nucleus than in the shape of the radius. As a consequence, with fewer modes a larger amount of shape variation can be explained. In order to assess the ability of these models to recover shapes not used in the training set we carried out the following experiment. Reconstruction errors were computed by reconstructing the landmarks of one shape of the training set with the ASM built from the remaining shapes (leave-one-out experiment).
Fig. 6. Percentage of total shape variance versus the number of modes used in the 3D ASM, for various decimation ratios, for (a) the radius and (b) the caudate nucleus. The number of landmarks before decimation was 15519 for the radius, and 2320 for the caudate nucleus. The decimation ratio represents the ratio between the nodes eliminated from the triangulation of the atlas and its initial number. Note that the number of modes is at most the number of sample shapes minus one.
The errors reported in Figure 7 are the average of the reconstruction errors over all shapes, leaving out one in turn. The same experiment was repeated for different decimation ratios and an increasing number of modes of shape variation taken into the reconstruction. The reconstruction errors were computed in millimeters. For the caudate nucleus, the reconstruction error is below the voxel dimensions (10 modes). In the case of the radius, the reconstruction error is slightly larger than the slice thickness. One possible explanation for this higher error could be the fact that no image resampling was used during registration. On the other hand, in comparison to the caudate nucleus, the radius represents a more complex structure with larger shape variability in the training set, which could explain the poorer reconstruction performance in the leave-one-out experiments of the radius. The plots of Figure 7 also indicate that the reconstruction error is only slightly dependent on the decimation ratio and, as expected, inversely proportional to the number of modes of variation.
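The leave-one-out protocol itself is compact; a sketch under the same assumed landmark-vector representation (function names are ours, and the error here is a plain vector norm rather than a per-landmark surface distance):

```python
import numpy as np

def reconstruct(x, x_mean, Phi, t):
    """Project a shape vector onto the first t modes and back."""
    b = Phi[:, :t].T @ (x - x_mean)
    return x_mean + Phi[:, :t] @ b

def loo_error(X, t):
    """Mean reconstruction error when each shape is reconstructed with a
    model built from the remaining shapes (leave-one-out)."""
    errs = []
    for i in range(len(X)):
        rest = np.delete(X, i, axis=0)
        x_mean = rest.mean(axis=0)
        _, s, Vt = np.linalg.svd(rest - x_mean, full_matrices=False)
        Phi = Vt.T  # columns are the modes of the reduced model
        errs.append(np.linalg.norm(reconstruct(X[i], x_mean, Phi, t) - X[i]))
    return np.mean(errs)
```

Sweeping `t` over the number of modes reproduces the kind of error-versus-modes curves reported in Figure 7.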
4 Discussion and Conclusion
This paper presents a method for the automatic construction of 3D active shape models. The technique is based on the automatic extraction of a dense mesh of landmarks in an atlas constructed from the training shapes; these landmarks are then propagated through an elastic deformation field to each shape of the training set. The method handles both single- and multiple-part shapes. The first part of the proposed technique involves building an atlas from a set of example shapes. In Section 3 we showed experimental results indicating that this procedure is convergent. Moreover, different initial shapes seem to contribute only marginally to the final atlas. That is, the final atlases are similar
[Fig. 7 plots, Automatic 3D ASM Construction: leave-one-out reconstruction error (mm) vs. number of modes for (a) the radius and (b) the caudate nucleus, one curve per decimation ratio (0.25, 0.50, 0.75, 0.90, 0.95).]
Fig. 7. Reconstruction error in the leave-one-out experiments. The number of landmarks before decimation was 15519 for the radius, and 2320 for the caudate nucleus. The decimation ratio represents the ratio between the nodes eliminated from the triangulation of the atlas and its initial number.
up to a quasi-affine transformation. However, we note that the influence of the initial shape depends on the class of shapes being modeled and has to be assessed on a case-by-case basis. In the work by Fleute and Lavallée [7,8] a similar algorithm was used to build the average model (atlas). However, no experimental evidence was reported with respect to the convergence of the atlas construction algorithm. An alternative to our iterative method of atlas construction is the tree-based approach presented by Brett and Taylor [4]. This hierarchical strategy is attractive since it gives a unique (non-iterative) way to build an atlas from a given set of examples. However, one problem of Brett's method is that the training shapes have to be ranked according to a pairwise match quality. This requires that all possible pairs be matched and scored before the tree is built. Brett presented results with only eight shapes [4], but ordering the examples according to the matching quality would be cumbersome for a realistic number of training shapes. For a total of n shapes it is necessary to compute N = (n − 1)^2 ≈ O(n^2) pairwise matches to build the average shape. Our approach obtains the average shape in N = nI ≈ O(n) matches, where I is the total number of iterations required for convergence. Section 3 shows experimental evidence that after about five iterations the atlas shape stabilizes. Our method for building the mean shape model is based on averaging shapes in the domain of their distance transforms. A similar strategy was proposed by Leventon et al. [10] to incorporate statistical constraints into the level-set approach to image segmentation. However, in that work, PCA is applied in the distance transform domain and not on a surface representation. As a consequence, the number of degrees of freedom is considerably larger than in our method. There is an intrinsic limitation in both our method and that of Leventon et al.
Averaging distance transforms of several shapes does not necessarily yield a valid mean shape representation. It is easy to show, for instance, that
in the case of a large misalignment between the averaged shapes, this procedure can introduce topological changes. Although we did not observe this problem in our experiments, it is a potential source of failure of the technique when building models of complex structures. The proposed technique could be used with any elastic registration algorithm. In this sense, the method is a generic framework open to future research. Currently, the volumetric elastic registration of Rueckert et al. [12] is used to match binary images. The use of elastic registration as a method to establish shape correspondences imposes a constraint on the type of shapes that can be handled. It is assumed that the class of shapes has a well-defined topology. If there are sub-structures in one image not represented in the other image to be matched, the transformation would have to destroy those parts. This situation could arise when building a model of normal and abnormal medical structures where some parts in the latter are missing because of a diseased state or surgery. However, establishing correspondences in these mixed models also remains an ill-defined problem for any of the previously published approaches [7,8,3]. Results of the construction of models of two anatomical structures have been presented. Experiments were carried out to establish the ability of the models to generalize to shapes not present in the training set. The average reconstruction error was below 2.65 mm (radius) and 0.95 mm (caudate nucleus) when the number of modes used was sufficient to explain 90% of the shape variability. These errors are on the order of, and slightly smaller than, the voxel dimensions, respectively. In our experiments we have not observed problems of wrong correspondences leading to flipping of triangles and surface folding. This is an important improvement compared to the initial method of Brett and Taylor [4]. Also, our method is less restrictive in terms of the shapes that can be modeled.
This is an important feature with respect to the improved method of Brett and Taylor [3], which is based on harmonic maps and therefore limited to shapes that are isomorphic to a disc. Finally, it would be interesting to compare the models built with different methods. In order to carry out a quantitative comparison it is necessary to define a measure of model quality, and the definition of such a measure is in itself an interesting issue. Obviously, different methods will yield different sets of landmarks, which precludes a landmark-based comparison. If one defines a given segmentation task, a comparison could be established on the basis of segmentation accuracy. Although such measures have clear practical value for determining the best model-building technique for a given problem, the conclusions will remain task-dependent. More task-independent criteria, related to the compactness and generalizability of the built models, are interesting candidate measures to explore.
Acknowledgements
This research was sponsored by the Dutch Ministry of Economic Affairs (IOP Beeldverwerking IBV97009) and EasyVision Advanced Development, Philips
Medical Systems BV, Best, The Netherlands. Dr. Maarten Hoogbergen provided us with the radius segmentations, and the Department of Psychiatry of the University Medical Center Utrecht with the caudate nucleus segmentations.
References
1. D.G. Altman. Practical Statistics for Medical Research. Chapman & Hall, 1991.
2. P.J. Besl and N.D. McKay. A method for registration of 3D shapes. IEEE Trans Pattern Anal Machine Intell, 14(2):239–55, February 1992.
3. A.D. Brett and C.J. Taylor. Automated construction of 3D shape models using harmonic maps. In S. Arridge and A. Todd-Pokropek, editors, Medical Image Understanding and Analysis, pages 175–78, London, July 2000.
4. A.D. Brett and C.J. Taylor. A method of automated landmark generation for automated 3D PDM construction. Imag Vis Comp, 18(9):739–48, 2000.
5. T.F. Cootes, C.J. Taylor, D.H. Cooper, and J. Graham. Active Shape Models - their training and application. Comp Vis Image Underst, 61(1):38–59, 1995.
6. P.E. Danielsson. Euclidean distance mapping. Comp Graph Imag Proces, 14:227–48, 1980.
7. M. Fleute and S. Lavallée. Building a complete surface model from sparse data using statistical shape models: applications to computer assisted knee surgery. In W.M. Wells, A. Colchester, and S. Delp, editors, Medical Image Computing & Computer-Assisted Intervention, volume 1496 of Lect Notes Comp Science, pages 879–87, Boston, USA, September 1998. Springer Verlag.
8. M. Fleute and S. Lavallée. Incorporating a statistically based shape model into a system for computer-assisted anterior cruciate ligament surgery. Med Image Anal, 3(3):209–22, 1999.
9. A. Kelemen, G. Székely, and G. Gerig. Elastic model-based segmentation of 3-D neuroradiological data sets. IEEE Trans Med Imaging, 18(10):828–39, October 1999.
10. M. Leventon, W.E.L. Grimson, and O. Faugeras. Statistical shape influence in geodesic active contours. In Comp Vis Patt Recogn, volume 1, pages 316–23, South Carolina, USA, June 2000. IEEE Computer Society.
11. W.E. Lorensen and H.E. Cline. Marching cubes: a high resolution 3D surface construction algorithm. Computer Graphics: SIGGRAPH'87 Conference Proceedings, 21:163–69, July 1987.
12. D. Rueckert, L.I. Sonoda, C. Hayes, D.L.G. Hill, M.O. Leach, and D.J. Hawkes. Non-rigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging, 18(8):712–21, August 1999.
13. W.J. Schroeder, J.A. Zarge, and W.E. Lorensen. Decimation of triangle meshes. Comp Graphics, 26(2):65–70, 1992.
14. C. Studholme, D.L.G. Hill, and D.J. Hawkes. Automated 3D registration of MR and PET brain images by multiresolution optimization of voxel similarity measures. Med Phys, 24(1):25–35, 1997.
15. C. Studholme, D.L.G. Hill, and D.J. Hawkes. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recogn, 32(1):71–86, 1998.
16. Y. Wang, B.S. Peterson, and L.W. Staib. Shape-based 3D surface correspondence using geodesics and local geometry. In Comp Vis Patt Recogn, volume 2, pages 644–51, South Carolina, USA, June 2000. IEEE Computer Society.
A Regularization Scheme for Diffusion Tensor Magnetic Resonance Images Olivier Coulon, Daniel C. Alexander, and Simon R. Arridge Department of Computer Science, University College London, Gower Street, London WC1E 6BT, United Kingdom
[email protected]
Abstract. A method for regularizing diffusion tensor magnetic resonance images (DT-MRI) is presented. The scheme is divided into two main parts: a restoration of the principal diffusion direction, and a regularization of the 3 eigenvalue maps. The former makes use of recent variational methods for restoring direction maps, while the latter makes use of the strong structural information embedded in the diffusion tensor image to drive a non-linear anisotropic diffusion process. The whole process is illustrated on synthetic and real data, and possible improvements are discussed.
1 Introduction
Diffusion tensor magnetic resonance imaging (DT-MRI) is an image acquisition technique based on water diffusion characteristics that allows the in vivo investigation of physiological and structural information of tissues [3]. Applications of DT-MRI cover research activities such as white matter fiber tracking and brain connectivity studies, as well as clinical diagnosis of disruptions caused by multiple sclerosis or stroke. The measurement acquired at each voxel is a diffusion tensor (DT), D, represented by a symmetric positive definite matrix, that quantifies the amount of diffusion in every direction. This tensor can be expressed by a set of six coefficients, and is often decomposed into its eigensystem, consisting of three eigenvalues and three associated eigenvectors. As with many other MRI techniques, the level of image noise depends on the chosen voxel discretization and the acquisition time. A post-processing technique to reduce noise would relax the scanning time versus voxel size trade-off as well as improve analysis methods. However, DT-MRI is a fairly new technique and its nature requires new image processing methods (see for instance [1]). In particular, few regularization methods have been presented in the literature so far. In this paper we propose a regularization scheme for DT-MR images. The method uses variational and PDE-based techniques and relies on the separation of two types of information contained in the tensor: the principal diffusion direction (PDD), defined by the eigenvector associated with the largest eigenvalue, and the amount of diffusion along each eigenvector, defined by the eigenvalues.
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 92–105, 2001. © Springer-Verlag Berlin Heidelberg 2001
The PDD field is regularized using an adaptation of recent methods for restoring direction maps [16,7,20], and the 3 eigenvalue maps are regularized subsequently using a non-linear anisotropic diffusion process. In the next section we introduce necessary concepts related to DT-MR images. The two following sections then describe the regularization methods for the PDD field and the eigenvalue maps. Experiments and results are then presented.
2 Background
In the following, D = (D_ij) will stand for the diffusion tensor matrix, (λ_i)_{i=1,2,3} for its eigenvalues with λ_1 ≥ λ_2 ≥ λ_3, and (v_i)_{i=1,2,3} for the associated eigenvectors. Several scalar measurements can be derived from DT-MR images to describe properties of the underlying tissues (see for instance [4]). Here, we will make use of the fractional anisotropy, FA, a measure of the anisotropy of the tensor, which varies between 0 and 1 and is defined as follows:

FA = √[ ((λ_1 − λ_2)² + (λ_1 − λ_3)² + (λ_2 − λ_3)²) / (2(λ_1² + λ_2² + λ_3²)) ]    (1)

The multi-dimensional nature of the information contained in each voxel makes these images difficult to visualize. Various techniques exist for visualisation of tensor volumes. In this paper, we represent each tensor as an ellipsoid whose axes are aligned with the (v_i) and scaled by the eigenvalues (see for example figure 1). Noise in DT-MR images comes from various sources. Partial volume effects caused by large voxel sizes result in a local averaging of the estimated tensor. This can lead to poor estimation of directions and values, and often creates oblate tensors (λ_1 ≈ λ_2) where white matter fibers of different orientation cross inside one voxel. Another source of noise is the motion of the ventricles, related to blood flow, that may blur the tensor estimation. There are also problems of "sorting bias" [5]: in the presence of noise the ranking of eigenvalues may not be regionally consistent. This results in an overestimate of the local anisotropy, and one can observe "switches" between principal directions of diffusion, leading to corrupted eigenvector fields. One method to address that problem is proposed in [5].
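The fractional anisotropy of eq. (1) can be computed directly from the tensor's eigenvalues; a minimal sketch (the helper below is our own illustration, not part of the paper):

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy (eq. 1) of a 3x3 symmetric diffusion tensor D."""
    lam = np.linalg.eigvalsh(D)              # eigenvalues in ascending order
    l1, l2, l3 = lam[2], lam[1], lam[0]      # relabel so that l1 >= l2 >= l3
    num = (l1 - l2)**2 + (l1 - l3)**2 + (l2 - l3)**2
    den = 2.0 * (l1**2 + l2**2 + l3**2)
    return float(np.sqrt(num / den)) if den > 0 else 0.0
```

An isotropic tensor gives FA = 0, while a strongly prolate tensor approaches 1.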
Even though it aims at regularizing the whole tensor information, this method does not take into account the strong structural information carried by the eigenvectors, and brings intrinsic edge-estimation problems. Poupon et al. [17] have proposed a PDD field regularization method, which we describe later in section 3. It is difficult to process the magnitude and direction information simultaneously. For instance, figure 1 shows that an interpolation of two anisotropic tensors based on linear interpolation of the coefficients results in a tensor with correctly
interpolated PDD but lower anisotropy, which is an undesirable effect. A better approach would be to transform directions and eigenvalues separately. Moreover, we do not know any scalar measure that separates tissues equally on the basis of both pieces of information (e.g. one that distinguishes between anisotropic and isotropic regions as well as between anisotropic differently oriented regions). Therefore, we chose to process directions and eigenvalues separately. This is a similar approach to color image restoration methods that separate chromaticity and brightness (see for instance [19]), which led to recent research in direction map restoration presented in the next section. An important advantage of our approach is the fact that the DT eigenvalues can be regularized using directional information from the previously restored PDD field.
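The anisotropy loss of figure 1 is easy to reproduce numerically. The snippet below (our own illustration; the 60-degree angle and the eigenvalues are arbitrary choices) averages the coefficients of two equally anisotropic tensors with different orientations:

```python
import numpy as np

def fa(D):
    """Fractional anisotropy (eq. 1) from the eigenvalues of D."""
    l1, l2, l3 = sorted(np.linalg.eigvalsh(D), reverse=True)
    num = (l1 - l2)**2 + (l1 - l3)**2 + (l2 - l3)**2
    return np.sqrt(num / (2.0 * (l1**2 + l2**2 + l3**2)))

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

lam = np.diag([1.0, 0.1, 0.1])   # prolate tensor, PDD along x
A = lam
R = rot_z(np.pi / 3)
B = R @ lam @ R.T                # same eigenvalues, PDD rotated by 60 degrees
C = 0.5 * (A + B)                # coefficient-wise interpolation
# C's PDD lies halfway between those of A and B, but its FA is markedly lower.
```

This reproduces the undesirable effect discussed above: the interpolated tensor has the correct direction but a reduced anisotropy, which is the motivation for transforming directions and eigenvalues separately.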
Fig. 1. Tensor interpolation. A coefficient-based interpolation of (a) and (b) results in tensor (c) with lower anisotropy. Interpolation: C_ij = (A_ij + B_ij)/2.
The next two sections describe our regularization techniques for both the PDD field and the eigenvalue maps.
3 Restoration of the Principal Diffusion Direction
To our knowledge, the only method presented so far for restoring the first eigenvector field can be found in [17]. In this paper, Poupon et al. propose a Markovian model associated with an energy function whose minimum corresponds to the regularized field. The model relies on a trade-off between the two following assumptions:
– white matter tracts have a low curvature,
– diffusion should be maximum in the direction of the first eigenvector.
We adopt the same assumptions and propose a very simple iterative restoration scheme, inspired by the most recent advances in direction map restoration [16,7,20]. Direction maps are a particular case of vector fields in which each vector has a Euclidean norm equal to 1. Directions live, in the 3D case, on the unit sphere S^2. Perona first proposed a PDE-based model for the 2D case [16], and was followed by Chan and Shen [7] and Tang et al. [20], who devised more general models.
3.1 Direction Regularization: The Original Model
Chan and Shen [7] presented variational models to restore features on non-flat manifolds, together with algorithms to implement them in the discrete case. Our problem fits into this framework, since the unit sphere S^2 is a simple example of a non-flat manifold. Our method starts from the models in [7], which we briefly summarize here. Let f : Ω → M be a feature distribution on an m-dimensional Riemannian manifold M, where Ω ⊂ IR^n is the image domain. Chan and Shen define the fitted total variation (TV) energy to be minimized:

ε^{TV}(f, λ) = ∫_Ω e(f, p) dp + (λ/2) ∫_Ω d²(f^{(0)}, f) dp,    (2)

where e(f, p) is a strength function at pixel p, d is the metric on M induced by its Riemannian structure, and f^{(0)} is the original feature distribution. The variational problem is solved by studying the associated Euler-Lagrange equations, leading to a diffusion equation. A discrete model is then derived from this continuous approach. If α is a pixel on the discrete domain Ω_n, let N_α be a neighborhood of α. If one defines a locally Riemannian distance d_l (a distance that locally tends to the metric d), the strength function is defined as follows:

e(f, α) = [ Σ_{β∈N_α} d_l²(f_α, f_β) ]^{1/2}.    (3)

The fitted TV energy then becomes:

ε^{TV}(f, λ) = Σ_{α∈Ω_n} e(f, α) + (λ/2) Σ_{α∈Ω_n} d_l²(f_α^{(0)}, f_α).    (4)

In the case of directions, i.e. M = S^2, if one chooses d_l to be the embedded Euclidean distance (the Euclidean distance in IR^3, which locally tends to the geodesic distance on S^2), the corresponding diffusion equation is:

df_α/dt = F_α^{TV}(f) = Π_{f_α}( Σ_{β∈N_α} w_{αβ} f_β + λ f_α^{(0)} ),    (5)

where Π_{f_α} is the orthogonal projection on the plane tangent to M at f_α, and w_{αβ} is a weight defined by:

w_{αβ}(f) = 1/e(f, α) + 1/e(f, β).    (6)

The following iterative scheme then minimizes ε^{TV}:

a) f̃_α^n = f_α^{n−1} + Δt F_α^{TV}(f^{n−1}),    b) f_α^n = f̃_α^n / ‖f̃_α^n‖.    (7)
This model has the great advantage that it does not require the computation of spatial derivatives, and it converges strictly to minimize ε^{TV}(f, λ). The flow F_α^{TV} is composed of two terms. The first term is a projection in the tangent plane of a weighted sum of the neighboring features at the considered pixel. The strength function can be interpreted as a local measure of smoothness of the direction map: the smaller it is, the smoother the map is, and the higher the influence of the voxel is. This weighting recalls more classical anisotropic diffusion schemes, a high strength function indicating the presence of an "edge" and inducing slow feature diffusion. The second term is a data-driven term that constrains the regularized map to be close to the original one.

3.2 Modifications of the Model
Although it is non-linear and preserves singularities, the model presented above requires some modifications in order to restore our direction field properly. We therefore modified the scheme implemented by equation 7 to fit the particular requirements of PDD maps. Although those changes loosen the strict variational theoretical framework, they are easily interpreted in terms of behavior, and prove to be efficient in practice. Chan and Shen's TV model restores orientations, living on S^2, whereas the PDD field only carries axial information, so that v_1 ≡ −v_1. To cope with this difference, we must map features to the same hemisphere before computing distances. At each iteration of equation 7, when we compute F_α^{TV}, if a vector f_β ∈ N_α does not belong to the hemisphere defined by f_α (i.e. if f_β · f_α < 0) it needs to be "flipped". Therefore we propose the new definition:

F_α^{TV}(f) = Π_{f_α}( Σ_{β∈N_α} w_{αβ} f_β^α + λ f_α^{(0)} ),    (8)

with:

f_β^α = −f_β if f_β · f_α < 0;  f_β otherwise.    (9)
Another limitation of the original model lies in the definition of the weights w_{αβ}. We have found empirically that neighboring features often influence each other even though they belong to two different fiber bundles (figure 4-(b)). We must reduce the diffusion flow between two neighboring features f_α and f_β in two situations:
1. α belongs to a tract (anisotropic medium) and β belongs to an isotropic medium. In that situation the two corresponding tensors have very different anisotropy values.
2. α and β belong to separate anisotropic tracts with different orientations.
The first point above suggests the choice of a weighting function that takes into account the anisotropy. The second point is partly taken into account by
the original model, although the control of the diffusion flow is not strict enough. Therefore we propose to replace equation 6 with the following weighting function:

w_{αβ}(f) = [(a_α + a_β)/2] (f_α · f_β)^{2m},    (10)
where a_α = FA(α) is the fractional anisotropy at node α, as defined in equation 1, and m is a control parameter. The influence of neighbors is then weighted by their anisotropy, so that data from an isotropic medium have negligible influence on data from white matter fiber tracts, and the diffusion flow decreases with differences in direction. The higher the value of m, the more severe this "directional tuning" is. This model gives excellent results, as shown experimentally in section 5. A quick comparison with the method of Poupon et al. [17] leads us to the following observations. Both methods rely on the same assumptions, i.e. fiber tracts have a low curvature and the regularised field must be close enough to the original data. We do not claim that our model gives better results than that in [17] and, in fact, performance is probably similar. However, our model has the following advantages: it is certainly faster, it has a simpler implementation, and most of all it does not have to face the problem of discretisation of the directions, since they are defined and transformed in a continuous fashion on S^2; in [17] the unit sphere is discretised into 162 directions.
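A direct, unoptimised sketch of the modified scheme is given below, combining the iteration of eq. 7, the hemisphere flip of eqs. 8-9 and the weights of eq. 10 on a 2D grid of 3D unit vectors. The 4-neighborhood, the function name, and the treatment of the data term (assumed to lie in the correct hemisphere already) are our own simplifications:

```python
import numpy as np

def restore_pdd(f0, anis, lam=0.5, m=2, dt=0.1, n_iter=100):
    """Modified TV restoration of an axial direction field (sketch).

    f0:   (H, W, 3) initial field of unit vectors (PDD map)
    anis: (H, W) fractional anisotropy a_alpha at each node
    """
    f = f0.copy()
    H, W, _ = f.shape
    for _ in range(n_iter):
        g = f.copy()
        for y in range(H):
            for x in range(W):
                fa_v = f[y, x]
                flow = lam * f0[y, x]                 # data-attachment term
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        fb = f[yy, xx]
                        if np.dot(fb, fa_v) < 0:      # axial data: flip to the
                            fb = -fb                  # hemisphere of f_alpha (eq. 9)
                        # anisotropy- and direction-tuned weight (eq. 10)
                        w = 0.5 * (anis[y, x] + anis[yy, xx]) * np.dot(fa_v, fb)**(2 * m)
                        flow = flow + w * fb
                # project onto the tangent plane of S2 at f_alpha, step, renormalise
                flow = flow - np.dot(flow, fa_v) * fa_v
                v = fa_v + dt * flow
                g[y, x] = v / np.linalg.norm(v)
        f = g
    return f
```

On a noisy field the scheme smooths directions while the λ term keeps them close to the data; setting anis ≈ 0 in isotropic voxels makes their influence on their neighbours negligible.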
3.3 Re-projection of the Other Eigenvectors
Once the first eigenvector has been restored, and before re-constructing the whole tensor, the second and third eigenvectors must be reoriented. We use an approach similar to the preservation of principal directions algorithm presented in [1], which computes the reorientation of diffusion tensors after a non-rigid transformation has been applied to the whole image. The second eigenvector is projected on the plane orthogonal to the regularized first eigenvector in order to compute the new eigensystem:
– let (v_1, v_2, v_3) be the original set of eigenvectors, and v_1^r the regularized first eigenvector;
– define v_2^r = v_2 − (v_2 · v_1^r) v_1^r;
– define v_3^r = v_1^r × v_2^r.
The new tensor is then constructed using the new set of eigenvectors and the original eigenvalues:

D^r = (v_1^r v_2^r v_3^r) diag(λ_1, λ_2, λ_3) (v_1^r v_2^r v_3^r)^T    (11)
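The re-projection of section 3.3 can be sketched as follows; we additionally re-normalise v_2^r so that the eigenvector matrix stays orthonormal, a step the text leaves implicit (the function name is ours):

```python
import numpy as np

def reproject_tensor(v1r, v2, eigenvalues):
    """Rebuild the tensor after PDD restoration (eq. 11).

    v1r: restored (unit) first eigenvector; v2: original second eigenvector;
    eigenvalues: (l1, l2, l3) original eigenvalues.
    """
    v2r = v2 - np.dot(v2, v1r) * v1r        # project v2 orthogonally to v1r
    v2r = v2r / np.linalg.norm(v2r)         # re-normalise (our addition)
    v3r = np.cross(v1r, v2r)                # completes a right-handed basis
    E = np.column_stack([v1r, v2r, v3r])
    return E @ np.diag(eigenvalues) @ E.T   # D^r = E diag(l1, l2, l3) E^T
```

By construction the result is symmetric, has the original eigenvalues, and has v_1^r as the eigenvector of the largest eigenvalue.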
4 Regularization of the Eigenvalues
The eigenvalues describe the amount of water diffusion along the principal axes of the DT. They too are subject to noise, and reducing this noise would help the discrimination of tissues and the quantitative analysis of DT images. Consistency of eigenvalues should be preserved along fiber tracts, and we propose here a method that regularizes them in an anisotropic fashion using the previously restored PDD map. Various non-linear anisotropic smoothing methods have been proposed in the literature (see [10,21] for reviews). Most of them use a formulation related to the local image structure. Some are PDE-based, using either a diffusion tensor [9,23] or an explicit flow formulation [11]; others use a smoothing kernel whose shape adapts to the local image structure [2,12]. The diffusion approach, as studied by Weickert [22], is particularly interesting because of its general character and its scale-space, as well as pure regularization, point of view. We focus here on the coherence-enhancing scheme proposed in [23]. General non-linear anisotropic diffusion filtering is defined as follows. Given the original image I_0(x), a one-parameter family of images I(x, t) is built as the solution of the diffusion equation

∂I/∂t = div(M ∇I),  I(x, 0) = I_0(x),    (12)

where M is a flow tensor (usually called a diffusion tensor, but we avoid this denomination to prevent confusion with the diffusion tensor in the images) and is a function of the local image structure. In the continuous case, if M is symmetric positive definite and smooth enough, existence and uniqueness of the solution have been proven, as well as scale-space and regularization properties [22]. At every voxel, let us define the flow tensor with the following matrix:

M = E diag(µ_1, µ_2, µ_3) E^T,    (13)

where E = [v_1^r v_2^r v_3^r] is the eigenvector matrix of D^r. With this formulation, one can explicitly define the diffusion flow along the three eigenvectors by means of the (µ_i)_{i=1,2,3}. In the case of DT-MR images, structural information is mostly carried by the first eigenvector v_1^r. In the white matter, the PDD generally points along the fiber bundle, and along that direction we can expect to find other points in the same fiber. In isotropic tissues, where FA has a low value, v_1^r does not have any particular meaning and there should not be any diffusion flow along that direction. Therefore, in a similar way to coherence-enhancing diffusion [23], we define the following set of eigenvalues:

µ_1 = 0 if FA_σ² = 0;  µ_1 = α + (1 − α) exp(C²(1 − 1/FA_σ²)) otherwise;  and µ_2 = µ_3 = α,    (14)
where α is a small parameter that guarantees a minimum diffusion and keeps M positive definite, and FA_σ = FA ⊗ G_σ is the fractional anisotropy (eq. 1) smoothed with a Gaussian function. C is a parameter that controls the acceptable magnitude of anisotropy. Smoothing the fractional anisotropy ensures the smoothness of M and helps the stability of the scheme. The scheme defined by equations 12-14 has the following behavior, illustrated in figure 2:
– where FA_σ ≪ C, i.e. in tissues with low anisotropy, we have µ_1 ≈ α and µ_2 = µ_3 = α: there is almost no diffusion flow;
– where FA_σ ≫ C, i.e. in white matter, we have µ_1 ≈ 1 and µ_2 = µ_3 = α: there is a diffusion flow along v_1^r.
Fig. 2. Function µ_1(FA_σ) for different values of C.
Instead of defining a semi-local measure of coherence using image derivatives [23], fractional anisotropy provides a natural characterisation of the underlying tissues. Moreover, values of anisotropy are characteristic of tissues and are expected to be stable across images. Therefore we could in the future expect to define a value of C based on a quantitative evaluation of fractional anisotropy for a population of subjects. Figure 3 shows an example of a µ_1 map together with a simple anisotropy image and its smoothed version. If one defines I_i = ∇I · v_i^r, the diffusion flow can be explicitly written as:

J = ([α + (1 − α) exp(C²(1 − 1/FA_σ²))] I_1) v_1^r + (α I_2) v_2^r + (α I_3) v_3^r    (15)

with the following diffusion equation:

∂I/∂t = div(J).    (16)
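The flow tensor of eqs. 13-14 can be sketched as below. This is our reading of the reconstructed eq. 14 (function names are ours), with the parameter values of section 5 (α = 0.001, C = 0.8) as defaults:

```python
import numpy as np

def mu1(fa_s, alpha=0.001, C=0.8):
    """Diffusion strength along v1^r as a function of smoothed FA (eq. 14)."""
    if fa_s == 0:
        return 0.0
    return float(alpha + (1.0 - alpha) * np.exp(C**2 * (1.0 - 1.0 / fa_s**2)))

def flow_tensor(E, fa_s, alpha=0.001, C=0.8):
    """Flow tensor M = E diag(mu1, alpha, alpha) E^T (eq. 13),
    with E = [v1r v2r v3r] the restored eigenvector matrix."""
    return E @ np.diag([mu1(fa_s, alpha, C), alpha, alpha]) @ E.T
```

mu1 stays near α for low anisotropy (almost no flow) and approaches 1 in strongly anisotropic voxels, reproducing the behaviour shown in figure 2.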
Fig. 3. Fractional anisotropy map FA, its smoothed version FA_σ with σ = 0.8, and the corresponding µ_1 map with C = 0.8.
The diffusion process described by equations 15 and 16 is applied simultaneously on the 3 eigenvalue maps. The discretization of the diffusion equation is performed using a rotation-invariant optimised scheme presented in [25]. We observe that, because we use the regularized PDD field, the flow direction does not need to be smoothed as is done in [24]. Because the diffusion process can only be driven along v_1^r, there is no flow from white matter to an isotropic tissue, except where a tract terminates; this particular problem will be addressed in further work. For the same reason, when two different bundles are neighbors, there is no intensity flow "mixing" them. Note that, as for the PDD field, regularization is essentially performed in white matter, where anisotropy is high enough. Further work will aim at defining a scheme that regularizes grey matter and CSF isotropically while still behaving anisotropically in white matter.
5 Experiments
The process was run on synthetic and real data. In the following, the PDD regularization was performed until time t = 10, with λ = 0.5 and m = 2. The eigenvalue map regularization was performed until scale t = 100 with σ = 0.8, C = 0.8, and α = 0.001. Firstly, a synthetic dataset was created and is shown in figure 4. It is composed of two orthogonal straight bundles to which some uniform noise has been added. Outside the bundles, directions are chosen randomly, and anisotropy is set to 0.05. The original direction regularisation TV model was used, as well as our modified PDD regularisation model. Figures 4-(b) and 4-(c) show that both models regularize the directions inside each bundle properly and are barely influenced by the isotropic medium. At the interface, however, it is clear that with the original TV model the two bundles influence each other, while our model overcomes this limitation.
Fig. 4. (a) A noisy synthetic direction map; (b) the same, regularised with the TV model at t = 10 with λ = 0.5 with a close-up of the interface between the two bundles; (c) regularised with our modified model at t = 10 with λ = 0.5 and m = 2 and a close-up of the interface. Directions are scaled with anisotropy.
Secondly, the whole process was run on echo-planar DT-MR brain data acquired with cardiac gating and a 96×96 matrix, reconstructed to 128×128, with 1.875×1.875×3.0 mm^3 voxels and 42 axial slices. Figure 5 shows some results on this dataset. One can see in figure 5-(d,e) a representation of the tensor ellipsoids on a small part of the extremity of the splenium of the corpus callosum. Six tracts were built from six points, using "hyperstreamlines" from the Visualization Toolkit library [18]. Those hyperstreamlines are generated using the PDD field, and their cross section is determined using the values of λ_2 and λ_3. It is clear that the tracking is improved by the PDD regularization. The tracts appear to have a smoother curvature and their relative trajectories are more consistent. At the top-left corner of the image, the three tracts are no longer diverging, and at the junction of the two bundles, tracts keep a smoother and more consistent trajectory. As shown in figure 5-(b,c), it is more difficult to assess the regularity of the eigenvalue maps, mainly because the tracts are not clearly visible in the scalar images.
Fig. 5. Fractional anisotropy map (a), λ1 map at the same slice before (b) and after regularisation (c), tracts and tensors along the extremity of the splenium of the corpus callosum before (d) and after regularization (e).
Figures 6-(a,b,c) show a close-up of a tract from the same part of the brain in the original data, after PDD regularization, and after eigenvalue regularization. The effect of the eigenvalue regularization is more evident here: figure 6-(c) shows that the hyperstreamline has a more regular cross section, indicating that λ2 and λ3, as well as the anisotropy, vary more smoothly along the tract. This shows that the model has the expected effect: eigenvalues are regularized consistently along tracts. Some problems remain with the eigenvalue regularization scheme. For example, strong partial volume effects induce a regularization bias and a loss of anisotropy. For instance, on the upper part of the corpus callosum, the local shape of the tracts creates an important partial volume effect that reduces anisotropy at the inter-hemispheric plane. This low anisotropy is propagated by
the diffusion process and eigenvalues are locally corrupted. This issue is mainly related to the quality of the data and the original resolution. The choice of the parameter C also influences the quality of the diffusion process and appears to be problematic in areas where the relationship between the anisotropy value and the nature of the underlying tissue is unclear (for instance, white matter with relatively low anisotropy). This particular problem should be solved by controlling the flow using not only the anisotropy but also the variation of the eigenvalues (λi)i=1,2,3 along the principal directions. Another problem is the diffusion flow “going out” of the tract ends; this is related to the use of a contour map to identify the location of tract terminations and will be addressed in further work. The main remaining issue is that the regularization process is effective only in highly anisotropic media. Smoothing in low-anisotropy tissues would further improve tissue discrimination. Moreover, noise in fractional anisotropy maps is higher where tissues have low true anisotropy (grey matter) [13,15], so there is considerable need for eigenvalue regularization in those areas.

Fig. 6. (a) One tract in the original data, (b) after PDD regularization, and (c) after eigenvalue regularization.
6 Conclusion
We have presented a regularization scheme for DT-MR images that includes a restoration of the PDD field followed by non-linear anisotropic coherence-enhancing diffusion applied to the eigenvalue maps. The PDD restoration proves successful and will find direct application in white matter fibre tractography; indeed, almost all tractography methods use the PDD information [8,14,17,6]. Even though tractography results are difficult to assess, they provide a good environment in which to validate the PDD field restoration. Techniques for proper validation of our methods within this application are the focus of current work. Regularization of the eigenvalue maps needs improvement and is also under investigation. A new definition of the µi functions (eq. 14) should allow a continuous change of behaviour of the flow from strong
anisotropic coherence-enhancing diffusion in white matter to isotropic smoothing in grey matter and CSF. Because the DT provides a natural description of the local tissue structure, it seems reasonable to use it to determine the type of behaviour the flow should have. We plan to look further into using the DT to determine directly what the flow tensor at each point should be within a non-linear anisotropic diffusion process.
Acknowledgements OC is funded by the Wellcome Trust. Images were kindly provided by Geoff Parker, Imaging Science and Biomedical Engineering, University of Manchester, and the NMR research unit of the Institute of Neurology, London. All 3D renderings were done using the Visualisation Toolkit (VTK - http://www.kitware.com).
References

1. Alexander, D.C., Pierpaoli, C., Basser, P.J., Gee, J.C.: Spatial transformations of diffusion tensor images. To appear in IEEE Trans. on Medical Imaging (2001)
2. Almansa, A., Lindeberg, T.: Enhancement of fingerprint images using shape-adapted scale-space operators. In Sporring, J., Nielsen, M., Florack, L., Johansen, P., editors, Gaussian Scale-Space Theory, Kluwer Academic (1997) 3–19
3. Basser, P.J., Mattiello, J., LeBihan, D.: MR diffusion tensor spectroscopy and imaging. Biophysical Journal 66 (1994) 259–267
4. Basser, P.J., Pierpaoli, C.: Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. Journal of Magnetic Resonance, Series B 111 (1996) 209–219
5. Basser, P.J., Pajevic, S.: Statistical artefacts in diffusion tensor MRI (DT-MRI) caused by background noise. Magnetic Resonance in Medicine 44 (2000) 41–50
6. Basser, P.J., Pajevic, S., Pierpaoli, C., Duda, J., Aldroubi, A.: In vivo fiber tractography using DT-MRI data. Magnetic Resonance in Medicine 44 (2000) 625–632
7. Chan, T., Shen, J.: Variational restoration of non-flat image features: model and algorithm. Technical Report CAM-TR 99-20, UCLA (1999)
8. Conturo, T.E., Lori, N.F., Cull, T.S., Akbudak, E., Snyder, A.Z., Shimony, J.S., McKinstry, R.C., Burton, H., Raichle, M.E.: Tracking neuronal fiber pathways in the living human brain. Proc. Natl. Acad. Sci. USA 96 (1999) 10422–10427
9. Cottet, G.H., Germain, L.: Image processing through reaction combined with nonlinear diffusion. Mathematics of Computation 61:204 (1993) 659–673
10. Deriche, R., Faugeras, O.: Les EDP en traitement des images et vision par ordinateur. Technical Report 2697, INRIA (1995)
11. Krissian, K., Malandain, G., Ayache, N.: Directional anisotropic diffusion applied to segmentation of vessels in 3D images. In Scale-Space'97, LNCS 1252, Springer-Verlag (1997) 345–348
12. Lindeberg, T., Gårding, J.: Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D brightness structure. Image and Vision Computing 15 (1997) 415–434
13. Parker, G.J.M., Schnabel, J.A., Symms, M.R., Werring, D.J., Barker, G.J.: Nonlinear smoothing for reduction of systematic and random errors in diffusion tensor imaging. Journal of Magnetic Resonance Imaging 11 (2000) 702–710
14. Parker, G.J.M., Wheeler-Kingshott, C.A., Barker, G.J.: Distributed anatomical brain connectivity derived from diffusion tensor imaging. In IPMI'2001, Springer-Verlag (2001)
15. Pierpaoli, C., Basser, P.J.: Toward a quantitative assessment of diffusion anisotropy. Magnetic Resonance in Medicine 36 (1996) 893–906
16. Perona, P.: Orientation diffusion. IEEE Trans. on Image Processing 7(3) (1998) 457–467
17. Poupon, C., Clark, C.A., Frouin, V., Régis, J., Bloch, I., Le Bihan, D., Mangin, J.-F.: Regularization of diffusion-based direction maps for the tracking of brain white matter fascicles. NeuroImage 12 (2000) 184–195
18. Schroeder, W., Martin, K., Lorensen, B.: The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, 2nd edition. Prentice Hall (1998)
19. Tang, B., Sapiro, G., Caselles, V.: Color image enhancement via chromaticity diffusion. Technical Report, ECE, University of Minnesota (1999)
20. Tang, B., Sapiro, G., Caselles, V.: Diffusion of general data on non-flat manifolds via harmonic maps theory: the direction diffusion case. International Journal of Computer Vision 36(2) (2000) 149–161
21. Weickert, J.: A review of non-linear diffusion filtering. In Scale-Space'97, LNCS 1252, Springer-Verlag (1997) 3–28
22. Weickert, J.: Anisotropic Diffusion in Image Processing. B.G. Teubner, Stuttgart (1998)
23. Weickert, J.: Coherence-enhancing diffusion filtering. International Journal of Computer Vision 31(2/3) (1999) 111–127
24. Weickert, J.: Coherence-enhancing diffusion of colour images. Image and Vision Computing 17 (1999) 201–212
25. Weickert, J., Scharr, H.: A scheme for coherence-enhancing diffusion filtering with optimised rotation invariance. To appear in J. of Visual Communication and Image Representation (2000)
Distributed Anatomical Brain Connectivity Derived from Diffusion Tensor Imaging

Geoffrey J.M. Parker¹,², Claudia A.M. Wheeler-Kingshott¹, and Gareth J. Barker¹

¹ NMR Research Unit, University Department of Clinical Neurology, Institute of Neurology, University College London, Queen Square, London WC1N 3BG, UK
² Imaging Science and Biomedical Engineering, University of Manchester, Oxford Road, Manchester M13 9PT, UK
[email protected]

Abstract. A method is presented for determining likely paths of anatomical connection between regions of the brain using MR diffusion tensor information. Level set theory, applied using fast marching methods, is used to generate 3-D time-of-arrival maps, from which connection paths between brain regions may be identified. The method is demonstrated in the normal brain; it is shown that major white matter tracts may be elucidated and that multiple connections and tract branching are allowed. Maps of the likelihood of connection between brain regions are also determined. Two metrics are described for estimating the (informal) likelihood of connection between regions.
1 Introduction
Diffusion tensor imaging (DTI) is an MRI technique developed to allow non-invasive quantification of the self-diffusion of water in vivo (see for example [1,2,3]). Diffusion is anisotropic in many tissues; in particular, brain white matter demonstrates significant anisotropy. High anisotropy reflects both the underlying highly directional arrangement of the fibre bundles forming white matter tracts and their intrinsic microstructure. DTI is able to characterise this anisotropy and to distinguish the principal orientation of diffusion, corresponding to the dominant axis of the bundles of axons making up white matter tracts in any given voxel. However, although DTI provides directional information concerning microscopic tissue fibre orientation at the voxel scale, it provides no explicit connection information between voxels. Early work [4] into anatomical connectivity attempted to group together neighbouring DTI voxels based on a similarity measure reflecting their relative principal diffusion orientations and coincidence. While this approach allows voxels to be grouped into sets that correspond to anatomical tracts, or portions thereof, it does not provide information concerning the fibre directions within these regions, and is therefore poorly suited to determining the route of inter-region connectivity. The classification into separate groupings is also binary; there is no attempt at determining the connection likelihood.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 106–120, 2001. © Springer-Verlag Berlin Heidelberg 2001

Other work has
attempted to provide the route of connection between regions [5,6,7,8,9]. Each of these approaches follows the paths of white matter tracts by tracing from a start voxel in a point-to-point manner; a single path is produced (although the methods presented in [5] allow separate paths to merge if they meet). The methods of [6,7,8] may be described as ‘streamline-like’ approaches, due to their close analogy with standard methods for finding paths through vector fields. The approach of [5] relies on a voxel similarity measure to define a chain of voxels that represents a good path through the tensor field. A balance between the bending energy of putative traces and their faithful following of the directional information provided by the diffusion tensor is achieved using a Markovian approach. Each of the above methods suffers from two major disadvantages that the method presented herein attempts to overcome: firstly, there is no (or at best a limited) natural mechanism to allow for the branching of tracts (an anatomically reasonable occurrence, seen, for example, in the corona radiata), meaning that connectivity is restricted to a representation as a one-to-one mapping between voxels in different regions; secondly, there is no attempt to determine how reasonable, or likely, any path is in representing a ‘true’ pathway of connection. The method presented here utilises the principles of level set theory and of the fast marching algorithm [10,11,12]. These techniques model the evolution over time of an interface or front. We hypothesise that the fast marching technique may be used in the context of the diffusion tensor field to propagate fronts that evolve at a rate governed by the directionality of the tensor. We control propagation using the principal eigenvector (ε1) of the tensor. As the ε1 field provides a variable rate of propagation for the front, different regions in a volume dataset will be crossed by the front at different times after propagation begins.
Maps showing the time of arrival from a start point may therefore be determined for the whole brain. From this information, paths of connection between brain regions may be determined [11,15]. We introduce associated ‘goodness’ metrics describing how likely a putative connection is, based on the information in the DTI data set. This paper describes the four major steps involved in determining anatomical brain connectivity using fast marching tractography: the evolution of a front from a seed point using a variant on the fast marching method; the generation of paths from all points in a given dataset to the seed point; the creation of connectivity maps using a goodness metric; and the selection of a subset of the paths as being reasonable pathways of connection. We also describe two putative goodness metrics. Examples of the results of this process in normal brains are presented.
2 Methods

2.1 Data Acquisition
Two sets of DTI brain data were acquired using a GE Signa 1.5 Tesla scanner with a standard quadrature head coil. Diffusion-encoded images were obtained
using an echo-planar acquisition with the following parameters: cardiac gating; 96 × 96 acquisition matrix, reconstructed to 128 × 128; 1.875 × 1.875 × 3.0 mm³ or 1.875 × 1.875 × 2.5 mm³ voxels; 42 axial slices; 240 mm FOV; TE 100 ms; 28 non-collinear diffusion-weighting directions [16]. The acquisition time for each dataset was approximately 20 minutes. The second-order 3 × 3 symmetric diffusion tensor, its eigenvalues and eigenvectors, and the fractional anisotropy [2] of the tensor were calculated from these data using software developed in-house.

2.2 Front Evolution
The first step towards determining paths of connection to a given start, or seed, point, A ∈ ℝ³, involves the growth of a volume from this point. This is achieved using a modified version of the fast marching algorithm, a rapid implementation of boundary-value level set methods. We are primarily interested in the behaviour of the surface of the volume (the front). The rate, F, at which the front propagates from the start point(s) is linked to the information contained in the ε1 field (Fig. 1). A number of possibilities exists for the form of this function (for examples see [13,14]). Here we present a new definition of F as a measure of voxel similarity, related to the ideas of voxel linking presented in [5]. Each iteration, p, in the front evolution involves the determination of F(r), where r is the position of a voxel that is a candidate for being occupied by the front during the p-th evolution step (a voxel belonging to the ‘narrow band’ (Fig. 1b)):

F(r) = min( |ε1(r) · n_d(r)|, |ε1(r′) · n_d(r)|, |ε1(r) · ε1(r′)| ) .    (1)

Here r′ is the position of a voxel neighbouring r that has already been passed by the front, along the direction of the discretised normal to the front, n_d(r) (Fig. 1), and F(r) is defined along n_d(r). This formula ensures that front evolution will occur most rapidly if both ε1(r) and ε1(r′) are close to co-linear with n_d(r), and close to co-linear with each other. Front evolution will be fastest along the white matter tracts, where strong coherence between the ε1 in neighbouring voxels is observed (Fig. 1a). The definition of n_d(r) involves the following steps: we define S(r) as the set of nearest neighbours, q, of r that have already been passed by the front (q ∈ S(r)) (Fig. 1b). We then define an approximation to the unit normal at r (using 26-neighbour connectivity), with f ∈ {0, 1} describing whether a voxel, q, has already been passed by the front:

n(r) = ∇f / |∇f| .    (2)
n_d(r) is defined as the unit vector connecting voxel centres that most closely approximates n(r). r′ is then defined as the member of S(r) connected to r in the direction −n_d(r). Figure 2a shows a 2-D map of ε1 in an axial brain image. Also shown is a map of fractional anisotropy, showing white matter tracts as high signal intensity (Fig.
Fig. 1. Vectors used in the calculation of the speed function, F. (a) The principal eigenvector of diffusion, ε1, is arranged in a directionally coherent manner in tracts, whilst in grey matter this coherence is largely lacking. (b) The relationship between the positions of the front, ε1(r), ε1(r′) (needles), grid points (voxels) passed by the front (black circles), grid points in the narrow band (grey circles), and grid points not yet reached by the front (white circles). S(r) is highlighted by the dotted region
2b), due to the highly directional nature of water diffusion in these structures, reflecting the directionally coherent organisation of the tissue microstructure. The evolution of a front using the fast marching algorithm allows a time of arrival, T, from the seed point to any point in the image volume to be determined. F and T are related by the Eikonal equation:

|∇T| F = 1 .    (3)
Note that our definition of n_d(r) is related to ∇T under the condition that T can be assumed to be approximately equal for all members of S(r):

n_d(r) ≈ n(r) ≈ ∇T / |∇T| .    (4)
To provide a value of T(r) we approximate Eq. 3 as

T(r) = T(r′) + |r − r′| / F(r) .    (5)
This construction ensures that information concerning the value of T(r) propagates only from the voxel with which F(r) is determined (Eq. 1), along n_d(r). Based on the methods of Sethian [10], we define the set of all points lying just outside the front (the narrow band), which are candidates for inclusion within the front (Fig. 1b). The grid point into which the front propagates at iteration p is the member of the narrow band with the smallest value of T.
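As a sketch of the mechanics of Eqs. (1) and (5), the following simplified 2-D version replaces the narrow-band bookkeeping with a Dijkstra-style priority queue, and uses the step direction to each candidate voxel as a stand-in for the discretised normal n_d(r) (which in the paper is computed from the set S(r)); the function name and the eps floor on F are illustrative assumptions:

```python
import heapq
import numpy as np

def arrival_time_map(e1, seed, eps=1e-6):
    """Dijkstra-style sketch of the front propagation of Eqs. (1) and (5).

    e1   : (H, W, 2) unit principal-eigenvector field
    seed : (row, col) start voxel
    Returns T, a time-of-arrival map from the seed to every voxel.
    """
    H, W, _ = e1.shape
    T = np.full((H, W), np.inf)
    T[seed] = 0.0
    heap = [(0.0, seed)]
    done = np.zeros((H, W), bool)
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if not (0 <= ni < H and 0 <= nj < W) or done[ni, nj]:
                    continue
                step = np.array([di, dj], float)
                dist = np.linalg.norm(step)
                n = step / dist                    # stand-in for n_d(r)
                # Speed function, Eq. (1)
                F = min(abs(e1[ni, nj] @ n),
                        abs(e1[i, j] @ n),
                        abs(e1[ni, nj] @ e1[i, j]))
                # Arrival-time update, Eq. (5)
                t_new = t + dist / max(F, eps)
                if t_new < T[ni, nj]:
                    T[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, (ni, nj)))
    return T
```

With a perfectly coherent e1 field, propagation along the field costs one time unit per voxel, while steps across it are penalised heavily, mimicking the fast evolution along tracts described above.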
Fig. 2. (a) Principal eigenvectors of the diffusion tensor (ε1), modulated by the principal eigenvalue. (b) Fractional anisotropy map. (c) T map. Dark ≡ low T values (early arrival); bright ≡ high T values (late arrival). Black pixels represent CSF or regions outside the brain parenchyma. (d) Traces from arbitrary points to the start point (the point of confluence of all paths is the start point for the front-propagation process) using discrete gradient descent through the T map (c)
Equivalently (Eq. 3), this ensures that the front will evolve at the point at which F is highest, ensuring rapid propagation along the direction of ε1 and slow propagation in other directions. Values of T in the narrow band are tested at each iteration step to see if they need to be updated. If u is the position of a voxel that was an unsuccessful candidate at the previous iteration step (p − 1), the value of T(u) is updated only if F(u)_p > F(u)_{p−1}. Figure 2c shows an illustrative 2-D axial example (i.e. the z component of ε1 is set to zero) of front propagation from a seed region placed in the splenium of the corpus callosum. The front propagates at a rate determined by the local
ε1 value. Fastest propagation occurs along the white matter tracts, where the directional coherence of ε1 is high.

2.3 Determining Paths of Connection
Level set and fast marching methods may be used to construct minimum-cost paths through weighted domains [11,15]. We interpret the diffusion tensor field as providing this cost function. The time, T, at which the front reaches each point in the image is determined by the cumulative effect of the F values experienced by the front up to that point. |∇T| (Eq. 3) may therefore be interpreted as a cost function affecting the rate of front propagation. More specifically, given a cost function G(x1, x2, ..., xn) and a starting point A ∈ ℝ³, it is possible to define a path γ(τ): [0, ∞) → ℝ³ from a seed point, A, to any point r ∈ ℝ³ that minimises the integral

∫_{A=γ(0)}^{r=γ(L)} G(γ(τ)) dτ ,    (6)

where L is the total length of γ and τ is the position on γ [11]. As we require the tensor field to provide the cost function, a natural definition of G is obtained by direct substitution of the |∇T| values calculated by the fast marching process [11], as this ensures that cost will be low when the ε1 are being followed faithfully and high for increments that do not follow ε1. The fast marching algorithm ensures that the minimum cost incurred in travelling from A to r is the time of arrival, T(r). This implies a path between A and r satisfying

T(r) = min_γ ∫_{γ(0)=A}^{γ(L)=r} |∇T|(γ(τ)) dτ .    (7)
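Equation (7) implies that a minimum-cost path can be recovered by descending the arrival-time map from any point back to the seed (where T = 0). A minimal 2-D sketch using discrete steps between voxel centres (8-neighbour connectivity here, standing in for the 26-neighbour case in 3-D; the function name is illustrative):

```python
import numpy as np

def trace_back(T, start):
    """Discrete gradient descent through an arrival-time map T.

    From `start`, repeatedly move to the neighbour with the smallest T
    until the seed (the global minimum, T == 0) is reached.
    Returns the list of visited (row, col) points.
    """
    H, W = T.shape
    path = [tuple(start)]
    while T[path[-1]] > 0.0:
        i, j = path[-1]
        best, best_t = None, T[i, j]
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                # Move only strictly downhill in T
                if 0 <= ni < H and 0 <= nj < W and T[ni, nj] < best_t:
                    best, best_t = (ni, nj), T[ni, nj]
        if best is None:          # stuck in a local minimum: stop
            break
        path.append(best)
    return path
```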
The minimum-cost path back to the seed point, A, may be found by gradient descent through T [11,15]. This process is shown from arbitrary points in the previous 2-D example in Fig. 2c,d, allowing any point within the data set to be connected to the original start point. Back-propagation from different points may lead to paths that merge (or, equivalently, branch when viewed from the seed point outwards). Gradient descent is achieved either by using discrete steps between voxel centres (26-neighbour connectivity) or by an Euler approximation with a constant time step. For the discrete approach ∇T is replaced by the discretised normal, n_d, ensuring that the voxels involved in propagating information at each iteration of the fast marching process (Eqs. 1 and 5) are linked. For presentation purposes, the resulting discrete paths are smoothed using a moving average of width 3.

2.4 ‘Goodness’ Metrics
Each point in T (i.e. every point within the brain) may be connected to the seed point using the gradient descent method (Fig. 2d). The likelihood that any
112
Geoffrey J.M. Parker, Claudia A.M. Wheeler-Kingshott, and Gareth J. Barker
given path is representative of a true anatomical connection may be estimated by determining a ‘goodness metric’, φ. By mapping φ throughout the brain, all points with a high likelihood of connection to the seed point may be identified. By setting a threshold in φ, the most likely pathways may be extracted. A possible definition of φ for any putative pathway, γ, may be constructed using the speed function, F, as determined during the front evolution process:

φ1(γ) = min_τ F(γ(τ)) = min_τ 1 / |∇T|(γ(τ)) .    (8)
This formulation uses the intrinsic cost function defined in the path generation process (Eq. 7), and may therefore be seen as a ‘natural’ choice for a goodness metric. An alternative metric may be defined by analysing the relationship between the path tangent and the underlying ε1 direction. We employ the scalar product between the tangent, w, and ε1:

φ2(γ) = max_τ ( 1 − |w(γ(τ)) · ε1(γ(τ))| ) .    (9)
This formulation may be interpreted as assessing how faithful the paths found using the above methods are to the underlying arrangement of the ε1 field.
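Both metrics reduce a sampled path to a single worst-case number. A minimal sketch of Eqs. 8–9, assuming the path has already been sampled into per-point speed values, unit tangents, and unit principal eigenvectors (all inputs here are hypothetical toy data, not the authors' pipeline):

```python
import math

def phi1(F_along_path):
    """phi_1 (Eq. 8): worst-case speed along the path, min over tau of F."""
    return min(F_along_path)

def phi2(tangents, eigvecs):
    """phi_2 (Eq. 9): worst case over tau of 1 - |w . eps1|, where w is the
    path tangent and eps1 the principal eigenvector; both are given as
    lists of unit vectors sampled along the path."""
    return max(1.0 - abs(sum(w * e for w, e in zip(wv, ev)))
               for wv, ev in zip(tangents, eigvecs))

# toy path: the tangent stays aligned with eps1 except at one sample
tangents = [(1.0, 0.0), (1.0, 0.0), (math.cos(0.5), math.sin(0.5))]
eigvecs = [(1.0, 0.0)] * 3
```

A single misaligned increment dominates φ2, illustrating the worst-case character of both metrics.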
Fig. 3. (a) φ1 map. (b) φ2 map. Points with a high φ are bright, implying a high likelihood of connection to the start point (arrow). 2-D front propagation as in Fig. 2
Both metrics assign an informal likelihood to a given pathway based on the worst case, along the length of the path, of the property to which they are sensitive. Figure 3 shows application of the two metrics in the previous 2-D example. Both
Distributed Anatomical Brain Connectivity
113
show high connectivity between the seed point and regions in a crescent pattern posterior to the corpus callosum, as would be expected from inspection of the arrangement of the ε1 field (Fig. 2).
3 Results
Figure 4 shows maximum intensity projections (MIPs) of φ1 and φ2 in a 3-D example in the motor pathway. The MIPs are normalised into the Talairach co-ordinate system [17] using SPM99 [Wellcome Dept. of Cognitive Neurology, Institute of Neurology, UCL, London, UK]. The seed point was placed in the middle portion of the cerebral peduncle. The form of the corticospinal tract is clearly visible.
Fig. 4. MIPs in Talairach co-ordinates (grid). Bright implies high φ. (a) φ1; (b) φ2. Paths determined using discrete gradient descent
Talairach-normalised maps of φ1 and φ2 at different axial levels reveal that the seed region is connected to the region of white matter immediately adjacent to the primary motor area and the supplementary motor area (approximate Talairach co-ordinates (-20,-20,65) and (-10,-10,65), respectively) (Fig. 5). The maps also show that the route taken passes through the posterior limb of the internal capsule, consistent with the expected anatomical pathway. The seed region is also connected to the pyramids in the medulla, again consistent with the known route of the corticospinal tract. It is evident from Figs. 4 and 5 that metric 2 provides an estimate of the likelihood of connection within the brain with higher contrast between well-connected and less well-connected regions. Also, Fig. 5b shows that likelihoods based on φ1 produce more regions on the contralateral side of the brain to the
Fig. 5. Maps in Talairach co-ordinate system of (a) fractional anisotropy, (b) φ1, (c) φ2. i) Left pyramid; ii) posterior limb of the internal capsule; iii) Talairach co-ordinates (-15,-10,52); iv) Talairach co-ordinates (-16,-12,52). From top to bottom, images are at approximate Talairach z co-ordinates -50, 10, and 52
seed point with relatively high φ values, a situation which is unlikely to represent true anatomical connectivity. For these reasons we judge φ2 to be superior to φ1. Figure 6 shows the 1% most likely paths from the seed point used for Figs. 4 and 5, when using φ2 as the goodness metric (discrete gradient descent method). It is apparent that connections to the motor region of the brain have been found, even though a region of relatively low diffusion tensor anisotropy had to be traversed. The principal tract of interest is well described in Figs. 4–6, with other areas of the brain showing low connection likelihood. However, one notable ‘false positive’ is apparent in Fig. 6: the algorithm assigns high connection likelihood to a region in the corpus callosum, showing this area as connected, which is likely to be erroneous. When viewed in Fig. 5, however, this region has a somewhat lower likelihood than much of the rest of the ‘high φ’ region, suggesting that the arbitrary threshold of 1% for selecting paths is inadequate. Further experiments were performed on a second dataset using φ2 and the Euler gradient descent method. The 1% most likely paths were traced from start points in the mid-cerebral peduncle (as before) and in each of the optic radiations, from a position lateral to the lateral geniculate nucleus. Figure 7 shows the results in the corticospinal tract. Figure 8 shows the results in the optic radiations. No significant false positives were observed in these experiments.
Fig. 6. Coronal (through the corticospinal tract) and axial (through the level of the pyramids) fractional anisotropy maps plus paths generated using discrete gradient descent method. Solid arrows show low anisotropy due to the presence of crossing fibres. Dashed arrow shows probable false-positive paths
4 Discussion
We have shown results of diffusion tensor tractography using fast marching tractography (FMT). We have shown that in the normal brain it is possible to use this approach to follow major white matter tracts. Examples have been presented in the motor pathway and the optic radiation in two individuals. The examples presented show that maps of connectivity may be obtained that are in agreement with known anatomical and functional connectivity. The use of the Talairach co-ordinate system provides a degree of validation of the pathways of connection found. Regions of gross functional anatomy may be identified, in particular the motor areas. This validation approach is therefore appropriate for the corticospinal tract. However, there is no general ‘gold standard’ of anatomical connectivity in the human brain; surprisingly little is known about human brain anatomical connectivity. This lack of information is due to the fact that the tract tracing methods used successfully for determining connections in animal brains cannot be applied to humans, due to their invasive nature. Therefore, the only human connectivity data available so far stem from gross dissection of the human brain [17], histological staining of major fibre bundles [18], and degeneration studies in patients [19,20,21]. These techniques cannot be applied generally in vivo but can only be used post mortem or under specific disease conditions. These constraints do not apply to many animal models. In particular, detailed knowledge of anatomical connectivity is available in the macaque brain, forming the closest well-described model to the human brain. A study is under way examining the relationship between DTI tractography using the fast marching method and the well-known cerebral anatomical
Fig. 7. Coronal (through the corticospinal tract) and sagittal (through the brain midline) fractional anisotropy maps plus paths connected to the mid portion of the cerebral peduncle. Paths generated using Euler gradient descent method
connectivity of this animal [22]. Such validation studies are required to allow future studies into the relative merits of different tractography approaches. Figures 6 and 7 demonstrate the ability of this method to trace paths through relatively low anisotropy regions, and successfully identify the connection to the motor area. The region in question has low anisotropy values due to the presence of crossing fibres, demonstrating the limitations of the relatively coarse-scale DTI technique in resolving small-scale pathways. Whilst the fast marching tractography approach is able to continue through such regions, and find the ‘true’ pathways of interest, the possibility of false positives must not be ignored. Likewise, although multiple points have been shown to have a high likelihood of connection to the start region, the possibility of false negatives cannot be ruled out. Previous preliminary work has included a measure of the tensor anisotropy in the definition of the speed function, F [13,14]. Here we deal only with ε1, allowing grey matter structures, such as the thalamus, to be included in the tractography process. This is also likely to contribute to the ability to follow tracts through the low anisotropy regions discussed above. A possible development to reduce false positive rates would be to use a quantitative threshold for the φ values, rather than a simple ‘best centile’ approach. The quantitative nature of the goodness metrics should lend itself to such a thresholding approach; only a justification for setting a threshold without having to resort to arbitrary decisions is required. We have concluded that goodness metric 2 is superior to metric 1, even though φ1 may be a more natural choice. The reason for the better performance of φ2 may be that it is insensitive to the curvature of a path. The definition of
Fig. 8. Axial fractional anisotropy map at the level of the optic radiations, showing bilateral top 1% traces, determined using the Euler gradient descent method. The start point on each tract is highlighted by a black circle
φ1 involves the fast marching speed function, F, relating neighbouring eigenvector orientations and coincidence, and hence has an explicit curvature component. When assessing putative pathways, a significant penalty on curvature is not necessarily desirable, hence the better performance of φ2, which depends only on the relationship between the path tangent and the underlying eigenvector field. This argument then raises the question of whether the definition of F (Eq. 1) for the front propagation process is the most suitable choice, as the definition given here has an implicit curvature dependence. Further work will investigate alternative speed functions and goodness metrics. The incorporation into the speed function of diffusivity information reflecting the form of the whole tensor may yield benefits under some conditions. An obvious alternative to the use of ‘worst case’ goodness metrics, such as φ1 and φ2, is to define a metric with characteristics more globally related to the nature of the path. This approach appears attractive from the point of view of reducing the possible detrimental effect of noise on the estimation of path likelihood. However, the use of a ‘worst case’ metric provides the benefit of reducing the risk of spurious paths being found due to the selection of separate, abutting or crossing paths as a single path; such occurrences may have a small effect on global estimates of goodness, whilst having a decisive effect when using a ‘worst case’
approach. Additionally, worst-case metrics are naturally pessimistic, and are therefore likely to provide some protection against false-positive results. Further work is also required in the development of more formal likelihood measures. Such metrics have been developed utilising the bending energy of individual paths [5], and are likely to be applicable to our approach. However, the use of path curvature is not necessarily appropriate, as discussed above. The methodology presented here is related in its philosophy to graph search methods, as suggested in [23] for non-diffusion MR brain images. However, the main aim of [23] was to measure brain morphometry, and connectivity based on diffusion information was not determined. Other workers have determined pathways of connection using other methods [4,5,6,7,8,9], but have not generally allowed for tract branching or generated measures of the likelihood of connection between regions. The time for the analysis of a 42-slice whole-brain DTI dataset using the fast marching tractography approach as outlined in this work, including each of the steps outlined in the methods section, is of the order of 10 minutes. Such an analysis allows all points in the brain to be assessed for their likelihood of connection to a seed point. This time may be reduced by restricting the analysis volume using prior knowledge of anatomical connectivity. The fast marching tractography approach allows the generation of both connectivity maps and connection paths. These appear reasonable in comparison with known normal anatomy. The principal advantages of this method are the possibility for branching of paths, and the estimation of the likelihood of connection of any brain region to a start region. These two benefits in combination make it unique in the study of connectivity using DTI data to date.
Acknowledgements This work was supported by the Multiple Sclerosis Society of Great Britain and Northern Ireland. The contributions of Klaas Stephan, Olga Ciccarelli, Sofia Eriksson, David Werring, and Olivier Coulon are gratefully acknowledged.
References

1. Basser, P.J., Mattiello, J., Le Bihan, D.: Estimation of the Effective Self-Diffusion Tensor from the NMR Spin Echo. J. Magn. Reson. B. 103 (1994) 247–254
2. Pierpaoli, C., Basser, P.J.: Toward a Quantitative Assessment of Diffusion Anisotropy. Magn. Reson. Med. 36 (1996) 893–906
3. Basser, P.J., Pierpaoli, C.: Microstructural and Physiological Features of Tissues Elucidated by Quantitative-Diffusion-Tensor MRI. J. Magn. Reson. B. 111 (1996) 209–219
4. Jones, D.K., Simmons, A., Williams, S.C.R., Horsfield, M.A.: Non-Invasive Assessment of Axonal Fiber Connectivity in the Human Brain via Diffusion Tensor MRI. Magn. Reson. Med. 42 (1999) 37–41
5. Poupon, C., Clark, C.A., Frouin, V., et al.: Regularization of Diffusion-Based Direction Maps for the Tracking of Brain White Matter Fascicles. NeuroImage 12 (2000) 184–195
6. Conturo, T.E., Lori, N.F., Cull, T.S., et al.: Tracking Neuronal Fiber Pathways in the Living Human Brain. Proc. Nat. Acad. Sci. USA 96 (1999) 10422–10427
7. Mori, S., Crain, B.J., Chacko, V.P., van Zijl, P.C.M.: Three-Dimensional Tracking of Axonal Projections in the Brain by Magnetic Resonance Imaging. Ann. Neurol. 45 (1999) 265–269
8. Basser, P.J., Pajevic, S., Pierpaoli, C., Duda, J., Aldroubi, A.: In Vivo Fiber Tractography Using DT-MRI Data. Magn. Reson. Med. 44 (2000) 625–632
9. Tuch, D.S., Belliveau, J.W., Wedeen, V.J.: A Path Integral Approach to White Matter Tractography. In: Proceedings of the 8th Meeting of the International Society for Magnetic Resonance in Medicine. (2000) 791
10. Sethian, J.A.: A Fast Marching Level Set Method for Monotonically Advancing Fronts. Proc. Nat. Acad. Sci. USA 93 (1996) 1591–1595
11. Sethian, J.A.: Level Set Methods and Fast Marching Methods. 2nd edn. Cambridge University Press, Cambridge (1999)
12. Malladi, R., Sethian, J.A.: An O(N log N) Algorithm for Shape Modeling. Proc. Nat. Acad. Sci. USA 93 (1996) 9389–9392
13. Parker, G.J.M., Dehmeshki, J.: A Fast Marching Analysis of MR Diffusion Tensor Imaging for Following White Matter Tracts. In: Medical Image Understanding and Analysis MIUA2000 (2000) 185–188
14. Parker, G.J.M., Dehmeshki, J.: A Level Sets Approach to Determining Brain Region Connectivity. In: Proceedings of the 1st International Workshop on Image and Signal Processing and Analysis IWISPA 2000, 22nd International Conference on Information Technology Interfaces (2000) 145–150
15. Kimmel, R., Sethian, J.A.: Computing Geodesic Paths on Manifolds. Proc. Natl. Acad. Sci. USA 95 (1998) 8431–8435
16. Jones, D.K., Horsfield, M.A., Simmons, A.: Optimal Strategies for Measuring Diffusion in Anisotropic Systems by Magnetic Resonance Imaging. Magn. Reson. Med. 42 (1999) 515–525
17. Talairach, J., Tournoux, P.: Co-planar Stereotaxic Atlas of the Human Brain. Georg Thieme Verlag, Stuttgart (1988)
18. Bürgel, U., Schormann, T., Schleicher, A., Zilles, K.: Mapping of Histologically Identified Long Fiber Tracts in Human Cerebral Hemispheres to the MRI Volume of a Reference Brain: Position and Spatial Variability of the Optic Radiation. NeuroImage 10 (1999) 489–499
19. Miklossy, J., van der Loos, H.: The Long-Distance Effects of Brain Lesions: Visualization of Myelinated Pathways in the Human Brain Using Polarizing and Fluorescence Microscopy. J. Neuropathol. Exp. Neurol. 50 (1991) 1–15
20. Pujol, R., Marti-Vilalta, J.L., Junque, C., Vendrell, P., Fernandez, J., Capdevilla, A.: Wallerian Degeneration of the Pyramidal Tract Studied by Magnetic Resonance Imaging. Stroke 21 (1990) 404–409
21. Werring, D.J., Toosey, A.T., Clark, C.A., Parker, G.J.M., Barker, G.J., Miller, D.H., Thompson, A.J.: Diffusion Tensor Imaging Can Detect and Quantify Corticospinal Tract Degeneration After Stroke. J. Neurol. Neurosurg. Psychiatry 69 (2000) 269–272
22. Stephan, K.E., Parker, G.J.M., Barker, G.J., Rowe, J.B., MacManus, D.G., Passingham, R.E., Lemon, R.N., Turner, R.: In Vivo Tracing of Anatomical Fibre Tracts in the Macaque Monkey Brain by Diffusion Tensor Imaging (DTI). In: Proceedings Human Brain Mapping (2001) (In press)
23. Styner, M., Coradi, T., Gerig, G.: Brain Morphometry by Distance Measurement in a Non-Euclidian, Curvilinear Space. In: Kuba, A., Šámal, M., Todd-Pokropek, A. (eds.): Information Processing in Medical Imaging IPMI'99. Lecture Notes in Computer Science, Vol. 1613. Springer-Verlag, Berlin Heidelberg New York (1999) 364–369
Study of Connectivity in the Brain Using the Full Diffusion Tensor from MRI

Philipp G. Batchelor^1, Derek L.G. Hill^1, Fernando Calamante^2, and David Atkinson^1

^1 Division of Radiological Sciences, King's College London
^2 Institute of Child Health, University College London
Abstract. In this paper we propose a novel technique for the analysis of diffusion tensor magnetic resonance images. This method involves solving the full diffusion equation over a finite element mesh derived from the MR data. It calculates connection probabilities between points of interest, which can be compared within or between subjects. Unlike traditional tractography, we use all the data in the diffusion tensor at each voxel, which is likely to increase robustness and make intersubject comparisons easier.
1 Introduction
Water molecules in tissue move continuously, and this movement can be exploited to study diffusivity in the brain using MRI [1,2]. Such MR diffusion images define six independent values at each voxel. The six values define a symmetric tensor, which can equivalently be described using the 3 eigenvalues and 3 eigenvectors of that tensor. The anisotropy of the diffusion tensor is of particular interest in brain images, as it is related to fibre tracts in white matter [3,4,5,6,7,8]. The technique of MR tractography has recently been proposed to study white matter connectivity in the brain [4,5,6,7] using MR diffusion images. Here, we propose a different approach, which uses the complete diffusion tensor. Essentially, we solve a diffusion equation based on the measured diffusion tensor, with the initial condition a seed at the point which would be used as the starting point in tractography. The seed diffuses through the brain, and the amount at some position is interpreted as a probability of reaching that point, given the input data. The advantage of this approach is that it does not depend on a point-to-point eigenvalue/eigenvector computation, and in that sense it is hopefully more robust. It is also intuitively related to the underlying physico-chemical process^1 [9,10].
^1 which is actually more complicated, but some of the other physical quantities cannot be measured in vivo
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 121–133, 2001. © Springer-Verlag Berlin Heidelberg 2001
2 Methods: Extracting Information
The images produced by diffusion tensor MRI provide new information on the imaged tissue, not available with classical MR techniques [11]. The images produced, however, are not directly related to the tensor components, which must be computed algebraically [11]. Recent studies have addressed the optimisation of the acquisition of diffusion tensor data [6,15]. In this study, we are concerned with the analysis of such data.

2.1 From Diffusion Imaging to Diffusion Tensor
The diffusion weighted images are acquired with the diffusion sensitisation in different directions. As the tensor has six independent components, it is necessary to acquire diffusion weighted images with the diffusion sensitisation in at least six directions e_i. We denote these images β_i, i = 1, ..., n, n ≥ 6. Furthermore, the non-weighted image is denoted β_0. Then

log(β_i/β_0) = −b e_i^t D e_i    (1)

We acquired 25 slices using single shot echo planar imaging (EPI), with b = 1000 s/mm², voxel size 2 × 2 × 2 mm, and six averages. Here b is related to the diffusion weighting gradient strength and timings. The seven directions were the x, y, and z coordinate axis directions, plus [1, 1, 1]/√3, [−1, −1, 1]/√3, [1, −1, −1]/√3, and [−1, 1, −1]/√3.

Positive Definiteness. For the physics to make sense, one should have β_i ≤ β_0 in order for the left hand side to be negative, so as to make the directional measurements e_i^t D e_i positive, which a diffusion tensor should satisfy. Figure 1 shows in white the regions where this is not satisfied for β_1/β_0. The tensor is not positive definite outside the head, and also at some voxels within the head. The number of voxels in the brain where the tensor is not positive definite decreases with data averaging, suggesting that the non-positive definite values are caused by measurement noise.

We define l_i := −log(β_i/β_0)/(2b), so, for l_i > 0, we get the system of equations for the unknown diffusion tensor:

e_1^t D e_1 = E_1^t d = l_1
...
e_n^t D e_n = E_n^t d = l_n

where we have defined d = [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]. E_i is constructed from the direction e_i = [x, y, z] via the mapping [x, y, z] → [x², y², z², 2xy, 2xz, 2yz]. Summarised in matrix form: E := [E_1; ...; E_n] is the matrix whose rows are the E_i's, and we get E d = [l_1, ..., l_n]^t, with the obvious condition that E has maximal rank. It is still possible for the computed tensor to have negative eigenvalues, as shown in figure 2.
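The linear system E d = l above can be solved by least squares. The following numpy sketch is illustrative rather than the authors' code: it builds the rows E_i from the seven directions given in the text, takes l_i = −log(β_i/β_0)/b directly from Eq. 1, and recovers a known, noise-free synthetic tensor (the tensor and b-value here are made up for the demonstration):

```python
import numpy as np

def fit_tensor(b, beta0, betas, dirs):
    """Least-squares fit of the six tensor components from Eq. 1:
    log(beta_i / beta0) = -b e_i^t D e_i. Each row E_i is built from
    e_i = [x, y, z] via [x^2, y^2, z^2, 2xy, 2xz, 2yz], and E d = l
    is solved for d = [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]."""
    dirs = np.asarray(dirs, float)
    l = -np.log(np.asarray(betas, float) / beta0) / b
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    E = np.column_stack([x * x, y * y, z * z, 2 * x * y, 2 * x * z, 2 * y * z])
    d, *_ = np.linalg.lstsq(E, l, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

# the seven directions given in the text
s = 1 / np.sqrt(3)
dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
        (s, s, s), (-s, -s, s), (s, -s, -s), (-s, s, -s)]

# synthetic, noise-free signals from a known anisotropic tensor
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
b = 1000.0
betas = [np.exp(-b * np.array(e) @ D_true @ np.array(e)) for e in dirs]
D_fit = fit_tensor(b, 1.0, betas, dirs)
```

With seven directions the system is overdetermined (7 equations, 6 unknowns), which is why a least-squares solve is the natural choice; with noisy data the residual would no longer be zero.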
Fig. 1. White: positions where the ratio of the diffusion weighted image in direction e_1 is larger than the value in the β_0 image at the corresponding position. Middle: one acquisition; right: averaged six times. The β_0 image is displayed on the left
Fig. 2. In this axial slice through the head, pixels where the diffusion tensor has a negative eigenvalue are marked in white (left: smallest eigenvalue; middle: middle eigenvalue)
2.2 Classification of Diffusion Related Images
There are various ways of displaying and analysing the diffusion tensor images [16]. One straightforward possibility is to display the tensor components, giving six images in three dimensions. This can be dangerous, however, as the components of a tensor are not invariants of the coordinate system, and the handling of six images is cumbersome. A preferable approach is to use invariants of the tensor [3]. Scalars have the advantage of being easy to use. The eigenvalues are the essential scalar information contained in a tensor. The fundamental invariants are the homogeneous functions of the three eigenvalues:

λ1 λ2 λ3 ,    λ1 λ2 + λ2 λ3 + λ1 λ3 ,    λ1 + λ2 + λ3    (2)
to which one can add any function of the eigenvalues, for example λ3/λ1 or the relative anisotropy (RA) [3,15]. One could generalise this to study the ratio of the largest eigenvalue to the average of the two other ones, to quantify how cylindrical the tissue is. Usually, the aim is to emphasise the anisotropy of the tensor, as the anisotropy of the measured tensor is supposed to be related to the anisotropy of the tissues. We use the convention that eigenvalues are sorted from 1 to 3 by increasing value. We must take care to avoid negative eigenvalues. If we take the relative anisotropy (RA), which is the ratio of the standard deviation to the mean of the three eigenvalues, the denominator is the mean diffusivity. A negative eigenvalue makes the denominator low, and thus the anisotropy larger. This is true in general: negative eigenvalues create large anisotropy. For example, changing one sign of the totally isotropic tensor gives a fractional anisotropy larger than 1, namely 1.1547. Associated with the eigenvalues are the eigenvectors. These define vector fields on the image. Different techniques have been developed to display them. Usually, in what is called tractography, the direction corresponding to the maximal eigenvalue is followed. For example, [4] studies the flow associated with the largest eigenvector:

ṙ(t) = e_max(r(t))    (3)
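The anisotropy indices above are simple functions of the eigenvalues, and the claim about sign flips can be checked directly. A minimal sketch, using the standard fractional anisotropy formula and the RA definition given in the text (the function names are ours):

```python
import math

def fa(lams):
    """Fractional anisotropy: sqrt(3/2) * ||lam - mean|| / ||lam||."""
    m = sum(lams) / 3.0
    return math.sqrt(1.5 * sum((l - m) ** 2 for l in lams)
                     / sum(l * l for l in lams))

def ra(lams):
    """Relative anisotropy: standard deviation / mean of the eigenvalues."""
    m = sum(lams) / 3.0
    return math.sqrt(sum((l - m) ** 2 for l in lams) / 3.0) / m

print(fa((1.0, 1.0, 1.0)))    # isotropic tensor: 0.0
print(fa((1.0, 1.0, -1.0)))   # one sign flipped: 2/sqrt(3) = 1.1547...
```

Flipping one eigenvalue of the isotropic tensor indeed yields FA = 2/√3 ≈ 1.1547, exceeding the nominal maximum of 1, which is the text's argument for guarding against negative eigenvalues.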
One difficulty associated with eigenvalues/eigenvectors is the sorting [17], and particularly multiple eigenvalues and the associated singularities in the vector field. For example, Basser et al. in [4] stop following the tract when it reaches the boundary, or a region with low anisotropy, or when the radius of curvature is below two voxels, or when the principal direction is not the one most collinear with the tract tangent. Because of all the problems associated with noise, such as ambiguities in direction and errors in sorting of the eigenvalues, it can be advantageous to study connection probabilities rather than absolute connections. One approach is to use the fast marching algorithm to propagate a front using the direction of the principal eigenvector [18]. The advantage of our approach is that we use the entire diffusion tensor in an intuitive way, and measuring the diffusion of a seed placed in the tensor field allows an elegant definition of probability.

2.3 Diffusion Equation
Traditional tractography enforces a choice, based on sorting, at every step. This means rejecting two thirds of the available information, even in cases where the difference between the largest and second largest eigenvalues might be extremely small. Also, even with a smooth family of tensors, the eigenvalues are not necessarily smooth [19]. We propose here a different technique, based on a simple model of diffusion: a seed placed at the point which would be used as the starting point for tractography is diffused.
f(x) = 1 in the starting region, 0 elsewhere    (4)

is the seed, used as an initial condition in the partial differential equation (PDE)

∇ · D∇u(x, t) = −∂u(x, t)/∂t ,    (5)
u(x, 0) = f(x) .    (6)

This uses the diffusion tensor directly, and thus avoids the problems related to the computation of the eigenvalues. We interpret it as a probabilistic tractography, which allows the starting value to follow simultaneously all the possible paths around it, each with a certain probability. This is an idealised model of diffusion; a more realistic equation would have to take into account intracellular and extracellular diffusion, convection terms, etc. [9]. To understand the effect of the anisotropy, we compare with the effect of homogeneous isotropic diffusion; we call û the isotropically diffused seed:

∇ · ∇û(x, t) = ∆û(x, t) = −∂û(x, t)/∂t ,    (7)
û(x, 0) = u(x, 0) = f(x) .    (8)
In the computation, we use natural boundary conditions, i.e. in the variational formulation we do not enforce a boundary value. This corresponds to D∇u · n = ∇û · n = 0 on the boundary. Under this constraint, by Green's theorem, the mean value is constant in time: ⟨u⟩(t) = ⟨û⟩(t) = ⟨f⟩. This boundary condition means that the normal part of the gradient is zero; in other words, nothing escapes. In terms of heat flow, this would mean that the brain is insulated. Not only are these assumptions intuitive, but they ensure that the total amount of seed is conserved, enabling us to interpret the normalised u as a probability.
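The seed-diffusion idea with zero-flux boundaries can be sketched on a plain grid. This is a deliberately simplified stand-in for the paper's method, assuming a diagonal tensor D = diag(Dx, Dy) and an explicit flux-form update for ∂u/∂t = ∇·(D∇u); because fluxes are only exchanged between interior cell faces, the total amount of seed is conserved exactly, as the natural boundary condition requires. All names and parameter values are illustrative:

```python
def diffuse_step(u, Dx, Dy, dt):
    """One explicit, flux-form time step of du/dt = div(D grad u) on a
    2-D grid, for a diagonal tensor D = diag(Dx, Dy) per cell. Fluxes
    are exchanged only between interior cell faces, so the zero-flux
    (natural) boundary condition holds and total seed is conserved."""
    ni, nj = len(u), len(u[0])
    un = [row[:] for row in u]
    for i in range(ni):
        for j in range(nj):
            if i + 1 < ni:  # face between (i, j) and (i+1, j)
                f = dt * 0.5 * (Dx[i][j] + Dx[i + 1][j]) * (u[i + 1][j] - u[i][j])
                un[i][j] += f
                un[i + 1][j] -= f
            if j + 1 < nj:  # face between (i, j) and (i, j+1)
                f = dt * 0.5 * (Dy[i][j] + Dy[i][j + 1]) * (u[i][j + 1] - u[i][j])
                un[i][j] += f
                un[i][j + 1] -= f
    return un

# seed in the centre; diffusion ten times faster along i than along j
n = 11
u = [[0.0] * n for _ in range(n)]
u[5][5] = 1.0
Dx = [[1.0] * n for _ in range(n)]
Dy = [[0.1] * n for _ in range(n)]
for _ in range(50):
    u = diffuse_step(u, Dx, Dy, 0.1)
```

The anisotropic tensor makes the seed spread into an elongated profile along the fast axis, which is exactly the behaviour the method reads as higher connection probability along fibre directions.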
2.4 Discretisation-Numerics
We have written in equation 5 the continuous form of the equation, but to solve it, it needs to be discretised. As in [20], we note that the discrete equation can be as meaningful as the continuous one. We used a Crank-Nicolson scheme, with Galerkin finite element discretisation in space and finite differences in time. The finite element method amounts to considering a weak formulation [21] of the equation on a finite dimensional subspace of the space of solutions. This subspace is normally the one generated by continuous, piecewise polynomial functions, whose support is a mesh neighbourhood of a vertex. The diffusion equation is a parabolic equation, and for further theoretical considerations on the method, we refer to [21].

Example. Our approach can be clarified by considering a simple abstract two dimensional example. Consider the simplest possible square mesh in figure 3, with just one internal node (node 5), and eight boundary nodes.
Fig. 3. A mesh for a square, with one internal node
On it, we will test some simple diffusion tensors, e.g.

D = ( 1+ε  0 ; 0  1 )  with ε > 0.

For any value of ε, small or large, the tractography will impose movement in the x direction. Using the numbering in the figure, we see that the nonzero matrix components affecting the inner node are (15), (25), (45), (55), (85), (95), and (65). Usually, the contribution at a node is computed by summing the integrals over elements containing that node. The discrete version
Fig. 4. Left: the initial condition; right: after one time step, for ε = 0.1
of the integral using the finite elements is the sum of the values of the mass matrix times the vector of values at vertices of the solution. This is due to the fact that the finite element basis is not orthogonal, and the mass matrix is the
matrix of inner products. We give the values produced by the discretised diffusion equation on this very small mesh. The values at different time steps are obtained by iteratively solving a linear system of the form AU = BU_old, where A and B are computed from the stiffness, mass, and time step, and the vector U contains the nodal values of the function. At ε = 0, ε = 0.1, and ε = 1, and t = 0.1, in time steps of 0.01, i.e. ten iterations (remember that the square has sides of length 1):

ε = 0:                    ε = 0.1:                  ε = 1:
0.0868 0.1178 0.0206      0.0869 0.1168 0.0215      0.0859 0.1120 0.0274
0.1178 0.3140 0.1178      0.1196 0.3104 0.1196      0.1294 0.2908 0.1294
0.0206 0.1178 0.0868      0.0215 0.1168 0.0869      0.0274 0.1120 0.0859
This is also illustrated in figure 4. The numbers can be read as probabilities of reaching the corresponding point, starting with probability one at node 5. As with all diffusion related techniques, there is still the problem of the stopping time. For the moment, we use an ad hoc choice.
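The AU = BU_old iteration can be made concrete in one dimension. The sketch below, which is our own illustration and not the authors' code, assembles 1-D linear-finite-element mass and stiffness matrices with natural boundary conditions and takes Crank-Nicolson steps; the mesh size, spacing, and time step are arbitrary:

```python
import numpy as np

def fem_matrices(n, h):
    """Assemble 1-D linear-finite-element mass (M) and stiffness (K)
    matrices on n nodes with spacing h and natural (zero-flux)
    boundary conditions, element by element."""
    M = np.zeros((n, n))
    K = np.zeros((n, n))
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])    # element mass
    Ke = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    for e in range(n - 1):
        M[e:e + 2, e:e + 2] += Me
        K[e:e + 2, e:e + 2] += Ke
    return M, K

n, h, dt = 21, 0.05, 1e-4
M, K = fem_matrices(n, h)
A = M + 0.5 * dt * K        # Crank-Nicolson: solve A u_new = B u_old
B = M - 0.5 * dt * K
u = np.zeros(n)
u[n // 2] = 1.0             # seed at the middle node
mass0 = M.sum(axis=0) @ u   # total amount of seed, 1^t M u
for _ in range(10):
    u = np.linalg.solve(A, B @ u)
```

Because the rows of K sum to zero under natural boundary conditions, the quantity 1^t M u is preserved by every step, which is the discrete counterpart of the seed conservation used to read the nodal values as probabilities.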
3 Application

3.1 Diffusion Tensor Components
The images in figure 5 show the diffusion tensor components. It is difficult to grasp all the information contained in a tensor field using this representation. We can get a very rough idea of what the diffusion tensor looks like on average by selecting a ‘brain region’ and a CSF region, as shown in figure 6. In the region of interest, the mean diffusivity, i.e. one third of the trace of the diffusion tensor^2, was 1.038 · 10^-3 mm²/s.
3.2 Anisotropies

It is important to remember that high anisotropy does not mean high diffusivity, and neither means high consistency of the direction of the eigenvector! In figure 7, we display some of the classical anisotropies [11,15], together with the mean diffusivity image. The images displayed here are from axial slices of two volunteers. The mean relative anisotropy (RA) was 0.2841 in the brain, and 0.1472 in the CSF, i.e. the mean anisotropies are approximately in ratio 2:1, whereas the corresponding diffusivities are in ratio 1:3.
² The region used contains parts of CSF, which has a higher diffusivity and lower anisotropy. The mean diffusivity in a white matter region is ∼ 0.86 · 10−3 mm²/s, in accordance with the literature.
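The anisotropy indices discussed here are rotation-invariant functions of the tensor eigenvalues. The following sketch uses the common standard-deviation-over-mean definition of RA and the standard FA formula; normalisation conventions vary between papers [11,15], so the exact constants are assumptions.

```python
import numpy as np

def anisotropy_indices(eigenvalues):
    """RA and FA from the three diffusion tensor eigenvalues.
    RA = (std of eigenvalues) / (mean of eigenvalues);
    FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||."""
    lam = np.asarray(eigenvalues, dtype=float)
    mean = lam.mean()
    dev = lam - mean
    ra = np.sqrt((dev ** 2).mean()) / mean
    fa = np.sqrt(1.5 * (dev ** 2).sum() / (lam ** 2).sum())
    return ra, fa

ra_iso, fa_iso = anisotropy_indices([1.0, 1.0, 1.0])   # isotropic tensor
ra_lin, fa_lin = anisotropy_indices([1.0, 0.0, 0.0])   # purely linear tensor
```

Both indices vanish for an isotropic tensor; FA reaches 1 in the purely linear case, which is why neither index says anything about the magnitude of the diffusivity itself.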
128
Philipp G. Batchelor et al.

Fig. 5. Diffusion tensor components
3.3
Diffusion Equation
The first step in solving the diffusion equation is to extract a region of interest. For the purpose of demonstration, we use approximate white matter segmentations, obtained by thresholding the anisotropy. (For better results, a segmented tissue map from the structural MR of the same subject, or even from an atlas, could be used in place of the diffusion anisotropy mask, though matching such data to the EPI diffusion images requires a non-rigid registration algorithm that can compensate for MR distortion [22,23].) This limits the solution to the region of interest, and also excludes the regions with the greatest likelihood of negative eigenvalues (figure 2). Furthermore, the CSF has high diffusivity but is not of interest for connectivity. One of the meshes used is displayed in figure 8. After extracting the regions, we need to convert the diffusion tensor components from values on the voxel dataset to values on the finite element mesh. In figure 9 we display the results at different time steps. For the starting position as chosen (the point positions are shown by crosses in the figure showing the isotropic homogeneous diffusion), the values of the seeded function are shown in table 1.
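The two preprocessing steps described above, an approximate white matter mask from thresholded anisotropy and the transfer of voxel tensor values onto mesh nodes, might look like the following. The threshold value and the nearest-neighbour sampling are illustrative assumptions; a real pipeline would interpolate.

```python
import numpy as np

def white_matter_mask(fa_map, threshold=0.25):
    """Approximate white matter segmentation: keep voxels whose
    fractional anisotropy exceeds a threshold (value illustrative)."""
    return fa_map > threshold

def tensor_at_node(D, node_xy):
    """Assign a voxel tensor to a finite element node by nearest-
    neighbour lookup (interpolation would be better)."""
    i, j = (int(round(c)) for c in node_xy)
    return D[i, j]

fa = np.array([[0.1, 0.3], [0.5, 0.2]])
mask = white_matter_mask(fa)
D = np.arange(36, dtype=float).reshape(2, 2, 3, 3)  # toy tensor field
node_tensor = tensor_at_node(D, (0.9, 0.2))         # nearest voxel is (1, 0)
```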
Fig. 6. The region of interest, left, and the CSF region chosen for the diffusion equation

Table 1. Values of the solution of the diffusion equation at different points, t = 40 [2b]. These should be interpreted as probabilities of reaching the locations xi

Point | u(x, 40)  | û(x, 40)
x0    | 0.121119  | 0.173109
x1    | 0.0970939 | 0.0695148
x2    | 0.149793  | 0.00732437

4
Discussion
It is generally accepted in the diffusion MRI literature that tissue anisotropy should be described in terms of rotation-invariant indices based on some function of the eigenvalues, for example relative anisotropy. There is less agreement on how to use the diffusion tensor information efficiently for purposes such as studying connectivity within the brain. The recently proposed technique of tractography, which involves following the principal eigenvector at each voxel, has been widely reported [4,5,6,7], but has some disadvantages. This technique, which we refer to as traditional tractography, ignores a large amount of the data collected (the two other eigenvectors and the sizes of the eigenvalues), and does not correctly account for partial volume effects and noise in the data. With traditional tractography, if the wrong direction is chosen at any point (e.g. due to noise) then the rest of the trajectory is wrong. We have proposed an alternative strategy which can be described as probabilistic tractography. In this approach, we solve the full diffusion equation on a finite element grid derived from the MR data. Starting at a selected seed point, the signal diffuses throughout the brain, taking account of the diffusion tensor values at each location, and by comparing the amount of signal that has reached a target point of interest to what would have reached that target in
Fig. 7. The anisotropy indices and the mean diffusivity, for two different subjects (panels: FA, RA, and mean diffusivity D for each subject)
an isotropic medium, we can assign probabilities to connections. This can be re-run from as many different seed points as desired. We believe that this method makes fewer assumptions about the data than more traditional approaches; this is illustrated by the case where the two largest eigenvalues are almost identical, where our method avoids having to make a decision at each stage as to the correct direction to follow. A similar finite element methodology has previously been used to study sucrose distribution in the cat brain based on MR diffusion measurements [24]; however, that study was not related to connectivity analysis. We have demonstrated our technique on a single slice taken from a 3D MR diffusion tensor data set of two subjects, but the approach can easily be extended to three dimensions. We suggest correcting any diffusion tensors that are not positive definite prior to solving the diffusion equations. As we point out, these corrections should be done anyway, even if the tensor data is being processed merely for anisotropy images, as anisotropy will be erroneously overestimated if one of the eigenvalues is negative. An advantage of our approach is that it facilitates intersubject comparisons. For example, it might be desirable to study the relative strengths of connection between locations A and B compared with C and D in a cohort of subjects. This could be done by seeding points A and C in each subject, running the algorithm, and then comparing the probabilities calculated at points B and D. The locations A-D could be identified in the images in many ways. For some studies, these might be locations of functional activity identified using BOLD fMRI. In other cases, points A-D could be features on an atlas non-rigidly registered to each subject’s diffusion images [22,23]. It is worth noting that intersubject comparison using our approach does not require non-rigid transformation of the tensor values
Fig. 8. The xx, xy, and yy components of the tensor, sampled over the mesh representing the white matter. The contour of the brain is also displayed. Flat shading is used to show the underlying mesh structure; the gray values represent intensities, mapped linearly from minimum (black) to maximum (white) on a grayscale table
themselves, thus avoiding the difficulties in doing this identified by Alexander and Gee [25].
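The comparison step at the heart of this discussion, signal reaching a target under the measured tensors versus under homogeneous isotropic diffusion from the same seed, reduces to a per-target ratio. A minimal sketch follows; the names and the normalisation are our own, not the paper's notation, and the numbers are made up.

```python
def connection_scores(u, u_iso, targets):
    """Relative connectivity from one seed: for each target node index,
    the diffused signal u[t] divided by the signal u_iso[t] that the
    same seed would produce in a homogeneous isotropic medium."""
    return {t: u[t] / u_iso[t] for t in targets}

# Toy nodal solutions at some stopping time (illustrative values only):
scores = connection_scores({5: 0.30, 9: 0.05}, {5: 0.10, 9: 0.10}, [5, 9])
```

A score above 1 then suggests a stronger-than-isotropic connection to that target, and the computation can be repeated for every seed point of interest.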
Acknowledgements We would like to thank Laura Johnson, Donald Tournier, Dr A. Connelly, J. Schnabel, and Prof. D. Hawkes, and the EPSRC (Gr/N04867) for funding.
References 1. D. Le Bihan, R. Turner, P. Douek, and N. Patronas. Diffusion MR Imaging: Clinical Applications. AJR, 159:591–599, 1992. 2. A. Szafer, J.Z. Zhong, and J.C. Gore. Theoretical Model for Water Diffusion in Tissues. Magn. Reson. in Med., 33:697–712, 1995. 3. P. Basser and C. Pierpaoli. Microstructural and Physiological Features of Tissues Elucidated by Quantitative-Diffusion-Tensor MRI. Med. Phys., Series B 111:209–219, 1996. 4. P.J. Basser, S. Pajevic, C. Pierpaoli, J. Duda, and A. Aldroubi. In Vivo Fiber Tractography Using DT-MRI Data. Magn. Reson. in Med., 44:625–632, 2000. 5. T.E. Conturo, N.F. Lori, T.S. Cull, E. Akbudak, A.Z. Snyder, J.S. Shimony, R.C. McKinstry, H. Burton, and M.E. Raichle. Tracking neuronal fiber pathways in the living human brain. Proc. Natl. Acad. Sci. USA, 96:10422–10427, 1999. 6. D.K. Jones, A. Simmons, S.C. Williams, and M.A. Horsfield. Non-invasive assessment of axonal fiber connectivity in the human brain via diffusion tensor MRI. Magn. Reson. Med., 42:37–41, 1999.
Fig. 9. Top: diffusion at time t = 20 (left) and t = 40 (middle), and, right, homogeneous isotropic diffusion at t = 40. Below: the positions used in the table are marked by spheres, in clockwise order starting from the bottom, from x0 to x2; right, a three-dimensional view incorporating the ventricles
7. S. Mori, B.J. Crain, V.P. Chacko, and P.C. van Zijl. Three-dimensional tracking of axonal projections in the brain by magnetic resonance imaging. Ann. Neurol., 45:265–269, 1999. 8. C. Pierpaoli and P.J. Basser. Toward a Quantitative Assessment of Diffusion Anisotropy. Magn. Reson. in Med., 36(6), 1996. 9. C. Nicholson and E. Syková. Extracellular space structure revealed by diffusion analysis. Trends in Neurosciences, 21(5):207–215, 1998. 10. I. Vorisek and E. Syková. Evolution of Anisotropic Diffusion in the Developing Rat Corpus Callosum. J. Neurophysiol., 78:912–919, 1997.
11. P. Basser and C. Pierpaoli. A Simplified Method to Measure the Diffusion Tensor from Seven Images. Magn. Reson. in Med., 39:928–934, 1998. 12. D.K. Jones, M.A. Horsfield, and A. Simmons. Optimal Strategies for Measuring Diffusion in Anisotropic Systems by Magnetic Resonance Imaging. Magn. Reson. in Med., 42:515–525, 1999. 13. N.G. Papadakis, D. Xing, C.L.H. Huang, L.D. Hall, and T.A. Carpenter. A comparative study of acquisition schemes for diffusion tensor imaging using MRI. J. Magn. Reson., 137(1):67–82, 1999. 14. N.G. Papadakis, C.D. Murrills, L.D. Hall, C.L.H. Huang, and T.A. Carpenter. Minimal gradient encoding for robust estimation of diffusion anisotropy. Magn. Reson. Imag., 18(6):671–679, 2000. 15. N.G. Papadakis, D. Xing, G.C. Houston, J.M. Smith, M.I. Smith, M.F. James, A.A. Parsons, C.L.-H. Huang, L.D. Hall, and T.A. Carpenter. A Study of Rotationally Invariant and Symmetric Indices of Diffusion Anisotropy. Magn. Reson. in Med., 17(6):881–892, 1999. 16. G. Kindlmann, D. Weinstein, and D. Hart. Strategies for Direct Volume Rendering of Diffusion Tensor Fields. IEEE Trans. Vis. and Comput. Graphics, 6(2):124–138, 2000. 17. K.M. Martin, N.G. Papadakis, C.L.-H. Huang, L.D. Hall, and T.A. Carpenter. The Reduction of the Sorting Bias in the Eigenvalues of the Diffusion Tensor. Magn. Reson. Imag., 17(6):893–901, 1999. 18. G.J.M. Parker and J. Dehmeshki. A Fast Marching Analysis of MR Diffusion Tensor Imaging for Following White Matter Tracts. In Proceedings, MIUA 2000, pages 185–188, 2000. 19. T. Kato. Perturbation Theory for Linear Operators. Classics in Mathematics. Springer, 1995, reprint of 1980 edition. 20. H.P. Langtangen. Computational Partial Differential Equations. Number 2 in Lecture Notes in Computational Science and Engineering. Springer, 1999. 21. V. Thomée. Galerkin Finite Element Methods for Parabolic Problems. Springer Series in Computational Sciences. Springer, 1997. 22. D.L.G. Hill, C.R. Maurer Jr., A.J. Martin, S. Sabanathan, W.A. Hall, D.J. Hawkes, D. Rueckert, and C.L. Truwit. Assessment of intraoperative brain deformation using interventional MR imaging. In C. Taylor and A. Colchester, editors, Medical Image Computing and Computer-Assisted Intervention (MICCAI’99), volume 1679 of Lecture Notes in Computer Science, pages 910–919. Springer-Verlag, 1999. 23. C.R. Maurer Jr., D.L.G. Hill, A.J. Martin, H. Liu, M. McCue, D. Rueckert, D. Lloret, W.A. Hall, R.E. Maxwell, D.J. Hawkes, and C.L. Truwit. Investigation of intraoperative brain deformation using a 1.5T interventional MR system: preliminary results. IEEE Transactions on Medical Imaging, 17(5):817–825, 1998. 24. P.G. McQueen, A.J. Jin, C. Pierpaoli, and P.J. Basser. A Finite Element Model of Molecular Diffusion in Brain Incorporating in vivo Diffusion Tensor MRI Data. In ISMRM, page 193, 1996. 25. D.C. Alexander and J.C. Gee. Elastic Matching of Diffusion Tensor Images. Computer Vision and Image Understanding, 77:233–250, 2000.
Incorporating Image Processing in a Clinical Decision Support System Paul Taylor, Eugenio Alberdi, Richard Lee, John Fox, Margarita Sordo, and Andrew Todd-Pokropek Centre for Health Informatics and Multiprofessional Education, University College London, Archway Campus, Highgate Hill, London, UK, N19 3UA
[email protected] Abstract. A prototype system to assist radiologists in the differential diagnosis of mammographic calcifications is presented. Our approach is to incorporate image-processing operators within a knowledge-based decision support system. The work described in this paper involves three stages. The first is to identify a set of terms that can represent the knowledge required in an example of radiological decision-making. The next is to identify image processing operators to extract the required information from the image. The final stage is to provide links between the set of symbolic terms and the image processing operators.
1
Introduction
The intended application of our work is breast X-rays, or mammograms. Mammography is used in the USA and in many European countries to screen for breast cancer. It is the investigation of choice throughout the diagnosis, management and follow-up of the disease. Work on computer aids for the interpretation of mammograms began in the 1980s and the volume of articles published in the field continues to grow. The main thrust of this work has been the detection of abnormalities. There are a variety of clinical signs for which radiologists search when reporting mammograms. One of the most important is calcification. The chief difficulty is that calcifications are not necessarily indicative of cancer, and the differentiation of benign and malignant, or potentially malignant, calcifications requires high-level radiological expertise. The best algorithms are able to detect calcifications with very high levels of sensitivity. These levels of sensitivity are, however, only achieved at some cost in specificity. Attempts to use computers in mammography have therefore tended to rely on the human film-reader to maintain specificity. The idea is that the computer can be used to prompt for abnormalities. The prompts will alert the user to signs he or she might have missed, thus increasing sensitivity without, it is hoped, an adverse impact on specificity[1]. Researchers in this paradigm have tried to establish what impact false positive prompts have on the human film-reader. Is there a threshold for the false

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 134–140, 2001. © Springer-Verlag Berlin Heidelberg 2001
positive rate, above which the prompts cease to be helpful? It seems that there is no simple answer to this question[2]. One of the factors that affects performance is the radiologists’ understanding of the basis for the false positive prompts[3]. A goal for our work is that the system should be able to generate an explanation of why a certain set of calcifications is considered to be potentially malignant. The United Kingdom screening programme is facing a severe and worsening manpower crisis[4]. It is already necessary to involve non-radiologists in the interpretation of films. The evidence suggests that radiographers can be trained to interpret screening films with similar performance to radiologists[5]. We are keen to develop a system which can support the training of non-radiologist film readers, and believe that such a system must include a representation of the knowledge radiologists acquire when learning to interpret screening images. One of the problems facing mammography today is that improvements in the image acquisition process mean that many more signs are detected than was the case a few years ago. A pressing goal of research in mammographic image analysis is therefore to improve the capacity of the radiologist to assess accurately and consistently the level of risk associated with calcifications. A third goal for our work is to communicate useful information about the risk associated with calcifications. In the next section we give an overview of the prototype system we are developing. We then go on to describe in more detail the development of the knowledge base, the image-processing operators incorporated into the system, and the basis for the mapping between the two.
2
Overview of the Design of CADMIUM II
The prototype developed here is based on an earlier system known as CADMIUM (Computer Assisted Decision Making for Image Understanding in Medicine) [6]. CADMIUM II has a different architecture based on a client-server model. There are three principal components to the architecture. The CADMIUM client handles interactions with the user and the display of all required images. The CADMIUM server provides longer-term storage of the image data and is responsible for all image processing. The third element in the design is called the Solo Server. The Solo Server is a decision support engine. It uses a language known as Proforma which has been designed to represent clinical guidelines[7]. CADMIUM Client The CADMIUM Client supports the user interface and controls interaction with the CADMIUM Server and the Solo Server. When the user requests a case, the relevant images are retrieved from the CADMIUM Server. If he or she then requests decision support for that case, the Client calls the Solo Server, which handles the reasoning, and obtains the results of image processing from the CADMIUM Server. CADMIUM Server The CADMIUM server stores all of the patient data and all of the images. All image processing operations are performed on the server. The images used in the
prototype are mammograms digitised at a resolution of 50 microns and rescaled to 100 microns. The image processing algorithms are executed on the Server, and the results are then made available to the CADMIUM Client. Solo Server The Solo Server stores the Proforma protocols that represent the clinical decisions supported by the system and performs the symbolic reasoning. The representation of a decision includes the set of candidate options and the arguments that serve to increase or decrease the support for each candidate. The decision of interest in our application is the differential diagnosis of calcifications, and the candidates are the three diagnostic classes corresponding to the three different management options: benign, malignant and indeterminate. The arguments are based on properties of the calcifications.
3
Development of the Knowledge Base
The aim is to develop a computer aid that provides support in the form of arguments expressed in terms that are familiar to radiologists and other trained film-readers. The development of these arguments involved three steps. First, working with radiologists, we built a representation of the protocol followed in the UK breast screening programme. This included an element corresponding to the decision made in assessing calcifications. Next, we carried out studies to identify an appropriate set of terms to use in arguments based on the characteristics of calcifications. Finally, we identified the published evidence that could provide a sound basis for these arguments. In this paper we are chiefly concerned with the second of these three steps. Two ‘knowledge elicitation’ studies were performed. In the first, eleven radiologists were asked to think out loud as they interpreted twenty cases of calcifications. The audiotapes of these sessions were analysed and 159 different terms for the description of calcifications were identified. Working with radiologists, we removed synonyms and compound terms, reducing the set of descriptors to 50. A subset of 19 of these descriptors was useful in discriminating between benign and malignant calcifications. These discriminating descriptors were classified as high or low certainty arguments. High certainty arguments were only ever used to describe calcifications with a clear diagnosis, either benign or malignant. Low certainty arguments were never used for both benign and malignant calcifications, but were sometimes used for calcifications considered indeterminate on the basis of radiological appearance. Both sets are presented in Table 1. A more detailed account of this investigation is presented elsewhere[8]. The second knowledge elicitation study had two aims: first, to validate the descriptor set derived from the above study and, second, to obtain more data about the certainty associated with the arguments.
Ten radiologists reported on 40 sets of calcifications. They used a form based on the descriptor set. They were allowed to suggest new descriptors where the existing set was felt to be inadequate, but only minor adjustments to the descriptor set were required. We
High Certainty Arguments
Benign: vascular distribution; curvilinear shape; large size; contour with a rim; lucent density centre; isolated
Malignant: branching shape; ill-defined contour; orientation towards nipple

Lower Certainty Arguments
Benign: well defined contour; homogeneous variation; small AND scattered AND round; associated opacity AND few (1-5) flecks; low density AND small AND multiple AND scattered
Malignant: pleomorphism; small AND low density AND assoc. opacity

Table 1. Arguments derived from the first study
then constructed a table with a row for each of 45 descriptors and a column for each of the 40 sets of calcifications. Each cell of the table was used to record the number of radiologists who used that descriptor for that set of calcifications. This number varied between 0 and 10 and was assumed to correspond to the applicability of the descriptor to that set. These data form the basis for the mapping between the descriptor set and the image processing operations.
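The tabulation described above, counting for each descriptor and each set of calcifications how many of the ten radiologists applied it, can be sketched as follows. The shape of the input records is our own assumption.

```python
from collections import Counter

def applicability_table(reports, descriptors, cases):
    """reports: iterable of (radiologist, case, descriptors_used).
    Returns {(descriptor, case): count}, where count ranges from 0 to
    the number of radiologists and is read as the applicability of the
    descriptor to that case."""
    counts = Counter()
    for _, case, used in reports:
        for d in set(used):          # each radiologist counted at most once
            counts[(d, case)] += 1
    return {(d, c): counts[(d, c)] for d in descriptors for c in cases}

reports = [("r1", "case1", ["round", "small"]),
           ("r2", "case1", ["round"]),
           ("r1", "case2", ["branching"])]
table = applicability_table(reports, ["round", "small", "branching"],
                            ["case1", "case2"])
```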
4
Image Processing
The decision support provided in CADMIUM II is based on arguments about how the characteristics of calcifications relate to the risk of malignancy. In order to provide appropriate advice, the system must determine which of the different arguments in the knowledge base apply in a given case. In CADMIUM II, image processing is used to detect and characterise the calcifications in a mammogram. In the following two sub-sections the selection and implementation of operators for the detection and characterisation of calcifications are described. Detection Algorithms for the detection of calcifications have been published since the 1970s. The best of these achieve high levels of sensitivity at what are thought to be acceptable levels of specificity. Although there are a number of such algorithms in the literature, data comparing the performance of the different approaches are relatively scarce. We selected four quite different approaches to the detection of calcifications and reimplemented them according to a common scheme. The details of this work and of the subsequent comparison are published elsewhere[9]. Karssemeijer’s Markov random field approach was identified as the most appropriate for our purposes[10]. Characterisation We want to use a set of image processing measures to characterise the detected calcifications. By this we mean that we wish to identify the extent to
which they exhibit the various properties that were identified in building the knowledge base. Starting with the properties used in the high and low certainty arguments of Table 1, we identified a set of underlying dimensions we would have to measure. For a significant subset of these there are readily available image processing techniques that have been previously applied in imaging systems[11]. The remaining properties do not correspond obviously to measurements that have been used by previous authors working on the application of image analysis to mammography. We have designed a set of image processing measures that allow us to characterise mammograms according to these properties. Both sets are listed in Table 2.

Existing measures
Shape: round vs linear (compactness)
Size: small vs large (no. of pixels)
Density: lucent centre vs non (contrast across diameter); high vs low density (mean gray level)
Contour: well-defined vs ill-defined (mean border contrast)
Number of flecks: many vs few (no. of calcifications)
Distribution: scattered vs clustered (mean separation)
Variation between flecks: variable vs uniform shape (S.D. of shape measures); variable vs uniform size (S.D. of size)

New measures
Shape: branching vs not branching (nodes of skeleton); curvilinear vs straight (local curvature of skeleton)
Contour: with rim vs without (contrast over boundaries)
Associated finding: opacity vs no opacity (opacity detection)
Distribution: ductal vs segmental (cluster moment)
Orientation: towards nipple vs not (angle and tendency)

Table 2. Measures for the characterisation of calcifications
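Two of the simplest measures in Table 2, 'no. of pixels' for size and 'compactness' for round-versus-linear shape, can be sketched as follows. The compactness formula (perimeter squared over 4*pi*area, equal to 1 for a disc) is a common convention; the paper does not state its own, so it is an assumption here.

```python
import numpy as np

def area(mask):
    """Size measure: number of pixels inside the calcification."""
    return int(np.count_nonzero(mask))

def compactness(mask, perimeter):
    """Shape measure: perimeter^2 / (4 * pi * area); equals 1 for a
    disc and grows for elongated (linear) calcifications."""
    return perimeter ** 2 / (4.0 * np.pi * area(mask))

blob = np.ones((10, 10), dtype=bool)            # toy 100-pixel blob
a = area(blob)
c_disc = compactness(blob, 2.0 * np.sqrt(np.pi * 100))  # disc-equivalent perimeter
```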
Linking Image Processing and Symbolic Reasoning The final stage in the development of CADMIUM II is to map between the measures described in the above section and the symbolic descriptors used in the arguments. This involves two steps. First, it is necessary to determine that the expected association between the measure and the property referred to in the knowledge base actually exists. Secondly, it is necessary to establish which ranges of measurement values correspond to the appropriate qualifiers for the property. To give a concrete example, one of the properties used in the knowledge base is that of size. This is measured above as ‘area’, simply the number of pixels that fall within the boundary of an identified calcification.
To complete the required mapping between the knowledge base property ‘size’ and the image processing measurement ‘area’, we must first ensure that assessments of size correlate with measurements of area. Then we must identify the ranges of our measurement of area that correspond to the qualifiers used in the assessment of ‘size’: namely large, medium and small. The basis for both of these steps is the data collected in the second of the knowledge elicitation studies described above. The radiologists’ assessments of a set of calcifications can be compared with the image processing measurements to test the validity of the measurement. For those measures that are deemed valid, we can plot the image processing measurement against the frequency with which radiologists applied each of the linguistic qualifiers for that property. We can then set thresholds to optimise the separation between the distributions of the qualifiers.
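The threshold-setting step can be sketched as a simple search over the elicited data. This is a minimal two-qualifier version with names of our own choosing; the actual system handles three qualifiers (large, medium, small) and frequency-weighted labels.

```python
def best_threshold(values, labels, high_label="large"):
    """Pick the cut on a measurement (e.g. area) that best separates
    cases the radiologists called `high_label` from the rest, by
    minimising misclassifications over candidate thresholds."""
    def errors(t):
        return sum((v > t) != (lab == high_label)
                   for v, lab in zip(values, labels))
    return min(sorted(set(values)), key=errors)

areas = [12, 15, 20, 80, 95]
labels = ["small", "small", "small", "large", "large"]
cut = best_threshold(areas, labels)
```

With the toy data above, any measurement greater than the returned cut would be mapped to the qualifier 'large' in the symbolic arguments.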
Conclusion The development of highly sensitive algorithms for the detection of calcifications has focussed attention on the need to make these algorithms more specific. Many teams have considered the use of neural nets, or other classification schemes, to discriminate between benign and malignant calcifications. The general approach is to detect calcifications, to derive a set of features from the detected calcifications, and then to use these, in combination with some kind of gold standard, to ‘learn’ a classification rule[11]. Many of the approaches, including all of the neural network approaches, involve the derivation of a rule that remains implicit, that is to say one which cannot be explained to the user. We believe that users will perform better if the rule or rules used in the classification of calcifications can be made explicit. Such a system would not only be of value as a decision support tool but could also have a role in supporting the training of non-radiologist film-readers. One of the important areas of further work is the development of techniques to handle uncertainty. There are three distinct forms of uncertain information involved. The first is the uncertainty associated with the image data. The images contain noise in the form both of scattered radiation and of background texture, and the detection of calcifications in the image is inevitably associated with a degree of error. The second form of uncertainty concerns the mapping between the quantitative measurements of image properties and the more qualitative terms used in the knowledge base. The final form of uncertainty concerns the strength of the arguments that relate the characteristics of calcifications to provisional diagnoses and management options. We expect to use different approaches to handle the different forms of uncertain information. At the lowest level, that of the image data, we believe that a Bayesian approach is most appropriate.
As regards the higher levels, our belief is that users are more interested in a clear statement of rough levels of certainty than in precise estimates of probability, and that the argumentation approach adopted here will be appropriate.
Acknowledgements This work was supported by the United Kingdom’s Engineering and Physical Sciences Research Council and the Imperial Cancer Research Fund. The help of Drs Given-Wilson, Davies, Schneider, Cooke, Rankin, Nockler and their colleagues is gratefully acknowledged.
References 1. Roehrig, J., Doi, T., Hasegawa, A. et al. Clinical results with the R2 Imagechecker system. In: Karssemeijer, N., Thijssen, M., Hendriks, J., and van Erning, L. (eds.) Digital Mammography, Nijmegen. Kluwer Academic Publishers, Dordrecht (1998) 395–400 2. Astley, S., Zwiggelaar, R., Wostenholme, C. et al. Prompting in mammography: how good must prompt generators be? In: Karssemeijer, N., Thijssen, M., Hendriks, J., and van Erning, L. (eds.) Digital Mammography, Nijmegen. Kluwer Academic Publishers, Dordrecht (1998) 347–354 3. Hartswood, M., Procter, R., and Williams, L. Prompting in practice: How can we ensure radiologists make best use of computer-aided detection systems in screening mammography? In: Karssemeijer, N., Thijssen, M., Hendriks, J., and van Erning, L. (eds.) Digital Mammography, Nijmegen. Kluwer Academic Publishers, Dordrecht (1998) 363–370 4. Field, S. UK Radiologist workforce survey - Breast Imaging Service. Royal College of Radiologists Newsletter 54 (1998) 12–14 5. Cowley, H. and Gale, A. PERFORMS and mammographic film reading performance: radiographers, breast physicians and radiologists. Tech. Rep., Institute of Behavioural Sciences, University of Derby. A report for the Workforce Issues in the Breast Screening Programme meeting (1999) 6. Taylor, P., Fox, J. and Todd-Pokropek, A. The development and evaluation of CADMIUM: a prototype system to assist in the interpretation of mammograms. Medical Image Analysis 3 (1999) 321–337 7. Fox, J., Johns, N. and Rahmanzadeh, A. Dissemination of medical knowledge: the PROforma approach. Artificial Intelligence in Medicine 14 (1998) 157–181 8. Alberdi, E., Taylor, P., Lee, R. et al. CADMIUM II: Acquisition and representation of radiological knowledge for computerized decision support in mammography. In: Overhage, J. (ed.): Proceedings of the American Medical Informatics Association Symposium, American Medical Informatics Association (2000) in press 9. Lee, R., Taylor, P. and Alberdi, E.
A comparative study of four techniques for calcification detection. In: Yaffe M. (ed.) Proceedings of the Fifth International Workshop on Digital Mammography, Medical Physics Publishing (2000) in press 10. Karssemeijer, N. Adaptive noise equalisation and recognition of microcalcification clusters in mammograms. International Journal of Pattern Recognition and Artificial Intelligence 7 (1993) 1357–1376 11. Giger, M., Huo, Z., Kupinski, M. and Vyborny, C. Computer-aided diagnosis in mammography. In: Sonka M. and Fitzpatrick, J. (eds.) SPIE Handbook of Medical Imaging: Volume 2, International Society for Optical Engineering, (2000) 915–1004
Automated Estimation of Brain Volume in Multiple Sclerosis with BICCR D. Louis Collins, Johan Montagnat, Alex P. Zijdenbos, Alan C. Evans, and Douglas L. Arnold Montreal Neurological Institute McGill University, Montreal, Canada {louis,jmontagn,alex,alan,doug}@bic.mni.mcgill.ca http://www.bic.mni.mcgill.ca
Abstract. Neurodegenerative diseases are often associated with loss of brain tissue volume. Our objective was to develop and evaluate a fully automated method to estimate cerebral volume from magnetic resonance images (MRI) of patients with multiple sclerosis (MS). In this study, MRI data from 17 normal subjects and 68 untreated MS patients was used to test the method. Each MRI volume was corrected for image intensity non-uniformity, intensity normalized, brain masked and tissue classified. The classification results were used to compute a normalized metric of cerebral volume based on the Brain to IntraCranial Capacity Ratio (BICCR). This paper shows that the computation of BICCR using automated techniques provides a highly reproducible measurement of relative brain tissue volume that eliminates the need for precise repositioning. Initial results indicate that the measure is both robust and precise enough to monitor MS patients over time to estimate brain atrophy. In addition, brain atrophy may yield a more sensitive endpoint for treatment trials in MS and possibly for other neuro-degenerative diseases such as Huntington’s or Alzheimer’s disease.
1 Introduction and Previous Work
A number of neuro-degenerative diseases are characterized by brain tissue loss. For example, multiple sclerosis (MS) is a neurological disorder that predominantly affects young adults and is associated with recurrent attacks of focal inflammatory demyelination (plaques) that cause neurological impairment, separated by periods of relative stability. It is difficult to evaluate the effect of therapy in clinical trials of MS since it is a complex disease with a high degree of variability in clinical signs and symptoms that vary over time and between individuals. The clinically accepted gold standard measure for burden of disease in MS is the Kurtzke Expanded Disability Status Scale (EDSS) [1]. Unfortunately, this metric is highly variable between neurologists (large inter-rater variability), is dependent on the timing of the test with respect to the latest exacerbation of the disease and has a variable sensitivity to change depending on the degree

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 141–147, 2001. © Springer-Verlag Berlin Heidelberg 2001
of clinical disability. Taken together, these factors make it difficult to precisely and accurately quantify the overall burden of disease. Therefore, large numbers of subjects (hundreds) are required to participate in clinical trials for new drug evaluation in order to have enough statistical power to detect oftentimes subtle differences between treatment arms. Our goal has been to develop an objective, automatic, robust image-based method to quantify disease burden in MS. Our interest has turned to central nervous system (CNS) atrophy since histopathological work has confirmed that substantial axonal loss occurs in MS plaques [2] and recent quantitative work has confirmed that CNS atrophy is greater in MS patients than in age-matched normals [3,4,5]. We propose a fully automated, head-size normalized, brain-volume estimation procedure. The BICCR metric is defined as the ratio of brain tissue volume to the total volume enclosed within the intra-cranial cavity. The volumes are derived from the result of a tissue classification process. This metric is similar to the brain parenchymal fraction (BPF) of Fisher [6], where BPF is defined as the ratio of brain tissue volume to total volume enclosed by the brain surface. The main difference is that all extra-cerebral CSF (i.e., CSF between the cortex and dura, in addition to that in the sulci) is included in the BICCR measure. We will show that the BICCR measure is better correlated with disability, and thus may be a better surrogate for disease burden.
2 Methods

2.1 Data
Controls: Seventeen normal healthy controls (age range of 25-61 years) were recruited from the staff, students and research fellows of the Montreal Neurological Institute and McGill community. Patients: Seventy patients with MS were selected from the population followed in the Montreal Neurological Hospital MS clinic. Forty-eight patients were classified as relapsing-remitting (RR), characterized by recurrent relapses with complete or partial remission (disease duration 0.5 to 24 years, EDSS range 0-5.0, age range 26-58). Twenty-two patients were classified as secondary progressive (SP), characterized by progression in the absence of discrete relapses after earlier RR disease (disease duration 4 to 36 years, EDSS range 3.5-9.0, age range 27-59 years). MRI acquisition: All MR data was acquired on a Philips Gyroscan operating at 1.5 T (Philips Medical Systems, Best, The Netherlands) using a standard head coil and a transverse dual-echo, turbo spin-echo sequence, 256x256 matrix, 1 signal average, 250mm field of view, (TR/TE1/TE2 = 2075/32/90 ms) yielding proton density-weighted (PDW) and T2-weighted (T2W) images. Fifty contiguous 3mm slices were acquired approximately parallel to the line connecting the anterior and posterior commissures (AC-PC line).
Fig. 1. Diagram of the atrophy computation method stages: non-uniform intensity correction, anisotropic diffusion, stereotaxic registration, intensity normalization, Bayesian classification, masking and volume computation, and atrophy computation.
2.2 Data Analysis
Atrophy Estimation. The fully automated method uses MR images to quantify brain atrophy and is based on estimation of the brain to intracranial capacity ratio (BICCR). The method estimates the intracranial, brain parenchymal and CSF volumes and uses these values in a ratio described below. The technique is voxel-based: each image voxel is classified as brain tissue, CSF or background. The number of voxels in each class multiplied by the elementary voxel volume gives an estimate of actual tissue and CSF volumes. As a voxel-based approach, this method requires preliminary processing stages that aim at correcting the image intensities by minimizing the bias and the noise due to the acquisition device. Images are also registered in a common brain-based coordinate space (Talairach) by a linear registration procedure. This ensures that scale differences between individuals are compensated for and that the resulting atrophy measure is invariant to brain size. Figure 1 diagrams the atrophy measure stages. The processing stages involve: Intensity non-uniformity correction. The inhomogeneity of the magnetic field of the MR acquisition device introduces a bias perceptible in images as a continuous variation of gray-level intensities. The non-uniform intensity correction algorithm [7] proceeds iteratively by computing the image histogram and estimating a smooth intensity mapping function that tends to sharpen peaks in the histogram. The intensities for each tissue type thus have a tighter distribution and are relatively flat over the image volume. Application of this procedure improves the accuracy of the tissue classification stage described below [7]. Stereotaxic registration. Each image is linearly registered into a common Talairach space in order to compensate for size variations between individuals. Moreover, the Talairach-like brain-based coordinate system of stereotaxic space facilitates anatomically driven data manipulation in all processing steps.
The target image for stereotaxic registration is a template image built from an earlier study [8] involving the averaging of more than 300 MR images. The registration algorithm proceeds with a coarse-to-fine approach by registering subsampled and blurred MRI volumes with the stereotaxic target [9]. The final data used for subsequent processing is only resampled once to minimize resampling/interpolation artefacts.
Intensity normalization. In preparation for intensity-based classification, each image is intensity normalized to an average PDW (PD-weighted) or T2W (T2-weighted) target volume already in stereotaxic space. An affine intensity mapping is estimated that best maps the histogram of each image onto the template. After normalization, the histogram peaks corresponding to each tissue class have the same value in all images. In conjunction with intensity non-uniformity correction, this step permits data from all subjects to be classified using a single trained classifier (i.e., the classifier does not have to be retrained for each subject). Cropping. Since the entire cerebrum was not covered by the MRI acquisition in all subjects, the inferior (z < −22mm, in Talairach coordinates) and superior (z > 58mm) slices were cropped away from both PDW and T2W volumes, cutting off the very top of the brain (above the centrum semi-ovale) and the bottom of the brain (just above the pons). This yielded an anatomically equivalent 80mm thick volume across all subjects that contains most of the cerebrum. Anisotropic diffusion. It has been shown that the application of an edge-preserving noise filter can improve the accuracy and reliability of quantitative measurements obtained from MRI [10,11]. We have selected anisotropic diffusion, a filter commonly used for the reduction of noise in MRI. This type of filter was pioneered by Perona and Malik [12] and generalized for multidimensional and multispectral MRI processing by Gerig et al. [13]. This stage reduces voxel misclassification due to noise and minimizes the speckled appearance sometimes apparent in the resulting classified images. Bayesian classification. A Bayesian classifier [14] is then used to identify all grey-matter (GM), white-matter (WM), cerebrospinal fluid (CSF), lesion (L) and background (BKG) voxels.
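The anisotropic diffusion stage can be illustrated with a minimal single-channel 2-D sketch of the Perona-Malik scheme (the paper applies the multispectral generalization of Gerig et al. [13]; the conduction constant `kappa`, step size `lam` and iteration count below are illustrative choices, not values from the paper):

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=20.0, lam=0.2):
    """Edge-preserving smoothing: diffuse freely in homogeneous regions,
    but let the conductance fall toward zero across strong gradients (edges)."""
    img = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        p = np.pad(img, 1, mode="edge")          # replicate borders
        dN = p[:-2, 1:-1] - img                  # differences to the
        dS = p[2:, 1:-1] - img                   # four nearest
        dW = p[1:-1, :-2] - img                  # neighbours
        dE = p[1:-1, 2:] - img
        c = lambda d: np.exp(-(d / kappa) ** 2)  # exponential conductance
        img += lam * (c(dN) * dN + c(dS) * dS + c(dW) * dW + c(dE) * dE)
    return img
```

With `lam` at most 0.25 the four-neighbour update is stable; noise-scale gradients are smoothed away while tissue boundaries (conductance near zero) stay sharp, which is what reduces speckled misclassification in the subsequent classification stage.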
Prior to classification, the Bayes classifier is trained manually by selecting a set of 20 volumes randomly among all volumes to be processed. From each sample volume, 50 voxels belonging to each class are selected by hand. The resulting 5000 samples (20 volumes × 50 samples × 5 classes) were used to compute each class mean intensity and the covariance matrices used in the Bayesian classifier. Brain masking. Mathematical morphology [15] was used to eliminate the scalp and meninges from further processing. A brain mask was created by applying an opening operator (i.e., erosion followed by dilation) to the PDW volume after thresholding at 40% of the mean PDW intensity value. Voxels remaining in the regions of the eyes and nasal sinus were removed using a standard mask in stereotaxic space. The resulting patient-specific brain mask was applied to both the PDW and T2W volumes leaving all voxels within the intracranial cavity. BICCR computation. After processing, the total volume of voxels in each class was used to define the BICCR metric: BICCR =
(GM + WM + L) / (GM + WM + L + CSF)    (1)
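Given the discrete label volume produced by the classifier, equation (1) reduces to counting voxels. A sketch (the integer label codes are arbitrary assumptions, not from the paper):

```python
import numpy as np

# hypothetical label codes for the five classes
BKG, GM, WM, CSF, LESION = 0, 1, 2, 3, 4

def biccr(labels):
    """Brain to IntraCranial Capacity Ratio from a classified,
    brain-masked label volume (equation 1)."""
    counts = np.bincount(np.asarray(labels).ravel(), minlength=5)
    parenchyma = counts[GM] + counts[WM] + counts[LESION]
    return parenchyma / (parenchyma + counts[CSF])
```

Because the metric is a ratio of voxel counts, the elementary voxel volume cancels, which is why BICCR also absorbs voxel-size differences between scans due to scanner drift.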
It is important to note that the value of CSF contains all extra-cerebral cerebrospinal fluid within the cropped volume in addition to the ventricular and sulcal components. Similar to the brain parenchymal fraction (BPF) of Fisher
and Rudick [6,4], the BICCR metric is a ratio and not only represents a size-normalized index of brain atrophy but also accounts for possible differences in voxel size between scans due to scanner drift. To determine the reproducibility of the method, 4 healthy volunteers were scanned on 2 separate occasions over a mean period of 222 days. BICCR was computed for each image set. Reproducibility was estimated by computing the coefficient of variation of the repeated measures.

Fig. 2. Results: (a) box & whisker plot for comparison of BICCR mean values (heavy circles) for NC, RR and SP groups; correlation of BICCR with age (b) and disease duration (c) (RR=black circles, SP=grey squares).
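The scan-rescan reproducibility described above is the coefficient of variation of repeated measures, which can be sketched as follows (the example numbers in the comment are illustrative, not the study's data):

```python
import numpy as np

def coefficient_of_variation(measures):
    """CV in percent of repeated measurements on the same subject
    (sample standard deviation over the mean)."""
    m = np.asarray(measures, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# e.g. a scan-rescan pair of BICCR values (in percent):
# coefficient_of_variation([86.3, 86.1])
```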
3 Results
The BICCR value for the normal control (NC, n=17) subjects was 86.1 ± 2.8 (mean ± s.d.). The mean coefficient of variation estimated on scan-rescan tests of 4 normal controls was 0.21%. Comparison of the mean BICCR values for the NC, RR and SP groups is presented in Figure 2-a. An ANOVA showed a significant difference between groups (F = 8.885, p < 0.001). A post-hoc test (Tukey’s HSD) showed that BICCR was significantly lower in the secondary progressive group (81.3 ± 5.1) than either the NC group (p < 0.001) or the relapsing-remitting group (84.5 ± 4.3; p = 0.01). The Z-score (number of standard deviations from the mean of healthy controls) was −0.673 for RR (not significantly different from NC) and −1.864 (p < 0.001) for SP groups. The average absolute percentage of brain tissue lost (compared to normal controls) was 1.8% for RR and 5.6% for SP groups. We looked at the relationship between BICCR with respect to age, disease duration and EDSS. ANOVA showed no significant differences in age between the NC, RR and SP groups (F = 1.134, p = 0.327). As expected, the mean duration of disease of the SP group was significantly greater than that for the RR group (Student’s t = 3.88, p < 0.001). Also as expected, disability (measured by EDSS) was greater for the SP group when compared to that of the RR group (t = 11.43, p < 0.001).
For the RR group, BICCR was correlated with disease duration (Spearman r = −0.523, p < 0.001), but not with age, nor disability as measured by EDSS (see Figs 2-b and -c). For the SP group, BICCR was correlated with disease duration (Spearman r = −0.661, p < 0.001) and EDSS (Spearman r = −0.649, p < 0.001) but not with age. When evaluated over all patients with MS (RR and SP combined), BICCR was correlated with EDSS (r = −0.409, p < 0.01) and duration (r = −0.593, p < 0.0001). The main difference between the BICCR and BPF metrics is the inclusion of extra-cerebral CSF in the denominator. In a simple test to compare the correlation of disability (measured by EDSS) with BICCR and a measure similar to BPF, we used morphological operators to remove the extra-cerebral CSF voxels from the BICCR metric. When evaluated on 20 SP MS patients, the magnitude of the Spearman correlation coefficient dropped from 0.638 (BICCR) to 0.574 (modified BICCR).
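The rank correlations reported above are standard Spearman coefficients (`scipy.stats.spearmanr` computes them directly); a dependency-free sketch is the Pearson correlation of the rank vectors (no tie handling, which real clinical data such as EDSS scores would require):

```python
import numpy as np

def spearman_r(x, y):
    """Spearman correlation = Pearson correlation of the ranks
    (assumes no ties; ties would need average ranks)."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    return np.corrcoef(rx, ry)[0, 1]
```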
4 Discussion
We have presented a robust procedure to estimate brain atrophy using a fully automatic technique and have applied it to MRI data from normal controls and patients with MS. We have confirmed that the brains of patients with MS have greater atrophy when compared to normal controls, and that atrophy progresses with the severity and duration of the disease. Our procedure compares well to the BPF measure of Fisher [6]. The mean BPF and BICCR values are similar for normal controls. However, the BPF method is reported to have a very small intersubject variance when estimated on normal controls (approximately 0.7%). This value is much smaller than the variance for normal controls reported here. This may be due to subject selection and the greater age range for our normal controls. Another difference between the two techniques is that the classification procedure used in the BPF computation accounts for partial volume effects between tissue classes, while the BICCR method uses a discrete classification result. While discrete classification should yield an unbiased result for objects that are larger than the voxel size, it may cause the BICCR method to underestimate CSF volume in regions that have dimensions on the order of the voxel size, in sulci for example. The high precision of the BICCR method permits detection of small changes (∼0.5%) in brain volume (i.e., atrophy) in single subjects over a short period of time (< 1 year). Comparison of BICCR with a BPF-like measure shows that BICCR correlates better with disability, making it possibly a more sensitive surrogate for disease burden. These results have important implications for the design of clinical trials if atrophy is deemed an acceptable surrogate for burden of disease in MS. The fact that cerebral atrophy is generally correlated with irreversible neurological dysfunction makes atrophy an important surrogate to evaluate in MS using state of the art image analysis techniques.
Characterization of brain atrophy will yield information complementary to other MR-based measures of focal and diffuse abnormality with varying specificity for underlying pathological changes. Brain atrophy may yield a more sensitive endpoint for treatment trials in MS and possibly also for other neurodegenerative diseases such as Huntington’s or Alzheimer’s disease. Acknowledgements: Funding for this work was provided by the Medical Research Council of Canada.
References

1. J. F. Kurtzke, “Rating neurologic impairment in multiple sclerosis: An expanded disability status scale,” Neurology, vol. 33, pp. 1444–1452, 1983.
2. B. D. Trapp, J. Peterson, R. M. Ransohoff, R. Rudick, S. Mörk, and L. Bö, “Axonal transection in the lesions of multiple sclerosis,” New England Journal of Medicine, vol. 338, pp. 278–85, 1998.
3. J. Simon, L. Jacobs, M. Campion, et al., “A longitudinal study of brain atrophy in relapsing multiple sclerosis. The Multiple Sclerosis Collaborative Research Group (MSCRG),” Neurology, vol. 53, no. 1, pp. 139–48, 1999.
4. R. Rudick, E. Fisher, J.-C. Lee, J. Simon, D. Miller, and L. Jacobs, “The effect of Avonex (IFNβ-1a) on cerebral atrophy in relapsing multiple sclerosis,” Neurology, vol. 52, pp. A289–290, Apr 1999.
5. M. Filippi, G. Mastronardo, M. A. Rocca, C. Pereira, and G. Comi, “Quantitative volumetric analysis of brain magnetic resonance imaging from patients with multiple sclerosis,” J Neurol Sci, vol. 158, pp. 148–53, Jun 30 1998.
6. E. Fisher, R. Rudick, J. Tkach, J.-C. Lee, T. Masaryk, J. Simon, J. Cornhill, and J. Cohen, “Automated calculation of whole brain atrophy from magnetic resonance images for monitoring multiple sclerosis,” Neurology, vol. 52:A352, 1999.
7. J. G. Sled, A. P. Zijdenbos, and A. C. Evans, “A non-parametric method for automatic correction of intensity non-uniformity in MRI data,” IEEE Transactions on Medical Imaging, vol. 17, Feb. 1998.
8. A. C. Evans, D. L. Collins, and B. Milner, “An MRI-based stereotactic atlas from 250 young normal subjects,” Soc. Neurosci. Abstr., vol. 18, p. 408, 1992.
9. D. L. Collins, P. Neelin, T. M. Peters, and A. C. Evans, “Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space,” Journal of Computer Assisted Tomography, vol. 18, pp. 192–205, March/April 1994.
10. J. R. Mitchell, S. J. Karlik, D. H. Lee, M. Eliasziw, G. P. Rice, and A. Fenster, “Quantification of multiple sclerosis lesion volumes in 1.5 and 0.5T anisotropically filtered and unfiltered MR exams,” Medical Physics, vol. 23, pp. 115–126, 1996.
11. A. P. Zijdenbos, B. M. Dawant, R. A. Margolin, and A. C. Palmer, “Morphometric analysis of white matter lesions in MR images: Method and validation,” IEEE Transactions on Medical Imaging, vol. 13, pp. 716–724, Dec. 1994.
12. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, pp. 629–639, July 1990.
13. G. Gerig, O. Kübler, R. Kikinis, and F. A. Jolesz, “Nonlinear anisotropic filtering of MRI data,” IEEE Transactions on Medical Imaging, vol. 11, pp. 221–232, 1992.
14. R. Duda and P. Hart, Pattern Classification and Scene Analysis. New York: Wiley, 1973.
15. J. Serra, Image Analysis and Mathematical Morphology. London: Academic Press, 1982.
Automatic Image Registration for MR and Ultrasound Cardiac Images Caterina M. Gallippi and Gregg E. Trahey Department of Biomedical Engineering, Duke University, Durham, NC 27708
[email protected]

Abstract. The Statistics Based Image Registration (SBR) method for automatic image registration is presented with application to magnetic resonance (MR) and ultrasound (US) cardiac time series images. SBR is demonstrated for MR myocardial perfusion assessment and US myocardial kinetics studies. The utility of the method for a range of other clinical applications is discussed.
1 Introduction
Accurate multi- and mono-modal image registration could enhance the diagnostic relevance of medical imaging by aligning information in a fashion conducive to disease assessment, motility estimation, or volume measurement. Fully automatic registration methods can expedite information alignment and obviate human error introduced by user selection of corresponding image information. Many techniques for image registration have been described. Extrinsic methods are sometimes inconvenient and cannot be applied retrospectively. Landmark and segmentation based methods may require labor intensive human interaction. Pixel or voxel property based registration methods require no a priori information and can be applied to many modes of medical imaging with no data reduction or image segmentation [1]. Several paradigms are reported in the literature for pixel property based image registration (Geiman et al. [2]; Hein et al. [3]; Maes et al. [4]; Mailloux et al. [5]; Meyer et al. [6]; Penney et al. [7]; Wells et al. [8]). One pixel property registration method, maximization of mutual information (MI), is based on the principle of identifying image regions that have high individual brightness entropy but low joint entropy. Wu et al. outline another approach to image matching that uses a modified correlation measure instead of probability distributions to match pixels in high contrast regions with similar brightness statistics [9]. Developing a single registration algorithm that is substantially robust for aligning both MR to MR and US to US images is a challenging task given the inherent differences in the two imaging modalities. Contrast sensitivity is a major attribute of MR, while in US it is degraded by the presence of a granular structure called speckle that results from the coherent interaction of echo reflections [10]. One of the authors (Gallippi) adapted Wu et al.’s method and applied the technique to automatic image registration and warping of MR myocardial perfusion images [11]. We previously showed the technique, Statistics Based Registration (SBR), to outperform correlation and SADS based methods for registration of 2D US breast tissue images beyond the range of speckle correlation [12]. In the current paper, we extend our algorithm’s application to registration of 2D US cardiac stress images. We describe the technique as developed for mono-modal MR image registration and detail the minor modifications for application to US mono-modal image registration. We discuss the clinical usefulness of the technique in US imaging and consider the potential for multi-modal image registration via SBR.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 148–154, 2001. © Springer-Verlag Berlin Heidelberg 2001
2 Statistics Based Registration (SBR)
MR images of 12 patients were acquired on a 1.5 T Siemens Vision scanner using a phased array chest coil. Two left ventricular short axis and one long axis Gd-DTPA (Magnevist, Berlex, Wayne, NJ) perfusion images were acquired during each cardiac cycle. The images had a calibration of 1.30mm per pixel axially and 1.03mm per pixel laterally [11]. Four chamber, short axis and long axis US stress examinations were performed in harmonic imaging mode for 4 patients on a Hewlett-Packard scanner at 1.8/3.6 MHz. The transducer aperture was 20mm, yielding a 2.3mm lateral resolution. The image calibration was roughly 0.53mm. Twelve frames were acquired per one half cardiac cycle. The SBR method begins with the selection of a single template image to which all other images in the data set (hereafter referred to as ’target’ images) are registered. For time series data sets, the template image was selected as an image near the center of the acquisition period because the center images were likely to be the most similar to all the other images. For each pixel in the template image, the local brightness variation extent, mean and standard deviation in an 11×11 pixel window around the pixel of interest were computed. The kernel size corresponded to approximately 14.30×11.33mm for MR and 6×6mm or 2.5×6 resolution cells for US. As defined by Wu et al., the local brightness variation extent is

Var = 1 − 1 / (1 + α1 δw)    (1)
where α1 is a constant that depends on the input images’ histogram, and δw is the standard deviation within the 11×11 window. Considering the image resolution cell size, the window size was determined to be large enough to represent the expected local information but small enough to avoid blurring the statistical distribution in the region surrounding the pixel of interest. Finally, the direction of increasing pixel brightness, or the edge direction, for each template pixel was determined as described below. For pixel Pi,j,

edgev = (Pi+1,j−1 + 2Pi+1,j + Pi+1,j+1) − (Pi−1,j−1 + 2Pi−1,j + Pi−1,j+1)    (2)
edgeh = (Pi−1,j+1 + 2Pi,j+1 + Pi+1,j+1) − (Pi−1,j−1 + 2Pi,j−1 + Pi+1,j−1)    (3)
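Equations 2 and 3 are vertical and horizontal Sobel-style responses. They can be sketched in vectorized form over a whole image (border handling by edge replication is an implementation choice of mine):

```python
import numpy as np

def edge_directions(img):
    """Per-pixel edge_v and edge_h of equations 2 and 3; only their
    signs enter the matching score."""
    p = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    edge_v = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]) \
           - (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:])
    edge_h = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]) \
           - (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2])
    return edge_v, edge_h
```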
where edgev is the vertical edge direction and edgeh is the horizontal edge direction. Prior to registration, each target image was segmented into 16 blocks, edge detection was performed with a Sobel operator, and pixels on the strongest 25% of the edges in each block were automatically designated as landmark pixels. There were enough blocks to guarantee a distribution of landmarks throughout the target image, but not so many that weak edges were designated as landmark pixels. The criterion that only the strongest 25% of edges become landmark pixels ensured that landmarks were located in regions of relatively high spatial frequency. For each landmark pixel in the target image, the local brightness variation extent, mean and standard deviation were computed in the 11×11 kernel, and the edge direction was found as described in equations 1, 2 and 3 for the template image. For each landmark pixel in the target image, the best matching pixel in an N×N search window in the template image was identified. While taking image resolution into consideration, the search window size was chosen empirically to be large enough to accommodate anticipated motion but small enough to impose a loose constraint on the search space and keep computational cost within reason. For the MR images registered, N = 21, corresponding to a 27.30×21.63mm kernel. For the US images, N = 41, corresponding to a 21×21mm or 9.3×20.0 resolution cell kernel. The quality of the match was quantified by calculating a score as a function of the previously calculated local brightness variation extents, means, variances and edge directions as follows:

Score = [(I1 − I1ave) × (I2 − I2ave)] / [σ²(I1) × σ²(I2)] × Var1 × Var2 × [sign(edgev1) × sign(edgev2)] × [sign(edgeh1) × sign(edgeh2)]    (4)

where I is pixel brightness, Iave is the mean brightness, Var is the local brightness variation extent, edgev is the vertical edge direction, and edgeh is the horizontal edge direction.
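The matching score of equation 4 for one candidate pixel pair, given the precomputed window statistics, can be transcribed directly (variable names are mine; `var1` and `var2` stand for the σ² terms):

```python
import numpy as np

def match_score(i1, i2, mean1, mean2, var1, var2, ext1, ext2,
                edge_v1, edge_v2, edge_h1, edge_h2):
    """Equation 4: brightness-deviation product normalized by the
    variances, weighted by the local variation extents and by the
    agreement of the edge-direction signs."""
    stat = (i1 - mean1) * (i2 - mean2) / (var1 * var2)
    edges = (np.sign(edge_v1) * np.sign(edge_v2)
             * np.sign(edge_h1) * np.sign(edge_h2))
    return stat * ext1 * ext2 * edges
```

A sign disagreement in either edge direction flips the score negative, so such candidates never win the search.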
The number ’1’ corresponds to the target image, and the number ’2’ signifies the template image. Each landmark pixel’s matching score was supported by the strength of matches in its neighborhood. Using this criterion for matching pixels, each landmark pixel was individually assigned a unique translation corresponding to the index of the respective highest scoring pixel in the template image. The nature of the registration is not necessarily rigid or affine; each landmark pixel can be translated independently of any other pixel. The predicted translations were filtered by discounting the registration of pixel pairs whose matching scores were less than 90% of the local maximum matching score. In addition to filtering out low scoring candidates, registration of US images required more aggressive filtering. Bright speckle points were occasionally detected as landmarks in the target images. Landmarks detected from speckle points rather than anatomical information may misregister and incorporate inaccurate pixel matches into the overall image registration. To filter such misregistrations, we discounted predicted registrations beyond 50% of the median
magnitude and direction of translation in a 41×41 pixel kernel. In summary, the SBR technique performs fully automatic mono-modal registration of both MR and US images via the following steps:

1. Select an image near the center of the data set as the template image.
2. Compute statistical information and edge directions for every template pixel.
3. For each target image, automatically designate landmark pixels.
4. Compute statistical and edge information for target landmark pixels.
5. Search for each landmark pixel’s best matching pixel in the template search kernel, using equation 4.
6. Filter predicted registrations to discount inaccurate matches.
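Step 3, landmark designation, can be sketched as below, using a numpy gradient magnitude as a stand-in for the Sobel operator and a per-block quantile for the "strongest 25%" criterion (both are implementation choices of mine, not details from the paper):

```python
import numpy as np

def designate_landmarks(img, blocks=4, keep_frac=0.25):
    """Boolean mask of landmark pixels: per block of a blocks x blocks
    grid (4x4 = 16 blocks), keep pixels whose edge magnitude exceeds
    the block's (1 - keep_frac) quantile."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    mask = np.zeros(mag.shape, dtype=bool)
    for rows in np.array_split(np.arange(mag.shape[0]), blocks):
        for cols in np.array_split(np.arange(mag.shape[1]), blocks):
            block = mag[np.ix_(rows, cols)]
            thresh = np.quantile(block, 1.0 - keep_frac)
            mask[np.ix_(rows, cols)] = block > thresh
    return mask
```

The strict `>` leaves homogeneous blocks (whose quantile is zero) without landmarks, so landmarks concentrate in regions of relatively high spatial frequency, as the text requires.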
3 Verification and Clinical Use of Predicted Registrations
The warp_tri function in the IDL imaging software package (IDL, Research Systems, Boulder, CO) was employed to warp the target images to the template image for MR processing. SBR registration and warping was previously validated through simulation studies involving computer-generated images of translating bars and expanding rings in the presence of independent Gaussian noise [11]. The goal of MR perfusion image registration was to align the MR images in a fashion that would allow the average intensity value in one region of interest to be traced through the frames in the time series data and related to blood perfusion. Without registration, a given region of myocardium within the image moved, so intensity values at a fixed image location through the time series could not reliably be related to perfusion. Figure 1 shows the difference image between the original target and template and the difference image between the processed target and template images. Note that the location of the myocardial wall is altered to match the template position in the processed image, but the intensity values within the myocardium do not change. To determine the effect of SBR and warping on perfusion assessment, time-intensity (TI) curves were generated for each of 8 circular, evenly spaced regions of interest positioned within the LV wall in original and processed image sets. Figure 1(c) compares TI curves between original and processed data at an anterior mid short axis location in one volunteer. Oscillations seen in the raw data due to breathing motion artifacts are reduced in the processed data. For the 12 patients examined, the mean left-right left ventricular displacement between frames was 1.65 ± 1.13mm prior to processing and 1.23 ± 0.06mm after registration and warping. The total left-right left ventricular displacement was 41.10 ± 28.32mm before processing and 29.97 ± 16.27mm after processing.
The mean anterior-posterior left ventricular displacement between frames was 3.25 ± 1.04mm prior to processing and 1.30 ± 0.65mm after processing. The total anterior-posterior left ventricular displacement was 80.10 ± 26.39mm and 34.70 ± 17.94mm before and after processing, respectively. Registration of US images was applied to myocardial kinetics assessment. Predicted translations were verified by substituting the corresponding template data into the target image. For each valid landmark pixel, the data in the
Fig. 1. Template-target difference image between template and original target image (a), and between template and processed target image (b). Pixel intensity versus frame number (c).
matched template pixel and its surrounding neighborhood was placed into the appropriate pixels in the target image. If the landmarks were accurately registered to the template image, the resulting substituted image would closely resemble the template image. Figure 2 shows a plot of difference image energy versus frame number for cardiac four chamber, short axis, and long axis views before and after SBR processing and substitution. The difference energy is significantly reduced after processing. For the subjects examined in this study, the mean reduction in difference energy after processing was 41.72 ± 4.25%.
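The verification measure used above, difference image energy, can be sketched as a sum of squared pixel differences (my formulation; the paper does not spell out the formula):

```python
import numpy as np

def difference_energy(template, target):
    """Energy of the template-target difference image; it drops as
    registration and substitution make the target resemble the template."""
    d = np.asarray(template, dtype=float) - np.asarray(target, dtype=float)
    return float(np.sum(d * d))
```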
Fig. 2. Difference energy versus frame number before (-) and after (-.-) SBR processing and substitution for four chamber (a), short axis (b), and long axis (c) views. The frame corresponding to 0 difference energy is the template image.
The landmark pixel translations predicted from SBR were applied to measure cardiac wall kinetics in the following manner: using the registration, the predicted translations associated with each landmark were aligned through frames. The
Automatic Image Registration for MR and Ultrasound Cardiac Images
153
[Fig. 3 plots: (a) magnitude of translation (Euclidean distance in pixels) versus frame number; (b) speed of translation (centimeters per second) versus frame number]
Fig. 3. Motion measures for 9 landmark pixels in one region of a short axis image: magnitude of translation from template image (3(a)) and average speed of translation per frame number (3(b))
motility of a given anatomical region of interest was then traced through time by examining the magnitude of translation versus frame number at a specific myocardial location. A plot of magnitude of translation from the template versus frame number for 9 landmark pixels in a lower left region of the myocardium on a short axis image is shown in Figure 3(a). Figure 3(b) shows the computed speed of displacement (averaged over the 9 pixels) in the examined region of the short axis image. Note that SBR predicted an average speed of 4.26 ± 1.4 cm/s. M-mode-derived and pulsed Doppler tissue imaging methods have shown average myocardial wall speeds in the range of 5 cm/s with a maximum speed of 10 cm/s [13]. SBR as performed in this study was capable of predicting myocardial speeds from 0 to 12 cm/s.
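The two motion measures plotted in Figure 3 can be computed from per-frame landmark positions as sketched below; the function names, the (row, col) position convention, and the pixel-size and frame-interval parameters are assumptions made for illustration.

```python
import numpy as np

def translation_magnitude(positions, template_index=0):
    """Euclidean distance (in pixels) of each frame's landmark position
    from its position in the template frame.

    positions: array of shape (n_frames, 2) holding (row, col) per frame.
    """
    positions = np.asarray(positions, dtype=np.float64)
    return np.linalg.norm(positions - positions[template_index], axis=1)

def translation_speed(positions, mm_per_pixel, frame_interval_s):
    """Average speed of translation between consecutive frames, in cm/s,
    given an assumed pixel size (mm) and inter-frame interval (s)."""
    positions = np.asarray(positions, dtype=np.float64)
    step = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # pixels/frame
    return step * mm_per_pixel / 10.0 / frame_interval_s       # cm/s
```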
4 Discussion
The SBR technique is demonstrated for mono-modal registration of both MR and US cardiac images. The registration is facilitated by the presence of sharp edges, which might seem to indicate that SBR would be minimally effective on US images. However, it is important to note that each landmark pixel carries with it the local brightness statistics of its environment. Although each landmark pixel can register independently of its neighboring pixels, the landmark pixel is matched to a template pixel via a score based on local brightness statistics. Speckle in the template and target images is likely to be uncorrelated since the anatomy in the images has moved several millimeters between acquisitions. However, the positive results indicate that border pixel brightness and local speckle statistics are stable enough to preserve cardiac motion information. The predicted registrations were applied to US cardiac stress images for assessing myocardial motion. The predicted registrations could similarly be applied to US cardiac perfusion studies if US contrast agents are employed. SBR could
also be applied to other imaging environments within MR or US to measure lesion volume, compare serial examinations, and align images for improved 3D rendering and compounding. Since the technique requires no a priori information, and given that the registration is based on matching brightness statistics rather than brightness values, it is hypothesized that SBR could be applied to matching information in multi-modal images. The authors will pursue such questions in future work.
References
1. J. B. Antoine Maintz and M. A. Viergever. A survey of medical image registration. Medical Image Analysis, 2(1):1–36, 1998.
2. B. J. Geiman, L. N. Bohs, M. E. Anderson, S. M. Breit, and G. E. Trahey. A novel interpolation strategy for estimating subsample speckle motion. Physics in Medicine and Biology, 45:1541–1552, 2000.
3. I. A. Hein and W. D. O'Brien, Jr. Current time-domain methods for assessing tissue motion by analysis from reflected ultrasound echoes - a review. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 40(2):84–102, 1993.
4. F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens. Multimodality image registration by maximization of mutual information. IEEE Transactions on Medical Imaging, 16(2):187–198, 1997.
5. G. E. Mailloux, F. Langlois, P. Y. Simard, and M. Bertrand. Restoration of the velocity field of the heart from two-dimensional echocardiograms. IEEE Transactions on Medical Imaging, 8(2):143–153, 1989.
6. C. R. Meyer, J. L. Boes, B. Kim, P. H. Bland, G. L. LeCarpentier, J. B. Fowlkes, M. A. Roubidoux, and P. L. Carson. Semiautomatic registration of volumetric ultrasound scans. Ultrasound in Medicine and Biology, 25(3):339–347, 1999.
7. G. P. Penney, J. Weese, J. A. Little, P. Desmedt, D. L. G. Hill, and D. J. Hawkes. A comparison of similarity measures for use in 2D-3D medical image registration. IEEE Transactions on Medical Imaging, 17(4):586–595, 1998.
8. W. M. Wells III, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis. Multi-modal volume registration by maximization of mutual information. Medical Image Analysis, 1(1):35–51, 1996.
9. X. Wu and S. Murai. Image matching using a three line scanner. ISPRS Journal of Photogrammetry and Remote Sensing, 52(1):20–32, 1997.
10. G. E. Trahey, S. W. Smith, and O. T. von Ramm. Speckle pattern correlation with lateral aperture translation: Experimental results and implications for spatial compounding.
IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, UFFC-33(3):257–264, 1986.
11. C. M. Gallippi, C. M. Kramer, Y. Hu, D. A. Vido, N. Reichek, and W. Rogers. Fully automated registration and warping of contrast enhanced first-pass perfusion images (accepted with revisions for publication). Journal of Magnetic Resonance Imaging, 2001.
12. C. M. Gallippi, M. E. Anderson, P. J. Sebold, and G. E. Trahey. Fully automatic registration of ultrasound images. Program and Abstracts: Ultrasonic Imaging and Tissue Characterization, 2000.
13. M. J. Garcia, L. Rodriguez, M. Ares, B. P. Griffin, A. L. Klein, W. J. Stewart, and J. D. Thomas. Myocardial wall velocity assessment by pulsed Doppler tissue imaging: Characteristic findings in normal subjects. American Heart Journal, 132(3):648–656, 1996.
Estimating Sparse Deformation Fields Using Multiscale Bayesian Priors and 3-D Ultrasound Andrew P. King, Philipp G. Batchelor, Graeme P. Penney, Jane M. Blackall, Derek L.G. Hill, and David J. Hawkes Division of Radiological Sciences and Medical Engineering The Guy’s, King’s and St. Thomas’ Schools of Medicine and Dentistry Guy’s Hospital, London SE1 9RT, UK
[email protected]

Abstract. This paper presents an extension to the standard Bayesian image analysis paradigm to explicitly incorporate a multiscale approach. This new technique is demonstrated by applying it to the problem of compensating for soft tissue deformation of pre-segmented surfaces for image-guided surgery using 3-D ultrasound. The solution is regularised using knowledge of the mean and Gaussian curvatures of the surface estimate. Results are presented from testing the method on ultrasound data acquired from a volunteer’s liver. Two structures were segmented from an MR scan of the volunteer: the liver surface and the portal vein. Accurate estimates of the deformed surfaces were successfully computed using the algorithm, based on prior probabilities defined using a minimal amount of human intervention. With a more accurate prior model, this technique has the possibility to completely automate the process of compensating for intraoperative deformation in image-guided surgery.
1 Introduction
Image-guided surgery systems enable surgeons to make more effective use of preoperative images by registering them to the physical space of the patient in the operating theatre. The problem of soft tissue deformation in such systems is now widely appreciated[1]. If tissue moves or deforms during surgery then the rigid-body image-to-physical registration is invalidated, and misleading and potentially dangerous information can be given to the surgeon. Therefore, compensating for this deformation is currently an important research topic. One approach is to use a predictive model, such as a Finite Element Model[2], in which a biomechanical model of the tissue is constructed and used to predict likely deformations given knowledge of the surgical situation, such as the direction of gravity or the amount of cerebrospinal fluid drainage. An alternative is to use a data-driven approach, in which an intraoperative imaging modality such as ultrasound is used to identify structures of interest which are also present in the preoperative image. So far this has involved manual identification of corresponding landmarks in the preoperative image and the ultrasound images[3]. M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 155–161, 2001. c Springer-Verlag Berlin Heidelberg 2001
156
Andrew P. King et al.
The work presented in this paper is a combination of these two approaches: data from an intraoperative imaging modality (3-D ultrasound) is combined with prior knowledge of likely deformations to estimate a sparse deformation field for a structure previously segmented from a preoperative image. This deformation field is calculated using a Bayesian maximum a posteriori estimate working over multiple scales.
2 Methods
In [4] we presented a Bayesian approach to estimating intraoperative deformation of pre-segmented surfaces. Here we describe a general formulation which extends the standard Bayesian approach to incorporate multiscale information into image analysis problems. In this section we first describe the standard Bayesian approach to image analysis. Next, this is extended to incorporate a multiscale approach. Finally, the multiscale formulation is applied to the problem of automatically estimating a deformation field for a pre-segmented structure based on intraoperative 3-D ultrasound and prior knowledge of likely deformations.

2.1 The Bayesian Approach to Image Analysis
Bayesian theory has found application in a wide range of image analysis problems as it offers a convenient means of incorporating prior knowledge into problems which would otherwise be difficult to solve. In general terms, given some image data I, the image analysis problem is formulated as one of finding the estimate of the feature vector ξ which maximises its posterior probability according to Bayes’ theorem:

P(ξ|I) = P(I|ξ) · P(ξ) / P(I)    (1)
P(ξ) is the prior probability of the feature vector, which incorporates domain specific knowledge of the likely nature of image features, and P(I) is the prior probability of the image. It is normal to assume that P(I) is uniform, in which case P(ξ|I) ∝ P(I|ξ)P(ξ). The term P(I|ξ) is the likelihood, or the probability of the image data given a specific feature vector. As such, it represents a model of the relationship between features in the real world and intensities in the image. By defining expressions for the prior probability P(ξ) and the likelihood P(I|ξ) the image analysis problem becomes one of searching for the value of the feature vector ξ which maximises its posterior probability.

2.2 Extension to Multiscale Approach
The benefits of multiscale approaches in image analysis are widely appreciated: multiscale algorithms can have a greater capture range without an excessive increase in computational complexity. However, the general Bayesian formulation
Estimating Sparse Deformation Fields Using Multiscale Bayesian Priors
157
outlined above makes no explicit reference to the scale of the features being estimated. It would therefore seem desirable to extend this formulation to enable more effective use of multiscale information. In order to do this a multiscale representation for the image is required. A multiresolution image pyramid is constructed by successively smoothing and subsampling the original image to produce a series of images which represent the data at a variety of different scales. The most coarse (i.e. lowest resolution) image in the pyramid is defined to be level 0, and the original image to be level L. Given this multiscale representation for the image, an estimate for the feature vector ξ can be made at any level in the pyramid. We define ξ(l) to be the feature vector estimate resulting from the data at level l of the multiresolution pyramid. The multiscale formulation extends Equation (1) by defining the posterior probability of the feature estimate at level l to be conditional upon not only the data at level l, but also the feature estimate from the previous level l − 1.

P(ξ(l) | I(l), ξ(l−1)) = P(I(l) | ξ(l), ξ(l−1)) · P(ξ(l) | ξ(l−1)) / P(I(l) | ξ(l−1))    (2)
Note that the prior probability and the likelihood are now both conditional upon the feature estimate from the previous (i.e. coarser) scale, ξ(l − 1). If we assume that the denominator in (2) is uniform then it simplifies to

P(ξ(l) | I(l), ξ(l−1)) ∝ P(I(l) | ξ(l), ξ(l−1)) · P(ξ(l) | ξ(l−1))    (3)
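The multiresolution pyramid described above can be sketched as follows; the 3×3 box smoothing kernel and the factor-of-two subsampling are illustrative assumptions, as the text does not specify either.

```python
import numpy as np

def smooth(img):
    """3x3 box smoothing with edge replication (a stand-in for the
    unspecified smoothing kernel)."""
    p = np.pad(img, 1, mode="edge").astype(np.float64)
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def build_pyramid(image, levels):
    """Return [level 0 (coarsest), ..., level L (original image)].

    Each coarser level is produced by smoothing and 2x subsampling the
    level below it, as in the multiresolution pyramid described above.
    """
    pyramid = [np.asarray(image, dtype=np.float64)]
    for _ in range(levels):
        pyramid.append(smooth(pyramid[-1])[::2, ::2])
    pyramid.reverse()          # coarsest first = level 0
    return pyramid
```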
Hence, the posterior probability is now proportional to the product of the likelihood, or imaging model, P(I(l) | ξ(l), ξ(l−1)), and a multiscale prior probability term P(ξ(l) | ξ(l−1)). Note that the uniformity of P(I(l) | ξ(l−1)) is a simplifying assumption which is necessary to decrease the complexity of the model.

2.3 Application to Intraoperative Deformation
Defining the Feature Vector
First of all we must define a form for the feature vector ξ(l) in (3). Since the aim is to find the sparse deformation field for a surface segmented from a preoperative image, this vector should obviously contain boundary information. We have chosen to use a simple triangle mesh representation consisting of the (x, y, z) coordinates defining the boundary points together with their associated connectivity. Hence for a surface of N boundary points we have N feature vectors, ξ_i(l), 1 ≤ i ≤ N. This form was chosen as it is a general representation capable of representing any surface. Its disadvantage is that it can also represent unrealistic surfaces, and so some regularisation is required. This regularisation is achieved by defining the prior probabilities so that the curvature properties of the surface are approximately preserved. The mean and Gaussian curvatures of a surface are, respectively, the mean and product of the two principal curvatures. For details of calculating these measures for discrete surfaces please refer to [5]. Now defining v_i(l) to be the coordinates of the estimated location of the ith boundary point at level l of the multiscale pyramid, the feature vector for the ith boundary point at level l is given by
ξ_i(l) = (v_i(l), K_i(l), H_i(l))^T, where H_i(l) and K_i(l) are the computed mean and Gaussian curvatures at boundary point i at level l. Note that the object representation ξ_i(l) is simply a series of boundary estimates which represent the shape of the object at different scales. The underlying structure of the representation does not change between scales.

Prior Probabilities
This formulation enables the prior probabilities to be defined so as to make surfaces with greatly differing curvature to the original surface less probable. The prior probability fields are defined initially at the coarsest level 0 in the multiscale pyramid. This initial definition can be based upon knowledge of the surgical scene, and its precise form is application specific. See Section 3 for details. At subsequent levels in the pyramid, the prior probability fields are propagated down from the previous (i.e. coarser) scale. The multiscale prior probability field for point i is defined by

P(ξ_i(l) | ξ_i(l−1)) = exp(−‖v_i(l) − v_i(l−1)‖² / 2σ_v²) / [1 + k_H (H_i(l) − H_i(l−1))² + k_K (K_i(l) − K_i(l−1))²]    (4)
where k_H and k_K are constants indicating the proportions of mean and Gaussian curvatures to be used in the regularisation. Equation (4) defines the most probable estimate of the feature vector at level l to be identical with the final estimate from level l − 1. The probability falls off as either the location or curvature values differ from the estimate from the previous scale.

Likelihood
The likelihood P(I(l) | ξ_i(l), ξ_i(l−1)) from Equation (3) represents a model of the relationship between the feature vector and the 3-D ultrasound image data. At present we make the simplifying assumption that the image data I(l) can be modelled using only the feature estimate from the current scale, ξ_i(l), and not that from the previous scale, ξ_i(l−1). The nature of the imaging model is dependent on the acoustic properties of the structure of interest and its surrounding tissue, so prior knowledge of these acoustic properties can be incorporated into the imaging model. For example, if the structure of interest has different acoustic impedance to the surrounding tissue, then the intensity of the ultrasound image is likely to be high at the tissue boundary. In this case the ultrasound image intensity is appropriate for the model. If, on the other hand, the acoustic impedances are similar but the degree of scatter is different, then the tissue boundary will cause a gradient in the ultrasound image. In this case the first derivative of the image should be used for the model. In many cases, a weighted combination of the two will be appropriate.

P(I(l) | ξ_i(l), ξ_i(l−1)) = k_M f_{v_i(l)}(l) + (1 − k_M) · (1/2)(1 + ∇f_{v_i(l)}(l) · n_i(l))    (5)
where f_{v_i(l)}(l) is the intensity of the 3-D ultrasound image at coordinate v_i(l) at level l of the multiresolution pyramid. Note that we take the inner product of
the first derivative of the ultrasound image, ∇f_{v_i(l)}(l), and the normal of the discrete surface, n_i(l). This ensures that only image gradients which are consistent with the surface model contribute to the model. The value of k_M indicates the proportion of intensity and gradient information to be used in the model, and is set in advance using prior knowledge of the acoustic properties of the tissue.

Search Strategy
Now that both terms on the right hand side of Equation (3) have been defined we have an expression for the posterior probability of the feature vector. By maximising this probability for each boundary point a maximum a posteriori estimate of the deformed surface can be computed. To produce this estimate we use a two stage technique. First the prior probabilities are maximised without reference to the likelihood. Next, the posterior probabilities are maximised using a coarse-to-fine gradient ascent scheme: the posterior probabilities are maximised at level 0, then this solution is used to assign the prior probabilities for level 1; the posterior probabilities are then maximised at level 1, and so on, until a solution at the finest scale L is reached. This two stage technique helps to avoid local optima in the parameter space.
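The two-stage coarse-to-fine search can be sketched for a single boundary point as below; `map_estimate`, the unit-move hill-climbing, and the doubling of coordinates between levels are assumptions made for illustration, since the paper does not spell out its gradient ascent scheme.

```python
import numpy as np

def map_estimate(pyramid, prior, likelihood, x0, iters=50):
    """Coarse-to-fine MAP search for one boundary point (row, col).

    At each pyramid level (coarsest first) the posterior -- the prior
    conditioned on the previous level's estimate times the imaging
    likelihood -- is maximised by greedy hill-climbing over unit moves,
    and the result seeds the prior at the next, finer level.
    'prior(x, x_prev)' and 'likelihood(img, x)' are caller-supplied
    callables standing in for Equations (4) and (5).
    """
    est = np.asarray(x0, dtype=np.float64)
    for level, img in enumerate(pyramid):
        if level > 0:
            est = est * 2.0                    # coordinates on finer grid
        prev = est.copy()
        for _ in range(iters):
            moves = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])
            cands = est + moves
            post = [prior(c, prev) * likelihood(img, c) for c in cands]
            k = int(np.argmax(post))
            if k == 0:
                break                          # local maximum reached
            est = cands[k]
    return est
```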
3 Results
In this section we present results from testing the multiscale Bayesian deformation algorithm on ultrasound data acquired from a volunteer’s liver. The volunteer had an MR scan, from which surfaces of two structures of interest were manually segmented: the liver surface and the portal vein. Freehand 3-D ultrasound data of the volunteer’s liver was then acquired using a 3.5MHz probe during a single breath hold. The prior probability fields for the Bayesian formulation at the coarsest level, P(ξ_i(0)), were defined by computing an initial rigid-body image-to-physical registration as follows: from the MR image, lines were manually defined representing the centre line of the portal vein, the aorta and the inferior vena cava; next, a number of points in the centres of these vessels were manually identified in the ultrasound B-scans; finally, an iterative closest point (ICP) algorithm[6] was used to compute the initial registration. The pre-segmented surface was transformed by this registration to produce starting estimates for the feature vectors, ξ_i^init = (v_i^init, K_i^init, H_i^init). These define the predicted locations of each of the boundary points in physical space, and hence the locations which correspond to the peak values of the prior probabilities. The prior probability fields at the coarsest level 0 are therefore defined as

P(ξ_i(0)) = exp(−‖v_i(0) − v_i^init‖² / 2σ_v²) / [1 + k_H (H_i(0) − H_i^init)² + k_K (K_i(0) − K_i^init)²]    (6)
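Equation (6) (and its multiscale counterpart, Equation (4)) can be evaluated as sketched below; the squared curvature differences and the parameter values are assumptions, since the paper does not list the constants it used.

```python
import numpy as np

def surface_prior(v, v_ref, H, H_ref, K, K_ref,
                  sigma_v=2.0, k_H=1.0, k_K=1.0):
    """Prior probability of a boundary-point estimate (Eq. 6 form).

    Peaks when the point sits at its reference location v_ref with
    unchanged mean (H) and Gaussian (K) curvature, and falls off with
    displacement and with curvature change.  sigma_v, k_H and k_K are
    illustrative values only.
    """
    v = np.asarray(v, dtype=np.float64)
    v_ref = np.asarray(v_ref, dtype=np.float64)
    num = np.exp(-np.sum((v - v_ref) ** 2) / (2.0 * sigma_v ** 2))
    den = 1.0 + k_H * (H - H_ref) ** 2 + k_K * (K - K_ref) ** 2
    return num / den
```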
At subsequent levels the prior probabilities were defined using the multiscale propagation technique defined in Equation (4). Figures 1(a)-(d) show the results of running the algorithm on the segmented portal vein. Figures 1(a)-(b) are renderings of the surface before and after running the algorithm. Note that the regularisation contained in the definition of the
Fig. 1. Results for (a)-(d) portal vein, and (e)-(h) liver surface. The images are, from left to right: a rendering of the surface before deformation; rendering of final deformed surface (colour indicates Gaussian curvature in both cases); sample slices through the 3-D ultrasound volume overlaid with outlines from the initial positioning of the surface (red) and the final deformed surface (blue).

prior probability has resulted in a realistic deformed surface, with the curvature properties being approximately preserved. Figures 1(c)-(d) show slices through the 3-D ultrasound image, overlaid with outlines showing the initial surface after the rigid body ICP registration, ξ_i^init (in red), and the final estimate of the deformed surface, ξ_i(L) (in blue). In both cases the algorithm has improved the alignment of the surface with the ultrasound data. Figures 1(e)-(h) show the corresponding results for the segmented liver surface. Figures 1(e)-(f) show renderings of the segmented surface before and after running the algorithm, and Figures 1(g)-(h) show the overlays onto the 3-D ultrasound volume. It can be seen that the alignment in Figure 1(g) is very good, due to the presence of a strong reflection at the liver boundary in the image. However, in Figure 1(h) the reflection is not as strong, so the algorithm has used information from the prior probabilities instead, and the surface has not deformed significantly. Note that the same prior probabilities were used here as for the portal vein results.
4 Discussion
In this paper we have presented a general formulation for the extension of the standard Bayesian image analysis paradigm to incorporate a multiscale approach. The technique propagates information through scale-space by using the solution at a given scale to assign prior probabilities at the next (i.e. finer) scale.
To demonstrate this approach it was applied to the problem of image to physical registration on a triangulated surface mesh of a volunteer’s liver. A simple ultrasound imaging model was defined, along with a definition for the prior probabilities which implicitly regularises the solution using knowledge of surface curvature. If sufficient prior knowledge is available, this technique has the potential to completely automate the process of compensating for intraoperative deformation. Previous attempts to compensate for deformation using ultrasound have required some interaction to identify corresponding anatomical landmarks[3]. Automating this procedure would greatly increase the usability of image-guided surgery systems, which are currently limited in their range of application by the problem of soft tissue deformation. The current implementation of the algorithm runs in ∼ 20 minutes on a SUN Ultra 2 for a surface of ∼ 350 points. Compounding the 3-D ultrasound volume, constructing the multiresolution pyramid and calculating the intensity gradients is performed separately and also takes ∼ 20 minutes. As the use of true 3-D ultrasound systems becomes more practical and widespread, this part of the processing will become significantly faster. With code optimisation and/or increases in computational power it is likely that the algorithm could be fast enough for intraoperative use. This work, therefore, represents a significant advance in the field of image-guided surgery, which has the potential to greatly increase the utility of image guidance systems. Future work will concentrate on incorporating better prior models, and on further validation of the method using data acquired in the operating theatre.
Acknowledgements We thank the U.K. EPSRC for funding this project. We are also grateful to the radiology and radiography staff at Guy’s Hospital for their assistance.
References
1. Hill, D. L. G., Maurer, C. R., Maciunas, R. J., Barwise, J. A., Fitzpatrick, J. M., Wang, M. Y.: Measurement of Intraoperative Brain Surface Deformation under a Craniotomy. Neurosurgery 43(3) (1998) 514–528
2. Paulsen, K. D., Miga, M. I., Kennedy, F. E., Hoopes, P. J., Hartov, A., Roberts, D. W.: A Computational Model for Tracking Subsurface Tissue Deformation During Stereotactic Neurosurgery. IEEE Trans. Biomed. Engineering 46 (1999) 213–225
3. Comeau, R. M., Sadikot, A. F., Fenster, A., Peters, T. M.: Intraoperative Ultrasound for Guidance and Tissue Shift Correction in Image-Guided Neurosurgery. Medical Physics 27(4) (2000) 787–800
4. King, A. P., Blackall, J. M., Penney, G. P., Edwards, P. J., Hill, D. L. G., Hawkes, D. J.: Bayesian Estimation of Intra-operative Deformation for Image-Guided Surgery Using 3-D Ultrasound. Proceedings MICCAI (2000) 588–597
5. Csákány, P., Wallace, A. M.: Computation of Local Differential Parameters on Irregular Meshes. The Mathematics of Surfaces IX (2000) 19–33
6. Besl, P. J., McKay, N. D.: A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Analysis and Machine Intelligence 14(2) (1992) 239–256
Automatic Registration of Mammograms Based on Linear Structures

Robert Marti1, Reyer Zwiggelaar1, and Caroline Rubin2

1 Division of Computer Science, University of Portsmouth, Portsmouth, UK
{robert.marti,reyer.zwiggelaar}@port.ac.uk
2 Breast Screening Unit, Royal South Hants Hospital, Southampton, UK
Abstract. A novel method to obtain correspondence between landmarks when comparing pairs of mammographic images from the same patient is presented. Our approach is based on automatically established correspondence between linear structures (i.e. ducts and vessels) which appear in mammograms using robust features such as orientation, width and curvature extracted from those structures. In addition, a novel multiscale feature matching approach is presented which results in a reliable correspondence between extracted features.
1 Introduction
Detection of abnormal structures or architectural distortions in mammographic images can be performed by comparing different images of the same patient, either the same breast taken at different times (temporal comparison) or using mammographic images of the left and right breast (contralateral comparison). This comparison is not straightforward due to additional dissimilarities between images which are related to patient movement, sensor noise, different radiation exposure and variation of breast compression, especially as 2D mammographic images are projections of 3D mammographic structures. Therefore, in order to efficiently compare two mammograms and avoid non-target dissimilarities, an initial alignment (also referred to as registration) must be carried out. Methods that are able to recover local deformation (e.g. [1]) rely on corresponding landmarks between images, which turns out to be the most difficult task and plays an important role in registration accuracy. Manual landmark generation is a tedious and time consuming task when the number of control points is large and, moreover, introduces variability. Automatic landmarking methods are, therefore, more suitable but also difficult to develop. Automatically extracted mammographic landmarks include breast boundary [4], pectoral muscle [4], salient regions [5] and crossings of horizontal and vertical structures [7]. This paper presents a novel method to establish image correspondence in mammographic images based on matching their major linear structures (ducts and vessels). Establishing correspondence involves various steps: 1. to identify linear structures in both mammograms (section 2), 2. to extract reliable information from those structures (section 3), 3. to obtain correspondence between the structures (section 4) and 4. registration using a point based method [1]. M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 162–168, 2001. c Springer-Verlag Berlin Heidelberg 2001
2 Detection of Linear Structures
We use a non-linear line operator [3] to detect linear structures in both mammograms. At a given scale, the line operator provides for every pixel a strength and orientation of the linear structure. Line strength is obtained by comparing a straight line of pixels with its neighbourhood and obtaining the maximum value from a number of orientations. Direction of the line strength is determined by the angle of the line which gives the highest line strength. Scale information is obtained from the maximum line strength of the detector at different scales.

2.1 Scale Information
In order to obtain more reliable scale information we investigate the feasibility of normalising the individual line strength images before the scale information is extracted. For the evaluation we have used a dense mammographic background to which synthetic linear structures of different scales have been added. Table 1 compares the scale information obtained with (a) no normalisation, (b) histogram stretching and (c) histogram stretching with the top 1% of the grey-levels mapped to the maximum grey-level value. Scale information using non-processed line strength images results in a biased scale estimation towards the higher scales and detected lines often overestimate the scale information. On the other hand, maximum mapping in combination with stretching provides more reliable scale information with no bias as shown in table 1c. It is our understanding that without normalisation the resulting line strength at the lower scales is affected (and relative to the higher scales suppressed) by relative noise levels.

Table 1. Scale detection using (a) no normalisation (45% detected), (b) histogram stretching (51% detected) and (c) histogram stretching with the top 1% of the grey-levels mapped to the maximum grey-level value (87% detected)

(a) No normalisation:
                    True scale
  Detected scale   1     2     4
        1         0.04  0.00  0.00
        2         0.28  0.12  0.00
        4         0.01  0.27  0.29

(b) Histogram stretching:
                    True scale
  Detected scale   1     2     4
        1         0.22  0.00  0.00
        2         0.11  0.00  0.00
        4         0.01  0.38  0.29

(c) Histogram stretching with top 1% mapped to maximum:
                    True scale
  Detected scale   1     2     4
        1         0.33  0.09  0.02
        2         0.00  0.27  0.00
        4         0.00  0.02  0.27
The fact that scale information is improved when using the proposed normalisation is clear, but it is also important to analyse how this normalisation affects detection results, that is, the resulting line strength image. Receiver operating characteristic curves obtained by comparing true structures with the ones detected showed that no processing of strength images gives slightly better detection results, but this is only a small percentage and not comparable to the improved scale detection as indicated in table 1.
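Normalisation option (c), histogram stretching with the top 1% of grey levels mapped to the maximum, can be sketched as follows; the quantile-based clip point and the output range are illustrative choices, as the exact implementation is not described.

```python
import numpy as np

def stretch_with_clipping(strength, top_fraction=0.01, out_max=255.0):
    """Histogram stretching of a line-strength image, with the top
    fraction of grey levels mapped to the maximum output value.

    Values at or above the (1 - top_fraction) quantile saturate at
    out_max, and the remaining range is stretched linearly to
    [0, out_max].
    """
    s = np.asarray(strength, dtype=np.float64)
    lo = s.min()
    hi = np.quantile(s, 1.0 - top_fraction)   # clip point for top 1%
    if hi <= lo:
        return np.zeros_like(s)
    return (np.clip(s, lo, hi) - lo) / (hi - lo) * out_max
```

Clipping the brightest 1% prevents a few very strong responses from compressing the rest of the histogram, which is one plausible reason the normalised images give the less biased scale estimates reported in Table 1c.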
2.2 Line Processing
Once strength, direction and scale information have been obtained we perform different operations to facilitate the feature extraction process. First, we set a conservative threshold on the line strength image in order to remove background noise. Then non-maximum suppression is applied, which removes pixels with low intensity values compared to their neighbours along the normal of the linear structure. Scale information extracted from the line operator (see section 2.1) is used here to determine the position of candidate pixels to be suppressed. In addition, short lines which do not provide reliable information are removed, taking the Euclidean distance between the centre pixel and its neighbours into account. Finally, a thinning operation obtains the backbone of the most representative linear structures in the mammograms.
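The non-maximum suppression step can be sketched as follows; treating `orientation` as the angle of the structure's normal and using the detected scale as the comparison distance are assumptions made for illustration.

```python
import numpy as np

def nonmax_suppress(strength, orientation, scale):
    """Zero out pixels that are not local maxima along the normal of
    their linear structure.

    orientation holds the angle (radians) of the normal at each pixel,
    and scale the detected width, which sets the comparison distance --
    both conventions are illustrative assumptions.
    """
    out = strength.copy()
    rows, cols = strength.shape
    for r in range(rows):
        for c in range(cols):
            d = max(1, int(round(scale[r, c])))
            dr = int(round(np.cos(orientation[r, c]) * d))
            dc = int(round(np.sin(orientation[r, c]) * d))
            for sr, sc in ((r + dr, c + dc), (r - dr, c - dc)):
                if 0 <= sr < rows and 0 <= sc < cols \
                        and strength[sr, sc] > strength[r, c]:
                    out[r, c] = 0.0
                    break
    return out
```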
3 Feature Extraction
Feature extraction is needed in order to obtain descriptors of the structures to be used in the matching process. Corresponding linear structures in two mammograms can present large differences related to line strength and line continuity (due to different imaging conditions) but width and orientation of the line and local curvature and branching points are more likely to be preserved and often are features used by radiologists when comparing mammograms. Therefore, features which take line length, end points and line strength into account turn out to be unreliable features to tackle the correspondence problem. In this paper we use local features such as curvature, width and orientation. The basic idea of our method is to extract characteristic points of linear structures determined by the maximum curvature along the structure. Position, orientation (φ) and width (w) are then extracted for those points and used in the matching process. Curvature measures are extracted for each pixel along the linear structures. Maximum curvature points are likely to be characteristic for a linear structure in terms of local curvature and branching points. Before computing curvature we need to extract the orientation of the structures which is obtained directly from the thinned linear structures. Although orientation information could be retrieved from the line detector results, experiments have shown (results not included) that the approach adopted here gives more accurate orientation measures. Curvature values at each pixel are obtained with a similar approach as used in [2]. Curvature (or directional change) between two pixels p and q is defined by the scalar product of their normal vectors. Hence, the curvature measure of a given pixel p is obtained by computing the scalar product between p and its neighbouring pixels. Cp =
(1/N) Σ_{i=1}^{N} exp(−d_ip²) (1 − cos(φ_p − φ_i))    (1)
where φi is the angle of the normal at each pixel i. As we will be extracting curvature from binary thinned images, we assume unit vectors. N is the number
Automatic Registration of Mammograms Based on Linear Structures
of points in a local neighbourhood, and d_ip is the Euclidean distance between points i and p. The distance factor weights the contribution of each point i, in order to bias the measure towards points closer to p. Width information is extracted after non-maximum suppression of the strength images (section 2.2); because the suppression uses the scale information from the line detection step (section 2.1), the improved scale information indirectly contributes to the width estimates. The width of a linear structure at a point is given by the number of pixels along the normal of the structure.
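As an illustration, equation (1) can be computed for one pixel as follows; this is a minimal numpy sketch with hypothetical argument names (the angles of the normals and distances of the N neighbourhood points are assumed to be given as arrays):

```python
import numpy as np

def curvature(phi_p, phi_neighbours, distances):
    """Curvature measure C_p of equation (1): the mean, over the N points
    of a local neighbourhood, of (1 - cos(phi_p - phi_i)) weighted by
    exp(-d_ip^2), which biases the measure towards points close to p."""
    w = np.exp(-np.asarray(distances, float) ** 2)
    diff = 1.0 - np.cos(phi_p - np.asarray(phi_neighbours, float))
    return float(np.mean(w * diff))
```

A straight structure (all normals parallel) gives C_p = 0, while a neighbouring point whose normal differs by 90 degrees at unit distance contributes exp(−1).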
4 Matching
The matching process needs to consider the following assumptions:
– Non-rigid motion: linear structures in mammograms suffer local distortions; they may therefore move independently, and no geometrical relationship is established between neighbouring structures.
– Multiple matches: a linear structure in one mammogram can match more than one structure in the other mammogram, and vice versa.
– Non-bijectivity: a linear structure in one mammogram may not have a corresponding linear structure in the other, and vice versa.
– Localisation: after global breast misalignment is removed, matched linear structures lie in approximately the same area in both mammograms. We will refer to this area as the localisation area M.
We adopt here an approach similar to, but more general than, the one used in [7]. We denote the sets of feature points from the two mammograms as {ai | 1 ≤ i ≤ Ni} and {bj | 1 ≤ j ≤ Nj}, where Ni and Nj are the numbers of feature points used, which need not be the same. Subsequently, we build a distance matrix (DM) in which each position DM(i, j) describes the normalised distance between the features of points ai and bj. Hence, a low value means a good match between points. The computation of the feature distance will be discussed later. Once all distances have been entered, the minimum value of each row is detected (and the remaining positions in each row deleted) in order to have a unique match in each row. Then, the minimum value of each column is extracted, ending up with a set of potentially matched points. It must be mentioned that the approach only works when there are distinct minima in each row and column; points that do not conform to this are removed from the data set. Reliable matches are those with a distance smaller than a particular threshold.
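The row-minimum/column-minimum selection on the distance matrix can be sketched as follows; a hypothetical helper, assuming np.inf marks pairs that are not in the same localisation area:

```python
import numpy as np

def match_points(dm, threshold):
    """Match feature points via the distance matrix DM.
    dm[i, j] is the normalised feature distance between a_i and b_j
    (np.inf where the points are not in the same localisation area).
    Keep the minimum of each row, then the minimum of each surviving
    column, and finally discard matches above `threshold`."""
    n_rows, n_cols = dm.shape
    rows = np.full(dm.shape, np.inf)
    r_idx = np.argmin(dm, axis=1)                 # unique minimum per row
    rows[np.arange(n_rows), r_idx] = dm[np.arange(n_rows), r_idx]
    matches = []
    for j in range(n_cols):                       # minimum of each column
        i = int(np.argmin(rows[:, j]))
        d = rows[i, j]
        if np.isfinite(d) and d < threshold:
            matches.append((i, j, float(d)))
    return matches
```

Rows or columns whose minima are not distinct would need the extra pruning described in the text before this selection is trusted.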
The use of the distance matrix structure fulfils the first three assumptions: independent motion (matching points ai, bj does not imply matching ai+1, bj+1); a point ai may have multiple matched points bj; and a point in either mammogram may remain unmatched. As mentioned previously, the distance matrix contains the normalised distance between the features of points ai and bj. To satisfy the last assumption, localisation, position DM(i, j) will only have a finite value if points ai and bj are in the same localisation area in both mammograms. This assumption can only be stated
Robert Marti, Reyer Zwiggelaar, and Caroline Rubin
if the mammograms are globally aligned, that is, if global deformation (i.e. rotation, translation and scale) is removed. Therefore, we initially register the mammograms by maximising a mutual information measure using an affine transformation [6]. Registration provides the transformation parameters (α) needed to compare feature point coordinates and to establish the localisation area (M) in both mammograms. The normalised distance is determined by three components. The first is the Euclidean distance (DE) between point coordinates; the coordinates of one of the points are transformed (Tα) using the parameters obtained from the registration mentioned earlier. The second is the orientation difference between two points (Dφ). Finally, the last is the width difference between two points (Dw), normalised using the maximum width of all the linear structures (W). Using equal weighting, the normalised distance is given by
DM(i, j) = Dφ + Dw + DE = (1 − cos(φ_i − φ_j))/2 + |w_i − w_j|/W + |T_α(a_i) − b_j|/M    (2)
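A sketch of the normalised distance of equation (2); the tuple layout of the feature points (x, y, φ, w) and the argument names are assumptions made for this illustration, and returning np.inf encodes the localisation constraint:

```python
import numpy as np

def feature_distance(a, b, T, W, M):
    """Normalised feature distance of equation (2).  a and b are feature
    points (x, y, phi, width); T maps a's coordinates into b's frame using
    the affine parameters alpha from the global registration; W is the
    maximum width over all linear structures, M the size of the
    localisation area."""
    ax, ay, aphi, aw = a
    bx, by, bphi, bw = b
    tx, ty = T(ax, ay)
    d = np.hypot(tx - bx, ty - by)
    if d > M:                      # not in the same localisation area
        return np.inf
    return (1.0 - np.cos(aphi - bphi)) / 2.0 + abs(aw - bw) / W + d / M
```

Each of the three terms lies in [0, 1], so equal weighting simply sums them.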
4.1 Multi-level Matching
The described matching process may yield matching points which are localised in small areas rather than spread over the whole image, as would be necessary for image registration. In addition, the localisation area defined using the global registration may not be accurate enough, and a local registration should be used. A novel multi-level registration approach is used here to tackle those problems. At the first level, the full images are registered, obtaining the transformation parameters. Subsequently, we move to the second level, dividing each mammogram into six rectangular sub-images and again registering each sub-image. Transformation parameters are carried through each level, under the assumption that each sub-image at a lower level undergoes a different transformation which is nevertheless related to the deformation at the higher level. In this way, we speed up the optimisation process and avoid local minima situated away from the optimum solution. Once the last level is reached, the transformation parameters of each sub-image on that level establish the correspondence of localisation areas for structures within each sub-image. In addition, extracting the locally best matches in each sub-image ensures that a minimum number of matches will be present, giving a more homogeneous point distribution over the whole mammogram.
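The two-level scheme can be sketched as follows; `register` is a hypothetical routine standing in for the mutual-information affine registration, and the 2x3 grid giving six rectangular sub-images is an assumption:

```python
import numpy as np

def multilevel_match(ref, flt, register, grid=(2, 3)):
    """Two-level sketch of the multi-level matching.  `register` is a
    hypothetical routine register(ref, flt, init) -> params maximising
    mutual information over affine transformations."""
    params = register(ref, flt, init=None)        # level 1: full images
    h, w = ref.shape
    rows, cols = grid
    sub_params = {}
    for r in range(rows):                         # level 2: sub-images
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            # carry the level-1 parameters down as the initialisation
            sub_params[(r, c)] = register(ref[ys, xs], flt[ys, xs],
                                          init=params)
    return params, sub_params
```

Initialising each sub-image with its parent's parameters is what speeds up the optimisation and keeps it away from distant local minima.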
5 Results
In this section we present initial results using the described approach applied to temporal and contralateral mammographic comparison. Figure 1 shows two mammograms of the same patient taken three years apart where matches between the linear structures are indicated by numbers. As mentioned earlier, matched points can be used as control points in mammographic registration,
using a point-based method such as thin-plate splines [1]. Figure 1c shows the subtracted image (where darker areas mean larger misalignment) obtained after automatic registration using the proposed method. Although mis-registration can be observed near the breast outline using our method, registration of internal breast regions is comparable to manual placement of control points. This is corroborated by figure 2, which shows temporal and contralateral registration results using our method compared to manually placed control points. The graphs are obtained by measuring normalised mutual information between reference and registered images. A high value denotes high similarity between the images and therefore good registration. The graphs show that automatic registration performs equally well or slightly better in most cases, although some poor results are also obtained. These are due to specific breast characteristics such as the lack of major linear structures (mammograms 4 and 5 in figure 2a) or large image deformation (mammograms 15 and 16 in figure 2b).
Fig. 1. Correspondence in temporal mammograms (from left to right): reference image, warped image, difference between registered image and reference
Fig. 2. Registration results for temporal (left) and contralateral (right) experiments, plotting normalised mutual information (roughly 0.400 to 0.625) against mammogram number, where automatic registration is compared to manual registration
6 Conclusions
The work presented here describes a novel approach to extracting reliable features from mammographic images and establishing correspondences between them in pairs of mammograms. We have shown that features extracted from linear structures can provide an automatic approach to the generation of control points for image registration. Features based on scale, orientation and position have been used. Initial results look promising, but further work will be needed to establish the full benefit of our approach. The proposed method will be tested on a larger mammographic dataset and compared to a radiologist gold standard. In addition, other features could be incorporated, such as the breast boundary and the position of the nipple.
References
1. F. Bookstein. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(6):567–585, 1989.
2. J. Deschênes and D. Ziou. Detection of line junctions and line terminations using curvilinear features. Pattern Recognition Letters, 21:637–649, 2000.
3. R. Dixon and C. Taylor. Automated asbestos fibre counting. Inst. Phys. Conf. Ser., 44:178–185, 1979.
4. N. Karssemeijer and G.M. te Brake. Combining single view features and asymmetry for detection of mass lesions. 4th International Workshop on Digital Mammography, Nijmegen, The Netherlands, pp. 95–102, 1998.
5. S. Kok-Wiles, J.M. Brady and R.P. Highnam. Comparing mammogram pairs in the detection of lesions. 4th International Workshop on Digital Mammography, Nijmegen, The Netherlands, pp. 103–110, 1998.
6. R. Marti, R. Zwiggelaar, and C. Rubin. Comparing image correspondence in mammograms. 5th International Workshop on Digital Mammography, 2000. In press.
7. N. Vujovic and D. Brzakovic. Establishing the correspondence between control points in pairs of mammographic images. IEEE Transactions on Image Processing, 6(10):1388–1399, 1997.
Tracking Brain Deformations in Time-Sequences of 3D US Images
Xavier Pennec, Pascal Cachier, and Nicholas Ayache
EPIDAURE, INRIA Sophia Antipolis, 2004 Rte des Lucioles, BP 93, 06902 Sophia Antipolis Cedex
{Xavier.Pennec, Pascal.Cachier, Nicholas.Ayache}@sophia.inria.fr
Abstract. During a neuro-surgical intervention, the brain tissues shift and warp. In order to keep an accurate positioning of the surgical instruments, one has to estimate this deformation from intra-operative images. We present in this article a feasibility study of a tracking tool based on intra-operative 3D ultrasound (US) images. The automatic processing of this kind of image is of great interest for the development of innovative and low-cost image-guided surgery tools. The difficulty lies both in the complex nature of the ultrasound image and in the amount of data to be processed as fast as possible.
1 Introduction
The use of stereotactic systems is now a fairly standard procedure in neurosurgery. However, these systems do not accurately give the position of specific anatomical structures (especially deep structures in the brain) due to the intra-operative warping of the brain during surgery (brain shift). Over recent years, the development of real-time 3D ultrasound (US) imaging has revealed a number of potential applications in image-guided surgery as an alternative to open MR and interventional CT, thanks to its comparatively low cost and simplicity of use. However, the automatic processing of US images has not reached the same degree of development as that of other medical imaging modalities, probably due to the low signal-to-noise ratio of US images. We present in this article a feasibility study of a tracking tool for brain deformations based on intra-operative 3D US images. This work was performed within the framework of the European project ROBOSCOPE (see acknowledgements), which aims to assist neuro-surgical operations using real-time 3D US images and a robotic manipulator arm. The operation is planned on a pre-operative MRI, and 3D US images are acquired during surgery to track in real time the deformation of anatomical structures. One can then update the pre-operative plan and synthesise a virtual MR image that matches the current brain anatomy. The idea of MR/US registration was already present in [3,1,6,5,4]. In all these works, one can obtain only a snapshot of the brain shift at a given time-point, as user interaction is required at least to define the landmarks. Recently, an automatic rigid registration of MR and US images was presented [10]. This work
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 169–175, 2001. © Springer-Verlag Berlin Heidelberg 2001
is based on image intensities and does not rely on feature extraction. However, the estimated motion remains limited to rigid or possibly affine transformations. To our knowledge, only [7] deals with an automatic non-rigid MR/US registration. The registration is quite fast (about 5 min), even if the compounding of the 3D US and the computation of its gradient take about one hour. However, experiments are presented only on phantom data, and our experience (see section 3) is that real US images are quite different and may lead to different results. In this paper, we assume that a rigid MR/US registration is performed while the dura mater is still closed (there is no brain shift yet), for instance using the approach of [10], and we focus on the development of an automatic intensity-based non-rigid tracking algorithm suited to real-time US image sequences. We first present the registration method for two US images and how the method is turned into a tracking algorithm. Then, we present qualitative results on a sequence of US images of a phantom and on a small sequence of animal brain images.
2 The Tracking Algorithm
When analysing the problem of tracking brain deformation in 3D US time-sequences, we made the following observations. Firstly, deformations are small between successive images in a real-time sequence, but they are possibly large around the surgical tools with respect to the pre-operative image. Thus, the transformation space should allow large deformations, but only small deformations have to be retrieved between successive images. Secondly, US images have a poor signal-to-noise ratio and lack information in some areas. However, the speckle (inducing localised high intensities) is usually persistent in time and may produce reliable landmarks for successive images. As a consequence, the transformation space should be able to interpolate in areas with little information while relying on high-intensity voxels for the registration of successive images. Last but not least, the algorithm is designed with a view to real-time registration during surgery, which means that, at equal performance, one should prefer the fastest method. Following the encouraging results obtained in [8] for the intensity-based non-rigid registration of two 3D US images, we adapt in this section the method according to the previous observations.
2.1 Registering Two US Images
Parameterisation of the Transformation We used in [8] the most natural parameterisation of a free-form transformation: the displacement of each voxel. This strategy proved to be successful when the US image carries information in all areas, but it induces regularization problems when the images present large uniform areas, as is the case in the phantom sequence of section 3.1. In this paper, we use a very similar but parametric scheme in which the "displacement" t_i of each voxel position x_i has a Gaussian influence on its neighbourhood: T(t_1, ..., t_n)(x) = Σ_i t_i G_σ(x − x_i) (see details in [9]).
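The parametric deformation can be sketched as follows; the argument names are hypothetical, and T is taken here to be the displacement field evaluated at a point x:

```python
import numpy as np

def deform(x, centres, t, sigma):
    """Parametric deformation T(t_1, ..., t_n)(x) = sum_i t_i G_sigma(x - x_i):
    each control displacement t_i has a Gaussian influence on its
    neighbourhood.  x: (2,) point; centres: (n, 2); t: (n, 2)."""
    d2 = np.sum((np.asarray(centres, float) - np.asarray(x, float)) ** 2, axis=1)
    g = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian weights G_sigma
    return g @ np.asarray(t, float)
```

Near a control point the field reproduces that point's displacement, while far from all control points it smoothly decays, which is what provides the interpolation in uninformative areas.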
Similarity Energy Even if the signal-to-noise ratio of US images is poor, the speckle is usually persistent in time and may produce reliable landmarks within the time-sequence. Hence, it is desirable to use a similarity measure which favours the correspondence of similar high intensities for the registration of successive images in the time-sequence. First experiments presented in [8] indicated that the simplest one, the sum of squared differences, SSD(T) = ∫(I − J∘T)², could be adapted. In [2], we developed a more complex similarity measure: the sum of Gaussian-windowed local correlation coefficients (LCC). Let G⋆f be the convolution of f with a Gaussian, Ī = G⋆I the local mean, σ_I² = G⋆(I − Ī)² the local variance, and LC(T) = G⋆[(I − Ī)(J∘T − (J∘T)‾)] the local correlation between image I and image J∘T. Then, the global criterion to maximise is the sum of the local correlation coefficients: LCC(T) = Σ LC(T)/(σ_I σ_{J∘T}). We have shown in [8] and [2] how these criteria can be optimised using first- and second-order gradient descent techniques with a general free-form deformation field by computing the gradient and the Hessian of the criteria. Using our new parameterisation of the transformations simply amounts to a smoothing of the gradient and Hessian [9]. The optimisation will therefore be more robust and may escape from local minima while encouraging smoother transformations. In this article, the optimisation is performed using a Levenberg-Marquardt-like method.
Regularization Energy There is a trade-off to find between the similarity energy, reflected by the visual quality of the registration, and the smoothing energy, reflected by the regularity of the transformation. Despite a weaker theoretical background, we chose for efficiency reasons to alternately minimise each energy instead of the weighted sum of the two energies.
In view of a real-time system, this is particularly well suited to the stretch energy E_reg = ∫‖∇T‖² (or membrane model), which is very efficiently minimised by Gaussian filtering of the transformation. Thus, the algorithm alternately optimises the similarity energy and smooths the transformation by Gaussian filtering.
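The LCC criterion above can be sketched as follows; a minimal 2-D numpy sketch in which a separable Gaussian convolution stands in for the operator G⋆, with function names and the small eps regulariser being assumptions:

```python
import numpy as np

def _gsmooth(f, sigma):
    """Separable Gaussian convolution, standing in for G* in the LCC
    definitions (zero-padded at the borders)."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    f = np.apply_along_axis(np.convolve, 0, f, k, mode='same')
    return np.apply_along_axis(np.convolve, 1, f, k, mode='same')

def lcc(i, j, sigma=2.0, eps=1e-8):
    """Sum of Gaussian-windowed local correlation coefficients between
    image i and the warped image j (i.e. J o T)."""
    im, jm = _gsmooth(i, sigma), _gsmooth(j, sigma)   # local means
    iv = _gsmooth((i - im) ** 2, sigma)               # local variances
    jv = _gsmooth((j - jm) ** 2, sigma)
    lc = _gsmooth((i - im) * (j - jm), sigma)         # local correlation
    return float(np.sum(lc / np.sqrt(iv * jv + eps)))
```

Because each coefficient is normalised by the local standard deviations, the measure is largely insensitive to locally affine intensity changes between the two images.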
2.2 From the Registration to the Tracking Algorithm
In the previous section, we studied how to register two US images together. We now have to estimate the deformation of the brain between the first image (since the dura mater is still closed, it is assumed to correspond to the pre-operative brain) and the current image of the sequence. One could think of registering US_1 (taken at time t_1) directly to US_n (at time t_n), but the deformations could be quite large and the intensity changes important. To constrain the problem, we need to exploit the temporal continuity of the deformation. First, assuming that we already have the deformation T_US(n) from image US_1 to US_n, we register US_n with the current image US_{n+1}, obtaining the transformation dT_US(n). If the time step between two images is short with respect to the deformation rate, there should be small deformations and small intensity changes. For this step, we believe that the SSD criterion is well adapted. Then, composing with the previous deformation, we obtain a first estimate of T_US(n + 1) ≈ dT_US(n) ∘ T_US(n). However, the composition of deformation
fields involves interpolations, and simply keeping this estimate would lead to a disastrous accumulation of interpolation errors as we proceed along the sequence. Thus, we only use dT_US(n) ∘ T_US(n) as an initialisation for the registration of US_1 to US_{n+1}. Starting from this position, the residual deformation should be small (it corresponds to the correction of interpolation and systematic-error effects), but the difference between homologous point intensities might remain important. In this case, the LCC criterion might be better than the SSD one despite its higher computational cost.
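The resulting tracking loop can be sketched as follows; `register` and `compose` are hypothetical stand-ins for the non-rigid registration and the composition of deformation fields:

```python
def track(frames, register, compose):
    """Sketch of the tracking loop over a US time-sequence.
    register(ref, flt, init) estimates the deformation from ref to flt,
    optionally starting from an initial guess; compose(dT, T) composes
    two deformation fields (T may be None for the identity)."""
    T = None                                   # identity: US1 -> US1
    history = [T]
    for n in range(1, len(frames)):
        # small deformation between successive frames (SSD criterion)
        dT = register(frames[n - 1], frames[n], init=None)
        guess = compose(dT, T)                 # dT_US(n) o T_US(n)
        # refine US1 -> US_{n+1} starting from the composed guess, so
        # that interpolation errors do not accumulate along the sequence
        T = register(frames[0], frames[n], init=guess)
        history.append(T)
    return history
```

The composed field is only a starting point: the final deformation at each step is always re-estimated against the first frame.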
3 Experiments
In this section, we present qualitative results of the tracking algorithm on two sequences of US images: a phantom and a dead pig brain with a simulated cyst. Experiments were performed using both the SSD and the LCC criterion, without significant differences in the results. The registration of each image of the sequence takes between 10 and 15 minutes on a standard PC running Linux for the SSD criterion, and between 20 and 30 minutes for the LCC criterion.
3.1 A Phantom Study
Within the ROBOSCOPE project, an MR- and US-compatible phantom was developed by Prof. Auer and his colleagues at ISM (Austria) to simulate brain deformations. It is made of two balloons, one ellipsoidal and one ellipsoidal with a "nose", that can be inflated with known volumes. Each acquisition consists of one 3D MR image and one 3D US image. The goal is to use the US sequence to track the deformations and compute the corresponding virtual MR images from the first MR image. The remaining MR images can then be used to assess the quality of the tracking. Results are presented in Fig. 1. Even though there are very few salient landmarks (all the information is located in the thick and smooth balloon boundaries, and the tracking problem is thus loosely constrained), results are globally good all along the sequence. This shows that the SSD criterion correctly captures the information at edges and that our parameterised deformation interpolates reasonably well in uniform areas. When looking at the virtual MR in more detail, one can however find some places where the motion is less accurately recovered: the contact between the balloons and the borders of the US images. Indeed, the parameterisation of the transformation, and especially its smoothing, are designed to approximate the behaviour of a uniform elastic-like body. While this assumption can be justified for the shift of brain tissues, it is less obvious for our phantom, where the balloons are placed in a viscous fluid. In particular, the fluid motions between the two balloons cannot be recovered. On the borders of the US images, there is sometimes a lack of intensity information, and the deformation can only be extrapolated from the smoothing of neighbouring displacements. Since we are not using a precise geometrical and physical model of the observed structures as in [11], one cannot expect this extrapolation to be very accurate.
Fig. 1. Beginning of the sequence of 10 images of the phantom. On top: the original US images. Middle: the "virtual" US images (US 1 deformed to match the current US image) resulting from the tracking. Bottom: the virtual MR images synthesised using the deformation field computed on the US images, with the contours of the "original" MR images superimposed. The volume of the balloons ranges from 60 to 90 ml for the ellipsoidal one and 40 to 60 ml for the more complex one.
Fig. 2. Top: The 3 original images of the pig brain. The segmentation of the balloon, done on the first image, is deformed according to the transformation found by the tracking algorithm and superimposed to the original US image. Bottom: deformation of a grid to visualise more precisely the location of the deformations found.
3.2 Animal Brain Images
This dataset was obtained by Dr. Ing. V. Paul at IBMT, Fraunhofer Institute (Germany) from a pig brain post mortem. A cyst drainage was simulated by deflating a balloon catheter, with a complete volume scan at three steps. We present in figure 2 the results of the tracking. Since we have no corresponding MR image, we present in the last two rows the deformation of a grid (a virtual synthetic image), to emphasise the regularity of the estimated deformation, and the deformation of a segmentation of the balloon. The correspondence between the original and the virtual (i.e. deformed US 1) images is qualitatively good. In fact, although the edges are less salient than in the phantom images, we have globally a better distribution of intensity features over the field of view, due to the speckle in these real brain images. One should also note from the deformed grid images that the deformation found is very smooth. Reducing the smoothing of the transformation could allow the algorithm to find a closer fit. However, this could also allow some unwanted high-frequency deformations due to the noise in the US images. We believe that it is better to recover the most important deformations and miss some smaller ones than to try to match the images exactly and risk "inventing" some possibly large deformations.
4 Discussion and Conclusion
We have developed in this paper a tracking algorithm adapted to time-sequences of US images and not only to the registration of two images. The algorithm partly fulfils the goals of the ROBOSCOPE project: it is able to recover an important part of the deformations and produces a smooth deformation, despite the noisy nature of the US images. Experiments on phantom and animal data show that this allows us to simulate virtual MR images qualitatively close to the real ones. We observed that the SSD and LCC criteria produced very similar results in our examples, LCC being around two times slower than SSD. Since the computation time of the US-US non-rigid registration is a key issue for real-time motion tracking, one could conclude that SSD is to be preferred to LCC. We believe that the choice of SSD is justified for the registration of successive images in the time sequence. However, for the update of the global deformation (the transformation from the first image to the current one), LCC is probably necessary if the sequence were to present important intensity changes over time. The computation time is still far from real time for continuous tracking of deformations during surgery, but a parallelisation of the algorithm is rather straightforward for the computation of both the image and the regularization energies. The type of transformation is a very sensitive choice for such a tracking algorithm. We made the assumption of a "uniform elastic"-like material. This may be adequate for the brain tissues, but probably not for the ventricles or for the tracking of the surgical tools themselves. Indeed, they will penetrate into
the brain without any elastic constraint with the neighbouring tissues. A specific adaptation of the algorithm around the tools will likely be necessary. Another possible source of errors is the occlusion of part of a structure visible in the US, for instance shadowing by the endoscope. Acknowledgements This work was partially supported by the EC-funded ROBOSCOPE project HC 4018, a collaboration between the Fraunhofer Institute (Germany), Fokker Control System (Netherlands), Imperial College (UK), INRIA (France), ISM Salzburg and Kretz Technik (Austria). The authors address special thanks to Prof. Auer and his colleagues at ISM for the phantom acquisitions, and to Dr. Ing. V. Paul at IBMT, Fraunhofer Institute, for the acquisition of the pig brain images.
References
1. R.D. Bucholz, D.D. Yeh, B.S. Trobaugh, L.L. McDurmont, C.D. Sturm, C. Baumann, J.M. Henderson, A. Levy, and P. Kessman. The correction of stereotactic inaccuracy caused by brain shift using an intraoperative ultrasound device. In Proc. of CVRMed-MRCAS'97, LNCS 1205, p. 459–466, 1997.
2. P. Cachier and X. Pennec. 3D non-rigid registration by gradient descent on a Gaussian-windowed similarity measure using convolutions. In Proc. of MMBIA'00, p. 182–189, Hilton Head Island, South Carolina, USA, June 2000.
3. H. Erbe, A. Kriete, A. Jödicke, W. Deinsberger, and D.-K. Böker. 3D-Ultrasonography and image matching for detection of brain shift during intracranial surgery. Computer Assisted Radiology, p. 225–230, 1996.
4. D.G. Gobbi, R.M. Comeau, and T.M. Peters. Ultrasound/MRI overlay with image warping for neurosurgery. In Proc. of MICCAI'00, LNCS 1935, p. 106–114, 2000.
5. D.G. Gobbi, R.M. Comeau, and T.M. Peters. Ultrasound probe tracking for real-time ultrasound/MRI overlay and visualization of brain shift. In Proc. of MICCAI'99, LNCS 1679, p. 920–927, 1999.
6. N. Hata, M. Suzuki, T. Dohi, H. Iseki, K. Takakura, and D. Hashimoto. Registration of ultrasound echography for intraoperative use: a newly developed multiproperty method. SPIE, 2359, 1998.
7. A.P. King, J.M. Blackall, G.P. Penney, P.J. Edwards, D.L.G. Hill, and D.J. Hawkes. Bayesian estimation of intra-operative deformation for image-guided surgery using 3-D ultrasound. In Proc. of MICCAI'00, LNCS 1935, p. 588–597, 2000.
8. X. Pennec, P. Cachier, and N. Ayache. Understanding the "demon's algorithm": 3D non-rigid registration by gradient descent. In Proc. of MICCAI'99, LNCS 1679, p. 597–605, Cambridge, UK, September 1999.
9. X. Pennec, P. Cachier, and N. Ayache. Tracking brain deformations in time-sequences of 3D US images. Research Report 4091, INRIA, December 2000.
10. A. Roche, X. Pennec, M. Rudolph, D.P. Auer, G. Malandain, S. Ourselin, L.M. Auer, and N. Ayache. Generalized correlation ratio for rigid registration of 3D ultrasound with MR images. In Proc. of MICCAI'00, LNCS 1935, p. 567–577, Pittsburgh, USA, October 2000.
11. O. Skrinjar and J. Duncan. Real time 3D brain shift compensation. In Proc. of IPMI'99, p. 42–55, Visegrad, Hungary, July 1999.
Robust Multimodal Image Registration Using Local Frequency Representations
Baba C. Vemuri (1), Jundong Liu (1), and José L. Marroquin (2)
(1) Department of CISE, University of Florida, Gainesville, FL 32611, vemuri|[email protected]
(2) CIMAT, Guanajuato 36000, Mexico, [email protected]
Abstract. Fusing multi-modal data involves automatically estimating the coordinate transformation required to align the data sets. Most existing methods in the literature are not robust and fast enough for practical use. We propose a robust algorithm, based on matching local-frequency image representations, which naturally allows for processing the data at different scales/resolutions, a very desirable property from a computational efficiency viewpoint. The algorithm involves minimizing, over all affine transformations, the integral of the squared error (ISE or L2E) between a Gaussian model of the residual and its true density function. The residual here refers to the difference between the local frequency representations of the transformed (by an unknown transformation) source and target data. The primary advantage of our algorithm is its ability to cope with large non-overlapping fields of view of the two data sets being registered, a common occurrence in practice. We present implementation results for misalignments between CT and MR brain scans.
1 Introduction
Image registration is one of the most widely encountered problems in a variety of fields, including but not limited to medical image analysis, remote sensing and satellite imaging. Broadly speaking, image registration methods can be classified into two classes [10], namely feature-based and direct methods. In the former, prominent features from the two images to be registered are matched to estimate the transformation between the two data sets. In the latter, the transformation is determined directly from the image data or from a derived "image-like" representation. Several feature-based schemes exist in the literature. We will not describe feature-based schemes here but simply refer the reader to the survey [6]. Feature-based approaches have one commonality: they need to detect landmark features in the images, and hence the accuracy of registration is dictated by the accuracy of the feature detector. Amongst the direct approaches, one straightforward approach is the optical flow formulation [10], which assumes that the brightness at corresponding
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 176–182, 2001. © Springer-Verlag Berlin Heidelberg 2001
points in the two data sets to be registered is the same, an assumption that is severely violated in multi-modal images. The most popular direct approach is based on the concept of maximizing mutual information (MI), reported in [12,3]. The registration experiments reported in these works are quite impressive for the case of rigid and affine motions. Most MI-based algorithms in the literature have been formulated for global parameterized motion, with the exception of the work reported in Meyer et al. [8], wherein affine transformations as well as thin-plate spline warps are handled. The reported CPU execution times are quite high, of the order of several hours for estimating thin-plate warps. The problem of handling local deformations in a mutual information framework is a very active area of research, and some recent papers reporting results on this problem are [5,1]. For an exposition of other direct methods, we refer the reader to the survey [6] and some recent work on level-set methods in [11]. Most direct methods are known to be sensitive to outliers in the data, and this motivates us to seek a statistically robust scheme. Situations involving outliers arise when the fields of view (FOV) of the data sets do not have a significant overlap. In this paper, we develop a multi-modal registration technique based on a local frequency representation of the image data, obtained by Gabor filtering the image tuned to a certain frequency and then computing the gradient of the phase of the filtered image. This representation is relatively invariant to changes in contrast/brightness. Because multi-modal images of the same underlying object have very different intensities, a local frequency based representation seems apt to capture the salient commonalities between the two modalities of data. After computing the local frequency representations of the input images, a registration that best matches these representations is determined.
To achieve this, we minimize a robust match measure based on the integral of the squared error (ISE, or L2E) between a Gaussian model of the residual and its true density function [9]. The residual here refers to the difference between the local frequency representations of the transformed source and target data. This robust estimation framework affords the advantage of not having to deal with the additional tuning parameters that would be required when using M-estimators to achieve robustness. One of the key strengths of our method is that, because the formulation is set in a robust framework, it can handle data sets that do not have significant overlap, as is the case in most practical situations.
2 Local Frequency Representation
A large body of research has been devoted to retrieving information that is shared by multi-modal data sets while reducing or eliminating the (imaging) sensor dependent background information. In this paper, we use a local frequency image representation obtained by Gabor filtering the input images and then computing the gradient of the phase. This local frequency representation has all the advantages of an "energy" representation and additionally can be tuned to any desired frequency/orientation, thereby facilitating control of the alignment process. In this paper, we automatically select a bank of frequency and orientation
Baba C. Vemuri, Jundong Liu, and José L. Marroquin
tuned filters, each of which corresponds to a significant local maximum in the magnitude of the image's spectrum. We refer the reader to [7] for a more elegant filter selection algorithm which chooses filters tailored to the content of the images.

Given a signal f in 1D, its analytic signal is defined as f_A = f − i f_Hi, where f_Hi is the Hilbert transform of f. The argument of f_A is referred to as the local phase of f. The spatial derivative of the local phase is called the local or instantaneous frequency [4]. The transformation from the real signal to its corresponding analytic signal can be regarded as the result of convolving the real signal with a complex filter, called a quadrature filter. The following properties of local frequency make it a candidate for an invariant image representation [4] in matching multi-modal image data sets: (1) Local phase estimates are relatively invariant to signal energy and to changes in illumination conditions. (2) Local phase estimates and spatial position are equivariant, i.e., the change of local phase is proportional to the change of spatial position. Except for the modulo-2π wrap-around, the local phase changes smoothly and monotonically with the position of the signal. (3) The spatial derivative of local phase estimates is equivariant with spatial frequency.

The Gabor filter is a well-known quadrature filter. The complex 2-D Gabor functions have the following general form (see [4]):

  h(x, y) = g(x′, y′) exp(2πj(Ux + Vy)),   (1)

where j = √−1, (x′, y′)ᵀ = R(x, y)ᵀ with R being the 2D rotation matrix, and

  g(x, y) = (1 / (2πλσ²)) exp( −((x/λ)² + y²) / (2σ²) ).   (2)

The local frequency representation is computed in three steps. For each tuning frequency (ω, θ):

– Generate a Gabor filter G(x, y) tuned to direction θ and frequency ω, and let q₊(x, y) and q₋(x, y) be the results of convolving the image I with the real and imaginary parts of G respectively: q₊(x, y) = I ⊗ real(G) and q₋(x, y) = I ⊗ imag(G).
– Compute the local phase gradient (local frequency estimator) for each filter using

  ∇φ(x, y) = (q₊(x, y)∇q₋(x, y) − q₋(x, y)∇q₊(x, y)) / (q₊²(x, y) + q₋²(x, y)),

  where φ(x, y) = arctan(q₋(x, y)/q₊(x, y)) (note that φ need not be computed explicitly).
– Construct an image representation (the average squared local frequency magnitude) by summing the squared gradient magnitudes over all filters: F(x, y) = Σ_ω Σ_θ |∇φ(x, y)|².

Figure 1 depicts a pair of MR-CT slices and the associated local frequency representations. Another notable property of a local frequency image representation is its scalability: the Gaussian scale parameter σ in equation (2) can be varied to directly generate an image scale-space representation (a property lacking in mutual information based schemes), which is a very useful framework for analysis and computation.
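The three-step procedure above can be sketched in Python (the paper gives no code; the filter size, the default σ and λ values, and the helper names below are illustrative choices, not the authors'):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(omega, theta, sigma=4.0, lam=1.0, size=21):
    """Complex 2-D Gabor filter h(x,y) = g(x',y') exp(2*pi*j*(U*x + V*y))."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotated coordinates (x', y')^T = R (x, y)^T
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-((xr / lam) ** 2 + yr ** 2) / (2 * sigma ** 2)) / (2 * np.pi * lam * sigma ** 2)
    u, v = omega * np.cos(theta), omega * np.sin(theta)
    return g * np.exp(2j * np.pi * (u * x + v * y))

def local_frequency_representation(image, tunings):
    """F(x,y) = sum over (omega, theta) of |grad phi(x,y)|^2."""
    F = np.zeros_like(image, dtype=float)
    for omega, theta in tunings:
        G = gabor_kernel(omega, theta)
        q_plus = fftconvolve(image, G.real, mode="same")   # I (x) real(G)
        q_minus = fftconvolve(image, G.imag, mode="same")  # I (x) imag(G)
        gy_p, gx_p = np.gradient(q_plus)
        gy_m, gx_m = np.gradient(q_minus)
        denom = q_plus ** 2 + q_minus ** 2 + 1e-12  # guard against division by zero
        # Phase gradient without computing phi explicitly
        fx = (q_plus * gx_m - q_minus * gx_p) / denom
        fy = (q_plus * gy_m - q_minus * gy_p) / denom
        F += fx ** 2 + fy ** 2
    return F
```

A filter bank could then be built by picking the frequencies (ω, θ) of the significant local maxima of the image's FFT magnitude, as the text describes.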
Fig. 1. Slices from (a) MR and (b) CT scans; (c) and (d) corresponding local frequency representations.
3 Matching Local Frequency Representations
To match the local-frequency representations of the image pair, we develop a statistically robust matching criterion based on minimization of the integrated squared error (ISE), also known as the L2 error or simply L2E, between a Gaussian model of the residual and the true density function of the residual. It was shown by Scott [9] that minimum distance estimators, including the L2E, are inherently robust without requiring the specification of any of the tuning parameters found in robust likelihood methods.

Let the local frequency representations of the CT-MR image pairs (or MR-MR pairs acquired under different imaging protocols) differ by a local displacement; then the following equation holds for the local-frequency representations: F1(X + T) = F2(X) + ε(X), where the residual error field ε is assumed to be composed of independent, identically distributed random variables, F1(·) and F2(·) are the 3D local frequency image representations computed from the MR and CT data sets respectively, X = (x, y, z), and T = (u, v, w) is the 3D displacement field at the points (x, y, z). Our goal is to minimize the L2E measure given by

  min_T E(T) = ∫ {g(ε | θ) − h(ε)}² dε,   (3)
where g(·) is a Gaussian function modeling the density of the residual error, θ = [µ, σ] is the vector of Gaussian density parameters, with µ and σ the mean and variance respectively, and h is the true, unknown density of the residual error term. Expanding the integrand leads to two terms that depend on T and a third term, h²(·), that is independent of T and can therefore be ignored in the minimization. The first term in the expansion is g²(·) and the second is −2 E_h[g(· | θ)], i.e., the expectation of g(·) with respect to h, the true density of the residual. The first term, being a Gaussian, can be evaluated in closed form, and we can use the following unbiased estimator for the second term:
  −(2/N) Σ_{i=1}^{N} g(F1(Xᵢ + T) − F2(Xᵢ) | θ),   for the N lattice points Xᵢ.

Thus, the minimization using the L2E criterion is given by

  min_{T,µ,σ} E(T, θ) = 1/(2√π σ) − (2/N) Σ_{i=1}^{N} (1/(√(2π) σ)) exp{ −(F1(Xᵢ + T) − F2(Xᵢ) − µ)² / (2σ²) },   (4)
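The robustness bought by the L2E criterion can be illustrated with a minimal 1-D analogue: fitting a Gaussian to contaminated samples (the outliers standing in for residuals caused by non-overlapping FOVs). This is a sketch of Scott's L2E idea [9], not the authors' registration code; the function names and the contamination fractions are ours:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def l2e_criterion(params, x):
    """L2E objective for fitting N(mu, sigma^2) to samples x.

    The integral of g^2 is 1/(2*sqrt(pi)*sigma); the cross term -2*E_h[g]
    is estimated by the sample mean of the Gaussian density at the data.
    """
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return 1.0 / (2 * np.sqrt(np.pi) * sigma) - 2.0 * np.mean(norm.pdf(x, mu, sigma))

rng = np.random.default_rng(0)
# 90% inliers around 0, 10% gross outliers around 20
x = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(20.0, 1.0, 100)])

res = minimize(l2e_criterion, x0=[np.median(x), 1.0], args=(x,), method="Nelder-Mead")
mu_l2e, sigma_l2e = res.x
print(mu_l2e, np.mean(x))  # L2E mean stays near 0; the sample mean is dragged toward 2
```

Note that no tuning constant had to be chosen, in contrast to M-estimators.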
where T is assumed to be an unknown parameterized affine transformation in 3D. To estimate the parameterized transformation, we solve the minimization problem in equation (4) numerically using a preconditioned gradient descent scheme [2]. The basic iterative form for a variety of gradient-based numerical methods can be written as x_{k+1} = x_k − α_k D_k ∇E(x_k), where E is the function being minimized, D_k is a symmetric positive definite matrix, and α_k is the step length; a condition to be observed in descent methods is ∇E(x_k)ᵀ D_k ∇E(x_k) > 0. We choose D_k = diag(d₁ᵏ, d₂ᵏ, …, dₙᵏ), where diag(·) denotes a diagonal matrix. The step size α_k can be determined using a line search, which involves the minimization α_k = argmin_{α≥0} E(x_k + α d_k). For reasons of computational efficiency, we instead choose successive step-length reduction using the Armijo rule (see [2] for details).
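A toy sketch of the preconditioned descent iteration x_{k+1} = x_k − α_k D_k ∇E(x_k) with Armijo backtracking, run on a simple quadratic rather than the registration objective (the constants β and c are conventional backtracking choices, not values from the paper):

```python
import numpy as np

def armijo_descent(grad_E, E, x0, D, alpha0=1.0, beta=0.5, c=1e-4, iters=200):
    """Preconditioned gradient descent with Armijo step-length reduction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_E(x)
        d = -D @ g  # descent direction; valid when g^T D g > 0
        alpha = alpha0
        # Armijo rule: halve alpha until sufficient decrease is achieved
        while E(x + alpha * d) > E(x) + c * alpha * (g @ d):
            alpha *= beta
        x = x + alpha * d
        if np.linalg.norm(g) < 1e-8:
            break
    return x

# Toy objective: E(x) = 0.5 x^T A x with a badly scaled A
A = np.diag([1.0, 100.0])
E = lambda x: 0.5 * x @ A @ x
grad_E = lambda x: A @ x
# Diagonal preconditioner mimicking the paper's choice D_k = diag(d_1,...,d_n)
D = np.diag(1.0 / np.diag(A))
x_star = armijo_descent(grad_E, E, x0=[3.0, -2.0], D=D)
```

With this diagonal preconditioner the badly scaled quadratic converges to the minimizer at the origin in essentially one step, which is the point of choosing D_k well.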
4 Implementation Results
In this section, we demonstrate the algorithm's performance for inter-modality affine registrations. All the examples contain real (not synthesized) misalignments. For comparison purposes, we have implemented the MI algorithm described in [3] as well as the SSD algorithm applied to the local frequency representations. In all cases, we compare the computed registrations with the ground truth, which was obtained from a manual alignment process by an "expert" and is in current clinical use. As will be seen from the results described below, the key advantage of our method over the widely used MI-based (or SSD-type) methods is that we can handle large non-overlapping areas between the two data sets being matched. We tested our algorithm, the MI method, and the SSD method on MR-CT data from five different subjects. For lack of space, we present only the comparison of our algorithm to the MI method. The MR-CT pairs were misaligned due to motion of the subject. The CT image was of size (512, 512, 120) while the MR image was of size (512, 512, 142), and the voxel dimensions were (0.46, 0.46, 1.5) and (0.68, 0.68, 1.05) mm for CT and MR respectively. We estimate the registration by minimizing the L2E function described earlier. Three of these five data sets have large differences in FOV, causing large non-overlapping areas in the MR-CT pairs. On the first two data sets in Table 1, our algorithm and the MI algorithm produce comparable results due to significant overlap between the data sets. However, the MI method performs unsatisfactorily in comparison to
Table 1. 3D motion estimates for five MR-CT data sets. Each motion is given as a 3×4 matrix [R | t] (rotation and translation in mm), listed row by row; RMSE is reported as (rotation, translation in mm).

Set 1
  True: 0.990 −0.093 −0.102  3.249 |  0.043  0.912 −0.405  2.425 |  0.131  0.399  0.907   3.734
  L2E:  1.000 −0.081 −0.082  3.479 |  0.052  0.920 −0.385  2.460 |  0.113  0.387  0.923   3.785   RMSE (0.016, 0.105)
  MI:   0.990 −0.093 −0.102  2.916 |  0.057  0.927 −0.384  2.541 |  0.108  0.376  0.920   3.150   RMSE (0.017, 0.3942)
Set 2
  True: 0.994  0.104  0.0132 5.217 | −0.093  0.933 −0.347  2.611 | −0.049  0.344  0.937   1.156
  L2E:  0.990  0.104 −0.016  5.083 | −0.087  0.920 −0.334  2.394 | −0.061  0.369  0.902   0.248   RMSE (0.016, 0.419)
  MI:   0.993  0.115  0.004  4.980 | −0.106  0.926 −0.362  1.572 | −0.046  0.360  0.932   1.565   RMSE (0.010, 0.659)
Set 3
  True: 0.988 −0.124  0.093  9.798 |  0.088  0.941 −0.326 −0.901 |  0.128  0.314  0.940  −0.228
  L2E:  0.988 −0.119 −0.074  9.820 |  0.082  0.943 −0.300 −0.657 | −0.117  0.327  0.951  −0.810   RMSE (0.010, 0.283)
  MI:   0.986 −0.125 −0.093  9.530 |  0.093  0.926 −0.281  0.289 |  0.116  0.335  0.979   0.204   RMSE (0.020, 0.510)
Set 4
  True: 0.968  0.250 −0.014  8.701 | −0.240  0.914 −0.327  7.328 | −0.069  0.321  0.944 −22.422
  L2E:  0.969  0.258 −0.007  8.530 | −0.241  0.900 −0.335  7.620 | −0.098  0.348  0.958 −21.480   RMSE (0.016, 0.468)
  MI:   0.968  0.260  0.053  6.883 | −0.259  0.965 −0.030 −0.259 | −0.059  0.015  0.998 −10.145   RMSE (0.146, 10.875)
Set 5
  True: 0.968  0.202 −0.146  0.120 | −0.242  0.906 −0.346 12.970 |  0.062  0.370  0.927  −9.870
  L2E:  0.972  0.197 −0.130 −0.022 | −0.226  0.909 −0.307 12.432 |  0.066  0.360  0.905 −12.243   RMSE (0.017, 1.017)
  MI:   0.963  0.1863 −0.195 −0.910 | −0.234 0.936 −0.267 10.970 |  0.133  0.298  0.945  −4.798   RMSE (0.049, 3.20)
our L2E method in the last three cases depicted in the table. The initial guess for the transformation in all cases, for both methods, was the zero vector. In cases four and five in the table, the MI method does poorly despite a very good initial guess. Table 1 summarizes the results of applying our L2E algorithm and the MI algorithm to five misaligned MR-CT pairs. The table depicts the ground-truth transformation (as assessed by a local expert and currently in clinical use), the computed parameters of the transformation T using the L2E and MI methods, and the RMS errors in the computed rotation matrices as well as the translation vectors. The average CPU time for registering these large data sets using our approach on a single R10000 processor of an SGI Onyx is 20 minutes; the code, however, was not fully optimized. As is evident, the low RMS error as well as the reasonable CPU time achieved by the L2E scheme in the presence of large non-overlapping FOVs is indicative of the power of our registration algorithm.
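The paper does not spell out exactly how the RMSE column of Table 1 is computed; a plausible reading, sketched here under that assumption, is an elementwise RMS over the 3×3 rotation block and over the translation vector of each 3×4 motion:

```python
import numpy as np

def motion_rmse(true_M, est_M):
    """RMS errors between two 3x4 motions [R | t].

    Returns (rmse_R, rmse_t): elementwise RMS over the rotation entries
    and over the translation components (mm), in the style of the
    (rotation, translation) pairs reported in Table 1. This is an
    assumed definition, not taken from the paper.
    """
    true_M, est_M = np.asarray(true_M, float), np.asarray(est_M, float)
    rmse_R = np.sqrt(np.mean((true_M[:, :3] - est_M[:, :3]) ** 2))
    rmse_t = np.sqrt(np.mean((true_M[:, 3] - est_M[:, 3]) ** 2))
    return rmse_R, rmse_t
```

Applied to the true and estimated matrices of Table 1, such a measure summarizes rotation and translation accuracy separately.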
5 Summary
In this paper, we presented a novel statistically robust way to register multimodal data sets. Local-frequency representations of the images to be registered
are computed using Gabor filters, and the global registration problem is formulated as the minimization of the integral of the squared error between a Gaussian model of the residual and its true density function. This robust estimation framework affords the advantage of not having to deal with the additional tuning parameters that would be required when using M-estimators. Our registration results for real data sets were compared with those from an application of MI to the same data. Our algorithm achieved better registrations than MI for reasonably large non-overlapping FOVs in a very short time. Our future efforts will be focused on extending the framework to cope with non-rigid deformations.

Acknowledgments. We thank Drs. Bova and Bouchet and Mr. Moore for providing the image data. This research was partially supported by grants NSF IIS-9811042 and NIH RO1-RR13197.
References
1. R. Bansal et al., "A novel approach for registration of 2D portal and 3D CT images for treatment setup verification in radiotherapy," in Proc. of MICCAI, Cambridge, MA, 1998, 1075–1086.
2. D. P. Bertsekas, Nonlinear Programming, Athena Scientific, 1999.
3. A. Collignon et al., "Automated multimodality image registration using information theory," in Proc. of IPMI, 1995, 263–274.
4. G. H. Granlund and H. Knutsson, Signal Processing for Computer Vision, Kluwer, Netherlands, 1995.
5. M. E. Leventon and W. E. L. Grimson, "Multimodal volume registration using joint intensity distributions," in Proc. of MICCAI, Cambridge, MA, 1998, 1057–1066.
6. J. B. Maintz and M. A. Viergever, "A survey of medical image registration," MedIA, 2, 1998, 1–36.
7. J. L. Marroquin et al., "Adaptive quadrature filters and the recovery of phase from fringe pattern images," JOSA, 14(8), 1997, 1742–1752.
8. C. T. Meyer et al., "Demonstrating the accuracy and clinical versatility of MI…," MedIA, 1(3), 1997, 195–206.
9. D. W. Scott, "Parametric modeling by minimum L2 error," Technical Report 98-3, Dept. of Statistics, Rice University.
10. B. C. Vemuri et al., "An efficient motion estimator with application to medical image registration," MedIA, 2(1), 1998, 79–98.
11. B. C. Vemuri et al., "A level-set based approach to image registration," IEEE Workshop on MMBIA, Hilton Head, SC, June 2000.
12. P. A. Viola and W. M. Wells, "Alignment by maximization of mutual information," in Fifth ICCV, Cambridge, MA, 1995, 16–23.
Steps Toward a Stereo-Camera-Guided Biomechanical Model for Brain Shift Compensation

Oskar Škrinjar¹, Colin Studholme², Arya Nabavi³, and James Duncan¹,²

¹ Department of Electrical Engineering ([email protected]) and ² Department of Diagnostic Radiology, Yale University, New Haven, CT, USA
³ Surgical Planning Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
Abstract. Surgical navigation systems provide the surgeon with a display of preoperative and intraoperative data in the same coordinate system. However, the systems currently in use in neurosurgery are subject to inaccuracy caused by intraoperative brain movement (brain shift) since they typically assume that the intracranial structures are rigid. Experiments show brain shift of up to one centimeter, making it the dominant error in the system. We propose a system that compensates for this error. It is based on a continuum 3D biomechanical deformable brain model guided by intraoperative data. The model takes into account neuro-anatomical constraints and is able to correspondingly deform all preoperatively acquired data. The system was tested on two sets of intraoperative MR scans, and an initial validation indicated that our approach reduced the error caused by brain shift.
1 Introduction
Commercial surgical navigation systems assume that the organs being operated on are rigid and are consequently subject to inaccuracy due to soft tissue deformation. In this paper we concentrate on the problem of brain deformation during surgery (commonly referred to as brain shift), although a similar approach can be applied to other cases involving soft tissue deformation. Brain shift has been reported to be up to about 1 cm ([1], [2], [3], [4], [6], [8], [10]), and it contributes to the inaccuracy of surgical navigation systems more than any other source of error. Researchers have tried to compensate for brain shift using a deformable model ([5], [9], [15]). We also note related work on biomechanical model based non-rigid registration of intraoperative brain images ([14]). Brain shift is a complex phenomenon caused by several factors that are not easily measurable, some of which vary from patient to patient. This indicates that it is most probably not possible to realistically model brain deformation using a deformable model without any intraoperative input. This observation is the basis of our approach and is elaborated in Section 2. The use of intraoperative information for model guidance has been suggested by a few groups ([5], [8], [9] and [12]).

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 183–189, 2001. © Springer-Verlag Berlin Heidelberg 2001
Here we present an approach for dealing with the problem of brain shift that relies on a combination of intraoperative input and a biomechanical deformable brain model. This work builds on our previous efforts ([9]), but differs in a number of ways. Here we propose to guide the deformable model by the reconstructed exposed brain surface using input from a pair of stereo cameras overlooking the craniotomy, while before we relied on manual delineation of brain surface points, which provided less intraoperative information and was disturbing for the surgeon. In addition, we now use a continuum biomechanical model, as opposed to the spring mass model in [9], since it allows for physically sound model guidance through displacement boundary conditions. Also, the continuum model parameters have a nice physical interpretation and can be found in the literature, while this is not the case with the spring mass model parameters. Finally, we have done an in-volume validation of the brain deformation prediction using intraoperative MR scans provided by our collaborators at Harvard Medical School ([17]), while in [9] we had only surface measurements to check the model prediction against.
2 Assumptions
Brain shift is a very complex phenomenon; the factors that affect brain deformation include, not necessarily in order of importance: gravity, mechanical tissue properties, administered drugs, loss of cerebrospinal fluid (CSF), interaction of CSF and brain tissues, anatomical constraints, tissue resection and removal, intracranial pressure, geometrical complexity, and patient variability. Given this list, it becomes clear that it is virtually impossible to reliably model brain deformation without any intraoperative information. This is in accordance with observations in [11]. We base our approach on the following three assumptions:
– Relatively simple model. Due to the complexity of the brain shift phenomenon, it is not only difficult to model some of the causative factors, but it is also unclear how to set the model parameters (any increase in model complexity inevitably involves more parameters). We therefore base our approach on a simple model that incorporates the main tissue characteristics (elasticity and near incompressibility). The complexity of the deformation is supplied by intraoperative guidance of the model.
– Static model. Since brain deformation is a very slow process with negligible dynamic components (components involving velocity and acceleration), we use a static model.
– Intraoperative input. The model has to be guided by intraoperative input.
3 Approach

3.1 Intraoperative Input
We use a pair of stereo cameras overlooking the exposed brain surface to acquire intraoperative information about the deforming brain. The idea is to reconstruct
and track the exposed brain surface as it deforms during the surgery. If this can be done reliably, one can use the reconstructed brain surface as a boundary condition for a deformable brain model. Each time the surgeon moves his or her hands and surgical tools out of the way of the cameras, snapshots from the two cameras are taken, the exposed brain surface is reconstructed, the surface is used to guide the model (as a boundary condition), and once the model is deformed, it can be used to update (properly warp) all available preoperative images. This approach can reduce the error introduced by brain shift, while the system is much cheaper than an intraoperative scanner. Clearly, this system uses only intraoperative surface information and cannot perform well after tissue resections. In addition to using this system before resections, one can use it in the case of subdural electrode implantation (often performed as a first stage of epilepsy surgery), where no tissue is removed but the brain still deforms due to gravity, loss of CSF, and the other listed factors. Fig. 1 shows the left and right camera views of an exposed brain surface and a reconstruction of the surface ([13]).
Fig. 1. Left and right camera views of an exposed brain surface and a reconstruction of the surface.
3.2 Model
As motivated in Section 2, we use a simple brain deformation model. A continuum model is employed rather than a spring-mass model, since it is physically more realistic and has advantages regarding model parameters and guidance. Because the brain deformation is relatively small, it is a good approximation to use the linear stress-strain relation for isotropic materials. We are interested in obtaining the displacement field for the brain (to be able to update the preoperative images correspondingly), and therefore the goal is to obtain equations only in displacements. Since we are considering a static model, using the static equilibrium equations for stress, the relations between displacements and strain components, and the stress-strain relation, one obtains

  ∇²u_x + (1/(1−2ν)) ∂/∂x (∂u_x/∂x + ∂u_y/∂y + ∂u_z/∂z) + F_x/µ = 0,
  ∇²u_y + (1/(1−2ν)) ∂/∂y (∂u_x/∂x + ∂u_y/∂y + ∂u_z/∂z) + F_y/µ = 0,   (1)
  ∇²u_z + (1/(1−2ν)) ∂/∂z (∂u_x/∂x + ∂u_y/∂y + ∂u_z/∂z) + F_z/µ = 0,
where F = (F_x, F_y, F_z) is a body force (gravity in this case) and µ = E/(2(1+ν)), with E Young's modulus and ν Poisson's ratio. These three equations are only in displacements and are known as the Navier equations ([16]). We need to solve Eq. 1 with given displacement boundary conditions. Since these are linear partial differential equations, and since differentiation is a linear operator, one can separately find the solution u′ for the equations with zero boundary conditions and the solution u″ for the equations with zero body force; the total solution is then u = u′ + u″. However, gravity acts all the time, both before and during the brain deformation, and therefore u′ will be the same in both cases. Since we are interested in the displacement field between the deformed and undeformed states, we do not need to compute u′. Thus, we need to solve only for u″, i.e., solve Eq. 1 with the given boundary conditions and zero body force. One should notice that gravity still influences u″ through the boundary conditions (since the brain deforms partly because of gravity, and a part of the brain surface is used as the boundary condition). Another interesting observation is that Young's modulus does not affect the displacement field u″: since the body force is zero in this case, the terms in Eq. 1 containing E (hidden in µ) disappear. Thus, the only model parameter to be set is Poisson's ratio. We tested several values for ν, and the one that yielded the smallest error was ν = 0.4, a value used by other groups as well ([14]). We assume that the model is homogeneous, since there is no reliable way known to us of setting the model parameters for different brain structures.
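The claim that Young's modulus drops out under zero body force can be checked on a 1-D analogue: a finite-difference sketch of E u″ + f = 0 with displacement (Dirichlet) boundary conditions. This is only an illustration of the argument, not the authors' 3-D finite element model:

```python
import numpy as np

def rod_displacement(E, n=51, u_left=0.0, u_right=2.0, body_force=0.0):
    """Finite-difference solve of 1-D linear elasticity E u'' + f = 0
    on [0, 1] with prescribed end displacements."""
    h = 1.0 / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    # Displacement boundary conditions at both ends
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = u_left, u_right
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = E, -2 * E, E
        b[i] = -body_force * h ** 2
    return np.linalg.solve(A, b)

# With zero body force the stiffness E cancels out of every interior row:
# the displacement field depends only on the boundary conditions.
u_soft = rod_displacement(E=1.0)
u_stiff = rod_displacement(E=1e4)
print(np.allclose(u_soft, u_stiff))  # True
```

The same cancellation is what makes Poisson's ratio the only parameter left to choose in the 3-D model.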
3.3 Method
The first step is to segment the pre-deformation brain regions of interest (the cerebral hemisphere at the side of the craniotomy, the falx, and the tentorium). We used manual segmentation for this task. Then we rigidly registered the deformed and the undeformed brain using a normalized mutual information based registration algorithm ([7]), which has sub-voxel accuracy. We employed a finite element method to determine the deformation governed by Eq. 1. A mesh composed of hexahedral ("brick") elements (with approximately 5 mm side lengths) was generated using the segmented data and an in-house mesh generator. The generated mesh (of the cerebral hemisphere at the side of the craniotomy) had about 6,500 nodes and about 5,000 "brick" elements. Here we used the anatomical constraint that the falx and tentorium are practically fixed, and we fixed the corresponding model nodes. For this reason it is enough to consider only the half of the brain at the side of the craniotomy, since the other part does not deform. We are aware that, although this assumption holds in most cases, there are exceptions in which the falx moved during surgery. In order to simulate the exposed brain surface generation (which would normally be done using a pair of stereo cameras), we manually segmented the deformed brain from the intraoperative scan and generated its surface. Since the brain surface did not move significantly, we computed the displacement at each point r1 of the undeformed brain surface S1 (only at the part of the brain surface that was visible through
the craniotomy), as ∆r = r2 − r1 , where r2 is the point on the deformed brain surface S2 ) obtained as argr2 ∈S2 min||r2 − r1 ||. Finally, the computed displacements at the exposed brain surface were used as a boundary condition for the deformable brain model.
4 Results and Validation
In this section we present results of the model deformation computation for two cases: a sinking brain and a bulging brain. For both cases we generated the model and the displacement boundary conditions as explained in the previous sections. We used ABAQUS to compute the model deformation. For a model of about 6,500 nodes and about 5,000 "brick" elements, it took about 80 seconds to solve the equations on an SGI Octane R12K machine. This time is almost practically applicable, since it would mean that about a minute and a half after imaging the brain with the cameras, one would get updated MR images and other preoperative data. In order to validate the computed deformation, we used a set of anatomical landmarks in the scan of the undeformed brain at various positions throughout the volume of the cerebral hemisphere at the side of the craniotomy. Then we found the set of corresponding landmarks in the scan of the deformed brain. Finally, we computed the deformed positions of the landmarks from the undeformed brain using the model and compared them to the corresponding landmarks in the deformed brain. One can see from Table 1 that the maximal displacement was 3.8 mm and the maximal error 1.4 mm for the sinking brain (3.6 mm and 1.3 mm, respectively, for the bulging brain). Fig. 2 shows a slice in the undeformed brain state, in the deformed state, and in a computed state for the two cases.

Table 1. Case I (sinking brain) and Case II (bulging brain): true landmark displacements (t), computed landmark displacements (c), and error between true and computed landmark locations (e), for 14 landmarks. All values are in millimeters.

Landmark | Case I: t    c    e | Case II: t    c    e
       1 |         .7   .3   .8 |         2.7  2.0   .8
       2 |         .9   .5  1.4 |         1.8  1.6  1.0
       3 |         .6   .7   .4 |          .6  1.1   .6
       4 |         .1   .2   .2 |         3.6  2.4  1.3
       5 |        2.3  1.7   .7 |         2.6  2.6   .8
       6 |        2.9  2.4  1.3 |          .8   .5   .4
       7 |        2.1  1.4  1.4 |         1.3   .8   .9
       8 |        1.0   .7   .4 |         1.1  1.2   .8
       9 |        1.9  1.3  1.2 |         1.4  1.5   .9
      10 |        2.7  1.8  1.3 |          .7   .8   .5
      11 |         .8   .4   .4 |          .7   .5   .7
      12 |         .8   .5   .8 |          .4   .2   .5
      13 |        2.1  1.9  1.0 |         2.4  2.0  1.2
      14 |        3.8  3.0  1.2 |          .5   .3   .7
5 Discussion
This work indicates that intraoperative surface information might be enough to compute the pre-resection brain deformation with an error comparable to the
scan resolution (the MR scans used had a 2.5 mm slice thickness with in-plane 0.9375 mm × 0.9375 mm pixels, while the maximal error of the predicted brain deformation in the presented cases was 1.4 mm).
Fig. 2. (a) A coronal slice of the undeformed sinking brain, (b) the corresponding slice through the deformed intraoperative scan, (c) the computed (deformed) slice. Axial slices (d), (e), and (f) correspond to the bulging brain case (undeformed, deformed, and computed, respectively). Note that in both cases the exposed brain surface in the computed slice moved similarly to the corresponding surface in the deformed slice.

The advantage of our approach over our previous work ([9]) is that not only sinking but also bulging can be modeled, while the effect of gravity and other factors is indirectly incorporated through the movement of the exposed brain surface, which is used as a boundary condition for the model. In addition, the proposed continuum model has only one parameter to be set (Poisson's ratio), which is dimensionless, can be estimated relatively reliably, and whose values are available in the literature. Our future work is aimed at reducing the problem of specularities on the wet brain surface and at post-resection deformation compensation, for which we believe intraoperative imaging is necessary.
Acknowledgements We are thankful to Dr. Ron Kikinis, Dr. Ferenc A. Jolesz, and Dr. Peter Black from Brigham and Women’s Hospital and Harvard Medical School, for collaboration and for providing us with data.
References
1. Hill, D., Maurer, C., Wang, M., et al.: Estimation of Intraoperative Brain Surface Movement. CVRMed-MRCAS'97, March 1997, 449–458
2. Bucholz, R., Yeh, D., Trobaugh, J., et al.: The Correction of Stereotactic Inaccuracy Caused by Brain Shift Using an Intraoperative Ultrasound Device. CVRMed-MRCAS'97, March 1997, 459–466
3. Dorward, N. L., Alberti, O., Velani, B., et al.: Early Clinical Experience with the EasyGuide Neuronavigation System and Measurement of Intraoperative Brain Distortion. In Hellwing, D., Bauer, B. L. (eds): Minimally Invasive Techniques for Neurosurgery, 1997, 193–196
4. Reinges, M. H. T., Krombach, G., Nguyen, H., et al.: Assessment of Intraoperative Brain Tissue Movements by Frameless Neuronavigation. Computer Aided Surgery 2:218, 1997 (abstract)
5. Edwards, P. J., Hill, D. L. G., Little, J. A., Hawkes, D. J.: Deformation for Image Guided Interventions Using a Three Component Tissue Model. IPMI'97 Proceedings, June 1997, 218–231
6. Roberts, D. W., Hartov, A., Kennedy, F. E., et al.: Intraoperative Brain Shift and Deformation: A Quantitative Analysis of Cortical Displacement in 28 Cases. Neurosurgery, Vol. 43, 749–760, 1998
7. Studholme, C., Hawkes, D. J., Hill, D. L. G.: A Normalised Entropy Measure of 3D Medical Image Alignment. SPIE Medical Imaging, Feb 1998
8. Maurer, C. R., Hill, D. L. G., Maciunas, R. J., et al.: Measurement of Intraoperative Brain Surface Deformation Under a Craniotomy. MICCAI'98 Proceedings, October 1998, 51–62
9. Škrinjar, O., Duncan, J.: Real Time 3D Brain Shift Compensation. IPMI'99 Proceedings, June/July 1999, 42–55
10. Hata, N., Nabavi, A., Warfield, S., et al.: A Volumetric Optical Flow Method for Measurement of Brain Deformation from Intraoperative Magnetic Resonance Images. MICCAI'99 Proceedings, September 1999, 928–935
11. Hill, D. L. G., Maurer, C. R., Jr., Martin, A. J., et al.: Assessment of Intraoperative Brain Deformation Using Interventional MR Imaging. MICCAI'99 Proceedings, September 1999, 910–919
12. Audette, M. A., Siddiqi, K., Peters, T. M.: Level-Set Surface Segmentation and Fast Cortical Range Image Tracking for Computing Intrasurgical Deformations. MICCAI'99 Proceedings, September 1999, 788–797
13. Škrinjar, O., Tagare, H. D., Duncan, J. S.: Surface Growing from Stereo Images. CVPR 2000 Proceedings, June 2000
14. Ferrant, M., Warfield, S. K., Nabavi, A., et al.: Registration of 3D Intraoperative MR Images of the Brain Using a Finite Element Biomechanical Model. MICCAI 2000 Proceedings, October 2000, 19–28
15. Miga, M. I., Staubert, A., Paulsen, K. D., et al.: Model-Updated Image Guided Neurosurgery: Preliminary Analysis Using Intraoperative MR. MICCAI 2000 Proceedings, October 2000, 115–124
16. Valliappan, A.: Continuum Mechanics Fundamentals. A.A. Balkema, Rotterdam, 1981
17. Nabavi, A., Black, P. McL., Gering, D. T., et al.: Serial Intraoperative MR Imaging of Brain Shift. Neurosurgery, April 2001
Spatiotemporal Analysis of Functional Images Using the Fixed Effect Model Jayasanka Piyaratna and Jagath C. Rajapakse School of Computer Engineering, Nanyang Technological University, Singapore
[email protected]

Abstract. The present study explores a novel spatiotemporal technique using the fixed effect model for the analysis of functional brain images and proposes an approach to obtain the least-squares estimate of the signal subspace of activated voxels. Spatial- and temporal-domain correlations are incorporated using appropriate prior models, and the possibility of using the Markov property to incorporate the spatial-domain correlations is investigated.
1 Introduction
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 190–196, 2001. © Springer-Verlag Berlin Heidelberg 2001

In functional brain imaging experiments, the subject's brain or a part of it is imaged at regular intervals in time while the input stimuli are presented in a periodic manner. The hemodynamic response of the brain is mapped onto an image intensity at each scanning instance of time. A functional image is a spatiotemporal signal, represented by a matrix F = {f_ij}_{n×m}, where i ∈ Ω, j ∈ Θ; Ω denotes the spatial domain of brain voxels, Θ the space of scanning times, m the total number of image scans and n the number of brain voxels in an image scan. Let f_ij denote the image intensity of brain voxel i at the j-th instance of time. An image of the subject's head or a part of it taken at a particular instance in time is referred to as an image scan. The image scan taken at time instant j ∈ Θ is given by the vector f_j = (f_ij | i ∈ Ω)^T. The functional brain image consisting of m image scans can then be written as F = [f_1 f_2 ... f_m]. The statistical parameter mapping approach (SPMA) in conjunction with the general linear model (GLM) [1] is one of the most established and widely used approaches for the detection of activations: statistical parameter maps (SPMs) are obtained by statistically comparing image scans acquired in the activated state (stimulus ON) with those acquired in the rest state (stimulus OFF). The detection of activation is achieved by subsequent analysis in the spatial domain, assuming a Gaussian random field for the SPM, to incorporate spatial correlations and to account for multiple statistical comparisons. In short, previous activation detection techniques analyze the temporal domain first and the spatial domain thereafter. Consequently, some information and interactions that are distributed between both spatial and temporal domains may be lost in the analysis. Recently, a few spatiotemporal techniques have been proposed to analyze functional images [2,3,5]. Benali et al. [5] proposed a technique for analyzing
the fMRI sequence in both spatial and temporal domains simultaneously using the fixed effect model as an extension to the PCA introduced by [6,7,8]. This technique is only applicable to experiments where input stimuli have equal durations of stimulus ON and OFF states (symmetric experiments). In this paper, we extend this technique to detect activation in non-symmetric experiments. As clear theoretical explanations were not available in the literature for the fixed effect model, we provide explanations for some theoretical issues in the sequel. Furthermore, we introduce a location dependent Markov model for the spatial covariance of the fMRI signal.
2 The Fixed Effect Model
The j-th image scan f_j is represented by a noisy random vector:

    f_j = \hat{f}_j + n_j, \quad \forall j = 1, 2, \ldots, m    (1)
where \hat{f}_j is the noise-free fixed image scan and n_j represents the random error and noise. The fixed effect model essentially assumes that n_j is orthogonal to \hat{f}_j and is defined by [9]:
1. E{n_j} = 0.
2. There exists a subspace V ⊂ R^q of noise-free fixed signals such that \hat{f}_j ∈ V, ∀ j = 1, ..., m.
3. Var{F} = σ² C ⊗ Γ,
where σ ∈ R⁺ is a constant, V is a q ≤ min{m, n} dimensional subspace and ⊗ denotes the tensor product. E{·} denotes the mean and Var{F} the covariance matrix of F, which is assumed separable into spatial and temporal domains; C and Γ denote the symmetric matrices of the spatial and temporal domains, respectively. Let us consider a linear orthogonal transformation of the noisy image scan onto the vector subspace V ⊂ R^q (q < n), and let the transformed vector of f_j be \tilde{f}_j ∈ V [10], such that \tilde{f}_j = P f_j, where P is the (q × n) matrix which forms an orthonormal basis of the subspace V, so that P P^T = I. Suppose \hat{f}_j is reconstructed from the transformed vector as \hat{f}_j = P^T \tilde{f}_j = P^T P f_j. One recognizes P^T P as the linear orthogonal projector onto the subspace V [11]. Traditional PCA finds the subspace V in which the signal can be reconstructed with minimum error. However, to obtain a more general case, one considers the spatial-domain weighted sum of squared reconstruction errors

    \phi_M = \sum_{j=1}^{m} \| f_j - \hat{f}_j \|_M^2    (2)
where ‖·‖_M is the M-weighted norm [6] and M = {m_{i₁i₂}}_{n×n} is a symmetric, positive-definite weighting matrix; the M-norm of a vector x is defined by the quadratic form ‖x‖²_M = x^T M x. One can introduce a temporal-domain weight matrix N = {n_{j₁j₂}}_{m×m} to obtain the temporal-domain weighted sum of squared errors φ_N, defined similarly to eq. (2), where N is also symmetric and positive definite. The weightings in the two domains can be combined as follows to obtain the total weighted sum of squared reconstruction errors Φ:

    \Phi = \sum_{j=1}^{m} \sum_{i=1}^{m} n_{ij} (f_i - \hat{f}_i)^T M (f_j - \hat{f}_j)    (3)
One simplifies the above equation to obtain eq. (4), using the fact that the vector products can be represented as a matrix trace:

    \Phi = \mathrm{tr}\{ (F - \hat{F}) N (F - \hat{F})^T M \}    (4)
where tr{·} represents the matrix trace and \hat{F} = [\hat{f}_1 \hat{f}_2 ... \hat{f}_m]. Using the orthonormal property of the linear operator P, we obtain

    \Phi = \mathrm{tr}\{ (F N F^T M) - (P^T P F N F^T M) - (F N F^T P^T P M) + (P^T P F N F^T P^T P M) \}

and simplify, using the symmetry of the matrices M and N, to

    \Phi = \mathrm{tr}\{ F N F^T M \} - \mathrm{tr}\{ P F N F^T M P^T \}    (5)

In addition, the Karhunen-Loève transform [11] provides F N F^T M = \sum_{i=1}^{n} \lambda_i e_i e_i^T, where e_i is the eigenvector of F N F^T M corresponding to the i-th largest eigenvalue [7,8]. As V ⊂ R^q with q ≤ min{m, n}, P F N F^T M P^T has q eigenvectors drawn from {e_1, e_2, ..., e_n}. According to eq. (5), Φ is minimal if and only if

    P F N F^T M P^T = \sum_{i=1}^{q} \lambda_i e_i e_i^T, \quad q < n.    (6)

Therefore, the subspace V is estimated by \hat{V} = span{e_1, e_2, ..., e_q}. Besse et al. have shown, for small values of σ², that M = C⁻¹ using perturbation theory [8], and J. Fine has proven the same result for any value of σ² using asymptotic theory [7]. As the covariance matrix of the data in the fixed effect model is assumed separable, it is reasonable to extend these results to M = C⁻¹ and N = Γ⁻¹, as stated by H. Caussinus [6] using duality diagrams. Assuming that C and Γ are known or can be estimated, the least-squares estimator of V is computed. The basis of V is therefore formed by the vectors e_1, e_2, ..., e_q, where e_1, e_2, ..., e_n are the eigenvectors of F Γ⁻¹ F^T C⁻¹ [6,8].
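The estimation above can be exercised numerically: with C = Γ = I the operator F Γ⁻¹ F^T C⁻¹ reduces to F F^T, whose leading eigenvectors coincide with the left singular vectors of F. The following is an illustrative sketch only (the sizes, random seed and identity covariances are assumptions, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 6, 5, 2  # voxels, scans, signal-subspace dimension (illustrative)

F = rng.standard_normal((n, m))
# Assume the spatial/temporal covariances are known; identities are used here
# purely for illustration (the paper uses C = W and Gamma = I).
C = np.eye(n)
Gamma = np.eye(m)

# Eigenvectors of F Gamma^{-1} F^T C^{-1}, sorted by decreasing eigenvalue
T_op = F @ np.linalg.inv(Gamma) @ F.T @ np.linalg.inv(C)
eigval, eigvec = np.linalg.eig(T_op)
order = np.argsort(eigval.real)[::-1]
V_hat = eigvec.real[:, order[:q]]  # basis of the estimated signal subspace
```

With non-identity C and Γ the operator is no longer symmetric, but it is similar to a symmetric matrix (conjugate by C^{1/2}), so its eigenvalues remain real.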
3 Detection of Activation
According to the linear model of fMRI, the time-domain response of an activated voxel, f(j) at j ∈ Θ, is given by:

    f(j) = \alpha\, x(j) * \gamma(j) + \eta(j)    (7)
where x(j) is the input stimulus function, γ(j) is the hemodynamic response function (HRF) [12,13] and η(j) is random noise. The gain of the time-series is given by α, and * denotes the convolution operator. The hemodynamically modulated input, given as a vector h = (h_j : j ∈ Θ)^T with h_j = x(j) * γ(j), approximates the brain's temporal response to the input stimulus. As seen in the previous section, the fMRI signal corresponding to task activations has to be chosen from the q-dimensional signal subspace V. If the signal subspace is available, the vector corresponding to the task-related time-domain fMRI signal must be parallel to h. Let v ∈ V denote the activation pattern of the fMR image; v is then an eigenvector of F Γ⁻¹ F^T C⁻¹, and let λ denote the corresponding eigenvalue. The eigenelements v and λ satisfy

    F \Gamma^{-1} F^T C^{-1} v = \lambda v.    (8)

Multiplying both sides of eq. (8) by F^T C⁻¹:

    F^T C^{-1} F \Gamma^{-1} (F^T C^{-1} v) = \lambda (F^T C^{-1} v)    (9)
Let u = F^T C⁻¹ v; u is clearly an eigenvector of F^T C⁻¹ F Γ⁻¹. Similar to the result obtained in Sect. 2, the signal subspace belonging to the temporal domain of the fMR image can be computed as \hat{U} = span{u_1, u_2, ..., u_q}, where u_i is the i-th eigenvector of F^T C⁻¹ F Γ⁻¹. As discussed previously, u (∈ U) must be parallel to h; hence we distinguish u within the eigensubspace U. The other eigenvectors may carry information corresponding to brain connectivity and physiological signals [14]. Assuming that the covariance structures are known or can be estimated, we find the eigenvectors corresponding to the q largest eigenvalues of F^T C⁻¹ F Γ⁻¹ and select the eigenvector having the maximum correlation coefficient with the hemodynamically modulated input vector h to represent u. According to eq. (8), the activation pattern is given by v = F Γ⁻¹ u. As C and Γ are not known in practice, we employ two models to estimate these covariance matrices, incorporating spatial and temporal correlation effects. If the fMRI signal is constant for the brain at a stable state, statistical studies show that the noise in the fMR image is uncorrelated in the time domain [15], which implies Γ = I [5]. Functional brain images vary smoothly, so the spatial covariance can be modeled according to the well-known Markov property. Hence the spatial-domain correlation effects are introduced by means of a covariance matrix W = {w_{i₁i₂}}_{n×n}, where

    w_{i_1 i_2} = \begin{cases} \sigma^2 & \text{if } i_1 = i_2 \\ \sigma^2 e^{-\beta\, r^2(i_1, i_2)} & \text{if locations } i_1 \text{ and } i_2 \text{ are neighbors} \\ 0 & \text{otherwise} \end{cases}    (10)
where r(i₁, i₂) is the distance between voxels i₁ and i₂, which depends on the relative voxel positions, and β is a constant parameter for the image which accounts
for the strength of the neighborhood relationship. First- and second-order neighborhoods are used in our experiments. The Markov model takes into account the actual voxel position in the spatial domain, and consequently the matrix W provides a model for the spatial-domain covariance, C = W. The detection of activation using the above model can be stated in the following steps:
1. Compute C and Γ (Γ = I and C = W).
2. Compute the eigenelements [Ut, D] using singular value decomposition (SVD): [Ut, D] = SVD{F^T W⁻¹ F}.
3. Select q eigenelements such that the minimum eigenvalue is greater than a threshold α.
4. Compute the hemodynamically modulated input vector h.
5. Choose as u the eigenvector for which u^T h is maximum (closest to being parallel).
6. The activation pattern is given by v = F u.
Once the vector v is transformed into a spatial-domain activation map, it can also be considered an SPM [5]. We approximate the intensity distribution by a multivariate Gaussian field to obtain the threshold value that determines activations [7,12].
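The six steps above can be exercised end to end on synthetic data. The sketch below is illustrative only: the grid size, noise level, planted activation, crude gamma-shaped HRF and the value β = 2 (chosen so that the toy W is diagonally dominant, hence positive definite; the paper uses β = 1.2 on real data) are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Synthetic data: 8 x 8 image, m scans, four planted "active" voxels ---
side, m = 8, 48
n = side * side
t = np.arange(m, dtype=float)  # TR = 1 s (assumption)

# Step 4: hemodynamically modulated input h = box-car convolved with an HRF
box = (np.arange(m) % 16 < 4).astype(float)      # 4 s ON / 12 s OFF cycle
hrf = t ** 5 * np.exp(-t)                        # crude gamma-shaped HRF
hrf /= hrf.sum()
h = np.convolve(box, hrf)[:m]
h = (h - h.mean()) / h.std()                     # standardized for correlation

F = 0.3 * rng.standard_normal((n, m))            # background noise
active = [9, 10, 17, 18]                         # planted activated voxels
for i in active:
    F[i] += 2.0 * h

# Step 1: Gamma = I; spatial covariance W from the Markov model, eq. (10)
coords = np.array([(i // side, i % side) for i in range(n)], dtype=float)
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
beta, sigma2 = 2.0, 1.0
W = np.where(d2 == 0, sigma2,
             np.where(d2 <= 2.0, sigma2 * np.exp(-beta * d2), 0.0))

# Steps 2-3: eigenelements of F^T W^{-1} F (symmetric, so eigh applies;
# the eigenvalue thresholding of step 3 is skipped in this sketch)
eigval, U = np.linalg.eigh(F.T @ np.linalg.inv(W) @ F)

# Step 5: eigenvector most correlated with h
corr = np.abs([np.corrcoef(U[:, k], h)[0, 1] for k in range(m)])
u = U[:, int(np.argmax(corr))]

# Step 6: activation pattern
v = F @ u
```

On this toy image the selected eigenvector is strongly correlated with h, and the planted voxels dominate the magnitude of v.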
Fig. 1. (a) Original activations and detected activations (b) using the present technique with the parameter map thresholding at 2.5 intensity and (c) using the SPM technique with the intensity level thresholding at 1.96 for the synthetic image.
Table 1. Percentages of false negatives, false positives and total errors incurred in the detection of activations in the synthetic image at SNRs of −4.08 dB and −10.46 dB.

                       Present Approach              SPM Approach
  % Error          SNR=−4.08dB  SNR=−10.46dB   SNR=−4.08dB  SNR=−10.46dB
  False negatives      0.00         0.12           0.75         0.51
  False positives      2.51         2.35           3.34         6.98
  Total errors         2.51         2.47           4.09         7.49
Fig. 2. Two axial slices of detected activations from the working memory fMRI experiment, using (a) the present technique and (b) the SPMA with a z-threshold of z = 2.75 and a blob size threshold of 4. White blobs represent the activations.
4 Results
Experiments were conducted to detect activation in a synthetic functional image and in data obtained from a working memory experiment. For each experiment, the parameter maps were obtained using the present technique and the SPMA. An empirical value (β = 1.2) was employed in this paper for the computation of the spatial-domain covariance matrix in eq. (10). v was assumed to have an asymptotic Gaussian distribution and was consequently thresholded using appropriate intensity and cluster-size thresholds. A 2-D 64 × 64 functional time-series was simulated, taking the highlighted pixels in figure 1(a) as activations, by convolving a box-car pattern with a gamma HRF. The input stimulus was presented periodically for 12 cycles, alternating durations of 4 s ON and 12 s OFF. Spatial correlation was incorporated with a Gaussian kernel of FWHM = 3.0. Time-domain Gaussian noise was added to obtain a signal-to-noise ratio (SNR) of −4.08 dB. Figures 1(b) and 1(c) show the detected activations using the present technique and the SPMA, respectively. One can notice that our technique gives better detections than the standard SPMA for the simulated image, with fewer false positives and false negatives. Table 1 indicates the false negative and false positive errors from the present approach and the SPMA at two noise levels. The activations detected on images obtained in a memory retrieval task are shown in figures 2(a) and 2(b), obtained using the present technique and the SPMA respectively; more details about the experiments can be found in [17]. The present technique yielded activations more focal to the cortical areas, with less spurious noise.
5 Conclusion
This paper discussed a spatiotemporal approach for analyzing functional brain images as an extension to the fixed effect model. The underlying fixed effect model assumes that the fMRI noise structure is separable in spatial and temporal domains. Spatial domain covariance matrix is computed assuming the Markov
property. A novel approach was introduced to detect activation in any fMRI experiment, as an extension of principal component analysis. We also provided a proof of the least-squares estimation of the signal subspace. Results with a synthetic image and with images obtained in the memory retrieval task showed that the model is accurate and appropriate for the analysis of functional images.
References
1. K. J. Friston, K. J. Worsley, R. S. J. Frackowiak, J. C. Mazziotta, and A. C. Evans. Assessing the significance of focal activations using their spatial extent. Human Brain Mapping, 1:210–220, 1994.
2. X. Descombes, F. Kruggel, and D. Y. von Cramon. Spatio-temporal fMRI analysis using Markov random fields. IEEE Transactions on Medical Imaging, 17:1028–1039, 1998.
3. M. McKeown, S. Makeig, G. Brown, T.-P. Jung, S. Kindermann, and T. Sejnowski. Spatially independent activity patterns in functional magnetic resonance imaging data during the Stroop color-naming task. In Proceedings of the National Academy of Sciences USA, pages 1268–1273, 1998.
4. L. K. Hansen, J. Larsen, F. A. Nielsen, S. C. Strother, E. Rostrup, R. Savoy, N. Lange, J. Sidtis, C. Svarer, and O. B. Paulson. Generalizable patterns in neuroimaging: How many principal components? NeuroImage, 9:534–544, 1999.
5. H. Benali, J. L. Anton, M. Pelegrini, M. Di Paola, J. Bittoun, Y. Burnod, and R. Di Paola. Space-time statistical model for functional MRI image sequences. In Information Processing in Medical Imaging (IPMI), pages 285–298. Springer, Berlin, 1997.
6. H. Caussinus. Models and uses of principal component analysis. In Multidimensional Data Analysis, pages 149–178. DSWO Press, 1986.
7. J. Fine. Asymptotic study of the multivariate functional model in the case of a random number of observations for each mean. Statistics, 25:285–306, 1994.
8. P. Besse, H. Caussinus, L. Ferre, and J. Fine. Principal component analysis and optimization of graphical displays. Statistics, 1988.
9. T. W. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley, New York, 1958.
10. K. I. Diamantaras and S. Y. Kung. Principal Component Neural Networks: Theory and Applications. Wiley and Sons, 1996.
11. J. R. Schott. Matrix Analysis for Statistics. Wiley, 1997.
12. K. J. Friston, P. Jezzard, and R. Turner. Analysis of functional MRI time-series. Human Brain Mapping, 1:153–171, 1994.
13. G. M. Boynton, S. A. Engel, G. H. Glover, and D. J. Heeger. Linear systems analysis of functional MRI in human V1. The Journal of Neuroscience, 16:4207–4221, 1996.
14. K. J. Friston, C. D. Frith, P. F. Liddle, and R. S. J. Frackowiak. Functional connectivity: the principal component analysis of large (PET) data sets. Journal of Cerebral Blood Flow and Metabolism, 13:5–14, 1993.
15. A. Macovski. Noise in MRI. Magnetic Resonance in Medicine, 36:494–497, 1996.
16. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C. Cambridge University Press, New York, 1992.
17. J. C. Rajapakse, F. Kruggel, J. M. Maisog, and D. Y. von Cramon. Modeling hemodynamic responses for analysis of functional MRI time-series. Human Brain Mapping, 6(4):283–300, 1998.
Spatio-temporal Covariance Model for Medical Image Sequences: Application to Functional MRI Data

Habib Benali¹, Mélanie Pélégrini-Issac², and Frithjof Kruggel³

¹ Unité 494 INSERM, CHU Pitié-Salpêtrière, 91, boulevard de l'Hôpital, F-75634 Paris Cedex 13, France. [email protected]
² Unité 483 INSERM, 9, quai Saint-Bernard, F-75005 Paris, France. [email protected]
³ Max-Planck Institute of Cognitive Neuroscience, Stephanstraße 1, D-04103 Leipzig, Germany. [email protected]

Abstract. Spatial and temporal correlations which affect the signal measured in functional MRI (fMRI) are usually not considered simultaneously (i.e., as non-independent random processes) in statistical methods dedicated to detecting cerebral activation. We propose a new method for modeling the covariance of a stationary spatio-temporal random process and apply this approach to fMRI data analysis. To do so, we introduce a multivariate regression model which takes the spatial and temporal correlations into account simultaneously. We show that an experimental variogram of the regression error process can be fitted to a valid nonseparable spatio-temporal covariance model. This yields a more robust estimation of the intrinsic spatio-temporal covariance of the error process and allows a better modeling of the properties of the random fluctuations affecting the hemodynamic signal. The practical relevance of our model is illustrated using real event-related fMRI experiments.
1 Introduction
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 197–203, 2001. © Springer-Verlag Berlin Heidelberg 2001

When analyzing data from functional Magnetic Resonance Imaging (fMRI), accurate detection of human cerebral activation raises many issues concerning not only the spatial localization of activated regions [1,2,3,4] but also the spatio-temporal properties of these regions [5]. An adequate modeling of the spatial and temporal correlations which affect the measured signal is mandatory [1,2,3,4,5,6], and models of spatio-temporal random processes are increasingly accounted for in statistical analyses. The hypotheses underlying these models must reflect as accurately as possible the properties of the measured data (e.g., spatio-temporal stationarity) to ensure a robust detection of the activation signal. In this work, we focus on the analysis of fMRI time-series based on multivariate regression, as an original extension of the univariate regression widely used
in the functional brain mapping literature. This multivariate approach allows spatial and temporal correlations to be considered simultaneously. We introduce a new method for modeling the covariance of a stationary spatio-temporal random process. The proposed covariance model is nonseparable in time and space, which allows a better modeling of the intrinsic properties of the hemodynamic signal. In Sect. 2, we introduce the multivariate regression model and show that the spatio-temporal covariance of the error process is required when making statistical inference from fMRI data. Theoretical results that allow defining classes of nonseparable spatio-temporal covariance models are given in Sect. 3. The proposed model is then applied to real data (Sect. 4) and discussed (Sect. 5).
2 Multivariate Regression Model

2.1 Definition
Let y_i be the T-vector corresponding to the fMRI time-series measured in voxel i (usually, preprocessed data). Denote by X a (T, P) matrix in which each of the P columns is called a "regressor", either determined by the experimental design ("regressors of interest") or representing confounds ("dummy regressors"). Let ε_i be the T-vector of error (or residual) terms. The multivariate regression model can be written as follows:

    \begin{bmatrix} y_1 \\ \vdots \\ y_i \\ \vdots \\ y_N \end{bmatrix} = \begin{bmatrix} X & 0 & \cdots & 0 \\ 0 & X & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & X \end{bmatrix} \begin{bmatrix} \beta_1 \\ \vdots \\ \beta_i \\ \vdots \\ \beta_N \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_i \\ \vdots \\ \varepsilon_N \end{bmatrix} \quad \text{or} \quad Y = (I_N \otimes X)\beta + \varepsilon,    (1)

where N is the number of voxels included in the analysis, I_N is the identity matrix, Y and ε are NT-vectors, (I_N ⊗ X) is a (NT, NP) matrix and β is a NP-vector of regression coefficients. ⊗ denotes the Kronecker product. We further assume that ε is a multidimensional stationary random process with:
– E[ε] = 0, where E[·] denotes the expectation,
– var[ε] = σ²Ω, where Ω is the (NT, NT) covariance matrix of the errors and σ² is the variance at the origin.
Solving (1) consists in deciding whether Y represents an activation signal, by estimating the coefficients β (Sect. 2.2) and determining with a statistical test whether they contribute significantly to predicting the signal Y (Sect. 2.3).

2.2 Estimating the Regression Coefficients
β is most frequently estimated using ordinary least-squares (OLS) as follows:

    \hat{\beta} = \left[ (I_N \otimes X)^t (I_N \otimes X) \right]^{-1} (I_N \otimes X)^t\, Y = P\, Y,    (2)
where t denotes the transpose. However, OLS estimation relies on the assumption that var[ε] = σ²I_NT, whereas we have assumed that var[ε] = σ²Ω (Sect. 2.1). Nevertheless, \hat{β} is an unbiased estimate of β provided var[\hat{β}] takes the covariance matrix Ω into account as follows [7, p. 114]:

    \mathrm{var}[\hat{\beta}] = \sigma^2\, P\, \Omega\, P^t.    (3)
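Because (I_N ⊗ X) is block-diagonal, the stacked OLS estimator of eq. (2) decomposes into independent voxel-wise OLS fits. A small numerical check of this equivalence, with arbitrary toy sizes (an illustrative sketch, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, P_reg = 3, 10, 2                        # voxels, time points, regressors
X = rng.standard_normal((T, P_reg))
Y_vox = rng.standard_normal((N, T))           # one time-series per voxel
Y = Y_vox.reshape(-1)                         # stacked NT-vector (voxel-major)

# Stacked OLS estimator of eq. (2)
D = np.kron(np.eye(N), X)
beta_stacked = np.linalg.solve(D.T @ D, D.T @ Y)

# Voxel-wise OLS: beta_i = (X^t X)^{-1} X^t y_i
beta_vox = np.concatenate([np.linalg.solve(X.T @ X, X.T @ y) for y in Y_vox])

assert np.allclose(beta_stacked, beta_vox)
```

This is the computational point made in the Discussion: the multivariate estimate costs no more than N univariate regressions.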
2.3 Statistical Tests
Statistical tests determine whether q ≤ P regression coefficients contribute significantly to predicting the signal Y. They rely on a null hypothesis of the general form H₀: Aβ = C, where A is a known (q, NP) matrix of rank q and C is a known q-vector. The following test value F is usually used to test H₀:

    F = \frac{1}{q} \left( A\hat{\beta} - C \right)^t \left[ A\, \mathrm{var}[\hat{\beta}]\, A^t \right]^{-1} \left( A\hat{\beta} - C \right).
The null distribution of F is well approximated by an F -distribution with q and ν degrees of freedom, where ν is a number of degrees of freedom reflecting the amount of spatio-temporal correlations affecting the data. To calculate F , it is clear from (3) that the covariance matrix Ω has to be known or estimated.
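As an illustration of eqs. (2)–(3) and of the test statistic, the sketch below evaluates F for a toy design; the AR(1)-shaped Ω, the sizes and σ² = 1 are illustrative assumptions (the paper estimates Ω from a fitted covariance model, Sect. 3):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, P_reg = 2, 12, 2                       # voxels, time points, regressors
X = rng.standard_normal((T, P_reg))
D = np.kron(np.eye(N), X)                    # (NT, NP) design matrix
Y = rng.standard_normal(N * T)               # stacked data (pure noise here)

Pmat = np.linalg.solve(D.T @ D, D.T)         # P such that beta_hat = P Y
beta_hat = Pmat @ Y                          # eq. (2)

# sigma^2 * Omega: an AR(1)-shaped temporal covariance per voxel, chosen
# purely for illustration
rho, sigma2 = 0.3, 1.0
lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
Omega = np.kron(np.eye(N), rho ** lags)
var_beta = sigma2 * Pmat @ Omega @ Pmat.T    # eq. (3)

# H0: A beta = C with A = I, C = 0 (all coefficients are zero)
A = np.eye(N * P_reg)
Cvec = np.zeros(N * P_reg)
q = A.shape[0]
r = A @ beta_hat - Cvec
F_val = (r @ np.linalg.solve(A @ var_beta @ A.T, r)) / q
```

Since Ω is positive definite and P has full row rank, var[β̂] is positive definite and the quadratic form is well defined.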
3 Estimating the Covariance Matrix of the Residuals

3.1 Modeling the Covariance of a Spatio-temporal Process
Denote by {E(s, t); s ∈ D ⊂ R^d, t ∈ R⁺} a spatio-temporal stationary random process measured on a regular lattice (s₁, t₁), ..., (s_N, t_T) (s: spatial coordinate; t: temporal coordinate). In practice, E corresponds to the residual process of model (1) and the spatial dimension is d = 3. It is assumed that E satisfies the regularity condition var[E(s, t)] < ∞ for all s ∈ D and t > 0, and the covariance function of E is defined by:

    \mathrm{cov}[E(s, t), E(s', t')] = C(s - s', t - t') = C(h, u),

where C only depends on the spatial lag h = s − s′ and the temporal lag u = t − t′.

Spatio-temporal Variogram. To model the covariance C, it is often convenient to estimate the function var[E(s, t) − E(s′, t′)] from the sampled process E. This function is called the variogram [8] and is independent of the mean of E. The variogram is related to the covariance function C by:

    \mathrm{var}[E(s, t) - E(s', t')] = 2\,\left( C(0, 0) - C(h, u) \right).    (4)
Valid Models for the Theoretical Covariance Ω. It is usually not possible to estimate Ω directly from a single fMRI time-series. Nevertheless, Ω can be estimated if a parametric covariance model C_θ(h, u) is available (θ: vector of unknown parameters). Such a parametric model must be valid, i.e., the resulting covariance function C must be positive-definite. Existing criteria for defining valid classes of parametric spatio-temporal models [8] are based upon Bochner's theorem [9], which expresses the covariance C(h, u) in terms of the spectral density G(ω, τ) of its spectral distribution function as follows:

    C(h, u) = \int\!\!\int e^{i h \cdot \omega + i u \tau}\, G(\omega, \tau)\, d\omega\, d\tau,

where ω is the spatial frequency and τ the temporal frequency. If the two conditions

    C_1: \int \rho(\omega, u)\, du < \infty \ \text{and}\ K(\omega) > 0, \qquad C_2: \int K(\omega)\, d\omega < \infty    (5)

are satisfied, with
    K(\omega) \equiv \int G(\omega, \tau)\, d\tau \quad \text{and} \quad \rho(\omega, u) \equiv \frac{\int e^{i u \tau}\, G(\omega, \tau)\, d\tau}{\int G(\omega, \tau)\, d\tau},

then Cressie and Huang [8] showed that

    C(h, u) \equiv \int e^{i h \cdot \omega}\, \rho(\omega, u)\, K(\omega)\, d\omega    (6)
is a valid continuous stationary spatio-temporal covariance function. Classes of parametric models can then be defined by designing functions ρ and K which satisfy C₁ and C₂. The covariance model C_θ is derived using (6), and Ω is finally estimated from C_θ(h, u) [8]. To estimate the parameters θ in practice, a variogram model var_θ is obtained from C_θ using (4), and the experimental variogram computed from the sampled process E is fitted to this model using a generalized least-squares minimization method.

3.2 A Nonseparable Spatio-temporal Model
In previous works, we studied the residuals obtained using univariate models. We showed that the covariance of temporal errors could be modeled by a "damped oscillator" process C(u) ≡ exp(−a|u|) cos(αu) [10]. We also showed that the spatial error process could be modeled by a first-order autoregressive process [4,6]. However, all these models considered spatial and temporal correlations as independent phenomena, whereas experimental variograms suggest that spatio-temporal covariance processes are likely to be nonseparable. We therefore introduce a nonseparable spatio-temporal model defined by:

    \rho(\omega, u) = \frac{b^{d/2}}{(c|u| + b)^{d/2}}\, \exp\!\left( -\frac{\|\omega\|^2}{4(c|u| + b)} + \frac{\|\omega\|^2}{4b} \right) \exp(-\delta u^2) \cos(\alpha u)    (7)
and

    K(\omega) = \exp\!\left( -\frac{\|\omega\|^2}{4b} \right),    (8)
with δ > 0, b > 0 and c > 0. We can prove that these functions satisfy conditions C₁ and C₂ given by (5). We can therefore conclude that the function C(h, u) defined by (6), using (7) and (8), is a valid covariance model for the process E. A parametric model for C(h, u) is then derived following [8]:

    C_\theta(h, u) = \sigma^2 \exp\!\left( -a|u| - b\|h\|^2 - c|u| \cdot \|h\|^2 \right) \cos(\alpha u),    (9)

with θ = {a, b, c, α, σ²}; a > 0: scaling parameter of time; α: temporal frequency parameter; b > 0: scaling parameter of space; c > 0: spatio-temporal interaction parameter; and σ² = C_θ(0, 0). In the particular case c = 0, C_θ(h, u) is a separable spatio-temporal model: the temporal component exp(−a|u|) cos(αu) corresponds to the damped oscillator model and the spatial component exp(−b‖h‖²) corresponds to a Gaussian model. To estimate θ in practice, we account for the so-called "nugget" effect (i.e., microscale variations of the error process that may cause a discontinuity at the origin [11]) by considering the spatio-temporal variogram model:

    \mathrm{var}_\theta[E(s, t) - E(s+h, t+u)] = \begin{cases} 0 & \text{if } h = 0 \text{ and } u = 0 \\ 2\sigma^2 \left( 1 - \exp\!\left( -a|u| - b\|h\|^2 - c|u| \cdot \|h\|^2 \right) \cos(\alpha u) \right) + n^2 & \text{otherwise.} \end{cases}
n2 corresponds to the variance of an additive white noise which accounts for small variations of E at the origin.
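The model (9) and its nonseparability can be inspected directly: C_θ(h, u) C_θ(0, 0) factorizes into C_θ(h, 0) C_θ(0, u) exactly when the interaction parameter c vanishes. A minimal sketch (the parameter values are illustrative, not estimates from data):

```python
import numpy as np

def cov_model(h2, u, a, b, c, alpha, sigma2):
    """Parametric covariance of eq. (9); h2 = ||h||^2 (squared spatial lag)."""
    return sigma2 * np.exp(-a * abs(u) - b * h2 - c * abs(u) * h2) * np.cos(alpha * u)

# Illustrative parameter values
a, b, alpha, s2 = 0.4, 1.0, 0.2, 1.0
h2, u = 0.5, 2.0

# Nonseparable case (c > 0): C(h, u) C(0, 0) != C(h, 0) C(0, u)
c = 0.2
lhs = cov_model(h2, u, a, b, c, alpha, s2) * cov_model(0, 0, a, b, c, alpha, s2)
rhs = cov_model(h2, 0, a, b, c, alpha, s2) * cov_model(0, u, a, b, c, alpha, s2)
assert not np.isclose(lhs, rhs)

# Separable special case (c = 0): the covariance factorizes
lhs0 = cov_model(h2, u, a, b, 0.0, alpha, s2) * cov_model(0, 0, a, b, 0.0, alpha, s2)
rhs0 = cov_model(h2, 0, a, b, 0.0, alpha, s2) * cov_model(0, u, a, b, 0.0, alpha, s2)
assert np.isclose(lhs0, rhs0)
```

In practice θ is estimated by fitting the variogram model above (with nugget n²) to the experimental variogram of the residuals, using generalized least squares as described in Sect. 3.1.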
4 Application: Event-Related Working Memory Experiment
A real event-related experiment was selected to illustrate the usefulness of the proposed model. Subjects performed an item-recognition task [12]. Each trial consisted of a list of 3 to 6 uppercase target letters, presented simultaneously for 2 s, followed by a variable (from 2 s to 7 s) blank delay period, during which subjects had to remember the letters. After this delay a probe letter was displayed for 1 s. Subjects were asked to respond whether the probe letter belonged to the previously presented list. A variable inter-trial interval followed to complete constant duration (18 s) single trials. Eight functional axial slices were acquired parallel to the AC-PC plane (TE 30 ms, TR 1 s, thickness 5 mm, 3 mm gap) using a Bruker Medspec 30/100 3T MR system. The experiment was described in X (see (1)) using separate regressors related to the cue, delay and probe phase, convolved with a Gaussian function (lag 5.5 s, dispersion 1.8 s) to model the smoothness of the hemodynamic response. Three regression models were compared: (M1) the SPM99 univariate model, (M2) the univariate regression model correcting for temporal correlations using a damped
oscillator model [10], and (M3) the proposed multivariate model. Assignment of significance was achieved by testing H₀: β = 0 on a voxel-wise basis. Table 1 shows the estimated covariance parameters obtained using M3. Note that the model was not separable in time and space (c > 0). Figure 1 shows sample activation maps. In terms of activation extent, M3 ranged between M1 and M2, with much more focused activation. Note that the strip-like activation, which was presumably motion-related, was not rendered significant by the nonseparable spatio-temporal model.

Table 1. Covariance function parameters for slices 5 to 7.

  Slice     σ²       a       b       c       α       n²
  5       18104   0.410   1.055   0.230   0.458   0.000
  6       14015   0.313   0.962   0.145   0.388   0.007
  7       12462   0.329   0.935   0.172   0.474   0.000
Fig. 1. For slice 6, activation maps (z-scale: 4–12) obtained for the probe phase with models M1, M2 and M3, overlaid onto T1-weighted anatomical scans.
5 Discussion
In this work, we introduced a new method for modeling the covariance of a stationary spatio-temporal random process and applied this approach to fMRI data analysis. To know whether a parametric covariance model is valid a priori, conditions C1 and C2 can be used in practice and the difficulty lies in deriving the covariance C following (6). The proposed nonseparable model was based upon both [8] (i.e., Gaussian model in space) and our previous work [10] (i.e., damped oscillator model in time). This approach is powerful in that it accounts for spatiotemporal interaction, which makes the model more flexible than previous models which considered spatial and temporal correlations separately. This is likely to yield a better modeling of the variance of a random process.
The proposed model was used in the framework of multivariate regression analysis and validated on real fMRI data. For doing so, we introduced a multivariate regression model taking the spatial and temporal correlations into account simultaneously. Estimating the regression coefficients \hat{β} requires no extra computational cost compared to univariate analysis. Indeed, (2) reduces to \hat{β} = [I_N ⊗ (X^t X)^{-1} X^t] Y, i.e., \hat{β}_i = (X^t X)^{-1} X^t y_i, which is equivalent to OLS estimation in univariate regression. Note that the null hypothesis given in Sect. 2.3 can be tested using either a global test on all estimated \hat{β} or a local test (e.g., on each voxel separately) [13]. In the latter case, A selects the coefficients of interest for the voxel under study. The activated regions obtained using the spatio-temporal model had a lesser extent than those obtained using only univariate models, for a given statistical threshold. The reasons for these differences will have to be investigated further, to better characterize the sensitivity and the specificity of the proposed multivariate approach.
References
1. Friston K.J., Jezzard P., Turner R.: Analysis of functional MRI time-series. Hum. Brain Mapp. 1 (1994) 153-171
2. Worsley K.J., Marrett S., Neelin P., Vandal A.C., Friston K.J., Evans A.C.: A unified statistical approach for determining significant signals in images of cerebral activation. Hum. Brain Mapp. 4 (1996) 58-73
3. Bullmore E., Brammer M., Williams S.C.R., Rabe-Hesketh S., Janot N., David A., Mellers J., Howard R., Sham P.: Statistical methods of estimation and inference for functional MR image analysis. Magn. Reson. Med. 35 (1996) 261-277
4. Benali H., Buvat I., Anton J.L., Pélégrini M., Di Paola M., Bittoun J., Burnod Y., Di Paola R.: Space-time statistical model for functional MRI image sequences. In: Duncan J., Gindi G. (eds.): Information Processing in Medical Imaging. Springer-Verlag, Berlin (1997) 285-298
5. Friston K.J., Josephs O., Zarahn E., Holmes A.P., Rouquette S., Poline J.B.: To smooth or not to smooth? Bias and efficiency in fMRI time-series analysis. NeuroImage 12 (2000) 196-208
6. Kruggel F., von Cramon D.Y.: Temporal properties of the hemodynamic response in functional MRI. Hum. Brain Mapp. 8 (1999) 259-271
7. Seber G.A.F.: Linear Regression Analysis. John Wiley & Sons, New York (1977)
8. Cressie N., Huang H.C.: Classes of nonseparable, spatio-temporal stationary covariance functions. J. Am. Stat. Assoc. 94 (1999) 1330-1340
9. Bochner S.: Harmonic Analysis and the Theory of Probability. University of California Press, Berkeley (1955)
10. Kruggel F., Benali H., Pélégrini-Issac M.: Estimating the effective degrees of freedom in univariate multiple regression analysis. Submitted (2001)
11. Cressie N.A.C.: Statistics for Spatial Data, rev. edn. John Wiley & Sons, New York (1993)
12. Kruggel F., Zysset S., von Cramon D.Y.: Nonlinear regression of functional MRI data: an item-recognition task study. NeuroImage 11 (2000) 173-183
13. Worsley K.J., Poline J.B., Friston K.J., Evans A.C.: Characterizing the response of PET and fMRI data using multivariate linear models. NeuroImage 6 (1997) 305-319
Microvascular Dynamics in the Nailfolds of Scleroderma Patients Studied Using Na-fluorescein Dye

Philip D. Allen¹, Chris J. Taylor¹, Ariane L. Herrick², Marina Anderson², and Tonia Moore²
¹ Imaging Science and Biomedical Engineering, University of Manchester, Manchester M13 9PT, UK
[email protected]
² Rheumatic Diseases Centre, Hope Hospital, Salford M6 8HD, UK
Abstract. Dynamic microscopy of the nailfold capillaries using Na-fluorescein dye can be used to assess the condition of the peripheral circulation of Scleroderma patients, yielding more information than simple morphological studies. In this paper we describe a computer-based system for this kind of study and present preliminary results on Scleroderma patients. We show how the dye concentrations vary both in time and as a function of distance from the capillary wall at unprecedented resolution, suggesting that a simple permeability model may be applicable to the data.
1 Introduction
Among the symptoms produced by the connective tissue disease Scleroderma [1] is a reduction in peripheral circulation that is exacerbated by exposure to cold. In extreme cases this effect can be serious enough to warrant amputation of fingers or toes, so improving the peripheral circulation of these patients is of major concern to clinicians. One widely used method of assessing the condition of the peripheral circulation is direct observation, with an optical microscope, of the tiny vessels (see figure 1) that link the arterial and venous systems in the nailfold - the skin overlapping the finger nail at its base. This is used both to assess the morphology of the capillaries, by measuring key dimensions, and their function, by observing uptake of fluorescent dyes by the capillaries and the surrounding tissue. The aim of this project has been to develop a computer-based system to facilitate both these tasks; this paper focuses on its use in fluoroscopy and presents preliminary results on patients with Scleroderma.
2 Previous Work in Nailfold Fluoroscopy - Technique and Findings
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 204-210, 2001. © Springer-Verlag Berlin Heidelberg 2001
Na-fluorescein (NaF) has a peak excitation at 470nm (visible blue) and fluoresces with a peak at 540nm (visible yellow/green). The standard approach to using
the dye in microscopy is to use a broad-band light source - usually a 100W mercury vapour lamp - and filter out all but the required excitation wavelengths with a band-pass filter (450-500nm). The subject is then illuminated with this light, and a barrier filter is placed in front of the microscope objective, stopping wavelengths below 515nm to block out the excitation light but allowing the fluorescent light through. The images obtained in this way are then stored on video for later analysis. In healthy control subjects the dye appears in the capillaries about 30 seconds after injection, then diffuses through the capillary wall in about 20 seconds, filling a region of tissue immediately surrounding the capillary called the 'halo'. Some of the dye is observed in the tissue beyond the halo border, but in much lower concentrations, suggesting an effective diffusion barrier. In patients with Scleroderma, the main difference observed is that the halo is more irregular in appearance, if present at all, and is a much less effective barrier to diffusion [2,3,4].
3 Current Work - Technique

3.1 Basic Data Acquisition - Normal Illumination
Previous work on video capillaroscopy [2,3,4] relied on storing the output of a video microscope on video tape, then digitising a single video frame from which to measure a particular capillary's dimensions. The major drawback to this approach is that, because the capillary walls themselves are transparent with only the red blood cells visible, gaps in the flow of blood (filled with transparent plasma) can render the capillaries incomplete at any one instant. To overcome the problem of incomplete capillaries it was decided to integrate the information from a number of sequential video frames. This required a method of automatically registering the frames to correct for the motion between finger and microscope objective - the finger having been only lightly constrained so as not to affect blood flow. A method of frame registration was developed based on linear feature detection which could successfully register nailfold images, and this method was extended to video frames with only partial overlap so that a picture of the entire area under study could be produced (Figure 1). The registration method is described fully in [5], and its robustness and accuracy are described in [6]. This registration methodology has been developed into a complete computer-based data acquisition system now used routinely by clinicians, allowing a composite mosaic image of the capillary network to be built up in real time via a simple graphical user interface. The system is based on a standard PC (Pentium2 200MHz, 256Mb RAM) with the output from the microscope's CCD camera fed directly into a digitiser board (Snapper8), thus eliminating video noise.

3.2 Data Acquisition with Fluorescent Dyes
The system used here is based around a microscope developed by KKTechnologies (www.KKtechnologies.com), and differs from the conventional technique
Fig. 1. A mosaic image of the capillaries in the nailfold formed from a video frame sequence.
(see section 2) in that it uses blue LEDs with peak emission matching the peak excitation wavelength of the NaF dye, thus dispensing with the need for an excitation filter and 100W mercury vapour lamp, and so reducing the cost and complexity of the apparatus dramatically. The procedure for acquiring the fluoroscopy data is an extension of the system described in section 3.1. Once a mosaic image of the region of interest has been constructed, illumination of the subject is switched to the blue LEDs. The patient is injected with the dye and the system is triggered manually when the dye appears to capture video frames at 5Hz for 30 seconds followed by 100 frames over the next 30 minutes with an exponentially increasing time interval. This is because the light levels observed rise very rapidly initially, followed by a much slower decline as the dye is extracted by the kidneys. In addition, prior to the dye appearance the system captures frames at 5Hz in a 20 second buffer so that subtle increases in light intensity prior to the apparent appearance of the dye are not lost.
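The capture timing described above - 5Hz for 30 seconds, then 100 frames over the next 30 minutes with an exponentially increasing interval - can be sketched as follows. The paper does not specify the exact spacing law, so a geometric progression of inter-frame gaps is an assumption, as are the function and parameter names:

```python
import numpy as np
from scipy.optimize import brentq

def capture_schedule(fast_rate=5.0, fast_dur=30.0, slow_frames=100, slow_dur=1800.0):
    """Frame timestamps (s): a constant-rate burst, then `slow_frames` frames
    whose inter-frame gaps grow geometrically to span `slow_dur` seconds.
    The geometric spacing is an illustrative assumption, not the system's
    documented behaviour."""
    fast = np.arange(0.0, fast_dur, 1.0 / fast_rate)     # 150 frames at 5 Hz
    dt0 = 1.0 / fast_rate
    # Find growth factor r with dt0*(r + r^2 + ... + r^slow_frames) = slow_dur.
    total = lambda r: dt0 * r * (r**slow_frames - 1.0) / (r - 1.0) - slow_dur
    r = brentq(total, 1.0001, 2.0)
    gaps = dt0 * r ** np.arange(1, slow_frames + 1)
    slow = fast_dur + np.cumsum(gaps)
    return np.concatenate([fast, slow])

t = capture_schedule()   # 250 timestamps covering ~30 s + 30 min
```

A schedule of this shape matches the physiology: dense sampling during the rapid initial rise, sparse sampling during the slow renal clearance.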
4 Current Work - Results
In practice, ethical approval for the use of fluorescent dyes in control subjects tends to be harder to gain than for disease patients. We therefore initially tested the system on 8 Scleroderma patients, since we expect there to be significant variation between these patients, making it easier to test for a real effect in any measurement applied. The patients had Scleroderma in varying degrees of severity, though a precise definition of severity is not possible, and varying degrees of capillary pattern distortion/enlargement. Each patient was studied on two visits 28 days apart. The various diffusion patterns for Scleroderma outlined by previous work in section 2 were observed, i.e. inhomogeneity of leakage through the pericapillary halo and enlargement of the halo, especially around the capillary loop apex, as well as patterns that appeared to be fairly normal (figure 2). To investigate how the fluorescent light intensity (FLI) varied with time at a particular position in the area under study, a software framework was constructed in which the whole sequence could be replayed and the registered composite of the whole sequence could be viewed. From the composite image, points
Fig. 2. Varying patterns of fluorescent dye diffusion in patients with Scleroderma.

in the scene could be chosen using the mouse, and the light intensity from a 3x3 pixel neighbourhood could be deduced automatically for the corresponding position in all of the frames in the sequence. To investigate how the FLI vs time profiles varied with distance, points were selected on a line perpendicular to the capillary outer wall at the apex. The apex of the loop was chosen so that the influence of neighbouring capillaries would be minimal. A typical result is shown in figure 3. Within the capillary wall we are effectively measuring the concentration of the dye in the blood plasma, and the peak in the FLI can be seen as the injected bolus of dye makes its first pass around the circulatory system - on subsequent recirculations the bolus will have become more effectively mixed with the blood, and so these do not show up here as secondary peaks. Moving further away from the capillary wall and into the surrounding tissue, this first-pass bolus peak becomes less and less pronounced and the overall FLI values (relative to those before the dye appeared) decrease, until a point is reached where the FLI profile appears to be independent of distance from the capillary outer wall. This point seems to correspond with the outer edge of the pericapillary halo. In fact, in those patients with fairly well defined halos, it is found that the FLI profiles beyond the capillary halos are independent of position throughout the area under study. This suggests that within the halo region around each capillary loop the transport of dye follows a diffusive process and the FLI profiles obtained are strongly influenced by the concentration of dye in the plasma of the local capillary. The area beyond the halos, however, is a homogeneous leakage space which is fed by a number of local capillaries, and within this region the effect of local individual capillaries is greatly diminished.
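The neighbourhood-averaging step is simple to express. The sketch below (illustrative names and toy data, not the actual system's code) extracts one FLI value per frame from a 3x3 patch of a co-registered sequence:

```python
import numpy as np

def fli_profile(frames, y, x):
    """Mean intensity of the 3x3 pixel neighbourhood centred on (y, x),
    one value per frame.  `frames` is a (T, H, W) array of co-registered
    video frames; names are illustrative."""
    patch = frames[:, y - 1:y + 2, x - 1:x + 2]   # (T, 3, 3) neighbourhood
    return patch.mean(axis=(1, 2))

# Toy sequence: 10 frames, with the pixel at (16, 16) ramping up over time.
frames = np.zeros((10, 32, 32))
frames[:, 16, 16] = np.arange(10.0)
profile = fli_profile(frames, 16, 16)   # FLI vs time at that point
```

Averaging over a small patch trades a little spatial resolution for noise suppression, which matters when comparing profiles at successive distances from the capillary wall.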
Also of interest is that the first-pass peak is observable beyond the capillary wall. To a small extent this is expected, since the plasma layer extends beyond the visible column of red blood cells by about 5 microns. However, in some cases it is still visible throughout the halo region - something not observable in previous work [2,3,4], since the temporal sampling rate there was much lower.

4.1 Modelling Permeability
If we consider the plasma, the halo, and the region beyond the halos as three compartments, with the plasma feeding the whole system with dye, then a simple
Fig. 3. An example of the variation of fluorescent light intensity vs time plots with distance from the capillary wall. Here 1 pixel corresponds to 1.23 µm; thus the total length of the distance axis is 123 µm.
permeability model may be applicable. Here we attempted to apply a standard kinetic model used in the study of dye uptake in Magnetic Resonance Imaging [7] of the following form:
C_t(t) = K^{trans} ∫_0^t C_p(t') e^{-(K^{trans}/V)(t-t')} dt'    (1)
where C_p is the concentration of dye in the plasma, C_t is the concentration of dye in the surrounding tissue, K^{trans} is the transfer constant, and V is the fractional volume of the surrounding tissue to which the dye has access. The physical interpretation of K^{trans} depends on the ratio of capillary permeability to blood flow. If the permeability is high then the flux of dye across the capillary wall is flow limited, and K^{trans} is the blood plasma flow per unit volume of tissue. In the reverse situation, where the flux is permeability limited, K^{trans} is the permeability-surface area product between blood plasma and the surrounding tissue per unit volume of tissue. It is impossible for us to know in advance which of these regimes we are in, so we can attempt to fit this model and see whether the results are consistent for particular capillaries and/or individuals irrespective of varying blood flow. For this data there are three potential permeability barriers to investigate: plasma to halo; halo to beyond-halo, or leakage; and across the whole system, i.e. plasma to leakage. Equation 1 was fitted using simplex minimisation to the
data for each patient, and for each of the three permeability barriers outlined above. Only the first 260 frames were used since beyond this point the exponentially increasing frame spacing makes the integration in equation 1 increasingly inaccurate. Figure 4 shows two examples of the fits obtained.
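A fit of this kind can be sketched as follows, using a discrete convolution for equation (1) and Nelder-Mead (simplex) minimisation. The toy plasma curve, starting values, and the absolute-value positivity trick are illustrative assumptions, not details from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def tissue_curve(params, t, cp):
    """Discretisation of equation (1): Ct = Ktrans * (Cp convolved with
    exp(-(Ktrans/V) t)), assuming uniformly sampled data (as holds for the
    first 260 frames)."""
    ktrans, v = params
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / v) * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

def fit_permeability(t, cp, ct, start=(0.1, 0.5)):
    """Simplex (Nelder-Mead) fit of Ktrans and V, as described in the text.
    The starting point is an illustrative choice."""
    sse = lambda p: np.sum((tissue_curve(np.abs(p), t, cp) - ct) ** 2)
    res = minimize(sse, start, method="Nelder-Mead")
    return np.abs(res.x)

# Synthetic sanity check: recover known parameters from noise-free curves.
t = np.arange(260) * 0.2                              # 5 Hz sampling
cp = np.exp(-t / 20.0) * (1.0 - np.exp(-t / 2.0))     # toy plasma input, not real data
ct = tissue_curve((0.05, 0.4), t, cp)
ktrans, v = fit_permeability(t, cp, ct)
```

Restricting the fit to uniformly spaced frames sidesteps the integration error noted above for the exponentially spaced tail of the acquisition.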
Fig. 4. Relative fluorescent light intensity vs time measured at two sites for a Scleroderma patient on two separate visits (a and b) 28 days apart. Data from the plasma region (upper curve) and beyond the halo (lower curve) are shown, with the permeability model fit (equation 1) represented by the smooth curve.
The real physical situation is bound to be more complex than the one suggested by the model, and so only an approximate fit can be expected. For each patient there are a number of capillary loops in the field of view, ranging from 3 to 7 depending on the intrinsic capillary dimensions, and the model is fitted to the data from each. The variation of light intensity from capillary to capillary is greater than the noise on any single FLI profile, and so any expression of uncertainty for an individual should stem from this. If the model fit were in some way related to a physical property of the capillary loop to whose data it was applied, then we would expect a correlation between the values of K and V on subsequent visits. If we pool the individual loops from all of the patients into one group and compare the values from the two visits, we find no correlation for K when fitting to the plasma-to-leakage barrier (correlation coefficient 0.189) but a reasonable correlation for V (0.673). For the plasma-to-halo barrier many of the fits result in a V greater than one, which is physiologically impossible, suggesting that the model has broken down here. This seems to be because in many cases the two profiles are indistinguishable, suggesting an extremely high permeability. Nothing can be concluded about the model fit from this data, but the situation may not be the same with control subjects, where we expect the permeability to be lower. For the halo-to-leakage barrier we find a significant correlation for both K and V: 0.5303 (P=0.0012) and 0.4529 (P=0.0076) respectively (33 loops).
5 Conclusions
The system described can be used at the very least to reproduce the kind of analysis done in previous studies, but with much greater ease, at much lower equipment cost, and with much higher temporal accuracy. In particular, the increased temporal resolution has revealed features in the fluorescent light intensity vs time profiles not observable before, such as the presence of a first-pass dye bolus peak beyond the capillary wall. This may have significance for MRI work in brain tumours, where the shape of the profile observed is used to determine whether a capillary is present or not, this being impossible to do directly due to lack of spatial resolution. The use of controls will be important in confirming the significance of this. The lack of correlation between the values of K obtained on subsequent visits for the model fits across the plasma/leakage barrier may be due to inappropriate use of the model, or to genuine variations in the patients between visits. The appearance of Scleroderma patients' capillaries is known to change substantially over time, and so their physical properties may also. Again, this cannot be resolved without the relative stability of controls to compare against. What this analysis clearly shows is that it is dangerous to consider capillary loops in isolation, as is done in most work where only one loop is studied. Within a Scleroderma patient there can be great variation in the morphology and dynamic behaviour of the capillary loops, and the area under study can also come under the influence of capillary loops outside the field of view.
References
1. D. A. Isenberg and C. Black. Raynaud's Phenomenon, Scleroderma, and Overlap Syndromes. British Medical Journal, 310:795-798, March 1995.
2. W. Grassi, P. Core, G. Carlino, and C. Cervini. Acute Effects of Single Dose Nifedipine on Cold-Induced Changes of Microvascular Dynamics in Systemic Sclerosis. British Journal of Rheumatology, 33:1154-1161, 1994.
3. A. Bollinger, K. Jager, and W. Seigenthaler. Microangiopathy of Progressive Systemic Sclerosis. Arch Intern Med, 146:1541-1545, 1986.
4. Alfred Bollinger and Bengt Fagrell. Clinical Capillaroscopy. Hogrefe and Huber Publishers, 1990.
5. P. D. Allen, C. J. Taylor, A. L. Herrick, and T. Moore. Enhancement of Temporally Variable Features in Nailfold Capillary Patterns. In British Machine Vision Conference, volume 2, pages 535-544, 1998.
6. P. D. Allen, C. J. Taylor, A. L. Herrick, and T. Moore. Image Analysis of Nailfold Capillary Patterns from Video Sequences. In Medical Image Computing and Computer-Assisted Intervention - MICCAI'99, pages 698-705, 1999.
7. Paul S. Tofts. Estimating Kinetic Parameters From Dynamic Contrast-Enhanced T1-Weighted MRI of a Diffusable Tracer: Standardized Quantities and Symbols. Journal of Magnetic Resonance Imaging, 10:223-232, 1999.
Time Curve Analysis Techniques for Dynamic Contrast MRI Studies

Edward V.R. Di Bella¹ and Arkadiusz Sitek¹,²
¹ Dept of Radiology, University of Utah, Salt Lake City, UT 84108
[email protected]
² E. O. Lawrence Berkeley National Laboratory, Berkeley, CA 94720
[email protected]

Abstract. Clinical magnetic resonance imaging of regional myocardial perfusion has recently become possible with the use of rapid acquisitions to track the kinetics of an intravenous injection of contrast. A great deal of processing is then needed to obtain clinical parameters. In particular, methods to automatically group alike regions for an increased signal-to-noise ratio and improved parameter estimates are needed. This work explores two types of time curve analysis techniques for MRI perfusion imaging: factor analysis and clustering. Both methods are shown to work for extraction of the blood input function, with the clustering method appearing to be more robust. The availability of an accurate blood input function then enables more complex approaches to automatically fitting all of the relevant data to appropriate models. These more complex approaches are formulated here and tested in a preliminary fashion.
1 Introduction
Measurements of the dynamic transfer of a tracer or contrast agent into and out of 3D regions of interest can provide a wealth of clinically relevant data. Such 4D data can be considered as a collection of time-activity curves (one for each measured voxel). For many applications, it is desirable to find a lower-dimensional representation of parameters to represent the data for clinical interpretation. For example, compartmental modeling of blood flow to the heart in dynamic SPECT, PET, and MRI [1-3] can link regional washin or washout parameters to the clinically important measurement of regional perfusion or viability. These types of studies involve acquisition and reconstruction of data, followed by manual processing. Regions of interest are chosen in the left ventricle blood pool, and in perhaps 20 myocardial tissue regions in the 3D volume. These curves are then fit to a compartmental model and parameters obtained for each of the 20 regions. Such techniques are not widely used due to their complexity, and in part because adequate processing methods have not been developed for many commercially available tracers or contrast agents. In particular, automatic methods for choosing regions and processing the 4D data are virtually non-existent for dynamic SPECT and MRI applications.
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 211-217, 2001. © Springer-Verlag Berlin Heidelberg 2001
While many existing spatial segmentation
approaches can be applied, the use of the temporal data is critical in obtaining optimal parameter estimates. In fact, it may be that in some cases the temporal signature of each acquired data point is sufficient to yield an “optimal” set of model parameters and that the regions may not always be contiguous or predictable spatially. Indeed, factor analysis approaches, which do not intrinsically use any spatial information, have been applied to dynamic PET and SPECT studies for automatically extracting the blood time-activity curve [4-6]. The contributions of this work are threefold. First, the use of a factor analysis approach for extraction of the blood input function from dynamic cardiac MRI studies is explored. Second, a clustering method based only on temporal correlations is used for the same application. And third, this work formulates extensions of these approaches for automatic grouping and fitting of temporally alike tissue time-signal voxels, assuming the data fits a physiological two-compartment model. In addition to automating and standardizing the processing, these approaches should yield curves with increased signal-to-noise compared to manual region selection, thereby providing improved kinetic parameter estimation.
2 Background
Recent technological advances in MRI have made it possible to acquire 5-8 slices of the heart every other heart beat [7] with sufficient resolution and signal-to-noise to track the uptake (and washout) of a bolus injection of paramagnetic contrast. Much more rapid acquisitions are possible, though they have not yet achieved the image quality necessary for perfusion measurements. The gadolinium contrast agent used has been shown to adhere to a physiological two-compartment model when certain assumptions are made. Typically, the arterial input function and regional tissue time-signal curves are manually extracted and fit to the model [3]. The model parameters have been shown to provide absolute quantitative measures of regional blood flow when compared to microsphere-determined blood flows in four dogs [8]. Factor analysis-type approaches consider the signal over time at each voxel as a linear combination of a few underlying basis curves [9,10]. The primary drawback to many of the Factor Analysis of Dynamic Structures (FADS) methods in the literature is that a unique solution is not provided. Recently, Sitek et al. investigated the FADS non-uniqueness problem and offered possible solutions for dynamic SPECT studies [6,11]. One scheme is to formulate FADS with a coefficient inner product term such that factor coefficients are penalized if they overlap [11]. Here we investigate the use of the method for dynamic cardiac MRI data. Note that factor analysis applied to cardiac imaging typically only seeks to identify the input function, not regional variations in myocardial tissue uptake. Given appropriate models and constraints, it is possible that the factor analysis approaches are sufficiently powerful that tissue curves from inhomogeneous regions could be extracted as well. Some significant work has been done towards automating and improving the noise properties of time-activity curve extraction in dynamic PET studies
with FDG [12] and with O-15 water [13,14]. The optimized curves are then often fit to appropriate compartmental models. O’Sullivan’s work used a mixture model (similar to the FADS formulation in [6]) to classify time-activity curves. O’Sullivan reported good results for the targeted application of dynamic PET brain imaging with FDG. In addition, we have used spatial segmentation in combination with some limited temporal information for dynamic SPECT studies [15]. Another natural tool for extracting temporally similar regions is K-means type classification [16]. These types of methods have not previously been applied to dynamic contrast MRI studies. O’Sullivan did use clustering to define the allowable space of the mixture model curves and to give a “warm” start to the optimization. A heuristic was applied to determine the appropriate number of clusters [12].
3 Theory and Methods

3.1 Factor Analysis
FADS with Least Squares and Coefficient Overlap Penalty. The formulation of FADS we consider here is that of the nonlinear least squares minimization given in [6]. This method has been found to provide results similar to the original FADS implementation of [9]. Briefly, the method solves:

min_{C,F} Σ_{n,m} (A_{nm} − Σ_p C_{np} F_{pm})² + f_neg(C, F)    (1)

where the matrix A contains the intensity measured at each of N voxels at M time points, C is an NxP matrix where P is the number of factor coefficients at each voxel, and F holds the factor curves (P curves, each of length M). The negativity penalty f_neg(C, F) is as given in [6], and a term penalizing overlapping factors [11] is also used in our implementation. This method is termed penalized least squares (PLS-FADS).

Extension for Use with Compartmental Model Data. If we assume that the data has been pre-processed such that only regions that can be well represented by a compartmental model are present, then we can reformulate the PLS-FADS method using this knowledge. The same expression given in equation (1) is minimized, but instead of considering each element of the matrix F as an unknown, we allow only the first factor curve of F to vary, and represent the remaining factor curves with a physiological model:
where the matrix A contains the intensity measured at each of N voxels at M time points. C is an NxP matrix where P is the number of factor coefficients at each voxel, and F holds the factor curves (P curves each of length T ). The negativity penalty fneg (C,F) is as given in [6] and a term penalizing overlapping factors [11] is also used in our implementation. This method is termed penalized least squares (PLS-FADS). Extension for Use with Compartmental Model Data If we assume that the data has been pre-processed such that only regions that can be well-represented by a compartmental model are present, then we can reformulate the PLS-FADS method using this knowledge. The same expression given in equation (1) is minimized, but instead of considering each element of the matrix F as an unknown, we allow only the first factor curve of F to vary, and represent the remaining factor curves with a physiological model: Fpm = F1m ⊗ exp(−kp−1 m), p = 2...P
(2)
Instead of MxP unknowns in F, there are now M+P-1 unknowns and the solution is constrained by the use of (2). The difficulty arises when trying to minimize
(1) when using (2) - this is a large nonlinear minimization problem. Thus we assumed that it is possible to obtain a reasonably correct input function from the PLS-FADS method and hold F_{1m} fixed. The problem is still full of local minima, so to improve convergence we alternately minimize with respect to the coefficients C and the washout parameters k_p. This is a new paradigm for compartmental modeling approaches and is an alternative to choosing regions with the help of FADS or with other segmentation approaches. In this way, regions with similar temporal behavior are grouped automatically and the model parameters associated with each region are determined. A unique aspect is that the regions may still be linear combinations of a few basis curves and may be spatially non-contiguous.

3.2 Clustering
The standard K-means clustering minimizes:

min Σ_p Σ_{n∈C_p} ||a_n − µ_p||²

where a_n is the vector of values over time for spatial location n (a 'dixel'), and µ_p is the average curve of the members of the p-th cluster C_p. There is no assurance of converging to a global minimum, and generally the minimization is done in an iterative fashion by evaluating every vector until none of them change clusters. A good starting set can assist in achieving reasonable results, since the algorithm is dependent on the starting point. The implementation used here employs a random subset of the time curves to determine initial cluster centers or means. To speed convergence, the cluster means are recomputed each time a curve is moved to another cluster [17].
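The variant just described can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the `init` seed-row parameter and the `max_sweeps` safety bound are additions for reproducibility:

```python
import numpy as np

def kmeans_curves(A, P, init=None, rng=None, max_sweeps=100):
    """K-means on time curves (rows of A), following the variant in the
    text: means start from a random subset of the curves, and a mean is
    recomputed as soon as a curve changes cluster."""
    rng = rng or np.random.default_rng(0)
    N = len(A)
    seeds = init if init is not None else rng.choice(N, size=P, replace=False)
    mu = A[np.asarray(seeds)].astype(float).copy()
    labels = np.full(N, -1)
    for _ in range(max_sweeps):
        moved = False
        for n in range(N):
            old = labels[n]
            new = int(np.argmin(np.linalg.norm(A[n] - mu, axis=1)))
            if new != old:
                labels[n] = new
                moved = True
                for q in (old, new):          # recompute the two affected means
                    if q >= 0 and np.any(labels == q):
                        mu[q] = A[labels == q].mean(axis=0)
        if not moved:
            break
    return labels, mu

# Two well-separated groups of flat curves; explicit seeds for determinism.
A = np.vstack([np.zeros((5, 4)), 10.0 * np.ones((5, 4))])
labels, mu = kmeans_curves(A, 2, init=[0, 5])
```

Updating the affected means immediately, rather than once per sweep, is what the text means by recomputing "each time a curve is moved"; it typically reduces the number of sweeps needed.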
Extension for Use with Compartmental Model Data. Assuming the blood input function is available from either the clustering or the PLS-FADS approach, this function is then used to create a large number of fixed potential cluster mean curves. The curves are created by convolving the input function with fifty different decaying exponentials, according to a two-compartment model. This is in the spirit of 'spectral analysis' approaches [18] in that fixed exponentials are used. Here a correlation coefficient is computed between each curve and each potential cluster center, and the curves are grouped into one of the fifty different clusters. The use of fifty clusters was chosen to provide sufficient range and precision for the data used in this work. As with spectral analysis, it is anticipated that many of the clusters will be empty. This approach fits for the washout parameters, as there is no penalty for different scalings. A second step is needed if washin values are desired. This step calculates the variance of each of the previously found clusters. If the variance is above a threshold, the cluster is divided into two groups (different scale factors). This continues recursively until the data is completely segmented, yet regions with similar washin and washout coefficients are left together. Results from this second step are not given here, as further work is needed to determine appropriate thresholds.
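The first step of this extension can be sketched as below. The paper does not list the fifty washout values, so a log-spaced grid is assumed, and the blood and tissue curves here are synthetic stand-ins:

```python
import numpy as np

def model_cluster(curves, blood, ks):
    """Assign each tissue curve to its best-correlated candidate centre,
    where each centre is the blood input convolved with a decaying
    exponential (two-compartment model).  `ks` is an assumed grid of
    washout rates; the actual values used in the paper are not given."""
    t = np.arange(len(blood), dtype=float)
    centres = [np.convolve(blood, np.exp(-k * t))[: len(t)] for k in ks]
    labels = []
    for c in curves:
        corr = [np.corrcoef(c, m)[0, 1] for m in centres]
        labels.append(int(np.argmax(corr)))   # correlation ignores scaling
    return np.array(labels)

ks = np.logspace(-2, 0, 50)                            # 50 candidate washout rates
t = np.arange(60, dtype=float)
blood = np.exp(-t / 10.0) * (1.0 - np.exp(-t / 2.0))   # toy input, not real data
slow = 3.0 * np.convolve(blood, np.exp(-ks[10] * t))[:60]  # scaled tissue curve
fast = np.convolve(blood, np.exp(-ks[40] * t))[:60]
labels = model_cluster(np.array([slow, fast]), blood, ks)
```

Because correlation is invariant to scale, the `slow` curve lands in the same cluster regardless of its amplitude, which is why a second, variance-based step is needed before washin values can be recovered.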
3.3 Application to Dynamic Cardiac MRI
A simulation was created based on a cardiac dynamic gadolinium contrast MRI data set obtained at our facility with a Marconi Eclipse 1.5T scanner. The data was pre-processed to convert signal intensities into gadolinium concentrations as described in [3]. An input curve was obtained from a manually chosen region in the blood pool in one slice. Tissue data fit to a two compartment model was used as the time-varying part of the simulation. The spatial part of the simulation was created from the data set after segmentation with the clustering method. The simulation had four distinct curves and five regions (Fig. 1). Gaussian noise was added to the simulated images. The simulated data was analyzed with PLS-FADS and with the clustering method. The extensions to incorporate compartmental models with the methods were also used. Data from a volunteer and a patient were also analyzed with the two basic methods to extract the blood input function. For these cases, the data were not transformed into gadolinium concentrations. Signal intensities were used. The patient data were from images obtained every heartbeat before and during a bolus injection of gadolinium.
4 Results
Fig. 1 shows the data used for the simulation and results from the two basic methods. The clustering method with the compartmental model formulation provides groups of washout values that are near the truth; the average washout of each grouping was within 1% of the truth. The FADS-compartmental modeling method was very sensitive to the starting estimates; in general, starting estimates needed to be within a few percent of the truth to converge correctly. Data from real scans resulted in very similar blood input functions for both basic methods. Fig. 2 shows the 4D results from two slices of patient data.

Fig. 1. (a) Image of one time frame of simulated data with noise. (b) Factor coefficients from the PLS-FADS method. (c) Time-concentration curves from simulated data as estimated by cluster analysis. The blood input function appears to be the same for both methods after scaling. The right ventricle component is also shown (artificially at twice its true value to enable both curves to appear on the same plot).

Edward V.R. Di Bella and Arkadiusz Sitek

Fig. 2. "4D" patient data. (a) A "3D" (two slices) spatial view of the segmentation from 7 clusters. Each cluster is assigned a different grey level. (b) One of the factor coefficient images. (c) Blood input from three different methods. The initial part of the curve for FADS was not uniquely determined.
5 Discussion
Imaging of myocardial perfusion with dynamic contrast MRI is an emerging application that could prove to be an accurate and efficient method if the proper analysis methods are developed and automated. The work here provides a step in that direction by establishing robust methods for extracting the blood input function automatically. It also formulates methods for automatically obtaining compartmental model parameter estimates and demonstrates the utility of these ideas in a preliminary fashion. Such methods may prove to be of great value in the development of dynamic contrast MRI. One result of particular interest is the similarity of the blood time-signal curves from the factor analysis and the clustering methods. Both the right and the left ventricle curves were almost identical in most of the data analyzed (not shown). Note that the non-blood curves vary, since the factor analysis approach models the measured data as a linear combination of a few underlying curves. Also, the initial part of the blood curve from the PLS-FADS method in Fig. 2 was different from the clustering result. If we assume that the manual ROI curve is correct, this result implies that the clustering method is the preferred choice for blood curve identification (note that much of the difference could be avoided in this case by
using only time frames after the initial rise of the blood curve.) Although this work focuses on the use of temporal data, in the future we aim to use spatial and temporal information jointly to find optimal segmentations and physiologic basis curves for dynamic MRI, SPECT, and PET data sets.
References

1. Gullberg GT, Huesman RH, Ross SG, et al. Dynamic cardiac single photon emission computed tomography. In: Zaret BL, Beller GA, eds. Nuclear Cardiology: State of the Art and Future Directions. New York: Mosby-Year Book; 1998:137-187.
2. Jovkar S, Evans AC, Diksic M, et al. Minimisation of parameter estimation errors in dynamic PET: choice of scanning schedules. Phys. Med. Biol. 1989;34:895-908.
3. Vallee J-P, Lazeyras F, Kasuboski L, et al. Quantification of Myocardial Perfusion With FAST Sequence and Gd Bolus in Patients with Normal Cardiac Function. J. Magn. Reson. Imaging 1999;9:197-203.
4. Wu H-M, Hoh CK, Choi Y, et al. Factor analysis for extraction of blood time-activity curves in dynamic FDG-PET studies. J. Nucl. Med. 1995;36:1714-1722.
5. Sitek A, DiBella EVR, Gullberg GT. Factor analysis of dynamic structures in dynamic SPECT using maximum entropy. IEEE Trans. Nucl. Sci. 1999;46:2227-2232.
6. Sitek A, DiBella EVR, Gullberg GT. Factor analysis with a priori knowledge - application in dynamic cardiac SPECT. Phys. Med. Biol. 2000;45:2619-2638.
7. Ding S, Wolff SD, Epstein FH. Improved Coverage in Dynamic Contrast-Enhanced Cardiac MRI Using Interleaved Gradient-Echo EPI. Magn. Reson. Med. 1998;39:514-519.
8. Vallee J-P, Sostman HD, MacFall JR, et al. Quantification of Myocardial Perfusion by MRI After Coronary Occlusion. Magn. Reson. Med. 1998;40:287-297.
9. DiPaola R, Bazin JP, Aubry F, et al. Handling of dynamic sequences in nuclear medicine. IEEE Trans. Nucl. Sci. 1982;29:1310-1321.
10. Buvat I, Benali H, DiPaola R. Statistical distribution of factors and factor images in factor analysis of medical image sequences. Phys. Med. Biol. 1998;43:1695-1711.
11. Sitek A, Gullberg GT, Huesman RH. Correction for ambiguous solutions in factor analysis using a penalized least squares objective. In: IEEE Med. Imaging Conf. Lyon, France: IEEE; 2000.
12. O'Sullivan F. Imaging Radiotracer Model Parameters in PET: A Mixture Analysis Approach. IEEE Trans. Med. Imaging 1993;12:399-412.
13. Chiao P-C, Rogers WL, Clinthorne NH, et al. Model-based estimation for dynamic cardiac studies using ECT. IEEE Trans. Med. Imag. 1994;13:217-226.
14. Hermansen F, Lammertsma AA. Linear dimension reduction of sequences of medical images: III. Factor analysis in signal space. Phys. Med. Biol. 1996;41:1469-1481.
15. DiBella EVR, Gullberg GT, et al. Automated region selection for analysis of dynamic cardiac SPECT data. IEEE Trans. Nucl. Sci. 1997;44:1355-1361.
16. Bezdek JC, Hall LO, Clarke LP. Review of MR image segmentation techniques using pattern recognition. Med. Phys. 1993;20:1033-1048.
17. Theiler J, Gisler G. A contiguity-enhanced k-means clustering algorithm for unsupervised multispectral image segmentation. In: Proc. SPIE; 1997:108-118.
18. Cunningham V, Jones T. Spectral analysis of dynamic PET studies. J. Cereb. Blood Flow Metab. 1993;13:15-23.
Detecting Functionally Coherent Networks in fMRI Data of the Human Brain Using Replicator Dynamics

Gabriele Lohmann and D. Yves von Cramon

Max-Planck-Institute of Cognitive Neuroscience, Stephanstr. 2a, 04103 Leipzig, Germany
[email protected]

Abstract. We present a new approach to detecting functional networks in fMRI time series data. Functional networks as defined here are characterized by a tight coherence criterion whereby every network member is closely connected to every other member. This definition of a network closely resembles that of a clique in a graph. We propose to use replicator dynamics for detecting such networks. Our approach differs from standard clustering algorithms in that the entities targeted here differ from the traditional cluster concept.
1 Introduction
In this paper we will introduce a new approach to modeling and detecting functionally coherent networks in the human brain, based on a well-known concept from theoretical biology called 'replicator equations'. Our approach is based on measurement data from functional magnetic resonance imaging (fMRI). In fMRI, test subjects are presented with cognitive or sensory stimuli and are asked to respond to them while a sequence of T2*-weighted magnetic resonance images is acquired. In the course of a typical fMRI experiment, several hundred or even several thousand images are recorded at a rate of about 1 to 2 seconds per image. Usually, these image sequences are then analyzed using standard statistical techniques to reveal areas in the brain that are significantly activated when a stimulus condition is contrasted against some baseline condition. The result of such an analysis is an activation map showing the degree of statistical significance with which each pixel can be considered activated. While such maps are of great value for purposes of human brain mapping, they do not reveal interdependencies between areas of activation. Therefore, the aim of this paper is to present a new approach that allows us to identify such interdependencies of brain activations and to detect functionally coherent networks within an fMRI image sequence. The basic assumption here is that during the course of an fMRI experiment, several brain regions are active and interact with each other, thus forming a functionally coherent network. We assume that these networks can be detected by analyzing correlations between fMRI time courses. The important point to note here is that our algorithm is not a clustering algorithm, because our concept of a coherent network differs from the traditional cluster concept.

A large number of clustering methods for a variety of application domains exist and have been described in the literature [12]. Clustering has also been applied in the present domain of application ([5], [7], [9]). The difference between the method proposed in this work and these approaches lies in the definition of a cluster. Usually, a clustering is defined as a partitioning of a feature space into several components such that the elements within the same component are close to some central element and the distances between different components are large. Thus, most traditional clustering algorithms identify star-shaped topologies in which each element of the feature space is associated with one of a few central elements.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 218–224, 2001.
© Springer-Verlag Berlin Heidelberg 2001
Fig. 1. A clique in a graph. A clique is a set of nodes such that any two nodes are connected by an arc. The solid lines form a clique, whereas the dashed lines do not.
In contrast, we aim at finding networks (or clusters) that exhibit much stronger coherence properties: each element of a network must be close to every other network member. This requirement seems better suited to our domain of application: we want to identify networks of brain activity such that all members belonging to the same network interact with each other. Current knowledge about brain processes suggests that such topologies are more realistic than star-shaped topologies. In order to differentiate between these two concepts, we will subsequently use the term 'network' instead of 'cluster'. This concept of a coherent network is close to the concept of a clique in graph theory. A clique is defined as a collection of nodes in a graph such that any two nodes are connected by an arc (see figure 1). Our definition amounts to a "weak" formulation of the clique criterion, in a sense that will be explained in subsequent sections. The algorithm that we propose is based on a concept well known in theoretical biology called "replicator dynamics". Replicator dynamics describe the growth of populations consisting of several species that interact with each other. Replicator dynamics have recently been used in the context of graph theory as a means of detecting maximal cliques for graph matching purposes [4]. We have adapted the concept of replicator dynamics for our purposes because it allows us to detect networks in the sense defined above. In addition to supporting this new notion of a network, our method has the advantage of using only pairwise similarity measurements rather than an explicit measurement vector in each pixel. In our context, this is particularly advantageous, as the entities that we want to process are very high-dimensional vectors of time courses. Similarity measurements of time courses can be easily obtained without loss of information, whereas time course vectors are difficult to handle due to their high dimensionality. It is generally not feasible to perform traditional clustering such as k-means in high-dimensional vector spaces, although some authors have attempted to do this ([5], [7], [9]). Pairwise clustering methods have been proposed by Hofmann et al. ([8]) for pattern recognition purposes, but they have not been applied to fMRI data. Recently, Independent Component Analysis (ICA) has been applied to fMRI data analysis [6]. ICA tries to decompose the image sequence into a set of independent components. It is related to cluster analysis in that it is also an exploratory method.
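For illustration, a non-negative pairwise similarity matrix of the kind the replicator process operates on could be built from pixel time courses like this (the use of Pearson correlation, the zeroed diagonal, and the clipping of negative correlations to zero are our choices; the paper only requires non-negative pairwise similarities):

```python
import numpy as np

def similarity_matrix(timecourses):
    """Pairwise similarity matrix W for the replicator process: the
    positive part of the Pearson correlation between all pairs of pixel
    time courses (rows of `timecourses`)."""
    W = np.corrcoef(timecourses)      # (n_pixels, n_pixels)
    np.fill_diagonal(W, 0.0)          # no self-similarity
    return np.clip(W, 0.0, None)      # replicator theory needs W >= 0
```

Only this n × n matrix is needed downstream, which is the point made above: the very long time-course vectors themselves never have to be clustered directly.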
2 Mathematical Framework
The basic idea underlying our approach is that functional networks can be detected solely by analyzing pairwise similarity measurements between any two time series. Thus, we start out with a similarity matrix W = (w_ij), where w_ij represents a similarity measurement between the time courses in pixels i and j. Such similarity measurements may, for instance, be based on correlation coefficients (Pearson's or Spearman's rank correlation) or on mutual information measurements. The algorithm that we propose is based solely on the matrix W. Recently, a class of dynamical systems known from theoretical biology has been used for the purpose of detecting maximal cliques in graphs [2] and also for graph matching [4]. This class of dynamical systems is described by the following equation:

d/dt x_i(t) = x_i(t) [(W x(t))_i − x(t)^T W x(t)],   i = 1, ..., n.

Its discrete version is given by:

x_i(t+1) = x_i(t) (W x(t))_i / (x(t)^T W x(t)).
These equations are known as replicator equations [14]. They are used in theoretical biology to model frequency-dependent evolution of a population containing several interacting species. The dynamical properties of replicator systems are described by the famous Fundamental Theorem of Natural Selection (see also [4], [1, p.15]):

Theorem 1. Let W be a non-negative, real-valued symmetric n × n matrix. Then the function x(t)^T W x(t) is strictly increasing with increasing t along any non-stationary trajectory x(t) under both continuous-time and discrete-time replicator dynamics. Furthermore, any such trajectory converges towards a stationary point x̄. Finally, a vector x̄ ∈ S_n is asymptotically stable if and only if x̄ is a strict local maximizer of x^T W x in S_n, where S_n = {x ∈ R^n | Σ_i x_i = 1, x_i ≥ 0, i = 1, ..., n}.
In the context of detecting maximal cliques in a graph, the matrix W is an adjacency matrix containing binary values that indicate whether or not any two nodes are connected by an arc [2],[4]. In our context, the matrix W contains non-negative real values indicating the degree of similarity or dependence between any two time courses. The vector x = (x_1, ..., x_n) represents the degree of network membership of each pixel i, where x_i ∈ [0, 1] for all i. The process of detecting a network is now straightforward. We start out with an initial vector x which is set to x = (1/n, ..., 1/n) to avoid an initial bias. We then apply the replicator dynamical process, during which the vector x evolves towards some stationary value x̄ that maximizes x^T W x. As initially all components x_i of x have the same weight x_i = 1/n, the components that increase their weight after the first iteration are the ones that interact most closely with many other components. As the process evolves, only those components x_i profit that interact most closely with many other high-weighted components. Interaction with low-weighted components becomes less and less profitable. Eventually, a small set of closely interacting components will have received a large weight, while the remaining components become negligible. These components form a closely coherent network. Note that membership in such a network is a fuzzy concept: a large value of x_i indicates a high degree of membership. The degree of coherence within this fuzzy network is expressed by x^T W x. By the fundamental theorem of natural selection as stated above, we know that at stationarity the network is maximally coherent, with a coherence measure of x̄^T W x̄. In order to 'defuzzify' the membership concept, we define pixel i to belong to the network if its membership value exceeds the average value, i.e. if x_i > 1/n. The process terminates when it becomes stationary, i.e. when the difference between subsequent iterations becomes negligible.
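The detection loop just described can be sketched as follows (a minimal illustration: the convergence tolerance, the iteration cap, and the tiny slack added to the 1/n membership threshold to handle exact ties are our choices):

```python
import numpy as np

def replicator_network(W, max_iter=1000, tol=1e-10):
    """Discrete replicator dynamics x_i <- x_i (Wx)_i / (x^T W x), started
    from the uniform vector to avoid an initial bias; pixels whose final
    weight exceeds the average 1/n are declared network members."""
    n = W.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        Wx = W @ x
        x_new = x * Wx / (x @ Wx)
        done = np.abs(x_new - x).sum() < tol
        x = x_new
        if done:
            break
    members = np.flatnonzero(x > 1.0 / n - 1e-12)  # small slack for exact ties
    return members, x

def find_networks(W, n_networks=2):
    """Detect networks one after another: find one, remove its members,
    and repeat; networks come out ranked by their coherence x^T W x."""
    remaining = np.arange(W.shape[0])
    networks = []
    for _ in range(n_networks):
        if remaining.size == 0:
            break
        members, _ = replicator_network(W[np.ix_(remaining, remaining)])
        networks.append(remaining[members])
        remaining = np.delete(remaining, members)
    return networks
```

On a similarity matrix with one tightly coupled block and one weaker block, the first call concentrates all weight on the strong block, and the second pass recovers the weaker one.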
The first network detected by the algorithm consists of all pixels i whose membership values at stationarity exceed the average, i.e. for which x_i > 1/n. To detect a second network, we eliminate all pixels that are members of the first network and repeat the above process. Further networks can be detected likewise. Note that the networks are ranked according to their degree of coherence: the first networks have a higher degree of coherence than later ones. The above process can be applied recursively at a second level of processing as follows. Suppose a number of networks have been detected as described above. We then update the similarity matrix such that

w_ij = (1 / (|N_i| |N_j|)) Σ_{k ∈ N_i, l ∈ N_j} w_kl
with Ni being the set of pixels belonging to network i. In other words, similarity values of pixels belonging to the same network are averaged. The replicator process is then applied again using the updated similarity matrix.
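The second-level update above can be sketched as follows; note that the explicit division by the block size implements the averaging stated in the text:

```python
import numpy as np

def second_level_update(W, networks):
    """Replace the similarities between pixels of detected networks by the
    mean similarity of the corresponding network pair; pixels outside all
    networks are left unchanged."""
    W2 = W.astype(float).copy()
    for Ni in networks:
        for Nj in networks:
            W2[np.ix_(Ni, Nj)] = W[np.ix_(Ni, Nj)].mean()
    return W2
```

The replicator process is then simply re-run on the matrix returned here.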
3 Experiments

3.1 Synthetic Data
The algorithm described above was implemented on a standard Unix workstation. It was first applied to the synthetically generated data displayed in figure 2. As a similarity metric we used the Euclidean distance.
Fig. 2. Synthetically generated data. The dashed lines indicate the clustering obtained by the algorithm. The rightmost image was processed using a second-level approach as described in the text.
3.2 fMRI Data
The algorithm was then applied to data acquired in an fMRI experiment. In this experiment, two volunteers were subjected to various visual stimuli. Three fMRI slices with a thickness of 5 mm, an interslice distance of 2 mm, a 19.2 cm FOV, and an image matrix of 64 × 64 were collected at a 3T Bruker 30/100 Medspec (Bruker Medizintechnik GmbH, Ettlingen, Germany) using a gradient recalled EPI sequence (TR = 1000 ms, TE = 40 ms, flip angle = 40). The within-plane spatial resolution was 3 × 3 mm. We processed an image sequence consisting of 300 time steps, corresponding to a recording time of 5 minutes. During that time, baseline trials and stimulation trials alternated. During the stimulation trials, the subjects saw a pattern of rotating L-shaped figures and a fixation cross in the center. The subjects were asked to fixate the cross and press a button whenever the appearance of the cross changed. During the baseline trials only the fixation cross was visible. We performed preprocessing (temporal highpass filtering, Gaussian smoothing with σ = 0.8) as well as a statistical analysis. Only those pixels that showed a significant correlation with the experimental stimulus were considered for further processing.
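A rough stand-in for this preprocessing might look like the following (the moving-average highpass and the small separable Gaussian kernel are simplifications of whatever filters were actually used; window and radius are assumed):

```python
import numpy as np

def highpass(tc, window=51):
    """Temporal highpass: subtract a moving-average baseline from each
    time course (rows of tc); a crude substitute for the paper's filter."""
    kernel = np.ones(window) / window
    baseline = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 1, tc)
    return tc - baseline

def gaussian_kernel1d(sigma=0.8, radius=3):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_image(img, sigma=0.8):
    """Separable spatial Gaussian smoothing of a single image."""
    k = gaussian_kernel1d(sigma)
    img = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return img
```

Both operations leave a constant signal essentially untouched away from the borders, which is a quick sanity check on the kernels.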
The network algorithm was applied to these two data sets. The results are shown in figure 3. Note that the most prominent networks that were detected in both subjects belong to the primary visual cortex (V1/V2). This agrees well with current knowledge about the human visual system.
Fig. 3. Results of the algorithm applied to two data sets of an fMRI experiment. The dark red areas represent the primary networks, the yellow to white areas represent secondary networks. In both subjects, the primary networks correspond to the primary visual cortex (V1/V2).
4 Discussion
We have presented a new approach to detecting functional networks in fMRI time series. Our definition of a network resembles that of a clique in a graph. Therefore, it captures entities that are different from those targeted by standard clustering algorithms. This new concept seems better suited to the present domain of application. Another advantage over many traditional clustering methods is that we use only pairwise similarity values. We thus avoid problems inherent in high dimensionality. Furthermore, our method requires no prior information about the number of networks, their locations in space, or their statistical distributions. The algorithm has several areas of application. First, it may be used for explorative bottom-up preprocessing of the data, so that dominant networks and perhaps also artifacts are detected prior to further statistical processing. Networks can thus be identified without any prior knowledge about the experimental design. Some networks may even be independent of the experimental design; they would remain undetected by standard statistical processing techniques. The algorithm may also be helpful in detecting functional networks where no design information is available. For instance, one might want to mask all pixels in an image that are activated within one particular experimental condition. Our algorithm might then be used to further subdivide this mask into pixels belonging to several coherent networks that are activated under the same experimental condition. We are currently investigating further domains of application.
Acknowledgments The authors would like to thank Dr. Toralf Mildner for providing the fMRI data.
References

1. J. Hofbauer, K. Sigmund: The Theory of Evolution and Dynamical Systems, Cambridge University Press, 1988.
2. I.M. Bomze: Evolution towards the Maximum Clique, J. Global Optimization, Vol. 10, 1997, pp. 143-164.
3. O. Sporns, G. Tononi, G.M. Edelman: Theoretical Neuroanatomy: Relating Anatomical and Functional Connectivity in Graphs and Cortical Connection Matrices, Cerebral Cortex, Vol. 10, Feb. 2000, pp. 127-141.
4. M. Pelillo, K. Siddiqi, S.W. Zucker: Matching Hierarchical Structures Using Association Graphs, IEEE Trans. on Pattern Anal. and Machine Intell., Vol. 21, No. 11, Nov. 1999, pp. 1105-1119.
5. C. Goutte, P. Toft, E. Rostrup, F. Nielsen, L.K. Hansen: On Clustering fMRI Time Series, NeuroImage, Vol. 9, 1999, pp. 298-310.
6. M.J. McKeown, S. Makeig, G.G. Brown, T.P. Jung, S.S. Kindermann, A.J. Bell, T.J. Sejnowski: Analysis of fMRI data by blind separation into independent spatial components, Human Brain Mapping, Vol. 6, No. 3, 1998, pp. 160-188.
7. A. Baune, F.T. Sommer, M. Erb, D. Wildgruber, B. Kardatzki, G. Palm, W. Grodd: Dynamical Cluster Analysis of Cortical fMRI Activation, NeuroImage, Vol. 9, 1999, pp. 477-489.
8. T. Hofmann, J.M. Buhmann: Pairwise Data Clustering by Deterministic Annealing, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 19, No. 1, 1997, pp. 1-14.
9. R. Baumgartner, C. Windischberger, E. Moser: Quantification in functional magnetic resonance imaging: fuzzy clustering vs. correlation analysis, Magnetic Resonance Imaging, Vol. 16, No. 2, 1998, pp. 115-125.
10. J.M. Jolion, P. Meer, S. Bataouche: Robust Clustering with Applications in Computer Vision, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 13, No. 8, 1991, pp. 791-802.
11. J.B. Poline, B.M. Mazoyer: Analysis of individual positron emission tomography activation maps by detection of high signal to noise ratio pixel clusters, Journal of Cerebral Blood Flow and Metabolism, Vol. 13, 1993, pp. 425-437.
12. R.O. Duda, P.E. Hart: Pattern Classification and Scene Analysis, John Wiley & Sons, 1973.
13. M. Singh, P. Patel, D. Khosla, T. Kim: Segmentation of Functional MRI by K-Means Clustering, IEEE Trans. on Nuclear Science, Vol. 43, No. 3, 1996, pp. 2030-2036.
14. P. Schuster, K. Sigmund: Replicator dynamics, Journal of Theoretical Biology, Vol. 100, 1983, pp. 533-538.
15. K.-H. Chuang, M.-J. Chiu, C.C. Lin: Model-free functional MRI analysis using Kohonen clustering neural network and fuzzy C-means, IEEE Trans. on Medical Imaging, Vol. 18, No. 12, Dec. 1999, pp. 1117-1128.
16. J.C. Bezdek, L.O. Hall, L.P. Clarke: Review of MR image segmentation techniques using pattern recognition, Med. Phys., Vol. 20, No. 4, Jul/Aug 1993, pp. 1033-1048.
Smoothness Prior Information in Principal Component Analysis of Dynamic Image Data

Václav Šmídl 1, Miroslav Kárný 1, Martin Šámal 2, Werner Backfrieder 3, and Zsolt Szabó 4

1 Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, POB 18, CZ-182 08 Prague 8, Czech Republic
[email protected],
[email protected] 2 Charles University Prague, Czech Republic
[email protected] 3 Institute of Biomedical Engineering and Physics, AKH Vienna, Austria
[email protected] 4 Johns Hopkins University, Baltimore, MD, USA
[email protected]

Abstract. Principal component analysis is a well-developed and well-understood method of multivariate data processing. Its optimal performance requires knowledge of the noise covariance, which is not available in most applications. We suggest a method for estimating the noise covariance based on the assumed smoothness of the estimated dynamics.
1 Introduction
In medical image processing, principal component analysis (PCA) is used for data compression, noise reduction, and feature extraction. Its usefulness and many advantages are well known. The performance of PCA depends on the amount and characteristics of noise in the observed data. In data with a low signal-to-noise ratio (SNR), inhomogeneous noise, or correlated noise, the performance of PCA can be poor. The problem has been addressed theoretically in several papers [1,2,3,4,5] with respect to the properties of the noise, and an optimal scaling of data for PCA was defined. The authors concluded that the optimal metric can be derived directly from the known covariance matrix of the noise, and suggested particular solutions for specific data. With simulated data and known noise, we have found [6] that the methods proposed in [3,4,7] are efficient, but their applicability is restricted by requirements (e.g. knowledge of the distribution or the covariance matrix of the noise) that are not easily satisfied in practice. That was the motivation for searching for a more practical approach. We suggest that the covariance matrix of the noise, and thus the optimal metric for PCA, can be estimated using rather general prior information on the assumed smoothness of the dynamic processes recorded in image sequences. This prior was originally developed for PCA of dynamic scintigraphic data, where the assumption of smoothness of time-activity curves and of scintillation spectra is fully substantiated. However, the same prior information can be applied to a wider class of image sequences. The prior information is embraced via the Bayesian paradigm [8], and an iterative search for the maximum a posteriori probability (MAP) estimate of the parameters is proposed. The performance of the method is demonstrated on simulated and clinical dynamic image data.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 225–231, 2001.
© Springer-Verlag Berlin Heidelberg 2001
2 Problem Description and Solution
The aim is to improve the performance of PCA when the data SNR is low and/or the noise covariance is unknown. This requires a joint estimation of the low-rank mean value of the data and the covariance matrix of the noise.

Model of Observed Data. The observed image sequence consists of T images having N pixels each, stored column-wise. The images are assumed to be linear combinations of r ≪ min(N, T) underlying images P (N × r), weighted by coefficients Q (r × T). The observed data O consist of this combination corrupted by an additive zero-mean noise E:

O = µ + E = P Q + E.

The noise is assumed to contain no outlying realizations, so that its distribution can be considered normal. Properties of the noise are thus fully characterized by the covariance C = E(E_it E_jτ), where E denotes mathematical expectation. Its generic entries describe correlations and variances of the noise at pixels i and j (i, j = 1 ... N) in images t and τ (t, τ = 1 ... T). Hence, the observed data O are normal with mean µ and covariance C, symbolically O ∼ N(µ, C).

Models of Noise Covariance. The array C is huge, with 0.5(NT + 1)NT distinct elements. It is much larger than the number NT of data O, and thus a restricted covariance structure has to be considered. Usually, independence of noise entries in different pixels and images is assumed, all with the same variance 1/ω > 0. Then, the model of data O becomes

O ∼ N(µ, I_N ⊗ I_T ω^{-1}) = (ω/2π)^{NT/2} exp{−0.5 ω tr[(O − µ)'(O − µ)]},   (1)

where ' denotes transposition and tr the trace. The covariance is C = (I_N ⊗ I_T) ω^{-1}, where I_N is the identity matrix and ⊗ is the Kronecker product [9]. The use of the precision ω instead of the variance simplifies formal manipulations. The maximum likelihood estimate of µ of rank r minimizes the quadratic form in the exponent of (1) and thus coincides with the PCA estimate [10]. The results are poor when the covariance C does not have the assumed structure and/or the noise level 1/ω is too high compared to the signal values µ. A solution to this problem depends on a more realistic modelling of the noise. Here,
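Under the i.i.d. noise model (1), the rank-r maximum likelihood estimate of µ mentioned above is simply the truncated SVD of the data matrix (the Eckart-Young best rank-r approximation); a minimal sketch:

```python
import numpy as np

def pca_rank_r_mean(O, r):
    """ML estimate of the rank-r mean under i.i.d. Gaussian noise:
    the truncated SVD of O, i.e. classical PCA of the data matrix."""
    U, s, Vt = np.linalg.svd(O, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

This is the baseline that the smoothed variant below is designed to improve on when the noise is not i.i.d. or the SNR is low.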
the direct extension C = I_N ⊗ Ω^{-1} of the classical assumption is considered. The precision matrix Ω models changing covariances of the noise between the individual images. Formally, it is possible to consider C = Ω̃^{-1} ⊗ Ω^{-1} with an arbitrary positive-definite N × N matrix Ω̃. Computational demands are then much higher, because the number of pixels N is much larger than the number of images T.

Prior Information. We search for a joint estimator of µ and Ω. This is a non-trivial task, as it can be shown that the joint maximum likelihood estimate of µ and Ω does not exist. Thus, it is impossible to separate the signal and noise spaces without additional information. In nuclear medicine, image sequences reflect the changes of pixel values with time or energy. In the former case, the weights Q of the images P can be interpreted as time-activity curves, in the latter case as scintillation spectra. In the following text we will use the time interpretation. The adjacent observed images are usually similar, so we expect the weights Q of the underlying images to be similar, too. This qualitative information is quantified as follows. The values Q_k(t) of the k-th curve, k = 1, ..., r, at times t = 2, ..., T are related to the preceding values through the simple time-dependent auto-regression

Q_k(t) ∼ N(a_{t−1} Q_k(t−1), β^{-1}),   (2)
where the precision β and the coefficients a = [a_1 ... a_{T−1}], approximating the curve evolution, are assumed to be common to all curves. The arbitrariness of the initial values Q_k(1) is modelled by the flat normal probability density function (p.d.f.) Q_k(1) ∼ N(0, 1/ε) with a small precision ε. These assumptions, applied to µ = P Q with orthonormal images P, translate into a prior p.d.f. for µ. Its support has to be restricted to µ of the assumed rank r ≪ min(N, T), i.e. to a space of lower dimension. This restriction of the parameter space to a lower dimension modifies the normalization factor [9]:

µ ∼ K ε^{0.5r} β^{0.5Tr} exp{−0.5 β tr(µ ∆ ∆' µ')},   (3)

where K is a normalizing constant independent of the estimated parameters and ∆ is the (T × T) matrix with the non-zero entries ∆_{1,1} = ε^{0.5}, ∆_{t,t} = 1, ∆_{t−1,t} = −a_{t−1}, t = 2, ..., T, and zero entries otherwise. The specification of the prior p.d.f. is completed by assuming mutually independent a_t ∼ N(1, 1/α), and Ω ∼ W(γN, γwI_T), where W is the Wishart distribution with parameters γ and w [9]. These priors assign the highest belief to slowly changing dynamics and a diagonal covariance, but both are very flat.

Estimation Algorithm. The observation and noise models, together with the chosen prior distribution on the unknown parameters Θ = (µ, Ω, a_1, ..., a_{T−1}, β, ε) = (µ, θ), determine the posterior p.d.f. of the parameters given the observations O. Its Θ-dependent part reads

L(Θ) = |Ω|^{0.5N(1+γ)} β^{0.5Tr} ε^{0.5r}
     × exp{−0.5 tr[(O − µ) Ω (O − µ)']}   (4)
     × exp{−0.5 [β tr(µ ∆ ∆' µ') + γw tr(Ω) + α (a − 1)(a − 1)']}.
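For intuition only, curves drawn from the auto-regression prior (2) with all a_t = 1 behave like slowly drifting random walks; a small sketch (the function name and the β, ε values are illustrative assumptions):

```python
import numpy as np

def sample_smooth_curves(r, T, beta=100.0, eps=1e-4, rng=None):
    """Draw r curves from the smoothness prior (2) with all a_t = 1:
    Q_k(t) ~ N(Q_k(t-1), 1/beta); Q_k(1) ~ N(0, 1/eps) is nearly flat."""
    rng = np.random.default_rng() if rng is None else rng
    Q = np.empty((r, T))
    Q[:, 0] = rng.normal(0.0, 1.0 / np.sqrt(eps), r)
    for t in range(1, T):
        Q[:, t] = Q[:, t - 1] + rng.normal(0.0, 1.0 / np.sqrt(beta), r)
    return Q
```

A large β makes consecutive samples nearly equal, which is exactly the "slowly changing dynamics" that the prior favours.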
The MAP estimate of Θ maximizes the function (4). The maximization complexity stems mainly from the restricted rank of the mean value µ, which makes an iterative search inevitable. Splitting the estimated parameter Θ = [µ, θ] simplifies the description of the proposed algorithm.

Algorithm SPCA: Smoothed PCA
1. Choose small values of the tuning knobs α, γ, w, select an upper bound n̄ > 0 on the number of iterations, and set the iteration counter n = 0.
2. Choose initial estimates θ_n of θ as follows: Ω_n = I_T, β_n = ε_n = a_nt = 0, t = 1, ..., T − 1.
3. While µ_n, θ_n are changing and n < n̄:
(a) Complete the squares in the exponent of (4) with respect to µ so that you obtain tr[(O A_n − µ B_n)(O A_n − µ B_n)'] + Λ_n, where A_n = Ω_n H_n^{-1} and B_n = H_n' are regular matrices determined by the latest estimates θ_n through the identity H_n' H_n = Ω_n + β_n ∆_n ∆_n'. The unique matrix remainder Λ_n collects the terms independent of µ.
(b) Find the estimate (µB_n)_n of µB_n by applying standard PCA to the scaled data O A_n, and compute the estimate µ_n = (µB_n)_n B_n^{-1} of µ.
(c) Substitute µ_n into (4), find θ_{n+1} as the maximizer of the resulting expression (this can mostly be done analytically), and increase the iteration counter n.
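A much-simplified sketch of the SPCA loop, using the lower-Cholesky convention G = H H' (our assumptions: the AR coefficients are fixed at a_t = 1, β is treated as a fixed tuning knob, and the MAP update of Ω is replaced by a crude diagonal re-estimate from the residuals; the paper's full step 3(c) also re-estimates a, β, and ε analytically):

```python
import numpy as np

def smoothness_operator(T, a=None, eps=1e-3):
    """Delta: (T x T) with Delta[0,0]=sqrt(eps), Delta[t,t]=1 and
    Delta[t-1,t]=-a[t-1] (first-order smoothness prior)."""
    if a is None:
        a = np.ones(T - 1)          # assumed AR coefficients a_t = 1
    D = np.eye(T)
    D[0, 0] = np.sqrt(eps)
    for t in range(1, T):
        D[t - 1, t] = -a[t - 1]
    return D

def spca(O, r, beta=1.0, n_iter=20):
    """Simplified smoothed-PCA loop: scale the data by the current
    noise/smoothness metric, take a rank-r PCA (SVD) of the scaled data,
    map back, and re-estimate a diagonal inter-image noise precision."""
    N, T = O.shape
    Delta = smoothness_operator(T)
    Omega = np.eye(T)                           # initial noise precision
    mu = np.zeros_like(O)
    for _ in range(n_iter):
        G = Omega + beta * (Delta @ Delta.T)    # combined precision on mu
        H = np.linalg.cholesky(G)               # G = H H'
        A = Omega @ np.linalg.inv(H).T          # data scaling
        B = H                                   # scaling of mu
        U, s, Vt = np.linalg.svd(O @ A, full_matrices=False)
        muB = (U[:, :r] * s[:r]) @ Vt[:r]       # rank-r PCA of the scaled data
        mu = muB @ np.linalg.inv(B)
        resid = O - mu
        # crude diagonal re-estimate of the per-image noise precision
        Omega = np.diag(N / (np.sum(resid ** 2, axis=0) + 1e-12))
    return mu, Omega
```

One can verify the completion of squares: with B B' = G and B A' = Ω, the quadratic tr[(OA − µB)(OA − µB)'] reproduces the µ-dependent terms of (4) up to a constant, so standard PCA on the scaled data OA yields the conditional MAP of µ.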
3 Experiments
SPCA was implemented in Matlab [11] and its performance was evaluated in experiments with simulated and clinical data of dynamic scintigraphy. Two illustrative examples are presented: a simple mathematical phantom and a dynamic PET study of the brain with a 11C-labelled radioligand to serotonin transporters [12].
Fig. 1. Factor images and curves used for simulation of dynamic scintigraphic data.

The mathematical phantom consisted of 60 images of size 64 × 64. Each image was a linear combination of three factor images with circular structures. They are shown in Figure 1, which also includes the curves simulating intensity changes over time. A flat background and uncorrelated Gaussian noise (1) with a high variance were added to the simulated images.

Fig. 2. Six samples (nos. 5, 15, 25, 35, 45, and 55) from the analyzed series of 60 noisy images.

Figure 2 demonstrates six of the 60 images in the resulting series. PCA of the simulated data should recognize three underlying dynamic components. The first three most significant principal components (PCs) produced by PCA are shown in Figure 3, and those produced by SPCA in Figure 4.
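A phantom of this kind is easy to reproduce. The sketch below is illustrative only (the factor shapes, the curve models and the noise level are our own choices, not the paper's exact phantom): it builds 60 frames from three spatially disjoint factor images, adds white Gaussian noise, and reads the relative contributions of the principal components off the singular values.

```python
import numpy as np

rng = np.random.default_rng(1)
T, P = 60, 64 * 64                        # frames, pixels per frame
t = np.arange(T)

# three disjoint "factor images" and three simulated time-activity curves
factors = np.zeros((3, P))
factors[0, :1000] = 1.0
factors[1, 1000:2000] = 1.0
factors[2, 2000:3000] = 1.0
curves = np.stack([np.exp(-t / 20.0), t / T, np.ones(T)])

clean = curves.T @ factors                # (T, P) noiseless series
noisy = clean + rng.normal(scale=0.05, size=clean.shape)

# PCA via SVD (no centering, as for raw count data); relative contributions
s = np.linalg.svd(noisy, compute_uv=False)
contrib = s**2 / np.sum(s**2)
```

The first three contributions dominate, and everything beyond the third component sits at the noise floor, which is the qualitative behaviour the text describes.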
Fig. 3. The first three most significant PCs produced by PCA of simulated data. The numbers give the relative contributions of the PCs to the original data (PC1: 53.9%, PC2: 2.52%, PC3: 1.02%); in noiseless data, the true contributions of the first three PCs are 95.0, 4.5, and 0.5%. The curves show the weights of the respective PCs in the original images; thick lines show the true weights of PCs extracted from noiseless data.
Fig. 4. The first three most significant PCs produced by SPCA of simulated data (relative contributions: SPC1 76.2%, SPC2 3.6%, SPC3 1.2%). The third PC is well defined and its curve reflects the corresponding dynamics well (the polarity of PCs is arbitrary).

Unlike the PCs in Figure 3, the PCs in Figure 4 can be successfully rotated to recover the images and curves of the underlying dynamic structures shown in Figure 1. A dynamic PET study of the brain with a 11C-labelled radioligand to serotonin transporter sites consisted of 18 images recorded in progressively extended time intervals, in order to compensate for the very fast decay of 11C and to obtain an acceptable contrast between the specific and non-specific binding of the radioligand, which increases with time. Figure 5 shows six of the 18 images in the recorded series. PCA was expected to recognize two underlying dynamic components (the signals of specific and non-specific binding). The first two most significant PCs produced by PCA are shown in Figure 6, and those produced by SPCA in Figure 7.
Fig. 5. Six samples from the analyzed series of 18 dynamic PET images (t = 0.6, 2.5, 7, 17.5, 40, and 85 min).
Fig. 6. The first two most significant PCs produced by PCA of the dynamic PET brain study (relative contributions: PC1 94.7%, PC2 0.9%). Only the first PC shows the brain structure; the second PC reflects mostly noise.
4 Discussion and Conclusions
Preliminary experiments with simulated and clinical data have shown that, in comparison with PCA, SPCA is able to improve the separation of the signal from noise and to enhance the contrast in the images of principal components. We believe that the method proposed in this paper may improve the results of PCA applied to dynamic scintigraphic data recorded with varying acquisition intervals, in several energy windows, and in studies with short-lived radionuclides. All such data are occasionally corrupted by potentially strong, correlated, and variable noise that may result in suboptimal performance of PCA. The prior information used in the proposed method is rather general and not necessarily restricted to scintigraphic data. In addition, alternative prior information, better suited to a specific problem, can be chosen while the methodology proposed in this paper is still used with benefit. The method can be further developed to support the estimation of the number of significant factors and to benefit from similar prior information applied also to the images of principal components. Formally, these extensions are relatively straightforward. However, the increase in the complexity of calculations is significant, and approximations have to be found in order to make the solution feasible.
Acknowledgements The work has been partially supported by the following grants: Austro-Czech project Kontakt II-16 (ME-228), GACR 102/99/1564, IGA MZCR NN53823/99, NIH no.AA11653, and NIH no.AG14400.
Fig. 7. The first two most significant PCs produced by SPCA of the dynamic PET brain study (relative contributions: SPC1 95.6%, SPC2 0.5%). The second PC is weak but well differentiated from noise.

Unlike the PCs in Figure 6, the first two PCs in Figure 7 can be successfully rotated to realistic images and curves of the underlying specific and non-specific binding maps.
References

1. Anderson TW. Estimating linear statistical relationships. Ann Statist 1984; 12:1-45.
2. Fine J, Pousse A. Asymptotic study of the multivariate functional model. Application to the metric choice in principal component analysis. Statistics 1992; 23:63-83.
3. Benali H, Buvat I, Frouin F, Bazin JP, DiPaola R. A statistical model for the determination of the optimal metric in factor analysis of medical image sequences. Phys Med Biol 1993; 38:1065-1080.
4. Pedersen F, Bergstroem M, Bengtsson E, Langstroem B. Principal component analysis of dynamic positron emission tomography studies. Eur J Nucl Med 1994; 21:1285-1292.
5. Hermansen F, Lammertsma AA. Linear dimension reduction of sequences of medical images: I. Optimal inner products. Phys Med Biol 1995; 40:1909-1920.
6. Šámal M, Kárný M, Benali H, Backfrieder W, Todd-Pokropek A, Bergmann H. Experimental comparison of data transformation procedures for analysis of principal components. Phys Med Biol 1999; 44:2821-2834.
7. Hermansen F, Ashburner J, Spinks TJ, Kooner JS, Camici PG, Lammertsma AA. Generation of myocardial factor images directly from the dynamic oxygen-15-water scan without use of an oxygen-15-carbon monoxide blood-pool scan. J Nucl Med 1998; 39:1696-1702.
8. Berger JO. Statistical Decision Theory and Bayesian Analysis. New York, Springer, 1985.
9. Rao CR. Linear Statistical Inference and its Application. New York, Wiley, 1973.
10. Golub GH, Van Loan CF. Matrix Computations. Baltimore, Johns Hopkins Univ Press, 1989.
11. Matlab v. 5.3.1 (R11.1), The MathWorks Inc., Natick, MA 01760-1500, USA, http://www.mathworks.com.
12. Parsey RV, Kegeles LS, Hwang D-R, Simpson N, Abi-Dargham A, Mawlawi O, Slifstein M, Van Heertum RL, Mann J, Laruelle M. In vivo quantification of brain serotonin transporters in humans using [11C]McN 5652. J Nucl Med 2000; 41(9):1465-1477.
Estimation of Baseline Drifts in fMRI

François G. Meyer1 and Gregory McCarthy2

1 Department of Electrical Engineering, University of Colorado at Boulder, and Department of Radiology, University of Colorado Health Sciences Center
[email protected]
2 Brain Imaging and Analysis Center, Box 3808, Duke University Medical Center, Durham, NC 27710
Abstract. This work provides a new method to estimate and remove baseline drifts in the fMRI signal. The baseline drift in each time series is described as a superposition of physical and physiological phenomena that occur at different scales. A fast algorithm based on a wavelet representation of the data yields detrended time series. Experiments with fMRI data demonstrate that our detrending technique can infer and remove drifts that cannot be adequately represented with low-degree polynomials. Our detrending technique resulted in a noticeable improvement by reducing the number of false positives and the number of false negatives.
1 Introduction
Blood Oxygenation Level-Dependent (BOLD) fMRI uses deoxyhemoglobin as a contrast agent: deoxygenated hemoglobin induces a difference in magnetic susceptibility relative to the surrounding tissue. The cascade of physiological events that triggers the changes in the BOLD signal remains an area of active research [12,2]. Unfortunately, changes in the fMRI signal are only of the order of a few percent. The detection of changes in the BOLD signal is further complicated by the presence of a large number of instrumental and physiological noise sources that contaminate the fMRI signal [5]. Long-term physiological drifts and instrumental instability contribute to a systematic increase or decrease in the signal with time. While the exact cause of the drift of the baseline signal is not completely understood [11], this structured trend constitutes a basic hurdle to any statistical analysis of the data. In order to obtain a baseline from which one can estimate the effect of the stimulus, it is thus essential to infer and remove the systematic drift, or trend, in the data. In this paper we address the problem of estimating and removing the baseline drift that contaminates the fMRI response to a stimulus. We propose an approach that removes the trend using a multiscale technique.
This work was supported by a Whitaker Foundation Biomedical Engineering Research Grant.
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 232-238, 2001. © Springer-Verlag Berlin Heidelberg 2001
2 Some Background on Wavelets
We introduce in this section the notation associated with a discrete wavelet transform; it will be used in the sequel of the paper. Let ψ(t) be the wavelet, and let φ(t) be the scaling function associated with a multiresolution analysis [9]. Let {h_n} be the lowpass filter and {g_n} the highpass filter associated with this wavelet transform. Let x = {x_n}, n = 0, ..., N-1 be a discrete signal. For simplicity of presentation we assume that N = 2^J. The wavelet coefficients of x are defined by the following recursions:

sx^0_k = x_k,  k = 0, ..., N-1
sx^{j+1}_k = \sum_n h_{n-2k} sx^j_n,  k = 0, ..., 2^{-j-1}N - 1
dx^{j+1}_k = \sum_n g_{n-2k} sx^j_n,  k = 0, ..., 2^{-j-1}N - 1    (1)
The wavelet transform W at scale J is a linear operator that maps x to

Wx = [sx^J_0, dx^J_0, dx^{J-1}_0, dx^{J-1}_1, ..., dx^j_0, ..., dx^j_{2^{-j}N-1}, ..., dx^1_0, ..., dx^1_{2^{-1}N-1}]^t.    (2)
We also require that the wavelet ψ have p vanishing moments. As a consequence, polynomials of degree p - 1 have a very sparse representation in such a wavelet basis: all the dx^j_k are equal to zero, except for the coefficients located at the border of the dyadic subdivision (k = 0, 1, 2, 4, ..., 2^{J-1}).
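The recursion (1) translates directly into code. The sketch below uses the Haar pair (h = (1/√2, 1/√2), g = (1/√2, -1/√2)) purely for illustration; the excerpt does not say which wavelet family the authors used. Haar has p = 1 vanishing moment, so a constant signal (a degree-0 polynomial) produces zero detail coefficients, exactly the sparsity property claimed above.

```python
import numpy as np

def haar_dwt(x, J):
    """Wavelet coefficients of a length-2**J signal via recursion (1),
    using the Haar filter pair. Returns the scaling coefficient sx^J
    and the detail arrays [dx^J, dx^{J-1}, ..., dx^1], coarsest first,
    matching the ordering of the vector Wx in (2)."""
    s = np.asarray(x, dtype=float)
    details = []
    for _ in range(J):
        # lowpass -> sx^{j+1}, highpass -> dx^{j+1}
        s, d = (s[0::2] + s[1::2]) / np.sqrt(2), (s[0::2] - s[1::2]) / np.sqrt(2)
        details.append(d)
    return s, details[::-1]
```

For a constant signal all detail coefficients vanish and the energy is carried entirely by sx^J; for any signal the transform is orthonormal, so the total energy is preserved.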
3 Wavelet Estimation of the Drift
While the origin of the baseline drift is not completely understood [6,8,11], a number of artifacts can cause large-scale (low-frequency) fluctuations in the signal. Baseline drifts have been described by linear [3,8] and polynomial [4] functions of time. Signal processing techniques such as Kalman filters have recently been proposed [7]. A standard practice consists in approximating trends with polynomials or a truncated Fourier series [1]. On the one hand, there is no reason to believe that the trend is a periodic function of time that will be well approximated with a few Fourier coefficients. On the other hand, a polynomial provides only a descriptive device: no substantive physical or physiological interpretation can be given to the coefficients. We therefore propose to describe the trend as a superposition of physical and physiological phenomena that occur at different scales.

Model of the drift. We consider the following model for the fMRI time series at a voxel inside the brain:

y(t) = θ(t) + a(t) + n(t)    (3)
where θ(t) is the trend, or baseline drift, a(t) is the response to the stimulus induced by neuronal activation (this signal is present only if the voxel lies inside a functionally activated brain area), and n(t) is a white noise caused by thermal and quantum noise. An appropriate model for the trend is provided by a linear combination of large-scale wavelets:

θ(t) = sx^J_0 φ(2^{-J} t) + \sum_{j=J_0}^{J} \sum_{k=0}^{2^{-j}N-1} d^j_k ψ(2^{-j} t - k).    (4)

This model assumes that all the fine-scale coefficients d^j_k, 0 ≤ j ≤ J_0 - 1, are zero. The smallest scale J_0 characterizes the complexity of the trend.

Estimation of the drift. Let y = {y_n}, n = 0, ..., N-1 be the time series at a given voxel in the brain. One expands y into a wavelet basis:

Wy = [s^J_0, d^J_0, d^{J-1}_0, d^{J-1}_1, ..., d^j_0, ..., d^j_{2^{-j}N-1}, ..., d^1_0, ..., d^1_{2^{-1}N-1}]^t.    (5)

An estimate Wθ̂ of the wavelet transform of the trend is obtained by keeping the first 2^{-J_0+1}N terms of the wavelet expansion of y and setting the other coefficients to zero:

Wθ̂ = [s^J_0, d^J_0, ..., d^{J_0}_0, ..., d^{J_0}_{2^{-J_0}N-1}, 0, ..., 0]^t.    (6)

Alternatively, an estimate â of the detrended time series a is obtained by setting the first 2^{-J_0+1}N terms in the wavelet expansion (5) to zero and applying the inverse wavelet transform W^{-1}:

â = W^{-1} [0, ..., 0, d^{J_0-1}_0, ..., d^{J_0-1}_{2^{-J_0+1}N-1}, ..., d^1_0, ..., d^1_{2^{-1}N-1}]^t.    (7)

What is the scale of the trend? The optimal value of J_0 is selected as follows. We start with J_0 = J, which describes the trend with the minimum number of parameters. The significance of â is then tested, and we compute the P-value. We successively test more and more complex models of the trend by decreasing J_0. Because the scale of the trend should be larger than the scale of the stimulus, we stop before J_0 reaches the scale of the stimulus. Finally, we select the J_0 that yields the smallest P-value. As shown in the experiments, the same value can be used for all activated voxels. This approach guarantees that the detrending algorithm will not increase the P-values.
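Equations (6)-(7) amount to zeroing the fine-scale detail coefficients and inverting the transform. Below is a minimal self-contained sketch using the Haar wavelet (our choice, for illustration; the excerpt does not fix the wavelet family): the forward transform, the coarse-scale truncation of (6), and the inverse transform. The reconstruction is the trend estimate θ̂; the detrended series of (7) is then y - θ̂.

```python
import numpy as np

def haar_trend(y, J0):
    """Trend estimate as in (6): keep sx^J and the details at scales
    j >= J0, zero the finer scales, and apply the inverse transform."""
    s = np.asarray(y, dtype=float)
    J = int(np.log2(s.size))
    details = []
    for _ in range(J):                       # forward Haar transform
        s, d = (s[0::2] + s[1::2]) / np.sqrt(2), (s[0::2] - s[1::2]) / np.sqrt(2)
        details.append(d)                    # details[j-1] holds d^j
    for j in range(J, 0, -1):                # inverse transform
        d = details[j - 1] if j >= J0 else np.zeros_like(details[j - 1])
        up = np.empty(2 * s.size)
        up[0::2] = (s + d) / np.sqrt(2)
        up[1::2] = (s - d) / np.sqrt(2)
        s = up
    return s                                 # detrended series: y - haar_trend(y, J0)
```

Setting J0 > J keeps only the scaling coefficient, so the trend collapses to the mean of the series; J0 = 1 keeps all scales and reproduces the input exactly.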
4 Experiments
We illustrate here the principle of the algorithm with some data that demonstrate left posterior temporal lobe activation during auditory comprehension [10]. The study involved several subjects who listened passively to alternating sentences spoken in English (their native language) and Turkish (which they did not understand). Each time series was composed of 28 alternating auditory segments of English and Turkish. Each segment lasted for 6 seconds, and images were acquired every 1.5 s. There was a delay of 12 seconds from the first image to the onset of the first sentence. TR = 1,500 ms, slice thickness = 9 mm, skip = 2 mm, imaging matrix = 128 × 64, voxel size = 3.2 × 3.2 × 9 mm. More details about the experiments are available in [10].

Analysis of the detrending performance. We compared the performance of the detrending algorithm for several values of the scale J0 of the trend θ(t). The same value of J0 was used for all pixels. A time series was extracted from the region of interest (ROI) B in slice 5 (voxel (75,21)), shown in Fig. 2. Figure 1 shows this same time series with the trend superimposed, for several values of J0. We note that a piecewise linear trend (such as the one obtained for J0 = 8) fails to track the long-term variability of the signal. A Student t-test was designed to compare the signal under the two conditions: English sentences or Turkish sentences. Pixels with a P-value less than 0.005 were deemed activated and colored in red in the activation maps.

Result of the first experiment. Figure 2 shows the result of the t-test for slices 4 and 5 after detrending with J0 = 4. The activation maps were thresholded at P = 0.005 and are superimposed on the raw EPI data. The left side of the brain is represented on the right side of the image. The maps were generated with two runs of alternating Turkish/English intervals, starting with Turkish. The maps clearly show activated pixels in the left inferior frontal lobe (regions A and B). For each slice we selected a region of interest (ROI) that contained strongly activated voxels (P < 10^{-4}). The activation in these regions was assumed to be truly caused by the stimulus and not by physiological or random noise. The two ROIs are shown as yellow rectangles and are pointed at by the arrows A and B in slices 4 and 5, respectively.
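The per-voxel condition comparison can be sketched as a pooled-variance two-sample t statistic. This is illustrative only: the P-value lookup against the t distribution is omitted to keep the sketch dependency-free, and the sample values in the test are invented, not taken from the study.

```python
from math import sqrt
from statistics import mean, variance

def t_statistic(a, b):
    """Student two-sample t statistic with pooled variance, as used to
    compare the fMRI signal under the English and Turkish conditions."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))
```

The statistic is antisymmetric in its arguments, and a large magnitude (relative to the t distribution with na + nb - 2 degrees of freedom) yields a small P-value.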
Fig. 1. Trend for different values of the scale J0. From top to bottom: J0 = 4, 6, 8.

For each value of the scale of the trend, the performance of the detrending in each ROI was quantified using the following factors: (1) the number of activated voxels inside the ROI, (2) the mean P-value for all the voxels inside the ROI, and (3) the smallest P-value inside the ROI. These numbers are reported in Table 1. For both slices the detrending resulted in a noticeable improvement by increasing the number of activated voxels while decreasing the mean P-value inside the ROIs. The optimal effect was obtained for a scale equal to 4. One notes that as the scale of the trend becomes finer (e.g., J0 = 3), the trend starts tracking the variations in the BOLD signal that are due to the stimulus response, which results in a poorer performance. Because the ROIs in this experiment can be considered as truly activated voxels, this experiment demonstrates that the detrending helps to decrease the number of false positives. Indeed, one can significantly decrease the level of the threshold while keeping the truly activated voxels activated in the ROIs A and B.

Result of the second experiment. A second experiment was conducted with a different data set. Figure 3 shows the result of the t-test for slices 3 and 4 after detrending with J0 = 4. The activation maps were thresholded at P = 0.005 and are superimposed on the raw EPI data. The maps were generated with two runs of alternating English/Turkish intervals, starting with English. The maps show in red activated pixels in the left posterior temporal lobe (regions C and D). For each slice we again selected a region of interest (ROI) that contained strongly activated voxels (P < 10^{-3}). We note that the mean P-value before detrending was not as high as in the previous experiment. The two ROIs are shown as yellow rectangles and are pointed at by the arrows C and D in slices 3 and 4, respectively. For each value of the scale of the trend, the performance of the detrending in each ROI was quantified using the same factors as in the previous experiment. These numbers are reported in Table 2. For both slices the detrending resulted in a noticeable improvement by increasing the number of activated voxels while keeping the mean P-value inside the ROIs at the same value. The optimal effect was again obtained for a scale equal to 4. This experiment demonstrates that detrending can help reduce the number of false negatives: after detrending, four times more voxels were activated in ROI D than before detrending.
References

1. T.W. Anderson, The Statistical Analysis of Time Series, Wiley, 1971.
2. P.A. Bandettini, The temporal resolution of functional MRI, in: Functional MRI (C.T.W. Moonen and P.A. Bandettini, eds.), Springer-Verlag, 1999, pp. 205-220.
3. P.A. Bandettini, A. Jesmanowicz, E.C. Wong, and J.S. Hyde, Processing strategies for time-course data sets in functional MRI of the human brain, Magn. Reson. Med. 30 (1993), 161-173.
4. G.H. Glover, Deconvolution of impulse response in event-related BOLD fMRI, NeuroImage 9 (1999), 416-429.
5. P. Jezzard, Physiological noise: strategies for correction, in: Functional MRI (C.T.W. Moonen and P.A. Bandettini, eds.), Springer-Verlag, 1999, pp. 173-182.
6. V. Kiviniemi, J. Jauhiainen, O. Tervonen, E. Pääkkö, J. Oikarinen, V. Vainionpää, H. Rantala, and B. Biswal, Slow vasomotor fluctuation in fMRI of anesthetized child brain, Magn. Reson. Med. 44 (2000), 373-378.
7. F. Kruggel, D.Y. von Cramon, and X. Descombes, Comparison of filtering methods for fMRI datasets, NeuroImage 10 (1999), 530-543.
8. M.J. Lowe and D.P. Russell, Treatment of baseline drifts in fMRI time series analysis, Journal of Computer Assisted Tomography 23(3) (1999), 463-473.
9. S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, 1999.
10. M.J. Schlosser, N. Aoyagi, R.K. Fullbright, J.C. Gore, and G. McCarthy, Functional MRI studies of auditory comprehension, Human Brain Mapping 6 (1998), 1-13.
11. A.M. Smith, B.K. Lewis, U.E. Ruttimann, F.Q. Ye, T.M. Sinnwell, Y. Yang, J.H. Duyn, and J.A. Frank, Investigation of low frequency drift in fMRI signal, NeuroImage 9(5) (1999), 526-533.
12. I. Vanzetta and A. Grinvald, Increased cortical oxidative metabolism due to sensory stimulation: implications for functional brain imaging, Science 286 (1999), 1555-1558.
Fig. 2. Turkish-English. Left: slice 4 (arrow A marks ROI A). Right: slice 5 (arrow B marks ROI B). Activation map (P = 0.005). The scale of the trend was J0 = 4.
Fig. 3. English-Turkish. Left: slice 3 (arrow C marks ROI C). Right: slice 4 (arrow D marks ROI D). Activation map (P = 0.005). The scale of the trend was J0 = 4.
Table 1. Turkish-English: slice 4, ROI A (4 voxels), and slice 5, ROI B (6 voxels).

Slice 4, ROI A (4 voxels):
Scale J0   # activated voxels   mean P-value   minimum P-value
3          1                    6.90e-05       6.90e-05
4          3                    1.36e-05       4.57e-06
5          3                    1.56e-04       8.16e-06
6          3                    1.83e-04       8.00e-06
7          3                    7.27e-04       1.07e-05
8          2                    6.41e-05       6.06e-05
No trend   2                    1.29e-04       7.10e-05

Slice 5, ROI B (6 voxels):
Scale J0   # activated voxels   mean P-value   minimum P-value
3          3                    2.20e-05       1.17e-05
4          4                    1.82e-04       3.33e-08
5          4                    2.47e-04       1.03e-07
6          4                    3.15e-04       2.09e-07
7          4                    3.43e-04       2.38e-07
8          3                    9.10e-04       5.04e-07
No trend   2                    1.20e-03       8.88e-07
Table 2. English-Turkish: slice 3, ROI C (8 voxels), and slice 4, ROI D (9 voxels).

Slice 3, ROI C (8 voxels):
Scale J0   # activated voxels   mean P-value   minimum P-value
3          4                    2.10e-03       8.65e-05
4          6                    1.11e-03       4.20e-06
5          5                    8.17e-04       1.39e-05
6          6                    1.81e-03       1.81e-05
7          5                    1.39e-03       3.25e-05
8          5                    1.31e-03       6.32e-05
No trend   4                    1.20e-03       3.75e-04

Slice 4, ROI D (9 voxels):
Scale J0   # activated voxels   mean P-value   minimum P-value
3          1                    3.31e-03       3.31e-03
4          4                    6.09e-04       3.11e-05
5          4                    8.51e-04       4.70e-05
6          4                    1.15e-03       7.60e-05
7          4                    1.49e-03       1.81e-04
8          4                    1.50e-03       1.39e-04
No trend   1                    1.89e-03       1.89e-03
Analyzing the Neocortical Fine-Structure

Frithjof Kruggel1, Martina K. Brückner2, Thomas Arendt2, Christopher J. Wiggins1, and D. Yves von Cramon1

1 Max-Planck-Institute of Cognitive Neuroscience, 04103 Leipzig, Germany
[email protected]
2 Paul-Flechsig-Institute for Brain Research, 04109 Leipzig, Germany
Abstract. Cytoarchitectonic fields of the human neocortex are defined by characteristic variations in the composition of a general six-layer structure. It is commonly accepted that these fields correspond to functionally homogeneous entities. Diligent techniques were developed to characterize cytoarchitectonic fields by staining sections of post-mortem brains and subsequent statistical evaluation. Fields were found to show a considerable interindividual variability in extent and in relation to macroscopic anatomical landmarks. With upcoming new high-resolution magnetic resonance (MR) scanning protocols, it appears worthwhile to examine the feasibility of characterizing the neocortical fine-structure from anatomical MR scans, and thus of defining cytoarchitectonic fields by in-vivo techniques.
1 Introduction
There is little doubt regarding a close correspondence between the functional organization of the neocortex and the cytoarchitectonic fields, which have been characterized by different histological staining techniques in post-mortem brains for about the last 100 years [2], [14]. These fields are defined by varying compositions of the general six-layered neocortical fine-structure, which are characterized by the properties and densities of neurons and their connecting fibers. One of the most recent techniques for delineating the borders of cytoarchitectonic fields is called objective cytometry [13]. This technique examines radial intensity profiles across the neocortical sheet in stained brain sections, which are compared statistically along a trajectory on the surface. Local maxima in the classification function indicate a border between two fields. It is now well accepted that these fields show a considerable interindividual variability with respect to macroscopic landmarks (e.g., sulcal and gyral lines and their substructures) [1], [11], [12]. It is an open issue whether macroscopic landmarks (e.g., gyri and sulci) are sufficient for describing the position of functional activation (such as revealed by in-vivo magnetic resonance (MR) scanning), or whether it is necessary to resort to atlas-based descriptions of cytoarchitectonic fields (which are obtained in-vitro from different subjects in the form of a probabilistic map). Recent investigations revealed that a spatial resolution of 0.25 mm for anatomical MRI scanning is feasible. At this resolution, the neocortical sheet is mapped as a layer of 12 voxels, which may be sufficient to recognize the layer structure of the cortex. Suitable image post-processing techniques may be designed to classify cortical intensity profiles and, thus, to define the borders of cytoarchitectonic fields in-vivo. Since the MR signal strength is related to the local cellular environment (e.g., biopolymer content) in a volume, it is not unreasonable to assume that stained histological intensity profiles and MR intensity profiles show some similarity (albeit at a much lower spatial resolution). Thus, we will compare previously published results obtained by objective cytometry [15] with an MR-based neocortical fine-structure analysis.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 239-245, 2001. © Springer-Verlag Berlin Heidelberg 2001
2 Materials and Methods
Brain preparation and scanning: An isolated left brain hemisphere (female, 72 years of age), obtained from a routine autopsy, was fixed in 3% formalin and embedded in agar gel. MR acquisition was performed on a Bruker 3 T Medspec 100 system using a T1-weighted 3D MDEFT protocol [7] (FOV 96 × 192 × 128 mm, matrix 256 × 512 × 512, voxel size 0.375 × 0.375 × 0.25 mm, scanning time 12 h).

Preprocessing: Scan data were interpolated to an isotropic voxel size of 0.25 mm by a fourth-order B-spline method. Intensity inhomogeneities were corrected by a modification of the AFCM algorithm [10], yielding a segmentation into three classes (background: BG, grey matter: GM, white matter: WM) and an intensity-corrected version of the input image. The cerebellum and brainstem were manually removed using an image editor to yield a voxel-based representation of the cerebral WM compartment.

Surface generation: A raw triangular surface was generated from the WM segmentation using the marching tetrahedra algorithm [9] and subsequent mesh optimization to 200k faces [3]. This surface was adapted to the grey-white matter boundary (WMS) using a deformable model approach [6]. Similarly, a second surface representing the grey matter-background boundary (GMS) was obtained.

Intensity profiles: For each vertex on the WMS, the closest point on the GMS was computed [6]. Along a line through both points, an intensity profile was sampled from the intensity-corrected image at regular intervals of 0.1 mm. In order to define the WM-GM and GM-BG boundary points consistently, lines were adapted to the rising flank of the profile (at the GM-WM boundary) and to the falling flank (corresponding to the GM-BG boundary, see Fig. 1). The exact position of the GM-WM boundary was determined at intensity I = 135 (GM-BG boundary at I = 100), and the distance between the two boundary points was recorded as the local cortical thickness th.
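Locating a boundary point on a sampled profile amounts to finding where the profile crosses a given intensity level. The sketch below replaces the paper's flank-line fitting with direct linear interpolation between samples, a simplification of ours, and the intensity values in the test are invented.

```python
import numpy as np

def crossing_position(profile, step, level):
    """First position (in mm) where the profile crosses `level`, found
    by linear interpolation between adjacent samples; `step` is the
    sampling interval (0.1 mm in the paper). Returns None if no crossing."""
    p = np.asarray(profile, dtype=float)
    for i in range(1, p.size):
        if (p[i - 1] - level) * (p[i] - level) <= 0 and p[i] != p[i - 1]:
            frac = (level - p[i - 1]) / (p[i] - p[i - 1])
            return (i - 1 + frac) * step
    return None
```

The cortical thickness th is then the distance between the I = 135 crossing on the rising flank and the I = 100 crossing on the falling flank.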
Because layers are found at a rather constant relative position within the cortex, profiles were resampled at 1% intervals of th. Thus, we obtained for each vertex on the WMS the cortical thickness and a normalized intensity profile of 101 data points.

Modeling profiles: Intensity profiles were characterized for statistical evaluation by (i) the slope of the rising flank at the GM-WM boundary, m0 (see Fig. 1), (ii) the slope of the intra-cortical portion, m1, and (iii) the slope of the falling flank at the GM-BG boundary, m2. In addition, the position (bp), intensity (bi) and width (bw) of an intra-cortical band were determined by adapting a Gaussian function to the intra-cortical profile segment.
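The resampling to 101 points at 1% intervals of th is essentially a one-liner with linear interpolation (the interpolation scheme is our assumption; the paper does not state which was used):

```python
import numpy as np

def normalize_profile(profile, step, th):
    """Resample an intensity profile, sampled every `step` mm, onto
    101 points at 0%, 1%, ..., 100% of the cortical thickness th."""
    x = np.arange(len(profile)) * step      # sample positions in mm
    xi = np.linspace(0.0, th, 101)          # relative cortical depth
    return np.interp(xi, x, profile)
```

All profiles then live on a common depth axis, so they can be averaged and compared vertex by vertex regardless of the local thickness.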
Fig. 1. Example intensity profile across Area 17. The rising flank (on the left) crosses the WM-GM border, whereas the slope of the intra-cortical segment is comparatively flat. The falling flank mostly results from the partial volume effect on the GM-BG boundary. A Gaussian function is used to model the position, intensity and width of intra-cortical bands, such as Gennari’s band.
Statistical evaluation: Cortical areas with similar fine structure were determined by comparing the profile properties of a template region with those of a local cortical patch. For the template, profile properties were collected from a surface patch of 5 mm diameter centered in a region of interest (typically 60-100 vertices). Properties of the test region were sampled from a given vertex and its first- and second-order neighbors (typically 10-30 vertices). Six statistical tests were heuristically selected to measure the similarity between both regions: (z1) Pearson's correlation coefficient of the averaged profiles in both regions, (z2) Pearson's correlation coefficient of the first derivatives of the averaged profiles, (z3) a t-test comparing the cortical thickness th, (z4) a t-test comparing the rising slope m0, (z5) a t-test comparing the intra-cortical slope m1, and (z6) a t-test comparing the band intensity bi. As indicated, test measures were converted into z-scores, and a similarity measure was derived as zsim = z1 + z2 - |z3| - |z4| - |z5| - |z6|. If both regions contain similar profiles, z3-z6 contribute values close to 0, while z1 and z2 provide positive scores, summing up to some (small) positive quantity. For dissimilar regions, negative similarity measures are expected. A (heuristically derived) threshold of zsim >= -1 was used in all subsequent figures.
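The combination rule and threshold translate directly into code (the z-score values in the test are invented for illustration):

```python
def z_similarity(z1, z2, z3, z4, z5, z6):
    """Similarity measure z_sim = z1 + z2 - |z3| - |z4| - |z5| - |z6|:
    the correlation scores reward similarity, the t-test scores
    penalize any difference, whatever its sign."""
    return z1 + z2 - abs(z3) - abs(z4) - abs(z5) - abs(z6)

def same_fine_structure(*z, threshold=-1.0):
    """Label two regions as similar when z_sim >= -1, the heuristic
    threshold used in the figures."""
    return z_similarity(*z) >= threshold
```

Note the asymmetry of the measure: large correlations can never compensate for large t-test differences, since z3-z6 enter only with negative sign.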
3 Results
We selected three different anatomical regions which are well studied by histological techniques. We were interested in comparing intensity profiles with the known descriptions of the local layer structure, and in comparing the extent of statistically homogeneous regions with known cytoarchitectonic fields. Note that the T1 contrast is "inverted" by fixation: regions of higher neuron content (i.e., cortical layers 1-3, 5 and 6, basal ganglia) show a higher signal intensity than fiber-containing regions (i.e., the white matter).
Fig. 2. Left: A sample profile through the visual cortex reveals Gennari’s band as an intensity drop at ≈ 52 % of the cortical width. A region is detected by statistical classification which compares well with neuroanatomical knowledge. Right: A sample profile through the anterior (Area 4, motor cortex) and posterior (Area 3, sensory cortex) bank of the central sulcus. Using a spot at Broca’s knee as a template, a region is detected which is similar in extent to the motor cortex.
Visual Cortex (Area 17): The visual cortex is distinguishable from the surrounding Area 18 by the presence of Gennari’s band, which corresponds to an intracortical horizontal fiber system. This structure is easily detected in the acquired MR dataset as a darker band in the bright cortex (see Fig. 2, top left).
Analyzing the Neocortical Fine-Structure
The cortical thickness on the banks of the calcarine fissure was determined as 1.86±0.10 mm ([15]: 1.84 mm), the position of the center of Gennari's band as 52±6 % ([15]: 55 %), and the thickness of this band as 0.30±0.10 mm ([15]: 0.28 mm). According to von Economo [14], Area 17 is located on the walls and lips of the calcarine fissure, and at the gyral crowns at the occipital pole. This description compares nicely with the automatically generated statistical classification as shown in Fig. 2, bottom left.
[Fig. 3 plots: intensity [units] vs. distance [mm] profiles for Area 45 and Area 44, with per-profile thickness annotations between 2.25 and 3.20 mm.]
Fig. 3. Top: Axial (enlarged) and sagittal sections through the inferior frontal gyrus. Middle: Profiles were taken from the pars triangularis (Area 45) and the pars opercularis (Area 44); the cortex is thinner in Area 45, but exhibits a more prominent banded structure. Bottom: Lateral view of the white matter surface; Area 45 (left) and Area 44 (right) were detected from the model positions shown above.
Motor and Sensory Cortex (Areas 4 and 3): Next, we tried to differentiate the primary motor cortex (Area 4) on the anterior bank of the central sulcus from the somatosensory cortex (Area 3) on its posterior bank (see Fig. 2, top right). The most distinctive feature here is the cortical thickness: on the anterior bank, the motor cortex reaches values up to 3.8 mm, compared to less than 2.2 mm for the sensory cortex [8]. Intensity profiles in Area 4 mostly showed three maxima (see Fig. 2, middle right), which roughly correspond to the transitions between layers II/III, III/V and V/VI as described by Amunts et al. [1]. The somatosensory cortex on the posterior bank exhibited much less substructure. A statistical classification was initialized by a manually specified region close to the hand field and yielded the full extent of the motor cortex, in good agreement with previously published histological classifications (see Fig. 2, bottom right).

Broca's Area (Areas 44 and 45): As a final example, we selected Broca's speech region, which corresponds to Area 44 (the pars opercularis of the inferior frontal gyrus) and Area 45 (the pars triangularis of the inferior frontal gyrus). As described by Amunts et al. [1], the cortex of Area 44 is not sharply delineable from the white matter, which corresponds to a flat slope m0 (see Fig. 3, middle right). The cortex of Area 45 (see Fig. 3, middle left) is thinner and features a more distinct horizontal layering. Classification results are shown superimposed on the white matter surface, separately for Area 45 (bottom left) and Area 44 (bottom right).
4 Discussion
Results shown for three different brain areas demonstrate the feasibility of analyzing the neocortical substructure from high-resolution MR data. The qualitative properties of the MR intensity profiles and the quantitative descriptors (e.g., cortical thickness, band position and width) corresponded well with descriptions found in reference publications based on histological examinations. Using statistical descriptors of the profiles obtained from a template region, the extent of target regions was determined by comparing local descriptors with the template. The regions found correspond well with prior knowledge from histological examinations. There is a striking qualitative similarity between our MR intensity profiles and photometric studies of the myeloarchitecture [4], and theoretical studies [5] demonstrated the equivalence of Nissl-stained cytometric intensity profiles with Weigert-stained myelin profiles. Although the spatial resolution of our MR data is at least one order of magnitude lower than that of traditional histological techniques, the results suggest that microscopic resolution may not be required if a classification of cortical areas is sought. However, at a higher resolution (say, 0.1 mm), even more detail might be revealed, leading to more powerful statistical classifiers.

We want to emphasize the preliminary nature of this feasibility study. First of all, a validation of our regional classification by histological examination of the same specimen is missing. It is an open issue how much the approach
described here may be translated to in-vivo studies, given the limited scanning time when examining test subjects and unavoidable motion artefacts. The possibility of studying the neocortical fine structure by MR imaging, i.e., introducing a myeloarchitecture-related parcellation of an individual brain, offers exciting perspectives for the analysis of structure-function relationships in the brain at a mesoscopic level.
References

1. Amunts, K., Schleicher, A., Bürgel, U., Mohlberg, H., Uylings, H.B.M., Zilles, K.: Broca's region revisited: cytoarchitecture and intersubject variability. J. Comp. Neurol. 412 (1999) 319–341
2. Brodmann, K.: Die vergleichende Lokalisationslehre der Grosshirnrinde. Barth, Leipzig (1909)
3. Garland, M., Heckbert, P.S.: Optimal triangulation and quadric-based surface simplification. J. Comp. Geom. 14 (1999) 49–65
4. Hopf, A.: Registration of the myeloarchitecture of the human frontal lobe with an extinction method. J. Hirnforschung 10 (1968) 259–269
5. Hellwig, B.: How the myelin picture of the human cerebral cortex can be computed from cytoarchitectonic data. A bridge between von Economo and Vogt. J. Hirnforschung 34 (1993) 387–402
6. Kruggel, F., von Cramon, D.Y.: Measuring the neocortical thickness. In: Mathematical Methods in Biomedical Image Analysis (Hilton Head), pp. 154–161. IEEE Press, Los Alamitos (2000)
7. Lee, J.H., Garwood, M., Menon, R., Adriany, G., Andersen, P., Truwit, C.L., Ugurbil, K.: High contrast and fast three-dimensional magnetic resonance imaging at high fields. Magn. Reson. Med. 34 (1995) 308–312
8. MacDonald, D., Kabani, N., Avis, D., Evans, A.C.: Automated 3-D extraction of inner and outer surfaces of cerebral cortex from MRI. Neuroimage 12 (2000) 340–356
9. Payne, B.A., Toga, A.W.: Surface mapping of brain function on 3D models. IEEE CGA 10 (1990) 33–41
10. Pham, D.L., Prince, J.L.: An adaptive fuzzy segmentation algorithm for three-dimensional magnetic resonance images. In: Information Processing in Medical Imaging (IPMI'99), LNCS 1613, pp. 140–153. Springer, Heidelberg (1999)
11. Rademacher, J., Caviness, V.S., Steinmetz, H., Galaburda, A.M.: Topographical variation of the human primary cortices: implications for neuroimaging, brain mapping and neurobiology. Cereb. Cortex 3 (1995) 313–329
12. Rajkowska, G., Goldman-Rakic, P.S.: Cytoarchitectonic definition of prefrontal areas in the normal human cortex: II. Variability in locations of areas 9 and 46 and relationship to the Talairach coordinate system. Cereb. Cortex 5 (1995) 323–337
13. Schleicher, A., Zilles, K.: A quantitative approach to cytoarchitectonics: analysis of structural inhomogeneities in nervous tissue using an image analyzer. J. Microscopy 157 (1990) 367–381
14. von Economo, C.: Zellaufbau der Grosshirnrinde des Menschen. Springer-Verlag, Wien (1927)
15. Zilles, K., Werners, R., Büsching, U., Schleicher, A.: Ontogenesis of the laminar structure in areas 17 and 18 of the human visual cortex. Anat. Embryol. 174 (1986) 339–353
Motion Correction Algorithms of the Brain Mapping Community Create Spurious Functional Activations

Luis Freire (1,2) and Jean-François Mangin (1)

(1) Service Hospitalier Frédéric Joliot, CEA, 91401 Orsay, France
(2) Instituto de Medicina Nuclear, FML, 1600 Lisboa, Portugal
Abstract. This paper describes several experiments demonstrating that standard motion correction methods may induce spurious activations in some motion-free fMRI studies. This artefact stems from the fact that activated areas behave like biasing outliers for the least-squares-based measure usually driving such registration methods. The effect is demonstrated first using a motion-free simulated time series including artificial activation-like signal changes. Several additional simulations explore the influence of activation on registration accuracy for a wide range of simulated misregistrations. The effect is finally highlighted on an actual time series obtained from a 3T magnet. All the experiments are performed using four different realignment methods, which allows us to show that the problem is overcome by methods based on robust similarity measures like mutual information.
1 Introduction
Realignment of functional magnetic resonance imaging (fMRI) time series is today considered a required preprocessing step before the analysis of functional activation studies. Indeed, when subject movement is correlated with the task, the changes in signal intensity which arise from head motion can be confused with signal changes due to brain activity [1]. Nevertheless, standard realignment procedures are often not sufficient to correct for all signal changes due to motion. For instance, a non-ideal interpolation scheme used to resample realigned images leads to motion-correlated residual intensity errors [2]. Other motion-correlated residuals may stem from the "spin history effect", which occurs when excited spins in the acquisition volume do not have time to return to equilibrium before the next excitation pulse occurs [3,4]. Finally, other motion-related artefacts can confound fMRI time series, such as intra-scan motion and the interaction between motion and susceptibility artefacts [5,6].

It has been reported that a number of residual motion-related artefacts after realignment are reduced by covarying out signal correlated with functions of the motion estimates [2,3]. It has to be noted, however, that when motion estimates are highly correlated with the cognitive task, this regression-based approach is bound to erase some actual activations. While this cost may appear as the price

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 246–258, 2001.
© Springer-Verlag Berlin Heidelberg 2001
to pay in order to obtain a good protection against false positives, using this approach raises the issue of motion estimate reliability. Indeed, if signal changes induced by the cognitive task slightly bias motion estimates in a systematic, task-correlated way, the price of this correction may be very high. Without the correction, however, realignment from task-correlated motion estimates could induce spurious activations. Hence, task-correlated motion estimates would be the worst artefact that can be imagined for a realignment method. In this paper, several simulations and some experiments with real data show that this artefact may occur with realignment methods that do not take into account potential outlier voxels related to functional activations when defining the similarity measure which is optimized to assess registration parameters.

A number of papers reporting brain mapping results obtained from fMRI experiments consider the realignment stage of their processing methodology as reliable simply because it has been done using one of the standard packages (SPM: [3], AIR: [7,8]). Recurrent difficulties observed in our institution with the realignment by SPM of time series acquired with our 3T magnet, however, have led our neuroscientists to follow a rather surprising strategy: compute motion estimates, but do not resample the time series if motion is small relative to voxel size. This strategy stems from a number of past observations where predictable activation patterns were obtained without realignment, whereas resampling the time series led to the usual task-correlated motion artefacts (spurious activations along brain edges). This paper describes several experiments which demonstrate that this bias is induced by activated areas, which behave like outliers for the registration method. The fact that the bias magnitude is highly related to the signal change amplitude may explain why our 3T magnet led to more difficulties than more usual 1.5T scanners.
Our prediction for the near future, however, is an increasing number of unsatisfied users of standard motion correction procedures, simply because of the wide diffusion of high-field magnets. Fortunately, our experiments show that more robust similarity measures like "mutual information" should overcome the problem [9,10,11,12,13]. It has to be understood that the difficulty with usual measures like the least squares used in SPM and AIR is not really related to the accuracy of the motion estimates. Indeed, reaching a high subvoxel accuracy for motion estimates in actual time series may require better models of the motion-induced signal changes. For instance, spatial distortions related to echo-planar imaging (EPI) depend on the subject position in the scanner, which may confound motion estimation [14]. Therefore, our experiments mainly focus on the potential task-correlated bias observed in motion estimates, whatever the estimates' actual accuracy.
2 Materials and Methods

2.1 fMRI Acquisitions
All fMRI studies were performed on a Bruker scanner operating at 3T using a 30-slice 2D EPI sequence (slice array of 64×64 voxels). This sequence had an in-plane
resolution of 3.75 mm and a slice thickness of 4 mm. The potential bias induced by activations in realignment algorithms was evaluated in a human study using a design of two alternating visual stimuli. The subject's head was cushioned inside the Bruker proprietary head RF coil assembly, and two adjustable pads exerted light pressure on either side of the head.
2.2 Similarity Measures
Four different similarity measures are used in our experiments: LS-SPM - the standard least-squares-based realignment algorithm of SPM96 [19] (1); LS-AIR - a second implementation of least squares, available in AIR 2.0 [8]; RIU-AIR - the ratio-image-uniformity similarity function of AIR 2.0 [8]; and MI - mutual information [9,10,11,12,13]. Each underlying implementation depends on a few parameters, which may slightly modify the realignment results. A number of works have been dedicated to evaluating the accuracy of registration methods [8,15,16,17,18]. While this is clearly a key point for comparing similarity measures, such work requires studying the influence of each parameter, which is far beyond the scope of this paper. Since our main goal is to highlight the potential bias induced by activations, we have chosen to set each parameter either to the best choice leading to an acceptable computation time, or to the value commonly used by standard users.
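For reference, the last of these measures, mutual information, can be computed from the joint intensity histogram of the two images; the following generic sketch is an illustration, not the implementation used in the packages cited above.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    # Joint intensity histogram of the two images, normalized to a
    # joint probability table pxy.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    # MI = sum pxy * log(pxy / (px * py))
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Because only the co-occurrence statistics of intensities enter the measure, a cluster of activated voxels shifts a small fraction of the histogram mass rather than contributing large squared residuals, which is the intuition behind its robustness to outliers.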
2.3 Simulations
The putative biasing effect due to activations was first evaluated using artificial time series. Each volume in the time series was created by applying an artificial rigid-body motion Tsim to a reference image using a cubic-spline-based interpolation method [20,21] available on the World Wide Web (2). The reference image (64×64×30 voxels, 3.75×3.75×4 mm) was one of the EPI BOLD images of the study mentioned above, denoised with a standard 3×3×3 median filter. Gaussian noise was added to the reference image and to all frames of the time series in order to simulate the effects of thermal noise in fMRI scans (standard deviation = 2.5% of the mean cerebral voxel value). An artificial activation was then added either to the reference image or to the rest of the time series, according to the simulation requirements. One activation pattern was manually drawn in the occipital lobe in order to mimic a visual activation observed during the underlying neuroscience study. The activation pattern size is 12.4% of total brain voxels, with mean and maximum signal increases in activated voxels corresponding to 1.26% and 2.04%, respectively. Each frame of the artificial time series is aligned on the reference image using one of the four registration methods, which yields an estimated rigid-body transformation Test. Hence the alignment error is given by the residual rigid-body transformation Tres = Tsim⁻¹ × Test, where each transformation is represented
(1) http://www.fil.ion.ucl.ac.uk/spm
(2) http://bigwww.epfl.ch/algorithms.html
by a standard homogeneous matrix. The translation (Et) and rotation (Er) alignment errors are given by:

Et = √(T(1,4)² + T(2,4)² + T(3,4)²) (in mm),
Er = cos⁻¹[(T(1,1) + T(2,2) + T(3,3) − 1)/2] (in degrees).

When required, the six motion parameters of a transformation T are given by: tx = T(1,4), ty = T(2,4), tz = T(3,4), ry = sin⁻¹ T(1,3), rx = sin⁻¹(T(2,3)/cos(ry)), rz = sin⁻¹(T(1,2)/cos(ry)).
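In matrix form, the error measures above can be sketched as follows (0-based numpy indexing in place of the 1-based T(i,j) notation; the residual is taken as Tres = Tsim⁻¹ × Test as in the text):

```python
import numpy as np

def alignment_errors(T_sim, T_est):
    # Residual transformation T_res = inv(T_sim) @ T_est (4x4 homogeneous).
    T = np.linalg.inv(T_sim) @ T_est
    # Translation error: Euclidean norm of the residual translation (mm).
    e_t = np.linalg.norm(T[:3, 3])
    # Rotation error: angle of the residual rotation, from its trace (deg).
    c = np.clip((np.trace(T[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    e_r = np.degrees(np.arccos(c))
    return e_t, e_r

def motion_parameters(T):
    # Six rigid-body parameters following the convention in the text.
    tx, ty, tz = T[0, 3], T[1, 3], T[2, 3]
    ry = np.arcsin(np.clip(T[0, 2], -1.0, 1.0))
    rx = np.arcsin(np.clip(T[1, 2] / np.cos(ry), -1.0, 1.0))
    rz = np.arcsin(np.clip(T[0, 1] / np.cos(ry), -1.0, 1.0))
    return tx, ty, tz, np.degrees(rx), np.degrees(ry), np.degrees(rz)
```

A perfect estimate (Test = Tsim) yields Et = 0 and Er = 0; a registration that misses a pure translation entirely yields Et equal to the translation magnitude.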
3 Experiments

3.1 Simulated Activations without Motion
The first experiment investigates whether some realignment methods may lead to artefactual task-related motion estimates in the absence of any initial misalignment in the time series. A second issue is whether motion estimates biased by actual activations may induce additional spurious activations. The different steps of this experiment can be summarized as follows:

- Generate an artificial time series by duplicating the reference image 40 times.
- Include in each frame the activation pattern described above, multiplied by an intensity which varies throughout the time series according to the time course given in Figure 1 (two square stimuli convolved with a simple hemodynamic response; the maximal mean activation is 2.52%).
- Run the four registration packages and evaluate the six transformation parameters of Test for each package (see Fig. 1).
- Compute the cross-correlation between each parameter and the activation time course (see Fig. 1).
- Infer activated areas from the four realigned time series using SPM99 (see Fig. 2).

Several realignment parameters related to the least-squares-based methods (LS-SPM and LS-AIR) demonstrate a high correlation with the time course of the simulated activation (see Fig. 1): for LS-SPM, the highest correlation is obtained for the yaw parameter (0.99); for LS-AIR, the maximum correlation is obtained for the ty parameter (0.97). The highest amplitude of the task-related parameter time course is 0.05 mm (ty) and 0.15 deg (yaw) for LS-SPM, and 0.05 mm (ty) and 0.04 deg (pitch) for LS-AIR. A lower but significant correlation is observed for some parameters related to MI (0.67 for tz), but the amplitude of the task-related time course is smaller: 0.01 mm (tz) and 0.02 deg (pitch). Finally, no significant correlation is observed for RIU-AIR (0.10 at most), but the realignment curves contain more noise than for the other methods.
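The activation time course and the cross-correlation step can be sketched as follows; the gamma-shaped response kernel and the onset/duration values are illustrative assumptions, not the exact profile of Figure 1.

```python
import numpy as np

def activation_time_course(n_frames=40, onsets=(8, 24), duration=6):
    # Two square stimuli convolved with a simple hemodynamic response.
    box = np.zeros(n_frames)
    for o in onsets:
        box[o:o + duration] = 1.0
    t = np.arange(12, dtype=float)
    hrf = t ** 2 * np.exp(-t)      # crude gamma-like kernel (assumed shape)
    hrf /= hrf.sum()
    return np.convolve(box, hrf)[:n_frames]

def task_correlation(param_series, activation):
    # Pearson correlation between one realignment-parameter time course
    # and the activation time course.
    return float(np.corrcoef(param_series, activation)[0, 1])
```

A parameter time course contaminated by the activation then scores near 1, while a purely noisy one scores near 0, which is the criterion used to flag the biased estimates.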
Fig. 1. Artificial activations are added to a motion-less constant time series according to the time course given on top; the six realignment parameters are displayed for the four packages. Each parameter time course is cross-correlated with the activation profile.
The initial time series was realigned from each of the four motion estimations using a cubic-spline interpolation. The generalized linear model was then used to fit each voxel with the artificial profile of Fig. 1 using SPM99 after the following standard preprocessing: spatial Gaussian smoothing (full width at half maximum 5 mm) and low-pass temporal filtering by a Gaussian function with a 2-volume width. Voxels were reported as activated if they passed a threshold of p = 0.001, uncorrected for multiple comparisons. An illustration of the consequences of the activation-correlated motion estimates is proposed for a slice of the brain in Fig. 2. Spurious activated voxels can be observed along brain edges after LS-SPM, LS-AIR and MI motion corrections. The worst artefact is obtained after LS-SPM correction, which led to 6 spurious activated clusters with an extent exceeding a threshold of p = 0.05 corrected for multiple comparisons across the volume. In return, no such cluster was observed for LS-AIR and MI.

In order to illustrate another potential artefact induced by the presence of activations, we performed a second study on the initial time series. Each frame was divided by its mean value before fitting the artificial profile. This approach is sometimes used to discard global scaling effects related to MR acquisitions [22]. Here, because of the bias induced on frame mean values by the presence of activations, many voxels turned out to be anti-correlated with the artificial profile (p < 0.001 uncorrected) (see Fig. 2 - Global scaling artefact).
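The global scaling artefact can be reproduced with a toy computation (the sizes and amplitudes below are illustrative, loosely modeled on the 12.4% / ~2% figures above): because activations raise the frame mean, dividing each frame by its mean pushes non-activated voxels into anti-correlation with the task.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_vox = 40, 1000
profile = np.tile(np.r_[np.zeros(10), np.ones(10)], 2)   # task time course
data = 100.0 + 0.05 * rng.standard_normal((n_frames, n_vox))
data[:, :124] += 2.0 * profile[:, None]   # ~12.4% of voxels activated by 2%

# divide each frame by its mean ("global scaling")
scaled = data / data.mean(axis=1, keepdims=True)

# a voxel far from the activation is now anti-correlated with the task
r = float(np.corrcoef(scaled[:, 500], profile)[0, 1])
```

The frame mean rises by roughly 0.25% during activation blocks, so every non-activated voxel is scaled down by that amount in those frames, which is enough to produce a strongly negative correlation with the task profile.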
Fig. 2. One axial slice of the activation maps obtained from SPM99 after using the different registration methods. Spurious activated voxels can be observed when using the LS-based methods.
3.2 Simulated Activations with Motion
The second experiment investigates the influence of activations on registration accuracy. A method robust to the presence of activations in the time series should keep the same level of accuracy whatever the activation features. The important point here is not the absolute accuracy of the method, which could depend on the tuning of some intrinsic parameters, but the potential weakening of accuracy induced by signal changes in activated areas relative to the reference image. This experiment relies on a huge number of simulated volumes, which allows us to study the influence of several parameters on a statistical basis. In order to get rid of potential bias related to field-of-view variations after simulated motion, all volumes were stripped of their border voxels before realignment, reaching a 62×62×28 geometry. To eliminate specificities of the simulated motion relative to the reference volume axes as a potential confound, the simulated translations were applied systematically in the 20 directions of a regular dodecahedron, and the simulated rotations were applied around the 20 different axes defined by the same dodecahedron. Hence, for a given translation or rotation amplitude, accuracy was assessed from the means and standard deviations of the translation (Et) and rotation (Er) errors over 20 different realignments.
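The 20 evenly spread directions can be generated from the vertices of a regular dodecahedron; a sketch using the standard coordinates (±1, ±1, ±1), (0, ±1/φ, ±φ) and cyclic permutations:

```python
import numpy as np

def dodecahedron_directions():
    # The 20 vertex directions of a regular dodecahedron, used to
    # spread simulated translation directions / rotation axes evenly.
    phi = (1.0 + np.sqrt(5.0)) / 2.0   # golden ratio
    v = []
    for sx in (-1.0, 1.0):
        for sy in (-1.0, 1.0):
            for sz in (-1.0, 1.0):
                v.append((sx, sy, sz))               # 8 cube vertices
    for s1 in (-1.0, 1.0):
        for s2 in (-1.0, 1.0):
            v.append((0.0, s1 / phi, s2 * phi))      # 12 remaining
            v.append((s1 / phi, s2 * phi, 0.0))      # vertices, by
            v.append((s1 * phi, 0.0, s2 / phi))      # cyclic permutation
    v = np.array(v)
    return v / np.linalg.norm(v, axis=1, keepdims=True)
```

A translation of amplitude a in direction d is then simply a * d, and the per-amplitude statistics are the mean and standard deviation of the 20 resulting Et (or Er) values.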
Fig. 3. Influence of activation on registration accuracy for a wide range of simulated misregistrations: for each method, accuracy in the situation of no activation (NA, lighter color) is compared with accuracy when 12.4% of the brain (occipital area) is activated in the reference volume with a mean signal increase of 2.52% (A, darker color). Charts refer to the simulated translation (top) and simulated rotation (bottom) experiments and show means and standard deviations of Et (left) and Er (right).
To study the influence of motion amplitude, 11 time series of 20 volumes were generated according to the strategy mentioned above, for six translation amplitudes (0.1, 0.2, 0.5, 1.0, 2.0, and 5.0 mm) and five rotation amplitudes (0.1, 0.2, 0.5, 1.0, and 2.0 degrees). For each method and each motion amplitude, accuracy without activation is compared with accuracy when 12.4% of the brain (in the occipital area, defined by the activation pattern) is activated in the reference volume with a mean signal increase of 2.52% (see Fig. 3). In all cases, activations produce a significant decline in LS-SPM accuracy, whereas this effect is restricted to the translation error (Et) and the smallest translations for LS-AIR. A less significant but similar effect is observed for RIU-AIR. In return, MI accuracy does not depend on activation.
3.3 Experiments with Actual Time Series
Finally, the four registration methods were run on an actual time series made up of 180 frames. The repeated stimulus period corresponds to 18 frames (2 s acquisition per frame). Each period alternates two 9-frame-long presentations of two cognitively different visual stimuli. The six rigid-body registration parameters are displayed in Fig. 4 for the four registration packages. The general trends of the six parameter estimations are consistent across methods, apart from the yaw parameter. It should be noted that, according to the estimation results, the actual motion amplitude was rather small (less than 0.15° and 0.15 mm for all frames). Some of the charts clearly display stimulus-correlated periodic variations. The most impressive periodic effect is observed on the pitch chart for LS-SPM and LS-AIR, while this periodic trend is less clear for RIU-AIR and MI.

As in the first experiment, the actual time series was realigned from each of the four motion estimations using a cubic-spline interpolation. SPM99 was then used to detect activations, after the following standard preprocessing: spatial Gaussian smoothing (full width at half maximum 5 mm), high-pass temporal filtering (period 120 s) and low-pass temporal filtering by a Gaussian function with a 4 s width. The generalized linear model was then used to fit each voxel with a linear combination of two functions: the first one was derived by convolving a standard hemodynamic response function with the periodic stimulus, and the second one was the time derivative of the first, in order to model possible variations in activation onset. Voxels were reported as activated if they passed a threshold of p = 0.05, corrected for multiple comparisons. An illustration of the consequences of the stimulus-correlated motion estimates is proposed for a few slices of the brain in Fig. 5.
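The two-regressor design described above can be sketched as follows; the gamma-shaped kernel and its time constants are assumptions standing in for SPM99's canonical hemodynamic response.

```python
import numpy as np

def design_regressors(n_frames=180, period=18, tr=2.0):
    # Periodic block stimulus: two alternating 9-frame blocks per
    # 18-frame period, modeled as a 0/1 boxcar for one condition.
    frames = np.arange(n_frames)
    stim = ((frames % period) < period // 2).astype(float)
    # Crude gamma-shaped hemodynamic kernel (assumed shape), sampled
    # at the acquisition rate and normalized to unit area.
    t = np.arange(0.0, 24.0, tr)
    hrf = (t / 5.0) ** 2 * np.exp(-t / 5.0)
    hrf /= hrf.sum()
    reg1 = np.convolve(stim, hrf)[:n_frames]   # stimulus * HRF
    reg2 = np.gradient(reg1)                   # time derivative
    return np.column_stack([reg1, reg2])
```

Fitting each voxel time course against these two columns (plus the temporal filters mentioned above) yields the activation statistics shown in Fig. 5.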
Considering the activation map obtained from the raw time series as a reference, a number of additional activated voxels are observed along some high-contrast brain edges after LS-SPM motion correction, and to a smaller extent after LS-AIR correction. RIU-AIR and MI corrections have a very small influence on the activation map. The effect related to LS-SPM correction has been observed for numerous cognitive experiments in our institution.
Fig. 4. Motion correction parameters for the four registration methods for an actual time series of 180 frames. The underlying stimulus is made up of 10 periods of 18 frames, each period consisting of two alternating 9-frame-long blocks with a different visual presentation. Stimulus-correlated periodic trends can be observed on some of the charts, especially for the least-squares-based methods.
4 Discussion
All retrospective image registration algorithms rely on a similarity measure, which has to be maximized in order to achieve the result. A huge number of different measures have been proposed in the literature [23]. One important feature distinguishing two classes of similarity measures is robustness to potential outliers, namely voxels that do not verify some of the assumptions underlying the measure design. Robust measures have classically been proposed to register multimodal images, while simpler least-squares-based measures are usually employed for time-series motion correction. The experiments performed in this paper indicate that this choice may be questioned because of the presence of activated areas in standard fMRI time series. Indeed, least-squares-based approaches are known to be highly sensitive to such outliers.
Fig. 5. A few slices of the activation maps obtained from SPM99 after realignment using the different methods.
The first simulation has shown that LS-SPM and LS-AIR motion parameter estimations are biased by signal changes related to activated areas. Furthermore, this experiment has shown that this bias may induce spurious activations along high-contrast brain edges during the subsequent data analysis. Of course, some features of this simulation may be criticized as unrealistic (activation level and size, noise model, no spatial distortions, etc.). This simulation, however, highlights a weakness of the least-squares-based measures that may be overcome
by more robust measures. The fact that LS-SPM motion correction led to the appearance of spurious activated clusters with a large extent is, indeed, especially disturbing. While LS-AIR seems less sensitive, this simulation has shown that it is not bias-proof. While almost insensitive to activations in this simulation, the two other measures presented two qualitatively different behaviors. The RIU-AIR measure seems to suffer from local maxima difficulties (perhaps related to a bad tuning of the method during our experiment). This results in a low accuracy, which hides any potential activation-related bias. MI presented the best behavior, with a very small bias amplitude and no important influence on the activation detection process. Of course, this simulation does not prove that MI would behave correctly in any situation.

The behavior of the RIU-AIR method during the first experiment highlights an important point: the problem induced by activation-related bias is not related to actual accuracy. For instance, corrupting LS-SPM motion parameter estimations with a reasonable random noise may be sufficient to get rid of spurious activations while preserving actual ones. This observation is illustrated by the results of the second experiment, where the influence of activation on registration accuracy is only significant for small motions. Indeed, larger motions lead to a lower registration accuracy, which masks the activation-related bias. This could explain the surprising heuristics of our institution's neuroscientists, who discard realignment only for small-amplitude estimated motions.

Our experiment with actual time series seems to be consistent with our interpretation of the simulation studies. The arguments that lead us to rule out actual task-correlated motion during data acquisition are the following: the periodic motion amplitude estimated by LS-SPM and LS-AIR on the pitch chart is different.
Moreover, the two other methods do not detect this putative motion. Finally, this periodic motion amplitude is approximately the same for each stimulus period, which would be rather surprising for an actual motion. The fact that the methods do not agree on the estimated yaw parameter is of course very difficult to understand. One possible explanation could stem from the fact that a rigid-body transformation is not sufficient to correct for all the consequences of the motion because of distortions. The disagreement on the periodic motion, however, seems of a different nature and leads to alarming effects on activation maps. If our interpretation is correct, LS-SPM correction, and to a smaller extent LS-AIR, create spurious clusters of activated voxels along high-contrast brain edges. In our opinion, the localization of these spurious clusters depends only on the brain edge orientation relative to the actual activation localization. This could mean that spurious activations may appear at the same place across individuals involved in the same cognitive experiment, and hence survive group analysis. While we hope that this alarming prediction is too pessimistic, it calls for trying to minimize the risk.

Our work has shown that more sophisticated similarity measures like MI could clarify the situation thanks to their robustness to outliers. While MI was used for historical reasons during our work, it may not be the best choice for
motion correction - first because of computational time considerations, and second because recent results have shown that MI is prone to local maxima problems [24,25,26]. While RIU-AIR may appear as an alternative at first glance, its non-convexity problems seem worse than those of MI. In fact, the field of robust similarity measures is currently very active and should provide other adequate solutions [24,27,28,29,30].
References

1. Hajnal, J. V., Myers, R., Oatridge, A., Schwieso, J. E., Young, I. R., and Bydder, G. M.: Artefacts due to stimulus correlated motion in functional imaging of the brain. Magn. Reson. Med. 31 (1994) 289–291
2. Grootoonk, S., Hutton, C., Ashburner, J., Howseman, A. M., Josephs, O., Rees, G., Friston, K. J., and Turner, R.: Characterization and correction of interpolation effects in the realignment of fMRI time series. NeuroImage 11 (2000) 49–57
3. Friston, K. J., Williams, S., Howard, R., Frackowiak, R. S. J., and Turner, R.: Movement-related effects in fMRI time-series. Magn. Reson. Med. 35 (1996) 346–355
4. Robson, M. D., Gatenby, J. C., Anderson, A. W., and Gore, J. C.: Practical considerations when correcting for movement-related effects present in fMRI time-series. In Proc. ISMRM 5th Annual Meeting, Vancouver (1997) 1681
5. Birn, R. M., Jesmanowicz, A., Cor, R., and Shaker, R.: Correction of dynamic Bz-field artifacts in EPI. In Proc. ISMRM 5th Annual Meeting, Vancouver (1997) 1913
6. Wu, D. H., Lewin, J. S., and Duerk, J. L.: Inadequacy of motion correction algorithms in functional MRI: role of susceptibility-induced artefacts. J. Magn. Reson. Imaging 7 (1997) 365–370
7. Woods, R. P., Cherry, S. R., and Mazziotta, J. C.: Rapid automated algorithm for aligning and reslicing PET images. J. Comput. Assist. Tomogr. 16 (1992) 620–633
8. Woods, R. P., Grafton, S. T., Holmes, C. J., Cherry, S. R., and Mazziotta, J. C.: Automated image registration: I. General methods and intrasubject, intramodality validation. J. Comput. Assist. Tomogr. 22(1) (1998) 139–152
9. Wells, W. M., Viola, P., Atsumi, H., and Nakajima, S.: Multi-modal volume registration by maximization of mutual information. Medical Image Analysis 1(1) (1996) 35–51
10. Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., and Suetens, P.: Multimodality image registration by maximization of mutual information. IEEE Trans. Med. Imag. 16(2) (1997) 187–198
11. Viola, P. and Wells, W. M.: Alignment by maximization of mutual information. International Journal of Computer Vision 24(2) (1997) 137–154
12. Studholme, C., Hill, D. L. G., and Hawkes, D. J.: Automated three-dimensional registration of magnetic resonance and positron emission tomography brain images by multiresolution optimization of voxel similarity measures. Medical Physics 24(1) (1997) 25–35
13. Meyer, C. R., Boes, J. L., Kim, B., Bland, P. H., Zasadny, K. R., Kison, P. V., Koral, K., Frey, K. A., and Wahl, R. L.: Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations. Medical Image Analysis 1(3) (1997) 195–206
Luis Freire and Jean-Fran¸cois Mangin
14. Jezzard, P. and Clare, S.: Sources of distortion in functional MRI data. Hum. Brain Mapp. 8 (1999) 80–85
15. Jiang, A. P., Kennedy, D. N., Baker, J. R., Weiskoff, R. M., Tootell, R. B. H., Woods, R. P., Benson, R. R., Kwong, K. K., Brady, T. J., Rosen, B. R., and Belliveau, J. W.: Motion detection and correction in functional MR imaging. Hum. Brain Mapp. 3 (1995) 224–235
16. Frouin, V., Messegue, E., and Mangin, J.-F.: Assessment of two fMRI motion correction algorithms. Hum. Brain Mapp. 5 (1997) S458
17. West, J. et al.: Comparison and evaluation of retrospective intermodality brain image registration techniques. J. Comput. Assist. Tomogr. 21(4) (1997) 554–566
18. Holden, M., Hill, D. L. G., Denton, E. R. E., Jarosz, J. M., Cox, T. C. S., Rohlfing, T., Goodey, J., and Hawkes, D. J.: Voxel similarity measures for 3D serial MR brain image registration. IEEE Trans. Med. Imag. 19(2) (2000) 94–102
19. Friston, K. J., Ashburner, J., Frith, C. D., Poline, J.-B., Heather, J. D., and Frackowiak, R. S. J.: Spatial registration and normalization of images. Hum. Brain Mapp. 2 (1995) 165–189
20. Unser, M., Aldroubi, A., and Eden, M.: B-spline signal processing: Part I. Theory. IEEE Transactions on Signal Processing 41(2) (1993) 821–832
21. Unser, M., Aldroubi, A., and Eden, M.: B-spline signal processing: Part II. Efficient design and applications. IEEE Transactions on Signal Processing 41(2) (1993) 834–848
22. Andersson, J. L. R.: How to estimate global activity independent of changes in local activity. NeuroImage 6 (1997) 237–244
23. Maintz, J. B. A. and Viergever, M. A.: A survey of medical image registration. Medical Image Analysis 2(1) (1998) 1–36
24. Roche, A., Malandain, G., Pennec, X., and Ayache, N.: The correlation ratio as a new similarity measure for multimodal image registration. In Proc. MICCAI'98, LNCS 1496, Springer-Verlag (1998) 1115–1124
25. Studholme, C., Hill, D. L. G., and Hawkes, D. J.: An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition 32(1) (1999) 71–86
26. Pluim, J. P. W., Maintz, J. B., and Viergever, M.: Image registration by maximization of combined mutual information and gradient information. In Proc. MICCAI 2000, LNCS 1935, Springer-Verlag (2000) 452–461
27. Nicou, C., Heitz, F., Armspach, J.-P., Namer, I.-J., and Grucker, D.: Registration of MR/MR and MR/SPECT brain images by fast stochastic optimization of robust voxel similarity measures. NeuroImage 8(1) (1998) 30–43
28. Roche, A., Pennec, X., Rudolph, M., Auer, D. P., Malandain, G., Ourselin, S., Auer, L. M., and Ayache, N.: Generalized correlation ratio for rigid registration of 3D ultrasound with MR images. In Proc. MICCAI 2000, Pittsburgh, USA, LNCS 1935, Springer-Verlag (2000) 567–577
29. Pluim, J. P. W., Maintz, J. B., and Viergever, M.: Interpolation artefacts in mutual information-based image registration. Computer Vision and Image Understanding 77 (2000) 211–232
30. Jenkinson, M. and Smith, S. M.: A global method for robust affine registration of brain images. Medical Image Analysis (2001, in press)
Estimability of Spatio-temporal Activation in fMRI

Andre Lehovich(1,2,3), Harrison H. Barrett(1,2,3,4), Eric W. Clarkson(1,2,3,4), and Arthur F. Gmitro(2,4)

1 Center for Gamma-Ray Imaging, University of Arizona, Tucson AZ 85721, USA
[email protected], http://gamma.radiology.arizona.edu
2 Department of Radiology, University of Arizona
3 Program in Applied Mathematics, University of Arizona
4 Optical Sciences Center, University of Arizona
Abstract. Event-related functional magnetic resonance imaging (fMRI) is considered as an estimation and reconstruction problem. A linear model of the fMRI system based on the Fourier sampler (k-space) approximation is introduced and used to examine which parameters of the activation are estimable, i.e., can be accurately reconstructed in the noise-free limit. Several possible spatio-temporal representations of the activation are decomposed into null and measurement components. A causal representation of the activation using generalized Laguerre polynomials is introduced.
1 Introduction
In functional magnetic resonance imaging (fMRI), the signal is produced by a temporary, physiologically induced change in the magnetization of a brain region. This change is called the activation. (For an introduction to fMRI see [15].) Most prior work has considered fMRI to be a signal-detection problem: for a given region of interest in the brain, usually a voxel, did the average magnetization significantly change after the subject received some stimulus? Typically the results of signal detection on many voxels are displayed as an activation map. Instead, we focus on fMRI as an estimation problem: how much has the average magnetization in the region changed t seconds after the stimulus? We prefer estimation to signal detection for several reasons. First, there has been much debate over the optimal signal-detection strategy, and we know from other signal-detection problems that a good understanding of the signal is helpful in formulating the optimal detection strategy. Second, in many signal-detection algorithms the first step is to estimate the signal. Third, detection reduces the data to a binary value (or an activation map of binary values), yet information about the signal magnitude might be of interest. Finally, without knowledge of the true activation it is difficult to produce the ROC curves needed to compare the performance of different signal-detection systems.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 259–271, 2001.
© Springer-Verlag Berlin Heidelberg 2001

In any imaging system the accuracy of estimates (reconstructions) is affected by factors such as measurement noise, errors in the mathematical model of the imaging system, and those aspects of an object that the system is incapable of measuring. The latter is the focus of this paper. We answer the question "In the best case of noise-free data and no modeling error, what parameters of the activation can we linearly estimate using data from an fMRI system?" (We consider the fMRI system to include both the MRI hardware and the scan-sequence software.) An equivalent but perhaps more interesting question is "Even with the generous assumptions of no noise and no modeling error, what parameters of the activation can we not reconstruct?" Answering these questions allows us to compare the tradeoffs in spatial vs. temporal resolution of different scan sequences. In section 2 we present a linear model relating the fMRI measurements to the activation we wish to reconstruct and the parameters that must be estimated. Our model explicitly treats the activation as a spatio-temporal function and the imaging system as a continuous-to-discrete(1) mapping. Several ways to represent the activation are suggested, including a novel representation using generalized Laguerre polynomials. In section 3 we introduce estimability and the decomposition of activation viewed through an fMRI system into null and measurement components. This decomposition tells us, for a specific fMRI system, what can be accurately reconstructed in the absence of measurement noise. The same analysis can be used either to match the activation representation to a specific imaging system or to optimize the fMRI system for a given activation representation. In section 4 we compute the measurement and null components of several representations of the activation.
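The null/measurement decomposition introduced in section 3 can be illustrated with a toy discrete system. This is a sketch under invented assumptions: the matrix H below stands in for the continuous-to-discrete fMRI operator, and the dimensions are arbitrary:

```python
import numpy as np

# Illustrative sketch (not the authors' code): a continuous-to-discrete
# system approximated by a wide matrix H (fewer measurements than object
# coefficients), so some object components are invisible to the system.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 20))   # hypothetical: 8 measurements, 20-dim object
f = rng.standard_normal(20)        # hypothetical object (activation) vector

# Orthogonal projection onto the measurement space: f_meas = H^+ H f.
H_pinv = np.linalg.pinv(H)
f_meas = H_pinv @ H @ f            # estimable (measurement) component
f_null = f - f_meas                # null component, invisible to the system

# The null component produces (numerically) zero data, and the data from
# the measurement component alone equal the data from the full object.
print(np.allclose(H @ f_null, 0))       # True
print(np.allclose(H @ f_meas, H @ f))   # True
```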
2 Linear Model & The fMRI Inverse Problem
Our model of the imaging process begins with the spatial Fourier sampler (k-space) approximation derived in most texts, including [13] and [12]. The basic measurement equation is

g_j = ∫∫ M(r, t) e^{−2πi r·k(t)} p(t − t_j) dr dt + n_j ,   (1)

where g is measured, M(r, t) is the transverse magnetization in the rotating frame at time t, the spatial Fourier components k(t) are controlled by the scan-sequence software, p(t − t_j) is the temporal sampling blur of the MRI hardware, and n is zero-mean white Gaussian noise. (Field strength, excitation/echo times, and other details of the MRI system are included in M(r, t), as are facets of the experimental subject such as T2*(r, t).) Mental activity causes a temporary change in M(r, t). The magnetization can be partitioned into baseline equilibrium and activation components
M(r, t) = M^eq(r, t) + δM(r, t).   (2)

(1) The activation function is defined on a continuous set of points, but need not be a continuous function in the usual sense; for example, discontinuities might occur at anatomical boundaries.
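A discretized 1D sketch of the measurement equation (1), with the temporal blur p collapsed to a delta, shows each datum as a spatial Fourier sample of the magnetization; all sizes and signals below are illustrative, not from the paper:

```python
import numpy as np

# Hedged 1D sketch of the k-space measurement model (1): each measurement
# is a spatial Fourier coefficient of the magnetization, here with the
# temporal blur p treated as a delta at the sample time.
N = 16
r = np.arange(N) / N                    # spatial positions in [0, 1)
M = np.cos(2 * np.pi * 3 * r) + 0.5     # hypothetical magnetization profile

def fourier_sample(M, r, k):
    """g = sum_r M(r) * exp(-2*pi*i*r*k), a discrete stand-in for (1)."""
    return np.sum(M * np.exp(-2j * np.pi * r * k))

# With k(t_j) visiting the integers 0..N-1, the samples reproduce the DFT.
g = np.array([fourier_sample(M, r, k) for k in range(N)])
print(np.allclose(g, np.fft.fft(M)))    # True
```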
Combining (1) and (2) gives

g = g^eq + Δg ,   (3)

g_j = ∫∫ M^eq(r, t) e^{−2πi r·k(t)} p(t − t_j) dr dt + ∫∫ δM(r, t) e^{−2πi r·k(t)} p(t − t_j) dr dt + n_j .   (4)
In an fMRI experiment using the event-related paradigm the subject is asked to perform a cognitive task(2) after being exposed to a stimulus [10]. For example, the subject might be shown a stimulus of three letters with the task to think of a word beginning with that syllable. During the experiment the MRI system records data by repeatedly executing a scan sequence. Because the change in magnetization produces a change in data of similar magnitude to the noise, the stimulus cycle is often repeated many times to average over noise realizations. In the discussion below we will use index c to denote the stimulus cycle, index s to denote the scan-sequence repetition within the cycle, and index j to denote a measurement within the scan sequence. We assume that the activation is reproducible over stimulus cycles(3) and that the magnetization is linear in the number of stimuli, so

δM(r, t) = Σ_{t_l ∈ C} f(r, t − t_l) ,   (5)
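Assumption (5), that the activation is a superposition of identical responses at the stimulus times, can be sketched as a shift-and-add, equivalently a convolution with the stimulus train; the response shape and onset times below are invented for illustration:

```python
import numpy as np

# Sketch of assumption (5): the activation is the same response f
# repeated at each stimulus onset t_l in the set C, superposed linearly.
# All names and numbers here are illustrative, not the paper's data.
T = 100
onsets = [10, 40, 70]                      # hypothetical stimulus times (set C)
response = np.exp(-np.arange(T) / 5.0)     # hypothetical single-cycle response f

delta_M = np.zeros(T)
for t_l in onsets:
    delta_M[t_l:] += response[: T - t_l]   # shift-and-add: f(t - t_l)

# Equivalent formulation: convolution of a stimulus train with f.
train = np.zeros(T)
train[onsets] = 1.0
print(np.allclose(delta_M, np.convolve(train, response)[:T]))  # True
```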
where s is a voxel of the volume, t is the index of the volume, f is the luminance function, w is the expected 3D displacement field, S is the voxel lattice, C is the set of neighboring pairs, and α controls the balance between the two energy terms. To overcome some classical difficulties of that formulation (large displacements, noise and intensity inhomogeneities, discontinuities), we introduce two robust estimators, one for the data term (ρ1) and one for the regularization term (ρ2). The minimization is therefore equivalent to the minimization
of the augmented function, noted U:

U(w, δ, β; f) = Σ_{s∈S} [ δ_s (∇f(s, t) · w_s + f_t(s, t))^2 + ψ_1(δ_s) ]
  + α Σ_{<s,r>∈C} [ β_sr ||w_s − w_r||^2 + ψ_2(β_sr) ] ,   (2)
Pierre Hellier and Christian Barillot
where δ_s and β_sr are auxiliary variables acting as weights. This cost function has the advantage of being quadratic with respect to w. Furthermore, when a voxel s does not fit the model, the corresponding weight δ_s = φ_1([∇f(s, t) · w_s + f_t]^2) decreases, making the formulation more robust. More details may be found in [7].

Multiresolution and multigrid minimization: In order to cope with large displacements, a classical incremental multiresolution procedure is used (successive Gaussian smoothing and subsampling). At each resolution level, a multigrid minimization based on successive partitions of the initial volume is performed. At each grid level, a 12-dimensional parametric increment field is estimated over each cube of the grid. The resulting field is a rough estimate of the desired solution, and it is used to initialize the next grid level. This hierarchical minimization strategy improves both the quality of the solution and the convergence rate.
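The role of the auxiliary weights can be illustrated with a generic iteratively reweighted least-squares loop. This is a sketch, not the paper's estimator: the Geman-McClure weight function and the toy 1D line-fitting problem are our own illustrative choices:

```python
import numpy as np

# Illustrative iteratively reweighted least squares (IRLS), the mechanism
# behind auxiliary weights such as delta_s and beta_sr: residuals that
# violate the model receive a small weight and stop driving the quadratic
# solve. The Geman-McClure weight is an assumption, not the paper's choice.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.05 * rng.standard_normal(50)   # true model: y = 2x
y[:5] += 5.0                                   # gross outliers

a = 0.0                                        # slope estimate for y = a*x
for _ in range(20):
    r = y - a * x
    w = 1.0 / (1.0 + (r / 0.5) ** 2) ** 2      # Geman-McClure weights in (0, 1]
    a = np.sum(w * x * y) / np.sum(w * x * x)  # weighted least-squares update

print(round(a, 2))   # close to the true slope 2.0 despite the outliers
```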
4 Extraction of Local Cortical Constraints

4.1 Related Work: Sulci Extraction
Descriptive anatomy of the cerebral cortex is based on its subdivision into a set of sulci and gyri. MRI accurately represents this in-vivo cortical anatomy and is used in daily practice in morphological and functional brain imaging. In the context of this paper, these sulci serve as cortical landmarks to constrain the registration process. The sulci segmentation and identification task is a genuinely three-dimensional data-analysis problem. There are two main issues within this recognition topic: the feature-extraction problem of segmenting the sulci, and the problem of identifying cortical structures (lobes, gyri, sulci). Some methods dealing with the problem of representing cortical sulci define a sulcus from its structural information, basically a cortical fold filled by CSF [13,18,19,20,24,25]. Other methods define a sulcus from its shape, basically a highly convoluted shape [10,22]. For the recognition task, the identification of cortical regions like lobes, gyri or sulci can use the atlas-matching paradigm [1,11,19] or be based on a symbolic/numeric data-fusion scheme using probabilities [11,17] or heuristics [18]. One problem of these methods is the great variability of the cortical folds and the difficulty of analyzing them hierarchically (even if statistical maps can be used for a gross automatic labeling of some major sulci [11], or neural networks based on a learning data set [17]).

4.2 Modeling of Sulci Using Active Ribbons
A compact numerical description of a sulcus can be obtained by modeling it with a surface representing its deep part. The method we use is based on the active contour paradigm, evolving from a 1D curve located at the external part of the brain to a 2D surface modeling the median axis of the sulcus. In order
Cooperation between Local and Global Approaches to Register Brain Images
to characterize sulci from gyri, the curvature of iso-intensity surfaces belonging to the cortical regions is computed. The operator used to compute curvature information is a 3D extension of the Lvv operator. A compact, parametric description of a sulcus is then obtained as a median surface representing the buried part of that sulcus.

4.3 Matching of Sulci
The cortical sulci are extracted using the algorithm described in [10].

Contiguous sulci: The sulci are modeled with B-splines, which facilitates their numerical manipulation; in particular, these shapes can be resampled along each axis. For the registration of a subject A towards a subject B, we consider the homologous sulci between these two subjects. For each pair of homologous sulci, the sulcus with fewer control points is resampled so that the two homologous sulci are finally described by the same number of points. We then explicitly put the control points of these two sulci in correspondence, as summarized in figure 2. On the support of the constraint sulci of the source volume, we thus define a sparse constraint deformation field which can thereafter be integrated in the estimation.

Interrupted sulci: In general we also have to deal with interrupted sulci [16] (see figure 2). In this case, we use the following procedure:
– If we note p_m the maximum depth (p_m = max{p_0, p_1, p_2}), the sulci are resampled along the depth direction in order to obtain the same number of control points (p_m) for all the sulci segments on their depth axis.
– Note l_m = max{l_1 + l_2, l_0}. If l_m = l_1 + l_2, sulcus 0 is resampled by a factor of l_m/l_0, while the sulci segments i, i ∈ {1, 2}, are resampled by a factor of l_m/(l_1 + l_2).
In figure 2, the curvilinear abscissa of point M_0 along the length of sulcus 0 is l_1/l_0, after normalization between 0 and 1. At M_0, the constraint field corresponding to the sulci matching is not contiguous. When two homologous sulci are both interrupted, we match each segment as if they were continuous. This is possible because we have a labeling of each piece, Inferior-Superior (for the precentral sulcus, for instance) or Antero-Posterior (for the superior temporal sulcus, for instance).
We have presented the approach for a sulcus described by two distinct segments, but the method extends easily to sulci with more segments.

Sparse constraint deformation field: In every case, we finally obtain two sulci, each described by N control points. The sulcus of the source volume is described by a set of control points S1 = {CS_1 ... CS_N}, while the homologous sulcus in the target volume is described by S2 = {CT_1 ... CT_N}. For each point
Fig. 2. Left: Matching of two homologous sulci. The sulcus with fewer control points is resampled in order to obtain the same number of control points on each axis, namely N_l and N_p. Right: Matching of two sulci, one of them interrupted. At M_0 (curvilinear abscissa l_1/l_0), the constraint field is discontinuous.

of S1, a constraint field can be explicitly computed: ∀k ∈ {1 ... N}, w_c^k is the vector from CS_k to CT_k. Note S_c = S1 the kernel of the sparse constraint field. Contrary to matching approaches based on a distance measure (chamfer, ICP), this algorithm matches all sulci points explicitly. This mapping procedure may be debated, because we do not know whether it is anatomically plausible. Is it necessary to explicitly match the sulci extreme points? How should sulci interruptions be managed? In the absence of anatomical assumptions, it appeared more reasonable to us, at least in a first stage, to implement this cortical mapping process. At the end of this process, we obtain a constraint field w_c, which is defined on the support of the source volume. In order to reduce the sensitivity of the constraint field to sulci segmentation errors, we perform a non-isotropic regularization of the field, which can be seen as a 3D adaptation of the Nagao filtering [15]. In the following, the approach we propose only assumes the existence of a field w_c, and is therefore not limited to the incorporation of sulcal constraints.
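The pairing step described above (resampling two homologous sulci to the same number of control points, then matching them point to point) can be sketched as follows. The uniform arc-length resampling and the toy 2D curves are illustrative stand-ins for the paper's B-spline machinery:

```python
import numpy as np

# Hedged sketch of the sulci pairing step: two homologous curves are
# resampled to the same number of control points and matched point to
# point, giving a sparse constraint field w_c^k = CT_k - CS_k.
def resample(curve, n):
    """Resample a polyline (m x d array) to n points, uniform in arc length."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack(
        [np.interp(t, s, curve[:, d]) for d in range(curve.shape[1])]
    )

# Toy curves: a "sulcus" in subject A and its homologue in subject B.
source = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
target = np.array([[0.0, 1.0], [0.5, 1.0], [1.0, 1.0], [2.0, 1.0]])

N = 5
S1, S2 = resample(source, N), resample(target, N)
w_c = S2 - S1                        # one constraint vector per control point
print(w_c.shape)                     # (5, 2)
print(np.allclose(w_c[:, 1], 1.0))   # every pairing points one unit upward: True
```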
5 Integration of Sparse Constraints
The sparse deformation field w_c, deduced from the matching of cortical ribbons, must be integrated in the formulation of the registration problem. As in [6], we incorporate that information as a third energy term. The cost function therefore becomes:

U(w; f, w_c) = Σ_{s∈S} [∇f(s, t) · w_s + f_t(s, t)]^2 + α Σ_{<s,r>∈C} ||w_s − w_r||^2 + α_c Σ_{s∈S_c} ||w_s − w_c,s||^2 ,   (3)

where α_c is a parameter that balances the weight of the cortical constraint.
The matching of cortical ribbons, as presented in the previous section, might not be correct for all the points of the ribbons. As a matter of fact, there might be some segmentation errors, and these points should not be used as a hard constraint. Furthermore, we do not really know whether it is anatomically correct to assume a one-to-one correspondence between the sulci. Therefore, we introduce a robust estimator ψ_3 on the cortical constraint term. The cost function is modified as:
U(w, δ, β; f, w_c) = Σ_{s∈S} [ δ_s (∇f(s, t) · w_s + f_t(s, t))^2 + ψ_1(δ_s) ]
  + α Σ_{<s,r>∈C} [ β_sr ||w_s − w_r||^2 + ψ_2(β_sr) ]
  + α_c Σ_{s∈S_c} [ γ_s ||w_s − w_c,s||^2 + ψ_3(γ_s) ] .   (4)
Of course, the sparse constraint and the associated robust function introduce two new external parameters, α_c and ψ_3. However, these parameters are fixed at relatively high values, as the constraints must be largely taken into account. The minimization scheme is unchanged compared to our previous work: we alternate between estimating the weights of the robust functions and estimating the deformation field. Once the weights are estimated and "frozen", the multigrid estimation of the field is performed through an iterative Gauss-Seidel scheme. Further details about that minimization strategy may be found in [7]. The local cortical constraint will have a relative spatial influence for two reasons. On the one hand, the standard regularization term propagates the local constraint because the minimization is alternated. On the other hand, the multigrid minimization, described in detail in our previous work [7], estimates a deformation model on specified cubes, which propagates the local constraint to the large group of voxels that compose each cube.
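The frozen-weights inner solve can be illustrated on a toy 1D problem. This is a sketch, not the paper's implementation: the robust weights are held at 1, the field is scalar, and all parameter values and constraint sites are invented:

```python
import numpy as np

# Toy sketch of the frozen-weights inner solve: with the robust weights
# fixed (here all 1), the energy is quadratic in w, and Gauss-Seidel
# sweeps update each site from its neighbours, the data term, and any
# sparse constraint attached to it. 1D scalar field; values illustrative.
n = 20
data = np.zeros(n)                   # hypothetical data-term targets
alpha, alpha_c = 1.0, 10.0           # smoothness and constraint weights
w_constraint = {5: 2.0, 14: -1.0}    # sparse constraints: site -> target value

w = np.zeros(n)
for _ in range(500):                 # Gauss-Seidel sweeps
    for s in range(n):
        num, den = data[s], 1.0      # data term (w_s - data_s)^2
        for r in (s - 1, s + 1):     # neighbourhood C
            if 0 <= r < n:
                num += alpha * w[r]
                den += alpha
        if s in w_constraint:        # constraint term alpha_c*(w_s - w_c_s)^2
            num += alpha_c * w_constraint[s]
            den += alpha_c
        w[s] = num / den             # closed-form minimiser at site s

print(abs(w[5] - 2.0) < 0.5)         # field pulled toward the constraints: True
print(abs(w[14] + 1.0) < 0.5)        # True
```

The smoothness term propagates each sparse constraint to its neighbours, which is exactly the "relative spatial influence" argued for above.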
6 Results on a Database of 18 Subjects
To evaluate the benefits of the cortical constraint, we have acquired a database of 18 subjects. For each subject, 6 main sulci per hemisphere have been extracted: central, precentral, postcentral, sylvian, superior temporal and superior frontal. Among the 18 subjects, one has been chosen as the reference subject, and all the subjects have been registered onto that subject, with both the previous method and the constrained one. For both methods, we keep the same set of parameters for all the subjects, showing that the algorithm is quite robust with respect to parameter settings. We have designed global and local measures to assess the quality of the registration and to compare the original method to the constrained one.
6.1 Global Measures
Once every subject of the database has been mapped onto the reference subject, it is possible to compute an average volume by normalizing the 17 deformed volumes. Note that all the volumes are computed with the estimated deformation field and trilinear interpolation. The results are presented in figure 3, with axial, sagittal and coronal views for both methods. Looking at cortical areas, we can see that the average volume obtained with the constrained method is less blurred and more similar to the reference volume. On other regions, the results are identical.
Fig. 3. Average volumes, computed with the 17 deformed subjects (panels: previous work, constrained registration, reference subject). The volume corresponding to the constrained method is more precise on the cortex.
In order to assess the difference between the methods numerically, we compute the mean square error (MSE(1)) between the averaged volumes and the reference volume. We compute two average errors: one over all the voxels of the volume, and one restricted to the segmentation mask of the reference brain. Results are presented in table 1 and show the benefit of the cortical constraint, as the error decreases by 25%. The MSE might not be a good measure when considering only two subjects. Notwithstanding that, it is valuable when comparing the average volume (computed from 17 registered volumes) to the reference volume, and also when comparing two different methods.

Table 1. Mean square error (MSE) between the average volume and the reference volume. We present two mean errors: the first one is computed over all the voxels of the volume, whereas the second one is computed only on the segmentation mask of the reference brain. The benefit of the cortical constraint is visible, as we can see a 25% decrease of the error.

                          All voxels   Segmentation mask of the brain
Previous work                  750.4                            624.5
Constrained registration       545.2                            506.7
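The two MSE figures of table 1 (global and mask-restricted) can be sketched as follows; the volumes and brain mask below are synthetic stand-ins, not the study's data:

```python
import numpy as np

# Sketch of the two MSE computations of Table 1: one over all voxels,
# one restricted to a brain mask. All data here are synthetic.
rng = np.random.default_rng(2)
avg = rng.standard_normal((4, 4, 4))   # stand-in for the average volume
ref = avg + 0.1                        # constant offset -> MSE = 0.01 everywhere
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True             # hypothetical brain segmentation mask

def mse(a, b, mask=None):
    d = (a - b) ** 2
    return d[mask].mean() if mask is not None else d.mean()

print(np.isclose(mse(avg, ref), 0.01))         # True
print(np.isclose(mse(avg, ref, mask), 0.01))   # True
```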
At that stage, the evaluation is not entirely fair, as the criterion is more or less related to the similarity measure used to drive the registration process. Therefore, we design another "global" evaluation based on the registration of anatomical features. For each subject, we have the classification of the cortex into grey matter and white matter, obtained with the method of [9]. We compare the classification of each subject, projected onto the reference subject, with the reference classification, by computing overlap measures such as sensitivity, specificity and total performance. For the sake of concision, we keep the total performance measure, and compute the mean and the deviation of that measure over the database of subjects. Results are presented in table 2. The results are comparable for the two techniques, but the deviation of the constrained technique is lower, meaning that the algorithm is more robust.

6.2 Local Measures
In this section, we want to measure the benefits of the cortical constraints locally. Therefore, we designed measures to evaluate the quality of the matching of sulci. Looking at figure 1, one can see the sulci of the reference subject, R, and two deformed sulci, 1 and 2. The two sulci should ideally match the reference sulci; which one is the better deformed sulcus? If we consider the global positioning,

(1) MSE = (1/N) Σ_{i=1}^{N} (I_1(i) − I_2(i))^2, where I_1 and I_2 are the volumes to compare, and N is the number of voxels.
Table 2. Overlap measures between grey/white matter. For each subject, the classification of the cortex is projected onto the reference subject and compared to the classification of the reference subject. We choose an overlap measure defined by the total performance, and compute the mean and deviation of that measure over the database of subjects.

                          Tissue   Mean   Deviation
Previous work             grey     93.5   0.07
                          white    95.1   0.07
Constrained registration  grey     93.5   0.05
                          white    95.3   0.05
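The overlap measures behind table 2 can be sketched as follows. Reading "total performance" as overall accuracy is our assumption, and the binary masks below are toy data:

```python
import numpy as np

# Sketch of the overlap measures: for a projected classification P and a
# reference classification R (binary masks for one tissue), sensitivity,
# specificity and total performance are the usual confusion-matrix ratios.
# "Total performance" = overall accuracy is our reading, not confirmed here.
def overlap(P, R):
    tp = np.sum(P & R)            # voxels labelled tissue in both
    tn = np.sum(~P & ~R)          # voxels labelled background in both
    fp = np.sum(P & ~R)
    fn = np.sum(~P & R)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    total = (tp + tn) / (tp + tn + fp + fn)
    return sens, spec, total

R = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)   # toy reference mask
P = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)   # toy projected mask
sens, spec, total = overlap(P, R)
print(sens, spec, total)   # 0.75 0.75 0.75
```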
sulcus 1 seems to be the best, but if we consider the shape, sulcus 2 appears to be better deformed. In this section, we first visualize the deformed sulci, then we evaluate the registration of sulci numerically, with respect to global positioning and with respect to shape recovery.

Visualization of deformed sulci: If we consider one particular sulcus over the database of 18 subjects, one can deform each sulcus with the estimated deformation field and look at the superposition with the reference sulcus. The visualizations, for the left central sulcus and the left sylvian fissure, are presented in figures 4 and 5 respectively. One can observe the variability of the registered sulci with the previous method, and the way sulci are registered with the modified method.
Fig. 4. (panels: previous method, constrained registration) The left central sulci of each subject (in blue) are deformed toward the reference subject, and compared to the left central sulcus of the reference subject (in yellow). For the reference subject, we have also traced the left precentral sulcus in red and the left postcentral sulcus in green. The constrained method leads to a significantly lower variability, compared to the previous method.
Fig. 5. (panels: previous method, constrained registration) The left sylvian sulci of each subject (in blue) are deformed toward the reference subject, and compared to the left sylvian sulcus of the reference subject (in red). We have also traced the left superior temporal sulcus of the reference subject in green.
Average sulcus: As the sulci are defined by their control points, it is possible to compute, after resampling, an average sulcus from the deformed sulci of all subjects. The control points of the average sulcus are the means of the corresponding control points of the deformed sulci. We can then visualize the average sulcus and compare it to the reference sulcus. Figure 6 presents the average sulci for the left central sulcus and the left sylvian fissure. We observe that the average sulci obtained with the constrained method almost coincide with the reference sulci, showing that the local constraint has been well integrated in the registration process.

Numerical evaluation. Euclidean distances between sulci: In this section, we intend to assess numerically the impact of the registration on the matching of sulci. As the sulci are defined by their control points, we can first compute a mean distance between sulci after resampling. However, this distance does not reflect how similar the shapes are, so we also perform a principal component analysis (PCA) to evaluate the similarity of shape. For each subject and for each sulcus, we compute a distance between the deformed sulci of that subject and the corresponding sulci of the reference subject. We then average the distances over the 18 subjects and the 12 sulci per subject, and present the results in figure 7. The constrained method gives significantly better results, with an average distance equivalent to the resolution of the MR images. We also evaluate the global positioning of sulci by computing a distance between the centers of gravity of the sulci. For the constrained method, the alignment is good, as the mean distance between the registered sulci is 0.2 voxels.
Fig. 6. Left: For the left central sulcus, the average sulcus obtained with the previous method is drawn in cyan, while the average sulcus obtained with the constrained method is drawn in blue. We have also drawn three sulci of the reference subject: the left central in red, the left postcentral in green and the left precentral in orange. Right: For the left sylvian sulcus, the average sulcus obtained with the previous method is drawn in green, while the average sulcus obtained with the constrained method is drawn in blue. We have also drawn two sulci of the reference subject: the left sylvian in red, the left superior temporal in blue.
Principal component analysis: The distance between sulci does not reflect how similar the shapes are. Principal component analysis makes it possible to define a metric adapted to the comparison of shapes; we will not describe the PCA technique here, referring the reader to [5] for further information. For each method, the population of shapes is composed of a sulcus of the reference subject and the corresponding sulci of each subject, deformed toward the reference subject by the given registration method. We consider the trace of the covariance matrix, because it reflects the total amount of variation along the axes of the decomposition, and because the trace is invariant. Figure 7 compares the traces obtained with the previous method and the constrained registration for three different sulci: the left central, the left superior frontal and the left sylvian. The constrained method gives significantly better results, as the trace is almost divided by a factor of 10.
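The trace-of-covariance comparison can be sketched as follows; the two synthetic populations below stand in for the sulci deformed by the two methods:

```python
import numpy as np

# Sketch of the shape-variability comparison: each deformed sulcus is a
# set of control points flattened to a vector; the trace of the covariance
# matrix of the population is its total variance, equal to the sum of the
# PCA eigenvalues and invariant to the choice of axes. Data are synthetic.
rng = np.random.default_rng(3)
base = rng.standard_normal(10)                        # reference shape vector

tight = base + 0.1 * rng.standard_normal((18, 10))    # well-registered population
loose = base + 1.0 * rng.standard_normal((18, 10))    # poorly registered one

def total_variance(shapes):
    cov = np.cov(shapes, rowvar=False)   # covariance across the population
    return np.trace(cov)                 # sum of PCA eigenvalues

print(total_variance(tight) < total_variance(loose))  # True
```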
7 Conclusion
We have presented in this paper a method of cooperation between an "iconic" registration method and a landmark-based registration. We chose to incorporate sparse sulcal constraints because sulci are relevant landmarks from both a functional and a morphological point of view. The sulci are extracted with the active ribbon method presented in [10]. The energy-based framework [7] makes
Cooperation between Local and Global Approaches to Register Brain Images
Fig. 7. Left: Average distances between registered sulci. The mean is computed over all sulci (12 per subject) and all subjects (18). We compute two distances: one between the control points and one between the centers of gravity. Right: For three populations of deformed sulci of the left hemisphere, the trace of the covariance matrix reflects the total amount of variation observed along the axes of the decomposition. The trace is invariant, which makes it possible to compare the results of both methods.
it possible to naturally incorporate the local sparse constraints and to express the complete registration problem in a single formalism. On a database of 18 subjects, for each of whom 6 major sulci per hemisphere have been extracted, we have demonstrated the significant impact of sparse constraints. We designed global measures (average volume, mean error, overlap of brain tissues) as well as local measures (distance between sulci after registration, shape comparison based on PCA). Both types of measures have shown the impact of the cortical constraints.
Acknowledgements. This work was partly supported by the Brittany Country Council through a contribution to a student grant. Grant support for the acquisition of the data was provided by the GIS project "cognition science".
References

1. A. Caunce and C.J. Taylor. Using local geometry to build 3D sulcal models. Proc. IPMI, pages 196–209, 1999.
2. H. Chui, J. Rambo, J. Duncan, R. Schultz, and A. Rangarajan. Registration of cortical anatomical structures via robust 3D point matching. Proc. IPMI, pages 168–181, 1999.
3. L. Collins, G. Le Goualher, and A. Evans. Non-linear cerebral registration with sulcal constraints. Proc. MICCAI, pages 974–985. Springer, October 1998.
4. L. Collins, G. Le Goualher, R. Venugopal, A. Caramanos, A. Evans, and C. Barillot. Cortical constraints for non-linear cortical registration. Proc. VBC, pages 307–316, September 1996.
5. T. Cootes, C. Taylor, D. Cooper, and J. Graham. Active shape models: their training and application. CVIU, 61(1):31–59, 1995.
6. J. Gee, L. Le Briquer, C. Barillot, and D. Haynor. Probabilistic matching of brain images. In Bizais et al., editors, Proc. Information Processing in Medical Imaging, Brest, June 1995. Kluwer Academic Publishers.
7. P. Hellier, C. Barillot, E. Mémin, and P. Pérez. An energy-based framework for dense 3D registration of volumetric brain images. In IEEE CVPR, pages 270–275, June 2000.
8. B. Horn and B. Schunck. Determining optical flow. Artificial Intelligence, 17:185–203, 1981.
9. F. Lachmann and C. Barillot. Brain tissue classification from MRI data by means of texture analysis. In SPIE Medical Imaging, pages 72–83, 1992.
10. G. Le Goualher, C. Barillot, and Y. Bizais. Modeling cortical sulci with active ribbons. IJPRAI, 8(11):1295–1315, 1997.
11. G. Le Goualher, E. Procyk, L. Collins, R. Venugopal, C. Barillot, and A. Evans. Automated extraction and variability analysis of sulcal neuroanatomy. IEEE TMI, 18(3):206–217, March 1999.
12. J. Maintz and M.A. Viergever. A survey of medical image registration. Medical Image Analysis, 2(1):1–36, 1998.
13. J.-F. Mangin, V. Frouin, I. Bloch, J. Régis, and J. López-Krahe. From 3D magnetic resonance images to structural representations of the cortex topography using topology preserving deformations. Journal of Mathematical Imaging and Vision, 5(4):297–318, 1995.
14. J. Mazziotta, A. Toga, A. Evans, P. Fox, and J. Lancaster. A probabilistic atlas of the human brain: theory and rationale for its development. NeuroImage, 2:89–101, 1995.
15. M. Nagao and M. Matsuyama. Edge preserving smoothing. CVGIP, 9:394–407, 1979.
16. M. Ono, S. Kubik, and C. Abernathey. Atlas of the Cerebral Sulci. Thieme Verlag, 1990.
17. D. Rivière, J.-F. Mangin, D. Papadopoulos-Orfanos, J. Martinez, V. Frouin, and J. Régis. Automatic recognition of cerebral sulci using a congregation of neural networks. In Proc. MICCAI, 2000.
18. N. Royackkers, H. Fawal, M. Desvignes, M. Revenu, and J.-M. Travere. 
Morphometry and identification of brain sulci on three-dimensional MR images. Proc. IPMI, pages 379–380, June 1995.
19. S. Sandor and R. Leahy. Surface-based labeling of cortical anatomy using a deformable atlas. IEEE TMI, 16(1):41–54, 1997.
20. G. Székely, Ch. Brechbühler, O. Kübler, R. Ogniewicz, and T. Budinger. Mapping the human cerebral cortex using 3D medial manifolds. In VBC, pages 130–144, 1992. SPIE.
21. J. Talairach and P. Tournoux. Referentially Oriented Cerebral MRI Anatomy. Georg Thieme Verlag, New York, 1993.
22. J.-P. Thirion and A. Gourdon. The marching lines algorithm: new results and proofs. Research report 1881, INRIA, March 1993.
23. P. Thompson and A. Toga. A surface-based technique for warping three-dimensional images of the brain. IEEE TMI, 15(4):402–417, 1996.
24. M. Vaillant and C. Davatzikos. Hierarchical matching of cortical features for deformable brain image registration. Proc. IPMI, pages 182–195, June 1999.
25. X. Zeng, L.H. Staib, R.T. Schultz, H. Tagare, L. Win, and J.S. Duncan. A new approach to 3D sulcal ribbon finding from MR images. Proc. MICCAI, pages 148–157, September 1999.
Landmark and Intensity-Based, Consistent Thin-Plate Spline Image Registration

Hans J. Johnson and Gary E. Christensen
Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
[email protected], [email protected]

Abstract. Landmark-based thin-plate spline image registration is one of the most commonly used methods for non-rigid medical image registration and anatomical shape analysis. It is well known that this method does not produce a unique correspondence between two images away from the landmark locations, because interchanging the roles of source and target landmarks does not produce forward and reverse transformations that are inverses of each other. In this paper, we present two new image registration algorithms that minimize the thin-plate spline bending energy and the inverse consistency error, i.e., the error between the forward transformation and the inverse of the reverse transformation. The landmark-based consistent thin-plate spline algorithm registers images given a set of corresponding landmarks, while the intensity-based consistent thin-plate spline algorithm uses both corresponding landmarks and image intensities. Results are presented that demonstrate that using landmark and intensity information to jointly estimate the forward and reverse transformations provides better correspondence than using landmarks or intensity alone.
1 Introduction
There are many image registration algorithms based on matching corresponding landmarks in two images [7]. The thin-plate spline (TPS) image registration technique pioneered by Fred Bookstein [1,4,2] is the most commonly used landmark-driven image registration algorithm. Generalizations of this procedure include Kriging methods [11,10] that use regularization models other than the thin-plate spline model, anisotropic landmark interactions [12], and directed landmarks [3]. The TPS algorithm (see Section 2.2) defines a unique smooth registration from a template image to a target image based on registering corresponding landmarks. Correspondence away from the landmark points is defined by interpolating the transformation with a TPS model. Although TPS interpolation produces a smooth transformation from one image to another, it does not define a unique correspondence between the two images except at the landmark points. This can be seen by comparing the transformation generated by matching a set of source landmarks to a set of target landmarks with the transformation

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 329–343, 2001.
© Springer-Verlag Berlin Heidelberg 2001
generated by matching the target landmarks to the template landmarks. If the correspondence is unique then the forward and reverse transformations will be inverses of one another (see Fig. 1). This is not the case, as shown by the examples in Section 3. In this paper, the idea of consistent image registration [5,9,6] was
Fig. 1. Consistent image registration is based on the principle that the mappings h from T to S and g from S to T define a consistent point-by-point correspondence between the coordinate systems of T and S. Consistency is enforced mathematically by jointly estimating h and g while constraining h and g to be inverse mappings of one another.
combined with the thin-plate spline algorithm to overcome the problem that the forward and reverse transformations generated by the TPS algorithm are not inverses of one another. In the consistent image registration approach, the forward and reverse transformations between two images are jointly estimated subject to the constraints that they minimize the TPS bending energy and that they are inverses of one another. The merger of these two approaches produced a landmark-based consistent TPS (CL-TPS) algorithm and a landmark- and intensity-based consistent TPS (CLI-TPS) algorithm. The CL-TPS algorithm (see Section 2.3) provides a means to estimate a consistent pair of forward and reverse transformations given a set of corresponding points. The CLI-TPS algorithm (see Section 2.4) combines both landmark and intensity information to estimate a consistent pair of forward and reverse transformations.
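The inverse consistency idea can be illustrated numerically: if g is the inverse of h, composing the two maps returns every grid point to itself. A rough sketch, assuming displacement fields stored as pixel-unit arrays and using nearest-neighbour sampling for brevity (the paper's formulation is continuous and more careful):

```python
import numpy as np

def compose_error(u, w):
    """Point-wise inverse consistency error of a transformation pair.

    u, w: (N1, N2, 2) displacement fields (in pixels) for the forward
    map h(x) = x + u(x) and reverse map g(y) = y + w(y).  For every
    grid point x we form g(h(x)) by sampling w at h(x) (nearest
    neighbour here, for brevity) and measure ||g(h(x)) - x||, which is
    approximately zero on the grid when g = h^{-1}.
    """
    n1, n2, _ = u.shape
    gx, gy = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")
    x = np.stack([gx, gy], axis=-1).astype(float)
    hx = x + u                                     # forward map h(x)
    # sample w at h(x); wrap indices to mimic cyclic boundary conditions
    i = np.mod(np.rint(hx[..., 0]).astype(int), n1)
    j = np.mod(np.rint(hx[..., 1]).astype(int), n2)
    ghx = hx + w[i, j]                             # g(h(x))
    return np.linalg.norm(ghx - x, axis=-1)
```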
2 Methods
2.1 Notation
Figure 1 shows two MRI images with corresponding landmarks that define the notation used throughout the paper. Assume that the template T(y) and target S(x) images are defined on the continuous domain Ω = [0,1)² and were constructed from N₁ × N₂ pixel images using bilinear interpolation. Let qᵢ ∈ Ω and pᵢ ∈ Ω, for i = 1, …, M, define corresponding landmarks in the template T and target S images, respectively. The forward transformation h : Ω → Ω is defined as the mapping that transforms T into the shape of S, and the reverse transformation g : Ω → Ω is defined as the mapping that transforms S into the shape of T. The forward transformation h(x) = x + u(x) defines the mapping from the coordinate system of the template to that of the target, and the reverse transformation g(y) = y + w(y) defines the mapping from the coordinate system of the target to that of the template, for x, y ∈ Ω.¹ The inverse of the forward transformation is defined as h⁻¹(y) = y + ũ(y), and the inverse of the reverse transformation is defined as g⁻¹(x) = x + w̃(x).

2.2 Landmark-Based, Thin-Plate Spline Image Registration with Cyclic Boundary Conditions
The landmark-based TPS image registration algorithm [1,4,2] registers a template image with a target image by matching corresponding landmarks identified in both images. Registration at non-landmark points is accomplished by interpolation such that the overall transformation smoothly maps the template into the shape of the target image. In general, the landmark image registration problem can be thought of as a Dirichlet problem [10] and can be stated mathematically as finding the displacement field u that minimizes the cost function

C = ∫_Ω ||Lu(x)||² dx    (1)

subject to the constraints that u(pᵢ) = qᵢ − pᵢ for i = 1, …, M. The operator L denotes a symmetric linear differential operator [8] and is used to interpolate u away from the corresponding landmarks. When L = ∇², the problem reduces to the TPS image registration problem given by

C = ∫_Ω ||∇²u(x)||² dx = Σᵢ₌₁² ∫_Ω [ (∂²uᵢ(x)/∂x₁²)² + 2(∂²uᵢ(x)/∂x₁∂x₂)² + (∂²uᵢ(x)/∂x₂²)² ] dx₁ dx₂    (2)

subject to the constraints that u(pᵢ) = qᵢ − pᵢ for i = 1, …, M.

¹ The mappings h and g are Eulerian coordinate system transformations.
It is well known [1,4,2] that the TPS displacement field u(x) that minimizes the bending energy defined by Eq. 2 has the form

u(x) = Σᵢ₌₁ᴹ ξᵢ φ(x − pᵢ) + Ax + b    (3)
where φ(r) = r² log r and the ξᵢ are 2 × 1 weighting vectors. The 2 × 2 matrix A = [a₁, a₂] and the 2 × 1 vector b define the affine transformation, where a₁ and a₂ are 2 × 1 vectors. The unknown parameters W = [ξ₁, …, ξ_M, a₁, a₂, b]ᵀ are determined by substituting the landmark constraints into Eq. 3 and solving the resulting equations. Let φᵢ,ⱼ = φ(|pᵢ − pⱼ|) and build the matrix
K = [ Φ   Λ
      Λᵀ  O ]    (4)

where Φ is the M × M matrix with entries φᵢ,ⱼ and Λ is the M × 3 matrix whose i-th row is [pᵢ,₁, pᵢ,₂, 1],
and O is a 3 × 3 matrix of zeros. Also, define the (M + 3) × 2 matrix of landmark displacements as D = [d₁, …, d_M, 0, 0, 0]ᵀ, where dᵢ = qᵢ − pᵢ for i = 1, …, M. The equations formed by substituting the landmark constraints into Eq. 3 can be written in matrix form as D = KW. The solution W of this matrix equation is determined by least-squares estimation, since the matrix K is not guaranteed to be of full rank. The TPS interpolant φ(r) = r² log r is derived assuming infinite boundary conditions, i.e., Ω is assumed to be the whole plane R² in Eq. 2. A TPS transformation is truncated at the image boundary when it is applied to an image. This presents a mismatch in boundary conditions at the image edges when comparing forward and reverse transformations between two images. It also implies that a TPS transformation is not a one-to-one and onto mapping between two image spaces. To overcome this problem, and to match the cyclic boundary conditions assumed by the intensity-based consistent image registration algorithm [5,6], we use the following procedure to approximate cyclic boundary conditions for the TPS algorithm. Figure 2 illustrates the concept of cyclic boundary conditions for the landmark TPS registration problem. Cyclic boundary conditions imply a toroidal coordinate system such that the left-right and top-bottom boundaries of the domain Ω are mapped together. Modifying the boundary conditions in this manner causes an infinite number of interactions between landmarks for a given finite set of landmark points. Panel (b) shows two such interactions between landmark points p₁ and p₂: one within the domain Ω and another between adjacent domains. It is not practical to solve Eq. 4 with the resulting infinite-dimensional matrix, so the cyclic boundary conditions are approximated by replicating the landmark locations in the eight adjacent domains, as shown in panel (b) of Fig. 2.
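The construction of K and the least-squares solve can be sketched as follows. This is a generic 2D TPS fit under the stated formulation, not the authors' code; the `reg` parameter is our addition for numerical safety:

```python
import numpy as np

def tps_fit(p, q, reg=0.0):
    """Solve the 2D thin-plate spline that maps landmarks p onto q.

    p, q: (M, 2) arrays of source and target landmarks in [0, 1)^2.
    Returns the stacked parameters W = [xi_1..xi_M, a1, a2, b],
    obtained by least squares since K may be rank deficient.
    """
    M = p.shape[0]
    r2 = np.sum((p[:, None, :] - p[None, :, :]) ** 2, axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        phi = np.where(r2 > 0, 0.5 * r2 * np.log(r2), 0.0)  # r^2 log r
    lam = np.hstack([p, np.ones((M, 1))])                   # affine part Lambda
    K = np.zeros((M + 3, M + 3))
    K[:M, :M] = phi + reg * np.eye(M)
    K[:M, M:] = lam
    K[M:, :M] = lam.T
    D = np.vstack([q - p, np.zeros((3, 2))])                # landmark displacements
    W, *_ = np.linalg.lstsq(K, D, rcond=None)
    return W

def tps_displacement(x, p, W):
    """Evaluate u(x) = sum_i xi_i phi(|x - p_i|) + A x + b at points x (N, 2)."""
    M = p.shape[0]
    r2 = np.sum((x[:, None, :] - p[None, :, :]) ** 2, axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        phi = np.where(r2 > 0, 0.5 * r2 * np.log(r2), 0.0)
    return phi @ W[:M] + x @ W[M:M + 2] + W[M + 2]
```

At the landmarks, `tps_displacement(p, p, W)` reproduces the prescribed displacements q − p up to the least-squares residual.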
This provides a good approximation to cyclic boundary conditions, since the kernel function φ(r) = r² log r causes interactions between landmarks to decrease rapidly as the distance between landmarks increases.

Fig. 2. Diagrams describing the coordinate system and points used to ensure that the resulting displacement field demonstrates continuous cyclic boundary conditions. The left panel is a depiction of the toroidal coordinate system. The right panel shows the layout of the points used to solve the TPS with approximate circular boundaries.

In our tests, there were significant differences between the transformations found using infinite and cyclic boundary conditions, but there was nearly no difference in terms of the magnitude of the fiducial landmark errors. The major difference between the two sets of boundary conditions was in the location of the maximum inverse consistency error: it was located on the image boundaries in the case of infinite boundary conditions, while it was away from the boundaries in the case of cyclic boundary conditions (see Fig. 4).
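The eight-neighbour replication used to approximate cyclic boundary conditions is straightforward to sketch (the function name and array layout are ours):

```python
import numpy as np

def replicate_landmarks(p, q):
    """Approximate cyclic boundary conditions for a TPS fit on [0,1)^2.

    Each landmark pair is copied into the eight domains adjacent to
    [0,1)^2, so landmarks near one edge interact with landmarks near
    the opposite edge, as they would on a torus.  Returns (9M, 2)
    arrays; the displacements q - p are unchanged by the replication.
    """
    offsets = [(dx, dy) for dx in (-1.0, 0.0, 1.0) for dy in (-1.0, 0.0, 1.0)]
    p_rep = np.concatenate([p + np.array(o) for o in offsets])
    q_rep = np.concatenate([q + np.array(o) for o in offsets])
    return p_rep, q_rep
```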
2.3 Landmark Consistent Thin-Plate Spline Registration
The CL-TPS image registration problem is solved by minimizing the cost function

C = ρ ∫_Ω ( ||Lu(x)||² + ||Lw(x)||² ) dx + χ ∫_Ω ( ||u(x) − w̃(x)||² + ||w(x) − ũ(x)||² ) dx
  + Σᵢ₌₁ᴹ ( ζᵢ ||pᵢ + u(pᵢ) − qᵢ||² + ζᵢ ||qᵢ + w(qᵢ) − pᵢ||² ).    (5)
The first integral of the cost function defines the bending energy of the TPS for the displacement fields u and w associated with the forward and reverse transformations, respectively. The second integral is called the inverse consistency constraint (ICC) and is used to enforce that the forward and reverse transformations are inverses of one another. The third term of the cost function defines the correspondence between the normalized landmarks and is minimized when
pᵢ + u(pᵢ) = qᵢ and qᵢ + w(qᵢ) = pᵢ for i = 1, …, M. The constants ρ, χ, and ζᵢ define the relative importance of each term of the cost function.

Equation 5 must be minimized numerically, since the inverse consistency constraint is a function of the inverse-forward h⁻¹(x) = x + ũ(x) and inverse-reverse g⁻¹(x) = x + w̃(x) transformations. To do this, Eq. 5 is discretized. Define the landmark locations p̄ᵢ = [N₁pᵢ,₁, N₂pᵢ,₂]ᵀ and q̄ᵢ = [N₁qᵢ,₁, N₂qᵢ,₂]ᵀ to be the landmark coordinates in the space of the original N₁ × N₂ pixel images. Let Ωd = {(n₁, n₂) | 0 ≤ n₁ < N₁; 0 ≤ n₂ < N₂; n₁, n₂ integers} represent the discrete lattice of indices associated with the pixel coordinates of the discrete images T and S. Let Ωd′ = Ωd \ {p̄₁, …, p̄_M} and Ωd″ = Ωd \ {q̄₁, …, q̄_M} represent the points of Ωd excluding the target and template landmark points, respectively. The discrete version of Eq. 5 is given by

C = (ρ / N₁N₂) Σ_{n∈Ωd′} ||Lud[n]||² + (ρ / N₁N₂) Σ_{n∈Ωd″} ||Lwd[n]||²
  + (χ / N₁N₂) Σ_{n∈Ωd} ( ||ud[n] − w̃d[n]||² + ||wd[n] − ũd[n]||² )
  + Σᵢ₌₁ᴹ ( ζᵢ ||pᵢ + ud[p̄ᵢ] − qᵢ||² + ζᵢ ||qᵢ + wd[q̄ᵢ] − pᵢ||² )    (6)

where hd[n] = h(n/N) = n/N + u(n/N) = n/N + ud[n] is the discrete forward transformation and gd[n] = g(n/N) = n/N + w(n/N) = n/N + wd[n] is the discrete reverse transformation. The notation n/N is defined as the 2 × 1 column vector [n₁/N₁, n₂/N₂]ᵀ for n ∈ Ωd.
The last term of Eq. 6 places constraints on the displacement fields u and w at the landmark locations. This term effectively produces a soft constraint at the landmarks, so that there does not have to be exact correspondence. The first two summations of Eq. 6 place constraints on the displacement fields u and w at each point of the discretized domain Ωd except at the landmark locations. These terms penalize large derivatives of the displacement fields at all of the non-landmark points, which effectively interpolates a smooth displacement field between the landmark points. The discrete displacement fields are defined to have the form

ud[n] = Σ_{k∈Ωd} µ[k] e^{j<n,θ[k]>}   and   wd[n] = Σ_{k∈Ωd} η[k] e^{j<n,θ[k]>}    (7)

for n ∈ Ωd, where the basis coefficients µ[k] and η[k] are 2 × 1 complex-valued vectors and θ[k] = [2πk₁/N₁, 2πk₂/N₂]ᵀ. The basis coefficients have complex conjugate symmetry, i.e., µ[k] = µ*[N − k] and η[k] = η*[N − k]. The notation <·,·> denotes the dot product of two vectors, so that <n, θ[k]> = 2πk₁n₁/N₁ + 2πk₂n₂/N₂.
The forward and reverse Fourier-series-parameterized displacement fields are initialized with the TPS solution found by Eq. 3 using

µ[k] = Σ_{n∈Ωd} ud[n] e^{−j<n,θ[k]>}   and   η[k] = Σ_{n∈Ωd} wd[n] e^{−j<n,θ[k]>}    (8)

where ud[n] = u(n/N) and wd[n] = w(n/N) are given by Eq. 3 for the forward and reverse transformations, respectively. The minimizer of Eq. 5 is determined by gradient descent.
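Because θ[k] matches the standard DFT frequencies, the initialization of Eq. 8 is an unnormalized 2D DFT of the TPS displacement field, which NumPy provides directly. A sketch (we fold the 1/(N₁N₂) normalization into the synthesis step, as NumPy's `ifft2` does):

```python
import numpy as np

def fourier_init(ud):
    """Initialize the Fourier basis coefficients from a displacement field.

    ud: (N1, N2, 2) displacement field sampled on the pixel grid.  The
    unnormalized DFT below matches Eq. 8 term by term; because ud is
    real, mu[k] has the conjugate symmetry mu[k] = mu*[N - k].
    """
    return np.fft.fft2(ud, axes=(0, 1))   # sum_n ud[n] exp(-j<n, theta[k]>)

def fourier_synthesize(mu):
    """Recover ud[n] from the coefficients (Eq. 7, up to DFT normalization)."""
    return np.fft.ifft2(mu, axes=(0, 1)).real
```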
2.4 Intensity-Based Consistent Thin-Plate Spline Registration with Landmark Thin-Plate Spline Initialization
The landmark- and intensity-based consistent registration algorithm generalizes the consistent image registration presented in [5,9,6] to include landmark constraints. It is based on minimizing the cost function

C = σ ∫_Ω ( |T(h(x)) − S(x)|² + |S(g(x)) − T(x)|² ) dx
  + ρ ∫_Ω ( ||Lu(x)||² + ||Lw(x)||² ) dx + χ ∫_Ω ( ||u(x) − w̃(x)||² + ||w(x) − ũ(x)||² ) dx    (9)
subject to the constraints that u(pᵢ) = qᵢ − pᵢ and w(qᵢ) = pᵢ − qᵢ for i = 1, …, M. The intensities of T and S are assumed to be scaled between 0 and 1. The first integral of the cost function defines the cumulative squared-error similarity cost between the transformed template T(h(x)) and the target image S(x), and between the transformed target S(g(y)) and the template image T(y). To use this similarity function, the images T and S must correspond to the same imaging modality, and they may require pre-processing to equalize their intensities. This term defines the correspondence between the template and target images as the forward and reverse transformations h and g, respectively, that minimize the squared-error intensity differences between the images. The second integral is used to regularize the forward and reverse displacement fields u and w, respectively; this term is minimized for TPS transformations. The third integral is the inverse consistency constraint and is minimized when the forward and reverse transformations h and g are inverses of each other. The last term is the landmark constraint that keeps the landmarks aligned. The constants σ, ρ, χ, and ζᵢ define the relative importance of each term of the cost function. As in the previous section, the cost function in Eq. 9 must be discretized in order to minimize it numerically. The forward and reverse transformations h and g and their associated displacement fields u and w are parameterized by the discrete Fourier series defined by Eq. 7. The basis coefficients µ[k] and η[k] of the forward and reverse displacement fields are initialized with the result of the CL-TPS algorithm. The discretized version of Eq. 9 is then minimized using gradient descent as described in [5,6].
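The symmetric similarity term can be sketched on the discrete grid; the index-array representation of h and g below is a nearest-neighbour stand-in of our own, not the paper's continuous formulation:

```python
import numpy as np

def similarity_cost(T, S, h_idx, g_idx):
    """Discrete version of the symmetric squared-error term of Eq. 9.

    T, S: images with intensities scaled to [0, 1].  h_idx, g_idx:
    integer index arrays of shape (N1, N2, 2) giving, for each pixel,
    where to sample the other image (a nearest-neighbour stand-in for
    the continuous transformations h and g).
    """
    T_def = T[h_idx[..., 0], h_idx[..., 1]]   # T(h(x))
    S_def = S[g_idx[..., 0], g_idx[..., 1]]   # S(g(y))
    return np.sum((T_def - S) ** 2) + np.sum((S_def - T) ** 2)
```

With identity index arrays and identical images the cost is zero, which is the fixed point the gradient descent drives toward.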
3 Results
3.1 Landmark Registration
The eight corresponding landmarks shown in Fig. 3 will be used to demonstrate the landmark-based consistent TPS (CL-TPS) algorithm. In this example, the four inner landmarks correspond to the four outer landmarks, and the four corner landmarks in both images correspond to each other. In the 100 × 100 pixel images, the inner landmarks lie at (34,34), (34,66), (66,34), and (66,66) and the outer landmarks at (24,24), (24,76), (76,24), and (76,76). The forward transformation h is defined as the transformation, in Eulerian coordinates, that maps the four inner points to the four outer points, causing an expansion of the grid in the center of the image. The reverse transformation g maps the outer points to the inner points, causing a contraction of the grid in the center of the image.
Fig. 3. The locations of the local displacements at the landmark points for the forward and reverse transformations of images with 100 × 100 pixels, and application of the TPS deformation fields to uniformly spaced grids for the forward and reverse transformations.
The top row of Fig. 4 shows the locations and magnitudes of the inverse errors after application of TPS interpolation to the landmarks in the forward and reverse directions. In these images, B and D point to landmark locations in the forward and reverse transformations, respectively, B′ and D′ point to locations adjacent to landmarks, and A and C point to non-landmark locations. The inverse consistency errors associated with each of these points are listed in the tables to the right of the images. The inverse consistency error at the landmark points is nominal both with and without enforcing the inverse consistency constraint (ICC). The bottom row of Fig. 4 shows that the ICC reduces the inverse consistency error uniformly across the displacement fields. The ICC has the least effect on inverse consistency errors at points in the neighborhood of landmarks. A pair of transformations is point-wise consistent if the composite function g(h(xᵢ)) maps each point xᵢ to itself. Any deviation from this identity mapping is a point-wise consistency error. By applying this composite mapping to a uniformly spaced grid, one can visualize the magnitude, location, and direction of the point-wise inconsistencies, as shown in Fig. 5. The left
Landmark and Intensity Consistent TPS Image Registration Inv. Consistency Err. Inv. Consistency Err. ||g(y) − h−1 (y)|| ||h(x) − g −1 (x)|| D B
Inv. Consistency Err. 5.0
C
A 0.00
D B
337
0.01
C
A 0.00
Label A B B C D D
Point Pixel Err. (10,50) 5.0 (24,76) 0.008 (24,77) 0.27 (20,40) 3.9 (34,66) 0.008 (34,67) 0.33
Label A B B C D D
Point Pixel Err. (10,50) 0.003 (24,76) 0.003 (24,77) 0.014 (20,40) 0.005 (34,66) 0.001 (34,67) 0.018
Fig. 4. The left and center panels are the inverse errors due to the forward and reverse transformation, respectively. The right panels are tables listing the fiducial errors associated with selected image points. The top row and bottom rows are the inverse consistency errors associated with TPS interpolation and CL-TPS, respectively.
panel shows that there is a considerable amount of inverse error in the TPS interpolant. The right panel shows that application of the inverse consistency constraint has reduced the point-wise consistency error considerably. Table 1 reports that the CL-TPS algorithm reduced the maximum and average inverse consistency errors by factors of 277 and 740, respectively, compared to the TPS algorithm. The trade-off for this gain was that the average fiducial error increased by a factor of 2, but it is still small relative to the pixel size. The Jacobian error, calculated as ½|min{Jac(h)} − 1/max{Jac(g)}| + ½|min{Jac(g)} − 1/max{Jac(h)}|, provides an indirect measure of the inconsistency between the forward and reverse transformations. The Jacobian error is zero if the forward and reverse transformations are inverses of one another, but the converse is not true. Notice that the Jacobian error was five times smaller for the CL-TPS algorithm than for the TPS algorithm.

3.2 Landmark and Intensity Registration
In this section we investigate the use of landmark constraints in intensity-based registration. Corresponding 2D slices of 64 × 80 pixels, with 4-millimeter isotropic pixels, taken from a set of MRI brain acquisitions, were used in this experiment. A set of 41 corresponding landmarks was manually defined, as shown in Fig. 1.
Fig. 5. Deformed grids showing the error between the forward and reverse transformations estimated with the landmark-based TPS algorithm (left panel) and the CL-TPS algorithm (right panel). The grids were deformed by the transformation constructed by composing the forward and reverse transformations, i.e., g(h(x)). Ideally, the composition of the forward and reverse transformations is the identity mapping, which produces no distortion of the grid, as in the right panel.
In the first of four experiments, the set of landmark points is used to perform the landmark TPS registration as in Section 3.1. The second experiment used the CL-TPS algorithm to register the two images. The third experiment is initialized with the results of the CL-TPS, but adds the image intensity as a driving force for the CLI-TPS registration. In each of the consistent registrations the ICC, landmark, TPS, and similarity constraints are imposed by iterative estimation of the Fourier series parameters, for a total of 2000 iterations. In practice only the lowest 18 harmonics, 8 and 10 in the x and y directions respectively, of the Fourier series parameters are estimated. The final experiment is a CI-TPS registration and uses no landmark information in the estimation of the transformation parameters.

Table 1. Comparison of thin-plate spline image registration with and without the inverse consistency constraint (ICC). The columns are the experiment, ICC, transformation direction (TD), average fiducial error (AFE) in pixels, maximum inverse error (MIE) in pixels, average inverse error (AIE) in pixels, minimum Jacobian value (MJ), inverse of the maximum Jacobian value (IJ), and Jacobian error (JE).

Experiment    ICC  TD       AFE      MIE    AIE     MJ    IJ    JE
Landmark TPS  No   Forward  0.00004  5.0    2.2     0.25  0.43  0.13
                   Reverse  0.00004  4.3    2.0     0.24  0.32
CL-TPS        Yes  Forward  0.0008   0.012  0.0031  0.29  0.33  0.025
                   Reverse  0.0008   0.011  0.0027  0.28  0.29

It should be noted
that for this experiment, estimation of the Fourier parameters is limited to the first 2 harmonics initially, and is incremented to include additional harmonics after every 250 iterations. This has the effect of performing a global registration first and progressively becoming more local with each harmonic parameter added to the estimation. This approach allows for a much faster convergence of the parameters. It should also be observed that this approach stagnated in a local minimum after 7 harmonics were estimated, and that the estimation of additional parameters had only marginal effects on the results. The results were computed on a 667 MHz Alpha 21264 processor. The landmark-based TPS registration took about 4 seconds to compute, the CL-TPS and CLI-TPS registrations took approximately 12 minutes each, and the CI-TPS registration took less than 3 minutes. Figure 6 is a comparison
Fig. 6. Comparison of deformed images to originals when TPS initialization, inverse consistency, landmark, and similarity constraints are imposed. The left panels are the original images, the center panels are the deformed images, and the right panels are the absolute difference images between the original and deformed images.
of deformed images to originals for the CL-TPS and CLI-TPS registrations. The left panels are the original images, the center panels are the deformed images, and the right panels are the absolute difference images between the original and deformed images. These images demonstrate that the deformed images closely match the appearance of the original images. From Table 2 it can be seen that the two consistent intensity-based registrations obtain almost identical average intensity differences both with and without the landmark constraints. The deformed and
absolute difference images for the consistent intensity-based registration are indistinguishable from those in Fig. 6.

Fig. 7. Jacobian images that show the locations of deformation for both CL-TPS (left two panels) and CLI-TPS (right two panels). Bright pixels represent expansion and dark pixels represent contraction. (The displayed Jacobian values range over [0.56, 1.7] for CL-TPS and [0.44, 2.1] for CLI-TPS.)

The image intensity differences between the original and deformed images for the intensity-based consistent TPS registrations with and without the landmark constraints are similar, but the transformations used in attaining the deformed images have different properties. Figure 7 displays the Jacobian values at each pixel location for the landmark-based consistent TPS with and without the intensity constraints. The magnitude of local deformation is encoded such that bright pixels represent expansion and dark pixels represent contraction. Notice that combining the intensity information with the landmark information provides additional local deformation compared to using the landmark information alone. The inverse error images for the intensity-based consistent TPS registrations with and without the landmark constraints are shown in Fig. 8. Notice that the inverse consistency error is distributed uniformly across the image domain in both cases. However, the magnitude of the inverse consistency error is one third as large in the landmark-constrained case. Table 2 summarizes representative statistics from each of the experiments. From this table, the TPS and CL-TPS results show that the addition of the ICC can improve the inverse consistency of the transformations with only a small degradation of the fiducial landmark matching. It should be noted that the inverse consistency error in the TPS initialization tends to be larger as one moves away from the landmarks, and that the inverse consistency error associated with TPS interpolation can be decreased by manually defining more points of correspondence. The CLI-TPS uses intensity information to refine the transformation resulting from the CL-TPS.
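The Jacobian maps of Fig. 7 and the Jacobian error used in Section 3.1 can be sketched with finite differences (a generic approximation of our own, not the authors' implementation):

```python
import numpy as np

def jacobian_map(h):
    """Jacobian determinant of a discrete 2D transformation h: (N1, N2, 2).

    Central differences approximate the partial derivatives; values > 1
    indicate local expansion and values in (0, 1) local contraction.
    """
    dh_dx = np.gradient(h, axis=0)   # (N1, N2, 2): d h / d x1
    dh_dy = np.gradient(h, axis=1)   # (N1, N2, 2): d h / d x2
    return dh_dx[..., 0] * dh_dy[..., 1] - dh_dx[..., 1] * dh_dy[..., 0]

def jacobian_error(jac_h, jac_g):
    """Indirect inconsistency measure from Section 3.1: zero whenever the
    forward and reverse transformations are inverses (but not conversely)."""
    return 0.5 * abs(jac_h.min() - 1.0 / jac_g.max()) \
         + 0.5 * abs(jac_g.min() - 1.0 / jac_h.max())
```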
Landmark and Intensity Consistent TPS Image Registration

Fig. 8. Images that display the magnitude of inverse consistency errors, ||h(x) − g^{-1}(x)|| and ||g(y) − h^{-1}(y)||, for both CLI-TPS (left two panels, grey scale 0.0mm to 0.84mm) and CI-TPS (right two panels, grey scale 0.0mm to 3.0mm).

Table 2 demonstrates that the CI-TPS registration has the smallest average intensity difference, but the largest fiducial landmark errors. The CLI-TPS has a marginally larger average intensity difference, but much smaller fiducial landmark errors. It should be noted that the large number of landmarks used in the CLI-TPS registration limits the effect of the intensity driving force in neighborhoods of the landmarks. In practice, when the landmark points are more sparse, the intensity driving force plays a more important role.
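The inverse consistency error reported here is ||h(x) − g^{-1}(x)||. As a sketch (our own illustration), when g is invertible the closely related residual ||g(h(x)) − x|| can be evaluated without explicitly inverting g:

```python
import numpy as np

def inverse_consistency_error(h, g, points):
    """Pointwise residual ||g(h(x)) - x||. If g were exactly h^{-1} the
    residual would vanish; any remainder measures how far the forward and
    reverse transformations are from being inverses of each other."""
    mapped = np.array([g(h(x)) for x in points])
    return np.linalg.norm(mapped - points, axis=1)

# Toy example: h scales by 2, g scales by 0.49 (an imperfect inverse),
# so g(h(x)) = 0.98 x and the error is 0.02 ||x||.
h = lambda x: 2.0 * x
g = lambda x: 0.49 * x
pts = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 4.0]])
err = inverse_consistency_error(h, g, pts)
```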
4 Summary and Conclusions
This work presented two new image registration algorithms based on thin-plate spline regularization: landmark-based, consistent thin-plate spline (TPS) image registration and landmark- and intensity-based consistent TPS image registration. It was shown that the inverse consistency error between the forward and reverse transformations generated from the traditional TPS algorithm could be minimized using the landmark-based, consistent TPS algorithm. Inverse consistency error images showed that the largest error occurred away from the landmark points for the traditional TPS algorithm and near the landmark points for the consistent TPS algorithm. The average inverse consistency error was reduced by 100 times in the inner-to-outer dots example and by greater than 15 times in the MRI brain example. The maximum inverse consistency error was reduced by almost 500 times for the inner-to-outer dots example but only 10 times for the MRI brain example. The Jacobian error was reduced from 0.13 to 0.025 for the inner-to-outer dots example and from 0.1 to 0.0 for the MRI brain example. The trade-off for better inverse consistency was that the fiducial error increased by over ten times in both examples. Using landmark and intensity information with the MRI brain example gave a better correspondence between the images than using the landmark information alone, as shown by a decrease in the average intensity difference. It was shown that using landmark and intensity information gave a better registration of the MRI brain images than just using the intensity information for the following measures: the average fiducial error, Jacobian error, maximum inverse error, and average inverse error.

Table 2. Comparison between registering two 64 × 80 pixel MRI images with 41 landmarks, as shown in Fig. 1, using the Landmark-based TPS, CL-TPS, CLI-TPS, and CI-TPS registration algorithms. The table columns are the 2D MRI experiment, landmark initialization (LI), inverse consistency constraint (ICC), similarity constraint (SC), transformation direction (TD), average fiducial error (AFE) in pixels, maximum inverse error (MIE) in pixels, average inverse error (AIE) in pixels, average intensity difference (AID), minimum Jacobian value (MJ), inverse of the maximum Jacobian value (IJ), and the Jacobian error (JE).

2D MRI Exp.   LI  ICC SC  TD      AFE   MIE  AIE   AID    MJ   IJ   JE
Landmark TPS  Yes No  No  Forward 0.060 9.2  1.1   0.014  0.41 0.67 0.1
                          Reverse 0.060 7.2  1.2   0.012  0.61 0.55
CL-TPS        Yes Yes No  Forward 1.3   0.48 0.066 0.011  0.56 0.66 0.0
                          Reverse 1.4   0.56 0.062 0.0096 0.66 0.56
CLI-TPS       Yes Yes Yes Forward 1.4   0.72 0.10  0.0081 0.44 0.66 0.25
                          Reverse 1.5   0.84 0.10  0.0067 0.65 0.48
CI-TPS        No  Yes Yes Forward 3.3   2.4  0.33  0.0049 0.34 0.56 0.125
                          Reverse 3.6   3.0  0.31  0.0049 0.47 0.48
Acknowledgments We would like to thank John Haller and Michael W. Vannier of the Department of Radiology, The University of Iowa for providing the MRI data. This work was supported in part by the NIH grant NS35368 and a grant from the Whitaker Foundation.
Validation of Non-rigid Registration Using Finite Element Methods

Julia A. Schnabel (1), Christine Tanner (1), Andy D. Castellano Smith (1), Martin O. Leach (2), Carmel Hayes (2), Andreas Degenhard (2), Rodney Hose (3), Derek L.G. Hill (1), and David J. Hawkes (1)

(1) Computational Imaging Science Group, Radiological Sciences, Guy's Hospital, Guy's, King's and St. Thomas' School of Medicine, London SE1 9RT, UK. [email protected]
(2) CRC Clinical MR Research Group, The Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey SM2 5PT, UK
(3) Clinical Sciences Division, Department of Medical Physics and Clinical Engineering, Royal Hallamshire Hospital, University of Sheffield, Sheffield S10 2JF, UK
Abstract. We present a novel validation method for non-rigid registration using a simulation of deformations based on biomechanical modelling of tissue properties. This method is tested on a previously developed non-rigid registration method for dynamic contrast enhanced Magnetic Resonance (MR) mammography image pairs [1]. We have constructed finite element breast models and applied a range of displacements to them, with an emphasis on generating physically plausible deformations which may occur during normal patient scanning procedures. From the finite element method (FEM) solutions, we have generated a set of deformed contrast enhanced images against which we have registered the original dynamic image pairs. The registration results have been successfully validated at all breast tissue locations by comparing the recovered displacements with the biomechanical displacements. The validation method presented in this paper is an important tool to provide biomechanical gold standard deformations for registration error quantification, which may also form the basis to improve and compare different non-rigid registration techniques for a diversity of medical applications.
1 Introduction
Validation of registration, in particular non-rigid registration, is an on-going research topic, as there is often no ground truth available against which a registration can be compared. There are several approaches to address this problem:

Robustness: Testing the bias sensitivity of a registration algorithm, by using different starting estimates or by adding noise or inhomogeneity to the images, can help to establish the measurement precision, although not the accuracy, of a registration method [2].

Consistency: Widely used for intra-modality rigid body registration applications such as serial MRI [3], consistency checks assess the capability of the registration method to find circular transformations, but can be sensitive to bias. Furthermore, many non-rigid registration methods do not generate invertible transformations, which complicates this approach.

Visual assessment: Registration results can be qualitatively assessed by viewing difference images (for intra-modality registration), contour overlays (for inter-modality registration), alternate pixel displays, or by comparing anatomical landmarks. These approaches have been applied to rigid registration [4], and since they involve inspection of the entire volume domain of the image pair, they can be extended to non-rigid registration [5]. However, visual assessment is an observer-dependent validation method.

Gold standard: Extrinsic markers such as bone-implanted markers, stereotactic frames, or cadaver implants can be used as a gold standard to quantify residual registration errors. Apart from being invasive, such an approach suffers from the error involved in the localization of fiducials. For inter-modality rigid registration, bone-implanted markers have been successfully used to compare different state-of-the-art registration methods [6], but are not applicable to non-rigid registration of soft, deforming tissues. Another obstacle is the difficulty of applying controlled deformations.

Simulation: A ground truth can be simulated by misregistering an image pair by a known amount, and by quantifying the subsequent residual registration error. For non-rigid registration, this can be based on displacing a set of landmarks and interpolating the results with thin-plate splines [8]. However, such simulations are in general not very realistic, since they do not take the underlying tissue properties into account, so that different tissues can undergo non-plausible deformations.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 344–357, 2001. © Springer-Verlag Berlin Heidelberg 2001
We propose a novel approach to validation of non-rigid registration in which misregistration is simulated using biomechanical models, whose solutions yield physically plausible deformations. We apply this validation method to contrast enhanced MR mammography and a non-rigid registration algorithm previously developed for this purpose [1]. The proposed method would however also be applicable to other medical applications and non-rigid registration methods.
2 Methods

2.1 Materials
We have previously acquired dynamic sequences of Gd-DTPA enhanced MR mammography of patients with confirmed breast cancer on a Philips 1.5T Gyroscan ACS2, using a fast gradient echo sequence with TR=12ms, TE=5ms, 35° flip angle, FOV=350mm and axial slice direction [5]. A dynamic sequence of one scan before, and five scans after, contrast injection of Gd-DTPA was acquired. For the purpose of this study, we have selected the pre-contrast scan and the second post-contrast scan of three patient cases. These cases were selected because there was little subject motion between acquisitions. The images are of dimension 256×256×25 with in-plane voxel size of 1.37mm×1.37mm (patients 1
and 2) and 1.48mm × 1.48mm (patient 3), with 4.2mm slice thickness. We have extracted a volumetric region of interest containing one breast for each patient. Manual segmentations into fat and fibroglandular tissue have been obtained from the contrast enhanced images, and the tumour segmentations were obtained from the subtraction images. Fig. 1 shows 2D example slices through the ROIs of the image pairs, the subtraction images as well as the segmentations.
Fig. 1. 2D slices through pre- and post-contrast enhanced MR breast image volumes, subtraction images, and tissue segmentations. From top to bottom: patients 1–3. From left to right: pre-contrast image, post-contrast image, subtraction image (post – pre), and segmentation into fat (dark grey), fibroglandular tissue (light grey), and tumour (white). The subtraction images show little motion between pre- and post-contrast scans.
2.2 Non-rigid Registration
In previous work, an algorithm for non-rigid registration for 3D contrast enhanced MR mammography was developed by Rueckert et al. [1] and was shown to significantly improve the image quality of the subtraction images for a cohort of 54 patient cases [5]. This algorithm is based on free-form deformations (FFDs) using B-splines and normalized mutual information (NMI) as a voxel-similarity measure [9]. It models global patient motion using an affine transformation, followed by modelling local motion by deforming an underlying mesh of B-spline control points. The combined global and local motion model at each image point (x, y, z) is expressed as
    T(x, y, z) = T_global(x, y, z) + T_local(x, y, z)    (1)
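The voxel-similarity measure driving the local model is normalized mutual information, NMI = (H(A) + H(B)) / H(A, B). A self-contained histogram-based estimate is sketched below (our own illustration, not the implementation of [1]; the bin count is an arbitrary choice):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint intensity
    histogram. NMI is 2 for identical images and approaches 1 for
    statistically independent ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()          # joint probability estimate
    pa = pab.sum(axis=1)               # marginal of A
    pb = pab.sum(axis=0)               # marginal of B

    def entropy(p):
        p = p[p > 0]                   # 0 log 0 := 0
        return -np.sum(p * np.log(p))

    return (entropy(pa) + entropy(pb)) / entropy(pab.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
nmi_self = normalized_mutual_information(img, img)               # identical images
nmi_rand = normalized_mutual_information(img, rng.random((64, 64)))
```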
The flexibility and computational complexity of the local motion model are related to the control point spacing. The algorithm makes no assumption about the underlying material properties of the different tissue types in the breast. Recently, we have found that this algorithm can cause volume changes in regions of enhanced lesions in MR mammography [10]. These volume changes may occur due to the similar intensity of fatty tissue and contrast enhanced fibroglandular tissue, but are physically not plausible given the incompressibility of the breast tissue and the dynamic acquisition at a single examination time. It is therefore interesting to study the behaviour of this algorithm using simulations of patient motion in contrast enhanced MR mammography.

2.3 Finite Element Modelling of the Breast
The modelling of biomechanical tissue properties has gained considerable interest in a range of clinical and research applications. Finite Element Methods (FEMs) can be used to model the inter-relation between different tissue types by applying displacements or forces. This can help to predict mechanical or physical deformations during surgical procedures, and to derive and quantify tissue properties from observed deformations. For example, FEMs for brain modelling have been investigated for model updating of image guided surgery procedures [11], and have been integrated into physically based non-rigid registration methods [12,13]. For mammography, FEMs have been explored for predicting mechanical deformations during biopsy procedures [14], for generating compressions similar to X-ray mammography in MR mammography [15], and for improving the reconstruction of elastic properties in elastography [16,17,18]. In order to simulate plausible breast deformations, we have constructed isotropic, linear and nearly incompressible elastic models incorporating skin surface, fat, and tumorous tissue for the patient cases shown in Fig. 1. The remaining tissues, such as fibroglandular and ductile tissue, are other important breast structures which can have nonlinear behaviour. However, since the aim of this study is to obtain approximate breast models which can produce plausible deformations, rather than to build optimal models, these tissues have been modelled for the sake of simplicity as fatty tissue. Using published values [19], the Young's moduli were set to 1 kPa for the fatty tissue, and to 16.5 kPa for the carcinoma. A Young's modulus of 88 kPa was chosen for the skin, representing a linear approximation of the nonlinear stress-strain curve for abdominal skin parallel to the cranio-caudal median investigated by Park [20] for strains up to 30%. For near-incompressibility of the tissue, the Poisson's ratio was set to 0.495.
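For readers implementing such a model outside ANSYS, the (E, ν) pairs above translate into the Lamé constants of linear elasticity; note how λ dominates μ as ν approaches 0.5, which is what makes the tissue nearly incompressible. A small Python check (our own illustration):

```python
def lame_parameters(E, nu):
    """Convert Young's modulus E and Poisson's ratio nu to the Lamé
    constants of linear elasticity:
    lambda = E*nu / ((1 + nu)(1 - 2*nu)),  mu = E / (2(1 + nu))."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

# Young's moduli in kPa from the published values cited above; nu = 0.495.
materials = {"fat": 1.0, "tumour": 16.5, "skin": 88.0}
lame = {name: lame_parameters(E, 0.495) for name, E in materials.items()}
```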
We have obtained 3D triangulations of the tumours and fatty tissue using standard marching cubes and decimation techniques provided by the Surface Evolver package [21], with minimal edge lengths of 12mm (fat) and 2mm (tumour). Using the ANSYS FEM software package [22], the triangulations were meshed into isoparametric tetrahedral structural solids (elements). The elements consist of four corner nodes and an additional node in the middle of each edge. Each node
Fig. 2. Wire-frame renderings of FEM models for patient breast images shown in Fig. 1. From left to right: Patients 1–3. The tumours have finer meshing than surrounding tissue.
has three associated degrees of freedom (DOF) which define translation in the nodal x-, y- and z-directions. Each element has a quadratic displacement behaviour, and provides nonlinear material properties as well as consistent tangent stiffness for large strain applications. The skin was modelled by adding shell elements consisting of eight nodes onto the surface of the fatty tissue. Fig. 2 shows wireframe renderings of the FEM models. The models were solved using ANSYS for a range of displacements:

Regional displacement simulates a uniform surface displacement by translating a set of surface nodes.

Point puncture displaces a single surface node, which simulates a very localized displacement, e.g. as occurring during a biopsy without any breast fixation.

One-sided contact displaces surface nodes on one side onto a plane, which simulates the deformation of the breast when moving against the scanner RF coil.

Two-sided contact models the deformation when the breast is fixed at both sides, by displacing surface nodes onto a plane on each side.

In all cases, the nodes adjacent to the deep pectoral fascia are fixed, assuming no movement of the pectoralis muscle and pectoral fascia.

2.4 Deformation Simulation Based on Finite Element Solutions
Using the FEM solutions of the three breast models, the average displacement of the whole breast volume and within individual tissues is obtained from the displacement vectors u_i = (dx, dy, dz) associated with each node n_i = (x, y, z):

    E_FEM = (1/N) Σ_{i=1}^{N} ||u_i||    (2)

where N is the number of nodes in the model or tissue. Tab. 1 lists the average and maximum displacements for all patient solutions at the nodes within the whole breast as well as only in the tumorous tissue. The maximum displacements are around 10mm, mostly occurring in fatty tissues close to the skin surface.

Table 1. Average (maximum) FEM node displacements E_FEM and interpolation errors E_I in mm, computed at FEM nodes over total breast volumes (total) and in individual tumour tissue.

FEM                Patient  E_FEM total      E_FEM tumour     E_I total        E_I tumour
regional              1     2.6306 (10.27)   2.1877 (2.35)    0.0143 (0.50)    0.0553 (0.07)
                      2     6.5811 (10.42)   7.6625 (9.15)    0.0390 (1.67)    0.1010 (0.18)
                      3     6.3715 (10.50)   6.9902 (7.52)    0.0540 (3.40)    0.1027 (0.13)
point puncture        1     0.4628 (10.51)   0.3546 (0.38)    0.0024 (0.27)    0.0086 (0.01)
                      2     1.1586 (10.15)   1.3503 (1.89)    0.0065 (0.26)    0.0174 (0.03)
                      3     0.8326 (10.37)   0.8801 (0.95)    0.0087 (0.29)    0.0050 (0.12)
one-sided contact     1     0.8333 (10.21)   0.5581 (0.60)    0.0034 (0.43)    0.0109 (0.02)
                      2     1.7461 (10.01)   2.1061 (2.73)    0.0099 (0.47)    0.0258 (0.05)
                      3     1.0325 (10.01)   0.9976 (1.08)    0.0079 (2.12)    0.0147 (0.02)
two-sided contact     1     2.0039 (11.71)   2.2992 (2.53)    0.0070 (0.86)    0.0247 (0.08)
                      2     1.5969 (10.02)   1.8249 (2.57)    0.0082 (1.10)    0.0185 (0.04)
                      3     2.0389 (11.80)   2.0784 (2.46)    0.0158 (2.39)    0.0309 (0.04)

To obtain dense displacements, we have used a scattered data interpolation technique described in [23]. This approach is based on a coarse-to-fine B-spline hierarchy whose sum approaches the desired interpolation, and which can be reformulated into one equivalent B-spline interpolator T_I. Ideally, T_I maps all displaced FEM nodes, n_i + u_i, back to the original node positions n_i, with an inverse displacement of −u_i. However, due to the approximating nature of B-splines, a residual error remains at the node positions:

    E_I = (1/N) Σ_{i=1}^{N} ||n_i − T_I(n_i + u_i)||    (3)

Tab. 1 lists the residual interpolation error E_I for all FEM solutions within the breast volumes and the tumours, based on B-spline hierarchies of decreasing mesh spacing of 20mm, 10mm, 5mm, 2.5mm, down to 1.25mm. Overall errors are below 0.06mm, with maximum errors between 0.26mm and 3.4mm mainly occurring near the displaced skin surface. Maximum errors within the tumours are below 0.18mm, with an average error between 0.01 and 0.1mm, with higher errors mainly occurring for patients 2 and 3, whose tumours lie close to the displaced skin surface. Fig. 3 shows the deformed post-contrast images of the three patients based on the interpolated displacement fields.
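Equations (2) and (3) amount to averaging vector norms over the mesh nodes. In the sketch below (our own illustration), a least-squares affine map stands in for the paper's hierarchical B-spline interpolator T_I; because the test displacement field used here is itself affine, the fit is exact and the equation-(3) residual is essentially zero:

```python
import numpy as np

def mean_max_displacement(u):
    """Equation (2): average (and maximum) node displacement magnitude."""
    mags = np.linalg.norm(u, axis=1)
    return mags.mean(), mags.max()

def interpolation_residual(nodes, u, T_I):
    """Equation (3): residual ||n_i - T_I(n_i + u_i)|| of an interpolator
    that should map displaced nodes back to their rest positions."""
    res = np.linalg.norm(nodes - T_I(nodes + u), axis=1)
    return res.mean(), res.max()

rng = np.random.default_rng(1)
nodes = rng.random((50, 3)) * 100.0      # node positions in mm
u = 0.05 * nodes - 2.0                   # a small affine displacement field

# Stand-in interpolator: a global affine map fitted by least squares,
# mapping displaced positions back to the rest positions.
A = np.hstack([nodes + u, np.ones((50, 1))])
coeffs, *_ = np.linalg.lstsq(A, nodes, rcond=None)
T_I = lambda x: np.hstack([x, np.ones((len(x), 1))]) @ coeffs

e_fem = mean_max_displacement(u)
e_i = interpolation_residual(nodes, u, T_I)   # ~0: the affine fit is exact here
```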
3 Results
To demonstrate the potential of the proposed validation scheme, we have used it to test the non-rigid registration algorithm described in section 2.2 using a control point resolution of 10mm, which corresponds to the expected maximum displacements imposed by the FEM. We have chosen to deform only the post-contrast images in order to first assess whether the deformation can be retrieved by registering the original post-contrast images to a deformed version of themselves. A more realistic setting, where patient motion or deformation has occurred between pre- and post-contrast scans, is then simulated by registering the original pre-contrast images to the deformed post-contrast images. This approach involves assuming that there was no motion between the original pre- and post-contrast images. This is a reasonable assumption, as the three patient image pairs were selected for having very little discernible deformation. Example 2D slices through the subtraction image volumes before and after registration of the post- and pre-contrast images to the warped post-contrast images are shown in Figs. 4 and 5, respectively. The subtraction images before registration show the considerable amount of deformation imposed by the FEM solutions near to the skin surface, and to a lesser degree within the breast tissues. After registration, the deformation appears to be mostly recovered within the breast tissue, with remaining misregistrations only near the skin surface and at the edge of the field of view (FOV). Note from Fig. 5 that the tumours are visible before registration, but cannot be clearly distinguished from the surrounding bright motion artefacts, and that after registration these artefacts have been mostly removed. Although the registered subtraction images in Fig. 5 are not directly comparable to the original subtraction images in Fig. 1, they appear to be of similar quality.

Fig. 3. Example 2D slices through warped post-contrast image volumes. From top to bottom: patients 1–3. From left to right: regional displacement, point puncture, one-sided and two-sided plate contact. Compare with original post-contrast images in Fig. 1, and subtractions of original post-contrast images in Fig. 4.
Fig. 4. Example 2D slices through subtraction image volumes of post-contrast image volumes from warped post-contrast image volumes before (rows 1, 3, 5) and after (rows 2, 4, 6) non-rigid registration. Rows 1-2: patient 1. Rows 3-4: patient 2. Rows 5-6: patient 3. From left to right: regional displacement, point puncture, one-sided contact, two-sided contact.
Fig. 5. Example 2D slices through subtraction image volumes of pre-contrast image volumes from warped post-contrast image volumes before (rows 1, 3, 5) and after (rows 2, 4, 6) non-rigid registration. Rows 1-2: patient 1. Rows 3-4: patient 2. Rows 5-6: patient 3. From left to right: regional displacement, point puncture, one-sided contact, two-sided contact.
In addition to qualitative visual assessment, the registration error can be quantified either at the FEM node positions (analogously to equation (3)), or over the entire interpolated displacement field within the warped breast volume I*_post. The latter approach is adopted here, as it is more consistent in the sense that it takes the interpolation error E_I into account. The residual registration error for a given transformation T_R is then defined for all tissues as:

    E_R = (1/|I*_post|) Σ_{x ∈ I*_post} ||T_I(x) − T_R(x)||    (4)
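With T_I and T_R represented as dense displacement fields sampled on the voxel grid, equation (4) reduces to averaging pointwise vector norms. A minimal sketch (our own illustration; the array layout and mask argument are assumptions):

```python
import numpy as np

def residual_registration_error(disp_gold, disp_recovered, mask=None):
    """Equation (4): mean (and maximum) of ||T_I(x) - T_R(x)|| over the
    warped volume, with the two transformations given as dense displacement
    fields of shape (Z, Y, X, 3) and an optional boolean tissue mask."""
    diff = np.linalg.norm(disp_gold - disp_recovered, axis=-1)
    if mask is not None:
        diff = diff[mask]
    return diff.mean(), diff.max()

# Toy check: a recovered field that is off by a constant 0.5mm in x.
gold = np.zeros((4, 4, 4, 3))
rec = gold.copy()
rec[..., 0] += 0.5
err_mean, err_max = residual_registration_error(gold, rec)
```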
Table 2. Average (maximum) registration errors E_R in mm after non-rigid registration of post- and pre-contrast images to warped post-contrast images. The registration errors were evaluated over the whole warped breast volume (total) as well as within individual tumour tissue. See also Fig. 6 for percentile errors.

FEM                Post/Pre  Patient  E_R total        E_R tumour
regional           Post         1     0.6987 (6.78)    0.2416 (0.31)
                                2     0.2955 (1.50)    0.1571 (0.34)
                                3     0.5516 (7.83)    0.2919 (0.58)
                   Pre          1     0.9406 (4.10)    0.3306 (0.38)
                                2     0.5762 (4.39)    1.3184 (1.96)
                                3     0.8151 (5.11)    0.5006 (1.19)
point puncture     Post         1     0.2129 (7.04)    0.0825 (0.12)
                                2     0.1461 (6.01)    0.0859 (0.19)
                                3     0.1698 (6.57)    0.1112 (0.20)
                   Pre          1     0.8208 (3.31)    0.4174 (0.49)
                                2     0.4352 (3.65)    0.8989 (1.21)
                                3     0.5894 (2.95)    0.4682 (0.80)
one-sided contact  Post         1     0.3726 (6.30)    0.1180 (0.15)
                                2     0.2158 (3.21)    0.1460 (0.30)
                                3     0.3332 (8.80)    0.1248 (0.20)
                   Pre          1     0.9896 (4.92)    0.4046 (0.48)
                                2     0.5011 (3.90)    1.1921 (1.86)
                                3     0.7341 (4.65)    0.6773 (1.16)
two-sided contact  Post         1     0.4838 (7.66)    0.1362 (0.40)
                                2     0.3630 (8.37)    0.1514 (0.48)
                                3     0.7450 (10.39)   0.1873 (0.37)
                   Pre          1     1.0850 (5.20)    0.3467 (0.64)
                                2     0.6252 (7.83)    1.4500 (2.22)
                                3     0.9703 (9.23)    0.6204 (1.25)
The residual registration errors are listed in Tab. 2 for the whole breast volume and tumorous tissues, and error percentiles are illustrated in Fig. 6. For both post- and pre-contrast registrations, the average error is about 1mm, and in some cases as low as 0.08mm in tumorous tissue and 0.15mm in the overall tissue. The maximum errors are still comparatively high, ranging between 1.5mm and 10mm, but have been found to be very sparse and localized near the edge of the FOV and the skin surface. The maximum error within the tumours is between 0.12mm and 2.22mm, with the larger errors mainly occurring for patient 2, where the tumour lies close to the displaced skin surface. The overall errors are slightly lower for the post-contrast images, which was to be expected as they have been registered to a warped version of themselves, but a slightly higher maximum error remains. The fact that the pre-contrast images have been registered to enhanced images of different intensities, and possibly different noise and a small amount of patient movement in the original scans, is reflected by the slightly higher overall registration error and the higher maximum error within the tumours.

Fig. 6. Mean displacements and registration errors in mm between 5% and 95% of the error distribution, computed over the whole breast volume for the patient cases. Left: FEM displacements. Centre: post-contrast registration error. Right: pre-contrast registration error. FEM 1: regional displacement. FEM 2: point puncture. FEM 3: one-sided contact. FEM 4: two-sided contact. Tab. 2 lists average and maximum errors.
4 Discussion and Conclusion
We have developed a novel validation tool for non-rigid registration using Finite Element Methods (FEMs), and have tested it on three contrast enhanced MR mammography image pairs using an existing non-rigid registration algorithm developed for that application [1]. FEM solutions were obtained for a range of different displacements, yielding physically plausible displacements at each node of the patient models. Dense displacement fields were obtained using scattered data interpolation, and the original post-contrast scans were deformed accordingly. The original image pairs, which had little motion between them, were
then registered to the deformed post-contrast images, and the residual registration error was quantified at all breast tissue locations. The non-rigid registration algorithm was successful in recovering most tissue deformations generated by the FEMs, which is reflected by overall low registration errors for both post- and pre-contrast image registrations. The average performance on the post-contrast images was slightly better, which was expected as in this case the images were registered to a deformed copy of themselves. The registration errors that we were able to identify with this validation technique could be used to help improve this particular non-rigid registration algorithm.

The validation method has scope for further improvement and extension. For example, the FEMs constructed in this work treat all breast tissue as linear, isotropic, homogeneous, and incompressible, which only holds for strains of less than 1% [18]. In further work we will investigate the incorporation of fibroglandular and ductile tissue with non-linear elastic and anisotropic behaviour (such as Cooper's ligaments), as well as non-linear properties of skin and cancerous tissue, for which a range of in-vitro quantifications exists [24,25]. The use of a B-spline interpolator to obtain dense displacements leads to residual approximation errors, which can be avoided by computing instead the continuous displacement field for all points within each tetrahedral element via the node displacements u_j of the ten element nodes, weighted by their quadratic shape functions S(j) [26]:

    u(x, y, z) = Σ_{j=0}^{10} S(j) u_j    (5)

However, this only allows deformation of image regions within the mesh. Moreover, the overall low residual interpolation errors, down to sub-voxel accuracy as listed in Tab. 1, are an indication of the adequate performance of the B-spline interpolator. Errors occur mainly at the skin surface close to the edge of the FOV, which can be increased for further improvement. Another aspect of the B-spline interpolator is that it may favour spline-based registration algorithms like the one we have used. Whereas the interpolation is based on scattered data displacements and a dense coarse-to-fine B-spline hierarchy, the non-rigid registration algorithm used only a single B-spline of 10mm resolution (corresponding to the maximum amount of expected deformation), and is based on maximizing the voxel similarity of the image pairs. Finally, in very localized regions we have observed a surprisingly poor performance of the non-rigid registration algorithm for the post-contrast images in comparison to the pre-contrast images. Since our deformation simulation does not change the noise field, and the non-rigid registration algorithm is based on measures of entropy (NMI), its performance may well be affected if two images have the unrealistic property of the same underlying noise field. A solution could be to add separate Rician-distributed noise fields to the images, which is currently a topic of investigation.

In summary, the FEM based validation tool for non-rigid registration was shown to be successful for quantifying breast motion recovery, and enables us
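As an illustration of per-element interpolation, the sketch below uses linear barycentric weights on a 4-node tetrahedron; this is a first-order simplification of the quadratic 10-node shape functions S(j) in equation (5), not the authors' formulation:

```python
import numpy as np

def barycentric_interpolate(verts, u_nodes, p):
    """Displacement at a point p inside a tetrahedron with vertices `verts`
    (4x3) and nodal displacements `u_nodes` (4x3), using linear barycentric
    weights w (w >= 0 inside the element, sum(w) = 1)."""
    # Solve the 4x4 system: sum(w_i * v_i) = p and sum(w_i) = 1.
    M = np.vstack([verts.T, np.ones(4)])
    w = np.linalg.solve(M, np.append(p, 1.0))
    return w @ u_nodes

# Unit tetrahedron with the nodal field u(x) = x; linear interpolation
# must reproduce it exactly, e.g. at the centroid (0.25, 0.25, 0.25).
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
u_nodes = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
u_c = barycentric_interpolate(verts, u_nodes, np.array([0.25, 0.25, 0.25]))
```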
to detect, localize and quantify registration errors. It is not restricted to any particular non-rigid registration method and, given that other anatomy such as the brain or liver can be modelled by FEMs, can be straightforwardly extended to other medical applications as well.
Acknowledgements

The authors would like to thank Dr. Luke Sonoda from CISG, and Dr. Erica Denton and Dr. Sheila Rankin from Guy's Hospital for access to the image database, Dr. Frans Gerritsen and Marcel Quist from Philips Medical Systems, and Dr. Daniel Rueckert from Imperial College London for useful discussions, and Dr. Philippe Batchelor from CISG and Justin Penrose from the University of Sheffield for their help in the model construction. The work on biomechanical tissue modelling using ANSYS was funded by EPSRC, and segmentations were carried out using ANALYZE. JAS has received funding from Philips Medical Systems, EasyVision Advanced Development. CT and ADCS have received funding from EPSRC grants GR/M52779 and GR/M47294, respectively.
References

1. D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach, and D. J. Hawkes. Non-rigid registration using Free-Form Deformations: Application to breast MR images. IEEE Transactions on Medical Imaging, 18(8):712–721, 1999.
2. C. Studholme, D. L. G. Hill, and D. J. Hawkes. Automated 3D registration of MR and PET brain images by multi-resolution optimisation of voxel similarity measures. Medical Physics, 24:25–35, 1997.
3. M. Holden, D. L. G. Hill, E. R. E. Denton, J. M. Jarosz, T. C. S. Cox, T. Rohlfing, J. Goodey, and D. J. Hawkes. Voxel similarity measures for 3D serial MR brain image registration. IEEE Transactions on Medical Imaging, 19(2):94–102, 2000.
4. J. M. Fitzpatrick, D. L. G. Hill, Y. Shyr, J. West, C. Studholme, and C. R. Maurer Jr. Visual assessment of the accuracy of retrospective registration of MR and CT images of the brain. IEEE Transactions on Medical Imaging, 17:571–585, 1998.
5. E. R. E. Denton, L. I. Sonoda, D. Rueckert, S. C. Rankin, C. Hayes, M. Leach, D. L. G. Hill, and D. J. Hawkes. Comparison and evaluation of rigid and non-rigid registration of breast MR images. Journal of Computer Assisted Tomography, 23(5):800–805, 1999.
6. J. West et al. Comparison and evaluation of retrospective intermodality brain image registration techniques. Journal of Computer Assisted Tomography, 21(4):554–566, 1997.
7. K. C. Chu and B. K. Rutt. Polyvinyl alcohol cryogel: an ideal phantom material for MR studies of arterial flow and elasticity. Magnetic Resonance in Medicine, 37:314–319, 1997.
8. K. Rohr, M. Fornefett, and H. S. Stiehl. Approximating thin-plate splines for elastic registration: integration of landmark errors and orientation attributes. In A. Kuba, M. Samal, and A. Todd-Pokropek, editors, Information Processing in Medical Imaging: Proc. 16th International Conference (IPMI'99), volume 1613 of Lecture Notes in Computer Science, pages 252–265. Springer-Verlag, 1999.
9. C. Studholme, D. L. G. Hill, and D. J. Hawkes. An overlap entropy measure of 3D medical image alignment. Pattern Recognition, 32:71–86, 1999.
10. C. Tanner, J. A. Schnabel, D. Chung, M. J. Clarkson, D. Rueckert, D. L. G. Hill, and D. J. Hawkes. Volume and shape preservation of enhancing lesions when applying non-rigid registration to a time series of contrast enhancing MR breast images. In S. L. Delp, A. M. DiGioia, and B. Jaramaz, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2000, volume 1935 of Lecture Notes in Computer Science, pages 327–337. Springer-Verlag, 2000.
11. M. I. Miga, K. D. Paulsen, J. M. Lemery, S. D. Eisner, A. H. Hartov, F. E. Kennedy, and D. W. Roberts. Model-updated image guidance: initial clinical experiences with gravity-induced brain deformation. IEEE Transactions on Medical Imaging, 18(10):866–874, 1999.
12. A. Hagemann, K. Rohr, H. S. Stiehl, U. Spetzger, and J. M. Gilsbach. Biomechanical modelling of the human head for physically based, nonrigid image registration. IEEE Transactions on Medical Imaging, 18(10):875–884, 1999.
13. M. Ferrant, S. K. Warfield, A. Nabavi, F. A. Jolesz, and R. Kikinis. Registration of 3D intraoperative MR images of the brain using a finite element biomechanical model. In S. L. Delp, A. M. DiGioia, and B. Jaramaz, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2000, volume 1935 of Lecture Notes in Computer Science, pages 19–28. Springer-Verlag, 2000.
14. F. S. Azar, D. N. Metaxas, and M. D. Schnall. A finite element model of the breast for predicting mechanical deformations during biopsy procedures. In IEEE Workshop on Mathematical Methods in Biomedical Image Analysis, pages 38–45. IEEE, 2000.
15. A. Samani, J. Bishop, M. J. Yaffe, and D. B. Plewes. Biomechanical 3D finite element modeling of the human breast using MRI data. Technical report, Dept. of Medical Biophysics, Sunnybrook and Women's College Health Sciences Centre, Toronto, Canada, 2000. Submitted.
16. R. Sinkus, J. Lorenzen, D. Schrader, M. Lorenzen, M. Dargatz, and D. Holz. High-resolution tensor MR elastography for breast tumour detection. Physics in Medicine and Biology, 45:1649–1664, 2000.
17. D. B. Plewes, J. Bishop, A. Samani, and J. Sciaretta. Visualization and quantification of breast cancer biomechanical properties with magnetic resonance elastography. Physics in Medicine and Biology, 45:1591–1610, 2000.
18. M. M. Doyley, P. M. Meaney, and J. C. Bamber. Evaluation of an iterative reconstruction method for quantitative elastography. Physics in Medicine and Biology, 45:1521–1539, 2000.
19. A. Sarvazyan, D. Goukassian, E. Maevsky, and G. Oranskaja. Elastic imaging as a new modality of medical imaging for cancer detection. In Proc. International Workshop on Interaction of Ultrasound with Biological Media, pages 69–81, 1994.
20. J. B. Park. Biomaterials Science and Engineering. Plenum Press, 1984.
21. K. Brakke. The Surface Evolver. Experimental Mathematics, 1(2):141–165, 1992.
22. ANSYS. http://www.ansys.com.
23. S. Lee, G. Wolberg, and S. Y. Shin. Scattered data interpolation with multilevel B-splines. IEEE Transactions on Visualization and Computer Graphics, 3(3):228–244, 1997.
24. T. A. Krouskop, T. M. Wheeler, F. Kallel, B. S. Garra, and T. Hall. Elastic moduli of breast and prostate tissues under compression. Ultrasonic Imaging, 20:260–274, 1998.
25. P. S. Wellman. Tactile Imaging. PhD thesis, Harvard University, 1999.
26. A. J. Davies. The Finite Element Method: A First Approach. Oxford University Press, 1980.
A Linear Time Algorithm for Computing the Euclidean Distance Transform in Arbitrary Dimensions

Calvin R. Maurer, Jr.¹, Vijay Raghavan², and Rensheng Qi³

¹ Department of Neurosurgery, Stanford University, Stanford, CA
[email protected]
² Department of Computer Science, Vanderbilt University, Nashville, TN
³ Department of Biomedical Engineering, University of Rochester, Rochester, NY
Abstract. A sequential algorithm is presented for computing the Euclidean distance transform of a k-dimensional binary image in time linear in the total number of voxels. The algorithm may be of practical value since it is relatively simple and easy to implement and it is relatively fast (not only does it run in linear time but the time constant is small).
1 Introduction
A k-dimensional (k-D) binary image is a function I from the elements (voxels) of an n1 × · · · × nk array to {0, 1}. Voxels of value 0 and 1 are called background and feature (or foreground) voxels, respectively. For a given distance function, the distance transform (DT) of an image I is an assignment to each voxel x of the distance between x and the closest feature voxel in I. The closest feature transform (FT) of an image I is an assignment to each voxel x of the identity of the closest feature voxel in I. It is clear that a DT can be computed from an FT in time linear in the total number of voxels N = n1 × · · · × nk. DTs are widely used in medical image processing. For example, in surface-based image registration, the DT of a binary image in which the feature voxels represent a surface provides a convenient and efficient method for precomputing and storing point-to-surface distances. DTs have also been used in non-rigid image registration, morphological image segmentation, volume visualization, and shape-based interpolation. Sometimes the Euclidean DT (EDT) is used, but often, even when an exact EDT is desired, an approximation of the EDT such as the chamfer DT is used because it is substantially faster to compute. For some applications an exact EDT is required. For example, various approximations of the EDT have been used to generate skeletons of binary objects, but only the exact EDT can produce an accurate skeleton that is reversible, rotationally invariant, and minimal. The 3-D EDT has recently been used to generate skeletons of targets for treatment planning and optimization in multi-isocentric stereotactic radiosurgery. Breu et al. [1] presented an algorithm for computing the EDT of a 2-D image in O(N) time. This method first computes the Euclidean FT in O(N) time by

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 358–364, 2001.
© Springer-Verlag Berlin Heidelberg 2001
constructing and sampling the intersection of the Voronoi diagram whose sites are the feature voxels with each row of the image. Then the EDT is computed from the FT. In this paper, we first generalize this approach and present an algorithm for computing the FT of a k-D binary image in O(N) time. This algorithm can be used for a wide class of distance functions, including the Euclidean distance as well as the Lp and chamfer metrics. We then present an algorithm for computing the EDT directly in arbitrary dimensions that runs in O(N) time. We believe that this is the first such algorithm for which the correctness and time complexity are formally verified. This algorithm may be of practical value since it is relatively simple and easy to implement and it is relatively fast.
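As a point of reference for what the linear-time algorithm improves upon, the FT and DT definitions above can be written down directly as a brute-force implementation. The sketch below is ours, not from the paper; it runs in O(N · nF) time (nF = number of feature voxels) and is useful mainly as a correctness baseline for a fast implementation:

```python
import itertools
import math

def brute_force_ft(image, shape):
    """Closest feature transform by exhaustive search: for each voxel x,
    return the feature voxel (value 1) minimizing the Euclidean distance.
    `image` maps k-tuples of coordinates to {0, 1}."""
    voxels = list(itertools.product(*(range(n) for n in shape)))
    features = [x for x in voxels if image[x] == 1]
    return {x: min(features,
                   key=lambda f: sum((a - b) ** 2 for a, b in zip(x, f)),
                   default=None)
            for x in voxels}

def brute_force_dt(image, shape):
    """Distance transform derived from the FT in one linear pass."""
    ft = brute_force_ft(image, shape)
    return {x: math.sqrt(sum((a - b) ** 2 for a, b in zip(x, f)))
               if (f := ft[x]) is not None else math.inf
            for x in ft}
```

Representing a k-D image as a dictionary keyed by coordinate tuples lets the same code work in any dimension, mirroring the paper's dimension-agnostic formulation.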
2 Distance Functions and Properties
We are interested in the Lp distance metric d(x, y) = (Σ_{i=1}^{k} |xi − yi|^p)^{1/p}, where x and y are k-tuples, xi and yi are the i-th coordinates of x and y, and 1 ≤ p ≤ ∞. The L1, L2, and L∞ metrics are known as the Manhattan or city-block, Euclidean, and chessboard distances, respectively. We are specifically interested in the Euclidean distance. We are more generally interested in distance functions d : ℝ^k × ℝ^k → ℝ that satisfy the following properties:

Property 1. Positive definiteness. d(x, y) = 0 iff x = y.

Property 2. Symmetry. d(x, y) = d(y, x) for any x and y.

Property 3. Triangle inequality. d(x, z) ≤ d(x, y) + d(y, z) for any x, y, z.

Property 4. Monotonicity. Let x and y be two k-tuples that differ only in the values of the d-th coordinates (i.e., xi = yi for i ≠ d). For concreteness, assume that xd < yd. For any u and v such that either (a) d(x, u) ≤ d(x, v) and d(y, v) < d(y, u) or (b) d(x, u) < d(x, v) and d(y, v) ≤ d(y, u) holds, ud < vd.

Property 5. Let x and y be two k-tuples that differ only in the values of the d-th coordinates (i.e., xi = yi for i ≠ d). Let u and v be two k-tuples with identical values of the d-th coordinates (i.e., ud = vd). If d(x, u) ≤ d(x, v), then d(y, u) ≤ d(y, v).

The Lp metric satisfies Properties 1–4. Property 5 follows from the contrapositive of Property 4. Thus, the Lp metric also satisfies Property 5.
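These properties can be spot-checked numerically for the Euclidean case. The following sketch (ours, purely illustrative) verifies Properties 1–3 and Property 5 exhaustively on a small 2-D grid:

```python
import itertools

def lp_dist(x, y, p=2):
    # d(x, y) = (sum_i |x_i - y_i|^p)^(1/p); p = 2 gives the Euclidean metric
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

pts = list(itertools.product(range(3), repeat=2))

# Property 1 (positive definiteness) and Property 2 (symmetry)
assert all((lp_dist(x, y) == 0) == (x == y) for x in pts for y in pts)
assert all(lp_dist(x, y) == lp_dist(y, x) for x in pts for y in pts)

# Property 3 (triangle inequality), with a small floating-point tolerance
assert all(lp_dist(x, z) <= lp_dist(x, y) + lp_dist(y, z) + 1e-9
           for x in pts for y in pts for z in pts)

# Property 5: x and y differ only in coordinate d = 1; u and v agree in it
d = 1
for x, y in itertools.product(pts, pts):
    if x[0] == y[0] and x[1] != y[1]:
        for u, v in itertools.product(pts, pts):
            if u[d] == v[d] and lp_dist(x, u) <= lp_dist(x, v):
                assert lp_dist(y, u) <= lp_dist(y, v) + 1e-9
```

For the L2 metric Property 5 in fact holds exactly, since with ud = vd the comparison reduces to comparing the parts of the squared distances that do not involve the d-th coordinate.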
3 The FT Algorithm
Our approach is based on the idea of dimensionality reduction and partial Voronoi diagram construction. At each dimension level, the FT is determined by constructing directly the intersection of the Voronoi diagram whose sites are the feature voxels with each row of the image. This construction is performed efficiently by using the FT in the next lower dimension. The algorithm takes as input the k-D binary image I and outputs the FT F . The DT can easily be computed from F . For each voxel x in I, F (x) is the closest feature voxel in I. It is helpful to define the binary subimage Id,id+1 ,... ,ik or simply Id if id+1 , . . . , ik are understood, formed from I by holding the d+1, . . . , k
Procedure ComputeFT(d, jd+1, . . . , jk)
 1. if d = 1 then                          /* Compute FT in d − 1 dimensions */
 2.   for i1 ← 1 to n1 do
 3.     if I(i1, j2, . . . , jk) = 1 then
 4.       F(i1, j2, . . . , jk) ← (i1, j2, . . . , jk)
 5.     else
 6.       F(i1, j2, . . . , jk) ← φ
 7.     endif
 8.   endfor
 9. else
10.   for id ← 1 to nd do
11.     ComputeFT(d − 1, id, jd+1, . . . , jk)
12.   endfor
13. endif                                  /* Compute FT in d dimensions */
14. for i1 ← 1 to n1 do
15.   · · ·
16.   for id−1 ← 1 to nd−1 do
17.     VoronoiFT(d, i1, . . . , id−1, jd+1, . . . , jk)
18.   endfor
19.   · · ·
20. endfor
coordinates constant. It is also helpful to define the FT Fd, where for each voxel x in Id, Fd(x) is the closest feature voxel in Id. Obviously Ik = I and Fk = F. We define F0(x) = x if I(x) = 1, otherwise F0(x) = φ, where φ is the null set. The Voronoi diagram VS of a set of sites S = {fi}, i = 1, . . . , nS, consists of a set of disjoint Voronoi cells VS = {Cfi}, i = 1, . . . , nS. The Voronoi cell Cf is the set of all points whose closest site is f, together with the cell boundary formed by points equidistant from f and one or more other sites. The site f is also known as the Voronoi center of Cf. The FT of a binary image can be thought of as a discretized version of the Voronoi diagram whose sites are the feature voxels of the image. If the complete Voronoi diagram is constructed, the FT can be easily computed by querying the Voronoi diagram. In this algorithm, we do not construct the complete Voronoi diagram. Instead, at each dimension level, we construct the intersection of the Voronoi diagram with each row of the image. Let Xd = {xi = (j1, . . . , jd−1, i, jd+1, . . . , jk)}, i = 1, . . . , nd, denote the set of nd voxels in I formed by varying the d-th coordinate from 1 to nd and fixing all other coordinates. Let Rd denote the “row” (the continuous line) running through the set of voxels Xd. There are N/nd such rows. Let Sd denote the set of feature voxels in Id. Let Vd∗ = VSd ∩ Rd denote the intersection of the Voronoi diagram VSd, whose sites are the set of feature voxels Sd, with the row Rd. Let S′d = {Fd−1(xi)} denote the set of closest feature voxels in the next lower dimension for each voxel xi on the row Rd. Clearly S′d ⊆ Sd.

Remark 1. Let f = Fd−1(x), where x is a voxel on the row Rd. Clearly the feature voxel f belongs to the set S′d. Let g be any other feature voxel belonging
Procedure VoronoiFT(d, j1, . . . , jd−1, jd+1, . . . , jk)
 1. k ← 0                                  /* Construct partial Voronoi diagram */
 2. for i ← 1 to nd do
 3.   xi ← (j1, . . . , jd−1, i, jd+1, . . . , jk)
 4.   if (fi ← F(xi)) ≠ φ then
 5.     if k < 2 then
 6.       k ← k + 1, gk ← fi
 7.     else
 8.       while k ≥ 2 and DeleteFT(gk−1, gk, fi, Rd) do
 9.         k ← k − 1
10.       endwhile
11.       k ← k + 1, gk ← fi
12.     endif
13.   endif
14. endfor
15. if (nS ← k) = 0 then
16.   return
17. endif
18. k ← 1                                  /* Query partial Voronoi diagram */
19. for i ← 1 to nd do
20.   while k < nS and d(xi, gk) > d(xi, gk+1) do
21.     k ← k + 1
22.   endwhile
23.   F(xi) ← gk
24. endfor
to the set Sd such that f and g have identical values of the d-th coordinate (i.e., fd = gd). By Property 5, all points on the row Rd are closer to f than to g, which means that the Voronoi cell for site g does not intersect Rd. Since all feature voxels in the set Sd are either in the set S′d or have the same d-th coordinate as a feature voxel in the set S′d, Vd∗ = VSd ∩ Rd = VS′d ∩ Rd. Thus, to construct Vd∗, it is sufficient to consider the set S′d (rather than the larger set Sd). Let Sd∗ denote the subset of S′d that are the centers of Voronoi cells in Vd∗, i.e., that are the centers of cells in VS′d that intersect Rd. Clearly Sd∗ ⊆ S′d ⊆ Sd.

Remark 2. Let f and g be feature voxels belonging to the set Sd∗. Let x and y be voxels on the row Rd that lie in the Voronoi cells Cf and Cg, respectively. By Property 4, if xd < yd, then fd < gd. Also, if fd < gd, then xd < yd. Thus Vd∗ is a set of disjoint line segments Vd∗ = {Cf∗i}. If the set of Voronoi centers (feature voxels) Sd∗ is sorted by the d-th coordinate, the associated Voronoi cells are similarly ordered. That is, as the row Rd is traversed from low values of the d-th coordinate to high values, Cf∗ is visited before Cg∗ iff f precedes g in the ordered set Sd∗. To compute Fd for each voxel on the row Rd, it is not necessary to actually construct Vd∗ = {Cf∗i}. It is sufficient to determine the ordered set Sd∗ and visit each voxel by traversing the row in d-th coordinate order.

Remark 3. Let xuv denote the point on the line Rd that is equidistant from u and v, i.e., d(u, xuv) = d(v, xuv), and let (xuv)d denote the d-th coordinate
of this point. Let u, v, and w be three feature voxels belonging to the set S′d such that ud < vd < wd. By Property 4 and Remark 2, Cv does not intersect Rd if (xuv)d > (xvw)d.

The algorithm for computing the FT F from the binary image I is performed with the initial invocation ComputeFT(k). The algorithm variables I, F, n1, . . . , nk are global variables. The procedure ComputeFT implements dimensionality reduction using recursion. The procedure VoronoiFT constructs and queries the partial Voronoi diagram Vd∗ = VS′d ∩ Rd = VSd∗ ∩ Rd. The algorithm variable F contains successively F0, F1, . . . , Fk−1, Fk = F. It contains Fd−1 before the call to VoronoiFT and Fd upon return. As noted in Remark 2, the algorithm does not actually construct Vd∗ but instead determines the ordered set Sd∗ (VoronoiFT, lines 1–14) and queries the diagram (visits each voxel) by traversing the row in d-th coordinate order (lines 18–24). The set Sd∗ = {gk} is constructed from the set S′d = {fi} by deleting those feature voxels in S′d that are the centers of Voronoi cells that do not intersect Rd. As noted in Remark 1, it is sufficient to consider the set S′d = {Fd−1(xi)}. This is the fundamental basis of the dimensionality reduction approach. The set Sd∗ is constructed in lines 1–14. It is initialized with the first two feature voxels of S′d. In the outer loop, additional feature voxels are added from S′d one at a time. In the inner loop, feature voxels that are the centers of Voronoi cells that do not intersect Rd are deleted. This is accomplished with the procedure DeleteFT(u, v, w, Rd), which returns true if (xuv)d > (xvw)d, false otherwise (see Remark 3). Let S̃d = {g1, . . . , gk, fi+1, . . . , fnd} denote an intermediate set of feature voxels during construction. Before entering the outer loop, S̃d = {f1, . . . , fnd} = S′d. It is easy to verify that at the end of the inner loop, VS′d ∩ Rd = VS̃d ∩ Rd. It is also easy to verify that at the end of the inner loop, all Voronoi cells in V{g1,...,gk} intersect Rd. Thus, after exiting the outer loop, S̃d = {g1, . . . , gnS} = Sd∗. In summary, Vd∗ = VSd ∩ Rd = VS′d ∩ Rd = VS̃d ∩ Rd = VSd∗ ∩ Rd.

Initialization of F0 (ComputeFT, lines 2–8) takes O(N) time. At each dimension d, the procedure VoronoiFT is executed for each of the N/nd rows. For each row, construction of Sd∗ takes O(nd) time, since there are at most nd feature voxels in S′d, and each feature voxel is added to and removed from S̃d at most once. This assumes that calculating xuv requires O(1) time. Querying (visiting each voxel by traversing the row) also requires only O(nd) time. Thus, at each dimension, the time complexity is O(nd × N/nd) = O(N), and the algorithm for computing the FT of I runs in O(N) time. Finally, it is clear that the DT of I can be computed from the FT in O(N) time.
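To make the row-by-row construction concrete, here is a sketch of the two-dimensional case in Python (our own transcription, using the squared-EDT specialization described in Sect. 4: the lists gs and hs play the roles of gk and hk, and the deletion test is the integer inequality of Eq. 1). Variable names and the list-of-lists image layout are ours:

```python
INF = float("inf")

def _delete_edt(g_u, g_v, g_w, ud, vd, wd):
    # Eq. (1): the Voronoi cell of site v does not intersect the row iff
    #   c*d2(v,R) - b*d2(u,R) - a*d2(w,R) - a*b*c > 0,
    # where a = vd - ud, b = wd - vd, c = wd - ud = a + b.
    a, b = vd - ud, wd - vd
    c = a + b
    return c * g_v - b * g_u - a * g_w - a * b * c > 0

def _voronoi_edt(g):
    """One row pass (VoronoiEDT): g[i] holds D_{d-1} for the i-th voxel of
    the row; returns D_d along the row."""
    n = len(g)
    gs, hs = [], []          # g_k = d^2(site, row), h_k = in-row coordinate
    for i in range(n):       # construct the partial Voronoi diagram
        if g[i] != INF:
            while len(gs) >= 2 and _delete_edt(gs[-2], gs[-1], g[i],
                                               hs[-2], hs[-1], i):
                gs.pop(); hs.pop()
            gs.append(g[i]); hs.append(i)
    if not gs:
        return g[:]          # no feature voxel influences this row
    out, k = [0] * n, 0
    for i in range(n):       # query the diagram in coordinate order
        while k < len(gs) - 1 and (gs[k] + (hs[k] - i) ** 2
                                   > gs[k + 1] + (hs[k + 1] - i) ** 2):
            k += 1
        out[i] = gs[k] + (hs[k] - i) ** 2
    return out

def edt_squared(image):
    """Squared EDT of a 2-D binary list-of-lists via dimensionality
    reduction: one Voronoi row pass per dimension, O(N) total."""
    rows, cols = len(image), len(image[0])
    D = [[0 if image[i][j] else INF for j in range(cols)] for i in range(rows)]
    for j in range(cols):    # d = 1: rows run along the first coordinate
        col = _voronoi_edt([D[i][j] for i in range(rows)])
        for i in range(rows):
            D[i][j] = col[i]
    for i in range(rows):    # d = 2: rows run along the second coordinate
        D[i] = _voronoi_edt(D[i])
    return D
```

Extending this to k dimensions only requires repeating the row pass along each further axis, exactly as the recursive ComputeFT/ComputeEDT pseudocode does.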
4 The EDT Algorithm
If the distance function is the Euclidean distance, then the procedure DeleteFT can be implemented using only integer arithmetic. The distance between u and xuv can be computed as d²(u, xuv) = d²(u, Rd) + (ud − (xuv)d)², where d²(u, Rd) = Σ_{i≠d} (ui − ri)² is the squared distance between u and the row Rd. Since xuv denotes the point on Rd that is equidistant from u and v, d²(u, xuv) = d²(v, xuv), which can
Procedure ComputeEDT(d, jd+1, . . . , jk)
 1. if d = 1 then                          /* Compute DT in d − 1 dimensions */
 2.   for i1 ← 1 to n1 do
 3.     if I(i1, j2, . . . , jk) = 1 then
 4.       D(i1, j2, . . . , jk) ← 0
 5.     else
 6.       D(i1, j2, . . . , jk) ← ∞
 7.     endif
 8.   endfor
 9. else
10.   for id ← 1 to nd do
11.     ComputeEDT(d − 1, id, jd+1, . . . , jk)
12.   endfor
13. endif                                  /* Compute DT in d dimensions */
14. for i1 ← 1 to n1 do
15.   · · ·
16.   for id−1 ← 1 to nd−1 do
17.     VoronoiEDT(d, i1, . . . , id−1, jd+1, . . . , jk)
18.   endfor
19.   · · ·
20. endfor
be rearranged to obtain (xuv)d = [d²(v, Rd) − d²(u, Rd) + vd² − ud²] / [2(vd − ud)]. A similar expression can be found for (xvw)d, from which it is easy to verify that the inequality (xuv)d > (xvw)d is equivalent to the inequality

c · d²(v, Rd) − b · d²(u, Rd) − a · d²(w, Rd) − abc > 0,    (1)

where a = vd − ud, b = wd − vd, and c = wd − ud = a + b. This inequality requires only eleven integer arithmetic operations to evaluate if the squared distances between the feature voxels u, v, and w and the row Rd are known (e.g., precomputed).

The algorithm in the previous section provides a method for computing the FT of the binary image I. The DT still needs to be computed from the FT. For the Lp metric in general, and the L2 metric in particular, it is possible to compute the DT directly. Let us consider the squared EDT D. For each voxel x in I, D(x) = d²(x, F(x)) is the squared Euclidean distance between x and the closest feature voxel in I. By analogy with the definition of Fd in the previous section, let Dd(x) = d²(x, Fd(x)). We define D0(x) = 0 if I(x) = 1, otherwise D0(x) = ∞. We observe that if u = Fd−1(x), then d²(u, Rd) = d²(x, u) = d²(x, Fd−1(x)) = Dd−1(x). This observation allows us to simply modify the FT algorithm procedures ComputeFT and VoronoiFT to obtain the squared EDT algorithm procedures ComputeEDT and VoronoiEDT. The algorithm for computing the squared EDT D from the binary image I is performed with the initial invocation ComputeEDT(k). The algorithm variable D contains successively D0, D1, . . . , Dk−1, Dk = D. It contains Dd−1 before the call to VoronoiEDT and Dd upon return. In VoronoiEDT, the procedure
Procedure VoronoiEDT(d, j1, . . . , jd−1, jd+1, . . . , jk)
 1. k ← 0                                  /* Construct partial Voronoi diagram */
 2. for i ← 1 to nd do
 3.   xi ← (j1, . . . , jd−1, i, jd+1, . . . , jk)
 4.   if (fi ← D(xi)) ≠ ∞ then
 5.     if k < 2 then
 6.       k ← k + 1, gk ← fi, hk ← i
 7.     else
 8.       while k ≥ 2 and DeleteEDT(gk−1, gk, fi, hk−1, hk, i) do
 9.         k ← k − 1
10.       endwhile
11.       k ← k + 1, gk ← fi, hk ← i
12.     endif
13.   endif
14. endfor
15. if (nS ← k) = 0 then
16.   return
17. endif
18. k ← 1                                  /* Query partial Voronoi diagram */
19. for i ← 1 to nd do
20.   while k < nS and gk + (hk − i)² > gk+1 + (hk+1 − i)² do
21.     k ← k + 1
22.   endwhile
23.   D(xi) ← gk + (hk − i)²
24. endfor
variable fi = Dd−1(xi) = d²(fi, Rd), gk = d²(gk, Rd), and hk is the d-th coordinate of gk. The feature voxel deletion procedure for the squared EDT algorithm is DeleteEDT(d²(u, Rd), d²(v, Rd), d²(w, Rd), ud, vd, wd), which returns true if the inequality in Eq. 1 holds, false otherwise. The algorithm as presented produces the squared EDT for isotropic voxels of unit dimension. All computations can be implemented in integer arithmetic. The output can be scaled and/or square-rooted as necessary. The algorithm can be easily modified to accommodate the weighted EDT, e.g., for medical 3-D images with anisotropic voxel dimensions. The squared EDT algorithm executes substantially faster than the FT algorithm because much of the distance computation necessary for the feature voxel deletion procedure (see Eq. 1) is inherently stored in Dd−1. The execution time of a straightforward implementation of the EDT algorithm on a relatively typical current workstation (Sun Ultra 10 with 440 MHz CPU) was ∼ 1 µs/voxel (∼ 1 Mvoxel/sec) for 3-D images over a wide range of sizes.
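The equivalence between Eq. 1 and a direct comparison of the bisector intersections (xuv)d and (xvw)d can be sanity-checked with exact rational arithmetic. The sketch below is ours (the paper's algebraic rearrangement is the actual proof); it compares the two formulations on random configurations:

```python
import random
from fractions import Fraction

def x_bisector(g_u, g_v, ud, vd):
    # (x_uv)_d = [d^2(v, R_d) - d^2(u, R_d) + vd^2 - ud^2] / [2 (vd - ud)],
    # evaluated exactly as a rational number
    return Fraction(g_v - g_u + vd * vd - ud * ud, 2 * (vd - ud))

def delete_edt(g_u, g_v, g_w, ud, vd, wd):
    # Eq. (1): integer arithmetic only (eleven operations)
    a, b = vd - ud, wd - vd
    c = a + b
    return c * g_v - b * g_u - a * g_w - a * b * c > 0

random.seed(1)
for _ in range(1000):
    ud, vd, wd = sorted(random.sample(range(100), 3))   # ud < vd < wd
    g_u, g_v, g_w = (random.randrange(100) for _ in range(3))
    direct = x_bisector(g_u, g_v, ud, vd) > x_bisector(g_v, g_w, vd, wd)
    assert direct == delete_edt(g_u, g_v, g_w, ud, vd, wd)
```

Using Fraction avoids floating-point rounding in the reference comparison, so any disagreement with the integer predicate would be a real error rather than a numerical artifact.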
References

1. H. Breu, J. Gil, D. Kirkpatrick, and M. Werman. Linear time Euclidean distance transform algorithms. IEEE Trans. Pattern Anal. Mach. Intell., 17:529–533, 1995.
An Elliptic Operator for Constructing Conformal Metrics in Geometric Deformable Models

Christopher Wyatt¹ and Yaorong Ge²

¹ Department of Medical Engineering, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA
[email protected]
² Department of Computer Science, Wake Forest University, Winston-Salem, NC 27109, USA
[email protected]

Abstract. The geometric deformable model (GDM) provides a useful framework for segmentation by integrating the energy minimization concept of classical snakes with the topologically flexible gradient flow. The key aspect of this technique is the image-derived conformal metric for the configuration space. While the theoretical and numerical aspects of the geometric deformable model have been discussed in the literature, the formation of the conformal metric itself has not received much attention. Previous definitions of the conformal metric do not allow the GDM to produce reliable segmentation results in low-contrast or high-blur regions. This paper examines the desired properties of the conformal metric with regard to the image information and proposes an elliptic partial differential equation to construct the metric. Our method produces results similar to other metric definitions in high-contrast regions, but better results in low-contrast, high-blur situations.
1 Introduction
Active contour models for segmentation in medical images have been an intensive area of research in the computer vision community, and have produced two similar approaches for describing the contour movement. The first, physically deformable models (snakes), adjusts the contour to minimize an energy functional using a Lagrangian formulation [9,14]. The second, implicit deformable models [4,13], uses an Eulerian formulation and an implicit representation of the contour using level sets. The main advantages of the implicit deformable models are the topological flexibility and the conceptual link to shape analysis. The work by Caselles et al. [3] and Kichenassamy et al. [11] unites the energy minimization and the implicit form for the contour evolution into the geometric deformable model (GDM). Subsequent analysis [10] and application [17] in both 2D and 3D have shown the approach to be effective for medical image segmentation. The advantages of the GDM over either the physically or the implicit deformable models lie in the definition of the length functional over a Riemannian manifold. This shift from a Euclidean geometry is accomplished by multiplying the distance

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 365–371, 2001.
© Springer-Verlag Berlin Heidelberg 2001
metric by a positive image-derived function called the conformal mapping. The subsequent minimization of the functional leads to a gradient flow equation, similar to that of the implicit model. A result from variational calculus, the Maupertuis principle [5], equates the minimum of the modified length functional and the minimum integral energy defined by the conformal mapping. Extensions of the basic GDM approach have focused on modification of the functional to include other terms or to base the conformal mapping on an edge measure other than the gradient. The actual definition of the conformal mapping has been limited to that proposed in the original derivation with some preprocessing applied. Motivated by the reconstruction of images from edge maps as in [6], we describe a new approach to construction of the conformal mapping and subsequently the conformal metric using an elliptic partial differential equation (PDE). The advantages are improved contour stability over time, reduced dependence on parameters, and the elimination of the contour location bias [12] along high confidence edges. The new method is compared to previous formulations of the conformal mapping by applying the GDM to virtual colonoscopy images.
2 Methods
The two-dimensional GDM is based on a length-minimizing gradient flow using a conformal (non-Euclidean) metric. A family of curves is produced that iteratively moves, in the steepest descent direction, toward a minimum of a length functional. The movement of the closed contour Γ is described by the following PDE:

∂Γ/∂t = φ(κ + ν)N − (∇φ · N)N .    (1)
The conformal mapping is denoted by φ. The term κ is the curvature. The constant ν is the expansion force, which selects the direction of evolution and enforces an image-independent contour motion. In this paper, the conformal mapping is represented as a non-edge confidence in [0, ∞). An elliptic PDE is used to construct the conformal metric given an estimated edge set. This PDE captures the desired shape of the conformal mapping and constrains its value in areas where the edge location is known with some confidence.

2.1 Constructing the Conformal Metric
The approach presented here explicitly identifies high confidence edge locations using a noise model and an edge detection algorithm. The conformal mapping is then constructed so that the value is zero at the high confidence edges. The mapping values elsewhere are constrained to have a Laplacian similar to that of the (negative) gradient magnitude by defining an appropriate elliptic PDE. The motivation for this approach is that the evolving contour should effectively stop near pixels that are most likely to be edges. Where the edge measure
is uncertain, the contour should be allowed to choose the best configuration from the available information. In effect, the GDM becomes an edge linking process, with the conformal mapping defining the edge confidence in the spaces between known edge locations. Given an image, I(x, y), the gradient magnitude, r1(x, y), can be calculated at some (possibly spatially varying) scale using the Gaussian filter g(x, y, σ) and its derivatives [8]. The second derivative of I in the gradient direction, r2(x, y), can be computed similarly. From this gradient and second derivative we can statistically identify high confidence edge candidates, given some information about the noise, to produce an edge set E. The Laplacian of r1 is given by:

∆r1 = ∂²r1/∂x² + ∂²r1/∂y² .    (2)
Suppose there is a function u(x, y) such that

∆u = uxx + uyy = −∆σ r1 ,    (3)

subject to the boundary conditions

u(xi, yi) = 0   where   (xi, yi) ∈ E .    (4)
The operator ∆σ denotes the Laplacian of Gaussian with width σ. The solution, u, is used as the conformal mapping, φ, in the GDM evolution. The PDE (3) and the boundary conditions (4) capture the desired behavior of the conformal mapping both at high confidence edges and in less confident regions. Conceptually, this can be thought of as inverting the gradient, adding a constant to make all values positive, and multiplying by a nonnegative function to enforce the boundary conditions. In order for the solution to have a Laplacian similar to that of the inverted gradient, the corresponding nonnegative function must be smooth. A weak solution to equation 3 is equivalent to the result of such a procedure. The high confidence edge set, E, is determined using a simplification of the minimum reliable scale (MRS) edge detection algorithm of Elder and Zucker [7]. Briefly, an additive Gaussian noise model is used to derive the statistical response of the gradient and second derivative estimates to noise. A global type I error, α, over the entire image is set, producing a threshold for reliable derivative estimates. At each image point the scale used is the smallest which guarantees a reliable estimate. This produces a spatially varying scale that is larger away from edges and decreases toward edges, much like nonlinear diffusion. The scale space is sampled linearly in increments of 0.5. The largest gradient scale used is 3, while the largest second derivative scale used is 8. We do not attempt to locate low-precision edges since the confidence in them is low by definition. The rules for determining a high confidence edge (Elder and Zucker use the term high-precision) are as follows:

1. The gradient must be reliably detected at the point.
2. The directed second derivative must be reliably detected.
Christopher Wyatt and Yaorong Ge
3. The interpolated gradient (in the gradient direction) must be nonzero at the next grid intersection.
4. The interpolated second derivative (in the gradient direction) must be negative at the next grid intersection.

The output of the edge detection scheme is the gradient magnitude calculated at the minimum reliable scale, together with the high-confidence edge set. The input requires a setting for α and an estimated noise level s_n. In the examples below, α is set to 0.05 and s_n is estimated using a hand-drawn region of interest.
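The per-pixel scale-selection rule can be sketched with NumPy. This is a toy, not the authors' implementation: it assumes the gradient responses at each sampled scale and the per-scale reliability thresholds (derived offline from the noise model and α) have already been computed, and all names are ours.

```python
import numpy as np

def min_reliable_scale(grad_stack, thresholds, scales):
    """Pick, per pixel, the smallest scale at which the gradient estimate
    is reliable (exceeds the noise-derived threshold for that scale).

    grad_stack: (S, H, W) gradient magnitudes at S increasing scales.
    thresholds: length-S reliability thresholds from the noise model.
    scales:     length-S scale values (e.g. 0.5, 1.0, ..., 3.0).
    Returns an (H, W) array of scales; np.nan where no scale is reliable."""
    S, H, W = grad_stack.shape
    out = np.full((H, W), np.nan)
    # Sweep coarse to fine so that finer (smaller) reliable scales overwrite
    # coarser ones; the final value is the smallest reliable scale.
    for s in range(S - 1, -1, -1):
        reliable = grad_stack[s] > thresholds[s]
        out[reliable] = scales[s]
    return out
```

Pixels left at NaN have no reliable gradient at any sampled scale; these are exactly the low-precision locations the method does not attempt to label as edges.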
2.2 Implementation: Full Multigrid and Level Set Methods
A weak solution to the elliptic PDE can be determined most efficiently using a multigrid approach [2,15]. Equation (3) is discretized using row-column ordering, resulting in the system Au = b, where A is an N² × N² matrix, u and b are column vectors of length N², and N is the size of the image (N must be a power of 2). Reflective conditions are used at the image boundaries. The multigrid approach solves the system by restricting to a coarser grid, solving there, and interpolating back to the finer grid, using a relaxation method at each step. We use a full multigrid method with half-weighted restriction, bilinear interpolation, and red-black Gauss-Seidel relaxation [15]. The number of relaxation steps for pre- and post-smoothing was set at 6. The number of cycles was increased until the differential residual error for the computer phantoms was less than one percent of the maximum of u (300 cycles). The same number of cycles was used for the clinical images.

The level set implementation of the deformable model in equation (1) closely follows the recommendations in [13]. A narrow tube structure [16] is used to make the update step efficient.
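The red-black Gauss-Seidel smoother at the heart of the multigrid cycle can be sketched as follows. This is only the relaxation step, with boundary values held fixed rather than the paper's reflective conditions, and it is not the full multigrid solver; the nested restriction/interpolation cycles are omitted.

```python
import numpy as np

def gauss_seidel_redblack(u, f, h=1.0, sweeps=6):
    """Red-black Gauss-Seidel relaxation for the discrete Poisson equation
    u_xx + u_yy = -f on a square grid (interior points only; boundary
    values are held fixed). The paper nests this smoother inside a full
    multigrid cycle with half-weighted restriction and bilinear
    interpolation."""
    for _ in range(sweeps):
        for parity in (0, 1):                      # red sweep, then black
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == parity:
                        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                          u[i, j - 1] + u[i, j + 1] +
                                          h * h * f[i, j])
    return u
```

The red-black ordering is what makes the sweep parallelizable: all points of one color depend only on points of the other color.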
3 Results
The derived elliptic conformal mapping was compared to two previous formulations for the mapping using a virtual colonoscopy (VC) dataset. Fig. 1(a) shows a single slice from the VC dataset with an initial contour inside a fluid-filled region of the lumen. Fig. 1(b) shows the conformal mapping obtained using a monotonic function of the gradient given in [17],

φ = 1 / (1 + |∇I|) ,   (5)
where |∇I| is the gradient magnitude computed at a fixed scale of 2.0 pixels. Fig. 1(c) shows the mapping obtained using the same form as equation (5) with an anisotropic diffusion pre-filter (chapter 3 of [1]) applied. The edge enhancement threshold, K, was set at 40.0 and the filter was run for 100 iterations. These values were chosen experimentally to give the best results for the VC images. Fig. 1(d) shows the elliptic conformal mapping.
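The monotonic mapping of equation (5) is straightforward to compute. A minimal NumPy sketch, using plain central differences rather than the fixed-scale Gaussian-derivative gradient used in the paper:

```python
import numpy as np

def monotonic_mapping(image):
    """Conformal mapping of Eq. (5): phi = 1 / (1 + |grad I|).
    phi is in (0, 1]: near 1 in flat regions, small at strong edges,
    so the GDM evolution slows down at object boundaries."""
    gy, gx = np.gradient(image.astype(float))   # axis 0 = rows, axis 1 = cols
    mag = np.hypot(gx, gy)
    return 1.0 / (1.0 + mag)
```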
An Elliptic Operator for Constructing Conformal Metrics in GDM
[Fig. 1 panels: (a) Original Image; (b) Monotonic Mapping; (c) Filtered Monotonic Mapping; (d) Elliptic Mapping]
Fig. 1. Original image from a VC dataset with conformal mappings obtained using two previous approaches and the elliptic method.

Fig. 2 shows the result of applying the GDM to the VC data using the conformal mapping in Fig. 1(b). The GDM evolution was stopped in all experiments when the integral contour change over several iterations was visually insignificant or when the contour was obviously outside the region of interest. The smaller inflation force of 5.0 moves the contour very slowly once it reaches the vicinity of the edge and produces an under-segmented result even after as many as 2000 iterations. The larger expansion force improves the segmentation speed, but causes the contour to bypass the more diffuse edges. Fig. 3 shows the result of applying the GDM to the VC data using the conformal mapping in Fig. 1(c). The diffusion filter reduces the effect of noise, resulting in a faster segmentation for the same inflation force. The contour has still not reached the higher contrast boundary after 1000 iterations due to the large gradient. Increasing the inflation force moves the contour closer to the contrast edge and speeds up the segmentation but, again, causes the contour to bypass the more diffuse edges. Fig. 4 shows the result of applying the GDM to the VC data using the elliptic conformal mapping in Fig. 1(d). The segmentation speed is improved and the contour is more stable over a wider range of inflation forces.
4 Discussion
This new method for constructing the conformal mapping is more suitable than previous formulations to segmentation tasks where the object may have varying
Fig. 2. GDM contours obtained using the conformal mapping in Fig. 1(b) with two different expansion forces. (a) ν = 5.0, 2013 iterations. (b) ν = 7.0, 476 iterations.
Fig. 3. GDM contours obtained using the conformal mapping in Fig. 1(c) with two different expansion forces. (a) ν = 5.0, 1015 iterations. (b) ν = 7.0, 505 iterations.

contrast and blur. Multi-scale derivatives and noise models provide better differentiation of edges, similar to preprocessing. Edge constraints, however, are a departure from the functions used previously to define the conformal mapping and may be a starting point for other methods focused on tailoring the metric in the GDM.
Acknowledgments

This work was partially funded by NIH grant No. 1 R01 CA 78485-01A1.
References

1. Bart M. ter Haar Romeny (Ed.): Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, 1994
2. Briggs, W.L.: A Multigrid Tutorial. SIAM Press, 1987
3. Caselles, V., Kimmel, R., Sapiro, G.: Geodesic Active Contours. Proc. 5th Int. Conf. Computer Vision, pp. 694-699, 1995
Fig. 4. GDM contours obtained using the conformal mapping in Fig. 1(d) with two different expansion forces. (a) ν = 2.0, 1021 iterations. (b) ν = 5.0, 516 iterations.

4. Caselles, V., Catte, F., Coll, T., Dibos, F.: A geometric model for active contours in image processing. Numer. Math., vol. 66, pp. 1-31, 1993
5. Dubrovin, B.A., Fomenko, A.T., Novikov, S.P.: Modern Geometry: Methods and Applications, Part 1. Springer-Verlag, New York, NY, 1984
6. Elder, J.H.: Are Edges Incomplete? Int. J. Computer Vision, vol. 34, no. 2, pp. 97-122, 1999
7. Elder, J.H., Zucker, S.W.: Local Scale Control for Edge Detection and Blur Estimation. IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 7, pp. 699-716, 1998
8. Freeman, W., Adelson, E.: The Design and Use of Steerable Filters. IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 891-906, 1991
9. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active Contour Models. Int. J. Computer Vision, vol. 1, no. 4, pp. 321-331, 1988
10. Kichenassamy, S., Kumar, A., Olver, P., Tannenbaum, A., Yezzi, A.: Conformal Curvature Flows: From Phase Transitions to Active Vision. Arch. Rational Mech. Anal., vol. 134, pp. 275-301, 1996
11. Kichenassamy, S., Kumar, A., Olver, P., Tannenbaum, A., Yezzi, A.: Gradient Flows and Geometric Active Contour Models. Proc. ICCV, pp. 810-815, June 1995
12. Ma, T., Tagare, H.D.: Consistency and Stability of Active Contours with Euclidean and Non-Euclidean Arc Lengths. IEEE Trans. Image Processing, vol. 8, no. 11, pp. 1549-1559, 1999
13. Malladi, R., Sethian, J.A., Vemuri, B.C.: Shape Modeling with Front Propagation: A Level Set Approach. IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 17, no. 2, pp. 158-175, 1995
14. McInerney, T., Terzopoulos, D.: Topology Adaptive Deformable Surfaces for Medical Image Volume Segmentation. IEEE Trans. Medical Imaging, vol. 18, no. 10, pp. 840-850, 1999
15. Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes in C. Cambridge University Press, Cambridge, UK, 1992
16. Sethian, J.A.: Level Set Methods and Fast Marching Methods. Cambridge University Press, Cambridge, UK, 1999
17. Yezzi, A., Kichenassamy, S., Kumar, A., Olver, P., Tannenbaum, A.: A Geometric Snake Model for Segmentation of Medical Imagery. IEEE Trans. Medical Imaging, vol. 16, no. 2, 1997
Using a Linear Diagnostic Function and Non-rigid Registration to Search for Morphological Differences Between Populations: An Example Involving the Male and Female Corpus Callosum

David J. Pettey¹ and James C. Gee²
¹ GRASP Laboratory, Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
[email protected]
² Dept. of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
[email protected]

Abstract. Supplied with image data from two distinct populations, we apply a non-rigid registration technique to place each image into correspondence with an atlas. Having found the appropriate transformations, we then use the determinant of the Jacobian of the corresponding transformations and find the linear discriminant function which best distinguishes between the populations on the basis of these data. We apply the method to a collection of mid-sagittal slices of the corpus callosum for a group of 34 males and 52 females. We find that there appear to be no statistically significant differences between the relative sizes of regions in the corpus callosum between males and females.
1 Introduction
As the medical community collects more and more image data via MRI, PET, fMRI, etc. from the general population, it becomes tempting to ask whether we can use the data to build image-based diagnostic tools. Equivalently, though not necessarily for diagnostic purposes, we are interested in uncovering structural or functional differences between populations. Using fMRI sequences, Wildgruber et al. [14] were interested in characterizing differences in the activation of regions of the brain between two populations. We are not always interested in using this information for classification; distinguishing between the two groups may already be a trivial task. We may instead be interested in whether there are differences between groups with respect to certain structures (or functionality), in the hope of gaining a better understanding of how the brain functions. Nevertheless, when these methods are applied to populations which are difficult to distinguish, we hope to find differences which are of use in performing classification.

There has been ongoing discussion of the morphological differences between the male and female corpus callosum [5,3]. Davatzikos et al. investigated whether

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 372-379, 2001. © Springer-Verlag Berlin Heidelberg 2001
Linear Discriminants and Non-rigid Registration
373
one can discern any characteristic morphological differences from mid-sagittal MRI sections [3,11,8]. Paul Thompson [13] provides a nice overview of the disagreement between various groups in their assessment of morphological variation. More recently, Machado and Gee investigated whether similar techniques can be used to find characteristic differences between the morphology of the corpus callosum in schizophrenic and non-schizophrenic subjects [10,9]. We present a method to search for characteristic differences between populations via non-rigid registration and linear statistical techniques.
2 Correspondences and the Atlas
We have a collection of MRI mid-sagittal sections of the corpus callosum of 34 males and 52 females (see Fig. 1 for a representative selection of the image data). Each image is digitized as an n × m (512 × 512) array of pixels with a range of 256 gray-level intensities. Consider that we have been given one additional such image, but without a label telling us whether it is from a man or a woman. Can we correctly determine the sex of the unknown subject; more precisely, can we make an educated guess at the gender with better-than-even odds?
Fig. 1. A representative collection of mid-sagittal MRI sections used in our studies. The top two rows are images from four female subjects and the bottom two rows are from four male subjects. The corpus callosum is clearly discernible in each image.
Take I_ij(s) to be the intensity of pixel (i, j) in the image obtained from subject s. If we knew nothing about what information the image contained, we would likely have to stop here. At best we might consider each image to be a collection of random variables (RVs), one for the intensity of each pixel, which we could then subject to any number of standard statistical techniques involving perhaps principal components analysis, factor analysis, structural equation modeling, linear discriminant analysis, independent component analysis or other
techniques related to a general linear model, wherein we look for one or more new RVs which are hopefully better at discriminating between the populations. We would be unlikely to uncover any useful results, because linear models are not capable of capturing all relevant correlations within a data set. Fortunately, we do know something about the data and are willing to make a simple assumption whose consequences will be far-reaching: namely, that there exists a meaningful correspondence between each pair of corpus callosa. This is equivalent to the assumption that there exists some ideal corpus callosum which can be put into correspondence with each actual corpus callosum. In practice, we can choose any one to be the atlas and then find the correspondences between the atlas and every subject.

We choose at random one image to be the image of our atlas corpus callosum. We take N_A to be the number of pixels in the chosen image occupied by the corpus callosum and further define (i_k^A, j_k^A) to be the coordinates of the k-th pixel in our atlas. Having chosen an atlas we need to find the correspondences. There are many proposed methods, none of which seem to have a great deal of physical justification. Christensen [2] and Bro-Nielsen [1] use a fluid model, letting one image flow into the other. Thirion [12] employs a different model using optical flow techniques, and Dawant et al. [4] have demonstrated the repeatability and agreement with human-identified correspondences using this method. We have chosen to employ an elastic-membrane model (described in detail in [7,6]), though we make no claims as to the superiority of this model over other methods. Whichever method is chosen, we obtain for each subject a deformation field u_{.,.}(s), where

u_{i_k^A, j_k^A}(s) ≡ ( u¹_{i_k^A, j_k^A}(s), u²_{i_k^A, j_k^A}(s) )   (1)
tells us the displacement needed to place the k-th atlas pixel into the correct correspondence with the corpus callosum of subject s. So now, instead of a collection of images, our data set is a collection of deformation fields over a common atlas.

We could at this point again attempt to apply some linear statistical technique to the k vector-valued random variables u_{i_k^A, j_k^A}. But we still do not believe this would be fruitful. First, we can easily see that if the images were not rigidly registered to begin with, then we would be very unlikely to obtain useful measures distinguishing the populations from the deformation fields, since the largest contribution to the field may come from rigidly registering the images. Rather, we need to find characteristics of the fields which will capture the relevant distinguishing characteristics of u.
3 Determinant of the Jacobian and Size Variations
We choose to focus our attention on local size differences between corresponding portions of the corpus callosum. This is by no means the only measure that one could consider. But, as it has been examined previously with some dispute as to
the results ([5,3,11,8,13]), and because it is a simple scalar-valued field, we have chosen this to be the quantity of interest in our study. Now, u_ij(s) is a displacement field for subject s. In order to calculate the Jacobian of the transformation which takes the atlas into the corpus callosum of subject s, it helps to consider u_ij(s), in a slight abuse of notation, to be a vector field u_s(x). That is, we consider the atlas to be a region in the plane rather than merely a collection of discrete points. Then we need to recall that the transformation which takes the atlas corpus callosum into subject s's is given by

T_s(x) = x + u_s(x) .   (2)
Finally, the quantity we wish to examine is the determinant of the Jacobian of this transformation,

| ∂T_s / ∂x | .   (3)

Subsequently, we prefer to go back to our discretized space and denote by J_k(s) the value of the determinant of the Jacobian of the transformation at pixel (i_k^A, j_k^A) in the atlas. We have now reduced our original image data to a collection of k numbers related to the expansion or contraction required by the atlas at pixel (i_k^A, j_k^A) in order to achieve correspondence. The most important aspect of the new random variables J_k is that they relate to what we consider to be an important physical characteristic (local size), as well as having the feature that J_k(s₁) and J_k(s₂) refer to corresponding, physically meaningful measures. At this point Gee, Machado and Davatzikos [8,3] decided to examine each J_k individually and compute an effect size for the difference between the two populations at each pixel. Take μ_k^f and μ_k^m to be the averages of J_k over the females and males, respectively. Then, defining σ_k to be the standard deviation of J_k over the joint population, we compute the effect size for the k-th pixel,

e_{J_k} = (μ_k^f − μ_k^m) / σ_k .   (4)
This is a measure of how different Jk is between the two populations. We can now determine which regions of the atlas are deformed in characteristically different ways for males and females by examining which pixels have a large effect size associated with them. Typically, one looks for an effect size greater than 1 as an indication that the distributions are significantly different. We can set a specified threshold and then shade all of the pixels whose effect size is greater than that threshold to see which regions of the atlas are relevant for discriminating between males and females. With our samples, we find that there are no points where the effect size is greater than 1 and even for a relatively small threshold there do not appear to be very many relevant pixels (see Fig. 2). This is in contrast to earlier results on smaller sample sizes which found that the region of the splenium
Fig. 2. Thresholded images of the pointwise effect size. The threshold values used were 0.2, 0.3, 0.4 and 0.5, as indicated above each panel. At the 0.5 level we see very few pixels whose effect size exceeds the threshold, indicating that there are no significant pointwise differences in J_k between males and females.
appeared to be significantly different between the male and female populations [3,11,8]. If we had observed large effect sizes for some pixels we would have expected them to be in clusters. We expect that if one pixel reveals a large effect size then neighboring pixels would be more likely to have a large effect size also. Furthermore, if Jk is larger in the males than in the females then we further anticipate that Jk for neighboring pixels will also be larger. Succinctly, we expect there to be correlations between the different Jk ’s. Looking at the effect size alone ignores these correlations. However, by looking at linear combinations of the Jk ’s we can capture some of the information in these correlations, namely the correlations which can be attributed to the two-point correlation function, or to pair-wise correlations. At this point Machado and Gee [11] perform a type of factor analysis employing the principal components of the Jk ’s scaled so as to have unit variance. They then find a collection of factors or simply new random variables which are linear combinations of the Jk ’s. Some of these factors appear to be localized to particular regions of the corpus callosum though none seem to give rise to random variables which can be used to classify the populations. A word of caution is in order; even though the resulting random variables or factors may have a large effect size they still may be of little use for classification, because of the small size of the data sets used. Even in our study one really must perform a blind removal test to be confident that the results are not simply due to noise. We hope to spell this out more clearly in a future paper. Nevertheless we will proceed. Since our data set is fairly small compared to the number of pixels in the atlas we first thin down the number of random variables of relevance. The more random variables we try to keep the more susceptible our tests will be to random noise. 
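The determinant of the Jacobian of T_s and the pointwise effect size of equation (4) can both be sketched in NumPy. This is a toy on a dense pixel grid with finite-difference derivatives; the paper evaluates J_k only at atlas pixels of the corpus callosum, using the registration-derived displacement fields. All names here are ours.

```python
import numpy as np

def jacobian_det(u1, u2):
    """Determinant of the Jacobian of T(x) = x + u(x) for a displacement
    field (u1 = x-displacement, u2 = y-displacement) on a pixel grid,
    approximated by central differences:
        det J = (1 + du1/dx)(1 + du2/dy) - (du1/dy)(du2/dx)."""
    du1dy, du1dx = np.gradient(u1)   # axis 0 = y, axis 1 = x
    du2dy, du2dx = np.gradient(u2)
    return (1 + du1dx) * (1 + du2dy) - du1dy * du2dx

def effect_size(j_f, j_m):
    """Pointwise effect size of Eq. (4): (mean_f - mean_m) / joint std.
    j_f, j_m: arrays of shape (subjects, pixels)."""
    pooled = np.concatenate([j_f, j_m], axis=0).std(axis=0)
    return (j_f.mean(axis=0) - j_m.mean(axis=0)) / pooled
```

A uniform 10% expansion in both directions, for example, gives det J = 1.1 × 1.1 = 1.21 everywhere, which matches the "local size" reading of J_k.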
We chose to keep only the first few principal components of the J_k's, where

p_i(s) = Σ_k ê_k^i J_k(s) = ê_k^i J_k(s)   (5)
is the i-th principal component and ê_k^i is a unit vector in the direction of the i-th principal axis (in the last equality we have employed the Einstein summation convention of summing over repeated indices). Recall that p₁(s) is the linear combination of the J_k's with the largest variance, p₂(s) has the same property among all linear combinations whose principal axis is orthogonal to ê_k^1, and so on. We chose to retain only the first 15 principal components, though we have not extensively examined the effects of this choice. Keeping slightly more did not appear to alter our results much; retaining fewer did. The first 15 principal components accounted for 70% of the variance, and we believe the remaining principal components are largely artifacts of noise. Finally, we apply linear discriminant analysis to the 15 p_i's to find

f(s) = Σ_{i=1}^{15} d_i p_i(s) ,   (6)
the linear function of the p_i's which best discriminates between the two populations. Here d is again a unit vector. Since each p_i is a linear combination of the J_k's, we can write f(s) as a linear combination of the J_k's as well:

f(s) = ê_k^{(LDF)} J_k(s) .   (7)
Now, by examining which pixels influence f(s) the most (which values of k have large "weights" ê_k^{(LDF)}), we can see which regions of the brain are most associated with differences between the two populations. While in Fig. 3 we see that the posterior portions of the corpus callosum do appear to be associated with large weights in f, we must qualify this by noting that the effect size of f(s) is disappointingly small (0.6) and that blind classification results appear to be rather poor (only about 60% correct). Thus we are led to believe that there is likely little or no difference in the relative sizes of the male and female corpus callosum.
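The PCA-then-LDA pipeline of equations (5)-(7) can be sketched with plain NumPy. This uses a textbook Fisher discriminant (d proportional to S_w⁻¹(μ₁ − μ₂)) rather than whatever LDA implementation the authors used, and it omits the unit-variance scaling mentioned for the factor analysis; function and variable names are ours.

```python
import numpy as np

def pca_lda(X, y, n_components=15):
    """Project samples X (subjects x pixels) onto the leading principal
    components, find the Fisher discriminant direction d in that subspace,
    and map it back to pixel space.
    Returns (w, f): per-pixel weights e_hat^(LDF) of Eq. (7) and the
    discriminant scores f(s) of Eq. (6)."""
    Xc = X - X.mean(axis=0)
    # Principal axes via SVD of the centered data matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]                  # (n_components, pixels)
    Z = Xc @ P.T                           # PC scores p_i(s)
    m1 = Z[y == 1].mean(axis=0)
    m2 = Z[y == 0].mean(axis=0)
    # Within-class scatter; Fisher direction d ~ Sw^{-1} (m1 - m2).
    Sw = np.cov(Z[y == 1], rowvar=False) + np.cov(Z[y == 0], rowvar=False)
    d = np.linalg.solve(Sw, m1 - m2)
    d /= np.linalg.norm(d)
    w = P.T @ d                            # back to pixel space
    return w, Xc @ w
```

Large-magnitude entries of w mark the pixels that drive the discrimination, which is exactly how Fig. 3 is read.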
Fig. 3. In (a) we see a gray-scale image of the absolute value of the weights for the best linear diagnostic function. Notice that portions of the splenium appear to be regions of high weights. In (b) we have highlighted those weights whose absolute values are at least one standard deviation larger than the average weight, again highlighting the fact that some portions of the corpus callosum appear to be significantly more important for the diagnostic function than others.
4 Conclusions
Although we find a negative result for classification based upon relative size differences using single-slice data, we hope that the method presented here will be useful in future studies searching for clinically relevant differences in morphology between populations. We have introduced the use of the linear discriminant function as a means to account for correlations between disparate regions of the anatomy, and to isolate the regions most important for performing classification. These regions are, by definition of the linear discriminant function, those which exhibit the most disparity between the populations.
References

1. M. Bro-Nielsen and C. Gramkow. Fast fluid registration of medical images. In Proc. Visualization in Biomedical Computing Conf., volume 1131, pages 267-276, Berlin, Germany, 1996. Springer-Verlag.
2. G. E. Christensen, M. I. Miller, and M. Vannier. 3D brain mapping using a deformable neuroanatomy. Phys. Med. Biol., 39(3):609-618, 1994.
3. C. Davatzikos, M. Vaillant, S. M. Resnick, J. L. Prince, S. Letovsky, and R. N. Bryan. A computerized approach for morphological analysis of the corpus callosum. J. Comput. Assist. Tomogr., 20(1):88-97, 1996.
4. B. M. Dawant, S. L. Hartmann, J. P. Thirion, F. Maes, D. Vandermeulen, and P. Demaerel. Automatic 3-D segmentation of internal structures of the head in MR images using a combination of similarity and free-form transformations: Part I, methodology and validation on normal subjects. IEEE Transactions on Medical Imaging, 18(10):909-916, 1999.
5. C. de Lacoste-Utamsing and R. L. Holloway. Sexual dimorphism in the human corpus callosum. Science, 216:1431-1432, 1982.
6. J. C. Gee. On matching brain volumes. Pattern Recognition, 32:99-111, 1999.
7. J. C. Gee and R. K. Bajcsy. Elastic matching: Continuum mechanical and probabilistic analysis. In A. W. Toga, editor, Brain Warping, pages 183-197. Academic Press, San Diego, 1999.
8. A. M. C. Machado and J. C. Gee. Atlas warping for brain morphometry. In Medical Imaging 1998: Image Processing, pages 642-651. SPIE, Bellingham, WA, 1998.
9. A. M. C. Machado, J. C. Gee, and M. F. M. Campos. Exploratory factor analysis in morphometry. In International Conf. Medical Image Computing and Computer-Assisted Intervention MICCAI 1999, pages 378-385, Heidelberg, 1999. Springer-Verlag.
10. A. M. C. Machado, J. C. Gee, and M. F. M. Campos. Exploratory and confirmatory factor analysis of the corpus callosum morphometry. In Proc. SPIE Medical Imaging 2000, pages 718-725, Bellingham, WA, 2000. SPIE.
11. A. M. C. Machado, J. C. Gee, and M. F. M. Campos. A factor analytic approach to structural characterization. In Mathematical Methods in Biomedical Image Analysis, pages 219-226. IEEE Computer Society, Los Alamitos, CA, 2000.
12. J. P. Thirion. Non-rigid matching using demons. Med. Image Analysis, 2(3):243-260, 1998.
13. P. M. Thompson, K. L. Narr, R. E. Blanton, and A. W. Toga. Mapping structural alterations of the corpus callosum during brain development and degeneration. In Proc. of the NATO ASI on the Corpus Callosum. Kluwer, in press.
14. D. Wildgruber, H. Ackermann, M. Klein, A. Riecker, and W. Grodd. Brain activation during identification of affective speech melody: influence of emotional valence and sex. Neuroimage, 11(5), 2000.
Shape Constrained Deformable Models for 3D Medical Image Segmentation

Jürgen Weese¹, Michael Kaus¹, Christian Lorenz¹, Steven Lobregt³, Roel Truyen³, and Vladimir Pekar¹,²
¹ Philips Research Laboratories, Division Technical Systems, Röntgenstraße 24-26, D-22335 Hamburg, Germany
² Medical University of Lübeck, Institute for Signal Processing, Seelandstraße 1a, D-23569 Lübeck, Germany
³ EasyVision Advanced Development, Philips Medical Systems Nederland B.V., Veenpluis 4-6, NL-5680 DA Best, The Netherlands
Abstract. To improve the robustness of segmentation methods, more and more methods use prior knowledge. We present an approach which embeds an active shape model into an elastically deformable surface model, and combines the advantages of both approaches. The shape model constrains the flexibility of the surface mesh representing the deformable model and maintains an optimal distribution of mesh vertices. A specific external energy which attracts the deformable model to locally detected surfaces, reduces the danger that the mesh is trapped by false object boundaries. Examples are shown, and furthermore a validation study for the segmentation of vertebrae in CT images is presented. With the exception of a few problematic areas, the algorithm leads reliably to a very good overall segmentation.
1 Introduction
Many tasks in medical image analysis require the segmentation of anatomical objects. However, the time for data preparation continues to be a limiting factor for the routine clinical use of such methods, because accurate and robust (semi-)automatic segmentation of 3D images remains a widely unsolved task. To improve the robustness of segmentation methods, more and more approaches take prior knowledge into account. We present a segmentation method for object surfaces in 3D which uses a priori shape information, and combines the advantages of active shape models and elastically deformable models.

Active shape models [1,2] are a fast and robust method to segment an object, but because of the restriction to a model with a few parameters, segmentation accuracy is limited. Furthermore, numerous data sets are required to build a representative model. Elastically deformable models [3,4,5] are more flexible, but have the well-known drawback of requiring a close initialization. This is due to the presence of image features other than those belonging to the object of interest, which drive the model surface towards false object boundaries.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 380-387, 2001. © Springer-Verlag Berlin Heidelberg 2001
An obvious way of combining active shape models with elastically deformable models is a procedure where the former method is used for global model adaptation first and the latter for local refinement afterwards. In contrast to this, the algorithm presented here embeds a shape model in an elastically deformable surface model. Adaptation to the image is governed by an external energy, which is derived from local surface detection, and an internal energy, which constrains the deformable surface to stay close to the subspace defined by the shape model. Compared to other elastically deformable surface models (compare e.g. with [3,4,5]), there are three important differences:

– The internal energy is defined with respect to the shape model. The pose and the parameters of the shape model are adapted together with the mesh vertices representing the elastically deformable surface model.
– The internal energy has been designed to maintain the distribution of mesh vertices given by the shape model.
– Evaluation of the external energy requires local surface detection. The elastically deformable model is not attracted by the detected surface points themselves, but by the surface patches associated with each of these points.

Since our shape constrained deformable model is not restricted to the subset of modeled shapes, it can capture anatomical objects even if they cannot be exactly described by the model. It is therefore suited for applications such as orthopedic planning [6], where a pathology may go along with a deformation, but where the shape and geometric topology are broadly preserved.

In the following section we describe the shape constrained deformable model and present some examples. Section 3 discusses a validation study for the segmentation of vertebrae in CT images. The final section summarizes the results and draws conclusions.
2 Shape Constrained Deformable Models
The deformable model is represented by a mesh consisting of V vertices with coordinates x_i and T triangles. To adapt the mesh to the image, an iterative procedure is used, where each iteration consists of a surface detection step and a mesh reconfiguration step. Mesh reconfiguration is done by minimizing

E = E_ext + α E_int .   (1)
The external energy E_ext drives the mesh towards the surface patches obtained in the surface detection step. The internal energy E_int restricts the flexibility of the mesh. The parameter α weights the relative influence of each term. The different components of the algorithm are described in the subsequent sections.
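The two-step iteration (surface detection, then energy-minimizing reconfiguration) can be caricatured in one dimension. This toy replaces the mesh, the shape model, and the detection step with scalars and a user-supplied detect function, purely to illustrate the interplay of E_ext and E_int; it is not the authors' algorithm.

```python
import numpy as np

def adapt(vertices, detect, model, alpha=0.5, steps=50, lr=0.2):
    """Toy 1-D version of the two-step iteration: each iteration detects a
    target position per vertex, then reconfigures the vertices by gradient
    descent on E = E_ext + alpha * E_int, with
        E_ext = sum (x - target)^2   (attraction to detected surface)
        E_int = sum (x - model)^2    (penalty for leaving the shape model)."""
    x = vertices.astype(float).copy()
    for _ in range(steps):
        t = detect(x)                                   # surface detection step
        # gradient of E w.r.t. x: 2(x - t) + 2*alpha*(x - model)
        x -= lr * (2 * (x - t) + 2 * alpha * (x - model))
    return x
```

The fixed point is a compromise between the detected surface and the shape model, weighted by α — the same trade-off equation (1) expresses for the mesh.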
2.1 Surface Detection
For surface detection, a search is performed along the triangle normal n_i to find the point x̃_i with the optimal combination of feature value F_i(x̃_i) and distance δj to the triangle center x̂_i:
x̃_i = x̂_i + n_i δ · arg max_{j=−l,…,l} [ F_i(x̂_i + n_i δj) − D δ²j² ].   (2)
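The profile search of eq. (2), together with the gradient feature of eq. (3) below, can be sketched in a few lines. This is a minimal numpy illustration under our own naming conventions; in particular, passing a precomputed feature function F and a bright/dark flag are our assumptions, not the paper's interface:

```python
import numpy as np

def feature(n, g, g_max, bright=True):
    """Gradient feature in the spirit of eq. (3): the projection of the image
    gradient g onto the triangle normal n, damped once the gradient magnitude
    exceeds the threshold g_max."""
    g_norm = np.linalg.norm(g)
    damping = g_max * (g_max + g_norm) / (g_max**2 + g_norm**2)
    return (1.0 if bright else -1.0) * np.dot(n, g) * damping

def detect_surface_point(F, x_hat, n, delta, l, D):
    """Profile search of eq. (2): sample 2l+1 points along the triangle normal
    n and keep the best trade-off between feature value and squared distance
    to the triangle center x_hat."""
    js = np.arange(-l, l + 1)
    scores = [F(x_hat + n * delta * j) - D * delta**2 * j**2 for j in js]
    return x_hat + n * delta * js[int(np.argmax(scores))]
```

For small gradient magnitudes the damping factor is close to one, so the feature reduces to the directional derivative n^T g; for large magnitudes it saturates near g_max, which is the behavior described after eq. (3).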
The parameter l defines the search profile length, the parameter δ is the distance between two successive points, and the parameter D controls the weighting of the distance information against the feature value. The quantity

F_i(x) = ± n_i^T g(x) · g_max (g_max + ‖g(x)‖) / (g_max² + ‖g(x)‖²)   (3)
is used as a feature, where g(x) denotes the image gradient at point x. The sign is chosen depending on whether the object of interest is brighter or darker than the surrounding structures. For image points with a gradient magnitude smaller than the threshold g_max, this quantity is essentially the gradient in the direction of the mesh normal. The threshold prevents problems that occur if the object of interest has a considerably smaller gradient magnitude at its boundary than a neighboring object, or if different parts of the object of interest have boundaries with considerably different gradient magnitudes.

2.2 External Energy
In analogy to iterative closest point algorithms, the external energy

E_ext = Σ_{i=1}^{T} w_i (x̃_i − x̂_i)²,   w_i = max{0, F_i(x̃_i) − D (x̃_i − x̂_i)²},   (4)
can be used, where the weights w_i have been introduced to give the more promising surface points x̃_i a larger influence during mesh reconfiguration. With this external energy, the detected surface points would directly attract the triangle centers of the mesh. As a consequence, once a triangle center has been attracted by a surface point in the image, it can hardly move anymore. For this reason, the mesh remains attached to false object boundaries, which are detected frequently at the beginning of the adaptation process. This problem is diminished considerably if the triangle centers x̂_i are instead attracted by the planes perpendicular to the image gradient at the surface points x̃_i:

E_ext = Σ_{i=1}^{T} w_i [ (g(x̃_i) / ‖g(x̃_i)‖) · (x̃_i − x̂_i) ]².   (5)
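Both external-energy variants can be written compactly. The following is a hedged numpy sketch (the array layout and names are ours): passing the image gradients at the detected points switches from the point-attraction form of eq. (4) to the plane-attraction form of eq. (5):

```python
import numpy as np

def external_energy(x_tilde, x_hat, F_vals, D, grads=None):
    """E_ext over T triangles.  x_tilde, x_hat: (T, 3) detected surface points
    and triangle centers; F_vals: (T,) feature values at the detected points.
    With grads=None this is eq. (4); with grads of shape (T, 3) the
    displacement is projected onto the gradient direction as in eq. (5)."""
    d = x_tilde - x_hat
    w = np.maximum(0.0, F_vals - D * np.sum(d * d, axis=1))  # weights of eq. (4)
    if grads is None:
        return float(np.sum(w * np.sum(d * d, axis=1)))
    u = grads / np.linalg.norm(grads, axis=1, keepdims=True)  # unit gradient dirs
    return float(np.sum(w * np.sum(u * d, axis=1) ** 2))
```

Under eq. (5) a displacement tangential to the detected surface patch costs nothing, which is exactly why the mesh can still slide along a patch once attracted.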
2.3 Internal Energy
The starting point for the introduction of the internal energy is a shape model represented by a mesh of triangles (see e.g. [7]) with vertex coordinates

m_i = m_i^0 + Σ_{k=1}^{M} p_k m_i^k,   i = 1, …, V.   (6)
Shape Constrained Deformable Models for 3D Medical Image Segmentation
In this equation m_i^0 denote the vertex coordinates of the mean model, m_i^k describe the variation of the vertex coordinates associated with the M eigenmodes of the model, and p_k represent the weights of the eigenmodes. Since the shape model provides a suitable distribution of mesh vertices, the internal energy has been designed to maintain this distribution. For that purpose the difference vectors between the coordinates of two neighboring mesh vertices are considered. Difference vectors for the deformable model and the shape model are compared, and the deviations between both are penalized:

E_int = Σ_{i=1}^{V} Σ_{j∈N(i)} [ (x_i − x_j) − sR ( m_i^0 − m_j^0 + Σ_{k=1}^{M} p_k (m_i^k − m_j^k) ) ]²,   (7)
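A direct transcription of eq. (7), assuming our own array layout (mean model, eigenmodes and neighbor sets as plain numpy arrays and a dict):

```python
import numpy as np

def internal_energy(x, m0, modes, p, s, R, neighbors):
    """E_int of eq. (7): compare edge vectors of the deformable mesh with the
    corresponding edge vectors of the shape model, the latter scaled by s,
    rotated by R and deformed with the eigenmode weights p (eq. (6)).

    x: (V, 3) mesh vertices; m0: (V, 3) mean model; modes: (M, V, 3)
    eigenmode displacements; p: (M,) weights; neighbors: dict mapping
    vertex i to an iterable of its neighbor indices N(i)."""
    m = m0 + np.tensordot(p, modes, axes=1)   # model vertices for weights p
    E = 0.0
    for i, nbrs in neighbors.items():
        for j in nbrs:
            diff = (x[i] - x[j]) - s * (R @ (m[i] - m[j]))
            E += diff @ diff
    return E
```

When the mesh coincides with the (scaled, rotated) model, every edge difference vanishes and the energy is zero, so the shape model acts as the rest configuration of the elastic mesh.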
where the set N(i) contains the neighbors of vertex i. The scale s and the orientation R of the shape model, as well as its weights p_k, must be determined in addition to the vertex coordinates x_i during mesh reconfiguration.

2.4 Optimization
Mesh reconfiguration by minimization of the total energy of eq. (1) is done in two steps. First, the scaling s and orientation R of the shape model with the current weights of the eigenmodes are determined with respect to the current mesh configuration. This is done with a point-based registration method based on a singular value decomposition. Second, the vertex coordinates x_i and the weights p_k are updated using the scaling and orientation determined in the first step. Considering the weights w_i in the external energy as constants, the total energy is a quadratic function of about 2000–5000 parameters. Minimization can be done very quickly with a conjugate gradient method, taking advantage of the fact that the normal equations form a sparse linear system.

2.5 Examples
Fig. 1 shows segmentations of a vertebra, a femur and an aorta with an aneurysm. For segmenting the vertebra and the femur, shape models were used which were generated as described in [7]. In the case of the aorta, a triangulated cylinder without variation modes was used as a model. In all cases, very good segmentation results were achieved after proper manual positioning of the model.
3 Validation
The validation was performed for 18 vertebrae in 6 CT images. These 18 test vertebrae were used together with 19 additional vertebrae to build an individual shape model [7] (618 vertices, 1236 triangles, 10 eigenmodes) for each test vertebra. The test vertebra itself was excluded from the learning set to avoid bias. Furthermore, three different initial configurations were generated for
each test vertebra by manually adjusting the center, scaling and orientation of the mean shape in the CT image. A few initial configurations were too inaccurate, and the manual procedure was repeated for them. The adaptation was
Fig. 1. Segmentation results for a vertebra, a femur and an aorta in CT images.
Fig. 2. Mean and maximum segmentation error after initialization (◦), after adaptation with the energy of eq. (4) (×) and after adaptation with the energy of eq. (5) ().
Fig. 3. Typical results after adaptation of the deformable model. Except for a few problematic areas (white arrows), a very good overall segmentation is obtained.

performed for each of the three starting configurations and each of the test vertebrae. Within this step, resampled images with an isotropic resolution of 1 mm were used instead of the original CT images, which have an in-plane resolution between 0.49 mm and 0.72 mm and a slice-to-slice distance of 2 mm. The parameters of the algorithm were set to D = 2, δ = 1 mm, l = 10, g_max = 100, and α = 33.33. The segmentation error was assessed by computing the mean and maximum Euclidean distance between the deformable model surface and a manual reference segmentation in both directions and averaging the two values. Fig. 2 contains the results for the segmentation error averaged over the three initial configurations. The adaptation was performed both with the external energy of eq. (4) and with our external energy of eq. (5). According to Fig. 2, the mean (maximum) segmentation error was between 2.5–3.5 mm (12–20 mm) after manual initialization. The adaptation process took about 30 s on a Sun UltraSparc (400 MHz) and reduced this error to 0.8–1.0 mm (4.5–7.0 mm), except for vertebra L3 in image CT5. Averaged over all test vertebrae, mesh adaptation reduced the mean (maximum) segmentation error from 2.81 mm (13.66 mm) to 0.93 mm (6.14 mm). This shows that a very good overall segmentation was obtained, as illustrated in Fig. 3, which also reveals some typical problems of the adaptation procedure. Fig. 4 illustrates the difference between the external energies. If the deformable model is attracted by the detected surface points (eq. (4)), large parts of the deformable model are captured by false object boundaries.
In contrast, mostly the correct boundaries are found if the deformable model is attracted by the surface patches associated with each of the detected surface points (eq. (5)). This is confirmed by the reduction of the segmentation errors (Fig. 2).
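The symmetric error metric used above (mean and maximum distance computed in both directions and averaged) can be sketched for point samples of the two surfaces. This is a brute-force illustration; the paper does not specify the implementation:

```python
import numpy as np

def symmetric_surface_distance(A, B):
    """Mean and maximum symmetric surface distance between point samples A
    (deformable model surface) and B (manual reference segmentation):
    nearest-neighbor distances are computed in both directions and the two
    resulting values are averaged."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    a2b, b2a = d.min(axis=1), d.min(axis=0)
    mean_err = 0.5 * (a2b.mean() + b2a.mean())
    max_err = 0.5 * (a2b.max() + b2a.max())
    return mean_err, max_err
```

For densely sampled surfaces a k-d tree would replace the quadratic pairwise distance matrix, but the metric itself is unchanged.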
4 Results and Conclusions
A novel model-based approach for the segmentation of 3D medical images was presented. Examples illustrate that it can be used for the segmentation of various
Fig. 4. Intermediate results of the deformable model adaptation at iterations 0, 5, 10, 15 and 30. The upper row refers to the external energy of eq. (4) and the lower row to the external energy of eq. (5).
anatomical structures in CT images such as a vertebra, a femur, or a part of the aorta. A validation study comparing the results of our method to manually segmented vertebrae shows that the algorithm reliably leads to a very good overall segmentation after proper manual placement of the mean vertebra model. This is reflected by a mean segmentation error of 0.93 mm. However, there are a few problematic areas, where deviations around 4.5–7 mm may occur. In particular, it was shown that the robustness of the segmentation approach is considerably improved if the deformable model is attracted by surface patches associated with each of the detected surface points, rather than by the surface points themselves. Due to the use of a conjugate gradient method, which is especially effective for sparse linear systems, the algorithm is fast and enabled segmentation within 30 s on a Sun UltraSparc (400 MHz) in our experiments.

Acknowledgments

We thank Prof. Dr. W. P. Th. M. Mali, Prof. Dr. B. C. Eikelboom and Dr. J. D. Blankensteijn (University Hospital Utrecht) for providing the CT images with the vertebrae, and Dr. J. Richolt, Dr. J. Kordelle and Brigham & Women's Hospital for the femur data. The algorithm was implemented on an experimental version of the EasyVision workstation from Philips Medical Systems.
References

1. T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham: Active Shape Models – Their Training and Application. Comp. Vis. Imag. Under. 61, 1995, 38–59
2. A. Kelemen, G. Székely, and G. Gerig: Three-Dimensional Model-Based Segmentation of Brain MRI. IEEE Trans. Med. Imag. 18(10), 1999, 828–839
3. T. McInerney and D. Terzopoulos: Deformable Models in Medical Image Analysis: A Survey. Med. Imag. Anal. 1(2), 1996, 91–108
4. L. Staib and J. Duncan: Model-Based Deformable Surface Finding for Medical Images. IEEE Trans. Med. Imag. 15(5), 1996, 720–731
5. J. Montagnat and H. Delingette: Globally Constrained Deformable Models for 3D Object Reconstruction. Signal Processing 71(2), 1998, 173–186
6. R. H. Taylor, S. Lavallee, G. C. Burdea, and R. Mösges, eds.: Computer-Integrated Surgery: Technology and Clinical Applications. The MIT Press, Cambridge, 1996
7. C. Lorenz and N. Krahnstöver: Generation of Point-Based 3D Statistical Shape Models for Anatomical Objects. Comp. Vis. Imag. Under. 77, 2000, 175–191
Stenosis Detection Using a New Shape Space for Second Order 3D-Variations

Qingfen Lin and Per-Erik Danielsson

Image Processing Laboratory, Dept. Electrical Engineering, Linköpings Universitet, 581 83 Sweden
qingfen, [email protected]

Abstract. The prevalent model for second order variation in 3-D volumes is an ellipsoid spanned by the magnitudes of the Hessian eigenvalues. Here, we describe this variation as a vector in an orthogonal shape space spanned by spherical harmonic basis functions. From this new shape space, a truly rotation- and shape-invariant signal energy is defined, consistent orientation information is extracted, and shape-sensitive quantities are employed. The advantage of these quantities is demonstrated by detecting stenosis in a Magnetic Resonance Angiography (MRA) volume. The new shape space is expected to improve both the theoretical understanding and the implementation of Hessian-based analysis in other applications as well.
1 Introduction
The local second order variation of a function f(x, y, z) is measured by convolving the 3-D volume with six derivative operators (derivators) (g_xx, g_yy, g_zz, g_xy, g_yz, g_xz). These derivators are commonly designed by differentiating a rotationally symmetric Gaussian kernel, which is a reasonable compromise between approximation errors and computational efficiency. The response vector consists of the derivative estimates (f_xx, f_yy, f_zz, f_xy, f_yz, f_xz), which are assembled into a symmetric 3 × 3 matrix, the Hessian. To analyze the neighborhood described by the Hessian matrix, a common procedure is to diagonalize the Hessian and then sort the eigenvalues according to their magnitudes [3]. The eigenvalues are used to detect and discriminate for shape, while the corresponding eigenvectors may be used to reveal and discriminate for orientation. In medical applications some of the second order variation shapes have direct anatomical counterparts. String-like blood vessels and plane-like cartilage may be of special interest. Blob-like, string-like (elongated ellipsoids) and plane-like (flattened ellipsoids) structures all have distinct eigenvalue responses, as first observed and listed in [4]. Rather heuristic approaches [3,5,6] have then been used to create "filters", procedures which detect specific shapes. The assumption in all these approaches is that the shapes of second order variation can be modeled with ellipsoids. However, this is true only when all

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 388–394, 2001. © Springer-Verlag Berlin Heidelberg 2001
eigenvalues of the Hessian have the same sign. In general, the Hessian is not restricted in this way but has a much richer variation than what is possible to convey by an ellipsoid.
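The conventional eigenvalue analysis sketched in the introduction amounts to very little code; a minimal numpy sketch (names ours):

```python
import numpy as np

def hessian_shape_analysis(fxx, fyy, fzz, fxy, fyz, fxz):
    """Assemble the six second-derivative responses into the symmetric Hessian
    and return its eigenvalues sorted by increasing magnitude, together with
    the eigenvector of the magnitude-smallest eigenvalue (the usual estimate
    of a string's axis direction)."""
    H = np.array([[fxx, fxy, fxz],
                  [fxy, fyy, fyz],
                  [fxz, fyz, fzz]])
    vals, vecs = np.linalg.eigh(H)          # symmetric eigendecomposition
    order = np.argsort(np.abs(vals))        # magnitude ordering as in [3]
    return vals[order], vecs[:, order[0]]
```

For a bright string along z (f_xx = f_yy < 0, f_zz ≈ 0) the magnitude-smallest eigenvalue is near zero and its eigenvector points along the string axis; the pitfalls of this magnitude ordering near a stenosis are discussed in Sec. 3.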
2 The New Shape Space
It is quite commonly assumed that the second derivative estimators (g_xx, g_yy, g_zz, g_xy, g_xz, g_yz) form an orthogonal basis set. To see that this is false, note that in the Fourier domain, with u = ρ sin θ cos φ, v = ρ sin θ sin φ and w = ρ cos θ, two functions like G_uu and G_vv that have the same radial variation and non-negative angular functions sin²θ cos²φ and sin²θ sin²φ cannot be orthogonal. As shown in [1], an orthonormal set is obtained by linearly combining the second derivators into the spherical harmonic operators

c20 = √(1/6) h20(r) = √(1/6) (g_xx + g_yy + g_zz)
c21 = √(5/24) h2(r) (3 cos²ϑ − 1) = √(5/24) (2g_zz − g_xx − g_yy)
c22 = √(5/8) h2(r) sin²ϑ cos 2ϕ = √(5/8) (g_xx − g_yy)
c23 = √(5/8) h2(r) sin²ϑ sin 2ϕ = √(5/8) · 2g_xy
c24 = √(5/8) h2(r) sin 2ϑ cos ϕ = √(5/8) · 2g_xz
c25 = √(5/8) h2(r) sin 2ϑ sin ϕ = √(5/8) · 2g_yz   (1)

with coefficients chosen to normalize the energy. The response vector f_2 = f ∗ c_2 can be calculated from (f_xx, f_yy, f_zz, f_xy, f_xz, f_yz) using the same linear combination of the second derivative responses. Since the derivatives of a Gaussian are separable in the three dimensions, the computation can be implemented in a highly efficient manner, even when embedded in a scale space [1].

Any pattern f can be described as a rotated and amplified version of its prototype p. A local second order variation has six degrees of freedom: the orientation requires three, the magnitude requires one, and the prototype shape accounts for the other two. We then stipulate that p_2, the prototype response obtained from the response f_2 with orientation eliminated, has the form p_2 = (p20, p21, p22, 0, 0, 0)^T, where the responses to (c23, c24, c25) are zero. The subspace spanned by (c20, c21, c22) is then sufficient to represent the prototype together with its magnitude.
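The linear combination of eq. (1), applied to derivative responses rather than operators, can be sketched as follows (a small numpy illustration; the function name is ours):

```python
import numpy as np

def harmonic_responses(fxx, fyy, fzz, fxy, fxz, fyz):
    """Map the six second-derivative responses onto the orthonormal spherical
    harmonic basis of eq. (1); the same coefficients as for the operators
    apply to the responses."""
    s6, s524, s58 = np.sqrt(1/6), np.sqrt(5/24), np.sqrt(5/8)
    return np.array([s6   * (fxx + fyy + fzz),
                     s524 * (2*fzz - fxx - fyy),
                     s58  * (fxx - fyy),
                     s58  * 2*fxy,
                     s58  * 2*fxz,
                     s58  * 2*fyz])
```

Because the basis is orthonormal, the squared norm of the result is rotation invariant: a pure f_xy response and its diagonalized equivalent (f_xx = 1, f_yy = −1) yield the same energy, which the raw derivative vector does not.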
Diagonalization of the Hessian is actually a procedure that recovers the prototype from the second derivative responses, since the three cross-derivators are identical to (c23, c24, c25) except for a scale factor. From (1) we gather that (c20, c21, c22) are linear combinations of the three derivators (g_xx, g_yy, g_zz). The prototype derivatives (p_xx, p_yy, p_zz) can be identified with the eigenvalues (λ1, λ2, λ3) of the Hessian, and the corresponding harmonic responses in the three-dimensional space spanned by the orthonormal set (c20, c21, c22) are
[p20]   [  √(1/6)    √(1/6)   √(1/6) ] [p_xx]
[p21] = [ −√(5/24)  −√(5/24)  √(5/6) ] [p_yy] .   (2)
[p22]   [  √(5/8)   −√(5/8)     0    ] [p_zz]
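The mapping of eq. (2), combined with the eigenvalue ordering and assignment described below (λ1 ≥ λ2 ≥ λ3 with p_xx = λ1, p_zz = λ2, p_yy = λ3), can be sketched as follows; the matrix name A2 and the function name are ours:

```python
import numpy as np

A2 = np.array([[ np.sqrt(1/6),   np.sqrt(1/6),  np.sqrt(1/6)],
               [-np.sqrt(5/24), -np.sqrt(5/24), np.sqrt(5/6)],
               [ np.sqrt(5/8),  -np.sqrt(5/8),  0.0]])

def prototype_response(eigvals):
    """Map the three Hessian eigenvalues onto (p20, p21, p22) via eq. (2),
    ordering the signed eigenvalues lam1 >= lam2 >= lam3 and assigning
    pxx = lam1, pzz = lam2, pyy = lam3, which places the response in the
    unique 60-degree wedge of the shape space."""
    l1, l2, l3 = sorted(eigvals, reverse=True)
    return A2 @ np.array([l1, l3, l2])   # (pxx, pyy, pzz)
```

As a sanity check, a perfect bright string (eigenvalues 0, −a, −a) gives p20/‖p_2‖ = −0.667, the value quoted in Sec. 3.2, and the mapping is consistent with the energy relation of eq. (5).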
Fig. 1. The shape space spanned by (c20, c21, c22).
We notice that a different ordering of the eigenvalues produces a different vector p_2. In fact, the six different permutations of the eigenvalues correspond to six possible positions of one single shape. In the signal space, this corresponds to six positions of the prototype, all of which are aligned with the x, y, z-axes and 90° rotations therefrom. To remove the ambiguity and create a unique one-to-one correspondence between the eigenvalues and the prototype shape, we should use only one sixth of the (c20, c21, c22)-space. Although there are many choices, we now declare that the "real" prototype response p_2 is the one that falls in the 60° wedge of the (c20, c21, c22)-space shown in Fig. 1. This non-redundant shape space is symmetric around the c22 axis, and is arrived at by ordering the signed eigenvalues as λ1 ≥ λ2 ≥ λ3 and assigning p_xx = λ1, p_zz = λ2 and p_yy = λ3. More details on the mapping of the eigenvalues onto the orthogonal shape space are found in [1]. In Fig. 1, the axially symmetric shapes of string, plane and double cone are found at the two boundaries of the wedge, symmetrically located around the
c22-axis. A walk in any direction on the wedge results in a gradual change between different shapes. The ellipsoid shapes reside only in the shaded areas at the top and bottom of the wedge. Hence, the shapes in the middle part of the shape space are totally ignored, or misinterpreted, by the ellipsoid model. In the following we show that awareness of the complete shape space improves the understanding of second order features and is especially useful for detecting stenoses in blood vessels.
3 Stenosis Detection
3.1 Stenosis Detection According to Frangi et al.
A model-based technique for stenosis quantification in Magnetic Resonance Angiography (MRA) data is presented in [2]. The algorithm first enhances the vessel structures and then segments the vessel using a deformable model. The enhancement filtering step is especially important since it provides the input to the deformable model. The filter V(x, σ) is designed to enhance blood vessels, which correspond to the shape called bright string in Fig. 1, using the following non-linear combination of the eigenvalues of the Hessian matrix:
V(x, σ) = 0   if λ2 > 0 or λ3 > 0,
V(x, σ) = (1 − exp(−R_A²/(2α²))) · exp(−R_B²/(2β²)) · (1 − exp(−S²/(2c²)))   if λ2 ≤ 0 and λ3 ≤ 0,   (3)

where

R_A = |λ2| / |λ3|,   R_B = |λ1| / √(|λ2 λ3|),   S = √(Σ_j λ_j²),   |λ1| ≤ |λ2| ≤ |λ3|,   (4)
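A direct transcription of eqs. (3)–(4); note that α = β = 0.5 and c = 100 are illustrative defaults of ours, since the parameter values are application dependent:

```python
import numpy as np

def vesselness(l1, l2, l3, alpha=0.5, beta=0.5, c=100.0):
    """Bright-string enhancement filter of eqs. (3)-(4).  The eigenvalues must
    already be ordered so that |l1| <= |l2| <= |l3|.  alpha, beta, c are
    assumed defaults, not values fixed by the paper."""
    if l2 > 0 or l3 > 0:
        return 0.0
    Ra = abs(l2) / abs(l3)                 # punishes cross-sectional asymmetry
    Rb = abs(l1) / np.sqrt(abs(l2 * l3))   # punishes blobness
    S2 = l1*l1 + l2*l2 + l3*l3             # punishes low energy (S squared)
    return ((1 - np.exp(-Ra**2 / (2*alpha**2)))
            * np.exp(-Rb**2 / (2*beta**2))
            * (1 - np.exp(-S2 / (2*c**2))))
```

The hard zero-setting in the first branch is exactly the behavior analyzed below: as soon as a swapped eigenvalue makes λ2 positive, the response vanishes.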
The three quantities R_A, R_B and S are designed to punish cross-sectional asymmetry, blobness and low energy, respectively. The parameters α, β and c are set to tune the sensitivity of the filter to such deviations from perfect strings. The filter is applied at different scales by adjusting the parameter σ in the Gaussian kernel of the derivators. The final filter response V(x) is the maximum of V(x, σ) across the scales. From (2) we find

S² = p_xx² + p_yy² + p_zz² = 2p20² + (4/5)(p21² + p22²).   (5)

However, from the orthonormal shape space introduced in the previous section, we know that the rotation- and shape-invariant energy is

‖p_2‖² = p20² + p21² + p22².   (6)
Both quantities are rotation invariant, which is a desirable property in the present application. However, comparing the right side of (5) with (6), we see that S over-emphasizes the p20 component. S is not shape-invariant: it returns high values for shapes close to the pole in Fig. 1 and low values for shapes near the equator, where the stenosis cases are to be found.
A vanishing response V(x, σ) in the stenotic area is also due to the ordering scheme of the eigenvalues in (4). The filter is tuned to achieve its maximum at the center of a perfect bright string, where the eigenvalues satisfy

|λ1| ≈ 0,   λ2 ≈ λ3 ≪ 0.   (7)
The eigenvalue λ1 with the smallest magnitude is the second order variation along the symmetry axis of the string. However, in the stenotic shape area λ1 becomes positive and increases in magnitude. If we still order the eigenvalues according to (4), once the magnitude of the positive eigenvalue exceeds one of the other two, the positions of λ1 and λ2 are swapped. Therefore, we now have λ2 > 0 in (3), and the response of the nonlinear filter V(x, σ) will be set to 0. By using multi-scale approaches this problem is somewhat relieved, since the narrow stenotic vessel is likely to be captured by a small-scale filter. However, when the vessel diameter changes abruptly, even multi-scale filters will fail. Another possible remedy would be to relax the zero-setting condition in (3). Unfortunately, another problem surfaces immediately. The local orientation is assigned to be the direction of the eigenvector corresponding to the eigenvalue with the smallest magnitude, which is λ1. Once λ1 and λ2 are swapped, an orientation perpendicular to the axis direction is reported. This effect is shown in Fig. 2. Therefore, any tracing program will have difficulty continuing beyond the stenosis.

3.2 The New Stenosis Detector
Based on the new shape space presented in Sec. 2, we propose the following stenosis detection conditions:

SHAPE = (p20/‖p_2‖ ≥ α) and (p20/‖p_2‖ ≤ β) and (p21/p22 ≤ γ),
STEN = (‖p_2‖ · SHAPE ≥ t).   (8)
SHAPE is a binary function, which is set to true when the three conditions controlled by the parameters α, β, γ, respectively, are mutually satisfied. The parameters α, β and γ discriminate against shapes which are below, above and to the left, respectively, of the stenosis area indicated in Fig. 1. The parameter t discriminates against low second order energy, and STEN is the final binary function that indicates stenosis. In the experiment below, we have set the parameters to α = −0.58, β = 0 and γ = −0.1. For comparison, a perfect bright string and a double cone have p20/‖p_2‖ equal to −0.667 and 0, respectively. For comparison with the filter V(x, σ), we applied the binary functions (8) to a stenosis phantom, with the result shown in Fig. 2. The local orientation is taken from the eigenvector associated with p_xx. We see that the energy at the stenosis location is better preserved, that the orientation indicator does not change at the stenotic area, and that STEN is set to true at the wanted location. The new stenosis detector has also been applied to two clinical Contrast Enhanced (CE) MRA volumes with carotid arteries. Fig. 3 shows two cases where stenosis indications seem to appear at appropriate places.
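The detector of eq. (8) is a handful of comparisons. In this sketch the energy threshold t is a hypothetical value, since the paper quotes only α, β and γ:

```python
import numpy as np

def stenosis_detector(p2, alpha=-0.58, beta=0.0, gamma=-0.1, t=1.0):
    """Binary conditions of eq. (8) on the prototype response
    p2 = (p20, p21, p22).  alpha, beta, gamma are the values reported in the
    text; t is an assumed energy threshold."""
    p20, p21, p22 = p2
    norm = float(np.linalg.norm(p2))
    shape = (alpha <= p20 / norm <= beta) and (p21 / p22 <= gamma)
    return shape and norm >= t   # equivalent to ||p2|| * SHAPE >= t for t > 0
```

A shape on the equator of the wedge with moderate energy fires the detector, while a perfect bright string (p20/‖p_2‖ = −0.667 < α) does not.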
Fig. 2. Top left: A stenotic vessel phantom. Top right: Wire-frame representation of the phantom, with stars indicating the STEN response. Middle left: A central slice; local orientation is taken from the eigenvector of the eigenvalue with the smallest magnitude as in Sec. 3.1, and gray level is computed as in (5). Middle right: Local orientation taken from the eigenvector of p_xx as in Sec. 3.2; gray level is computed as in (6). Bottom left: The function V(x). Bottom right: The function ‖p_2‖ · SHAPE.

So far, only a few data sets have been available for experiments. Therefore, we do not claim to have a general solution to the problem, but rather regard it as one simple example of the many possibilities of using the proposed shape space.
4 Conclusions and Future Work
In this work, we first presented a method that maps the second derivative response vector onto an orthogonal space spanned by the spherical harmonics. Two misunderstandings were pointed out: first, the second degree derivators (g_xx, g_yy, g_zz) are not orthogonal, and second, the ellipsoid model is unable to describe the second order variations in full. A stenosis detection method was proposed, and experiments on both a mathematical phantom and real clinical data were shown. Future work should include validation of the stenosis detector and variations thereof.
Acknowledgment We are indebted to Dr. A. Frangi for access to the CE MRA data sets. We gratefully acknowledge the financial support from the Swedish Foundation for Strategic Research through the VISIT program.
Fig. 3. The top row from left to right shows: Maximum Intensity Projection (MIP) of a stenotic region in a CE MRA dataset, wire-frame representation with stars denoting the stenotic area, another viewing direction of the volume. The bottom row shows another stenosis case.
References

1. P.-E. Danielsson, Q. Lin, and Q. Ye: Efficient detection of second degree variations in 2D and 3D images. J. of Visual Com. and Image Repr. (2001) To appear
2. A. Frangi, W. J. Niessen, P. J. Nederkoorn, O. Van Elgersma, and M. Viergever: Three-dimensional model-based stenosis quantification of the carotid arteries from contrast-enhanced MR angiography. IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (2000)
3. A. Frangi, W. J. Niessen, K. L. Vincken, and M. A. Viergever: Vessel enhancement filtering. Medical Image Conference and Computer Assisted Interventions (1998) 130–137
4. T. M. Koller: From Data to Information: Segmentation, Description and Analysis of the Cerebral Vascularity. PhD thesis, Swiss Federal Institute of Technology Zurich (1994)
5. C. Lorenz, I.-C. Carlsen, T. Buzug, C. Fassnacht, and J. Weese: Multi-scale line segmentation with automatic estimation of width, contrast and tangential direction in 2D and 3D medical images. CVRMed and MRCAS (1997)
6. Y. Sato and S. Tamura: Detection and quantification of line and sheet structure in 3-D images. Medical Image Conference and Computer Assisted Interventions (2000) 154–165
Graph-Based Topology Correction for Brain Cortex Segmentation

Xiao Han (1), Chenyang Xu (2), Ulisses Braga-Neto (1), and Jerry L. Prince (1)

(1) Center for Imaging Science, Johns Hopkins University, Baltimore MD 21218, USA
[email protected], [email protected], [email protected]
(2) Siemens Corporate Research, Princeton, NJ 08540, USA
[email protected]

Abstract. Reconstructing a topologically correct representation of the brain cortex surface from magnetic resonance images is important in several medical and neuroscience applications. Most previous methods have either made drastic changes to the underlying anatomy or relied on hand-editing. Recently, a new technique due to Shattuck and Leahy yields a fully automatic procedure with little distortion of the underlying segmentation. The present paper can be considered as an extension of this approach to include arbitrary cut directions and arbitrary digital connectivities. A detailed analysis of the method's performance on 15 magnetic resonance brain images is provided.
1 Introduction
Automatic reconstruction of the brain cortical surface from magnetic resonance (MR) images is an important goal in medicine and neuroscience. In recent years, there has been a considerable effort in developing methods for this purpose [3,4,10,11]. Because of imaging noise, the partial volume effect, image intensity inhomogeneities, and the highly convoluted nature of the brain cortex itself, it is difficult to produce a representation that is both accurate and has the correct topology. The major topological difficulty is the presence of one or more handles within the surface, which prevents the reconstructed surface from being correctly mapped to the plane or the sphere [4,12]. Manual editing has been one of the most widely employed techniques to guarantee both accuracy in surface representation and correct topology [3,4,10]. Several automatic techniques have also been proposed to generate a topologically correct representation of the WM/GM surface, including the well-known homotopic region growing model [5] and its dual procedure [1]. The problem with the latter approaches is that the topology might be corrected in very unpredictable ways, for example causing "cuts" across the whole brain. Another approach, successively filtering the white matter followed by an isosurface algorithm [11], produced the correct topology, but the final surface could be far from the truth. Recently, a new approach to topology correction was introduced by Shattuck and Leahy [7,8]. Instead of region growing or global filtering, this approach

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 395–401, 2001. © Springer-Verlag Berlin Heidelberg 2001
examines the connectivity of the binary white matter segmentation to find regions that give rise to incorrect topology, and carefully edits them to correct the topology. Their method is elegant and effective, and there is little room for improvement. However, the authors acknowledge that their "cuts" are not natural since they can only be oriented along the cartesian axes. They also describe a particular topological problem in which "slice duplication" is required. Finally, their approach requires 6-connectivity of the digital object, and cannot be used with any other digital connectivity. In this paper, we develop a new algorithm, which we refer to as the graph-based topology correction filter (GTCF), that removes all handles from a binary object. Our method is intrinsically three-dimensional, and "cuts" are not forced to be oriented along the cardinal axes. It does not require the introduction of slice duplication, and any (consistent) digital connectivity definition can be used. A final distinction of our approach from that of Shattuck and Leahy is that the correct topology can be assured through application of either foreground or background filters alone, resulting in either handles being cut or tunnels being filled exclusively. In the following sections, we give the necessary background, describe our method, and provide experimental results that show the overall characteristics and performance of this method.
2 Background
Although we desire a topologically correct surface, the correction is applied to the volume data, and an isosurface algorithm is used to generate the surface itself. In this section we review some notions from 3D discrete topology, isosurface algorithms, and the topology of digital meshes. These three areas provide key ingredients in our approach.

2.1 3D Discrete Topology. A 3D digital image V ⊂ Z³ is defined as a cubic array of lattice points. We follow the conventional definition of n-neighborhood and n-adjacency, where n ∈ {6, 18, 26}. We denote the n-neighborhood of a point x by N_n(x), and the neighborhood of x with x removed by N_n*(x). An n-path of length l > 0 from p to q in X is a sequence of distinct points p = p_0, p_1, …, p_l = q of X such that p_i is n-adjacent to p_{i+1}, 0 ≤ i < l. Two points p, q ∈ X are n-connected if there exists an n-path from p to q in X. A set of points X is called n-connected if every two points p, q ∈ X are n-connected in X. An n-connected component of a set of points X is a non-empty n-connected subset of X that is not n-adjacent to any other point in X. We denote the set of all n-connected components of X by C_n(X). In order to avoid a connectivity paradox, different connectivities, n and n̄, must be used in a binary image comprising an object (foreground) X and a background X̄. For example, (6,18) and (6,26) are two pairs of compatible connectivities. Following [2], we distinguish the 6-connectivity associated with the 18-connectivity by writing 6+-connectivity. The following definitions are from [2]:
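The neighborhood and connected-component notions above can be made concrete with a short sketch (a straightforward region-growing illustration; names are ours, and the 6+ refinement is not modeled here):

```python
from itertools import product

def neighborhood(n):
    """Offsets of the n-neighborhood, n in {6, 18, 26}: face neighbors only,
    face+edge neighbors, or all surrounding lattice points."""
    offs = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
    if n == 6:
        return [d for d in offs if sum(map(abs, d)) == 1]
    if n == 18:
        return [d for d in offs if sum(map(abs, d)) <= 2]
    return offs

def connected_components(points, n):
    """Partition a set of lattice points into its n-connected components
    C_n(X) by region growing."""
    remaining = set(points)
    comps = []
    while remaining:
        seed = remaining.pop()
        comp, stack = {seed}, [seed]
        while stack:
            p = stack.pop()
            for d in neighborhood(n):
                q = (p[0] + d[0], p[1] + d[1], p[2] + d[2])
                if q in remaining:
                    remaining.remove(q)
                    comp.add(q)
                    stack.append(q)
        comps.append(comp)
    return comps
```

Two diagonally touching voxels illustrate why the choice of n matters: they form two 6-connected components but a single 18- or 26-connected one, which is the root of the connectivity paradox the compatible pairs avoid.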
Definition 1 (Geodesic Neighborhood). Let X ⊂ V and x ∈ V. The geodesic neighborhood of x with respect to X of order k is the set N_n^k(x, X) defined recursively by N_n^1(x, X) = N_n*(x) ∩ X and N_n^k(x, X) = ∪{ N_n(y) ∩ N_26*(x) ∩ X, y ∈ N_n^{k−1}(x, X) }.

Definition 2 (Topological Numbers). Let X ⊂ V and x ∈ V. The topological numbers relative to the point x and the set X are T_6(x, X) = #C_6(N_6^2(x, X)), T_6+(x, X) = #C_6(N_6^3(x, X)), T_18(x, X) = #C_18(N_18^2(x, X)), and T_26(x, X) = #C_26(N_26^1(x, X)), where # denotes the cardinality of a set.

2.2 Isosurface Algorithm. In this paper, we use a modified marching cubes (MC) algorithm to produce a surface representation of a binary digital object (1 = object, 0 = background). As pointed out in [6], by incorporating both the face and the body saddle points, the MC algorithm can produce a surface consistent with trilinear interpolation and free from topological paradoxes. In this case, we have further shown that, for binary objects, setting the isovalue below 0.25 yields 26-connectivity; setting it between 0.25 and 0.5 yields 18-connectivity; and setting it above 0.5 yields 6-connectivity.

2.3 Topology of Surface Meshes. The number of handles on a surface is called the genus of the surface, and is given by g = 1 − χ/2, where χ is the Euler number. The Euler number in turn can be computed as χ = V − E + F, where V, E, and F are the number of vertices, edges, and faces, respectively, of the surface mesh. A surface is topologically equivalent to a sphere when g = 0; however, neither the Euler number nor the genus provides information about the size or location of a handle. Given a topologically consistent isosurface algorithm, there is a one-to-one correspondence between the handles of a binary digital object with n-connectivity and those of its triangulated surface representation.
We therefore check the topology of the object by computing the Euler number of its isosurface, generated using the correct threshold.
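This genus check follows directly from χ = V − E + F; a minimal sketch for a closed triangle mesh (the function name is ours):

```python
def genus_of_triangle_mesh(num_vertices, triangles):
    """chi = V - E + F and g = 1 - chi/2 for a closed triangle mesh.

    `triangles` lists vertex-index triples; in a closed manifold mesh
    every undirected edge is shared by exactly two triangles.
    """
    edges = set()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    chi = num_vertices - len(edges) + len(triangles)
    return 1 - chi // 2  # chi is even for a closed orientable surface
```

A tetrahedron (topologically a sphere) gives genus 0, while the 7-vertex Möbius triangulation of the torus gives genus 1.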
3
Graph-Based Topology Correction Filter
Our graph-based topology correction filter (GTCF) aims to remove all the handles in a binary volume consisting of a single connected foreground object with no cavities (cavities are easily removed by region-growing). GTCF can operate on the original volume or its complement, giving foreground and background filters, respectively. Handles removed by a background filter correspond to tunnels filled in the original volume. Compatible connectivities must be used for the two filters, yielding an n-connectivity foreground filter and an n̄-connectivity background filter. For simplicity, we describe the foreground filter with n-connectivity only; the background filter is the same with n replaced by n̄. A block diagram of GTCF is shown in Fig. 1(a), and the basic idea is illustrated in Fig. 1(b). The method consists of four major steps, which are repeated at successively increasing scales until all handles or tunnels are removed. We now describe each step; some details are omitted due to lack of space.
Xiao Han et al.

[Fig. 1(a) flowchart: if a background filter, invert the volume; binary opening with the SE at the selected scale, giving body and residue; conditional topological expansion; connected component labeling; graph construction and cycle breaking; if a background filter, invert the volume back. Fig. 1(b): illustration of CTE and the labeling of numbered components.]
Fig. 1. Topology correction filter: (a) flowchart and (b) illustration.

3.1 Binary Morphological Opening. The morphological opening of an object F with a structuring element B removes all parts of F that are smaller than B, in the sense that they cannot contain any translated replica of B. We use morphological opening to detect handles at different scales. We call the structuring element (SE) used at the smallest scale (scale 1) the basic structuring element. The SE at scale k is obtained by k − 1 successive dilations of the basic SE with itself. In practice, we use a digital ball of radius one — i.e., an 18-neighborhood plus the center point — as the basic SE. The shape of the basic SE is not critical; for example, we could also use a 3D cross, which has only seven points. As illustrated in Fig. 1(b), the opening operation divides the foreground object into two classes. Points that are in the resulting (opened) image are called body points, and points that are removed by the opening operator are called residue points.

3.2 Conditional Topological Expansion. On a complicated shape such as a white matter segmentation, morphological opening removes far more voxels than just those required to break the handles. The residue typically comprises many connected components, several of which are large, complicated shapes. Also, the opening can actually create handles in the body component. For these reasons, we cannot simply discard residue components at this stage in order to break handles. Instead, we introduce conditional topological expansion (CTE), which aims to transfer as many points as possible from the residue back to the body, without introducing handles.

Algorithm 1 (Conditional Topological Expansion (CTE)):
1. Find the set S of residue points that are n-neighbors of the body points X.
2. For each point x ∈ S, if T_n(x, X) = 1 then let X ← X ∪ {x}.
3. If no point changed its label in Step 2, then stop; otherwise, go to Step 1.
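Algorithm 1 can be sketched as follows. The `neighbors` and `topological_number` callables are assumptions of this sketch: the former enumerates the n-neighbors of a point, and the latter evaluates T_n(x, X) from Definition 2 (the usage below exercises the loop with a simple 1D stand-in, not the full 3D topological number):

```python
def conditional_topological_expansion(residue, body, neighbors, topological_number):
    """Transfer residue points back to the body while T_n(x, X) = 1.

    `residue` and `body` are sets of lattice points; `neighbors(p)` returns
    the n-neighbors of p; `topological_number(x, X)` evaluates T_n(x, X).
    """
    residue, body = set(residue), set(body)
    changed = True
    while changed:
        changed = False
        # Step 1: residue points that are n-neighbors of body points.
        frontier = [x for x in residue if any(q in body for q in neighbors(x))]
        # Step 2: expand only where the transfer cannot create a handle.
        for x in frontier:
            if topological_number(x, body) == 1:
                residue.discard(x)
                body.add(x)
                changed = True
        # Step 3: repeat until no point changes its label.
    return body, residue
```

With a 1D stand-in (neighbors x − 1 and x + 1, "topological number" counting neighbors in X), a residue chain attached to the body is absorbed point by point.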
The criterion in Step 2 involving the topological number is Property 5 in Bertrand [2]. It guarantees that the set {x} ∪ X has no more handles than X. Thus,
CTE can fill handles created by morphological opening, but cannot create new handles. 3.3 Connected Component Labeling. After CTE, the remaining residue pieces form thin “cuts” that separate body components, as depicted in the third diagram of Fig. 1(b). To prepare for a graph-based analysis of topology, it is necessary to label all the connected components. The basic steps are as follows. First, we compute the connected components of the body using n-connectivity. We then compute the topological number of each residue point with respect to each body component. Residue points that are connected to the same body component more than once form simple handles, and are immediately removed. Second, we compute the connected components of the remaining residue and calculate the number of connections between each residue connected component (RCC) and its adjacent body connected components (BCCs). It turns out that this analysis is subtle, because certain handle configurations cannot be detected during the subsequent graph-based analysis and must be addressed here. Finally, we seek to merge RCCs whenever possible. This can be done when body and residue components together form a solid object without any handles. 3.4 Graph Construction and Cycle Breaking. Finally, we build a graph whose nodes represent the RCCs and BCCs and whose edges represent the connections between the RCCs and BCCs, as shown in Fig. 1(b). We then search for cycles in this graph using a depth-first search algorithm. When a cycle is detected, we break it by removing the RCC with the smallest size (number of voxels) among all the RCCs in the cycle. Whenever a cycle is broken, it is necessary to restart the algorithm at the starting node of that cycle so that the modified graph is correctly traversed. After all the cycles are broken, we construct a new object by putting together all the remaining RCCs and the BCCs. This is the output of the topology correction filter. 
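The graph construction and cycle-breaking step can be sketched as follows; the adjacency-map input and the `rcc_sizes` dictionary are our assumptions, not the paper's data structures:

```python
def break_cycles(adjacency, rcc_sizes):
    """Break every graph cycle by removing its smallest residue component.

    `adjacency` maps each node (RCC or BCC) to the set of nodes it touches;
    `rcc_sizes` maps RCC nodes to their voxel counts.  Returns the set of
    removed RCC nodes.
    """
    adjacency = {u: set(vs) for u, vs in adjacency.items()}

    def find_cycle():
        # Depth-first search; in an undirected graph every non-tree edge
        # leads back to an ancestor, closing a cycle.
        visited = set()

        def dfs(node, parent, path):
            visited.add(node)
            path.append(node)
            for nb in adjacency[node]:
                if nb == parent:
                    continue
                if nb in path:
                    return path[path.index(nb):]
                if nb not in visited:
                    cycle = dfs(nb, node, path)
                    if cycle:
                        return cycle
            path.pop()
            return None

        for start in adjacency:
            if start not in visited:
                cycle = dfs(start, None, [])
                if cycle:
                    return cycle
        return None

    removed = set()
    while True:
        cycle = find_cycle()
        if cycle is None:
            return removed
        # The RCC-BCC graph is bipartite, so every cycle contains an RCC.
        victim = min((u for u in cycle if u in rcc_sizes), key=rcc_sizes.get)
        removed.add(victim)
        for nb in adjacency.pop(victim):
            adjacency[nb].discard(victim)
```

After each removal the search restarts on the modified graph, mirroring the restart described above; the loop ends when the graph is a forest.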
If a background filter is used, the new volume is inverted back. We then apply our MC algorithm with an isovalue chosen in the correct range to reflect the desired connectivity. If the genus of the resulting isosurface is zero, then the topology of the new volume is correct, and we stop (and compute the final surface using MC with the appropriate isovalue). Otherwise, we either switch to the opposite filter at the same scale (if not already applied) or increase the scale of the current filter, and continue the topology correction on the new volume.
4
Results
The object depicted in Fig. 2(a) illustrates the results of applying a foreground filter (n = 18) and a background filter (n̄ = 6+) to the same handle. The foreground filter removed the handle by breaking it along a thin part, while the background filter filled the tunnel with a thin sheet. In both cases, the “cuts” are small and clearly not oriented along Cartesian directions. We applied our proposed topology correction filter to 15 MR brain image volumes obtained from the Baltimore Longitudinal Study of Aging [9]. The typical image size after cropping the background was 140 × 200 × 160. All images were preprocessed and segmented using an updated version of the method described in [11]. The filter was then applied to all 15 brain volumes in a sequence alternating between foreground (F) and background (B) filtering, and then increasing in scale. We used n = 18 and n̄ = 6+, and the basic SE was an 18-connected digital ball. Tables 1 and 2 show the original genus (number of handles) in the white matter isosurface and the genus after each filter stage from top to bottom. The bottom row of each table shows the number of voxels that were changed from either foreground to background or background to foreground for each brain. Comparing the results of the two tables, we see that the change to the volume is smaller if we apply the background filter first. The reason is that the background filter uses 6-connectivity while the foreground filter uses 18-connectivity; therefore, narrower “swaths” can be used to break handles. On the other hand, beginning with the foreground filter yields a faster reduction in the number of handles. The results also show that the ratio of the number of voxels changed to the number of handles in the original volume is around 3, which is comparable to the results reported in [7,8].

Fig. 2. (a) A handle taken from an actual white matter volume. The result of using (b) a foreground filter and (c) a background filter. (d)–(h): consecutive slices showing the cut made by the foreground filter.
5
Conclusion
We have developed and evaluated an automatic method called GTCF to remove handles in 3D digital images. GTCF is intrinsically three-dimensional, does not require the introduction of half-thickness slices as in [7,8], any consistent digital connectivity can be used, and it can optionally be used to exclusively cut handles or fill tunnels if desired. It has been shown to work well on 15 magnetic resonance segmented volumes. Acknowledgments. We thank Drs. Sinan Batman, John Goutsias, Susan Resnick, and Dzung Pham for their contributions. This work was supported in part by NSF/ERC grant CISST#9731748 and NIH/NINDS grant R01NS37747.
Table 1. Genus and Number of Voxels Changed Using a F-B Sequence.

Brain     S1    S2    S3    S4    S5    S6    S7    S8    S9    S10   S11   S12   S13   S14   S15
Original  724   955   1376  744   1031  776   562   886   688   825   986   597   1944  1280  801
f1        4     5     19    0     5     5     1     11    4     0     5     5     16    9     4
b1        0     0     1     0     2     0     0     2     0     0     0     1     0     0     0
f2        0     0     0     0     0     0     0     0     0     0     0     0     0     0     0
b2        0     0     0     0     0     0     0     0     0     0     0     0     0     0     0
Changes   2398  3284  4407  1973  3081  1872  1563  2584  2023  2118  2691  1618  6416  3678  2165

Table 2. Genus and Number of Voxels Changed Using a B-F Sequence.

Brain     S1    S2    S3    S4    S5    S6    S7    S8    S9    S10   S11   S12   S13   S14   S15
Original  724   955   1376  744   1031  776   562   886   688   825   986   597   1944  1280  801
b1        46    32    32    40    31    24    16    36    26    23    20    17    57    36    21
f1        0     0     0     0     0     1     0     0     1     0     0     0     0     0     0
b2        0     0     0     0     0     1     0     0     1     0     0     0     0     0     0
f2        0     0     0     0     0     0     0     0     0     0     0     0     0     0     0
Changes   1456  2054  2846  1509  3200  1728  1104  2141  1359  1555  1807  1135  4213  2653  1589
References

1. Z. Aktouf, G. Bertrand, and L. Perroton, “A 3D-hole closing algorithm,” in 6th Int. Workshop on Discrete Geometry for Computer Imagery, 36–47, 1996.
2. G. Bertrand, “Simple points, topological numbers and geodesic neighborhoods in cubic grids,” Patt. Recog. Lett., 15:1003–1011, 1994.
3. G. Carman, H. Drury, and D. Van Essen, “Computational methods for reconstructing and unfolding the cerebral cortex,” Cerebral Cortex, 5:506–517, 1995.
4. A. Dale, B. Fischl, and M. Sereno, “Cortical surface-based analysis I & II,” NeuroImage, 9:179–207, 1999.
5. J.-F. Mangin, V. Frouin, J. Regis, and J. Lopez-Krahe, “From 3D magnetic resonance images to structural representations of the cortex topography using topology preserving deformations,” J. Math. Imag. Vision, 5:297–318, 1995.
6. B. Natarajan, “On generating topologically consistent isosurfaces from uniform samples,” The Visual Computer, 11(1):52–62, 1994.
7. D. Shattuck and R. Leahy, “Topological refinement of volumetric data,” in Proc. of the SPIE, 3661:204–213, Feb. 1999.
8. D. Shattuck and R. Leahy, “Topologically constrained cortical surfaces from MRI,” in Proc. of the SPIE, 3979:747–758, Feb. 2000.
9. S. M. Resnick, A. F. Goldszal, C. Davatzikos, S. Golski, M. A. Kraut, E. J. Metter, R. N. Bryan, and A. B. Zonderman, “One-year age changes in MRI brain volumes in older adults,” Cerebral Cortex, 10(5):464–472, 2000.
10. P. Teo, G. Sapiro, and B. Wandell, “Creating connected representations of cortical GM for functional MRI visualization,” IEEE Trans. Med. Imag., 16:852–863, 1997.
11. C. Xu, D. Pham, M. Rettmann, D. Yu, and J. Prince, “Reconstruction of the human cerebral cortex from MR images,” IEEE Trans. Med. Imag., 18(6):467–480, 1999.
12. D. Van Essen and J. Maunsell, “Two dimensional maps of cerebral cortex,” J. of Comparative Neurology, 191:255–281, 1980.
Intuitive, Localized Analysis of Shape Variability Paul Yushkevich, Stephen M. Pizer, Sarang Joshi, and J.S. Marron Medical Image Display and Analysis Group, University of North Carolina, Chapel Hill, NC 27599, USA.
[email protected] Abstract. Analysis of shape variability is important for diagnostic classification and understanding of biological processes. We present a novel shape analysis approach based on a multiscale medial representation. Our method examines shape variability in separate categories, such as global variability in the coarse-scale shape description and localized variability in the fine-scale description. The method can distinguish between variability in growing and bending. When used for diagnostic classification, the method indicates what shape change accounts for the discrimination and where on the object the change occurs. We illustrate the approach by analysis of 2D clinical corpus callosum shape and discrimination of simulated corpora callosa.
1
Introduction
Analysis of shape has begun to emerge as a useful area of medical image processing because it has the potential to improve the accuracy of medical diagnosis, the correctness of image segmentation, and the understanding of processes behind growth and disease. We present a novel 2D shape analysis method that can describe shape variability in intuitive terms, and pinpoint the places where variability is most pronounced. We use our method to analyze the shape of the mid-sagittal slice of the corpus callosum. For example, consider the shapes in Fig. 1 which shows characteristic representatives of three classes of shapes. Our method can detect that the three classes are different. It can show that there is a global difference in width and bending between classes 1 and 2, and that near the middle of the object there is a local difference between classes 1 and 3.
Fig. 1. Representatives of three classes of shapes whose differences can be described globally and locally.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 402–408, 2001.
© Springer-Verlag Berlin Heidelberg 2001
Methods in 2D shape analysis can typically be divided into three high level steps. First, a geometric representation is established. Second, a set of features is derived from the representation; these features must be invariant under the similarity transform. Finally, a statistical analysis method is chosen and applied to the features. The shape analysis literature can be categorized by the decisions taken at each step. In their seminal paper on shape analysis, Cootes et al. represent shapes using a point boundary model, which is a list of coordinates of points on the object boundary. Invariance under rigid transform is achieved by alignment via the Procrustes algorithm; aligned boundary positions form the features. Principal component analysis (PCA) is used to gain both a qualitative and a quantitative description of global shape variability [2]. Both Staib & Duncan and Szekely et al. represent boundaries in 2D as a weighted sum of Fourier basis functions and perform statistical analysis on the weights [8,9]. In both methods the representation inherently provides invariance under the similarity transform. Bookstein and others use biological landmarks to represent shapes [3]. Landmarks are aligned by the Procrustes algorithm. Analysis is based upon thin plate spline warps which map one set of landmarks into another. In a study of corpora callosa, Golland et al. represent 2D objects using a fixed topology skeleton, which is a snake-like approximation to the medial axis of an object [5]. Width and approximate curvature are sampled along the skeleton and serve as features. These features are inherently invariant under similarity transform. Classification is performed using linear discrimination and support vectors. Our method also uses the same three step framework. We make the following decisions at each step. We describe shapes using a multiscale medial representation. 
A set of features, similar to those used by Golland [5], is derived from the representation; the features are invariant under the similarity transform. We classify shapes using Fisher linear discriminants. Our method is unique because it focuses on dividing the description of shape variability into parts. We can analyze variability in the coarse-scale description of the entire object separately from the fine-scale variability in a part of the object. The method also allows separate examination of growth-type shape changes, such as narrowing and elongation, and bending-type shape changes. Our choice of representation makes these two types of separability possible. We describe objects using m-reps, which are formally defined by Pizer et al. as a discrete multiscale medial representation of shape [6]. M-reps capture shape in intuitive terms, such as widening, bending, and elongation, because they are medial. According to Blum, whose medial axis work led to the development of m-reps, the medial description is especially suitable for biological objects [1]. We say that m-reps are multiscale because they have an inherent level of boundary tolerance. A coarse-scale m-rep describes the general properties of shape, paying little attention to the details of the boundary. A fine-scale m-rep captures detailed shape properties. Both types of m-reps provide different
information about shape, and a rich description is obtained when m-reps at different levels of detail are used together. To discriminate between classes of objects based on their shape, we apply existing classification methods at multiple scales and locations. For example, in a simulated set of corpora callosa, which Fig. 1 illustrates, we find that discriminability between classes 1 and 3 is strongest at the bump location.
2
Methods
Fig. 2 summarizes our localized shape discrimination method. Like most shape classifiers, ours is trained on a sample set of shapes extracted from images. Presently these are binary images of the corpus callosum.
Fig. 2. The components and flow of the localized shape discrimination method. Shape features are extracted from each input image.
Our method analyzes shape at multiple levels of detail. Each shape is represented by both a coarse scale and a fine scale model. M-reps, defined in [6] serve as the shape representation because they incorporate scale-sensitive metrics and provide a geometrically rich shape description. A pair of m-reps, one with five medial atoms and a large boundary tolerance and another with nine atoms and a smaller tolerance are fitted to each image; these m-reps are called the coarse and the fine m-reps (Fig. 3). The coarse m-rep is computed first by warping a template five-atom m-rep to maximize image match along its implied boundary. Image match is computed using a Gaussian derivative operator with aperture proportional to local width
Fig. 3. A typical simulated corpus callosum image (top left), a coarse m-rep (top right), a prediction m-rep (bottom left), and a fine m-rep (bottom right).
of the m-rep; the constant of proportionality is set large (ρ = 1.0) for coarse m-reps. Medial atoms are constrained to remain at equal distances from each other during warping. Using a medial interpolation technique outlined in [10], we resample the coarse m-rep, inserting a new medial atom half-way between each pair of existing atoms to form a prediction m-rep. The latter has the same implied boundary as the coarse m-rep but 9 atoms instead of 5. The prediction m-rep is again warped to fit the image, this time using a smaller aperture-to-width ratio ρ = 0.5. The three m-reps computed for each input image are used to derive statistical features. These features are geometrical in nature, describe shape properties such as growth and bending, and are invariant under the similarity transform. Two sets of features are computed. From the coarse m-rep we derive coarse features, which describe relationships between neighboring medial atoms. From the fine and prediction m-reps we derive the refinement features, which measure geometrical differences between corresponding pairs of medial atoms. Refinement features describe the residual information gained from measuring shape at a smaller scale. We use coarse features to discriminate between classes of shapes based on global shape properties. We perform three types of global discrimination, one based on just the bending features, one based on just the growth features, and one on the whole set of global features. When we compare the strengths of the three discriminations, we learn whether the differences between the classes are characterized more by differences in bending or in growth. We use refinement features to find locations on the shape where differences between classes are most profound. We perform a separate discrimination based on the refinement features of each of the 9 atoms present in the fine m-rep.
By comparing the relative strengths of the discriminations we find the locations on the object where the two classes differ most significantly. For each feature set, discrimination between two classes is performed by first reducing the features to one dimension by projection on the Fisher linear
discriminant and then performing the Student t-test [4]. The p-value of this test indicates the separability strength between the two classes.
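A minimal sketch of this projection-plus-t-test pipeline; NumPy, the function names, and the small ridge term added for numerical stability are our choices, not the paper's:

```python
import numpy as np

def fisher_projection(X1, X2):
    """Project two classes of feature vectors onto the Fisher linear discriminant."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter; a tiny ridge keeps the solve well-conditioned.
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
    w = np.linalg.solve(Sw + 1e-8 * np.eye(Sw.shape[0]), m1 - m2)
    return X1 @ w, X2 @ w

def two_sample_t(a, b):
    """Pooled-variance Student t statistic for the two projected samples."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))
```

Well-separated classes yield a large |t| (and hence a small p-value), which is the separability strength compared across feature sets.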
Fig. 4. Primary mode of variability in bending features (left), growth features (middle), and combined growth and bending features (right) in a class of corpus callosum shapes. Displayed are implied boundaries of m-reps corresponding to points at −2, 0, and +2 standard deviations from the mean along the primary mode. Additionally, we compute the primary modes of shape variability in each class or whole population, following a technique similar to Cootes et al. [2]. The feature extraction step is invertible, allowing reconstruction of m-reps from points in feature space, and hence modes of variability can be visualized as animations. We can analyze and visualize shape variability separately in terms of growth and bending (Fig. 4).
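The primary-mode computation can be sketched as a plain PCA via SVD; the function and parameter names are our assumptions, and mapping the returned vectors back to m-reps relies on the invertible feature extraction described above:

```python
import numpy as np

def primary_mode_shapes(features, stddevs=(-2.0, 0.0, 2.0)):
    """Return feature vectors at the mean plus offsets along the first
    principal mode of an (n_samples, n_features) data matrix."""
    mean = features.mean(axis=0)
    centered = features - mean
    # SVD of the centered data matrix yields the principal directions.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    sigma = s[0] / np.sqrt(len(features) - 1)  # std. dev. along the mode
    return [mean + k * sigma * direction for k in stddevs]
```

Rendering the implied boundaries of the m-reps reconstructed from these vectors gives exactly the −2, 0, +2 standard-deviation views shown in Fig. 4.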
3
Experimental Results
We demonstrate the diagnostic ability of our method in a case that supports discrimination by constructing three artificial classes of objects based on the corpus callosum shape, with representatives shown in Fig. 1. Classes 1 and 2 differ slightly in coarse shape, while classes 1 and 3 have the same basic coarse shape but differ locally because class 3 has a random bump at the midbody. Our hypothesis is that the method would be able to discriminate between classes 1 and 2 globally while discriminating locally between classes 1 and 3. Our simulation is based on an elliptical harmonic representation of the segmented corpora callosa, kindly provided by the group headed by Guido Gerig [9]. The flexibility of the harmonic representation allows one to easily generate artificial shapes that resemble the corpus callosum. We create two Gaussian distributions in the PCA space of the spherical harmonics. These have different means and the same covariance. We take a random sample of 25 points from each distribution; each point corresponds to a corpus callosum shape that is rasterized. Thus we obtain training images for simulated classes 1 and 2. The third simulated class is sampled from the same distribution as class 1, but each shape in this class has an artificial bump. To create this class, we follow the same procedure as for class 1, but before rasterization we add a perturbation in the shape of a truncated cosine function to the boundary. The location and amplitude of the perturbation follow normal distributions. We use leave-one-out analysis to test the classification ability of our method. Using coarse features,
Table 1. Decimal exponents of p-values from the Student t-test that show separability between class pairs 1 vs. 2 and 1 vs. 3 for the nine sets of refinement features.

Atom    1      2      3      4      5      6      7      8      9
1 vs 2  -1.63  -3.49  -1.00  -1.62  -2.64  -3.75  -2.82  -2.72  -6.80
1 vs 3  -1.77  -1.26  -4.83  -9.72  -8.86  -9.23  -0.83  -1.25  -2.20
we can discriminate between classes 1 and 2 with 70% accuracy. This result is encouraging because the corresponding classes in spherical harmonics coefficient space have 80% discrimination accuracy. Table 1 demonstrates our ability to locate the bump. Here discrimination between classes was performed on the refinement features at each of the medial atoms. For classes 1 and 3, the p-values for atoms near the middle of the figure are much smaller than at the ends, indicating stronger separability there. In contrast, for the same discriminations between classes 1 and 2, the strongest separability is found at one of the ends.
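The truncated-cosine bump used to construct class 3 might look like the following sketch; the parametrization (bump center index, half-width in samples, outward normals per boundary point) is our assumption:

```python
import numpy as np

def add_cosine_bump(boundary, center_idx, half_width, amplitude, normals):
    """Perturb a sampled closed 2D boundary with a truncated cosine bump.

    `boundary` is an (n, 2) array of points and `normals` the outward unit
    normals at each point; the bump peaks at `center_idx` and vanishes
    `half_width` samples away (cyclically).
    """
    boundary = boundary.copy()
    n = len(boundary)
    for i in range(n):
        d = min(abs(i - center_idx), n - abs(i - center_idx))  # cyclic distance
        if d < half_width:
            # Raised-cosine profile: amplitude at the center, 0 at the edge.
            h = amplitude * 0.5 * (1.0 + np.cos(np.pi * d / half_width))
            boundary[i] += h * normals[i]
    return boundary
```

Drawing the bump's center and amplitude from normal distributions, as described above, then yields a different random bump per class-3 shape.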
4
Discussion and Conclusions
The major contribution of this paper is the development of a shape analysis method that leverages the intuitive and multi-scale properties of the medial representation. We demonstrate this technique by the analysis of simulated data. The application to real data remains future work. Our statistical features have distributions that qualitatively do not appear Gaussian; rather, the distributions of some features have outliers and are multimodal. Further examination is needed to improve the normality of the features. Statistical methods that do not assume normality may also improve the analysis. We are extending the method to three dimensions because most of the potential medical applications deal with 3D images of human anatomy. Transition to 3D is possible in practice because recent progress in medial segmentation lets us extract m-reps of 3D anatomical structures semi-automatically [7]. M-rep interpolation and resampling pose the major theoretical difficulty. We plan to use the method to analyze hippocampal shape in Alzheimer's disease. To be useful in practice, our method cannot be limited to single-figure objects. Few shapes can be accurately represented by a single chain of medial atoms (or a single mesh in 3D). The capability to analyze multi-figural objects can be achieved easily if the medial branching topology is the same for all shapes in the training set. In this case, we must simply add new features that describe figure-to-figure relationships. Homology is a problem common to all extant boundary- and medial-based shape analysis approaches, including ours. We establish homology by sampling medial atoms at equal spacing between the ends of the medial axis. Such correspondence is too sensitive to the placement of the ends. Establishment of homology based on the training statistics requires considerable research effort.
Acknowledgements This work would not have been possible without the invaluable advice and support from many MIDAG members, most notably, Stephen Aylward, Daniel Fritsch, Guido Gerig, Sean Ho, Martin Styner, and Andrew Thall. We thank Guido Gerig and Sean Ho for provision of corpus callosum data and harmonic analysis methods. We are grateful to Yonatan Fridman and Gregg Tracton for aid in m-rep segmentation. This work was carried out under the partial support of NIH grants P01 CA47982 and R01 CA67183. Some of the equipment was provided under a gift from the Intel Corporation.
References

1. H. Blum: A transformation for extracting new descriptors of shape, Models for the Perception of Speech and Visual Form, MIT Press, 1967.
2. T. Cootes, C. Taylor, D. Cooper, and J. Graham: Active shape models — their training and application, Computer Vision, Graphics, and Image Processing: Image Understanding 61 (1995), no. 1, 38–59.
3. I. Dryden and K. Mardia: Statistical shape analysis, John Wiley & Sons, New York, 1998.
4. R. Duda and P. Hart: Pattern classification and scene analysis, John Wiley & Sons, New York, 1973.
5. P. Golland, W.E.L. Grimson, and R. Kikinis: Statistical shape analysis using fixed topology skeletons: Corpus callosum study, International Conference on Information Processing in Medical Imaging, LNCS 1613, Springer-Verlag, 1999, pp. 382–388.
6. S. Pizer, D. Fritsch, P. Yushkevich, V. Johnson, and E. Chaney: Segmentation, registration, and measurement of shape variation via image object shape, IEEE Transactions on Medical Imaging 18 (1999), 851–865.
7. S.M. Pizer, T. Fletcher, Y. Fridman, D.S. Fritsch, A.G. Gash, J.M. Glotzer, S. Joshi, A. Thall, G. Tracton, P. Yushkevich, and E.L. Chaney: Deformable m-reps for 3D medical image segmentation, in review, ftp://ftp.cs.unc.edu/pub/users/nicole/defmrep3d.final.pdf, 2000.
8. L.H. Staib and J.S. Duncan: Boundary finding with parametrically deformable models, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (1992), no. 11, 1061–1075.
9. G. Székely, A. Kelemen, Ch. Brechbühler, and G. Gerig: Segmentation of 2-D and 3-D objects from MRI volume data using constrained elastic deformations of flexible Fourier contour and surface models, Medical Image Analysis 1 (1996), no. 1, 19–34.
10. P. Yushkevich, S.M. Pizer, S. Joshi, and J.S. Marron: Intuitive, localized analysis of shape variability, UNC Dept. of Computer Science Technical Report, http://www.cs.unc.edu/~pauly/ipmi2001/ipmi2001.pdf, 2001.
A Sequential 3D Thinning Algorithm and Its Medical Applications

Kálmán Palágyi1, Erich Sorantin2, Emese Balogh1, Attila Kuba1, Csongor Halmai1, Balázs Erdőhelyi1, and Klaus Hausegger2

1
Department of Applied Informatics, University of Szeged, Hungary
{palagyi, bmse, kuba, halmai}@inf.u-szeged.hu
2 Department of Radiology, University Hospital Graz, Austria
{erich.sorantin, klaus.hausegger}@kfunigraz.ac.at
Abstract. The skeleton is a frequently applied shape feature used to represent the general form of an object. Thinning is an iterative object reduction technique for producing a reasonable approximation to the skeleton in a topology-preserving way. This paper describes a sequential 3D thinning algorithm for extracting medial lines of objects in (26, 6) pictures. Our algorithm has been successfully applied in medical image analysis. Three of the applications that have emerged (analysing airways, blood vessels, and colons) are also presented.
1
Basic Notions and Results
Let p be a point in the 3D digital space Z³. Let us denote by Nj(p) (for j = 6, 18, 26) the set of points j-adjacent to point p (see Fig. 1/a). The sequence of distinct points x0, x1, . . . , xn is a j-path of length n ≥ 0 from point x0 to point xn in a non-empty set of points X if each point of the sequence is in X and xi is j-adjacent to xi−1 for each 1 ≤ i ≤ n. (Note that a single point is a j-path of length 0.) Two points are j-connected in the set X if there is a j-path in X between them. A set of points X is j-connected in the set of points Y ⊇ X if any two points in X are j-connected in Y. The 3D binary (m,n) digital picture P is a quadruple P = (Z³, m, n, B) [2]. Each element of Z³ is called a point of P. Each point in B ⊆ Z³ is called a black point and value 1 is assigned to it. Each point in Z³\B is called a white point and value 0 is assigned to it. Adjacency m belongs to the black points and adjacency n belongs to the white points. A black component (or object) is a maximal m-connected set of points in B. A white component is a maximal n-connected set of points in Z³\B. We are dealing with (26,6) pictures. It is assumed that any picture contains finitely many black points. A black point in a (26, 6) picture is called a border point if it is 6-adjacent to at least one white point. A border point p is called a U-border point if the point marked by “U” in Fig. 1/a is white. We can define N-, E-, S-, W-, and D-border points in the same way. A black point is called an end-point if it has exactly one black 26-neighbor (i.e., the set N26(p) ∩ (B\{p}) is a singleton).

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 409–415, 2001.
© Springer-Verlag Berlin Heidelberg 2001
Fig. 1. (a) The frequently used adjacencies in ZZ 3 . The set N6 (p) contains the central point p and the 6 points marked U, N, E, S, W, and D. The set N18 (p) contains the set N6 (p) and the 12 points marked “•”. The set N26 (p) contains the set N18 (p) and the 8 points marked “◦”. (b) Indices assigned to points in N26 (p)\{p}
A black point is called a simple point if its deletion does not alter the topology of the picture. We make use of the following result for (26, 6) pictures:

Theorem 1. [4] Black point p is simple in picture (ZZ³, 26, 6, B) if and only if all the following conditions hold:
1. the set N26(p) ∩ (B\{p}) is not empty (i.e., p is not an isolated point);
2. the set N26(p) ∩ (B\{p}) is 26–connected (in itself);
3. the set (ZZ³\B) ∩ N6(p) is not empty (i.e., p is a border point); and
4. the set (ZZ³\B) ∩ N6(p) is 6–connected in the set (ZZ³\B) ∩ N18(p).
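The four conditions of Theorem 1 can be checked directly by breadth–first search (an illustrative Python sketch, not the label–based procedure the paper uses later; `is_simple` takes a candidate point p and the black point set B):

```python
import itertools
from collections import deque

def _offsets(j):
    """Offsets of the j-adjacent positions (j in {6, 18, 26})."""
    offs = []
    for d in itertools.product((-1, 0, 1), repeat=3):
        s = sum(abs(c) for c in d)
        if s and ((j == 6 and s == 1) or (j == 18 and s <= 2) or j == 26):
            offs.append(d)
    return offs

def _connected26(points):
    """True if the non-empty point set is 26-connected in itself."""
    pts = set(points)
    offs = _offsets(26)
    start = next(iter(pts))
    seen, queue = {start}, deque([start])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offs:
            q = (x + dx, y + dy, z + dz)
            if q in pts and q not in seen:
                seen.add(q)
                queue.append(q)
    return seen == pts

def is_simple(p, B):
    """Conditions 1-4 of Theorem 1 for black point p in picture (ZZ^3, 26, 6, B)."""
    B = set(B)
    x, y, z = p

    def nbhd(j):
        return {(x + dx, y + dy, z + dz) for dx, dy, dz in _offsets(j)}

    black26 = nbhd(26) & B
    if not black26:                      # condition 1: p must not be isolated
        return False
    if not _connected26(black26):        # condition 2: black 26-neighbors connected
        return False
    white6 = nbhd(6) - B
    if not white6:                       # condition 3: p must be a border point
        return False
    # condition 4: white 6-neighbors 6-connected within the white 18-neighborhood
    white18 = nbhd(18) - B
    start = next(iter(white6))
    seen, queue = {start}, deque([start])
    offs6 = _offsets(6)
    while queue:
        cx, cy, cz = queue.popleft()
        for dx, dy, dz in offs6:
            q = (cx + dx, cy + dy, cz + dz)
            if q in white18 and q not in seen:
                seen.add(q)
                queue.append(q)
    return white6 <= seen
```

The end–point of a straight 3–voxel line is simple (its deletion only shortens the line), whereas the middle voxel is not, since deleting it disconnects the object.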
2 Skeletonization by Thinning
The notion of the skeleton was introduced by Blum [1] as a region–based shape descriptor which summarises the general form of objects/shapes. Thinning is a frequently used method for producing an approximation to the skeleton in a topology–preserving way [2]. Border points of a binary object that satisfy certain topological and geometric constraints are deleted in iteration steps. The entire process is repeated until only the “skeleton” is left. In the case of “near tubular” 3D objects (e.g., airway, blood vessel, and gastro–intestinal tract), thinning has a major advantage over other skeletonization methods, since curve thinning produces medial lines easily [5]. Most of the existing thinning algorithms are parallel, but some sequential thinning algorithms have been proposed [6,7], and there is a hybrid one (i.e., deletable points are marked in parallel, then a sequential re–checking phase is needed) [3]. This paper presents an effective sequential 3D thinning algorithm for extracting medial lines from elongated binary objects.
A Sequential 3D Thinning Algorithm and Its Medical Applications

3 The New 3D Thinning Algorithm
Let (ZZ³, 26, 6, B) be a 3D finite picture to be processed. Since set B is finite, it can be stored in a finite 3D binary array X (each voxel not belonging to B is regarded as 0). The pseudocode of the sequential 3D thinning algorithm is given as follows:

procedure THINNING(X, Y)
  Y = X;
  repeat
    modified = 0;
    modified = modified + SUBITER(Y, U);
    modified = modified + SUBITER(Y, D);
    modified = modified + SUBITER(Y, N);
    modified = modified + SUBITER(Y, S);
    modified = modified + SUBITER(Y, E);
    modified = modified + SUBITER(Y, W);
  until modified = 0;
function SUBITER(Y, direction)
  modified = 0;
  list = <new empty list>;
  for each point p in Y do
    if IS_BORDER_POINT(Y, direction, p) then
      N_p = COLLECT_26_NEIGHBORS(Y, p);
      if not IS_ENDPOINT(N_p) then
        if IS_SIMPLE(N_p) then
          INSERT_LIST(list, p);
  while not IS_EMPTY(list) do
    p = GET_FROM_LIST(list);
    N_p = COLLECT_26_NEIGHBORS(Y, p);
    if not IS_ENDPOINT(N_p) then
      if IS_SIMPLE(N_p) then
        SET_ZERO(Y, p);
        modified = modified + 1;
  return modified;
The two parameters of the procedure THINNING are the binary array X representing the picture to be thinned and the binary array Y storing the result. The kernel of the repeat cycle corresponds to one iteration step of the thinning process. Each iteration step is composed of six successive subiterations corresponding to the six kinds of border points. Some U–border points can be deleted in the first subiteration and certain W–border points are deleted in the sixth one. In this way, the elongated objects are shrunk uniformly in each direction. Function SUBITER returns the number of deleted points. Variable modified accumulates the number of deleted points. The thinning process is completed when no points are deleted (i.e., no further changes occur). The work of function SUBITER is composed of two phases. All the border points of a given type that are simple and non–end–points are inserted into a linked list called list in the first phase (see the for cycle). This phase (i.e., marking points for deletion) is followed by a sequential re–checking procedure (see the while cycle): each point in the list is deleted if it remains simple and a non–end–point in the actual (modified) image. Function SUBITER uses an additional auxiliary data structure: N_p is an array of 26 binary digits. Function COLLECT_26_NEIGHBORS returns such an array storing the 26–neighbors of an investigated point p in an image array Y, where N_p[i] corresponds to the neighbor marked “i” in Fig. 1/b (i = 0, . . ., 25). Since both simplicity and being an end–point are local properties, they can be decided in view of the array N_p. These properties are answered by functions IS_SIMPLE and IS_ENDPOINT, respectively. Function IS_ENDPOINT returns NO if N_p[0] + · · · + N_p[25] > 1. (Note that an isolated point is regarded as an end–point by this function.)
Function IS_SIMPLE checks the second and the fourth conditions of Theorem 1. The first and the third conditions of Theorem 1 are satisfied, since function IS_ENDPOINT returns YES if the investigated point p is isolated, and p is always a border point of the given type when function IS_SIMPLE is called. Function IS_COND_2_SATISFIED uses two auxiliary data structures. The first one is the array L of 26 integers, where L[i] stores a label assigned to the element represented by N_p[i] (i = 0, . . ., 25). The second one is the key to the labelling process: S26 is an array of 26 sets of indices, where S26[i] = { j | j ∈ N26(i) and 0 ≤ j < i } (i = 0, . . ., 25). For example: S26[0] = ∅, S26[1] = {0}, and S26[25] = {13, 15, 16, 21, 22, 24} (see Fig. 1/b). All the sets S26[0], . . ., S26[25] can be stored (for example) in explicit arrays. It is easy to see that the set of black 26–neighbors (stored in the array N_p) of a point p is 26–connected if and only if the same label is assigned to each black 26–neighbor of p. Note that function IS_COND_4_SATISFIED applies a similar labelling procedure. Let us see the remaining two important functions:

function IS_SIMPLE(N_p)
  if IS_COND_2_SATISFIED(N_p) then
    if IS_COND_4_SATISFIED(N_p) then
      return YES;
  return NO;
function IS_COND_2_SATISFIED(N_p)
  label = 0;
  for i = 0 to 25 do
    L[i] = 0;
  for i = 0 to 25 do
    if N_p[i] = 1 then
      label = label + 1;
      L[i] = label;
      for each j in S26[i] do
        if L[j] > 0 then
          for k = 0 to i − 1 do
            if L[k] = L[j] then
              L[k] = label;
  for i = 0 to 25 do
    if N_p[i] = 1 and L[i] ≠ label then
      return NO;
  return YES;
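The S26 sets used by this labelling can be precomputed once from the neighbor coordinates. A Python sketch (assuming a lexicographic ordering of the 26 offsets, which need not match the Fig. 1/b indexing, but any fixed ordering works the same way):

```python
import itertools

# The 26 neighbor offsets in a fixed (here: lexicographic) order -- an
# assumption for illustration; the paper uses the Fig. 1/b ordering.
OFFSETS = [d for d in itertools.product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]

def build_S26():
    """S26[i] = { j | j < i and offsets i and j are 26-adjacent }."""
    S26 = []
    for i, a in enumerate(OFFSETS):
        # Two offsets are 26-adjacent iff their Chebyshev distance is 1.
        adj = {j for j in range(i)
               if max(abs(a[k] - OFFSETS[j][k]) for k in range(3)) <= 1}
        S26.append(adj)
    return S26
```

With this ordering, S26[0] = ∅ and S26[1] = {0}, as in the examples given in the text.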
4 Applications
This section is devoted to the applications that have emerged for our sequential 3D thinning algorithm. Each of the following three applications requires the cross–sectional profiles of the investigated tubular organs. The proposed process is sketched as follows:
– image acquisition by Spiral Computed Tomography (S–CT),
– (semiautomatic snake–based) segmentation (i.e., determining a binary object from the gray–level picture),
– morphological filtering of the segmented object,
– curve thinning (by using our 3D thinning algorithm),
– raster–to–vector conversion,
– pruning the vector structure (i.e., removing the unwanted branches),
– smoothing the resulting central path,
– calculation of the cross–sectional profile orthogonal to the central path.
4.1 Assessment of Laryngotracheal Stenosis
Many conditions can lead to laryngotracheal stenosis (LTS), most frequently endotracheal intubation, followed by external trauma and prior airway surgery. Clinical management of these stenoses requires exact information about the number, grade, and length of the stenoses. We have developed a method for the assessment of LTS. The cross–sectional profiles (based on the central path) of the upper respiratory tract (URT) were calculated for 30 patients with LTS proven on fiberoptic endoscopy (FE). Locations of LTS were determined on axial S–CT slices and compared to the findings of FE by Cohen's kappa statistics. Regarding the site of LTS, an excellent correlation was found between FE and S–CT (z = 7.44, p < 0.005). Site, length, and degree of LTS could be depicted on the URT cross–sectional charts in all patients. URT cross–sectional profiles were presented as line charts. In order to establish anatomic cross–reference, three important anatomic landmarks (vocal cords, caudal border of the cricoid cartilage, and cranial border of the sternum) were marked on the line charts (see Fig. 2). For validation of this method, 13 phantom studies were performed. Phantom studies yielded an error of 1% for length measurements, and an excellent correlation was found between the theoretical cross–sectional profile of the phantoms and that obtained by our thinning algorithm (p < 0.005).
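Cohen's kappa, used above to compare FE and S–CT findings, measures chance–corrected agreement between two raters; a generic Python sketch (the clinical ratings themselves are not reproduced here):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' paired categorical labels."""
    assert len(a) == len(b) and a
    n = len(a)
    categories = sorted(set(a) | set(b))
    # Observed agreement and agreement expected by chance.
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_observed - p_expected) / (1.0 - p_expected)
```

Kappa is 1 for perfect agreement and 0 when the observed agreement equals the agreement expected by chance.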
Fig. 2. The segmented URT, its central paths, its cross–sectional profile at the three landmarks, and at the narrowest position (left) and the line chart (right)
4.2 Assessment of Infrarenal Aortic Aneurysms
We used the cross–sectional profile in patients suffering from infrarenal aortic aneurysms (AAA). AAA are abnormal dilatations of the main arterial abdominal vessel due to atherosclerosis. AAA can be found in 2% of people older than 60 years. If the diameter is more than 5 cm, then the person is at high risk for AAA rupture, which leads to death in 70–90% of cases. For therapy two main options exist: surgery or endoluminal repair with stent grafts. For optimal patient management, the “true diameter” in 3D, as well as the distance to the origin of the renal arteries (proximal aneurysm neck) and the extension to the iliac arteries (distal aneurysm neck), have to be known.
The same algorithm as for LTS was applied. Using an active contour model the abdominal aorta was segmented, followed by the thinning process and computation of the cross–sectional profile. Results were again presented as line charts. Figure 3 shows the segmented infrarenal aorta and its central path. Along the central path the cross–sectional profile was computed. The following parameters could be derived from this approach: the maximum diameter in 3D as well as the lengths of the proximal and distal necks of the aneurysm. Since the size of the aneurysm is regarded as a prognostic factor, the volume of the segmented aneurysm was determined too. At follow–up investigations the same parameters were derived.
Fig. 3. The segmented part of the blood vessel and its central path
Insertion of stent grafts could be planned easily using these charts. At follow–up investigations the volume of the infrarenal aneurysm declined in the regular case, whereas in the other cases leakage could be detected in a high proportion.

4.3 Unravelling the Colon
Unravelling the colon is a new method to visualize the entire inner surface of the colon without the need for navigation. It is a minimally invasive technique that can be used for the detection of colorectal polyps and cancer. In this section we present an algorithm for unravelling the colon, which digitally straightens and then flattens the colon using reconstructed spiral/helical computed tomography (CT) images. Compared to virtual colonoscopy, where polyps may be hidden from view behind the folds, the unravelled colon is more suitable for polyp detection, because the entire inner surface is displayed in one view. To test the algorithm we used a cadaveric phantom: a 50 cm long cadaveric colon. The colon was cleansed and 13 artificial polyps were created using fat tissue. After air insufflation the specimen was placed in a 5 l water bath containing 5 ml Gastrografin solution. The phantom was scanned using multirow detector CT with a collimation of 2.5 mm and a high quality pitch. Images were reconstructed with a slice thickness of 1.25 mm and an increment of 0.5 mm. Altogether 750 CT slices were reconstructed. The results were compared to the real dissection of the phantom.
After calculating the cross–sectional profile, the segmented colon is remapped (into a new grey–level 3D data volume) and displayed. Because of the tortuous structure of the colon, nearby cross sections may conflict and, as a result, polyps may be missed or counted multiple times. To avoid this we interpolate and recalculate the cross sections iteratively until the conflict is resolved. As a consequence, the internal and external colon surfaces are slightly stretched or compressed. The last step is to display the straightened and flattened colon using surface rendering. The simulated polyps can be recognized; they appear as bumps or as asymmetric broadenings of the colon folds (see Fig. 4).
Fig. 4. The segmented volume of the cadaveric phantom and its central path (left) and the unravelled colon (right)
Acknowledgment This work was supported by the CEEPUS A-34 and FKFP 0908/1997 Grants.
References
1. Blum, H.: A transformation for extracting new descriptors of shape. In: Models for the Perception of Speech and Visual Form, MIT Press (1967) 362–380
2. Kong, T.Y., Rosenfeld, A.: Digital topology: Introduction and survey. Computer Vision, Graphics, and Image Processing 48 (1989) 357–393
3. Lee, T., Kashyap, R.L., Chu, C.: Building skeleton models via 3–D medial surface/axis thinning algorithms. CVGIP: Graphical Models and Image Processing 56 (1994) 462–478
4. Malandain, G., Bertrand, G.: Fast characterization of 3D simple points. In: Proc. 11th IEEE International Conference on Pattern Recognition (1992) 232–235
5. Palágyi, K., Kuba, A.: A parallel 3D 12–subiteration thinning algorithm. Graphical Models and Image Processing 61 (1999) 199–221
6. Saha, P.K., Chaudhuri, B.B.: Detection of 3–D simple points for topology preserving transformations with application to thinning. IEEE Transactions on Pattern Analysis and Machine Intelligence 16 (1994) 1028–1032
7. Saito, T., Toriwaki, J.: A sequential thinning algorithm for three dimensional digital pictures using the Euclidean distance transformation. In: Proc. 9th Scandinavian Conf. on Image Analysis, SCIA'95 (1995) 507–516
An Adaptive Level Set Method for Medical Image Segmentation

Marc Droske¹, Bernhard Meyer², Martin Rumpf¹, and Carlo Schaller²

¹ Institut für Angewandte Mathematik
² Klinik für Neurochirurgie
Universität Bonn
Abstract. An efficient adaptive multigrid level set method for front propagation purposes in three dimensional medical image segmentation is presented. It is able to deal with non sharp segment boundaries. A flexible, interactive modulation of the front speed depending on various boundary and regularization criteria ensures this goal. Efficiency is due to a graded underlying mesh implicitly defined via error or feature indicators. A suitable saturation condition ensures an important regularity condition on the resulting adaptive grid. As a case study the segmentation of gliomas is considered. The clinician interactively selects a few parameters describing the speed function and a few seed points. The automatic process of front propagation then generates a family of segments corresponding to the evolution of the front in time, from which the clinician finally selects an appropriate segment covered by the glioma. Thus, the overall glioma segmentation turns into an efficient, nearly real time process with intuitive and usefully restricted user interaction.
1 Introduction
Front propagation methods based on an implicit representation of the evolving front have proved to lead to convincing results for basic segmentation purposes [2,7,6,9,10]. Unfortunately, they require considerable computing time to solve the underlying partial differential equation. Adaptive grid techniques [8] make it possible to overcome this drawback, usually at the cost of storing large hierarchical grid structures explicitly. We present an alternative approach requiring minimal additional data to be stored to describe an adaptive grid with nice regularity properties. This allows the efficient, nearly real time handling of large grids by an adaptive front propagation algorithm. Furthermore, flexible criteria for the segment boundary, depending on a class of concrete segmentation problems, can be coded into the propagation speed of the front. As an important case study we consider a segmentation problem in brain surgery. One of the major problems in the surgical treatment of intrinsic tumors of the brain is the precise determination of the resection zone. Low grade gliomas (WHO grades I and II) and anaplastic gliomas (WHO grade III) may be well visualized on specific MRI sequences, but intraoperative resection control can be ambiguous with marginal differences in consistency between tumor and the
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 416–422, 2001. © Springer-Verlag Berlin Heidelberg 2001
surrounding tissue – sometimes even within the tumor itself. The decision whether to resect in certain areas depends mainly on the intraoperative impression and the experience of the respective neurosurgeon. Refer to [11,12] for further details. These considerations indicate that user input is required in advance to fix a few parameters and to select seed points in the object. Finally, the user extracts an appropriate candidate from the resulting family of segments.
2 Level Set Based Segmentation
Our aim is the development of a robust and flexible segmentation method for images with non sharp segment boundaries. It is reasonable to expect the clinician's visual perception to start with some safe set A, which should surely be considered as being inside the final segment, and then to expand it towards the unknown boundary. The expansion should be fast where certain criteria for being inside the segment are surely fulfilled. More careful and slow expansion is done in areas where those criteria are only partially fulfilled or a quantitative criterion becomes less significant. Finally, the clinician has to decide where the criteria are too weak for a further expansion of the segment. This observation motivates our effective semi–automatic procedure:
– At first the clinician selects a starting set A, e.g. seed points.
– Depending on the class of the segment, the speed of propagation must be modeled based on several suitable criteria. We take into account e.g. the image intensity, the intensity gradient, curvature, and previous segmentations on the expected complement set.
– Based on these parameters, the efficient adaptive level set algorithm described here propagates the segment boundary outwards, starting with ∂A. The adaptive code allows an almost real time performance of this algorithm in the case that no curvatures have to be computed, and enables a flexible adjustment of the selected steering parameters.
– Finally the clinician interactively inspects the generated family of evolved segment sets S(A, σ)(t) and selects a proper time T and a corresponding final segmentation result S := S(A, σ)(T). Visualizing 3D image slices simultaneously with the 3D segment sets can be considered as a reference for the final decision.
Let us consider the mathematical modelling of the propagation of the boundary ∂S0 of the initial set S0 := A in direction of the outer normal N with positive speed F, i.e.
we ask for evolution curves in 2D or surfaces in 3D ∂S(t) – bounding our expanding segments S(t) – with parametrization x(t) such that ∂x/∂t = F(x)N(x). In case of a velocity F which is guaranteed to stay positive during the evolution, this problem can be reformulated in terms of the field of arrival times of the front (cf. [10]). Thus, we ask for a function T : Ω → R₀⁺; x ↦ T(x) such that the generalized eikonal equation

‖∇T‖₂ F = 1 with T|S0 = 0    (1)
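For intuition, a one–dimensional illustration (not from the paper): with constant speed F and seed set S0 = {0}, the viscosity solution of |T′(x)| F = 1, T(0) = 0, is T(x) = |x|/F, and thresholding T reproduces the nested segments S(t):

```python
import numpy as np

def arrival_times(x, F):
    """T(x) = |x| / F for constant speed F and seed S0 = {0} (1-D example)."""
    return np.abs(x) / F

def segment(T, t):
    """S(t) = {x : T(x) <= t} as a boolean mask over the sample points."""
    return T <= t
```

The segments are nested: every sample point of S(1.0) also belongs to S(2.0), which is the monotone growth the positive-speed assumption guarantees.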
Marc Droske et al.
holds. Hence, as the corresponding segment at any time t we obtain S(t) := {x ∈ Ω | T(x) ≤ t}. There are in general no global classical solutions. Therefore we consider generalized viscosity solutions [3] and their numerical approximation in the next section. This solution concept allows, for instance, topology changes of the evolving sets, which is an especially important property for our application. We suppose the speed function F depends on local image properties and the shape of the local front. Our model measures homogeneity in terms of the speed. Hence, in areas where the segment seems to stop, the propagation speed should decrease drastically.
Fig. 1. The different gray value and gradient dependent speed functions.

Let us now list some possible choices for F:
– For a gray value interval [v−, v+] we consider a convolution of the corresponding characteristic function χ[v−,v+] with some Gaussian kernel Gσ of width σ, i.e. we choose FI := Gσ ∗ χ[v−,v+]. In the application, the bounds v− and v+ are determined by clicking on some characteristic points corresponding to gray intensities.
– We replace a simple threshold for the gradient magnitude by a function that decreases continuously for high gradient magnitudes, i.e. we choose F∇ := exp(−α‖∇Gσ ∗ I‖₂) or F∇ := (1 + λ⁻² ‖∇Gσ ∗ I‖₂²)⁻¹, where the parameters α, λ strengthen or weaken the built–in edge indicator. These seldom have to be changed and can be given experience–based values.
– In the evolution of interfaces under mean curvature [9] the speed function F = −H is used, where H denotes the mean curvature. We incorporate this term into our speed function f̃ := max(f − ε max(H, 0), 0) for sufficiently small ε. This results in a deceleration of the evolution in regions where the curvature of the interface is positive and large, preventing growth into other regions which are reachable only via small and narrow passages.
– In more complex applications it is often appropriate to combine the latter indicators, e.g. F1 · F2 or min(F1, F2).
– We can modulate a given speed function to nearly zero in regions of already extracted segments which are known not to intersect the segment under consideration. This turned out to be a good auxiliary tool in some difficult cases.
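The first two kinds of speed function can be sketched in one dimension (an illustrative Python sketch with assumed names `lo`, `hi`, `alpha` for the interval bounds and edge parameter; the paper applies the same constructions to 2D/3D images and couples them with the curvature term):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Discrete 1-D Gaussian of width sigma, truncated at 3*sigma, normalized."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def speed_interval(I, lo, hi, sigma):
    """F_I = G_sigma * chi_[lo,hi]: smoothed indicator of a gray-value interval."""
    chi = ((I >= lo) & (I <= hi)).astype(float)
    return np.convolve(chi, gaussian_kernel(sigma), mode='same')

def speed_gradient(I, alpha, h=1.0):
    """Edge-stopping speed exp(-alpha * |I'|), here without presmoothing."""
    g = np.abs(np.gradient(I, h))
    return np.exp(-alpha * g)
```

Both functions map into [0, 1]: the front moves at full speed where the criterion is surely fulfilled and slows to nearly zero near edges or outside the selected intensity range.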
In the application, flexibility in the selection of criteria and in the choice of parameters is the key to a fast and successful segmentation.
3 An Adaptive Algorithm Based on Hexahedral and Quadrilateral Multilevel Grids
Fig. 2. The adaptive grid grows along with the computation of new nodes. The color indicates the arrival times of propagation.
One of the main contributions of this paper is the computational speedup of the fast marching method [10] by using an adaptively generated grid. The grid is implicitly described by error indicator values η on elements. Given a threshold value ε, we locally stay on fine grid cells or on much coarser elements. As grids we consider quadrilateral meshes in 2D and hexahedral meshes in 3D. Our finest grid level corresponds to the pixels or voxels of the original image. On top of this finest grid we build a hierarchical grid, i.e. a quadtree or an octree, respectively. Instead of a process solely on the finest grid level which successively visits all fine grid cells inside the segment, our aim is to compute the front propagation on coarse elements in the hierarchy of nested grids whenever possible. Let us denote by M the family of nested grids, each consisting of elements or cells E and nodes N(M) of the grid in quad– or octree representation. Furthermore, let us suppose that some error function η∗ : Ω → R is given, depending on the image, from which we want to derive an elementwise indicator η : M → R. It is used in combination with a threshold, i.e. it tells us to refine an element if η(E) > ε. We demand the following properties of the indicator:

η(E) ≤ η(P(E)) for all E ∈ M    (2)
η(Ẽ) ≤ η(P(E)) for all Ẽ ∈ adj(E)    (3)

Here P(E), C(E) and adj(E) denote the unique parent element, the set of children and the neighbors of E, respectively. Later we will use adj(N), where N ∈ N(M), as the set of all regular nodes connected to N by an edge. Observe that
the inequality (3) ensures the one–level transitions between grid cells, whereas the saturation condition (2) guarantees that the error indicator on coarse cells indicates details on much finer cells. We choose η as the smallest grid indicator M → R satisfying (2), (3), initialized on the finest grid:

for l = lmax − 1 to 0 step −1 do
  for each element E of Ml do
    A := C(E) ∪ adj(C(E))
    η(E) := max(η(E), max{η(Ẽ) | Ẽ ∈ A})
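In 2D this saturation amounts to a bottom–up max–pooling over the quadtree, where each coarse value also absorbs the neighbors of its children (a Python sketch assuming a square power–of–two finest grid; `saturate` is an illustrative name, not the authors' code):

```python
import numpy as np

def saturate(eta_fine):
    """Return the indicator on all levels, finest first.

    Each coarse value is the maximum of eta over its children and the
    children's neighbors, so conditions (2) and (3) hold by construction.
    """
    levels = [np.asarray(eta_fine, dtype=float)]
    while levels[-1].shape[0] > 1:
        f = levels[-1]
        # Dilate by one cell so that neighbors of children are included (3).
        pad = np.pad(f, 1, mode='edge')
        dil = np.maximum.reduce([pad[i:i + f.shape[0], j:j + f.shape[1]]
                                 for i in range(3) for j in range(3)])
        # Maximum over each 2x2 block of children (2).
        n = f.shape[0] // 2
        coarse = dil[:2 * n, :2 * n].reshape(n, 2, n, 2).max(axis=(1, 3))
        levels.append(coarse)
    return levels
```

A single large fine-grid value propagates to its parent block and to the coarse neighbors touching its children, but distant coarse cells keep a small indicator and can stay unrefined.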
As a simple choice for the error indicator function we take the gradient of the image intensity, η∗(x) = ‖∇I(x)‖₂. Concerning the actual front propagation algorithm, we consider a modification of the fast marching method presented in [1]. We denote by Tij nodal values approximating the true propagation time T at a grid node xij, and by Fij the speed of propagation. Such a node xij appears on some grid level l for the first time. Initially we suppose all T values on the nodes, except those on the seed points, to be set to ∞. Given all Fij > 0, let us now review the following 2D upwind scheme [10] – the 3D algorithm is formulated entirely analogously – for the eikonal equation:

max(Di−½,j T, −Di+½,j T, 0)² + max(Di,j−½ T, −Di,j+½ T, 0)² = Fij⁻²,    (4)

where Di−½,j T := h⁻¹(Tij − Ti−1,j) etc., and where h denotes the local grid size. As described in detail in [10], the eikonal equation can be solved in a single expanding traversal of the grid nodes, using for each node only upwind values. Once all the arrival times T(xl) at the nodes xl ∈ N(El) are known for a given element El ∈ Ml, all other values can be computed by bi– or trilinear interpolation. Denote by K the set of known nodes of M, i.e. the set of already computed nodes on the grid, by T the set of trial nodes of M along the boundary of the area of computed values, and by D the set of downwind side nodes of M, i.e. nodes with unknown arrival time values. Once the node N with minimal time is extracted from T and made active, all neighboring nodes with respect to the adaptive grid have to be found on the fly. Their values are updated if they are in T by solving the corresponding quadratic equation, using as many contributing known values as possible. Here we exploit the fact that our saturation generates only one–level transitions between neighboring cells. We have to make sure that no hanging nodes are added to T, because those are reconstructed by interpolation. We have constructed an algorithm which generates a fully computed grid in the inside of the segment by only local operations. Now we can formulate the

Algorithm: Adaptive Fast Marching Method
while (T ≠ ∅)
  take N with minimal time out of T
  T = T \ {N} and K = K ∪ {N}
  for all Ñ ∈ adj(N) \ K do
    if Ñ is no hanging node and Ñ ∈ D
      compute the time value of Ñ according to (4)
      if Ñ is on a face/edge with a hanging node and all time values on this face/edge are known
        interpolate the hanging node
      T = T ∪ {Ñ}
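On a uniform grid (i.e., without the adaptivity, hanging nodes, and one–level transitions), the expanding traversal and the upwind update (4) can be sketched in Python as follows; `fast_march` and its helper are illustrative names, not the authors' code:

```python
import heapq
import math
import numpy as np

def fast_march(F, seeds, h=1.0):
    """Fast marching on a uniform 2-D grid: arrival times T with T = 0 at seeds."""
    ny, nx = F.shape
    T = np.full((ny, nx), math.inf)
    known = np.zeros((ny, nx), dtype=bool)
    heap = []
    for s in seeds:
        T[s] = 0.0
        heapq.heappush(heap, (0.0, s))

    def solve(i, j):
        # Smaller upwind value per axis; math.inf where no computed neighbor.
        a = min(T[i - 1, j] if i > 0 else math.inf,
                T[i + 1, j] if i < ny - 1 else math.inf)
        b = min(T[i, j - 1] if j > 0 else math.inf,
                T[i, j + 1] if j < nx - 1 else math.inf)
        a, b = sorted((a, b))
        rhs = h / F[i, j]
        if b - a >= rhs:            # only the smaller direction contributes
            return a + rhs
        # both directions contribute: (T - a)^2 + (T - b)^2 = (h / F)^2
        return 0.5 * (a + b + math.sqrt(2 * rhs ** 2 - (b - a) ** 2))

    while heap:
        t, (i, j) = heapq.heappop(heap)
        if known[i, j]:
            continue                # stale heap entry
        known[i, j] = True
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not known[ni, nj]:
                t_new = solve(ni, nj)
                if t_new < T[ni, nj]:
                    T[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, (ni, nj)))
    return T
```

With unit speed and a single seed, the arrival time along a grid axis equals the number of steps, while diagonal arrival times approximate the Euclidean distance.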
Fig. 3. To test our adaptive front propagation segmentation method, we have compared its semi–automatic segmentation mode in type 5a and 5b insular gliomas (first and third from left) with the slice–by–slice demarcation method (second and fourth from left) as performed by experienced neurosurgeons. As can be seen, the segmentation results are very close to manual evaluation by perception, even in the extremely ambiguous areas at the border of the tumor, where only marginal differences in image intensities are crucial. Due to adaptivity we have a finely resolved solution along the boundaries of the object. The last image shows the finally rendered isosurface.
4 Conclusions
We have presented a multilevel front propagation algorithm for segmentation purposes on medical images. It is based on the nowadays widespread level set techniques and allows the robust and flexible segmentation of regions with non sharp boundaries with only very limited and intuitive user interaction. The peculiarities of the presented method are the variety of criteria which can be used to flexibly model the speed of propagation, especially including curvature terms which avoid fingering artifacts on the front, and the underlying adaptive grid concepts responsible for the nearly real time performance of the algorithm. Thereby, our adaptive method handles grids solely procedurally, without storing graphs for the underlying hierarchical grids. A saturation condition ensures sufficient regularity of the grid. Some future research directions are
– the investigation of different local filters which lead to additional indicators for segment boundaries,
– the collection of a library of speed functions well suited for the segmentation of different types of tumors and other tissue types, – and the improvement of the currently experimental user interface.
References
1. D. Adalsteinsson, R. Kimmel, R. Malladi, and J. A. Sethian. Fast marching methods for computing the solutions to static Hamilton–Jacobi equations. CPAM Report 667, University of Berkeley, 1996.
2. V. Caselles, F. Catté, T. Coll, and F. Dibos. A geometric model for active contours in image processing. Numer. Math., 66, 1993.
3. M. G. Crandall and P. L. Lions. Viscosity solutions of Hamilton–Jacobi equations. Trans. AMS, 277, pages 1–43, 1983.
4. M. Droske, T. Preußer, and M. Rumpf. A multilevel segmentation method. In Proc. Vision, Modeling and Visualization, MPI Informatik, Saarbrücken, Germany, 2000, pages 327–336.
5. H. Duffau, L. Capelle, M. Lopes, T. Faillot, J. P. Sichez, and D. Fohanno. The insular lobe: Physiopathological and surgical considerations. Neurosurgery 47, pages 801–811, 2000.
6. R. Malladi and J. A. Sethian. Level set methods for curvature flow, image enhancement and shape recovery in medical images. In Proc. of Conf. on Visualization and Mathematics, June 1995, Berlin, Germany. Springer-Verlag, Heidelberg, Germany, 1997.
7. R. Malladi, J. A. Sethian, and B. C. Vemuri. Shape modelling with front propagation. IEEE Trans. Pattern Anal. Machine Intell., 17, 1995.
8. B. Milne. Adaptive Level Set Methods Interfaces. PhD thesis, Department of Mathematics, University of California, Berkeley, CA, 1995.
9. S. Osher and J. A. Sethian. Fronts propagating with curvature–dependent speed: Algorithms based on Hamilton–Jacobi formulations. J. Comput. Phys., Vol. 79, pages 12–49, 1988.
10. J. A. Sethian. Level Set Methods and Fast Marching Methods. Cambridge University Press, 1999.
11. M. G. Yasargil, K. von Ammon, E. Cavazos, T. Doczi, J. D. Reeves, and P. Roth. Tumours of the limbic and paralimbic systems. Acta Neurochir 118, pages 40–52, 1992.
12. J. Zentner, B. Meyer, A. Stangl, and J. Schramm. Intrinsic tumors of the insula: A prospective surgical study of 30 patients. Neurosurgery 85, pages 263–271, 1996.
Partial Volume Segmentation of Cerebral MRI Scans with Mixture Model Clustering

Aljaž Noe¹ and James C. Gee²

¹ Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, SI-1000 Ljubljana, Slovenija
[email protected]
² Department of Radiology, University of Pennsylvania, 1 Silverstein, 3400 Spruce Street, Philadelphia, PA 19104, USA
[email protected] Abstract. A mixture model clustering algorithm is presented for robust MRI brain image segmentation in the presence of partial volume averaging. The method uses additional classes to represent partial volume voxels of mixed tissue type in the data with their probability distributions modeled accordingly. The image model also allows for tissue-dependent variance values and voxel neighborhood information is taken into account in the clustering formulation. The final result is the estimated fractional amount of each tissue type present within a voxel in addition to the label assigned to the voxel. A multi-threaded implementation of the method is evaluated using both synthetic and real MRI data.
1 Introduction
Unsupervised image segmentation is a fundamental task in many applications of medical image analysis. Its object is to associate with each image voxel a particular class based on the voxel's attributes, neighborhood information, or the geometric characteristics of objects belonging to the class. This classification is then used by higher-level image analysis and processing algorithms; accurate and robust image segmentation is therefore a key element of many medical imaging applications. In this work, we consider the problem of segmenting magnetic resonance (MR) images, which is made difficult by the existence of partial volume (PV) averaging due to the limited spatial resolution of the scanner. MR images are also subject to intensity shading artifacts caused by RF field inhomogeneity. To improve the quantitative precision of our segmentation, we focus on the first factor and develop a method for determining the fractional content of each tissue class for so-called partial volume voxels of mixed tissue type. Of specific interest in the current work are the primary tissue constituents of the brain: gray matter (GM) and white matter (WM), as well as cerebrospinal fluid (CSF).

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 423–430, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Two approaches have been commonly applied to address the problem of partial volume segmentation. In the first, the mixel model [1,2], every voxel in an image is assumed to be a PV voxel, consisting of a mixture of pure tissue classes. The object of segmentation in this case is to determine the relative fraction of
each tissue class present within every image voxel. Because of the number of parameters that must be estimated at each voxel, multi-channel data and/or additional constraints are required to obtain the segmentation solution. A second approach [3,4] has been to marginalize over the variables describing the fractional portions of each pure tissue class. This produces an additional, new set of partial volume classes, with which each image voxel may be associated. In this way, PV voxels may be separately identified using existing “binary” segmentation algorithms. In the current work, this method is used to adapt the maximum likelihood mixture model clustering algorithm [5,6,7] for segmentation of PV voxels in MR images of the brain.
2 Image Model
We generalize the image model proposed in [3,4] to account for tissue-dependent intensity variation. Experiments on MRI data show that differences in intensity variation across tissue types are not insignificant: CSF voxels consistently exhibit the largest variability, followed by GM and then WM. Let $I_i = (I_{i,1}, I_{i,2}, \ldots, I_{i,M})^T$ be the $M$-channel observation of the $i$-th voxel in an input image. Voxels of pure tissue class are described by a particular intensity distribution associated with the image appearance of that tissue type. Partial volume voxels, on the other hand, are represented as a linear combination of the intensity distributions associated with the $K$ possible tissue types that can be found in those voxels:

$$I_i = \sum_{k=1}^{K} t_{i,k}\, \mathbf{N}(\mu_k, \Sigma_k), \qquad \sum_{k=1}^{K} t_{i,k} = 1, \tag{1}$$
where the voxel intensity $I$ for pure tissue class $k$ is represented as an $M$-element column vector of random variables distributed according to the multivariate Gaussian distribution $\mathbf{N}$, with $\mu_k = (\mu_{k,1}, \mu_{k,2}, \ldots, \mu_{k,M})^T$ the vector of mean intensity values ($M$ channels) for pure tissue class $k$, and $\Sigma_k$ the associated $M \times M$ covariance matrix for the $M$-channel observation. The term $t_{i,k}$ represents the fraction of pure tissue class $k$ present at the $i$-th voxel. Note that $\mu_k$ and $\Sigma_k$ do not change with location $i$; that is, we assume that shading artifacts in the MRI data are first removed in a preprocessing step.

To determine the fractional amount of specified pure tissue classes within every image voxel, (1) is solved for $t_{i,k}$. Assuming that the class parameters ($\mu_k$ and $\Sigma_k$) are known, a solution can be found if $M \geq (K-1)$, as shown in [1]. In practice, we are interested in three classes: CSF, GM, and WM. Multi-echo images of high resolution are generally not available, and even these would be partially correlated and noisy, so the problem remains ill posed. Additional constraints are therefore necessary, and as in [3,4] we make the assumption that each partial volume voxel is a mixture of only two tissue types. We define sets $G_k = \{k_1, k_2\}$ containing the indices of the pure classes that are present in the $k$-th PV class. There are $K_{PV}$ PV classes in an image.
For voxels of pure tissue class $k$ and PV voxels consisting of pure classes $k_1$ and $k_2$, respectively, (1) reduces to:

$$I_i = \mathbf{N}(\mu_k, \Sigma_k) \tag{2}$$

and

$$I_i = t_{i,k_1}\, \mathbf{N}(\mu_{k_1}, \Sigma_{k_1}) + t_{i,k_2}\, \mathbf{N}(\mu_{k_2}, \Sigma_{k_2}), \qquad t_{i,k_1} + t_{i,k_2} = 1. \tag{3}$$
To determine the parameters (µk , Σk ) for the pure tissue classes, an extended version of the maximum likelihood mixture model algorithm [5,6,7] is developed below.
3 Mixture Model Clustering

3.1 Probability Density Functions
For brevity, we develop here just the likelihood model for PV voxels containing a mixture of pure tissue classes $k_1$ and $k_2$; see (3):

$$P_{PV}(I_i \mid k_1, k_2, t) = \frac{\exp\left(-\tfrac{1}{2}\,(I_i - \hat{\mu}_k(t))^T\, \hat{\Sigma}_k(t)^{-1}\, (I_i - \hat{\mu}_k(t))\right)}{\sqrt{(2\pi)^M\, |\hat{\Sigma}_k(t)|}}, \tag{4}$$

$$\hat{\mu}_k(t) = t\,\mu_{k_1} + (1-t)\,\mu_{k_2}, \qquad \hat{\Sigma}_k(t) = t^2\,\Sigma_{k_1} + (1-t)^2\,\Sigma_{k_2}.$$

As in [3,4], we marginalize the density in (4) over $t$ to obtain the likelihood for PV classes. To generalize the notation, we number the PV classes from $K+1$ to $K+K_{PV}$, so that $P(I_i \mid k)$ expresses the conditional density for both pure tissue and PV classes. The integral in (5) does not have a closed-form solution and must therefore be evaluated by numerical integration:

$$P(I_i \mid k + K) = \int_0^1 P_{PV}(I_i \mid k_1, k_2, t)\, dt, \qquad k_1, k_2 \in G_k, \quad k = 1 \ldots K_{PV}. \tag{5}$$
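Since (5) has no closed form, a midpoint-rule quadrature over $t$ is one simple way to evaluate it. The sketch below does this for the single-channel case; the helper name is hypothetical:

```python
import numpy as np

def pv_likelihood(I, mu1, var1, mu2, var2, n_steps=200):
    # Marginal PV likelihood (5) for a single channel (M = 1): integrate
    # the Gaussian density (4) over t in [0, 1] with the midpoint rule.
    t = (np.arange(n_steps) + 0.5) / n_steps
    mu_t = t * mu1 + (1 - t) * mu2             # mu_hat(t)
    var_t = t**2 * var1 + (1 - t)**2 * var2    # Sigma_hat(t)
    dens = np.exp(-0.5 * (I - mu_t)**2 / var_t) / np.sqrt(2 * np.pi * var_t)
    return dens.mean()                         # = integral, since dt = 1/n_steps
```

Because each $P_{PV}(\,\cdot \mid t)$ is a density in $I$, their average over $t$ is again a density in $I$, which gives a quick sanity check on the quadrature.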
3.2 Weighting Functions
In [5,6] the probability density function (PDF) for class $k$ is weighted by the current estimate of the voxel count for that class. This weighting is used to update the probabilities in a manner similar to that of a Bayesian prior. Here we introduce an alternative weighting function that favors segmentations which are spatially extended. Specifically, we use the familiar Potts model that is also applied in [4]:

$$P_i(k) = \frac{1}{Z} \exp\left(-\beta \cdot \sum_{j \in N_i} \frac{\delta(k, k_j)}{d(i,j)}\right), \qquad k_j = \arg\max_{k'}\, P(I_j \mid k'), \qquad k = 1 \ldots K + K_{PV}, \tag{6}$$

where $\delta(k_1, k_2)$ provides the likelihood of different classes being neighbors, as in [4]; $k$ is the class for which the prior probability is being calculated; $N_i$ is the
426
Aljaˇz Noe and James C. Gee
set of D18 neighborhood voxels of voxel $i$; $\beta$ is a parameter of the distribution, controlling the amount of influence the weighting function exerts on the likelihood function; and $Z$ is a normalizing constant. The function $d(i,j)$ represents the distance between voxels $i$ and $j$, which limits the influence of distant neighborhood voxels.
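A minimal 2-D illustration of the weighting idea in (6), assuming a 4-neighborhood, unit distances $d(i,j) = 1$, and a simple 0/1 choice for $\delta$ (the paper uses the 3-D D18 neighborhood and the $\delta$ of [4]; all names here are illustrative):

```python
import numpy as np

def potts_weights(hard_labels, n_classes, beta=0.3):
    # Spatial weighting in the spirit of (6) on a 2-D label image.
    # delta is a hypothetical 0/1 compatibility: 0 when the neighbor's
    # hard label k_j agrees with class k, 1 otherwise.
    H, W = hard_labels.shape
    logw = np.zeros((n_classes, H, W))
    for k in range(n_classes):
        penalty = np.zeros((H, W))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = np.roll(hard_labels, (dy, dx), axis=(0, 1))  # wraps at edges
            penalty += (nb != k)                              # delta(k, k_j)
        logw[k] = -beta * penalty
    w = np.exp(logw)
    return w / w.sum(axis=0)   # normalization plays the role of 1/Z
```

Inside a homogeneous region the weight of the locally dominant class approaches its maximum, while competing classes are exponentially suppressed, which is exactly the spatial-contiguity bias the weighting is meant to supply.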
3.3 Parameter Estimation
Given the probability density and weighting functions, the conditional probability $P(k \mid I_i)$ is calculated, from which an estimate of the parameters $\mu_k$ and $\Sigma_k$ for each pure tissue class $k$ can then be determined as follows:

$$P(k \mid I_i) = \frac{P_i(k)\, P(I_i \mid k)}{\sum_{k'=1}^{K+K_{PV}} P_i(k')\, P(I_i \mid k')}, \qquad k = 1 \ldots K + K_{PV}; \tag{7}$$

$$\mu_k = \frac{\sum_{i=1}^{N} P(k \mid I_i)\, I_i}{h_k}; \qquad \Sigma_k = \frac{\sum_{i=1}^{N} P(k \mid I_i)\, I_i\, I_i^T}{h_k} - \mu_k\, \mu_k^T; \qquad h_k = \sum_{i=1}^{N} P(k \mid I_i), \qquad k = 1 \ldots K. \tag{8}$$

These parameter estimates then yield new PDFs, and the process is repeated until the voxel count in each pure tissue class does not change from one iteration to the next.
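The moment updates in (8) reduce to posterior-weighted means and variances. A single-channel sketch (helper name and array layout are illustrative, not from the paper):

```python
import numpy as np

def update_parameters(I, post):
    # One pass of (8) for single-channel data.
    # I: (N,) voxel intensities; post: (K, N) posteriors P(k | I_i) from (7).
    h = post.sum(axis=1)                          # h_k: soft voxel counts
    mu = (post * I).sum(axis=1) / h               # posterior-weighted means
    var = (post * I**2).sum(axis=1) / h - mu**2   # second moment minus mu^2
    return mu, var, h
```

With hard (0/1) posteriors this collapses to the ordinary per-class sample mean and variance, which is a useful special case for checking the update.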
3.4 Initialization
Based on extensive experimentation on real and simulated MR images, we have found that the clustering algorithm can be made robust to initialization values by specifying a sufficiently large class variance. Therefore, without additional prior information available, initial mean intensity values are equally distributed between the minimum and maximum intensity values found in the image. Diagonal elements of the covariance matrix are all set to the image intensity range divided by the number of pure classes, whereas off-diagonal elements are set to zero.
4 Partial Volume Tissue Classification
The clustering algorithm determines $\mu_k$ and $\Sigma_k$ by iterating over the estimation of $P(k \mid I_i)$ until convergence is achieved. Once the intensity distribution and all class parameters are known for each tissue type, the fractional portion $t_{i,k_1}$ for a PV voxel at location $i$ consisting of tissues $k_1$ and $k_2$ can then be obtained by solving (3) for $t$ using maximum likelihood estimation (MLE). To allow segmentation without the need to specify a threshold for distinguishing between partial volume and pure tissue voxels, we require certain information about the pure tissue classes to be included:

$$t^*_{i,k} = P(k \mid I_i) + \sum_{k'} P(k' + K \mid I_i)\, \frac{(\mu_k - \mu_{k_2})^T (I_i - \mu_{k_2})}{(\mu_k - \mu_{k_2})^T (\mu_k - \mu_{k_2})}, \tag{9}$$
where the summation is over all PV classes $k'$ that contain pure class $k$ (i.e., for which $k \in G_{k'}$) and $k_2 \in G_{k'}$, $k_2 \neq k$. We must also normalize the fractional portions of the pure classes so that they sum to unity over all classes $k$.
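A sketch of (9) and the final normalization for a single voxel and a single channel; argument names and data layout are illustrative, not from the paper:

```python
import numpy as np

def fractional_content(I, post_pure, post_pv, mu, pv_pairs):
    # Sketch of (9) for one single-channel voxel.
    # post_pure[k]: P(k | I); post_pv[kp]: posterior of the kp-th PV class;
    # mu[k]: pure-class means; pv_pairs[kp] = (k1, k2), the set G_kp.
    t = np.array(post_pure, dtype=float)
    for kp, (k1, k2) in enumerate(pv_pairs):
        for k, other in ((k1, k2), (k2, k1)):
            d = mu[k] - mu[other]
            # projection of I onto the segment between the two class means
            t[k] += post_pv[kp] * (d * (I - mu[other])) / (d * d)
    return t / t.sum()   # normalize the fractions to sum to one
```

A voxel assigned entirely to a PV class and lying midway between the two class means receives equal fractions of both pure tissues, while a confidently pure voxel keeps a fraction of one.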
5 Implementation and Experimental Results
Two preprocessing steps must be performed prior to segmentation. First, we extract the brain parenchyma from the MR image of the head using the Brain Extraction Tool; details of the method can be found in [8]. Intensity shading artifacts in the extracted image are then removed with the MNI-N3 method [9].

A multi-threaded version of the clustering algorithm was implemented by subdividing the image into a number of segments, which are then processed in separate threads, one for each available processor. All threads are synchronized at three points: before and after the calculation of the weighting values, and before the estimation of the new class parameters. The algorithm is outlined below:

1. Initialization: set $K$, $K_{PV}$, $G_k$, and the initial estimates of the class parameters ($\mu_k$, $\Sigma_k$).
2. Calculate the PDFs for all classes using multivariate Gaussians and (5), in multiple threads. Wait until all threads complete their processing before proceeding.
3. Calculate the weighting values in multiple threads using (6). Wait until all threads complete their processing before proceeding.
4. Calculate the updated probabilities using (7) for each class $k$ and the new estimates for the class parameters using (8). Wait until all threads complete their processing before proceeding.
5. Terminate the loop when the change in $\sum_{k=1}^{K} h_k$ between iterations is less than 1, or when the number of iterations reaches 50; otherwise return to step 2.
6. Determine the fractional amount of each tissue type within every image voxel using (9).
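The thread partitioning in steps 2–4 can be illustrated with a thread pool: each segment of the voxel array is processed independently, and gathering all results before moving on acts as the synchronization barrier. The sketch below (hypothetical helper names, single channel) shows only the PDF evaluation of step 2:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def class_pdfs(chunk, mu, var):
    # Step 2 for one image segment: Gaussian PDF of every voxel in the
    # chunk under each pure class (single channel).
    d = chunk[None, :] - mu[:, None]
    return np.exp(-0.5 * d**2 / var[:, None]) / np.sqrt(2 * np.pi * var[:, None])

def class_pdfs_threaded(I, mu, var, n_threads=4):
    # Subdivide the voxel array into segments and evaluate the PDFs in
    # parallel; collecting all results before returning is the
    # synchronization point between algorithm steps.
    chunks = np.array_split(I, n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as ex:
        parts = list(ex.map(lambda c: class_pdfs(c, mu, var), chunks))
    return np.concatenate(parts, axis=1)
```

In Python, threads only pay off here because NumPy releases the interpreter lock inside its vectorized kernels; the paper's native implementation parallelizes the same partitioning directly across processors.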
The segmentation algorithm was evaluated using both synthetic and real data. In each of the reported experiments, $\beta$ was set to 0.3, and the algorithm usually converged after 10–20 iterations.

5.1 Synthetic Image
We constructed a square, 100 by 100, image and subdivided it into three vertically separated regions. The regions to the far left and right were considered pure "tissues," and their image values were drawn from normal distributions with the following mean and variance values, respectively: $\mu_1 = 70$, $\Sigma_1 = 100$ and $\mu_2 = 150$, $\Sigma_2 = 400$. The middle strip of the image, 30 pixels wide, contained partial volume pixels, which modeled a smooth linear transition between the two pure classes. The synthetic image and its segmentation are shown in Fig. 1. The estimated means and variances for the tissue classes were: $\mu_1 = 70.35$, $\Sigma_1 = 101$; $\mu_2 = 148.34$, $\Sigma_2 = 369.41$. Fig. 1 also shows the squared error between the ideal and estimated $t$ values for the first class; the total error was $E_1 = 26.65$, where $E_{i,k} = (t_{i,k} - t^{ideal}_{i,k})^2$ and $E_k = \sum_{i=1}^{N} E_{i,k}$. We can see that the errors occur only at the boundaries where the region with PV voxels meets the regions containing pure classes. We attribute this error largely to noise, because it decreases when we reduce the noise variance of the pure classes. This also explains the smaller error in the segmentation of the left half of the image, where the noise variance of the first pure class was smaller.
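The test image of this section can be reproduced directly from (3), since the PV strip is a linear ramp in $t$. The sketch below also computes the squared-error measure quoted for Fig. 1; function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_synthetic(size=100, strip=30, mu=(70.0, 150.0), var=(100.0, 400.0)):
    # The Sec. 5.1 test image: pure class 1 on the left, pure class 2 on
    # the right, and a 30-pixel strip of PV pixels with a linear
    # transition in between. Returns the image and the ideal t map.
    t = np.ones(size)
    x0 = (size - strip) // 2
    t[x0:x0 + strip] = np.linspace(1.0, 0.0, strip)
    t[x0 + strip:] = 0.0
    T = np.tile(t, (size, 1))                      # fraction of class 1
    mean = T * mu[0] + (1 - T) * mu[1]
    std = np.sqrt(T**2 * var[0] + (1 - T)**2 * var[1])
    return mean + std * rng.standard_normal((size, size)), T

def total_error(t_est, t_ideal):
    # E_k = sum_i (t_{i,k} - t^{ideal}_{i,k})^2, the measure quoted for Fig. 1
    return ((t_est - t_ideal)**2).sum()

img, t_ideal = make_synthetic()
```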
Fig. 1. Synthetic data with segmentation results. (Left) Image to be segmented. (Center) Fractional values $t$ for the first class at each voxel, plotted as an 8-bit gray-scale image with intensity 0 corresponding to $t = 0.0$ and intensity 255 to $t = 1.0$. (Right) Pointwise squared error between the estimated and ideal $t$ values for the first class.

5.2 Simulated T1-Weighted Brain Volume
A second, more realistic synthetic dataset of an MRI head scan was created using the BrainWeb simulator [10,11,12]. Each simulation was a 1 mm³ isotropic MRI volume with dimensions 181 × 217 × 181. Three datasets incorporating different amounts of noise were segmented, and the mean absolute errors between the ideal and estimated $t$ values over all voxels were as follows:

• 9% noise: GM: 0.08458 (σ=0.11885); WM: 0.04399 (σ=0.08759); CSF: 0.04157 (σ=0.09795)
• 3% noise: GM: 0.05435 (σ=0.08597); WM: 0.02923 (σ=0.06414); CSF: 0.02585 (σ=0.06517)
• 0% noise: GM: 0.03874 (σ=0.06301); WM: 0.01936 (σ=0.03755); CSF: 0.02077 (σ=0.05612)
Although there appears to be minimal partial volume averaging in the results, the segmentation obtained without the use of PV classes ($K_{PV} = 0$) had errors about two times larger, and the algorithm took much longer to converge (> 50 iterations).

5.3 Manually Segmented Real T1 MR Images of the Brain
Twenty normal brain MRI datasets and their manual segmentations were obtained from the Center for Morphometric Analysis at Massachusetts General Hospital; these IBSR datasets are publicly available on the Internet [13]. The volumes were preprocessed to extract the brain parenchyma and corrected for intensity inhomogeneities. However, 7 of the preprocessed volumes exhibited strong shading artifacts of relatively high frequency that the MNI-N3 method [9] was unable to remove. These volumes were excluded from further processing.

Table 1. Jaccard similarity between estimated and true segmentation of IBSR images.

Image  100_23  110_3  111_2  112_2  11_3   13_3   16_3   17_3   191_3  202_3  205_3  7_8    8_4
GM     0.833   0.821  0.811  0.756  0.798  0.845  0.720  0.734  0.819  0.842  0.823  0.776  0.739
WM     0.752   0.707  0.739  0.679  0.723  0.777  0.640  0.628  0.740  0.763  0.768  0.684  0.665
Since the manual segmentations for this set of images do not contain any information about fractional tissue content, we calculated a similarity index for each class by thresholding our partial volume segmentation results. Specifically,
in Table 1 we report the values of the Jaccard similarity, defined as $|S_e \cap S_{ideal}| / |S_e \cup S_{ideal}|$, where $S_e$ and $S_{ideal}$ are the estimated and "true" sets of voxels, respectively, for a given tissue class. The mean Jaccard index was 0.783 for GM and 0.698 for WM. These results are superior to those reported in the recent literature [4,14].
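Computing the Jaccard index from a fractional segmentation then amounts to thresholding $t$ and comparing voxel sets. A minimal sketch, assuming a 0.5 threshold (the paper does not state which threshold was used):

```python
import numpy as np

def jaccard(t_est, true_labels, k, thresh=0.5):
    # Jaccard similarity |Se ∩ Sideal| / |Se ∪ Sideal| for class k,
    # thresholding the fractional map to obtain a hard segmentation.
    se = t_est > thresh
    si = true_labels == k
    union = np.logical_or(se, si).sum()
    return np.logical_and(se, si).sum() / union if union else 1.0
```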
6 Conclusion
We have presented an algorithm for partial volume segmentation of MR images of the brain. The experimental results are comparable or superior to those of other published algorithms. Our method extends a probabilistic clustering algorithm [5,6] to accommodate partial volume voxels and to allow class-dependent values for the intensity variance. In the current work, the weighting function was augmented to favor spatially contiguous regions in the segmentation, but other possibilities are being examined, including the use of prior anatomic information as in [7]. Another, more important feature under implementation is the simultaneous correction of intensity inhomogeneities, which would not only obviate the need for this preprocessing step but also improve on existing techniques.
References

1. H. S. Choi, D. R. Haynor, and Y. Kim, "Partial volume tissue classification of multichannel magnetic resonance images - a mixel model," IEEE Transactions on Medical Imaging, vol. 10, pp. 395–407, Sept. 1991.
2. L. Nocera and J. C. Gee, "Robust partial volume tissue classification of cerebral MRI scans," in SPIE Medical Imaging (K. M. Hanson, ed.), vol. 3034, pp. 312–322, Feb. 1997.
3. D. H. Laidlaw, K. W. Fleischer, and A. H. Barr, "Partial-volume Bayesian classification of material mixtures in MR volume data using voxel histograms," IEEE Transactions on Medical Imaging, vol. 17, pp. 74–86, Feb. 1998.
4. D. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, and R. M. Leahy, "Magnetic resonance image tissue classification using a partial volume model," 2000. Submitted.
5. J. A. Hartigan, Clustering Algorithms. New York: John Wiley & Sons, Inc., 1975.
6. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. New York: John Wiley & Sons, Inc., 1973.
7. J. Ashburner and K. Friston, "Multimodal image coregistration and partitioning - a unified framework," NeuroImage, vol. 6, pp. 209–217, Oct. 1997.
8. S. M. Smith, "Robust automated brain extraction," in Sixth Int. Conf. on Functional Mapping of the Human Brain, p. 625, 1998.
9. J. G. Sled, A. P. Zijdenbos, and A. C. Evans, "A nonparametric method for automatic correction of intensity nonuniformity in MRI data," IEEE Transactions on Medical Imaging, vol. 17, pp. 87–97, Feb. 1998.
10. http://www.bic.mni.mcgill.ca/brainweb/
11. R.-S. Kwan, A. Evans, and G. B. Pike, An Extensible MRI Simulator for Post-Processing Evaluation, vol. 1131 of Lecture Notes in Computer Science, pp. 135–140. Springer-Verlag, May 1996.
12. D. L. Collins, A. Zijdenbos, V. Kollokian, J. Sled, N. Kabani, C. Holmes, and A. Evans, "Design and construction of a realistic digital brain phantom," IEEE Transactions on Medical Imaging, vol. 17, pp. 463–468, June 1998.
13. http://neuro-www.mgh.harvard.edu/cma/ibsr
14. J. C. Rajapakse and F. Kruggel, "Segmentation of MR images with intensity inhomogeneities," Image and Vision Computing, vol. 16, pp. 165–180, 1998.
Nonlinear Edge Preserving Smoothing and Segmentation of 4-D Medical Images via Scale-Space Fingerprint Analysis

Bryan W. Reutter¹,², V. Ralph Algazi², and Ronald H. Huesman¹

¹ Center for Functional Imaging, Lawrence Berkeley National Laboratory, University of California, Berkeley, CA 94720, USA
http://cfi.lbl.gov/{∼reutter, ∼huesman}
² Center for Image Processing and Integrated Computing, University of California, Davis, CA 95616, USA
http://info.cipic.ucdavis.edu/∼algazi
Abstract. An approach is described which has the potential to unify edge preserving smoothing with segmentation based on differential edge detection at multiple scales. The analysis of n-D data is decomposed into independent 1-D problems. Smoothing in various directions along 1-D profiles through n-D data is driven by local structure separation, rather than by local contrast. Analytic expressions are obtained for the derivatives of the edge preserved 1-D profiles. Using these expressions, multidimensional edge detection operators such as the Laplacian or second directional derivative can be composed and used to segment n-D data. The smoothing and segmentation algorithms are applied to simulated 4-D medical images.
1 Introduction
Nonlinear edge preserving smoothing often is performed prior to medical image segmentation. The goal of the nonlinear smoothing is to improve the accuracy of the segmentation by preserving significant changes in image intensity while smoothing random noise fluctuations. Methods include median filtering and gray-scale morphology [6], and spatially varying smoothing driven by local contrast measures [1] or nonlinear diffusion [8,9]. By comparison, spatially invariant linear smoothing uniformly blurs boundaries in reducing noise, thus adversely affecting the accuracy of the subsequent segmentation.

Rather than irreversibly altering the data prior to segmentation, the approach described here has the potential to unify nonlinear edge preserving smoothing with segmentation based on differential edge detection at multiple scales. The analysis of multidimensional (n-D) image data is decomposed into independent 1-D problems that can be solved relatively quickly. Smoothing in various directions along 1-D profiles through n-D data is driven by a measure of local structure separation, rather than by a local contrast measure. The elementary 1-D smoothing algorithm is described in Section 2 and is generalized to arbitrary dimension in Section 3.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 431–437, 2001.
© Springer-Verlag Berlin Heidelberg 2001
In addition, analytic expressions are obtained for the derivatives of the edge preserved 1-D profiles. Using these expressions and the methods described in Section 3, multidimensional edge detection operators such as the Laplacian or the second derivative in the direction of the image intensity gradient can be composed and used to segment n-D data. Computer simulations are used in Section 4 to evaluate the performance of 4-D versions of the n-D smoothing and segmentation algorithms. Preliminary results of a 3-D version of the n-D smoothing algorithm were presented in [2]. Potential applications of these methods include 4-D spatiotemporal segmentation of respiratory gated cardiac positron emission tomography (PET) transmission images to improve the accuracy of attenuation correction [4], and 4-D spatiotemporal segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time activity curves and kinetic parameters for left ventricular volumes of interest [3].
2 1-D Recursive Multiscale Blending
Given linearly smoothed versions of a 1-D signal $f(x)$ and its first two derivatives at $J$ scales, one can perform nonlinear edge preserving smoothing as follows. The linearly smoothed versions of $f(x)$ are denoted by $\bar{f}(x, a_j)$, and the linearly smoothed first and second derivatives are denoted by $\bar{f}^{(1)}(x, a_j)$ and $\bar{f}^{(2)}(x, a_j)$, respectively, for $j = 1, \ldots, J$. The scale coordinate $a$ controls the width of the convolution kernels used in the linear filtering. The kernels are based on the uniform cubic B-spline basis function and its first two derivatives [7]. The cubic B-spline has a support of $4a$ and approximates a Gaussian with a standard deviation $\sigma$ of $\sqrt{1/3}\,a$. Dyadic sampling of the scale coordinate $a$ is used, yielding $a_j = 2^{j-1} a_1$.

The nonlinearly smoothed versions of $f(x)$, denoted by $\tilde{f}(x, a_j)$, are obtained by recursively blending the linearly smoothed versions:

$$\tilde{f}(x, a_j) = \begin{cases} \bar{f}(x, a_1) & j = 1 \\ [1 - C_j(x)]\, \tilde{f}(x, a_{j-1}) + C_j(x)\, \bar{f}(x, a_j) & j = 2, \ldots, J. \end{cases} \tag{1}$$

The blending functions $\{C_j(x);\ j = 2, \ldots, J\}$ are constrained to range between zero and one and play a role similar to that of the spatially varying diffusion coefficients used in typical implementations of edge preserving smoothing via nonlinear diffusion (e.g., [8,9]). When $C_j(x_0) = 0$, smoothing stops in the neighborhood of $x_0$, and $\tilde{f}(x_0, a_j)$ remains unchanged from the value $\tilde{f}(x_0, a_{j-1})$ obtained using nonlinear smoothing at the previous, finer scale. When $C_j(x_0) = 1$, smoothing is unabated, and $\tilde{f}(x_0, a_j)$ is set to the value $\bar{f}(x_0, a_j)$ obtained using linear smoothing at the current, coarser scale. Although the recursive multiscale blending cannot be characterized as nonlinear diffusion, it shares the desirable property of generating no spurious extrema, in the following sense. It can be shown that the nonlinearly smoothed signal $\tilde{f}(x, a_j)$ is a convex combination of the linearly smoothed signals $\{\bar{f}(x, a_i);\ i = 1, \ldots, j\}$ for all $x$, and therefore is bounded by the extrema of the linearly smoothed signals.
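The recursion (1) can be sketched with any family of linear smoothers. The toy version below substitutes a moving average for the cubic-B-spline kernels, so it only illustrates the blending structure, not the paper's exact filters; names are illustrative:

```python
import numpy as np

def box_smooth(f, width):
    # Stand-in linear smoother f_bar(x, a_j); the paper uses cubic
    # B-spline kernels rather than a moving average.
    return np.convolve(f, np.ones(width) / width, mode='same')

def multiscale_blend(f, C, widths):
    # Recursive blending (1): start from the finest linear result, then
    # blend in coarser linear smoothings under control of C_j in [0, 1].
    f_tilde = box_smooth(f, widths[0])
    for j in range(1, len(widths)):
        f_bar = box_smooth(f, widths[j])
        f_tilde = (1 - C[j]) * f_tilde + C[j] * f_bar
    return f_tilde
```

The two extreme settings make the convex-combination property concrete: with every $C_j \equiv 1$ the output equals the coarsest linear smoothing, and with every $C_j \equiv 0$ it equals the finest.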
Fig. 1. Augmented scale-space fingerprint for a noisy ramp edge of width four and a contrast to noise ratio of 2.5. Solid fingerprint lines depict the zero-crossing locations of ¯f(2) (x, a) (i.e., edge and ledge locations) over a continuum of scales. Dashed lines depict the zero-crossing locations of ¯f(1) (x, a) (i.e., ridge and trough locations). Below the fingerprint, the noiseless edge is shown with the noisy edge.
The multiscale blending functions {Cj (x); j = 2, . . . , J} are defined via the following analysis (presented in more detail in [2]) of the augmented scale-space fingerprint for f(x). The augmented scale-space fingerprint is a graphical depiction of the locations of the zero-crossings of the first two derivatives of the linearly smoothed signal as a function of scale (Fig. 1). At a particular scale aj , each zero-crossing location of ¯f(2) (x, aj ) is labeled as either a local maximum (edge) or local minimum (ledge) in gradient magnitude, depending on its proximity to nearby zero-crossing locations of ¯f(1) (x, aj ) (i.e., ridges and troughs). For each edge location, the distance separating the ridge, trough, or ledge on either side of the edge is calculated. The blending function Cj (x) is then assigned a value ranging between zero and one at the edge location, based on the separation distance and the heuristic that larger separation distances are mapped to smaller blending function values. Cj (x) is then defined for all x by interpolating the values at the edge locations with a piecewise quartic spline whose first through third derivatives are zero at the edge locations.
3 n-D Smoothing and Segmentation
Edges can be preserved in n-D data by applying the 1-D smoothing algorithm described in Section 2 independently along the coordinate axis directions, as well as along the diagonal directions of the 2-D planes spanned by the coordinate axes, and averaging the results. This will be referred to as multidirectional 1-D processing, and builds on the work described in [9], in which processing was performed only along the coordinate axis directions. The information obtained along the diagonal directions allows the characterization of the first and second order differential properties of the data in any direction. Using this additional information, multidimensional edge detection operators such as the Laplacian or the second derivative in the direction of the image intensity gradient can be composed and used to segment the data as follows. The n-D data array is denoted by f(x), where x = [ x1 · · · xn ]T is the position vector for the domain of the data and “[ ]T ” denotes the matrix transpose. The 1-D profile passing through the point x0 in the direction v0 is denoted by
$$f_{x_0, v_0}(s) = f(x_0 + s\,v_0), \tag{2}$$

where $v = [\, v_1 \cdots v_n \,]^T$ is a unit vector and $s$ is an arc length parameter. The relationships between the first and second derivatives of $f_{x,v}(s)$ and the first and second order partial derivatives of the n-D data $f(x)$ are

$$\frac{d f_{x,v}}{ds} = v \cdot \nabla f = v^T g, \qquad \frac{d^2 f_{x,v}}{ds^2} = v \cdot \nabla[\, v \cdot \nabla f \,] = v^T H v, \tag{3}$$
where $g(x)$ is the gradient vector and $H(x)$ is the Hessian matrix. One can write $v^T H v$ as the inner product $w^T h$ of the $\frac{n^2+n}{2}$-element vectors

$$w = \left[\, v_1^2 \;\; 2v_1 v_2 \;\cdots\; 2v_1 v_n \;\; v_2^2 \;\; 2v_2 v_3 \;\cdots\; 2v_2 v_n \;\cdots\; v_{n-1}^2 \;\; 2v_{n-1} v_n \;\; v_n^2 \,\right]^T \tag{4}$$

$$h = \left[\, H_{11} \;\; H_{12} \;\cdots\; H_{1n} \;\; H_{22} \;\; H_{23} \;\cdots\; H_{2n} \;\cdots\; H_{(n-1)(n-1)} \;\; H_{(n-1)n} \;\; H_{nn} \,\right]^T, \tag{5}$$

where $H_{ij} = \partial^2 f / \partial x_i \partial x_j$. Thus, given derivative estimates in all 1-D profiles along the coordinate axis directions and the diagonal directions of the 2-D planes spanned by the coordinate axes (for a total of $n^2$ directions), one can compute least squares estimates of the gradient vector $g(x)$ and the vector $h(x)$ of Hessian matrix elements as follows. The $n^2$ direction vectors for the 1-D profiles and the corresponding $w$ vectors are stored in the matrices
$$V = \left[\, v_1 \;\cdots\; v_{n^2} \,\right]^T \qquad W = \left[\, w_1 \;\cdots\; w_{n^2} \,\right]^T. \tag{6}$$
The first and second derivatives along the 1-D profiles are stored in the vectors

$$f^{(1)}(x) = \left[\, \frac{d f_{x,v_1}}{ds} \;\cdots\; \frac{d f_{x,v_{n^2}}}{ds} \,\right]^T \qquad f^{(2)}(x) = \left[\, \frac{d^2 f_{x,v_1}}{ds^2} \;\cdots\; \frac{d^2 f_{x,v_{n^2}}}{ds^2} \,\right]^T. \tag{7}$$
It can be shown that the unweighted least squares estimates for the gradient vector $g(x)$ and the vector $h(x)$ of Hessian matrix elements are

$$\hat{g}(x) = \left( V^T V \right)^{-1} V^T f^{(1)} \qquad \hat{h}(x) = \left( W^T W \right)^{-1} W^T f^{(2)}. \tag{8}$$
Using these estimates, one can compose multidimensional edge detection operators such as the Laplacian, $\mathrm{trace}(\hat{H})$, or the second derivative in the direction of the gradient, weighted by the squared magnitude of the gradient, $\hat{g}^T \hat{H} \hat{g}$.
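The estimation pipeline (4)–(8) can be checked on the smallest case, $n = 2$, where the $n^2 = 4$ profile directions are the two axes and the two plane diagonals and $w$ has $(n^2+n)/2 = 3$ elements. On a known quadratic surface, whose directional derivatives are exact, the least squares estimates recover $g$ and $H$ without error:

```python
import numpy as np

# Directions for n = 2: the coordinate axes plus the two diagonals of the
# single 2-D plane (n^2 = 4 directions in total).
dirs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
V = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)   # rows v^T, as in (6)

def w_vec(v):
    # (4) for n = 2: w = [v1^2, 2 v1 v2, v2^2]
    return np.array([v[0]**2, 2.0 * v[0] * v[1], v[1]**2])

W = np.array([w_vec(v) for v in V])                      # rows w^T, as in (6)

# Known quadratic surface f(x) = g.x + 0.5 x^T H x evaluated at x = 0,
# so the exact directional derivatives are v^T g and v^T H v, per (3).
g_true = np.array([2.0, -1.0])
H_true = np.array([[3.0, 1.0], [1.0, 4.0]])
f1 = V @ g_true                                 # first derivatives, (7)
f2 = np.einsum('ij,jk,ik->i', V, H_true, V)     # second derivatives, (7)

# Unweighted least squares estimates, per (8)
g_hat = np.linalg.lstsq(V, f1, rcond=None)[0]
h_hat = np.linalg.lstsq(W, f2, rcond=None)[0]   # [H11, H12, H22], per (5)
laplacian = h_hat[0] + h_hat[2]                 # trace of the Hessian
```

On real data the directional derivatives come from the 1-D filtered profiles and are noisy, so the overdetermined system ($n^2$ equations for $n$ and $(n^2+n)/2$ unknowns) averages the noise down rather than fitting it exactly.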
4 4-D Smoothing and Segmentation Simulations
A 4-D version of the n-D smoothing algorithm was applied to simulated respiratory gated PET transmission images generated using the Mathematical Cardiac Torso (MCAT) phantom [5]. The 4-D image array was composed of 40 contiguous 5 mm-thick transverse slices at 15 respiratory phases. Each transverse
slice had 80×80 pixels with pixel size 5×5 mm. Diaphragm and heart motion of 15 mm in the superior-inferior direction was simulated, in conjunction with chest wall diameter changes of 9.8 mm in the left-right direction and 20 mm in the anterior-posterior direction. Gaussian white noise was added to the images to yield contrast to noise ratios of 5.0 at the air-soft tissue boundary and 3.5 at the soft tissue-lung boundaries (Fig. 2a).

The 1-D smoothing algorithm was applied independently along the x, y, z, and t axes of the noisy 80×80×40×15 dataset, as well as along the 12 diagonal directions of the 2-D planes spanned by the axes. Multiscale linear 1-D filtering was performed in each of the 16 directions using a pre-smoother followed by cubic B-spline-based smoothing and differentiation operators operating at three different scales. The filters combined to yield kernels with supports of 1×7, 1×11, and 1×19, which approximated Gaussians with σ = 1, √2, and √6 pixels, respectively. For comparison, linear smoothing was also performed using a 5×5×5×5 separable filter, which approximated a 4-D Gaussian with σ = 0.70 pixels. This small scale separable filter was designed to yield the same noise reduction for independent, identically distributed Gaussian noise as that obtained by averaging the outputs of the 16 large scale (1×19 B-spline-based) linear 1-D smoothing filters (Figs. 2c,e). Fig. 2g shows the result of averaging the outputs of the 16 large scale nonlinear 1-D smoothing filters obtained using recursive multiscale blending. The differences between the results are subtle. The large scale nonlinear multidirectional 1-D filter and the small scale separable filter blurred the edges the least, while the large scale linear multidirectional 1-D filter blurred the edges the most (Fig. 2b).
The linear and nonlinear multidirectional 1-D smoothing results were obtained using an average of 5.8 minutes of processing for each of the 16 directions (195 MHz R10000-based SGI workstation). Results of segmenting the images using 4-D second directional derivative operators are shown in Figs. 2d,f,h. For the linear and nonlinear multidirectional 1-D processing, the 4-D gradient vector and Hessian matrix were calculated in 17 minutes using the methods described in Section 3. For respiratory phase 8, 3-D models for the second directional derivative zero-crossing surfaces were constructed in less than one minute using the methods described in [4]. The large scale nonlinear multidirectional 1-D operator and the small scale separable operator yielded comparable segmentations. Relatively accurate lung surface models were constructed, to which were attached spurious surface elements. For the large scale linear multidirectional 1-D operator, there were fewer spurious surface elements and the lung surface models were less accurate.
5 Future Directions
The computer simulations in Section 4 demonstrate that nonlinear edge preserving smoothing and segmentation of 4-D medical images can be performed in a timely manner on a workstation. Unlike typical implementations based on nonlinear diffusion, recursive multiscale blending requires only a small, fixed number (3–5) of iterations. Although performed serially here, the computations
Bryan W. Reutter, V. Ralph Algazi, and Ronald H. Huesman
[Fig. 2 panels: (a) original noisy image; (b) edge at diaphragm; (c,d) 5×5×5×5 linear smoothing; (e,f) 16×(1×19) linear smoothing; (g,h) 16×(1×19) nonlinear smoothing]
Fig. 2. Smoothing and segmenting simulated 4-D respiratory gated PET transmission images. (a) Noisy 52×26 pixel sub-image from a coronal cross section. The right dome of the diaphragm is the larger, semicircular structure on the left. The heart is the smaller, circular structure on the right. (b) Profile through right dome of diaphragm, depicted by the white segment in (a). The circles and the dot-dashed line depict noiseless and noisy simulated values, respectively. The dashed, dotted, and solid lines depict values obtained by (c) small scale separable, (e) large scale linear multidirectional 1-D, and (g) large scale nonlinear multidirectional 1-D filtering, respectively. (d,f,h) Segmentation results for (c,e,g), respectively, are depicted as solid lines. The dotted lines depict the true soft tissue-lung boundaries.
4-D Edge Preserving Smoothing and Segmentation
can be massively parallelized. Additional work is needed to optimize the multiscale blending functions with respect to spurious zero-crossings in the derivatives of the nonlinearly smoothed data. With the goal of improving the preservation of fine details, further investigation is needed to perform weighted least squares estimation of a 4-D dataset and its partial derivatives from the results of performing recursive multiscale blending in multiple directions.
Acknowledgments
The authors thank the University of North Carolina Medical Imaging Research Laboratory for making the MCAT phantom available. This work was supported by the National Heart, Lung, and Blood Institute of the US Department of Health and Human Services under grant P01-HL25840; by the Director, Office of Science, Office of Biological and Environmental Research, Medical Sciences Division of the US Department of Energy under contract DE-AC03-76SF00098; and by the University of California MICRO program. This work was developed in part using the resources at the US Department of Energy National Energy Research Scientific Computing (NERSC) Center.
References
1. Kitamura, K., Iida, H., Shidahara, M., Miura, S., Kanno, I.: Noise reduction in PET attenuation correction using non-linear Gaussian filters. IEEE Trans. Nucl. Sci. 47 (2000) 994–999
2. Reutter, B.W., Algazi, V.R., Huesman, R.H.: Computationally efficient nonlinear edge preserving smoothing of n-D medical images via scale-space fingerprint analysis. In Ulma, M. (ed.), 2000 IEEE Nuclear Science Symposium and Medical Imaging Conference Record (2001, in press)
3. Reutter, B.W., Gullberg, G.T., Huesman, R.H.: Direct least-squares estimation of spatiotemporal distributions from dynamic SPECT projections using a spatial segmentation and temporal B-splines. IEEE Trans. Med. Imag. 19 (2000) 434–450
4. Reutter, B.W., Klein, G.J., Huesman, R.H.: Automated 3-D segmentation of respiratory-gated PET transmission images. IEEE Trans. Nucl. Sci. 44 (1997) 2473–2476
5. Segars, W.P., Lalush, D.S., Tsui, B.M.W.: Modeling respiratory mechanics in the MCAT and spline-based MCAT phantoms. In Seibert, J.A. (ed.), 1999 IEEE Nuclear Science Symposium and Medical Imaging Conference Record (2000) 985–989
6. Sternberg, S.R.: Grayscale morphology. Comput. Vis. Graph. Image Proc. 35 (1986) 333–355
7. Wang, Y.-P., Lee, S.L.: Scale-space derived from B-splines. IEEE Trans. Patt. Anal. Mach. Intell. 20 (1998) 1040–1055
8. Weickert, J.: A review of nonlinear diffusion filtering. In ter Haar Romeny, B., Florack, L., Koenderink, J., and Viergever, M. (eds.), Scale-Space Theory in Computer Vision: Proceedings of the First International Conference (1997) 3–28
9. Weickert, J., ter Haar Romeny, B.M., Viergever, M.A.: Efficient and reliable scheme for nonlinear diffusion filtering. IEEE Trans. Image Proc. 7 (1998) 398–410
Spatio-temporal Segmentation of Active Multiple Sclerosis Lesions in Serial MRI Data
Daniel Welti1,3, Guido Gerig2, Ernst-Wilhelm Radü3, Ludwig Kappos3, and Gabor Székely1
1 Computer Vision Laboratory, Swiss Federal Institute of Technology, CH-Zürich
2 Department of Computer Science, University of North Carolina, USA-Chapel Hill
3 Departments of Neuroradiology and Neurology, University Hospital, CH-Basel
Abstract. This paper presents a new approach for the automatic segmentation and characterization of active MS lesions in 4D data of multiple sequences. Traditional segmentation of 4D data applies individual 3D spatial segmentation to each image data set, thus not making use of correlation over time. More recently, a time series analysis has been applied to 4D data to reveal active lesions [3]. However, misregistration at tissue borders led to false positive lesion voxels. Lesion development is a complex spatio-temporal process, consequently methods concentrating exclusively on the spatial or temporal aspects of it cannot be expected to provide optimal results. Active MS lesions were extracted from the 4D data in order to quantify MR-based spatiotemporal changes in the brain. A spatio-temporal lesion model generated by principal component analysis allowed robust identification of active MS lesions overcoming the drawbacks of traditional purely spatial or purely temporal segmentation methods.
1 Introduction
Multiple sclerosis (MS) is a chronic inflammatory disease of the central nervous system (CNS). MS lesions consist of areas of inflammation, myelin loss, axonal degeneration and gliotic scar formation. Magnetic resonance (MR) is the primary paraclinical modality to monitor the natural history of the disease and to evaluate the efficacy of treatment in long-term therapeutic studies. In recent years several segmentation techniques have been developed to quantify brain MRI lesion load. Manual segmentation is not only time consuming but also tedious and error prone. The possibility of acquiring multi-echo image data stimulated several attempts to apply classical statistical pattern recognition techniques. But purely intensity-based segmentation has strong limitations and often does not provide satisfactory results. Different techniques have been developed and tested to incorporate anatomical knowledge into the segmentation procedure. As 90–95% of MS lesions occur in white matter tissue, prior identification of the white matter area can be used to reduce the number of false positive lesions [8]. However, delineation of lesions is often not accurate enough.
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 438–445, 2001. © Springer-Verlag Berlin Heidelberg 2001
Tissue class distributions overlap and therefore voxels are misclassified. This is especially true for MS lesions. The chronological course of MS lesions can be investigated by looking at significant changes in MR scans at two different time points. By examining temporal changes in consecutive MR scans, rather than measuring absolute intensity values, active MS lesions can be segmented and characterized in a straightforward manner. A simple approach to detect changes in time series MR data is to subtract two consecutive, registered MR images. Other methods to detect and quantify active lesions in image sequences have been introduced in [7] and [6]. Both approaches rely on calculating a non-rigid deformation field between two consecutive images to express the changes of brain tissue appearance due to pathology. Unfortunately this approach cannot always capture the complex behavior of lesion development, because luminance changes can always be traded for deformation and vice versa. A new method to segment active MS lesions has been introduced in [3]. After preprocessing serial MR data including normalization of the brightness and precise registration of serial volume data sets to 4D data, the hypothesis can be established that intensities in static regions remain unchanged over time, whereas local changes in tissue characteristics cause typical fluctuations in the voxel's time series. A time series analysis has then been applied to reveal active lesion voxels. The described algorithm is highly sensitive to rigid registration, brightness normalization and noise reduction. Whereas successful algorithms are available for the latter two preprocessing steps, the quality of the rigid registration is strongly dependent on the axial resolution of MR scans. Especially at tissue borders, misregistration leads to fluctuations of intensities over time.
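The simple subtraction approach mentioned above can be sketched in a few lines of Python (toy arrays; in practice the scans must first be registered and brightness-normalized, as the text stresses):

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.normal(size=(32, 32))      # registered, normalized scan at t0
follow = base.copy()
follow[10:14, 10:14] += 3.0           # synthetic "active lesion": local rise

# Difference of consecutive registered scans; a threshold flags the change.
diff = follow - base
active = np.abs(diff) > 1.5
assert active.sum() == 16 and active[11, 11]
```

This also illustrates the failure mode discussed in the text: any residual misregistration shows up in `diff` exactly like a real intensity change.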
Lesion development is a complex spatio-temporal process, consequently methods concentrating exclusively on the spatial or temporal aspects of it cannot be expected to provide optimal results. The goal is therefore to characterize lesion evolution by quantitative characterization of MR-based spatio-temporal changes. A spatio-temporal lesion model can be used to improve the segmentation results of the time series analysis described above. False positive lesions should be clearly distinguishable from true lesions considering the expected spatio-temporal behavior of active MS lesions.
2 Model Generation
In order to perform temporal and spatio-temporal analysis of MS lesions, we first acquired time series of data in multiple sequences. 11 patients with definite MS underwent monthly MR scans for one year. Proton-density-weighted (PD), T2-weighted (T2), FLAIR (FL), and T1-weighted images before and after the application of Gadolinium contrast agent (T1 and GD) were acquired with an axial resolution of 2 mm. Bias-field correction and brightness normalization of the images were performed using the method described in [1]. Due to unavoidable misregistration during repeated acquisition, an intensity-based rigid registration
Fig. 1. Spatio-temporal evolution of one lesion in different pulse sequences: T2 (a), PD (b), T1 (c), FL (d). The three axes indicate radius, time and intensity.
algorithm (described in [4]) was used to create intensity-corrected multichannel 4D data. Up to now, data collections were compiled using complete, segmented MRI sequences of MS patients. In order to extract a spatio-temporal model of MS lesions, this patient based view was replaced by a lesion based view. Active lesions were extracted manually from 4D data to form a lesion database. The complete gray-value information of the lesions and their surrounding tissue in all MR sequences at all time-points was stored. In order to characterize the spatio-temporal development, we first looked for a spatial model to describe the lesions at a specific time. As they can have varying shape and size we must normalize them in our database in order to robustly characterize changes of the spatial appearance. Lesions can very often be described as radially-symmetric structures, therefore a 1D model of radial intensity changes can be used to compactly describe their intensity distribution [2]. This approach describes the structure of a lesion as a collection of layers with specific intensities (onion-skin model). Gray-level values at equidistant isosurfaces provide a one-dimensional characterization of a lesion. Adapting the methods applied in [2] to 3D, we used mean curvature evolution to obtain a 1D characterization. Applying mean curvature flow to 3D images of MS lesions, the corresponding (convex) isosurfaces of the lesion first become asymptotically spherical before evolving into a point at the center of the lesion. Consequently, the intensities at the (fixed) center correspond to the intensities of the original isosurfaces. By collecting these values, a 1D radial intensity profile can be obtained. Non-convex surfaces will eventually split into convex parts [5]. However, MS lesions are usually rather ellipsoidal with a more or less clearly defined center. 
Therefore, even if small parts were split during the flow, the corresponding intensity evolution at the center will capture most of the internal structure of the lesion. For complex-shaped lesions this kind of normalization might be too strong. We therefore also experimented with more detailed spatial lesion descriptions. However, the low number of active lesions in our database forced us to use the highly simplified 1-dimensional description scheme.
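A minimal Python sketch of a 1-D radial intensity profile, using a direct spherical-shell average as a simple stand-in for the curvature-flow construction described above (function and names are illustrative, not the authors' code):

```python
import numpy as np

def radial_profile(vol, center, n_bins):
    # Average intensity in spherical shells around `center` -- a direct
    # proxy for the 1-D "onion-skin" profile obtained via curvature flow.
    idx = np.indices(vol.shape)
    r = np.sqrt(sum((i - c) ** 2 for i, c in zip(idx, center)))
    bins = np.minimum(r.astype(int), n_bins - 1)
    sums = np.bincount(bins.ravel(), weights=vol.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return sums / counts

# Synthetic radially symmetric "lesion": bright center, decaying outward.
z, y, x = np.indices((21, 21, 21))
r = np.sqrt((z - 10) ** 2 + (y - 10) ** 2 + (x - 10) ** 2)
lesion = np.exp(-(r / 5.0) ** 2)
prof = radial_profile(lesion, (10, 10, 10), 10)
assert prof[0] > prof[5] > prof[9]   # monotonically decaying profile
```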
Fig. 2. Projection of the sample points to the first and second (a), to the third and fourth (b), and to the fifth and sixth (c) principal component.
To cover the spatio-temporal development in multiple sequences, we chose to observe the lesion over a period of six months. The appearance of an MS lesion is coupled with a steep rise of intensities in the FL sequence. Therefore, for temporal localization, we determined the maximal gradient for each voxel in the FL sequence and used it for the extraction of the time slot on all pulse sequences. Observation started one month before we could identify the lesion in the FL sequence and finished four months after the appearance. However, it is not possible to extract the defined time slot for lesions appearing at the end of the observation period, and those already visible in the first examination. Therefore, such lesions were discarded from further analysis. 23 active lesions remained in the database. The center of the lesions was determined by first applying a few steps of mean curvature flow to the maximum intensity image projected over time in the FL sequence, followed by a search for the brightest voxel in the lesion's image. The spatial extent was given by the manually extracted region of interest. The evolution was followed in four different sequences: T2, PD, T1, FL (Figure 1). For further analysis the extent of each lesion was normalized to a standard size. To determine the variation of the spatio-temporal behavior of MS lesions, a principal component analysis (PCA) was applied to the normalized descriptors in the database. The considered vectors xi, describing one lesion as shown in Figure 1, consist of the normalized intensity profiles (with a length of 40 voxels each) of all considered time points (6 time points) on all pulse sequences (4 sequences). As we have extracted 23 active lesions, we consider a 23 × 960-dimensional matrix X consisting of the vectors xi. From the covariance matrix ΣX of X, the eigenvectors ci and eigenvalues λi were calculated.
The first four components, which have been used in the subsequent analysis, account for about 90% of the sum of the eigenvalues. These components, representing the corresponding spatio-temporal evolution, define our lesion model. In order to verify that the resulting model can be used as a reference for a “typical” lesion development, the samples were projected to the resulting normalized Eigenspace. In Figure 2 the projections of the 23 samples to the first six principal components are shown.
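The PCA step can be sketched as follows (random stand-in data, not the lesion database; SVD is used because there are far fewer samples than features):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(23, 960))        # 23 lesions x 960-dim descriptors
Xc = X - X.mean(axis=0)               # center the samples

# With n_samples << n_features, PCA via the thin SVD is the economical route.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals = s ** 2 / (X.shape[0] - 1)   # eigenvalues of the covariance matrix
explained = np.cumsum(eigvals) / eigvals.sum()
coords = Xc @ Vt.T                    # projections onto the principal axes

assert Vt.shape == (23, 960)          # at most 23 non-trivial components
assert np.isclose(explained[-1], 1.0)
```

For the real data the first four components carry about 90% of the total eigenvalue sum, i.e. `explained[3]` would be near 0.9.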
Fig. 3. The projection of the spatio-temporal evolution of a lesion voxel to the first four axes of the Eigenspace.
3 Spatio-temporal Segmentation
Results of the time series analysis provide the starting point of our segmentation process [3]. To capture the spatio-temporal aspects of the evolution of voxels in multiple sequences, we extracted for each active voxel (detected by the time series analysis) the same local spatio-temporal evolution characteristics as for the model generation. Mean curvature flow was applied to each 3D image of the 4D data in all sequences. Snapshots of the diffused 3D images were taken at regular time intervals. By observing the intensities at an arbitrary fixed position during the diffusion process, we can extract the hypothetical radial spatial distribution of intensities at that point. Doing this for each time point and for all pulse sequences results in a description of the spatio-temporal evolution in multiple sequences of each active voxel. A heuristic was applied to extract the size of the hypothetical lesion based on the gradient along the radial intensity profile in the FL sequence. The spatial extent was normalized according to the procedure applied during model generation. As our lesion model was defined using a fixed number (6) of time points, we had to extract the appropriate temporal part of the considered evolution, in accordance with the time spread of the lesions in the database. Again, the maximal gradient in the FL sequence was used to define the time point of appearance. Voxels having their maximal gradient at the end of the observation period or their minimal gradient at the first time point (voxel "activated" before the first examination) were excluded from the analysis. By characterizing voxels including their local neighborhood over time, an instrument is provided to reject voxels having a spatio-temporal development dissimilar to the one of MS lesions. We therefore compared the local spatio-temporal evolution of each active voxel with the spatio-temporal evolution of the generated model.
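Temporal localization via the maximal gradient of the FL time course might look like this (toy time course; the function name and indexing convention are illustrative assumptions):

```python
import numpy as np

def appearance_frame(fl_timecourse):
    # Frame of maximal forward difference in the FL intensity time course,
    # taken as the lesion's time point of appearance.
    return int(np.argmax(np.diff(fl_timecourse)) + 1)

tc = np.array([1.0, 1.1, 1.0, 3.5, 3.6, 3.4])  # steep rise into frame 3
assert appearance_frame(tc) == 3
```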
The lesion samples are rather homogeneously distributed around the mean value (Figure 2). Therefore, the mean spatio-temporal evolution of all lesions in the database can be regarded as a characteristic model of a “typical” MS lesion. To measure the deviation of the evolution of an unknown candidate voxel from the generated model, the Mahalanobis distance was used.
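A minimal sketch of the Mahalanobis test against the lesion model (toy data in a 4-D PCA subspace; in the paper the mean and covariance would come from the 23 database lesions):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    # Distance of a candidate descriptor from the lesion model distribution.
    d = x - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

rng = np.random.default_rng(2)
samples = rng.normal(size=(23, 4))    # lesion projections on first 4 PCs
mean, cov = samples.mean(axis=0), np.cov(samples, rowvar=False)

d_typical = mahalanobis(mean, mean, cov)                      # exactly 0
d_outlier = mahalanobis(mean + 10 * samples.std(axis=0), mean, cov)
assert d_typical < 1e-9 < d_outlier
```

Thresholding this distance is exactly the discrimination step shown in Figure 5.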
Fig. 4. The projection of the spatio-temporal evolution of a false positive CSF voxel onto the first four axes of the Eigenspace.
In Figure 3 the spatio-temporal evolution of a lesion voxel (large filled circle) projected to the first four axes of the mentioned Eigenspace is shown. The lesions from the database are represented by the small circles to visualize the range of valid evolution. In Figure 4 the evolution of a voxel near the CSF (large circle) projected onto the Eigenspace is shown. This voxel has been wrongly identified as a part of an active lesion by the time series analysis. The extracted spatio-temporal behavior is quite different from the model, which is clearly visible in the projections. As mentioned before, only voxels resulting from the time series analysis which appeared in the valid observation period (months 2–8) were considered for the spatio-temporal filter. In Figure 5 the Mahalanobis distances of all these voxel candidates on one slice are shown. Low distances are coded as bright intensities. It can be seen that it is easy to choose a threshold to discriminate between lesions and false positive voxels.
Fig. 5. Mahalanobis distances for all voxel candidates revealed by the time series analysis (a), and the corresponding histogram (b).
Fig. 6. Results of the purely temporal-based approach (a), and of the newly proposed spatio-temporal method (b).
In Figure 6 the segmentation results of the time series analysis and of the new, spatio-temporal approach are shown. The color reveals the time point of appearance by taking the maximal gradient of each voxel’s time course into account. Most active voxels wrongly identified as lesions in the temporal approach were successfully eliminated by the spatio-temporal segmentation.
4 Conclusions
A new spatio-temporal approach has been introduced to segment and characterize active MS lesions. Deficiencies of a time series analysis with respect to registration errors are successfully rectified. The spatio-temporal model derived from a manually created lesion database by PCA has been successfully used to characterize and segment active lesions from 4D data in multiple MR sequences. The statistics of the extracted lesion descriptors in our database seems to be reasonably described by a multivariate Gaussian distribution. This was essential for the applied simple characterization of a "typical" lesion evolution and the deviations from this mean. However, it has to be realized that the number of collected active lesions is much too small to support a reliable statement about the "true" distribution, which may well be much more complex than what we have found. This may necessitate the application of more sophisticated algorithms for the identification of the expected spatio-temporal pattern. On the other hand, it would be very interesting to find clearly separated, distinctive lesion classes in the data. More advanced methods like ICA could be applied to the data to compensate for the clear limitations of the selected simple PCA-based method and to provide better means for the analysis of clustered distributions. First experiments demonstrated slight improvements as compared to PCA-based analysis. The low
number of active lesions in our database, however, did not allow us to reliably estimate the potential of the approach. Accordingly, a significantly larger database of active MS lesions is needed in order to eventually find a classification that can distinguish between different lesion behaviors.
References
1. Brechbühler, C.: Compensation of spatial inhomogeneity in MRI based on a multivalued image model and a parametric bias estimate. Visualization in Biomedical Computing (1996) 141–146
2. Gerig, G., Székely, G., Israel, G., Berger, M.: Detection and characterization of unsharp blobs by curve evolution. In Information Processing in Medical Imaging - IPMI'95 (1995)
3. Gerig, G., Welti, D., Guttmann, C., Colchester, A., Székely, G.: Exploring the discrimination power of the time domain for segmentation and characterization of lesions in serial MR data. In Medical Image Computing and Computer-Assisted Intervention - MICCAI'98 (1998)
4. Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., Suetens, P.: Multimodality image registration by maximization of mutual information. IEEE Transactions on Medical Imaging 16 (1997) 187–198
5. Olver, P.J., Sapiro, G., Tannenbaum, A.: Invariant geometric evolutions of surfaces and volumetric smoothing. SIAM J. Appl. Math. 57(1) (1997) 176–194
6. Rey, D., Subsol, G., Delingette, H., Ayache, N.: Automatic detection and segmentation of evolving processes in 3D medical images: application to multiple sclerosis. In Information Processing in Medical Imaging - IPMI'99 1613 (1999) 154–167
7. Thirion, J.P., Calmon, G.: Deformation analysis to detect and quantify active lesions in 3D medical image sequences (Technical Report 3101). Institut National de Recherche en Informatique et en Automatique (1997)
8. Warfield, S. et al.: Automatic identification of grey matter structures from MRI to improve the segmentation of white matter lesions. J. Image Guided Surg. 1 (1996) 326–338
Time-Continuous Segmentation of Cardiac Image Sequences Using Active Appearance Motion Models
Boudewijn P.F. Lelieveldt1, Steven C. Mitchell2, Johan G. Bosch1, Rob J. van der Geest1, Milan Sonka2, and Johan H.C. Reiber1
1 Dept. of Radiology, Leiden University Medical Center, Leiden, The Netherlands, [email protected]
2 Dept. of Electrical and Computer Engineering, University of Iowa, Iowa City, USA
Abstract. This paper describes a novel, 2D+time Active Appearance Motion Model (AAMM). Cootes’s 2D AAM framework was extended by considering a complete image sequence as a single shape/intensity sample. This way, the cardiac motion is modeled in combination with the shape and image appearance of the heart. The clinical potential of the AAMMs is demonstrated on two imaging modalities – cardiac MRI and echocardiography.
1 Introduction
Automated segmentation of cardiac image sequences such as cardiac MR images and echocardiograms has proven to be a challenging task. Approaches dedicated to left ventricular (LV) segmentation in MR, CT and echocardiographic data have been based on, among others, active contours and balloons [1], pixel/region classification [2], and dynamic programming [3]. Though partially successful, three major problems are associated with many previously described contour detection strategies:
– An expert-drawn contour may not always correspond to the locations of the strongest local image features. For example, in MR images many cardiologists draw the endocardial border as a convex hull around the blood pool to exclude the papillary muscles.
– Because of noise and acquisition artifacts in cardiac images, image information can be ill-defined, unreliable or missing. To overcome this, knowledge about image appearance, organ shape and common shape variations should form an integral part of a segmentation approach.
– Many automated techniques perform a static segmentation on a single 2D or 3D frame, and may therefore produce results that are inconsistent with the dynamics of the cardiac cycle.
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 446–452, 2001. © Springer-Verlag Berlin Heidelberg 2001
In previous work [4], we have shown that the Active Appearance Models (AAMs) introduced by Cootes and Taylor [5] are highly suitable for the segmentation of static cardiac MR images, because they exploit prior knowledge
about the cardiac shape, image appearance and observer preference. However, to segment a full cardiac cycle, multiple models are required for different cardiac phases. Moreover, the sequential application of 2D AAMs to a cardiac time sequence does not guarantee a time-continuous result. The primary contribution of this paper is the development of a novel, 2D+time Active Appearance Motion Model (AAMM) that models the dynamics of the cardiac cycle in combination with the shape and image appearance of the heart, therefore yielding time-continuous segmentation results.
2 Methods
2.1 Active Appearance Motion Models
In this work, Cootes' 2D AAM framework [5] was extended by considering a complete image sequence as a single shape/intensity sample. An AAMM is constructed in the following manner:
1. By defining a point-correspondence in the image plane (as in 2D AAMs), but additionally defining a correspondence in the time dimension, a 2D+time contour sequence is expressed as a 2-dimensional shape sample. "Phase correspondence" is defined by selecting a fixed number of frames covering the full cycle (End Diastole (ED) to End Systole (ES) to the next ED) using a nearest neighbor interpolation. By concatenating the contour points from the phase-normalized image frames, the i-th point in the j-th time frame x_ij can be indexed in the shape sample x_s as x_ij = x_s((j − 1) × N_c + i), where N_c is a fixed number of contour points per frame, i = 1, 2, ..., N_c and j = 1, 2, ..., N_phases.
2. All shape samples are aligned using a 2D Euclidean transform in the image plane. An average shape sequence and the shape eigenvector matrix P_s are calculated by performing a Principal Component Analysis (PCA) on the sample point distributions. Each shape sample is expressed as a linear combination of eigenvectors, b_s = P_s^T(x_s − \bar{x}_s).
3. All image sequences are warped to the average shape sequence using a 2D piecewise affine warping, and each sequence is intensity-normalized to a mean intensity of 0 and a variance of 1.
4. Each warped image sequence is expressed as a sequence intensity vector g by concatenating the intensity vectors from each phase-normalized frame.
5. A PCA is performed on the normalized intensity vectors.
6. Each intensity sample is expressed as a linear combination b_g of eigenvectors, where b_g = P_g^T(g − \bar{g}) represents the time sequence intensity parameters.
7. The shape coefficient vectors b_s and the gray-level intensity coefficient vectors b_g are concatenated in the following manner:
\[
b = \begin{pmatrix} W b_s \\ b_g \end{pmatrix}
  = \begin{pmatrix} W P_s^T (x_s - \bar{x}_s) \\ P_g^T (g - \bar{g}) \end{pmatrix},
\qquad (1)
\]
where weighting matrix W is a diagonal matrix relating the different units of shape and intensity coefficients.
8. A PCA is applied to the sample set of all b vectors, yielding the model
\[
b = Q c, \qquad (2)
\]
where Q is a matrix consisting of eigenvectors and c are the resulting appearance model coefficients.
Applying this procedure to a set of training time-sequences results in an 'average heartbeat' and its characteristic variations in shape, intensity and motion over the cardiac cycle (see Fig. 1).
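The indexing convention of step 1 and the concatenation of Eq. (1) can be illustrated with toy arrays (the coefficient vectors and W below are illustrative, not fitted model quantities):

```python
import numpy as np

Nc, Nphases = 32, 16                  # contour points per frame, frames
frames = np.arange(Nphases * Nc, dtype=float).reshape(Nphases, Nc)
xs = frames.ravel()                   # concatenated 2D+time shape sample

# Point i of frame j (1-based, as in the text) sits at (j-1)*Nc + i;
# with 0-based Python arrays this becomes (j-1)*Nc + (i-1).
i, j = 5, 3
assert xs[(j - 1) * Nc + (i - 1)] == frames[j - 1, i - 1]

# Combined parameter vector of Eq. (1): b = [W b_s; b_g], W diagonal.
b_s, b_g = np.array([1.0, 2.0]), np.array([3.0, 4.0, 5.0])
W = np.diag([0.5, 0.5])
b = np.concatenate([W @ b_s, b_g])
assert b.tolist() == [0.5, 1.0, 3.0, 4.0, 5.0]
```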
Fig. 1. Example of an AAMM: an “average heart beat” (middle row) and the first eigenvariation (top row +2 standard deviations, bottom row -2 standard deviations), as derived from 72 cardiac MR sequences.
2.2 Matching the AAMM to Image Sequences
The AAMM can be applied to segmentation of image sequences by minimizing the root-mean-square difference between the model sequence and a target image sequence by deforming the Appearance-Motion model along the characteristic eigenvariations (see [5] for a detailed description of the 2D AAM matching procedure). The AAMM matching procedure differs from 2D AAM matching in the sense that the error criterion and the parameter derivatives are calculated for the full time sequences, as opposed to 2D image frames. Therefore the temporal coherence in the cardiac motion is preserved during the matching, ensuring a segmentation result, which is largely consistent with the cardiac motion patterns in the training set.
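The sequence-level error criterion can be sketched as follows (a simplified stand-in for the AAM matching residual; the full procedure also deforms the model along its eigenvariations):

```python
import numpy as np

def sequence_rms(model_seq, target_seq):
    # RMS intensity difference over the *whole* 2D+time stack, not per
    # frame -- this coupling is what preserves temporal coherence.
    return float(np.sqrt(np.mean((model_seq - target_seq) ** 2)))

model = np.zeros((16, 64, 64))        # 16-phase model sequence
target = np.full((16, 64, 64), 2.0)
assert sequence_rms(model, target) == 2.0
```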
3 Case Studies
To test the AAMM in clinically realistic conditions and diverse applications, AAMMs were trained and applied to short-axis cardiac MRI and four-chamber echocardiographic image sequences.
3.1 Cardiac MRI
Cardiac MR sequences were collected from 15 normal subjects and 10 myocardial infarction patients using gradient echo and echoplanar pulse sequences. Image sequences spanned over one complete cardiac cycle. The number of phases per cardiac cycle varied from 16 to 25. Images were acquired with slice thickness of 10 mm, 256x256 matrix, FOV 400-450 mm. Three mid-ventricular slices were available for the validation studies. From each sequence, 16 phases were identified at regular time intervals over the cardiac cycle. Thus, each MR data set consisted of 16 frames at each of the 3 imaged slices, or 48 images per subject. Left ventricular endocardial (ENDO) and epicardial boundaries (EPI) were manually traced by an expert observer in all slices and all phases using dedicated cardiac MR post-processing software. The total data set consisted of 1200 image frames from 25 subjects. Validation was performed using a leave-one-subject-out approach. Therefore, 25 different models were trained on image sequences from 24 subjects using 3 × 24 = 72 image sequences per model. Each model was tested on the 3 MR sequences from the left-out subject. The initial position of the AAMM was automatically defined using a validated Hough-transform based approach [3]. To quantitatively assess the performance of the AAMM approach, the average signed and unsigned border positioning errors were calculated for the ENDO and EPI borders by measuring the distances between corresponding border points along 100 rays perpendicular to the centerline between the manual and the automatic contour. Border positioning errors are expressed in mm as mean ± standard deviation. Negative sign of the signed error value means that the automatically-determined border was inside of the observer-defined border. Four clinically important measures were calculated: ENDO area, EPI area, LV myocardial mass, and LV ejection fraction (EF). Area indices were compared in all image slices and expressed in cm2 . 
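The reported error statistics follow directly from the signed ray distances; a toy Python illustration (the numbers are made up, not the study's data, and the EF formula is the standard definition, which the text states explicitly only for area EF):

```python
import numpy as np

# Signed ray distances between automatic and manual borders; sign
# convention from the text: negative = automatic border inside manual.
signed_d = np.array([-0.5, 0.25, 0.5, -0.25])
mean_signed = signed_d.mean()             # bias of the detected border
mean_unsigned = np.abs(signed_d).mean()   # absolute positioning error

def ejection_fraction(ed, es):
    # EF = (ED - ES) / ED, for volumes (LV EF) or areas (area EF).
    return (ed - es) / ed

assert mean_signed == 0.0 and mean_unsigned == 0.375
assert abs(ejection_fraction(120.0, 48.0) - 0.6) < 1e-12
```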
LV mass (grams) and LV EF (%) were determined from the three adjacent slices segmented by the AAMM approach.
Results
In 23 out of 25 tested subjects, computer-detected borders agreed closely with observer-identified borders (example in Fig. 2). In two, highly pathological cases of post-infarct LV dilation, the automated detection failed. These cases were excluded from further quantitative analyses. Mean signed endo- and epicardial border positioning errors were 0.12 ± 0.91 mm and 0.46 ± 0.97 mm, respectively, showing minimal border detection bias. The mean unsigned positioning errors were 0.63 ± 0.65 mm and 0.77 ± 0.74 mm, respectively, showing small absolute differences from the independent standard. Fig. 3 shows a good correlation of the manually-identified and AAMM-determined ENDO- and EPI areas. Mean signed and unsigned ED LV mass errors were −0.5 ± 4.5 g and 3.6 ± 2.6 g. Mean signed and unsigned EF errors were small: −1.2 ± 8.2 % and 6.8 ± 4.5 %, respectively.
Boudewijn P.F. Lelieveldt et al.
Fig. 2. Example of fully automatically detected ENDO and EPI contours (bottom row) compared to manual contours (top row) in a 16-phase MR time sequence. Phases 1, 5, 9, and 13 are shown; only subimages are displayed.
3.2
Echocardiography
Echocardiographic 4-chamber sequences were acquired from 129 unselected patients. Images were digitized at 768 × 576 pixels with different calibration factors (0.28 to 0.47 mm/pixel). Intensity distributions were normalized non-linearly to deal with ultrasound-specific intensity properties. All single-beat sequences were phase-normalized to 16 frames. An independent expert manually outlined the ENDO contours in all frames of all image sequences. In total, 2064 ultrasound frames were available with an accompanying independent standard. The data set was split randomly into a training set of 65 patients and a test set of 64 patients. The AAMM was applied to segmentation of the test set. All models were initialized to the same fixed initial position, which was calculated from the average sample pose and scale in the training set. Four quantitative indices were calculated to compare the automatically detected contours with the observer-identified independent standard. Unsigned ENDO border positioning errors were defined as the unsigned distances between corresponding contour points. ENDO percent area errors were determined separately for each phase of the cardiac cycle, where the ENDO area was defined as the area enclosed by the ENDO border. Area EF was determined as the difference between the ED and ES areas divided by the ED area.
Results. An example of the matching result is given in Fig. 4. In 62 of the 64 tested patients, the AAMM-defined borders agreed well (average unsigned distance < 8 mm) with the independent standard, with mean unsigned border positioning errors of 3.42 ± 1.33 mm. In two cases the matching failed.
Time-Continuous Segmentation of Cardiac Image Sequences
[Fig. 3 scatter plots: computer-determined vs. manually determined areas (cm²), with regression fits (a) y = 1.00x + 0.04, r = 0.93; (b) y = 0.92x + 3.21, r = 0.91; (c) y = 0.90x + 1.96, r = 0.87.]
Fig. 3. Comparison of the manually and computer-determined endo- (a) and epicardial areas (b) in the 1104 MRI validation slices. Figure (c) compares the echocardiographic observer-defined and computer-determined LV ENDO areas in the 992 test images from 62 out of 64 patients. All regression analyses compare areas in 16 cardiac phases.
Fig. 4. Example result of fully automated AAMM segmentation of an echocardiographic image sequence from the test set, spanning one heart beat. Segmentation was performed simultaneously in all 16 image phases using a single motion model.
These cases were excluded from further analysis. Fig. 3 demonstrates a good correlation between the observer-identified and AAMM-determined LV ENDO areas. The endocardial percent area error averaged over all phases was −3.1 ± 10.3 %, showing a slight negative bias of the AAMM areas. Mean signed and unsigned area ejection fraction errors were small: 0.6 ± 5.5 % and 4.6 ± 3.0 %, respectively.
4
Discussion
The results of the presented cardiac MRI case studies showed the high robustness of our fully automated AAMM approach. In all 15 normal subjects and in 8 out of 10 patient cases, the automatically detected contours demonstrated clinically acceptable accuracy, both in border positioning errors and in EF, LV mass, and slice-based ENDO and EPI area measures. The detected contours were highly similar to the
manually defined contours, in the sense that papillary muscles and epicardial fat were successfully excluded from the contours. In two patient cases in both the MRI and the echocardiographic study, the matching failed. In these cases, the shape and motion of the LV differed strongly from those observed in the training set; consequently, the AAMM method was biased towards a 'too normal' contraction pattern. By better balancing the ratio between patients and normal subjects, and by including more patient hearts with large motion abnormalities in the training set, we expect to improve the model's generalization to patient cases. Moreover, we expect improvement from an extension of the AAMM to 3D+time, which is a topic of current research. The AAMM matching performed slightly more accurately for MRI than for echocardiograms. This may be due to differences in the measurement method: the contour distances for MRI were measured using a centerline approach, while those for ultrasound were Euclidean distances between corresponding contour points. In the case of small rotations or displacements, the latter measure yields much larger distances than a centerline approach. However, the errors reported for ultrasound compare reasonably well to the inter- and intraobserver variability commonly associated with manual tracing in ultrasound. The AAMM presented in this paper demonstrated a number of key points, which can be summarized as follows:
– the AAMM generates time-continuous segmentation results that are consistent with cardiac dynamics,
– the AAMM can be applied in a fully automated manner,
– the AAMM demonstrated robustness in two comprehensive clinical studies on substantially different cardiac imaging modalities.
Segmentation of a 16-phase image sequence is fast, with processing times under 5 s on a 1 GHz Windows machine. Accuracy is comparable to manual tracing and therefore clinically acceptable.
Additional development and an extensive clinical validation are needed to determine routine clinical performance.
References
1. J. Montagnat and H. Delingette, "Space and time constrained deformable surfaces for 4D medical image segmentation," LNCS 1935, pp. 196–205, 2000.
2. M. Ramze Rezaee, P. M. J. van der Zwet, B. P. F. Lelieveldt, R. J. van der Geest, and J. H. C. Reiber, "A multi-resolution segmentation technique based on pyramidal segmentation and fuzzy clustering," IEEE TIP 9, pp. 1238–1248, 2000.
3. R. J. van der Geest, V. G. M. Buller, E. Jansen, H. J. Lamb, L. H. B. Baur, E. E. van der Wall, A. de Roos, and J. H. C. Reiber, "Comparison between manual and semiautomated analysis of left ventricular volume parameters from short-axis MR images," JCAT 21, pp. 756–765, 1997.
4. S. C. Mitchell, B. P. F. Lelieveldt, R. J. van der Geest, H. G. Bosch, J. H. C. Reiber, and M. Sonka, "Multistage hybrid active appearance model matching: segmentation of left and right ventricles in cardiac MR images," IEEE TMI (in press), 2001.
5. T. F. Cootes, C. Beeston, G. J. Edwards, and C. J. Taylor, "A unified framework for atlas matching using active appearance models," LNCS 1630, pp. 322–333, 1999.
Feature Enhancement in Low Quality Images with Application to Echocardiography
Djamal Boukerroui, J. Alison Noble, and Michael Brady
Medical Vision Laboratory, Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK. {djamal,noble,jmb}@robots.ox.ac.uk
Abstract. We propose a novel feature enhancement approach to enhance the quality of noisy images. It is based on a phase-based feature detection algorithm, followed by sparse surface interpolation and subsequent nonlinear post-processing. We first exploit the intensity-invariant property of phase-based acoustic feature detection to select a set of relevant image features in the data. Then, an approximation to the low frequency components of the sparse set of selected features is obtained using a fast surface interpolation algorithm. Finally, a non-linear postprocessing step is applied. Results of applying the method to echocardiographic sequences (2D+T) are presented. We show that the correction is consistent over time and does not introduce any artefacts. An evaluation protocol is proposed in the case of echocardiographic data and quantitative results are presented.
1
Introduction
Intensity inhomogeneity correction for ultrasound images has received little attention. To our knowledge, the first attempt to adapt bias field correction to B-scan ultrasound data was proposed in [1]. The approach is promising; however, it still requires user interaction to set the image model parameters. Some recent intensity-based adaptive segmentation approaches, which intrinsically take into account the non-uniformity of the tissue classes, have yielded promising results [2,3,4]. More recently, a novel technique for finding acoustic boundaries in echocardiographic sequences was proposed [5]. The most important advantage of this technique is its intensity independence. However, since the noise rejection in this method involves an intensity-based noise threshold, the method is not truly intensity invariant and is highly susceptible to noise. This suggested the need to develop a feature enhancement approach to correct the image. This paper proposes such an approach (see Fig. 1). First, image features are detected using the Feature Asymmetry (FA) measure [5] (reviewed in Section 3). This provides a normalised likelihood image in which the intensity value at any location is proportional to the significance of the detected features. The sparse data at the feature locations are then interpolated by a Fast sparse Surface Interpolation (FSI) technique, using the likelihoods to estimate the degradation field [6] (Section 2).
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 453–460, 2001. © Springer-Verlag Berlin Heidelberg 2001
Finally, a novel non-linear processing method
using the degradation field is applied to the original data to enhance or de-emphasise feature values (Section 4).
2
2D Sparse Surface Interpolation
Surface interpolation from a sparse set of noisy measured points is an ill-posed problem, since an infinite set of surfaces can satisfy any given set of constraints. Hence, a regularisation procedure, taking into account visual relevance and computational efficiency, is usually applied, so that the interpolation problem becomes the minimisation of an energy functional of the form:

U(f) = U_d(f, d) + λ U_r(f),   λ ≥ 0.   (1)

The first term (the cost functional) is a measure of faithfulness to the measured data. The second is the regularisation functional; λ is a parameter (Lagrange multiplier) controlling the extent to which the data are considered (piecewise) smooth. A commonly used cost functional is the weighted sum of squares

U_d(f, d) = Σ_i w_i (f(x_i, y_i) − d_i)²,   (2)

which measures the difference between the measured field d = {(x_i, y_i, d_i)} and the approximating surface f(x_i, y_i); the weights w_i ∈ [0, 1] represent the uncertainty associated with the data. The regularisation term is often expressed as a thin-plate energy:

U_r(f) = ∬ [ (∂²f/∂x²)² + 2 (∂²f/∂x∂y)² + (∂²f/∂y²)² ] dx dy.   (3)

In general, obtaining an analytic solution to the above optimisation problem is difficult. Therefore, an approximation to the continuous problem using discrete operators is used. Suppose that the data d are defined on a regular lattice G = {(x_i, y_j), 1 ≤ i, j ≤ N}, and that a discrete representation of the surface is defined using a set of nodal variables v = {v_{i,j} = f(x_i, y_j)}. Regarding the regularisation term, the finite element method is a good means of converting the continuous expression for the energy into a tractable discrete problem. By concatenating the nodal variables v_{i,j} and the data d_{i,j} into column vectors v and d respectively, the resulting discrete form of (1) is quadratic in v and is given by U(v) = vᵀAv − 2vᵀb + c, where A is a large, sparse N² × N² matrix and c is a constant. The minimum v* of this energy function is found by solving the large sparse linear system Av = b. However, this system is nearly singular and
Original Data → Feature Detection → Fast Surface Interpolation → Non-linear Post-Processing → Corrected Data
Fig.1. Block diagram of the proposed feature enhancement method.
results in poor convergence when simple iterative methods are used. To obtain fast surface interpolation, a scheme is needed which can improve the numerical conditioning. Recently, a tractable approach in terms of simplicity and efficiency has been proposed [6]. It utilises the concept of preconditioning in a wavelet transform space: the minimisation is carried out in a wavelet space using an asynchronous iterative computation, with a biorthogonal spline wavelet basis for the preconditioning step [6]. The Discrete Wavelet Transform (DWT) preconditioning transfers the linear system to an equivalent one with new nodal variables ṽ and a new system matrix Ã, which is much denser than the original A. This implies that a more global connection between the interpolation nodes can be made, which considerably improves the convergence rate.
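For illustration, the discrete energy (1)-(3) can be minimised directly with a generic sparse solver. This sketch omits the wavelet preconditioning and finite-element details of [6]; the grid size, weights, and test surface are made up, and a simple Neumann-type Laplacian stencil stands in for the thin-plate discretisation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Minimise sum_i w_i (v_i - d_i)^2 + lam * ||L v||^2 on an N x N grid, where
# L is a discrete Laplacian, so L^T L discretises the thin-plate term.
N, lam = 32, 0.01
main = np.r_[-1.0, -2.0 * np.ones(N - 2), -1.0]       # Neumann-type boundary
L1 = sp.diags([np.ones(N - 1), main, np.ones(N - 1)], [-1, 0, 1])
L = sp.kron(sp.eye(N), L1) + sp.kron(L1, sp.eye(N))   # 2D Laplacian

rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N))
truth = (x + y).ravel()                        # smooth surface to recover
w = (rng.random(N * N) < 0.1).astype(float)    # ~10% of nodes observed
d = truth + 0.01 * rng.standard_normal(N * N)  # noisy sparse measurements

W = sp.diags(w)
A = (W + lam * (L.T @ L)).tocsc()   # normal equations of the quadratic energy
v = spsolve(A, w * d)               # minimiser solves A v = W d
print(np.abs(v - truth).mean())     # small reconstruction error
```

With only a plain iterative solver this system converges slowly, which is exactly the conditioning problem the wavelet preconditioning of [6] addresses; the direct solve above sidesteps it only because the example grid is tiny.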
3
Phase-Based Feature Detection
The feature detector that we use is based on phase congruency (PC) [7], since it provides a single unified theory capable of detecting a wide range of features, rather than being specialised for a single feature type such as intensity steps. Further, PC is theoretically invariant to brightness and contrast; hence it is, in principle, robust against typical variations in image formation. Strictly speaking, the concept of PC is only defined in one dimension, as its definition involves the Hilbert transform. Typically, the computation of PC uses a pair of quadrature filters, normally log-Gabor filters. A series of orientable 2D filters can be constructed by 'spreading' a log-Gabor function into 2D; in this way, an extension of the 1D phase measure to two dimensions is obtained [7]. In our work, we have used the 2D Feature Asymmetry (FA) measure of [5] for feature detection. This measure provides good detection of asymmetric image features such as step edges and has the advantage of being intensity invariant. The 2D FA measure is defined by:

FA_2D(x, y) = Σ_m ⌊|o_m(x, y)| − |e_m(x, y)| − T_m⌋ / (√(o_m(x, y)² + e_m(x, y)²) + ε),   (4)

which is a sum over m orientations of a normalised measure of the difference between the odd o_m(x, y) and the even e_m(x, y) filter responses. Here, ⌊·⌋ denotes zeroing of negative values, ε is a small positive number to avoid division by zero, and T_m is an orientation-dependent noise threshold, defined by:

T_m = k · std{|o_m(x, y)| − |e_m(x, y)|},   (5)
where k is a positive factor controlling the noise threshold.
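A one-dimensional sketch of the FA computation, using a single log-Gabor quadrature pair built in the Fourier domain; the filter parameters are illustrative, and the paper's 2D oriented filters are reduced to one orientation.

```python
import numpy as np

def log_gabor_fa(signal, w0=0.1, sigma_ratio=0.55, k=1.0, eps=1e-4):
    """1D feature asymmetry via a log-Gabor quadrature pair (sketch).

    Mirrors the form of Eqs. (4)-(5): FA = max(|odd| - |even| - T, 0) /
    (sqrt(even^2 + odd^2) + eps), with T = k * std(|odd| - |even|).
    """
    n = len(signal)
    f = np.fft.fftfreq(n)
    G = np.zeros(n)
    nz = f > 0
    G[nz] = np.exp(-np.log(f[nz] / w0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    # One-sided filter -> analytic signal: real part even, imaginary part odd
    resp = np.fft.ifft(np.fft.fft(signal) * 2 * G)
    even, odd = resp.real, resp.imag
    num = np.abs(odd) - np.abs(even)
    T = k * num.std()
    return np.maximum(num - T, 0) / (np.sqrt(even**2 + odd**2) + eps)

# FA peaks at the asymmetric features: the step at index 128 and (because the
# FFT is circular) the wrap-around edge at the array ends.
step = np.r_[np.zeros(128), np.ones(128)]
fa = log_gabor_fa(step)
```

Note that FA responds to the step regardless of its absolute intensity level, which is the intensity-invariance property exploited in the text.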
4
The New Feature Enhancement Algorithm
Briefly, our method involves reconstructing an approximation to the intensity inhomogeneities which can be subtracted from the original corrupted region. A
mathematical model for the intensity inhomogeneity in ultrasound images was developed in [8]. The authors used a multiplicative degradation model. Motivated by this, we define a correction equation as:

I_c(x, y) = [I(x, y) / max(I(x, y))] / [v*(x, y) / max(v*(x, y)) + γ].   (6)

Here, v* is an estimate of the degradation field and γ is a positive control parameter: I_c ∝ I for γ ≫ 1, and the maximum correction is obtained when γ ≪ 1. Assuming that the image intensities of occurrences of a single tissue type should be equal, an estimate of the low-frequency components of the intensity field can be made by taking the image intensity values only at the locations of the relevant features. An estimate of the base frequency of this degradation can then be found using the FSI algorithm as follows. We define the set of nodal variables v and the corresponding weighting field w by:

v = { v_{i,j} = max_{B_{i,j}} I(x, y)  if FA_2D(x_i, y_j) > 0;  1 ≤ i, j ≤ N },
w = { w_{i,j} = FA_2D(x_i, y_j);  1 ≤ i, j ≤ N },   (7)

where B_{i,j} is a small window centred at pixel position (x_i, y_j). Taking the maximum intensity value in a window centred on the feature position guarantees that we always take the highest value of the step edge.
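The correction step of Eq. (6) can be sketched on a synthetic 1D profile. Here the FSI output v* is simply taken to be the true smooth degradation field, standing in for the surface that would be interpolated from the feature data of Eq. (7).

```python
import numpy as np

# A flat tissue signal multiplied by a slowly varying degradation field
x = np.linspace(0, 1, 256)
degradation = 0.4 + 0.6 * x      # multiplicative inhomogeneity
image = 100.0 * degradation      # observed intensities of one tissue class

# Stand-in for the FSI estimate of the degradation field
v_star = degradation

gamma = 0.2
corrected = (image / image.max()) / (v_star / v_star.max() + gamma)
# After correction, the single-tissue signal is far more homogeneous:
print(image.std() / image.mean(), corrected.std() / corrected.mean())
```

Increasing gamma towards values much greater than 1 makes the denominator nearly constant, so the corrected image becomes proportional to the original, as stated in the text.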
5
Results and Quantitative Evaluation
To show that the proposed approach is capable of removing (or at least reducing) the bias field without introducing artefacts, Figure 2 shows two images (an ideal one (a) and a corrupted one (c)) and their corrections ((b) and (d) respectively). A significant contrast enhancement is obtained in both cases, and the corrected images are similar. Figure 3 shows the original data and the results at intermediate stages of processing for an echocardiographic image; the image used in this experiment is shown in Fig. 4(a). Notice the correlation between the likelihood image (a) and the intensity image of the detected features (b). If the SNR is low, the 2D FA measure does not yield a clean feature detection image: either the noise threshold has to be set to a high value, which increases the false negative detection rate, or it has to be set to a low value, in which case the false positive rate will increase. Comparison of images (b) and (d) provides some (qualitative) insight into how much the features have been enhanced. To illustrate the influence of the control parameter γ (Eq. 6), Figure 4 shows an example of the enhancement of an echocardiographic sequence (data set 1) for γ = 0.2. Figure 4(b) shows the results of the 2D FA boundary detection on the original image and on the enhanced one. A significant improvement is observed on the enhanced image, particularly in the apex region, where the intensity values of the original image are very low. The line profiles shown in Fig. 4(c) clearly demonstrate the influence of γ. Notice that the three results are in good
Feature Enhancement in Low Quality Images
457
agreement where the signal is high, and that low signal values are enhanced more for γ = 0.05 than for 0.2 or 0.4. However, this observation does not mean that the enhancement result for γ = 0.05 (or γ < 0.05) is better than the other two: if γ = 0.05 enhances the low signal values better than γ = 0.2 (0.4), it does the opposite for high signal values. An objective evaluation and quantitative results of the enhancement are necessary to answer the question of which value of γ gives the best enhancement.
Fig.2. Ideal image (a), corrupted image (c), and their corresponding enhanced images (b) and (d).
Fig.3. (a) Likelihood image representing the weighting field w; (b) the original data at the locations of the detected boundaries, representing the data field v; (c) the normalised interpolated surface with the additional shift γ = 0.2, clearly showing the region where the intensity will be lowered and the region where it will be enhanced; (d) corresponds to (b) after correction.
Fig.4. Frame 13 of data set 1. (a) Comparison of the original image and the enhanced image for γ = 0.2. (b) The corresponding FA results. (c) Vertical line profiles (line 355). Observe the enhancement of the peak corresponding to the endocardium border at the apex.
458
Djamal Boukerroui, J. Alison Noble, and Michael Brady
Quantitative evaluation of computer vision algorithms is an important task, particularly in medical imaging, and the availability of ground truth makes this task easier. Unfortunately, there is no ground truth for the data available for the current study. In the case of echocardiographic images, and for the purpose of this paper, we are interested in the detection of the endocardial boundary. Since such features are often modelled as step edges, a measure of the height of the step is a good evaluation parameter. For each image we define 3 regions next to the features of interest: RC is located in the cardiac cavity near the endocardial wall; RM is the myocardium; and RE extends outwards from the epicardial border. For each image we computed 10 measures: the mean and standard deviation of the cavity signal (RC), of the myocardium signal (RM), of the signal beyond the epicardial border (RE), and of the differences (RE − RM) and (RM − RC). We then computed the mean and standard deviation of each of these measures over time. Table 1 presents an example of the evaluation measures (data set 2). Note the small values of the standard deviation over time for all the computed measures, both for the original and the enhanced images. We observe that the signal enhancement increases with γ⁻¹, and that the errors for RC and RM increase slightly while the RE error decreases as γ decreases. This is because, as noted before, the highest grey level values are reduced while the lower grey level intensities are increased (see Fig. 3(c)). When the enhancement is high, a "saturation phenomenon" appears at the highest intensity values. As the RE region corresponds to the highest grey values in the images, the spatial standard deviation of this region decreases with enhancement. Analysis of the signal differences reveals that both the signal and the error of the difference (RM − RC) increase as γ decreases.
This is not the case for the difference (RE − RM), which is a consequence of the "saturation phenomenon", as the step edge (RE − RM) lies at high intensities. These observations enable us to understand more fully the behaviour of the enhancement as a function of the parameter γ. However, notice how close the quantitative measures are for the different values of the parameter. In our experiments, we found that a value between 0.1 and 0.2 gives good enhancement results.

Table 1. Evaluation results for different values of γ. Here, the Signal is the spatial mean of the signal and the Error is its spatial standard deviation; the table shows the means and standard deviations (µ, σ) of each measure over the frames of data set 2.

Measure         | Original     | γ = 0.4      | γ = 0.2      | γ = 0.1      | γ = 0.05
RC     Signal   | 6.45, 0.64   | 13.88, 1.12  | 15.25, 1.20  | 16.39, 1.27  | 17.11, 1.33
       Error    | 6.63, 0.67   | 11.20, 0.97  | 11.89, 1.05  | 12.26, 1.12  | 12.46, 1.19
RM     Signal   | 30.57, 3.54  | 57.52, 5.64  | 63.17, 5.92  | 67.24, 6.05  | 69.81, 6.11
       Error    | 19.16, 2.18  | 29.36, 2.87  | 30.39, 2.80  | 30.84, 2.68  | 31.09, 2.59
RE     Signal   | 61.87, 3.78  | 102.56, 4.89 | 108.39, 4.91 | 111.52, 4.99 | 112.90, 5.10
       Error    | 35.67, 2.49  | 46.18, 3.11  | 45.03, 3.23  | 43.35, 3.32  | 42.01, 3.40
RE−RM  Signal   | 31.29, 4.63  | 45.04, 6.50  | 45.22, 6.75  | 44.27, 6.97  | 43.09, 7.14
       Error    | 28.19, 2.35  | 42.03, 2.64  | 44.28, 2.70  | 45.86, 2.82  | 46.99, 3.01
RM−RC  Signal   | 24.12, 3.49  | 43.63, 5.63  | 47.92, 5.98  | 50.86, 6.18  | 52.69, 6.29
       Error    | 16.42, 2.03  | 26.48, 2.84  | 28.29, 2.85  | 29.66, 2.84  | 30.64, 2.86

As two key parts of our feature enhancement algorithm do not take temporal information into account, the consistency of the enhancement over time should be studied to ensure that temporal artefacts are not introduced. Figure 5 shows the correlation curves for the original sequences and their corresponding enhanced sequences. The interesting aspect of these curves is not the absolute values of the correlation but their evolution over the frames. These curves show that the temporal correlation of the original data is well preserved in the corrected sequences (see the correlation coefficients in the figure legends).

[Fig. 5 plots: three panels of correlation coefficient vs. frame number, each with legend entries Original, Corrected 0.4, Corrected 0.2, Corrected 0.05; curve-to-original correlation coefficients of 0.991/0.984/0.973, 0.985/0.975/0.963, and 0.994/0.990/0.983 appear across the three panels, with 'End Diastolic' and 'End Systolic' annotations marking cardiac phases.]
Fig.5. Correlation coefficients of simultaneous frames over time for the original and the enhanced sequences: (left) data set 1; (middle) data set 2; (right) data set 3. "Corrected 0.2; 0.990" denotes the result obtained for γ = 0.2, where 0.990 is the correlation coefficient of that curve with the original one.
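The frame-to-frame correlation check of Fig. 5 can be sketched as follows; the sequence and the monotone "enhancement" mapping are synthetic stand-ins, not the paper's data or algorithm.

```python
import numpy as np

def temporal_correlations(seq):
    """Correlation coefficient between consecutive frames of a (T, H, W) array."""
    return np.array([np.corrcoef(seq[t].ravel(), seq[t + 1].ravel())[0, 1]
                     for t in range(len(seq) - 1)])

# Synthetic slowly varying sequence; a monotone intensity mapping stands in
# for the enhancement. A temporally consistent correction should leave the
# shape of the correlation curve essentially unchanged.
rng = np.random.default_rng(2)
seq = np.cumsum(rng.standard_normal((16, 32, 32)), axis=0)
enhanced = np.tanh(seq / 2.0)      # hypothetical contrast mapping

c_orig = temporal_correlations(seq)
c_enh = temporal_correlations(enhanced)
print(c_orig.round(2))
print(c_enh.round(2))
```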
6
Conclusion
The performance of the proposed feature enhancement method has been illustrated on 2 test images and 3 echocardiographic sequences. An evaluation protocol has been proposed for echocardiographic data and quantitative results have been presented.¹ The consistency over time of the enhancement has been demonstrated, ensuring that no artefacts are introduced. This is an important point, both for manual processing and analysis by a clinician and for computer analysis of the sequence. The corrected images facilitate visual diagnosis by a clinician, as the contrast between the heart wall and the cavity is enhanced, and significant improvement in the results of the 2D FA detection algorithm has been observed in comparison with its application to the non-enhanced data.
Acknowledgements: We are grateful to Dr. M. M. Parada and Dr. J. Declerck, from OMIA Ltd, and Dr. M. Robini, from CREATIS, for providing software used in part of this work. This work was supported by the EC-funded ADEQUATE project.
¹ More results and a detailed version of the paper are available at: www.robots.ox.ac.uk/∼djamal/
References
1. Xiao, G., Brady, M., Noble, J. A., Zhang, Y.: Contrast enhancement and segmentation of ultrasound images – a statistical method. SPIE Med. Imag.: Image Processing (2000) 1116–1125
2. Ashton, E. A., Parker, K. J.: Multiple resolution Bayesian segmentation of ultrasound images. Ultrasonic Imaging 17 (1995) 291–304
3. Boukerroui, D., et al.: Segmentation of echocardiographic data. Multiresolution 2D and 3D algorithm based on grey level statistics. MICCAI'99 (1999) 516–524
4. Boukerroui, D.: Segmentation bayesienne d'images par une approche markovienne multiresolution. PhD thesis, CREATIS, INSA de Lyon, France (2000)
5. Mulet-Parada, M., Noble, J. A.: 2D+T acoustic boundary detection in echocardiography. Medical Image Analysis 4 (2000) 21–30
6. Yaou, M.-H., Chang, W.-T.: Fast surface interpolation using multiresolution wavelet transform. IEEE Trans. Pattern Anal. Machine Intell. 16(7) (1994) 673–688
7. Kovesi, P.: Image features from phase congruency. Videre: Journal of Computer Vision Research 1(3) (1999) 1–26
8. Hughes, D. I., Duck, F. A.: Automatic attenuation compensation for ultrasonic imaging. Ultrasound in Medicine & Biology 23 (1997) 651–664
3D Vascular Segmentation Using MRA Statistics and Velocity Field Information in PC-MRA
Albert C.S. Chung¹, J. Alison Noble¹, Paul Summers², and Michael Brady¹
¹ Department of Engineering Science, Oxford University, Oxford, United Kingdom. {albert,noble,jmb}@robots.ox.ac.uk
² Department of Clinical Neuroscience, King's College, London, United Kingdom.
[email protected]
Abstract. This paper presents a new and integrated approach to automatic 3D brain vessel segmentation using physics-based statistical models of background and vascular signals, and velocity (flow) field information in phase contrast magnetic resonance angiograms (PC-MRA). The proposed new approach makes use of realistic statistical models to detect vessels more accurately than conventional intensity gradient-based approaches. In this paper, rather than using MRA speed images alone, as in prior work [7,8,10], we define a 3D local phase coherence (LPC) measure to incorporate velocity field information. The proposed new approach is an extension of our previous work in 2D vascular segmentation [5,6], and is formulated in a variational framework, which is implemented using the recently proposed modified level set method [1]. Experiments on flow phantoms, as well as on clinical data sets, show that our approach can segment normal vasculature as well as low flow (low SNR) or complex flow regions, especially in an aneurysm.
1
Introduction
Intracranial aneurysms are increasingly treated using an endovascular technique known as the Guglielmi detachable coil (GDC) method, in which platinum coils are guided through the blood vessels for placement in an aneurysm to induce thrombosis. To increase the success rate and procedural safety of the treatment, radiologists need a comprehensive and patient-specific understanding of the 3D shape, size, and position of each aneurysm, as well as of the vasculature in its vicinity. This has created the need to develop 3D vascular reconstruction and analysis methods for Magnetic Resonance Angiograms (MRA).
M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 461–467, 2001. © Springer-Verlag Berlin Heidelberg 2001
Aneurysm segmentation is a more complicated problem than vascular segmentation. In particular, regions inside an aneurysm can exhibit complex flow patterns and low flow rates. These phenomena, which induce significant signal loss and a heterogeneous signal level within the aneurysm, lower the visibility of the aneurysm and make segmentation difficult. Most prior vascular segmentation techniques [7,8,10], which use TOF-MRA or speed images from PC-MRA, are not sufficient to recover the complete shape of the aneurysm, because the aneurysm region does not always form a piecewise homogeneous intensity partition with
sharp (intensity) boundaries. Equally, conventional edge-based methods often do not work well, because the true vessel boundaries may not have a high signal-to-noise ratio (SNR) or intensity gradient. To overcome these problems, we propose an original approach to segmenting aneurysms, as well as normal vasculature, on the basis of velocity field information (measured by local phase coherence, LPC) and a tailored statistical description of PC-MRA speed images. In this paper, we build on our previous work [5,6] to pose 3D vascular segmentation as a variational problem. The implementation is realised using the modified level set method [1]. The new approach does not require intensity gradient information. Experiments on flow phantoms and on clinical data sets show that the new approach achieves better-quality segmentation of PC-MRA images than either the conventional intensity gradient-based approach or an approach that uses PC-MRA speed images alone.
2
Segmentation Using MRA Statistics of the Speed Images
This section begins by discussing a potential problem with using intensity gradient-based techniques in MRA segmentation, and then presents a new segmentation method using MRA statistics of the speed images. Figure 1a shows a typical vessel cross-section and illustrates an example of segmentation using an intensity gradient-based approach in MRA speed images. Within a slice, the optimal contour is defined as min_C ∮_C g ds [8,9], where the intensity gradient function g is defined as 1/(1 + |∇(G ⊗ I)|²); the Gaussian variance was set to 0.5 in this implementation. The intensity gradient function tends towards zero in regions of high intensity gradient. It should be noted that the optimal contour lies inside the vessel rather than on the vessel boundary, because the low SNR regions (near the boundary) cannot provide a sufficiently high intensity gradient (Figure 1a).
Fig. 1. Cross-sections of a vessel and the contours found by (a) the intensity gradient-based approach and (b) the method using MRA statistics of the speed images
To counter this, we employ the statistical background and vascular signal models we developed in prior work [5,6] for detecting vessel boundaries. Briefly, the models are based on the physics of MRA image formation and the assumption of laminar flow. We have shown that the background and vascular signal
intensity values in speed images follow a Maxwell-Gaussian mixture distribution and uniform distribution respectively [6]. In this new method, S is defined as a family of parametric surfaces. S is defined as [0, 1] × [0, 1] × [0, ∞) → 3 and (q, t) → S(q, t), where q and t are the space and time parameters respectively. Suppose that Pv and Pb are the posterior probabilities of the vessel and background at each voxel respectively. A probabilistic energy functional is then defined as Es (t) = Inside S −Pv · dV + Outside S −Pb · dV , where dV is a volume element. Minimising the probabilistic energy Es amounts to finding an optimal surface in which the total posterior probability is maximum. Solving the EulerLagrange equation with the divergence theorem, the evolution equation of the surface S can be obtained. This is given by ∂S ˆ, = (Pv − Pb ) · N ∂t
(1)
where N̂ is the unit outward normal of the surface S and −1 ≤ Pv − Pb ≤ 1. This equation governs the motion of geodesic flow towards the minimum and has been implemented using the modified level-set method [1]. Figure 1b illustrates the result obtained using the proposed new approach. It is a significant improvement over Figure 1a, as the detected boundaries are correctly placed on the true vessel boundaries.
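The statistics-driven flow of Eq. 1 can be sketched as one explicit level-set update (a sketch only, not the authors' sub-voxel implementation with fast-marching extension velocities; function name and time step are ours):

```python
import numpy as np

def level_set_step(phi, p_vessel, p_background, dt=0.5):
    """One explicit update of d(phi)/dt + (Pv - Pb)|grad(phi)| = 0.

    phi is a signed distance function (negative inside the surface);
    where Pv > Pb the zero level set expands, where Pb > Pv it shrinks.
    """
    grads = np.gradient(phi)
    grad_mag = np.sqrt(sum(g ** 2 for g in grads))
    force = p_vessel - p_background          # bounded in [-1, 1]
    return phi - dt * force * grad_mag
```

Because the force is a difference of posterior probabilities rather than an image gradient, it stays informative in the low-SNR region near the boundary.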
3
LPC and Integration with MRA Statistics
PC-MRA generates a velocity field by measuring the three orthogonal phase shifts at each voxel. These are directly proportional to the corresponding speeds along the three directional components. By examining the velocity field, it has been observed experimentally that, within the vasculature, blood motion tends to be locally coherent [4]. In prior work we exploited this fact to propose a measure of 2D LPC as a constraint to improve the quality of vascular segmentation [6].
Fig. 2. (a) Speed and (b) LPC images
Albert C.S. Chung et al.

Specifically, 2D LPC is defined as follows: given a 3 × 3 planar mask centred on voxel c, in which each element except c contains a normalised vector indicating the flow direction in 3D, eight pairs of adjacent 3D vectors are formed. The 2D LPC at c is the sum of the dot products of the eight adjacent vector pairs. 3D LPC is then defined as follows: given three mutually orthogonal planes, three 3 × 3 planar masks are applied at c and a 2D LPC measure is obtained in each plane. The 3D LPC at c is the average of the three 2D LPC measures. Note that the higher the value, the more coherent the blood motion. Figure 2a shows an MRA speed image, in which the intensity values in the middle of the vessel are low and some voxels have intensity values almost as low as the background. However, the 3D LPC image is more homogeneous, with the inside regions exhibiting high LPC values with small variance (Figure 2b). We then combine the physics-based MRA statistics and the velocity field information (measured by LPC) in the PC-MRA data as follows. An LPC energy functional can be defined as

Elpc(t) = ∫_{inside S} ((P − µi)²/Ni) dV + ∫_{outside S} ((P − µo)²/No) dV,

where P is the 3D LPC value, Elpc is an energy term representing the total variance of the LPC values, µi and µo are the means of the LPC values, Ni and No are the numbers of voxels, and the subscripts i and o denote inside and outside the surface respectively. To integrate the MRA statistics and LPC, we define the total energy Etotal as a weighted sum of the probabilistic energy Es and the LPC energy Elpc, given by Etotal(t) = Ws · Es(t) + Wlpc · Elpc(t), where Ws and Wlpc are weights attached to the energy terms. Using the Euler-Lagrange equation with the divergence theorem, we obtain the evolution equation of the surface S, which is ∂S/∂t = (Ws · FS + Wlpc · Flpc) · N̂, where FS ≡ Pv − Pb (MRA statistics force), Flpc ≡ (P − µo)²/No − (P − µi)²/Ni (LPC force) and N̂ is the outward surface normal. To maintain similarity of the forces and the polarity of the LPC force, the LPC force is normalised so that it is dimensionless and its polarity is preserved. The normalised LPC force is given by Flpc ← sign(Flpc) · |Flpc| / |Flpc|max. The equation of motion can then be re-expressed as

∂S/∂t = (Ws · FS + Wlpc · Flpc) · N̂,
(2)
where −1 ≤ FS, Flpc ≤ 1. The weights need not sum to one and can be adjusted according to the application; both were set to one in this implementation. For this application, we used a sub-voxel level set method for accurate surface representation [1]. In addition, to avoid re-initialisation of the signed distance function, we maintained it in every update of the surface by using the Fast Marching method to build the extension forces on all non-zero level sets. The level-set version of Eq. 2 is given by ∂φ/∂t + (Ws · FS + Wlpc · Flpc) · |∇φ| = 0, where φ is the evolving level set function. We constructed the initial surface S0 near the optimal solution using global thresholding [5]. We have found that the convergence rate of the motion equation depends on the size of the aneurysm: convergence is usually reached within 30 iterations for a large aneurysm (12-25 mm diameter) and takes more than 100 iterations for a giant aneurysm (> 25 mm diameter).
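The LPC measure defined above can be sketched as follows (a sketch under our reading of the definition: the circular ordering of the eight neighbours, and hence the pairing of "adjacent" vectors, is our interpretation; function names are ours):

```python
import numpy as np

def lpc_2d(ring):
    """2D local phase coherence from the 8 neighbours of a voxel.

    `ring` is an (8, 3) array of normalised 3D flow vectors taken from a
    3x3 planar mask (centre excluded), ordered circularly around the
    centre voxel. LPC is the sum of dot products of the 8 adjacent pairs.
    """
    ring = np.asarray(ring, dtype=float)
    nxt = np.roll(ring, -1, axis=0)          # neighbour i+1, wrapping around
    return float(np.sum(ring * nxt))         # sum of the 8 pairwise dots

def lpc_3d(ring_xy, ring_xz, ring_yz):
    """3D LPC: average of the 2D LPCs over three orthogonal planes."""
    return (lpc_2d(ring_xy) + lpc_2d(ring_xz) + lpc_2d(ring_yz)) / 3.0
```

Perfectly coherent flow gives eight unit dot products, so the 2D LPC peaks at 8; incoherent (noise-dominated) flow yields dot products that tend to cancel.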
4
Results
Phantom Study (I): The segmentation approach was validated using a geometrically accurate straight tube with an 8mm diameter (SST Phantom). The tube was scanned using a PC-MRA protocol on a 1.5T GE MR scanner. The data
volume was 256 × 256 × 81 voxels with voxel dimensions of 0.625 mm × 0.625 mm × 1.3 mm. The flow rate was constant (40 cm/s). For ease of reference, we use EDGE, STAT and STAT-LPC to refer to an intensity gradient-based approach, the approach using MRA statistics on speed images alone (Ws = 1 and Wlpc = 0 in Eq. 2), and the approach using MRA statistics and LPC, respectively. All three approaches were implemented using the modified level set method and the same initial surface. The EDGE algorithm followed the method proposed by Lorigo et al. [8]. As the tube diameter was known, detection accuracy could be quantified by an area measurement error, i.e. [1 − (Area_measured / Area_true)] × 100%. The area measurement errors of EDGE, STAT and STAT-LPC are shown in Figure 3, in which smaller image slice numbers represent the inflow region of the tube. The SNR of the images decreases with increasing slice number due to progressive saturation of the fluid. It is also known that imperfections in velocity encoding due to non-linearities in the gradient systems can cause a position-dependent deviation in the velocity images [3]. These two factors may have influenced the behaviour of our segmentation method. Note that the area measurement error increases as the slice number increases, where the delineation of the true boundary is adversely affected by the partial volume artifact and low SNR. Considering all slices of the tube, the average area measurement errors of EDGE, STAT and STAT-LPC were 34.77%, 16.11% and 12.81% respectively. This demonstrates that STAT-LPC gives more accurate vessel boundaries than EDGE or STAT.
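The error metric is straightforward to compute; for context, the true cross-sectional area of the 8 mm tube, expressed in 0.625 mm × 0.625 mm pixels, is about 129 (our arithmetic, not a figure from the paper):

```python
import math

def area_error(area_measured, area_true):
    """Area measurement error in percent: [1 - (A_measured / A_true)] x 100%."""
    return (1.0 - area_measured / area_true) * 100.0

# True cross-sectional area of the 8 mm tube in 0.625 x 0.625 mm pixels:
TRUE_AREA_VOXELS = math.pi * 4.0 ** 2 / (0.625 * 0.625)
```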
Fig. 3. The area measurement errors (see text for details)

Phantom Study (II): The approach was applied to an in-vitro silicone aneurysm model (middle cerebral artery bifurcation aneurysm, MCA), as shown in Figure 4c. The model was scanned using the PC-MRA protocol as before. The data volume was 256 × 256 × 23 voxels with voxel dimensions of 0.8 mm × 0.8 mm × 1 mm. The mean flow rate was set to 300 ml/min. Figures 4a and 5a show the 3D reconstruction and a cross-section of the MCA aneurysm respectively, in which the results of segmentation using MRA statistics on speed images alone are shown. A significant segmentation improvement is achieved using the segmentation method that utilises both MRA statistics and LPC, as shown in Figures 4b and 5b.
The small circle in the middle of Figure 5b represents the singular point of the velocity field, where the flow is almost zero. It does not affect the quality of visualisation in 3D because it lies inside the aneurysmal surface, and can easily be removed. Indeed, this is a useful feature to detect because it indicates to a radiologist the position of stagnant flow inside the aneurysm.
Fig. 4a. 3D reconstructed aneurysm model using MRA statistics alone
Fig. 4b. 3D reconstructed aneurysm model using MRA statistics & LPC
Fig. 4c. Digital camera view of the aneurysm model
Fig. 5a. Model
Fig. 5b. Model
Fig. 6a. Patient 1
Fig. 6b. Patient 1
Fig. 7a. Patient 2
Fig. 7b. Patient 2
Fig. 8a. Patient 3
Fig. 8b. Patient 3
Case studies: Intracranial scans of three patients were acquired using the PC-MRA protocol as before. Each data set consists of 256 × 256 × 28 voxels of 0.8 mm × 0.8 mm × 1 mm each. We compare segmentation using MRA statistics alone and using MRA statistics and LPC on the three volumes. As shown in Figures 6a, 7a and 8a, segmentation with MRA statistics alone is good overall but fails in the middle of the aneurysms because of low blood flow, which cannot generate a sufficiently high intensity signal for vessel detection. Figures 6b, 7b and 8b show significant segmentation improvements using MRA statistics and LPC. As in the case of Figure 5b, the delineated contour in Figure 8b does not enclose the whole aneurysm. Two major causes are likely. First, the flow rate inside the aneurysm was extremely low, which led to serious corruption of the velocity field by noise. Secondly, a circular (or deformed circular) flow pattern was formed, which generated singularities in the aneurysm centre. Both affect the LPC measure. However, Figure 8b represents a large improvement over Figure 8a, and the hole in the middle does not affect the quality of visualisation.
5
Conclusions
A new and integrated approach to automatic 3D brain vessel segmentation has been presented, which combines physics-based statistical models of the background and vascular signals with velocity (flow) field information in the PC-MRA data. In this paper, rather than using the MRA speed images alone, as in prior work [7,8,10], we have defined a local phase coherence measure to incorporate the velocity field information. The proposed approach has been formulated in a variational framework implemented using the modified level set method [1]. The proposed new approach was applied to two flow phantoms (a straight tube and an aneurysm model) and three clinical data sets. Using a geometrically accurate flow phantom, it has been shown that our approach can detect vessel boundaries more accurately than either the conventional intensity gradient-based approach or an approach using MRA speed images alone. The results of experiments on an aneurysm model and clinical data sets show that our approach can segment normal vasculature as well as low or complex flow regions, especially regions near vessel boundaries and regions inside aneurysms. Future studies will compare these segmentation methods on a larger number of clinical aneurysms.

Acknowledgements: AC is funded by a postgraduate scholarship from the Croucher Foundation, Hong Kong. JMB and JAN thank EPSRC for support. The authors would like to thank Prof. J. Byrne for clinical advice related to this work; Prof. D. Rufenacht and Dr. K. Tokunaga for making the aneurysm model; and the ISMRM Flow and Motion Study Group, Stanford, CA for use of the SST phantom.
References
1. Adalsteinsson, D., Sethian, J.A.: The Fast Construction of Extension Velocities in Level Set Methods. J. Comput. Phys. 148 (1999) 2-22
2. Andersen, A.H., Kirsch, J.E.: Analysis of noise in phase contrast MR imaging. Med. Phys. 23(6) (June 1996) 857-869
3. Bernstein, M.A., Zhou, X.J., et al.: Concomitant gradient terms in phase contrast MR: analysis and correction. MRM 39(2) (Feb. 1998) 300-308
4. Burleson, A.C., et al.: Computer Modeling of Intracranial Saccular and Lateral Aneurysms for the Study of Their Hemodynamics. Neurosurgery 37(4) (1995) 774-784
5. Chung, A.C.S., Noble, J.A.: Statistical 3D vessel segmentation using a Rician distribution. MICCAI'99 (1999) 82-89 and MIUA'99 (1999) 77-80
6. Chung, A.C.S., Noble, J.A., et al.: Fusing Speed and Phase Information for Vascular Segmentation in Phase Contrast MR Angiograms. MICCAI'00 (2000) 166-175
7. Krissian, K., Malandain, G., et al.: Model Based Detection of Tubular Structures in 3D Images. INRIA Technical Report RR-3736 (1999)
8. Lorigo, L.M., Faugeras, O., et al.: Co-dimension 2 Geodesic Active Contours for MRA Segmentation. IPMI'99 (1999) 126-139
9. Malladi, R., Sethian, J.A., et al.: Shape Modelling with Front Propagation: A Level Set Approach. IEEE Trans. PAMI 17(2) (1995) 158-175
10. McInerney, T., Terzopoulos, D.: Medical Image Segmentation Using Topologically Adaptable Surfaces. CVRMed'97 (1997) 23-32
Markov Random Field Models for Segmentation of PET Images

Jun L. Chen1, Steve R. Gunn1, Mark S. Nixon1, and Roger N. Gunn2

1 Image, Speech and Intelligent System Research Group, Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
2 MRC Cyclotron Unit, Hammersmith Hospital, London W12 0NN, UK
Abstract. This paper investigates the segmentation of different regions in PET images based on the feature vector extracted from the time-activity curve of each voxel. PET image segmentation has applications in PET reference region analysis and activation studies. The segmentation algorithm presented uses a Markov random field model for the voxel class labels. By including the Markov random field model in the expectation-maximisation iteration, the algorithm can be used to simultaneously estimate parameters and segment the image. Hence, the algorithm is able to combine both feature and spatial information for the purpose of segmentation. Experimental results on synthetic and real PET data are presented to demonstrate the performance of the algorithm. The algorithms used in this paper can also be used to segment other functional images.
1
Introduction
A PET experiment yields a 4-D data set in space (3-D) and time (1-D), which quantifies the distribution of the tracer over the period of scanning (typically 1-2 hours for radioligands). The changes in tracer concentration over time, namely Time-Activity Curves (TACs), provide information on the kinetics of the tracer, from which biological parameters may be determined. PET radioligand studies may be analyzed in terms of a reference tissue compartmental model to determine binding parameters when there exists a suitable reference region devoid of receptor sites [1][2]. In these models, the reference region is used as an input function to the compartmental model, and parameter values are determined by the method of least squares fitting to the target tissue's TAC. Parametric images of these binding parameters may then be determined by applying this estimation process to each voxel time course. Here the goal is to use segmentation techniques to extract the reference tissue input function automatically from the PET data volume. Automatic PET image segmentation can be achieved by principal component analysis [3], factor analysis [4] and cluster analysis [5]. Most of these techniques focus on the temporal information in the time-activity curves and ignore any spatial correlations that could be learnt; that is, these methods rely on the statistical assumption that the time-activity curves are independent, although there is typically a high degree of correlation between voxels close to each other. Markov random field models are widely used for static image segmentation, as they provide a powerful way to incorporate spatial interactions between voxels in the segmentation process. [6] shows satisfactory segmentation results for multispectral MR images by using a Markov random field model and a hierarchical segmentation algorithm. This paper describes an integrated way to apply a Markov random field model as the labeled-image model when segmenting functional images in which each voxel has temporal information characterized by a multi-dimensional feature vector. The paper is organized as follows. The details of the Markov random field model and the method for using the model in the functional image segmentation process are described in Section 2. Section 3 gives segmentation examples for synthetic data and real PET data. Conclusions and discussion are given in Section 4. This paper considers a 2D model which can easily be extended to 3D.

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 468-474, 2001. © Springer-Verlag Berlin Heidelberg 2001
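The per-voxel least-squares fitting step mentioned above can be illustrated with a toy model (a hedged sketch: a single decaying exponential stands in for the reference tissue compartmental model of [1][2], and the function name and parameter values are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def toy_tac(t, a, k):
    """Toy kinetic model: a single decaying exponential, standing in for
    a reference tissue compartmental model."""
    return a * np.exp(-k * t)

# Noise-free synthetic TAC sampled at 18 time frames (minutes).
t = np.linspace(0.5, 60.0, 18)
true_a, true_k = 20.0, 0.05
tac = toy_tac(t, true_a, true_k)

# Least-squares fit of the model parameters to the voxel's TAC.
popt, _ = curve_fit(toy_tac, t, tac, p0=(1.0, 0.01))
```

In a real analysis this fit is repeated for every voxel time course to produce parametric images of the binding parameters.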
2
Functional Image Segmentation
In this paper, the observed image will be denoted x and the labeled image z. The element of z at spatial location i ∈ S, where S is the rectangular voxel lattice on which x and z are defined, is the random variable denoted zi. Throughout the paper, x = (x1, x2, ..., xn) and z = (z1, z2, ..., zn), where n is the total number of voxels in S. In functional image analysis, the vector xi = (xi1, xi2, ..., xim) denotes the m-dimensional feature associated with voxel i (i = 1, 2, ..., n), and zi denotes the unknown class label for vector xi. p(zi|Φ) is the probability density function (pdf) of zi, where Φ is the set of parameters that characterize the pdf. The aim is to estimate Φ using x1, x2, ..., xn only, whilst z1, z2, ..., zn are unknown.
2.1 Markov Random Field Model
The distribution of the labeled image z is modeled as a Markov random field [7][8]:

p(z) > 0,   p(zi | zj, j ≠ i) = p(zi | z_Ni),   (1)

where Ni is the neighborhood of voxel i. Hence a Gibbs distribution can be used to model z,

p(z) = D^(−1) exp( −(1/T) Σ_{c∈C} Vc(z) ),   (2)

where T is a constant analogous to temperature and D is a normalizing constant. The clique potentials Vc(z) describe the neighbor interactions.
2.2 Statistical Model for an Observed Image
The model for the observed image is

p(x | z, Φ) = Π_{i=1}^{n} p(xi | z, Φ) = Π_{i=1}^{n} p(xi | zi, Φ).   (3)
The conditional distribution is modeled using a Gaussian distribution,

p(xi | zi = k, Φ) = (2π)^(−m/2) |Σk|^(−1/2) exp( −(1/2) (xi − µk)^T Σk^(−1) (xi − µk) ),   (4)

where k = 1, 2, ..., K and K is the number of pre-defined underlying clusters. The vector µk = (µk1, µk2, ..., µkm) and the diagonal covariance Σk = diag(σ²k1, σ²k2, ..., σ²km) are the centre and variance of cluster k respectively.
2.3 Functional Image Segmentation Algorithm
After setting up the models for z and x, we need to estimate the labels z = (z1, z2, ..., zn) and the parameters Φ. A general and effective algorithm for solving this problem is the Expectation-Maximization (EM) algorithm [9]. Starting with an initial estimate Φ^(0), the algorithm iterates:

- E step: find the function Q(Φ | Φ^(t)) = E[ log p(x, z | Φ) | x, Φ^(t) ]
- M step: find Φ^(t+1) = arg max_Φ Q(Φ | Φ^(t))

For the observed image model of Equations 3 and 4, after initializing the parameters Φ^(0) and the posteriors p(zi = k | xi, Φ^(0)), the parameters are updated by (in the following, p(·) is a simplified notation for p(· | Φ)):

µk^(t+1) = [ Σ_{i=1}^{n} p^(t)(zi = k | xi) xi ] / [ Σ_{i=1}^{n} p^(t)(zi = k | xi) ],   (5)

(σkj²)^(t+1) = [ Σ_{i=1}^{n} p^(t)(zi = k | xi) (xij − µkj^(t+1))² ] / [ Σ_{i=1}^{n} p^(t)(zi = k | xi) ],   j = 1, 2, ..., m,   (6)

where

p^(t)(zi = k | xi) = p(xi | zi = k) P^(t)(zi = k) / [ Σ_{k'=1}^{K} p(xi | zi = k') P^(t)(zi = k') ].   (7)

In the case that the zi are independent, the prior can be updated by p^(t+1)(zi = k) = (1/n) Σ_{i=1}^{n} p^(t)(zi = k | xi). When the independence assumption of image voxels does not hold, the estimation of the prior model p^(t+1)(zi = k) (k = 1, 2, ..., K) is very difficult. An approximate technique is considered here, using a simple state prior model [8][10]:

p(zi = k | zl, l ∈ Ni) = exp( β δi(k) ) / Σ_{k'=1}^{K} exp( β δi(k') ),   (8)

where δi(k) is the number of neighbors of voxel i in state k and β > 0 is a parameter controlling the influence of the neighboring voxels Ni on voxel i. The neighborhood of voxel i is taken to be the 3 × 3 voxel grid.
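The approximate MRF-EM iteration of Eqs. 5-8 can be sketched as follows (NumPy; the array layout, function names, and the use of hard labels for the neighbour counts δi(k) are our own choices, not necessarily the authors' implementation):

```python
import numpy as np

def neighbour_counts(labels, K):
    """delta_i(k): number of 8-neighbours (3x3 grid minus centre) of each
    voxel that currently carry label k. Returns a (K, H, W) array."""
    H, W = labels.shape
    counts = np.zeros((K, H, W))
    padded = np.pad(labels, 1, mode="edge")
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
            for k in range(K):
                counts[k] += (shifted == k)
    return counts

def mrf_em_step(x, labels, mu, var, beta, K):
    """One approximate EM iteration combining Eqs. 5-8.

    x: (H, W, m) feature image; labels: (H, W) current hard labels;
    mu, var: (K, m) class centres and diagonal variances.
    """
    H, W, m = x.shape
    # MRF state prior, Eq. 8: p(z_i = k) proportional to exp(beta * delta_i(k)).
    prior = np.exp(beta * neighbour_counts(labels, K))
    prior /= prior.sum(axis=0, keepdims=True)
    # Diagonal-Gaussian log-likelihoods, Eq. 4 (constants cancel in Eq. 7).
    loglik = np.stack([
        -0.5 * np.sum((x - mu[k]) ** 2 / var[k] + np.log(var[k]), axis=-1)
        for k in range(K)])
    # E step, Eq. 7: posterior normalised over the K classes.
    post = prior * np.exp(loglik - loglik.max(axis=0, keepdims=True))
    post /= post.sum(axis=0, keepdims=True)
    # M step, Eqs. 5-6: responsibility-weighted means and variances.
    w = post.reshape(K, -1)                    # (K, n)
    xf = x.reshape(-1, m)                      # (n, m)
    mu_new = (w @ xf) / w.sum(axis=1, keepdims=True)
    var_new = np.stack([
        (w[k][:, None] * (xf - mu_new[k]) ** 2).sum(axis=0) / w[k].sum()
        for k in range(K)])
    return post.argmax(axis=0), mu_new, var_new, post
```

Iterating this step alternates between re-estimating the Gaussian parameters and relabelling voxels under the spatial prior, which is what lets feature and spatial information be combined.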
3
Segmentation Experimental Result
3.1
Experiment on Synthetic Data
The synthetic data were generated for a 60 × 60 voxel image where each voxel is described by an 18-dimensional vector. The image is divided into three different regions (Fig. 1(a)), with the data for each region generated from one of three 18-D Gaussian distributions with different means (Fig. 1(b)) but the same standard deviation σ1 = σ2 = σ3 = 11.
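This setup can be reproduced in outline as follows (only the image size, feature dimensionality, and σ = 11 come from the text; the region layout and the three centre curves are hypothetical, since the paper does not specify them):

```python
import numpy as np

def make_synthetic(h=60, w=60, m=18, sigma=11.0, seed=0):
    """Synthetic functional image: three regions, each voxel an m-dim
    sample from a region-specific Gaussian. Region geometry and centre
    curves are illustrative stand-ins."""
    rng = np.random.default_rng(seed)
    labels = np.zeros((h, w), dtype=int)
    labels[:, w // 3: 2 * w // 3] = 1
    labels[:, 2 * w // 3:] = 2
    t = np.linspace(0.0, 20.0, m)                       # minutes
    centres = np.stack([20.0 * np.exp(-0.1 * t),        # class 0 TAC
                        15.0 * t * np.exp(-0.2 * t),    # class 1 TAC
                        10.0 * np.ones_like(t)])        # class 2 TAC
    x = centres[labels] + rng.normal(0.0, sigma, size=(h, w, m))
    return x, labels, centres
```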
Fig. 1. Synthetic Functional Image Segmentation: (a) Labeled Image; (b) Error Bar for Three Centres; (c) Independent segmentation; (d) Error Bar for Three Centres; (e) MRF model based segmentation; (f) Error Bar for Three Centres. (The curve panels plot ECAT counts against Time (Minutes).)
Fig. 1(c),(d) show the independent voxel segmentation result, obtained using the EM algorithm run until a local minimum is reached. The number of clusters is set to three. Fig. 1(c) is the independent voxel segmentation result and Fig. 1(d) shows the error bar for each of the three clusters. The result of dependent-voxel MRF segmentation, using the MRF prior (Eq. 8) in the EM algorithm, is given in Fig. 1(e),(f), with β = 1.5. Table 1 lists the misclassification error and the error in the estimated parameters during the segmentation process. The error Eµk (k = 1, 2, 3) for each estimated cluster centre µk is calculated as its Euclidean distance from the known parameter value. The voxel misclassification error is reduced from 16.53% to 1.03% by using the MRF model for the labeled image in the segmentation process. The three estimated centre vectors extracted in the MRF model based segmentation are also closer to the true centre vectors.

Table 1. Error for the Independent and Dependent Voxel Segmentation

Method          Misclassification Error   Eµ1    Eµ2    Eµ3
Independent EM  16.53%                    1.36   2.24   1.72
MRF-EM          1.03%                     1.21   1.47   1.46
3.2
Experiment on PET Data
The algorithm was also applied to real PET data (obtained with the ligand [11C](R)-PK11195, which is a marker for activated glial cells [11]) to demonstrate its performance. The subject considered is a normal volunteer, who would therefore be expected to have a reference region represented by grey matter. The data contain 3-D 128 × 128 × 25 spatially sampled images at 18 different time instants. Here the data in plane 20 are used to illustrate the segmentation result. Each time-activity curve is characterised by its value at 18 different time instants, i.e. mapped into an 18-dimensional feature space. Before segmentation, the data from non-cerebral regions with very low measured signal are thresholded out. Using a procedure similar to that for the synthetic data, the results
Fig. 2. Independent Voxel Segmentation Result: (a) Segmented Image; (b) Three Cluster Centres (ECAT counts against Time (Minutes)).
of segmenting the PET dynamic images, with independent voxels and with spatial information considered, were generated respectively. Fig. 2 shows the independent segmentation of the voxels in the image based on the TAC associated with each voxel. The cluster number is manually chosen as three. The left figure shows the three segmented regions in the plane, with the underlying TAC for each region
shown in the right figure. Figs. 3 and 4 show the segmentation results with the MRF prior probability (Equation 8) for β = 0.5 and β = 1.5 respectively. A larger β corresponds to more neighbour influence. The choice of an appropriate β depends on prior knowledge and on the ensemble of images being considered. The segmentation algorithm obtained under the independent state assumption
Fig. 3. MRF Model Based Segmentation Result, β = 0.5: (a) Segmented Image; (b) Three Cluster Centres.
Fig. 4. MRF Model Based Segmentation Result, β = 1.5: (a) Segmented Image; (b) Three Cluster Centres.
often produces noisy segmentations. When spatial correlation exists in an image, a Markov random field model, imposing a spatial distribution on the labeled image z, can provide spatial continuity constraints. Although performance evaluation is difficult for a real data set (as no ground truth is available), the MRF model based segmentation result looks better on visual inspection.
4
Discussion
This paper has extended Markov random field model based image segmentation to functional images by using a vector-based representation of the observable features. An approximate EM algorithm, with the hidden class information modeled as a Markov random field, is given to provide an integrated way to solve the functional image segmentation problem.
The experimental results demonstrate the performance of the algorithm for functional image segmentation. The result for the synthetic image shows that the MRF model based segmentation outperforms the independent-voxel segmentation in terms of both segmentation accuracy and parameter estimation. The performance of the MRF model based segmentation on real PET data also looks promising, and the method is applicable to other dynamic imaging modalities. The parameter β in the MRF model controls the degree of correlation between voxels, and the cluster number K controls the segmentation algorithm's complexity. Owing to the unsupervised nature of the method, β and K can either be chosen by experimentation, as in this paper, or by computationally expensive methods such as Markov chain Monte Carlo.
Acknowledgements The authors wish to thank Richard Banati and Ralph Myers at the Medical Research Council Cyclotron Unit for discussions and the provision of data.
References
1. Lammertsma, A.A., Hume, S.P.: Simplified reference tissue model for PET receptor studies. Neuroimage 4 (1996) 153-158
2. Gunn, R.N., Lammertsma, A.A., Hume, S.P., Cunningham, V.J.: Parametric imaging of ligand-receptor binding in PET using a simplified reference region model. Neuroimage 6(4) (1997) 270-287
3. Jolliffe, I.T.: Principal Component Analysis. Springer-Verlag, New York (1986)
4. Wu, H.M., Hoh, C.K., Choi, Y., Schelbert, H.R., Hawkins, R.A., Phelps, M.E., Huang, S.C.: Factor analysis for extraction of blood time-activity curves in dynamic FDG-PET studies. Journal of Nuclear Medicine 36 (1995) 1714-1722
5. Ashburner, J., Haslam, J., Taylor, C., Cunningham, V.J.: A Cluster Analysis Approach for the Characterization of Dynamic PET Data. In: Quantification of Brain Function Using PET. Academic Press, San Diego, CA (1996) 301-306
6. Liang, Z., MacFall, J.R., Harrington, D.P.: Parameter Estimation and Tissue Segmentation from Multispectral MR Images. IEEE Transactions on Medical Imaging 13(3) (September 1994)
7. Geman, S., Geman, D.: Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Trans. PAMI 6(6) (1984) 721-741
8. Besag, J.E.: On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society B 48 (1986) 259-302
9. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B 39(1) (1977) 1-38
10. Zhang, J., Modestino, J.W., Langan, D.A.: Maximum-likelihood Parameter Estimation for Unsupervised Stochastic Model-Based Image Segmentation. IEEE Transactions on Image Processing 3(4) (1994) 405-419
11. Banati, R.B., Goerres, G.W., Myers, R.: [11C](R)-PK11195 positron emission tomography imaging of activated microglia in vivo in Rasmussen's encephalitis. Neurology 53 (1999) 2199-2203
Statistical Study on Cortical Sulci of Human Brains

Xiaodong Tao1,3, Xiao Han1, Maryam E. Rettmann2, Jerry L. Prince1,2,3, and Christos Davatzikos3

1 Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
2 Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
3 Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA

Abstract. A method for building a statistical shape model of the sulci of the human brain cortex is described. The model includes sulcal fundi that are defined on a spherical map of the cortex. The sulcal fundi are first extracted in a semi-automatic way using an extension of the fast marching method. They are then transformed to curves on the unit sphere via a conformal mapping method that maps each cortical point to a point on the unit sphere. The curves that represent sulcal fundi are parameterized with piecewise constant-speed parameterizations. Intermediate points on these curves correspond to sulcal landmarks, which are used to build a point distribution model on the unit sphere. Statistical information on local properties of the sulci, such as curvature and depth, is embedded in the model. Experimental results are presented to show how the models are built.
1
Introduction
The cortex of the human brain is a thin convoluted surface comprised of gyri and sulci, which are folds oriented outwards and inwards, respectively. It is believed that many cortical sulci are linked to the underlying cytoarchitectonic and functional organization of the brain, although this relationship varies throughout the cortex and is not well understood at present. Recently, there has been great interest within the brain imaging community in developing image analysis methods for characterizing sulcal shapes. Such methods would have several applications. First, sulci are natural pathways to deeper brain structures in certain neurosurgical procedures. Therefore, the better understanding of their structures is important in neurosurgical planning [1]. Second, it has been suggested [2] that sulcal shapes are related to the underlying connectivity of the brain, since they are influenced by forces exerted by connecting fibers. Therefore, shape analysis of the sulci is important in understanding normal variability, as well as in studying developmental disorders or effects of aging. M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 475–487, 2001. c Springer-Verlag Berlin Heidelberg 2001
The third application of sulcal shape analysis is the primary focus of the work described in this paper. Specifically, sulci and gyri can serve as features in spatial normalization algorithms. Spatial normalization is frequently used to map data to a stereotaxic coordinate system by removing inter-subject morphological differences, thereby allowing group analysis to be carried out. The 3D Talairach coordinate system has been used extensively in the brain mapping literature, but surface-based coordinate systems have also been proposed for studying the cortex, which has a surface topology [3,4]. In this paper we describe steps toward building a statistical shape model of major cortical sulci, using the unit sphere as the stereotaxic space. Sulci are projected onto the unit sphere via a conformal mapping procedure [5,6]. Our model captures inter-subject variability of the shape and depth of the sulci, and is intended for automatic labeling and spatial normalization of cortical surfaces extracted from magnetic resonance images. Previous attempts to build statistical models of the sulci have relied on graphs constructed from 3D point-sets [1,7], on ribbons used to model the space between opposite sides of a sulcus [8,9,10,11], or on curves located on the outer cortical surface [12]. Related is the work in [4,13,14], in which sulci are not explicitly modeled but are spatially normalized via a curvature matching procedure that stretches individual surfaces into conformation with an average curvature map. Also related is the work in [15], where manually drawn sulcal curves, located on the outer cortical surface, were spatially normalized via a robust matching algorithm. In contrast to most of the previous work [14,16], our sulcal model is comprised of sulcal fundi, the deepest parts of the sulci, which are treated as parameterized curves located on the unit sphere.
Fundi are first found via a modified fast marching algorithm [17] applied on cortical surfaces extracted via the method reported in [18]. A conformal mapping algorithm [5,6] is then used to place the sulci on the unit sphere, which serves as the stereotaxic coordinate space. Statistics on the shape variability and on the depth of the fundi are then incorporated into this model. Our current model consists of seven sulci of the lateral surface of the right hemisphere.
2 Methods
In this section, we first describe the steps involved in constructing our statistical model of seven major sulcal fundi. We then describe preliminary work towards registering this model to label an individual brain’s sulcal fundi. In the training stage, parameterized curves running along the fundi are built, using an extension of the fast marching algorithm. These curves are then transformed to the unit sphere via conformal mapping and are aligned via a Procrustes fit [19], resulting in a number of parameterized curves serving as training examples. From these curves we build a model that has two elements. First, an attribute vector [20] is attached to each point on a sulcal fundus. If it is rich enough, this attribute vector can distinguish different sulci, and hence facilitate the subsequent deformation
Statistical Study on Cortical Sulci of Human Brains
and labeling process. Second, statistical shape variation of the fundi is captured via the principal eigenvectors of the covariance matrix [21].

2.1 Spherical Representation of Brain Cortex
The cortical surfaces used in this work are reconstructed from MR brain images using a largely automatic method reported in [18]. Fuzzy segmentation, an isosurface algorithm and a deformable model are used to reconstruct the central layer of the brain cortex with correct topology. The method has been validated both qualitatively and quantitatively in [18]. The brain cortex is a thin gray matter sheet and is topologically equivalent to a sphere when closed at the brain stem. This fact has motivated several groups to map the cortical surface to a sphere, so that visualization of deep sulci is easier. The sphere can also play the role of a stereotaxic space, within which the location and size of sulcal fundi are normalized, allowing for the calculation of statistical parameters which can be used for automatic recognition. In this paper, we use the conformal mapping method in [6] to map cortical surfaces to the unit sphere in a standard way. The method was developed from the one initially proposed by Angenent et al. [5]. The conformal mapping method starts with a reconstructed cortical surface represented by a triangular mesh. A point is chosen on the top of the corpus callosum, which corresponds to the north pole after the cortical surface is mapped to the sphere. The whole cortical surface is then mapped to the complex plane using the technique described in [5]. The points on the complex plane are mapped to the unit sphere using an inverse stereographic projection to generate a spherical map of the original cortical surface. As pointed out in [6], the conformal spherical map of a cortical surface is not unique. This fact gives us the flexibility to select the map that minimizes the area distortion in the regions that contain the fundi of our model, by adopting a technique similar to the one used in [6] to minimize the overall area distortion.
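The lifting step from the complex plane back to the sphere can be written in a few lines. This is a minimal sketch of the stereographic projection pair only (the conformal flattening of [5] is far more involved); the function names are ours, not the authors'.

```python
def stereographic(x, y, z):
    """Project a unit-sphere point (from the north pole (0, 0, 1))
    onto the complex plane z = 0."""
    return x / (1.0 - z), y / (1.0 - z)

def inverse_stereographic(u, v):
    """Map a point of the complex plane back to the unit sphere."""
    d = 1.0 + u * u + v * v
    return 2.0 * u / d, 2.0 * v / d, (d - 2.0) / d
```

The two maps are mutual inverses away from the north pole, so a flattened mesh vertex returns exactly to its spherical position.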
Instead of finding parameters that minimize the total area distortion, we find the parameters that minimize the area distortion in the regions of interest.

2.2 Feature Extraction
A sulcus is the region between two juxtaposed sides of a cortical fold. It is often modeled as a thin convolved ribbon embedded in 3D [8,9,10,11]. Sulcal fundi are 3D curves that lie on the deepest parts of the sulci and are regions of high curvature of the cortex. Because of the convoluted nature of the cortex, it is difficult to visualize sulcal fundi. Manually extracting them is an even more difficult task. For this reason, investigators have reported algorithms for obtaining line representations of sulci and gyri [22,23,24]. We adopt a similar strategy herein, by modeling fundi as parametric curves lying on the unit sphere. In order to build a training set of sulcal fundi, we use a semi-automatic approach based on the fast marching method on triangulated meshes [17]. Interaction is required
by the algorithm in defining the initial, final, and intermediate points along the sulcal fundi. The Fast Marching method is a numerical approach for solving the Eikonal equation [17]:

|∇T(x)| f(x) = 1,  x ∈ C,   (1)
where C is a surface represented by rectangular orthogonal grids or triangulated meshes, f(x) is a given non-negative function defined on C and T(x) is the function to be solved. Consider the case where a monotonically advancing front propagates with a speed f(x) > 0; then T(x) is the time for the front to cross the point x from its initial position. If the front propagates with unit speed over C, i.e. f(x) ≡ 1, then T(x) is the geodesic distance from point x to the initial front location. The fast marching method can be used to find the geodesic path between two points A and B on a triangulated surface by first solving Eq. 1 with the boundary condition T(A) = 0, and then back-tracking in the negative gradient direction of T from B [17]. Because the surface is treated as a continuum, the geodesic path so constructed has a sub-grid resolution without dividing the grids in any fashion. In order to extract sulcal fundi, we use the fact that the fundi have high curvatures and large depths. By setting proper speed terms f(x), we can make the path calculated by the fast marching method favor trajectories that run along the sulcal fundi. In this work, we set the speed term f(x) in Eq. (1) as follows (numbers are given in pixels, where one pixel is 0.9375 mm):

        ⎧ 0.1            if d(x) < 1.5
f(x) =  ⎨ d(x)           if 1.5 ≤ d(x) < 3.0    , for x ∈ C   (2)
        ⎩ κm(x)² + 3.0   if d(x) ≥ 3.0

Here, d(x) is the geodesic depth at x, which is defined as the geodesic distance between x and the outer surface of the brain obtained via a shrink-wrapping procedure [25,26]; κm(x) is the mean curvature at x. In the deep part of a sulcus, where d(x) > 3.0, those points with high curvature will have high speed. This results in a curve running through points with high curvature. In the shallow part of a sulcus, where 1.5 < d(x) ≤ 3.0, the speed term is determined solely by the depth. Therefore in this region, the sulcal curves extracted by this algorithm favor points with large depth.
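The three-branch speed term translates directly into code. This is a hedged sketch: d and kappa_m are assumed precomputed per surface point, and the deep-sulcus branch κm(x)² + 3.0 is our reading of the printed formula.

```python
def speed(d, kappa_m):
    """Speed term f(x): small on gyri, depth-driven in shallow sulcal
    regions, curvature-driven in deep regions (units: pixels)."""
    if d < 1.5:                     # gyral region: small positive speed lets
        return 0.1                  # the path climb over sulcal interruptions
    if d < 3.0:                     # shallow sulcal region: favor large depth
        return d
    return kappa_m ** 2 + 3.0       # deep region: favor high mean curvature
```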
In gyral regions, where d(x) < 1.5, the speed term is set to be a small positive number, so that the curve can climb over the interruptions of a sulcus. The speed term so defined makes the resulting path favor trajectories running along sulcal fundi. The extracted sulcal fundi are 3-D curves represented by ordered lists of points on the surface, which are readily mapped onto the unit sphere via the computed conformal map. The following definitions and notations are used in the following sections. A shape S is a collection of piecewise constant-speed curves Ci (s), i = 1, · · · , k, which are parameterized in the unit interval, s ∈ [0, 1]. After discretization, each curve consists of a number of points, cij = Ci (sj ), which are used in a point
distribution model. We construct these curves so that points of the same parametric coordinate, s, correspond to roughly anatomically homologous regions. Therefore, we call cij landmark points. The coordinate vector V of a shape S is a vector consisting of the coordinates of the landmark points of S. It is arranged as V T = [x1 , y1 , z1 , · · · , xM , yM , zM ], where M is the total number of landmark points of the shape. Crucial points are those landmark points in between which the speed of the curve’s parameterization is constant; they are typically the end points of a sulcus, or intermediate points corresponding to sulcal intersections.
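Piecewise constant-speed curves as described above can be obtained by uniform arc-length resampling between crucial points. A small sketch under our own conventions (polyline as a list of 3D tuples; not the authors' code):

```python
import bisect

def resample_constant_speed(points, m):
    """Resample a 3D polyline to m points uniformly spaced in arc length,
    giving a discretized constant-speed curve C(s), s in [0, 1]."""
    cum = [0.0]                                  # cumulative arc length per vertex
    for p, q in zip(points, points[1:]):
        step = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        cum.append(cum[-1] + step)
    total = cum[-1]
    out = []
    for j in range(m):
        s = total * j / (m - 1)                  # target arc length for landmark j
        i = min(bisect.bisect_right(cum, s), len(points) - 1)
        seg = cum[i] - cum[i - 1]
        t = 0.0 if seg == 0.0 else (s - cum[i - 1]) / seg
        p, q = points[i - 1], points[i]
        out.append(tuple(a + t * (b - a) for a, b in zip(p, q)))
    return out
```

Applying it to each segment between consecutive crucial points yields landmark points cij with the same parametric coordinates across shapes.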
2.3 Statistical Shape Model
In order to build a statistical shape model of the sulcal fundi, it is necessary to explicitly specify the point correspondence between shapes, which is often difficult. In [14], the Iterative Closest Point algorithm was employed to fulfill this task as well as to bring the shapes into alignment. In our work, we first identify a number of crucial points, typically corresponding to the connections between different curves, such as the connection between the superior frontal sulcus and the pre-central sulcus. Since we are dealing with primary sulci that are relatively stable across subjects, the end points and connections are easy to identify. The crucial points are manually picked with reference to the sulcal segmentation [26], in which sulcal regions are segmented using a watershed method based on the geodesic depth. Each point on the cortex with a geodesic depth greater than a certain threshold is considered to be located on a sulcus. The watershed method is then used to group those sulcal points into regions. With the help of these sulcal regions, the end points of the sulci can be consistently identified. Once the crucial points are picked, the curve segments between them are parameterized by arc length. In this way, point correspondence is established naturally between any two shapes. Let V1, V2, · · ·, VN be N coordinate vectors extracted from N brains. By applying the Procrustes fit, the shapes are brought into alignment so that they have the same size, the same location and a similar pose. The standard Procrustes fit for shapes on a 2D plane is as follows [21,19]: first, translate each shape so that its centroid coincides with the origin of the 2D plane; second, scale each translated shape so that the coordinate vector of the shape has unit norm; and finally, rotate each shape to minimize its distance to the mean shape. We extend this approach by applying it to shapes defined on the unit sphere.
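The standard planar steps (center, scale to unit norm, rotate toward a reference) can be sketched as follows; the rotation angle uses the usual closed-form least-squares solution. Names and data layout are ours:

```python
import math

def procrustes_2d(shape, reference):
    """Align a 2D shape (list of (x, y)) to a reference: center, scale
    to unit norm, then rotate to minimize summed squared distances."""
    def center_and_scale(pts):
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        pts = [(x - cx, y - cy) for x, y in pts]
        norm = math.sqrt(sum(x * x + y * y for x, y in pts))
        return [(x / norm, y / norm) for x, y in pts]
    a, b = center_and_scale(shape), center_and_scale(reference)
    # closed-form optimal rotation angle for least-squares landmark matching
    num = sum(xa * yb - ya * xb for (xa, ya), (xb, yb) in zip(a, b))
    den = sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(a, b))
    t = math.atan2(num, den)
    return [(x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) + y * math.cos(t)) for x, y in a]
```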
As in the planar case, we need 4 parameters to specify the location, size and pose of a shape on the unit sphere: θ and φ for location, c for size and α for pose. The Procrustes fit on the sphere is as follows: 1. Each shape is rotated so that its centroid lies on the z-axis. This step is equivalent to the translation step in the planar case. The rotation is done by a coordinate system transformation that transforms the centroid of the shape (x0, y0, z0) into the north pole (0, 0, 1). For each point (x, y, z) on the unit sphere, the transformation can be expressed in matrix form as:
⎡x′⎤   ⎡ cos θ cos φ   sin θ cos φ   − sin φ ⎤ ⎡x⎤
⎢y′⎥ = ⎢  − sin θ        cos θ          0    ⎥ ⎢y⎥   (3)
⎣z′⎦   ⎣ cos θ sin φ   sin θ sin φ    cos φ  ⎦ ⎣z⎦
where θ, φ are the spherical coordinates of the shape centroid (x0, y0, z0). The mean and the variance of the location of the shape centroids are calculated; they are denoted by θ̄, φ̄, σθ² and σφ², respectively. By performing this rotation, the patch of the sphere containing the curves being modeled is brought to sit around the north pole.
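This rotation can be checked numerically: built from the centroid's spherical coordinates, it must send the centroid to the north pole. A small sketch, assuming the convention x = sin φ cos θ, y = sin φ sin θ, z = cos φ (the helper names are ours):

```python
import math

def rotation_to_north_pole(x0, y0, z0):
    """Rotation matrix of Eq. (3), built from the spherical coordinates
    (theta, phi) of the unit centroid (x0, y0, z0); it maps the centroid
    onto the north pole (0, 0, 1)."""
    theta = math.atan2(y0, x0)      # azimuth
    phi = math.acos(z0)             # polar angle from the +z axis
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(phi), math.sin(phi)
    return [[ct * cp, st * cp, -sp],
            [-st,     ct,      0.0],
            [ct * sp, st * sp, cp]]

def apply(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))
```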
(x, y) and (u, v) are on the complex plane; (x, y, z) and (u, v, w) are on the unit sphere.
Fig. 1. Scaling a spherical patch, used for subsequent Procrustes fit.
2. The procedure generating the conformal map of the brain cortex enables us to scale the patch that contains the sulcal curves: 1) mapping the unit sphere to the complex plane via stereographic projection, and therefore mapping the shape on the unit sphere to a shape on the complex plane; 2) scaling the shape on the complex plane as usual; and 3) mapping the scaled shape on the complex plane back to the unit sphere via inverse stereographic projection. The size of a shape is measured as the sum of the great circle distances between each landmark point and the north pole. This procedure is illustrated in Fig. 1. A point on the unit sphere (x, y, z) is first mapped to (x̄, ȳ) on the complex plane using stereographic projection. (x̄, ȳ) is then scaled by a factor of c to be (ū, v̄) = c(x̄, ȳ). Finally, (ū, v̄) is mapped back to the sphere as (u, v, w) via inverse stereographic projection. In this way, a spherical patch around the north pole is scaled.
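The three sub-steps compose into a few lines. A minimal sketch with our own naming, using stereographic projection from the north pole as in Fig. 1:

```python
def scale_on_sphere(x, y, z, c):
    """Scale a spherical patch by factor c: stereographic projection to
    the complex plane, planar scaling, inverse projection back (Fig. 1)."""
    u, v = c * x / (1.0 - z), c * y / (1.0 - z)   # (ubar, vbar) = c * (xbar, ybar)
    d = 1.0 + u * u + v * v
    return 2.0 * u / d, 2.0 * v / d, (d - 2.0) / d  # back onto the unit sphere
```

For c = 1 the map is the identity, and for any c the result stays on the unit sphere.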
3. Shapes that are translated and scaled are then rotated around the z-axis so as to minimize the misalignment. This rotation is different from the rotation in Step 1. Here, the rotation changes the pose of the shape, while the rotation in Step 1 changes the location of the shape. Rotation around the z-axis by an angle α can be expressed in matrix form as:

⎡u′⎤   ⎡ cos α   − sin α   0 ⎤ ⎡u⎤
⎢v′⎥ = ⎢ sin α     cos α   0 ⎥ ⎢v⎥   (4)
⎣w′⎦   ⎣  0         0      1 ⎦ ⎣w⎦

After the shapes are brought into alignment, the statistics on the shapes are readily computed. The point distribution model consists of a mean shape and a number of eigenmodes of variation. With this model, any new shape can be approximated by its projection onto the model space. In addition to the point distribution model, each landmark point is associated with an attribute vector, whose elements are the statistics of the local properties at that point. The attribute vectors can include a variety of shape attributes. Currently, we use depth and curvature at different scales. These attribute vectors capture the shape information in the neighborhood of each landmark point at different resolutions. For example, from the results shown in Fig. 6, we can see that the depth profiles along the central, pre-central, and post-central sulci are quite different. Therefore the attribute vectors can potentially help distinguish among different sulci, and hence facilitate automatic labeling.

2.4 Registration
The model built using the algorithm described above can be used to search for and label sulcal fundi in an unseen brain image. The registration stage is divided into two steps: linear and nonlinear. We have currently implemented a linear matching, which is used for initialization of the deformable model. In particular, the mean shape is put onto the spherical map of the unseen brain image. Then, by searching for the best values of θ, φ, c and α in the intervals obtained from the training stage, the best estimate for the sulcal fundi in the unseen image can be found, and nonlinear registration can be performed thereafter using a hierarchical scheme [12].
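The linear search over pose parameters can be illustrated for the rotation angle α alone (the z-rotation of Eq. (4)), assuming corresponding landmark lists. A hedged sketch with our own names, not the authors' implementation:

```python
import math

def best_z_rotation(model, target, steps=360):
    """Grid search for the z-axis rotation angle that best aligns model
    landmarks to target landmarks (sum of squared distances)."""
    def error(alpha):
        ca, sa = math.cos(alpha), math.sin(alpha)
        e = 0.0
        for (x, y, z), (u, v, w) in zip(model, target):
            e += ((ca * x - sa * y - u) ** 2
                  + (sa * x + ca * y - v) ** 2
                  + (z - w) ** 2)
        return e
    angles = [2.0 * math.pi * k / steps for k in range(steps)]
    return min(angles, key=error)
```

In the full search, θ, φ and c would be swept over their training intervals in the same brute-force fashion.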
3 Results and Discussion
Experiments were conducted using 8 T1-weighted volumetric MR brain images. The images were pre-processed to correct the intensity inhomogeneity introduced by imaging devices, and to strip the irrelevant tissue such as skin, fat, and skull. They were then processed using the reconstruction method reported in [18] to extract the central layer of the cortical surfaces. In our work, a statistical model was built for the central, pre-central, post-central, superior frontal, inferior frontal, superior temporal, and circular insular sulci on the right hemisphere.
3.1 Extracting Sulcal Fundi
Brain cortices were visualized using OpenDX. With the tools provided by the software, we picked the crucial points of each sulcus with reference to a brain atlas [27] and the sulcal segmentation results. For each individual sulcus on a brain cortex, we used the method described in Section 2.2 to compute the distance from each point x on the surface to the starting point of the sulcus, and then extracted the sulcal fundi by back-tracking in the negative gradient direction of the distance function, starting from the end point of the sulcus. Because of the nature of our algorithm, the points on the sulcal fundi do not need to lie on vertices. Fig. 2 shows a central sulcus viewed from inside the brain. From the figure, it can be clearly seen that the extracted fundi are quite reliable. Fig. 4(a) shows the fundi of several sulci extracted from one brain image, and Fig. 4(b) shows the spherical map of 7 sulcal fundi of one brain on the unit sphere. After discretization of the resulting parametric curves, the total number of landmark points is 850.

3.2 Model
Fig. 5(a) shows the sulci of 8 data sets after the Procrustes fit. The mean shape and the most significant eigenvariation of the resulting model (see Sec. 2.3) are shown in Fig. 5(b). The thick curves are the mean position of the sulcal fundi and the thin ones show the eigenvariation at some landmark points, with length equal to one standard deviation in each direction. Figs. 6(a), (b), and (c) show the geodesic depth along the central, pre-central and post-central sulci of eight training brains. The comparison of the means and the standard deviations is shown in Fig. 6(d). As Fig. 6(d) shows, the depth profiles along different sulci are very different. We believe that, as more such attributes are included in our model, different sulci will have sufficiently different attribute vectors to allow for robust deformation of the model. From Fig. 6, it can be seen that the depth profiles of different sulci have some obvious properties:
1. For the central sulcus, somewhere in the middle there is a consistent decrease in depth; this part is likely to be the area of the pre-central knob.
2. Along the pre-central sulcus, there is a consistent interruption, partial or total, reflected by an abrupt decrease in depth. Fig. 6(d) shows that this interruption of the pre-central sulcus is very consistent across subjects, since the standard deviation of the depth is small in that region. This fact gives us confidence that the pre-central sulcus will be detected fairly easily, thereby making the detection of nearby sulci (central, post-central, superior frontal sulci) easier.
3. The deviation along the central sulcus is generally smaller than those of the pre-central and post-central sulci.
3.3 Linear Registration
In order to label sulcal fundi in a new dataset, a linear registration was first performed, in order to initialize a deformation process. In this step, the mean shape obtained in Sec. 3.2 was put onto the new data set, then it was rotated and scaled so that it had the best size, position and pose in the sense that the local geometric properties at each landmark point fit the statistics of those obtained from the training set. Fig. 3 shows the result of the rigid registration on a new dataset. Currently, we are in the process of implementing a hierarchical deformation mechanism for this model, similar to the one described in [12,20,28].
4 Conclusion
In this paper, the sulcal fundi of a brain cortex are extracted from the brain cortical surface using a semi-automatic method. They are transformed to the unit sphere using a conformal mapping method and parameterized to be piecewise uniform-speed curves. A point distribution model is then built from them, in which each landmark point has some statistics on its location. Moreover, each landmark point has an associated attribute vector, which describes the local geometric properties of the brain. This model can be used to detect and label sulcal fundi on an unseen dataset. In our experiments, we used 8 brains as our training examples. Although this training set is very small, it gives us a clear idea of the consistency of the location and depth profile of several sulci. This statistical information is important in sulcal labeling. Future work includes non-rigid registration using the model built with a larger training set via the methodology presented in this paper. Other attributes will also be examined, which will help uniquely characterize sulci, and hence make the model more precise.
Acknowledgments This work was partially supported by NIH grant R01AG14971, NIH contract N01AG32129, NIH grant R01NS37747 and NSF/ERC grant CISST#9731748. The authors would like to acknowledge the Baltimore Longitudinal Study of Aging which provided the datasets.
References 1. J.-F. Mangin, V. Frouin, I. Bloch, J. Regis and J. Lopez-Krahe, “From 3D magnetic resonance images to structural representations of the cortex topography using topology preserving deformations,” J. Math. Imag. Vis., vol. 5, pp. 297–318, Dec. 1995. 2. D. C. Van Essen and J. H. R. Maunsell, “Two dimensional maps of cerebral cortex,” J. Comp. Neurol., vol. 191, no. 2, pp. 255–281, 1980.
Fig. 2. A central sulcus viewed from inside the brain. The black curve is the sulcal fundus extracted using our method. It divides the entire sulcus into anterior and posterior banks.
Fig. 3. The spherical patch containing the sulcal curves being modeled, after linear registration with a new image. The mean shape has been scaled, translated and rotated to have the best match with the input image.
Fig. 4. (a) Sulcal fundi on a brain cortical surface. (b) The same fundi after conformal mapping of the cortex onto the unit sphere.
Fig. 5. (a) Sulcal fundi of 8 subjects after alignment using the Procrustes fit. (b) The mean shape and the most significant eigenmode (1 standard deviation to each side of the mean).
Fig. 6. Geodesic depth along sulci (mm), plotted against the parameter s of the fundal curves, s ∈ [0, 1].
Fig. 3. Thickness asymmetry for a left/right hippocampus pair. a, b: M-rep descriptions (a: right, b: left); sphere radius and color are proportional to the corresponding medial radius. c: Difference (R − L) at medial atoms, with radius and color proportional to the difference: r ∼ |R − L|; col ∼ (R − L).
2 Motivation: Shape Analysis in an Asymmetry Study
This section describes an example of the intuitive description of shape changes that is inherent to the medial description. We will show an analysis of left/right asymmetry in hippocampi, a subcortical human brain structure. Asymmetry is defined via the interhemispheric plane; therefore the right hippocampus was mirrored for comparison. The main advantage of medial descriptions is the separation of the local shape properties thickness and location. In the presented case we chose to investigate mainly the thickness information of the medial manifold. In Fig. 1 the hippocampi are visualized and asymmetry is clearly visible. Volume and medial axis length measurements indicate the same result, that the right hippocampus is larger than the left: vol_right = 2184 mm³, vol_left = 2023 mm³; axis_right = 65.7 mm, axis_left = 64.5 mm. But these measurements do not provide a localization of the detected asymmetry. This asymmetry can easily be computed and intuitively visualized using our new medial description. When analyzing the thickness of the hippocampi along the medial axis (the intrinsic coordinate system), we get a more localized understanding of the asymmetry. In Fig. 2 the right hippocampus is thicker over the full length of the axis, and the difference is most pronounced in the middle part of the axis. In order to relate this thickness information with the appropriate location, we visualize it on the medial samples themselves. Each medial atom (sample of the medial surface) is displayed by a sphere of size and color proportional to its thickness. This kind of display can also be used to visualize the thickness difference of corresponding locations in the right and left hippocampus. The sphere radius and color are proportional to the difference: r ∼ |R − L|; col ∼ (R − L) (see Fig. 2). As a next step, we take into account a grid of medial atoms and perform the same analysis as for the axis (see Fig. 3).
We observe that the right shape is thicker, but the difference is most pronounced in the middle part. All parts of the processing are described in the methods section and more detailed examples are presented in the results section. The applied methods
Martin Styner and Guido Gerig

Fig. 4. Computation of a medial model from an object population. 1. Shape space definition. 2. Common medial branching extraction. 3. Compute minimal sampling.
computed the medial branching topology of one single sheet with volumetric overlap larger than 98% and approximation error less than 10% MADnorm (Mean Absolute Distance). It is evident that in the presented hippocampus study the medial description gives a better understanding of the observed asymmetry than simple measurements like volume or even the length of the medial axis. Limiting ourselves to a coarse-scale description does not lessen the power of the analysis, and asymmetry can reliably be detected and localized. In the results section, we present another example dealing with asymmetry in a lateral ventricles study.
3 Methods
The main problem for a medial shape analysis is the determination of a stable medial model in the presence of biological shape variability. Given a population of anatomical objects, how can we determine a stable medial model automatically? This section describes the methods that were developed to construct such a medial model. Some parts were described in more detail by Styner [19]. Our scheme is subdivided into 3 steps and visualized in Fig. 4. We first define a shape space for our computations using Principal Component Analysis. From this shape space, we generate the medial model in two distinct steps. First, we compute the common branching topology using pruned Voronoi skeletons. Secondly, we calculate the minimal sampling of the medial manifold given a predefined maximal approximation error in the shape space.

3.1 SPHARM and M-rep Shape Description
Medial Models Incorporating Object Variability for 3D Shape Analysis

Two shape descriptions are of importance in this work: a) the boundary description using SPHARM and b) the medial description using m-rep. The SPHARM description (see Brechbühler [3]) is a global, fine-scale description that represents shapes of sphere topology. The basis functions of the parameterized surface are spherical harmonics, which have been demonstrated by Kelemen [13] to be non-critical to shape deformations. SPHARM is a smooth, accurate surface shape representation given a sufficiently small approximation error. Based on a uniform icosahedron-subdivision of the spherical parameterization, we obtain a Point Distribution Model (PDM) directly from the coefficients via a linear mapping x = A · c. Correspondence of SPHARM is determined by normalizing the parameterization to the first order ellipsoid. M-rep is a local medial shape description. It is a set of linked samples, called medial atoms, m = (x, r, F, θ) (see Pizer et al. [16]). Each atom is composed of: 1) a position x, 2) a width r, 3) a frame F implying the tangent plane to the medial manifold and 4) an object angle θ. A figure is a non-branching medial sheet. Figures are connected via inter-figural links. The medial atoms form a graph with edges representing either inter- or intra-figural links. In the generic case, this medial graph is non-planar, i.e. overlapping when displayed in a 2D diagram. The sampling is sparse and leads to a coarse-scale description. Correspondence between shapes is implicitly given if the medial graphs are equivalent. We express every shape by the same medial graph, varying only the parameters of the medial atoms. For every population of similar shapes there is a specific medial graph.

3.2 Shape Space
We define a shape space via the population average and its major deformation modes (eigenmodes) determined by a principal component analysis (PCA). We compute the SPHARM shape representations ci for a population and apply PCA to the coefficients as described by Kelemen [13]. The eigenmodes (λi, pi) in the shape space cover at least 95% of the variability in the population:

Σ = 1/(n−1) · Σi (ci − c̄)(ci − c̄)ᵀ   (1)
0 = (Σ − λi · In) · pi,  i = 1 … n−1   (2)
Space_shape = {c̄ ± 2·√λi · pi},  i = 1 … k95%   (3)
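For the dominant eigenmode, the covariance and eigenvector relations above can be checked with a dependency-free power iteration on the sample covariance. This is a generic sketch, not the authors' implementation:

```python
def dominant_eigenmode(samples, iters=200):
    """Power iteration for the dominant eigenpair (lambda_1, p_1) of the
    sample covariance Sigma = 1/(n-1) sum_i (c_i - cbar)(c_i - cbar)^T."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    x = [[s[j] - mean[j] for j in range(d)] for s in samples]   # centered data
    cov = [[sum(s[i] * s[j] for s in x) / (n - 1) for j in range(d)]
           for i in range(d)]
    p = [1.0] * d
    for _ in range(iters):                     # repeated multiplication converges
        p = [sum(cov[i][j] * p[j] for j in range(d)) for i in range(d)]
        norm = sum(v * v for v in p) ** 0.5
        p = [v / norm for v in p]
    lam = sum(p[i] * sum(cov[i][j] * p[j] for j in range(d)) for i in range(d))
    return lam, p, mean
```

A full model would keep as many eigenmodes as needed to reach the 95% variability threshold.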
Since the shape space is continuous, we need to determine a shape set by a representative sampling of the shape space. All computations are then applied to the shape set. The number of shapes in this set can be considerably higher than the original number of shapes (see section 4.1 for an example).

3.3 Computing a Common Medial Branching Topology
In this section, we describe how the common medial branching topology is computed in a three step approach. First, we compute for each member of the shape set its branching topology. Then we establish a common spatial frame in order to compare the topology of different shapes. As a last step we determine the common topology via a spatial matching criterion in the common frame. Branching topology of a single shape - The branching topology for a single shape is derived via Voronoi skeletons (see Attali [1]). In order to calculate
Fig. 5. Voronoi skeleton pruning scheme applied to a lateral ventricle (side views). a: Original ventricle. b: Original Voronoi skeleton (∼ 1600 sheets). d: Pruned skeleton (2 sheets). c: Reconstruction from pruned skeleton (Eoverlap = 98.3%).
the 3D Voronoi skeleton from the SPHARM description, we first calculate a finely sampled PDM. From the PDM, the inside 3D Voronoi diagram is calculated. The Voronoi diagram is well behaved due to the fact that the PDM is a fine sampling of the smooth SPHARM. The medial manifold of a Voronoi skeleton is described by Voronoi vertices connected by Voronoi edges forming planar Voronoi faces. A face-grouping algorithm was developed that is based on the grouping algorithm proposed by Naef [14]. The algorithm groups the Voronoi faces into a set of medial sheets. It is a graph algorithm that uses a cost function weighted by a geometric continuity criterion. Additionally, a merging step has been implemented, which merges similar sheets according to a mixed radial and geometric continuity criterion. The computed medial sheets are weighted by their volumetric contribution to the overall object volume: C_sheet = (vol_skel − vol_{skel \ sheet_i}) / vol_skel. The medial sheets are then pruned using a topology-preserving deletion scheme. An example of the scheme is presented in Fig. 5. We are aware that this pruning scheme has to be adapted when dealing with very complex objects like the whole brain cortex. A common spatial frame for branching topology comparison - The problem of comparing branching topologies has already been addressed before in 2D by Siddiqi [18] and others, mainly via matching medial graphs. So far, little work has been done in 3D. The results in 2D and August [2] have shown that the medial branching topology is quite unstable. In 3D, the medial branching topology is even more unstable and ambiguous than in 2D. Thus, we chose to develop a matching algorithm that does matching based on spatial correspondence. In order to apply such an algorithm, we first have to define a common spatial frame. Our choice for a common frame is the average case of the shape space. We map each member of the shape set into the common frame using the correspondence that is given on the boundary.
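Returning to the volumetric sheet weighting above, the computation itself is a one-liner. A toy sketch with volumes as plain numbers; the 2% keep-threshold is our illustrative choice, not the paper's:

```python
def sheet_weights(vol_skel, vols_without):
    """C_sheet_i = (vol_skel - vol_{skel \\ sheet_i}) / vol_skel: the
    relative volumetric contribution of each medial sheet."""
    return [(vol_skel - v) / vol_skel for v in vols_without]

def keep_significant(vols_without, vol_skel, keep=0.02):
    """Toy pruning: indices of sheets contributing at least `keep` of the volume."""
    return [i for i, w in enumerate(sheet_weights(vol_skel, vols_without))
            if w >= keep]
```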
The SPHARM description defines a boundary correspondence via the first order ellipsoid. This correspondence is carried through to the derived PDM, which is the basis of the branching topology computation. By mapping every PDM into the PDM of the common frame, we perfectly overlap the sampled boundaries of every object. To map the whole 3D
m-rep grid    2×6     3×6     3×7     3×12    4×12
I. MADnorm    0.159   0.116   0.109   0.066   0.051
II. MADnorm   0.146   0.076   0.075   0.053   0.048

Fig. 6. Sampling approximation errors of the implied surface from a hippocampus structure in the initial position (I) and after deformation (II). The Mean Absolute Distance (MAD) is normalized with the average radius: ravg = 2.67 mm. The images show the m-rep (red lines), points of the implied boundary (dark blue dots) and the original boundary (light blue, transparent).
space, we use a Thin-Plate-Spline warp algorithm. Thus, we warp every skeleton into the common frame, where all their boundaries match perfectly. Extraction of a common topology - Given that all objects were mapped into a common spatial frame, we can define a matching criterion between the medial sheets. We visually observed a very high degree of overlap between matching sheets in the common frame. Thus, we chose to use the spatial distribution of the Voronoi vertices of every sheet. The matching criterion is defined as the paired Mahalanobis distance. It is a non-Euclidean distance measure that takes into account the non-isotropic distribution of the vertices. We determined an empirical threshold that rejects a match if the centers of the spatial distributions are further apart than 2 standard deviations.

f_dist({µ1, Σ1}, {µ2, Σ2}) = (M(µ1, Σ1, µ2) + M(µ2, Σ2, µ1)) / 2    (4)

M(µ, Σ, x) = ‖Σ^{-1} · (µ − x)‖²    (5)
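The matching criterion of Eqs. (4) and (5) can be sketched as follows; the vertex clouds are hypothetical inputs, and the exact rejection threshold corresponding to "2 standard deviations" under this squared distance is an assumption:

```python
import numpy as np

def M(mu, sigma, x):
    """One-sided distance of Eq. (5): ||Sigma^{-1} (mu - x)||^2."""
    d = np.linalg.solve(sigma, mu - x)
    return float(d @ d)

def f_dist(mu1, s1, mu2, s2):
    """Paired distance of Eq. (4): average of the two one-sided distances."""
    return 0.5 * (M(mu1, s1, mu2) + M(mu2, s2, mu1))

def sheets_match(cloud1, cloud2, thresh=4.0):
    """Match two sheets via the spatial distribution (mean, covariance) of
    their Voronoi vertex clouds. thresh=4.0 (= 2^2) is an assumed squared
    threshold for the 2-standard-deviation rejection rule."""
    mu1, mu2 = cloud1.mean(axis=0), cloud2.mean(axis=0)
    s1, s2 = np.cov(cloud1.T), np.cov(cloud2.T)
    return f_dist(mu1, s1, mu2, s2) <= thresh
```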
The matching procedure starts by assigning the topology of the average case as the initial common topology. The common topology is then refined step by step by comparing it to every member in the shape set. All medial sheets that were judged to be non-matching are incorporated into the common topology. This matching procedure does a one-to-many match, since due to skeletal instabilities a single medial sheet in one shape can have two or more corresponding sheets in another shape. The resulting common topology is a set of medial sheets originating from members of the shape set.

3.4 Computing the Minimal Sampling of the Medial Manifold
From the common branching topology we compute the sampling of the associated sheets by a grid of medial atoms. The m-rep model is determined by the common
Martin Styner and Guido Gerig
Fig. 7. Subset of the shape set from a PCA shape space for a pooled Control-Schizophrenia population: the average case c̄ and deformations by ±2λi along each of the six eigenmodes.
branching topology and a set of parameters that specify the grid dimensions for each sheet. The sampling algorithm is based on the longest 1D thinning axis of the edge-smoothed 3D medial sheet. A detailed description of the algorithm would exceed the scope of this paper and thus is omitted here. The set of grid parameters is optimized to be minimal while the corresponding m-rep deforms into every member of the shape set with a predefined maximal approximation error. The approximation error is computed as the Mean Absolute Distance (MAD) between the implied and the original boundary. In order to have an error independent of the object size, we normalize with the average radius of the skeleton: MADnorm = MAD/ravg. In Fig. 6 we show the error for various sampling parameters. The limiting error was chosen at 10% of the average radius.

3.5 Deforming the M-rep into an Individual Shape
Once an m-rep model is computed, we want to fit it to individual shapes. This fit is done in two steps. First we obtain a good initial estimate, which is then refined in a second step to fit the boundary. The initial estimate is obtained by warping the m-rep model from the common frame into the frame of the individual shape using the correspondence on the boundary given by the PDM. From that position we run an optimization that changes the features of the m-rep atoms to improve the fit to the boundary (see Joshi [12]). Local similarity transformations as well as rotations of the local angulation are applied to the medial atoms. The fit to the boundary is constrained by an additional Markov random field prior that guarantees the smoothness of the medial manifold.

3.6 Issues of Stability
We consider the stability of the common medial branching topology to be good. Several experiments with various anatomical objects confirmed this hypothesis. The least stable part of the common topology is its dependency on the boundary correspondence for the topology matching procedure. Although we have not yet experienced problems with the quality of the boundary correspondence, it is evident that for objects with a high degree of rotational symmetry our first order ellipsoid correspondence is not appropriate. The borders of the Voronoi skeleton are sensitive to small perturbations on the boundary, unlike the center
Fig. 8. Topology matching scheme applied to the shape space of hippocampus-amygdala objects. a: Final topology with 3 medial sheets shown as clouds of Voronoi vertices. b-f: Examples of members from the shape set.
part of the skeleton, which is quite stable. Thus, the sampled medial grid is quite stable in its center, but rather unstable at the edges. The parameters of the grid sampling (rows, columns) are neither very stable nor very unstable. If we compute the common medial model for 2 similar populations (e.g. 2 different studies of the same object regarding the same disease), we expect the same medial branching topology, similar grid parameters, and very similar m-rep properties in the center, but not necessarily similar m-rep properties at the borders of the grid. We are currently verifying this hypothesis. Because we construct the model only once and its extraction is deterministic and repeatable, we regard the medial model for a single population as stable.
4 Results

4.1 Medial Model for a Hippocampus-Amygdala Population
The main contribution of this paper is the scheme to extract a medial model that incorporates biological variability. This section presents the scheme applied to a hippocampus-amygdala population from a schizophrenia study (30 subjects). The SPHARM coefficients were determined and normalized regarding rotation and translation using the first order ellipsoid. The scale was normalized with the individual volume. The first 6 eigenmodes λi of the PCA shape space contain 97% of the variability in the population, and thus the shape space is defined as {c̄ ± 2·√λi ; i = 1...6}. The shape set was computed by uniformly sampling 5 shapes along each eigenmode (see Fig. 7 for a subset). The 3D Voronoi skeletons were computed from the PDMs and pruned. No member of the shape set had a branching topology comprising more than 4 medial sheets. In Fig. 8 the individual medial topologies of several members of the shape set are displayed together with the common medial topology, which consists of only three medial sheets. For every member of the shape set, the volumetric overlap between the reconstruction and the original object was larger than 98%.
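The shape-set construction from the PCA shape space can be sketched as follows, assuming eigenmode directions v_i and variances λ_i from a PCA of the normalized SPHARM coefficients; the exact placement of the 5 uniform samples along each mode is an assumption:

```python
import numpy as np

def sample_shape_set(c_bar, eigvecs, eigvals, n_modes=6, n_samples=5):
    """Uniformly sample shapes along each PCA eigenmode within
    c_bar +/- 2*sqrt(lambda_i).

    c_bar:   mean coefficient vector
    eigvecs: columns are eigenmode directions v_i
    eigvals: per-mode variances lambda_i"""
    shapes = []
    for i in range(n_modes):
        # 5 uniform positions in [-2, 2] standard deviations along mode i.
        for t in np.linspace(-2.0, 2.0, n_samples):
            shapes.append(c_bar + t * np.sqrt(eigvals[i]) * eigvecs[:, i])
    return np.array(shapes)
```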
           Eoverlap   Init. m-rep E_MAD   Def. m-rep E_MAD
c̄           98.64%     0.081               0.069
c̄ + 2·λ2    98.35%     0.118               0.099
Case        98.02%     0.185               0.096

Fig. 9. Approximation errors: The common model applied to the average (top), a shape set member (middle) and an individual case (bottom). The pruned Voronoi skeletons and the volumetric overlap are presented. In the middle columns, the initial m-rep (after the warp) and its MADnorm error are displayed (ravg = 3.6 mm). On the right, one can see the deformed m-rep and the corresponding MADnorm. After deformation, the MADnorm error bound (≤ 0.10) was satisfied for all shapes.
The minimal sampling of the medial topology was computed with a maximal MADnorm ≤ 0.1. The application of the m-rep model to all individual cases did not produce a MADnorm larger than 0.1 (see also Fig. 9). This highly encouraging result strengthens the validity of our assumptions in the proposed scheme.

4.2 Asymmetry Analysis of Lateral Ventricles of Twins
This section presents our scheme applied to a population of lateral ventricles, a fluid filled structure in the center of the human brain. The data is from a mono-/di-zygotic twin study and consists of 20 twin subjects (10 pairs, 5 of each zygosity). In order to do asymmetry analysis, we mirrored the left objects at the inter-hemispheric plane. The SPHARM coefficients were determined and normalized regarding rotation and translation using the first order ellipsoid. The scale was normalized by the individual volume. In Fig. 11 the SPHARM correspondence is shown. The first 8 eigenmodes λi of the PCA shape space contain 96% of the variability in the population, and thus the shape space is defined as {c̄ ± 2·√λi ; i = 1...8}. The PDM of the shape set was computed and a common medial branching topology for all objects using Voronoi skeletons was
Fig. 10. Visualization of the lateral ventricles (L and R) of two monozygotic twins and their corresponding m-reps. Radius/color is proportional to the thickness (color scale 0.6-4.5 mm). The boundary display shows a clear asymmetry in both cases.
determined. The medial branching topology consisted of one to three medial sheets with a volumetric overlap of more than 98% for each. The common medial topology was computed to be a single sheet. The minimal sampling of the medial topology was computed with a maximal MADnorm ≤ 0.1 in the shape space. The application of the medial model to all individual cases did not produce a MADnorm larger than 0.1, which is about 0.35 mm. Since the lateral ventricle is a thin, long structure (medial axis ∼ 115 mm), this is a very small error for a coarse scale description. We examined a monozygotic twin pair more closely for an asymmetry and similarity analysis (see Fig. 10). An asymmetry is clearly present, also in the volume measurements: twin 1 L/R = 9365/12442 mm³; twin 2 L/R = 8724/6929 mm³. An analysis of the thickness is shown in Fig. 12. It is obvious that the asymmetry in twin 1 is considerably higher than in twin 2. Moreover, the similarity is much higher on the left side than on the right.
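To illustrate the reported volume asymmetry, one can use a normalized left/right index; the index below is a common choice in morphometry, not the measure the paper specifies:

```python
def asymmetry_index(vol_left, vol_right):
    """Normalized L/R volume asymmetry 2|L - R| / (L + R).
    This particular index is an assumption, not the paper's own measure."""
    return 2.0 * abs(vol_left - vol_right) / (vol_left + vol_right)

# Volumes reported for the monozygotic twin pair (mm^3):
a1 = asymmetry_index(9365.0, 12442.0)  # twin 1
a2 = asymmetry_index(8724.0, 6929.0)   # twin 2
```

Under this index the reported volumes indeed give twin 1 a larger asymmetry than twin 2, consistent with the thickness analysis.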
5 Conclusions and Discussion
We present a new approach to the description and analysis of shape for objects in the presence of biological variability. The proposed description is based on a boundary and a medial description. Using the medial description we can visualize and quantify locally computed shape features regarding asymmetry or similarity. Since a correspondence is given on both the boundary and the medial manifold, a statistical analysis can be directly applied. We showed several shape analysis examples of the medial description, and the results are very encouraging. The choice of a fixed topology for the m-rep has several advantages, e.g. enabling an implicit correspondence for statistical analysis. On the other hand, a fixed topology m-rep model cannot precisely capture the topology of an individual object. The determined m-rep is therefore always an approximation, which supports our decision to use a coarse scale m-rep description.
Fig. 11. Visualization of the normalizing SPHARM parameterization net using the first order ellipsoid for the lateral ventricle example (panels: Twin 1 - left, Twin 1 - right, Twin 2 - left, Twin 2 - right). The top row shows a top view and the bottom row a side view. The three axes are in corresponding colors.

Fig. 12. Asymmetry/similarity analysis of the thickness (panels a: R1 − L1, b: R2 − L2, c: R1 − R2, d: L1 − L2; color scale -0.3 to 1.5 mm). M-rep radius and color are proportional to the difference (the same range has been applied to all objects). Maximal differences: a) 1.5 mm, b) 0.28 mm, c) 1.7 mm, d) 0.42 mm. In the first twin there is a higher degree of asymmetry. The similarity of the left sides is higher than that of the right sides.
The SPHARM description, and thus also the derived m-rep, is constrained to objects of spherical topology. The SPHARM boundary correspondence has been shown to be a good approach in the general case, but it has inherent problems, e.g. in the case of rotational symmetry. This paper describes a scheme which is work in progress. All parts of the scheme have already been implemented, applied and tested. The scheme has been applied to populations of several structures of neurological interest: amygdala-hippocampus (60 cases), hippocampus (20), thalamus (56), globus pallidus (56), putamen (56) and lateral ventricles (40). The generation of the medial description takes into account the biological variability of a set of training shapes, which is a novel concept. We have shown that we can compute a stable medial description for a population of shapes. The biological variability is captured efficiently, and therefore it is a step further towards a natural shape representation. As next steps, we have to study the robustness of our scheme and to apply shape statistics for discrimination and similarity studies. Methods for statistical shape analysis of medial structures have been developed at our lab in 2D (Yushkevich [21]). Applications to clinical studies in schizophrenia and other neurological diseases are in progress.
Acknowledgments

Miranda Chakos and the MHNCRC image analysis lab at Psychiatry UNC Chapel Hill Hospitals kindly provided the original MR and segmentations of the hippocampi. Ron Kikinis and Martha Shenton, Brigham and Women's Hospital, Harvard Medical School, Boston provided the original MR and segmentations of the amygdala-hippocampus study. We further acknowledge Daniel Weinberger, NIMH Neuroscience at St. Elizabeth, Washington for providing the twin datasets that were segmented in the UNC MHNCRC image analysis lab. We are thankful to Christian Brechbühler for providing the software for surface parameterization and SPHARM description. We are further thankful to Steve Pizer, Tom Fletcher and Sarang Joshi of the MIDAG group at UNC for providing us with m-rep tools. We would like to thank Steve Pizer for his insightful comments and discussions about m-rep and other medial shape descriptions. Sarang Joshi is acknowledged for discussions about the m-rep deformation.
References

1. Attali, D. and Sanniti di Baja, G. and Thiel, E.: Skeleton simplification through non significant branch removal. Image Proc. and Comm. (1997) 3 63–72
2. August, J. and Siddiqi, K. and Zucker, S.: Ligature instabilities in the perceptual organization of shape. Computer Vision and Pattern Recognition (1999)
3. Brechbühler, C.: Description and analysis of 3-D shapes by parametrization of closed surfaces. (1995), Dissertation, IKT/BIWI, ETH Zürich, Hartung Gorre
4. Blum, T.O.: A transformation for extracting new descriptors of shape. Models for the Perception of Speech and Visual Form (1967) MIT Press
5. Burbeck, C.A. and Pizer, S.M. and Morse, B.S. and Ariely, D. and Zauberman, G. and Rolland, J.: Linking object boundaries at scale: a common mechanism for size and shape judgments. Vision Research (1996) 36 361–372
6. Christensen, G.E. and Rabbitt, R.D. and Miller, M.I.: 3D brain mapping using a deformable neuroanatomy. Physics in Medicine and Biology (1994) 39 209–618
7. Csernansky, J.G. and Joshi, S. and Haller, J. and Gado, M. and Miller, J.P. and Grenander, U. and Miller, M.I.: Hippocampal morphometry in schizophrenia via high dimensional brain mapping. Proc. Natl. Acad. Sci. USA (1998) 95 11406–11411
8. Davatzikos, C. and Vaillant, M. and Resnick, S. and Prince, J.L. and Letovsky, S. and Bryan, R.N.: A computerized method for morphological analysis of the Corpus Callosum. Comp. Ass. Tomography (1998) 20 88–97
9. Giblin, P. and Kimia, B.: A formal classification of 3D medial axis points and their local geometry. Computer Vision and Pattern Recognition (2000) 566–573
10. Golland, P. and Grimson, W.E.L. and Kikinis, R.: Statistical shape analysis using fixed topology skeletons: Corpus Callosum study. IPMI (1999) LNCS 1613, 382–388
11. Joshi, S. and Miller, M. and Grenander, U.: On the geometry and shape of brain sub-manifolds. Pat. Rec. and Art. Intel. (1997) 11, 1317–1343
12. Joshi, S. and Pizer, S. and Fletcher, T. and Thall, A. and Tracton, G.: Multi-scale deformable model segmentation based on medial description. IPMI (2001)
13. Kelemen, András and Székely, Gábor and Gerig, Guido: Elastic model-based segmentation of 3D neuroradiological datasets. IEEE Med. Imaging (1999) 18 828–839
14. Näf, M.: Voronoi skeletons: a semicontinuous implementation of the 'Symmetric Axis Transform' in 3D space. ETH Zürich, IKT/BIWI (1996), Dissertation
15. Ogniewicz, R. and Ilg, M.: Voronoi skeletons: theory and applications. Computer Vision and Pattern Recognition (1992) 63–69
16. Pizer, S. and Fritsch, D. and Yushkevich, P. and Johnson, V. and Chaney, E.: Segmentation, registration, and measurement of shape variation via image object shape. IEEE Transactions on Medical Imaging (1999) 18 851–865
17. Siddiqi, K. and Kimia, B. and Zucker, S. and Tannenbaum, A.: Shape, shocks and wiggles. Image and Vision Computing Journal (1999) 17 365–373
18. Siddiqi, K. and Shokoufandeh, A. and Dickinson, S. and Zucker, S.: Shock graphs and shape matching. Computer Vision (1999) 35 13–32
19. Styner, M. and Gerig, G.: Hybrid boundary-medial shape description for biologically variable shapes. Math. Methods in Biomedical Image Analysis (2000) 235–242
20. Subsol, G. and Thirion, J.P. and Ayache, N.: A scheme for automatically building three-dimensional morphometric anatomical atlases: application to a skull atlas. Medical Image Analysis (1998) 2 37–60
21. Yushkevich, P. and Pizer, S.: Coarse to fine shape analysis via medial models. IPMI (2001)
Deformation Analysis for Shape Based Classification

Polina Golland¹, W. Eric L. Grimson¹, Martha E. Shenton², and Ron Kikinis³

¹ Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA. {polina,welg}@ai.mit.edu
² Laboratory of Neuroscience, Clinical Neuroscience Division, Department of Psychiatry, VAMC-Brockton, Harvard Medical School, Brockton, MA. [email protected]
³ Surgical Planning Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA. [email protected]

Abstract. Statistical analysis of anatomical shape differences between two different populations can be reduced to a classification problem, i.e., learning a classifier function for assigning new examples to one of the two groups while making as few mistakes as possible. In this framework, feature vectors representing the shape of the organ are extracted from the input images and are passed to the learning algorithm. The resulting classifier then has to be interpreted in terms of shape differences between the two groups back in the image domain. We propose and demonstrate a general approach for such interpretation using deformations of outline meshes to represent shape differences. Given a classifier function in the feature space, we derive a deformation that corresponds to the differences between the two classes while ignoring shape variability within each class. The algorithm essentially estimates the gradient of the classification function with respect to node displacements in the outline mesh and constructs the deformation of the mesh that corresponds to moving along the gradient vector. The advantages of the presented algorithm include its generality (we derive it for a wide class of non-linear classifiers) as well as its flexibility in the choice of shape features used for classification. It provides a link from the classifier in the feature space back to the natural representation of the original shapes as surface meshes. We demonstrate the algorithm on artificial examples, as well as a real data set of the hippocampus-amygdala complex in schizophrenia patients and normal controls.
1 Introduction
Statistical studies of anatomical shape in different populations are important in understanding anatomical effects of diseases (comparing patients vs. normal controls) or biological processes (for example, comparing different age groups). Volume and area measurements were originally used for such studies, but recently more sophisticated shape based techniques have been used to identify

M.F. Insana, R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, pp. 517–530, 2001.
© Springer-Verlag Berlin Heidelberg 2001
statistical differences in the shape of a particular organ in different subject groups [1,6,7,8,9,11,13]. One of the important steps in the analysis is the interpretation of the mathematical description of shape variation in terms of image related quantities. It allows us to argue about the shape differences in biologically meaningful terms of organ development and deformation. Most work in constructing models of shape and its variation has been motivated by and used for segmentation tasks, where a generative model of shape variation guides the search over the space of possible deformations of the “representative” shape template, with the goal of matching it to a new input image. Typically, this approach uses Principal Component Analysis (PCA) to build a linear model of variation and compress it to a few principal modes of deformation [5,10,12,18]. In this case, the shape variation within the population is represented using these principal modes of deformation. In our previous work, we extended this approach to linear discriminant analysis by visualizing the shapes resulting from moving the input feature vectors along the vector normal to the best separating hyperplane [8]. In this paper, we present a generalized approach to deriving the deformation that represents the differences between two classes, where that deformation is deduced from a non-linear classifier learned from training examples. Note that unlike pattern recognition, where the learned classifier is used to label new examples, we are much more interested in the classifier’s spatial structure, from which we infer the shape differences in terms of organ deformations. The paper is organized as follows. In the next section, we discuss the problem of statistical shape analysis and briefly outline our approach to learning the shape differences that is described in detail elsewhere [9]. 
It is followed by the derivation of the deformation that corresponds to differences in the shape represented by the classifier function. The algorithm is then demonstrated on artificial examples, as well as a real data set of the hippocampus-amygdala complex in schizophrenia patients compared to a normal control group.
2 Background on Shape Based Classification
Shape Representation. The first step in shape based statistical analysis is extraction of the shape parameters, or a shape descriptor. The wealth of shape descriptors used in medical image analysis and computer vision includes parametric models, such as Fourier descriptors [17,18] or spherical harmonic functions [2,10], as well as a variety of non-parametric models: landmark based descriptors [1,5], distance transforms [9,12], medial axes [8,14] and deformation fields obtained by warping a template to each of the input shapes [6,7,11,13]. We choose to use distance transforms as shape descriptors. A distance transform, or a distance map, is a function that for any point in the image is equal to the distance from the point to the closest point on an outline. By first segmenting example images into structures of interest, we can then convert each example into a distance map, based on the boundary of the segmented object. We then sample points in the distance transform to form a feature vector, which captures
the shape of the segmented object. Since the values of the distance transform at neighboring voxels are highly correlated, using it as a shape descriptor provides the learning algorithm with redundant information on the structure of the feature space. Moreover, the local nature of the distance transform's sensitivity to noise in the object's pose and segmentation makes it an attractive shape representation for statistical analysis [9,12]. The shape information extraction is an inherently noisy process, consisting of imaging, segmentation and feature extraction, with every step introducing errors. The vectors in the training set are therefore computed with uncertainty, and it is important that small errors in any of the steps do not cause significant displacements of the resulting feature vectors. Thus, our feature extraction procedure consists of the following steps:
– Example images are segmented into relevant structures;
– These images are aligned by computing a distance map, then computing the transformation that aligns the moments of the maps;
– The aligned images are clipped to a common size;
– The aligned and clipped 3D distance transforms are used as feature vectors, with individual voxels being the vector components.
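The distance-map feature extraction can be sketched as follows; the brute-force distance computation is for clarity only (real pipelines use a fast Euclidean distance transform), and the alignment and clipping steps are omitted:

```python
import numpy as np

def signed_distance_map(mask):
    """Signed distance from every voxel to the object boundary: positive
    outside the segmented object, negative inside. Brute force for clarity;
    only suitable for tiny examples."""
    coords = np.indices(mask.shape).reshape(mask.ndim, -1).T
    inside = coords[mask.ravel()]
    outside = coords[~mask.ravel()]
    d_to_in = np.linalg.norm(coords[:, None] - inside[None], axis=2).min(axis=1)
    d_to_out = np.linalg.norm(coords[:, None] - outside[None], axis=2).min(axis=1)
    return (d_to_in - d_to_out).reshape(mask.shape)

def feature_vector(mask):
    """Use the voxels of the distance map as the components of the feature
    vector (alignment by moments and clipping omitted in this sketch)."""
    return signed_distance_map(mask).ravel()
```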
2.1 Statistical Analysis
Once the feature vectors are extracted from the images, they are used to study the statistical properties of two groups and to compare two populations, for example, a population of normals and a population of schizophrenics. As mentioned in the introduction, statistical techniques have been extensively used to build generative models of shape variation within a single population by computing the principal modes of deformation [5,10,12,14,18]. This approach has been previously used to identify and visualize the differences between two populations by comparing generative models derived separately for each population, for example by studying the first few principal modes of deformation in the two groups [7,13]. In addition, several techniques have been demonstrated that are based on learning the differences between the classes directly from the training examples, by either comparing individual landmarks [1,7] or by actually training a classifier to label new examples [6,11]¹. Our technique [8,9] belongs to this last group and is based on the premise that in order to automatically detect statistical shape differences between two populations, one should try to build the best possible classifier for labeling new examples. In the optimal case, the classifier function will exactly represent the important differences between the two classes, while ignoring the intra-class variability. We use the Support Vector Machines (SVM) learning algorithm to arrive at a classification function. In addition to the sound theoretical foundations of
¹ In both papers, a classifier was learned from the training data. However, it was not used to demonstrate the shape differences, but only to establish statistical significance.
SVMs, they have been demonstrated empirically to be quite robust and seemingly free of the over-fitting problems to which other learning algorithms, such as neural networks, are subject. Below, we state without proof the main facts on SVMs that are necessary for the derivation of our results. The reader is referred to the original publications on SVMs [3,20] for more details on the method. The Support Vector Machines algorithm exploits kernel functions for non-linear classification. A kernel is a function K : Rⁿ × Rⁿ → R such that for some mapping function Ψ : Rⁿ → Rᵐ from the original space to a higher dimensional space (n ≤ m), the value of the dot product in Rᵐ can be computed by applying K to vectors in Rⁿ:

K(u, v) = Ψ(u) · Ψ(v),  ∀u, v ∈ Rⁿ.    (1)

Thus, while mapping the points from the original space Rⁿ to the higher dimensional space Rᵐ and performing computations explicitly in Rᵐ can be prohibitive because of the dimensionality of the space, we can compute certain quantities in Rᵐ without ever computing the mapping if the answer depends only on dot products of the vectors in Rᵐ. Given a training set of l pairs {(x_k, y_k)}_{k=1}^{l}, where x_k ∈ Rⁿ are observations and y_k ∈ {−1, 1} are corresponding labels, the SVM learning algorithm searches for the optimal classification function

f_K(x) = Σ_{k=1}^{l} α_k y_k Ψ(x) · Ψ(x_k) + b = Σ_{k=1}^{l} α_k y_k K(x, x_k) + b    (2)

that given a new example x assigns it a label ŷ(x) = sign(f(x)). In the higher dimensional space defined by the mapping Ψ, the separation boundary is a hyperplane whose normal is a linear combination of the Ψ(x_k)'s,

w = Σ_{k=1}^{l} α_k y_k Ψ(x_k),    (3)

but it can be an arbitrarily complex surface in the original space. Training vectors with non-zero α's are called support vectors, as they define, or "support", the hyperplane. The optimal classifier (i.e., the optimal set of coefficients α_k and the threshold b) is found by maximizing the margin between the classes with respect to the separating boundary, which can be reduced to a constrained quadratic optimization problem defined in terms of the pairwise dot products of the Ψ(x_k)'s, and can therefore be solved without explicitly computing the mapping into the higher dimensional space. For a linear kernel K(u, v) = u · v, the mapping Ψ is the identity, and the classifier is a linear function in the original space. Several non-linear kernels have also been proposed. In this work, we use a family of Gaussian kernel functions

K(u, v) = e^(−‖u−v‖²/γ),
where the parameter γ determines the width of the kernel. One of the important properties of this family of classifiers is its locality: moving a support vector
slightly affects the separation boundary close to the vector, but does not change it in regions distant from the vector. Following the discussion in the previous section, this is a desirable property in the presence of noise in the training examples. Interpretation of the Analysis in the Image Domain. Once the mathematical description of the shape variation is obtained, it needs to be mapped back into the image domain for visualization and anatomical interpretation. In the generative case, this is traditionally done by visualizing the first few principal modes of deformation derived from PCA [5,10,12,18]. Since the model is linear, this step is particularly simple and intuitive. This technique can also be used in linear discrimination by noting that for a linear classifier, the deformation that represents the differences between the two classes is fully determined by the normal to the separating hyperplane and can be used for visualization in the image domain [8]. In fact, this is a special case of the general solution presented in the next section. Recently, a non-linear kernel based version of PCA was proposed in [19], following the approach of kernel based SVMs. It was utilized in [15] for constructing a non-linear model of shape variation of face contours for a wide range of the viewing angles. In this application, the generative linear model in the higher dimensional space had to be mapped back into the image domain for sampling new examples of face contours. The analysis presented in the next section derives an analogous result for the discriminative case.
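A minimal sketch of the Gaussian-kernel classifier of Eq. (2) and its analytic gradient; the support vectors and coefficients below are toy values for illustration, not the output of an actual SVM optimizer:

```python
import numpy as np

def gaussian_kernel(u, v, gamma):
    """K(u, v) = exp(-||u - v||^2 / gamma), the kernel family used here."""
    return np.exp(-np.sum((u - v) ** 2) / gamma)

def decision_function(x, support, alpha_y, b, gamma):
    """f(x) = sum_k alpha_k y_k K(x, x_k) + b  (Eq. 2)."""
    return sum(ay * gaussian_kernel(x, xk, gamma)
               for ay, xk in zip(alpha_y, support)) + b

def decision_gradient(x, support, alpha_y, gamma):
    """Analytic gradient for the Gaussian kernel:
    grad f(x) = sum_k alpha_k y_k * (-2/gamma) * (x - x_k) * K(x, x_k)."""
    return sum(ay * (-2.0 / gamma) * (x - xk) * gaussian_kernel(x, xk, gamma)
               for ay, xk in zip(alpha_y, support))

# Toy example with two hypothetical support vectors.
support = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
alpha_y = [1.0, -1.0]          # alpha_k * y_k for each support vector
b, gamma = 0.0, 0.5

x = np.array([0.3, 0.2])
label = np.sign(decision_function(x, support, alpha_y, b, gamma))
```

The locality property discussed above is visible here: each support vector's term decays exponentially with distance from it.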
3 From a Classifier to Shape Differences
In this section, we first analyze the classification function in the original feature space and derive the direction that corresponds to the maximal change in the classifier output while minimizing displacement within the class. Then we note that the shape descriptors used for classification are not necessarily the optimal representation for modeling shape differences in the image domain, and show how to express the "optimal direction" in terms of deformations of the outline mesh of the segmented object, which is a more appropriate representation.

3.1 Shape Differences in the Feature Space
Eq. (2) and Eq. (3) imply that the classification function f (x) is directly proportional to the signed distance from the input point to the separating boundary computed in the higher dimensional space defined by the mapping Ψ . It therefore depends only on the projection of the vector Ψ (x) onto the normal to the hyperplane w, while ignoring the component of Ψ (x) that is perpendicular to w. This suggests that in order to create a displacement of Ψ (x) that corresponds to the difference between the two classes, one should change the vector’s projection onto w while keeping its perpendicular component constant. Below, we derive the displacement of x in the original space that achieves the desired displacement of Ψ (x).
522
Polina Golland et al.
As we move from x to x + dx in the original space, the corresponding vector in the range of Ψ moves from z = Ψ(x) to z + dz = Ψ(x + dx). The deviation of the displacement vector dz from the direction of the classification vector w,

\[ e^2 = \left\| dz - \frac{dz \cdot w}{\|w\|^2}\, w \right\|^2 = \|dz\|^2 - \frac{(dz \cdot w)^2}{\|w\|^2}, \tag{4} \]

can be expressed as a function of dot products in the range of Ψ as shown here:

\[ \|w\|^2 = \Big\| \sum_k \alpha_k y_k \Psi(x_k) \Big\|^2 \tag{5} \]
\[ = \sum_{k,m} \alpha_k \alpha_m y_k y_m \, \Psi(x_k) \cdot \Psi(x_m) \tag{6} \]
\[ = \sum_{k,m} \alpha_k \alpha_m y_k y_m \, K(x_k, x_m), \tag{7} \]

\[ dz \cdot w = \sum_k \alpha_k y_k \left( \Psi(x + dx) - \Psi(x) \right) \cdot \Psi(x_k) \tag{8} \]
\[ = \sum_k \alpha_k y_k \left( K(x + dx, x_k) - K(x, x_k) \right) \tag{9} \]
\[ = \sum_k \alpha_k y_k \sum_i \left. \frac{\partial K(u, v)}{\partial u_i} \right|_{(u=x,\, v=x_k)} dx_i \tag{10} \]
\[ = \nabla f(x)\, dx, \tag{11} \]

where u_i denotes the i-th component of vector u, and

\[ \|dz\|^2 = \| \Psi(x + dx) - \Psi(x) \|^2 \tag{12} \]
\[ = K(x + dx, x + dx) + K(x, x) - 2 K(x + dx, x) \tag{13} \]
\[ = \sum_{i,j} \left( \frac{1}{2} \left. \frac{\partial^2 K(u, u)}{\partial u_i \partial u_j} \right|_{u=x} - \left. \frac{\partial^2 K(u, v)}{\partial u_i \partial u_j} \right|_{(u=x,\, v=x)} \right) dx_i\, dx_j \tag{14} \]
\[ = \sum_{i,j} \left. \frac{\partial^2 K(u, v)}{\partial u_i \partial v_j} \right|_{(u=x,\, v=x)} dx_i\, dx_j \tag{15} \]
\[ = \sum_{i,j} H_x(i, j)\, dx_i\, dx_j = dx^T H_x\, dx, \tag{16} \]
where matrix Hx is the upper right quarter of the Hessian of the kernel function K evaluated at (x, x). Substituting into Eq. (4), we obtain
\[ e^2 = dx^T \left( H_x - \|w\|^{-2}\, \nabla f^T(x)\, \nabla f(x) \right) dx. \tag{17} \]
Deformation Analysis for Shape Based Classification
523
This is a quadratic form that defines the deviation of the displacement dz from w as a function of the displacement dx in the original feature space. The error is minimized when dx is aligned with the smallest eigenvector of the matrix

\[ Q_x = H_x - \|w\|^{-2}\, \nabla f^T(x)\, \nabla f(x). \tag{18} \]
We call this direction the principal deformation, in analogy to the principal modes of variation in the generative case. We can compute Q_x in the original space using Eq. (7), Eq. (10), and Eq. (15) to compute \|w\|, ∇f(x) and H_x respectively. Note that since the mapping Ψ is non-linear, the principal deformation is generally not spatially uniform and has to be determined for every input feature vector x separately. Interestingly, an analytical solution of this problem exists for a wide family of kernel functions, including the linear kernel (K(u, v) = u · v), as well as all kernel functions that depend only on the distance between the two input vectors (K(u, v) = k(\|u − v\|)). For these functions, H_x = cI, where c is a constant and I is the identity matrix, and the principal deformation vector is parallel to the largest eigenvector of the matrix \|w\|^{-2} ∇f^T(x) ∇f(x), which is the gradient of the classification function, ∇f^T(x). It is well known that in order to achieve the fastest change in f(x), one moves along the gradient of the function. In our case, the gradient of the function also corresponds to the smallest distortion of the feature vector along irrelevant dimensions. For the linear kernel, H_x is the identity matrix (c = 1), and the principal deformation is equal to the classification vector (the distortion is zero). For the Gaussian kernel, c = 2/γ, and while the principal deformation is parallel to the function gradient, it does not achieve the classification direction precisely. One example of commonly used kernel functions for which H_x is not diagonal, and for which the general eigenvalue problem therefore has to be solved, is the polynomial kernel family:

\[ K(u, v) = (1 + u \cdot v)^d, \tag{19} \]

where d is the degree of the polynomial.
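The eigenvector computation above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it assumes a trained SVM with a Gaussian kernel K(u, v) = exp(−‖u − v‖²/γ), so that H_x = (2/γ)I, and takes the support vectors and the products α_k y_k as given. All names are illustrative.

```python
import numpy as np

def principal_deformation(x, support_vecs, alpha_y, gamma):
    """Sketch of the principal deformation direction (Eq. 18) for a
    trained SVM with a Gaussian kernel K(u, v) = exp(-||u - v||^2 / gamma)
    (an assumed kernel form).  `support_vecs` is an (n, d) array of
    support vectors x_k; `alpha_y` holds the products alpha_k * y_k."""
    diffs = x[None, :] - support_vecs                      # (n, d)
    k_vals = np.exp(-np.sum(diffs**2, axis=1) / gamma)     # K(x, x_k)
    # Gradient of the classification function, Eqs. (10)-(11):
    # d/du K(u, x_k) = -(2/gamma) (u - x_k) K(u, x_k)
    grad_f = -(2.0 / gamma) * (alpha_y * k_vals) @ diffs   # (d,)
    # ||w||^2 via the kernel trick, Eq. (7)
    gram = np.exp(-np.sum((support_vecs[:, None] - support_vecs[None, :])**2,
                          axis=2) / gamma)
    w_norm2 = alpha_y @ gram @ alpha_y
    # Q_x of Eq. (18); for this distance-based kernel H_x = (2/gamma) I,
    # so the smallest eigenvector is parallel to grad_f, but the general
    # eigenvalue computation is shown for completeness.
    Q = (2.0 / gamma) * np.eye(len(x)) - np.outer(grad_f, grad_f) / w_norm2
    evals, evecs = np.linalg.eigh(Q)                       # ascending order
    return evecs[:, 0]   # eigenvector of the smallest eigenvalue
```

For a non-diagonal H_x (e.g. the polynomial kernel above), only the construction of Q would change; the eigendecomposition step stays the same.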
3.2 From Shape Differences to Deformation
If the feature vectors extracted from the input images can be used successfully by a generative model of shape variation, the result derived above is sufficient: in order to visualize shape differences between the classes, one should estimate the principal deformation and then generate examples in the image domain that demonstrate the changes that the shape undergoes as its feature vector moves along the principal deformation direction. However, many shape representations that are well suited for classification do not lend themselves naturally to this approach. The distance transform is one such representation. Distance transforms do not form a linear vector space – a linear combination of two distance transforms is not always a valid distance transform – and therefore cannot be used directly for generating new examples of the shape. Some representations do not provide an easy way to model deformations locally, which could be a desired property. In this section, we demonstrate how one can use two different representations for classification and visualization and still obtain the optimal reconstruction.

We choose to use surface meshes extracted from the segmented images to model shape deformation. By selectively changing the locations of the mesh nodes, this representation can be used very effectively for modeling local deformations of the shape. Furthermore, if we form a feature vector by concatenating node positions in the image, we can change the values of the vector components arbitrarily and still represent a particular shape in the image domain. However, this representation is unsuitable for classification purposes, mainly because the feature vector length can differ between example shapes and, moreover, it is difficult to establish correspondences between nodes in different meshes. A finite set of points, often manually selected landmarks, can be used as a shape descriptor [1,5], but we are interested in a dense representation that can be extracted fully automatically from the segmented images.

Once the classifier function has been learned in the original feature space (distance transforms in our example), we can perform the search for the principal deformation direction as described in the previous section. Instead, however, we transfer the search into the space of node displacements in the original surface mesh by computing a local Jacobian of the distance transform components with respect to the displacements of the mesh nodes. We restrict the node motion to the direction of the normal to the outline at each node, combining the displacements into a new feature vector s. For the distance transform descriptors, each voxel is only affected by the closest node in the outline mesh².
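The non-linearity of the distance transform representation noted above is easy to verify numerically. This is a small illustrative check, not from the paper: averaging the (exterior, unsigned) distance transforms of two binary shapes produces a map that is not the distance transform of any shape.

```python
import numpy as np

def edt(mask):
    """Brute-force Euclidean distance transform of a small binary mask:
    for each grid point, the distance to the nearest 'inside' point."""
    ii, jj = np.nonzero(mask)
    pts = np.stack([ii, jj], axis=1)                      # inside points
    gi, gj = np.indices(mask.shape)
    grid = np.stack([gi.ravel(), gj.ravel()], axis=1)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(mask.shape)

# Two overlapping square shapes on a 7x7 grid
a = np.zeros((7, 7), bool); a[1:4, 1:4] = True
b = np.zeros((7, 7), bool); b[3:6, 3:6] = True
mean = 0.5 * (edt(a) + edt(b))
# The zero set of the average is the intersection of the two shapes;
# recomputing the distance transform of that set does not reproduce
# the average, so the average is not itself a distance transform.
valid = np.allclose(edt(mean == 0), mean)
print(valid)  # False
```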
We can express infinitesimal changes in the distance transform feature vectors dx as a function of the displacements of the points on the outline of the shape ds:

\[ J_x(i, j) = \frac{\partial x_i}{\partial s_j} = \begin{cases} 1, & \text{node } j \text{ is the closest node to voxel } i, \\ 0, & \text{otherwise,} \end{cases} \tag{20} \]

or, in matrix notation, dx = J_x ds. Substituting into Eq. (18), we can see that the principal deformation in the space of node displacements s is the smallest eigenvector of the matrix

\[ Q_s = J_x^T \left( H_x - \|w\|^{-2}\, \nabla f^T(x)\, \nabla f(x) \right) J_x. \tag{21} \]
Note that J_x is very sparse – in fact, only one element in every row is non-zero (or a few, for the skeleton voxels) – and that the dimensionality of the new space is significantly lower than the dimensionality of the distance transform feature vectors, which simplifies the computation.
² For most voxels, there is only one such node, but voxels that belong to the skeleton of a shape have at least two closest nodes. However, the number of skeleton voxels is typically small (5-8% in our experiments) and they do not affect the results significantly. We ignore the skeleton voxels in the derivation, but our experiments demonstrated that the resulting deformations change very little if we incorporate the skeleton voxels into the analysis.
In the case of linear or Gaussian kernels, the principal deformation in the space of the mesh deformations can be computed explicitly:

\[ s = J_x^T\, \nabla f^T(x), \tag{22} \]

or equivalently, for node j, the optimal displacement is

\[ s_j = \sum_i \frac{\partial f(x)}{\partial x_i} \frac{\partial x_i}{\partial s_j} = \sum_{i\,:\,\text{node } j \text{ is the closest node to voxel } i} \frac{\partial f(x)}{\partial x_i}. \tag{23} \]
To summarize, we use deformations of the original shapes to demonstrate shape differences learned from example images. The shape descriptors used for classification are linked to the node displacements in the outline mesh through a linear model, which allows searching for the principal deformation in a space that is particularly well suited for deformation representation. This approach can be applied to a variety of shape descriptors, as long as one can establish local dependence of the descriptor components on the displacements of the mesh nodes.
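Because J_x contains a single 1 per row (Eq. 20), the product s = J_xᵀ ∇fᵀ(x) of Eq. (22) reduces to pooling gradient values over voxels by their closest mesh node. A minimal sketch under that assumption (names illustrative, not the authors' code):

```python
import numpy as np

def node_displacements(grad_f, closest_node, n_nodes):
    """Eq. (23): accumulate the classification-function gradient over
    voxels onto the mesh node closest to each voxel.  Equivalent to
    s = J_x^T grad_f with the 0/1 Jacobian of Eq. (20); ties at
    skeleton voxels are assumed resolved to a single node."""
    return np.bincount(closest_node, weights=grad_f, minlength=n_nodes)

# Toy check against the explicit (dense) Jacobian:
grad_f = np.array([1.0, 2.0, 3.0, 4.0])       # df/dx_i per voxel
closest = np.array([0, 1, 0, 2])              # closest mesh node per voxel
s = node_displacements(grad_f, closest, 3)    # -> [4., 2., 4.]
J = np.zeros((4, 3)); J[np.arange(4), closest] = 1.0
assert np.allclose(s, J.T @ grad_f)
```

The `bincount` form avoids materializing the very sparse J_x at all, which matches the computational remark above.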
4 Experimental Results
In this section, we demonstrate the method on two data sets. The first is an artificially constructed example for which the shape differences are known in advance. The second is a real medical data set.

In the artificial example, we generated 30 examples of ellipsoidal shapes of varying sizes. The width, height and thickness of the shapes were sampled from a ±10 voxel variation range centered at 20, 30 and 40 voxels respectively. The first 10 examples were assigned to the first class and had a spherical bump added to them. Fig. 1a shows example shapes from each class. We extracted the distance transforms (Fig. 1b), performed the classification and constructed the optimal deformation both in the distance transform space (Fig. 1c) and in the space of the mesh node displacements (Fig. 2). We can see that the classifier successfully identified the area of the bump as the area of greatest difference between the two groups. Fig. 3 shows several frames in the deformation process for two different input shapes.

We also report the results of the method applied to a data set that contains MRI scans of 15 schizophrenia patients and 15 matched controls. In each scan, the hippocampus-amygdala complex was manually segmented. More details on the subject selection and data acquisition can be found in [16]. The same paper reports statistically significant differences in the left hippocampus based on relative volume measurements (the volume of the structure normalized by the total volume of the intracranial cavity). We previously showed statistically significant differences between the groups based on SVM classification using distance transforms as feature vectors [9]. Here, we present the deformations that correspond to the learned differences. Fig. 4 and Fig. 5 show two examples of the
(a) Three example shapes from each class.
(b) Several slices of the distance transform for the top left shape.
(c) Several slices of the principal deformation for the top left shape. The values are linearly scaled so that dark regions correspond to negative values and bright areas to positive values.
Fig. 1. Artificial data set. Example shapes are shown for each class, as well as the distance transform and the principal deformation for the top left shape.
most prominent deformations identified for normal controls and schizophrenia patients respectively. These examples were among those selected as support vectors by the algorithm (a total of seven support vectors was selected; the other support vectors represent deformations similar to those shown here), and each represents a different region in the feature space and the corresponding deformation. We can see that the deformation is well localized, and that the hippocampus in the schizophrenia group appears thinner and shorter than in the normal group. While this result and its physiological implications need further validation, it demonstrates our approach. We are currently working on validating the technique on several other data sets.
5 Conclusions
In this paper, we have demonstrated a general technique for deriving the deformation representing statistical differences between two classes of anatomical shapes. We identify the differences by training a classifier in the space of feature vectors (distance transforms). Then the classifier is analyzed to arrive at the
(a) Principal deformation in the mesh space.
(b) Two more views of the principal deformation.
Fig. 2. Example shapes from Fig. 1a with the principal deformation values painted on. Colors are used to indicate the direction and the magnitude of the deformation, varying from blue (moving inward) to green (zero deformation) to red (moving outward).
surface based representation of the deformation. We use a kernel based classification technique (SVMs), but this approach can be extended to other classification methods that express the classification function analytically in terms of the training data. In order to derive the deformation, we compute the derivative of our representation with respect to small displacements of the points on the outline and use it to express the changes in the classification function in terms of deformations of the outline. Surface based representations are excellent for expressing deformations, which can be easy and intuitive to interpret in anatomical terms. However, these representations are usually difficult to use as feature vectors, mostly because it is difficult to reliably establish correspondences between the points on the meshes that represent example shapes. Our approach overcomes this problem by performing classification using volume based feature vectors and then mapping them back to the surface based representation for every individual shape example. The proposed technique can provide the currently missing step of detailed interpretation and visualization of a mathematical model of the statistical differences in anatomical shape detected from example images.

Acknowledgments. Quadratic optimization was performed using the PR LOQO optimizer written by Alex Smola.
Fig. 3. Several frames in the deformation process for the first example shape in each group in Fig. 1a. Note how the shape of the protrusion changes as we vary the amount of deformation.

This research was supported in part by NSF IIS 9610249 grant. M. E. Shenton was supported by NIMH K02, MH 01110 and R01 MH 50747 grants, R. Kikinis was supported by NIH PO1 CA67165, R01RR11747, P41RR13218 and NSF ERC 9731748 grants.
References

1. F. L. Bookstein. Landmark methods for forms without landmarks: morphometrics of group differences in outline shape. Medical Image Analysis, 1(3):225-243, 1996.
2. C. Brechbühler, G. Gerig and O. Kübler. Parameterization of closed surfaces for 3-D shape description. CVGIP: Image Understanding, 61:154-170, 1995.
3. C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
4. C. J. C. Burges. Geometry and Invariance in Kernel Based Methods. In Advances in Kernel Methods: Support Vector Learning, Eds. B. Schölkopf, C. J. C. Burges, A. J. Smola, MIT Press, 89-116, 1998.
5. T. F. Cootes, C. J. Taylor, D. H. Cooper and J. Graham. Training Models of Shape from Sets of Examples. In Proc. British Machine Vision Conference, 9-18, Springer-Verlag, 1992.
6. J. G. Csernansky et al. Hippocampal morphometry in schizophrenia by high dimensional brain mapping. Proc. Nat. Acad. of Science, 95(19):11406-11411, 1998.
7. C. Davatzikos et al. A Computerized Method for Morphological Analysis of the Corpus Callosum. J. Computer Assisted Tomography, 20:88-97, 1996.
8. P. Golland, W. E. L. Grimson and R. Kikinis. Statistical Shape Analysis Using Fixed Topology Skeletons: Corpus Callosum Study. In Proc. IPMI'99, LNCS 1613:382-387, 1999.
9. P. Golland, W. E. L. Grimson, M. E. Shenton and R. Kikinis. Small Sample Size Learning for Shape Analysis of Anatomical Structures. In Proc. MICCAI'2000, LNCS 1935:72-82, 2000.
10. A. Kelemen, G. Székely, and G. Gerig. Three-dimensional Model-Based Segmentation. In Proc. IEEE Intl. Workshop on Model Based 3D Image Analysis, Bombay, India, 87-96, 1998.
(a) Normal control #1.
(b) Normal control #2.
Fig. 4. Two examples (support vectors) from the normal control group with the principal deformation painted on the surface. In each row, the right and the left hippocampus are shown from a different viewpoint (anterior, posterior and lateral respectively). The color-coding scheme is identical to Fig. 2.
11. J. Martin, A. Pentland, and R. Kikinis. Shape Analysis of Brain Structures Using Physical and Experimental Models. In Proc. CVPR'94, 752-755, 1994.
12. M. E. Leventon, W. E. L. Grimson and O. Faugeras. Statistical Shape Influence in Geodesic Active Contours. In Proc. CVPR'2000, 316-323, 2000.
13. A. M. C. Machado and J. C. Gee. Atlas warping for brain morphometry. In Proc. SPIE Medical Imaging 1998: Image Processing, SPIE 3338:642-651, 1998.
14. S. M. Pizer et al. Segmentation, Registration, and Measurement of Shape Variation via Image Object Shape. IEEE Transactions on Medical Imaging, 18(10):851-865, 1999.
15. S. Romdhani, S. Gong and A. Psarrou. A Multi-View Nonlinear Active Shape Model Using Kernel PCA. In Proc. BMVC'99, 483-492, 1999.
16. M. E. Shenton et al. Abnormalities in the left temporal lobe and thought disorder in schizophrenia: A quantitative magnetic resonance imaging study. New England J. Medicine, 327:604-612, 1992.
17. L. Staib and J. Duncan. Boundary finding with parametrically deformable models. IEEE PAMI, 14(11):1061-1075, 1992.
(a) Schizophrenia patient #1.
(b) Schizophrenia patient #2.
Fig. 5. Two examples (support vectors) from the schizophrenia group with the principal deformation painted on the surface. In each row, the right and the left hippocampus are shown from a different viewpoint (anterior, posterior and lateral respectively). The color-coding scheme is identical to Fig. 2.
18. G. Székely et al. Segmentation of 2D and 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier contour and surface models. Medical Image Analysis, 1(1):19-34, 1996.
19. B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
20. V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
Subject Index
3D shape models, 50
3D thinning, 409
3D ultrasound images, 169
3D volume, 388
4D imaging, 431
active appearance models, 446
active shape models, 50, 380
anisotropy, 121
atlas matching, 315
attribute vector, 475
augmented reality, 169
automatic correspondence, 50
automatic landmarking, 50, 162
B-spline filtering, 431
Bayesian estimation, 155, 225
biomechanical model, 183
brain atlas, 488
brain atrophy, 141
brain connectivity, 106
brain image, 423
brain imaging, 272
brain networks, 218
brain registration, 475
brain shift, 183
brain volume, 141
CAD, 134
capillaries, 204
cardiac, 211
cardiac deformation, 36
cardiac MRI, 446
causal representation, 259
central path, 409
cerebral cortex, 488
characterization, 438
classification, 402
classifications, 134
clustering, 211
conformal metric, 365
connectivity, 121
corpus callosum, 372
correspondence, 329
cortical sulci, 315
cortical surface reconstruction, 395
covariant PDEs, 488
decision support systems, 134
deformable model, 183
deformable models, 239, 365, 380
deformable templates, 64, 329
deformation, 155, 517
detection models, 1
diffusion, 92, 121
digital topology, 395, 409
distance transform, 78, 358
drift, 232
DT-MRI, 92
DTI, 106
dynamic image data, 225
dynamics, 286
echocardiography, 446, 453
edge preserving smoothing, 431
EEG, 286
elastic model, 36
elastic registration, 78
error analysis, 24
estimability, 259
Euclidean distance, 358
factor analysis, 211
feature detection, 453
feature matching, 162
feature transform, 358
finite element methods, 344
fixed effect model, 190
flow field, 461
fluoroscopy, 204
fMRI, 190, 197, 218, 232, 259
free-form deformations, 344
front propagation, 416
functional image segmentation, 468
functional magnetic resonance imaging, 232
Gabor Filter, 176
Hessian, 388
high-resolution MRI, 239
human vision, 1
image enhancement, 453
image guided surgery, 169
image quality, 453
image registration, 36, 162, 329
image sequences, 446
image-guided surgery, 155
inverse problem, 272
inverse transformation, 329
isosurface algorithm, 395
kinetic modeling, 211
L2E, 176
landmark registration, 329
landmarking, 78
left ventricular deformation, 36
lesion-model, 438
level set, 106
level set methods, 416, 461
linear discriminant analysis, 372
linear structures, 162
local phase coherence, 461
m-reps, 402
magnetic resonance angiography (MRA), 461
magnetic resonance imaging, 395, 423
mammography, 1, 134, 162
Markov model, 190
Markov random field, 468
measurement space, 259
medial description, 64
medial representation, 402
medial shape description, 502
MEG, 286
MEG/EEG, 272
model guidance, 183
model-based segmentation, 380
morphology, 372
MRI, 106, 141, 211, 286, 488
multi-scale, 64
multigrid methods, 365
multigrid minimization, 315
multilevel methods, 416
Multimodal, 176
multiple sclerosis, 141, 438
multiscale, 155
multivariate regression, 197
myeloarchitecture, 239
neocortical layers, 239
non rigid registration, 169
non-rigid registration, 344, 372
nonlinear registration, 488
nonseparable variogram model, 197
normalised mutual information, 344
null space, 259
observer performance, 1
observer templates, 24
outcome measure, 141
partial volume, 423
PCA, 190, 225
permeability, 204
phase synchronization, 272
positron emission tomography, 468
prior information, 225
psychophysics, 1
reconstruction, 286
registration, 204, 315
regularization, 36, 92, 272
replicator dynamics, 218
robust estimators, 315
Robust Registration, 176
saturation condition, 416
scale selection, 365
scale-space analysis, 431
scleroderma, 204
segmentation, 64, 365, 416, 423, 431, 438, 446, 461
shape analysis, 502
shape based classification, 517
shape difference, 502
shape variability, 402
shape-space, 388
skeleton, 409, 502
spatio-temporal, 438
spatio-temporal covariance model, 197
spatio-temporal random process, 197
spatiotemporal analysis, 190
statistical analysis, 517
statistical shape model, 475
statistical shape models, 50, 78
stenosis, 388
sulcal extraction, 475
sulcal labeling, 475
surface reconstruction, 183
tensor, 121
tensor fields, 92
three-dimensional, 78
tissue classification, 423
topology correction, 395
topology preservation, 409
tracking, 169
tractography, 106, 121
trend, 232
two-alternative forced-choice detection, 24
ultrasound, 155, 453
unbiased estimation, 24
validation, 344
variational methods, 92
visual strategy, 24
Voronoi diagram, 358
wavelet, 232
Author Index
Abbey, C.K., 24
Alberdi, E., 134
Alexander, D.C., 92
Algazi, V.R., 431
Allen, P.D., 204
Anderson, M., 204
Arendt, T., 239
Arnold, D.L., 141
Arridge, S.R., 92
Atkinson, D., 121
Ayache, N., 169
Backfrieder, W., 225
Balogh, E., 409
Barillot, C., 315
Barker, G.J., 106
Barrett, H.H., 12, 259
Batchelor, P.G., 121, 155
Benali, H., 197
Blackall, J.M., 155
Bosch, J.G., 446
Boukerroui, D., 453
Brückner, M.K., 239
Brady, M., 453, 461
Braga-Neto, U., 395
Burgess, A.E., 1
Cachier, P., 169
Calamante, F., 121
Castellano Smith, A.D., 344
Chen, J.L., 468
Christensen, G.E., 329
Chui, H., 300
Chung, A.C.S., 461
Clarkson, E., 12
Clarkson, E.W., 259
Collins, D.L., 141
Constable, R.T., 36
Cootes, T.F., 50
Coulon, O., 92
von Cramon, D.Y., 218, 239
Danielsson, P.-E., 388
Davatzikos, C., 475
David, O., 272
Davies, R.H., 50
Degenhard, A., 344
Di Bella, E.V.R., 211
Dione, D.P., 36
Droske, M., 416
Duncan, J.S., 36, 183, 300
Eckstein, M.P., 24
Erdőhelyi, B., 409
Evans, A.C., 141
Fletcher, P.T., 64
Fox, J., 134
Frangi, A.F., 78
Freire, L., 246
Fuchs, A., 286
Gallippi, C.M., 148
Garnero, L., 272
Ge, Y., 365
Gee, J.C., 372, 423
van der Geest, R.J., 446
Gerig, G., 438, 502
Gmitro, A.F., 259
Golland, P., 517
Grimson, W.E.L., 517
Gunn, R.N., 468
Gunn, S.R., 468
Halmai, C., 409
Han, X., 395, 475
Hausegger, K., 409
Hawkes, D.J., 155, 344
Hayes, C., 344
Hellier, P., 315
Herrick, A.L., 204
Hill, D.L.G., 121, 155, 344
Hoppin, J., 12
Hose, R., 344
Huesman, R.H., 431
Jacobson, F.L., 1
Jantzen, K.J., 286
Jirsa, V.K., 286
Johnson, H.J., 329
Joshi, S., 64, 402
Judy, P.F., 1
Kappos, L., 438
Kárný, M., 225
Kastis, G., 12
Kaus, M., 380
Kelso, J.A.S., 286
Kikinis, R., 517
King, A.P., 155
Kruggel, F., 197, 239
Kuba, A., 409
Kupinski, M., 12
Leach, M.O., 344
Lee, R., 134
Lehovich, A., 259
Lelieveldt, B.P.F., 446
Lin, Q., 388
Liu, J., 176
Lobregt, S., 380
Lohmann, G., 218
Lorenz, C., 380
Mangin, J.-F., 246
Marron, J.S., 402
Marroquin, J.L., 176
Marti, R., 162
Maurer, Jr., C.R., 358
McCarthy, G., 232
Mega, M.S., 488
Meyer, B., 416
Meyer, F.G., 232
Mitchell, S.C., 446
Montagnat, J., 141
Moore, T., 204
Nabavi, A., 183
Niessen, W.J., 78
Nixon, M.S., 468
Noble, J.A., 453, 461
Noe, A., 423
Onat, E.T., 36
Palágyi, K., 409
Papademetris, X., 36
Parker, G.J.M., 106
Pekar, V., 380
Pélégrini-Issac, M., 197
Pennec, X., 169
Penney, G.P., 155
Pettey, D.J., 372
Piyaratna, J., 190
Pizer, S.M., 64, 402
Prince, J.L., 395, 475
Qi, R., 358
Radü, E.-W., 438
Raghavan, V., 358
Rajapakse, J.C., 190
Rangarajan, A., 300
Rapoport, J.L., 488
Reiber, J.H.C., 446
Rettmann, M.E., 475
Reutter, B.W., 431
Rubin, C., 162
Rueckert, D., 78
Rumpf, M., 416
Šámal, M., 225
Schaller, C., 416
Schnabel, J.A., 78, 344
Schultz, R., 300
Shenton, M.E., 517
Sinusas, A.J., 36
Sitek, A., 211
Škrinjar, O., 183
Šmídl, V., 225
Sonka, M., 446
Sorantin, E., 409
Sordo, M., 134
Studholme, C., 183
Styner, M., 502
Summers, P., 461
Szabo, Z., 225
Székely, G., 438
Tanner, C., 344
Tao, X., 475
Taylor, C.J., 50, 204
Taylor, P., 134
Thall, A., 64
Thompson, P.M., 488
Todd-Pokropek, A., 134
Toga, A.W., 488
Tracton, G., 64
Trahey, G.E., 148
Truyen, R., 380
Varela, F.J., 272
Vemuri, B.C., 176
Vidal, C., 488
Weese, J., 380
Welti, D., 438
Wheeler-Kingshott, C.A.M., 106
Wiggins, C.J., 239
Win, L., 300
Wyatt, C., 365
Xu, C., 395
Yushkevich, P., 402
Zijdenbos, A.P., 141
Zwiggelaar, R., 162