Modeling and Imaging of Bioelectrical Activity Principles and Applications
BIOELECTRIC ENGINEERING
Series Editor: Bin He, University of Minnesota, Minneapolis, Minnesota
MODELING AND IMAGING OF BIOELECTRICAL ACTIVITY Principles and Applications Edited by Bin He
Modeling and Imaging of Bioelectrical Activity: Principles and Applications
Edited by Bin He, University of Minnesota, Minneapolis, Minnesota
Kluwer Academic/Plenum Publishers
New York, Boston, Dordrecht, London, Moscow
Library of Congress Cataloging-in-Publication Data

Modeling and imaging of bioelectrical activity: principles and applications / edited by Bin He.
p. ; cm. -- (Bioelectric engineering)
Includes bibliographical references and index.
ISBN 0-306-48112-X
1. Heart--Electric properties--Mathematical models. 2. Heart--Electric properties--Computer simulation. 3. Brain--Electric properties--Mathematical models. 4. Brain--Electric properties--Computer simulation. I. He, Bin, 1957- II. Series.
QP112.5.E46M634 2004
612'.01421--dc22    2003061963
ISBN 0-306-48112-X
©2004 Kluwer Academic/Plenum Publishers, New York
233 Spring Street, New York, New York 10013
http://www.wkap.nl/

10 9 8 7 6 5 4 3 2 1

A C.I.P. record for this book is available from the Library of Congress

All rights reserved

No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording, or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.

Permissions for books published in Europe: [email protected]
Permissions for books published in the United States of America: [email protected]

Printed in the United States of America
PREFACE
Bioelectrical activity is associated with living excitable tissue. It has been known, owing to the efforts of numerous investigators, that bioelectrical activity is closely related to the mechanisms and functions of excitable membranes in living organs such as the heart and the brain. A better understanding of bioelectrical activity, therefore, will lead to a better understanding of the functions of the heart and the brain as well as the mechanisms underlying the bioelectric phenomena. Bioelectrical activity can be better understood through two common approaches. The first approach is to directly measure bioelectrical activity within the living tissue. A representative example is the direct measurement using microelectrodes or a microelectrode array. In this direct measurement approach, important characteristics of bioelectrical activity, such as transmembrane potentials and ionic currents, have been recorded to study the bioelectricity of living tissue. Recently, direct measurement of bioelectrical activity has also been made using optical techniques. These electrical and optical techniques have played an important role in our investigations of the mechanisms of cellular dynamics in the heart and the brain. The second approach is to noninvasively study bioelectrical activity by means of modeling and imaging. Mathematical and computer models have offered a unique capability of correlating vast experimental observations and exploring the mechanisms underlying experimental data. Modeling also provides a virtual experimental setting, which enables well-controlled testing of hypotheses and theories. Based on the modeling of bioelectrical activity, noninvasive imaging approaches have been developed to detect, localize, and image bioelectrical sources that generate clinical measurements such as the electrocardiogram (ECG) and electroencephalogram (EEG).
Information obtained from imaging allows for elaboration of the mechanisms and functions of organ systems such as the heart and the brain. During the past few decades, significant progress has been made in modeling and imaging of bioelectrical activity in the heart and the brain. Most literature, however, has treated these research efforts in parallel, despite the strong similarity between them. The similarity arises from the biophysical point of view that membrane excitation in both cardiac cells and neurons can be treated as volume current sources. The clinical observations of ECG and EEG are the results of volume conduction of currents within a body volume conductor. The difference among bioelectrical activity originating from different organ systems is primarily due to the different physiological mechanisms underlying the phenomena. From the methodological point of view,
therefore, modeling and imaging of bioelectrical activity can be treated within one theoretical framework. Although this book focuses on bioelectric activity of the heart and the brain, the theory, methodology, and state-of-the-art research that are presented in this book should also be applicable to a variety of applications. The purpose of this book is to provide a state-of-the-art coverage of basic principles, theories, and methods of modeling and imaging of bioelectrical activity with applications to cardiac and neural electrical activity. It is aimed at serving as a reference book for researchers working in the field of modeling and imaging of bioelectrical activity, as an introduction to investigators who are interested in entering the field or acquiring knowledge about the current state of the field, and as a textbook for graduate students and seniors in a biomedical engineering, bioengineering, or medical physics curriculum. The first three chapters deal with the modeling of cellular activity, cell networks, and the whole organ for bioelectrical activity in the heart. Chapter 1 provides a systematic review of one-cell models and cell network models as applied to cardiac electrophysiology. It illustrates how modeling can help elucidate the mechanisms of cardiac cells and cell networks, and increase our understanding of cardiac pathology in three-dimensional and whole heart models. Chapter 2 provides a thorough theoretical treatment of the forward problem of bioelectricity, and in particular electrocardiography. Following a review of the theoretical basis of equivalent dipole source models and state-of-the-art numerical methods of computing the electrical potential fields, Chapter 2 discusses the applications of forward theory to whole heart modeling and defibrillation. Chapter 3 reviews important issues in whole heart modeling and its implementation as well as various applications of whole heart modeling and simulations of cardiac pathologies.
Chapter 3 also illustrates important clinical applications the modeling approach can offer. The following two chapters review the theory and methods of inverse imaging with applications to the heart. Chapter 4 provides a systematic treatment of the methods and applications of heart surface inverse solutions. Many investigations have been made in order to inversely estimate and reconstruct the potential distribution over the epicardium, or the activation sequence over the heart surface, from body surface electrocardiograms. Progress has also been made to estimate endocardial surface potentials and activation sequence from catheter recordings. These approaches and activities are well reviewed in Chapter 4. Chapter 5 reviews recent developments in three-dimensional electrocardiographic tomographic imaging. Recent research shows that, by incorporating a priori information into the inverse solutions, it is possible to estimate three-dimensional distributions of electrophysiological characteristics such as activation time and transmembrane potentials, or equivalent current dipole distribution. In particular, a whole-heart-model-based tomographic imaging approach is introduced, which illustrates the close relationship between modeling and imaging and the merits of model-based imaging. Chapter 6 deals with a noninvasive body surface mapping technology: surface Laplacian mapping. Compared with well-established body surface potential mapping, body surface Laplacian mapping has received attention relatively recently for its enhanced capability of identifying and mapping spatially separated multiple activities. This chapter also illustrates that a noninvasive mapping technique can be applied to imaging of bioelectrical activity originating from different organ systems, such as the heart and the brain. The subsequent two chapters treat inverse imaging of the brain from neuromagnetic and neuroelectric measurements, as well as functional magnetic resonance imaging (fMRI).
Chapter 7 reviews the forward modeling of the magnetoencephalogram (MEG), and neuromagnetic source imaging with a focus on spatial filtering approaches. Chapter 8 provides a general review of fMRI, linear inverse solutions for EEG and MEG, and multimodal imaging integrating EEG, MEG, and fMRI. Along with Chapters 4 and 5, these four chapters are intended to provide a solid foundation in inverse imaging methods as applied to imaging bioelectrical activity. Chapter 9 deals with tissue conductivity, an important parameter that is required in bioelectric inverse solutions. The conductivity parameter is needed in establishing accurate forward models of the body volume conductor and obtaining accurate inverse solutions using model-based inverse imaging. As most inverse solutions are derived from noninvasive measurements with the assumption of a known tissue conductivity distribution, the accuracy of tissue conductivity is crucial in ensuring accurate and robust imaging of bioelectrical activity. Chapter 9 systematically addresses this issue for various living tissues. This book is a collective effort by researchers who specialize in the field of modeling and imaging of bioelectrical activity. I am very grateful to them for their contributions during their very busy schedules and their patience during this process. I am indebted to Aaron Johnson, Brian Halm, Shoshana Sternlicht, and Kevin Sequeira of Kluwer Academic Publishers for their great support during this project. Financial support from the National Science Foundation, through grants of NSF CAREER Award BES-9875344, NSF BES-0218736, and NSF BES-0201939, is also greatly appreciated. We hope this book will provide an intellectual resource for your research and/or educational purposes in the fascinating field of modeling and imaging of bioelectrical activity.

Bin He
Minneapolis
CONTENTS
1 FROM CELLULAR ELECTROPHYSIOLOGY TO ELECTROCARDIOGRAPHY
Nitish V. Thakor, Vivek Iyer, and Mahesh B. Shenai
Introduction
1.1 The One-cell Model
1.1.1 Voltage Gating Ion Channel Kinetics (Hodgkin-Huxley Formalism)
1.1.2 Modeling the Cardiac Action Potential
1.1.3 Modeling Pathologic Action Potentials
1.2 Network Models
1.2.1 Cell-cell Coupling and Linear Cable Theory
1.2.2 Multidimensional Networks
1.2.3 Reconstruction of the Local Extracellular Electrogram (Forward Problem)
1.2.4 Modeling Pathology in Cellular Networks
1.3 Modeling Pathology in Three-dimensional and Whole Heart Models
1.3.1 Myocardial Ischemia
1.3.2 Preexcitation Studies
1.3.3 Hypertrophic Cardiomyopathy
1.3.4 Drug Integration in Three-dimensional Whole Heart Models
1.3.5 Genetic Integration in Three-dimensional Whole Heart Models
1.4 Discussion
References

2 THE FORWARD PROBLEM OF ELECTROCARDIOGRAPHY: THEORETICAL UNDERPINNINGS AND APPLICATIONS
Ramesh M. Gulrajani
2.1 Introduction
2.2 Dipole Source Representations
2.2.1 Fundamental Equations
2.2.2 The Bidomain Myocardium
2.3 Torso Geometry Representations
2.4 Solution Methodologies for the Forward Problem
2.4.1 Surface Methods
2.4.2 Volume Methods
2.4.3 Combination Methods
2.5 Applications of the Forward Problem
2.5.1 Computer Heart Models
2.5.2 Effects of Torso Conductivity Inhomogeneities
2.5.3 Defibrillation
2.6 Future Trends
References

3 WHOLE HEART MODELING AND COMPUTER SIMULATION
Daming Wei
3.1 Introduction
3.2 Methodology in 3D Whole Heart Modeling
3.2.1 Heart-torso Geometry Modeling
3.2.2 Inclusion of Specialized Conduction System
3.2.3 Incorporating Rotating Fiber Directions
3.2.4 Action Potentials and Electrophysiologic Properties
3.2.5 Propagation Models
3.2.6 Cardiac Electric Sources and Surface ECG Potentials
3.3 Computer Simulations and Applications
3.3.1 Simulation of the Normal Electrocardiogram
3.3.2 Simulation of ST-T Waves in Pathologic Conditions
3.3.3 Simulation of Myocardial Infarction
3.3.4 Simulation of Pace Mapping
3.3.5 Spiral Waves: A New Hypothesis of Ventricular Fibrillation
3.3.6 Simulation of Antiarrhythmic Drug Effect
3.4 Discussion
References

4 HEART SURFACE ELECTROCARDIOGRAPHIC INVERSE SOLUTIONS
Fred Greensite
4.1 Introduction
4.1.1 The Rationale for Imaging Cardiac Electrical Function
4.1.2 A Historical Perspective
4.1.3 Notation and Conventions
4.2 The Basic Model and Source Formulations
4.3 Heart Surface Inverse Problems Methodology
4.3.1 Solution Nonuniqueness and Instability
4.3.2 Linear Estimation and Regularization
4.3.3 Stochastic Processes and Time Series of Inverse Problems
4.4 Epicardial Potential Imaging
4.4.1 Statistical Regularization
4.4.2 Tikhonov Regularization and Its Modifications
4.4.3 Truncation Schemes
4.4.4 Specific Constraints in Regularization
4.4.5 Nonlinear Regularization Methodology
4.4.6 An Augmented Source Formulation
4.4.7 Different Methods for Regularization Parameter Selection
4.4.8 The Body Surface Laplacian Approach
4.4.9 Spatiotemporal Regularization
4.4.10 Recent in Vitro and in Vivo Work
4.5 Endocardial Potential Imaging
4.6 Imaging Features of the Action Potential
4.6.1 Myocardial Activation Imaging
4.6.2 Imaging Other Features of the Action Potential
4.7 Discussion
References

5 THREE-DIMENSIONAL ELECTROCARDIOGRAPHIC TOMOGRAPHIC IMAGING
Bin He
5.1 Introduction
5.2 Three-Dimensional Myocardial Dipole Source Imaging
5.2.1 Equivalent Moving Dipole Model
5.2.2 Equivalent Dipole Distribution Model
5.2.3 Inverse Estimation of 3D Dipole Distribution
5.2.4 Numerical Example of 3D Myocardial Dipole Source Imaging
5.3 Three-Dimensional Myocardial Activation Imaging
5.3.1 Outline of the Heart-Model based 3D Activation Time Imaging Approach
5.3.2 Computer Heart Excitation Model
5.3.3 Preliminary Classification System
5.3.4 Nonlinear Optimization System
5.3.5 Computer Simulation
5.3.6 Discussion
5.4 Three-Dimensional Myocardial Transmembrane Potential Imaging
5.5 Discussion
References

6 BODY SURFACE LAPLACIAN MAPPING OF BIOELECTRIC SOURCES
Bin He and Jie Lian
6.1 Introduction
6.1.1 High-resolution ECG and EEG
6.1.2 Biophysical Background of the Surface Laplacian
6.2 Surface Laplacian Estimation Techniques
6.2.1 Local Laplacian Estimates
6.2.2 Global Laplacian Estimates
6.2.3 Surface Laplacian Based Inverse Problem
6.3 Surface Laplacian Imaging of Heart Electrical Activity
6.3.1 High-resolution Laplacian ECG Mapping
6.3.2 Performance Evaluation of the Spline Laplacian ECG
6.3.3 Surface Laplacian Based Epicardial Inverse Problem
6.4 Surface Laplacian Imaging of Brain Electrical Activity
6.4.1 High-resolution Laplacian EEG Mapping
6.4.2 Performance Evaluation of the Spline Laplacian EEG
6.4.3 Surface Laplacian Based Cortical Imaging
6.5 Discussion
References

7 NEUROMAGNETIC SOURCE RECONSTRUCTION AND INVERSE MODELING
Kensuke Sekihara and Srikantan S. Nagarajan
7.1 Introduction
7.2 Brief Summary of Neuromagnetometer Hardware
7.3 Forward Modeling
7.3.1 Definitions
7.3.2 Estimation of the Sensor Lead Field
7.3.3 Low-rank Signals and Their Properties
7.4 Spatial Filter Formulation and Non-adaptive Spatial Filter Techniques
7.4.1 Spatial Filter Formulation
7.4.2 Resolution Kernel
7.4.3 Non-adaptive Spatial Filter
7.4.4 Noise Gain and Weight Normalization
7.5 Adaptive Spatial Filter Techniques
7.5.1 Scalar Minimum-variance-based Beamformer Techniques
7.5.2 Extension to Eigenspace-projection Beamformer
7.5.3 Comparison between Minimum-variance and Eigenspace Beamformer Techniques
7.5.4 Vector-type Adaptive Spatial Filter
7.6 Numerical Experiments: Resolution Kernel Comparison between Adaptive and Non-adaptive Spatial Filters
7.6.1 Resolution Kernel for the Minimum-norm Spatial Filter
7.6.2 Resolution Kernel for the Minimum-variance Adaptive Spatial Filter
7.7 Numerical Experiments: Evaluation of Adaptive Beamformer Performance
7.7.1 Data Generation and Reconstruction Condition
7.7.2 Results from Minimum-variance Vector Beamformer
7.7.3 Results from the Vector-extended Borgiotti-Kaplan Beamformer
7.7.4 Results from the Eigenspace Projected Vector-extended Borgiotti-Kaplan Beamformer
7.8 Application of Adaptive Spatial Filter Technique to MEG Data
7.8.1 Application to Auditory-somatosensory Combined Response
7.8.2 Application to Somatosensory Response: High-resolution Imaging Experiments
References

8 MULTIMODAL IMAGING FROM NEUROELECTROMAGNETIC AND FUNCTIONAL MAGNETIC RESONANCE RECORDINGS
Fabio Babiloni and Febo Cincotti
8.1 Introduction
8.2 Generalities on Functional Magnetic Resonance Imaging
8.2.1 Block-design and Event-Related fMRI
8.3 Inverse Techniques
8.3.1 Acquisition of Volume Conductor Geometry
8.3.2 Dipole Localization Techniques
8.3.3 Cortical Imaging
8.3.4 Distributed Linear Inverse Estimation
8.4 Multimodal Integration of EEG, MEG and fMRI Data
8.4.1 Visible and Invisible Sources
8.4.2 Experimental Design and Co-registration Issues
8.4.3 Integration of EEG and MEG Data
8.4.4 Functional Hemodynamic Coupling and Inverse Estimation of Source Activity
8.5 Discussion
References

9 THE ELECTRICAL CONDUCTIVITY OF LIVING TISSUE: A PARAMETER IN THE BIOELECTRICAL INVERSE PROBLEM
Maria J. Peters, Jeroen G. Stinstra, and Ibolya Leveles
9.1 Introduction
9.1.1 Scope of this Chapter
9.1.2 Ambiguity of the Effective Conductivity
9.1.3 Measuring the Effective Conductivity
9.1.4 Temperature Dependence
9.1.5 Frequency Dependence
9.2 Models of Human Tissue
9.2.1 Composites of Human Tissue
9.2.2 Conductivities of Composites of Human Tissue
9.2.3 Maxwell's Mixture Equation
9.2.4 Archie's Law
9.3 Layered Structures
9.3.1 The Scalp
9.3.2 The Skull
9.3.3 A Layer of Skeletal Muscle
9.4 Compartments
9.4.1 Using Implanted Electrodes
9.4.2 Combining Measurements of the Potential and the Magnetic Field
9.4.3 Estimation of the Equivalent Conductivity using Impedance Tomography
9.5 Upper and Lower Bounds
9.5.1 White Matter
9.5.2 The Fetus
9.6 Discussion
References

INDEX
1
FROM CELLULAR ELECTROPHYSIOLOGY TO ELECTROCARDIOGRAPHY
by Nitish V. Thakor, Vivek Iyer, and Mahesh B. Shenai
Department of Biomedical Engineering, The Johns Hopkins University, 720 Rutland Ave., Baltimore MD 21205
INTRODUCTION

Since many cardiac pathologies manifest themselves at the cellular and molecular levels, extrapolation to clinical variables, such as the electrocardiogram (ECG), would prove invaluable to diagnosis and treatment. One ultimate goal of the cardiac modeler is to integrate cellular level detail with quantitative properties of the ECG (a property of the whole heart). This magnificent task is not unlike a forest ranger attempting to document each leaf in a massive forest. Both the modeler and ranger need to place fundamental elements in the context of a broader landscape. But now, with the recent genome explosion, the modeler needs to examine the "leaves" at even greater molecular detail. Fortunately, the rapid explosion in computational power allows the modeler to span the details of each molecular "leaf" to the "forest" of the whole heart. Thus, cardiac modeling is beginning to span the spectrum from DNA to the ECG, from nucleotide to bedside. Extending cellular detail to whole-heart electrocardiography requires spanning several levels of analysis (Figure 1.1). The one-cell model describes an action potential recording from a single cardiac myocyte. By connecting an array of these individual myocytes (via gap junctions), a linear network (cable), two-dimensional (2D) network, or three-dimensional (3D) network (slab) model of action potential propagation can be constructed. The bulk electrophysiological signal recorded from these networks is called the local extracellular electrogram. Subsequently, networks representing tissue diversity and realistic heart geometries can be molded into a whole heart model, and finally, the whole heart model can be placed in a torso model replicating lung, cartilage, bone and dermis. At each level, one can reconstruct the salient electric signal (action potential, electrogram, ECG) from the cardiac sources by solving the forward problem of electrophysiology (Chapter 2).
Simply put, cardiac modeling is equivalent to solving a system of non-linear differential (or partial differential) equations, though rigorous reference must be made to numerous
FIGURE 1.1. Levels of Analysis. One-cell models include the study of compartments and ion channels and their interactions. The basic electrophysiological recording is the action potential. Network models investigate the connectivity of one-cell units organized in arrays. An electrical measure of bulk network activity is the extracellular electrogram. Finally, many patches molded into the shape of a whole heart (in addition to torso variables) give rise to the ECG. See the attached CD for color figure.
laboratory experiments which aim to determine the nature and coefficients of each equation. These equations provide a quantitative measure of each channel, each cell, and networks of cells. As more experiments are done and data obtained, the model can be made more complex by adding appropriate differential equations to the system. Thus, as more information about the cellular networks, tissue structure, heart and torso anatomy are obtained, a better reconstruction of the ECG becomes possible. Until recently, however, modeling efforts have primarily focused on accurately reconstructing normal behavior. But with the accumulating experimental history of cardiac disease (such as myocardial ischemia, long-QT syndrome and heart failure), modelers have also begun to revise and extend the quantitative description of these models to include important abnormal behaviors. This chapter will first focus on the theoretical one-cell equations, which are only solved in the time domain. Subsequently, the one-cell model will be expanded to represent multiple dimensions with the incorporation of partial differential equations in space. At each level of analysis, the appropriate electrical reconstruction is discussed in the context of relevant pathology to emphasize the usefulness of cardiac modeling.
1.1 THE ONE-CELL MODEL

The origins of the one-cell model actually take root from classical neuroscience work conducted by A.L. Hodgkin and A.F. Huxley in 1952 (Hodgkin and Huxley 1952). In famous experiments conducted on the giant axon of the squid, they were able to derive a quantitative description for current flow across the cell membrane, and the resulting action potential (AP). This model mathematically formulated the voltage-dependent "gating" characteristics of sodium and potassium ion channels in the nerve membrane. Since similar ion channels exist in cardiac cells, this Hodgkin-Huxley formalism was applied to model the Purkinje fiber action potential by McAllister, Noble, and Tsien (McAllister et al. 1975). However, it was determined that the cardiac action potential is considerably more complex than the neuronal action potential, presumably due to a larger diversity of ion channels present in the cardiac myocyte, the intercellular connections, and its coupling to muscular contraction. With the addition of the "slow-inward" calcium current in 1976, Beeler and Reuter (Beeler and Reuter 1976) were able to successfully describe the ventricular action potential with the characteristic "plateau phase" necessary for proper cardiac contraction. Since then, numerous ion channels and intracellular calcium compartment dynamics have been added (DiFrancesco and Noble 1985; Luo and Rudy 1991; Luo and Rudy 1994), making the current AP model considerably more complex and robust. Nevertheless, many of these membrane channels still follow the same Hodgkin-Huxley formalism, reviewed below for the cardiac myocyte. In addition, the cardiac myocyte contains a prominent intracellular calcium compartment: the sarcoplasmic reticulum.
1.1.1 VOLTAGE GATING ION CHANNEL KINETICS (HODGKIN-HUXLEY FORMALISM)

At the most fundamental level of electrophysiology, an ion (K+, Na+, Ca2+) must cross the membrane via the transmembrane ion channel. Typically, the ion channel is a multidomain transmembrane protein with "gates" that open and close at certain transmembrane voltages, V_m (= V_in - V_out). The problem, however, is to characterize the opening and closing of these gates, a process symbolically represented by the following equation:

n_{\text{closed}} \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} n_{\text{open}}   (1.1)
where k_1 and k_{-1} are the forward and reverse rates of the process, respectively, and n_open and n_closed are the percentages of open and closed channels (which are proportional to channel "concentration"). Thus, by simple rate theory, one would expect the rate of channel opening (dn/dt) to equal (note that n_closed = 1 - n_open):

\frac{dn_{\text{open}}}{dt} = k_1 [1 - n_{\text{open}}] - k_{-1} [n_{\text{open}}]   (1.2)

The voltage dependence of these ion channels can be understood if these gates are treated as an "energy-barrier" model, described with Eyring Rate Theory (Eyring et al. 1949; Moore and Pearson 1981). Given the concentration of the charged particle on the inside and outside ([C_i], [C_o]), an energy barrier (\Delta G_0) located at a relative barrier position (\delta) along the transmembrane route, and a transmembrane voltage (V_m), Eyring Rate Theory predicts the forward and reverse rates for ion transfer as:

k_1 = K \, e^{-\Delta G_0 / RT} \, e^{(1-\delta) z F V_m / RT}, \qquad k_{-1} = K \, e^{-\Delta G_0 / RT} \, e^{-\delta z F V_m / RT}   (1.3)

FIGURE 1.2. A Battery-Resistor-Capacitor model of a generic excitable membrane. Ions flow (current) to and from the extra- and intracellular domains across a resistor (or conductance). The membrane has an inherent capacitance, due to its charge-separating function. The current relates to a transmembrane voltage, V_m.
where K is a constant, R is the gas constant, T is the absolute temperature, F is the Faraday constant, and z is the valence of the ion. While the solution in Eq. (1.3) is an extremely simplified version of reality, it readily suggests that the forward and reverse rates are voltage-dependent (thus these rates can be represented as k_1(V) and k_{-1}(V)). While the "energy-barrier" model predicts voltage-dependence, it does not account for the time-varying features in opening and closing channels. A model that takes time-variance into account was developed by Hodgkin and Huxley in 1952 (Hodgkin and Huxley 1952). The Hodgkin and Huxley model likens the biological membrane to a Battery-Resistor-Capacitor (BRC model, Figure 1.2) circuit. The resistor (1/conductance) represents the ion channel, through which ions pass to create an ionic current (I_ion). Since the membrane confines a large amount of negatively-charged protein within the cell, it separates positively and negatively charged compartments, thus acting as a capacitor (C_m). Finally, as ions cross the membrane and enter (or leave) the intracellular compartment, electrically repellent charge begins to build that counteracts V_m. The V_m at which a certain ion is at equilibrium (I_ion = 0) is termed the Nernst potential (E_ion), the "battery", which depends on the valence and the intracellular [C]_i and extracellular [C]_o ion concentrations:

E_{\text{ion}} = \frac{RT}{zF} \ln \frac{[C]_o}{[C]_i}   (1.4)
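Eq. (1.4) is straightforward to evaluate numerically. The sketch below computes Nernst potentials for potassium and sodium; the ion concentrations are typical mammalian textbook values chosen for illustration, not values taken from this chapter.

```python
import math

def nernst_potential(c_out, c_in, z, T=310.0):
    """Nernst equilibrium potential (volts) for an ion of valence z at temperature T (K)."""
    R = 8.314    # gas constant, J/(mol*K)
    F = 96485.0  # Faraday constant, C/mol
    return (R * T) / (z * F) * math.log(c_out / c_in)

# Illustrative mammalian concentrations in mM (assumed, for demonstration).
E_K = nernst_potential(c_out=4.0, c_in=140.0, z=1)    # strongly negative
E_Na = nernst_potential(c_out=145.0, c_in=10.0, z=1)  # positive
```

Because [K+] is high inside the cell and [Na+] is high outside, E_K comes out near -95 mV while E_Na is near +70 mV, which is why potassium currents are outward and sodium currents inward at typical membrane voltages.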
Thus, from simple circuit analysis of Figure 1.2, the ionic current for a certain ion can be
written as:

I_{\text{ion}} = g(V, t) \, (V_m - E_{\text{ion}})   (1.5)

where g(V, t) is the voltage-dependent, time-varying ion channel conductance. To determine the dynamics of an individual ion channel, Hodgkin and Huxley assumed that the channel was a "gate" as described in Eq. (1.2), which can be rewritten solely in terms of the open probability n_open or simply, n (the forward and reverse rates, k_1(V) and k_{-1}(V), are replaced with \alpha(V) and \beta(V), respectively):

\frac{dn(t, V)}{dt} = \alpha(V)[1 - n] - \beta(V)[n]   (1.6)

Eq. (1.6) is a first-order differential equation, which has a particular solution under several boundary conditions. Following a voltage step \Delta V (V_m = V_rest + \Delta V) from the resting membrane potential, n(t) follows an inverted exponential time course with the following characteristics:

n_\infty(V_m) = \frac{\alpha(V_m)}{\alpha(V_m) + \beta(V_m)}, \qquad \tau(V_m) = \frac{1}{\alpha(V_m) + \beta(V_m)}   (1.7)

The quantity n_\infty(V_m) represents the steady-state proportion of open channels after a step voltage has been applied for a near-infinite amount of time. The variable \tau(V_m) characterizes the time the system takes to reach n_\infty(V_m). Rewriting Eq. (1.6) in terms of the quantities derived in Eq. (1.7) gives a differential equation that describes the time course of the open probability for a channel:

\frac{dn}{dt} = \frac{n_\infty(V_m) - n}{\tau(V_m)}   (1.8)
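Eq. (1.8) has the closed-form step response n(t) = n_inf - (n_inf - n_0) e^{-t/tau}, and a simple forward-Euler integration reproduces it. The sketch below uses illustrative values of n_inf and tau (hypothetical, not fitted channel data) to check the numerical scheme against the analytic solution.

```python
import math

def gate_step_response(n0, n_inf, tau, t):
    """Analytic solution of dn/dt = (n_inf - n)/tau after a voltage step at t = 0."""
    return n_inf - (n_inf - n0) * math.exp(-t / tau)

# Hypothetical post-step gate parameters (time units of ms).
n, dt, tau, n_inf = 0.05, 0.001, 5.0, 0.8

# Forward-Euler integration of Eq. (1.8) over 10 ms.
for step in range(10000):
    n += dt * (n_inf - n) / tau

analytic = gate_step_response(0.05, 0.8, 5.0, 10.0)
```

With this step size the numerically integrated gate tracks the analytic "inverted exponential" to within a fraction of a percent, which is the same relaxation behavior exploited in full action-potential simulations.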
Using an elegant experimental set-up that applied a voltage-clamp to a giant-squid axon (Cole 1949; Marmont 1949), Hodgkin and Huxley were able to define regression equations for n_\infty(V) and \tau(V), which represent the gating variables for the potassium channel. To obtain a suitable fit to experimental data, they arrived at an open channel probability of n(V, t)^4. Thus, by substituting the open probability into Eq. (1.5), the outward potassium current can be represented as:

I_K = \bar{g}_K \, n(V, t)^4 \, (V - E_K)   (1.9)
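As a numeric illustration of Eq. (1.9), the sketch below evaluates the potassium current for representative values of V and n; the maximal conductance and reversal potential are the classic squid-axon figures, used here as assumptions rather than values stated in this chapter.

```python
# Classic squid-axon constants (assumed for illustration).
g_K_max = 36.0   # maximal K+ conductance, mS/cm^2
E_K = -77.0      # K+ Nernst potential, mV

def potassium_current(V_m, n):
    """I_K = gK * n^4 * (V - E_K); n is the open probability of a single gate."""
    return g_K_max * (n ** 4) * (V_m - E_K)

# Near rest, the driving force (V - E_K) and n are both small, so I_K is small;
# during the spike upstroke both grow, producing a large outward current.
I_rest = potassium_current(-65.0, 0.32)
I_peak = potassium_current(20.0, 0.8)
```

The fourth-power dependence on n is what makes the potassium conductance turn on with a pronounced sigmoidal delay after depolarization.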
An analogous equation can be written for the inward sodium current with the addition of an inactivation mechanism (Figure 1.3). Following the data fitting, the experimental sodium channel was represented by Hodgkin and Huxley as three voltage-activated gates similar to the potassium activation gates described by Eq. (1.8). As with the potassium channel, increased membrane voltages stochastically increase the probability that these three gates open. Inactivation follows the same kinetics as Eq. (1.8), except that the inactivation gate closes with increased voltages (Figure 1.4C). Thus, the sodium response to an applied voltage stimulation is biphasic. First, the faster activation gates rapidly open, allowing
FIGURE 1.3. Idealized ion channels. The potassium channel is generally modeled with four voltage-activation gates. The sodium channel is represented by three rapidly-activating voltage-sensitive gates, with an additional slowly acting voltage-sensitive inactivation gate. The lumped probability that all potassium gates will be open is n^4, while the probability that the activation and inactivation gates of the sodium channel are all open is m^3h.
FIGURE 1.4. (A) Activation curve for the potassium channel, n; (B) activation curve for the sodium channel, m; and (C) inactivation curve for the sodium channel, h.
From Cellular Electrophysiology to Electrocardiography
inward current to develop. However, with increased voltage, the slower inactivation gates will close, forcing a decrease in the inward current. There is no conceptual change in the nature of the current equation: the activation gate n is simply replaced with m and h (though these gates all differ quantitatively, m and n both increase with more positive Vm, while the value of h decreases with more positive Vm). The sodium current can be represented as:
I_Na = ḡ_Na · m(V, t)³ · h(V, t) · (V − E_Na)    (1.10)

The biphasic nature of the inward sodium current is crucial to the rapid elicitation of an action potential and the characteristic biphasic shape of the action potential. This simplified approach assumes that the cell membrane contains two distinct types of voltage-gated channels (Na⁺ and K⁺) that conduct currents in opposite directions. With the addition of other inward and outward channels (see later sections), a generalized differential equation can be written:
dV/dt = −(1/C_M) (I_K + I_Na + I_otherchannels + I_stim)    (1.11)
where I_stim represents a stimulation current (provided from a stimulating lead or adjacent cells), and I_otherchannels is provided via many other channels that vary among cell types (atrial vs. ventricular cells) and various excitable tissues (heart vs. nervous system). Note that I_K, I_Na, and the other channel currents are represented by non-linear terms (i.e., n⁴ and m³h), and are both voltage- and time-dependent. Thus, Eq. (1.11), coupled with gating equations for each channel (Eq. (1.8)), represents a system of non-linear differential equations that must be solved using techniques of numerical integration.
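As a concrete illustration, the system of Eqs. (1.8)-(1.11) can be integrated with a simple forward-Euler scheme. The sketch below is not the authors' code; it uses the standard Hodgkin-Huxley squid-axon rate functions and parameters (modern sign convention), and the step size, stimulus amplitude, and duration are illustrative choices:

```python
import math

# Standard Hodgkin-Huxley squid-axon parameters (mV, ms, uF/cm^2, mS/cm^2)
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4

def exp_ratio(x, y):
    """x / (exp(x / y) - 1), with the x -> 0 limit (= y) handled to avoid 0/0."""
    if abs(x) < 1e-7:
        return y
    return x / (math.exp(x / y) - 1.0)

# Voltage-dependent opening (alpha) and closing (beta) rates for each gate
def alpha_n(v): return 0.01 * exp_ratio(-(v + 55.0), 10.0)
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)
def alpha_m(v): return 0.1 * exp_ratio(-(v + 40.0), 10.0)
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))

def simulate(i_stim=10.0, t_end=20.0, dt=0.01):
    """Forward-Euler integration of the membrane equation plus gating equations."""
    v = -65.0
    # start each gate at its steady state, x_inf = alpha / (alpha + beta)
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    trace = []
    for _ in range(int(t_end / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)   # Eq. (1.10)
        i_k = G_K * n**4 * (v - E_K)          # Eq. (1.9)
        i_l = G_L * (v - E_L)                 # leak current
        dv = (-(i_na + i_k + i_l) + i_stim) / C_M
        # gating updates, each an instance of Eq. (1.8)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        v += dt * dv
        trace.append(v)
    return trace

trace = simulate()
```

With a sustained 10 µA/cm² stimulus the membrane fires, and the peak of the voltage trace overshoots 0 mV from the resting level of −65 mV.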
1.1.2 MODELING THE CARDIAC ACTION POTENTIAL

While the model of an action potential was originally described for a neuron, the methods were quickly adapted to represent the cardiac action potential. Beyond slight differences in the quantitative description of the sodium and potassium channels described above, the cardiac myocyte also exhibits a considerable inward calcium current that is responsible for the distinguishable "plateau" phase, which coincides with the muscular contraction in the ventricular myocyte. Additionally, the cardiac myocyte expresses a diverse set of ion channels, which give unique electrophysiological properties to different types of heart tissue in normal and diseased heart function. Within the heart, there exists a variety of cell types that require different considerations when developing a model. Pacemaker cells in the sino-atrial node express channels that allow an autonomous train of action potentials, while Purkinje fibers represent an efficient conducting system specialized for the fast, uniform excitation of the ventricular myocytes. Ventricular myocytes express the proper proteome to parlay the electrical excitation into force-generating elements that ultimately produce the cardiac output and blood delivery to the rest of the body. Even within the ventricle, different models exist for transmural orientation (endocardial cells, middle-myocardial cells (M-cells), and epicardial cells). Models for each of these cell types have been extensively developed; they are summarized in Table 1.1, and the history of these modeling developments is described below.
N. V. Thakor, V. Iyer, and M. B. Shenai
TABLE 1.1. Classical and Modern Models of Various Cardiac Cell Types

Model                               Type                     Novelty
Classical Models
  Hodgkin-Huxley (1952)             Squid Axon               I_Na, I_K
  McAllister, Noble, Tsien (1975)   Purkinje Cell            I_X1, I_K2
  Beeler-Reuter (1977)              Ventricular Cell         I_si (slow-inward I_Ca)
Modern Models
  DiFrancesco-Noble (1985)          Purkinje Cell            I_NaCa, I_NaK, I_Ca-L, I_Ca-T
  Luo-Rudy Phase I (1991)           Ventricular Cell         Updated I_Na, I_K
  Luo-Rudy Phase II (1994)          Ventricular Cell         Updated I_NaCa, I_NaK, I_Ca-L, I_Ca-T; Ca buffering
  Priebe-Beuckelmann (1998)         Human Ventricular Cell   Updated with human data
  Zhang et al. (2000)               Sinoatrial Nodal Cells   Updated Ca handling
1.1.2.1 Classical models of the cardiac action potential

In 1975, McAllister, Noble and Tsien introduced a prototype numeric model for the rhythmic "pacemaker activity" of cardiac Purkinje cells by using the voltage-clamp method to study an outward potassium current, I_K2 (McAllister et al. 1975). After repolarization of the action potential, the deactivation of the outward I_K2 current allows a net inward current to produce a diastolic slow wave of depolarization in between action potentials (Figure 1.5). As this slow wave of depolarization brings the membrane potential towards threshold, I_K2 is a prominent current in producing the automaticity of pacemaker cells. Additionally, the McAllister, Noble and Tsien (M-N-T) model reconstructed the entire action potential, using a modified Hodgkin-Huxley sodium conductance for the rapid upstroke phases, while using voltage-clamp methods to describe I_X1, a generalized plateau and repolarization current. Thus, this landmark model was able to simultaneously describe the characteristic pacemaker activity and rapid conduction velocities associated with Purkinje cells. However, given the vast diversity of cardiac cell types, the M-N-T model could not describe the characteristics of ventricular action potentials, namely the prominent plateau phase that is crucial for forceful contraction. To this end, Beeler and Reuter developed a numerical model (the B-R model) for the ventricular myocyte in 1977 (Beeler and Reuter 1977). This model incorporates an I_s component, a slow inward calcium current that is responsible for the slow depolarization and the prominent plateau phase. This I_s current follows Hodgkin-Huxley formalism, in that state variables d (activation) and f (inactivation) describe time-varying conductances of the slow inward current. However, unlike other Hodgkin-Huxley ions, the initially low level of intracellular calcium, [Ca²⁺]_i, does not remain constant with the arrival of the transmembrane I_s current.
In fact, [Ca²⁺]_i can vary over several orders of magnitude from its resting level of about 10⁻⁷ M, widely altering the Nernst potential, E_s. Thus, Beeler and Reuter modeled the intracellular handling of calcium by assuming that it flows into the cell and accumulates while being exponentially reduced by an uptake mechanism (in the sarcoplasmic reticulum). At any given state, the flux of [Ca²⁺]_i can be described by:
d[Ca]_i/dt = −10⁻⁷ · I_s + 0.07 · (10⁻⁷ − [Ca]_i)    (1.12)
In toto, the model incorporated four major components: the familiar I_Na current, the I_s calcium current, the time-activated outward I_X1 current, and I_K1, a time-independent outward potassium current.
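Equation (1.12) can be integrated on its own to see the qualitative behavior of this uptake scheme. In the sketch below (an illustration, not the original B-R code), a hypothetical constant inward I_s pulse is applied for 200 ms; the concentration rises toward a balance point while the pulse is on, then relaxes exponentially back to the resting level of 10⁻⁷ M:

```python
def step_ca(ca, i_s, dt=0.1):
    """One forward-Euler step of Eq. (1.12): d[Ca]/dt = -1e-7*I_s + 0.07*(1e-7 - [Ca])."""
    return ca + dt * (-1e-7 * i_s + 0.07 * (1e-7 - ca))

def simulate_ca(i_s_pulse=-2.0, pulse_ms=200.0, total_ms=500.0, dt=0.1):
    """Integrate [Ca]_i through an illustrative inward current pulse (inward = negative)."""
    ca = 1e-7                # resting intracellular calcium (M)
    trace = []
    t = 0.0
    while t < total_ms:
        i_s = i_s_pulse if t < pulse_ms else 0.0
        ca = step_ca(ca, i_s, dt)
        trace.append(ca)
        t += dt
    return trace

trace = simulate_ca()
```

During the pulse, [Ca]_i approaches the balance point 10⁻⁷ + 10⁻⁷·|I_s|/0.07 ≈ 3 × 10⁻⁶ M; afterwards the uptake term pulls it back toward 10⁻⁷ M with a time constant of about 14 ms.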
FIGURE 1.5. Simulated action potentials: the Beeler and Reuter model (ventricular fiber) and the McAllister, Noble, Tsien model (Purkinje fiber).
aVL = (3/2)(E_L − E_WCT),  aVF = (3/2)(E_F − E_WCT)

V_i = E_Vi − E_WCT (i = 1, 2, ..., 6). E_WCT is called the Wilson Central Terminal: E_WCT = (E_L + E_R + E_F)/3. E denotes potential; subscripts L, R, and F denote left arm, right arm, and left foot, respectively. See Fig. 3.8 for the electrode positions of V_i.
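The lead definitions above translate directly into code. The sketch below is an illustration with hypothetical instantaneous electrode potentials; it computes the Wilson central terminal, the bipolar limb leads, the augmented limb leads, and the unipolar precordial leads:

```python
def twelve_lead_ecg(e_l, e_r, e_f, e_v):
    """Compute the 12 leads from limb potentials (L, R, F) and six chest potentials V1-V6."""
    e_wct = (e_l + e_r + e_f) / 3.0          # Wilson central terminal
    leads = {
        "I": e_l - e_r,
        "II": e_f - e_r,
        "III": e_f - e_l,
        "aVR": 1.5 * (e_r - e_wct),          # augmented leads, (3/2)(E - E_WCT)
        "aVL": 1.5 * (e_l - e_wct),
        "aVF": 1.5 * (e_f - e_wct),
    }
    for i, e in enumerate(e_v, start=1):
        leads["V%d" % i] = e - e_wct          # unipolar precordial leads
    return leads

# hypothetical instantaneous electrode potentials (mV)
leads = twelve_lead_ecg(0.3, -0.5, 0.8, [0.1, 0.4, 0.7, 1.0, 0.8, 0.5])
```

Einthoven's law (I + III = II) and the identity aVR + aVL + aVF = 0 hold by construction, which is a quick sanity check for any implementation.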
TABLE 3.3. Electrode Positions for Recording the 12-lead Electrocardiogram

Electrode        Position
Left arm (L)     Left wrist
Right arm (R)    Right wrist
Left foot (F)    Left ankle
Right foot (G)   Right ankle
V1               Right sternal margin, fourth intercostal space
V2               Left sternal margin, fourth intercostal space
V3               Midway between V2 and V4
V4               Left midclavicular line, fifth intercostal space
V5               Left anterior axillary line, V4 level
V6               Left midaxillary line, V4 and V5 level
the simulation results can be evaluated with the excitation sequence of the heart model, the simulated vectorcardiogram (VCG), 12-lead ECG, and body surface isopotential maps. The 12-lead ECG is the most popular lead system used in clinical practice. To evaluate models and simulations with different pathologies, the results are usually compared with clinically recorded 12-lead ECGs. Recording the 12-lead ECG requires 10 electrodes. Four electrodes are placed on the limbs to record the six limb leads, and the other six are placed on the precordial chest wall to record the precordial leads. The definition of the 12-lead ECG is summarized in Table 3.2, and the electrode positions for recording are shown in Fig. 3.8(a) and described in Table 3.3 (Horacek, 1989). In model studies, torso models usually do not include the limbs. The limb leads are moved to the closest positions on the torso. Because the potential difference along a limb is sufficiently small, this does not significantly change the simulation results. Most torso models are represented by polygon meshes, and the nodal points may not exactly overlap the electrode positions. In this case, interpolation is usually needed to calculate either the electrode positions from the positions of the surrounding nodal points, or the ECG potentials from the potentials at surrounding nodal points. A typical ECG waveform is illustrated in Fig. 3.8(b). The waves of the ECG were designated by Einthoven as P, Q, R, S, T, as shown in this figure. It is well known that the P
Whole Heart Modeling and Computer Simulation
FIGURE 3.8. (a) Electrode positions for the precordial leads. (b) A typical electrocardiogram.
wave corresponds to atrial depolarization, the QRS complex to ventricular depolarization, and the T wave to ventricular repolarization. The U wave is occasionally recorded after the T wave, and its mechanism remains unclear (di Bernardo and Murray, 2002). The criteria of normality for the ECG include time and amplitude standards. Clinically, the following parameters are evaluated for the normality of the ECG:

• Cardiac rhythm and heart rate
• PR interval and segment
• QRS interval
• QT interval
• Pattern and amplitude of P wave
• Pattern and amplitude of QRS complex
D. Wei
• Pattern and amplitude of T wave
• ST segment and T wave
• Mean electrical axis (frontal plane)

The limits of the normal ECG can be found in MacFarlane and Lawrie (1989) and the textbook of de Luna (1993). The following are typical values: P wave interval = 100 ms, QRS interval = 90 ms, PR interval = 150 ms, and QT interval = 380 ms. Because most heart models do not yield absolute values for ECG potentials, the amplitudes can be evaluated by patterns and relative amplitudes. For example, the aVR wave should have an inverted waveform. The R wave should gradually increase from V1 to V4 or V5 and fall from V5 or V6. In simulation studies, the use of the VCG is very convenient in evaluating the simulation results. By summing the cardiac dipole sources in all model cells at each instant into a single dipole and plotting it in the frontal, horizontal, and sagittal planes, the VCG is obtained. It is theoretically close to the clinical VCG. From these figures, the mean electrical axes can easily be estimated. The normal P, QRS, and T axes should be between 0° and +90°, −30° and +110°, and 0° and +90°, respectively. The body surface isopotential maps are also easily used for evaluation. The typical patterns for a normal heart can be found in many articles and textbooks (e.g., MacFarlane and Lawrie, 1989). Fig. 3.9 shows the simulated results of a normal heart model by Wei et al. In Fig. 3.9(c), potential distributions on the torso during the QRS period are shown by isopotential contour maps. In most instances, the distributions show dipole fields with one positive maximum on the anterior chest and one negative potential minimum on the back in the early QRS, gradually reversing in the late QRS. In a short period of the middle QRS (time 183-189 ms), we can find a multipole field represented by one potential maximum and two negative minima. This is a typical pattern in the isopotential maps, representing the right ventricular breakthrough as experimentally observed by Taccardi (1963).
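A common way to estimate the mean frontal-plane axis from a simulated (or measured) ECG is from the net QRS deflections in two orthogonal limb leads, I (0°) and aVF (+90°). The sketch below is an illustrative two-lead estimate, not the only clinical method (strictly, the augmented-lead amplitude should be rescaled before combining, but the arctangent construction captures the idea):

```python
import math

def mean_frontal_axis(net_i, net_avf):
    """Mean electrical axis (degrees) from net QRS deflections in leads I and aVF.

    Lead I points along 0 degrees and aVF along +90 degrees in the frontal plane.
    """
    return math.degrees(math.atan2(net_avf, net_i))

axis = mean_frontal_axis(0.8, 0.8)   # equal net deflections -> +45 degrees
```

A negative net aVF deflection with a positive lead I deflection yields a negative axis, i.e., left-axis deviation once it passes −30°.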
Simulation of a normal heart is generally a repetitive procedure of adjusting the model until the simulated excitation isochrones fit the experimental data, and the simulated vectorcardiogram and surface ECGs fall within the normal range. Generally, obtaining a normal P wave is not difficult. The excitation from the sinus node spreading leftward and downward readily reproduces a normal P wave in the simulated ECG. Getting normal QRS waveforms in the 12-lead ECG is more difficult. To keep normal waveforms in most leads on the surface of the torso model, including the normal time periods and normal amplitude relationships among the leads, it is important to correctly mount and adjust the specialized conduction system in the heart model. The simulation of the T wave requires a deep understanding of the T wave mechanism: the reason why positive T waves are measured in most of the 12-lead ECG. If the action potential were uniform throughout the ventricular myocardium, depolarization and repolarization would proceed along the same direction, and thus the T waves would have the opposite polarity to that of the QRS waves. But in fact, positive T waves are observed in most leads. The mechanism can be interpreted with a theoretical model that assumes longer action potential duration on the endocardial and apical sides than toward the epicardium and the base (Harumi et al., 1964). In this way, the repolarization spreads in the opposite direction to the depolarization. As a result, the T wave polarity is the same as that of the QRS wave. The T wave model is consistent with experimental measurements on the epicardium (Spach et al., 1977). A detailed description of a T wave mechanism can be found in Barr
FIGURE 3.9. Simulation results of a normal heart model: (a) ECG; (b) VCG; and (c) body surface isopotential maps. In (c), "+" and "−" show the positions of the potential maximum and minimum, respectively. Light black lines represent contours of zero potential. The left six tenths of each map correspond to the anterior chest, and the right four tenths to the back. Time is counted from the onset of the P wave. (Panels a and b are reproduced with permission from Wei et al., 1995.)
(1989). The algorithm used in simulation to distribute the action potentials in the 3D whole heart model is introduced in Section 3.2.4.

3.3.2 SIMULATION OF ST-T WAVES IN PATHOLOGIC CONDITIONS
Studying the pathologic changes in the ST segment and the T wave is a typical and important application of whole heart models, because ST-T changes are associated with serious heart diseases such as myocardial ischemia, hypertrophy, and cardiomyopathy.
FIGURE 3.10. Pre-defined action potentials assigned to cells in normal and mildly, moderately, and severely ischemic regions in the simulation of Dube et al. (Reproduced with permission from Dube et al., 1996.)
Myocardial ischemia arises from insufficient blood flow due to an occlusion of the coronary arteries. In recent years, percutaneous transluminal coronary angioplasty (PTCA) has provided an opportunity to precisely confirm the relationship between ECG features and the site of the blood blockage by controlling the balloon inflation during the PTCA operation. Dube et al. (1996) simulated clinical body surface potential maps and ECGs using the PTCA protocol. Because the action potential change of the ischemic tissue is the direct cause of the ST segment changes in the ECG, the way of setting the action potentials in the heart model is a key to the simulation. Three transmural zones of mild, moderate, and severe ischemia were set in the heart model, located in the vicinity of the left anterior descending, left circumflex, and right coronary arteries. The action potentials shown in Fig. 3.10 were assigned to these regions, representing action potentials under mild, moderate, and severe ischemia. The simulation produced ECG maps quantitatively similar to clinical maps. Fig. 3.11 shows another example that simulates the "giant negative T waves" known as the main feature of apical hypertrophic cardiomyopathy (Harumi, 1989b). The simulated ECG and VCG give surprisingly similar results to clinical findings. The results were obtained by modifying the APD gradient and the conductivity value for the pathologic zone. Unlike ischemia, hypertrophic cardiomyopathy is impossible to reproduce in experimental animals. In this sense, computer simulation is the only way of performing in vivo-like experimentation to study the unknown mechanism.
3.3.3 SIMULATION OF MYOCARDIAL INFARCTION

Myocardial infarction is a typical concern in heart modeling. The example introduced in this section demonstrates that model study is not only useful for understanding the mechanism, but also for helping to develop diagnostic tools for clinical practice.
FIGURE 3.11. Simulated ECG and VCG in the heart model of apical hypertrophic cardiomyopathy. (Reproduced with permission from Harumi et al., 1989.)
Myocardial infarction is caused by the occlusion of a coronary artery. The development of myocardial infarction is usually classified into three phases by ECG patterns (de Luna, 1993). The early phase of ischemia is characterized by T wave changes. The later phase of injury is characterized by ST segment changes. The final phase of necrosis is characterized by Q wave changes. The ST and T wave changes can be simulated with a whole heart model by changing the action potentials of the model cells. The action potentials during reduced blood flow can be measured experimentally. The location and size of a myocardial infarction is an important aspect in clinical diagnosis. In clinical practice, the location and size of an infarction are qualitatively interpreted with the theory of the vectorcardiogram. With a whole heart model, Startt/Selvester et al. (1989) systematically simulated the infarcts due to three major coronary artery distributions, and expected Q waves were obtained in each case. They found that in every case the degree of QRS change was proportional to the degree of local infarction. Dividing the ventricles into four walls with 12 segments, they developed a quantitative method to estimate the location and size of the infarction. The result led to a practical tool known as the ECG/VCG scoring system, where each point scored was set up to represent 3% of the left ventricular myocardium. The scoring system predicted a distribution of damage in the 12 left ventricular
segments in good correlation with the average of planimetric pathology found in the same subdivisions. They further developed a nomogram based on the same simulation study, which relates the VCG changes to the infarct size. If the duration and magnitude of the QRS deformity are measured before and after infarction, the infarct size can simply be found on the nomogram. Details can be found in Startt/Selvester et al. (1989).
3.3.4 SIMULATION OF PACE MAPPING

Pace mapping is a new technique used to speed up the procedure of localizing an ectopic focus during catheterization (SippensGroenewegen et al., 1993). In SippensGroenewegen et al., body surface potential maps in patients with cardiac arrhythmias but with no evidence of structural heart disease were recorded during ectopic beats produced by catheter stimulation at different endocardial sites in the ventricles. Based on these data, they were able to classify the QRS integral map patterns with respect to the location of the ectopic beats. The same procedure was reproduced by Xu et al. (1996) with the heart model of Lorange et al. In the simulation, 38 selected endocardial sites (25 in the left ventricle and 13 in the right ventricle), corresponding to those of SippensGroenewegen et al., were paced to initialize the excitation process of the heart model. With a more detailed heart model (0.5 mm spatial resolution), Hren and Horacek (1997) generated a database of 155 QRS integral maps by pacing the epicardial surfaces of the left and right ventricles. Such a database would be useful in catheter pace mapping during treatment of ventricular arrhythmia. The simulation of pace mapping is a good example of the clinical usefulness of whole heart models.
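The localization step in pace mapping reduces to comparing a measured QRS integral map against a paced-map database. The sketch below uses hypothetical data, and correlation matching is one common choice of similarity measure, not necessarily the exact criterion used in the cited studies:

```python
def correlation(a, b):
    """Pearson correlation coefficient between two equal-length maps."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_pacing_site(measured_map, database):
    """Return the database site whose QRS integral map best matches the measured map."""
    return max(database, key=lambda site: correlation(measured_map, database[site]))

# hypothetical database of QRS integral maps (one value per torso electrode)
database = {
    "LV-apex":    [1.0, 0.5, -0.2, -1.0],
    "LV-septum":  [-0.5, 1.0, 0.8, -0.6],
    "RV-outflow": [-1.0, -0.3, 0.6, 1.0],
}
site = best_pacing_site([0.9, 0.4, -0.1, -0.9], database)
```

The measured map most closely resembles the "LV-apex" template, so that site is returned as the estimated origin of the ectopic beat.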
3.3.5 SPIRAL WAVES-A NEW HYPOTHESIS OF VENTRICULAR FIBRILLATION

Reentrant excitation is recognized as a mechanism of life-threatening arrhythmias. Among the several hypotheses proposed to explain reentrant excitation, the spiral wave is the most recent and is gaining more and more support from experimental studies (Gray, 1995, 1996). In addition to experimental studies, computer simulation of spiral waves using a whole heart model with realistic geometry and anatomy is a useful tool because it is capable of producing results comparable with experiments. Fig. 3.12 shows an example of spiral waves simulated with a whole heart model (Gray and Jalife, 1996). The simulated isochrones and ECGs (right) are comparable with the measured results.
3.3.6 SIMULATION OF ANTIARRHYTHMIC DRUG EFFECT

The following example demonstrates how a drug effect is confirmed by a simulation study with a whole heart model (Wei et al., 1992). The study was based on an experiment (Harumi, 1989a) that investigated the relationship between the restitution of the premature action potential and the stimulation coupling interval for Purkinje fiber and ventricular muscle in dogs before and during the infusion of antiarrhythmic drugs. In the simulation, lines obtained through linear regression, as shown in Fig. 3.13(a), were used to approximate the experimental data. The slopes of these lines, corresponding to a parameter called the dynamic coefficient (DC), were input to the heart model. Note that the figure shows different slopes for the ventricular muscle and the Purkinje fiber before and during the infusion
FIGURE 3.12. Simulation of spiral waves. (Reproduced with permission from Gray and Jalife, 1996.)
of antiarrhythmic drugs. In the simulation, ten successive extra-stimuli at 170 ms intervals, started 300 ms after the first sinus pacing, were applied to the epicardium of the ventricle. The simulated ECG shown at the top of Fig. 3.13(b) corresponds to the case where the APD changes follow the lines before the drug infusion. In this case, the stimulation caused two tachycardia-like waveforms followed by sustained VF. When the APD changes follow the lines during the drug infusion, normal waveforms are restored after the stimulation, as shown by the waveform at the bottom of Fig. 3.13(b). Figure 3.13(c) shows a frame of an animation developed for visualizing the propagation of excitation during the fibrillation. A number of propagation wavefronts developed by a number of reentries can be seen in this picture. This simulation supported the assumption that different ratios of restitution in premature APD between the ventricular muscle and the Purkinje fiber may play a role in the induction of VF. It also demonstrated the antiarrhythmic drug effect in suppressing VF.
3.4 DISCUSSION

Principles, methodology, and applications of 3D whole heart modeling are described in this chapter. The significance of 3D whole heart models is that, as compared to experimental studies, computer simulations with whole heart models are always in vivo, so as to provide information relating intracardiac events to the body surface electrocardiogram in different
φ, then using Eq. (4.1) we can rearrange Eq. (4.5) as

−∇ · [(G_e + G_i)∇φ] = ∇ · [G_i∇φ_m].    (4.6)
We have the further condition that no current leaves the body, so the component of current density normal to the body surface is zero. Thus, for y on the body surface and n_y a unit normal to the body surface at y, we have n_y · G_e∇φ(y) = 0. A uniformly zero boundary condition (such as this) is referred to as "homogeneous". Equation (4.6) is a partial differential equation, specifically, Poisson's equation. Let us imagine that we are given ∇ · [G_i∇φ_m] (the "source"), and we wish to compute the resulting potential φ, the so-called "Forward Problem" (see Chapter 2 of this book). One very important feature of the Poisson equation is its linearity. That is, if a solution to −∇ · [(G_e + G_i)∇φ] = f satisfying the homogeneous boundary condition is known as φ_f, and a solution to −∇ · [(G_e + G_i)∇φ] = g satisfying the homogeneous boundary condition is known as φ_g, then it is easy to verify that the solution to −∇ · [(G_e + G_i)∇φ] = c₁f + c₂g is given by c₁φ_f + c₂φ_g (for c₁, c₂ constants), and this solution also satisfies the homogeneous boundary condition. This means that if we know the solutions to Eq. (4.6) for a source localized to any single location in the heart, then to determine the solution for any more geometrically complex source within the heart volume V we only need to add up (integrate) the solutions that would be obtained for each single location comprising the complex source. Thus, consider Eq. (4.6) where its right-hand side (the source) has unit strength when integrated over all space, but is zero everywhere except at x (i.e., the source is an electric monopole). Denoting the solution as ψ(x, y) (where ψ(x, y) as a function of y satisfies the homogeneous boundary condition), we have

−∇ · [(G_e + G_i)∇ψ(x, y)] = δ(x − y),    (4.7)
where the divergence and gradient operators are with respect to the field point y in three-space. Thus, ψ(x, y) is the potential at point y in the body that would be induced by a unit strength source that is zero everywhere in the heart except at the location x (a unit strength source localized to a point is mathematically represented by a delta function). ψ(x, y) is known as a Green's function. Since Poisson's equation is linear, we can now write the
F. Greensite
solution to Eq. (4.6) for the geometrically complicated source (on its right-hand side) as

φ(y) = ∫_V ψ(x, y) ∇ · [G_i(x)∇φ_m(x)] dV_x,    (4.8)

where V is the heart volume. Following Yamashita and Geselowitz (1985), integration by parts applied to Eq. (4.8) (i.e., application of the Divergence Theorem and the identity ∇ · (f∇g) = f∇ · ∇g + ∇f · ∇g) leads to

φ(y) = ∫_S ψ(x, y)[G_i∇φ_m(x) · n_x] dS_x − ∫_V ∇ψ(x, y) · [G_i∇φ_m(x)] dV_x    (4.9)
     = −∫_V [G_i∇ψ(x, y)] · ∇φ_m(x) dV_x,    (4.10)
where S is the surface surrounding the heart muscle volume V. Equation (4.10) follows firstly because the surface integral in Eq. (4.9) vanishes, i.e., G_i∇φ_m · n_x is zero for x on the heart surface (G_i∇φ_m is the source current, and therefore confined to the heart, so that it will have no component normal to the heart surface). The integrand on the right-hand side of Eq. (4.10) results because of the symmetry of G_i (i.e., for vectors a, b and symmetric matrix C, a · Cb = b · Ca). A second integration by parts, now applied to Eq. (4.10), gives

φ(y) = −∫_S [G_i∇ψ(x, y)] · n_x φ_m(x) dS_x + ∫_V ∇ · [G_i∇ψ(x, y)] φ_m(x) dV_x.    (4.11)
Given that y is not in V (e.g., we typically consider y to be on the body surface), the volume integral on the right-hand side of Eq. (4.11) is zero if G_i is proportional to G_e (equal anisotropy). This is because in that case we would have, for some scalar α,

∇ · [G_i∇ψ(x, y)] = α∇ · [(G_i + G_e)∇ψ(x, y)] = 0,    (4.12)
where the second equality of Eq. (4.12) follows from Eq. (4.7), since y on the body surface is external to the source domain V. In this case, the imaging equation (4.11) becomes

φ(y) = −∫_S [G_i∇ψ(x, y)] · n_x φ_m(x) dS_x.    (4.13)
Since ψ(x, y) satisfies Eq. (4.7), as a function of y it can be thought of as the field generated by a monopole at x. Thus, [G_i∇ψ(x, y)] · n_x = ∇ψ(x, y) · [G_i n_x] can be thought of as the field generated by a current dipole at x pointing in the G_i n_x direction. This can be verified by introducing a second source −δ(x′ − y), where x′ is a point close to x with the line between x and x′ oriented as G_i n_x. This monopole of opposite polarity is associated with a second Green's function −ψ(x′, y), so that the composite of monopole sources of opposite sign at x and x′ approaches a dipole. The appropriate limiting procedure leads to a field determined by (G_i n) · ∇ψ, as in the integrand above.
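The limiting argument can be checked numerically. In an unbounded homogeneous medium with unit conductivity, the monopole Green's function is ψ(x, y) = 1/(4π|x − y|) (a free-space stand-in for the bounded-torso ψ of Eq. (4.7), used here only for illustration). The field of two opposite monopoles separated by ε along a direction d, scaled by 1/ε, converges to the directional-derivative (dipole) field d · ∇_x ψ:

```python
import math

def psi(x, y):
    """Free-space monopole Green's function 1/(4*pi*|x - y|) for unit conductivity."""
    return 1.0 / (4.0 * math.pi * math.dist(x, y))

def dipole_field_exact(x, d, y):
    """Analytic directional derivative d . grad_x psi = d . (y - x) / (4*pi*|y - x|^3)."""
    r = math.dist(x, y)
    dot = sum(di * (yi - xi) for di, xi, yi in zip(d, x, y))
    return dot / (4.0 * math.pi * r**3)

def dipole_field_limit(x, d, y, eps=1e-5):
    """Composite of opposite monopoles at x + eps*d and x, scaled by 1/eps."""
    x_shift = tuple(xi + eps * di for xi, di in zip(x, d))
    return (psi(x_shift, y) - psi(x, y)) / eps

x, d, y = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 2.0)
approx = dipole_field_limit(x, d, y)
exact = dipole_field_exact(x, d, y)
```

As ε shrinks, the two-monopole composite agrees with the analytic dipole field to within O(ε), which is the content of the limiting procedure described above.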
Heart Surface Electrocardiographic Inverse Solutions
Thus, we have derived a linear relationship between the transmembrane potential at the cardiac surface S (endocardium plus epicardium) and the measurable body surface potential. In the practical setting where the geometry is discretized, the function φ over body surface points is expressed as a column vector whose components are measured potentials at various electrode sites, while φ_m is a vector whose components are transmembrane potentials at some set of locations on the heart surface, and

−∫_S [G_i∇ψ(x, y)] · n_x (·) dS_x    (4.14)
is a matrix. The forward problem associated with the above transmembrane potential formulation requires knowledge of the anisotropic conductivity of the heart in the construction of the operator Eq. (4.14). For example, solution of Eq. (4.7) for any given source point x requires knowledge of G_i + G_e throughout the body volume, including in the heart (where we have to know G_i and G_e as tensors). There is an additional complication in that the equal anisotropy assumption is not accurate, so one must also consider the second integral on the right-hand side of Eq. (4.11). However, in the portion of the body external to the heart muscle we have that G_i∇φ_m = 0, since G_i is zero outside the heart. In that volume, we have from Eq. (4.5) that

∇ · [G_e∇φ] = 0,    (4.15)

i.e., Laplace's equation. The relevant volume is bounded by the epicardium and the body surface. Thus, the boundary conditions are divided into two parts: 1) the zero normal component of current density at the body surface, and 2) the (unknown) epicardial potentials. Suppose we know the solution to this equation for the situation where the epicardial potential is identically zero except for having unit strength concentrated at location x, and call this solution k₁(x, y). From the linearity of Eq. (4.15), we can then find the solution for any geometrically complex epicardial potential distribution by simply adding together such elemental solutions, just as was done for Poisson's equation. This again defines a linear relationship as

φ(y) = ∫_S k₁(x, y) φ_epi(x) dS_x,    (4.16)

where S is now the epicardial surface and φ_epi is the epicardial potential. Thus, we have the linear relationship between epicardial potentials and (measured) body surface potentials. The Green's function k₁(x, y) is provided by solution of the forward problem (see Chapter 2 in this book). This formulation avoids having to consider the anisotropic myocardium in the construction of k₁(x, y), since the volume under consideration for the partial differential equation does not include the heart volume.
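Once discretized, this linear relationship is just a matrix-vector product: column i of the transfer matrix holds the body surface potentials generated by unit potential at epicardial node i (the elemental solution sampled at the electrodes, weighted by the surface element area). A minimal sketch with a hypothetical 3-electrode, 2-node transfer matrix:

```python
def forward(K, phi_epi):
    """Body surface potentials from epicardial potentials: phi = K * phi_epi."""
    return [sum(k_ij * p for k_ij, p in zip(row, phi_epi)) for row in K]

# hypothetical transfer matrix: K[j][i] = k1(x_i, y_j) * dS_i
K = [[0.6, 0.1],
     [0.3, 0.3],
     [0.1, 0.6]]

phi_a = forward(K, [1.0, 0.0])   # elemental solution for node 1 alone
phi_b = forward(K, [0.0, 1.0])   # elemental solution for node 2 alone
phi_ab = forward(K, [1.0, 1.0])  # both nodes at unit potential
```

By linearity, phi_ab equals the elementwise sum of phi_a and phi_b, which is exactly the superposition property used to justify the Green's function construction.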
The last source formulation we will consider is that of the endocardial potentials. If we design a transvenous catheter such that its tip is embedded with many electrodes (the "probe"), pass it into a cardiac chamber, and register its location with respect to the endocardium (e.g., via ultrasound or electronic means), then we can consider the volume between the catheter probe and the endocardium. Laplace's equation (4.15) still holds for this volume (it is source-free). The boundary conditions are the zero component of current
density normal to the probe surface, and the (unknown) endocardial potentials. Analogously, we can again use Green's functions to derive a linear relationship between the endocardial potentials and the probe electrode potentials, and we again have an equation of the same form as Eq. (4.16). Electrical potential satisfies Laplace's equation in the body volume external to the epicardium. The mixed boundary conditions of the (unknown) epicardial potential φ_epi, and the absent current component normal to the body surface, fully determine the potential φ in the body volume external to the heart. Green's second identity can be used to derive a linear dependence between the potential at the body surface and the potential at the epicardium. However, computation of this linear dependence (the forward problem) is dependent on knowledge of the extracellular conductivity G_e, which is very heterogeneous, e.g., due to the lungs, fat, and muscle (however, there is recent evidence that the impact of these inhomogeneities on an inverse solution may be small (Ramanathan and Rudy, 2001)). Endocardial potential imaging problem: the innermost ellipse represents the surface of a catheter electrode probe, and the next ellipse represents the endocardial surface. The region between these two surfaces (the blood-filled lumen of a cardiac chamber) contains no current sources, so Laplace's equation holds in this volume. The boundary conditions are the endocardial potential, and the absent component of current normal to the probe surface. Again, a linear relationship can be derived, this time between the measured probe potentials and the (unknown) endocardial potential. The latter linear operator is much easier to compute than the corresponding operator for the epicardial potential problem, since the conductivity in the relevant volume is uniform (simply being the conductivity of blood), and the geometry is easily measured (i.e., it does not require CT or MRI).
However, the attendant advantages are tempered by the fact that the method is invasive.
emphasize the statistical approach, from which other methods (e.g., those of Tikhonov) can be interpreted as special cases. Secondly, one must be prepared to deal optimally with a time series of such problems, i.e., stochastic processes.
4.3.1 SOLUTION NONUNIQUENESS AND INSTABILITY

As noted in the last section, we are required to deal with equations of the form h = Fg, where the components of vector h are our noninvasive or minimally invasive measurements at various spatially accessible locations, F is a transfer matrix (supplied by the forward
problem solution), and g is the image of transmembrane, epicardial, or endocardial potential. Assuming that F has an inverse, one would presume that we could supply the image as g = F^{-1}h.
To understand the naivety of such an approach, we can consider that our situation is similar to being given a highly blurred image of a scene (our electrodes are located remote from the sources, and all the sources potentially contribute to what is measured at each electrode). We have some knowledge of the "point spread function", and can accordingly attempt some "image enhancement" with the objective of "deconvolving" the source image from the point spread function. But a blurring operator attenuates the information responsible for higher resolution. Thus, the signal power of the high resolution information relative to the power of the low resolution information is much smaller in the blurred image than in the unblurred source image. At the same time, noise (not described by the blurring operator) is also present in the resulting blurred image. That is, the noise (being added in addition to the blurring) will not be blurred away (much of it can be thought of as being added after the blurring operation, e.g., due to electrode noise or imprecisions in computation of the transfer matrix F). Thus, it will be typical that the noise power exceeds the signal power in the high resolution subspaces of the data. If one naively applies the inverse of the blurring operator, the noise will continue to dominate the high resolution subspaces, thus assuring continued absence of identifiable high resolution features. Furthermore, since the blurring operator severely attenuates high resolution, its inverse must involve a marked amplification relevant to the high resolution subspaces, which then also markedly amplifies the noise in these subspaces, so that noise will dominate the entire solution (i.e., contribute most of the power to the resulting image). This means that a nonsense solution estimate will be obtained, also characterized by its instability to small changes in the data.
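To see this numerically, consider the following minimal sketch (all sizes, the blur width, and the noise level are illustrative choices, not taken from the text): a Gaussian blurring matrix is applied to a sharp source, a tiny amount of noise is added, and the blurring operator is naively inverted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.arange(n)

# A Gaussian "point spread function" as a blurring matrix: the value
# measured at location i mixes contributions from all source locations j.
F = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)
F /= F.sum(axis=1, keepdims=True)

g = np.zeros(n)                     # source image with two sharp features
g[20], g[40] = 1.0, -1.0
h = F @ g + 1e-6 * rng.standard_normal(n)   # blurred data plus tiny noise

g_naive = np.linalg.solve(F, h)     # naive application of F^{-1}

# Even at this minuscule noise level, the naively inverted image is
# dominated by amplified noise: its error exceeds the error of the
# trivial all-zero estimate.
err_naive = np.linalg.norm(g_naive - g)
err_zero = np.linalg.norm(g)
```

Despite the noise amplitude being six orders of magnitude below the signal, the naive inverse is worse than making no estimate at all, which is the instability described above.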
Because of the dominance of noise in the high resolution subspaces, one must simply forgo restoration of the high resolution subspaces and be content with restoring those subspaces where the blurred signal outweighs the noise, unless physiologically meaningful and valid extrinsic constraints can be imposed. In other medical imaging modalities, one does not have this difficulty. In MRI, for example, the spins (the hydrogen protons in the body water or fat) are induced to produce a signal (picked up by an antenna) whose frequency reflects their spatial location. Thus, the amplitude of the signal at a particular frequency reflects the number of spins at corresponding locations. In fact, following selective excitation of the spins in a single slice of the body, the spins at a particular location in the slice are cleverly given two frequencies in different bands (one from their spatially-varying NMR gyroscopic precessional frequency resulting from a magnetic field gradient applied along one spatial direction, and another via manipulations of their relative phase via application of magnetic field gradient pulses in the direction orthogonal to the first). The relationship of frequency to magnetic field strength (which varies in space due to the applied gradients) means that an image of the tissue can be obtained by applying a two-dimensional Fourier Transform to the antenna data, so that the magnitude of the resulting function is an image of spin density in the tissue slice. Unlike a blurring matrix, a (discrete) Fourier Transform does not attenuate information in any source subspace more or less than in any other subspace. Thus, inverting the effects of the operator does not involve any differential noise amplification.
Heart Surface Electrocardiographic Inverse Solutions
To intelligently approach our dilemma, one ultimately needs to make the above notions more quantitative. For this, it is useful to introduce the singular value decomposition (SVD) of a matrix. A matrix represents a linear transformation, mapping a domain vector to a range vector. For any linear transformation, it can be shown that there exists a particular orthogonal coordinate system in the domain space, and a particular orthogonal coordinate system in the range space, such that a vector pointing along a coordinate axis of the domain space is mapped to a vector pointing along a coordinate axis of the range space, whose magnitude is amplified by a nonnegative scalar depending only upon which domain axis it was pointing along. Since any vector in either space can be written as a linear combination of unit vectors in the above coordinate axis systems, the preceding wordy statement is equivalent to the assertion that any matrix F can be written as the SVD,

F = U S V^T,    (4.19)

where the columns of matrices V and U are the requisite orthogonal bases of the domain and range coordinate systems alluded to above, and S = diag(s_1, ..., s_n) is a diagonal matrix whose diagonal entries (the singular values) are the amplification constants referred to above (they are arranged in order from largest to smallest). U and V are each orthogonal matrices, and S is referred to as the singular value matrix. Note that each singular value s_i is associated with corresponding one-dimensional domain and range subspaces (the i-th columns of V and U). By convention, we will take U to be an (m x m) matrix and V to be an (n x n) matrix, so that S is an (m x n) matrix.

We are now in a position to understand the severe mathematically determined difficulty of our problem. In any noninvasive imaging technique for cardiac electrophysiology, F is always severely ill-conditioned, because the field φ diminishes with distance from the source, and the field at a point has contributions from all sources (there is "blurring"). As a result, the ratio of its largest and smallest positive singular values is "large" (the value of the smallest positive singular value is "small" compared to the value of the largest singular value). That is, F is "ill-conditioned". In particular, the noise in the data in many of the singular subspaces (columns of U) dominates the signal in those subspaces. However, F^{-1} = V S^{-1} U^T (assuming the inverse exists). A solution of the form F^{-1}h thereby entails application of 1/s_i to the data component of h in the U_{:i} subspace. If this is one of the many subspaces for which 1/s_i is very large and in which the noise dominates the signal, we can appreciate that the noise in this subspace will be markedly amplified in the solution estimate (this will also imply that the solution estimate will be very unstable to small noise perturbations).
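The decomposition of Eq. (4.19) and the amplification factors 1/s_i are easy to inspect in code; a brief sketch (the matrix entries are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 6
F = rng.standard_normal((m, n))

# Full SVD: U is (m x m), Vt holds V^T and is (n x n), and s holds the
# singular values in decreasing order.
U, s, Vt = np.linalg.svd(F, full_matrices=True)

# Reassemble the (m x n) singular value matrix S and verify F = U S V^T.
S = np.zeros((m, n))
S[:n, :n] = np.diag(s)
reconstruction_error = np.abs(U @ S @ Vt - F).max()

# The ratio of the largest to the smallest positive singular value
# quantifies ill-conditioning; inverting F applies the amplification
# factor 1/s_i to the data component along the i-th column of U.
condition_number = s[0] / s[-1]
```

For a random matrix this ratio is modest; for a transfer matrix of the kind discussed here it is many orders of magnitude, which is what makes the plain inverse unusable.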
Intuitively, we would thereby expect that it will be necessary to somehow attenuate the solution components associated with many (or most) of the subspaces, meaning that there will be a severe limit on the number of degrees of freedom in a meaningful estimate for g in Eq. (4.18) (essentially given by the number of singular values large enough not to attenuate signal components below the noise amplitude in the subspace defined by the corresponding column of U). Thus, much of the structure of an estimate for g must come in the form of a priori constraints, either by default (imposed as artifacts of the regularization procedure), or by design (constraints that truly reflect the class of physiologically meaningful
solutions). Without such constraints, the solution estimate would be nonunique, since the addition of any vector in the suppressed high resolution subspace to any solution estimate gives a new estimate that is also consistent with the accessed data. The field of Inverse Problems typically deals with situations where one is given data reflecting the effect of some operator on a "source" we would like to estimate, but where the inversion procedure (undoing the effect of the operator) is inherently unstable (e.g., highly noise amplifying), and (in practical terms) solution estimates are not unique. Such problems are loosely referred to as "ill-posed" (the latter term has a quite precise meaning in general Hilbert space settings, which we will not go into further).

4.3.2 LINEAR ESTIMATION AND REGULARIZATION
Taking noise vector v into account, Eq. (4.18) becomes

h = Fg + v,    (4.20)
where it is required to estimate g given h and F. It might first occur to us that a useful estimate for g would be the one which maximizes p(h|g), i.e., the choice that maximizes the conditional probability (density) that the measured h would occur given a particular candidate for g. This is known as the "maximum likelihood estimate". Assuming F has an inverse F^{-1}, and the noise has zero mean, this is given by g_ml = F^{-1}h, because the likeliest value for the noise (its mean) is the zero vector (under conditions of zero mean Gaussian noise, the maximum likelihood solution coincides with the least-squares solution, i.e., if F^{-1} does not exist, F^{-1} is replaced by the pseudoinverse). If F is ill-conditioned, we have seen that this is in general a very poor estimate for g, and is very unstable to noise variations. But one could alternatively take the estimate for g to be the choice which maximizes p(g|h), i.e., the choice that maximizes the chance that a particular g will be present given the observed noisy data h. This is referred to as the "maximum a posteriori" estimate, g_map. The relationship between the two conditional probability densities above is given by a version of Bayes' Theorem: p(g|h) = [p(h|g)p(g)]/p(h). g_map has the great advantage of being stable to noise variations, but the disadvantage that its calculation requires one to first supply nontrivial information concerning statistical properties of g (in fact, the entire "prior" probability density p(g)). If such is available and reliable, the methodology is referred to as "Bayesian". If one supplies the statistical properties as "drawn out of the air", or perhaps estimated from the given data h itself ("noninformative"), the methodology is referred to as "empirically" Bayesian.
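A one-dimensional toy comparison may help fix the distinction (a hypothetical scalar measurement h = g + v with zero mean Gaussian signal and noise; the variances are arbitrary assumptions):

```python
sigma_g2 = 1.0   # prior variance of the signal g
sigma_v2 = 4.0   # variance of the zero mean Gaussian noise v
h = 2.0          # one noisy measurement, h = g + v

# Maximum likelihood: the likeliest noise value is its mean (zero),
# so the estimate takes the data at face value.
g_ml = h

# Maximum a posteriori: combining the Gaussian likelihood with the
# Gaussian prior via Bayes' theorem shrinks the data toward the prior
# mean (zero), the more so the noisier the measurement.
g_map = (sigma_g2 / (sigma_g2 + sigma_v2)) * h
```

Here the MAP estimate is 0.4 rather than 2.0: with noise power four times the signal power, most of the measured value is attributed to noise.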
The general approach is also referred to as "Statistical Regularization" (evidently, in discrete settings, the maximum likelihood estimate is equivalent to the Bayesian approach in a minimum information setting where every realization of g is considered equally likely to occur). All of this suggests that, once noise is introduced as in Eq. (4.20), it is useful to carefully consider statistical notions. We will endeavor now to make these a little more precise. Specifically, each component of noise vector v is a "realization" (outcome) of a "random variable", the composite of which defines random vector v̄. For our purposes, a random variable is an entity that associates a probability density value with every real number. From this, we can compute the probability that some realization of the random
variable (outcome of a measurement) will yield a value falling in some given interval. Accordingly, the expectation E[·] of some expression involving a random variable is the integral of the expression over all possible values of the random variable, weighted by the probability density associated with each value (a zero mean random variable ā is such that E[ā] = 0). Furthermore, a "Gaussian" random variable ā has a Gaussian probability density (the familiar bell-shaped curve), and is fully characterized by its particular expectation E[ā] and variance E[(ā - E[ā])^2]. Similarly, a zero mean Gaussian random vector w̄ is a column vector of zero mean jointly Gaussian random variables w̄_i, i.e., w̄ = (w̄_1, ..., w̄_n)^T. A zero mean random vector is further characterized by its autocovariance matrix E[w̄ w̄^T], which describes the dependence between all different pairs of components of w̄ (note that the product of jointly Gaussian random variables is Gaussian). Similarly, the cross-covariance matrix of zero mean random vectors v̄, w̄ is given by E[v̄ w̄^T], and describes the mutual dependence of v̄ and w̄. Just as v is a realization of a random vector v̄, so too can g be considered to be an (unknown) realization of random vector ḡ. For notational simplicity, in this subsection we will suppress the overbar and denote a random variable and its realization by the same symbol. However, we will resume the notational distinction in the next subsection.

In approaching Eq. (4.20), a good objective is to find g_opt such that E[||g - g_opt||^2] is minimum (this being the "minimum-mean-square-error" estimate). If g and v are realizations of zero mean Gaussian random vectors, g_opt is obtained via the Wiener filter. Under these conditions, the maximum a posteriori estimate g_map is equivalent to g_opt. This linear estimation procedure develops as follows. A linear estimate of g is given by application of an "estimation matrix" M_est to data h, i.e., g_est = M_est h.
Ideally, we desire the solution estimate

g_opt = M_opt h,    (4.21)

such that E[||g - M_opt h||^2] is minimum. Thus, it is sufficient to calculate M_opt. The way to proceed follows from the "Orthogonality Principle", which asserts that g_opt minimizes the mean-square-error when

E[(g - M_opt h) h^T] = 0,    (4.22)

i.e., when the cross-covariance matrix of the "error of the estimate" (g - g_est) and the "data vector" h is the zero matrix (so that the error and the data have no dependence). Intuitively, the Orthogonality Principle assures that every bit of useful information is extracted from the data h in making the solution estimate g_opt. Substitution of Eq. (4.20) and Eq. (4.21) into Eq. (4.22) immediately gives
E[(g - M_opt(Fg + v))(Fg + v)^T] = 0.    (4.23)

Thus, assuming that g and v are independent (i.e., E[g v^T] = 0), Eq. (4.23) can be written as

M_opt (F C_g F^T + C_v) = C_g F^T,

where C_g ≡ E[g g^T] and C_v ≡ E[v v^T] are the autocovariance matrices of signal g and noise v. Hence,

M_opt = C_g F^T (F C_g F^T + C_v)^{-1}.    (4.24)

Thus, the optimal solution estimate for Eq. (4.20) is provided by Eq. (4.24) and Eq. (4.21), assuming we know the autocovariance matrices of signal and noise.

It is interesting to express this estimation matrix M_opt as a modification of F^{-1}, assuming the latter exists. We have from Eq. (4.24) that

M_opt = C_g F^T C_h^{-1} = F^{-1}(C_h - C_v) C_h^{-1},    (4.25)
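A small Monte Carlo sketch of Eqs. (4.21)-(4.24) follows (dimensions, covariances, and sample count are arbitrary illustrative choices): the estimator built from Eq. (4.24) approximately satisfies the Orthogonality Principle of Eq. (4.22), and its mean-square-error is below that of the naive inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 5, 20000

F = rng.standard_normal((n, n))
Cg = np.eye(n)                      # signal autocovariance C_g
Cv = 0.5 * np.eye(n)                # noise autocovariance C_v

# Eq. (4.24): M_opt = C_g F^T (F C_g F^T + C_v)^{-1}
M_opt = Cg @ F.T @ np.linalg.inv(F @ Cg @ F.T + Cv)

G = rng.standard_normal((n, trials))                  # realizations of g
V = np.sqrt(0.5) * rng.standard_normal((n, trials))   # realizations of v
H = F @ G + V                                         # Eq. (4.20), per column

# Sample version of E[(g - M_opt h) h^T]; it should be near the zero
# matrix, per the Orthogonality Principle.
cross_cov = (G - M_opt @ H) @ H.T / trials

mse_opt = np.mean(np.sum((M_opt @ H - G) ** 2, axis=0))
mse_naive = np.mean(np.sum((np.linalg.solve(F, H) - G) ** 2, axis=0))
```

The sample cross-covariance shrinks toward zero as the number of trials grows, while the naive inverse pays the full noise-amplification penalty discussed in Section 4.3.1.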
where C_h = (F C_g F^T + C_v) is the autocovariance matrix of h (as is seen via Eq. (4.20)). Thus, M_opt involves an initial preprocessing of the data h via (C_h - C_v) C_h^{-1} (the classical Wiener filter (Papoulis, 1984)), followed by application of F^{-1} (note that providing C_h nominally requires knowledge of C_g, although one can attempt to estimate C_h using the given measurement h itself, essentially the problem of spectral estimation (Papoulis, 1984)). If C_v = σ_v^2 I (i.e., if the noise is white), Eq. (4.24) becomes

M_opt = (F^T F + σ_v^2 C_g^{-1})^{-1} F^T,    (4.26)

since (F^T F + σ_v^2 C_g^{-1}) C_g F^T = F^T (F C_g F^T + σ_v^2 I). If C_g = σ_g^2 I, using the SVD F = U S V^T, we can write Eq. (4.26) as

M_opt = V (S^T S + γ I)^{-1} S^T U^T,    γ = σ_v^2 / σ_g^2.    (4.27)

An alternative applicable formulation for treating Eq. (4.20) is provided by the maximum likelihood method in concert with a deterministic constraint (such as that the signal power is equal to some a priori value E). Maximum likelihood then corresponds to minimizing ||Fg - h||^2 subject to ||g||^2 = E (assuming Gaussian noise). This also leads to a linear estimation matrix of similar form to the above. Ultimately, the distinction between the Bayesian and constrained maximum likelihood methods is that the latter treats only the noisy measurements in a statistical fashion, while the former also treats the underlying signal statistically. Thus, the Bayesian methods potentially allow (or require) introduction of a larger class of a priori information. A third alternative, Tikhonov regularization (Tikhonov, 1977), could be viewed as either a hybrid or an empirical Bayesian method. The linear estimation matrix supplied by it can typically be interpreted as resulting from the assumption of white noise and replacement of the term σ_v^2 C_g^{-1} in Eq. (4.26) by a term proportional to the identity, where the proportionality scalar (the regularization parameter) is selected using either a deterministic or data-dependent constraint.

Comparing the right-hand-side of Eq. (4.27) to F^{-1} = V S^{-1} U^T (assuming the inverse exists), we can appreciate the violence that estimation theory does to the notion of a high resolution reconstruction of g. Reversal of the effects of F would require application of 1/s_i to the data component in the U_{:i} subspace. Instead, for the regularized estimate, the factor s_i/(s_i^2 + γ) is applied. As s_i becomes small, this factor bears no resemblance to 1/s_i, so there is no attempt at faithful reconstruction of components of the associated source subspaces. Another way of looking at the situation is that a square-integrable
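As a numerical sanity check, the three expressions Eq. (4.24), Eq. (4.26), and Eq. (4.27) coincide under the stated white-signal/white-noise assumptions (the matrix and variances below are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 7, 5
F = rng.standard_normal((m, n))
sg2, sv2 = 2.0, 0.3                       # sigma_g^2 and sigma_v^2
Cg, Cv = sg2 * np.eye(n), sv2 * np.eye(m)

# Eq. (4.24): M = C_g F^T (F C_g F^T + C_v)^{-1}
M1 = Cg @ F.T @ np.linalg.inv(F @ Cg @ F.T + Cv)

# Eq. (4.26): M = (F^T F + sigma_v^2 C_g^{-1})^{-1} F^T
M2 = np.linalg.inv(F.T @ F + sv2 * np.linalg.inv(Cg)) @ F.T

# Eq. (4.27): M = V (S^T S + gamma I)^{-1} S^T U^T, gamma = sv2/sg2;
# the regularized filter factor s_i/(s_i^2 + gamma) replaces 1/s_i.
U, s, Vt = np.linalg.svd(F, full_matrices=True)
S = np.zeros((m, n))
S[:n, :n] = np.diag(s)
gamma = sv2 / sg2
M3 = Vt.T @ np.linalg.inv(S.T @ S + gamma * np.eye(n)) @ S.T @ U.T
```

All three matrices agree to machine precision, so whichever form is cheapest to compute for a given problem size can be used.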
(i.e., well-behaved) function (or image) must be such that its higher order Fourier coefficients tend to zero (we imagine the Fourier coefficients to be with respect to the SVD domain coordinate system of F, given by the columns of V in the discretized approximation). In the presence of white noise, whose Fourier coefficients therefore do not tend to zero, it is clear that higher order Fourier coefficients of data h are hopelessly noise-corrupted. The Wiener filter, and Tikhonov regularization, achieve stable results by removing any attempt at meaningful reconstruction of the high resolution components. There is only one way out of the dilemma of resolution loss. If physiological constraints exist which effectively reduce the dimension of the solution space to be commensurate with the number of useful data Fourier coefficients, one can anticipate that it will be possible to preserve spatial resolution. For example, for the inverse electroencephalography problem (where one wishes to image the brain sources of the scalp electrical potentials), it might be true that only a single focus is responsible for inciting an epileptic seizure, and that this focus can be modeled as a single current source dipole located at some unknown location in the brain. In that case, one is searching for an entity with six degrees of freedom (reflecting its location, orientation, and magnitude). High spatial resolution could conceivably be possible assuming there are six or more data Fourier coefficients (with respect to the transfer matrix SVD-derived coordinate system) that are not dominated by noise. At first blush, such an obvious constraint does not appear to be physiological in the heart, since the heart is not faithfully modeled as a single dipole. However, a deeper look at the geometry reveals that such constraints do in fact apply (in principle) for the "critical points" of ventricular activation, from which an activation map can be fashioned (see Section 4.6).
4.3.3 STOCHASTIC PROCESSES AND TIME SERIES OF INVERSE PROBLEMS

The data available in our problem are distributed in time as well as space. Thus, Eq. (4.20) would be more appropriately written as

h_i = F_i g_i + v_i,    (4.28)
i = 1, 2, ..., n, where i indexes the time instants at which measurements are made (note that we leave open the possibility that the transfer matrix is time-varying, thus we write it as F_i). Underlying Eq. (4.28) are the time series of random vectors h̄_i, ḡ_i, and v̄_i, i.e., stochastic processes. The important additional feature of a stochastic process is that the i-th random vector may have correlations with the j-th random vector for j ≠ i. However, a "state variable model" which embodies known correlations between ḡ_i and ḡ_j, for i ≠ j, is not explicitly available in our problems. Given the lack of such explicit accurate constraints, it is usual to adopt a "minimum information" perspective. Though such an approach seems reasonable, and suggests that the equations of Eq. (4.28) might best be treated independently of each other (by simply applying the methods of the prior section to each one), the reality is more subtle. For convenience, let us define matrices H, G, N such that H_{:i} = h_i, G_{:i} = g_i, N_{:i} = v_i, so that Eq. (4.28) becomes

H_{:i} = F_i G_{:i} + N_{:i},    (4.29)
i = 1, ..., n. We then have the underlying random matrices H̄, Ḡ, N̄ such that their i-th columns are h̄_i, ḡ_i, v̄_i, respectively. The usual assumption is that the entries of Ḡ are independent and identically distributed random variables (and similarly for the entries of N̄). This is equivalent to the statement that all row autocovariance matrices are proportional to the identity matrix (with the same proportionality constant), and all row cross-covariance matrices are the zero matrix (this is also equivalent to the statement that all column autocovariance matrices are proportional to the identity matrix, with the same proportionality constant, and all column cross-covariance matrices are the zero matrix). This minimum information assumption would imply that each member of equation sequence Eq. (4.28) can be treated independently of every other member of the sequence. However, if we leave open the possibility that there are correlations between the different ḡ_i, the members of the equation sequence can no longer be considered necessarily independent, and an optimal processing of the data is subject to specification (or identification) of appropriate choices of the cross-covariance matrices of the columns of Ḡ. Thus, suppose we continue to assume that the row cross-covariance matrices of Ḡ are the zero matrix and the row autocovariance matrices of Ḡ are identical, but that the latter are not necessarily proportional to the identity matrix (thus, we will be rejecting the minimum information approach). This means that the column autocovariance matrices are proportional to the identity matrix, but we still have not specified the column cross-covariance matrices. Estimates for these will be derived from the data, i.e., empirically. This is actually not a radical thing to do, since even in the minimum information approach one typically derives the signal power (or signal-to-noise ratio) from the given data (thus, the minimum information approach is by no means "pure" in this respect).
In fact, under the present conditions, there is a favored nontrivial choice of each cross-covariance matrix E[Ḡ_{:i} Ḡ_{:j}^T]. For the purposes of linear estimation, Eq. (4.29) can be equivalently written in block matrix form as

(H_{:1})   [F_1  0  ...  0 ] (G_{:1})   (N_{:1})
(  .   ) = [ 0  F_2 ...  0 ] (  .   ) + (  .   )
(  .   )   [ .   .  ...  . ] (  .   )   (  .   )
(H_{:n})   [ 0   0  ... F_n] (G_{:n})   (N_{:n})    (4.30)
The Wiener filter (detailed in the last subsection) supplies an optimal estimate of the entries of G as given by

vec(G_est) = C_G diag(F_i)^T [diag(F_i) C_G diag(F_i)^T + C_N]^{-1} vec(H),    (4.31)

where diag(F_i) is the block diagonal matrix on the right-hand-side of Eq. (4.30), vec(·) denotes the stacked columns appearing in Eq. (4.30), and

C_G = (E[Ḡ_{:i} Ḡ_{:j}^T])_{i,j},    (4.32)

C_N = (E[N̄_{:i} N̄_{:j}^T])_{i,j},    (4.33)
i.e., C_G is the block matrix whose (i, j) block entry is the cross-covariance matrix of ḡ_i with ḡ_j, etc. Thus, C_G is the (large) autocovariance matrix of the random vector consisting of the entries of Ḡ. Equation (4.31) is simply the composite of Eq. (4.21) and Eq. (4.24) applied to Eq. (4.30). In the "Standard Method" (the minimum information approach), one takes E[Ḡ_{:i} Ḡ_{:j}^T] to be the zero matrix when i ≠ j. In the "New Method", for i ≠ j, one takes

E[Ḡ_{:i} Ḡ_{:j}^T] = (H_{:i}^T H_{:j} / tr(F_i^T F_j)) I,    (4.34)

assuming the trace of F_i^T F_j is not zero. It can be shown that the mean-square-error in the resulting estimate of signal autocovariance matrix C_G is smaller than the estimate used in the Standard Method (Greensite, 2002). For the case of white noise, and assuming the F_i are identical (i.e., F_i = F, for all i), it can also be shown that the New Method reduces to the following procedure: Instead of individually treating the equations
H_{:i} = F G_{:i} + N_{:i},    (4.35)
i = 1, ... , n, we instead individually treat the equations
(HX)_{:i} = F (GX)_{:i} + (NX)_{:i},    (4.36)
i = 1, ..., n, where the columns of the n x n matrix X are the eigenvectors of H^T H. Denoting the solution to the i-th equation of Eq. (4.36) as (GX)_{:i} (the i-th column of a matrix (GX)), we take the solution estimate for G to be

G_est = (GX) X^T.    (4.37)

The method generalizes to the case where there are nontrivial spatial correlations, and also to the case where a priori constraints are available regarding time correlations. However, if nonwhite characteristics of the noise are known, the route of Eq. (4.37) is unavailable, and one is left with the computationally complex method resulting directly from Eqs. (4.30)-(4.34) (Greensite, 2002). Underlying the New Method is the recognition of a fundamental asymmetry regarding H = FG + N. That is, the signal G undergoes a spatial transformation, but does not undergo a time transformation. For an equation of the form h = Fg + v, where we consider h, g, v to be spatial vectors, it is quite reasonable in a filtering context to impose a signal (g) autocovariance matrix proportional to the identity matrix. This would imply an autocovariance matrix for the noiseless portion of h as given by F F^T, implying nontrivial filtering of noisy h (see the second equality in (4.25)). However, for an equation of the form h = h_0 + v, where h and v are considered time series, it is quite unreasonable to set the signal (h_0) autocovariance matrix proportional to the identity, since the resulting (Wiener) filter is no filter at all (assuming white noise). Thus, since F is a spatial transformation, while G is spatiotemporal, one cannot simply impose the minimum information condition that the entries of G are independent and identically distributed, assuming that one wishes to
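The white-noise procedure of Eqs. (4.35)-(4.37) can be sketched as follows (a schematic illustration, not the original implementation: the dimensions, the noise level, and the crude energy-based choice of per-equation Tikhonov parameter are all assumptions made here):

```python
import numpy as np

rng = np.random.default_rng(4)
m, p, n = 12, 10, 8              # leads, source locations, time instants
F = rng.standard_normal((m, p))

# A temporally correlated source (rank one in time, for illustration).
G_true = rng.standard_normal((p, 1)) @ rng.standard_normal((1, n))
sigma = 0.1
H = F @ G_true + sigma * rng.standard_normal((m, n))

def tikhonov(b, lam):
    # Regularized solution of one equation of the form b = F g + noise.
    return np.linalg.solve(F.T @ F + lam * np.eye(p), F.T @ b)

def lam_for(b):
    # Crude data-driven regularization parameter: damp heavily when the
    # column carries little energy beyond the expected noise power.
    return m * sigma**2 / max(b @ b - m * sigma**2, 1e-6)

# Standard Method: treat each time instant (column of H) independently.
G_std = np.column_stack([tikhonov(H[:, i], lam_for(H[:, i]))
                         for i in range(n)])

# New Method: rotate the time axis by X (eigenvectors of H^T H), treat
# each transformed equation Eq. (4.36) individually, then rotate back
# via Eq. (4.37): G_est = (GX) X^T.
_, X = np.linalg.eigh(H.T @ H)
HX = H @ X
GX = np.column_stack([tikhonov(HX[:, i], lam_for(HX[:, i]))
                      for i in range(n)])
G_new = GX @ X.T

err_std = np.linalg.norm(G_std - G_true)
err_new = np.linalg.norm(G_new - G_true)
```

Because the source here is temporally coherent, most transformed columns contain mostly noise and are strongly damped; this is the sense in which the rotation performs temporal filtering that the column-by-column Standard Method cannot.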
effectively filter in the time domain. In the setting of "minimum constraints" as opposed to "minimum information", the New Method is a means of performing spatiotemporal filtering in a manner dictated by the broken symmetry of the problem, and the desire to minimize mean-square-error in the utilized signal autocovariance matrix.
4.4 EPICARDIAL POTENTIAL IMAGING

The source formulations for the inverse problem of electrocardiography have included those of a single moving dipole (Gabor and Nelson, 1954), two moving dipoles (Gulrajani et al., 1984), dipole arrays (Lynn et al., 1967; Barber and Fischman, 1961; Bellman et al., 1964; He and Wu, 2001), multipole expansion coefficients (Geselowitz, 1967), and a heart excitation model (Li and He, 2001). But in a very influential letter to the editor, Zablow (1966) asserted the need to reconstruct an actual anatomically-based entity that was already being accessed invasively, so that artifacts of the source model might be minimized, and the result could be thought of as representing some sort of verifiable physiological truth. In essence, he noted that a linear relationship existed between the epicardial potentials and measurable potentials at the body surface, and suggested the former as the source formulation to be reconstructed. This was particularly attractive, since physiologists were already engaged in measuring the epicardial potentials invasively, and these were deemed useful. Over the next several decades many investigators pursued the objective of epicardial potential imaging. The many proposed refinements of technique can be divided into those pertaining to

• Statistical regularization,
• Tikhonov regularization,
• Truncated SVD regularization,
• Constrained least squares regularization,
• Nonlinear regularization methods,
• Augmented source formulation,
• Different methods for selecting regularization parameters,
• Preprocessing the data,
• Introduction of spatiotemporal constraints.
Before embarking on a discussion of these refinements, we observe that there is no consensus regarding which methods are the most worthy of employment, and we do not attempt such value judgements here. A comprehensive approach to this question is itself a sizable objective that has not yet been achieved. Ultimately, the difficulty is in the experimental setup required, i.e., the need for (ideally) simultaneous collection of body surface and epicardial data, with coincident anatomical imaging and electrode registration, in a series of animals and human subjects with a variety of pathological conditions (Nash et al., 2000).
4.4.1 STATISTICAL REGULARIZATION

In what was apparently the first serious treatment of the epicardial potential source formulation, Martin and Pilkington (1972) reported on the dismal prospects of "unconstrained"
inverse epicardial potential imaging, identifying implications of the problem ill-posedness discussed in Section 4.3. They subsequently applied the Wiener filter, and reported more encouraging results in followup simulations (Martin et al., 1975). This approach requires estimates of both the signal and noise autocovariance matrices. While the noise might be considered white (ignoring inaccuracies in the forward problem construct F), a choice for the signal autocovariance matrix is less obvious. They proposed two ways of choosing one. The first was based on estimating the spatial autocovariance from time ensembles of epicardial potential maps supplied from a representative set of activation sequences. The second was a Monte Carlo method, whereby each epicardial location was given some a priori probability of being activated at any given time, and epicardial maps were then generated by random numbers assigned to each location, thus leading to a computation for the signal autocovariance matrix. Following the innovations of Barr et al. (1977) on the forward problem, Barr and Spach (1978) reported on inverse calculation of epicardial potentials in twelve dogs with chronically implanted epicardial electrodes. In applying the Wiener filter, they simply opted to take the signal autocovariance matrix as proportional to the identity (i.e., as random variables, the epicardial potentials at all locations on the epicardium were presumed to be independent and identically distributed). They concluded that some features of the epicardial potential distribution through time can be imaged, particularly in dogs for which detailed geometry measurements were available (postmortem). Recently, van Oosterom (1999) has re-examined the statistical regularization approach, concluding that impressive improvements in accuracy (compared with other regularization methods) are possible if a nontrivial accurate signal covariance matrix is available.
He suggested that the signal autocovariance matrix could be based on prior estimation of the activation sequence via other techniques (e.g., as in Section 4.6).
4.4.2 TIKHONOV REGULARIZATION AND ITS MODIFICATIONS

Despite the presence of noise v in Eq. (4.20), we are still tempted to view our problem as one of applying a kind of "inverse" of F. Indeed, given that the magnitude of v is "small", we are even tempted to pretend that we are dealing with h = Fg. Our prior discussion has surely revealed that one cannot expect to apply the actual inverse of F to h because of instability problems, but it is also useful to specifically address the (typical) setting where F doesn't even have an inverse (which will always be the case if F is not square). Firstly, it could be that h is not even in the range of F. In that case, we might (naively) choose the solution estimate g_est that minimizes

||F g - h||^2    (4.38)

over all points g in n-space (the expression Eq. (4.38) is known as the "residual" or "discrepancy"). But a second problem could be that the residual may not have a unique minimizer (as occurs when the dimension of h is smaller than the dimension of g). One could then ask for the estimate g_est that is of minimum norm among all the minimizers of the residual. From convexity arguments, it can be shown that the minimizer of the residual Eq. (4.38) of smallest norm is unique. In fact, it can be shown that the minimum-norm
F. Greensite
least-squares solution for g in h = Fg is given by

g_est = F†h,

where F† is the "pseudoinverse" of F. For the SVD of F as in Eq. (4.19), the pseudoinverse is given by

F† = VS†Uᵀ,

where S† is the diagonal matrix whose i-th diagonal entry is 1/s_i if s_i ≠ 0, and zero otherwise. Intuitively, it is easy to see why this works: The null space of F† is the subspace orthogonal to the range of F. Thus, F†h does not burden the estimate with any component that doesn't contribute to fitting the data. Otherwise, F† simply undoes the attenuation s_i that components of g experience when F is applied. However, the above is simply a fix for the situation where F does not have an inverse. From Section 4.3, we know that, even if F has an inverse, we are faced with solution estimate instability if F is ill-conditioned, because of the subspaces corresponding to small positive singular values. This consideration obviously will still hold for the pseudoinverse-based solution. We have already encountered the Wiener filter regularization approach, which requires a priori estimates of at least the form of the signal and noise autocovariance matrices. Colli-Franzone et al. (1985) introduced the Tikhonov regularization method to the epicardial potential imaging problem, ostensibly avoiding the problem of providing estimates for the signal autocovariance and noise autocovariance matrices. This approach skirts the usual notions of stochastic processes, and instead begins with the desire to minimize Eq. (4.38). But instead of simply searching for a solution estimate g_est which minimizes the residual (which would lead to an estimate that is exceedingly noise sensitive and unstable), in the Tikhonov approach one searches for the solution estimate that minimizes

‖Fg_est − h‖² + γ‖Rg_est‖²,   (4.39)

where R is some matrix. Thus, one seeks an estimate for which the residual ‖Fg_est − h‖² is "small", while at the same time some other property of the estimate, measured by ‖Rg_est‖², is also small. For example, R might be the identity, in which case one is looking for an estimate with small residual as well as a small norm (unstable nonphysiological solutions will tend to have large norms).
Alternatively, R could be such that ‖Rg_est‖² reflects the first or second spatial derivative of the solution estimate, so that the regularized estimate would have to be relatively "smooth". These approaches require selection of a "regularization parameter" γ, which regulates how strong an influence the co-minimized second property has in determining the solution estimate. In the Tikhonov approach, the regularized estimate is given by

g_est = [(FᵀF + γRᵀR)⁻¹Fᵀ] h,   (4.40)

obtained by setting to zero the derivative of Eq. (4.39) with respect to g (a gradient), assembling the simultaneous equations into a matrix equation, and solving for g (note
Heart Surface Electrocardiographic Inverse Solutions
that Eq. (4.39) is a function of the variable g, a point in m-space; thus, its gradient is a vector in m-space). In the sense of Section 4.3, Eq. (4.40) evidently describes a linear estimation method, the estimation matrix being the expression in brackets on the right-hand-side of Eq. (4.40). Many alternative regularization operators R can be used. With "zero-order Tikhonov", the estimation matrix results from the choice R = I, so the technique corresponds to statistical regularization under the assumption that the signal and noise covariance matrices are both proportional to the identity matrix, with the regularization parameter presumptively being the inverse of the square of the signal-to-noise ratio (i.e., compare Eq. (4.40) to the first equality in Eq. (4.27)). First-order and second-order Tikhonov regularization correspond to a choice of R derived from the gradient and Laplacian operators. Thinking of these in the context of statistical regularization (compare Eq. (4.40) to Eq. (4.26)), these symmetric higher-order Tikhonov regularization operators correspond to a signal autocovariance matrix that is a smoothed version of the sharp "ridge" represented by the identity matrix (with the assumption of white noise). That is, there is now a nonzero covariance between spatially proximate locations, instead of these being taken to be independent (as with a signal autocovariance matrix proportional to I). Although Colli-Franzone et al. (1985) suggested that first-order Tikhonov regularization was more accurate in in vitro experiments, Messinger-Rapport and Rudy (1988) found no significant differences in the results obtained with zero-order, first-order, or second-order Tikhonov regularization. When expressed in terms of an SVD, F = USVᵀ, the zero-order Tikhonov solution estimate is given by

g_est = V(SᵀS + γI)⁻¹SᵀUᵀh,   (4.41)

which employs a linear estimation matrix comparable to that in Eq. (4.27).
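The equivalence between the direct zero-order Tikhonov formula (Eq. (4.40) with R = I) and its SVD form Eq. (4.41) can be checked numerically on a toy problem; everything below (matrix sizes, the value of γ) is an arbitrary illustration, not data from any study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy transfer matrix and noisy data (sizes are illustrative only).
n, m = 30, 40
F = rng.standard_normal((n, m))
h = F @ rng.standard_normal(m) + 0.01 * rng.standard_normal(n)

gamma = 1e-2  # regularization parameter

# Zero-order Tikhonov, Eq. (4.40) with R = I:
#   g_est = (F^T F + gamma I)^{-1} F^T h
g_direct = np.linalg.solve(F.T @ F + gamma * np.eye(m), F.T @ h)

# The same estimate via the SVD F = U S V^T, Eq. (4.41): each data component
# u_i^T h is scaled by s_i / (s_i^2 + gamma) instead of 1/s_i, which damps
# the unstable directions associated with small singular values.
U, s, Vt = np.linalg.svd(F, full_matrices=False)
g_svd = Vt.T @ ((s / (s**2 + gamma)) * (U.T @ h))

assert np.allclose(g_direct, g_svd)
```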
A so-called "regional regularization" scheme was suggested by Oster and Rudy (1997), whereby the solution is given as
where D is a diagonal matrix whose diagonal elements take on a few different values; in effect, the diagonal values of D represent multiple regularization parameters. Again, this can be interpreted as an attempt to supply the signal autocovariance matrix. The "spatial regularization" method (Velipasaoglu et al., 2000) selects R in a manner inspired by the fact that the noisy data fails to satisfy the Discrete Picard Condition (Hansen, 1992; Throne and Olson, 2000). The latter condition asserts that, for a stable solution, the squares of the Fourier coefficients of the data with respect to the eigenvectors of FFᵀ should on average decay faster than the eigenvalues of FFᵀ. A means of modifying the data to conform to this Condition is the basis of this approach.

4.4.3 TRUNCATION SCHEMES

As noted earlier, the minimum-norm least-squares solution for g in h = Fg is given by g_est = F†h = VS†Uᵀh, where S† is the diagonal matrix whose i-th diagonal entry is 1/s_i if s_i ≠ 0, and zero otherwise. With this in mind, one could consider a regularized solution
estimate given by

g_est = VS†Uᵀh,   (4.42)

where S† is now the diagonal matrix whose i-th diagonal entry is 1/s_i if s_i > ε, and 0 otherwise. This regularization method is known as Truncated SVD, or TSVD regularization (Hansen, 1992). The value ε functions as a regularization parameter. TSVD performance is usually very similar to that of zero-order Tikhonov regularization. The truncation idea is also incorporated in the "Generalized Eigensystems" approach of Throne and Olson (Throne and Olson, 1994), which is relevant to a finite element discretization of the body (rather than a boundary element discretization). Instead of truncating a solution expanded in the singular vectors of the transfer matrix F (as in Eq. (4.42)), they consider a set of generalized eigenvectors defined over the entire finite element mesh, having the properties that each generalized eigenvector satisfies the boundary conditions on the forward problem, as well as Laplace's equation within the volume, and the "subvectors" consisting of the components on the epicardial surface are orthogonal. One then constructs a linear combination of the generalized eigenvectors such that the body surface potential data h is fitted by the components that correspond to locations on the body surface. The components corresponding to the epicardial potential locations then take on values determined by this linear combination, which would be the presumed inverse solution desired. However, this will nominally lead to an unstable noise-dominated solution. Therefore, one truncates the linear combination, using only the generalized eigenvectors associated with the largest generalized eigenvalues, thus achieving a stable solution estimate.
Instead of expanding (and truncating) the solution series in terms of the eigenvectors of FᵀF as with TSVD (a sequence which most efficiently represents the effects of F for a given (truncated) number of terms), one is truncating a series derived from a set of field vectors that most efficiently pack the power of the field over the entire body volume in a given (truncated) set of components. Truncation is also employed in the "local regularization" scheme of Johnson and MacLeod (Johnson, 2001). In this approach, it is recognized that F is expressed in terms of the "inverses" of three different submatrices, when a finite element discretization of the forward problem is employed. Since these matrices have very different condition numbers, the implication is that they should each receive different degrees of regularization (e.g., individualized SVD truncation).
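A TSVD sketch along the lines of Eq. (4.42) follows. The transfer matrix here is synthetic and deliberately ill-conditioned; the singular value decay rate, the noise level, and the truncation level ε are all arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ill-conditioned transfer matrix: geometrically decaying singular
# values mimic the severe ill-conditioning discussed in Section 4.3.
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.5 ** np.arange(n)
F = U @ np.diag(s) @ V.T

g_true = rng.standard_normal(n)
h = F @ g_true + 1e-6 * rng.standard_normal(n)

# TSVD, Eq. (4.42): invert s_i only when it exceeds the truncation level eps.
eps = 1e-4
s_inv_trunc = np.where(s > eps, 1.0 / s, 0.0)
g_tsvd = V @ (s_inv_trunc * (U.T @ h))

# For comparison, the untruncated pseudoinverse solution lets noise in the
# small-singular-value subspaces overwhelm the estimate.
g_naive = np.linalg.pinv(F) @ h
```

The truncated estimate recovers the components of g_true lying in the retained singular subspace, while the untruncated pseudoinverse amplifies the noise by factors up to 1/s_min.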
4.4.4 SPECIFIC CONSTRAINTS IN REGULARIZATION

The Tikhonov-type formulation Eq. (4.39) also suggests a way in which other types of constraints could be formulated. For example, suppose one has knowledge that the solution shares some features with a preliminary estimate g_rough. One could then suggest the minimization of

‖Fg_est − h‖² + γ‖g_est − g_rough‖².   (4.43)

The explicit expression for g_est is then obtained by setting the gradient of the above expression to 0, and solving for g. This is known as the Twomey method (Oster and Rudy, 1992).
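A sketch of the Twomey estimate follows. Setting the gradient of Eq. (4.43) to zero gives the normal equations (FᵀF + γI)g_est = Fᵀh + γg_rough. The preliminary estimate g_rough below is fabricated for illustration; in practice it might come from a neighboring time step, as discussed in Section 4.4.9:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem (sizes and noise level are illustrative only).
n, m = 30, 40
F = rng.standard_normal((n, m))
g_true = rng.standard_normal(m)
h = F @ g_true + 0.01 * rng.standard_normal(n)

# Hypothetical preliminary estimate sharing features with the solution.
g_rough = g_true + 0.1 * rng.standard_normal(m)

# Twomey estimate: zero gradient of Eq. (4.43) yields
#   (F^T F + gamma I) g_est = F^T h + gamma g_rough
gamma = 0.1
g_est = np.linalg.solve(F.T @ F + gamma * np.eye(m),
                        F.T @ h + gamma * g_rough)
```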
Heart Surface Electrocardiographic Inverse Solutions
143
Iakovidis and Gulrajani (1992) introduced a method whereby a deliberately over-regularized estimate (to be used to estimate the location of the epicardial zero-potential line, but otherwise having too few interesting features) was used to constrain the solution for what would otherwise be an under-regularized estimate (i.e., the second regularization parameter would have been too small if the constraint had not been present, in the sense that the estimate obtained would have been unstable and "noisy").
4.4.5 NONLINEAR REGULARIZATION METHODOLOGY

Overwhelmingly, linear estimation methods have been applied to obtain regularized (i.e., noise-stable) estimates of linear formulations of the inverse electrocardiography problems. However, nonlinear (e.g., information theoretic) methods have also been applied (a particularly recent example is provided in (He et al., 2000)). We will not describe such approaches here.
4.4.6 AN AUGMENTED SOURCE FORMULATION

If one treats Eq. (4.5) via a Green's function approach along the lines of what was done with Eq. (4.6) (i.e., performing two integrations-by-parts analogous to Eq. (4.9) and Eq. (4.11)), one obtains an expression relating body surface potentials to the composite of epicardial potentials and the normal component of epicardial current density (as in (Greensite, 2001, pp. 151-152)).
Noting that such a formulation occurs as an intermediate step in the development of the forward problem method expounded by Barr et al. (1977), Horacek and Clements (1997) investigated solutions obtained where both the epicardial potential and the epicardial normal current density are inversely computed. They suggested that this problem might be slightly better posed than the traditional epicardial potential formulation, and they also investigated refinements in the regularization technique.
4.4.7 DIFFERENT METHODS FOR REGULARIZATION PARAMETER SELECTION

The Tikhonov estimate Eq. (4.40) requires selection of a value for the regularization parameter. Some guidance in this regard is provided by the "Discrepancy Principle" (Hansen, 1992). Here the reasonable assumption is made that the discrepancy (or residual) ‖Fg − h‖² is not zero, because of the noise present. Rather, the discrepancy (for the true solution g) is most likely to be the noise power. Thus, it makes sense to look for a solution estimate which produces this discrepancy. So, consider the constrained minimization problem: "minimize ‖g‖² such that ‖Fg − h‖² = ε". Again, ‖g‖² and ‖Fg − h‖² are each functions of g (which varies over points in m-space). We know from calculus that minimization of the first function subject to the constraint on the second function implies (under rather general conditions) that the gradients of the two functions at the constrained minimum g_min lie on the same ray, i.e., the gradients are proportional at g_min. Another
way of saying this is that there exists a scalar γ such that the gradient of Eq. (4.39) is zero at g_min (in this case R = I). As we know, the requisite g_min is given by the right-hand-side of Eq. (4.40), where the regularization parameter γ was to be determined. But now, the regularization parameter is simply given as that which produces a Tikhonov solution estimate g_min satisfying the discrepancy expression ‖Fg_min − h‖² = ε, where ε is the noise power. However, the error in the data (noise power ε) is not known (being a composite of electrode noise and modeling errors in F). There are actually several other methods for regularization parameter selection. In the so-called "L-curve method" (Hansen, 1992), one computes a log-log plot of the first term in Eq. (4.39) versus the second term in Eq. (4.39) (the residual versus the solution estimate seminorm). The solution estimate is chosen as the one corresponding to the "corner" of the above L-shaped graph (a balance between small seminorm and small discrepancy). It should be noted, however, that a corner on the L-curve does not always exist. The Composite Residual and Smoothing Operator (CRESO) method (Colli-Franzone et al., 1985) chooses the smallest positive value of the regularization parameter for which the second derivative of the first term in Eq. (4.39) with respect to the regularization parameter equals the second derivative of the second term in Eq. (4.39) with respect to the regularization parameter. The cross-validation method (Wahba, 1977) is another important regularization parameter selection method, though it has not been prominently applied in the context of the inverse electrocardiography problem.
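The Discrepancy Principle can be illustrated with a small numerical sketch. It exploits the fact that the Tikhonov discrepancy ‖Fg_est − h‖² increases monotonically with γ, so the parameter can be bracketed by bisection. The noise power is assumed known here (the data is synthetic), which is precisely the assumption questioned in the text:

```python
import numpy as np

rng = np.random.default_rng(4)

# Overdetermined toy problem (sizes are illustrative only).
n, m = 40, 30
F = rng.standard_normal((n, m))
noise = 0.05 * rng.standard_normal(n)
h = F @ rng.standard_normal(m) + noise
eps = np.sum(noise**2)  # noise power (known only because this is synthetic)

def residual(gamma):
    # Discrepancy ||F g - h||^2 of the zero-order Tikhonov estimate.
    g = np.linalg.solve(F.T @ F + gamma * np.eye(m), F.T @ h)
    return np.sum((F @ g - h) ** 2)

# The discrepancy grows monotonically with gamma, so bisect (geometrically,
# since the useful range of gamma spans many decades).
lo, hi = 1e-10, 1e6
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if residual(mid) < eps:
        lo = mid
    else:
        hi = mid
gamma_star = np.sqrt(lo * hi)
```

The resulting γ yields an estimate whose residual matches the presumed noise power; the L-curve and CRESO criteria would instead be computed from the same residual and seminorm curves as functions of γ.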
4.4.8 THE BODY SURFACE LAPLACIAN APPROACH

As we have previously noted (see Eq. (4.25)), statistical regularization can be viewed as the composite of a particular "preprocessing step" followed by application of the standard inverse (if such exists). This suggests the possible application of other preprocessing steps. He and Wu (1997) proposed preprocessing the data to be in the form of body surface Laplacian measurements, and solving the inverse problem using this processed data as input. The resulting transfer matrix is of a different nature (evidently, it results from left-multiplying the usual transfer matrix by the same matrix that is used to preprocess the body surface data), in that the surface Laplacian of potential at a body surface location is significantly influenced by many fewer source locations than potential itself (i.e., there is less "blurring"). As a trade-off, the effect of the source on the "data" falls off more rapidly (with the fourth power of the distance from the source, rather than the second power of the distance, as with body surface potential), and there is a theoretical noise amplification in numerical differentiation of the body surface potential. Ignoring the curvature of the body surface, the body surface Laplacian of body surface potential h with respect to an (x, y) body surface coordinate system is given by

∂²h/∂x² + ∂²h/∂y².

Ultimately, we can write this as L[h], where L is the Laplacian, a linear operator. After the problem is discretized for numerical treatment, L[·] is simply a matrix. Thus, h = Fg + v
becomes

Lh = (LF)g + Lv.
Our data is now Lh, and we now need to "invert" (LF) to estimate epicardial potential g. The regularization tools remain the same as before. L is a differential operator, and thus can be thought of as akin to a high-pass filter. With respect to a Fourier expansion of h, application of L amplifies the high frequency terms. In particular, the high frequency components of the noise are greatly amplified (in the nondiscretized setting, the noise is "unboundedly" amplified). However, the latter is in principle taken care of by the fact that LF is more "singular" than F, meaning that its "inverse" will be smoother (i.e., application of the inverse is more stable, and tends to smooth noise). If one has an expectation of somehow reducing the Lv contribution to the data Lh prior to application of the "inverse", one might expect greater stability and fidelity in the estimate of g than that obtainable with the direct treatment of h = Fg + v. Such an expectation could be reasonable with use of Laplacian electrodes (He and Cohen, 1992), though it has not been established that these can be accurately designed in practice. In the absence of such, investigations have proceeded with direct application of L to measured data h. The approach seems to have potential in identifying and spatially distinguishing cardiac sources close to the body surface electrodes (Johnston, 1997; He and Wu, 1997).

4.4.9 SPATIOTEMPORAL REGULARIZATION
Ultimately, one is faced with a time series of problems

h_i = Fg_i + v_i,   i = 1, 2, ...,   (4.44)

where the subscript i now refers to the source and data at the i-th time point in the cardiac cycle. Oster and Rudy (1992) suggested using preliminary (e.g., zero-order Tikhonov) estimates at time points i − 1 and/or i + 1 to constrain the regularization at the i-th time step. This was done using a Twomey regularization formalism via Eq. (4.43). On the other hand, Eq. (4.44) evidently describes a stochastic process, which raises the question of applying Kalman filter theory. This requires that a stochastic model be applied, defining the presumed interdependence of the epicardial potentials between different times (Joly et al., 1993), which is itself a not entirely trivial problem. Temporal and spatial constraints can also be joined by the method of Brooks et al., which employs two or more regularization parameters in a traditional constrained minimization format (Brooks et al., 1999). Thus, Eq. (4.44) is written as
(h_1ᵀ, h_2ᵀ, ..., h_Tᵀ)ᵀ = diag(F, F, ..., F) (g_1ᵀ, g_2ᵀ, ..., g_Tᵀ)ᵀ + (v_1ᵀ, v_2ᵀ, ..., v_Tᵀ)ᵀ.   (4.45)
One now writes a functional to be minimized, consisting of the residual (for this augmented
problem), a spatial regularizing operator (e.g., expressing the sum of the norms of the solution estimates at each time point), and a temporal regularizing operator (e.g., the magnitude of the discretized "time derivative" of the solution estimates over all the time points), where the latter two operators are given their own regularization parameters. The solution estimate is ultimately expressed as

g_est = [diag(F)ᵀdiag(F) + γ₁I + γ₂BᵀB]⁻¹ diag(F)ᵀ h,
where diag(F) is the block matrix on the right-hand-side of Eq. (4.45), B is the discretized version of a temporal differential operator, and γ₁, γ₂ are the two regularization parameters. The "admissible solution" approach of Ahmed et al. (1998) posits that any solution satisfying a sufficiently robust composite of constraints is deemed satisfactory, and such constraints can include those related to time. The solution algorithm requires the constraints to be convex; e.g., the "ball" of vectors g satisfying ‖g‖² < C is an example of a convex set (any line joining any two members of the set consists only of members lying in the set). The need for regularization parameters is replaced by the need for bounds defining the required convexity. Finally, the approach of Greensite (1998; 2002) (described in Section 4.3.3) effectively replaces the original sequence of (nonindependent) equations Eq. (4.35) (or Eq. (4.44)) with a smaller number of mutually independent equations Eq. (4.36), without the imposition of any extrinsic temporal constraints (or temporal regularization parameter). The method derives from the recognition that there is something intrinsically wrong with the assumption that the entries of G are realizations of independent and identically distributed random variables. Indeed, this symmetry condition is broken once one poses the problem described by Eq. (4.44). Given the assumption that the rows of G are independent and have identical autocovariance matrices, the solution mechanism uses a more accurate signal autocovariance matrix estimate than the other methods (in a mean-square-error sense).
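A sketch of this kind of joint spatiotemporal estimate follows (patterned on, but not reproducing, the Brooks et al. formulation): the T single-time problems are stacked as in Eq. (4.45), B is taken to be a first-difference operator in time, and γ₁, γ₂ are the spatial and temporal regularization parameters. All sizes and parameter values are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy sizes: n leads, m epicardial nodes, T time points.
n, m, T = 10, 12, 5
F = rng.standard_normal((n, m))

# Stack the T problems h_i = F g_i + v_i into the augmented system (4.45).
G_true = rng.standard_normal((m, T))
H = F @ G_true + 0.01 * rng.standard_normal((n, T))
bigF = np.kron(np.eye(T), F)  # the block-diagonal matrix diag(F, ..., F)
h_stack = H.T.ravel()         # stacked data [h_1; ...; h_T]

# First-difference operator across time (a discretized d/dt), applied
# nodewise to the stacked solution vector [g_1; ...; g_T].
B = np.kron(np.diff(np.eye(T), axis=0), np.eye(m))

# Joint estimate with spatial parameter gamma1 and temporal parameter gamma2.
gamma1, gamma2 = 1e-2, 1e-1
A = bigF.T @ bigF + gamma1 * np.eye(m * T) + gamma2 * (B.T @ B)
g_est = np.linalg.solve(A, bigF.T @ h_stack)
G_est = g_est.reshape(T, m).T  # recover the m x T array of time estimates
```

The temporal penalty ‖Bg‖² couples the otherwise independent single-time problems, so the estimate at each time point is smoothed toward its neighbors.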
4.4.10 RECENT IN VITRO AND IN VIVO WORK

There has been a significant amount of in vitro work done with the Utah torso tank. The experimental setup employs a heart suspended in a torso-shaped electrolytic tank, perfused by an anesthetized dog external to the tank (Oster et al., 1997). Electrodes are present on the outer margin of the tank, and also in proximity to the epicardium. Such work has addressed inverse reconstruction of epicardial potentials and activation in the setting of sinus rhythm, pacing, and arrhythmias (Burnes et al., 2001), and has also been used to assess the impact of torso inhomogeneities (Ramanathan and Rudy, 2001). Perhaps the most impressive demonstration of the potential promise of the epicardial potential imaging formulation is in the quasi-in vivo work where densely sampled epicardial potential data was accessed from infarcting dogs, and used in simulations with numerical human torsos to test the fidelity of the zero-order Tikhonov inversion under realistic conditions of modeling and electronic noise (figure 4.4) (Burnes et al., 2000). This approach has also been applied to demonstrate the feasibility of reconstructing repolarization properties of interest (e.g., increased dispersion of repolarization (Ghanem et al., 2001)). On the other hand, earlier results of a different group, using invasive data from patients undergoing arrhythmia surgery, suggested that the usual epicardial potential regularization
FIGURE 4.4. Maps of epicardial potential at two different times during ventricular activation, in the study of (Burnes et al., 2000). The invasively measured epicardial potential data (from dogs) is used to forward-generate body surface potentials on a numerical human torso surface. Geometrical and "electronic" noise is added to these body surface potentials, and inversely reconstructed epicardial potential maps are computed. [From: Burnes, J. E., Taccardi, B., MacLeod, R. S., and Rudy, Y., 2000, Noninvasive ECG imaging of electrophysiologically abnormal substrates in infarcted hearts, a model study, Circulation. 101: 533-540. Used by permission.]
methodology was able to usefully image epicardial potential during the QRS interval only in its initial portions (Shahidi et al., 1994). Finally, a report by Penney et al. (2000) (extending work by MacLeod et al. (1995)) identified local changes in inversely computed epicardial electrograms in patients whose data was accessed during coronary catheterization, preceding and following angioplasty balloon catheter inflation. In the eighteen study patients, the predicted region of ischemia following balloon inflation correlated with the expected region of perfusion deficit based on the vessel occluded.
4.5 ENDOCARDIAL POTENTIAL IMAGING

Interventional cardiologists employ transvenous catheter procedures to treat arrhythmogenic foci and aberrant conduction pathways. Such treatment first requires mapping the endocardial potential. This initial invasive imaging is cumbersome, tedious, and lengthy. Typically, a roving transvenous electrode catheter is brought into contact with many endocardial locations over the course of many heartbeats, and a depiction of an endocardial activation map is thereby inferred (an improved technology along these lines is given by Gepstein et al. (1996)). The number of sites accessed is limited, and there is no accounting for beat-to-beat activation variability when reconstructing the maps from the many beats. Recently introduced expandable basket electrode arrays (Schmitt et al., 1999) have their own problems related to limited numbers of electrodes, the need to contact (and perhaps
irritate) the endocardium, and the possibility of difficulties in collapsing the basket at the end of the acquisition. These problems can be potentially addressed by the use of a transvenous catheter whose tip is studded with multiple electrodes, and which is placed somewhere in the midst of a cardiac chamber (without contacting the endocardium). Once the catheter location relative to the endocardium is registered, it becomes theoretically possible to inversely compute the endocardial potentials from a single heartbeat; indeed, to follow dynamic isopotential maps within a single beat, as well as beat-to-beat changes in activation maps. For this inverse problem, the volume is bounded by the endocardial surface and the multielectrode probe surface. Laplace's equation holds in this volume, and the boundary conditions are the (unknown) endocardial potentials, and the zero normal current density at the multielectrode probe surface. As in Section 4.2, a linear relationship is derived between the endocardial potentials and the catheter electrode potentials. Notwithstanding the inconvenience of the required cardiac catheterization, there are two very significant advantages of this formulation over the technique of imaging the epicardial potentials from the body surface. First, the electrodes are relatively close to all portions of the surface to be imaged (e.g., as opposed to the distance between body surface electrodes and the posterior wall of the heart). Second, the relevant volume is composed only of blood in the lumen of the cardiac chamber. Therefore, the geometric modeling required for estimation of the transfer matrix is vastly less, and the uncertainties in the values of key components of the model (i.e., tissue conductivities) are markedly diminished (the blood has uniform isotropic conductivity).
The initial proposal and work on a multielectrode noncontact array, placed in a cardiac chamber for purposes of accessing endocardial potentials, was due to Taccardi et al. (1987). In the past few years there has been much significant work reported on successors to this idea. For example, in experiments on dogs, Khoury et al. (1998) used a 128-electrode catheter, inserted via a purse-string suture in the left ventricular apex, and showed that faithful renditions of endocardial activation, both with paced and spontaneous beats, were possible by solving the inverse problem. Ischemic zones were also well defined. A spiral catheter design has also been investigated (Jia et al., 2000). An impressive series of experiments has been performed with a competing system, developed by Endocardial Solutions, Inc. In addition to a 64-electrode 7.5 ml inflatable balloon catheter, a second transvascular catheter is passed and dragged along the endocardium. As it is dragged, a several-kHz signal is passed between it and the electrode catheter, localizing its position with respect to the electrode catheter. In this way, a rendition of the endocardium with respect to the electrode catheter is produced. Following construction of a "virtual endocardium" via a convex hull algorithm applied to the above anatomical data, the inverse problem is then solved, generating several thousand "virtual electrograms" on the virtual endocardium (figure 4.5 and figure 4.6). The literature on this subject is growing rapidly, and we cite only a few examples. Overall, very impressive utility and fidelity is being established. For example, a report by Schilling et al. (2000) describes the classification of atrial fibrillation in humans in terms of numbers of independent reentrant wavefronts identified. A report by Strickberger et al. (2000) describes the successful ablation of fifteen instances of ventricular tachycardia guided by this catheter system. A recent report by Paul et al.
(2001) describes the utility of the system in directing catheter ablative therapy in subjects with atrial arrhythmias refractory to pharmacologic therapy.
FIGURE 4.5. Surface ECG from lead I (ECG I), endocardial electrogram via a contact electrode (C), and inversely reconstructed electrogram using input from a noncontact multielectrode probe in the atrium (R), with C and R from the same location, in three patients with atrial fibrillation, in the study of (Schilling et al., 2000). [From: Schilling, R. J., Kadish, A. H., Peters, N. S., Goldberger, J., Wyn Davies, D., 2000, Endocardial mapping of atrial fibrillation in the human right atrium using a non-contact catheter, European Heart Journal. 21: 550-564. Used by permission of the publisher, WB Saunders.]
4.6 IMAGING FEATURES OF THE ACTION POTENTIAL

4.6.1 MYOCARDIAL ACTIVATION IMAGING

Epicardial and endocardial potential imaging addresses the need to reconstruct something that is currently accessed invasively, and is thus of evident interest. However, such potentials are not themselves a clinical endpoint. Ultimately, clinicians are interested in the action potential, or at least features of the action potential. The most important features of the action potential are activation time (time of arrival of phase zero at every location, the aggregate of which globally describes conduction disturbances), phase zero amplitude (reflecting ischemia), and action potential duration (reflecting refractory periods, potentially associated with propensity for re-entrant arrhythmias). The marker for activation in an electrogram (a tracing of epicardial or endocardial potential at a given cardiac site) is the "intrinsic deflection", defined as the steepest downward deflection of the electrogram. Recall that the source is the gradient of transmembrane potential (e.g., Eq. (4.10)), and that during cardiac activation this is usually appreciably nonzero only at the locus of points undergoing action potential phase 0. This locus is approximately a surface (the interface between depolarized and nondepolarized muscle). Electrically, this behaves approximately as a propagating surface of dipole moment density (a double layer). There is a discontinuity of potential as the double layer is crossed. Ideally, as an extracellular location is passed over by the activation wavefront, there will then be a sharp downward deflection in the extracellular (electrogram) potential, the intrinsic deflection. However,
FIGURE 4.6. Time-sequential views of a portion of the "virtual endocardium" depiction of the atria in the study of (Paul et al., 2001), showing isopotential maps at six successive times. Spreading endocardial activation wavefronts can be appreciated (e.g., two of these collide in E and F) during an atrial reentrant tachycardia. See the attached CD for color figure. [From: Paul, T., Windhagen-Mahnert, B., Kriebel, T., Bertram, H., Kaulitz, R., Korte, T., Niehaus, M., and Tebbenjohanns, J., 2001, Atrial Reentrant Tachycardia After Surgery for Congenital Heart Disease: Endocardial Mapping and Radiofrequency Catheter Ablation Using a Novel, Noncontact Mapping System, Circulation. 103: 2266-2271. Used by permission.]
Heart Surface Electrocardiographic Inverse Solutions

the reality is that it is not infrequent that there is more than one reasonable candidate for the intrinsic deflection within a given location's electrogram. Furthermore, the intrinsic deflection is often rather lengthy, so the selection of a single activation time within the intrinsic deflection is to some extent arbitrary (Ideker et al., 1989; Paul et al., 1990). The activation time is presumably the inflection point of the deflection (which itself is poorly defined in the noisy setting). To a large extent, these problems are inherent in the source formulation: the epicardial (or endocardial) potential at a location actually reflects contributions from electrical activity at all surrounding locations, when in fact we desire to resolve results of the membrane function at a single location, i.e., the local action potential. In this section we examine work done on imaging the myocardial activation feature of the action potential, rather than the epicardial potential. Enthusiasm for immediately attacking the problem of reconstructing the transmembrane potential φ_m(x), or its gradient, is tempered by recognition of a dimensionality problem: our measurements are confined to a surface (of the body), while the source ∇ · (G_i∇φ_m) permeates a volume (the heart). Inherently, we are faced with a "projection" of the three-dimensional source on the two-dimensional body surface (a further exacerbation of the already described ill-posedness of the problem).

However, building on the work of Wilson et al. (1933), Frank (1954) noted that the source during the QRS interval was roughly a double layer (i.e., a surface), which tends to mitigate the above dimensionality problem. While Frank was interested in quantifying the inaccuracy of the single moving dipole model of the heart via forward computations (rather than imaging the double layer), two decades later Dotti (1974) made an interesting observation: Neglecting anisotropic conductivity of the heart, assuming uniform action potential amplitude, and recognizing the fact that the gradient of transmembrane potential propagates as a dipolar wavefront (double layer), he noted that the source surface at any time is electrically equivalent (as regards points external to the heart) to a double layer consisting of the portions of the endocardium and epicardium already depolarized (figure 4.7). This is a consequence of the well-known fact from electrostatics that a closed uniform double layer in an isotropic medium generates no external potential. This means that one can derive a relationship between the body surface potential at a given time, and the locus of points on the cardiac surface that have been activated. Thus, the dimensionality problem resolves, and the surface of interest is actually fixed. Dotti presented a very small scale two dimensional simulation illustrating this concept. Similar observations were made independently by Salu (1978) a few years later. However, the concept can be said to have been formally introduced in a more complete engineering context by Cuppen and van Oosterom in the early 1980s. They presented the imaging equation as φ(y, t)
= ∫_S A(x, y) H(t - τ(x)) dS_x,   (4.46)
where S is the composite of the endocardial and epicardial surfaces, and the action potential (during the QRS interval) is modeled using the Heaviside function H(t) (zero for t < 0, unity for t > 0). Thus, the action potential (figure 4.1) is taken to be the step function

a(x) + b(x) H(t - τ(x)).   (4.47)
The action potential amplitude b(x) is assumed to be constant over the ventricles, and is subsumed into the transfer function A(x, y). The offset a(x) is also assumed constant, and thus has no effect since ∫_S A(x, y) dS = 0 (i.e., a uniform closed double layer generates no external potential). Note that in the absence of reentrant arrhythmias there is no repolarization during the QRS interval, so the action potential can then be modeled as a step function in that interval. Thus, Eq. (4.46) is fully consistent with Eq. (4.13) (derived from the bidomain). Using Eq. (4.46), one wishes to determine τ(x), the time that point x on the surface surrounding the heart undergoes action potential phase zero. Note that the equation is nonlinear. Eq. (4.46) achieves a superficially satisfying form upon integration over the QRS interval,

∫_QRS φ(y, t) dt = ∫_S A(x, y) ∫_QRS H(t - τ(x)) dt dS_x = - ∫_S A(x, y) τ(x) dS_x.   (4.48)
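To make the norm-shrinkage issue discussed below concrete, here is a minimal numerical sketch of Eq. (4.48): the equation is discretized as -Aτ = d and solved with zero-order Tikhonov regularization. Everything here (the transfer matrix, the problem sizes, the activation times) is a synthetic stand-in, not data or code from any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_leads, n_nodes = 64, 200                    # hypothetical problem sizes
A = rng.standard_normal((n_leads, n_nodes))   # stand-in for the kernel A(x, y)
tau_true = rng.uniform(0.0, 80.0, n_nodes)    # "true" activation times tau(x), ms
d = -A @ tau_true                             # integrated surface data, Eq. (4.48)

def tikhonov(A, d, lam):
    """Zero-order Tikhonov estimate: argmin ||A tau + d||^2 + lam^2 ||tau||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), -A.T @ d)

tau_small = tikhonov(A, d, lam=1e-3)
tau_large = tikhonov(A, d, lam=100.0)
# Heavier regularization shrinks the estimate toward zero, contracting the
# spread of the computed activation times (the "compressed QRS" artifact).
```

With the heavier regularization, the norm of the estimate (and hence the spread of the computed activation times) contracts, which is why the linear estimate from Eq. (4.48) serves better as a seed than as a final image.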
F. Greensite
FIGURE 4.7. An electrical double layer in a homogeneous volume conductor of infinite extent generates potential at a point proportional to the solid angle subtended by the point and the double layer. The above diagram depicts how ventricular intramural depolarization wavefronts generate potential equivalent to that generated by "virtual" double layers on the epicardium and/or endocardium. [From: van Oosterom, A., 1987, Computing the depolarization sequence at the ventricular surface from body surface potentials, in: Pediatric and Fundamental Electrocardiography, (J. Liebman, R. Plonsey, and Y. Rudy, eds.), Martinus Nijhoff, Zoetermeer, The Netherlands, pp. 75-89. Used by permission.]
Apparently, the imaging of myocardial activation is also a linear problem. But attempts to solve Eq. (4.48) soon run up against the problem that the computed activation times are entirely unrealistic, because regularization schemes typically favor solution estimates with lower norm (even with higher order Tikhonov regularization). Thus, the computed
QRS interval becomes highly contracted. Furthermore, there is the impression that one has been wasteful of the temporally resolved (dynamical) information inherent in Eq. (4.46), by integrating it all away in Eq. (4.48). This is unacceptable in an already very ill-posed problem. Huiskamp and van Oosterom (1988) addressed this objectionable feature by using the regularized solution to Eq. (4.48) as a seed for a quasi-Newton routine for solving a regularized version of the full nonlinear expression Eq. (4.46). As with the basic Newton procedure from calculus, which extracts the root of a nonlinear function nearest the seed, the quasi-Newton procedure applied here is a means of finding a root (i.e., the appropriate τ(x)) for a regularized version of φ(y, t) - ∫_S A(x, y) H(t - τ(x)) dS = 0. However, as with the basic Newton method, one is dealing with an intrinsically local procedure that does not perform a global optimization. The solution estimate obtained is highly influenced by the initial seed from Eq. (4.48). On the other hand, there is no reason why a global optimization routine, such as simulated annealing, could not be used (in fact, this is proposed in a very recent paper on activation time and action potential amplitude imaging (Ohyu et al., 2002)). However, a further problem is that Eq. (4.46) is valid only under the assumption that cardiac muscle has isotropic conductivity, or satisfies equal anisotropy.

A different approach was taken by Greensite (1994; 1995). The general idea is that the myocardial surface activation function τ(x), like any (nominally differentiable) function, is greatly characterized by its relative extrema, e.g., its relative maxima and minima. Predominantly, these are the epicardial breakthrough points and activation sinks of the transmural depolarization wavefront.
Indeed, since τ(x) is defined over a compact domain (the heart surface), and has a finite range (the QRS interval), knowledge of these "critical points" reduces the space of admissible solutions to that of a compact set of functions. The problem of reconstructing the rest of τ(x) from Eq. (4.48) is then nominally a well-posed problem. In simple terms, if the relative maxima and minima of τ(x) are known, the problem of determining the rest of τ(x) becomes simply a matter of optimized interpolation, for which the constraints embodied by Eq. (4.46) should be sufficient. An efficient means for computing the critical points was given in the Critical Point Theorem (Greensite, 1995). Consider the "data operator"
Φ[·] = ∫_QRS φ(y, t)(·) dt.
In the practical setting, Φ is a space-time matrix, each of whose rows is the body surface potential time series (ECG) at a particular electrode location. The Critical Point Theorem states that x′ is a critical point of τ(x) if and only if A(x′, y) is in the space spanned by the eigenfunctions of ΦΦᵀ. In fact, the Theorem holds even in the case of an anisotropic myocardium. Complications ensue once noise is added to the formulation, but an efficient algorithm employing these ideas in a noisy context was proposed in (Huiskamp and Greensite, 1997). Oostendorp et al. at the University of Nijmegen/University of Helsinki have produced work evaluating the latter approach both in vitro (Oostendorp et al., 1997) and in vivo (Oostendorp and Pesola, 1998) (validation in hearts removed at the time of cardiac transplantation, figure 4.8). Work on invasive validation of these latter ideas has also recently been undertaken by a group at the Technical University of Graz (Tilg et al., 1999; Modre et al., 2001a; Wach et al., 2001; Tilg et al., 2001; Modre et al., 2001b), and a group at the University of Auckland/University of Oxford (Pullan et al., 2001).

FIGURE 4.8. One of a series of four hearts, removed at transplantation, in the study of (Oostendorp and Pesola, 1998). The two upper images of the anterior and posterior ventricular epicardium show the activation maps obtained at the time of surgery (prior to cardiac transplantation) via application of an epicardial electrode sock (epicardial electrode locations indicated by circles). The lower two images show the corresponding preoperative activation map, inversely computed from body surface potential electrode data. [From: Oostendorp, T., and Pesola, K., 1998, Non-invasive determination of the activation time sequence of the heart: validation by comparison with invasive human data, Computers in Cardiology. 25:313-316. Copyright IEEE. Used by permission.]
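In the noise-free, discretized setting, the Critical Point Theorem reduces to a simple range test: a node x′ is flagged as a critical point when the column A(x′, ·) lies in the column space of the data matrix Φ (equivalently, in the span of the eigenvectors of ΦΦᵀ). The sketch below is purely illustrative, with a synthetic transfer matrix and synthetic data; it is not the published noisy-data algorithm of Huiskamp and Greensite (1997).

```python
import numpy as np

rng = np.random.default_rng(1)
n_leads, n_times, n_nodes = 32, 120, 100
A = rng.standard_normal((n_leads, n_nodes))   # hypothetical transfer matrix

# Build synthetic data Phi whose column space is spanned by a few transfer
# columns: these columns play the role of A(x', .) at the critical points.
critical = [3, 40, 77]
coeffs = rng.standard_normal((len(critical), n_times))
Phi = A[:, critical] @ coeffs                 # noise-free space-time data

# Orthonormal basis for the range of Phi (eigenvectors of Phi Phi^T)
U, s, _ = np.linalg.svd(Phi, full_matrices=False)
rank = int(np.sum(s > 1e-10 * s[0]))
U = U[:, :rank]

def in_range(v, U, tol=1e-8):
    """True if v lies (numerically) in span(U): the critical point test."""
    residual = v - U @ (U.T @ v)
    return np.linalg.norm(residual) < tol * np.linalg.norm(v)

found = [x for x in range(n_nodes) if in_range(A[:, x], U)]
```

In this idealized example the range test recovers exactly the nodes used to build the data; with measurement noise the range of Φ is no longer clean, which is why the practical algorithm requires the additional machinery discussed in the text.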
4.6.2 IMAGING OTHER FEATURES OF THE ACTION POTENTIAL

Let t₁ be a time during the TP interval of the ECG, i.e., a time during which all ventricular locations are in action potential phase 4 (fully repolarized). Let t₂ be a time during the ST interval of the ECG, i.e., a time during which all ventricular locations are in phase 2 (fully depolarized). From Eq. (4.13) and the action potential model Eq. (4.47),

φ(y, t₂) - φ(y, t₁) = ∫_S [G_i∇ψ(x, y)] · n_x (φ_m(x, t₂) - φ_m(x, t₁)) dS = ∫_S [G_i∇ψ(x, y)] · n_x b(x) dS.   (4.49)
Note that if b(x) is a constant, both sides of the above equation will be zero (e.g., a closed uniform double layer generates no external potential). Indeed, the body surface potential during the TP and ST segments has the same value in healthy subjects. However, in the case of cardiac ischemia, the action potential amplitude is spatially varying. In that setting, one can imagine solving the above integral equation to obtain the spatially-varying action potential amplitude, up to a spatial constant (the null space of the operator is the space of constant functions). Since the phase 0 amplitude in healthy myocytes is already known to be approximately 90 millivolts, one can then (in principle) image the action potential amplitude b(x) fully.

Reflecting on the approach of Cuppen and van Oosterom (1984), Geselowitz (1985) noted that it would be possible to image the area under the action potential (i.e., the integral with respect to the baseline of the action potential) by simply extending the time interval of integration in Eq. (4.48) to the QRST interval (encompassing the time period of activation and repolarization). Thus,

∫_QRST φ(y, t) dt = - ∫_QRST ∫_S [G_i∇ψ(x, y)] · n_x φ_m(x, t) dS dt = - ∫_S [G_i∇ψ(x, y)] · n_x (∫_QRST φ_m(x, t) dt) dS = - ∫_S [G_i∇ψ(x, y)] · n_x μ(x) dS,
where μ(x) is the area under the action potential at x. Now one can use knowledge of b(x) and μ(x) to create an image of action potential duration as μ(x)/b(x). Thus, one might anticipate imaging action potential attributes such as action potential amplitude and action potential duration, in addition to phase 0 time (activation imaging). Apparently, this joining of the prior two paragraphs has not been investigated, and the practicality of such manipulations is speculative.
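As a toy illustration of the constant-function null space and the 90 mV pinning step described above, consider the following sketch. The lead-field matrix, the amplitude distribution, and the pinning rule (assuming the spatial mean amplitude is 90 mV) are all hypothetical stand-ins chosen only to exhibit the structure of the problem.

```python
import numpy as np

rng = np.random.default_rng(2)
n_leads, n_nodes = 120, 80
# Hypothetical discretization of the lead-field rows [G_i grad psi(x, y)] . n_x.
K = rng.standard_normal((n_leads, n_nodes))
K -= K.mean(axis=1, keepdims=True)  # each row sums to zero, so K @ const = 0:
                                    # a uniform closed double layer is silent

b_true = 90.0 + 10.0 * rng.standard_normal(n_nodes)  # amplitudes b(x), in mV
d = K @ b_true                      # discretized ST-TP difference data, Eq. (4.49)

# A minimum-norm inverse recovers b(x) only up to an additive constant
# (the constant functions span the null space of K) ...
b_est = np.linalg.pinv(K, rcond=1e-10) @ d
# ... which is pinned here by assuming the spatial mean amplitude is 90 mV.
b_est += 90.0 - b_est.mean()
```

The minimum-norm solution reproduces the spatially varying part of b(x) exactly in this noise-free toy problem; only the overall offset must be supplied by outside knowledge, mirroring the role of the known healthy phase 0 amplitude in the text.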
4.7 DISCUSSION

Among the many engineering challenges posed by the imaging problem treated in this chapter, the necessity of a proper mathematical understanding of the computational difficulties (and their optimal treatment) has some pre-eminence. In this regard, there are lively controversies regarding which is the favored source formulation to be imaged (epicardial/endocardial potentials versus action potential features), the possible role of preprocessing the raw signals (e.g., Laplacian electrocardiography), the reductionist role in activation imaging (e.g., the Critical Point Theorem), and the desirability of integrating the temporal data from a stochastic processes standpoint. Recent history has shown that there is surely room for improvement in algorithmic technique. Methodological refinements continue to be proposed by many different groups. At the same time, the biophysical understanding, technical apparatus, and mathematical methodology are clearly already in place to create images of extracellular potential and action potential features on the epicardial and endocardial surfaces. The principal question is whether the resulting images are either too blurred to be of much use, or are otherwise unreliable and misleading (e.g., due to the inherent ill-posedness of the problem, lack
of sufficiently powerful mitigating constraints, or insufficiently accurate forward problem solutions due to uncertainties in knowledge of body tissue conductivities and anisotropies). Thus, image validation is presently a major question of interest in this field. Such validation is fairly well advanced in the case of the minimally invasive techniques of imaging endocardial potential via transvascular catheter probe electrode arrays. However, for the epicardial imaging approaches, it is particularly difficult to address the validation question adequately, because validation ideally requires the simultaneous acquisition of epicardial signals, body surface signals, and anatomical body imaging (via CT or MRI). That is, the body imaging should be conducted closed chest (otherwise, the body surface signals would be subject to an unrealistic transfer matrix), despite the simultaneous need for "gold standard" invasively obtained epicardial potentials for the validation. Nevertheless, the latter validation goal is being aggressively pursued by a number of groups worldwide. It is likely that the field is maturing to the extent that the next several years will see clarification of the true potential and promise of the methodologies discussed in this chapter. Accordingly, the era of Noninvasive Imaging of Cardiac Electrophysiology (NICE) could soon be at hand.
REFERENCES

Ahmad, G. F., Brooks, D. H., and MacLeod, R. S., 1998, An admissible solution approach to inverse electrocardiography, Ann. Biomed. Eng. 26:278-292.
Barber, M. R., and Fischman, E. J., 1961, Heart dipole regions and the measurement of dipole moment, Nature. 192:141-142.
Barnard, A. C. L., Duck, I. M., Lynn, M. S., and Timlake, W. P., 1967, The application of electromagnetic theory to electrocardiography, II, Biophys. J. 7:463-491.
Barr, R. C., Ramsey, M., and Spach, M. S., 1977, Relating epicardial to body surface potential distributions by means of transfer coefficients based on geometry measurements, IEEE Trans. Biomed. Eng. 24:1-11.
Barr, R., and Spach, M., 1978, Inverse calculation of QRS-T epicardial potentials from body surface potential distributions for normal and ectopic beats in the intact dog, IEEE Trans. Biomed. Eng. BME-42:661-675.
Basser, P., Mattiello, J., and LeBihan, D., 1994, MR diffusion tensor spectroscopy and imaging, Biophys. J. 66:259-267.
Bellman, R., Collier, C., Kagiwada, H., Kalaba, R., and Selvester, R., 1964, Estimation of heart parameters using skin potential measurements, Comm. ACM. 7:666-668.
Brooks, D. H., Ahmad, G., MacLeod, R. S., and Maratos, G. M., 1999, Inverse electrocardiography by simultaneous imposition of multiple constraints, IEEE Trans. Biomed. Eng. BME-46:3-18.
Burnes, J. E., Taccardi, B., MacLeod, R. S., and Rudy, Y., 2000, Noninvasive ECG imaging of electrophysiologically abnormal substrates in infarcted hearts, a model study, Circulation. 101:533-540.
Burnes, J. E., Taccardi, B., Ershler, P. R., and Rudy, Y., 2001, Noninvasive ECG imaging of substrate and intramural ventricular tachycardia in infarcted hearts, J. Am. Coll. Cardiol. in press.
Colli-Franzone, P., Guerri, L., Tentoni, S., Viganotti, C., Baruffi, S., Spaggiari, S., and Taccardi, B., 1985, A mathematical procedure for solving the inverse potential problem of electrocardiography. Analysis of the time-space accuracy from in vitro experimental data, Math. Biosci. 77:353-396.
Cuppen, J., and van Oosterom, A., 1984, Model studies with inversely calculated isochrones of ventricular depolarization, IEEE Trans. Biomed. Eng. BME-31:652-659.
Dotti, D., 1974, A space-time solution of the inverse problem, Adv. Cardiol. 10:231-238.
Einthoven, W., 1912, The different forms of the human electrocardiogram and their signification, Lancet. 1:853-861.
Foster, M., 1961, An application of the Wiener-Kolmogorov smoothing theory to matrix inversion, J. SIAM. 9:387-392.
Frank, E., 1954, The image surface of a homogeneous torso, Amer. Heart J. 47:757-768.
Gabor, D., and Nelson, C. V., 1954, Determination of the resultant dipole of the heart from measurements on the body surface, J. Applied Physics. 25:413-416.
Gepstein, L., Hayam, G., and Ben-Haim, S. A., 1997, A novel method for nonfluoroscopic catheter-based electroanatomical mapping of the heart: in vitro and in vivo accuracy results, Circulation. 95:1611-1622.
Gelernter, H. L., and Swihart, J. C., 1964, A mathematical-physical model of the genesis of the electrocardiogram, Biophys. J. 4:285-301.
Geselowitz, D. B., 1967, On bioelectric potentials in an inhomogeneous volume conductor, Biophys. J. 7:1-11.
Geselowitz, D. B., 1985, Use of time integrals of the ECG to solve the inverse problem, IEEE Trans. Biomed. Eng. BME-32:73-75.
Ghanem, R. N., Burnes, J. E., Waldo, A. L., and Rudy, Y., 2001, Imaging dispersion of myocardial repolarization. II, Circulation. 104:1306-1312.
Golub, G., and van Loan, C., 1996, Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore.
Greensite, F., 1994, Well-posed formulation of the inverse problem of electrocardiography, Ann. Biomed. Eng. 22:172-183.
Greensite, F., 1995, Remote reconstruction of confined wavefront propagation, Inverse Problems. 11:361-370.
Greensite, F., and Huiskamp, G., 1998, An improved method for estimating epicardial potentials from the body surface, IEEE Trans. Biomed. Eng. BME-45:1-7.
Greensite, F., 2001, Myocardial activation imaging, in: Computational Inverse Problems in Electrocardiography, (P. Johnston, ed.), WIT Press, Bristol, pp. 143-190.
Greensite, F., 2002, A new treatment of the inverse problem of multivariate analysis, Inverse Problems. 18:363-379.
Gulrajani, R., Roberge, F., and Savard, P., 1984, Moving dipole inverse ECG and EEG solutions, IEEE Trans. Biomed. Eng. BME-31:903-910.
Gulrajani, R. M., Roberge, F. A., and Savard, P., 1989, The inverse problem of electrocardiography, in: Comprehensive Electrocardiology, Volume I (P. W. Macfarlane, and T. D. Veitch Lawrie, eds.), Pergamon, Oxford, pp. 237-288.
Gulrajani, R. M., 1998, The forward and inverse problems of electrocardiography, IEEE Eng. Med. Biol. 17:84-101.
Hansen, P. C., 1992, Numerical tools for analysis and solution of Fredholm integral equations of the first kind, Inverse Problems. 8:849-872.
He, B., and Cohen, R. J., 1992, Body surface Laplacian ECG mapping, IEEE Trans. Biomed. Eng. 39:1179-1191.
He, B., and Wu, D., 1997, A bioelectric inverse imaging technique based on surface Laplacians, IEEE Trans. Biomed. Eng. BME-16:133-138.
He, R., Rao, L., Liu, S., Yan, W., Narayana, P. A., and Brauer, H., 2000, The method of maximum mutual information for biomedical electromagnetic inverse problems, IEEE Transactions on Magnetics. 36:1741-1744.
He, B., and Wu, D., 2001, Imaging and visualization of 3-D cardiac electric activity, IEEE Trans. Inf. Technol. Biomed. 5:181-186.
Henriquez, C., 1993, Simulating the electrical behavior of cardiac tissue using the bidomain model, Crit. Rev. Biomed. Eng. 21:1-77.
Horacek, B. M., 1997, The inverse problem of electrocardiography: a solution in terms of single- and double-layer sources on the epicardial surface, Math. Biosci. 144:119-154.
Huiskamp, G., and van Oosterom, A., 1988, The depolarization sequence of the human heart surface computed from measured body surface potentials, IEEE Trans. Biomed. Eng. BME-35:1047-1058.
Huiskamp, G., and van Oosterom, A., 1989, Tailored versus realistic geometry in the inverse problem of electrocardiography, IEEE Trans. Biomed. Eng. BME-36:827-835.
Huiskamp, G., and Greensite, F., 1997, A new method for myocardial activation imaging, IEEE Trans. Biomed. Eng. BME-44:433-446.
Iakovidis, I., and Gulrajani, R. M., 1992, Improving Tikhonov regularization with linearly constrained optimization: application to the inverse epicardial potential solution, Math. Biosci. 112:55-80.
Ideker, R. E., Smith, W. M., Blanchard, S. M., Reiser, S. L., Simpson, E. V., Wolf, R. D., and Danieley, N. D., 1989, The assumptions of isochronal cardiac mapping, PACE. 12:456-478.
Jackson, J. D., 1975, Classical Electrodynamics, Wiley, New York.
Jia, P., Punske, B., Taccardi, B., and Rudy, Y., 2000, Electrophysiologic endocardial mapping from a noncontact nonexpandable catheter, J. Cardiovasc. Electrophysiol. 11:1238-1251.
Johnston, P. R., 1997, The Laplacian inverse problem of electrocardiography: an eccentric spheres study, IEEE Trans. Biomed. Eng. 44:539-548.
Johnson, C., 2001, Adaptive finite element and local regularization methods for the inverse problem of electrocardiography, in: Computational Inverse Problems in Electrocardiography, (P. Johnston, ed.), WIT Press, Bristol, pp. 51-88.
Joly, D., Goussard, Y., and Savard, P., 1993, Time-recursive solution to the inverse problem of electrocardiography: a model-based approach, in: Proc. 15th Ann. Int. Conf. IEEE Eng. Med. Biol. Soc., IEEE Press, New York, pp. 767-768.
Kadish, A., Hauck, J., Pederson, B., Beatty, G., and Gornick, C., 1999, Mapping of atrial activation with a noncontact, multielectrode catheter in dogs, Circulation. 99:1906-1913.
Keener, J., 1988, Principles of Applied Mathematics, Addison Wesley, Redwood City, CA, pp. 135-146.
Khoury, D. S., Berrier, K. L., Badruddin, S. M., and Zoghbi, W. A., 1998, Three-dimensional electrophysiological imaging of the intact canine left ventricle using a noncontact multielectrode cavitary probe: study of sinus, paced, and spontaneous premature beats, Circulation. 97:399-409.
Leder, U., Pohl, H., Michaelson, S., Fritschi, T., Huck, M., Eichhorn, J., Muller, S., and Nowak, H., 1998, Noninvasive biomagnetic imaging in coronary artery disease based on individual current density maps of the heart, Int. J. Cardiol. 64:83-92.
Li, G., and He, B., 2001, Localization of the site of origin of cardiac activation by means of a heart-model-based electrocardiographic imaging approach, IEEE Trans. Biomed. Eng. 48:660-669.
Lynn, M. S., Barnard, A. C. L., Holt, J. H., and Sheffield, L. T., 1967, A proposed method for the inverse problem in electrocardiography, Biophys. J. 7:925-945.
MacLeod, R. S., Gardner, M., Miller, R. M., and Horacek, B. M., 1995, Application of an electrocardiographic inverse solution to localize ischemia during coronary angioplasty, J. Cardiovasc. Electrophys. 6:2-18.
MacLeod, R. S., and Brooks, D. H., 1998, Recent progress in inverse problems of electrocardiography, IEEE Eng. Med. Biol. 17:73-83.
Malmivuo, J., and Plonsey, R., 1995, Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields, Oxford University Press, New York.
Martin, R. O., and Pilkington, T. C., 1972, Unconstrained inverse electrocardiography: epicardial potentials, IEEE Trans. Biomed. Eng. BME-19:276-285.
Martin, R. O., Pilkington, T. C., and Morrow, M. N., 1975, Statistically constrained inverse electrocardiography, IEEE Trans. Biomed. Eng. BME-22:487-492.
Messinger-Rapport, B. J., and Rudy, Y., 1988, Regularization of the inverse problem of electrocardiography: a model study, Math. Biosci. 89:79.
Modre, R., Tilg, B., Fischer, G., and Wach, P., 2001, An iterative algorithm for myocardial activation time imaging, Computer Methods and Programs in Biomedicine 64:1-7.
Modre, R., Tilg, B., Fischer, G., Hanser, F., Messnarz, B., Wach, P., Pachinger, O., Hintringer, F., Berger, T., Abou-Harb, M., Schoke, M., Kremser, C., and Roithinger, F., 2001, Stability of activation time imaging from single beat data under clinical conditions, Biomedizinische Technik 46:213-215.
Nash, M. P., Bradley, C. P., Cheng, L. K., Pullan, A. J., and Paterson, D. J., in press, An in-vivo experimental-computational framework for validating ECG inverse methods, Intl. J. Bioelectromagnetism.
Ohyu, S., Okamoto, Y., and Kuriki, S., 2001, Use of the ventricular propagated excitation model in the magnetocardiographic inverse problem for reconstruction of electrophysiological properties, IEEE Trans. Biomed. Eng. in press.
Oostendorp, T., MacLeod, R., and van Oosterom, A., 1997, Non-invasive determination of the activation sequence of the heart: validation with invasive data, Proc. 19th Annual Int. Conf. IEEE EMBS, CD-ROM.
Oostendorp, T., and Pesola, K., 1998, Non-invasive determination of the activation time sequence of the heart: validation by comparison with invasive human data, Computers in Cardiology. 25:313-316.
Oster, H., and Rudy, Y., 1992, The use of temporal information in the regularization of the inverse problem of electrocardiography, IEEE Trans. Biomed. Eng. BME-39:65-75.
Oster, H. S., and Rudy, Y., 1997a, Regional regularization of the electrocardiographic inverse problem: a model study using spherical geometry, IEEE Trans. Biomed. Eng. 44:188-199.
Oster, H., Taccardi, B., Lux, R., Ershler, P., and Rudy, Y., 1997, Noninvasive electrocardiographic imaging, Circulation. 96:1012-1024.
Papoulis, A., 1984, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York.
Paul, T., Moak, J. P., Morris, C., and Garson, A., 1990, Epicardial mapping: how to measure local activation, PACE. 12:285-292.
Paul, T., Windhagen-Mahnert, B., Kriebel, T., Bertram, H., Kaulitz, R., Korte, T., Niehaus, M., and Tebbenjohanns, J., 2001, Atrial reentrant tachycardia after surgery for congenital heart disease endocardial mapping and radiofrequency catheter ablation using a novel, noncontact mapping system, Circulation. 103:2266-2271.
Penney, C. J., Clements, J. C., and Horacek, B. M., 2000, Non-invasive imaging of epicardial electrograms during controlled myocardial ischemia, Computers in Cardiology 2000. 27:103-106.
Plonsey, R., 1969, Bioelectric Phenomena, McGraw-Hill, New York.
Pullan, A. J., Cheng, L. K., Nash, M. P., Bradley, C. P., and Paterson, D. J., 2001, Noninvasive electrical imaging of the heart: theory and model development, Ann. Biomed. Eng. 29:817-836.
Ramanathan, C., and Rudy, Y., 2001, Electrocardiographic imaging: II. Effect of torso inhomogeneities on noninvasive reconstruction of epicardial potentials, electrograms, and isochrones, J. Cardiovasc. Electrophysiol. 12:242-252.
Reese, T., Weisskoff, R., Smith, R., Rosen, B., Dinsmore, R., and Wedeen, V., 1995, Imaging myocardial fiber architecture in vivo with magnetic resonance, Magnetic Resonance in Medicine. 34:786-791.
Rudy, Y., and Messinger-Rapport, B. J., 1988, The inverse problem in electrocardiography: solutions in terms of epicardial potentials, Crit. Rev. Biomed. Eng. 16:215-268.
Salu, Y., 1978, Relating the multipole moments of the heart to activated parts of the epicardium and endocardium, Ann. Biomed. Eng. 6:492-505.
Schilling, R. J., Kadish, A. H., Peters, N. S., Goldberger, J., and Wyn Davies, D., 2000, Endocardial mapping of atrial fibrillation in the human right atrium using a non-contact catheter, European Heart Journal. 21:550-564.
Schmitt, C., Zrenner, B., Schneider, M., Karch, M., Ndrepepa, G., Deisenhofer, I., Weyerbrock, S., Schreieck, J., and Schoemig, A., 1999, Clinical experience with a novel multielectrode basket catheter in right atrial tachycardias, Circulation. 99:2414-2422.
Schmitt, O. H., 1969, Biological information processing using the concept of interpenetrating domains, in: Information Processing in the Nervous System, (K. N. Leibovic, ed.), Springer-Verlag, New York.
Shahidi, A. V., Savard, P., and Nadeau, R., 1994, Forward and inverse problems of electrocardiography: modeling and recovery of epicardial potentials in humans, IEEE Trans. Biomed. Eng. 41:249-256.
Strickberger, S. A., Knight, B. P., Michaud, G. F., Pelosi, F., and Morady, F., 2000, Mapping and ablation of ventricular tachycardia guided by virtual electrograms using a noncontact, computerized mapping system, J. Am. Coll. Cardiol. 35:414-421.
Taccardi, B., Arisi, G., Macchi, E., Baruffi, S., and Spaggiari, S., 1987, A new intracavitary probe for detecting the site of origin of ectopic ventricular beats during one cardiac cycle, Circulation. 75:272-281.
Throne, R., and Olsen, L., 1994, A generalized eigensystem approach to the inverse problem of electrocardiography, IEEE Trans. Biomed. Eng. 41:592-600.
Throne, R. D., and Olsen, L. G., 2000, A comparison of spatial regularization with zero and first order Tikhonov regularization for the inverse problem of electrocardiography, Computers in Cardiology. 27:493-496.
Tikhonov, A., and Arsenin, V., 1977, Solutions of Ill-Posed Problems, John Wiley and Sons, New York.
Tilg, B., Wach, P., SippensGroenewegen, A., Fischer, G., Modre, R., Roithinger, F., Mlynash, M., Reddy, G., Roberts, T., Lesh, M., and Steiner, P., 1999, Closed-chest validation of source imaging from human ECG and MCG mapping data, in: Proceedings of the 21st Annual International Conference of the IEEE EMBS / First Joint BMES/EMBS Conference, October 1999, IEEE Press.
Tilg, B., Fischer, G., Modre, R., Hanser, F., Messnarz, B., Wach, P., Pachinger, O., Hintringer, F., Berger, T., Abou-Harb, M., Schoke, M., Kremser, C., and Roithinger, F., 2001, Feasibility of activation time imaging within the human atria and ventricles in the catheter laboratory, Biomedizinische Technik 46:213-215.
Tuch, D. S., Wedeen, V. J., Dale, A. M., and Belliveau, J. W., 1997, Conductivity maps of white matter fiber tracts using magnetic resonance diffusion tensor imaging, Proc. Third Int. Conf. on Functional Mapping of the Human Brain, NeuroImage. 5:S44.
Twomey, S., 1963, On the numerical solution of Fredholm integral equations of the first kind by the inversion of the linear system produced by quadrature, J. ACM. 10:97-101.
Ueno, S., and Iriguchi, N., 1998, Impedance magnetic resonance imaging: a method for imaging of impedance distribution based on magnetic resonance imaging, J. Appl. Phys. 83:6450-6452.
van Oosterom, A., 1987, Computing the depolarization sequence at the ventricular surface from body surface potentials, in: Pediatric and Fundamental Electrocardiography, (J. Liebman, R. Plonsey, and Y. Rudy, eds.), Martinus Nijhoff, Zoetermeer, The Netherlands, pp. 75-89.
van Oosterom, A., 1999, The use of the spatial covariance in computing pericardial potentials, IEEE Trans. Biomed. Eng. 46:778-787.
Velipasaoglu, E. P., Sun, H., Zhang, F., Berrier, K. L., and Khoury, D. S., 2000, Spatial regularization of the electrocardiographic inverse problem and its application to endocardial mapping, IEEE Trans. Biomed. Eng. 47:327-337.
Wach, P., Modre, R., Tilg, B., and Fischer, G., 2001, An iterative linearized optimization technique for non-linear ill-posed problems applied to cardiac activation time imaging, COMPEL 20:676-688.
Waller, A., 1889, On the electromotive changes connected with the beat of the mammalian heart, and of the human heart in particular, Phil. Trans. R. Soc. Lond. B. 180:169-194.
Waller, A., 1911, quoted in Cooper, J. K., 1987, Electrocardiography 100 years ago: origins, pioneers, and contributors, NEJM. 315:461-464.
Wahba, G., 1977, Practical approximated solutions to linear operator equations when the data are noisy, SIAM J. Numer. Anal. 14:651-667.
Wilson, F. N., Macleod, A. G., and Barker, P. S., 1933, The distribution of the action currents produced by heart muscle and other excitable tissues immersed in extensive conducting media, J. Gen. Physiol. 16:423-456.
Wilson, F. N., Johnston, F. D., and Kossmann, C. E., 1947, The substitution of the tetrahedron for the Einthoven triangle, Am. Heart J. 33:594-603.
Yamashita, Y., and Geselowitz, D., 1985, Source-field relationships for cardiac generators on the heart surface based on their transfer coefficients, IEEE Trans. Biomed. Eng. BME-32:964-970.
Zablow, L., 1966, An equivalent cardiac generator which preserves topography, Biophys. J. 6:535-536.
5

THREE-DIMENSIONAL ELECTROCARDIOGRAPHIC TOMOGRAPHIC IMAGING

Bin He*
University of Illinois at Chicago
5.1 INTRODUCTION

Cardiac electrical activity is distributed over the three-dimensional (3D) myocardium. It is of significance to noninvasively image distributed cardiac electrical activity throughout the 3D volume of the myocardium. Such knowledge of the source distribution would play an important role in our effort to relate electrocardiographic inverse solutions to regional cardiac activity.

Historically, attempts to noninvasively obtain spatial information regarding cardiac electrical activity started from body surface potential mapping, using a large number of recording leads covering the entire surface of the body (Taccardi, 1962). From such measurements, instantaneous equipotential contour maps on the body surface have been obtained and shown to provide additional information when compared to the conventional electrocardiogram (see Flowers & Horan, 1995 for review). Since body surface potential maps (BSPMs) are a manifestation of cardiac electrical sources on the body surface, efforts have been made to solve the electrocardiography inverse problem: to seek the generators of the BSPMs. Equivalent dipole solutions have been investigated with the aim of extracting useful information regarding cardiac electrical activity. Such efforts included (1) Moving Dipole Solutions (Mirvis et al., 1977; Savard et al., 1980; Okamoto et al., 1983; Gulrajani et al., 1984), in which one or more current dipoles are estimated at the location(s) that best describe the body-surface-recorded electrocardiograms; and (2) Fixed Dipole
* Present address for correspondence: University of Minnesota, Department of Biomedical Engineering, 7-105 BSBE, 312 Church Street, Minneapolis, MN 55455. E-mail: [email protected]
Solutions (Barber & Fischman, 1961; Bellman et al., 1964; He & Wu, 2001), in which an array of dipoles is arranged at fixed locations and the dipole moments are determined by minimizing the difference between the model-generated and the measured body surface electrocardiograms. It has been demonstrated that the single moving dipole solution can provide a good representation of well-localized cardiac electrical activity (Savard et al., 1980). Efforts have also been made to estimate two-moving-dipole solutions, although technical challenges arise when the number of equivalent dipoles increases from one to two (Okamoto et al., 1983; Gulrajani et al., 1984). Due to the ill-posedness of the inverse problem, there is currently no well-established method to estimate three or more moving dipoles. In addition to the equivalent dipole approach, equivalent multipole models were also investigated (Geselowitz, 1960; Hlavin & Plonsey, 1963; Pilkington & Morrow, 1982) during the early stage of electrocardiography inverse solutions, in an attempt to obtain equivalent 3D information on cardiac electrical activity. The limitation of the multipole approach, however, is its inability to localize cardiac electrical activity.

In the past decade, most research on the electrocardiography inverse problem has been carried out along the line of heart surface inverse solutions. As reviewed in Chapter 4, these research efforts are mainly related to epicardial potential inverse solutions or heart surface activation imaging. Three-dimensional electrocardiographic tomographic imaging has received much attention since 2000. He and Wu reported their effort on electrocardiographic tomography in a presentation at the World Congress on Medical Physics and Biomedical Engineering held in Chicago in 2000.
In this work, He & Wu demonstrated, in a computer simulation study, the feasibility of imaging the 3D distribution of cardiac dipole sources from noninvasive body surface electrograms by using the Laplacian weighted minimum norm approach (He & Wu, 2000, 2001). In subsequent work, He and coworkers developed a heart-model-based 3D activation imaging approach (He & Li, 2002; He et al., 2002) and a 3D transmembrane potential (TMP) imaging approach (He et al., 2003), which were introduced in a presentation at the 4th International Conference on Bioelectromagnetism in 2002 (He & Li, 2002). Ohyu et al. (2002) developed an approach to estimate the activation time and approximate amplitude of the TMP from magnetocardiograms using the Wiener estimation technique. Skipa et al. presented their effort to estimate transmembrane potentials from body surface electrocardiograms at the 4th International Conference on Bioelectromagnetism (2002). In this chapter, we review the principles and methods of performing 3D electrocardiographic tomographic imaging, with a focus on the recently developed distributed 3D electrocardiographic tomographic imaging techniques.
5.2 THREE-DIMENSIONAL MYOCARDIAL DIPOLE SOURCE IMAGING

5.2.1 EQUIVALENT MOVING DIPOLE MODEL

Equivalent dipole inverse solutions were among the earliest electrocardiography inverse solutions. Early efforts focused on moving dipole inverse solutions
where one or two equivalent dipoles were used to represent cardiac electrical activity, in the sense that the dipole-generated body surface potential maps (BSPMs) match the measured BSPMs well (Mirvis et al., 1977; Savard et al., 1980; Okamoto et al., 1983; Gulrajani et al., 1984). In the moving dipole model, the locations of the equivalent dipoles vary over time, providing information on the centers of gravity of electrical activity within the heart. Such location information gives moving dipole solutions an important capability to localize the regions of myocardial tissue most responsible for the measured BSPMs. A limitation of this approach, however, is that the inverse solution is sensitive to measurement noise, which limits the number of moving dipoles that can be reliably estimated from the measured BSPMs. For this reason, the moving dipole inverse solution may be useful in localizing a focal cardiac source during the initial phase of cardiac activation for a single activity. For general cardiac activation, the moving dipole inverse solution fails to represent the complex cardiac electrical activity. A detailed review of equivalent moving dipole solutions can be found in Gulrajani et al. (1984).
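The essence of single-moving-dipole fitting can be sketched numerically. The following Python example is purely illustrative and not from the original studies: it assumes an infinite homogeneous conductor (rather than a realistic torso model), and exploits the fact that, for a fixed candidate location, the body surface potential is linear in the dipole moment, so the moment can be solved by least squares at each location of a search grid. All geometry, conductivity, and grid values are invented for illustration.

```python
import numpy as np

def lead_field(r_dipole, electrodes, sigma=0.2):
    """Lead-field matrix of a dipole at r_dipole in an infinite homogeneous
    medium: phi = p . r / (4 pi sigma |r|^3)  (illustrative assumption)."""
    d = electrodes - r_dipole                      # (m, 3) dipole -> electrode
    norm3 = np.linalg.norm(d, axis=1) ** 3
    return d / (4 * np.pi * sigma * norm3[:, None])  # (m, 3)

def fit_moving_dipole(phi, electrodes, candidates):
    """Single moving-dipole fit: for each candidate location the forward
    problem is linear in the moment, so solve least squares per site and
    keep the site with the smallest residual."""
    best = None
    for r in candidates:
        A = lead_field(r, electrodes)
        p, *_ = np.linalg.lstsq(A, phi, rcond=None)
        err = np.linalg.norm(A @ p - phi)
        if best is None or err < best[2]:
            best = (r, p, err)
    return best

# Synthetic example: electrodes scattered on a plane, true dipole below it.
rng = np.random.default_rng(0)
electrodes = np.column_stack([rng.uniform(-0.2, 0.2, 64),
                              rng.uniform(-0.2, 0.2, 64),
                              np.full(64, 0.1)])
true_r = np.array([0.02, -0.01, 0.0])
true_p = np.array([0.0, 0.0, 1e-6])
phi = lead_field(true_r, electrodes) @ true_p      # noiseless "measurement"
grid = [np.array([x, y, 0.0]) for x in np.linspace(-0.05, 0.05, 11)
                              for y in np.linspace(-0.05, 0.05, 11)]
r_hat, p_hat, err = fit_moving_dipole(phi, electrodes, grid)
```

In the noiseless case the grid point coinciding with the true source gives a near-zero residual, so both the location and the moment are recovered; with measurement noise, as discussed above, the reliability of this procedure degrades quickly as more dipoles are added.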
5.2.2 EQUIVALENT DIPOLE DISTRIBUTION MODEL

As early as the 1960s, Barber and Fischman suggested the possibility of modeling cardiac electrical activity using an array of current dipoles placed at fixed locations within the myocardium (Barber & Fischman, 1961; Bellman et al., 1964). In this model, the dipoles are not moving but fixed over time, while their moments remain variable. Yet this model did not receive much attention in the field of the electrocardiography inverse problem over the past three decades, partially due to the dominance of the epicardial potential (Barr et al., 1977; Franzone et al., 1978; Shahidi et al., 1994; Throne & Olson, 1994, 1997; Johnston & Gulrajani, 1997; Oster et al., 1997; He & Wu, 1997; Greensite & Huiskamp, 1998; Burnes et al., 2000) and heart-surface activation time (Cuppen and van Oosterom, 1984; Huiskamp & Greensite, 1997; Greensite, 2001; Modre et al., 2001; Pullan et al., 2001) inverse solutions developed during the same period.

Recently, the fixed dipole array model has been expanded into a volume distribution of current dipoles for the purpose of tomographic dipole source imaging (He & Wu, 2000, 2001). In this work, He & Wu modeled cardiac electrical sources by means of a large number of current dipoles distributed over the 3D volume of the ventricles. Each dipole was located at a particular position, representing the local electrical activity, while the moments of the dipoles varied over time. The magnitude function of the regional dipoles provided a spatial distribution of current strength within the 3D myocardial volume. Estimation of these current dipole moments provides a means of imaging the spatial distribution of current sources.
5.2.3 INVERSE ESTIMATION OF 3D DIPOLE DISTRIBUTION

The key hypothesis underlying 3D dipole distribution imaging is that the electrical sources located in a small region of myocardial tissue are coherent and
can be approximated by a current dipole. By assigning one such current dipole to each "small" region of the myocardium, the following mathematical model, which relates the current dipole distribution inside the myocardium to the body surface ECG measurements, can be obtained:
V = AX    (5.1)
where V is the vector of m body-surface-recorded ECG signals, X is the unknown vector of the moments of the current dipoles, which are located at n sites covering the entire myocardial volume, and A is the transfer matrix. The measurement at each electrode sensor is a linear combination of all dipole components, with the columns of A serving as weighting factors. By solving (5.1), one obtains an estimate of the 3D current dipole source distribution corresponding to each measured BSPM. Since the number of measurement electrodes is always far less than the dimension of the unknown dipole source vector X, this is an underdetermined inverse problem, and a proper regularization strategy is necessary to obtain a reasonable solution to (5.1). The minimum norm (MN) solution is one of the feasible solutions (Hamalainen & Ilmoniemi, 1984):

X = A^+ V = A^T (A A^T)^(-1) V    (5.2)

where (*)^+ denotes the Moore-Penrose inverse. As the minimum norm solution is intrinsically biased towards superficial positions, the weighted minimum norm solution (Jeffs et al., 1987) and the Laplacian weighted minimum norm (LWMN) solution (Pascual-Marqui et al., 1994) have been proposed to solve the linear inverse problem. He & Wu investigated 3D electrocardiography dipole source imaging using the principles of LWMN (He & Wu, 2000, 2001). LWMN utilizes a weighting operator LW, where L is a Laplacian operator and W is a diagonal 3n-by-3n matrix with W_ii = ||A_i||, A_i being the i-th column of the transfer matrix A. Assuming the weighting operator is nonsingular, (5.1) can be rewritten as

V = [A (LW)^(-1)] (LW X)    (5.3)
and the LWMN solution of (5.1) becomes

X = (LW)^(-1) [A (LW)^(-1)]^+ V    (5.4)
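The chain from Eq. (5.1) to Eq. (5.4) can be illustrated with a small numerical sketch. In the following Python example the transfer matrix A is a random stand-in (a real A would come from the heart-torso volume conductor model), and L is a nonsingular tridiagonal stand-in for the Laplacian operator; both are assumptions made purely so the linear algebra can be demonstrated.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 300                      # electrodes, dipole-moment unknowns (toy sizes)
A = rng.standard_normal((m, n))     # stand-in transfer matrix (Eq. 5.1: V = A X)

def minimum_norm(A, V):
    """Minimum-norm solution, Eq. (5.2): X = A^T (A A^T)^{-1} V."""
    return A.T @ np.linalg.solve(A @ A.T, V)

def lwmn(A, V):
    """Laplacian weighted minimum norm, Eq. (5.4):
    X = (LW)^{-1} [A (LW)^{-1}]^+ V, with W = diag(||A_i||) and a
    nonsingular tridiagonal stand-in for L (illustrative assumption)."""
    nn = A.shape[1]
    W = np.diag(np.linalg.norm(A, axis=0))
    L = np.eye(nn) * 2.0
    idx = np.arange(nn - 1)
    L[idx, idx + 1] = L[idx + 1, idx] = -0.5
    LWinv = np.linalg.inv(L @ W)
    return LWinv @ np.linalg.pinv(A @ LWinv) @ V

X_true = np.zeros(n)
X_true[120] = 1.0                   # a single focal source
V = A @ X_true                      # noise-free "measurements"
X_mn, X_lw = minimum_norm(A, V), lwmn(A, V)
# both estimates reproduce the measurements exactly in the noise-free case,
# but distribute the source energy differently over the n unknowns
```

Because the system is underdetermined, both estimates satisfy A X = V exactly; they differ only in which of the infinitely many feasible X they select, which is precisely the role of the weighting operator.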
When using LWMN, the resulting solution tends to be over-smoothed due to the constraint of minimizing the Laplacian of the signal. For well-focused cardiac sources, such as the sites of origin of cardiac arrhythmias, a recursive weighting strategy, previously developed for improving the performance of MN MEG imaging (Gorodnitsky et al., 1995), has been used to search for focal sources in the heart from initial LWMN estimates. This algorithm recursively enhances the values of some of the initial solution elements while decreasing the rest of the elements until they become zero. In the end, only a small number of winning elements remain non-zero, yielding the desired localized energy distribution of the solution.
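The recursive weighting idea can be sketched as a minimal FOCUSS-style iteration in Python. This is an illustrative sketch, not the exact algorithm of the cited work: the tiny three-source system, the pruning floor, and the uniform initialization are all assumptions made for demonstration.

```python
import numpy as np

def focuss(A, V, n_iter=20, floor=1e-12):
    """Recursive weighting refinement of a distributed estimate: entries
    of the current solution act as weights for the next minimum-norm
    step, so small entries shrink toward zero while a few focal entries
    survive (FOCUSS-style sketch after Gorodnitsky et al., 1995)."""
    x = np.ones(A.shape[1])          # could also start from an LWMN estimate
    for _ in range(n_iter):
        w = np.abs(x)
        w[w < floor] = 0.0           # prune elements that have collapsed
        Aw = A * w                   # A @ diag(w)
        x = w * (np.linalg.pinv(Aw) @ V)
    return x

# Tiny example: the measurement is generated by the third source alone.
A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
V = A @ np.array([0., 0., 1.])
print(focuss(A, V))                  # converges toward [0, 0, 1]
```

Each pass solves a weighted minimum-norm problem; elements that were small in the previous pass receive small weights and are driven further toward zero, so the iteration terminates with only the "winning" focal elements non-zero, as described above.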
5.2.4 NUMERICAL EXAMPLE OF 3D MYOCARDIAL DIPOLE SOURCE IMAGING

Computer simulation results for 3D myocardial dipole source imaging have recently been reported (He & Wu, 2001). A 3D heart-torso inhomogeneous volume conductor model (Wu et al., 1999) was used in the simulation. Considering the low conductivity of the lungs, the conductivity ratio of torso to lungs was set to 1:0.2, and the conductivity of myocardial muscle was assumed to be the same as that of the torso (Malmivuo & Plonsey, 1995; Gulrajani, 1998). The ventricles were divided into an equidistant lattice of 1,124 nodes with a resolution of 6.7 mm. A regional current dipole was assigned to each node, resulting in 1,124 regional dipoles.

Fig. 5.1 illustrates an example of 3D cardiac dipole source imaging. Two dipole sources, oriented from the waist towards the neck, were used to approximate two localized cardiac sources, located close to the endocardium of the right ventricle and the epicardium of the left ventricle (Fig. 5.1(a)). Gaussian white noise of 5% was added to the body surface potentials calculated from the assumed cardiac dipole sources to simulate noise-contaminated body surface ECG measurements. The inverse imaging algorithm described in Section 5.2.3 was used to reconstruct the source distribution within the myocardium, without a priori knowledge of the number of primary current dipoles. The LWMN solution is illustrated in Fig. 5.1(b), where the red and yellow colors illustrate the strength of the equivalent dipole source distribution throughout the ventricles. Fig. 5.1(b) shows that the LWMN solution reached maxima in both the right ventricle and the left ventricle, overlapping with the locations of the source dipoles. The LWMN solution showed a stronger source distribution over the left ventricle, probably because the dipole in the left ventricle is located closer to the chest than the dipole in the right ventricle. In addition, Fig.
5.1(b) shows another major area of activity over the posterior ventricular wall in the LWMN solution. The recursively weighted LWMN solution is illustrated in Fig. 5.1(c): after 20 iterations the source strength distribution is well focused at two locations. One of the localized sources was consistent with the "true" dipole in the right ventricle, while the other was located in the left ventricle but shifted about 1 cm towards the endocardium from the "true" dipole position.

Previous studies have shown that the equivalent dipole solution suffers from experimental noise when the number of moving dipoles increases to two or more (Okamoto et al., 1983). In a clinical setting, however, it is necessary to localize and image sites of origin of arrhythmias without knowing in advance how many dipoles should be used. Hence, there is a need for a technique that can localize and image sites of origin of cardiac arrhythmias without a priori constraints on the number of equivalent moving dipoles (such as one dipole). The LWMN approach, which He & Wu (2000, 2001) applied to cardiac dipole source imaging, does not make assumptions on the number of focal cardiac sources. Part of the a priori information taken into account in the 3D cardiac dipole source imaging approach is that myocardial electrical activation is smooth over a reasonably small region. With such constraints, the estimated inverse solution provides a smoothed distribution of current density over a large area of myocardium (Fig. 5.1(b)). For focal sources, as illustrated in Fig. 5.1(a), an additional strategy
FIGURE 5.1. A numerical example of 3D cardiac dipole source imaging. (a) Simulated sources: two dipole sources (red dots), oriented from the waist towards the neck, were used to approximate two localized cardiac sources, located close to the endocardium of the right ventricle and the epicardium of the left ventricle. Gaussian white noise of 5% was added to the body surface potentials to simulate noise-contaminated body surface ECG measurements. (b) The LWMN solution shows a smoothed distribution of current density within the myocardium, where the red and yellow colors illustrate the strength of the equivalent dipole source distribution. (c) The recursively weighted LWMN solution shows a well-focused source strength distribution which corresponds well with the two "true" dipole sources shown in (a). See the attached CD for color figure. (From He & Wu, 2001, with permission) © IEEE
such as recursive focusing is needed, in which it is assumed that the sites of origin of cardiac arrhythmias are localized within small regions of the myocardium. With this additional constraint, the inverse dipole source distribution shows a localized distribution of current density close to the original "true" dipole sources. Note, however, that the LWMN inverse solution with a recursive weighting strategy still shows a certain shift towards the "interior" of the myocardium from the "true" source positions (Fig. 5.1). Both the LWMN algorithm and the recursive weighting algorithm may contribute to this "shift." Although the work reported by He & Wu suggests the promise of imaging cardiac electrical activity using the LWMN approach, a systematic study should be conducted to evaluate the reconstruction results for a number of source configurations, including sources located in various regions of the heart with various orientations.
5.3 THREE-DIMENSIONAL MYOCARDIAL ACTIVATION IMAGING

Heart surface activation imaging, in which local activation time over the heart surface is estimated from BSPMs, has received much attention in recent years (Cuppen and van Oosterom, 1984; Huiskamp and Greensite, 1997; Greensite, 2001; Modre et al., 2001; Pullan et al., 2001). As reviewed in Chapter 4, this approach is based on the bidomain theory, which allows direct linking of the heart surface activation time with body surface potentials under the assumption of electrical isotropy (or "equal anisotropy") within the myocardium (Greensite, 2001). Recently, the concept of myocardial activation imaging has been extended from the 2D heart surface to the 3D myocardial volume (He et al., 2002; He & Li, 2002; Ohyu et al., 2002). In these approaches, the activation time throughout the 3D myocardium is estimated from body surface electrograms by means of a heart excitation model (He et al., 2002; He & Li, 2002) or a Wiener inverse filter (Ohyu et al., 2002). In this section, the heart-model-based 3D activation imaging approach (He et al., 2002) is presented.
5.3.1 OUTLINE OF THE HEART-MODEL-BASED 3D ACTIVATION TIME IMAGING APPROACH
The 3D distribution of activation time throughout the ventricles has been estimated with the aid of a heart-model-based approach, in which a priori knowledge of cardiac electrophysiology is embedded. Fig. 5.2 illustrates a schematic diagram of this approach. A realistic-geometry computer heart-torso model is used to represent the relationship between 3D activation sequences within the myocardium and BSPMs. The a priori knowledge of cardiac electrophysiology and the detailed anatomic information on the heart and torso are embedded in this heart-torso model. The 3D myocardial activation sequence is treated as a parameter set of the heart model and obtained by means of a nonlinear estimation procedure.
FIGURE 5.2. Schematic diagram of 3D electrocardiography tomographic imaging. See attached CD for color figure. (From He et al., Phys. Med. Biol., 2002, with permission)
A preliminary classification system (PCS) is employed to determine the cardiac status from the measured BSPM, based on a priori knowledge of cardiac electrophysiology, by means of an artificial neural network (ANN) (Li & He, 2001). The output of the ANN-based PCS provides the initial estimate of the heart model parameters to be used in a subsequent nonlinear optimization system. Starting from these initial parameters, the optimization system minimizes objective functions that assess the dissimilarity between the measured and heart-torso-model-calculated BSPMs. When the measured and model-calculated BSPMs match well, the heart model parameters corresponding to the calculated BSPM are used to produce the 3D myocardial activation sequence. Until the objective functions satisfy the given convergence criteria, the heart model parameters are adjusted with the aid of the optimization algorithms and the optimization procedure continues.
5.3.2 COMPUTER HEART EXCITATION MODEL

Numerous efforts have been made to develop computer heart models that can simulate cardiac electrophysiological processes as well as the relationship between cardiac activity and BSPMs (see Chapter 2 for review). Although the more detailed the incorporated information the better, the cellular automaton heart excitation model (Aoki et al., 1987; Lu et al.,
1993) has been used in 3D activation time imaging research due to its capability of simulating cardiac activation and BSPMs with computational efficiency. In this work (He et al., 2002), we used a cellular automaton ventricle model constructed as a 3D array of approximately 42,000 myocardial cell units with a spatial resolution of 1.5 mm. The ventricles consisted of 50 layers with an inter-layer distance of 1.5 mm, and were divided into 53 myocardial segments, each comprising approximately the same number of myocardial cell units. The action potential of each heart unit was predetermined according to experimentally observed cardiac action potentials and stored in an action potential data file. From the epicardium to the endocardium, the refractory period of the action potential of the cardiac cellular units gradually increased for the T-wave simulation. The primary current dipole sources are proportional to the gradient of the transmembrane potentials at adjacent cardiac units (Miller & Geselowitz, 1978).

The anisotropic propagation of excitation in the ventricular myocardium was incorporated into this heart model (He et al., 2003) in order to obtain more accurate simulation of the body surface ECG and myocardial activation sequence (Nenonen et al., 1991; Lorange and Gulrajani, 1993; Wei et al., 1995; Franzone et al., 1998; Huiskamp, 1998; Fischer et al., 2000). The ventricular myocardium was divided into layers 1.5 mm thick from epicardium to endocardium. The myocardial fiber orientations were rotated counterclockwise over 120° from the outermost layer (epicardium, -60°) to the innermost layer (endocardium, +60°) (Streeter et al., 1969), with identical increments between consecutive layers. All units on a given myocardial layer of the ventricles, from the epicardial layer to the endocardial layer, had identical fiber orientation.
For each myocardial unit, a fiber direction vector, lying in its local tangential plane, was determined by its fiber angle. The fiber orientations of all myocardial units of the ventricles were thus determined, and the model was placed in the realistically shaped inhomogeneous torso model for calculating the body surface ECG. The excitation conduction velocity of myocardial units was set to 0.6 m/s along the longitudinal fiber direction and 0.2 m/s along the transverse fiber direction. The electrical conductivity of myocardial units was set to 1.5 mS/cm along the longitudinal fiber direction and 0.5 mS/cm along the transverse fiber direction (Nenonen et al., 1991).

Fig. 5.3 shows the realistic-geometry inhomogeneous heart-torso model (a), an example of simulated sinus rhythm (b), and an example of paced activity (c). Fig. 5.3(b) shows the activation sequence corresponding to sinus rhythm (left), and an example of the anterior BSPM and a chest ECG lead simulated during sinus rhythm (right). Fig. 5.3(c) shows the pacing site (left), the ventricular excitation sequence over the epicardium (middle-top), the simulated BSPM on the anterior chest at 30 ms following pacing of the anterior ventricular wall (middle-bottom), and a chest ECG lead (right).
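The layered fiber rotation and the anisotropic conduction velocities described above can be sketched in a few lines of Python. The linear per-layer rotation and the velocity values follow the text; the elliptical rule for the speed in an arbitrary in-plane direction is a simplifying assumption for illustration, not the exact update rule of the cellular automaton model.

```python
import numpy as np

N_LAYERS = 50                      # epicardium (layer 0) -> endocardium
V_LONG, V_TRANS = 0.6, 0.2         # m/s along / across the fiber (from the text)

def fiber_angle(layer):
    """Fiber helix angle of a layer (radians): linear counterclockwise
    rotation over 120 deg from epicardium (-60 deg) to endocardium
    (+60 deg), identical increment between consecutive layers."""
    return np.radians(-60.0 + 120.0 * layer / (N_LAYERS - 1))

def conduction_velocity(layer, direction):
    """Propagation speed in a given in-plane unit direction, using a
    simple elliptical anisotropy model (illustrative assumption):
    v = sqrt(vL^2 cos^2(t) + vT^2 sin^2(t)), t = angle to the fiber."""
    a = fiber_angle(layer)
    fiber = np.array([np.cos(a), np.sin(a)])
    cos_t = abs(np.dot(fiber, direction))
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
    return np.hypot(V_LONG * cos_t, V_TRANS * sin_t)
```

Propagation along the local fiber direction then proceeds at 0.6 m/s, propagation across it at 0.2 m/s, and intermediate directions fall smoothly in between, which is the qualitative behavior the anisotropic heart model is designed to capture.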
5.3.3 PRELIMINARY CLASSIFICATION SYSTEM

A large number of parameters associated with the cellular automaton heart model need to be determined in order to estimate the activation sequence. A preliminary classification system (PCS) is used to approximately classify cardiac status from BSPMs, using a priori knowledge of cardiac electrophysiology and the mapping relationship between cardiac activation and the resulting BSPMs. An ANN has been used to serve as the PCS. In this implementation (Li & He, 2001), a three-layer feed-forward ANN was used, with the
FIGURE 5.3. Illustration of computer heart-torso modeling and simulation. (a) Realistic-geometry heart-torso model. (b) Simulation of sinus rhythm: left panel, intracardiac activation sequence over three slices within the ventricles; right panel, an example of a simulated anterior BSPM and chest ECG lead. (c) Simulation of epicardial pacing: left panel, heart model and a pacing site at the middle anterior epicardium of the ventricle; middle panels, anterior view of the simulated epicardial isochrones (top) and an example of the BSPM over the anterior chest following pacing (bottom); right panel, a simulated chest ECG lead. See attached CD for color figure. (Modified from He et al., Phys. Med. Biol., 2002, with permission)
number of neurons in the input layer set to the number of body surface electrodes, and the number of neurons in the output layer set to the number of myocardial segments being studied. Gaussian white noise (GWN) was added to the BSPMs to simulate noise-contaminated body surface ECG measurements. The BSPMs during 25 to 50 ms after initial activation were used as inputs to train the ANN.
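The structure of such a classifier can be sketched as follows. This Python example shows only an untrained forward pass of a three-layer feed-forward network mapping a BSPM to segment probabilities; the electrode count, hidden-layer size, random weights, and the averaging of the input frames are all illustrative assumptions, not details of the cited implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ELECTRODES, N_HIDDEN, N_SEGMENTS = 64, 32, 53   # hidden size is an assumption

# Three-layer feed-forward classifier: BSPM in, myocardial segment out.
# In practice these weights would be trained on simulated pace maps.
W1 = rng.standard_normal((N_ELECTRODES, N_HIDDEN)) * 0.1
W2 = rng.standard_normal((N_HIDDEN, N_SEGMENTS)) * 0.1

def classify_bspm(bspm_frames):
    """Map BSPM frames (one row per time instant, 25-50 ms after initial
    activation) to a probability over the myocardial segments; averaging
    the frames into one input vector is a simplification used here."""
    x = bspm_frames.mean(axis=0)              # (N_ELECTRODES,)
    h = np.tanh(x @ W1)                       # hidden layer
    z = h @ W2                                # output layer (logits)
    p = np.exp(z - z.max())
    return p / p.sum()                        # softmax: segment probabilities

frames = rng.standard_normal((26, N_ELECTRODES))  # 26 ms of noisy BSPM frames
probs = classify_bspm(frames)
seg0 = int(np.argmax(probs))                  # initial segment estimate for the PCS
```

The winning segment plays the role of the PCS output: it seeds the heart model parameters (e.g., the region of initial activation) handed to the nonlinear optimization system described next.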
5.3.4 NONLINEAR OPTIMIZATION SYSTEM

The heart model parameters associated with the myocardial activation sequence are estimated by minimizing the dissimilarity between the measured and heart-torso-model-generated (referred to as "simulated" below) BSPMs. The dissimilarity between the measured and simulated BSPMs is described by appropriate characteristic parameters extracted from the BSPMs. The following three objective functions (Li & He, 2001) have been used to reflect the dissimilarity between the measured and simulated BSPMs for paced ventricular activation:

(a) Ecc(x), which is constructed from the average correlation coefficient (CC) between the measured and simulated BSPMs from instant T1 to instant T2 of the cardiac excitation after detection of initial activation, is defined as:

Ecc(x) = [ Σ_{t=T1..T2} (1 − CCms(x, t)) ] / (T2 − T1)    (5.5)
where CCms(x, t) is the CC between the measured and simulated BSPMs at instant t, and x is a parameter vector of the spatial location of initial activation in the computer heart excitation model.

(b) Eminp(x), which is constructed from the deviation between the positions of the minima of the measured and simulated BSPMs from instant T1 to instant T2, is defined as:

Eminp(x) = Σ_{t=T1..T2} || Pmin^m(t) − Pmin^s(x, t) ||    (5.6)
where Pmin^m(t) and Pmin^s(x, t) represent the positions of the minima in the measured and simulated BSPMs at instant t, respectively. The definition of x is the same as in Eq. (5.5).

(c) ENPL(x), which is constructed from the relative error between the numbers of body surface recording leads at which the potentials are less than a certain negative threshold in the measured and simulated BSPMs from instant T1 to instant T2, is defined as:

ENPL(x) = Σ_{t=T1..T2} | L^m(t) − L^s(x, t) | / NL    (5.7)
where L^m(t) = Σ_{i=1..NL} u(φT − φ(t, i)) and L^s(x, t) = Σ_{i=1..NL} u(φT − φ(x, t, i)) are the numbers of recording leads at which the potentials are less than a given threshold φT (< 0), in the measured and simulated BSPMs at instant t, respectively. φ(t, i) and φ(x, t, i) are the i-th-lead measured and simulated potentials at instant t, respectively. u(·) is the unit-step function, which gives unity output if the potential at a lead is less than the preset threshold. NL is the number of body surface recording leads. The definition of x is the same as in Eq. (5.5).

Combining the above objective functions, the mathematical model of the optimization for this heart-model-based electrocardiographic imaging can be represented as the following minimization problem:

min_{x ∈ X} Ecc(x) = Ecc*,   Eminp(x) < ε_minp,   ENPL(x) < ε_NPL    (5.8)
where X is the feasible region of the parameters in the computer heart excitation model, and x is a vector of heart model parameters. Ecc* is the optimal value of the objective function Ecc(x), and ε_minp and ε_NPL are the allowable errors of the objective functions Eminp(x) and ENPL(x), respectively. Eq. (5.8) was solved by means of the simplex method.
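The three objective functions of Eqs. (5.5)-(5.7) are straightforward to compute for a candidate parameter vector x once the measured and simulated BSPMs are available. The Python sketch below is illustrative: the array shapes, the negative threshold value, and the treatment of minima positions as precomputed 2D coordinates are assumptions made for demonstration.

```python
import numpy as np

def objectives(phi_m, phi_s, minima_m, minima_s, phi_T=-0.1):
    """Objective functions of Eqs. (5.5)-(5.7) for one candidate x.
    phi_m, phi_s: measured / simulated BSPMs, shape (T, N_L);
    minima_m, minima_s: positions of the BSPM minima, shape (T, 2).
    Threshold phi_T and all shapes are illustrative assumptions."""
    T, N_L = phi_m.shape
    # Eq. (5.5): average (1 - CC) between measured and simulated maps
    cc = np.array([np.corrcoef(phi_m[t], phi_s[t])[0, 1] for t in range(T)])
    E_cc = np.sum(1.0 - cc) / T
    # Eq. (5.6): summed deviation of the positions of the BSPM minima
    E_minp = np.sum(np.linalg.norm(minima_m - minima_s, axis=1))
    # Eq. (5.7): relative error in the counts of leads below phi_T
    L_m = np.sum(phi_m < phi_T, axis=1)
    L_s = np.sum(phi_s < phi_T, axis=1)
    E_NPL = np.sum(np.abs(L_m - L_s)) / N_L
    return E_cc, E_minp, E_NPL

# When the simulated maps match the measured ones, all three objectives
# vanish, which is the convergence condition the simplex search aims at.
```

In the full procedure these values are evaluated for each candidate x proposed by the simplex search, which accepts the candidate only when Eminp and ENPL stay below their tolerances while Ecc decreases.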
5.3.5 COMPUTER SIMULATION

The feasibility of 3D myocardial activation imaging has been suggested in a computer simulation study (He et al., 2002), which is presented in this section. In this simulation study, pacing protocols were used to simulate paced cardiac activation. By setting pacing sites in different myocardial regions of the heart excitation model, sequential pace maps were obtained by solving the forward problem using the heart-torso computer model. Two pacing protocols, single-site pacing and dual-site pacing, were used to evaluate the performance of
the 3D myocardial activation imaging approach. Gaussian white noise (GWN) of 10 μV was added to the BSPMs at each time instant after the onset of pacing, to simulate noise-contaminated body surface potential measurements. The maximum value of the BSPM during the QRS complex was set to 3 mV.

The performance of activation time imaging was tested by single-site pacing at 24 different sites throughout the ventricles. The CC and relative error (RE) between the vector of simulated activation times and the vector of estimated activation times were calculated for each of the 24 pacing sites. The vector of activation times consists of the activation time of each voxel within the ventricles. Averaged over all 24 sites, the RE and CC between the "true" and estimated activation times were 0.07 ± 0.03 and 0.9989 ± 0.0008, respectively, suggesting a high degree of fidelity of the inverse estimation of activation time in the ventricles.

Fig. 5.4 shows two typical simulation examples. The top rows show the simulated "true" activation sequences, and the bottom rows show the inversely estimated activation sequences. Each row shows the activation sequence in 5 longitudinal sections ((b)-(f)) and 1 transverse section of the ventricles (a). Five horizontal lines in the transverse section
FIGURE 5.4. Two examples of activation time imaging results during dual-site ventricular pacing. The activation sequence within the ventricles was inversely estimated from the BSPM with 10 μV Gaussian white noise added. The first row shows the simulated "true" activation sequence, and the second row the inversely estimated activation sequence. Each row shows the isochrones in 5 longitudinal sections ((b)-(f)) and 1 transverse section of the ventricles (a). Five horizontal black lines in the transverse section of the ventricles, from top to bottom, indicate the positions of the 5 longitudinal sections from (b) to (f). The unit of the color bar is milliseconds. The distance between neighboring sections is 4.5 mm. (A) One pacing site at the septal endocardium of the left ventricle, and another at the intramural left-anterior wall (marked by yellow dots). (B) Both pacing sites at the left-posterior intramural wall adjacent to the endocardium (marked by yellow dots). See attached CD for color figure. (From He et al., Phys. Med. Biol., 2002, with permission)
TABLE 5.1. Effects of heart and torso geometry uncertainty on the activation time imaging (RE)

Pacing Region      BA      BRW     BLW     BS      BP      Mean ± SD
NM                 0.08    0.03    0.06    0.04    0.05    0.05 ± 0.02
LSX                0.08    0.03    0.06    0.10    0.06    0.06 ± 0.03
RSX                0.06    0.03    0.06    0.09    0.07    0.06 ± 0.02
FSY                0.08    0.03    0.07    0.10    0.06    0.05 ± 0.02
BSY                0.08    0.06    0.07    0.12    0.05    0.07 ± 0.03
NTM + 10%          0.09    0.11    0.08    0.05    0.07    0.08 ± 0.02
NTM − 10%          0.09    0.10    0.06    0.10    0.07    0.08 ± 0.02

Note: NM: normal model. LSX/RSX: left/right shift of the heart along the x-direction. FSY/BSY: front/back shift of the heart along the y-direction. NTM + 10% / NTM − 10%: 10% enlargement and reduction of the normal torso model.
of the ventricles, from top to bottom, indicate the positions of the 5 longitudinal sections from (b) to (f). In panel (A), one pacing site is located at the left ventricular septal endocardium, and the other is located in the left anterior intramural wall. In panel (B), both pacing sites are located in the left-posterior intramural wall adjacent to the endocardium. In both cases, 10 μV GWN was added to the BSPMs to simulate noise-contaminated body surface ECG recordings. Fig. 5.4 suggests that, for dual-site pacing at two separate locations (A) or adjacent locations (B), 3D myocardial activation imaging can reconstruct the activation sequence well, although the estimated activation sequence showed slightly delayed activation compared with the "true" activation sequence.

Effects of heart-torso geometry uncertainties were tested by selecting five pacing sites in five different regions adjacent to the AV ring (BA: basal-anterior; BRW: basal-right-wall; BP: basal-posterior; BLW: basal-left-wall; BS: basal-septum). Modified (enlarged or reduced by 10%) torso models or position-shifted heart models (in 4 directions) were used in the forward BSPM simulation, and 10 μV GWN was added to the simulated BSPMs. By using the modified heart-torso models in the forward simulations while using the standard model in the inverse calculation, the effect of inter-subject geometry variation was initially evaluated. Table 5.1 shows the RE between the simulated "true" ventricular activation sequence and the estimated activation sequence following single-site pacing. NM refers to the normal case, in which only measurement noise is introduced without geometry uncertainty. Note that the heart-torso geometry uncertainty showed little effect on the activation sequence estimation as determined by the RE measure. For example, only a 2% increase in RE was obtained for the backward shift of the heart along the y-direction (BSY), as compared with the NM case.
The estimation errors associated with the 10% enlarged or reduced torso models have an averaged RE of 8%. Effects of the conduction velocity of ventricular activation in the heart model were assessed by varying the conduction velocity in the forward heart model. The BSPMs were simulated using this altered ventricle model, and noise was added to simulate noise-contaminated BSPM measurements. The standard heart model, in which the average conduction velocity was used, was then used to estimate the inverse solutions. Fig. 5.5 shows a typical simulation example, with the same format as in Fig. 5.4. GWN of 10 μV was added to the BSPMs
FIGURE 5.5. An example of activation time imaging results with variation in the conduction velocity. Same format of display as in Fig. 5.4. The top rows show the simulated "true" activation sequence with altered conduction velocity in the forward heart model, and the bottom rows show the inversely estimated activation sequence. The results correspond to a 10% increase in conduction velocity following a single-site pacing. See attached CD for color figure. (From He et al., Phys Med & Biol, 2002 with permission)
to simulate noise-contaminated body surface ECG recordings. The top row shows the simulated "true" activation sequence with altered conduction velocity in the forward heart model, and the bottom row shows the inversely estimated activation sequence, following a single-site pacing when the conduction velocity of the forward heart model was increased by 10%. When the average conduction velocity in the forward heart model differs from that in the heart model used in the inverse procedure, the estimated activation time at specific regions within the ventricles differs from the original activation time distribution in the forward solution. In particular, the early activation moved down toward the apex. Nevertheless, the overall distributions of the activation time are not affected substantially. The RE and CC between the "true" and estimated activation sequences within the ventricles are 0.0916/0.106 and 0.996/0.998, respectively, corresponding to a 5%/10% increase in the conduction velocity.
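The RE and CC figures quoted throughout this Section can be computed directly from the "true" and estimated activation-time maps. A minimal sketch follows; the function names are ours, not from the chapter, and the maps are simply flattened arrays of activation times over the myocardial cell units:

```python
import numpy as np

def relative_error(true_map, est_map):
    # RE = ||true - est|| / ||true||, Euclidean norm over all myocardial sites.
    t = np.asarray(true_map, dtype=float).ravel()
    e = np.asarray(est_map, dtype=float).ravel()
    return float(np.linalg.norm(t - e) / np.linalg.norm(t))

def correlation_coefficient(true_map, est_map):
    # Pearson correlation coefficient between the two activation-time distributions.
    t = np.asarray(true_map, dtype=float).ravel()
    e = np.asarray(est_map, dtype=float).ravel()
    t = t - t.mean()
    e = e - e.mean()
    return float(t @ e / (np.linalg.norm(t) * np.linalg.norm(e)))
```

A low RE together with a CC near 1 indicates that the estimated sequence reproduces both the amplitude and the spatial pattern of the "true" sequence.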
5.3.6 DISCUSSION
In this Section, a new approach for noninvasive 3D cardiac activation time imaging by means of a heart-excitation-model is reviewed. This approach is based on the observation that a priori information regarding cardiac electrophysiology should be incorporated into the cardiac inverse solutions in order to obtain useful information on the 3D cardiac activation from the two-dimensional electrical measurements over the body surface. In this approach, the a priori information on cardiac electrophysiology is incorporated into the heart-excitation-model, which is not an equivalent physical source model but an equivalent physiological source model. By linking this physiological source model with body surface ECG measurements, physiological parameters of interest are estimated from body surface ECG recordings. A unique feature of such an approach is that the rich knowledge we have gained in forward whole-heart modeling (see Chapters 2 and 3) can be directly applied to 3D cardiac imaging. Furthermore, the anisotropic nature of myocardial propagation can also be incorporated in the 3D myocardial activation imaging, as shown in this Section. Such a priori electrophysiological information serves as constraints when solving the inverse problem, leading to robust 3D inverse solutions.
Three-Dimensional Electrocardiographic Tomographic Imaging
Although a cellular-automaton heart-excitation-model has been used for the 3D activation time imaging, it is anticipated that more sophisticated 3D heart-excitation-models (e.g. Gulrajani et al., 2001) can be employed in 3D activation time imaging in the future, providing the spatial resolution needed for more complicated cardiac activation sequences. With the rapid advancement in computer technology, this goal seems more feasible than expected. In parallel to the heart-excitation-model based 3D activation imaging approach, Ohyu et al. (2002) have reported another 3D activation imaging approach in which the action potentials are approximated by a step function, with the initial activation being varied from site to site within isotropic ventricles. The distribution of activation time was then connected to the BSPMs and estimated by means of a Wiener inverse filter. Both this activation imaging approach (Ohyu et al., 2002) and the dipole source distribution imaging approach (He & Wu, 2001) reviewed in Section 5.2 use linear systems connecting the inverse solution parameters directly with the BSPMs. It would be of interest to compare the heart-model based activation imaging reported by He et al. (2002) with the linear-system based activation imaging reported by Ohyu et al. (2002).
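The Wiener inverse filter mentioned above estimates source parameters from a linear system y = Ax + n using assumed source and noise covariances, x̂ = Cx Aᵀ (A Cx Aᵀ + Cn)⁻¹ y. A toy numerical sketch follows; the dimensions, the random transfer matrix, and the identity covariances are illustrative assumptions, not taken from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system: BSPM y = A x + noise, with x the source parameters.
n_electrodes, n_sources = 64, 20
A = rng.standard_normal((n_electrodes, n_sources))  # transfer (lead-field) matrix
x_true = rng.standard_normal(n_sources)             # "true" source distribution
noise_sd = 0.1
y = A @ x_true + noise_sd * rng.standard_normal(n_electrodes)

# Wiener inverse filter: x_hat = Cx A^T (A Cx A^T + Cn)^(-1) y
Cx = np.eye(n_sources)                    # assumed source covariance
Cn = noise_sd**2 * np.eye(n_electrodes)   # assumed noise covariance
W = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)
x_hat = W @ y
```

When the covariance assumptions hold, this filter is the linear minimum-mean-square-error estimator; with Cx and Cn proportional to the identity it coincides with Tikhonov-regularized least squares.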
5.4 THREE-DIMENSIONAL MYOCARDIAL TRANSMEMBRANE POTENTIAL IMAGING In association with activation time, the transmembrane potential (TMP) reflects important electrophysiological properties on the local myocardial tissues. The TMP has been estimated over the heart surface from magnetocardiograms (Wach et al., 1997). Recently, efforts have been extended from the heart surface to the 3D myocardium. He and coworkers reported (2002, 2003) their effort to estimate TMP distribution within the 3D ventricles from body surface electrocardiograms by means of a heart-model based imaging approach. Ohyu et al. has developed an approach to estimate the activation time and approximate amplitude of the TMP from magnetocardiograms using Wiener inverse filter (2002). Skipa et al. reported their initial results on estimation of TMP distribution within the heart from BSPMs (2002). In this Section, we present the heart-mode-based 3D TMP imaging approach and its applications to imaging TMP distributions associated with paced ventricular activity, and acute myocardial infarction. The whole procedure of the heart-model-based TMP imaging approach may also be illustrated in the schematic diagram in Fig. 5.2, except that the reconstructed 3D cardiac sources are spatio-temporal distribution of TMP. Similar to the 3D heart-model based activation imaging, the following procedures are used. A realistic geometry 3D heart-torso-model is constructed based on the knowledge of cardiac electrophysiology and geometric measurements via CTIMRI. The anisotropic nature of myocardium can be incorporated into this computer heart model. The BSPMs are linked with the 3D TMP distribution by means of the heart-torso-model. To reduce the dimensionality of the parameter space, a preliminary classification system (PCS) is employed to classify cardiac status based on the a priori knowledge of cardiac electrophysiology and the BSPM, by means of an ANN. 
The output of the PCS provides the initial estimate of heart model parameters which are employed later in a nonlinear optimization system. The nonlinear optimization system then minimizes the objective functions that assess the dissimilarity between the "measured" and model-generated BSPMs.
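The match-and-adjust loop just described can be sketched in miniature. Here a linear stand-in replaces the cellular-automaton heart model, and a simple gradient step replaces the chapter's optimization algorithms; all names, dimensions, and the learning-rate and tolerance values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
lead_field = rng.standard_normal((16, 4))   # toy heart-parameters-to-BSPM transfer matrix

def forward_bspm(params, lead_field):
    # Stand-in for the heart-torso model: maps heart-model parameters to a BSPM.
    return lead_field @ params

def imaging_loop(measured, lead_field, init_params, lr=0.02, tol=1e-10, max_iter=20000):
    # Adjust heart-model parameters until the model-generated BSPM matches the
    # "measured" BSPM; the objective is the sum of squared differences.
    p = init_params.copy()
    for _ in range(max_iter):
        residual = forward_bspm(p, lead_field) - measured
        if float(residual @ residual) < tol:   # convergence criterion satisfied
            break
        p -= lr * (lead_field.T @ residual)    # gradient step on the objective
    return p
```

In the actual approach the forward map is the nonlinear heart-excitation-model and the PCS supplies `init_params`; the structure of the loop, however, is the same: simulate, compare, adjust, repeat until convergence.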
If the "measured" BSPM and the heart-torso-model-generated BSPM match well, the 3D distribution of transmembrane potentials is determined from the heart model parameters corresponding to the resulting BSPM. If they do not match, the heart model parameters are adjusted with the aid of the optimization algorithms, and the optimization procedure proceeds until the objective functions satisfy the given convergence criteria. When the procedure converges, the TMP distribution throughout the 3D myocardium is determined. The feasibility of imaging the 3D TMP distribution has been suggested in computer simulations of paced activity (He et al., 2003). The performance of the above TMP imaging approach was tested by single-site pacing at 24 different sites throughout the ventricles. GWN of 10 μV was added to the BSPMs, and GWN of 10 mm was added to the body surface electrode positions, to simulate noise-contaminated body surface ECG recordings. Fig. 5.6 shows the TMP amplitude distributions (a-b) of ventricular depolarization following a single-site pacing at the septum in 5 longitudinal sections within the ventricles (c). The TMP distribution in each longitudinal section at 8 typical instants (from 6 ms to 48 ms with a time step of 6 ms) after the onset of pacing is shown in one row. The 5 longitudinal sections within the ventricles are illustrated in Fig. 5.6(c) by 5 horizontal black lines in the
FIGURE 5.6. An example of TMP imaging results during single-site ventricular pacing. (a) and (b) illustrate the forward and inverse solutions of the TMP distributions in 5 longitudinal sections within the ventricles, during ventricular depolarization following a single-site pacing at the septum. The TMP distribution in each longitudinal section at 8 typical instants (from 6 ms to 48 ms with a time step of 6 ms) after the onset of pacing is shown in one row. The 5 longitudinal sections within the ventricles are illustrated in (c) by 5 horizontal black lines in the transverse section of the ventricles from top to bottom indicating their positions. The gray regions in the longitudinal sections indicate the resting cell units. The Max and Min of the color bars correspond to the maximum and minimum values of the TMP amplitude during the first 60 ms from the onset of activation. See attached CD for color figure.
FIGURE 5.6. (cont.)
transverse section of the ventricles from top to bottom indicating their positions. The gray regions in the longitudinal sections indicate the resting cell units. The Max and Min of the color bars correspond to the maximum and minimum values of the TMP amplitude during the first 60 ms from the onset of activation. Fig. 5.6 suggests that the inverse TMP distribution captures well the overall spatio-temporal patterns of the forward TMP distribution following a single-site pacing at the septum, but with a slight shift of the area of initial activation towards the base (as observed in Layer 3 in the inverse TMP distribution). Averaged over 24 sites for single-site pacing, the RE and CC between the "true" and estimated TMP distributions are 0.1266 ± 0.0326 and 0.9915 ± 0.0041, respectively, indicating that the 3D TMP imaging approach can reconstruct well the TMP distributions within the ventricles corresponding to a well-localized ventricular activation.
FIGURE 5.7. A numerical example of myocardial infarction imaging. (a) Green shows preset acute myocardial infarction. (b) Red shows estimated infarcted area over the same layer in the heart model. See attached CD for color figure.
The effects of heart-torso geometry uncertainties on the performance of the 3D TMP imaging approach have also been evaluated, with torso size and heart position uncertainties considered. The 3D TMP imaging approach was found to be robust for up to 10% torso size variation and up to 10 mm heart position shift. See He et al. (2003) for detailed descriptions of the results for TMP imaging of paced activity. The 3D TMP imaging approach has also been applied to image acute myocardial infarction (MI) (Li & He, in press). Fig. 5.7 illustrates an example of myocardial infarction imaging. In this case, GWN of 5 μV was added to the BSPMs to simulate noise-contaminated body surface ECG recordings. Fig. 5.7(a) shows the "true" MI, marked in green, and Fig. 5.7(b) shows the inversely estimated MI, marked in red. The "true" MI is located in the middle left wall (MLW) of the ventricle and extends over 7 myocardial layers (not shown here). The estimated MI is located close to the "true" MI site and has a similar shape to the "true" MI, except that some small MI areas occur in layers adjacent to the 7 "true" MI layers (not shown here). Fig. 5.7 suggests the feasibility of applying the 3D TMP imaging approach to imaging the spatial location and extent of acute MI in the ventricles.
5.5 DISCUSSION
In this Chapter, we reviewed the recent progress in the development of noninvasive 3D electrocardiography tomographic imaging approaches, including 3D dipole source distribution imaging, 3D activation imaging, and 3D transmembrane potential imaging. These approaches may be classified into two general categories in terms of methodology. One is to solve the system equations connecting electrophysiological characteristics (such as current density, activation time, and TMP) to BSPMs. This approach involves solving the system equations using inverse techniques such as the weighted minimum norm (He & Wu, 2000, 2001; Skipa et al., 2002) and the Wiener technique (Ohyu et al., 2002). Another approach is to solve the electrocardiography tomographic imaging problem indirectly, by means of a heart model. In this heart-model based approach, we have developed a localization approach to localize the site of origin of activation from body surface ECG recordings (Li & He, 2001), an activation imaging approach to image the activation time distribution (He et al., 2002), and a TMP imaging approach to image the distribution of transmembrane potentials
throughout the ventricles from BSPMs (He et al., 2003). The 3D TMP imaging approach has been applied to image dynamic spatiotemporal patterns of activation induced by pacing (He et al., 2003), and to image and localize the site and size of acute myocardial infarction (Li & He, in press). The heart-model based approaches are based on our observation that a priori information regarding the distributed cardiac electrophysiological process should be incorporated into the cardiac inverse solutions in order to obtain useful information on the distributed 3D cardiac electrical activity from the two-dimensional BSPMs. In the present approach, the a priori information on cardiac electrophysiology is incorporated into the distributed heart model, which is not an equivalent physical source model but an electrophysiological source model in which knowledge of electrophysiology and pathophysiology is embedded. The distributed electrophysiological process within the heart is represented by cellular automata, for each of which the site of origin of activation, activation time, or transmembrane potential is determined based on the knowledge of cardiac electrophysiology. In such approaches, since substantial a priori electrophysiological information is incorporated into the inverse solutions, more accurate inverse solutions are anticipated as compared with approaches that do not take this information into account. Such a priori information includes not only a more accurate forward solution at each time point, but also more realistic time-varying dynamics as set by the heart electrophysiology model. Therefore, it is not surprising that good matches between "true" cardiac electrical activity and estimated inverse solutions are obtained by means of the heart-model based approaches.
On the other hand, the system equation approach has the benefit that there is no need to limit the search space for heart model parameters, as is currently practiced in the heart-model based approaches. The inverse solutions are obtained directly by solving the system equations that link the electrophysiological properties with BSPMs via biophysical relationships. It would be of interest to compare the performance of these two approaches for 3D electrocardiography tomographic imaging. The inverse problem of electrocardiography has been solved by means of equivalent point sources (dipole localization), distributed two-dimensional heart surface imaging methods (epicardial potential imaging and heart surface activation imaging), and 3D distributed source imaging approaches. While 3D distributed source imaging, as reviewed in this chapter, represents an important advancement in the field of the electrocardiography inverse problem, all 3D electrocardiography tomographic imaging approaches have, to date, been evaluated only in computer simulations. It is of ultimate importance and significance to experimentally validate the 3D distributed source imaging approaches, in order to establish electrocardiography tomographic imaging as a useful means for noninvasively imaging the three-dimensional distribution of cardiac electrical activity, for aiding clinical diagnosis and management of a variety of cardiac diseases, and for guiding radio-frequency catheter ablative interventions.
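The weighted-minimum-norm family referred to above solves the linear system y = Ax directly by penalizing a weighted source norm. A generic Tikhonov-style sketch follows; the notation is ours and is not a specific published implementation:

```python
import numpy as np

def weighted_minimum_norm(A, y, W, lam=1e-2):
    # Minimize ||A x - y||^2 + lam * ||W x||^2 over the source vector x.
    # Closed-form normal-equations solution: x = (A^T A + lam W^T W)^(-1) A^T y.
    return np.linalg.solve(A.T @ A + lam * (W.T @ W), A.T @ y)
```

With W the identity this reduces to the classical minimum-norm solution; spatially varying weights (as in FOCUSS-like schemes) are commonly used to counteract the bias toward superficial sources.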
ACKNOWLEDGEMENT The author wishes to thank his postdoctoral associates and graduate students, Dr. Guanglin Li, Dr. Dongsheng Wu, and Xin Zhang, with whom this work was conducted. This work was supported in part by NSF BES-0201939, a grant from the American Heart Association #0140132N, and NSF CAREER Award BES-9875344.
REFERENCES
Aoki, M., Okamoto, Y., Musha, T., and Harumi, K.: Three-dimensional simulation of the ventricular depolarization and repolarization processes and body surface potentials: normal heart and bundle branch block. IEEE Trans. Biomed. Eng., 34: 454-462, 1987.
Barber, M.R., Fischman, E.J.: Heart dipole regions and the measurement of dipole moment. Nature, 192: 141-142, 1961.
Barr, R.C., Ramsey, M., Spach, M.S.: Relating epicardial to body surface potential distributions by means of transfer coefficients based on geometry measurements. IEEE Trans. Biomed. Eng., 24: 1-11, 1977.
Bellman, R., Collier, C., Kagiwada, H., Kalaba, R., Selvester, R.: Estimation of heart parameters using skin potential measurements. Comm. ACM, 7: 666-668, 1964.
Burnes, J.E., Taccardi, B., MacLeod, R.S., Rudy, Y.: Noninvasive ECG imaging of electrophysiologically abnormal substrates in infarcted hearts: a model study. Circulation, 101: 533-540, 2000.
Cuppen, J.J.M., Van Oosterom, A.: Model studies with inversely calculated isochrones of ventricular depolarization. IEEE Trans. Biomed. Eng., 31: 652-659, 1984.
de Guise, J., Gulrajani, R.M., Savard, P., Guardo, R., Roberge, F.A.: Inverse recovery of two moving dipoles from simulated surface potential distributions on a realistic human torso model. IEEE Trans. Biomed. Eng., 32: 126-135, 1985.
Fischer, G., Tilg, B., Modre, R., Huiskamp, G.J., Fetzer, J., Rucker, W., Wach, P.: A bidomain model based BEM-FEM coupling formulation for anisotropic cardiac tissue. Ann. Biomed. Eng., 28: 1229-1243, 2000.
Flowers, N.C., Horan, L.G.: Body surface potential mapping. In: Cardiac Electrophysiology, edited by Zipes, D.P., Jalife, J., 2nd Edition. W.B. Saunders Company, pp. 1049-1067, 1995.
Franzone, P.C., Taccardi, B., Viganotti, C.: An approach to the inverse calculation of epicardial potentials from body surface maps. Adv. Cardiol., 21: 50-54, 1978.
Franzone, P.C., Guerri, L., Pennacchio, M., Taccardi, B.: Spread of excitation in 3-D models of the anisotropic cardiac tissue. III. Effects of ventricular geometry and fiber structure on the potential distribution. Math. Biosci., 151: 51-98, 1998.
Geselowitz, D.B.: Multipole representation for an equivalent cardiac generator. Proc. IRE, 48: 75-79, 1960.
Gorodnitsky, I.F., George, J.S., Rao, B.D.: Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm. Electroenceph. & clin. Neurophysiol., 95: 231-251, 1995.
Greensite, F., Huiskamp, G.: An improved method for estimating epicardial potentials from the body surface. IEEE Trans. Biomed. Eng., 45: 1-7, 1998.
Greensite, F.: Myocardial Activation Imaging. In: Computational Inverse Problems in Electrocardiography, edited by Johnston, P. WIT Press, Bristol, 143-190, 2001.
Gulrajani, R., Roberge, F.A., Savard, P.: Moving dipole inverse ECG and EEG solutions. IEEE Trans. Biomed. Eng., 31: 903-910, 1984.
Gulrajani, R.M., Trudel, M.C., Leon, L.J.: A membrane-based computer heart model employing parallel processing. Biomedizinische Technik, Band 46, Ergänzungsband 2: 20-22, 2001.
Hämäläinen, M., Ilmoniemi, R.: Interpreting measured magnetic fields of the brain: estimates of current distributions. Helsinki University of Technology Report, TKK-F-A559, 1984.
He, B., Wu, D.: A bioelectric inverse imaging technique based on surface Laplacians. IEEE Trans. Biomed. Eng., 44: 529-538, 1997.
He, B., Wu, D.: Three-dimensional source imaging of cardiac electric activity. Proc. of World Congress on Medical Physics and Biomedical Engineering, CD-ROM, 2000.
He, B., Wu, D.: Imaging and visualization of 3-D cardiac electric activity. IEEE Trans. Inf. Technol. Biomed., 5: 181-186, 2001.
He, B., Li, G., Zhang, X.: Noninvasive Three-dimensional Activation Time Imaging of Ventricular Excitation by Means of a Heart-Excitation-Model. Physics in Medicine and Biology, 47: 4063-4078, 2002.
He, B., Li, G.: Noninvasive three-dimensional myocardial activation time imaging by means of a heart-excitation-model. Int. J. of Bioelectromagnetism, 4(2): 87-88, 2002.
He, B., Li, G., Zhang, X.: Noninvasive Imaging of Ventricular Transmembrane Potentials within Three-dimensional Myocardium by Means of a Realistic Geometry Anisotropic Heart Model. IEEE Trans. Biomed. Eng., 50(10): 1190-1202, 2003.
Hlavin, J.M., Plonsey, R.: An experimental determination of a multipole representation of a turtle heart. IEEE Trans. Biomed. Eng., 10: 98, 1963.
Huiskamp, G., Greensite, F.: A new method for myocardial activation imaging. IEEE Trans. Biomed. Eng., 44: 433-446, 1997.
Huiskamp, G.: Simulation of depolarization in a membrane-equations-based model of the anisotropic ventricle. IEEE Trans. Biomed. Eng., 45: 847-855, 1998.
Jeffs, B., Leahy, R., Singh, M.: An evaluation of methods for neuromagnetic image reconstruction. IEEE Trans. Biomed. Eng., 34: 713-723, 1987.
Johnston, P.R., Gulrajani, R.M.: A new method for regularization parameter determination in the inverse problem of electrocardiography. IEEE Trans. Biomed. Eng., 44: 19-39, 1997.
Li, G., He, B.: Localization of the site of origin of cardiac activation by means of a heart-model-based electrocardiographic imaging approach. IEEE Trans. Biomed. Eng., 48: 660-669, 2001.
Li, G., He, B.: Noninvasive Estimation of Myocardial Infarction by Means of a Heart-Model-Based Imaging Approach: a simulation study. Med. Biol. Eng. & Comput., in press.
Lorange, M., Gulrajani, R.M.: A computer heart model incorporating anisotropic propagation. I. Model construction and simulation of normal activation. J. Electrocardiol., 26: 245-261, 1993.
Lu, W., Xu, Z., Fu, Y.: Microcomputer-based cardiac field simulation model. Med. Biol. Eng. Comput., 31: 384-387, 1993.
Malmivuo, J., Plonsey, R.: Bioelectromagnetism. Oxford University Press, 1995.
Gulrajani, R.M.: Bioelectricity and Biomagnetism. John Wiley & Sons, 1998.
Martin, R.O., Cox, J.W., Keller, F.W., Terry, F.H., Brody, D.A.: Equivalent cardiac generators: two moving dipoles and moving dipole and quadripole. Ann. Biomed. Eng., 2: 164-183, 1974.
Miller, W.T., Geselowitz, D.B.: Simulation studies of the electrocardiogram. I. The normal heart. Circ. Res., 43: 301-323, 1978.
Mirvis, D.M., Keller, F.W., Ideker, R.E., Cox, J.W., Dowdie, R.J., Zettergren, D.G.: Detection and localization of multiple epicardial electrical generators by a two-dipole ranging technique. Circ. Res., 41: 551-557, 1977.
Modre, R., Tilg, B., Fischer, G., Wach, P.: An iterative algorithm for myocardial activation time imaging. Computer Methods and Programs in Biomedicine, 64: 1-7, 2001.
Nenonen, J., Edens, J., Leon, L.J., Horacek, B.M.: Computer model of propagated excitation in the anisotropic human heart: I. Implementation and algorithms. In: Computers in Cardiology, 545-548, 1991.
Okamoto, Y., Teramachi, Y., Musha, T.: Limitation of the inverse problem in body surface potential mapping. IEEE Trans. Biomed. Eng., 30: 749-754, 1983.
Ohyu, S., Okamoto, Y., Kuriki, S.: Use of the ventricular propagated excitation model in the magnetocardiographic inverse problem for reconstruction of electrophysiological properties. IEEE Trans. Biomed. Eng., 49: 509-519, 2002.
Oster, H.S., Taccardi, B., Lux, R.L., Ershler, P.R., Rudy, Y.: Noninvasive electrocardiographic imaging: reconstruction of epicardial potentials, electrograms, and isochrones and localization of single and multiple electrocardiac events. Circulation, 96: 1012-1024, 1997.
Pascual-Marqui, R.D., Michel, C.M., Lehmann, D.: Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain. Int. J. Psychophysiol., 18: 49-65, 1994.
Pilkington, T.C., Morrow, M.N.: The usefulness of multipoles in electrocardiography. CRC Crit. Rev. Biomed. Eng., 7: 175, 1982.
Pullan, A.J., Cheng, L.K., Nash, M.P., Bradley, C.P., Paterson, D.J.: Noninvasive electrical imaging of the heart: theory and model development. Ann. Biomed. Eng., 29: 817-836, 2001.
Savard, P., Roberge, F.A., Perry, J., Nadeau, R.A.: Representation of cardiac electrical activity by a moving dipole for normal and ectopic beats in the intact dog. Circ. Res., 46: 415-425, 1980.
Selvester, R.H.S.: Recommendation for nomenclature of myocardial subdivisions. J. Electrocardiol., 25: 161-162, 1992.
Shahidi, A.V., Savard, P., Nadeau, R.: Forward and inverse problems of electrocardiography: modeling and recovery of epicardial potentials in humans. IEEE Trans. Biomed. Eng., 41: 249-256, 1994.
Skipa, O., Sachse, F.B., Werner, C., Dössel, O.: Transmembrane potential reconstruction in anisotropic heart model. Proc. of International Conference on Bioelectromagnetism, 17-18, 2002.
Streeter, D.D., Jr., Spotnitz, H.M., Patel, D.P., Ross, J., Jr., Sonnenblick, E.H.: Fiber orientation in the canine left ventricle during diastole and systole. Circ. Res., 24: 339-347, 1969.
Taccardi, B.: Distribution of heart potential on the thoracic surface of normal human subjects. Circ. Res., 12: 341-352, 1963.
Throne, R.D., Olson, L.G.: A generalized eigensystem approach to the inverse problem of electrocardiography. IEEE Trans. Biomed. Eng., 41: 592-600, 1994.
Throne, R.D., Olson, L.G.: Generalized eigensystem techniques for the inverse problem of electrocardiography applied to a realistic heart-torso geometry. IEEE Trans. Biomed. Eng., 44: 447-454, 1997.
Wach, P., Tilg, B., Lafer, G., Rucker, W.: Magnetic source imaging in the human heart: estimating cardiac electrical sources from simulated and measured magnetocardiogram data. Med. Biol. Eng. Comput., 35: 157-166, 1997.
Wei, D., Okazaki, O., Harumi, K., Harasawa, E., Hosaka, H.: Comparative simulation of excitation and body surface electrocardiogram with isotropic and anisotropic computer heart model. IEEE Trans. Biomed. Eng., 42: 343-357, 1995.
Wu, D., Tsai, H.C., He, B.: On the Estimation of the Laplacian Electrocardiogram during Ventricular Activation. Ann. Biomed. Eng., 27: 731-745, 1999.
6
BODY SURFACE LAPLACIAN MAPPING OF BIOELECTRIC SOURCES Bin He* and Jie Lian Department of Bioengineering, University of Illinois at Chicago
6.1 INTRODUCTION
6.1.1 HIGH-RESOLUTION ECG AND EEG
Targeting two of the most life-critical organs, the heart and the brain, the electrocardiogram (ECG) and the electroencephalogram (EEG) are two important bioelectric recordings for studying cardiac and neural activity. Conventional ECG and EEG have many advantages. First, they are noninvasive measurements. Second, they are convenient to apply and have relatively low cost. More importantly, they have unsurpassed millisecond-scale temporal resolution, which is essential for revealing rapid changes in the dynamic patterns of heart and brain activity. However, the major limitation of conventional ECG and EEG is their relatively low spatial resolution compared to other imaging modalities, such as computed tomography (CT) or magnetic resonance imaging (MRI). One reason contributing to the low spatial resolution is limited spatial sampling. Conventional EEG uses the standard international 10-20 system, which has about 20 electrodes over the scalp, with a corresponding inter-electrode distance of about 6 cm (Nunez et al., 1994). For ECG measurement, the most commonly used configuration in a clinical setting is the 12-lead ECG. Despite its great success in many clinical applications, it has a major limitation in that it contains very little spatial information, and physicians have to infer the cardiac status mainly from temporal analysis of the ECG waveforms. Therefore, one way to enhance the spatial resolution of ECG and EEG is to increase the spatial sampling by using a larger number of surface electrodes. *Address all correspondence to: Bin He, Ph.D. University of Minnesota, Department of Biomedical Engineering, 7-105 BSBE, 312 Church Street, Minneapolis, MN 55455. E-mail:
[email protected]
However, even with very high-density spatial sampling, the spatial resolution of the EEG and ECG is still limited, because of the volume conduction effect. In other words, the electrical signals are smeared as they pass through the media between the bioelectric sources and the body surface sensors. For the brain, it is the head volume conductor, particularly the low-conductivity skull layer (Nunez, 1981, 1995). For the heart, it is the torso volume conductor, including the effects of the lungs, the ribs, and other tissues (Mirvis et al., 1977; Spach et al., 1977; Rudy & Plonsey, 1980). Therefore, advanced techniques are desired in order to compensate for the volume conduction effect and enhance the spatial resolution of the ECG and EEG. As reviewed in Chapters 4 and 5 with applications to the heart, one such method is to solve the so-called inverse problem, which attempts to estimate the bioelectric sources from the body surface potential measurements. Another method is the surface Laplacian, which will be thoroughly discussed in this chapter.
6.1.2 BIOPHYSICAL BACKGROUND OF THE SURFACE LAPLACIAN
The concept of the Laplacian originated centuries ago, and the Laplacian operator has been widely used in digital image processing as a spatial enhancement method. Similarly, the Laplacian technique can also be used for high-resolution bioelectric mapping. By definition, the surface Laplacian (SL) is the second-order spatial derivative of the surface potential. Due to its intrinsic spatial high-pass filtering characteristics, the SL can reduce the volume conduction effect by enhancing the high-frequency spatial components, and therefore can achieve higher spatial resolution than the surface potentials (Figure 6-1). Consider the non-orthogonal curvilinear coordinate system on a general surface Ω, u = x, v = y, and z = f(u, v), where f(u, v) is a continuous function whose second-order partial derivatives exist. If V(u, v) is the analytical surface potential function (whose second-order partial derivatives exist) on Ω, the SL of V(u, v) can be written in tensorial formulation (Courant & Hilbert, 1966; Babiloni et al., 1996):

\nabla_s^2 V = \frac{1}{\sqrt{g}} \left\{ \frac{\partial}{\partial u}\left[\sqrt{g}\left(g^{11}\frac{\partial V}{\partial u} + g^{12}\frac{\partial V}{\partial v}\right)\right] + \frac{\partial}{\partial v}\left[\sqrt{g}\left(g^{21}\frac{\partial V}{\partial u} + g^{22}\frac{\partial V}{\partial v}\right)\right] \right\}    (6-1)
where the components of the metric tensor are given by (Babiloni et al., 1996):

g^{11} = \frac{1 + (\partial f/\partial v)^2}{g}    (6-2a)

g^{12} = g^{21} = -\frac{(\partial f/\partial u)(\partial f/\partial v)}{g}    (6-2b)

g^{22} = \frac{1 + (\partial f/\partial u)^2}{g}    (6-2c)

g = 1 + \left(\frac{\partial f}{\partial u}\right)^2 + \left(\frac{\partial f}{\partial v}\right)^2    (6-2d)
FIGURE 6-1. Schematic illustration of the SL as a spatial enhancement method. The cardiac activity located at the anterior apex (black circle) is sensed by potential measurement over a larger area on the chest (light grey) but by Laplacian measurement over a smaller area on the chest (dark grey). (From Tsai et al., Electromagnetics, 2001 with permission)
For the plane model, where z = f(u, v) = 0, u = x and v = y, the SL reduces to:

\nabla_s^2 V = \frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial y^2}    (6-3)
For the sphere model, assuming z = \sqrt{1 - u^2 - v^2}, u = x and v = y, the SL is then given by (Perrin et al., 1987a): (6-4)

The Laplacian electrogram (we use "electrogram" to refer to either the ECG or the EEG, depending on whether the heart or the brain is concerned) shall be defined as the negative SL of the surface potential electrogram (He, 1999; He & Wu, 1999), to facilitate the interpretation of the Laplacian maps in comparison to the potential maps. As stated in equation (6-3), assuming a planar surface in the vicinity of the observation point, a reasonable approximation of the local area of the body surface would be the tangential plane at the point of interest, over which a local Cartesian coordinate system (x, y, z) can be considered. Assuming z to be normal to the tangential plane, the Laplacian ECG/EEG at the observation point becomes (He, 1999; He & Wu, 1999):
where σ denotes the conductivity, J the current density, and Jeq an equivalent current source (He & Cohen, 1992a, 1995; He, 1997, 1998a, 1999; He & Wu, 1999). Unlike the ECG and EEG inverse problems, the SL approach does not attempt to locate the bioelectric sources inside the heart and brain. Instead, the Laplacian ECG/EEG
B. He and J. Lian
can be viewed as a two-dimensional (2D) projection of the three-dimensional (3D) bioelectric source onto the body surface. Therefore, as shown in equation (6-5), the Laplacian ECG/EEG can be interpreted as an equivalent current source density on the body surface, which has similar physical units to the primary bioelectric source density. Moreover, compared to the ECG and EEG inverse approaches, the SL approach does not require exact knowledge of the conductivity distribution inside the torso and head volume conductors, and it has the unique advantage of reference independence as compared with the potential measurement.
6.2 SURFACE LAPLACIAN ESTIMATION TECHNIQUES

6.2.1 LOCAL LAPLACIAN ESTIMATES

If the body surface in the vicinity of the measurement point can be approximately represented by a planar surface, the SL can be calculated using equation (6-3). In practice, the second-order derivatives can be approximated by means of finite differences (Hjorth, 1975). Consider a grid of unipolar electrodes with equal inter-electrode distance b on the body surface (Figure 6-2A); the regular Laplacian electrogram at each non-boundary electrode (the cross-hatched circle) can be estimated by using the regular finite difference representation (Hjorth, 1975; He & Cohen, 1992a; Wu et al., 1999; Lian et al., 2002):

L_R(i, j) ≈ (4/b²) { V(i, j) − (1/4) [V(i−1, j) + V(i+1, j) + V(i, j−1) + V(i, j+1)] }    (6-6)
where V(i, j) and L_R(i, j) represent the potential and the regular Laplacian electrogram at the electrode (i, j), respectively. For each non-boundary electrode, equation (6-6) uses the potential measurements at five electrodes (the cross-hatched circle and its four neighboring open circles in Figure 6-2A) to estimate the Laplacian electrogram at the center electrode (the cross-hatched circle). Similarly, the Laplacian electrogram at electrode (i, j) can also be estimated from the potential recorded from this electrode and those recorded from its other four neighboring electrodes in the diagonal direction (the neighboring black circles surrounding the cross-hatched circle in Figure 6-2A). Denoting the distance from the center electrode to its diagonal neighboring electrodes as d (for a uniform grid, d = √2·b), the diagonal Laplacian electrogram can also be estimated by (Wu et al., 1999; Lian et al., 2002):
L_D(i, j) ≈ (4/d²) { V(i, j) − (1/4) [V(i−1, j−1) + V(i−1, j+1) + V(i+1, j−1) + V(i+1, j+1)] }    (6-7)

where L_D(i, j) represents the diagonal Laplacian electrogram at the electrode (i, j).
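As a concrete check of equations (6-6) and (6-7), the sketch below applies both five-point estimators to the test potential V = x² + y², whose negative Laplacian is −4 everywhere (the grid spacing is illustrative):

```python
import numpy as np

def laplacian_regular(V, b):
    """Regular 5-point Laplacian electrogram, Eq. (6-6), at interior electrodes."""
    L = np.full_like(V, np.nan)                  # boundary electrodes undefined
    L[1:-1, 1:-1] = (4 / b**2) * (V[1:-1, 1:-1]
                    - 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] + V[1:-1, :-2] + V[1:-1, 2:]))
    return L

def laplacian_diagonal(V, b):
    """Diagonal 5-point Laplacian electrogram, Eq. (6-7), with d = sqrt(2)*b."""
    d2 = 2 * b**2
    L = np.full_like(V, np.nan)
    L[1:-1, 1:-1] = (4 / d2) * (V[1:-1, 1:-1]
                    - 0.25 * (V[:-2, :-2] + V[:-2, 2:] + V[2:, :-2] + V[2:, 2:]))
    return L

# Test potential V = x^2 + y^2: the negative Laplacian -(Vxx + Vyy) = -4
b = 0.1
xs = np.arange(0, 1 + b / 2, b)
X, Y = np.meshgrid(xs, xs)
V = X**2 + Y**2
LR, LD = laplacian_regular(V, b), laplacian_diagonal(V, b)
assert np.allclose(LR[1:-1, 1:-1], -4.0)
assert np.allclose(LD[1:-1, 1:-1], -4.0)
```

Both estimators are exact for quadratic potentials; for general potentials they differ at fourth order in the electrode spacing.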
FIGURE 6-2. Schematic illustration of the local Laplacian estimates. (A) Regular or diagonal 5-point local Laplacian estimation. (B) Circular finite difference local Laplacian estimation.
A more general form of the finite difference representation of the SL utilizes the potential information from more local electrodes to realize the circular Laplacian electrode (He & Cohen, 1992a). As illustrated in Figure 6-2B, to estimate the Laplacian electrogram at the center electrode, the unipolar potential data are obtained from this electrode as well as from n electrodes located along a small circle (with radius r) surrounding it, and the finite difference representation of the Laplacian electrogram is given by (Le et al., 1994; Wei et al., 1995; He, 1997, 1998a, 1999; Wei & Mashima, 1999; Wei, 2001):

L₀ ≈ (4/r²) ( V₀ − (1/n) Σᵢ₌₁ⁿ Vᵢ )    (6-8)
where V₀ and L₀ represent the potential and the circular Laplacian electrogram at the center electrode, respectively, and Vᵢ (i = 1, 2, ..., n) represents the potential at one of the surrounding electrodes. Another local Laplacian estimate uses a bipolar concentric electrode that consists of two parts: a conductive disk at the center and a surrounding conductive ring (Fattorusso et al., 1949; He & Cohen, 1992a). In the bipolar approach, the Laplacian electrogram may be estimated as (He & Cohen, 1992a, 1995; He, 1997):
L₀ ≈ (4/r²) ( V₀ − (1/(2πr)) ∮ V dl )    (6-9)
where the integral is taken around a circle of radius r. In addition, other local Laplacian algorithms have been proposed to achieve more accurate numerical estimates, for instance, by local modeling of the scalp and the potential distribution (Le et al., 1994) or by means of local polynomial fitting (Wang & Begleiter, 1999).
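A minimal sketch of the circular estimate of equation (6-8), checked on the same quadratic test potential used above (the electrode positions, ring radius, and electrode count are illustrative):

```python
import numpy as np

def laplacian_circular(V_center, V_ring, r):
    """Circular Laplacian electrogram, Eq. (6-8): the center potential minus
    the mean of n ring potentials, scaled by 4/r^2."""
    return (4 / r**2) * (V_center - np.mean(V_ring))

# Test potential V = x^2 + y^2, for which -(Vxx + Vyy) = -4 everywhere
def potential(x, y):
    return x**2 + y**2

x0, y0, r, n = 0.3, -0.2, 0.05, 8            # illustrative center, radius, count
theta = 2 * np.pi * np.arange(n) / n          # n equally spaced ring electrodes
ring = potential(x0 + r * np.cos(theta), y0 + r * np.sin(theta))
L0 = laplacian_circular(potential(x0, y0), ring, r)
assert abs(L0 + 4.0) < 1e-9
```

The discrete ring mean approximates the contour integral of equation (6-9), so the bipolar concentric electrode can be viewed as the continuous (n → ∞) limit of the circular estimate.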
6.2.2 GLOBAL LAPLACIAN ESTIMATES

From equations (6-1) and (6-2), it can be seen that analytical models of both the potential distribution V(u, v) and the surface geometry f(u, v) are required to calculate the SL. However, in real applications, the only data available are a limited number of potential recordings and body surface geometric coordinate samplings. Thus, V(u, v) and f(u, v) must be interpolated or approximated, and the most widely used interpolation scheme is spline interpolation. Among the investigations of the spline SL, noteworthy are the spherical spline SL (Perrin et al., 1987a,b, 1989), the ellipsoidal spline SL (Law et al., 1993), and the realistic geometry spline SL (Babiloni et al., 1996, 1998; He et al., 2001, 2002; Zhao & He, 2001). Considering the non-planar shape of the body surface, a global SL estimation using the spline technique is more accurate than the local-based SL estimates. Furthermore, as a major advantage over local-based SL estimation, the spline SL has been shown to be more robust against noise (Perrin et al., 1987a; Law et al., 1993; Babiloni et al., 1996; He et al., 2001, 2002). In the following, a recently developed realistic geometry spline Laplacian estimation algorithm is presented (He et al., 2001, 2002). Estimation of the parameters associated with the spline Laplacian is formulated by seeking the general inverse of a transfer matrix. The number of spline parameters that need to be determined through regularization is reduced to one in the present approach, thus enabling easy implementation of the realistic geometry spline Laplacian estimator.
6.2.2.1 Spline interpolation of the surface geometry

Given body surface sampling points (xᵢ, yᵢ, zᵢ), i = 1, ..., M, where M is the number of sampling points for coordinate measurement, the mathematical model of the body surface geometry z = f(x, y) can be described by the 2D thin plate spline (Harder & Desmarais, 1972; Perrin et al., 1987a,b; Babiloni et al., 1996; He et al., 2001, 2002):

z = f(x, y) = Σᵢ₌₁ᴹ pᵢ K_{m−1} + Q_{m−1} = Σᵢ₌₁ᴹ pᵢ dᵢ^{2(m−1)} log(dᵢ² + w²) + Σ_{d=0}^{m−1} Σ_{k=0}^{d} q_{dk} x^{d−k} y^k    (6-10)

where m (the spline order) is set to 2 (Perrin et al., 1987a,b; Babiloni et al., 1996; He et al., 2001, 2002), dᵢ² = (x − xᵢ)² + (y − yᵢ)², K_{m−1} and Q_{m−1} are the basis function and osculating function, respectively, and w is a constant that accounts for the effective radius of the recording sensor (Harder & Desmarais, 1972; Perrin et al., 1987a). The coefficients pᵢ and q_{dk} are the solutions of the following matrix equations (Duchon, 1976; Perrin et al., 1987a,b; He et al., 2002):

K P + E Q = Z    (6-11a)

Eᵀ P = 0    (6-11b)
where P, Q, and Z are the vectors containing pᵢ, q_{dk}, and zᵢ, respectively, and the matrices K and E are composed of elements of the basis function and the sampling coordinates, respectively.

6.2.2.2 Spline interpolation of the surface potential distribution

Similarly, given body surface potential recordings Vᵢ at positions (xᵢ, yᵢ, zᵢ), i = 1, 2, ..., N, where N is the number of recording electrodes, the body surface potential distribution over the 3D space at an arbitrary point (x, y, z) can be modeled by the 3D spline (Babiloni et al., 1996, 1998; He et al., 2001, 2002):

V(x, y, z) = Σᵢ₌₁ᴺ tᵢ H_{m−1} + R_{m−1} = Σᵢ₌₁ᴺ tᵢ (rᵢ²)^{(2m−3)/2} + Σ_{d=0}^{m−1} Σ_{k=0}^{d} Σ_{g=0}^{k} r_{dkg} x^{d−k} y^{k−g} z^g    (6-12)

where m (the spline order) is set to 3 (Law et al., 1993; Babiloni et al., 1996, 1998; He et al., 2001, 2002), rᵢ² = (x − xᵢ)² + (y − yᵢ)² + (z − zᵢ)², H_{m−1} and R_{m−1} are the basis and osculating functions, respectively, and the coefficients tᵢ and r_{dkg} can be determined by solving the matrix equations (Law et al., 1993; Babiloni et al., 1996; He et al., 2002):

H T + F R = V    (6-13a)

Fᵀ T = 0    (6-13b)

where T, R, and V are the vectors containing tᵢ, r_{dkg}, and Vᵢ, respectively, and the matrices H and F are composed of elements of the basis function and the electrode coordinates, respectively.
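The two interpolation systems share the same bordered structure; as an illustration, the geometry spline of equations (6-10) and (6-11) can be sketched as follows for m = 2, assuming the basis Kᵢ = dᵢ² log(dᵢ² + w²) with w = 0.16 (the sampled test surface and the direct dense solver are illustrative choices):

```python
import numpy as np

def fit_thin_plate_spline(pts, z, w=0.16):
    """Fit z = f(x, y) by the m = 2 thin plate spline of Eq. (6-10), solving
    the bordered linear system of Eqs. (6-11a) and (6-11b) directly."""
    x, y = pts[:, 0], pts[:, 1]
    d2 = (x[:, None] - x[None, :])**2 + (y[:, None] - y[None, :])**2
    K = d2 * np.log(d2 + w**2)                 # basis matrix (diagonal terms = 0)
    E = np.column_stack([np.ones_like(x), x, y])   # polynomial terms 1, x, y
    M, q = len(z), E.shape[1]
    A = np.block([[K, E], [E.T, np.zeros((q, q))]])   # (6-11a) over (6-11b)
    sol = np.linalg.solve(A, np.concatenate([z, np.zeros(q)]))
    p, c = sol[:M], sol[M:]

    def f(xq, yq):
        dq2 = (xq - x)**2 + (yq - y)**2
        return (p * dq2 * np.log(dq2 + w**2)).sum() + c @ np.array([1.0, xq, yq])
    return f

# Sample a smooth test "torso patch" and check that the spline interpolates it
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(30, 2))
z = 0.5 * pts[:, 0]**2 - 0.3 * pts[:, 1]
f = fit_thin_plate_spline(pts, z)
err = max(abs(f(px, py) - zi) for (px, py), zi in zip(pts, z))
assert err < 1e-6
```

The potential spline of equations (6-12) and (6-13) follows the same pattern with the 3D basis (rᵢ²)^{3/2} and the ten cubic polynomial terms.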
6.2.2.3 Determination of the spline parameters

In an attempt to overcome the ill-posedness of the systems, approximation instead of interpolation of the surface geometry and potential distribution is used, by introducing correction terms in equations (6-11a) and (6-13a), which are respectively changed to (Babiloni et al., 1996, 1998; He et al., 2001, 2002):

(K + wI) P + E Q = Z    (6-14a)

(H + λI) T + F R = V    (6-14b)

where I is the identity matrix, and the parameters w and λ are used to improve the numerical stability of the systems. The optimal values of these two parameters need to be determined separately, by either a "tuning procedure" or other regularization techniques (Babiloni et al., 1996, 1998). Instead of searching for the optimal parameters in two dimensions, the above equations can be reformulated by combining equations (6-11a,b) and (6-13a,b) into one linear system equation (He et al., 2001, 2002):

A X = B    (6-15)
where

A = ⎡ K    E    0    0  ⎤
    ⎢ Eᵀ   0    0    0  ⎥
    ⎢ 0    0    H    F  ⎥
    ⎣ 0    0    Fᵀ   0  ⎦    (6-16a)

X = [P  Q  T  R]ᵀ    (6-16b)

B = [Z  0  V  0]ᵀ    (6-16c)
Then the problem becomes seeking the solution of equation (6-15). Applying the concept of the general inverse, we have (He et al., 2001, 2002):

X = A# B    (6-17)

where A# is the pseudo-inverse of A. The matrix A is ill-posed, so regularization methods must be used to improve the stability of the system. Notably, after the matrix equations are reformulated into one unified linear system in equation (6-15), only a single regularization parameter needs to be determined when seeking the general inverse A#. Therefore, the present method not only significantly reduces the computational effort and improves the efficiency and stability of the spline SL algorithm, but also can be combined with the many regularization techniques that have been extensively studied for determining the parameter when seeking the general inverse A# (for details on the regularization techniques, see Chapter 4).

6.2.3 SURFACE LAPLACIAN BASED INVERSE PROBLEM
The SL-based ECG or EEG inverse problem has also been explored to achieve high-resolution heart or brain electric source imaging. One of the approaches is to estimate the epicardial potentials from the body surface Laplacian ECG (He, 1994; Wu et al., 1995, 1998; He & Wu, 1997, 1999; Johnston, 1997; Throne & Olson, 2000), or to estimate the cortical potentials from the scalp Laplacian EEG (He, 1998; Babiloni et al., 2000; Bradshaw & Wikswo, 2001). As illustrated in Figure 6-3, if V is an isotropic homogeneous volume conductor bounded by an outer surface S1 and an inner surface S2, and there is no current source within V, the potential on the inner surface can be related to the potential or Laplacians on the outer surface. Applying Green's second identity to the volume V results in (Barr et al., 1977):

u(r*) = −(1/4π) ∬_{S1} u dΩ − (1/4π) ∬_{S2} u dΩ − (1/4π) ∬_{S2} (1/r)(∂u/∂n) dS    (6-18)

where
u(r*) — the electrical potential at the observation point r*;
dΩ — the solid angle of an infinitesimal surface element dS as seen from r*;
∂u/∂n — the first derivative of the potential u with respect to the outward normal to dS.
FIGURE 6-3. Schematic illustration of the arbitrarily shaped volume conductor.
By discretizing the surfaces S1 and S2 into triangular elements, and taking the limit as the observation point approaches the surface elements on S1 and S2, respectively, from the inside of V, the following matrix equations can be obtained:

P11 U1 + P12 U2 + G12 Γ2 = 0    (6-19)

P21 U1 + P22 U2 + G22 Γ2 = 0    (6-20)

where Uₖ is the vector consisting of the electrical potentials at every surface element on Sₖ, and Γₖ is the vector consisting of the normal derivatives of the electrical potentials at every triangular element on Sₖ just inside of V. P11, P12, P21, P22, G12, and G22 are coefficient matrices (Barr et al., 1977). Solving equations (6-19) and (6-20) leads to the following equation, which relates the inner surface potential U2 to the outer surface potential U1:

U1 = T12 U2    (6-21)
where T12 = (P11 − G12 G22⁻¹ P21)⁻¹ (G12 G22⁻¹ P22 − P12). The surface Laplacian of the potential at the position r* on the outer surface S1 can be written as follows (Wu et al., 1998; He & Wu, 1999):

Lₛ(r*) = −(1/4π) ∬_{S1} u · (∂²(dΩ)/∂n²)|_{r*} + (1/4π) ∬_{S2} u · (∂²(dΩ)/∂n²)|_{r*} + (1/4π) ∬_{S2} (∂u/∂n₂) · (∂²(1/r)/∂n²)|_{r*} dS    (6-22)

where n is the normal direction of the surface S1 at r*. Similarly, by discretizing the surfaces S1 and S2 into triangular elements, the following matrix equation can be obtained:

Lₛ = A U1 + B U2 + C Γ2    (6-23)
where Lₛ is the vector consisting of the surface Laplacians at every surface element on S1, and A, B, and C are coefficient matrices (Wu et al., 1998; He & Wu, 1999). From equations (6-19), (6-21), and (6-23), we can relate the inner surface potential U2 to the outer surface Laplacian Lₛ by the transfer matrix H:

Lₛ = H U2    (6-24)

where H = A · T12 + B − C · G22⁻¹ · (P21 · T12 + P22). The potential-based inverse problem seeks the inner surface potentials from the outer surface potentials by solving equation (6-21):

U2 = T12# U1    (6-25)

On the other hand, the SL-based inverse problem seeks the inner surface potentials by solving equation (6-24):

U2 = H# Lₛ    (6-26)

where # denotes the general inverse of the transfer matrix. In addition, a hybrid potential-Laplacian inverse solution can be obtained by minimizing the error function (He & Wu, 1999; Throne & Olson, 2000):

e = ‖T12 U2 − U1‖² + α ‖H U2 − Lₛ‖²    (6-27)

where α is a weighting coefficient. The resulting inverse solution is given by (He & Wu, 1999):

U2 = (T12ᵀ T12 + α Hᵀ H)⁻¹ (T12ᵀ U1 + α Hᵀ Lₛ)    (6-28)

Equation (6-28) suggests that the inner surface potentials can be estimated from both the outer surface Laplacians and the outer surface potentials.
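The algebra of equations (6-19) through (6-28) can be verified numerically. The sketch below uses small random matrices as stand-ins for the BEM coefficient matrices (they are not boundary-element coefficients computed from any geometry): it eliminates Γ2 to confirm the transfer matrices of equations (6-21) and (6-24), then recovers the inner potentials with the hybrid least-squares solution of equation (6-28) in the noise-free case:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 6, 4   # illustrative element counts on the outer (S1) and inner (S2) surfaces

# Random stand-ins for the coefficient matrices of Eqs. (6-19), (6-20), (6-23)
P11, P12 = rng.normal(size=(n1, n1)), rng.normal(size=(n1, n2))
P21, P22 = rng.normal(size=(n2, n1)), rng.normal(size=(n2, n2))
G12, G22 = rng.normal(size=(n1, n2)), rng.normal(size=(n2, n2))
A, B, C = rng.normal(size=(n1, n1)), rng.normal(size=(n1, n2)), rng.normal(size=(n1, n2))

# Transfer matrix of Eq. (6-21): U1 = T12 U2
T12 = np.linalg.solve(P11 - G12 @ np.linalg.solve(G22, P21),
                      G12 @ np.linalg.solve(G22, P22) - P12)

U2_true = rng.normal(size=n2)
U1 = T12 @ U2_true
Gamma2 = -np.linalg.solve(G22, P21 @ U1 + P22 @ U2_true)        # from Eq. (6-20)
assert np.allclose(P11 @ U1 + P12 @ U2_true + G12 @ Gamma2, 0)  # Eq. (6-19) holds

# Laplacian transfer matrix of Eq. (6-24): Ls = H U2
Ls = A @ U1 + B @ U2_true + C @ Gamma2                          # Eq. (6-23)
H = A @ T12 + B - C @ np.linalg.solve(G22, P21 @ T12 + P22)
assert np.allclose(Ls, H @ U2_true)

# Hybrid inverse, Eqs. (6-27)-(6-28): recover U2 from both U1 and Ls
alpha = 0.5
M = T12.T @ T12 + alpha * (H.T @ H)
U2_est = np.linalg.solve(M, T12.T @ U1 + alpha * (H.T @ Ls))
assert np.allclose(U2_est, U2_true)
```

With noisy data the normal-equation matrix in the last step becomes ill-conditioned, which is why regularized general inverses are used in practice.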
6.3 SURFACE LAPLACIAN IMAGING OF HEART ELECTRICAL ACTIVITY

6.3.1 HIGH-RESOLUTION LAPLACIAN ECG MAPPING

By applying the SL technique to the potential ECG, body surface Laplacian mapping (BSLM) was first proposed by He and Cohen (1992a,b). Theoretical and experimental studies have been carried out, demonstrating the unique feature of BSLM in effectively reducing the torso volume conduction effect and enhancing the capability of localizing and mapping multiple simultaneously active myocardial electrical events (He & Cohen,
1992a,b; He et al., 1993, 1995, 1997, 2002; Oostendorp & van Oosterom, 1996; Umetani et al., 1998; Wei & Harasawa, 1999; Wu et al., 1999; Tsai et al., 2001; Besio et al., 2001; Wei et al., 2001; Li et al., 2002).
6.3.2 PERFORMANCE EVALUATION OF THE SPLINE LAPLACIAN ECG

Through human experiments and computer simulations, we have systematically evaluated the signal-to-noise ratio of the Laplacian ECG during ventricular depolarization and repolarization, and demonstrated the feasibility of recording the Laplacian ECG using the 5-point local SL estimator (Wu et al., 1999; Lian et al., 2001, 2002). Further improvement of the Laplacian ECG estimation may be achieved by using the global spline Laplacian technique. In this section, we present the performance evaluation of the 3D spline SL algorithm (see Section 6.2.2). Computer simulations were conducted using both a spherical model and a realistic geometry heart-torso model, and comparison studies were also made with the 5-point local SL estimator (He et al., 2002). Given the torso surface geometry coordinates and the potential measurements, the Laplacian ECG was estimated by using the realistic geometry spline SL algorithm detailed in Section 6.2.2. The linear inverse problem in equation (6-17) was solved by using the truncated singular value decomposition (TSVD) (Shim & Cho, 1981), and the truncation parameter was determined by means of the discrepancy principle (Morozov, 1984).
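TSVD with the truncation level chosen by the discrepancy principle can be sketched as follows (the ill-conditioned test matrix and noise level are illustrative choices, not the actual spline system):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD solution of A x = b, keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

def discrepancy_truncation(A, b, noise_norm):
    """Smallest truncation level whose residual falls to the noise level
    (Morozov's discrepancy principle)."""
    for k in range(1, min(A.shape) + 1):
        x = tsvd_solve(A, b, k)
        if np.linalg.norm(A @ x - b) <= noise_norm:
            return k, x
    return k, x

# Ill-conditioned test system with additive Gaussian white noise
rng = np.random.default_rng(3)
n = 40
A = np.vander(np.linspace(0, 1, n), increasing=True)   # severely ill-conditioned
x_true = rng.normal(size=n)
noise = 1e-3 * rng.normal(size=n)
b = A @ x_true + noise
k, x = discrepancy_truncation(A, b, np.linalg.norm(noise))
assert k < n                                           # the spectrum is truncated
assert np.linalg.norm(A @ x - b) <= np.linalg.norm(noise)
```

Truncating at the noise level discards the small singular values that would otherwise amplify the measurement noise in the estimated Laplacian.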
6.3.2.1 Effects of noise

We first evaluated the effects of noise on the SL estimation, approximating the torso volume conductor as a homogeneous single-layer unit-radius sphere model with a normalized interior conductivity of 1.0. A radial dipole or a tangential dipole was used to represent a localized cardiac electrical source. Three eccentricities (0.5, 0.6, 0.7) were used to assess the effect of the source depth on the SL estimation. Gaussian white noise (GWN) at different noise levels (5%, 7%, 10%) was added to the dipole-generated surface potentials sampled from 129 surface electrodes, simulating noise-contaminated potential ECG measurements. Two cases of geometry noise were also considered (2% geometry noise plus 10% potential noise, and 5% geometry noise plus 5% potential noise). For each noise level, ten trials of noise were generated and simulations were conducted. The correlation coefficient (CC) values between the estimated SL and the analytical SL for all ten trials were averaged and are shown in Table 6-1. The SL was estimated by three different methods: (1) the 5-point local SL (5PL), (2) the two-parameter spline SL (2SL), and (3) the recently developed one-parameter spline SL (1SL). The two-parameter spline SL was estimated optimally by using the tuning procedure (Babiloni et al., 1996, 1998) to find the optimal values of w and λ in equation (6-14), and the one-parameter spline SL was estimated optimally by searching for the optimal truncation parameter in the TSVD procedure (He et al., 2002). The optimal parameters correspond to the maximum CC between the analytical SL and the estimated SL. The quantity w in equation (6-10) was set to 0.16 in the spline SL estimation.
TABLE 6-1. The CC values between the analytical and estimated SL under different levels of noise in the one-sphere model

GWN            5% PN            7% PN            10% PN           2% GN + 10% PN   5% GN + 5% PN
               5PL  2SL  1SL    5PL  2SL  1SL    5PL  2SL  1SL    5PL  2SL  1SL    5PL  2SL  1SL
1-RD, r = 0.5  0.79 0.98 0.96   0.73 0.97 0.95   0.63 0.96 0.94   0.58 0.93 0.92   0.72 0.91 0.90
1-RD, r = 0.6  0.90 0.97 0.98   0.85 0.96 0.97   0.77 0.95 0.95   0.58 0.91 0.93   0.70 0.91 0.92
1-RD, r = 0.7  0.96 0.96 0.96   0.93 0.95 0.95   0.89 0.93 0.92   0.76 0.89 0.90   0.80 0.92 0.93
1-TD, r = 0.5  0.71 0.98 0.96   0.62 0.97 0.95   0.50 0.96 0.94   0.65 0.90 0.91   0.67 0.88 0.90
1-TD, r = 0.6  0.84 0.95 0.98   0.78 0.95 0.95   0.67 0.94 0.95   0.60 0.92 0.92   0.66 0.92 0.92
1-TD, r = 0.7  0.94 0.96 0.96   0.91 0.95 0.95   0.85 0.91 0.93   0.71 0.88 0.88   0.79 0.91 0.92

Note: RD—radial dipole; TD—tangential dipole; PN—potential noise; GN—geometry noise.
Three findings are evident from Table 6-1. First, for all three SL estimators, the higher the noise level, the smaller the CC. Second, for all the cases studied, the spline SL performs better (higher CC) than the 5-point local SL, while the two-parameter spline SL and the one-parameter spline SL have similar performance. Third, the 5-point local SL performs best for superficial sources under low potential noise levels, but its performance degrades dramatically as the source moves to a deeper position or as the noise level increases. In contrast, the spline SL generally performs well over a broader range of source depths (from 0.5 to 0.7) and is more robust against noise in the potential measurement. Specifically, for the one-parameter spline SL estimator, the CC values for all cases studied are greater than 0.92 under 10% potential noise, and equal to or greater than 0.90 under 5% potential noise plus 5% geometry noise.
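The qualitative trend of Table 6-1 — CC decreasing as the noise level grows — can be reproduced with a simplified planar stand-in for the spherical simulations (the dipole depth, grid, trial count, and noise levels below are illustrative):

```python
import numpy as np

# Planar stand-in: dipole potential on a measurement plane at depth h
def dipole_potential(X, Y, h):
    return h / (X**2 + Y**2 + h**2) ** 1.5

def five_point_sl(V, b):
    """5-point local SL estimate, Eq. (6-6), at interior grid points."""
    return (4 / b**2) * (V[1:-1, 1:-1]
           - 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] + V[1:-1, :-2] + V[1:-1, 2:]))

def cc(a, c):
    """Correlation coefficient between two maps."""
    a, c = a.ravel() - a.mean(), c.ravel() - c.mean()
    return float(a @ c / (np.linalg.norm(a) * np.linalg.norm(c)))

rng = np.random.default_rng(4)
n, h = 33, 1.0
xs = np.linspace(-4, 4, n)
b = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
V = dipole_potential(X, Y, h)
L_clean = five_point_sl(V, b)          # noise-free reference SL map

def mean_cc(noise_level, trials=10):
    """Average CC between noisy and noise-free SL estimates over several trials."""
    out = []
    for _ in range(trials):
        Vn = V + noise_level * V.std() * rng.normal(size=V.shape)
        out.append(cc(five_point_sl(Vn, b), L_clean))
    return float(np.mean(out))

# CC degrades monotonically with the noise level, as in Table 6-1
assert mean_cc(0.05) > mean_cc(0.10) > mean_cc(0.20)
```

The degradation is driven by the 4/b² amplification of uncorrelated noise in the finite difference, which is exactly the weakness the spline estimators mitigate.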
6.3.2.2 Effects of the number of recording electrodes

TABLE 6-2. The CC values between the analytical and estimated SL corresponding to different electrode numbers and dipole configurations in the one-sphere model

Electrode number   129              96               64               32
                   5PL  2SL  1SL    5PL  2SL  1SL    5PL  2SL  1SL    5PL  2SL  1SL
Config. A          0.84 0.95 0.98   0.78 0.97 0.97   0.88 0.96 0.97   0.72 0.95 0.84
Config. B          0.90 0.97 0.98   0.92 0.97 0.98   0.92 0.97 0.97   0.85 0.96 0.96
Config. C          0.77 0.96 0.99   0.82 0.94 0.97   0.80 0.94 0.96   0.59 0.92 0.88
Config. D          0.87 0.96 0.98   0.88 0.97 0.95   0.88 0.97 0.90   0.49 0.73 0.71
Config. E          0.74 0.94 0.98   0.75 0.89 0.94   0.77 0.92 0.92   0.47 0.67 0.83

Note: Configurations A: 1-TD at r = 0.6; B: 1-RD at r = 0.6; C: two +z-direction dipoles at (±0.3, 0.0, 0.5); D: one +x-direction dipole at (0.0, 0.0, 0.7) and two +z-direction dipoles at (0.0, ±0.4, 0.5); E: 4-RD at r = 0.6, each at π/4 with respect to the z-axis. RD—radial dipole; TD—tangential dipole.

Table 6-2 shows the effects of the number of recording electrodes on the SL estimation. One or multiple dipoles with varying orientations were placed in the spherical conductor model (see the note under Table 6-2 for the dipole configurations), and 5% GWN was added to the dipole-generated potentials. The CC values between the analytical SL and the estimated SL for different electrode numbers and dipole configurations are shown in Table 6-2. As before, the three SL estimators were evaluated and compared. The quantity w in equation (6-10) was set to 0.16, 0.18, 0.20, and 0.24 corresponding to 129, 96, 64, and
32 electrodes, respectively. The spline SL was estimated optimally by seeking the optimal regularization parameter(s). Table 6-2 clearly indicates the correlation between the goodness of the SL estimation and the number of surface electrodes. In general, the more electrodes used, the higher the CC of the SL estimation. The CC values drop significantly when 32 electrodes are used, which is consistent with the fact that a minimum sampling density in the space domain is needed to restore the spatial frequency spectrum. Again, Table 6-2 indicates that the two-parameter spline SL and the one-parameter spline SL have comparable performance, and both are more robust against measurement noise than the 5-point local SL estimation.
6.3.2.3 Effects of regularization

In simulations, the optimal SL can be estimated by means of a priori information on the analytical SL, i.e., by seeking the optimal parameter that maximizes the CC between the analytical SL and the estimated SL. In real applications, the SL can also be estimated without using a priori information on the analytical SL, for example, by using the discrepancy principle (Morozov, 1984). In Table 6-3, the upper row shows the CC between the analytical SL and the optimal estimated SL. The lower row shows the CC between the analytical SL and the estimated SL obtained by means of the discrepancy principle, without a priori information on the analytical SL. In this simulation, multiple dipoles with varying orientations were placed in the spherical conductor model (see the note under Table 6-3 for the dipole configurations), and 5% GWN was added to the analytical surface potentials sampled at 129 recording electrodes. The quantity w in equation (6-10) was set to 0.16 in the spline SL estimation.

TABLE 6-3. Comparison of the optimal estimated spline SL and the spline SL estimated by using the discrepancy principle in the one-sphere model

Dipole Configuration      A      B      C      D
Optimal spline SL        0.99   0.98   0.98   0.98
Regularized spline SL    0.96   0.98   0.97   0.98

Note: Configurations A: two +z-direction dipoles at (±0.3, 0.0, 0.5); B: one +x-direction dipole at (0.0, 0.0, 0.7) and two +z-direction dipoles at (0.0, ±0.4, 0.5); C: 4-RD at r = 0.6, each at π/4 with respect to the z-axis; D: 4-TD at r = 0.7, each at π/4 with respect to the z-axis. RD—radial dipole; TD—tangential dipole.

Table 6-3 indicates that the regularization results always have a lower CC than the optimal results (by definition). However, the results obtained via regularization (by using the discrepancy principle in this case) are comparable to the optimal SL estimates. Of the four source configurations, the CC values for configurations B and D are almost identical for these two types of results. For configuration C, the CC of the regularization result is smaller than the optimal result by 1%. For configuration A, the CC of the regularization result is smaller than the optimal result by less than 3%; however, the absolute CC is 96% or above, suggesting the feasibility of estimating the SL through regularization.

Figure 6-4 depicts one typical example of the normalized surface potential map and the SL maps corresponding to source configuration C in Table 6-3. In this figure, (A) is the noise-contaminated surface potential map, (B) is the analytical Laplacian ECG map, (C) is the optimal spline Laplacian ECG map estimated by means of a priori information, (D) is
FIGURE 6-4. A typical example of the normalized potential ECG map (A) and the Laplacian ECG maps (B-E). See text for details. See the attached CD for color figure. (From He et al., IEEE-TBME, 2002 with permission) © IEEE
the spline Laplacian ECG map estimated by means of the discrepancy principle, and (E) is the Laplacian ECG map estimated by the 5-point SL estimator. Figure 6-4 indicates that, from the viewpoint of imaging and mapping, the regularized spline SL estimate is almost identical to the optimal spline SL estimate, and similar to the analytical SL result, for the case studied. This is consistent with the high CC values obtained (Table 6-3) between the analytical SL, the optimal spline SL, and the regularized spline SL estimates. Note also that the 5-point local SL is more sensitive to the measurement noise than the spline SL, especially at the border regions.
6.3.2.4 Simulation in a realistic geometry heart-torso model

The performance of the present 3D spline SL estimator was further examined using a realistic geometry heart-torso model, where cardiac electric activity was simulated by pacing one or two sites in the ventricles of the heart model (He et al., 2002) (Figure 6-5A). The potential ECG induced by ventricular pacing was simulated by means of the boundary element method (Aoki et al., 1987). GWN was added to the simulated surface potentials to simulate noise-contaminated potential ECG measurements. The Laplacian ECG was estimated from the noise-contaminated potential ECG using the 3D spline SL algorithm, and comparison was also made with the conventional 5-point local SL estimation. Figure 6-5B depicts the simulation results when simultaneously pacing two sites (site #1 at the free wall of the right ventricle, site #2 at the anterior ventricular wall) in the ventricular base, and the single-site pacing results corresponding to these two sites are shown in Figure 6-5C and Figure 6-5D, respectively. In these figures, (i) shows the activation sequence inside the ventricles induced by the pacing; (ii) shows the 5% GWN-contaminated body surface potential map over the anterior chest immediately following the pacing; (iii) and (iv) respectively show the estimated body surface Laplacian ECG maps over the anterior chest, using the 5-point local SL estimator and the one-parameter spline SL estimator. Note that in Figure 6-5B, the estimated Laplacian ECG maps provide multiple, more localized areas of activity overlying the two pacing sites, whereas the body surface potential map does not reveal the spatial details of this source multiplicity, owing to the smearing effect of the torso volume conductor. Note also that the spline SL estimate is less noisy and separates the two areas of activity more effectively than the 5-point local SL estimate.
In Figures 6-5C-D, both the potential ECG and Laplacian ECG maps corresponding to single site pacing reveal one pair of negative/positive activity, while the Laplacian ECG maps
FIGURE 6-5. Computer simulation of the spline Laplacian mapping in a realistic geometry heart-torso model. (A) Heart-torso model and the locations of the two pacing sites. (B) Dual-site pacing example. (C) Single-site (#1) pacing example. (D) Single-site (#2) pacing example. See text for details. See the attached CD for color figure. (From He et al., IEEE-TBME, 2002 with permission) © IEEE
provide much more localized spatial patterns. The two pairs of negative/positive activities revealed in Figure 6-5B correspond well to the activities observed in Figures 6-5C-D. Consistently, the 5-point local SL estimates are noisier than the spline SL estimates.
6.3.2.5 Spline Laplacian ECG mapping in humans

Applying the spline Laplacian algorithm we have developed, body surface Laplacian mapping has been explored in a group of healthy male subjects during ventricular and atrial depolarization. A ninety-five-channel body surface potential ECG was recorded simultaneously over the anterolateral chest of the subjects. The Laplacian ECG was estimated from
FIGURE 6-6. Body surface potential and Laplacian maps of a healthy human subject around the peak of the R-wave. The time instant is referenced to the onset of QRS and illustrated by a vertical line labeled in the Lead I ECG tracing (A). The BSPM map is shown in (B). The spline BSLM map is shown in (C), with spatial details denoted by the letters 'P' and 'N' followed by a number for positive and negative activities, respectively. The corresponding BSLM map estimated by the 5-point SL estimator is shown in (D). The physical units of the color bars in the BSPM and BSLM maps are mV and mV/cm², respectively. See the attached CD for color figure.
the recorded potentials during the QRS complex and the P-wave by means of the one-parameter spline SL estimator. For all subjects, more spatial details were observed in the SL ECG maps than in the potential ECG maps, with the spline SL more robust against noise than the 5-point SL (Li et al., 2003). Figure 6-6 shows one example of the SL ECG map over the anterolateral chest of a healthy male subject around the peak of the R-wave (Figure 6-6A). Figure 6-6B shows the potential map, which reveals a pair of positive and negative activities over the anterolateral chest. The corresponding spline BSLM map is shown in Figure 6-6C, illustrating a localized negative activity, N2, located over the central chest; a positive activity, P2, slightly shifted toward the left lateral chest with respect to the position of N2; another positive activity to the left of P2; and another negative activity, N3, appearing in the left-superior area. Figure 6-6D shows the SL ECG map estimated using the 5-point local SL estimator. Note that the local 5-point SL estimate (Figure 6-6D) shows more focused activities than the potential map (Figure 6-6B), but fails to reveal the spatial details illustrated in the spline SL map (Figure 6-6C). The negative and positive activities observed in the group of human subjects have been related to epicardial events (Li et al., 2003). Figure 6-7 shows an example of spline SL mapping in a healthy human subject during atrial depolarization (Lian et al., 2002b). Compared with the diffuse potential map (Figure 6-7B), the corresponding spline BSLM map (Figure 6-7C) clearly shows two major positive activities, P1 and P2, representing the local maxima on the right and left anterior chest,
FIGURE 6-7. Body surface potential and Laplacian maps during the mid P wave in a healthy human subject. (a) The potential P wave recorded from the left lower anterior chest, and the time instant for constructing the maps. (b) The instantaneous BSPM map shows a smooth pattern of potential distribution (color bar unit: μV). (c) The instantaneous BSLM map shows two major positive activities P1 and P2, associated with three negative activities N1, N2, and N3 (color bar unit: μV/cm²). See the attached CD for color figure.
Body Surface Laplacian Mapping of Bioelectric Sources
respectively. Correspondingly, three associated negative activities can also be observed, denoted N1, N2, and N3, representing the local minima on the right, middle, and left chest separated by P1 and P2, respectively. Data analysis and computer simulation studies suggest that the positivities P1 and P2 may correspond to the activation wavefronts in the right and left atria, respectively (Lian et al., 2002b). Compared to the smooth patterns of the BSPMs, more spatial details are revealed in the BSLM maps during ventricular and atrial activation, which could be correlated with the underlying multiple myocardial activation wavefronts.

6.3.3 SURFACE LAPLACIAN BASED EPICARDIAL INVERSE PROBLEM
The feasibility of solving the ECG inverse problem by means of the Laplacian ECG (He, 1994; Wu et al., 1995, 1998; He & Wu, 1997, 1999; Johnston, 1997), and a hybrid approach using both the potential ECG and the Laplacian ECG for the epicardial inverse problem (He & Wu, 1999; Throne & Olson, 2000), have also been explored (see Section 6.2.3). As an example, Figure 6-8 shows the simulation results for testing the feasibility of the Laplacian ECG based epicardial potential inverse solution in a realistically shaped heart-torso model. Current dipoles located inside the anterior myocardium, pointing from the endocardium to the epicardium, were used to simulate anterior sources. To simulate noise-contaminated experimental recordings, up to 20% GWN was added to the heart-model-generated body surface ECG signals before the epicardial potentials were reconstructed. Figure 6-8 plots the relative error (RE) and CC between the epicardial potentials calculated from two anterior dipoles using the boundary element method, and the epicardial potentials
FIGURE 6-8. RE and CC values between the forward epicardial potentials and the epicardial potentials reconstructed from the potential ECG and the Laplacian ECG in a realistically shaped heart-torso model with two dipoles located in the anterior ventricular wall. (From He & Wu, Crit Rev BME, 1999 with permission from Begell House)
B. He and J. Lian
reconstructed from the noise-contaminated potential ECG and surface Laplacian ECG over the whole torso. Note that the Laplacian ECG based epicardial inverse solutions always provide a smaller RE than the potential ECG based epicardial inverse solutions at the same noise level. Even at a noise level as large as 20% in the Laplacian ECG, the Laplacian ECG based epicardial inverse solutions still show performance comparable to that of the potential ECG based inverse solutions at a much lower noise level of about 3%. Because the noise level in the Laplacian ECG is higher than in the potential ECG, the real merit of the Laplacian ECG based inverse solution will depend on how accurately one can record or estimate the Laplacian ECG in an experimental setting. Data comparing the epicardial inverse solutions obtained from the body surface potentials with those obtained from body surface Laplacians estimated from the noise-contaminated potentials are currently lacking in the literature.
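The percentage noise levels quoted in these simulations can be reproduced by scaling Gaussian white noise against the forward-computed signals. The following is a minimal sketch, assuming the noise level is defined as the ratio of the noise norm to the signal norm (a common convention; the chapter does not spell out the exact definition used):

```python
import numpy as np

def add_gwn(signal, noise_level, rng=None):
    """Return `signal` plus Gaussian white noise whose norm equals
    `noise_level` times the norm of the signal (e.g. 0.20 for the
    20% noise level used in the epicardial simulations above)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(signal.shape)
    # scale noise so that ||noise|| / ||signal|| == noise_level
    noise *= noise_level * np.linalg.norm(signal) / np.linalg.norm(noise)
    return signal + noise
```

With this definition, the ratio of the added-noise norm to the signal norm equals the requested noise level exactly, which makes results across noise levels directly comparable.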
6.4 SURFACE LAPLACIAN IMAGING OF BRAIN ELECTRICAL ACTIVITY

6.4.1 HIGH-RESOLUTION LAPLACIAN EEG MAPPING

As a spatial enhancement method, the SL technique has been applied for many years to high-resolution EEG mapping. The SL has been considered an estimate of the local current density flowing perpendicular to the skull into the scalp, and thus has also been termed current source density or scalp current density (Perrin et al., 1987a,b; Nunez et al., 1994). In addition, the relationship between the SL and the cortical potentials has been investigated (Nunez et al., 1994; Srinivasan et al., 1996). Since Hjorth's early exploration of the scalp Laplacian EEG (Hjorth, 1975), a number of efforts have been made to develop reliable and easy-to-use SL techniques. Noteworthy are the development of the spherical spline SL (Perrin et al., 1987a,b), the ellipsoidal spline SL (Law et al., 1993), and the realistic geometry spline SL (Babiloni et al., 1996, 1998; He, 1999; Zhao & He, 2001; He et al., 2001).
6.4.2 PERFORMANCE EVALUATION OF THE SPLINE LAPLACIAN EEG

In this section, we present the performance evaluation of the realistic geometry spline Laplacian estimation algorithm in high-resolution EEG mapping (He et al., 2001). The evaluation was conducted by computer simulations using both a 3-concentric-sphere head model (Rush & Driscoll, 1969) and a realistic geometry head model. In addition, we examined the performance of the spline SL algorithm in high-resolution mapping of neural sources using experimental visual evoked potential (VEP) data. The realistic geometry spline SL estimation algorithm is detailed in Section 6.2.2. The linear inverse problem in equation (6-17) was solved by the truncated singular value decomposition (TSVD) (Shim & Cho, 1981), and the truncation parameter was determined by means of the discrepancy principle (Morozov, 1984).
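As a hedged illustration (a sketch, not the authors' implementation), a TSVD solver whose truncation index is chosen by the discrepancy principle, i.e. the smallest number of singular components for which the residual drops to the estimated noise norm, might look like:

```python
import numpy as np

def tsvd_discrepancy(A, b, noise_norm):
    """Truncated-SVD solution of A x = b.

    Singular components are added in order of decreasing singular
    value; the discrepancy principle stops at the first truncation
    index k for which the residual ||A x_k - b|| falls to the
    estimated noise norm.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ b                  # projections of b onto the left singular vectors
    x = np.zeros(A.shape[1])
    for k in range(len(s)):
        if s[k] == 0.0:
            break                     # remaining components carry no information
        x += (coeffs[k] / s[k]) * Vt[k]
        if np.linalg.norm(A @ x - b) <= noise_norm:
            break                     # discrepancy principle satisfied
    return x
```

With noiseless data and a well-conditioned matrix, the loop runs to full rank and returns the ordinary least-squares solution; with noisy data, a larger `noise_norm` truncates earlier and regularizes more strongly.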
6.4.2.1 Effects of noise

The analytical SL was used to evaluate the numerical accuracy and reliability of the spline SL estimation from the scalp potentials. The accuracy of the spline SL estimator was evaluated by the CC and RE between the estimated and analytical SL distributions.
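The two figures of merit used throughout this chapter are straightforward to compute. A sketch follows, assuming RE is the norm of the difference normalized by the norm of the reference distribution and CC is the Pearson correlation coefficient (standard definitions; the chapter does not write the formulas out explicitly):

```python
import numpy as np

def relative_error(estimated, analytical):
    """RE: normalized distance between the estimated and reference maps."""
    return np.linalg.norm(estimated - analytical) / np.linalg.norm(analytical)

def correlation_coefficient(estimated, analytical):
    """CC: Pearson correlation between the two maps, flattened to vectors."""
    return np.corrcoef(estimated.ravel(), analytical.ravel())[0, 1]
```

RE is sensitive to amplitude errors as well as shape errors, whereas CC measures pattern similarity regardless of scale, which is why the two are reported together.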
FIGURE 6-9. Effects of noise on the spline SL estimation in a 3-sphere inhomogeneous head model. (From He et al., Clin Neurophysiol, 2001 with permission)
Figure 6-9 shows an example of the simulation results for the effects of noise in the 3-concentric-sphere head model. A radial dipole (Rad) or a tangential dipole (Tag) located at an eccentricity of 0.6 was used to represent a well-localized area of brain electrical activity. The scalp potential and the scalp SL at 129 electrodes were calculated analytically (Perrin et al., 1987a). Noise of up to 25% was added to the analytical potentials to simulate noise-contaminated scalp potential measurements. For each noise level, ten trials of GWN were generated and the simulation was conducted. The RE and the CC between the estimated SL and the analytical SL were averaged over the ten trials and are displayed in Figure 6-9. The SL was estimated by minimizing the RE between the analytical SL and the SL estimate obtained by solving equation (6-17) for a regularization parameter. The parameter w in equation (6-10) was set to 0.16. Figure 6-9 indicates that the higher the noise level, the larger the RE (and the smaller the CC). The CC value was greater than 96% for the radial dipole, and greater than 91% for the tangential dipole, for noise levels up to 25%. Figure 6-9 suggests that the one-parameter realistic geometry SL estimator is robust against additive white noise in the scalp potential measurements.
6.4.2.2 Effects of number of recording electrodes

Table 6-4 shows an example of the simulation results with different numbers of surface electrodes in the 3-concentric-sphere head model. The RE and CC between the analytical SL and the estimated spline SL for different electrode numbers and different dipole configurations are shown in Table 6-4. One or multiple dipoles with unity strength were placed in the spherical conductor model (see the note under Table 6-4 for a detailed description of the dipole configurations). GWN of a specified noise level was added to the scalp potentials to simulate noise-contaminated EEG measurements. The parameter w in equation (6-10) was set to 0.16, 0.18, 0.20, and 0.24 for 129, 96, 64, and 32 electrodes, respectively. The SL was estimated by minimizing the RE between the analytical SL and the SL estimate. Table 6-4 clearly indicates the correlation between the goodness of the SL estimation and the number of surface electrodes. In general, the more electrodes used, the higher
TABLE 6-4. The RE and CC values between analytical and estimated SL corresponding to different electrode numbers and dipole configurations in the 3-sphere model

Electrode   Config. A     Config. B     Config. C     Config. D     Config. E
Number      RE    CC      RE    CC      RE    CC      RE    CC      RE    CC
129         0.25  0.97    0.30  0.95    0.37  0.93    0.28  0.97    0.20  0.98
96          0.27  0.96    0.31  0.95    0.40  0.92    0.33  0.94    0.22  0.98
64          0.29  0.96    0.40  0.92    0.46  0.89    0.36  0.93    0.28  0.96
32          0.32  0.95    0.51  0.86    0.51  0.86    0.64  0.77    0.42  0.91

Note: Configurations A: 1-RD at r = 0.5, with 15% GWN; B: 1-TD at r = 0.7, with 5% GWN; C: 1 dipole at (0.0, 0.1, 0.75) pointing in the +x direction and another dipole at (0.0, -0.1, 0.6) pointing in the +z direction, with 5% GWN; D: 2-RD at r = 0.7 and 1-TD at r = 0.6, each at an angle of π/6 with respect to the z-axis, with 5% GWN; E: 2-RD on the x-axis and 2-TD on the y-axis, all at r = 0.65 and each at an angle of π/6 with respect to the z-axis, with 10% GWN. RD: radial dipole. TD: tangential dipole.
CC (or the lower RE) of the SL estimation. This phenomenon is consistent with the fact that a minimum sampling density in the space domain is needed to restore the spatial frequency spectrum. As can be seen from Table 6-4, the CC of the SL estimation is 92% or higher in all cases in which 96 or more electrodes were used. The CC for Configuration C was about 89% when 64 electrodes were used and 86% when 32 electrodes were used. These relatively low CC values, as compared with the other configurations, may be explained by the large eccentricity of the dipoles in Configuration C: the closer a dipole is located to the scalp, the sharper the spatial distribution of the scalp Laplacian, and thus the higher the spatial sampling rate desired. This phenomenon is further observed when only 32 electrodes are used, where the CC values drop below 90% for Configurations B, C, and D. Of interest is the low CC value for Configuration D when 32 electrodes were used: the CC dropped from 93% with 64 electrodes to 77% with 32 electrodes. Table 6-4 suggests that a high-density array of 96 or more electrodes is desirable for scalp spline SL mapping.
6.4.2.3 Effects of regularization

Table 6-5 shows examples of the simulation results comparing the SL estimate obtained by means of a priori information of the analytical SL (denoted below as the optimal SL estimate) with the SL estimate obtained without using this a priori information. In this simulation, the discrepancy principle (Morozov, 1984) was used to determine the truncation parameter of the TSVD procedure. In Table 6-5, the upper rows show the RE and CC between the analytical SL and the optimal SL estimate; the lower rows show the RE and CC between the analytical SL and the SL estimated by means of the discrepancy principle. In this simulation, 129 electrodes were uniformly distributed over the upper hemisphere, the parameter w in equation (6-10) was set to 0.16, and GWN of varying noise level was added to the analytical scalp potentials to simulate noise-contaminated potential measurements.
TABLE 6-5. Comparison of the optimal estimated spline SL and the spline SL estimated by using the discrepancy principle in the 3-sphere model

Dipole           Optimal TSVD     Estimated TSVD
Configuration    RE      CC       RE      CC
A                0.36    0.95     0.38    0.94
B                0.34    0.94     0.40    0.93
C                0.20    0.98     0.25    0.97
D                0.29    0.98     0.44    0.91

Note: Configurations A: 2-RD at r = 0.70, each at an angle of π/8 with respect to the z-axis, with 5% GWN; B: 2-RD at r = 0.6 and 1-TD at r = 0.7, each at an angle of π/6 with respect to the z-axis, with 10% GWN; C: 2-RD and 2-TD, all at r = 0.65 and each at an angle of π/6 with respect to the z-axis, with 10% GWN; D: 4-RD at r = 0.75, each at an angle of π/7 with respect to the z-axis, with 5% GWN. RD: radial dipole. TD: tangential dipole.
FIGURE 6-10. Two examples of the normalized potential EEG and Laplacian EEG maps. See text for details. See the attached CD for color figure. (From He et al., Clin Neurophysiol, 2001 with permission)
Table 6-5 indicates that the regularization results are always worse than the optimal SL estimates (by definition), since the optimal SL estimates are obtained by minimizing the RE between the analytical and estimated SL. However, Table 6-5 shows that the results obtained via regularization (here, by using the discrepancy principle) are comparable to the optimal SL estimates. Of the four source configurations, the CC values for Configurations A, B, and C are almost identical between the two types of results. For Configuration D, the CC of the regularization estimate is smaller than that of the optimal SL estimate by about 6.5%; however, the absolute CC remains above 91%, suggesting the feasibility of estimating the SL through regularization. Figure 6-10 depicts two examples of the normalized potential and Laplacian EEG distributions corresponding to Configurations A and B in Table 6-5. For each source configuration, the first panel shows the noise-contaminated scalp potential map, the second panel shows the analytical spline Laplacian EEG map, the third panel shows the optimal
FIGURE 6-11. A realistic geometry head model built from one subject, with two simulated dipoles located within the brain. (From He & Lian, Crit Rev BME, 2002 with permission from Begell House)
estimated Laplacian EEG map obtained by means of the a priori information, and the last panel shows the Laplacian EEG map estimated via regularization using the discrepancy principle.
6.4.2.4 Simulation in a realistic geometry head model

The computer simulation was also conducted using a realistic geometry head model built from one healthy subject (male, 34 years old) who was later involved in the VEP experiment. CT images of the subject were obtained, and BEM models of the scalp, skull, and brain surfaces were constructed (Figure 6-11). Two artificial dipoles were used to simulate two simultaneously active brain electrical sources: one located in the right medial temporal lobe with orientation tangential to the cortical surface, and the other located in the right inferior frontal lobe with radial orientation. Figure 6-12 shows the result of spline Laplacian imaging of the simulated dipole sources in the realistic geometry head model. The scalp potentials generated by the two artificial dipoles were contaminated with 5% GWN to simulate measurement noise, and show a blurred dipolar pattern of distribution with frontal positivity and posterior negativity. The spline Laplacian EEG map, however, effectively reduces the blurring caused by the head volume conductor and clearly reveals two localized activities corresponding to the underlying dipole sources.
6.4.2.5 Surface Laplacian imaging of visual evoked potential activity

In addition to the simulations, human VEP experiments were carried out to examine the performance of the spline SL estimator. The same subject as above, who gave written informed consent, was studied in accordance with a protocol approved by the UIC IRB. Visual stimuli were generated by the STIM system (Neuro Scan Labs, VA). The 96-channel VEP signals, referenced to the right earlobe, were amplified with a gain of 500 and band-pass filtered from 1 Hz to 200 Hz by Synamps (Neuro Scan Labs, VA), and were acquired at a sampling rate of
FIGURE 6-12. Spline Laplacian mapping of the simulated dipole sources in a realistic geometry head model. (A) Scalp potential map. (B) Estimated spline Laplacian EEG map. See the attached CD for color figure. (From He & Lian, Crit Rev BME, 2002 with permission from Begell House)
1,000 Hz using SCAN 4.1 software (Neuro Scan Labs, VA). The electrode locations were measured using a Polhemus Fastrak (Polhemus Inc., Vermont). Full or half visual field pattern-reversal checkerboards (black and white) with a reversal interval of 0.5 sec served as visual stimuli, and 300 reversals were recorded to obtain averaged VEP signals. The display had a total viewing angle of 14.3° by 11.1°, and the check size was set to 175' by 135' expressed in arc minutes. The SL was estimated at the peak of the P100 component. Figure 6-13 shows the recorded scalp potential maps and the estimated spline Laplacian EEG maps at the P100 peak time point of the pattern-reversal VEP recorded from 96
FIGURE 6-13. Spline Laplacian mapping of VEP activity in a human subject. (A) Scalp potential map elicited by the full visual field stimuli. (B) Spline Laplacian EEG map in response to the full visual field stimuli. (C) Scalp potential map elicited by the left visual field stimuli. (D) Spline Laplacian EEG map in response to the left visual field stimuli. See the attached CD for color figure. (From He et al., Clin Neurophysiol, 2001 with permission)
electrodes over the scalp. As shown in Figure 6-13A, the scalp potential map elicited by the full visual field stimuli is characterized by strong but diffuse activity distributed symmetrically over the occipital area. The estimated spline Laplacian EEG map (Figure 6-13B), on the other hand, greatly improves the spatial resolution and clearly reveals two dipole-like sources located in the visual cortices of both hemispheres. Notably, the distribution of positivity and negativity on each part of the scalp suggests the orientation of the cortical current sources, pointing toward the contralateral hemisphere. As shown in Figure 6-13C, in response to the left visual field stimuli, a dominant positive potential component was elicited with a widespread distribution over the left scalp. However, the estimated spline Laplacian EEG map (Figure 6-13D) shows a dominant dipole-like current source located in the right visual cortex. Similarly, the positivity-negativity distribution on the right scalp in the spline Laplacian EEG map suggests the orientation of the cortical current sources, pointing toward the contralateral hemisphere.
6.4.3 SURFACE LAPLACIAN BASED CORTICAL IMAGING

Using the procedure detailed in Section 6.2.3, the use of the scalp SL to reconstruct the cortical potentials has also been explored (He, 1998b). Figure 6-14 shows
FIGURE 6-14. An example of cortical imaging from the scalp Laplacian EEG. (a) The "true" cortical potential distribution generated by four radial dipoles located at an eccentricity of 0.8 in the 3-sphere head model. (b)-(d) Cortical potential distributions reconstructed from the scalp Laplacian EEG with (b) 10%, (c) 30%, and (d) 50% Gaussian white noise, respectively. See the attached CD for color figure. (From He, IEEE-EMB, 1998 with permission) © IEEE
FIGURE 6-15. Cortical imaging of the SEP activity over a realistic geometry head model in a human subject, using the standard and the SL pre-filtered WMN estimates. See text for details. See the attached CD for color figure. (From Babiloni et al., MBEC, 2000 with permission)
an example of the simulation study based on the 3-concentric-sphere head model (Rush & Driscoll, 1969). Figure 6-14(a) shows the "true" cortical potential distribution generated by four radial dipoles. Figures 6-14(b)-(d) show the inverse cortical potential distributions estimated from the scalp SL with 10%, 30%, and 50% GWN added, respectively. Notice that the higher the noise level in the scalp SL, the higher the background noise level in the estimated cortical potentials. However, the reconstruction of the multiple extrema of the cortical potential distribution is quite robust, suggesting the feasibility and unique capability of cortical potential imaging from the scalp SL. In a separate study, by investigating the spatial filter characteristics of the source-Laplacian relationship, Bradshaw and Wikswo (2001) demonstrated a dramatic improvement in the SL-based inverse solution as compared with inverse reconstruction from the raw data. In another approach, SL pre-filtered EEG data were used as input to the weighted minimum norm (WMN) linear inverse estimate of the cortical current sources, in order to remove subcortically originated EEG potentials from the scalp potential distribution (Babiloni et al., 2000). As an example, Figure 6-15 shows the application of this technique to the cortical imaging of somatosensory evoked potentials (SEPs) over the realistic geometry head model of one subject. For all the SEP components examined (P20-N20, P22, N30-P30), the cortical imaging inverse solutions show enhanced spatial resolution compared with the scalp potential maps. Moreover, with respect to the WMN estimate, the SL pre-filtered WMN estimate presented enhanced spatial information content, in that the potential maxima over the cortical surface were sharper and more localized.
6.5 DISCUSSION

The SL, as demonstrated by many investigators, enjoys enhanced spatial resolution and sensitivity to regional bioelectrical activity located close to the surface recording electrodes, and has the unique advantage of reference independence. Conventionally, local SL operators have been used to estimate the Laplacian ECG or Laplacian EEG by approximating a planar surface at the recording electrode. More accurate estimation can be achieved by taking into account the realistic geometry of the body surface using a spline interpolation scheme. On the other hand, owing to the high-pass spatial filtering characteristics of the local SL operator, amplification of the noise associated with the potential measurements is unavoidable. Spatial low-pass filters, such as the Gaussian filter (Le et al., 1994) and the Wiener filter (He, 1998a), have been shown to be useful in improving the signal-to-noise ratio of the SL. The spline SL, on the other hand, has been shown to provide an intrinsic spatial low-pass filtering in addition to its spatial high-pass filtering characteristics (Nunez et al., 1994; Srinivasan et al., 1998). Estimation of the spline SL from potentials does not require information on the conductivity distribution inside the volume conductor. On the other hand, the spline SL estimation techniques do need a mathematical model describing the geometry of the surface over which the SL is to be estimated. The spline SL has been estimated over spherical, ellipsoidal, and realistic geometry surfaces (Perrin et al., 1987a,b, 1989; Law et al., 1993; Babiloni et al., 1996, 1998; He, 1999; Zhao & He, 2001; He et al., 2001, 2002). Furthermore, the recently developed 3D spline SL algorithm (see Section 6.2.2) eliminates the need to determine two spline parameters, and provides a rational determination of the spline parameter through a regularization process (He et al., 2001, 2002).
This new approach provides comparable computational accuracy and stability, while substantially reducing the computational burden of optimizing two independent regularization parameters, as required in the previously reported approaches. The performance of the present realistic geometry spline SL estimator has been evaluated through a series of computer simulations. Based on the one-sphere homogeneous volume conductor model, the simulation results demonstrate that the performance of the one-parameter spline SL algorithm is comparable with that of the traditional two-parameter spline SL algorithm (Tables 6-1 to 6-3), but with much greater computational efficiency. The simulation study also demonstrates that the spline SL estimators are more robust against additive noise in both potential and geometry measurements than the 5-point local SL estimator (Table 6-1), and consistent results are found for different numbers of recording electrodes (Table 6-2). An interesting finding is that the 5-point local SL performs well only for shallow sources and under low noise levels, while the spline SL performs well over a broader range of source depths and under varying noise levels (Table 6-1). This can be explained by the high-pass spatial filter property of the 5-point local SL estimator versus the band-pass spatial filter property of the spline SL estimator (Nunez et al., 1994; Srinivasan et al., 1998). Table 6-3 and Figure 6-4 further suggest that the SL can be estimated by using well-established regularization techniques, such as the discrepancy principle (Morozov, 1984), without a priori information of the "true" SL.
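For reference, the 5-point local estimator discussed above is a finite-difference approximation of the Laplacian on a locally planar patch. The following sketch assumes four equidistant neighbor electrodes at spacing d around the central electrode (sign conventions for the surface Laplacian vary between authors; Hjorth-style derivations use the negated form):

```python
import numpy as np

def five_point_laplacian(v_center, v_neighbors, d):
    """Finite-difference Laplacian estimate at the central electrode.

    v_center    : potential at the central electrode
    v_neighbors : potentials at four equidistant neighbors at spacing d
    Returns (sum of neighbors - 4 * center) / d**2; many authors take
    the negative of this quantity as the surface Laplacian.
    """
    v_neighbors = np.asarray(v_neighbors, dtype=float)
    return (v_neighbors.sum() - 4.0 * v_center) / d**2
```

For the quadratic potential V = x² + y² (true Laplacian 4), the estimate at the origin with d = 1 is exactly 4, which illustrates why the operator responds strongly to sharply curved (high-spatial-frequency) potential distributions and hence amplifies measurement noise.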
In addition, based on the 3-sphere inhomogeneous volume conductor model, consistent evidence shows that the spline SL estimation algorithm is robust against noise in the potential measurements (Figure 6-9), provides consistent performance for different numbers of recording electrodes (Table 6-4), and can be estimated by means of the discrepancy principle (Table 6-5).
The application of the 3D spline SL estimator to realistic geometry volume conductor models further suggests the potential of realistic geometry spline Laplacian ECG/EEG estimation. Figure 6-5 indicates that the Laplacian ECG maps provide a much more localized projection onto the body surface in the areas directly overlying the heart, and are especially useful in identifying and characterizing source multiplicity as compared with the potential ECG maps. Similarly, Figure 6-12 indicates that the Laplacian EEG map can effectively reduce the smoothing effect of the head volume conductor and clearly localize the underlying brain electrical sources. The human VEP experiments further suggest the usefulness of Laplacian EEG mapping. The full visual field stimuli elicited a potential distribution symmetrical about the midline of the occipital scalp; the estimated Laplacian EEG map showed much more localized dipolar current sources in both visual cortices, with dipolar orientations pointing toward the respective opposite hemispheres. It is widely accepted that half visual field stimuli activate the visual cortex in the contralateral hemisphere of the brain. Paradoxically, however, the left visual field stimuli elicited a stronger positive potential distribution over the ipsilateral side of the scalp, which might be misinterpreted as left visual cortex activation. By using the 3D spline SL method, the estimated Laplacian EEG map clearly indicated that the right visual cortex was activated, and these results are consistent with previous reports (Barrett et al., 1976; Blumhardt et al., 1977; Towle et al., 1995). In summary, the SL, as a spatial enhancement method, can enhance the high-frequency spatial components of the surface EEG and ECG. The 3D spline SL algorithm takes into consideration the realistic geometry of the body surface, and is applicable to both brain and heart electrical source imaging.
Only one spline parameter needs to be determined, through a regularization procedure, in this spline SL algorithm, thus enabling easy implementation of the spline SL on an arbitrarily shaped surface of a volume conductor. Both computer simulations and preliminary human experiments have demonstrated the excellent performance of the 3D spline SL in high-resolution ECG and EEG mapping, suggesting that it may become an alternative for noninvasive mapping of heart and brain electrical activity.
ACKNOWLEDGEMENT

The authors would like to thank their colleagues Dr. G. Li and Dr. D. Wu for useful discussions. This work was supported in part by a grant from the American Heart Association (#0140132N), NSF CAREER Award BES-9875344, and NSF BES-0201939.
REFERENCES

Aoki, M., Okamoto, Y., Musha, T., and Harumi, K.: Three-dimensional simulation of the ventricular depolarization and repolarization processes and body surface potentials: normal heart and bundle branch block. IEEE Trans. Biomed. Eng., 34: 454-462, 1987.

Babiloni, F., Babiloni, C., Carducci, F., Fattorini, L., Onorati, P., and Urbano, A.: Spline Laplacian estimate of EEG potentials over a realistic magnetic resonance-constructed scalp surface model. Electroenceph. Clin. Neurophysiol., 98: 363-373, 1996.

Babiloni, F., Carducci, F., Babiloni, C., and Urbano, A.: Improved realistic Laplacian estimate of highly-sampled EEG potentials by regularization techniques. Electroenceph. Clin. Neurophysiol., 106: 336-343, 1998.

Babiloni, F., Babiloni, C., Locche, L., Cincotti, F., Rossini, P.M., and Carducci, F.: High-resolution electroencephalogram: source estimates of Laplacian-transformed somatosensory-evoked potentials using a realistic
subject head model constructed from magnetic resonance images. Med. Biol. Eng. Comput., 38: 512-519, 2000.

Barr, R.C., Ramsey, M. III, and Spach, M.S.: Relating epicardial to body surface potential distributions by means of transfer coefficients based on geometry measurements. IEEE Trans. Biomed. Eng., 24: 1-11, 1977.

Barrett, G., Blumhardt, L., Halliday, A.M., Halliday, E., and Kriss, A.: A paradox in the lateralisation of the visual evoked response. Nature, 261: 253-255, 1976.

Besio, W.G., Lu, C.C., and Tarjan, P.P.: A feasibility study for body surface cardiac propagation maps of humans from Laplacian moments of activation. Electromagnetics, 21: 621-632, 2001.

Blumhardt, L.D., Barrett, G., and Halliday, A.M.: The asymmetrical visual evoked potential to pattern reversal in one half field and its significance for the analysis of visual field defects. British J. Ophthalmology, 61: 454-461, 1977.

Bradshaw, L.A., and Wikswo, J.P. Jr.: Spatial filter approach for evaluation of the surface Laplacian of the electroencephalogram and magnetoencephalogram. Ann. Biomed. Eng., 29: 202-213, 2001.

Courant, R. and Hilbert, D.: Methods of Mathematical Physics. New York: Interscience, 1966.

Duchon, J.: Interpolation des fonctions de deux variables suivant le principe de la flexion des plaques minces. R.A.I.R.O. Anal. Num., 10: 5-12, 1976.

Fattorusso, V., Thaon, M., and Tilmant, J.: Contribution à l'étude de l'électrocardiogramme précordial. Acta Cardiologica, 4: 464-487, 1949.

Harder, R. and Desmarais, R.: Interpolation using surface splines. J. Aircraft, 9: 189-191, 1972.

He, B. and Cohen, R.J.: Body surface Laplacian ECG mapping. IEEE Trans. Biomed. Eng., 39: 1179-1191, 1992a.

He, B. and Cohen, R.J.: Body surface Laplacian mapping of cardiac electrical activity. Am. J. Cardiol., 70: 1617-1620, 1992b.

He, B., Kirby, D., Mullen, T., and Cohen, R.J.: Body surface Laplacian mapping of cardiac excitation in intact pigs. Pacing Clin. Electrophysiol., 16: 1017-1026, 1993.
He, B.: On the Laplacian inverse electrocardiography. Proc. Ann. Int. Conf. IEEE Eng. Med. Biol. Soc., 145-146, 1994.

He, B., Chernyak, Y., and Cohen, R.J.: An equivalent body surface charge model representing three-dimensional bioelectrical activity. IEEE Trans. Biomed. Eng., 42: 637-646, 1995.

He, B. and Cohen, R.J.: Body surface Laplacian ECG mapping - a review. Crit. Rev. Biomed. Eng., 23: 475-510, 1995.

He, B.: Principles and applications of the Laplacian electrocardiogram. IEEE Eng. Med. Biol. Mag., 16: 133-138, 1997.

He, B. and Wu, D.: A bioelectric inverse imaging technique based on surface Laplacians. IEEE Trans. Biomed. Eng., 44: 529-538, 1997.

He, B., Yu, X., Wu, D., and Mehdi, N.: Body surface Laplacian mapping of bioelectrical activity. Methods Inf. Med., 36: 326-328, 1997.

He, B.: Theory and applications of body-surface Laplacian ECG mapping. IEEE Eng. Med. Biol. Mag., 17: 102-109, 1998a.

He, B.: High resolution source imaging of brain electrical activity. IEEE Eng. Med. Biol. Mag., 17: 123-129, 1998b.

He, B.: Brain electrical source imaging: scalp Laplacian mapping and cortical imaging. Crit. Rev. Biomed. Eng., 27: 149-188, 1999.

He, B. and Wu, D.: Laplacian electrocardiography. Crit. Rev. Biomed. Eng., 27: 285-338, 1999.

He, B., Lian, J., and Li, G.: High-resolution EEG: a new realistic geometry spline Laplacian estimation technique. Clin. Neurophysiol., 112: 845-852, 2001.

He, B., Li, G., and Lian, J.: A spline Laplacian ECG estimator in a realistic geometry volume conductor. IEEE Trans. Biomed. Eng., 49: 110-117, 2002.

He, B. and Lian, J.: Spatio-temporal functional neuroimaging of brain electric activity. Crit. Rev. Biomed. Eng., 30: 283-306, 2002.

Hjorth, B.: An on-line transformation of EEG scalp potentials into orthogonal source derivations. Electroenceph. Clin. Neurophysiol., 39: 526-530, 1975.

Johnston, P.R.: The Laplacian inverse problem of electrocardiography: an eccentric spheres study. IEEE Trans. Biomed. Eng., 44: 539-548, 1997.
Body Surface LaplacianMappingof BioelectricSources
211
Law, S.K., Nunez, P.L., and Wijesinghe, R.S.: High-resolution EEG using spline generated surface on spherical and ellipsoidal surfaces. IEEE Trans. Biomed. Eng., 40: 145-153, 1993. Le, J., Menon, V, and Gevins, A.: Local estimate of surface Laplacian derivation on a realistically shaped scalp surface and its performance on noisy data. Electroenceph. Clin. Neurophysiol., 92: 433-441, 1994. Lian, 1., Srinivasan, S., Tsai, H., and He, B.: Comments on "Is accurate recording of the ECG surface Laplacian feasible?" IEEE Trans. Biomed. Eng., 48: 610-613, 2001. Lian, J., Srinivasan, S., Tsai, H., Wu, D., and He, B.: On the estimation of noise level and signal to noise ratio of Laplacian ECG during ventricular depolarization and repolarization. Pacing Clin. Electrophysiol., 25(10): 1474-1487,2002. Lian, J., Li, G., Cheng, J., Avitall, B., and He, B.: Body surface Laplacian mapping of atrial depolarization in healthy human subjects. Med. Bio!. Eng. Comput., 40(6): 650-659, 2002b. Li, G., Lian, J., He, B.: On the Spatial Resolution of Body Surface Potential and Laplacian Pace Mapping. Pacing and Clinical Electrophysiology, 25: 420-429, 2002. Li, G., Lian, J., Salla, P., Cheng, J., Ramachandra, 1., Shah, P., Avitall, B., and He, B.: Body surface Laplacian electrogram of ventricular depolarization in normal human subjects. J. Cardiovasc. Electrophysiol., 140): 16-27,2003. Mirvis, D.M., Keller, EW., Ideker, RE., Cox, r.w, Zettergren, D.G., and Dowdie, RJ.: Values and limitations of surface isopotential mapping techniques in the detection and localization of multiple discrete epicardial events. 1. Electrocardiol., 10: 347-358,1977. Morozov, VA.: Methods for solving incorrectly posed problems. Berlin: Springer-Verlag, 1984. Nunez, PL.: Electric field ofthe brain. London: Oxford University Press, 1981. Nunez P.L.: Neocortical dynamics and human EEG rhythms. New York: Oxford University Press, 1995. 
Nunez, P.L., Silibertein, RB., Cdush, PJ., Wijesinghe, R.S., Westdrop, A.E, and Srinivasan, R.: A theoretical and experimental study of high resolution EEG based on surface Laplacian and cortical imaging. Electroenceph. Clin. Neurophysiol., 90: 40-57,1994. Oostendorp, T.E and van Oosterom, A.: The surface Laplacian of the potential: theory and application. IEEE Trans. Biomed. Eng., 43: 394-403, 1996. Perrin, E, Bertrand, 0., and Pernier, J.: Scalp current density mapping: value and estimation from potential data. IEEE Trans. Biomed. Eng., 34: 283-288, 1987a. Perrin, E, Pernier, J., Bertrand, 0., Giard, M.H., and Echallier, J.E: Mapping of scalp potentials by surface spline interpolation. Electroenceph. Clin. Neurophysiol., 66: 75-81, 1987b. Perrin, E, Pernier, J., Bertrand, 0., and Echallier, J.E: Spherical splines for scalp potential and current density mapping. Electroenceph. Clin. Neurophysiol., 72: 184-187, 1989. Rudy, Y and Plonsey, R.: A comparison of volume conductor and source geometry effects on body surface and epicardial potentials. eire. Res., 46: 283-291,1980. Rush, S. and Driscoll, D.A.: EEG electrode sensitivity-an application of reciprocity. IEEE Trans. Biomed. Eng., 16: 15-22, 1969. Shim, YS. and Cho, Z.H.: SVD pseudoinversion image reconstruction. IEEE Trans. Acoust. Speech. Processing, 29: 904-909,1981. Spach, M.S., Barr, R.C., Lanning, C.E, and Tucek, P.c.: Origin of body surface QRS and T-wave potentials from epicardial potential distributions in the intact chimpanzee. Circulation, 55: 268-278, 1977. Srinivasan, R., Nunez, P.L., Tucker, D.M., Silberstein, RB., Cadusch, PJ.: Spatial sampling and filtering of EEG with spline Laplacian to estimate cortical potentia!. Brain Topography, 8(4): 355-366, 1996. Srinivasan, R, Nunez, P.L. and Silberstein, R.B.: Spatial filtering and neocortical dynamics: estimates of EEG coherence. IEEE Trans. Biomed. Eng., 45: 814-826,1998. Throne, R.D. 
and Olson, L.G.: Fusion of body surface potential and body surface Laplacian signals for electrocardiographic imaging. IEEE Trans. Biomed. Eng., 47: 452-462, 2000. Towle, VL., Cakmur, R., Cao, Y, Brigell, M., and Parmeggiani, L. Locating VEP equivalent dipoles in magnetic resonance images. Int. J. Neurosci., 80: 105-116, 1995a. Tsai, H., Ceccoli, H., Avitall, B., and He, B.: Body surface Laplacian mapping of anterior myocardial infarction in man. Electromagnetics, 21: 607-619, 2001. Umetani, K., Okamoto, Y, Mashima, S., Ono, K., Hosaka, H., and He, B.: Body Surface Laplacian mapping in patients with left or right ventricular bundle branch block. Pacing Clin. Electrophysiol., 21: 2043-2054, 1998.
212
B. He and J. Lian
Wang, K. and Begleiter,H.: Local polynomialestimate of surface Laplacian. Brain Topogr., 12: 19-29, 1999. Wei,D., Harasawa,E., and He, B.: Simulatedbody surfacepotentialand Laplacianmaps during the left ventricular breakthrough. Proc. ofthe Ann. Int. Can! OfIEEE Eng. In Med. & Biol. Soc., 223-224, 1995. Wei, D., Mashima, S.: Predictionof accessory pathway locations in Wolff-Parkinson-White syndrome with body surface potential Laplacian maps: A simulation study. Jpn Heart J. 40: 451-459,1999. Wei, D.: Laplacian electrocardiogramssimulated using realisticallyshaped heart-torso model during normal and abnormal ventriculardepolarization. Electromagnetics, 21: 593-605, 2001. Wu, D., Saul, J.P., and He, B.: Epicardial Inverse Solutions from Body Surface Laplacian Maps: A Model Study. Proc. ofthe Ann. Int. Can! Of IEEE Eng. In Med. & Biol. Soc., Montreal, 1995. Wu, D., Wang,Y.,and He, B.: Reconstructionof epicardial potentialsfrom body surface Laplacianmaps by using a realistically shaped heart-torso model. Proc. Ann. Int. Can! IEEE Eng. Med. Biol. Soc., 83-85, 1998. Wu,D., Tsai, H.C., and He, B.: On the estimationof the Laplacianelectrocardiogramduring ventricularactivation. Ann. Biomed. Eng., 27: 731-745, 1999. Zhao, F. and He, B.: A new algorithm for estimating scalp Laplacian EEG and its application to visual-evoked potentials. Electromagnetics, 21: 633-640, 2001.
7

NEUROMAGNETIC SOURCE RECONSTRUCTION AND INVERSE MODELING

Kensuke Sekihara¹ and Srikantan S. Nagarajan²

¹Department of Electronic Systems and Engineering, Tokyo Metropolitan Institute of Technology, Asahigaoka 6-6, Hino, Tokyo 191-0065, Japan
²Department of Radiology, University of California, San Francisco, 513 Parnassus Avenue, S362, San Francisco, CA 94143, USA
7.1 INTRODUCTION

The human brain has approximately 10^10 neurons in its cerebral cortex. Their electrophysiological activity generates weak but measurable magnetic fields outside the scalp. Magnetoencephalography (MEG) is a method which measures these neuromagnetic fields to obtain information about these neural activities (Hamalainen et al., 1993; Roberts et al., 1998; Lewine et al., 1995). Among the various kinds of functional neuroimaging methods, such a neuro-electromagnetic approach has a major advantage in that it can provide fine time resolution of millisecond order. Therefore, the goal of neuromagnetic imaging is to visualize neural activities with such fine time resolution and to provide functional information about brain dynamics. To attain this goal, one technical hurdle must be overcome. That is, an efficient method to reconstruct the spatio-temporal neural activities from neuromagnetic measurements needs to be developed. Toward this goal, a number of algorithms for reconstructing spatio-temporal source activities have been investigated (Baillet et al., 2001). This chapter deals with this neuromagnetic reconstruction problem. However, we do not provide a general review of various algorithms for this reconstruction problem. Instead, we describe a particular class of source reconstruction techniques referred to as the spatial filter, which allows the spatio-temporal reconstruction of neural activities without assuming any kind of source model. Furthermore, among the spatial filter techniques, we focus on adaptive spatial filter techniques. These techniques were originally developed in the fields of array signal processing, including radar, sonar, and seismic exploration, and have been
Corresponding author: Kensuke Sekihara, Ph.D., Tokyo Metropolitan Institute of Technology, Asahigaoka 6-6, Hino, Tokyo 191-0065, Japan. Tel: 81-42-585-8642, Fax: 81-42-585-8642, E-mail: ksekiha@cc.tmit.ac.jp
widely used in such fields (van Veen et al., 1988). Nonetheless, the adaptive spatial filter techniques are relatively less acknowledged in the MEG/EEG community. In this chapter, we also formulate reconstruction techniques based on linear least-squares methods (Hamalainen and Ilmoniemi, 1984) as the non-adaptive spatial filter. This formulation enables us to compare the least-squares-based techniques with the adaptive spatial filter techniques on a common, unified basis. Actually, we compare these two types of techniques by using the same figure of merit called the resolution kernel, and show that the adaptive techniques can provide much higher spatial resolution than the least-squares-based methods. The organization of this chapter is as follows: Following a brief review of the neuromagnetometer hardware in Section 7.2, we describe the forward modeling and some basic properties of MEG signals in Section 7.3. Section 7.4 presents the formulation of the linear least-squares-based methods as the non-adaptive spatial filter. Section 7.5 describes the adaptive spatial filter techniques. In Section 7.6, we present a quantitative comparison between the adaptive and non-adaptive methods using the resolution kernel criterion. Section 7.7 presents a series of numerical experiments on the adaptive spatial filter performance. In Section 7.8, we demonstrate the effectiveness of the adaptive spatial filter techniques by applying them to two sets of MEG data.
7.2 BRIEF SUMMARY OF NEUROMAGNETOMETER HARDWARE

It is generally believed that the neuromagnetic field is generated by the post-synaptic ionic current in the pyramidal cells of cortical layer III. Here, neuronal cells are organized into a so-called columnar structure, and the synchronous activity of these cells results in superimposed magnetic fields strong enough to be measured outside the human head. The average intensity of this neuromagnetic field, however, is around a few hundred femtotesla (fT, where one fT is 1 × 10⁻¹⁵ T). To measure such an extremely weak magnetic field, a neuromagnetometer uses a special device called a superconducting quantum interference device (SQUID) (Clarke, 1994; Drung et al., 1991). This device is so sensitive that it can in principle measure a single quantum of magnetic flux. When measuring such a weak neuromagnetic field, a major problem arises from the background environmental magnetic noise. Background magnetic noise is generated by electronic appliances such as computers, power lines, cars, and elevators. Such noise is common at a site where the neuromagnetometer is installed, and its average intensity is usually five to six orders of magnitude greater than the neuromagnetic field. One obvious way to reduce it is to use a magnetically-shielded room. However, a typical medium-quality shielded room, most commonly used in MEG measurements, can reduce the background noise by up to only three orders of magnitude. To further reduce the background noise, neuromagnetometers are usually equipped with a special type of detector configuration called a gradiometer (Hamalainen et al., 1993; Lewine et al., 1995). The first-order gradiometer consists of two coils of exactly the same area; they are connected in series, but wound in opposite directions. Therefore, the gradiometer cancels the electric current induced by the background noise fields because the
sources of such noise fields are generally far from the gradiometer and induce nearly the same amount of electric current in both coils. The gradiometer can achieve two to three orders of magnitude reduction in the background noise, and can remove the influence of the residual noise field within the magnetically-shielded room. The reduction performance, however, depends on the manufacturing precision of the two coils. Aside from the gradiometer, several other methods of removing the external noise have been investigated (Vrba and Robinson, 2001; Adachi et al., 2001). Many of these methods use extra sensors that measure only the noise fields. (Such sensors are usually located apart from the sensor array which measures the neuromagnetic fields.) Quasi-real-time electronics then perform the on-line subtraction between the outputs from the extra sensors and from the regular sensors. A method that does not require additional sensor channels has also been developed. This method applies a technique called signal-space projection, and can be implemented completely as a post-processing procedure (Parkkonen et al., 1999). These external noise cancellation methods make it possible to use a magnetometer as a sensor coil instead of the gradiometer. They also permit the measurement of neuromagnetic fields outside a magnetically-shielded room. The most remarkable advance in neuromagnetometer hardware over the last ten years has been the rapid increase in the number of sensors. Since neuromagnetometers with a 37-channel sensor array became commercially available in the late 80s (Lewine et al., 1995), the number of sensors in commercially available neuromagnetometers has constantly increased. The latest neuromagnetometers are equipped with 200-300 sensor channels, with whole-head coverage of the sensor array.
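The common-mode rejection of the first-order gradiometer can be sketched with a toy calculation. This is a rough illustration, not a model from the text: it assumes a simple 1/r² field falloff from a small current element, a hypothetical 5-cm baseline, and arbitrary example distances of 5 cm for the neural source and 5 m for the noise source.

```python
def field(r):
    # |B| of a small current element falls off roughly as 1/r^2 (Biot-Savart)
    return 1.0 / r ** 2

D = 0.05  # assumed first-order gradiometer baseline: 5 cm

def magnetometer(r):
    # single-coil output
    return field(r)

def gradiometer(r):
    # two identical coils in series, wound in opposite directions
    return field(r) - field(r + D)

r_brain, r_noise = 0.05, 5.0  # nearby neural source vs distant noise source (m)

signal_retention = gradiometer(r_brain) / magnetometer(r_brain)
noise_retention = gradiometer(r_noise) / magnetometer(r_noise)
print(signal_retention, noise_retention)
```

The nearby source keeps most of its signal (about 75% here), while the distant noise source is attenuated far more strongly (to about 2%); moving the noise source farther away, or using higher-order gradiometers, pushes the suppression toward the two-to-three orders of magnitude quoted above.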
7.3 FORWARD MODELING

7.3.1 DEFINITIONS

Let us define the magnetic field measured by the mth detector coil at time t as b_m(t), and a column vector b(t) = [b_1(t), b_2(t), ..., b_M(t)]^T as a set of measured data, where M is the total number of detector coils and the superscript T indicates the matrix transpose. A spatial location is represented by a three-dimensional vector r: r = (x, y, z). The source-current density at r and time t is defined as a three-dimensional column vector s(r, t). The magnitude of the source current is denoted as s(r, t) (= |s(r, t)|), and the orientation of the source is defined as a three-dimensional column vector η(r, t) = s(r, t)/s(r, t) = [η_x(r, t), η_y(r, t), η_z(r, t)]^T, whose ξ component (where ξ equals x, y, or z in this chapter) is equal to the cosine of the angle between the direction of the source moment and the ξ direction. Let us define l_m^ξ(r) as the mth sensor output induced by the unit-magnitude source located at r and directed in the ξ direction. The column vector l^ξ(r) is defined as l^ξ(r) = [l_1^ξ(r), l_2^ξ(r), ..., l_M^ξ(r)]^T. Then, we define a matrix which represents the sensitivity of the whole sensor array at r as L(r) = [l^x(r), l^y(r), l^z(r)]. The mth row of L(r), l_m(r) = [l_m^x(r), l_m^y(r), l_m^z(r)], represents the sensitivity at r of the mth sensor. Then, using the superposition law, the relationship between b(t) and s(r, t) is expressed as

    b(t) = ∫ L(r) s(r, t) dr + n(t).    (7.1)
Here, n(t) is the noise vector at t. The sensor sensitivity pattern, represented by the matrix L(r), is customarily called the sensor lead field (Hamalainen et al., 1993; Sarvas, 1987), and this matrix is called the lead-field matrix. We define, for later use, the lead-field vector in the source-moment direction as l(r), which is obtained by using l(r) = L(r)η(r).
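In practice the integral in Eq. (7.1) is discretized over a grid of source points, so the measurement at one time instant becomes a matrix-vector product. A minimal sketch follows; the random lead-field entries, the grid size, and the moment values are placeholder assumptions (a real L(r) comes from the forward model of Section 7.3.2):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 100  # number of sensors and of source-grid points (assumed sizes)

# Placeholder lead field: one M x 3 block per grid point, stacked to M x 3N.
L = rng.standard_normal((M, 3 * N))

# Source distribution: one active grid point (index 42) with moment (10, 0, 5).
s = np.zeros(3 * N)
s[3 * 42: 3 * 42 + 3] = [10.0, 0.0, 5.0]

noise = 0.1 * rng.standard_normal(M)  # sensor noise n(t) at one time instant
b = L @ s + noise                     # discrete counterpart of Eq. (7.1)
print(b.shape)
```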
7.3.2 ESTIMATION OF THE SENSOR LEAD FIELD

The problem of source reconstruction is the problem of obtaining the best estimate of s(r, t) from the array measurement b(t). It is thus apparent that, to solve this reconstruction problem, we need to have a reasonable estimate of the lead-field matrix. In this subsection, we describe how we can obtain the lead field using the spherically symmetric homogeneous conductor model, which is most commonly used in estimating the MEG sensor lead field. Also, we briefly mention a realistically-shaped volume-conductor model, which can generally provide more accurate lead field estimates, particularly for non-superficial brain regions. Estimation of the lead field is called the forward problem, because this is equivalent to estimating the magnetic field from a point source located at a known location; this problem stands in contrast to the inverse problem, in which the source configuration is estimated from a known magnetic field distribution. Let us define the electric potential as V, the magnetic field as B, and the electric current density as j. The source current s(r) defined in Section 7.3.1 is called the primary current (alternatively called the impressed current), which is directly generated from the neural activities. There is another type of electric current called the return current or volume current. It results from the electric field in the conducting medium, and it is not directly caused by the neural activities. Defining the conductivity as p, the return current is expressed as −p∇V, where −∇V is equal to the electric field. Thus, the total electric current j is expressed as

    j(r) = s(r) − p∇V.    (7.2)
The relationship between the total current j and the resultant magnetic field B is given by the Biot-Savart law,

    B(r) = (μ₀/4π) ∫ j(r') × (r − r') / |r − r'|³ dr',    (7.3)

where μ₀ is the magnetic permeability of free space. In order to derive the analytical expression for the relationship between the primary source current and the magnetic field, we first consider the case in which the whole space is filled with a conductor with constant conductivity p. In this case, it is easy to show that the following relationship holds:

    B₀(r) = (μ₀/4π) ∫ s(r') × (r − r') / |r − r'|³ dr'.    (7.4)
Here the magnetic field is denoted as B₀ for later convenience. Note that Eq. (7.4) is similar to Eq. (7.3). The only difference is that the total current density j(r) is replaced by the primary current density s(r).
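Equation (7.4) is easy to evaluate for a single point dipole, where the integral collapses to one cross product. The following sketch uses SI units; the dipole moment and the positions are arbitrary example values:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (T*m/A)

def b_infinite(q, r0, r):
    """Eq. (7.4) for a point dipole q at r0: B0(r) = (mu0/4pi) q x (r-r0)/|r-r0|^3."""
    d = r - r0
    return (MU0 / (4 * np.pi)) * np.cross(q, d) / np.linalg.norm(d) ** 3

q = np.array([10e-9, 0.0, 0.0])   # 10 nA*m dipole along x (example value)
r0 = np.array([0.0, 0.0, 0.07])   # source 7 cm above the origin
r = np.array([0.0, 0.0, 0.12])    # field point 5 cm above the source

B0 = b_infinite(q, r0, r)
print(B0)  # ~[0, -4e-13, 0] T
```

The resulting field magnitude is a few hundred femtotesla, which is the order of magnitude quoted in Section 7.2 for measured neuromagnetic fields.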
We then proceed to deriving a formula for the magnetic field outside a spherically-symmetric homogeneous conductor, B. To do so, we make use of the fact that the radial component of the magnetic field B is equal to the radial component of B₀ in Eq. (7.4) (Cuffin and Cohen, 1977; Sarvas, 1987), i.e.,

    B · i_r = B₀ · i_r,    (7.5)

where i_r is the unit vector in the radial direction defined as i_r = r/|r|. (Note that we set the coordinate origin at the sphere origin.) The relationship ∇ × B = 0 holds outside the volume conductor, because there is no electric current. Thus, B can be expressed in terms of the magnetic scalar potential U(r),

    B = −μ₀ ∇U(r).    (7.6)

This potential function is derived from

    U(r) = −(1/μ₀) ∫₀^∞ B(r + τ i_r) · i_r dτ = −(1/μ₀) ∫₀^∞ B₀(r + τ i_r) · i_r dτ,    (7.7)
where we use the relationship in Eq. (7.5). By substituting Eq. (7.4) into Eq. (7.7), we finally obtain

    U(r) = −(1/4π) ∫ (s(r') × r' · r) / A dr',    (7.8)

where

    A = |r − r'| (|r − r'| |r| + |r|² − r' · r).
The formula for B is then obtained by substituting Eq. (7.8) into Eq. (7.6), i.e.,

    B(r) = (μ₀/4π) ∫ (1/A²) [A s(r') × r' − (s(r') × r' · r) ∇A] dr',    (7.9)

where

    ∇A = [ |r − r'|²/|r| + (r − r')·r/|r − r'| + 2|r − r'| + 2|r| ] r − [ |r − r'| + 2|r| + (r − r')·r/|r − r'| ] r'.
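For a single current dipole q at r₀, the integrals in Eqs. (7.8) and (7.9) collapse and the field can be coded directly. A sketch in SI units with arbitrary example geometry; it also checks the well-known property of the spherical model that a radially oriented dipole produces no external field:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def sarvas(q, r0, r):
    """Eqs. (7.8)-(7.9) specialized to a point dipole q at r0 inside a
    spherically symmetric conductor centered at the origin (r outside)."""
    a_vec = r - r0
    a, R = np.linalg.norm(a_vec), np.linalg.norm(r)
    A = a * (R * a + R ** 2 - r0 @ r)
    gradA = (a ** 2 / R + a_vec @ r / a + 2 * a + 2 * R) * r \
          - (a + 2 * R + a_vec @ r / a) * r0
    qxr0 = np.cross(q, r0)
    return MU0 / (4 * np.pi * A ** 2) * (A * qxr0 - (qxr0 @ r) * gradA)

r0 = np.array([0.0, 0.0, 0.07])   # dipole 7 cm from the sphere center
r = np.array([0.0, 0.0, 0.12])    # sensor 12 cm from the center
B_tan = sarvas(np.array([10e-9, 0.0, 0.0]), r0, r)  # tangential dipole
B_rad = sarvas(np.array([0.0, 0.0, 10e-9]), r0, r)  # radial dipole

print(np.linalg.norm(B_tan), np.linalg.norm(B_rad))
```

The radial dipole gives q × r₀ = 0 and hence no external field at all, while the tangential dipole of the same strength produces a field on the order of 10⁻¹³ T.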
To obtain the component of the lead-field matrix, l_m^ξ(r₀), we first calculate B(r_m) (where r_m is the mth sensor location) by using Eq. (7.9) with s(r') = i_ξ δ(r' − r₀), where i_ξ is the unit vector in the ξ direction. When the sensor coil is a magnetometer coil (which measures only the magnetic field component normal to the sensor coil), l_m^ξ(r₀) is calculated from l_m^ξ(r₀) = B(r_m) · i_m, where i_m is a unit vector expressing the normal direction of the mth sensor coil. When the sensor coil is a first-order axial gradiometer with a baseline of D, l_m^ξ(r₀) is calculated from l_m^ξ(r₀) = B(r_m) · i_m − B(r_m + D i_m) · i_m. This l_m^ξ(r₀)
represents the sensitivity of the mth sensor to the primary current density located at r₀ and directed in the ξ direction. One of the important properties of the lead field obtained using the spherically-symmetric homogeneous conductor model is that if s(r₀) and r₀ are parallel, i.e., if the primary current source is oriented in the radial direction, no magnetic fields are generated outside the spherical conductor from such a radial source. Also, we can see that when r₀ approaches the center of the sphere, l_m^ξ(r₀) becomes zero, and no magnetic field is generated outside the conductor from a source at the origin. The spherically-symmetric homogeneous conductor is generally satisfactory in explaining the measured magnetic field when only superficial sources exist, i.e., when all sources are located relatively close to the sensor array. This is because the curvature of the upper half of the brain is well approximated by a sphere. However, for sources located in lower regions of the brain, the model becomes inaccurate because the curvature of the lower brain regions significantly differs from a sphere. Such errors caused by misfits of the model may be reduced by using a spheroidally-symmetric conductor model (Cuffin and Cohen, 1977) or an eccentric sphere conductor model (Cuffin, 1991). More fundamental improvements can be obtained by using realistically-shaped volume-conductor models. Such conductor models can be constructed by first extracting the brain boundary surface from the subject's 3D MRI. We denote this surface Σ. We then use the following Geselowitz formula (Geselowitz, 1970; Sarvas, 1987) to calculate magnetic fields outside the volume conductor:
    B(r) = B₀(r) − p (μ₀/4π) ∫_Σ V(r') i_Σ(r') × (r − r') / |r − r'|³ dS',    (7.10)
where the integral on the right-hand side indicates the surface integral over Σ; r' represents a point on Σ, and i_Σ(r') is a unit vector perpendicular to Σ at r'. Here, we assume that the conductivity within the brain is uniform, and denote it p. The second term on the right-hand side of Eq. (7.10) represents the influence from the volume current, and to calculate this term, we need to know V(r) on Σ, which is obtained by solving (Sarvas, 1987)

    (p/2) V(r) = V₀(r) − (p/4π) ∫_Σ V(r') i_Σ(r') · (r − r') / |r − r'|³ dS',    (7.11)
where

    V₀(r) = (1/4π) ∫ s(r') · (r − r') / |r − r'|³ dr',    (7.12)
and r and r' are points on the boundary surface. Because the brain boundaries are irregular, the magnetic field B(r) can only be obtained numerically using the boundary element method (BEM). In this calculation, we first estimate the electric potential V(r) on the brain boundary surface Σ by numerically solving Eq. (7.11). We then calculate magnetic fields outside the brain using Eq. (7.10). The details of these numerical calculations are outside the scope of this chapter; they can be found in (Barnard et al., 1967; Hamalainen and Sarvas, 1989). The numerical method
mentioned so far assumes uniform conductivity within the brain boundary, and is called single-compartment BEM. It is usually used in estimating MEG sensor lead fields (Fuchs et al., 1998). It can be extended to the multiple-compartment BEM and such models are usually used for estimating the EEG sensor lead fields. The BEM-based realistically-shaped volume conductor models generally provide significant improvements in the accuracy of the forward calculation particularly for deep sources (Fuchs et al., 1998; Cuffin, 1996), although they are computationally expensive. Improvements in the computational efficiency of the BEM have been reported (Bradley et al., 2001; van't Ent et al., 2001).
7.3.3 LOW-RANK SIGNALS AND THEIR PROPERTIES

Let us consider specific cases where the primary current consists of localized discrete sources. The number of sources is denoted Q, and we assume that Q is less than the number of sensors M. The locations of these sources are denoted r₁, r₂, ..., r_Q. The source-moment distribution is then expressed as

    s(r, t) = Σ_{q=1}^{Q} s_D(r_q, t) δ(r − r_q),    (7.13)

where s_D(r_q) = ∫ s(r) dr and this integral extends over the small region around r_q where the qth source is confined. This type of localized source is called the equivalent current dipole with moment s_D. Here, s_D is called the moment because it has a dimension of current × distance. The basis underlying the equivalent current dipole is physiologically plausible (Okada et al., 1987), and sources of neuromagnetic fields are often modeled with current dipoles. Since there is little advantage to explicitly differentiating the current density s(r_q, t) and the current moment s_D(r_q, t), for simplicity we keep the same notation s(r_q, t) to express the current moment. Then, the Q-dimensional source magnitude vector is defined as v(t) = [s(r₁, t), s(r₂, t), ..., s(r_Q, t)]^T. We define a 3Q × Q block-diagonal matrix that expresses the orientations of all Q sources as Ψ(t), whose qth diagonal block is the 3 × 1 orientation vector η(r_q, t):

    Ψ(t) = diag[η(r₁, t), η(r₂, t), ..., η(r_Q, t)].

The composite lead-field matrix for the entire set of Q sources is defined as

    L_c = [L(r₁), L(r₂), ..., L(r_Q)].    (7.14)

Then, substituting Eq. (7.13) into Eq. (7.1), we have the discrete form of the basic relationship between b(t) and v(t) such that

    b(t) = [L_c Ψ(t)] v(t) + n(t).    (7.15)
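The discrete model of Eq. (7.15) can be sketched numerically. Everything here is a placeholder assumption: a random composite lead field, two fixed unit orientations, and sinusoidal source magnitudes; only the shapes and the block-diagonal structure of Ψ(t) matter:

```python
import numpy as np

rng = np.random.default_rng(1)
M, Q, T = 32, 2, 200  # sensors, sources, time samples (assumed sizes)

Lc = rng.standard_normal((M, 3 * Q))  # placeholder composite lead field, Eq. (7.14)

eta = np.array([[0.0, 0.6, 0.8],      # fixed unit orientation of source 1
                [1.0, 0.0, 0.0]])     # fixed unit orientation of source 2
Psi = np.zeros((3 * Q, Q))            # 3Q x Q block-diagonal orientation matrix
for q in range(Q):
    Psi[3 * q: 3 * q + 3, q] = eta[q]

t = np.arange(T) / 1000.0             # 1-kHz sampling, for illustration
v = np.vstack([np.sin(2 * np.pi * 10 * t),   # source magnitude waveforms v(t)
               np.sin(2 * np.pi * 23 * t)])

b = Lc @ Psi @ v + 0.1 * rng.standard_normal((M, T))  # Eq. (7.15)
print(b.shape)
```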
Let us define the measurement covariance matrix as R_b; i.e., R_b = ⟨b(t)b^T(t)⟩, where ⟨·⟩ indicates the ensemble average. (This ensemble average is usually replaced by the time average over a certain time window.) Let us also define the covariance matrix of the source-moment activity as R_s; i.e., R_s = ⟨Ψ(t)v(t)v^T(t)Ψ^T(t)⟩. Then, using Eq. (7.15), we get the relationship between the measurement covariance matrix and the source-activity covariance matrix such that

    R_b = L_c R_s L_c^T + σ²I,    (7.16)

where the noise in the measured data is assumed to be white Gaussian noise with a variance of σ², and I is the M × M identity matrix. Let us define the kth eigenvalue and eigenvector of R_b as λ_k and e_k, respectively. Let us assume, for simplicity, that all sources have fixed orientations. Then, unless some source activities are perfectly correlated with each other, the rank of R_s is equal to the number of sources Q. Therefore, according to Eq. (7.16), R_b has Q eigenvalues greater than σ² and M − Q eigenvalues that are equal to σ². The signal whose covariance matrix has such properties is referred to as the low-rank signal (Paulraj et al., 1993; Sekihara et al., 2000). Let us define the matrices E_S and E_N as E_S = [e₁, ..., e_Q] and E_N = [e_{Q+1}, ..., e_M]. The column span of E_S is the maximum-likelihood estimate of the signal subspace of R_b, and the span of E_N is that of the noise subspace (Scharf, 1991). For low-rank signals, the measurement covariance matrix R_b can be decomposed into its signal and noise subspace components; i.e.,

    R_b = E_S Λ_S E_S^T + E_N Λ_N E_N^T.    (7.17)

Here, we define the matrices Λ_S and Λ_N as
    Λ_S = diag[λ₁, ..., λ_Q]  and  Λ_N = diag[λ_{Q+1}, ..., λ_M],    (7.18)
where diag[...] indicates a diagonal matrix whose diagonal elements are equal to the entries in the brackets. The most important property of the low-rank signal is that, at source locations, the lead-field matrix is orthogonal to the noise subspace of R_b. This can be understood by first considering that

    L_c R_s L_c^T e_k = 0,  for k = Q + 1, ..., M.    (7.19)

Since L_c is a full column-rank matrix, and we assume that R_s is a full rank matrix, the above equation gives

    L_c^T e_k = 0,  for k = Q + 1, ..., M.    (7.20)

This implies that the lead-field matrices at the true source locations are orthogonal to any noise-level eigenvector; that is, they are orthogonal to the noise subspace (Schmidt, 1981),
i.e.,

    L^T(r_q) E_N = 0,  for q = 1, ..., Q.    (7.21)

Since the equation above holds, the relationship l^T(r_q) E_N = 0 also holds. These orthogonality relationships are the basis of the eigenspace-projection adaptive beamformer described in Section 7.5.2, as well as the basis of the well-known MUSIC algorithm (Schmidt, 1986; Schmidt, 1981; Mosher et al., 1992).
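The low-rank eigenstructure and the orthogonality relation can be verified numerically. In the sketch below the lead-field vectors are random placeholders, and the source powers and noise variance are arbitrary assumptions; the covariance is built analytically from Eq. (7.16) rather than estimated from data:

```python
import numpy as np

rng = np.random.default_rng(2)
M, Q, sigma2 = 32, 2, 0.01  # sensors, sources, noise variance (assumed values)

A = rng.standard_normal((M, Q))  # columns: lead fields l(r_q) in moment direction
Rs = np.diag([2.0, 1.0])         # uncorrelated source powers -> full-rank Rs

Rb = A @ Rs @ A.T + sigma2 * np.eye(M)  # Eq. (7.16)

lam, E = np.linalg.eigh(Rb)             # eigenvalues in ascending order
lam, E = lam[::-1], E[:, ::-1]          # reorder to descending
E_N = E[:, Q:]                          # noise-subspace eigenvectors

orth = np.linalg.norm(E_N.T @ A)        # orthogonality of Eq. (7.21)
print(lam[:3], orth)
```

As expected, the first Q = 2 eigenvalues are far above the noise level, the remaining eigenvalues equal σ², and the noise-subspace eigenvectors are numerically orthogonal to the source lead fields.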
7.4 SPATIAL FILTER FORMULATION AND NON-ADAPTIVE SPATIAL FILTER TECHNIQUES

7.4.1 SPATIAL FILTER FORMULATION
Spatial filter techniques estimate the source current density (or the source moment) by applying a linear filter to the measured data. Because the source is a three-dimensional vector quantity, there are two ways to implement the spatial filter approach in the neuromagnetic source reconstruction. One is the scalar spatial filter and the other is the vector spatial filter. In the scalar approach, we use a single set of weights that characterizes the properties of the spatial filter, and define the set of the filter weights as a column vector w(r, η) = [w₁(r, η), w₂(r, η), ..., w_M(r, η)]^T. Here the weight vector depends both on the location r and the source orientation η. This weight vector w(r, η) should only pass the signal from a source with a particular location r and an orientation η. The weight vector rejects not only the signals from other locations but also the signal from the location r if the orientation of the source at r differs from η. Then, the magnitude of the source moment is estimated using a simple linear operation,

    ŝ(r, t) = w^T(r, η) b(t) = Σ_{m=1}^{M} w_m(r, η) b_m(t),    (7.22)
where the estimate of the source magnitude is denoted ŝ(r, t). When using the scalar-type beamformer in Eq. (7.22), we need to first determine the beamformer orientation η to estimate the source activity at a specific location r. However, this η is generally unknown, although several techniques have been developed to obtain the optimum estimate of the source orientation (Sekihara and Scholz, 1996; Mosher et al., 1992). The vector spatial filter uses a weight matrix W(r) that contains three weight vectors w_x(r), w_y(r), and w_z(r), which respectively estimate the x, y, and z components of the source moment. That is, the source current vector is estimated from

    ŝ(r, t) = W^T(r) b(t) = [w_x(r), w_y(r), w_z(r)]^T b(t),    (7.23)
where ŝ(r, t) is the estimate of the source current vector. The vector spatial filter estimates the source orientation as well as the source magnitude. The application of a spatial filter weight artificially focuses the sensitivity of a sensor array on a specific location r, and this location r is a controllable parameter. Therefore, in
a post-processing procedure, we can scan the focused region over a region of interest to reconstruct the entire source distribution.
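The scanning idea can be sketched end to end. The lead fields below are random placeholders, and the weight is a naive least-squares (pseudoinverse) choice made up for this illustration; it is neither the minimum-norm filter of Section 7.4.3 nor an adaptive filter:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 64, 50                          # sensors and scan-grid points (assumed)
L = rng.standard_normal((N, M, 3))     # placeholder lead-field matrices L(r_n)

true_idx = 20
m_true = np.array([5.0, -2.0, 1.0])    # source moment at grid point 20
b = L[true_idx] @ m_true               # noiseless measurement, for clarity

# Scan: at each grid point apply a vector spatial filter W(r)^T = pinv(L(r)).
power = np.empty(N)
for n in range(N):
    s_hat = np.linalg.pinv(L[n]) @ b   # Eq. (7.23) with this weight choice
    power[n] = s_hat @ s_hat           # reconstructed moment power

print(int(np.argmax(power)))  # the scan peaks at the true grid point, 20
```

At the true grid point the moment is recovered exactly (the lead field there is full column rank), so the power map peaks at the source location.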
7.4.2 RESOLUTION KERNEL

The major problem with spatial filter techniques is how to derive a weight vector with desirable properties. To develop such weight vectors, we need a criterion which characterizes how appropriately the weight has been designed. The resolution kernel can play this role. Combining Eqs. (7.1) and (7.23), we obtain the relationship

    ŝ(r, t) = ∫ W^T(r) L(r') s(r', t) dr' = ∫ R(r, r') s(r', t) dr',    (7.24)
where

    R(r, r') = W^T(r) L(r').    (7.25)
7.4.3 NON-ADAPTIVE SPATIAL FILTER
Minimum-norm spatial filter

There are generally two types of spatial filter techniques. One is a non-adaptive method in which the filter weight is independent of the measurements. The other is an adaptive method in which the filter weight depends on the measurements. The primary interest of this chapter is the application of the adaptive spatial filter technique to neuromagnetic source reconstruction. However, before proceeding to the adaptive spatial filter, we briefly describe the non-adaptive spatial filter in order to clarify the difference between these two types of spatial filter methods. The best-known non-adaptive spatial filter is the minimum-norm estimate (Hamalainen and Ilmoniemi, 1984; Hamalainen and Ilmoniemi, 1994; Wang et al., 1992; Graumann, 1991). The filter weight can be obtained by the following minimization:

min_W ∫ ||ℛ(r, r') − δ(r − r')||² dr',   (7.26)
Neuromagnetic Source Reconstruction and Inverse Modeling
where δ(r − r') is the three-dimensional delta function. By making the resolution kernel close to the delta function, the weight is obtained; it is expressed as

W(r) = G^{-1} L(r).   (7.27)

The estimated current density is then expressed as

ŝ(r, t) = L^T(r) G^{-1} b(t).   (7.28)

The matrix G is often referred to as the gram matrix. The (p, q) element of G is given by calculating the overlap between the lead fields of the pth and qth sensors,

G_{p,q} = ∫ l_p^T(r) l_q(r) dr.   (7.29)

Unfortunately, in biomagnetic instruments, the overlaps between adjacent sensor lead fields are very large, as depicted in Fig. 7.1(a). As a result, G_{p,q} has a more-or-less similar value for various pairs of p and q. Consequently, the matrix G is generally very poorly conditioned. This fact greatly affects the performance of this non-adaptive spatial filter method, because the method requires calculation of the inverse of G, a process which is very error-prone when G is nearly singular. The gram matrix G is usually calculated numerically by introducing pixel grids throughout the source space. Let us denote the locations of the pixel grid points r_1, r_2, ..., r_N, and the composite lead-field matrix for the entire pixel grid L_N = [L(r_1), L(r_2), ..., L(r_N)]. Then, the matrix G is calculated from

G = L_N L_N^T.   (7.30)
FIGURE 7.1. Schematic views of the sensor lead field. (a) Biomagnetic instrument and (b) X-ray computed tomography.
224
K. Sekihara and S. S. Nagarajan
However, to avoid numerical instability when inverting G, the following regularized version is usually used:

G̃ = G + γI,   (7.31)

where γ is the regularization parameter. The final solution is expressed as

ŝ(r, t) = L^T(r) [G + γI]^{-1} b(t).   (7.32)

The regularization, however, inevitably introduces considerable smearing into the reconstruction results. Besides, the solution obtained using the minimum-norm spatial filter suffers from a geometric bias in that the current estimates are forced to be closer to the sensor array than their actual locations. It should, however, be emphasized that the poor performance of the minimum-norm spatial filter arises not because the method itself has a serious defect but because a mismatch exists between the method and biomagnetic instruments. This can be understood by considering the situation for other imaging modalities such as X-ray computed tomography (CT). As shown in Fig. 7.1(b), the overlaps between the lead fields of different sensors are very small for X-ray CT. As a result, the matrix G is close to the identity matrix and the non-adaptive spatial filter method works quite well. Indeed, the minimum-norm spatial filter technique is considered identical to the filtered-backprojection algorithm (Herman, 1980) used for image reconstruction from projections in commercial X-ray CT systems. In the next subsection, we briefly describe investigations into ways of improving the performance of the minimum-norm-based spatial filter.
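The ill-conditioning of G and the effect of the regularized inverse in Eqs. (7.31)-(7.32) can be illustrated numerically. The sketch below uses a hypothetical scalar lead-field matrix with strongly overlapping sensor profiles, mimicking the situation of Fig. 7.1(a); all sizes and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar lead fields with large overlaps between sensors.
M, N = 24, 80
src = np.linspace(0.0, 1.0, N)
sens = np.linspace(0.0, 1.0, M)
L = np.exp(-(sens[:, None] - src[None, :])**2 / (2 * 0.25**2))  # broad overlap

G = L @ L.T                                   # gram matrix, Eq. (7.30)
print("condition number of G: %.1e" % np.linalg.cond(G))   # enormous

# Simulated measurement from a single unit source at grid point 40.
s_true = np.zeros(N)
s_true[40] = 1.0
b = L @ s_true + 0.01 * rng.standard_normal(M)

# Regularized minimum-norm solution, Eqs. (7.31)-(7.32):
#   s_hat = L_N^T (G + gamma*I)^{-1} b
gamma = 1e-3 * np.linalg.eigvalsh(G).max()
s_hat = L.T @ np.linalg.solve(G + gamma * np.eye(M), b)

print("estimated peak at grid point", int(np.argmax(np.abs(s_hat))))
```

Without the γI term, the solve would be dominated by near-zero eigenvalues of G and the estimate would be numerically meaningless; with it, the estimate is stable but visibly smeared, as the text notes.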
Least-squares-based interpretation of the minimum-norm methods

The minimum-norm spatial filter is commonly derived by minimizing a least-squares-based cost function. Actually, this least-squares-based interpretation is much more popular than the spatial-filter-based interpretation described in Section 7.4.3. Namely, the solution in Eq. (7.32) minimizes the cost function

F(s_N) = ||b(t) − L_N s_N||² + γ ||s_N||²,   (7.33)

where s_N is a source vector whose elements consist of the current estimates at the pixel points, i.e., s_N = [s(r_1, t), ..., s(r_N, t)]^T. In Eq. (7.33), the first term on the right-hand side is the least-squares error term and the second term is the total sum of the current norm. Therefore, the optimum solution minimizes the total current norm as well as the least-squares error. This is why the method is often referred to as the minimum-norm estimate. The trick to improving the performance of the minimum-norm method is to use a more general form of the cost function, expressed as

F(s_N) = [b(t) − L_N s_N]^T Υ [b(t) − L_N s_N] + γ s_N^T Φ^{-1} s_N,   (7.34)

where Φ represents some kind of weighting applied to the solution vector s_N, and Υ represents the weighting applied to the residual of the least-squares term. The solution
derived by minimizing this cost function is expressed as

ŝ_N(t) = Φ L_N^T [L_N Φ L_N^T + γ Υ^{-1}]^{-1} b(t).   (7.35)

In this solution, the gram matrix becomes G̃ = L_N Φ L_N^T + γ Υ^{-1}. The inclusion of the matrices Φ and Υ gives a greater degree of freedom in regularizing the gram matrix, and by choosing appropriate forms for these matrices, the numerical instability can be reduced without introducing unwanted side effects such as image blur. In general, the matrix Φ is derived from a desired property of the solution. One widely used example is the minimum weighted-norm constraint, in which we use Φ = Φ^L, whose non-diagonal elements are zero and whose diagonal terms are given by

Φ^L_{3k,3k} = 1 / ||l^x(r_k)||²,   Φ^L_{3k+1,3k+1} = 1 / ||l^y(r_k)||²,   and   Φ^L_{3k+2,3k+2} = 1 / ||l^z(r_k)||².   (7.36)
This weight Φ^L can reduce the geometric bias of the minimum-norm solution to some extent, compensating for the variation in the lead-field norm. Low-resolution electromagnetic tomography (LORETA) (Pascual-Marqui and Michel, 1994; Wagner et al., 1996) is another popular application of this particular type of Φ. It seeks the maximally smooth solution by using Φ = Φ^L Φ^R, where Φ^R is the Laplacian smoothing matrix. Bayesian-type estimation methods determine Φ based on prior knowledge of the neural current distribution (Schmidt et al., 1999; Baillet and Garnero, 1997). Determination of Φ by fMRI has also been proposed (Liu et al., 1998; Dale et al., 2000). The matrix Υ is generally determined from the noise properties. When the measurements contain non-white noise and we know the noise covariance matrix, Υ is usually set to the inverse of the noise covariance matrix. The determination of the optimum forms for the matrices Φ and Υ has been an active research topic, and many investigations have been performed in this direction. However, we will not digress into the details of these investigations. Instead, in the following sections we describe a different approach, known as the adaptive spatial filter, which does not use the gram matrix of the lead field.
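A minimal numerical illustration of the lead-field normalization: with one sensor and two grid points whose lead-field norms differ by a factor of two, the plain minimum-norm estimate assigns most of the current to the strongly coupled ("shallow") point, while a weighted minimum-norm solution of the form ŝ = Φ L^T (L Φ L^T + γI)^{-1} b with Φ = diag(1/||l_k||²) recovers the weakly coupled source. The geometry here is hypothetical and deliberately tiny:

```python
import numpy as np

# One-sensor toy: a "shallow" grid point couples twice as strongly
# (lead-field norm 2) as a "deep" one (norm 1). The data come from
# the deep source alone, with unit amplitude.
L = np.array([[2.0, 1.0]])          # 1 sensor x 2 grid points
b = np.array([1.0])
gamma = 1e-9                        # near-noiseless limit

# Plain minimum norm (Phi = I): the shallow point wins (geometric bias).
s_plain = L.T @ np.linalg.solve(L @ L.T + gamma * np.eye(1), b)

# Weighted minimum norm with the lead-field normalization of Eq. (7.36):
# Phi = diag(1/||l_k||^2), so weakly coupled points are penalized less.
Phi = np.diag(1.0 / np.sum(L**2, axis=0))
s_wmn = Phi @ L.T @ np.linalg.solve(L @ Phi @ L.T + gamma * np.eye(1), b)

print("plain MN estimate:   ", s_plain)   # shallow point dominates
print("weighted MN estimate:", s_wmn)     # deep point dominates
```

The worked numbers are easy to verify by hand: the plain estimate is [0.4, 0.2] while the weighted one is [0.25, 0.5], so the normalization flips which grid point dominates.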
7.4.4 NOISE GAIN AND WEIGHT NORMALIZATION

The spatial filter weights determine the gain for the noise in the reconstructed results. In the scalar spatial filter techniques, the output noise power P_n due to the noise input is given by

P_n = w^T(r) R_n w(r),   (7.37)

where R_n is the noise covariance matrix. When the noise is uncorrelated white Gaussian noise, the output noise power is equal to

P_n = σ² ||w(r)||²,   (7.38)

where σ² is the power of the input noise, i.e., ⟨n(t)n^T(t)⟩ = σ²I. Therefore, the norm of the filter weight vector, ||w(r)||², is called the noise power gain or the white noise gain. In vector
spatial filter techniques, the output noise power is expressed as
P_n = tr{W^T(r) ⟨n(t)n^T(t)⟩ W(r)} = tr{W^T(r) R_n W(r)}.   (7.39)

When the input noise is uncorrelated white Gaussian noise, this reduces to

P_n = σ² tr{W^T(r) W(r)}.   (7.40)

Here, the square sum of the norms of the filter weights, tr{W^T(r)W(r)} = ||w_x(r)||² + ||w_y(r)||² + ||w_z(r)||², is the noise power gain. A minimum-norm spatial filter with weight normalization has also been proposed (Dale et al., 2000). The output of this spatial filter is expressed as

ŝ(r, t) = W^T(r) b(t) / √(tr{W^T(r) W(r)}).   (7.41)
Because the weight norm is the noise gain, the output of this spatial filter is interpreted as being equal to the SNR of the minimum-norm filter output.
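The white-noise gain of Eq. (7.38) is easy to verify by simulation: pass noise-only measurements through a filter weight and compare the empirical output power with σ²||w||². The weight vector below is an arbitrary stand-in, not one derived from a lead field:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scalar filter weight for one location, white sensor noise.
M = 16
w = rng.standard_normal(M)
sigma = 0.5

# Empirical output noise power over many noise-only samples ...
n = sigma * rng.standard_normal((M, 200_000))
p_empirical = np.mean((w @ n) ** 2)

# ... versus the white-noise gain of Eq. (7.38): P_n = sigma^2 ||w||^2.
p_theory = sigma**2 * np.sum(w**2)

print("empirical %.3f  vs  theory %.3f" % (p_empirical, p_theory))
```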
7.5 ADAPTIVE SPATIAL FILTER TECHNIQUES

7.5.1 SCALAR MINIMUM-VARIANCE-BASED BEAMFORMER TECHNIQUES

The adaptive spatial filter techniques use a weight vector that depends on the measurements. The best-known adaptive spatial filter technique is probably the minimum-variance beamformer. The term "beamformer" has been customarily used in the signal-processing community with the same meaning as "spatial filter". In this method, the spatial filter weights are obtained by solving the constrained optimization problem

min_w w^T(r) R_b w(r)   subject to   w^T(r) l(r) = 1,   (7.42)

and consequently we get

w(r) = R_b^{-1} l(r) / [l^T(r) R_b^{-1} l(r)],   (7.43)

where l(r) is defined as l(r) = L(r)η. The idea behind the above optimization is that the filter weight is designed to minimize the total output signal power while maintaining the signal from the pointing location r. Therefore, ideally, this weight passes only the signal from a source at the location r with the orientation η, and suppresses the signals from sources at other locations or orientations. One difficulty arises when applying this method to actual MEG/EEG source localization problems. That is, when we use the spherically symmetric homogeneous conductor model to calculate l(r), the beamformer output has erroneously large values near the center of the sphere. This is because ||l(r)|| becomes very small when r approaches the center of the sphere.
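A short sketch of the minimum-variance weight of Eq. (7.43), using two hypothetical sources with random lead-field vectors and nearly uncorrelated time courses: the weight keeps unit gain for the pointed lead field and strongly suppresses the signal from the other source.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two hypothetical sources with random lead-field vectors.
M = 20
l1, l2 = rng.standard_normal(M), rng.standard_normal(M)
t = np.linspace(0.0, 1.0, 500)
s1, s2 = np.sin(2 * np.pi * 7 * t), np.sin(2 * np.pi * 11 * t)
B = np.outer(l1, s1) + np.outer(l2, s2) + 0.1 * rng.standard_normal((M, len(t)))

Rb = B @ B.T / B.shape[1]              # measurement covariance

def mv_weight(l, Rb):
    """Minimum-variance weight, Eq. (7.43): w = Rb^{-1} l / (l^T Rb^{-1} l)."""
    Ri_l = np.linalg.solve(Rb, l)
    return Ri_l / (l @ Ri_l)

w1 = mv_weight(l1, Rb)
print("gain at pointed source:  ", w1 @ l1)      # = 1 by the constraint
print("gain at the other source:", abs(w1 @ l2)) # strongly suppressed
```

Because the suppression is driven by the measured covariance rather than by a fixed kernel, the weight adapts automatically to wherever the interfering power actually is.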
A variant of the minimum-variance beamformer, proposed by Borgiotti and Kaplan (Borgiotti and Kaplan, 1979), uses the optimization

min_w w^T(r) R_b w(r)   subject to   w^T(r) l(r) = κ   and   w^T(r) w(r) = 1,   (7.44)

where the constant κ is chosen so that the two constraints are compatible. The resultant weight vector is expressed as

w(r) = R_b^{-1} l(r) / √(l^T(r) R_b^{-2} l(r)).   (7.45)

Because w^T(r)w(r) represents the noise power gain, the output of the above beamformer directly corresponds to the power of the source activity normalized by the power of the output noise. This Borgiotti-Kaplan beamformer is known to provide a spatial resolution higher than that of the minimum-variance beamformer (Borgiotti and Kaplan, 1979). Moreover, it can easily be seen that the output of the beamformer in Eq. (7.45) does not depend on ||l(r)||. Thus, the ||l(r)||-related artifacts are avoided. Another more serious problem with the adaptive beamformer techniques described so far is that they are very sensitive to errors in the forward modeling or errors in estimating the data covariance matrix. Since such errors are nearly inevitable in neuromagnetic measurements, these techniques generally provide noisy spatio-temporal reconstruction results, as demonstrated in Section 7.7. One technique has been developed to overcome such poor performance (Cox et al., 1987; Carlson, 1988). The technique, referred to as diagonal loading, uses the regularized inverse of the measurement covariance matrix instead of its direct matrix inverse. Although this technique has been applied to the MEG source localization problem (Robinson and Vrba, 1999; Gross and Ioannides, 1999; Gross et al., 2001), it is known that the regularization leads to a trade-off between the spatial resolution and the SNR of the beamformer output.
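The Borgiotti-Kaplan weight of Eq. (7.45) and the diagonal-loading idea can both be written in a few lines. The covariance below is a hypothetical one-source-plus-white-noise model, and the loading level is illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

M = 20
l = rng.standard_normal(M)
# Hypothetical covariance: one source along l plus white sensor noise.
Rb = 4.0 * np.outer(l, l) / (l @ l) + 0.01 * np.eye(M)

# Borgiotti-Kaplan weight, Eq. (7.45): w = Rb^{-1} l / sqrt(l^T Rb^{-2} l).
# Note l^T Rb^{-2} l = ||Rb^{-1} l||^2 because Rb is symmetric.
Ri_l = np.linalg.solve(Rb, l)
w_bk = Ri_l / np.sqrt(Ri_l @ Ri_l)
print("unit noise gain ||w||^2 =", w_bk @ w_bk)   # = 1

# Diagonal loading: replace Rb^{-1} by (Rb + eps*I)^{-1} to tame errors
# in the estimated covariance, at some cost in spatial resolution.
eps = 0.05 * np.trace(Rb) / M
w_dl = np.linalg.solve(Rb + eps * np.eye(M), l)
w_dl /= w_dl @ l                                  # restore unit signal gain
```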
7.5.2 EXTENSION TO EIGENSPACE-PROJECTION BEAMFORMER

We here describe the eigenspace-projection beamformer (van Veen, 1988; Feldman and Griffiths, 1991), which is tolerant of the above-mentioned errors and provides improved output SNR without sacrificing the spatial resolution in practical low-rank signal situations. Using Eqs. (7.43) and (7.17), and defining α = 1/[l^T(r) R_b^{-1} l(r)], we rewrite the weight vector for the minimum-variance beamformer as

w(r) = α R_b^{-1} l(r) = α Γ_s l(r) + α Γ_N l(r),   (7.46)

where Γ_s = E_s Λ_s^{-1} E_s^T and Γ_N = E_N Λ_N^{-1} E_N^T. In Eq. (7.46), the second term on the right-hand side, α Γ_N l(r), should ideally be equal to zero because the lead-field vector l(r) is orthogonal to E_N at the source locations, as indicated by Eq. (7.21). Various factors, however, prevent this term from being zero, and a
non-zero α Γ_N l(r) seriously degrades the SNR, as explained in the next section. Therefore, the eigenspace-based beamformer uses only the first term of Eq. (7.46) to calculate its weight vector w̃(r); i.e.,

w̃(r) = α Γ_s l(r) = Γ_s l(r) / [l^T(r) R_b^{-1} l(r)].   (7.47)
Note that w̃(r) is equal to the projection of w(r) onto the signal subspace of R_b. Namely, the following relationship holds (Feldman and Griffiths, 1991; Yu and Yeh, 1995):

w̃(r) = E_s E_s^T w(r).   (7.48)
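The projection of Eq. (7.48) can be sketched as follows. The covariance here is an idealized two-source model, so the signal subspace is spanned exactly by the two lead fields; in practice the subspace is estimated from the measured covariance:

```python
import numpy as np

rng = np.random.default_rng(6)

M, Q = 20, 2                                  # sensors, signal-subspace dim
l1, l2 = rng.standard_normal(M), rng.standard_normal(M)
Rb = 3.0 * np.outer(l1, l1) + 2.0 * np.outer(l2, l2) + 0.01 * np.eye(M)

# Eigendecomposition: the Q largest eigenvalues span the signal subspace.
vals, vecs = np.linalg.eigh(Rb)               # eigenvalues in ascending order
Es = vecs[:, -Q:]                             # signal subspace E_s

# Minimum-variance weight (Eq. 7.43) for the first source ...
Ri_l = np.linalg.solve(Rb, l1)
w = Ri_l / (l1 @ Ri_l)

# ... and its eigenspace projection, Eq. (7.48): w~ = Es Es^T w.
w_es = Es @ (Es.T @ w)

# The projection keeps the distortionless response (l1 lies in the signal
# subspace here) while discarding the noise-subspace component that
# inflates the white-noise gain.
print("signal gain after projection:", w_es @ l1)
print("noise gain before/after: %.4f / %.4f" % (w @ w, w_es @ w_es))
```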
Therefore, the extension to an eigenspace-projection beamformer is attained by projecting the weight vector onto the signal subspace of the measurement covariance matrix.

7.5.3 COMPARISON BETWEEN MINIMUM-VARIANCE AND EIGENSPACE BEAMFORMER TECHNIQUES
Although the minimum-variance beamformer ideally has exactly the same SNR as that of the eigenspace-based beamformer, the SNR of the eigenspace beamformer is significantly higher in practical applications. The reason for this high SNR can be understood as follows. Let us assume that a single source with a moment magnitude equal to s(t) exists at r. We assume that the estimated lead field, l̃(r), is slightly different from the true lead field l(r). The estimate of s(t), ŝ(t), is derived from ŝ(t) = w^T(r)b(t) = w^T(r)l(r)s(t), and the average power of the estimated source moment, P̂_s, is expressed as

P̂_s = ⟨ŝ(t)²⟩ = [w^T(r) l(r)]² P_s,   (7.49)

where P_s is the average power of s(t), defined by P_s = ⟨s(t)²⟩. For the minimum-variance beamformer, this P̂_s is expressed as

P̂_s = α² P_s [l̃^T(r) R_b^{-1} l(r)]² = α² P_s [l̃^T(r) Γ_s l(r)]²,   (7.50)

where we use the orthogonality relationship l^T(r)E_N = 0. For the eigenspace-projection beamformer, it is also expressed as

P̂_s = α² P_s [l̃^T(r) Γ_s l(r)]².   (7.51)

The average noise power P_n is obtained using Eqs. (7.38) and (7.46) and, for the minimum-variance beamformer, it is expressed as

P_n = σ² α² [l̃^T(r) Γ_s² l̃(r) + l̃^T(r) Γ_N² l̃(r)].   (7.52)
For the eigenspace-projection beamformer, P_n is expressed as

P_n = σ² α² l̃^T(r) Γ_s² l̃(r).   (7.53)

Thus, the output SNR of the minimum-variance beamformer, SNR^(MV), is expressed as (Chang and Yeh, 1992; Chang and Yeh, 1993)

SNR^(MV) = P_s [l̃^T(r) Γ_s l(r)]² / (σ² [l̃^T(r) Γ_s² l̃(r) + l̃^T(r) Γ_N² l̃(r)]).   (7.54)

The SNR for the eigenspace-based beamformer, SNR^(ES), is thus

SNR^(ES) = P_s [l̃^T(r) Γ_s l(r)]² / (σ² l̃^T(r) Γ_s² l̃(r)).   (7.55)

The only difference between Eqs. (7.54) and (7.55) is the presence of the second term l̃^T(r) Γ_N² l̃(r) in the denominator of the right-hand side of Eq. (7.54). It is readily apparent that SNR^(MV) and SNR^(ES) are equal if we can use an accurate noise-subspace estimate and an accurate lead-field vector, because the term l̃^T(r) Γ_N² l̃(r) is exactly equal to zero in this case. It is, however, generally difficult to attain the relationship l̃^T(r) Γ_N² l̃(r) = 0. One obvious reason for this difficulty is that, when calculating Γ_N in practice, instead of using R_b, the sample covariance matrix R̂_b must be used; R̂_b is calculated from R̂_b = (1/K) Σ_{k=1}^{K} b(t_k) b^T(t_k), where K is the number of time points. Another factor that is specific to MEG and causes l̃^T(r) Γ_N² l̃(r) to have a non-zero value is that it is almost impossible to use a perfectly accurate lead-field vector. This is because the conductivity distribution in the brain is usually approximated by using some kind of conductor model, such as the spherically homogeneous conductor model, to calculate the lead-field matrix. Although this error may be reduced to a certain extent by using a realistic head model, the error cannot be perfectly avoided. Let us define the overall error in estimating l(r) as ε, i.e., l̃(r) = l(r) + ε. Assuming that ||l(r)||² ≫ ||ε||², we can rewrite Eq. (7.54) as

SNR^(MV) ≈ P_s [l̃^T(r) Γ_s l(r)]² / (σ² [l̃^T(r) Γ_s² l̃(r) + ε^T Γ_N² ε]).   (7.56)

Note that, in the denominator of the right-hand side of this equation, the term ε^T Γ_N² ε has an order of magnitude proportional to ||ε||²/λ_N², where λ_N represents one of the noise-level eigenvalues of R_b. The eigenvalue λ_N is usually significantly smaller than the signal-level eigenvalues. Therefore, Eq. (7.56) indicates that even when the error ε is very small, the term ε^T Γ_N² ε may not be negligibly small compared to the first term in the denominator. Thus, in practice, the eigenspace-projection beamformer attains an SNR significantly higher than that of the minimum-variance beamformer.
7.5.4 VECTOR-TYPE ADAPTIVE SPATIAL FILTER

The scalar beamformer techniques described in the preceding subsections require determination of the beamformer orientation η to calculate l(r). We here describe the extension to vector-type adaptive spatial filter techniques, which do not require the predetermination of η.

Problem of virtual source correlation
A naive way of extending to the vector beamformer is to simply use the scalar beamformer weight vector obtained with η = e_ζ to estimate the source in the ζ direction (where ζ = x, y, or z, and e_ζ is the unit vector in the ζ direction). Let us try to estimate s_ζ(t) by using ŝ_ζ(t) = w^T(r, e_ζ)b(t), where w(r, e_ζ) is obtained using Eq. (7.43). The use of such weight vectors, however, generally gives erroneous results, and the cause of this estimation failure can be explained as follows. Let us assume that a single source with its orientation equal to η = [η_x, η_y, η_z]^T exists at r. Its activation time course is assumed to be s(t). Then, we can express the measured magnetic field as b(t) = η_x s(t) l^x(r) + η_y s(t) l^y(r) + η_z s(t) l^z(r). This can be interpreted as showing that the magnetic field is generated by three perfectly correlated sources located at the same location r, with moments equal to η_x s(t) e_x, η_y s(t) e_y, and η_z s(t) e_z. Let us, for example, consider the case of estimating the x component of the source moment. The estimated moment, ŝ_x(t), is expressed as

ŝ_x(t) = w_x^T(r) b(t) = [η_x w^T(r, e_x) l^x(r) + η_y w^T(r, e_x) l^y(r) + η_z w^T(r, e_x) l^z(r)] s(t).   (7.57)

Since the weight w(r, e_x) is obtained by imposing the constraint w^T(r, e_x) l^x(r) = 1, we have

ŝ_x(t) = [η_x + η_y w^T(r, e_x) l^y(r) + η_z w^T(r, e_x) l^z(r)] s(t).   (7.58)

In this equation, there is no guarantee that the relationships w^T(r, e_x) l^y(r) = 0 and w^T(r, e_x) l^z(r) = 0 hold. Instead, w^T(r, e_x) l^y(r) and w^T(r, e_x) l^z(r) generally have fairly large negative values, resulting in considerable errors.

A vector-extended minimum-variance beamformer
The above analysis also suggests how we can avoid such errors. Equation (7.57) indicates that the weight should be derived with the multiple constraints

w_x^T l^x(r) = 1,   w_x^T l^y(r) = 0,   and   w_x^T l^z(r) = 0.   (7.59)

That is, we impose null constraints on the directions orthogonal to the one to be estimated. (We here omit the notation (r) for the weight expressions unless this omission causes ambiguity.) Similarly, to derive w_y and w_z, the following constraints should be imposed:

w_y^T l^x(r) = 0,   w_y^T l^y(r) = 1,   and   w_y^T l^z(r) = 0,   (7.60)

w_z^T l^x(r) = 0,   w_z^T l^y(r) = 0,   and   w_z^T l^z(r) = 1.   (7.61)
The minimum-variance beamformer with such multiple linear constraints, referred to as the linearly constrained minimum-variance beamformer (Frost, 1972), is known to have the following solution:

W(r) = R_b^{-1} L(r) [L^T(r) R_b^{-1} L(r)]^{-1},   (7.62)

where W(r) = [w_x, w_y, w_z]. It is clear from the discussion above that, when estimating one of the three orthogonal components of the source moment, we need to suppress the other two components. By doing this, we can avoid the errors caused by the perfectly correlated virtual sources, so the beamformer can detect the source moment projected onto the three orthogonal directions. Note that the set of weight vectors in Eq. (7.62) has been previously reported (van Drongelen et al., 1996; van Veen et al., 1997; Spencer et al., 1992). In these reports, however, the necessity of imposing the null constraints was not fully explained.
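A sketch of the vector-extended weight of Eq. (7.62) on simulated data with hypothetical random lead fields: the constraint matrix W^T(r)L(r) equals the identity by construction (Eqs. (7.59)-(7.61)), and the estimated moment time courses recover the source orientation η:

```python
import numpy as np

rng = np.random.default_rng(7)

M = 25
Lr = rng.standard_normal((M, 3))        # L(r) = [l^x, l^y, l^z], hypothetical
eta = np.array([0.6, 0.8, 0.0])         # source orientation (unit vector)
t = np.linspace(0.0, 1.0, 400)
s = np.sin(2 * np.pi * 9 * t)
B = np.outer(Lr @ eta, s) + 0.05 * rng.standard_normal((M, len(t)))
Rb = B @ B.T / B.shape[1]

# Vector minimum-variance weights, Eq. (7.62):
#   W(r) = Rb^{-1} L(r) [L^T(r) Rb^{-1} L(r)]^{-1}
Ri_L = np.linalg.solve(Rb, Lr)
W = Ri_L @ np.linalg.inv(Lr.T @ Ri_L)   # M x 3

# The multiple constraints of Eqs. (7.59)-(7.61) hold by construction:
print(np.round(W.T @ Lr, 6))            # identity matrix

# The recovered moment follows eta (projection of each row onto s(t)):
s_vec = W.T @ B                         # 3 x T estimated moment time courses
print("recovered orientation:", np.round(s_vec @ s / (s @ s), 2))
```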
Vector-extended Borgiotti-Kaplan beamformer

The extension of the Borgiotti-Kaplan beamformer in Eq. (7.45) is performed in a similar manner. The weight vectors are obtained by using the following constrained minimizations:

min_{w_x} w_x^T R_b w_x   subject to   w_x^T w_x = 1,   w_x^T l^y(r) = 0,   and   w_x^T l^z(r) = 0,   (7.63)

min_{w_y} w_y^T R_b w_y   subject to   w_y^T l^x(r) = 0,   w_y^T w_y = 1,   and   w_y^T l^z(r) = 0,   (7.64)

min_{w_z} w_z^T R_b w_z   subject to   w_z^T l^x(r) = 0,   w_z^T l^y(r) = 0,   and   w_z^T w_z = 1.   (7.65)
We first derive the expression for w_x. Let us introduce a scalar constant ζ such that w_x^T l^x(r) = ζ, where ζ can be determined from the relationship w_x^T w_x = 1. Then, the constrained optimization problem in Eq. (7.63) becomes

min_{w_x} w_x^T R_b w_x   subject to   w_x^T l^x(r) = ζ,   w_x^T l^y(r) = 0,   and   w_x^T l^z(r) = 0.   (7.66)

The solution of this optimization problem is known to have the form

w_x = ζ R_b^{-1} L(r) [L^T(r) R_b^{-1} L(r)]^{-1} e_x,   (7.67)

where e_x = [1, 0, 0]^T. Then, we have

w_x^T w_x = ζ² e_x^T Ω e_x,   where   Ω = [L^T(r) R_b^{-1} L(r)]^{-1} L^T(r) R_b^{-2} L(r) [L^T(r) R_b^{-1} L(r)]^{-1}.   (7.68)
Thus, we get ζ = 1/√(e_x^T Ω e_x) from the relationship w_x^T w_x = 1. Using exactly the same derivation, the weights w_y and w_z can be derived, and the set of weights is expressed as

w_ζ = R_b^{-1} L(r) [L^T(r) R_b^{-1} L(r)]^{-1} e_ζ / √(e_ζ^T Ω e_ζ),   ζ = x, y, z.   (7.69)
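The weight set of Eq. (7.69) can be formed directly from the matrix Ω of Eq. (7.68). The covariance below is an arbitrary positive-definite stand-in; the point is that each resulting weight has unit noise gain while the cross-orientation null constraints are retained:

```python
import numpy as np

rng = np.random.default_rng(8)

M = 25
Lr = rng.standard_normal((M, 3))                 # L(r) = [l^x, l^y, l^z]
A = rng.standard_normal((M, M))
Rb = A @ A.T + M * np.eye(M)                     # hypothetical covariance

# Vector Borgiotti-Kaplan weights, Eq. (7.69):
#   w_zeta = Rb^{-1} L [L^T Rb^{-1} L]^{-1} e_zeta / sqrt(e_zeta^T Omega e_zeta)
Ri_L = np.linalg.solve(Rb, Lr)                   # Rb^{-1} L
T = Ri_L @ np.linalg.inv(Lr.T @ Ri_L)            # unnormalized weights, M x 3
Omega = T.T @ T                                  # equals Omega of Eq. (7.68)
W = T / np.sqrt(np.diag(Omega))                  # rescale each column

# Each weight has unit noise gain, and only the "own" response
# w_zeta^T l^zeta is changed by the scaling (the nulls survive).
print("weight norms:", np.round(np.sum(W**2, axis=0), 6))   # all equal 1
print("constraint matrix (diagonal):")
print(np.round(W.T @ Lr, 3))
```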
Extension to eigenspace-projection vector beamformer

The extension to the eigenspace-projection vector beamformer is attained by using

W̃(r) = E_s E_s^T W(r).   (7.70)

The projection onto the signal subspace, however, cannot preserve the null constraints imposed on the orthogonal components. This can be understood by considering, for example, the case of w_x. The null constraints in this case should be w̃_x^T l^y(r) = 0 and w̃_x^T l^z(r) = 0. However, let us consider

w̃_x^T l^y(r) = (E_s E_s^T w_x)^T l^y(r) = w_x^T E_s E_s^T l^y(r),
w̃_x^T l^z(r) = (E_s E_s^T w_x)^T l^z(r) = w_x^T E_s E_s^T l^z(r).   (7.71)

Because l^y(r) and l^z(r) are not necessarily in the signal subspace, we generally have E_s E_s^T l^y(r) ≠ l^y(r) and E_s E_s^T l^z(r) ≠ l^z(r), and therefore w̃_x^T l^y(r) ≠ 0 and w̃_x^T l^z(r) ≠ 0. It can, however, be shown that the eigenspace-projection beamformer in Eq. (7.70) can still detect the three orthogonal components of the source moment even though the null constraints are not preserved (Sekihara et al., 2001).
7.6 NUMERICAL EXPERIMENTS: RESOLUTION KERNEL COMPARISON BETWEEN ADAPTIVE AND NON-ADAPTIVE SPATIAL FILTERS

7.6.1 RESOLUTION KERNEL FOR THE MINIMUM-NORM SPATIAL FILTER

We compare the resolution kernels for the minimum-norm and the minimum-variance spatial filter techniques. These two methods are typical and basic spatial filter techniques in their respective categories. In these numerical experiments, we use the coil configuration of the 148-channel Magnes 2500™ neuromagnetometer (4D Neuroimaging, San Diego). The sensor coils are arranged on a helmet-shaped surface whose sensor locations are shown in Fig. 7.2. The coordinate origin is chosen as the center of the sensor array. The z direction is defined as the direction perpendicular to the plane of the detector coil located at this center. The x direction is defined as that from the posterior to the anterior, and the y direction is defined as that from the left to the right hemisphere. The values of the spatial coordinates (x, y, z)
FIGURE 7.2. The sensor locations and the coordinate system used for plotting the resolution kernels in Section 7.6. The filled spots indicate the locations of the 148 sensors, and the hatched rectangle shows the plane of x = 0 on which the resolution kernels were plotted.
are expressed in centimeters. The coordinate system is also shown in Fig. 7.2. The origin of the spherically symmetric homogeneous conductor was set to (0, 0, −11). To plot the resolution kernel, we assume a vertical plane x = 0 located below the center of the sensor array (Fig. 7.2). The power of the resolution kernel, ||ℛ||², was calculated using Eqs. (7.25), (7.27), and (7.31) for the minimum-norm method. A point source was assumed to exist at (0, 0, −6); i.e., r' in Eq. (7.25) was set to (0, 0, −6). The kernel was plotted within the region defined by −5 ≤ y ≤ 5 and −9 ≤ z ≤ −1 on the vertical plane of x = 0. The resulting resolution kernels are shown in Fig. 7.3. Here, the results in Fig. 7.3(a) show the kernel obtained from the original minimum-norm method. It is well known that the original minimum-norm method suffers from a strong geometric bias toward the sensors. The results in Fig. 7.3(a) confirm this fact. The kernels of the minimum-norm method with the lead-field normalization are shown in Figs. 7.3(b) and (c). Here, Φ^L in Eq. (7.36) was used, and the regularization parameter γ was set at 0.0001λ₁ for (b) and 0.001λ₁† for (c). These results show that the lead-field normalization significantly improves the performance of the minimum-norm method. However, the resolution is still significantly low, particularly in the depth direction. Moreover, the peak of the kernel is located a few centimeters shallower than the assumed location; the depth difference depends on the choice of the regularization parameter. The results in Fig. 7.3(d) show the kernel from the minimum-norm method with the normalized weight (Eq. (7.41)). The main lobe is significantly sharper than those in the lead-field normalization cases of (b) and (c). However, the peak is located 2 cm deeper than its original position.
† This λ₁ is the largest eigenvalue of the gram matrix L_N L_N^T.
FIGURE 7.7. (a) Results of the spatio-temporal reconstruction obtained using the minimum-variance-based vector beamformer in Eq. (7.62) together with the regularized inverse (R_b + γI)^{-1}. The parameter γ was set to 0.003λ₁, where λ₁ is the largest eigenvalue of R_b. (b) Estimated time courses from the first to the third sources are shown from the top to the bottom, respectively.
FIGURE 7.8. (a) Results of the spatio-temporal reconstruction with the vector-extended Borgiotti-Kaplan-type beamformer (Eq. (7.69)). (b) Estimated time courses from the first to the third sources are shown from the top to the bottom, respectively.
FIGURE 7.9. (a) Results of the spatio-temporal reconstruction obtained using the eigenspace-projected Borgiotti-Kaplan vector beamformer technique (Eqs. (7.69) and (7.70)). (b) Estimated time courses from the first to the third sources are shown from the top to the bottom, respectively.
FIGURE 7.10. The x, y, and z coordinates used to express the reconstruction results in Section 7.8. The coordinate origin is defined as the midpoint between the left and right pre-auricular points. The axis directed away from the origin toward the left pre-auricular point is defined as the +y axis, and that from the origin to the nasion as the +x axis. The +z axis is defined as the axis perpendicular to both these axes and is directed from the origin to the vertex.
7.8 APPLICATION OF ADAPTIVE SPATIAL FILTER TECHNIQUE TO MEG DATA

This section describes the application of the adaptive spatial filter technique to actual MEG data. The MEG data sets were collected using the 37-channel Magnes™ neuromagnetometer. The first data set is an auditory-somatosensory combined response, which contains two major source activities. We show that the adaptive spatial filter technique can reconstruct these two sources and retrieve their time courses. The second data set is a somatosensory response with very high SNR, achieved by averaging 10000 trials. With this data set, we show that the adaptive technique can separate cortical activities only 0.7 cm apart. Throughout this section, we use the head coordinate system shown in Fig. 7.10 to express the reconstruction results.
7.8.1 APPLICATION TO AUDITORY-SOMATOSENSORY COMBINED RESPONSE

The evoked response was measured by simultaneously presenting an auditory stimulus and a somatosensory stimulus to a male subject. The auditory stimulus was a 200 ms pure-tone pulse with 1 kHz frequency presented to the subject's right ear, and the somatosensory stimulus was a 30 ms tactile pulse delivered to the distal segment of the right index finger. These two stimuli started at the same time. The sensor array was placed above the subject's left hemisphere with the position adjusted to optimally record the N1m auditory evoked field. A total of 256 epochs were measured, and the response averaged over all the epochs is shown in the upper part of Fig. 7.11. The adaptive vector beamformer technique was applied to localize sources from this data set. The covariance matrix R_b was calculated with a time window between 0 ms and 300 ms. We calculated, by using Eqs. (7.69) and (7.70), the eigenspace-projected Borgiotti-Kaplan weight matrix containing the two weight vectors, and estimated the source magnitude vector s̄(r, t) using Eq. (7.72). The signal subspace dimension Q was
FIGURE 7.11. Results of the spatio-temporal reconstruction from the auditory-somatosensory combined response shown in the upper trace of this figure. The auditory-somatosensory combined response was measured by simultaneously applying an auditory stimulus and a somatosensory stimulus. A total of 256 epochs were averaged. The contour maps show reconstructed source magnitude distributions at three different latencies (65, 138, and 194 ms). The reconstruction grid spacing was set to 5 mm. The maximum-intensity projections onto the axial (left column), coronal (middle column), and sagittal (right column) directions are shown. The letters L and R indicate the left and right hemispheres. The circles depicting a human head show the projections of the sphere used for the forward modeling.
set to two because the eigenvalue spectrum of R_b showed two distinctly large eigenvalues. The maximum-intensity projections of the reconstructed moment magnitude ||s̄(r, t)||² onto the axial, coronal, and sagittal planes are shown in Fig. 7.11. The source magnitude at three latencies (65, 138, and 194 ms) is shown in this figure. The source magnitude map at 138 ms contains a source activity presumably in the primary somatosensory cortex. The source magnitude map at 194 ms shows a source activity in the primary auditory cortex. The map at 65 ms contains both of these activities. The time courses of points in the primary somatosensory and auditory cortices are shown in Figs. 7.12(a) and (b), respectively. The coordinates of these cortices were determined from the maximum points in the source magnitude maps at 138 ms and 194 ms. In Fig. 7.12(a), the P50 peak, which is known to represent the activity of the primary
FIGURE 7.12. Time courses of the points nearest to (a) the primary somatosensory cortex and (b) the primary auditory cortex. The solid and broken lines correspond, respectively, to s̄∥(r, t) and s̄⊥(r, t). The three vertical broken lines indicate the time instants at 65, 138, and 194 ms.
somatosensory cortex, is observed at a latency of about 50 ms. In Fig. 7.12(b), the auditory N1m peak is observed at a latency of about 100 ms.
7.8.2 APPLICATION TO SOMATOSENSORY RESPONSE: HIGH-RESOLUTION IMAGING EXPERIMENTS

Electrical stimuli with 0.2 ms duration were delivered to the right posterior tibial nerve at the ankle with a repetition rate of 4 Hz. The MEG recordings were taken from the vertex, centering at Cz of the international 10-20 system. An epoch of 60 ms duration was digitized at a 4000 Hz sampling frequency, and 10000 epochs were averaged. The upper part of Fig. 7.13 shows the MEG signals, recorded over the foot somatosensory region in the left hemisphere. The eigenspace-projected Borgiotti-Kaplan beamformer was applied to this MEG recording. The covariance matrix R_b was calculated with a time window between 20 and 45 ms containing 100 time samples. The maximum-intensity projections of the reconstructed source magnitude ||s̄(r, t)||² onto the axial, coronal, and sagittal planes are shown in Fig. 7.13. The source magnitude maps revealed initial activation in the anterior part of the S1 foot area at 33.1 ms, followed by co-activation of the posterior part of the S1 cortex at 36.2 ms. The posterior activation became dominant at 37.2 ms, and the initial anterior activation completely disappeared at 39.1 ms. Fig. 7.14 shows the source magnitude map at 36.9 ms overlaid, with proper thresholding, onto the subject's MRI. Here, the anterior source was probably in area 3b, and the posterior source was in an area near the marginal sulcus. The
246
K. Sekihara and S. S. Nagarajan
[Figure 7.13 appears here: an MEG signal trace over the 20-45 ms window, followed by contour maps of the reconstructed source magnitude at latencies 33.1, 36.2, 37.2, and 39.1 ms, with axes in cm.]
FIGURE 7.13. Results of the spatio-temporal reconstruction from the somatosensory response shown in the upper trace of this figure. The somatosensory response was measured using right posterior tibial nerve stimulation. The contour maps show reconstructed source magnitude distributions at four different latencies. The reconstruction grid spacing was set to 1 mm. The maximum-intensity projections onto the axial (left column), coronal (middle column), and sagittal (right column) directions are shown. The letters L and R indicate the left and right hemispheres. The circles depicting a human head show the projections of the sphere used for the forward modeling.
separation of the two sources was approximately 7 mm, demonstrating the high-resolution imaging capability of the adaptive spatial filter techniques. Details of this investigation have been reported (Hashimoto et al., 2001a), and the results of applying the adaptive beamformer technique to the response from median nerve stimulation have also been reported (Hashimoto et al., 2001b).
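The covariance-and-projection structure of the eigenspace-projected adaptive beamformer used in these experiments can be sketched numerically. The following is not the authors' implementation: the lead field, noise level, window length, and signal-subspace rank are all invented for illustration, and only the generic structure (minimum-variance weights from a window covariance, then projection onto the dominant eigenvectors) is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_times = 32, 400

# Hypothetical lead-field vector for one source location (unit norm).
lf = rng.standard_normal(n_sensors)
lf /= np.linalg.norm(lf)

# Synthetic measurements: one 10 Hz source plus sensor noise.
s = np.sin(2 * np.pi * 10 * np.arange(n_times) / 1000.0)
b = np.outer(lf, s) + 0.1 * rng.standard_normal((n_sensors, n_times))

# Sample covariance over the analysis window (cf. R_b in the text).
R = b @ b.T / n_times

# Minimum-variance weights: w = R^{-1} lf / (lf^T R^{-1} lf).
Ri_lf = np.linalg.solve(R, lf)
w = Ri_lf / (lf @ Ri_lf)

# Eigenspace projection: keep only the dominant signal subspace of R
# (rank 1 is assumed here because a single source was simulated).
eigvals, eigvecs = np.linalg.eigh(R)
E_s = eigvecs[:, -1:]
w_proj = E_s @ (E_s.T @ w)

# Reconstructed source time course and its fit to the true waveform.
s_hat = w_proj @ b
r = np.corrcoef(s_hat, s)[0, 1]
```

With a well-conditioned covariance the projected weights differ little from the plain minimum-variance weights; the benefit of the projection appears at low SNR, where it suppresses the noise-subspace contribution to the weights.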
ACKNOWLEDGMENTS

The authors would like to thank Dr. D. Poeppel, Dr. A. Marantz, and Dr. T. Roberts for providing the auditory data. We are also grateful to Dr. I. Hashimoto and Dr. K. Sakuma for
Neuromagnetic Source Reconstruction and Inverse Modeling
247
FIGURE 7.14. The source magnitude reconstruction results at a latency of 36.9 ms. The source magnitude map was properly thresholded and overlaid onto the sagittal cross section of the subject's MRI. The colors represent the relative intensity of the source magnitude; the relationship between the colors and relative intensities is indicated by the color bar. The anterior source was probably in area 3b and the posterior source was in an area near the marginal sulcus. The separation of the two sources was approximately 7 mm in this case. See the attached CD for color figure.
providing the somatosensory data and for useful discussions regarding the interpretation of the reconstructed results. This work has been supported by Grants-in-Aid from the Kayamori Foundation of Informational Science Advancement; Grants-in-Aid from the Suzuki Foundation; and Grants-in-Aid from the Ministry of Education, Science, Culture and Sports in Japan (C13680948). This work has also been supported by the Whitaker Foundation and by the National Institutes of Health (P41RR12553-03 and R01-DC004855-01A1).
REFERENCES

Adachi, Y., Shimogawara, M., Higuchi, M., Haruta, Y., and Ochiai, M., 2001, Reduction of non-periodical extramural magnetic noise in MEG measurement by continuously adjusted least squares method, in Proceedings of 12th International Conference on Biomagnetism, (R. Hari et al., eds.), Helsinki University of Technology, pp. 899-902.
Dale, A. M., Liu, A. K., Fischl, B. R., Buckner, R. L., Belliveau, J. W., Lewine, J. D., and Halgren, E., 2000, Dynamic statistical parametric mapping: Combining fMRI and MEG for high-resolution imaging of cortical activity, Neuron, 26, pp. 55-67.
Baillet, S. and Garnero, L., 1997, A Bayesian approach to introducing anatomo-functional priors in the EEG/MEG inverse problem, IEEE Trans. Biomed. Eng., 44, pp. 374-385.
Baillet, S., Mosher, J. C., and Leahy, R. M., 2001, Electromagnetic brain mapping, IEEE Signal Processing Magazine, 18, pp. 14-30.
Barnard, A., Duck, I., Lynn, M., and Timlake, W., 1967, The application of electromagnetic theory to electrocardiography II. Numerical solution of the integral equations, Biophys. J., 7, pp. 433-462.
Borgiotti, G. and Kaplan, L. J., 1979, Superresolution of uncorrelated interference sources by using adaptive array technique, IEEE Trans. Antenn. Propagat., 27, pp. 842-845.
Bradley, C. P., Harris, G. M., and Pullan, A. J., 2001, The computational performance of a high-order coupled FEM/BEM procedure in electropotential problems, IEEE Trans. Biomed. Eng., 48, pp. 1238-1250.
Carlson, B. D., 1988, Covariance matrix estimation errors and diagonal loading in adaptive arrays, IEEE Trans. Aerospace and Electronic Systems, 24, pp. 397-401.
Chang, L. and Yeh, C. C., 1992, Performance of DMI and eigenspace-based beamformers, IEEE Trans. Antenn. Propagat., 40, pp. 1336-1347.
Chang, L. and Yeh, C. C., 1993, Effect of pointing errors on the performance of the projection beamformer, IEEE Trans. Antenn. Propagat., 41, pp. 1045-1056.
Clarke, J., 1994, SQUIDs, Scientific American, 271, pp. 36-43.
Cox, H., Zeskind, R. M., and Owen, M. M., 1987, Robust adaptive beamforming, IEEE Trans. Signal Process., 35, pp. 1365-1376.
Cuffin, B. N. and Cohen, D., 1977, Magnetic fields of a dipole in special volume conductor shapes, IEEE Trans. Biomed. Eng., 24, pp. 372-381.
Cuffin, B. N., 1991, Eccentric spheres models of the head, IEEE Trans. Biomed. Eng., 38, pp. 871-878.
Cuffin, B. N., 1996, EEG localization accuracy improvements using realistically shaped head models, IEEE Trans. Biomed. Eng., 43, pp. 299-303.
de Peralta Menendez, R. G., Gonzalez Andino, S., and Lütkenhöner, B., 1996, Figures of merit to compare distributed linear inverse solutions, Brain Topography, 9, pp. 117-124.
de Peralta Menendez, R. G., Hauk, O., Gonzalez Andino, S., Vogt, H., and Michel, C., 1997, Linear inverse solutions with optimal resolution kernels applied to electromagnetic tomography, Human Brain Mapping, 5, pp. 454-467.
Drung, D., Cantor, R., Peters, M., Ryhanen, P., and Koch, H., 1991, Integrated DC SQUID magnetometer with high dV/dB, IEEE Trans. Magn., 27, pp. 3001-3004.
Feldman, D. D. and Griffiths, L. J., 1991, A constrained projection approach for robust adaptive beamforming, in Proc. Int. Conf. Acoust., Speech, Signal Process., Toronto, May, pp. 1357-1360.
Frost, O. L., 1972, An algorithm for linearly constrained adaptive array processing, Proc. IEEE, 60, pp. 926-935.
Fuchs, M., Drenckhahn, R., Wischmann, H.-A., and Wagner, M., 1998, An improved boundary element method for realistic volume-conductor modeling, IEEE Trans. Biomed. Eng., 45, pp. 980-997.
Geselowitz, D. B., 1970, On the magnetic field generated outside an inhomogeneous volume conductor by internal current sources, IEEE Trans. Magn., 6, pp. 346-347.
Graumann, R., 1991, The reconstruction of current densities, Tech. Rep. TKK-F-A689, Helsinki University of Technology.
Gross, J. and Ioannides, A. A., 1999, Linear transformations of data space in MEG, Phys. Med. Biol., 44, pp. 2081-2097.
Gross, J., Kujala, J., Hämäläinen, M. S., Timmermann, L., Schnitzler, A., and Salmelin, R., 2001, Dynamic imaging of coherent sources: Studying neural interactions in the human brain, Proceedings of the National Academy of Sciences, 98, pp. 694-699.
Hämäläinen, M. S. and Ilmoniemi, R. J., 1984, Interpreting measured magnetic fields of the brain: Estimates of current distributions, Tech. Rep. TKK-F-A559, Helsinki University of Technology.
Hämäläinen, M. S. and Sarvas, J., 1989, Realistic conductivity geometry model of the human head for interpretation of neuromagnetic data, IEEE Trans. Biomed. Eng., 36, pp. 165-171.
Hämäläinen, M. S., Hari, R., Ilmoniemi, R. J., Knuutila, J., and Lounasmaa, O. V., 1993, Magnetoencephalography: theory, instrumentation, and applications to noninvasive studies of the working human brain, Rev. Mod. Phys., 65, pp. 413-497.
Hämäläinen, M. S. and Ilmoniemi, R. J., 1994, Interpreting magnetic fields of the brain: minimum norm estimates, Med. & Biol. Eng. & Comput., 32, pp. 35-42.
Hashimoto, I., Sakuma, K., Kimura, T., Iguchi, Y., and Sekihara, K., 2001a, Serial activation of distinct cytoarchitectonic areas of the human SI cortex after posterior tibial nerve stimulation, NeuroReport, 12, pp. 1857-1862.
Hashimoto, I., Kimura, T., Iguchi, Y., Takino, R., and Sekihara, K., 2001b, Dynamic activation of distinct cytoarchitectonic areas of the human SI cortex after median nerve stimulation, NeuroReport, 12, pp. 1891-1897.
Herman, G. T., 1980, Image Reconstruction from Projections, Academic Press, New York.
Lewine, J. D. and Orrison Jr., W. W., 1995, Magnetoencephalography and magnetic source imaging, in Functional Brain Imaging, (W. W. Orrison Jr. et al., eds.), Mosby-Year Book, Inc., pp. 369-417.
Liu, A. K., Belliveau, J. W., and Dale, A. M., 1998, Spatiotemporal imaging of human brain activity using functional MRI constrained magnetoencephalography data: Monte Carlo simulations, Proc. Natl. Acad. Sci., 95, pp. 8945-8950.
Lütkenhöner, B. and de Peralta Menendez, R. G., 1997, The resolution-field concept, Electroenceph. Clin. Neurophysiol., 102, pp. 326-334.
Mosher, J. C., Lewis, P. S., and Leahy, R. M., 1992, Multiple dipole modeling and localization from spatio-temporal MEG data, IEEE Trans. Biomed. Eng., 39, pp. 541-557.
Okada, Y., Lauritzen, M., and Nicholson, C., 1987, MEG source models and physiology, Phys. Med. Biol., 32, pp. 43-51.
Parkkonen, L. T., Simola, J. T., Tuoriniemi, J. T., and Ahonen, A. I., 1999, An interference suppression system for multichannel magnetic field detector arrays, in Recent Advances in Biomagnetism, (T. Yoshimoto et al., eds.), Tohoku University Press, Sendai, pp. 13-16.
Pascual-Marqui, R. D. and Michel, C. M., 1994, Low resolution electromagnetic tomography: A new method for localizing electrical activity in the brain, Int. J. Psychophysiol., 18, pp. 49-65.
Paulraj, A., Ottersten, B., Roy, R., Swindlehurst, A., Xu, G., and Kailath, T., 1993, Subspace methods for directions-of-arrival estimation, in Handbook of Statistics, (N. K. Bose and C. R. Rao, eds.), Elsevier Science Publishers, Netherlands, pp. 693-739.
Roberts, T. P. L., Poeppel, D., and Rowley, H. A., 1998, Magnetoencephalography and magnetic source imaging, Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 11, pp. 49-64.
Robinson, S. E. and Vrba, J., 1999, Functional neuroimaging by synthetic aperture magnetometry (SAM), in Recent Advances in Biomagnetism, (T. Yoshimoto et al., eds.), Tohoku University Press, Sendai, pp. 302-305.
Sarvas, J., 1987, Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem, Phys. Med. Biol., 32, pp. 11-22.
Scharf, L. L., 1991, Statistical Signal Processing: Detection, Estimation, and Time Series Analysis, Addison-Wesley Publishing Company, New York.
Schmidt, D. M., George, J. S., and Wood, C. C., 1999, Bayesian inference applied to the electromagnetic inverse problem, Human Brain Mapping, 7, pp. 195-212.
Schmidt, R. O., 1981, A signal subspace approach to multiple emitter location and spectral estimation, PhD thesis, Stanford University, Stanford, CA.
Schmidt, R. O., 1986, Multiple emitter location and signal parameter estimation, IEEE Trans. Antenn. Propagat., 34, pp. 276-280.
Sekihara, K., Poeppel, D., Marantz, A., and Miyashita, Y., 2000, Neuromagnetic inverse modeling: applications of eigenstructure-based approaches to extracting cortical activities from MEG data, in Image, Language, Brain, (A. Marantz et al., eds.), The MIT Press, Cambridge, pp. 197-231.
Sekihara, K. and Scholz, B., 1996, Generalized Wiener estimation of three-dimensional current distribution from biomagnetic measurements, in Biomag 96: Proceedings of the Tenth International Conference on Biomagnetism, (C. J. Aine et al., eds.), Springer-Verlag, New York, pp. 338-341.
Sekihara, K., Nagarajan, S. S., Poeppel, D., Marantz, A., and Miyashita, Y., 2001, Reconstructing spatio-temporal activities of neural sources using an MEG vector beamformer technique, IEEE Trans. Biomed. Eng., 48, pp. 760-771.
Spencer, M. E., Leahy, R. M., Mosher, J. C., and Lewis, P. S., 1992, Adaptive filters for monitoring localized brain activity from surface potential time series, in Conference Record of the 26th Annual Asilomar Conference on Signals, Systems, and Computers, November, pp. 156-161.
van Drongelen, W., Yuchtman, M., van Veen, B. D., and van Huffelen, A. C., 1996, A spatial filtering technique to detect and localize multiple sources in the brain, Brain Topography, 9, pp. 39-49.
van Veen, B. D. and Buckley, K. M., 1988, Beamforming: A versatile approach to spatial filtering, IEEE ASSP Magazine, 5, pp. 4-24, April.
van Veen, B. D., 1988, Eigenstructure based partially adaptive array design, IEEE Trans. Antenn. Propagat., 36, pp. 357-362.
van Veen, B. D., van Drongelen, W., Yuchtman, M., and Suzuki, A., 1997, Localization of brain electrical activity via linearly constrained minimum variance spatial filtering, IEEE Trans. Biomed. Eng., 44, pp. 867-880.
van't Ent, D., de Munck, J. C., and Kaas, A. L., 2001, A fast method to derive realistic BEM models for E/MEG source reconstruction, IEEE Trans. Biomed. Eng., 48, pp. 1434-1443.
Vrba, J. and Robinson, S., 2001, The effect of environmental noise on magnetometer- and gradiometer-based MEG systems, in Proceedings of 12th International Conference on Biomagnetism, (R. Hari et al., eds.), Helsinki University of Technology, pp. 953-956.
Wagner, M., Fuchs, M., Wischmann, H.-A., Drenckhahn, R., and Kohler, T., 1996, Smooth reconstruction of cortical sources from EEG or MEG recordings, NeuroImage, 3, p. S168.
Wang, J. Z., Williamson, S. J., and Kaufman, L., 1992, Magnetic source images determined by a lead-field analysis: The unique minimum-norm least-squares estimation, IEEE Trans. Biomed. Eng., 39, pp. 565-575.
Yu, J. L. and Yeh, C. C., 1995, Generalized eigenspace-based beamformers, IEEE Trans. Signal Process., 43, pp. 2453-2461.
8 MULTIMODAL IMAGING FROM NEUROELECTROMAGNETIC AND FUNCTIONAL MAGNETIC RESONANCE RECORDINGS

Fabio Babiloni and Febo Cincotti
Dipartimento di Fisiologia Umana e Farmacologia, Universita di Roma "La Sapienza", Roma, Italy
8.1 INTRODUCTION

Human neocortical processes involve temporal and spatial scales spanning several orders of magnitude, from the rapidly shifting somatosensory processes characterized by a temporal scale of milliseconds and a spatial scale of a few square millimeters to memory processes, involving time periods of seconds and a spatial scale of square centimeters. Information about brain activity can be obtained by measuring different physical variables arising from the brain processes, such as the increase in consumption of oxygen by the neural tissues or a variation of the electric potential over the scalp surface. All these variables are connected in a direct or indirect way to the ongoing neural processes, and each variable has its own spatial and temporal resolution. The different neuroimaging techniques are thus confined to the spatiotemporal resolution offered by the monitored variables. For instance, it is known from physiology that the temporal resolution of the hemodynamic deoxyhemoglobin increase/decrease lies in the range of 1-2 seconds, while its spatial resolution is generally observable with the current imaging techniques at a scale of a few millimeters. Today, no neuroimaging method offers both a spatial resolution on a mm scale and a temporal resolution on a msec scale. Hence, it is of interest to study the possibility of integrating the information offered by the different physiological variables in a unique mathematical context. This operation is called the "multimodal integration" of variables X and Y, where the X variable typically has a particularly appealing spatial resolution (mm scale) and the Y variable has particularly attractive temporal properties (ms scale). Nevertheless, the issue of several temporal and spatial domains is

Corresponding author: Dr. Fabio Babiloni, Dipartimento di Fisiologia Umana e Farmacologia, Universita di Roma "La Sapienza", P.le A. Moro 5, 00185 Roma, Italy, Tel: +39-06-49910317, Fax: +39-06-49910917, Email: fabio.babiloni@uniroma1.it
252
F. Babiloni and F. Cincotti
critical in the study of brain functions, since different properties may become observable depending on the spatio-temporal scales at which the brain processes are measured. Electroencephalography (EEG) and magnetoencephalography (MEG) are two interesting techniques that present a high temporal resolution, on the millisecond scale, adequate to follow brain activity. Unfortunately, both techniques have a relatively modest spatial resolution, beyond the centimeter. In spite of a lack of spatial resolution, neural sources can be localized from EEG or MEG data by making a priori hypotheses on their number and extension. A more detailed description of the techniques involved in high resolution EEG and MEG recordings and imaging can be found in Chapter 7 and Chapter 8. Here, we briefly recall that the so-called high resolution EEG methods include: (i) a multi-compartment head model of the subject (scalp, skull, dura mater, cortex) constructed from magnetic resonance images; (ii) the sampling of EEG potentials from 64-128 electrodes; and (iii) the computation of the surface Laplacian from the scalp potential recordings and/or the use of a multi-dipole source model for characterizing the active neural sources. However, the spatial resolution of the EEG/MEG techniques is fundamentally limited by the inter-sensor distances and by the fundamental laws of electromagnetism (Nunez, 1981). On the other hand, the use of a priori information from other neuroimaging techniques with high spatial resolution, like functional magnetic resonance imaging (fMRI), could improve the localization of sources from EEG/MEG data. This chapter deals with the multimodal integration of electrical, magnetic and hemodynamic data to locate neural sources responsible for the recorded EEG/MEG activity.
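As a toy illustration of the surface Laplacian step listed above, the simplest nearest-neighbor (Hjorth) estimate subtracts from each electrode the mean of its neighbors. The 5-electrode layout below is hypothetical; practical high-resolution EEG work uses spherical-spline or realistic-geometry Laplacian estimators instead.

```python
import numpy as np

def hjorth_laplacian(potentials, neighbors):
    """Nearest-neighbor (Hjorth) surface Laplacian estimate.

    potentials: (n_electrodes,) scalp potentials at one time instant
    neighbors:  dict mapping electrode index -> list of neighbor indices
    """
    lap = np.zeros_like(potentials, dtype=float)
    for i, nbrs in neighbors.items():
        # Local Laplacian: potential minus the mean of its neighbors.
        lap[i] = potentials[i] - np.mean(potentials[nbrs])
    return lap

# Toy 5-electrode cross: center electrode 0 surrounded by electrodes 1-4.
v = np.array([2.0, 1.0, 1.0, 1.0, 1.0])
neighbors = {0: [1, 2, 3, 4]}
lap = hjorth_laplacian(v, neighbors)
```

Because the Laplacian acts as a spatial high-pass filter, the flat surround contributes nothing and only the local peak at the center electrode survives.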
The rationale of the multimodal approach based on fMRI, MEG and EEG data to locate brain activity is that the neural activity generating EEG potentials or MEG fields increases glucose and oxygen demands (Magistretti et al., 1999). This results in an increase in the local hemodynamic response that can be measured by fMRI (Grinvald et al., 1986; Puce et al., 1997). On the whole, such a correlation between electrical and hemodynamic concomitants provides the basis for a spatial correspondence between fMRI responses and EEG/MEG source activity. The chapter is organized as follows: first, a brief introduction to the principles at the basis of fMRI recordings will be presented; then the principal techniques used with EEG and MEG for locating neural sources will be recalled, with special emphasis on cortical imaging and linear distributed solutions. This last technique will be employed to show both the mathematical principles and the practical applications of the multimodal integration of EEG, MEG and fMRI for the localization of the sources responsible for intentional movements.
8.2 GENERALITIES ON FUNCTIONAL MAGNETIC RESONANCE IMAGING

A brain imaging method known as fMRI has gained favor among neuroscientists over the last few years. Functional MRI reflects oxygen consumption and, as oxygen consumption is tied to processing or neural activation, can give a map of functional activity. When neurons fire, they consume oxygen, and this causes the local oxygen levels to briefly decrease and then actually increase above the resting level as nearby capillaries dilate to let more oxygenated blood flow into the active area. The most used acquisition paradigm is the so-called Blood Oxygen Level Dependent (BOLD) paradigm, in which the fMRI scanner works by imaging blood oxygenation. The BOLD paradigm relies on the brain mechanisms that overcompensate for oxygen usage (activation causes an influx of oxygenated blood in excess of that
253
Multimodal Imaging from Neuroelectromagnetic
FIGURE 8.1. Physiologic principle at the base of the generation of fMRI signals. A) Neurons increase their firing rates, increasing also the oxygen consumption. B) The hemodynamic response, on a scale of seconds, increases the diameter of the vessels close to the activated neurons. The induced increase in blood flow overcomes the need for oxygen supply. As a consequence, the percentage of deoxyhemoglobin in the blood flow in the vessel decreases with respect to panel A). See the attached CD for color figure.
used and therefore the local oxyhemoglobin concentration increases). Oxygen is carried to the brain in the hemoglobin molecules of red blood cells. Fig. 8.1 shows the physiologic principle at the base of the generation of fMRI signals. The figure shows how the hemodynamic responses elicited by an increased neuronal activity (A) produce a decrease in the deoxyhemoglobin content of the blood flow in the same neuronal district after a few seconds (B). The magnetic properties of hemoglobin differ when it is saturated with oxygen compared to when it has given up its oxygen. Technically, deoxygenated hemoglobin is "paramagnetic" and therefore has a short T2 relaxation time. As the ratio of oxygenated to deoxygenated hemoglobin increases, so does the signal recorded by the MRI scanner. Deoxyhemoglobin increases the rate of dephasing of the hydrogen nuclei creating the MR signal and thus decreases the intensity of the T2 image. The bottom line is that the intensity of the images increases with increasing brain activation. The problem is that at the standard intensity used for the static magnetic field (1.5 Tesla) this increase is small (usually less than 2%) and easily obscured by noise and various artifacts. By increasing the static field of the fMRI scanner, the signal-to-noise ratio increases to more convenient values. Static field values of 3 Tesla are now commonly used for research on humans, while recently an fMRI scanner at 7 Tesla was employed to map hemodynamic responses in the human brain (Bonmassar et al., 2001). At such high field values, the possibility of detecting the initial increase of deoxyhemoglobin (the initial "dip") increases. The interest in the detection of the dip is based on the fact that this hemodynamic response happens on a timescale of 500 ms (as revealed by hemodynamic optical measures; Malonek and Grinvald, 1996), compared to the 1-2 seconds needed for the response of the vascular system to the oxygen demand.
Furthermore, in the latter case the response has a temporal extension well beyond the duration of the activation (about 10 seconds). As a last point, the spatial distribution of the initial dip (as described by using optical dyes;
Malonek and Grinvald, 1996) is sharper than that related to the vascular response of the oxygenated hemoglobin. Recently, with high field strength MR scanners at 7 or even 9.4 Tesla (on animals), a resolution down to the cortical column level has been achieved (Kim et al., 2000). However, at the standard field intensities commonly used in fMRI studies (1.5 or 3 Tesla), the identification of such an initial transient increase of deoxyhemoglobin is controversial. Compared to positron emission tomography (PET) or single photon emission computed tomography (SPECT), fMRI does not require the injection of radio-labeled substances, and its images have a higher resolution (reviewed in Rosen et al., 1998). PET, however, is still the most informative technique for directly imaging metabolic processes and neurotransmitter turnover.
8.2.1 BLOCK-DESIGN AND EVENT-RELATED fMRI

Though dynamic fMRI experiments were early recognized to be fundamentally different from previous hemodynamically based functional imaging methods (like, for instance, positron emission tomography, PET), early studies in fMRI typically used experimental paradigms that could have been easily performed by using the previous nuclear technologies. Specifically, most experiments were performed by using extended periods of "on" versus "off" activations, in what is called the block-design paradigm. Such a paradigm had been used in dozens of functional studies of sensory and higher cortical function using PET and single photon emission computed tomography (SPECT) for more than a decade. Nevertheless, although such block designs are a necessity when imaging hemodynamics by using techniques that require a quasi-equilibrium physiological state for periods up to 1 min, they are clearly not required for fMRI experiments, where activity is detectable within seconds from stimulus onset. The movement away from block designs was gradual and aided by a number of studies exploring fMRI signal responses to brief stimulus events (lasting 2 seconds or less; Blamire et al., 1992). A detectable signal change in fMRI was shown to be produced by a 2 s or shorter stimulus (Blamire et al., 1992; Bandettini, 1993). Moreover, it was also shown that visual stimulation as brief as 34 msec in duration could elicit small, but clearly detectable, signal changes (Savoy et al., 1995). All these data suggest that fMRI is sensitive to transient phenomena and can provide at least some degree of quantitative information on the underlying neuronal behavior. Together, these results thus suggest that it should be possible to interpret transient fMRI signal changes in ways directly analogous to electrophysiologic evoked potentials.
A first step in this direction was made by Dale and Buckner (reviewed in Rosen et al., 1998), who showed that visual stimuli lateralized to one hemifield could be detected within intermixed trial paradigms. By using methods similar to those applied in the field of evoked response potential research, the trials were selectively averaged to reveal the predicted pattern of contralateral visual cortex activation. Taken together with the above observations, these collective data demonstrate convincingly that fMRI is capable of detecting changes related to single-task events and brief epochs of stimulation. Hence, the paradigm in which the fMRI information is collected on a trial-by-trial basis is called "event-related fMRI".
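The selective trial averaging described above can be sketched on synthetic data; the hemodynamic response shape, onset indices, and noise level here are all invented for illustration, not taken from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans = 200

# Hypothetical hemodynamic response sampled on the scan grid (TR ~ 2 s).
hrf = np.array([0.0, 0.4, 1.0, 0.7, 0.3, 0.1, 0.0])

# Intermixed trials of two conditions, as in an event-related design.
onsets = {"left": [10, 50, 90, 130], "right": [30, 70, 110, 150]}

# Synthetic voxel time course: a response to every trial plus noise.
signal = 0.2 * rng.standard_normal(n_scans)
for idx_list in onsets.values():
    for idx in idx_list:
        signal[idx:idx + len(hrf)] += hrf

def selective_average(signal, onset_indices, n_post=7):
    """Average the peristimulus time courses of one condition's trials."""
    epochs = np.stack([signal[i:i + n_post] for i in onset_indices])
    return epochs.mean(axis=0)

avg_left = selective_average(signal, onsets["left"])
```

Averaging over the four "left" trials suppresses the noise while the time-locked hemodynamic response survives, exactly as in evoked-potential averaging.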
8.3 INVERSE TECHNIQUES

The ultimate goal of any EEG, MEG and fMRI recording is to produce information about the brain activity of a subject during a particular sensorimotor or cognitive task.
The mathematical procedures that allow us to recover information about the activity of the neural sources from non-invasive EEG/MEG recordings are called inverse techniques. Inverse techniques have been systematically treated and reviewed, with application to ECG (Chapter 4) and MEG (Chapter 7). Mathematical models for the head as a volume conductor and for the neural sources are employed by linear and non-linear minimization procedures to localize putative sources of EEG data. Several studies have indicated the adequacy of the equivalent current dipole as a model for the cortical sources (Nunez, 1981, 1995), while the importance of realistic-geometry head volume conductor models for the localization of cortical activity has been stressed more recently (Gevins, 1989; Gevins et al., 1991, 1999; Nunez, 1995). Results of previous intracranial EEG studies have lent support to the idea that high resolution EEG techniques (including head/source models and properly regularized inverse procedures) might model with an acceptable approximation the strength and extension of cortical sources of surface EEG data, at least in certain conditions (Le and Gevins, 1993; Gevins et al., 1994; He et al., 2002). We briefly present a survey of the principal inverse techniques, with particular emphasis on the so-called "distributed" solutions, which we will use to demonstrate the multimodal integration of EEG/MEG and fMRI.
8.3.1 ACQUISITION OF VOLUME CONDUCTOR GEOMETRY

A key point of high-resolution EEG and MEG technologies is the availability of an accurate model of the head as a volume conductor, obtained by using anatomical MRIs. These images are acquired by using the MRI facilities largely available in research and clinical institutions worldwide. Reference landmarks such as the nasion, inion, vertex, and pre-auricular points may be labeled using vitamin E pills as markers. T1-weighted MR images are typically used, since they present maximal contrast between the structures of interest. Contouring algorithms allow the segmentation of the principal tissues (scalp, skull, dura mater) from the MR images (Dale et al., 1999). Separate surfaces of the scalp, skull, dura mater and cortical envelopes are extracted for each experimental subject, yielding a closed triangulated mesh. This procedure produces an initial description of the anatomical structure that uses several hundred thousand points, far too many for the subsequent mathematical procedures. These structures are thus down-sampled and triangulated to produce scalp, skull and dura mater geometrical models with about 1000-1300 triangles for each surface. Such triangulations have been found adequate to model the spatial structure of these head tissues. A different number of triangles is used in modeling the cortical surface, since its envelope is more convoluted than the scalp, skull and dura mater structures. A number of triangles varying from 5000 to 6000 may be used to model the cortical envelope for the purpose of following the spatial shape of the cerebral cortex. In order to allow coregistration with other geometrical information, the coordinates of the triangulated structures are referred to an orthogonal coordinate system (x, y, z) based on the positions of the nasion and the pre-auricular points extracted from the MR images.
For instance, the midpoint of the line connecting the pre-auricular points can be set as the origin of the coordinate system, with the y axis going through the right pre-auricular point, the x axis lying on the plane determined by the nasion and the pre-auricular points (directed anteriorly), and the z axis normal to this plane (directed upward). Once the model of the scalp surface has been generated, the integration of the electrode positions is accomplished by using the information about the sensor locations
produced by the 3-D digitizer. The sensor positions on the scalp model are determined by using a non-linear fitting technique. Fig. 8.2 shows the result of the integration of the EEG scalp electrode positions with a realistic head model.

FIGURE 8.2. Realistic MRI-constructed head of a human subject. Electrode positions (128) are shown on the MRI-constructed scalp surface and on the underlying cortex surface.
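The fiducial-based coordinate system described above can be built directly from the three landmark positions. A minimal sketch follows, with invented fiducial coordinates; the cross-product ordering is chosen so that the z axis points upward for this particular x/y convention.

```python
import numpy as np

def fiducial_frame(nasion, left_pa, right_pa):
    """Head coordinate frame from three fiducials, per the convention above.

    Origin: midpoint of the pre-auricular (PA) points.
    y axis: through the right PA point.
    x axis: in the nasion/PA plane, directed anteriorly.
    z axis: normal to that plane, directed upward.
    Returns (origin, 3x3 matrix whose rows are the x, y, z axes).
    """
    origin = (left_pa + right_pa) / 2.0
    y = right_pa - origin
    y = y / np.linalg.norm(y)
    # Remove the y component from the nasion direction to get the x axis.
    n = nasion - origin
    x = n - (n @ y) * y
    x = x / np.linalg.norm(x)
    # Ordered so that z points upward with x anterior and y to the right.
    z = np.cross(y, x)
    return origin, np.vstack([x, y, z])

# Invented fiducial coordinates (meters, scanner space) for illustration.
nasion = np.array([0.0, 0.10, 0.0])
left_pa = np.array([-0.07, 0.0, 0.0])
right_pa = np.array([0.07, 0.0, 0.0])
origin, axes = fiducial_frame(nasion, left_pa, right_pa)
```

Any digitized sensor position p can then be expressed in head coordinates as `axes @ (p - origin)`, which is what makes coregistration between the digitizer and the MRI-derived surfaces possible.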
8.3.2 DIPOLE LOCALIZATION TECHNIQUES

Dipole localization techniques produce estimates of the position and moment of one or several equivalent current dipoles localized in a head model from non-invasive EEG and/or MEG recordings. From the position of the localized current dipoles in the head model, inferences about the neural sources in the real brain are drawn. So far, two approaches to dipole localization have become popular in neuroscience, and both rely on the solution of non-linear minimization problems. The first approach is the so-called "moving dipole" method (Cohen et al., 1990). Dipoles are found at a succession of discrete times, with no a priori assumption about the relation between the localized dipoles at different time instants. In general, it is difficult to locate more than two dipoles for each potential/magnetic field recording, due to the numerical instability of the inverse procedure. Even with this limitation, however, this procedure is very popular and allows locating sources, mainly in the primary sensory cortical areas. Fig. 8.3 shows the localization of a current dipole (the red arrow) indicating the restricted cortical area responsible for the generation of the characteristic magnetic field distribution recorded by the magnetic sensors 20 ms after a stimulus delivered at the right wrist in humans.
Multimodal Imaging from Neuroelectromagnetic
FIGURE 8.3. Localization of the equivalent current dipole (the red arrow) indicating the restricted cortical area responsible for the generation of the characteristic magnetic field distribution occurring over the magnetic sensors 20 ms after a stimulus delivered at the right wrist of a subject (N20/P20). The position of the dipole is integrated into a realistic head model built by segmenting sequential magnetic resonance images of the subject. See the attached CD for color figure.
The position of the localized dipole was then integrated into a realistic head model built according to the procedures described above. The second approach to dipole localization combines both the spatial and temporal properties of the scalp potentials/fields, to increase the ratio of available data to degrees of freedom in the minimization procedure. This results in an increase in the number of dipoles that may be reliably localized from EEG and MEG recordings. Different constraints are applied to find the best inverse solutions, for example fixing the positions of the dipoles and estimating the time series of the dipole moments, or determining the orientation of the dipoles and fixing their positions. This last approach is called multiple source analysis (MSA; Scherg and von Cramon, 1984; Ebersole, 1997, 1999; Scherg et al., 1999).
8.3.3 CORTICAL IMAGING

The possibility of modeling the complex head geometry with the finite element technique allowed Alan Gevins and colleagues to derive a method, called deblurring, that estimates the potential distribution on the dura mater surface from non-invasive EEG recordings (Le and Gevins, 1993; Gevins et al., 1994). This method still uses non-linear minimization techniques but does not use any explicit model of the neural sources. In fact, by just applying Poisson's equation, Gevins and co-workers were able to move back from the scalp potential distribution to the dura mater potential distribution. This method was also validated by
FIGURE 8.4. Possible representation of the cortical imaging technique. The acquired EEG scalp potentials (1) are used to estimate the cortical dipole strengths at the dipole layer level, here represented with the realistic cortical surface. The estimated cortical dipole strengths (2) are then used to generate the potential distribution over a dura mater surface (3) using the basic laws of electromagnetism. Such a distribution can be generated at any other modeled head structure.
using epicortical recordings, and the deblurred dura mater potential distributions showed a clear improvement with respect to the examination of the raw potential distributions over the scalp. It is worth noting that the mathematical model supporting the deblurring method is not suitable to accommodate fMRI or PET information. In fact, mathematical frameworks that allow integration between electromagnetic and metabolic modalities require the sources of current in the brain to be explicitly modeled. Another technique useful to recover improved images of cortical distributions from EEG scalp recordings is known as cortical imaging. In this technique, an explicit model of the neural sources, i.e. the current dipole, is used. In general, a layer of current dipoles simulates the cortical surface, and the retrieved dipole strengths are then used to generate potential distributions over a surface of the head model simulating the dura mater. Fig. 8.4 shows the idea at the base of the cortical imaging technique. A three-shell (scalp, inner and outer surface of the skull) realistic head volume conductor is represented, together with the cortical dipole layer. It has been proven that even the use of a homogeneous spherical volume conductor for the head and a realistic cortical surface for the dipole layer provides more focused and detailed information than the raw scalp potentials (Sidman et al., 1992; Srebro et al., 1993; Srebro and Oguz, 1997). However, it must be noted that the conductivity ratio between skull and scalp is far from the 1:1 assumed in homogeneous models. The value adopted for this ratio by researchers in the field over the last 30 years is 1:80 (Rush and Driscoll, 1968), or even 1:15 as stated more recently (Oostendorp et al., 2000).
According to this observation, several researchers (He et al., 1996; Babiloni et al., 1997; He, 1999; He et al., 1999) developed cortical imaging techniques that take into account the inhomogeneity of the head as a volume conductor by using realistic head models and boundary element mathematics. With regularization, dura mater potentials obtained both
from simulations and real recordings presented improved spatial characteristics with respect to the raw scalp potentials. It is worth noting that the mathematical framework of the cortical imaging technique allows in principle the integration of fMRI priors. In fact, since the cortical imaging method is a linear inverse technique (He, 1999), all the multimodal integration we will present in the following paragraphs for EEG/MEG and fMRI data in the distributed linear inverse solution can in theory also be applied to cortical imaging. Besides the use of Green's second identity, another approach to the imaging of the cortical potential distribution from non-invasive recordings was made by using spherical harmonic functions (Nunez et al., 1994; Edlinger et al., 1998). However, the mathematical framework developed for this estimation of cortical potentials does not easily allow the integration of fMRI information. In fact, since this method recovers the "deblurred" cortical potential distribution but not the cortical current strengths, it is difficult to integrate the information about the activation of patches of cortical tissue obtained by fMRI.
8.3.4 DISTRIBUTED LINEAR INVERSE ESTIMATION

As seen before, when the EEG activity is mainly generated by circumscribed cortical sources (i.e. short-latency evoked potentials/magnetic fields), the location and strength of these sources can be reliably estimated by the dipole localization technique (Scherg et al., 1984; Salmelin et al., 1995). In contrast, when EEG activity is generated by extended cortical sources (i.e. event-related potentials/magnetic fields), the underlying cortical sources can be described by using a distributed source model with spherical or realistic head models (Grave de Peralta et al., 1997; Pascual-Marqui, 1995; Dale and Sereno, 1993). With this approach, typically thousands of equivalent current dipoles covering the modeled cortical surface and located at the triangle centers are used, and their strengths are estimated by using linear and non-linear inverse procedures (Dale and Sereno, 1993; Uutela et al., 1999). Taking into account the measurement noise n, supposed to be normally distributed, an estimate of the dipole source configuration x that generated a measured potential b can be obtained by solving the linear system:
Ax + n = b    (8.1)
where A is an m x n matrix with the number of rows equal to the number of sensors and the number of columns equal to the number of modeled sources. We denote with A_j the potential distribution over the m sensors due to the unitary j-th cortical dipole. The collection of all the m-dimensional vectors A_j (j = 1, ..., n) describes how each dipole generates the potential distribution over the head model, and this collection is called the lead field matrix A. This is a strongly under-determined linear system, in which the number of unknowns, the dimension of the vector x, is greater than the number of measurements b by about one order of magnitude. In this case we know from linear algebra that infinitely many solutions for the dipole strength vector x are available, each explaining the data vector b equally well. Furthermore, the linear system is ill-conditioned as a result of the substantial equivalence of several columns of the electromagnetic lead field matrix A. In fact, each column of the lead field matrix arises from the potential distribution generated by dipolar sources that are located in similar positions and have similar orientations along the cortical model used. Regularization of the inverse problem consists in attenuating the oscillatory modes
generated by vectors that are associated with the smallest singular values of the lead field matrix A, introducing supplementary, a priori information on the sources to be estimated. In the following, we characterize with the term "source space" the vector space in which the "best" current strength solution x will be found. The "data space" is the vector space in which the vector b of the measured data is considered. The electrical lead field matrix A and the data vector b must be referenced consistently. Before we proceed to the derivation of a possible solution of the problem drawn in (8.1), we recall a few definitions of linear algebra useful in the following. A more complete introduction to the theory of vector spaces is beyond the scope of this chapter, and the interested reader may refer to related textbooks (Spiegel, 1978; Rao and Mitra, 1977). In a vector space provided with a definition of an inner product (·,·), it is possible to associate a modulus to a vector b by using the relation (b, b) = ||b||². The notion of the length of a vector can be generalized even to a vector space in which the space axes are not orthogonal. Any symmetric positive definite matrix M is said to be a metric for the vector space furnished with the inner product (·,·), and the squared modulus of a vector b in a space equipped with the metric M is described by
||b||²_M = bᵀMb    (8.2)

With these recalls in mind, we now face the problem of deriving a general solution of the problem described in Eq. 8.1 under the assumption of the existence of two distinct metrics N and M for the source and the data space, respectively. Since the system is under-determined, infinitely many solutions exist. However, we are looking for a particular solution vector that has the following properties: 1) it has the minimum residual in fitting the data vector b under the metric M in the data space; 2) it has the minimum strength in the source space under the metric N. To take these properties into account, we solve the problem by introducing the Lagrange multiplier λ and minimizing the following functional, which expresses the desired properties for the sources x (Tikhonov and Arsenin, 1977; Dale and Sereno, 1993; Menke, 1989; Grave de Peralta and Gonzalez Andino, 1998; Liu, 2000):
ξ = arg min_x (||Ax − b||²_M + λ||x||²_N)    (8.3)
The solution of this variational problem depends on the adequacy of the data and source space metrics. Under the hypothesis of M and N positive definite, the solution of Eq. 8.3 is obtained by taking the derivative of the functional and setting it to zero. After a few straightforward computations the solution is
ξ = Gb,    G = N⁻¹Aᵀ(AN⁻¹Aᵀ + λM⁻¹)⁻¹    (8.4)

where G is called the pseudoinverse matrix, or the inverse operator, that maps the measured data b onto the source space estimate ξ. Note that the requirement of positive definiteness for the metrics N and M allows us to consider their inverses. The last equation states that the inverse operator G depends on the matrices M and N that describe the metrics of the measurement and the source space, respectively. The metric M, characterizing the idea of closeness in the data space, can be particularized by taking into account the sensor noise level by using the Mahalanobis distance (Grave de Peralta and Gonzalez Andino, 1998). If no a priori information is available for the solution of the linear inverse problem, the matrices M
and N are set to the identity, and the minimum norm estimation is obtained (Hamalainen and Ilmoniemi, 1984). However, it was recognized that in this particular application the solutions obtained with the minimum norm constraint are biased toward those sources that are located nearest to the sensors. In fact, the laws of potential (and magnetic field) generation depend on the source-sensor distance, and this dependence tends to increase the estimated activity of the more superficial sources while depressing the activity of the sources far from the sensors. A solution to this bias is obtained by introducing a compensation factor for each dipole that equalizes the "visibility" of the dipole from the sensors. This technique, called column norm normalization by Lawson and Hanson in 1974, was introduced into the linear inverse problem by Pascual-Marqui (1995) and then largely adopted by scientists in this field. With the column norm normalization, the inverse of the resulting source metric is

(N⁻¹)_ii = ||A_i||⁻²    (8.5)

in which (N⁻¹)_ii is the i-th diagonal element of the inverse of the diagonal matrix N and ||A_i|| is the L2 norm of the i-th column of the lead field matrix A. In this way, dipoles close to the sensors, and hence with a large ||A_i||, will be depressed in the solution of the inverse problem, since their activation is not convenient from the point of view of the functional cost. The use of this definition of the matrix N in the source estimation is known as the weighted minimum norm solution (Pascual-Marqui, 1995; Grave de Peralta et al., 1997). The described mathematical framework is able to accommodate the information coming from EEG, MEG and fMRI data, as we will demonstrate in the following paragraphs.
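A minimal numerical sketch of this inverse operator, assuming the data metric M equal to the identity and the column-norm source metric; the matrix sizes and regularization value are arbitrary choices for illustration:

```python
import numpy as np

def wmn_inverse_operator(A, lam=1e-2):
    # Weighted minimum-norm inverse operator of the form of Eq. 8.4 with the
    # data-space metric M set to the identity and the source metric N given by
    # column norm normalization (Eq. 8.5): (N^-1)_ii = ||A_i||^-2.
    N_inv = np.diag(1.0 / np.sum(A ** 2, axis=0))
    G = N_inv @ A.T @ np.linalg.inv(A @ N_inv @ A.T + lam * np.eye(A.shape[0]))
    return G

# Toy usage: 10 "sensors", 50 "cortical dipoles" (an under-determined system).
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 50))   # stand-in lead field matrix
b = A @ rng.normal(size=50)     # simulated measurement
x_hat = wmn_inverse_operator(A, lam=1e-6) @ b   # estimated dipole strengths
```

With a small regularization parameter the estimated sources reproduce the measured data almost exactly, while larger values of λ trade data fit for source strength, as expressed by the functional of Eq. 8.3.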
8.4 MULTIMODAL INTEGRATION OF EEG, MEG AND FMRI DATA

Before we describe how it is possible to implement methods that fuse data from all modalities, some remarks are necessary about the neural sources that may or may not be retrieved by multimodal EEG-MEG-fMRI integration. In the following paragraphs, we will present possible techniques for multimodal integration of EEG, MEG and fMRI data by using a particularization of the metrics of the data and the source space, in the context of the distributed linear inverse problem. In particular, we will show that the metric of the data space M can be characterized to take into account the EEG and MEG data. Furthermore, we will demonstrate how the source metric N can be particularized by taking into account the information from the hemodynamic responses of the brain voxels.
8.4.1 VISIBLE AND INVISIBLE SOURCES

Any neuroimaging technique has its own visible and invisible sources. The visible sources for a particular neuroimaging technique are those neuronal pools whose spatio-temporal activity can be at least in part detected. In contrast, invisible sources are those neural assemblies that produce a pattern of spatio-temporal activity not detectable by the analyzed neuroimaging technique. In the case of the EEG (or MEG) technique, it is clear that the visible sources are generally located at the cortical level, since the cortical assemblies are close to the recording sensors, and the morphology of the cortical layers allows the
generation of open (rather than closed) electromagnetic fields. On the other hand, it is often poorly appreciated that the invisible sources for the EEG (or MEG) are all those cortical assemblies that do not fire synchronously. In fact, in a dipole layer composed of M coherent sources and N incoherent ones, the potentials due to the individual coherent sources are combined by linear superposition, while the combination of the incoherent sources is only due to statistical fluctuations. The ratio between the contributions of coherent and incoherent sources can be expressed by M/√N (Nunez, 1995). Hence, if N is very large, say about 10 million incoherent neurons that fire continuously, and M is a small percentage of such neurons (say 1%, about 100,000 neurons) that instead fire synchronously, the potential measured at the scalp level will be dominated by the coherent sources in the ratio 10⁵/√10⁷, with a net result of about 30. Hence, only 1% of the active sources produce a potential larger than that of the other 99% by a factor of 30, just because of the synchronicity property. This means that a cortical patch may generate an EEG signal with no modification of its metabolic consumption, simply by increasing the firing coherence of a small percentage of its neurons. As a consequence, neuroimaging techniques based on imaging the metabolic/hemodynamic demand of the neural assemblies may detect no activity change with respect to the baseline condition. However, there are other situations in which sources visible for metabolic techniques such as fMRI and PET can be invisible for the EEG or MEG techniques. Stellate cells are neurons present in the human cerebral cortex, and represent 15% of the neural population of the neocortex (Braitenberg and Schuz, 1991). These cells occupy a spherical volume within the cortex, thus generating an essentially closed-field electromagnetic pattern.
Such a field cannot be recorded at the scalp level by electrical or magnetic sensors, although the actual firing rate of such stellate neurons is rather high with respect to the other cortical neurons. This means that these neuronal populations present high metabolic requirements that can be detected by the fMRI or PET techniques, while at the same time they are "invisible sources" for the EEG and MEG techniques. Other examples of invisible sources for the EEG and MEG techniques are represented by the neural assemblies located at the thalamic level, since they are also arranged in such a way as to produce closed electromagnetic fields, while having high metabolic requirements.
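The coherent-versus-incoherent scaling argument above can be checked with a one-line computation:

```python
import math

# N incoherent neurons combine like sqrt(N), while M coherent neurons add
# linearly, so the coherent/incoherent contribution ratio is M / sqrt(N)
# (Nunez, 1995).
N = 10_000_000          # incoherent neurons firing continuously
M = N // 100            # 1% firing synchronously: 100,000 neurons
ratio = M / math.sqrt(N)
print(round(ratio, 1))  # 31.6 -- i.e. "about 30"
```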
8.4.2 EXPERIMENTAL DESIGN AND CO-REGISTRATION ISSUES

8.4.2.a Experimental design

Experimental setups that take into account both the electrical and the hemodynamic responses as dependent variables have to be designed with particular attention. There are two main classes of setups that can be considered in a study of this type, depending on whether simultaneous EEG and fMRI measurements or separate EEG/MEG and fMRI recordings are scheduled. In the first case, many issues related to the co-registration of the head can be easily overcome. However, in both cases the differences between the hemodynamic and the electric behavior have to be taken into account. In fact, considerations about the signal-to-noise ratio (SNR) can limit the use of similar paradigms for EEG and fMRI recordings. For instance, the EEG/MEG response to very brief stimuli (such as Somatosensory Evoked Potentials, SEPs, i.e. short electrical shocks) can be recorded with a high SNR, while the hemodynamic response decreases its SNR with decreasing stimulation length. Furthermore, it has also been demonstrated that while EEG amplitudes decrease
with increasing stimulation rate, the opposite is true for the hemodynamic response amplitudes (Wikstrom et al., 1996; Kampe et al., 2000). Experimental design for either separate or simultaneous collection of electrophysiological and hemodynamic variables can be easier when the event-related fMRI technique is used, in contrast to block-design fMRI. In these experimental paradigms, the availability of the time course of the hemodynamic response can be useful to design similar stimulation setups for both modalities.
8.4.2.b Co-registration

In the multimodal integration of EEG/MEG data with fMRI, a common geometrical framework has to be derived in order to appropriately co-locate the voxels whose electrical response is high and the voxels whose hemodynamic response is increased or decreased during task performance. The issue of deriving a common geometrical framework for the data obtained by different imaging modalities is called the "co-registration" problem (van den Elsen et al., 1993). Several techniques can be used to produce an optimal match between the realistic head reconstruction obtained in high-resolution EEG/MEG from the MRIs of the experimental subject and the fMRI image coordinates. The first body of techniques is based on the presence of landmarks in both images used for the co-registration; corresponding landmarks have to be determined in both modalities (Fuchs et al., 1995). A second body of techniques is based instead on the matching of surfaces belonging to the same head structure, as obtained by the different image modalities. In these techniques a prerequisite is the segmentation of the structures whose surfaces have to be matched (Wagner and Fuchs, 2001). With volume-based registration techniques no additional information such as landmarks or surface detection is necessary (Wells et al., 1997). In the case in which the multimodal EEG and fMRI recording is performed simultaneously, the setup of a common geometrical framework becomes simpler. In this case registration can be performed based on the scanner coordinate system. As an additional advantage, simultaneous measurement of EEG and fMRI also allows an accurate co-registration of the electrode positions, a problem that in the other cases has to be solved by using non-linear minimization techniques.
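For the landmark-based body of techniques, one common choice (an assumption here, not necessarily the exact algorithm of the cited authors) is the SVD-based least-squares rigid transform between corresponding landmark sets:

```python
import numpy as np

def rigid_coregistration(src, dst):
    # Least-squares rigid transform (R, t) with dst ~ R @ src + t, computed
    # via the SVD-based (Kabsch) solution on centered landmark sets.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Invented landmark coordinates (metres) in the EEG/MEG head frame; at least
# three non-collinear landmarks (e.g. nasion and pre-auricular points,
# supplemented by digitized scalp points) are required.
src = np.array([[0.08, 0.0, 0.0],     # "nasion"
                [0.0, 0.07, 0.0],     # "right pre-auricular"
                [0.0, -0.07, 0.01],   # "left pre-auricular"
                [0.0, 0.0, 0.09]])    # extra digitized scalp point
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.01, -0.02, 0.03])
dst = src @ R_true.T + t_true         # same landmarks in the "MRI frame"
R, t = rigid_coregistration(src, dst)
```

In practice the landmark coordinates carry digitization and identification noise, so the recovered transform is a least-squares compromise rather than the exact mapping recovered in this noiseless sketch.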
8.4.3 INTEGRATION OF EEG AND MEG DATA

As mentioned before, electroencephalography (EEG) and magnetoencephalography (MEG) are useful tools for the study of brain dynamics and functional cortical connectivity due to their high temporal resolution (in the range of milliseconds). While the EEG reflects the activity of neural generators oriented either tangentially or radially with respect to the sensor surface, the MEG is more sensitive to cortical generators oriented tangentially with respect to the sensor surface. However, the recorded EEG is a distorted copy of the cortical potential distribution due to the poor conductivity of the skull, while the MEG is insensitive to the conductivities of the different head tissues. In this framework an important question arises, namely the importance of using one (EEG, MEG) or both modalities (EEG and MEG) for increasing the accuracy of the estimated neural activity. Simulation studies aimed at integrating data from MEG and EEG sensors with phantoms demonstrated an improvement in the spatial accuracy of the reconstruction methods when MEG and EEG data are fused together (Phillips et al., 1997; Baillet et al., 1997, 1999). These simulation studies suggest the possibility of practically integrating data from both EEG and MEG modalities in the solution
of some neurophysiological problems, also in the case of distributed sources. It has also been demonstrated that the use of combined EEG and MEG data increases the stability and the accuracy of the estimate of source activity in the primary sensory cortical areas of man with respect to using either modality separately (Stock et al., 1987; Fuchs et al., 1998). In this context, the question of whether the use of combined EEG and MEG measurements leads to a better estimate of the distributed cortical activity than the use of EEG or MEG separately has recently been addressed (Babiloni et al., 2001). Here, we would like to expand on the possibility of integrating EEG and MEG data in the context of distributed linear inverse solutions. At first glance, the attempt to integrate the EEG and MEG data in order to increase the quality of the source reconstruction fails when we consider that the units of the potential and of the magnetic field differ. How can one fuse such data together? In order to combine the different measures of electric and magnetic data, both have to be converted to a common basis. This conversion is performed by normalizing the measured signals to their individual noise amplitudes, yielding unit-free measures for both the electric and the magnetic modality. Such a normalization procedure is accomplished by using the covariance matrix of the electric and magnetic noise as the metric in the data space for the solution of the linear inverse problem. The estimation of the noise covariance matrices requires the recording of several single sweeps of EEG and MEG data, and the possibility of determining a segment of the recorded data in which no task-related activity is present. Then, over all the recorded sweeps and for the time period of interest, the maximum likelihood estimates of the covariance matrices of the electrical (N_e) and magnetic (N_m) noise have to be computed.
With the use of these matrices we can produce the block covariance matrix S of the electromagnetic measurements, with N_e and N_m as its diagonal blocks and zero off-diagonal blocks. The forward problem specifying the potential and magnetic fields due to an arbitrary dipole source configuration is then expressed by the linear system

    [ E ]       [ v ]
    [ B ] x  =  [ m ]        (8.6)
where (i) E is the electric lead field matrix obtained by the boundary element technique for the realistic MRI-constructed head model; (ii) B is the magnetic lead field matrix obtained for the same head model; (iii) x is the array of the unknown cortical dipole strengths; (iv) v is the array of the recorded potential values; and (v) m is the array of the magnetic field values. The lead field matrix E and the array v are referenced consistently. In order to scale the EEG and MEG data, the rows of the lead field matrices E and B are first normalized by their norms (Phillips et al., 1997). The same scaling is applied to the electrical and magnetic measurement arrays v and m. As noted before, the inverse operator G is expressed in terms of the matrices M and N that regulate the metrics in the measurement and source space, respectively. Here, M is equal to the inverse of the covariance matrix S of the noise of the normalized EEG and MEG sensors, while N is the matrix that regulates how each EEG or MEG sensor is influenced by dipoles located at different depths of the source model. The covariance matrix S is derived from the normalized EEG (v) and MEG (m) data by maximum likelihood estimation as described before. The matrix N is a diagonal matrix in which the i-th element is equal to the norm of the i-th column of the normalized lead field matrix A.
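A sketch of this fusion scheme: row-normalize the two lead fields and their data so that both modalities become unit-free, stack them into one system as in Eq. 8.6, and solve a single weighted minimum-norm problem. For brevity the noise metric M is taken as the identity here, which departs from the maximum-likelihood noise covariance S used in the text:

```python
import numpy as np

def fuse_eeg_meg(E, B, v, m, lam=1e-6):
    # Row normalization makes electric and magnetic quantities unit-free
    # (here by lead field row norms rather than measured noise amplitudes).
    rE = np.linalg.norm(E, axis=1, keepdims=True)
    rB = np.linalg.norm(B, axis=1, keepdims=True)
    A = np.vstack([E / rE, B / rB])                    # stacked lead field
    b = np.concatenate([v / rE[:, 0], m / rB[:, 0]])   # stacked, scaled data
    # Weighted minimum-norm solution with the column-norm source metric.
    N_inv = np.diag(1.0 / np.sum(A ** 2, axis=0))
    G = N_inv @ A.T @ np.linalg.inv(A @ N_inv @ A.T + lam * np.eye(A.shape[0]))
    return G @ b   # estimated cortical dipole strengths x
```

With simulated lead fields and consistent data v = Ex and m = Bx, the fused estimate explains both the electric and the magnetic measurements simultaneously, which is the point of the combined system.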
[Figure residue: a panel titled "RIGHT MOVEMENT EEG" (time axis), followed by the axis ticks and labels of the conductivity-versus-frequency plot of Fig. 9.4.]
FIGURE 9.4. The frequency-dependent conductivity of gray matter based on the parameters given by Gabriel et al. (1996b).
The Electrical Conductivity of Living Tissue
9.1.5.1 Impact of the frequency dependence on the EEG

The frequency dependence of the conductivity may influence the EEG. In order to estimate this impact, the effects of a frequency-dependent conductivity were simulated. The volume conductor model consisted of three concentric spheres representing the brain, skull and scalp, with radii of 87, 92 and 100 mm. This model is often used to describe the head as a volume conductor. The source was modeled as a current dipole. Simulations were carried out for three current dipoles: a central dipole, and a radial and a tangential dipole at a radius of 80 mm. As observation point, a point at the surface was taken where the potential found for the lowest frequency was maximal. The transfer function was calculated, i.e. the relationship between the strength of the current dipole and the potential at a point on the outermost surface as a function of the frequency. To enable a comparison between the different cases, all transfer functions were normalized; if there were no frequency dependence, the transfer function would have the value one for all frequencies. Two cases were studied. First, the gray matter conductivity was taken from Fig. 9.4, in which the conductivity increases by a factor of four; the scalp conductivity was 0.33 S/m and that of the skull 0.0042 S/m. Second, the conductivity of all compartments increased by 20% in the interval from 1 Hz to 100 Hz; at 1 Hz the conductivity of the brain and scalp was 0.33 S/m and that of the skull 0.0042 S/m. The results of these simulations are given in Fig. 9.5. As shown, the volume conductor acts as a low-pass filter. The potential may drop by approximately a factor of two when the frequency is increased from 1 to 100 Hz. The transfer function depends on the depth of the source.
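The low-pass effect can be illustrated with a deliberately crude approximation: in a homogeneous volume conductor the surface potential scales as 1/σ, so a 20% conductivity rise (assumed linear in frequency here, which is an assumption of this sketch) gives a normalized transfer function below one. The full three-sphere simulation described above yields the larger, depth-dependent drop:

```python
def sigma_lin(f, s0=0.33, rise=0.20, f0=1.0, f1=100.0):
    # Assumed linear 20% conductivity increase between f0 and f1
    # (illustrative; the measured dispersion of Fig. 9.4 is not linear).
    return s0 * (1.0 + rise * (f - f0) / (f1 - f0))

def transfer(f, f0=1.0):
    # Homogeneous-conductor approximation: the surface potential scales as
    # 1/sigma, so the normalized transfer function is sigma(f0)/sigma(f).
    return sigma_lin(f0) / sigma_lin(f)

print(round(transfer(100.0), 3))  # 0.833: a mild low-pass effect
```

This homogeneous estimate (a drop to about 0.83 at 100 Hz) is weaker than the factor-of-two attenuation reported for the layered model, underlining that the skull and the depth of the source amplify the filtering effect.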
9.2 MODELS OF HUMAN TISSUE

Tissues are composed of cells. The interstitial space between the cells contains fluid. So, the effective conductivity of a tissue depends on the conductivity of the cells, the volume fraction occupied by the cells, and the conductivity of the extracellular medium.
9.2.1 COMPOSITES OF HUMAN TISSUE

This section starts with a brief description of tissue at a cellular level. Next, the conductivity of a cell and that of the extracellular fluid will be discussed.
9.2.1.1 Cells

All human cells stem from the round-shaped fertilized egg cell. There is no typical cell shape. Cells come in all shapes: cubes (cells lining sweat ducts), spheres (white blood cells of the immune system), Bismarck doughnuts (red blood cells), columnar cells, balloon-like cells (cells lining the urinary bladder), needle-shaped ellipsoids or rods (skeletal muscle cells) and pancakes (cells on the surface of the skin), as illustrated in Fig. 9.6. Cells also vary considerably in size and function. For instance, the diameter of a red blood cell is 7.5 μm, the diameter of a human egg cell is 140 μm, a smooth muscle cell has a length of 20 to 500 μm, while a skeletal muscle cell may have a length of 30 cm. All cells
M. J. Peters, J. G. Stinstra, and I. Leveles
"- '- "- .._ "- "- .0_. _ . - 0_ . - 0_ . - 0_ . -
0.9
0_ . _.
b = c. In that case the lower bound is found for cylinders, where L_a = 0 and L_b = L_c = 1/2, leading to

σ_solv(1 − p)^(5/3) ≤ σ_eff
(9.35)
In summary, for elongated, homogeneously distributed and randomly orientated non-conducting spheroids in suspension the effective conductivity is limited by the following bounds:

σ_solv(1 − p)^(5/3) ≤ σ_eff ≤ σ_solv(1 − p)^(3/2)    (9.36)
In Fig. 9.18 the various upper and lower bounds are plotted for a two-phase composite medium consisting of a non-conducting phase embedded in a conducting medium, as a function of the volume fraction occupied by the non-conducting phase.
9.5.1 WHITE MATTER

White matter appears white because it contains fiber groups that possess myelin sheaths consisting of layers of membranes. These membranes, composed largely of a lipoprotein called myelin, have a higher proportion of lipid than other surface membranes. The oligodendrocytes (i.e. star-shaped cells) are arranged in rows parallel to the myelinated nerve fibers, with long processes running in the same direction. The white matter can be modeled as a suspension of elongated particles that may have any direction. The extracellular space measured by van Harreveld and Ochs (1956) in the cerebellum of mice varied between 18.1 and 25.5 percent, with a mean of 23.6 percent. The conductivity of the interstitial fluid is that of the cerebrospinal fluid, i.e. 1.8 S/m. Thus, according to equation (9.36), the effective conductivity will be between the limits 0.10 S/m < σ_eff < 0.23 S/m.
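The bounds of Eq. 9.36 are easy to evaluate numerically; the values below reproduce the white matter range quoted above:

```python
def conductivity_bounds(sigma_solv, p):
    # Eq. (9.36): bounds on the effective conductivity of a suspension of
    # elongated, homogeneously distributed, randomly orientated non-conducting
    # spheroids; p is the volume fraction occupied by the particles.
    return sigma_solv * (1 - p) ** (5.0 / 3.0), sigma_solv * (1 - p) ** 1.5

# White matter: interstitial fluid like CSF (1.8 S/m); extracellular space
# between 18.1% and 25.5% (van Harreveld and Ochs, 1956).
low, _ = conductivity_bounds(1.8, 1 - 0.181)   # smallest extracellular fraction
_, high = conductivity_bounds(1.8, 1 - 0.255)  # largest extracellular fraction
print(round(low, 2), round(high, 2))  # 0.1 0.23 (S/m), as quoted in the text
```

The same function with an extracellular conductivity of 1.9 S/m and particle fractions of 0.6 and 0.4 reproduces the fetal range of roughly 0.41 to 0.88 S/m derived in Section 9.5.2.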
9.5.2 THE FETUS

In order to simulate the fetal electrocardiogram, a single compartment may describe the fetus. As no measured values for the conductivity of the human fetus are available, its conductivity has to be estimated. In order to be able to use the theory presented above, the
FIGURE 9.18. Upper and lower bounds for a material consisting of two composites where one phase is non-conducting, as a function of the volume fraction occupied by the non-conducting phase. The conductivity of the conducting phase is σ₂. (1a) Upper bound if the only information available is the value of σ₂; the lower bound (1b) coincides with the x-axis. (2a) Upper bound and (2b) lower bound if the only information available is the value of σ₂ and the volume fraction p. (3a) Upper bound and (3b) lower bound if the phases are homogeneously distributed and σ₂ and p are known. (4a) Upper bound and (4b) lower bound if the non-conducting phase consists of spheroidal, homogeneously distributed particles and the values of σ₂ and p are known. (5a) Upper bound and (5b) lower bound if the non-conducting phase consists of elongated particles that are homogeneously distributed and randomly orientated and the values of σ₂ and p are known.
fetus is assumed to be a homogeneous conductor. It is assumed that the cells in the fetus are homogeneously distributed and randomly orientated and have a shape somewhere between a sphere and a cylinder. Looking at the histology of the fetus, most tissues consist of elongated spheroids or spheres; disc-like cells are less commonly encountered. Based on these assumptions, the conductivity of the fetus can be estimated to lie between the limits σ_e(1 − p)^(5/3) ≤ σ_fetus ≤ σ_e(1 − p)^(3/2). The volume fraction of the extracellular space at the end of gestation is about 40 percent of the total body volume (Brace, 1998; Costarino and Brans, 1998). Besides the interstitial fluid, the extracellular space includes the fluids in the body cavities, such as the cerebrospinal fluid, as well as the blood plasma. The blood plasma is about 18 percent of the total extracellular water content. As the fetus is considered as one single entity, there is no objection to taking all the extracellular fluid into account in estimating the conductivity, as all of it contributes to the conductivity. In comparison, the extracellular fluid fraction in an adult is about 20 percent. Hence, the fetus is at least a factor of two more conducting than the maternal abdomen. Assuming the conductivity of the extracellular space in both fetus and adult to be comparable at a value of 2 S/m, equation (9.36) predicts a range
M. J. Peters, J. G. Stinstra, and I. Leveles
of 1.9 × (0.4)^(5/3) ≈ 0.41 S/m ≤ σ_fetus ≤ 1.9 × (0.6)^(3/2) ≈ 0.88 S/m. Assuming that the volume fraction will be somewhere between 40 and 60 percent and will approach 40 percent at the end of gestation, a value of 0.5 S/m is a reasonable choice for the conductivity of the fetus in the third trimester of pregnancy. This value is used for the solution of the inverse problem for the fetal ECG and leads to reasonable results (Stinstra, 2001).
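The endpoints of this range follow directly from the two bounds evaluated at extracellular fractions of 40 and 60 percent. A short sketch (variable names are ours):

```python
# Fetal conductivity range from the bounds for homogeneously distributed,
# randomly oriented cells with shapes between spheres and cylinders:
#   sigma_e * f**(5/3)  <=  sigma_fetus  <=  sigma_e * f**(3/2)
# with f the extracellular volume fraction.

sigma_e = 1.9            # extracellular fluid conductivity, S/m
f_low, f_high = 0.4, 0.6 # end of gestation .. earlier in pregnancy

lower = sigma_e * f_low ** (5.0 / 3.0)  # lower bound at the smallest fraction
upper = sigma_e * f_high ** 1.5         # upper bound at the largest fraction
print(f"{lower:.2f} S/m <= sigma_fetus <= {upper:.2f} S/m")
# 0.41 S/m <= sigma_fetus <= 0.88 S/m
```

The chosen working value of 0.5 S/m sits near the lower end of this interval, consistent with the extracellular fraction approaching 40 percent at term.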
9.6 DISCUSSION

The accuracy of measurements is limited because the measurements are very complicated. The accuracy of the computations is limited because the cells vary in shape, they are not homogeneously distributed, the blood supply plays a role, etcetera. Since the model used to describe a tissue in this chapter is a simplification, the results are only an approximation. However, the results are useful in clarifying the relation between the conductivity and the structure of the tissue, and they can be used to predict the effects of changes due to, for instance, temperature, illness, or age. In any case, it makes no sense to quote values of the effective conductivity with so many digits that an accuracy higher than ten percent is suggested.

The effective electrical conductivity is a macroscopic parameter that represents the electrical conductivity of the tissue averaged in space over many cells. Many of the tissues in the body, such as lung, liver, fat, and blood, have cell structures that macroscopically show no preferred direction. Even the heart, which is muscular, has its muscle strands wound in such a complicated fashion that, overall, no preferred direction can be readily discerned. Baynham and Knisley (1999) measured the effective epicardial resistance of rabbit ventricles and found that, in contrast to isolated fibers, the ventricular epicardium exhibits an isotropic effective resistance due to the transmural rotation of fibers. Only skeletal muscle cells have a definite preferred direction when many cells are averaged (Rush et al., 1984). Most cells have an elongated shape. Hence, the bounds given in section 9.5 can be used to estimate the effective conductivity. These bounds are not so far apart, so they will help to restrict the uncertainties in the effective conductivity to be used in the bioelectrical inverse problem.
An exception is formed by long skeletal muscle and heart tissue, as conduction parallel to the fibers takes place in both the extracellular and the intracellular space.
REFERENCES

Archie, G. E., 1942, The electrical resistivity log as an aid in determining some reservoir characteristics, Trans. Am. Inst. Min. Metall. Eng., 146:55-62.
Aseyev, 1998, Electrolytes: Interparticle Interactions. Theory, Calculation Methods and Experimental Data, Begell House Inc., New York.
Baumann, S. B., Wozny, D. R., Kelly, S. K., and Meno, F. M., 1997, The electrical conductivity of human cerebrospinal fluid at body temperature, IEEE T. Bio-Med. Eng., 44:220-223.
Baynham, T. C., and Knisley, S. B., 1999, Effective resistance of rabbit ventricles, Ann. Biomed. Eng., 27:96-102.
Boned, C., and Peyrelasse, J., 1983, Etude de la permittivité complexe d'ellipsoïdes dispersés dans un milieu continu. Analyses théorique et numérique, Colloid Polym. Sci., 261:600-612.
Boyle, M. H., 1985, The electrical properties of heterogeneous mixtures containing an oriented spheroidal dispersed phase, Colloid Polym. Sci., 263:51-57.
Brace, R. A., 1998, Fluid distribution in the fetus and neonate, in: Fetal and Neonatal Physiology (R. A. Polin and W. W. Fox, eds.), Saunders Comp., Philadelphia, pp. 1703-1713.
Burger, H. C., and Dongen, R. van, 1961, Specific electric resistance of body tissues, Phys. Med. Biol., 5:431-437.
Burger, H. C., and Milaan, J. B. van, 1943, Measurement of the specific resistance of the human body to direct current, Acta Med. Scand., 114:585-607.
Burik, M. J. van, 1999, Physical Aspects of EEG, PhD thesis, University of Twente, the Netherlands.
Chapman, R. A., and Frye, C. H., 1978, An analysis of the cable properties of frog ventricular myocardium, J. Physiol., 283:263-283.
Clerc, L., 1976, Directional differences of impulse spread in trabecular muscle from mammalian heart, J. Physiol., 255:335-346.
Cohen, D., and Cuffin, B. N., 1983, Demonstration of useful differences between magnetoencephalogram and electroencephalogram, Electroen. Clin. Neuro., 56:38-51.
Cole, K. S., Li, C., and Bak, A. F., 1969, Electrical analogues for tissues, Exp. Neurol., 24:459-473.
Costarino, A. T., and Brans, Y. W., 1998, Fetal and neonatal body fluid composition with reference to growth and development, in: Fetal and Neonatal Physiology (R. A. Polin and W. W. Fox, eds.), Saunders Comp., Philadelphia, pp. 1713-1721.
De Luca, F., Cametti, C., Zimatore, G., Maraviglia, B., and Pachi, A., 1996, Use of low-frequency electrical impedance measurements to determine phospholipid content in amniotic fluid, Phys. Med. Biol., 41:1863-1869.
Epstein, B. R., and Foster, K. R., 1983, Anisotropy in the dielectric properties of skeletal muscle, Med. Biol. Eng. Comput., 21:51-55.
Eyuboglu, B. M., Pilkington, T. C., and Wolf, P. D., 1994, Estimation of tissue resistivities from multiple-electrode measurements, Phys. Med. Biol., 39:1-17.
Foster, K. R., and Schwan, H. P., 1989, Dielectric properties of tissues and biological materials: a critical review, Crit. Rev. Biomed. Eng., 17:25-104.
Foster, K. R., and Schwan, H. P., 1986, Dielectric permittivity and electrical conductivity of biological materials, in: Handbook of Biological Effects of Electromagnetic Fields (C. Polk and E. Postow, eds.), CRC Press, Inc., Boca Raton, pp. 27.
Fricke, H., 1953, The Maxwell-Wagner dispersion in a suspension of ellipsoids, J. Phys. Chem., 57:934-937.
Gabriel, S., Lau, R. W., and Gabriel, C., 1996a, The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz, Phys. Med. Biol., 41:2251-2269.
Gabriel, S., Lau, R. W., and Gabriel, C., 1996b, The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues, Phys. Med. Biol., 41:2271-2293.
Geddes, L. A., and Baker, L. E., 1967, The specific resistance of biological material-A compendium of data for the biomedical engineer and physiologist, Med. Biol. Eng., 5:271-293.
Geddes, L. A., and Sadler, C., 1973, The specific resistance of blood at body temperature, Med. Biol. Eng., 11:336-339.
Gersing, E., 1998, Monitoring temperature induced changes in tissue during hyperthermia by impedance methods, Proc. of the X. ICEBI, Universitat Politecnica de Catalunya.
Gielen, F., 1983, Electrical Conductivity and Histological Structure of Skeletal Muscle, PhD thesis, University of Twente, the Netherlands.
Gielen, F. L. H., Wallinga-de Jonge, W., and Boon, K. L., 1984, Electrical conductivity of skeletal tissue: experimental results from different muscles in vivo, Med. Biol. Eng. Comput., 22:569-577.
Goncalves, S., Munck, J. C. de, Heethaar, R. M., Lopes da Silva, F. H., and Dijk, B. W. van, 2000, The application of electrical impedance tomography to reduce systematic errors in the EEG inverse problem-a simulation study, Physiol. Meas., 21:379-393.
Granqvist, C. G., and Hunderi, O., 1978, Conductivity of inhomogeneous materials: effective medium theory with dipole-dipole interaction, Phys. Rev. B, 18:1554-1561.
Hanai, T., 1960, Theory of the dielectric dispersion due to the interfacial polarization and its application to emulsions, Kolloid-Z., 171:23-31.
Harreveld, A. van, Crowell, J., and Malhotra, S. A., 1965, A study of extracellular space in central nervous tissue by freeze-substitution, J. Cell Biol., 25:117-137.
Harreveld, A. van, and Ochs, S., 1956, Cerebral impedance changes after circulatory arrest, Am. J. Physiol., 187:203-207.
Hart, F. X., Berner, N. J., and McMillen, R. L., 1999, Modelling the anisotropic electrical properties of skeletal muscle, Phys. Med. Biol., 44:413-421.
Hashin, Z., and Shtrikman, S., 1962, A variational approach to the theory of the effective magnetic permeability of multiphase materials, J. Appl. Phys., 33:3125-3131.
Havstad, J. W., 1967, Electrical Impedance of Cerebral Cortex: An Experimental and Theoretical Investigation, PhD thesis, Stanford University.
Hoekema, R., Huiskamp, G. J. M., Wieneke, G. H., Leijten, F. S. S., van Veelen, C. W. M., van Rijen, P. C., and van Huffelen, A. C., 2001, Measurement of the conductivity of the skull, temporarily removed during epilepsy surgery, Biomed. Tech., 46:103-105.
Homma, S., Musha, T., Nakajima, Y., Okamoto, Y., Blom, S., Flink, R., Hagbarth, K. E., and Mostrom, U., 1994, Location of electric current sources in the human brain estimated by the dipole tracing method of the scalp-skull-brain (SSB) head model, Electroen. Clin. Neuro., 91:374-382.
Kobayashi, N., and Yonemura, K., 1967, The extracellular space in red and white muscles of the rat, Jap. J. Physiol., 17:698-707.
Kotnik, T., Bobanovic, F., and Miklavcic, D., 1997, Sensitivity of transmembrane voltage induced by applied electric fields-a theoretical analysis, Bioelectroch. Bioener., 43:285-291.
Law, S. K., 1993, Thickness and resistivity variations over the upper surface of the human skull, Brain Topogr., 6:99-109.
Ludt, H., and Hermann, H. D., 1973, In vitro measurement of tissue impedance over a wide frequency range, Biophys. J., 10:337-345.
Maxwell, J. C., 1891, A Treatise on Electricity and Magnetism, volume 1, Arts. 311-314, Dover Publ., New York.
McRae, D. A., and Esrick, M. A., 1993, Changes in electrical impedance of skeletal muscle measured during hyperthermia, Int. J. Hyperthermia, 9:247-261.
Nicholson, C., and Rice, M. E., 1986, The migration of substances in the neural microenvironment, Ann. New York Academy of Sciences, 481:55-71.
Nicholson, P. W., 1965, Specific impedance of cerebral white matter, Exp. Neurol., 13:386-401.
Oostendorp, T. F., Delbeke, J., and Stegeman, D. F., 2000, The conductivity of the human skull: results of in vivo and in vitro measurements, IEEE T. Bio-Med. Eng., 47:1487-1492.
Peters, M. J., Hendriks, M., and Stinstra, J. G., 2001, The passive DC conductivity of human tissue described by cells in solution, Bioelectroch., 53:155-160.
Pethig, R., and Kell, D. B., 1987, The passive electrical properties of biological systems: their significance in physiology, biophysics and biotechnology, Phys. Med. Biol., 32:933-970.
Pfützner, H., 1984, Dielectric analysis of blood by means of a raster-electrode technique, Med. Biol. Eng. Comput., 22:142-146.
Plonsey, R., and Barr, R. C., 1986, Effect of microscopic and macroscopic discontinuities on the response of cardiac tissue to defibrillating (stimulating) currents, Med. Biol. Eng. Comput., 24:130-136.
Plonsey, R., and Heppner, D. B., 1967, Considerations of quasi-stationarity in electrophysiological systems, Bulletin of Mathematical Biophysics, 29:657-664.
Raicu, V., Saibara, T., and Irimajiri, A., 1998a, Dielectric properties of rat liver in vivo: a non-invasive approach using an open-ended coaxial probe at audio/radio frequencies, Bioelectroch. Bioener., 47:325-332.
Raicu, V., Saibara, T., Enzan, H., and Irimajiri, A., 1998b, Dielectric properties of rat liver in vivo: analysis by modeling hepatocytes in the tissue architecture, Bioelectroch. Bioener., 47:333-342.
Robillard, P. N., and Poussart, Y., 1977, Specific-impedance measurements of brain tissues, Med. Biol. Eng. Comput., 15:438-445.
Rosell, J., Colominas, J., Riu, P., Pallas-Areny, R., and Webster, J. G., 1988, Skin impedance from 1 Hz to 1 MHz, IEEE T. Bio-Med. Eng., 35:649-651.
Rush, S., 1967, A principle for solving a class of anisotropic current flow problems and applications to electrocardiography, IEEE T. Bio-Med. Eng., BME-14:18-22.
Rush, S., Abildskov, J. A., and McFee, R., 1963, Resistivity of body tissues at low frequencies, Circ. Res., XII:40-50.
Rush, S., Mehtar, M., and Baldwin, A. F., 1984, Normalisation of body impedance data: a theoretical study, Med. Biol. Eng. Comput., 22:285-286.
Schwan, H. P., 1985, Dielectric properties of cells and tissues, in: Interactions between Electromagnetic Fields and Cells (A. Chiabrera, C. Nicolini, and H. P. Schwan, eds.), NATO ASI series, vol. 97, Plenum Press, New York, pp. 75-103.
Schwan, H. P., and Foster, K. R., 1980, RF-field interactions with biological systems: electrical properties and biophysical mechanisms, Proc. of the IEEE, 68:104-113.
Schwan, H. P., and Takashima, S., 1993, Electrical conduction and dielectric behaviour in biological systems, Encyclopedia of Applied Physics, 5:177-199.
Sekine, K., 2000, Application of boundary element method to calculation of the complex permittivity of suspensions of cells in shape of D∞h symmetry, Electroch., 52:1-7.
Semrov, D., Karba, R., and Valencic, V., 1997, DC electrical stimulation for chronic wound healing enhancement. Part 2. Parameter determination by numerical modelling, Bioelectroch. Bioener., 43:271-277.
Sillars, R. W., 1937, The properties of a dielectric containing semi-conducting particles of various shapes, J. Inst. Electrical Eng., 80:378-394.
Stanley, P. C., Pilkington, T. C., and Morrow, M. N., 1986, The effects of thoracic inhomogeneities on the relationship between epicardial and torso potentials, IEEE T. Bio-Med. Eng., BME-33:273-284.
Stanley, P. C., Pilkington, T. C., Morrow, M. N., and Ideker, R. E., 1991, An assessment of variable thickness and fiber orientation of the skeletal muscle layer on electrocardiographic calculations, IEEE T. Bio-Med. Eng., 38:1069-1076.
Stinstra, J. G., 2001, Reliability of Fetal Magnetocardiography, PhD thesis, University of Twente, the Netherlands.
Stuchly, M. A., and Stuchly, S. S., 1980, Dielectric properties of biological substances-tabulated, J. Microwave Power, 15:19-26.
Takashima, S., 1989, Electrical Properties of Biopolymers and Membranes, IOP Publishing Ltd, Bristol.
Trautman, E. D., and Newbower, R. S., 1983, A practical analysis of the electrical conductivity of blood, IEEE T. Bio-Med. Eng., BME-30:141-153.
Ulgen, Y., and Sezdi, M., 1998, Electrical parameters of human blood, Proceedings 20th Ann. Int. Conference, IEEE/EMBS, Hong Kong, 2983-2986.
Veelen, C. van, Debets, R., Huffelen, A. van, Emde Boas, W. van, Binnie, C., Storm van Leeuwen, W., Velis, D. N., and Dieren, A. van, 1990, Combined use of subdural and intracerebral electrodes in preoperative evaluation of epilepsy, Neurosurgery, 26:93-101.
Yamamoto, T., and Yamamoto, Y., 1976, Electrical properties of epidermal stratum corneum, Med. Biol. Eng., 3:151-158.
Zheng, E., Shao, S., and Webster, J. G., 1984, Impedance of skeletal muscle from 1 Hz to 1 MHz, IEEE T. Bio-Med. Eng., BME-31:477-481.
INDEX
Action potential 2, 89, 120
Activation time imaging 167
Adaptive spatial filter 226
Anisotropic bidomain 50
Archie's law 300
Beamformer 226, 230, 231
Bidomain model 124
Bidomain myocardium 46
Biot-Savart law 216
Block-design fMRI 254
Body surface isopotential map 107
Body surface Laplacian 144, 183
Body surface Laplacian mapping 192
Body surface potential map 126
BOLD 252
BSLM 192
BSPM 126
Cable theory 17
Cardiac action potential 7, 8, 9
Cardiac arrhythmia 27
Cardiac tissue 302
Cell networks 23
Cells 289
Computer heart model 62
Conductivity tensor 51
Co-registration 262, 263
Cortical imaging 257
Current dipole density 46
DAD 14, 25
Defibrillation 72
Dipole distribution imaging 164
Dipole distribution 163, 164
Dipole localization 256, 267
Dipole source imaging 163, 165
Dipole source 44
Drug integration 35
EAD 14, 25
ECG 183
EEG 183, 263
Effective conductivity 47, 283
Electrical conductivity 281
Electrocardiographic tomographic imaging 161, 168, 175
Endocardial potential imaging 129, 147
Endocardial potential 127, 128
Epicardial potential imaging 129, 138
Epicardial potential 57
Equivalent conductivity 312
Equivalent current density 49
Equivalent dipole 54
Equivalent moving dipole 163
Event-related fMRI 254
Extracellular electrogram 20
Extracellular fluid 291, 295
Fat 301
Fiber orientation 52, 86
Finite difference Laplacian 186
Finite difference method 58
Finite element method 58
Finite volume method 60
FitzHugh-Nagumo model 100
fMRI 252
Forward problem 43, 53
Functional magnetic resonance imaging 252
Genetic integration 35
Global Laplacian estimate 188
Gradiometer 215
Gray matter 305
Green's function 54, 125