IFMBE Proceedings Series Editors: R. Magjarevic and J. H. Nagel
Volume 22/1
The International Federation for Medical and Biological Engineering, IFMBE, is a federation of national and transnational organizations representing internationally the interests of medical and biological engineering and sciences. The IFMBE is a non-profit organization fostering the creation, dissemination and application of medical and biological engineering knowledge and the management of technology for improved health and quality of life. Its activities include participation in the formulation of public policy and the dissemination of information through publications and forums. Within the field of medical, clinical, and biological engineering, the IFMBE's aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. The objectives of the IFMBE are scientific, technological, literary, and educational.

The IFMBE is a WHO-accredited NGO covering the full range of biomedical and clinical engineering, healthcare, and healthcare technology and management. Through its 58 member societies it represents some 120,000 professionals involved in the various issues of improved health and health care delivery.

IFMBE Officers
President: Makoto Kikuchi, Vice-President: Herbert Voigt, Former President: Joachim H. Nagel, Treasurer: Shankar M. Krishnan, Secretary-General: Ratko Magjarevic
http://www.ifmbe.org
Previous Editions:

IFMBE Proceedings ECIFMBE 2008, "4th European Conference of the International Federation for Medical and Biological Engineering", Vol. 22, 2008, Antwerp, Belgium, CD
IFMBE Proceedings BIOMED 2008, "4th Kuala Lumpur International Conference on Biomedical Engineering", Vol. 21, 2008, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings NBC 2008, "14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics", Vol. 20, 2008, Riga, Latvia, CD
IFMBE Proceedings APCMBE 2008, "7th Asian-Pacific Conference on Medical and Biological Engineering", Vol. 19, 2008, Beijing, China, CD
IFMBE Proceedings CLAIB 2007, "IV Latin American Congress on Biomedical Engineering 2007, Bioengineering Solutions for Latin America Health", Vol. 18, 2007, Margarita Island, Venezuela, CD
IFMBE Proceedings ICEBI 2007, "13th International Conference on Electrical Bioimpedance and the 8th Conference on Electrical Impedance Tomography", Vol. 17, 2007, Graz, Austria, CD
IFMBE Proceedings MEDICON 2007, "11th Mediterranean Conference on Medical and Biological Engineering and Computing 2007", Vol. 16, 2007, Ljubljana, Slovenia, CD
IFMBE Proceedings BIOMED 2006, "Kuala Lumpur International Conference on Biomedical Engineering", Vol. 15, 2006, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings WC 2006, "World Congress on Medical Physics and Biomedical Engineering", Vol. 14, 2006, Seoul, Korea, DVD
IFMBE Proceedings BSN 2007, "4th International Workshop on Wearable and Implantable Body Sensor Networks", Vol. 13, 2007, Aachen, Germany
IFMBE Proceedings ICBME 2005, "The 12th International Conference on Biomedical Engineering", Vol. 12, 2005, Singapore, CD
IFMBE Proceedings EMBEC'05, "3rd European Medical & Biological Engineering Conference, IFMBE European Conference on Biomedical Engineering", Vol. 11, 2005, Prague, Czech Republic, CD
IFMBE Proceedings ICCE 2005, "The 7th International Conference on Cellular Engineering", Vol. 10, 2005, Seoul, Korea, CD
IFMBE Proceedings NBC 2005, "13th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics", Vol. 9, 2005, Umeå, Sweden
IFMBE Proceedings APCMBE 2005, "6th Asian-Pacific Conference on Medical and Biological Engineering", Vol. 8, 2005, Tsukuba, Japan, CD
IFMBE Proceedings BIOMED 2004, "Kuala Lumpur International Conference on Biomedical Engineering", Vol. 7, 2004, Kuala Lumpur, Malaysia
IFMBE Proceedings MEDICON and HEALTH TELEMATICS 2004, "X Mediterranean Conference on Medical and Biological Engineering", Vol. 6, 2004, Ischia, Italy, CD
IFMBE Proceedings III CLAEB 2004, "3rd Latin-American Congress on Biomedical Engineering", Vol. 5, 2004, João Pessoa, Brazil, CD
IFMBE Proceedings Vol. 22/1

Jos Vander Sloten · Pascal Verdonck · Marc Nyssen · Jens Haueisen (Eds.)
4th European Conference of the International Federation for Medical and Biological Engineering ECIFMBE 2008 23–27 November 2008 Antwerp, Belgium
Editors Jos Vander Sloten KULeuven Biomechanics and Engineering Design Section Celestijnenlaan 300c - bus 2419 3001 Heverlee Belgium
Marc Nyssen Free University Brussels Medical Informatics Department Laarbeeklaan 103 1090 Brussels Belgium
Pascal Verdonck Ghent University Cardiovascular Mechanics and Biofluid Dynamics Research Unit De Pintelaan 185 9000 Gent Belgium
Jens Haueisen Technical University of Ilmenau Institute for Biomedical Engineering and Informatics P.O. Box 100565 98639 Ilmenau Germany
ISSN 1680-0737 ISBN-13 978-3-540-89207-6
e-ISBN-13 978-3-540-89208-3
DOI 10.1007/978-3-540-89208-3
Library of Congress Control Number: 2008939398

© International Federation for Medical and Biological Engineering 2008

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The IFMBE Proceedings is an Official Publication of the International Federation for Medical and Biological Engineering (IFMBE).

Typesetting: Data supplied by the authors
Production: le-tex publishing services oHG, Leipzig
Cover design: deblik, Berlin

Printed on acid-free paper

springer.com
About IFMBE

The International Federation for Medical and Biological Engineering (IFMBE) was established in 1959 to provide medical and biological engineering with a vehicle for international collaboration in research and practice of the profession. The Federation has a long history of encouraging and promoting international cooperation and collaboration in the use of science and engineering for improving health and quality of life.

The IFMBE is an organization with a membership of national and transnational societies and an International Academy. At present there are 52 national members and 5 transnational members representing a total membership in excess of 120,000 worldwide. An observer category is provided to groups or organizations considering formal affiliation. Personal membership is possible for individuals living in countries without a member society. The International Academy includes individuals who have been recognized by the IFMBE for their outstanding contributions to biomedical engineering.
Objectives

The objectives of the International Federation for Medical and Biological Engineering are scientific, technological, literary, and educational. Within the field of medical, clinical and biological engineering its aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. In pursuit of these aims the Federation engages in the following activities: sponsorship of national and international meetings, publication of official journals, cooperation with other societies and organizations, appointment of commissions on special problems, awarding of prizes and distinctions, establishment of professional standards and ethics within the field, as well as other activities which in the opinion of the General Assembly or the Administrative Council would further the cause of medical, clinical or biological engineering. It promotes the formation of regional, national, international or specialized societies, groups or boards, the coordination of bibliographic or informational services, the improvement of standards in terminology, equipment, methods and safety practices, and the delivery of health care. The Federation works to promote improved communication and understanding in the world community of engineering, medicine and biology.
Activities

Publications of the IFMBE include the journal Medical and Biological Engineering and Computing, the electronic magazine IFMBE News, and the Book Series on Biomedical Engineering. In cooperation with its international and regional conferences, the IFMBE also publishes the IFMBE Proceedings series. All publications of the IFMBE are published by Springer.

The Federation has two divisions: Clinical Engineering and Health Care Technology Assessment. Every three years the IFMBE holds a World Congress on Medical Physics and Biomedical Engineering, organized in cooperation with the IOMP and the IUPESM. In addition, annual, milestone and regional conferences are organized in different regions of the world, such as the Asia Pacific, Europe, the Nordic-Baltic and Mediterranean regions, Africa and Latin America.

The Administrative Council of the IFMBE meets once a year and is the steering body for the IFMBE. The Council is subject to the rulings of the General Assembly, which meets every three years. Information on the activities of the IFMBE can be found on the web site at http://www.ifmbe.org.
Welcome to Antwerp!

Dear colleagues, in the name of all organizers, a cordial welcome to Antwerp on the occasion of ECIFMBE 2008, the 4th European Conference of the International Federation for Medical and Biological Engineering. The previous meetings were successfully organized twice in Vienna (1995 and 1999) and once in Prague (2005).

Antwerp has a long-standing tradition of openness to the world, as a cosmopolitan city and one of the great harbors and cultural centers of our continent. The symbol of the city is the hero Brabo, throwing into the river Schelde the hand of the villainous giant Antigoon, whom he had just defeated in combat. It shows that the spirit of independence in this region of the world defeats tyrants. It also reveals the shift in emphasis over time: in the dark ages limbs were cut off from people; in our times we take on the challenge of restoring and repairing cells, organs, limbs … the whole body, thanks to the collaboration of medicine and engineering. The theme of the Antwerp conference, 'Engineering for Health', aims to highlight this interactive collaboration. It also links to current concepts of translational medicine, transferring research findings from the bench to industry and to the bed of the patient. From the start, the scientific committees were composed of both engineers and physicians, and thanks to this we have elaborated a unique scientific program structured in seven tracks, embracing the rich field of biomedical engineering:
• Signal and image processing and ICT
• Clinical engineering and applications
• Biomechanics
• Biomaterials and tissue repair
• Innovations and nanotechnology
• Modeling and simulation
• Education and profession
Thanks to the co-sponsorship of the IFMBE, the endorsement of the European societies ESEM and EAMBES, and the support of the German society DGBMT, the Belgian society BSMBEC was able to organize this conference. We wish you an excellent conference, a productive meeting and an enjoyable stay in Antwerp!
The Conference chairmen: Prof. Marc Nyssen
Prof. Jos Vander Sloten
Prof. Pascal Verdonck
Prof. Jens Haueisen
Conference details

Name: 4th European Conference of the International Federation for Medical and Biological Engineering
Short name: ECIFMBE 2008
Venue: Antwerp, Belgium, 23–27 November 2008

In cooperation with:
ESEM Congress 2008
BMT 2008 – 40th Annual Conference of the German Society for Biomedical Engineering within VDE
IEEE-EMBS Benelux Chapter Symposium 2008

Endorsed by: EAMBES

Local Organising Committee
Chairmen/Editors: Vander Sloten, Jos; Haueisen, Jens; Nyssen, Marc; Verdonck, Pascal
Members: Brimioulle, Serge; Cornelis, Jan; De Wachter, Dirk; Delbeke, Jean; Delchambre, Alain; Kolh, Philippe; Lambrecht, Luc; Lefèvre, Philippe; Lemahieu, Ignace; Peeters, Stefaan; Van der Perre, Georges; Van Huffel, Sabine; Wuyts, Floris

Scientific committee
Achten, Rik (Ghent, Belgium) Adam, Dan (Haifa, Israel) Ambrosio, Luigi (Naples, Italy)
Artmann, Gerhard (Jülich, Germany) Arvanitis, Theo (Birmingham, UK) Aubert, André (Leuven, Belgium) Baets, Roeland (Ghent, Belgium) Bamidis, Panagiotis D. (Thessaloniki, Greece) Bartic, Carmen (Leuven, Belgium) Bijnens, Bart (Leuven, Belgium) Blinowska, Katarzyna (Warsaw, Poland) Bosmans, Hilde (Leuven, Belgium) Brimioulle, Serge (Brussels, Belgium) Buzug, Thorsten (Lübeck, Germany) Caemaert, Jacques (Ghent, Belgium) Calil Said, Jorge (Zeferino Vaz, Brazil) Catelli Infantosi, Antonio F. (Rio de Janeiro, Brazil) Ceelen, Wim (Ghent, Belgium) Claessens, Tom (Ghent, Belgium) Clarys, Jan (Brussels, Belgium) Colardyn, Francis (Ghent, Belgium) Cornelis, Jan (Brussels, Belgium) Cornelissen, Ria (Ghent, Belgium) Costin, Hariton (Iasi, Romania) D'Asseler, Yves (Ghent, Belgium) D'Hooge, Jan (Leuven, Belgium) De Backer, Jan (Antwerp, Belgium) De Backer, Wilfried (Antwerp, Belgium) De Beule, Matthieu (Ghent, Belgium) De Deene, Yves (Ghent, Belgium) De Moor, Georges (Ghent, Belgium) De Moor, Bart (Leuven, Belgium) De Ridder, Dirk (Antwerp, Belgium) De Sutter, Johan (Ghent, Belgium) De Wagter, Carlos (Ghent, Belgium) Deconinck, Frank (Brussels, Belgium) Decramer, Mark (Leuven, Belgium) Deklerck, Rudi (Brussels, Belgium) Delbeke, Jean (Brussels, Belgium) Dickhaus, Hartmut (Heilbronn, Germany) Dierckx, Rudi (Ghent, Belgium) Dössel, Olaf (Karlsruhe, Germany) Dubruel, Peter (Ghent, Belgium) Duifhuis, Hendrikus (Groningen, The Netherlands) Eberle, Wolfgang (IMEC Leuven, Belgium) Epple, Matthias (Essen, Germany) Ermert, Helmut (Bochum, Germany) Fiers, Tom (Ghent, Belgium) Flandre, Denis (Louvain-La-Neuve, Belgium) Friesdorf, Wolfgang (Berlin, Germany)
Gehring, Hartmut (Lübeck, Germany) Gelinsky, Michael (Dresden, Germany) Geris, Liesbet (Leuven, Belgium) Gielen, Jan (Antwerp, Belgium) Gilly, Hermann (Vienna, Austria) Goh Cho Hong, James (Singapore, Singapore) Gomez, Enrique J. (Madrid, Spain) Hadjileontiadis, Leontios (Thessaloniki, Greece) Haex, Bart (Leuven, Belgium) Hahn, Eckhart (Erlangen, Germany) Hammer, Joachim (Regensburg, Germany) Haueisen, Jens (Ilmenau, Germany) Hein, Hans-Joachim (Halle, Germany) Herijgers, Paul (Leuven, Belgium) Hesse, Christian (Nijmegen, The Netherlands) Hexamer, Martin (Bochum, Germany) Hoelscher, Uvo (Münster, Germany) Hofmann, Ulrich (Lübeck, Germany) Holcik, Jiri (Kladno, Czech Republic) Hozman, Jiri (Kladno, Czech Republic) Huang, Chun-Hsi (Connecticut, USA) Husar, Peter (Ilmenau, Germany) Hutten, Helmut (Graz, Austria) Hyttinen, Jari (Tampere, Finland) Igney, Claudia (Philips) Imhoff, Michael (Bochum, Germany) Jaeger, Marc (Karlsruhe, Germany) James, Christopher J. (Southampton, UK) Jämsä, Timo (Oulu, Finland) Jaron, Dov (Philadelphia, USA) Jobbágy, Ákos (Budapest, Hungary) Jonkers, Ilse (Leuven, Belgium) Kikuchi, Makoto (Saitama, Japan) Kim, Sun I. (Hanyang, Korea) Klee, Doris (Aachen, Germany) Kolh, Philippe (Liège, Belgium) Korb, Harald (Mannheim, Germany) Kraft, Marc (Berlin, Germany) Krishnan, Shankar M. (Boston, USA) Lackovic, Igor (Zagreb, Croatia) Lahorte, Philippe (European Patent Office) Le Beux, Pierre (Rennes, France) Lee, Clive (Dublin, Ireland) Lefèvre, Philippe (Louvain-La-Neuve, Belgium) Lemahieu, Ignace (Ghent, Belgium) Lemor, Robert (St. Ingbert, Germany) Leonhardt, Steffen (Aachen, Germany) Liebert, Adam (Warsaw, Poland) Luypaert, Robert (Brussels, Belgium) Luyten, Jan (VITO, Belgium)
Luyten, Frank (Leuven, Belgium) Maess, Burkhard (Leipzig, Germany) Magjarevic, Ratko (Zagreb, Croatia) Maglaveras, Nicos (Thessaloniki, Greece) Malberg, Hagen (Eggenstein-Leopoldshafen, Germany) Maniewski, Roman (Warsaw, Poland) McCullagh, Paul (Ulster, UK) Miklavcic, Damijan (Ljubljana, Slovenia) Mizrahi, Joe (Haifa, Israel) Mokwa, Wilfried (Aachen, Germany) Morgenstern, Ute (Dresden, Germany) Müller-Karger, Carmen (Caracas, Venezuela) Nagel, Joachim (Stuttgart, Germany) Niederer, Pieter (Zurich, Switzerland) Niederlag, Wolfgang (Dresden, Germany) Nunziata, Enrico (Torino, Italy) Nyssen, Marc (Brussels, Belgium) O'Brien, Fergal (Dublin, Ireland) Offenhäusser, Andreas (Jülich, Germany) Pallikarakis, Nicolas (Patras, Greece) Penzel, Thomas (Berlin, Germany) Philips, Wilfried (Ghent, Belgium) Prendergast, Patrick J. (Dublin, Ireland) Puers, Bob (Leuven, Belgium) Putzeys, Theo (Berchem, Belgium) Rakhorst, Gerard (Groningen, The Netherlands) Reichenbach, Jürgen (Jena, Germany) Ren, James (Liverpool, UK) Robitzki, Andrea (Leipzig, Germany) Rosahl, Steffen (Erfurt, Germany) Ruggeri, Alfredo (Padova, Italy) Sansen, Willy (Leuven, Belgium) Saranummi, Niilo (Tampere, Finland) Schacht, Etienne (Ghent, Belgium) Schauer, Thomas (Berlin, Germany) Scheunders, Paul (Antwerp, Belgium) Schmitz, Georg (Bochum, Germany) Schmitz, Klaus-Peter (Rostock, Germany) Schrooten, Jan (Leuven, Belgium) Secca, Mário Forjaz (Lisboa, Portugal) Segers, Patrick (Ghent, Belgium) Siebes, Maria (Amsterdam, The Netherlands) Sijbers, Jan (Antwerp, Belgium) Simanski, Olaf (Rostock, Germany) Slaaf, Dick (Maastricht, The Netherlands) Staelens, Steven (Ghent, Belgium) Stett, Alfred (Reutlingen, Germany) Stieglitz, Thomas (Freiburg, Germany) Such, Olaf (Philips) Tabakov, Slavik (London, UK)
Thonnard, J.-L. (Brussels, Belgium) Tonkovic, Stanko (Zagreb, Croatia) Trahms, Lutz (Berlin, Germany) Urban, Gerald (Freiburg, Germany) Van Bortel, Luc (Ghent, Belgium) Van Brussel, Hendrik (Leuven, Belgium) Van Buyten, Jean-Pierre (Vice President VAVP) Van de Voorde, Wim (Leuven, Belgium) Van Der Linden, A. (Antwerp, Belgium) Van der Perre, Georges (Leuven, Belgium) Van Huffel, Sabine (Leuven, Belgium) Van Humbeeck, Jan (Leuven, Belgium) Van Leeuwen, Peter (Bochum, Germany) Van Lenthe, Harry (Leuven, Belgium) Van Oosterwyck, Hans (Leuven, Belgium) Van Zundert, Jan (President VAVP) Vandenberghe, Stefaan (Ghent, Belgium) Vander Sloten, Jos (Leuven, Belgium)
Vanderstraeten, Guy (Ghent, Belgium) Vanrumste, Bart (Leuven, Belgium) Veltink, Peter (Twente, The Netherlands) Verbanck, Sylvia (Brussels, Belgium) Verdonck, Pascal (Ghent, Belgium) Verdonk, Peter (Ghent, Belgium) Verdonk, René (Ghent, Belgium) Verellen, Dirk (Brussels, Belgium) Verkerke, Bart (Groningen, The Netherlands) Vleugels, Arthur (Leuven, Belgium) Voigt, Herbert F. (Boston, USA) Voss, Andreas (Jena, Germany) Walter, Marian (Aachen, Germany) Werner, Jürgen (Bochum, Germany) Wessel, Niels (Berlin, Germany) Wildau, Hans-Jürgen (Berlin, Germany) Wojcicki, Jan (Warsaw, Poland) Wuyts, Floris (Antwerp, Belgium)
Content

Modern Concepts of Cardiovascular Analysis – Monitoring, Diagnosis, Risk Stratification, Prediction

Risk Stratification in Ischemic Heart Failure Patients with Linear and Nonlinear Methods of Heart Rate Variability Analysis .......................................................................................................................................... 1
A. Voss, R. Schroeder, M. Vallverdú, H. Brunel, I. Cygankiewicz, R. Vázquez, A. Bayés de Luna and P. Caminal
Biomagnetic risk stratification by QRS fragmentation in patients with Implanted Cardioverter Defibrillators............ 5 M. Goernig, D. DiPietroPaolo, J. Haueisen and S.E. Erné
Beating Rate Variability Studies with Human Embryonic Stem Cell Derived Cardiomyocytes....................................... 8
F.E. Kapucu, M. Pekkanen-Mattila, V. Kujala, J. Viik, K. Aalto-Setälä, E. Kerkelä, J.M.A. Tanskanen and J. Hyttinen
The method of assessment of chosen hemodynamic and electrophysiologic parameters in the healthy human subject circulation................................................................................................................................................................... 12 K. Peczalski, D. Wojciechowski, P. Sionek, Z. Dunajski, T. Palko
2D Isochronal Correlation Method to Detect Pacing Capture during Ventricular Fibrillation...................................... 14 X. Ibáñez-Català, M.S. Guillem, A.M. Climent, F.J. Chorro, F. Pelechano, I. Trapero, E. Roses, A. Guill, A. Tormos, J. Millet
Chaotic Phase Space Differential (CPSD) Algorithm for Real-Time Detection of VF, VT, and PVC ECG Signals............................................................................................................................................................ 18 Chien-Sheng Liu, Yu-Chiun Lin, Yueh-Hsun Chuang, Tze-Chien Hsiao, Chii-Wann Lin
Supervised ECG Delineation Using the Wavelet Transform and Hidden Markov Models............................................. 22 G. de Lannoy, B. Frenay, M. Verleysen and J. Delbeke
Validating the Reliability of Five Ventricular Fibrillation Detecting Algorithms ............................................................ 26 A.H. Ismail, M. Fries, R. Rossaint and S. Leonhardt
Novel multichannel capacitive ECG-System for cardiac diagnostics beyond the standard-lead system ........................ 30 M. Oehler, M. Schilling and H.D. Esperer
On the Use of Independent Component Analysis in Biomedical Signal Processing

On Independent Component Analysis based on Spatial, Temporal and Spatio-temporal information in biomedical signals............................................................................................................................................................... 34
C.J. James
Extracting Event-Related Field Components Through Space-Time ICA: a Study of MEG Recordings from Children with ADHD and Controls ............................................................................................................................. 38 C. Demanuele, C. James, A. Capilla and E.J.S. Sonuga-Barke
Measurement, Signal Processing and Models for Human Movement and Posture Analysis: Measurements, Methods and Instrumentation

Gender differences in the control of the upper body accelerations during level walking ................................................ 43
C. Mazzà, M. Iosa, P. Picerno, F. Masala and A. Cappozzo
Heart rate variability analysis during bicycle ergometer exercise ..................................................................................... 47 Federica Censi, Daniele Bibbo and Silvia Conforto
Color cues in Human Motion Analysis ................................................................................................................................. 51
Ana Kuzmanić Skelin, Damir Krstinić and Vlasta Zanchi
Transforming retinal velocity into 3D motor coordinates for pursuit eye movements .................................................... 55 G. Blohm, P. Daye and P. Lefevre
Effectiveness of deep brain stimulation in subthalamic nucleus in Parkinson’s disease – a somatotopic organisation .................................................................................................................................................... 59 T. Heida, E.C. Wentink and J.A.G. Geelen
The Median Point DTW Template to Classify Upper Limb Gestures at Different Speeds.............................................. 63 R. Muscillo, M. Schmid and S. Conforto
2D Markerless Gait Analysis ................................................................................................................................................. 67 Michela Goffredo, John N. Carter and Mark S. Nixon
Tremor control during movement of the upper limb using artificial neural networks.................................................... 72 G. Severini, S. Conforto, I. Bernabucci, M. Schmid and T. D’Alessio
Low-cost, Automated Assessment of Sit-To-Stand Movement in "Natural" Environments ........................................... 76 Sonya Allin and Alex Mihailidis
Human Body Motions Classification..................................................................................................................................... 80 J. Havlik, J. Uhlir and Z. Horcik
Parametric Representation of Hand Movement in Parkinson’s Disease ........................................................................... 85 R. Krupicka, Z. Szabo and P. Janda
A wireless integrated system to evaluate efficiency indexes in real time during cycling .................................................. 89 D. Bibbo, S. Conforto, I. Bernabucci, M. Schmid and T. D’Alessio
Contactless head posture measurement................................................................................................................................ 93 P. Janda, J. Hozman, M. Jirina, Z. Szabo and R. Krupicka
Specialized glasses - projection displays for neurology investigation................................................................................. 97 Charfreitag J., Hozman J., Cerny R.
Digital Wireless Craniocorpography with Sidelong Scanning by TV Fisheye Camera ................................................. 102 J. Hozman, P. Kutilek, Z. Szabo, R. Krupicka, M. Jirina, V. Zanchi and R. Cerny
Calibration of a measurement system for the evaluation of efficiency indexes in bicycle training ............................... 106 S. Conforto, S.A. Sciuto, D. Bibbo and A. Scorza
Preliminary study on a remote system for diagnostic-therapeutic postural measurements .......................................... 110
S.A. Sciuto and A. Scorza
FUSION – Future Environment for Gentle Liver Surgery Using Image-Guided Planning and Intra-Operative Navigation

Software Assistance for Planning of RF-Ablation and Oncological Resection in Liver Surgery .................................. 114
S. Zidowitz, H. Bourquain, C. Hansen, C. Rieder, A. Weihusen, G. Prause and H.-O. Peitgen
Ultrasound Navigated RFA of Liver Tumors .................................................................................................................... 118 S. Arnold, A. Schmitgen, G. Grunst, R. Kubitz, D. Reichelt, M. Cohnen
LapAssistent - a laparoscopic liver surgery assistance system ......................................................................................... 121 Volker Martens, Stefan Schlichting, Armin Besirevic, Markus Kleemann
The Living Human Project: Building the Musculoskeletal Physiome

The observation of human joint movement........................................................................................................................ 126
A. Cappozzo
Biomedical Signal Processing

Estimation of speed distribution of particles moving in an optically turbid multiple scattering medium by decomposition of laser-Doppler spectrum..................................................................................................................... 130
S. Wojtkiewicz, H. Rix, N. Żołek, R. Maniewski and A. Liebert
A Novel Multivariate Analysis Method with Noise Reduction ......................................................................................... 133 Shu-Hao Chang, Yu-Jen Chiou, Chun Yu, Chii-Wann Lin, Tzu-Chien Hsiao
Iterative improvement of lineshape estimation.................................................................................................................. 138 M.I. Osorio Garcia, D.M. Sima, J.-B. Poullet, D. van Ormondt, S. Van Huffel
An Investigation of the use of a High Resolution ADC as a “Digital Biopotential Amplifier” ...................................... 142 D. Berry, F. Duignan and R. Hayes
A portable data acquisition system for the measurement of impact attenuation material properties .......................... 148 David Eager and Chris Chapman
Assessing Driver’s Hypovigilance from Biosignals ............................................................................................................ 152 D. Sommer, M. Golz, U. Trutschel, D. Edwards
Analyzing an sEMG signal using wavelets ......................................................................................................................... 156 Y. Bastiaensen, T. Schaeps and J.P. Baeyens
Role of Myelin in Synchronization and Rhythmicity of Visual Impulses ........................................................................ 160 S.M. Shushtarian
Analyzing Magnetic Resonance Spectroscopic Signals with Macromolecular Contamination by the Morlet Wavelet.......................................................................................................................................................... 163 A. Suvichakorn, H. Ratiney, A. Bucur, S. Cavassila and J.-P. Antoine
Robust and Adaptive Filtering of Multivariate Online-Monitoring Time Series ........................................................... 167 M. Borowski, M. Imhoff, K. Schettlinger and U. Gather
A novel synchronization measure for epileptic seizure detection based on Fourier series expansions ......................... 171 H. Perko, M. Hartmann, K. Schindler and T. Kluge
Investigating the Relationship between Breath Acoustics and FEV1 During Histamine Challenge............................. 176 E. Chah, S. Glynn, M. Atiyeh, R.W. Costello and R.B. Reilly
Spatio-Temporal Solutions in Inverse Electrocardiography ............................................................................................ 180 Murat Onal, Yesim Serinagaoglu
Genetic Algorithm Based Feature Selection Applied on Predicting Microsleep from Speech....................................... 184 J. Krajewski, M. Golz, D. Sommer and R. Wieland
Automated detection of tonic seizures using 3-D accelerometry ...................................................................................... 188 Tamara M.E. Nijsen, Ronald M. Aarts, Johan B.A.M. Arends, Pierre J.M. Cluitmans
A Classification Attempt of COPD, ALI-ARDS and Normal Lungs of ventilated Patients through Compliance and Resistance over Time Waveform Discrimination ....................................................................................................... 192 A. Tzavaras, B. Spyropoulos, E. Kokalis, A. Palaiologos, D. Georgopoulos, G. Prinianakis, M. Botsivaly and P.R. Weller
Modified Matching Pursuit algorithm for application in sleep screening ....................................................................... 196 D. Sommermeyer, M. Schwaibold, B. Schöller, L. Grote and J. Hedner
Wireless capsule endoscopic frame classification scheme based on higher order statistics of multi-scale texture descriptors................................................................................................................................................................ 200 D. Barbosa, J. Ramos and C. Lima
One-class support vector machine for joint variable selection and detection of postural balance degradation .......... 204 H. Amoud, H. Snoussi, D.J. Hewson and J. Duchêne
The influence of treatment on linear and non-linear parameters of autonomic regulation in patients with acute schizophrenia...................................................................................................................................................... 208 S. Schulz, K.J. Bär and A. Voss
Intrinsic Mode Entropy for postural steadiness analysis .................................................................................................. 212 H. Amoud, H. Snoussi, D.J. Hewson and J. Duchêne
Influence of different representations of the oscillometric index on automatic determination of the systolic and diastolic blood pressures............................................................................................................................................... 216 V. Jazbinsek, J. Luznik and Z. Trontelj
Laser Doppler flowmetry signals: pointwise Hölder exponents of experimental signals from young healthy subjects and numerically simulated data............................................................................................................................ 221 Benjamin Buard, Anne Humeau, David Rousseau, François Chapeau-Blondeau, and Pierre Abraham
Exploring a physiological environment learning tool for encouraging continued deep relaxation ............................... 226 J. Condron, E. Coyle and A. de Paor
Characterization of a bimodal electrocutaneous stimulation device................................................................................ 230 P. Steenbergen, J.R. Buitenweg, E.M. van der Heide and P.H. Veltink
Identification of the electrohysterographic volume conductor by high-density electrodes.................................................................................................................................................... 235 Chiara Rabotti, Massimo Mischi, Marco Gamba, Maartje Vinken, Guid Oei and Jan Bergmans
Signal Separation in the Frequency Domain for Quantitative Ultrasound Measurements of Bone ............................. 239 S. Dencks, R. Barkmann, C.-C. Glüer and G. Schmitz
Consecutive Detection of Extreme Central Fatigue........................................................................................................... 243 D. Sommer, M. Golz and J. Krajewski
Detection of Obstructive Sleep Apnea by Empirical Mode Decomposition on Tachogram........................................... 247 B. Mijović, J. Corthout, S. Vandeput, M. Mendez, S. Cerutti, S. Van Huffel
Empirical mode decomposition. Spectral properties in normal and pathological voices............................................... 252 M.E. Torres, G. Schlotthauer, H.L. Rufiner and M.C. Jackson-Menaldi
Global and local inhomogeneity indices of lung ventilation based on electrical impedance tomography .................... 256 Z. Zhao, K. Möller, D. Steinmann, J. Guttmann
Analysis of Intracardiac ECG Measured in the Coronary Sinus ..................................................................................... 260 C. Schilling, A. Luik, C. Schmitt and O. Dössel
Rapid wheezing detection algorithm for real-time asthma diagnosis and personal health care.................................... 264 Chun Yu, Tzu-Chien Hsiao, Tzu-Hsiu Tsai, Shi-Ing Huang, Chii-Wann Lin
Lung sound analysis to monitor lung recruitment............................................................................................................. 268 K. Möller, Z. Zhao, S. Schließmann, D. Schwenninger, S.J. Schumann, A. Kirschbaum, J. Guttmann
Dual Kalman Filter based State-Parameter Estimation in Linear Lung Models ........................................................... 272 E. Saatci and A. Akan
Beat Pressure and Comparing it with Ascending Aorta Pressure in Normal and Abnormal Conditions.................... 276 O. Ghasmelizadeh, M.R. Mirzaee, B. Firoozabadi, B. Sajadi, A. Zolfonoon
Comparison of Ultrasonic Measurement and Numerical Simulation Results of the Flow through Vertebral Arteries.................................................................................................................................................. 286 D. Obidowski, M. Mysior and K.St. Jozwik
The Influence of Wall Deformation on Transmural Flow in Thoracic Aorta: Three-Dimensional Simulations ......... 293 M. Dabagh, P. Jalali
Non-invasive Vascular Ultrasound Strain Imaging: Different Arteries, Different Approaches.................................... 298 H.H.G. Hansen, R.G.P. Lopata, S. Holewijn, M. Truijers, and C.L. de Korte
Photoplethysmogram Signal Conditioning by Monitoring of Oxygen Saturation and Diagnostic of Cardiovascular Diseases .................................................................................................................................................. 303 O. Abdallah, A. Piera Tarazona, T. Martínez Roca, H. Boutahir, K. Abo Alam, A. Bolz
Choice of coordinate system for left ventricular FE-mesh generation............................................................................. 307 H.F. Choi, M. Wu, J. D’hooge, F.E. Rademakers and P. Claus
Fetal ECG Extraction Using Multi-Layer Perceptron Neural Networks with Bayesian Approach ............................. 311 S. Mojtaba Golzan, Farzaneh Hakimpour and Alireza Toolou
Feature Selection for Brain-Computer Interface............................................................................................................... 318 N.S. Dias, P.M. Mendes and J.H. Correia
Nature Inspired Concepts in Long-Term Electrocardiogram Clustering ....................................................................... 322 M. Bursa, L. Lhotska
Evaluation of the MU Firing Strategies from Spectral Shape Analysis of sEMG Data ................................................. 326 M. Abi Hayla, S. Boudaoud and C. Marque
Physiological Monitoring of Human Cognitive Processes................................................................................................. 330 E. Vavrinsky, I. Brezina, P. Solarikova, V. Stopjakova, V. Tvarozek and L. Majer
Is Detection of Different Anesthetic Levels Related to Nonlinearity of the Electroencephalogram?............................ 335 D. Jordan, G. Stockmanns, E.F. Kochs and G. Schneider
Combining HR-MAS and In Vivo MRI and MRSI Information for Robust Brain Tumor Recognition ..................... 340 A. Croitor Sava, T. Laudadio, J.B. Poullet, D. Monleon, M.C. Martinez-Bisbal, B. Celda and S. Van Huffel
Detection of ectopic beats in single channel electrocardiograms...................................................................................... 344 A. Hekler, N. Kikillus and A. Bolz
Application of Sequential Recognition of Patient Intent to the Bio-Prosthesis Hand Control – Experimental Investigations of Algorithms................................................................................................................................................ 348 A. Wolczowski, D. Davies and M. Kurzynski
EEG Coherence as Measure of Depressive Disorder......................................................................................................... 353 Anna Suhhova, Maie Bachmann, Kaire Aadamsoo, Ülle Võhma, Jaanus Lass and Hiie Hinrikus
Spectral Analysis of Overnight Pulse Oximetry Recordings in Sleep Studies................................................................. 356 Birgit Schultheiß, Agnieszka Jozefiak-Wesolowska, Nikolaus Böhning, Eckhard Schmittendorf
Changes in connectivity patterns in the kainate model of epilepsy .................................................................................. 360 P. van Mierlo, S. Assecondi, S. Staelens, P. Boon and I. Lemahieu
Detection of Foveation Windows and Analysis of Foveation Sequences in Congenital Nystagmus .............................. 364 Giulio Pasquariello, Mario Cesarelli, Paolo Bifulco, Antonio Fratini, Antonio La Gatta, Domenico Boccuzzi
Relationship between Eye Movement and Facilitation of Perceptual Filling-in ............................................................. 368 M. Yokota and Y. Yokota
Modeling the macromolecular background in Nuclear Magnetic Resonance spectroscopic signals ............................ 372 D.M. Sima, A.M. Rodríguez Díaz and S. Van Huffel
Real-time BSPM processing system .................................................................................................................................... 377 J. Muzik, K. Hana
Review on biostatistical results associated with the application of signal-processing-based scores for acute myocardial infarction (AMI) clinics.................................................................................................................................... 381 S.A.F. Amorim, J.B. Destro-Filho, L.O. Resende, M.A. Colantoni, E.S. Resende
A Principal Component Regression Approach for Estimation of Ventricular Repolarization Characteristics........... 385 J.A. Lipponen, M.P. Tarvainen, T. Laitinen, T. Lyyra-Laitinen and P.A. Karjalainen
Diagnosis of Ischemic Heart Disease with Cardiogoniometry – Linear discriminant analysis versus Support Vector Machines ........................................................................................................................................ 389 A. Seeck, A. Garde, M. Schuepbach, B. Giraldo, E. Sanz, T. Huebner, P. Caminal, A. Voss
Enhancement of a QRS detection algorithm based on the first derivative, using techniques of a QRS detector algorithm based on non-linear transformations ................................................................................................................ 393 C. Vidal, P. Charnay and P. Arce
Using the wavelet packet transform in automatic sleep analysis...................................................................................... 397 Beena Ahmed, Reza Tafreshi
Proposal of Feature Extraction from Wavelet Packets Decomposition of QRS Complex for Normal and Ventricular ECG Beats Classification ......................................................................................................................... 402 Michal Huptych, Lenka Lhotská
Biomedical Imaging and Image Processing
Biomedical Image Segmentation Based on Morphological Spectra ................................................................................. 406 J.L. Kulikowski and M. Przytulska
Magnetic Resonance Imaging in Inhomogeneous Magnetic Fields with Noisy Signal ................................................... 410 V.E. Arpınar, B.M. Eyüboğlu
Combining EEG signals and MRI images for brain mapping using interpolation techniques; a comparative study.............................................................................................................................................................. 414 Pooryaghooti M.H., Golzan S.M., Hakimpour F., Karimi M.
Detection of Basal Nuclei on Magnetic Resonance Images using Support Vector Machines......................................... 421 R. Villegas, A. Bosnjak, R. Chumbimuni, E. Flores, C. López and G. Montilla
Analysis of digital radiographic equipments with development of specific phantoms and software ............................ 425 P. Mayo, F. Rodenas, B. Marín, J.M. Campayo and G. Verdú
Nonlinear Diffusion Filtering of Single-Trial Matrix Representations of Auditory Brainstem Responses .................. 429 I. Mustaffa, F.I. Corona-Strauss, C. Trenado and D.J. Strauss
A Multi-component similarity measure for improved robustness of non-rigid registration of combined FDG PET-CT head and neck images.................................................................................................................................. 433 Y. Papastavrou, D. Cash, D. Hawkes and B. Hutton
Deconvolution of freehand 3d ultrasound data using improved reconstruction techniques in consideration of ultrasound point spread functions .................................................................................................................................. 436 H.J. Hewener, R.M. Lemor
Modeling of ultrasound propagation through contrast agents ......................................................................................... 440 J.J.F.A.H. Grootens, M. Mischi, M. Böhmer, H.H.M. Korsten, R.M. Aarts
Evaluation of Simplex Codes for Photoacoustic Coded Excitation .................................................................................. 444 M.P. Mienkina, A. Eder, C.-S. Friedrich, N.C. Gerhardt, M.R. Hofmann and G. Schmitz
Developing a high-resolution photoacoustic microscopy platform .................................................................................. 448 W. Bost, F. Stracke, M. Fournelle and R. Lemor
Investigation of changes in acoustic properties resulting from contrast material in through-transmission ultrasonic imaging ................................................................................................................................................................ 452 T. Rothstein, D. Gaitini, Z. Gallimidi, H. Azhari
Transcranial sonography as early indicator for genetic Parkinson’s disease ................................................................. 456 Christian Kier, Günter Seidel, Norbert Brüggemann, Johann Hagenah, Christine Klein, Til Aach and Alfred Mertins
Comparison of Imaging Modalities for Quantification of Cyanoacrylate Microbubble Concentration....................... 460 M. Siepmann, G. Schmitz
Ranking of color space components for detection of blood vessels in eye fundus images .............................................. 464 M. Patašius, V. Marozas, D. Jegelevičius, A. Lukoševičius
Detection of the Optic Disc in Images of the Retina Using Gabor Filters and Phase Portrait Analysis ....................... 468 Rangaraj M. Rangayyan, Xiaolu Zhu, and Fábio J. Ayres
A Markov Random Field Approach to Outline Lesions in Fundus Images .................................................................... 472 E. Grisan and A. Ruggeri
Realtime Temperature Control towards Gentle Photocoagulation of the Retina........................................................... 476 R. Brinkmann, K. Schlott, L. Baessler, K. Herrmann, W. Xia, M. Bever, R. Birngruber
Estimation of Real-Time Red Blood Cell Velocity in Conjunctival Vessels using a Modified Dynamic-Time-Warping Approach .................................................................................................................................... 480 E. Grisan, A. Tiso and A. Ruggeri
Functional Optical Imaging of a tissue based on Diffuse Reflectance with Fibre Spectrometer ................................... 484 Shanthi Prince and S. Malarvizhi
Elimination of clavicle shadows to help automatic lung nodule detection on chest radiographs .................................. 488 G. Simkó, G. Orbán, P. Máday, G. Horváth
A Novel Approach for Reducing Dental Filling Artifact in CT-Based Attenuation Correction of PET Data ............. 492 M. Abdoli, M.R. Ay, A. Ahmadian, N. Sahba and H. Zaidi
Comparative Assessment of Different Energy Mapping Approaches in CT Based Attenuation Correction: a Patient Study ...................................................................................................................................................................... 496 M. Shirmohammad, M.R. Ay, S. Sarkar, A. Rahmim, H. Zaidi
Optimization of Yttrium-90 Bremsstrahlung Imaging with Monte Carlo Simulations ................................................. 500 E. Rault, S. Vandenberghe, S. Staelens and I. Lemahieu
Sinogram-Based Motion Detection in Transmission Computed Tomography ............................................................... 505 S. Ens, J. Müller, B. Kratz and T.M. Buzug
A Comparison of MTT Calculation Techniques in MRI Brain Perfusion Imaging ....................................................... 509 J. Ruminski
Reduction of Intravenous Contrast Related Artifacts in CT-Based Attenuation Corrected PET Images ................... 513 M.R. Ay, J.H. Bidgholi, P. Ghafarian and H. Zaidi
Detection of characteristic texture parameters in breast MRI......................................................................................... 517 K.K. Holli, A.-L. Lääperi, L. Harrison, S. Soimakallio, P. Dastidar, H.J. Eskola
Numerical evaluation and comparison of instantaneous anatomical knee joint axes and orthotic joint axes using MRI data under weight-bearing condition............................................................................................................... 522 Annegret Niesche, Martin Tettke, David Hochmann and Marc Kraft
A new method for quantitative evaluation of target volume variations in radiotherapy planning ............................... 526 F. Gaya, B. Rodriguez-Vila, F. del Pozo, F. Garcia-Vicente and E.J. Gomez
Computer Aided Monitoring of Bone Quality and New Bone Formation upon Distractive Maxillary Expansion based on Pre- and Post-Surgical CT-Data.......................................................................................................................... 530 C. Kober, C. Landes, A. Preiss, Y. Lu, P. Young and R. Sader
Automatic Landmark Detection on Epicondyles of Distal Femur in X-Ray Images ...................................................... 533 B. Heidari, F. Madeh Khaksar and D. FitzPatrick
A multichannel system for real-time optoacoustics and its suitability for molecular imaging ...................................... 537 M. Fournelle, K. Maass, H. Hewener, C. Günther, H. Fonfara, H.-J. Welsch, R. Lemor
An Automated System for Full Angle Spatial Compounding in Ultrasound Breast Imaging ....................................... 541 Ch. Hansen, N. Hüttebräuker, M. Hollenhorst, A. Schasse, L. Heuser, G. Schulte-Altedorneburg and H. Ermert
AM-FM Representations for the Characterization of Carotid Plaque Ultrasound Images........................................... 546 C.I. Christodoulou, C.S. Pattichis, V. Murray, M.S. Pattichis, A. Nicolaides
Segmentation of 3D Echocardiographic Images using Deformable Simplex Meshes and Adaptive Filtering ............. 550 M.M. Nillesen, R.G.P. Lopata, I.H. Gerrits, L. Kapusta, H.J. Huisman, J.M. Thijssen, C.L. de Korte
Geometric regularization improves 2D myocardial motion estimates in the mouse: an in-silico study........................ 555 F. Kremer, H.F. Choi, S. Langeland, E. D’Agostino, P. Claus and J. D’hooge
Experimental setup with dual chamber cardiac phantom for ultrasonic elastography ................................................. 559 B. Lesniak-Plewinska, M. Kowalski, S. Cygan, E. Kowalik and K. Kaluzynski
Phase-resolved Doppler Fourier Domain Optical Coherence Tomography in the in vivo mouse model...................... 563 J. Walther, G. Mueller, M. Cuevas, H. Morawietz and E. Koch
Pseudo Automatic camera extrinsic estimation using 3D Hough Transform ................................................................. 567 Wouter Belmans, Tim Schaeps and Bart Jansen
A novel intrinsically calibrated method to measure intracellular Ca2+ with ultimate detection sensitivity utilizing confocal Fluorescence Correlation Spectroscopy (cFCS)................................................................................... 571 Norbert Opitz and Stephan Gude
An Efficient Airway Tree Segmentation Method Robust to Leakage Based on Shape Feature Optimization ............ 575 F. Yousefi Rizi, A. Ahmadian, J. Alirezaie, N. Rezaie, M. Abdoli
Vessel tree extraction: Combination of a region competition based active contour model with a tubular active contour model ................................................................................................................................... 579 Y. Shang, R. Deklerck, E. Nyssen, A. Markova and X. Yang
Graph-based Tracking Method for Aortic Thrombus Segmentation .............................................................................. 584 J. Egger, T. O’Donnell, C. Hopfgartner and B. Freisleben
Fully automatic assessment of carotid artery curvature and diameter with non-invasive ultrasound ......................... 588 Alessandro C. Rossi, Peter J. Brands and Arnold P.G. Hoeks
Liver and Lesion Segmentation Algorithm for Contrast Enhanced CT Images............................................................. 592 A. Markova, F. Temmermans, R. Deklerck, E. Nyssen, P. Clerinx, F. De Munck and J. DeMey
Anatomical Models for Computer Assisted Surgery Using Support Vector Machine ................................................... 596 A. Bosnjak, G. Montilla, R. Villegas, I. Jara
MRI-based 3D-Modelling of Gleno-humeral Joint Deformities for Functional Surgical Planning .............................. 600 G. Al Hares, J. Bahm, B. Wein and K. Radermacher
Extending Mammographic Microcalcification Detection Method to Cluster Characterization ................................... 604 B. Pataki, L. Lasztovicza
Medical feature matching and model extraction from MRI/CT based on the Invariant Generalized Hough/Radon Transform..................................................................................................................................................... 608 D. Hlindzich, R. Maenner
Image Segmentation of Cell Nuclei based on Classification in the Color Space ............................................................. 613 T. Wittenberg, F. Becher, M. Hensel and D.G. Steckhan
Volume Estimation of Pathology Zones in 3D Medical Images........................................................................................ 617 K. Krechetova, A. Glazs
Estimation of blurring of optic nerve disc margin............................................................................................................. 621 M. Patašius, V. Marozas, D. Jegelevičius, D. Daukantaitė, A. Lukoševičius
Robust Data Driven Modeling of Time Intensity Curves.................................................................................................. 625 A. Maciak, A. Kronfeld, P. Stoeter, T. Vomweg, D. Mayer, G. Seidel, K. Meyer-Wiethe
An optimization framework for classifier learning from image data for computer-assisted diagnosis ........................ 629 J. Mennicke, C. Münzenmayer, T. Wittenberg, and U. Schmid
Classification of alveolar microscopy videos with respect to alveolar stability............................................................... 633 D. Schwenninger, K. Moeller, H. Liu and J. Guttmann
Automated Detection of Cell Nuclei in PAP stained cervical smear images using Fuzzy Clustering............................ 637 M.E. Plissiti, E.E. Tripoliti, A. Charchanti, O. Krikoni and D.I. Fotiadis
Analysis of Capsule Endoscopy Images Related to Gastric Ulcer Using Bidimensional Empirical Mode Decomposition ...................................................................................................... 642 Alexandra Tsiligiri and Leontios J. Hadjileontiadis
Motion compensated iterative reconstruction of a cardiac region of interest for CT..................................................... 646 A.A. Isola, A. Ziegler, T. Köhler, U. van Stevendaal, D. Schäfer, W.J. Niessen and M. Grass
An Image Inpainting Based Surrogate Data Strategy for Metal Artifact Reduction in CT Images ............................. 651 M. Oehler and T.M. Buzug
Rodent Imaging with Helical μCBCT................................................................................................................................. 655 D. Soimu, Z. Kamarianakis and N. Pallikarakis
Microcalcification Detection using Digital Tomosynthesis, Dual Energy Mammography and Cone Beam Computed Tomography: A Comparative Study.................................................................................... 660 Z. Kamarianakis, D. Soimu, K. Bliznakova, N. Pallikarakis
Non-Minimum Phase Iterative Deconvolution of Ultrasound Images ............................................................................. 664 N. Testoni, L. De Marchi, N. Speciale and G. Masetti
Dynamic Visualization of the Human Orbit for Functional Diagnostics in Ophthalmology, Cranio-maxillofacial Surgery, and Neurosurgery ............................................................................................................. 669 C. Kober, B.-I. Berg, C. Kunz, E.W. Radü, K. Scheffler, H.-F. Zeilhofer, C. Buitrago-Téllez and A. Palmowski-Wolfe
A Communication Term for the Combined Registration and Segmentation.................................................................. 673 Konstantin Ens, Jens von Berg and Bernd Fischer
Elastic Registration of Optical Images showing Heart Muscle Contraction ................................................................... 676 M. Janich, G. Seemann, J. Thiele and O. Dössel
Automation of the preoperative image processing steps for ultrasound based navigation ............................................ 680 C. Dekomien, S. Winter
Elastic Registration of Functional MRI Data to Sensorimotor Cortex............................................................................ 684 T. Ball, I. Mutschler, D. Jäger, M. Otte, A. Schulze-Bonhage, J. Hennig, O. Speck and A. Schreiber
Enhanced Visualization of Ultrasound Volumes for Diagnostic and Therapeutic Purposes ......................................... 689 U. von Jan, D. Sandkühler, M. Rauberger, H.K. Matthies and H.M. Overhoff
Compensation of Cardiac Motion in Angiographic Sequences for the Assessment of Myocardial Perfusion ............. 693 M. Erbacher, G. Korosoglou, R. Floca and H. Dickhaus
3D Cardiac Strain Imaging using a Novel Tracking Method ........................................................................................... 697 R.G.P. Lopata, M.M. Nillesen, I.H. Gerrits, H.H.G. Hansen, L. Kapusta, J.M. Thijssen and C.L. de Korte
SWI Brain Vessel Change Coincident with fMRI Activation........................................................................................... 701 Mario Forjaz Secca, Michael Noseworthy, Henrique Fernandes and Adrian Koziak
A Subspace Wiener Filtering Approach for Extracting Task-Related Brain Activity from Multi-Echo fMRI Data................................................................................................................................................ 705 C.W. Hesse, P.F. Buur and D.G. Norris
An Elasticity Penalty: Mixing FEM and Nonrigid Registration ...................................................................................... 709 D. Loeckx, L. Roose, F. Maes, D. Vandermeulen and P. Suetens
Evaluation of the biodistribution of In-111 labeled cationic Liposome in mice using multipinhole SPECT Technique ................................................................................................................................................................ 713 S.O. Viehoever, D. Buchholz, H.W. Mueller, O. Gottschalk, A. Wirrwar
Multimodal Medical Case Retrieval using Dezert-Smarandache Theory with A Priori Knowledge............................ 716 G. Quellec, M. Lamard, G. Cazuguel, B. Cochener and C. Roux
Noise properties of the 3-electrode skin admittance measuring circuit ........................................................................... 720 S. Grimnes, Ø.G. Martinsen and C. Tronstad
Thermal Imaging of Skin Temperature Distribution During and After Cooling: In-Vitro Experiments .................... 723 M. Kaczmarek, J. Ruminski
Magnetic Resonance Electrical Impedance Tomography For Anisotropic Conductivity Imaging............................... 728 E. Değirmenci and B.M. Eyüboğlu
Electro-Magnetic Impedance Tomography – a sensitivity analysis ................................................................................. 732 A. Janczulewicz, A. Bujnowski and J. Wtorek
A feasibility study on the detectability of Edema using Magnetic Induction Tomography using an Analytical Model.................................................................................................................................................... 736 B. Dekdouk, M.H. Pham, D.W. Armitage, C. Ktistis, M. Zolgharni and A.J. Peyton
Ventilatory Pattern monitoring by Electrical Impedance Tomography (EIT) in Chronic Obstructive Pulmonary Disease (COPD) patients .................................................................................................................................. 740 Marco Balleza, Teresa Feixas, Nuria Calaf, Mercedes González, Daniel Antón, Pere J. Riu and Pere Casan
A Magnetic Induction Tomography system with sub-millidegree phase noise and high long-term phase stability .... 744 H.C. Wee, S. Watson, R. Patz, H. Griffiths, R.J. Williams
A method for increasing the phase-measurement stability of Magnetic Induction Tomography systems ................... 748 S. Watson, H.C. Wee, R. Patz, R.J. Williams, H. Griffiths
Reduction of low-frequency noise in magnetic induction tomography systems.............................................................. 752 H. Scharfetter, S. Issa
Regional Image Reconstruction with Optimum Currents for MREIT – Evaluation on Shepp-Logan Conductivity Phantom ......................................................................................................................................................... 756 B. Murat Eyüboğlu, Adnan Köksal and Haluk Altunel
A Breast Surface Estimation Algorithm for UWB Microwave Imaging ......................................................................... 760 M. Helbig, C. Geyer, M. Hein, I. Hilger, U. Schwarz, J. Sachs
Automatic Lung Segmentation of Helical-CT Scans in Experimental Induced Lung Injury........................................ 764 L.M. Cuevas, P.M. Spieth, A.R. Carvalho, M.G. de Abreu and E. Koch
Identification of coronary collaterals in imaging cryomicrotome datasets ..................................................................... 768 J.P.H.M. van den Wijngaard, P. van Horssen, R.D. ter Wee, M. Siebes, H. Schulten, M.J. Post, J.A.E. Spaan
Improved regional myocardial perfusion measurement by means of an imaging cryomicrotome ............................... 771 Pepijn van Horssen, Jeroen P.H.M. van den Wijngaard, Maria Siebes, Jos A.E. Spaan
Dose Distribution in Pediatric CT Head Examination: Phantom Study ......................................................................... 775 R. Gotanda, T. Katsuda, T. Gotanda, A. Tabuchi, H. Yatake and Y. Takeda
Measurement of Half-Value Layer for QA and QC: Simple Method Using Radiochromic Film Density ................... 780 T. Gotanda, T. Katsuda, R. Gotanda, A. Tabuchi, K. Yamamoto, T. Kuwano, H. Yatake and Y. Takeda
Towards automatic detection of movement during sleep in pediatric patients with epilepsy by means of video recordings and the optical flow algorithm.......................................................................................................................... 784 K. Cuppens, L. Lagae and B. Vanrumste
Accuracy improvement of nuclear position extraction from hepatic histopathologic images ....................................... 790 M. Takahashi, H. Takayama, K. Oguruma and M. Nakano
Extraction of fibers and nuclei of hepatic histopathologic specimen stained with silver and HE ................................. 795 T. Kitani, M. Takahashi and M. Nakano
Alignment Pixel-to-Pixel for Mammography Obtained by Dual Energy ........................................................................ 799 I.T. Costa, H.J.Q. Oliveira
Reconstruction of phase images for GRAPPA accelerated Magnetic Resonance Imaging............................................ 803 C. Ros, S. Witoszynskyj, K.-H. Herrmann and J.R. Reichenbach
Probabilistic Assignment of Brain Responses to the Human Amygdala and its Subregions using High Resolution Functional MRI .............................................................................................................................. 807 Isabella Mutschler, Birgit Wieckhorst, Andreas Schulze-Bonhage, Erich Seifritz, Jürgen Hennig, Oliver Speck, and Tonio Ball
Velocity Analysis by Using Doppler Fourier Domain Optical Coherence Tomography with Variable Reference Length.......................................................................................................................................... 811 D. Hammer, J. Walther, M. Cuevas and E. Koch
Transverse and oblique motion effects in Fourier Domain Optical Coherence Tomography....................................... 816 J. Walther, M. Cuevas and E. Koch
Enhancement of alveolar videos using scattered light for illumination ........................................................................... 821 D. Schwenninger, K. Moeller, M. Schneider and J. Guttmann
Applications of Microwave Radiometry in Diagnostic Suspicion of Mammary Pathology ........................................... 825 L. Mustata, O. Baltag
Performance of Semiconductor Gamma-Camera System with CdZnTe Detector ......................................................... 829 K. Ogawa, T. Ishikawa, K. Shuto, N. Motomura and H. Kobayashi
Ultra-high Resolution SPECT with CdTe Detectors ......................................................................................................... 832 Naoka Ohmura, Koichi Ogawa
SPECT imaging with a semiconductor detector and a diverging collimator .................................................................. 836 Mizuho Fukizawa and Koichi Ogawa
Sonographic analysis of hyoid bone movement during swallowing ................................................................................. 840 Koichi Yabunaka, Mutsumi Ohue, Tsutomu Hashimoto, Toshizo Katsuda, Kenyu Yamamoto and Shigeru Sanada
Computer-aided ultrasound diagnosis of hepatic steatosis ............................................................................................... 843 Gert Weijers, Johan. M. Thijssen, Alexander Starke, Alois Haudum, Kathrin Herzog, Jürgen Rehage, and Chris L. De Korte
In vivo determination of the human crystalline lens shape with clinically established measurement methods ........... 848 H. Martin, O. Stachs, R. Guthoff, T. Terwee, N. Hosten and K.-P. Schmitz
XXIV
Content
Continuous wave Doppler ultrasound measurement of micro-vibrations induced by a focused acoustic radiation force ........................................................................................................................................................ 852 T.Z. Pavan, A.L. Baggio, A.A.O. Carneiro
Estimation of Carotid Stiffness Using Ultrasonic Dynamic Images for Evaluating the Degree of Arteriosclerosis ................................................................................................................................................................. 856 Y. Yokota, R. Taniguchi, Y. Kawamura, F. Nogata, H. Morita, Y. Uno
CT-MAR Reconstruction Using Non-Uniform Fourier Transform................................................................................. 861 B. Kratz, T. Knopp, M. Oehler, S. Ens, T.M. Buzug
Personalized Ambient Monitoring: Accelerometry for Activity Level Classification .................................................... 866 J.D. Amor and C.J. James
Healthcare Information Systems
Engineering for Health in OP 2000 ..................................................................................................................................... 871 G. Graschew, T.A. Roelofs, S. Rakowsky and P.M. Schlag
Wireless microsensors system for monitoring breathing activity..................................................................................... 875 N. André, P. Gerard, P. Drochmans, T. Kezai, S. Druart, L. Moreno-Hagelsieb, L. Francis, D. Flandre and J.-P. Raskin
Identity Management to Support Access Control in E-Health Systems .......................................................................... 880 Xu Chen, Damon Berry and William Grimson
Building Trust on Body Sensor Network Signals............................................................................................................... 887 Annarita Giani, Ville-Pekka Seppä, Jari Hyttinen and Ruzena Bajcsy
Effects of UV Radiation on the Airborne Particles in the Operating Room.................................................................... 891 Y. Ülgen and I. Tezer
Soluble Gas Tight Capsules for use in Surgical Quality Testing ...................................................................................... 895 J.B. Vorstius, G.A. Thomson and A.P. Slade
Optimization of Ultrasonic Tool Performance in Surgery................................................................................................ 899 Yongqiang Qiu, Zhihong Huang, Alan Slade and Gareth Thomson
A parallel kinematic mechanism for highly flexible laparoscopic instruments............................................................... 903 A. Röse, H.F. Schlaak
A Smart Ultrasonic Cutting System for Surgery ............................................................................................................... 907 Anila Thampy, Zhihong Huang, Alan Slade and Victor Fernandez
Simultaneous Stereo-Optical Navigation of Medical Instruments for Brachytherapy................................................... 911 K. Berthold, D. Richter, F. Schneider and G. Straßmann
The impact of electrosurgical heat on optical force feedback sensors ............................................................................. 914 J.A.C. Heijmans, M.P.H. Vleugels, E. Tabak, T.v.d. Dool, M.P. Oderwald
Classification and Data Mining for Hysteroscopy Imaging in Gynaecology................................................................... 918 M.S. Neofytou, A. Loizou, V. Tanos, M.S. Pattichis, C.S. Pattichis
Video-endoscopic image analysis for 3D reconstruction of the surgical scene ................................................................ 923 A.M. Cano, P. Sánchez-González, F.M. Sánchez-Margallo, I. Oropesa, F. del Pozo and E.J. Gómez
Development of graphical user interface to control remote probe by reflecting contact force on body surface for tele-echography system .................................................................................................................................................. 927 K. Masuda, T. Horiguchi, K. Ookomori, H. Watanabe, K. Ozawa, T. Yoshinaga and Y. Aoki
An endoscopic laser navigation system for computer assisted surgery............................................................................ 932 B. Kosmecki, D. Mucha, M. Khan and T. Krueger
Comparison of optical CT imaging versus NMR imaging for nPAG gel dosimetry....................................................... 936 J. Vandecasteele and Y. De Deene
Application of a surgical navigation system for zygoma implant surgery....................................................................... 940 Chen Xiaojun, Wu Yiqun and Wang Chengtao
Individual bone implant modeling using planned resection lines for facial and cranial tumor resection .................... 944 A. Rose, M. Klein and T. Krueger
A feasibility study on chronic wounds of laser Doppler perfusion imaging during Topical Negative Pressure therapy ................................................................................................................................................................... 948 S.H. Aarnink, M.D.I. Lansbergen, W. Steenbergen
Automated Inspection System of Stent ............................................................................................................................... 952 Issa Ibraheem, Alfred Binder
An intrahepatic electromagnetic localizer .......................................................................................................................... 958 R. Maestle, D. Mucha, B. Kosmecki and T. Krueger
Computerized Interpretation of Cardiotocographs Using Kubli Score........................................................................... 962 B.N. Krupa, F.M. Hasan, M.A. Mohd. Ali and E. Zahedi
Unrotating images in laparoscopy with an application for 30° laparoscopes.................................................................. 966 M. Moll, T. Koninckx, L.J. Van Gool and P.R. Koninckx
Teleimage: An Integrated Approach for Secure and Web-based Exchange of Medical Images and Reports ............. 970 T. Kurmann, D. Slamanig, C. Stingl and K. Roessl
The Degree of Privacy in Web-based Electronic Health Records .................................................................................... 974 D. Slamanig and C. Stingl
Electronic physiotherapy registry: towards structured physiotherapy records ............................................................. 978 R. Buyl, M. Nyssen
A Low Power Wireless Personal Area Network for Telemedicine................................................................................... 982 Cr. Rotariu, H. Costin, D. Arotaritei and G. Constantinescu
Towards an e-Learning and Telemedicine Network for Better Quality of Patient Care ............................................... 986 Vincenzo Lanza, M. Ignazia Cascio and Chun-Hsi Huang
Electronic Report Generation Web Service evaluated within a Telemedicine System................................................... 994 I. Martínez-Sarriegui, M.E. Hernando, F.J. Brito, G. García-Sáez, J. Molero, M. Rigla, E. Brugués, A. de Leiva, E.J. Gómez
Efficient database and web service design for confidential patient data in the TEMONICS project ........................... 998 H. Meier, I. Alich, H. Flick and B. Kotterba
TELEMON – A Complex System for Real Time Telemonitoring of Chronic Patients and Elderly People............... 1002 Hariton Costin, Vlad Cehan, Cristian Rotariu, Octavia Morancea, Victor Felea, Ioana Alexa, Gladiola Andruseac, Ciprian Costin
Mobile Devices for e-Services in Home Care ................................................................................................................... 1006 P. Aubrecht, L. Lhotska, J. Dolezel and J. Dolezal
Personalised Ambient Monitoring (PAM) of the mentally ill ......................................................................................... 1010 C.J. James, J. Crowe, E. Magill, S.C. Brailsford, J. Amor, P. Prociow, J. Blum and S. Mohiuddin
Method for the detection of life-threatening conditions in unconscious casualties....................................................... 1014 M. Jaeger, Y. Jin, T. Oezkan, R. Jaeger and A. Bolz
A Population Prospect for Future Health Care Models based on a System Dynamics Model .................................... 1018 J. Schröttner, E. König and N. Leitgeb
Kubios HRV – A Software for Advanced Heart Rate Variability Analysis .................................................................. 1022 M.P. Tarvainen, J.-P. Niskanen, J.A. Lipponen, P.O. Ranta-aho and P.A. Karjalainen
Implementation of an Open Telenephrology Platform to Support Home Monitoring................................................. 1026 F. Seoane, M.A. Valero, A. García-Perez and P. Gallar
An analysis of PLC noise level for risk management of medical use RFID system ...................................................... 1030 R. Hosaka
Intelligent Synchronized Magnifying Glasses for Assisting Reading of Temporal Mammograms............................. 1034 F. Temmermans, R. Deklerck, M. Suliga, C. Breucq, G. Behiels and P. Dewaele
A platform for physiological signals including an intelligent stethoscope ..................................................................... 1038 L. Rattfalt, C. Ahlstrom, M. Eneling, B. Ragnemalm, P. Hult, M. Lindén and P. Ask
Energy requirements of mobile phones and sensor technologies in mobile health applications.................................. 1042 J. Kreuzer, R. Diemer and T. Huber
Bioinstrumentation and Medical Devices
Approach to Quantitative Detection of CD146 with a Label-free Protein Biosensor Based on Imaging Ellipsometry.................................................................................................................................................... 1046 Yu Niu, Ying Zhang, Xiyun Yan, Gang Jin
A Compact Imaging Ellipsometer for Label-free Biosensor........................................................................................... 1050 Yidan Luo, Gang Jin
Towards Forehead Reflectance Photoplethysmography to Aid Delivery Room Resuscitation in Newborns............. 1053 M.R. Grubb, B.R. Hayes-Gill, J.A. Crowe, D. Sharkey, N. Marlow, and N.J. Miles
Low-cost miniaturized UV photosensor for direct measurement of DNA concentration within a closed tube container ........................................................................................................................................... 1057 O. Bulteel, P. Dupuis, S. Jeumont, L.M. Irenge, J. Ambroise, B. Macq, J.-L. Gala and D. Flandre
Comparative Assessment of Rotating Slat and Parallel Hole Collimator Performance in GE DST-Xli Gamma Camera: A Monte Carlo Study................................................................................................. 1062 N. Dehestani, S. Sarkar, M.R. Ay, M. Sadeghi and M. Shafaei
Essential Design Considerations for Wireless Multi-Channel Photoplethysmography System .................................. 1066 A.Y. Kadhim, M.A.M. Ali and E. Zahedi
Contact-less human vital sign monitoring with a 12 channel synchronous parallel processing magnetic impedance measurement system ....................................................................................................................................... 1070 F. Liebold, M. Hamsch and C.H. Igney
A CMT reconstruction algorithm for detection of objects buried in a half-space .......................................................... 1074 A. Janczulewicz, J. Wtorek and A. Bujnowski
The concept of transfer impedance in bioimpedance measurements............................................................................. 1078 Ø.G. Martinsen and S. Grimnes
Evaluation of an automated PEEP controller in mechanical ventilation support ........................................................ 1080 J. Arntz, D. Gottlieb, S. Lozano, J. Guttmann, K. Möller
Fast Electrical Impedance Spectroscopy for Moving Tissue Characterization Using Bilateral Quasi-Logarithmic Multisine Bursts Signals .......................................................................................... 1084 B. Sanchez and R. Bragos
Assessment of Breathing Parameters during Running with a Wearable Bioimpedance Device .................................. 1088 V.-P. Seppä, J. Väisänen, O. Lahtinen and J. Hyttinen
QCM Sensor Frequency Responses and MAC Values Comparison of Different Anesthetics used in Inhalation Anesthesia ............................................................................................................................................ 1092 H.M. Saraolu and B. Krankabe
Online-Classification of Capnographic Curves Using Artificial Neural Networks ...................................................... 1096 Marcus Bleil, Alexander Opp, Roland Linder, Soehnke Boye, Hartmut Gehring, Ulrich G. Hofmann
Evaluation of Conventional and Non-Conventional Pulse Oximeter............................................................................. 1100 M.A. Haleem, M.Z. Haque, F. Azhar and M.A. Muqeet
Volumetric Registration Method in Lung Tumour Discrimination............................................................................... 1104 João Cancela, José Silvestre Silva and Luísa Teixeira
Towards an ARM based low cost and mobile biomedical device test bed for improved multi-channel pulmonary diagnosis........................................................................................................................................................... 1108 Z. Çatmaka, .H. Köse, O. Toker, H.R. Öz
An evaluation of end-tidal CO2 change following alterations in ventilation frequency................................................ 1113 M.C. Jensen, S. Lozano, D. Gottlieb, J. Guttmann, K. Möller
Development of automatic respiration monitoring for home-care patients of respiratory diseases with therapeutic aids .......................................................................................................................................................... 1117 M. Okubo, Y. Imai, T. Ishikawa, T. Hayasaka, S. Ueno and T. Yamaguchi
Computer-Assisted Decision Support in the Sleep Apnea-Hypopnea Syndrome ......................................................... 1121 D. Álvarez-Estévez and V. Moret-Bonillo
A new calibration method with support vector machines for pulse oximetry............................................................... 1125 M. Ogawa, Y. Yamakoshi, M. Nogawa, T. Yamakoshi, K. Motoi, S. Tanaka and K. Yamakoshi
Hardware-in-the-Loop Testing for closed-loop Brain Stimulators ................................................................................ 1128 S.M. Vogt, M. Klostermann, A. Kundu, S. Andruschenko, and U.G. Hofmann
A Virtual Instrument for Bio-impedance Measurement in Oral Cavity ....................................................................... 1133 M. Kaštelan, S. Vlahini and I. Richter
Magnetic Marker Monitoring Using a Permanent Magnetic Sphere Oriented by a Rotating Magnetic Field.......... 1137 W. Andrä, M.E. Bellemann, M. Brand, J. Haueisen, H. Lausch, P. Saupe and C. Werner
An Improved Local Pressurization-Cuff Technique for Non-invasive Digital Arterial Pressure by the Volume-Compensation Method: Its Performance and Evaluation of Accuracy ............................................... 1141 A. Ikarashi, M. Nogawa, T. Yamakoshi, S. Tanaka and K. Yamakoshi
A Novel Electrophysiological Measurement System to Study Rapidly Paced Animal Hearts..................................... 1145 R. Arnold, T. Wiener, T. Thurner and E. Hofer
Measurements of pressure distribution by the tongue of infants on an artificial nipple.............................................. 1149 T. Niikawa, R. Kawachi, K. Minato and Y. Takada
A novel multielectrode for epicardial recording with temperature control based in Peltier cells............................... 1153 Guill A., Roses E., Ibáñez-Català X., Tormos A., Guillem M.S., Climent A.M., Chorro F.J., Trapero I., Pelechano F., Such-Miquel L., Millet J.
Arterial Blood Flow Sensor................................................................................................................................................ 1158 D. Zikich and D. Zikic
In-vivo measurements of heart ischemia using transoesophageal electrical impedance .............................................. 1163 Javier Rosell-Ferrer, Giuseppe Giovinazzo, Carol Galvez, Juan Ramos, Silvia Raga, Manel Sabate, Joan Cinca
Photoplethysmographic Augmentation Index as a Non Invasive Indicator for Vascular Assessments ...................... 1167 R. Gonzalez, A. Manzo, J. Delgado, J.M. Padilla, B. Trenor, J.M. Ferrero (Jr), J. Saiz
Thermoelectrical Stimulator for Patients' Quantitative Sensory Testing ...................................................................... 1171 J. Hozman, J. Hykel, J. Charfreitag and R. Cerny
Transoesophageal Electronic Bioimpedance device for the study of post-transplant heart rejection ........................ 1176 G. Giovinazzo, J. Ramos, J. Rosell
Bio-compatible Insulated Substrate Impedance Transducers........................................................................................ 1180 R.S. Pampin, L. Moreno-Hagelsieb, D. Flandre
Electrophysiological Systems
Macroconduction and microconduction during rapid pacing measured with cardiac near field technique.............. 1184 E. Hofer, T. Wiener, R. Arnold, F. Campos, A.J. Prassl, D. Sanchez-Quintana, V. Climent and G. Plank
An Efficient Piecewise Modeling of ECG Signals Based on Critical Samples Using Hermitian Basis Functions...... 1188 M. Abdoli, A. Ahmadian, S. Karimifard, H. Sadoughi and F. Yousefi Rizi
Can orthogonal leads be derived from the standard electrocardiogram during atrial fibrillation? ........................... 1192 M.S. Guillem, A.M. Climent, A. Bollmann, D. Husser, J. Millet, F. Castells
Abdominal Signal Processing: fetal ECG extraction by combining ESC and ICA methods ....................................... 1196 D. ar lung , W. Wolf, R. Strungaru and M. Ungureanu
Use of Activation Time Based Kalman Filtering in Inverse Problem of Electrocardiography.................................... 1200 Umit Aydin, Yesim Serinagaoglu
Comparison of Different Structures of Silver Yarn Electrodes for Mobile Monitoring .............................................. 1204 Alper Cömert, Markku Honkala, Baran Aydogan, Antti Vehkaoja, Jarmo Verho, Jari Hyttinen
FPGA based two-channel ECG sensor node for wearable applications ........................................................................ 1208 J. Mihel, R. Magjarevic
Emission Modelling for Supervised ECG Segmentation using Finite Differences........................................................ 1212 B. Frénay, G. de Lannoy and M. Verleysen
Towards a capacitively coupled electrocardiography system for car seat integration ................................................. 1217 B.K. Chamadiya, S. Heuer, U.G. Hofmann and M. Wagner
A mobile ECG monitoring system with context collection.............................................................................................. 1222 Li. J.P., Berry D. and Hayes R.
Identification of Signal Components in Multi-Channel EEG Signals via Closed-Form PARAFAC Analysis and Appropriate Preprocessing........................................................................ 1226 Dunja Jannek, Florian Roemer, Martin Weis, Martin Haardt, and Peter Husar
Analysis of Epileptic EEG Signals by Means of Empirical Mode Decomposition and Time-Varying Two-Sided Autoregressive modelling ............................................................................................... 1231 A. Kacha, G. Hocepied and F. Grenez
Neuroscience and Nonlinear Dynamics ............................................................................................................................. 1236 W. Klonowski
On Modelling User’s EEG Response During a Human-Computer Interaction: A Mirror Neuron System-Based Approach .................................................................................................................................................... 1241 P.C. Petrantonakis and L.J. Hadjileontiadis
Microneedle array electrode for human EEG recording ................................................................................................ 1246 R. Luttge, S.N. Bystrova and M.J.A.M. van Putten
Generating Brain-maps with ICA Source Estimates from Hybrid Optimizer Using Spectral Screened EEG Data .................................................................................................................................. 1250 S. Thomas George, S. Easter Selvan and Manoj Kumar Das
An Approximated Solution to the Inverse Problem of EEG........................................................................................... 1255 X. Ibáñez-Català and M.I. Troparevsky
The Removal Of Ocular Artifacts From EEG Signals: A Comparison of Performances For Different Methods..... 1259 M.A. Klados, C. Papadelis, C.D. Lithari and P.D. Bamidis
Characteristic features of the EEG patterns during anaesthesia evoked by fluorinated inhalation anaesthetics ..... 1264 E. Olejarczyk, A. Sobieszek, R. Rudner, R. Marciniak, M. Wartak, M. Stasiowski, P. Jalowiecki
Effects of propofol anesthesia on nonlinear properties of EEG: Time-lag and embedding dimension ...................... 1268 J. Roca-González, M. Vallverdú-Ferrer, P. Caminal-Magrans, F. Martínez-González, J. Roca-Dorda and J.A. Álvarez-Gómez
Effects of propofol anesthesia on nonlinear properties of EEG: Lyapunov exponents and short–term predictability............................................................................................................................................ 1272 J. Roca-González, M. Vallverdú-Ferrer, P. Caminal-Magrans, F. Martínez-González, J. Roca-Dorda and J.A. Álvarez-Gómez
Imaging Magnetoencephalographic Independent Component Scalp Distributions using a Subspace Correlation Approach .......................................................................................................................... 1276 C.W. Hesse
Performance of ICA for MEG data generated from subspaces with dependent sources............................................. 1281 F. Kohl, G. Wübbeler, D. Kolossa, R. Orglmeister, C. Elster, M. Bär
A robust independent component analysis algorithm for removing ballistocardiogram artifacts from EEG and fMRI recordings ....................................................................................................................................... 1286 T. Franchin, A.M. Bianchi, V. Cannatà, E. Genovese, F. Nocchi, S. Cerutti
Monitoring of Musical ‘Motion’ in EEG Using Bispectral Analysis: A Mirror Neurons-based Approach ............... 1290 S.K. Hadjidimitriou, A.I. Zacharakis, P.C. Doulgeris, K.J. Panoulas, L.J. Hadjileontiadis, and S.M. Panas
The somatosensory evoked response detection using coherence and different stimulation frequencies..................... 1294 D.B. Melges, A.F.C. Infantosi and A.M.F.L. Miranda de Sá
Reduction of alpha distortion in event related potentials ............................................................................................... 1298 K. Vanderperren, B. Hunyadi, M. De Vos, M. Mennes, H. Wouters, B. Vanrumste, P. Stiers, L. Lagae and S. Van Huffel
Measurement of high frequency EEG signals with very low noise amplifiers: Brain oscillations above 600 Hz measured non-invasively .................................................................................................................................................... 1302 H.J. Scheer, G. Curio and M. Burghoff
Comparison of spontaneous and event related measures in the electroencephalogram............................................... 1305 M.A. Schier
Detection of Evoked Responses in EEG using Computational Intelligence Tools ........................................................ 1309 A.P. Souza, A.M.F.L. Miranda de Sá, E.M.A.M. Mendes and L.B. Felix
Influence of Branching and Tapering on Intracoronary Pressure and Flow Velocity: Comparison of Axial Measurement Location .................................................................................................................. 1313 F. Nolte, M. Remmelink, J.P. van den Wijngaard, S.J. Zweers, J. Baan, J.J. Piek, M. Siebes
An active intravascular MR-probe using a miniature optical modulator ..................................................................... 1317 S. Fandrey, S. Weiss and J. Müller
Circle Maps Analysis implemented on an intelligent, miniaturized and wireless communicating sensor enabling online cardiac arrhythmia classification ........................................................................................................... 1321 M. Schiek, M. Schlösser, A. Schnitzer, H. Ying
An m-health system for continuous monitoring of children with suspected cardiac arrhythmias.............................. 1325 E. Kyriacou, C. Pattichis, M. Pattichis, A. Jossif, D. Vogiatzis, L. Paraskeva, A. Konstantinides, A. Kounoudes
IR-thermal monitoring of cardiosurgery interventions .................................................................................................. 1329 A. Nowakowski, M. Kaczmarek, W. Stojek, S. Beta, B. Trzeciak, J. Topolewicz, J. Rogowski, J. Siebert
Monitoring Fluid Shifts During Haemodialysis Using Local Tissue Bioimpedance Measurement............................. 1334 Omar I. Al-Surkhi, P.J. Riu, F. Vazquez, J. Ibeas, MD
Effect of a Sauna Bath and Smoking on a BCG, Carotid and Ankle Pulse Signal in Sitting Position ........................ 1339 J. Alametsä, J. Viik, A. Palomäki
Identifying Patients Suffering From Atrial Fibrillation During Atrial Fibrillation and Non-Atrial Fibrillation Episodes................................................................................................................................ 1349 N. Kikillus, M. Schweikert and A. Bolz
The Fetal Heart Rate Variability due to vibro-acoustic stimulation: a complexity analysis......................................... 1353 M. Ferrario, M.G. Signorini and G. Magenes
Normalization of the standard deviation and spectral indices of RR time series for comparison among situations with different mean heart rate............................................................................................................. 1357 M.A. García-González, M. Fernández-Chimeno, L. Capdevila, M. Ocaña, G. Rodas and J. Ramos-Castro
Is the Ventricular Response during Atrial Fibrillation Certainly Random? ................................................................ 1362 A.M. Climent, M.S. Guillem, D. Husser, F. Castells, J. Millet and A. Bollmann
Influence of Mental Stress on Heart Rate and Heart Rate Variability.......................................................................... 1366 J. Taelman, S. Vandeput, A. Spaepen and S. Van Huffel
Anaesthesia with Propofol Reduces Ventricular Rhythm Variability during Atrial Fibrillation ............................... 1370 R. Cervigón, F. Castells, C. Sánchez, A. Climent and J. Millet
An algorithm for FHR extraction from FHS signals ....................................................................................................... 1374 M. Cesarelli, M. Ruffo, M. Romano, P. Bifulco, F. Kovacs, S. Iaccarino
Fetal ECG Extraction Using Multi-Layer Perceptron Neural Networks with Bayesian Approach ........................... 1378 S. Mojtaba Golzan, Farzaneh Hakimpour, Mohammad Mikaili and Alireza Toolou
Analysis of fetal movement based on magnetocardiographically determined fetal actograms and fetal heart rate accelerations ...................................................................................................................................... 1386 P. Van Leeuwen, D. Geue, S. Lange and D. Groenemeyer
Identification of fetal auditory evoked cortical responses using a denoising method based on periodic component analysis............................................................................................................................................................. 1390 L. Moraru, R. Sameni, U. Schneider, C. Jutten, J. Haueisen, D. Hoyer
Development of neckband mounted active bio-electrodes for non-restraint lead method of ECG R wave................ 1394 A. Mizuno, H. Okumura and M. Matsumura
Simultaneous registration of uterine contractions and fetal heart rate using magnetomyography and magnetocardiography ................................................................................................................................................. 1398 P. Van Leeuwen, W. Hatzmann, S. Schiermeier and D. Groenemeyer
Comparison of reference subtraction methods for DC-MEG applications ................................................................... 1402 W. Müller, T.H. Sander and M. Burghoff
Reproducibility of Evaluation on Cardiac Autonomic Nervous System Activity through Tone-Entropy Analysis in Young Subjects ........................................................................................................ 1406 H. Nakamura and M. Yoshida
On-line optimization of drug delivery: An adaptive extremum seeking approach....................................................... 1410 D. Dochain, N. Hudon, M. Guay, M. Perrier
Multi-level Mathematical Model of Rheological Study of Pathogenic Microorganisms Suspended in Water........... 1414 E.Yu. Taran and V.A. Gryaznova
Numerical simulation of radial oscillations of individual and multiple microbubbles with elastic layers.................. 1418 H.M. Overhoff, A. Poelstra, S. Euting, T. Gehrke
Monitoring of Insonicated Microbubble Behavior and their Effect on Sonoporation Supported Chemotherapy of Fibrosarcoma Cells ........................................................................................................................................................ 1422 K. Hensel, M. Siepmann, K. Haendschke, S. Emmelmann, A. Daigeler, J. Hauser, G. Schmitz
Efficient Transmembrane Segment Prediction in Transmembrane Proteins Using Wavelet-Based Energy Criteria ................................................................................................................................................................... 1426 I.K. Kitsas, L.J. Hadjileontiadis and S.M. Panas
Towards User-friendly Interfacing of Biomedical Applications with the Grid: A Paradigm with SVM Optimization for Gene Prediction................................................................................................................... 1430 K.I. Vegoudakis, V. Koutkias, A. Malousi, I. Chouvarda and N. Maglaveras
Pico-Injector for the Discrete Chemical Stimulation of Individual Cells with a High Temporal and Spatial Resolution ................................................................................................................. 1434 J. Steigert, N. Wangler, O. Brett, M. Straßer, M. Laufer, M. Daub, and R. Zengerle
A Novel Approach in Melanoma Identification ............................................................................................................... 1438 A.E. Oprea, R. Strungaru, A.M. Forsea and G.M. Ungureanu
3D cephalometry: a new approach for landmark identification and image orientation .............................................. 1442 S. Van Cauter, W. Okkerse, G. Brijs, M. De Beule, M. Braem and B. Verhegghe
Methods for determining the blood flow velocity in cerebral vessels using intraoperative Indocyanine Green fluorescence video angiography......................................................................................................................................... 1446 P. Cimalla, D. Graf, P. Duscha, T. Meyer, J. Kuß, R. Steinmeier, E. Koch and U. Morgenstern
Detection of Epileptic Seizures Through Audio Classification ....................................................................................... 1450 G.R. de Bruijne, P.C.W. Sommen and R.M. Aarts
Automated Diagnosis of Early Alzheimer’s disease using Fuzzy Neural Network ....................................................... 1455 S. Mahesh Anand, M. Mukunda Rao, N. Shyam Prabhu, Samraj D. Simeon, D. Karthikeyan, and Snigdha Rashmi
Dual camera based eye gaze tracking system................................................................................................................... 1459 T. Kocejko, A. Bujnowski and J. Wtorek
Modeling of tooth’s structure based on CT and μCT data – comparative study.......................................................... 1463 S. Piszczatowski, J. Baginska, W. Swieszkowski
Measurement of the short-term viscoelastic properties of the periodontal ligament using stress relaxation............. 1467 R. Tohill, M. Hien, N. McGuinness, L. Chung and R.L. Reuben
Study of Electromyographic Signals During Chewing Process In Patients with Fixed Partial Denture .................... 1471 S. Kara, M. Tokmakçı, Y. Şişman, E.T. Ertaş, E. İmal, M.A. Özçoban
Variation of Power Spectral Density and Energy in Electromyogram of Jaw-Closing Muscles in Children with Class II Malocclusion................................................................................................................................................. 1475 S. Kara, Ş. Okkesim, F. Latifoğlu, T. Uysal, A. Baysal
A high-resolution Schottky CdTe detector based spectrometric determination method of the kilovoltage applied to dental X-ray tubes............................................................................................................................................. 1479 B. Spyropoulos, G. Manousaridis, A. Papathymiou
Analytical approach to determine the rotational freedom of dental implant-abutment connections ........................ 1484 S. Kraft, W. Semper, K. Nelson and T. Krüger
Physiologically inspired coding strategies for cochlear implants ................................................................................... 1488 A. Bahmer, G. Langner and U. Baumann
Application of Otoplastics to Increase the Reproducibility of OAE-analyses ............................................................... 1492 T. Schmidt, A. Müller, Ch. Thron and H. Witte
Applicability of function-based analysis in retrospective data analysis of noise-induced hearing loss in the Finnish Defence Forces ............................................................................................................................................ 1496 M. Hannula, T. Holma, H. Kiukaanniemi, P. Kuronen and M. Sorri
3D Tele-Medical Speech Therapy using Time-of-Flight Technology............................................................................. 1500 M. Stürmer, A. Maier, J. Penne, S. Soutschek, C. Schaller, R. Handschu, M. Scibor, E. Nöth
Augmented control of hands free voice prostheses .......................................................................................................... 1504 Brian Madden, James Condron, Ted Burke, Eugene Coyle
EMD-KURTOSIS: A New Classification Domain for Automated Greek Sign Language Gesture Recognition ....... 1508 V.E. Kosmidou and L.J. Hadjileontiadis
Clinical Engineering
Differentiation between Brain Metastasis and Glioblastoma using MRI and two-dimensional Turbo Spectroscopic Imaging data ............................................................................................................................................... 1513 T. Laudadio, J. Luts, M. Carmen Martínez-Bisbal, Bernardo Celda and Sabine Van Huffel
A high performance bidirectional micropump utilizing advanced low voltage piezo multilayer actuator technology for a novel artificial sphincter system............................................................................................................ 1517 T. Lemke, G. Biancuzzi, C. Farhat, B. Vodermayer, O. Ruthmann, T. Schmid, H.-J. Schrag, P. Woias and F. Goldschmidtboeing
The method of assessment of chosen hemodynamic and electrophysiologic parameters in the healthy human subject circulation............................................................................................................................................................... 1521 K. Peczalski, D. Wojciechowski, P. Sionek, Z. Dunajski, T. Palko
Tissue Recognition for Pressure Ulcer Evaluation .......................................................................................................... 1524 H. Mesa, L. Morente and F. Veredas
A System Ergonomic Analysis Approach for Potential Critical Incidents in Medical Treatment Processes ............. 1528 Daniela Fuchs, Beate Eilermann, Ingo Marsolek, Wolfgang Friesdorf, Dirk Pappert
Determination of the Mechanical Leg Axis Using a Force-Torque Sensor ................................................................... 1532 R. Elfring, F. Schmidt, M. de la Fuente, W. Teske and K. Radermacher
Automatic Discrimination of Duodenum in Wireless Capsule Video Endoscopy.......................................................... 1536 L. Igual, J. Vitrià, F. Vilariño, S. Seguí, C. Malagelada, F. Azpiroz and P. Radeva
Ontology-based Computer-Aided Decision System: a new architecture and application concerning the musculoskeletal system of the lower limbs ................................................................................................................. 1540 T.T. Dao, F. Marin and M.C. Ho Ba Tho
Multi-tactile sensor concept for the autonomous navigation in human blood vessels ................................................. 1544 A. Keißner, C. Brücker, P. Jacobs, A. Kashefi
Studies on Viscosity, pH and Temperature of High Concentration Barium Sulfate in Mass Screening for Gastric Cancer – Particle size distribution –.............................................................................................................. 1548 K. Yamamoto, Y. Takeda, C. Kuroda, T. Kubo, T. Gotanda, A. Tabuchi, H. Yatake, T. Kuwano, T. Katsuda, H. Yamazaki and M. Azuma
Blood pressure response to LBNP load in various types of examinations ..................................................................... 1552 J. Hanousek, P. Dosel, J. Petricek and L. Cettl
Study on the measurement of ejection fraction (EF) using left ventriculogram ........................................................... 1556 Tadao Kuwano, Toshizo Katsuda, Kenyu Yamamoto, Tatsuhiro Gotanda, Takashi Horinouchi, Shodayu Takashima, Masami Azuma and Yoshihiro Takeda
Exposure Dose in Gastric Cancer Mass Screening using High Concentration Barium Sulfate – Comparison with Moderate Concentration Barium Sulfate –..................................................................................... 1561 K. Yamamoto, M. Azuma, T. Katsuda, T. Kubo, M. Takeshita, K. Yabunaka, R. Gotanda, K. Hayashida, C. Kuroda and Y. Takeda
Simulation and Experimental Study of an Ellipsoidal Cavity Reflector as part of a Focused Passive Brain Imaging System ........................................................................................................................................................ 1565 K.T. Karathanasis, I.A. Gouzouasis, I.S. Karanasiou and N.K. Uzunoglu
The Impact of Model-based Therapeutics on Glucose Control in an Intensive Care Unit .......................................... 1570 Christopher E. Hann, J. Geoffrey Chase, Thomas Desaive, Michael F. Ypma, Jos Elfring and Geoffrey M. Shaw
A comparison of gesture and speech control in intraoperative-near environment....................................................... 1574 A. Rose, D. Eichel and T. Krueger
Incident Investigation in the Healthcare System: a Comparative Analysis Derived from the Chemical Industries ............................................................................................................................................ 1577 P.P. Morita, S.J. Calil
Health Technology Management: Medical Equipment Classification........................................................................... 1581 N.F. Oshiyama, A.C. Silveira and J.W.M. Bassani
Quality Assurance and Control of Clinical Engineering Activities................................................................................ 1585 W.S. Trarawneh, A. Ghawanmeh, I. Malkawi and M. Ghannam
Active path selection of fluid microcapsules by acoustic radiation force in the artificial blood vessel ....................... 1589 Y. Muramatsu, S. Ueda, R. Nakamoto, Y. Nakayashiki, K. Masuda, K. Ishihara
Fatigue Testing of Polyimide-Based Micro Implants ...................................................................................................... 1594 S. Kisban, D. Moser, B. Rubehn, T. Stieglitz, O. Paul and P. Ruther
T2 weighted liver Magnetic Resonance imaging using functional residual capacity breath-hold with multi breath-hold ....................................................................................................................................................... 1598 A. Tabuchi, T. Katsuda, R. Gotanda, T. Gotanda, K. Yamamoto, M. Mitani, Y. Takeda
The usefulness of film reading to detect cancer by untrained radiographer in X-ray examination of the stomach ..................................................................................................................................................................... 1603 H. Yatake, T. Katsuda, C. Kuroda, H. Yamazaki, T. Kubo, R. Gotanda, K. Yabunaka, K. Yamamoto, Y. Sawai and Y. Takeda
A novel method for automatic evaluation of the effective dynamic range of medical ultrasound scanners............... 1607 A. Scorza
On the development of a powered prosthesis for transtibial amputees ......................................................................... 1612 R. Versluys, A. Desomer, G. Lenaerts, R. Van Ham, I. Vanderniepen, L. Peeraer, and D. Lefeber
Patient-Driven Cooperative Gait Training with the Rehabilitation Robot Lokomat................................................... 1616 A. Duschau-Wicke, J. v. Zitzewitz, L. Lünenburger and R. Riener
Improving Speech Understanding in Noise for Users of Bone Anchored Hearing Aids (BAHA) ............................... 1620 F. Pfiffner, C. Stieger and M. Kompis
AERBUS: Enhanced perception of the environment for visually impaired people...................................................... 1624 K. Möller, V. Balazs, F. Toth, S. Schumann, K.O. Arras, M. Bach, J. Guttmann
Selective stimulation of the vagus nerve in a man............................................................................................................ 1628 P. Pečlin, I. Kneževič, T. Mirkovič, B. Geršak, I. Radan, M. Podbregar and J. Rozman
Improved Wearable Monitoring System for Posture Changes and Walking Speed and its Application to Supporting Physical Therapist in Rehabilitation ........................................................................................................ 1632 K. Motoi, Y. Kuwae, M. Wakugawa, Y. Toyonaga, T. Yuji, Y. Higashi, T. Fujimoto, S. Tanaka and K. Yamakoshi
Arterial Elasticity Measurements with Ankle Pulse Width Velocity and Ballistocardiography ................................. 1636 J. Alametsä, J. Viik, A. Palomäki
A bioimpedance measurement device for sensing force and position in neuroprosthetic systems .............................. 1642 H. Nahrstaedt and T. Schauer
Unraveling of an original mechanism of hypometria in human using a new myohaptic device – The Wristalyzer .................................................................................................................................................................. 1646 M. Manto, G. Grimaldi, P. Jissendi, N. Van Den Braber, J. Meuleman and P. Lammertse
The “Highly Versatile Single Port System” for laparoscopic surgery: Introduction and first clinical application........................................................................................................................ 1650 S. Can, H. Mayer, A. Fiolka, A. Schneider, D. Wilhelm, H. Feussner, A. Knoll
A Locomotive System Mimicking Pedal Locomotion of Snails for the Capsule Endoscope ........................................ 1655 Daisuke Hosokawa, Takuji Ishikawa, Hirohisa Morikawa, Yohsuke Imai and Takami Yamaguchi
A Novel Laparoscopic Instrument with Multiple Degrees of Freedom and Intuitive Control .................................... 1660 H.F. Schlaak, A. Röse, C. Wohlleber, S. Kassner, R. Werthschützky
A Modified Zwicky's Morphological Analysis: Application to the design of a robotic laparoscope ........................... 1664 G. Villegas Medina, M.T. Pham and W. Marquis-Favre
Position Control of Piezoelectric Motors for a Dexterous Laparoscopic Instrument................................................... 1668 C. Wohlleber, H.F. Schlaak
A system to provide different view fields to both eyes of human respectively............................................................... 1672 F. Mizuno, K. Sawaguchi, T. Haga, T. Hayasaka and T. Yamaguchi
Knowledge-based OR table positioning assistant for orthopedic surgery ..................................................................... 1676 W. Lauer, B. Ibach and K. Radermacher
Bone mounted hexapod robot for outpatient distraction osteogenesis........................................................................... 1679 R. Wendlandt, F. Wackenhut, K. Seide and J. Müller
Synergistic CT based tele-manipulator for needle placement in spine procedures ...................................................... 1683 V.C.V.S. Cunha-Cruz, S. Serefoglou, P. Bruners, A.H. Mahnken and K. Radermacher
Design Results of an Upper Extremity Exoskeleton ........................................................................................................ 1687 S. Moubarak, M.T. Pham, T. Pajdla and T. Redarce
The development of endoscopic surgery for a training simulator.................................................................................. 1691 S. Yoneyama, H. Koyama, T. Komeda and S. Yamamoto
Haptic Aided Roboting for Heart Surgeons ..................................................................................................................... 1695 E.U. Braun, C. Gaertner, H. Mayer, A. Knoll, R. Lange, R. Bauernschmitt
Laparoscope Sizing Approach Based on the Virtual Exploration of the liver's Surface.............................................. 1697 G. Villegas Medina, M.T. Pham and W. Marquis-Favre
Probabilistic Forecasts of Epileptic Seizures and Evaluation by the Brier Score......................................................... 1701 M. Jachan, H. Feldwisch genannt Drentrup, F. Posdziech, A. Brandt, D.-M. Altenmüller, A. Schulze-Bonhage, J. Timmer, B. Schelter
A Long-Term Monitor Including Activity Classification for Motor Assessment in Parkinson’s Disease Patients.... 1706 D.G.M. de Klerk, J.P.P. van Vugt, J.A.G. Geelen and T. Heida
iNODE: intelligent Network Operating Device for Neurological and Neurophysiological Research ......................... 1710 Mario Schlösser, Andreas Schnitzer, Hong Ying, Carmen Silex, Michael Schiek
Distributed Intelligent Sensor Network for Neurological Rehabilitation Research ..................................................... 1714 H. Ying, M. Schlösser, A. Schnitzer, S. Leonhardt and M. Schiek
Online Laser Doppler Measurements of Myocardial Perfusion..................................................................................... 1718 C. Fors, H. Ahn and K. Wårdell
A Clinically Validated Patient Monitoring System.......................................................................................................... 1722 A. Ridolfi, O. Chetelat, J. Krauss, J. Sola, O. Grossenbacher and S.M. Jakob
Process Mapping, a key milestone in Engineering for Health ........................................................................................ 1726 Eng. Francesco Amorosi, Ph.D.
Allocation of Medical Equipment Costs to Medical Procedures .................................................................................... 1730 L.N. Nascimento, S.J. Calil
Supporting Clinical Information Management by NFC Technology............................................................................. 1734 J. Bravo, G. Casero, M. Vergara, C. Fuentes, R. Peña, R. Hervás & V. Villarreal
Towards Noninvasive Monitoring of Total Hemoglobin Concentration and Fractional Oxygen Saturation Based on Earlobe Pulse Oximetry..................................................................................................................................... 1738 O. Abdallah, K. Abo Alam, A. Bolz
Visual Transformation of the EEG in the Intensive Care............................................................................................... 1743 Michel J.A.M. van Putten
Development of a mobile toilet system servicing elderly on call..................................................................................... 1747 Ueno S., Imai Y., Hayasaka T., Okubo M., Ishikawa T. and Yamaguchi T.
Optimal Electrode Configurations for Impedance Pneumography during Sports Activities...................................... 1750 O. Lahtinen, V.-P. Seppä, J. Väisänen and J. Hyttinen
Improving model-based cardiac diagnosis with an ECG ................................................................................................ 1754 C.E. Hann, J.G. Chase, C.F. Froissart, C. Starfinger, T. Desaive, K. Kok, J. Revie, A. Ghuysen, B. Lambermont, P. Kolh and G.M. Shaw
Electronic Monitoring of Head Position after Vitrectomy.............................................................................................. 1758 M. Cizek, J. Dlouhy, I. Vicha and J. Rozman
Investigation of Heart Rate Variability after Cardiopulmonary Resuscitation and Subsequent Hypothermia ........ 1762 J. Hopfe, R. Pfeifer, C. Ehrhardt, M. Goernig, H.R. Figulla, A. Voss
Development of Non-restrained Sleep-Monitoring Method by Using Difference Image Processing .......................... 1765 Okada S., Ohno Y., Kenmizaki K., Tsutsui A. and Wang Y.
Hard- and software-configurable system for preoperative planning and intraoperative navigation of minimally invasive interventions................................................................................................................................... 1769 U. von Jan, D. Sandkühler, S. Maas and H.M. Overhoff
Laser Treatment of Radiation Injuries of Skin and Underskin Tissues ........................................................................ 1773 V. Ovsyannikov, G. Zharinov, S. Gosteva, G. Zaikin, A. Bushmanov, N. Nadyozhina
Musculo-skeletal, Joint & Bone Biomechanics
A Validated Skeleton-based Finite Element Mesh for Parametric Analysis of Trabecular Bone Competence ......... 1777 J. Vanderoost, S.V.N. Jaecques, G. Van der Perre, S. Boonen, J. D’hooge, W. Lauriks and G.H. van Lenthe
A Mechanical Instrument to Evaluate Posture of the Spinal Column in Pregnant Women........................................ 1781 Mario Forjaz Secca, Cláudia Quaresma, Filipe Santos
Hardware-in-the-Loop-Simulator for Testing of Total Hip Endoprostheses................................................................ 1785 M. Kähler, R. Souffrant, S. Dryba, D. Kluess, R. Bader and C. Woernle
Experimental and numerical analysis of patello-femoral contact mechanics in TKA.................................................. 1789 B. Innocenti, M. Follador, M. Salerno, C. Bignardi, P. Wong and L. Labey
Microstructural quality of vertebral trabecular bone can be assessed from ultrasonic wave propagation ............... 1794 L. Goossens, J. Vanderoost, S.V.N. Jaecques, S. Boonen, J. D’hooge, G.H. Van Lenthe, W. Lauriks and G. Van der Perre
Contact pressure distribution in postmortem human knee during dynamic flexion-extension movement ................ 1798 J. Quintelier, P. De Baets and F. Almqvist
Predictive Mathematical Models based on Data Mining Methods of the Pathologies of the Lower Limbs ............... 1803 T.T. Dao, F. Marin and M.C. Ho Ba Tho
Periprosthetic Fields and Currents of an Electrostimulative Acetabular Revision System......................................... 1808 C. Potratz, H.-W. Glock, R. Souffrant, R. Bader, H. Ewald, U. van Rienen
Modeling of the children’s hip joint in diagnostics of bone deformation in cerebral palsy ......................................... 1812 S. Piszczatowski, M. Okonski
Signal processing concepts for optimal myoelectric sensor placement in a modular hybrid FES orthosis ................ 1816 O. Schill, R. Rupp and M. Reischl
Invention of slight lower limbs’ stretching and walking orthosis................................................................................... 1820 K.L. Yamamoto, T. Matsuda, E. Genda, Y. Suzuki, K. Kinoshita, Y. Iwamura, M. Izumino
A proof-of-concept exoskeleton for robot-assisted rehabilitation of gait....................................................................... 1825 P. Beyl, P. Cherelle, K. Knaepen and D. Lefeber
Physiologic Approach for Control of Hand Prostheses................................................................................................... 1830 K.H. Somerlik, T.B. Krueger, J. Carpaneto, T. Stieglitz and S. Micera
Accelerometer based measurement of body movement for communication, play, and creative expression .............. 1835 M. Nolan, E. Burke and F. Duignan
Wheelchair Direction Control by Acquiring Vocal Cords Vibration with Multivariate Analysis .............................. 1839 Chia-Hua Hsu, Huai-Yuan Hsu, Hsin-Yi Wang, Tzu-Chien Hsiao
Evaluation of Trajectory Applied to Collaborative Rehabilitation For a Wheelchair Driving Simulator ................. 1843 I. Randria, A. Abellard, M. Ben Khelifa, P. Abellard, P. Ramanantsizehena
A New Concept of an Electrostimulative Acetabular Revision System with Patient Individual Additional Fixation............................................................................................................................................................. 1847 D. Kluess, R. Souffrant, R. Bader, U. van Rienen, H. Ewald, W. Mittelmeier
Effect of splinting in a fixed partial denture on bone remodeling using FEM .............................................................. 1851 M.T. El-Wakad
Finite element study of load transfer in a splinted fixed partial denture....................................................................... 1856 M.Z. Bendjaballah
Electric Stimulation and Pudendal Evoked Potential Recordings for Management of Stress Incontinence in Women ............................................................................................................................................................................ 1862 C. Koutsojannis
An in-vitro study of human knee kinematics: natural vs. replaced joint....................................................................... 1867 B. Innocenti, L. Labey, J. Victor, P. Wong and J. Bellemans
An EMG control system of ultrasonic motors using PSoC microcomputer .................................................................. 1871 Yorihiko Yano and Kenta Mukai
Cardiac Mechanics
Visualization and modeling of flow in the embryonic heart ........................................................................................... 1875 F. Maes, B. Chaudhry, P. Segers, P. Van Ransbeeck, P. Verdonck
An Innovative Design of a Blood Pump Actuator Device using an Artificial Left Ventricular Muscle ...................... 1879 B. Van Der Smissen, T. Claessens, P. Verdonck, P. Van Ransbeeck, P. Segers
Artificial heart constructed as a kinetic calotte-pendulum-transmission ...................................................................... 1883 H.A. Vielberg, M.D.
Estimation of cardiac contractility during rotary blood pump support using an index derived from left ventricular pressure............................................................................................................................................ 1885 P. Naiyanetr, F. Moscato, M. Vollkron, D. Zimpfer, S. Sandner, G. Wieselthaler, H. Schima
Vascular and Biofluid Mechanics
Impact of imaging modality for analysis of a cerebral aneurysm: comparison between CT, MRI and 3DRA.......... 1889 J. Poethke, L. Goubergrits, U. Kertzscher, A. Spuler, Ch. Petz and H.-Ch. Hege
Computational modeling of cerebral aneurysm formation - framework for modeling the interaction between fluid dynamics, signal transduction pathways and arterial wall mechanics................................................... 1894 H. Schmid, P. Watton, M. McCormick, Y. Lanir, H. Ho, C. Lloyd, P. Hunter, A. Ehret and M. Itskov
Circumferential variations in passive and active mechanical properties of healthy and aneurysmal ascending aorta ....................................................................................................................................... 1899 D. Tremblay, R. Cartier, L. Leduc, J. Butany, R. Mongrain, R.L. Leask
Arterial Remodeling in Response to Increased Blood Flow Using a Constituent-Based Model .................................. 1903 A. Tsamis and N. Stergiopulos
Research of Ride Comfort for Tilting Train Simulator Using ECG .............................................................................. 1906 Youngbum Lee, Kwangsoo Shin, Yongsoo Song, Sungho Han and Myoungho Lee
Strain energy function for arterial walls based on limiting fiber extensibility.............................................................. 1910 L. Horny, R. Zitny and H. Chlup
Carotid plaque and its effect on ultrasound carotid distension measurements ............................................................ 1914 T. De Schryver, J. Kips, A. Swillens and P. Segers
Relation Between Left Ventricular Relaxation Rate and Arterial Loading .................................................................. 1918 T.E. Claessens, E.R. Rietzschel, M.L. De Buyzere, D. De Bacquer, G. De Backer, T.C. Gillebert, P.R. Verdonck and P. Segers
Impact of aortic valve stenosis on left coronary artery flow: An in vitro study............................................................ 1922 E. Gaillard, D. Garcia, L. Kadem, P. Pibarot and L.-G. Durand
Pulsatile Blood Flow Simulations in Aortic Arch: Effects of Blood Pressure and the Geometry of Arch on Wall Shear Stress........................................................................................................................................................... 1926 P. Vasava, P. Jalali and M. Dabagh
Linking an Artery to the Circulation: Introducing a Quasi-Simultaneous Coupling Approach for Partitioned Systems in Hemodynamics..................................................................................................... 1930 G. Rozema, N.M. Maurits and A.E.P. Veldman
Mechanical Properties of Arteries with Aging and its Noninvasive Estimation Method ............................................. 1935 F. Nogata, Y. Yokota, Y. Kawamura, W.R. Walsh, H. Morita and Y. Uno
Accuracy Close to the Wall of Immersed Boundary Methods........................................................................................ 1939 M. Pourquie
A modified Mass-Spring system for myocardial mechanics modeling........................................................................... 1943 O. Jarrousse, T. Fritz and O. Dössel
The Significance of Flow Unsteadiness on the Near-Wall Flow of a Stented Artery .................................................... 1947 Juan Mejia, Rosaire Mongrain, Richard Leask, Josep Cabau-Rodes and Olivier F. Bertrand
A mechanical study of patient-specific cerebral aneurysm models: a correlation between stress and geometrical index ........................................................................................................................................................ 1951 A. Valencia, P. Torrens, R. Rivera, M. Galvez and Eduardo Bravo
Simulation of wall shear stress-driven in-stent restenosis............................................................................................... 1955 G. De Santis, P. Mortier, M. De Beule, P. Segers, P. Verdonck, B. Verhegghe
Microfluidic Modeling of Circulating Leukocyte Deformation...................................................................................... 1959 S. Gabriele, A.-M. Benoliel, P. Bongrand and O. Theodoly
Image-based Blood Flow Simulation in the Retinal Circulation .................................................................................... 1963 D. Liu, N.B. Wood, X.Y. Xu, N. Witt, A.D. Hughes, S.A. Thom
Finite Element Modeling of LDL Transport in Carotid Artery Bifurcations ............................................................... 1967 A.I. Sakellarios, D.I. Fotiadis and L.K. Michalis
Micro-PIV as a research tool for in vivo studies of vascular remodeling ...................................................................... 1972 C. Poelma, B.P. Hierck and J. Westerweel
Numerical simulation and Experimental Validation in an Exact Aortic Arch Aneurysm Model............................... 1975 S. Seshadhri, G. Janiga, B. Preim, G. Rose, M. Skalej, D. Thévenin
The 3D Flow Analysis in Ruptured Cerebral Aneurysm ................................................................................................ 1980 M.L. Li, Y.C. Wang, H.D. Hsiao, L.C. Lee, K.C. Hung
Selected Topics in Biomechanics
Sensor placement with a telescoping compliant mechanism........................................................................................... 1987 S. Griebel, L. Zentner, V. Böhm and J. Haueisen
The Cement-Bone Interface: Is It Susceptible To Damage Adaptive Remodeling? ..................................................... 1990 A.B. Lennon and P.J. Prendergast
Wall shear stress in the mouse aortic arch: Does size matter? ...................................................................................... 1994 B. Trachet, A. Swillens, D. van Loo, C. Casteleyn, A. De Paepe, B. Loeys, P. Segers
The Role of Arterial Wall Deformation on the Shear Stress over the Cardiovascular Smooth Muscle Cells: Computations in Two-Dimensional Geometry................................................................................................................. 1999 M. Dabagh, P. Jalali
Adapting a Mass-Spring system to energy density function describing myocardial mechanics.................................. 2003 Thomas Fritz, Oussama Jarrousse and Olaf Dössel
Fabrication of thin and flexible PDMS membranes for biomechanical test applications ............................................ 2007 C. Armbruster, M. Schneider, K. Gamerdinger, S. Schumann, M. Cuevas, E. Koch and J. Guttmann
Dual-Camera Spherical Indentation System for Examining the Mechanical Characteristics of Hydrogels.............. 2011 M. Ahearne, K.K. Liu and Y. Yang
Morphological and Functional Flow-Induced Response of Endothelial Cells and Adhesive properties of Leukocytes in 3D Stenotic Models ................................................................................................................................ 2015 L. Rouleau, M. Farcas, I. Copland, J.-C. Tardif, R. Mongrain and R.L. Leask
Development of an experimental device for the application of static and dynamic tensile strain on cells.................. 2019 S. Reimann, B. Rath-Deschner, J. Deschner, L. Keilig, A. Jäger and C. Bourauel
Dynamic Videomicroscopy reveals correspondence between mechanical characteristics of lung tissue and local morphology on alveolar scale ............................................................................................................................ 2023 S. Schumann, K. Gamerdinger, C. Dassow, C. Armbruster, M. Schneider, S. Uhlig, J. Guttmann
Differences in form stability between human non-tumorous alveolar epithelial cells type 2 and alveolar carcinoma cells under biaxial stretching .................................................................................................... 2027 S.J. Schließmann, K. Höhne, A. Charra, A. Kirschbaum, B. Cucuruz, S. Schumann, G. Zissel and J. Guttmann
Content
XXXIX
Simulation of tissue differentiation in a mechanically loaded bone regeneration chamber......................................... 2031 H. Khayyeri, S. Checa, M. Tagil and P.J. Prendergast
Three-dimensional Imaging of subpleural Alveoli by Fourier Domain Optical Coherence Tomography ................. 2035 S. Meissner, L. Knels, M. Mertens, M. Wendel, A. Tabuchi, W.M. Kuebler, T. Koch, E. Koch
The role of ventilation frequency in airway reopening ................................................................................................... 2040 K. Bauer, Ch. Brücker, G. Simbruner and M. Rüdiger
Cardiogenic oscillations reflect the compliance of the respiratory system .................................................................... 2045 A. Wahl, L. Vimlati, K. Möller, S. Schumann, R. Kawati, J. Guttmann, M. Lichtwarck-Aschoff
On the separate determination of lung mechanics in in- and expiration....................................................................... 2049 K. Möller, Z. Zhao, C. Stahl, S. Schumann, and J. Guttmann
Novel artificial lung model for respiration measurement and demonstration .............................................................. 2053 K. Stiglbrunner, M. Wurm, M. Weingant, J. Mader, A. Drauschke and P. Kroesl
Measurement of tidal volumes in case of high frequency oscillation ventilation .......................................................... 2057 M. Wurm, A. Drauschke, J. Mader, K. Stiglbrunner, M. Weingant, J. Bawitsch and P. Krösl
Gait and Motion Analysis
Effect of Viscoelastic Constraints to Kinematic Parameters during Human Gait........................................................ 2061 T. Miyoshi, N. Sasagawa, S.-I. Yamamoto, T. Komeda and K. Nakazawa
Muscle electrical activity during force modulation exercise ........................................................................................... 2065 M. Mischi and M. Cardinale
Correspondence between Muscle Motion and EMG Activity during Whole Body Vibration..................................... 2069 Antonio Fratini, Paolo Bifulco, Mario Cesarelli, Giulio Pasquariello, Maria Romano, Antonio La Gatta
Study and implementation of a wireless accelerometer network for gait analysis ....................................................... 2073 J. Stamatakis, P. Gérard, P. Drochmans, T. Kezai, B. Caby, B. Macq and D. Flandre
Sensitivity of posturography to elimination of visual feedback ...................................................................................... 2077 T. Schnupp, M. Holzbrecher-Morys, D. Mandic and M. Golz
Impact analysis of shoes using the structural intensity technique.................................................................................. 2081 F. Cui, H.P. Lee, X. Zeng
Comparing the Biomechanics of Crouch Gait in Children with Cerebral Palsy to that of Age-Matched Controls and Young Healthy Adults..................................................................................................... 2085 Md.Z. Karim, R. Hainisch, A. Kranzl, M. Gfoehler and M.G. Pandy
Power spectral distribution analysis for detection of freezing of gait in patients with Parkinson’s disease............... 2089 H. Zabaleta, T. Keller and J.F. Martí Massó
Predictive control continues after collision in self-induced impulsive loading: preliminary study ............................. 2093 Y. Bleyenheuft, P. Lefèvre, J.L. Thonnard
Does slippage influence the EEG response to load force variations during object manipulation? ............................. 2097 T. André, J. Delbeke, P. Lefèvre and J.-L. Thonnard
Visuomotor velocity transformations for visually guided manual tracking .................................................................. 2101 G. Leclercq, G. Blohm and P. Lefèvre
Towards a High Performance Expert System for Gait Analysis.................................................................................... 2105 V. Medved, V. Ergovic and S. Tonkovic
Feed forward compensation of hyper gravity in vertical pointing movements............................................................. 2109 F. Crevecoeur, J.-L. Thonnard and P. Lefèvre
Changes of muscle fiber length in vivo during walking as revealed by ultrasound images ......................................... 2113 N. Sasagawa, T. Miyoshi, S.-I. Yamamoto
Effect of the pull-down force magnitude on the external work during running in weightlessness on a treadmill..... 2116 T.P. Gosseye, N.C. Heglund and P.A. Willems
Objective analysis of lower limb movements of infants for diagnostic purposes: calculation of the knee joint center ................................................................................................................................... 2120 D. Karch, K. Kim, K. Wochner, H. Philippi, J. Pietz and H. Dickhaus
Fall prevention by vibration stimuli to planta pedis........................................................................................................ 2124 M. Yoshida, D. Kan, M. Takeoka and S. Mouri
Different Contraction Pattern of Lower Leg Muscle Fiber between Swaying and Tiptoe Standing in Human Upright Posture ................................................................................................................................................ 2128 S. Yamamoto, C. Shimizu, H. Yamamoto, N. Sasagawa, T. Miyoshi, H. Koyama, T. Komeda
Influence of visual information on optimal obstacle crossing......................................................................................... 2133 S.T. Rodrigues, A. Forner-Cordero, V.D. Garcia, P.F.P. Zago and H. Ferasoli
Significance of the body-weight’s ratio of caregiver and receiver in the training process of care-giving motion...... 2138 Y. Koshino, Y. Ohno
Biological motion input to the oculomotor system........................................................................................................... 2142 S. Coppe, J.J. Orban de Xivry, M. Missal and P. Lefèvre
Compensation for smooth eye and head movements by gaze saccades during head-unrestrained tracking.............. 2146 P. Daye, G. Blohm and P. Lefèvre
Diagnostics of Human Body Stem Motor Functions by Systematic Provocation Method ........................................... 2150 H. Witte, E. Andrada, T. Kikova, M. Heinze and C. Ament
Cross education after power training ............................................................................................................................... 2153 A. Mastalerz, G. Lutosławska and C. Urbanik
Development of a Sit-to-Stand Assistance System ........................................................................................................... 2157 Kosuke Tomuro, Osamu Nitta, Yoshiyuki Takahashi, Takashi Komeda
Biomaterials, Tissue Engineering, Artificial Organs and Implants
Laser generated scaffolds for regeneration of the auditory nerve and facial nerve ..................................................... 2161 G. Hohenhoff, J.K. Krauss, K. Schwabe, M. Nakamura, K. Haastert and M. Hustedt
Experimental setup for high mechanical strain induction to cell loaded metallic biomaterials .................................. 2165 T. Habijan, T. Glogowski, S. Kühn, M. Pohl, G. Muhr and M. Köller
Stent Design for Gastrointestinal Leakage ....................................................................................................................... 2169 R.A. Rothwell, G.A. Thomson, M.S. Pridham
Textile blood vessels coated with DLC.............................................................................................................................. 2173 M. Jelinek, J. Podlaha, T. Kocourek, V. Žížková
Finite-element-analysis and in vitro study of bioabsorbable polymer stent designs..................................................... 2175 C. Schultze, N. Grabow, H. Martin and K.-P. Schmitz
Influence of crystallinity on biophysical properties of hydroxyapatite films ................................................................ 2179 M. Jelinek, T. Dostálová, T. Kocourek, V. Studnička, M. Seydlová, Z. Teuberová, P. Kříž, B. Dvořánková, K. Smetana Jr, J. Kadlec, M. Vrbová
Novel, Biocompatible and Photo Crosslinkable Polymeric Networks based on Unsaturated Polyesters: Optimization of the Network Properties............................................................................ 2182 N.K. Mohtaram, M. Imani, Sh. Sharifi, H. Mobedi and M. Atai
The Use of Fibrin as an Autologous Scaffold Material for Cardiovascular Tissue Engineering Applications: From In Vitro to In Vivo Evaluation ................................................................................................................................ 2186 T.C. Flanagan, J. Frese, J.S. Sachweh, S. Diamantouros, S. Koch, T. Schmitz-Rode and S. Jockenhoevel
Continuous oxygen consumption estimation method for animal cell bioreactors based on a low-cost control of the medium dissolved oxygen concentration................................................................................................................ 2190 A. Fontova, A. Soley, J. Gálvez, E. Sarró, M. Lecina, J. Rosell, P.J. Riu, J. Cairó, F. Gòdia and R. Bragos
Capillary network formation during tissue differentiation. A mechano-biological model .......................................... 2195 S. Checa and P.J. Prendergast
Formed 3D Bio-Scaffolds via Rapid Prototyping Technology........................................................................................ 2200 P.S. Maher, R.P. Keatch, K. Donnelly and J.Z. Paxton
Design and Construction of a System for the Application of Variable Pressure to Tissue Engineered Blood Vessels....................................................................................................................................................................... 2205 Stefanos E. Diamantouros, Thomas C. Flanagan, Thomas Finocchiaro, Thomas Schmitz-Rode, Stefan Jockenhoevel
Remedi: A Research Consortium Applying Engineering Strategies to Establish Regenerative Medicine as a New Industry ............................................................................................................................................................... 2209 M.L. Mather, J.A. Crowe, S.P. Morgan, L.J. White, K.M. Shakesheff, S.M. Howdle, R.J. Thomas, H.M. Byrne, S.L. Waters, D.J. Williams
A Biodegradable Balloon-expandable Stent for Interventional Applications in the Peripheral Vasculature – In vitro Feasibility .............................................................................................................................................................. 2213 N. Grabow, C.M. Bünger, C. Schultze, W. Schmidt, K. Sternberg, W. Schareck and K.-P. Schmitz
A membrane-based voice-producing element for female laryngectomized patients..................................................... 2216 J.W. Tack, H.A.M. Marres, C.A. Meeuwis, E.B. van der Houwen, G. Rakhorst, G.J. Verkerke
Fully implantable alloplastic urinary bladder ................................................................................................................. 2220 M. Roth, H. Wassermann and D. Jocham
Physical and dynamic models of the eye for tonometry applications............................................................................. 2223 M.R. Hien, R.L. Reuben, J. Hammer, R.W. Else, C. Muir
Temperature dependence of water absorption for wavelengths at 1920 nm and 1940 nm .......................................... 2228 Dirk Theisen-Kunde, Veit Danicke, Mario Wendt, Ralf Brinkmann
Manufacturing of bone substitute implants using Selective Laser Melting................................................................... 2230 S. Hoeges, M. Lindner, H. Fischer, W. Meiners and K. Wissenbach
Development of Drug-Eluting Stents on the Basis of Genistein and Poly(L-lactide) .................................................... 2235 Katrin Sternberg, Niels Grabow, Marian Löbler, Michael Petzsch, Janusz Lipiecki, Claus Harder, Christoph Nienaber, Klaus-Peter Schmitz
Coating homogeneity in the manufacture of Drug-Eluting Stents ................................................................................. 2240 C. Gocke, N. Grabow, C. Schultze, K. Sternberg, W. Schmidt and K.-P. Schmitz
Characterization of textile conductors for Bioimpedance Spectroscopy ....................................................................... 2244 Lisa Beckmann, Christian Neuhaus, Nadine Zimmermann, Harald Mai and Steffen Leonhardt
Analysis of mechanical properties of liver tissue as a design criterion for the development of a haptic laparoscopic tool .............................................................................................................................................. 2248 S. Kassner, J. Rausch, A. Kohlstedt and R. Werthschützky
Registered Microhardness of human teeth parts and dental filling composites............................................................ 2252 L. Schmitt, C. Lurtz, D. Behrend and K.-P. Schmitz
Quantitative High Speed Video Analysis of Biopolymer Encapsulated Cells while Capsule Formation.................... 2255 I. Meiser, S.C. Müller, M.M. Gepp, H. Zimmermann, F. Ehrhart
Chemical modification of polymeric implant surfaces for local drug delivery ............................................................. 2259 M. Teske, H.W. Rohm, N. Grabow, K.-P. Schmitz and K. Sternberg
Synthesis of Hydrolytically Stable Monomers for Dental Adhesives ............................................................................. 2263 N. Moszner, J. Angermann, F. Zeuner and U. Fischer
Time and pH dependence of adsorption of chlorhexidine on anatase and rutile titanium dioxide ............................. 2265 Yanqing Hu, Michele E. Barbour, Geoffrey C. Allen
Local drug delivery from hydroxyapatite ceramic fibres ............................................................................................... 2269 M. Ravelingien, N. Smets, S. Mullens, J. Luyten, C. Vervaet, J.P. Remon
Considerations in the development of novel functional monomers for dental resin composites ................................. 2273 T. Buruiana, V. Melinte, E.C. Buruiana, M. Moldovan, C. Prejmerean
Development of a Production Process for Stem Cell Based Cell Therapeutic Implants Using Disposable Bioreactor Systems ............................................................................................................................... 2277 C. Weber, S. Pohl, R. Poertner, C. Wallrapp, P. Geigle, and P. Czermak
Using MEA system in verifying the functionality of retinal pigment epithelium cells differentiated from human embryonic stem cells .................................................................................................................................... 2281 N. Nöjd, T. Ilmarinen, L. Lehtonen, H. Skottman, R. Suuronen and J. Hyttinen
Tubular scaffold design of polylactide acid for nerve tissue engineering: In vitro ....................................................... 2285 M.T. Khorasani, H. Mirzadeh, A. Talebi, S. Irani
Finite Element Analysis of Bone Remodeling after Hip Resurfacing Arthroplasty ..................................................... 2288 Bernd-Arno Behrens, Ingo Nolte, Patrick Wefstaedt, Christina Stukenborg-Colsman, Anas Bouguecha
Innovations and Nanotechnology
Systems in Foil – Opening new Perspectives in Medical Technology............................................................................. 2292 F.P. Wieringa, G.T. van Heck, P. Rensing, M.M. Koetse, S.S. Kalisingh and H. Schoo
Silicon Surface Modification with Supported Phospholipids Bilayer for Biosensor based on Imaging Ellipsometry ......................................................................................................................................... 2296 Y.Y. Chen, Z.H. Wang, Y. Liu, W. Liang, W.R. Chang, G. Jin
Monitoring Biomolecular Interaction with 1-D Plasmonic Nanostructure by Using Dark Field Microscopy ........... 2300 Hui-Hsin Lu, Chii-Wann Lin, Yueh-Yuan Fang, Tzu-Chien Hsiao, Su-Ming Hsu
Magnetic Nanoparticles
Development of iron oxide nanoparticles for hyperthermia and drug targeting .......................................................... 2304 M. Zeisberger, S. Dutz, R. Müller
Optimization of magnetic drug targeting by mathematical modeling and simulation of magnetic fields .................. 2309 I. Slabu, A. Röth, T. Schmitz-Rode and M. Baumann
A Spectrometer for Magnetic Particle Imaging ............................................................................................................... 2313 S. Biederer, T. Sattel, T. Knopp, K. Lüdtke-Buzug, B. Gleich, J. Weizenecker, J. Borgert, T.M. Buzug
AC susceptometry and magnetorelaxometry for magnetic nanoparticle based biomolecule detection...................... 2317 D. Eberbeck, A.P. Astalan, K. Petersson, F. Wiekhorst, C. Bergemann, C. Johansson, U. Steinhoff, H. Richter, A. Krozer, L. Trahms
Breath Synchronous Magnetic Drug Targeting in the Lungs......................................................................................... 2322 Ch. Dahmani, S. Götz, Th. Weyh, R. Renner, M. Rosenecker, C. Rudolph
Quantification of magnetic nanoparticle concentration in pig lung tissue after magnetic aerosol drug targeting by magnetorelaxometry ..................................................................................................................................................... 2326 F. Wiekhorst, U. Steinhoff, D. Eberbeck, K. Schwarz, H. Richter, R. Renner, M. Roessner, C. Rudolph and L. Trahms
Optical fiber sensors for medical applications – Practical engineering considerations................................................ 2330 J.A.C. Heijmans, L.K. Cheng, F.P. Wieringa
Drug delivery by nanoparticles – facing the obstacles..................................................................................................... 2335 M. Löbler, H.W. Rohm, K.-P. Schmitz, A.H. Johnston, T.A. Newman, S. Ranjan, R. Sood, P.K.J. Kinnunen
Suitability of Nanoparticles for Stent Application........................................................................................................... 2339 F. Luderer, K. Sternberg, H.W. Rohm, M. Löbler, C. Schultze, K. Köck, H.K. Kroemer and K.-P. Schmitz
Preparation and Characterization of Dextran-Covered Fe3O4 Nanoparticles for Magnetic Particle Imaging.......... 2343 Kerstin Lüdtke-Buzug, Sven Biederer, Timo Sattel, Tobias Knopp and Thorsten M. Buzug
Localization of a magnetic nanoparticle spot from features of the magnetic field pattern and comparison to a magnetic dipole fit ....................................................................................................................................................... 2347 F. Wiekhorst, U. Steinhoff, W. Haberkorn, G. Lindner, M. Bär and L. Trahms
Microsystems
Intraoral Drug Delivery Microsystem .............................................................................................................................. 2352 A. Schumacher, T. Goettsche, S. Haeberle, T. Velten, O. Scholz, A. Wolff, B. Beiski, S. Messner and R. Zengerle
A miniaturized pressure independent drug delivery system for metronomic cancer therapy..................................... 2356 F. Goldschmidtboeing, A. Geipel, C. Farhat, P. Jantscheff, N. Esser, U. Massing, P. Woias
An Efficient Low-voltage Micropump For An Implantable Sphincter System............................................................. 2360 G. Biancuzzi, T. Lemke, P. Woias, O. Ruthmann, H.J. Schrag, B. Vodermayer, T. Schmid and F. Goldschmidtboeing
Integration of microneedle-arrays and micro pumps in disposable and cheap drug delivery devices ....................... 2364 M. Vosseler, M. Jugl, M. Blaesing, D. Hradetzky, S. Messner and R. Zengerle
Adjustable Diffusion Barrier for Controlled Drug Release in Spastic and Pain Therapy ........................................... 2368 S. Herrlich, S. Ziolek, H. Hoefemann, R. Zengerle and S. Haeberle
A Prototype of Miniaturized SPR Sensing System based on Polymer Light-Emitting Diode...................................... 2372 Yueh-Yuan Fang, Tz-Bin Wang and Chii-Wann Lin
Micromanufactured electrodes for cortical field potential recording: in vivo study.................................................... 2375 J.G. Cordeiro, C. Henle, M. Raab, W. Meier, T. Stieglitz, A. Schulze-Bonhage, J. Rickert
A Binder-less Glucose Fuel Cell with Improved Chemical Stability Intended as Power Supply for Medical Implants .......................................................................................................................................................... 2379 S. Kerzenmacher, U. Kräling, J. Ducrée, R. Zengerle and F. von Stetten
Bio-Microsystem for Cell Cultivation and Manipulation and its Peripherals............................................................... 2384 U. Fröber, M. Stubenrauch, D. Voges, M. Hoffmann and H. Witte
Electrode Localization in a Self-organizing Network for Electrophysiological Diagnostics ........................................ 2388 M. Schaefer, C.P. Figueiredo, S. Kiefer, P.M. Mendes, R. Ruff, K.P. Hoffmann
A Novel Interconnection Technology for Laser-Structured Platinum Silicone Electrode Arrays .............................. 2392 T. Guenther, M. Schuettler, J.S. Ordonez, C. Henle, J. Wilde and T. Stieglitz
Fluid characterization by interdigitated electrodes sensors............................................................................................ 2396 Sylvain Druart, Rémi Pampin, Luis Moreno Hagelsieb, Laurent Francis, Denis Flandre
Non-contact Micromanipulation System with Computer Vision ................................................................................... 2400 Y. Tanaka, H. Kawada, K. Hirano, M. Ishikawa and H. Kitajima
Progress in the Development of the Artificial Accommodation System ........................................................................ 2405 J.A. Nagel, T. Martin, L. Rheinschmitt, U. Gengenbach, G. Bretthauer, R.F. Guthoff
Neural Engineering
Coating of neural microelectrodes with intrinsically conducting polymers as a means to improve their electrochemical properties ........................................................................................................................................ 2409 W. Poppendieck, K.-P. Hoffmann
MEMS-Technology for Large-Scale, Multichannel ECoG-Electrode Array Manufacturing ..................................... 2413 B. Rubehn, P. Fries and T. Stieglitz
Integration of Recording channel for the Evoked Compound Action Potential in an Implantable Neurostimulator................................................................................................................................... 2417 Pascal Doguet, Thomas Costecalde, Hervé Mével, Jorge Marin Millan and Jean Delbeke
Planar 2D-Array Neural Probe for Deep Brain Stimulation and Recording (DBSR).................................................. 2421 Silke Musa, Marleen Welkenhuysen, Roeland Huys, Wolfgang Eberle, Kris van Kuyck, Carmen Bartic, Bart Nuttin, Gustaaf Borghs
Robust microprobe systems for simultaneous neural recording and drug delivery..................................................... 2426 S. Spieth, A. Schumacher, K. Seidl, K. Hiltmann, S. Haeberle, R. McNamara, J.W. Dalley, S.A. Edgley, P. Ruther and R. Zengerle
Integration of Microfluidic Channels into Laser-Fabricated Neural Electrode Arrays .............................................. 2431 E. Fiedler, M. Schuettler, C. Henle, R. Zengerle, and T. Stieglitz
Why are penetrating electrodes for the cochlear nucleus not significantly superior to superficial implants?........... 2435 S.K. Rosahl and S. Rosahl
Deposition Parameters Determining Insulation Resistance and Crystallinity of Parylene C in Neural Implant Encapsulation ...................................................................................................................................... 2439 C. Hassler, R. von Metzen, T. Stieglitz
Electric Field Distribution for the Characterization of Planar and Recessed Electrodes ............................................ 2443 T.B. Krueger and T. Stieglitz
A Telemetry Platform for Implantable Devices Providing Inductive Energy Supply and a Bi-Directional Data Link............................................................................................................................................................................. 2447 C. Jeschke, M. Schuettler, L. Reindl, and T. Stieglitz
Modeling and Simulation
The Influence of Inter-Crystal Scattering on Detection Efficiency of Dedicated Breast Gamma Camera: A Monte Carlo Study ......................................................................................................................................................... 2451 M. Rasouli, M.R. Ay, A. Takavar, S. Lashkari and G. Loudos
An Optimised 3D Breast Phantom for X-Ray Breast Imaging Techniques .................................................................. 2455 K. Bliznakova, S. Kazakli and N. Pallikarakis
Random walk simulation of R2-dispersion in foam microstructures............................................................................. 2459 S.H. Baete and Y. De Deene
Imaging haemorrhagic cerebral stroke by frequency-difference magnetic induction tomography: numerical modelling ........................................................................................................................................................... 2464 M. Zolgharni, H. Griffiths and D.S. Holder
A novel simulation environment for testing experimental and established ultrasonic blood flow imaging techniques ..................................................................................................................................................... 2468 Abigail Swillens, Lasse Lovstakken, Hans Torp, Patrick Segers
Several Optimization Methods to Optimize the Spacing Between the Elements of Ultrasound linear Phased Array to Produce a Radiation Pattern with Minimum Side Lobes Level (SLL), Null Placement Control and Treating Cancer Tumors in Biological Media ................................................................. 2472 Mazhar B. Taiel, Nour H. Ismail, Ashraf T. Ibrahim
Monte Carlo Assessment of Geometric, Scatter and Septal Penetration Components in DST-XLi HEGP Collimator .......................................................................................................................................... 2479 M. Shafaei, M.R. Ay, D. Sardari, N. Dehestani, H. Zaidi
Simulation of Ultrasound Parameter Distribution Influence in Ultrasonic Computed Tomography......................... 2483 J. Roleček, D. Hemzal, I. Peterlík and J. Jan
Monte Carlo based calculation of patient exposure in X-ray CT-examinations ........................................................... 2487 R. Schmidt, J. Wulff, B. Kästner, D. Jany, J.T. Heverhagen, M. Fiebich and K. Zink
Heat transfer analysis software adapted to skin burn depth simulations...................................................................... 2491 M.K. Bajorek, M. Kacmarek
Changes in the length of the sacrospinous and sacrotuberous ligaments induced by Salter osteotomy: a computer simulation........................................................................................................................................................ 2495 W. Bartels, T. Pressel, S. Max, C. Hurschler and J. Vander Sloten
Virtual Stenting for Carotid Stenosis with Elastic Artery Wall Modeling .................................................................... 2499 J. Egger, S. Großkopf and B. Freisleben
Contact Configuration and Energy Consumption in Spinal Cord Stimulation ............................................................ 2503 C.C. de Vos, M.P. Hilgerink, H.P.J. Buschman and J. Holsheimer
Adaption of Mathematical Ion Channel Models to measured data using the Particle Swarm Optimization ............ 2507 G. Seemann, S. Lurz, D.U.J. Keller, D.L. Weiss, E.P. Scholz and O. Dössel
Localization of the Origin of Ventricular Premature Beats by Reconstruction of Electrical Sources Using Spatio-Temporal MAP-based Regularization ....................................................................................................... 2511 Y. Jiang, D. Farina and O. Dössel
System Identification of Neonatal Incubator based on Adaptive ARMAX Technique ................................................ 2515 Abbas K. Abbas, Steffen Leonhardt
A Convolution-based Methodology to Simulate Cardiac Ultrasound Data Sets: Integration of Realistic Beam Profiles .............................................................................................................................. 2520 Hang Gao, Piet Claus, G. Harry van Lenthe, Siegfried Jaecques, Steven Boonen, Georges Van der Perre, Walter Lauriks and Jan D’hooge
There is more than biphasic truncated exponential in defibrillation............................................................................. 2524 M. Schönegg and A. Bolz
Optimization of image quality and patient dose in paediatric radiology using Monte Carlo modeling ..................... 2528 P. Penchev, V. Klingmüller, G. Alzen and M. Fiebich
Evaluation of Induced Current Densities and SAR in the Human Body by Strong Magnetic Fields around 100 kHz................................................................................................................................................................... 2532 J. Bohnert, B. Gleich, J. Weizenecker, J. Borgert and O. Dössel
Pressure waveform estimation in the common carotid artery: Different methods in comparison .............................. 2536 Irene Zaccari, Alessandro C. Rossi, E. Marielle H. Bosboom, Peter J. Brands
Parameter estimation of recruitment models in mechanical ventilation ....................................................................... 2540 K. Möller, T. Sivenova, H. Runck, C. Stahl, S. Schumann, and J. Guttmann
Time and Memory Efficient Implementation of the Cardiac Bidomain Equations...................................................... 2544 M. Karl, G. Seemann, F.B. Sachse, O. Dössel and V. Heuveline
A generic model of overall heart geometry for model based studies of electrical, mechanical, and ion-kinetics aspects of the heart ................................................................................................................................. 2548 André C. Linnenbank, Peter M. van Dam, Thom F. Oostendorp, Peter H.M. Bovendeerd, Iris K. Rüssel, Mark Potse
Nonlinear Finite Element Analysis of Balloon Sinuplasty .............................................................................................. 2552 F. Cui, H.P. Lee and D.Y. Wang
A mathematical speedup prediction model for parallel vs. sequential programs ......................................................... 2556 H.M. Overhoff, S. Bußmann and D. Sandkühler
Model-Based Method of Non-Invasive Reconstruction of Ectopic Focus Locations in the Left Ventricle ................. 2560 D. Farina, Y. Jiang, O. Dössel, C. Kaltwasser and W.R. Bauer
Engineering Support in Surgical Strategy for Ventriculoplasty .................................................................................... 2564 Y. Shiraishi, T. Yambe, Y. Saijo, S. Masuda, G. Takahashi, K. Tabayashi, T. Fujimoto and M. Umezu
Simulation-based femoro-popliteal bypass surgery......................................................................................................... 2568 M. Willemet, G. Compère, J.F. Remacle and E. Marchandise
An applicability of Impedance Technique in evaluation of cardiac resynchronization therapy ................................. 2571 M. Lewandowska, J. Wtorek and L. Mierzejewski
Porcine model for CPR artifact generation in ECG signals ........................................................................................... 2575 A.C. Mendez, M. Roehrich and H. Gilly
Design and Assessment of Fuzzy Rules by Multi Criteria Optimization to Classify Anaesthetic Stages.................... 2579 R. Baumgart-Schmitt, C. Walther and K. Backhaus
Impact of the hERG Channel Mutation N588K on the Electrical Properties of the Human Atrium ......................... 2583 P. Carrillo, G. Seemann, E. Scholz, D.L. Weiss and O. Dössel
The Effect of Laser Characteristics in the Generation and Propagation of Laser Generated Guided Waves in Layered-skin Model ....................................................................................................................................................... 2587 Adèle L’Etang and Zhihong Huang
A Mesh-Based Model for Prediction of Initial Tooth Movement ................................................................................... 2592 K. De Bondt, A. Van Schepdael, J. Vander Sloten
Recipe Suggestion System .................................................................................................................................................. 2596 Satoshi Morita, Yasuyuki Shimada, Tsutomu Matsumoto, Shigeyasu Kawaji, and Timothy Teo Zhong Hon
An Object-oriented Model of the Cardiovascular System with a Focus on Physiological Control Loops.................. 2600 A. Brunberg, D. Abel and R. Autschbach
Computer Simulations of a Blood Flow Behavior in Simplified Stenotic Artery Subjected to Strong Non-Uniform Magnetic Fields .......................................................................................................................... 2604 S. Kenjeres and R. Opdam
A Multiphysics Model for Studying the Influence of Pulse Repetition Frequency on Tissue Heating During Electrochemotherapy ............................................................................................................................................ 2609 I. Lacković, R. Magjarević and D. Miklavčič
Transient Simulation of the Blood Flow in the thoracic Aorta based on MRI-Data by Fluid-Structure-Interaction.......................................................................................................................................... 2614 Dipl.-Ing. Markus Bongert, Prof. Dr.-Ing. Marius Geller, Dr. med. Werner Pennekamp, Dr. med. Daniela Roggenland, Prof. Dr. med. Volkmar Nicolas
Micro-gripping of Small Scale Tissues.............................................................................................................................. 2619 R.E. Mackay, H.R. Le, K. Donnelly and R.P. Keatch
Optimizing drug delivery using non-uniform magnetic fields: a numerical study ....................................................... 2623 J.W. Haverkort and S. Kenjereš
A real bicycle simulator in a virtual reality environment: the FIVIS project............................................................... 2628 O. Schulzyk, U. Hartmann, J. Bongartz, T. Bildhauer, R. Herpers
Influence of body worn wireless mobile devices on implanted cardiac pacemakers .................................................... 2632 Sebastian Seitz and Olaf Dössel
Co-simulation approach for the design of MRI RF coils and its application to local SAR distribution ..................... 2636 Sylvia Smajic-Peimann and Waldemar Zylka
Effect Of Prism Induced Heterophoria On Binocular Visual Evoked Potential........................................................... 2640 S.M. Shushtarian, A. Norouzi
Experimental and Numerical Flow Modeling towards Refinement of Three-dimensional Echocardiography for Heart Valve Leakage Quantification .......................................................................................................................... 2644 P. Van Ransbeeck, M. Vermeulen, B. Van Der Smissen, F. Maes, R. Kaminsky, T. Claessens, P. Segers and P. Verdonck
A Model for the Regulation of the Ca2+ in the neuronal cell........................................................................................... 2648 C.M. Dabu
Mathematical modeling of fracture healing: coupling between mechanics, angiogenesis and osteogenesis............... 2651 L. Geris, J. Vander Sloten and H. Van Oosterwyck
Numerical Study of Effects of Bladder Filling on Prostate Positioning In Radiotherapy............................................ 2655 J. Krywonos, F. Elkut, J. Brunt, Z. Malik, C. Eswar, J. Fenwick and X.J. Ren
Time course of electrical and diffusional parameters during and after electroporation.............................................. 2659 D. Miklavcic and L. Towhidi
Multivariate Calibration Models to Estimate Non-invasively Blood Glucose Levels Based on A Novel Optical Technique Named Pulse Glucometry ................................................................................... 2664 Yasuhiro Yamakoshi, Mitsuhiro Ogawa, Takehiro Yamakoshi, Toshiyo Tamura and Ken-ichi Yamakoshi
Functional analysis of Normal and CSNB a-wave ERG component.............................................................................. 2668 R. Barraco, L. Bellomonte, M. Brai and D. Persano Adorno
Temperature distribution assessment during radiofrequency ablation......................................................................... 2672 G. Tatoń, T. Rok, E. Rokita
Numerical Modeling of Perfusion Flow in Irregular Scaffolds ...................................................................................... 2677 P. Van Ransbeeck, F. Maes, S. Impens, H. Van Oosterwyck and P. Verdonck
Return Time of Heart Dynamics ....................................................................................................................................... 2681 F. Ariaei, E.A. Jonckheere, W.P. Stuppy and T.S. Callahan
How to shift the human sleep-wake cycle: a simulation study incorporating monochromatic blue light................... 2686 C. Heinze, S. Schirmer, M. Golz
Benchmarking Different Models describing Sinus Node Heterogeneity........................................................................ 2691 M. Wilhelms, G. Seemann and O. Dössel
A Computational Modeling Study of the Effects of Acoustic Trauma on the Dorsal Cochlear Nucleus .................... 2695 X. Zheng, A. Giang, S. Vetsis, I.C. Bruce, and H.F. Voigt
It Takes Two to Tango: Regulation of Sarcoplasmic Reticulum Calcium ATPase by CaMK and PKA in a Mouse Cardiac Myocyte ............................................................................................................................................. 2699 J.T. Koivumäki, T. Korhonen, J. Takalo, M. Weckström and P. Tavi
Reconstruction of Ectopic Foci Using the Critical Point Theory.................................................................................... 2703 V. Reimund, D. Farina, Y. Jiang and O. Dössel
Education and Profession
A Non-traditional Career Evolution: Replacing the Leaders....................................................................................... 2707 Professor Gabriela Marinescu, Ph. D.
The role of biomedical engineers in systems / synthetic biology..................................................................................... 2714 J.A. Crowe
Cooperative Education Program in Medical Equipment Technology Education......................................................... 2718 A. Alhamwi, T. Elsarnagawy
Sustainable System Understanding & Empathic Product Design – A Custom-Built Qualification Concept for Biomedical Engineers ................................................................................................................................................... 2722 I. Marsolek, D. Fuchs, W. Friesdorf, O. Bergmann and D. Pappert
By legislation driven BME curriculum at VSB TU Ostrava........................................................................................... 2726 J. Cernohorsky, H. Sochorova
Assistive Technologies: New Challenges for Education .................................................................................................. 2730 M. Klima, L. Lhotska, V. Chudacek, M. Huptych and M. Husak
B-AEHS: a formal model for Adaptive Educational Hypermedia System in a Biomedical Project Evaluation........ 2734 M.A.F. Almeida and F.M. de Azevedo
EVICAB - Biomedical Engineering Program on the Internet including Video Files for iPod..................................... 2738 J.A. Malmivuo, A. Kybartaite and J.J. Nousiainen
Current Status of Biomedical Engineering Education Programs in Austria ................................................................ 2742 J. Schröttner
BME master studies as project-oriented learning at CTU Prague................................................................................. 2746 V. Rogalewicz and M. Vrbová
Dual Education in Biomedical Engineering at Berufsakademie Sachsen...................................................................... 2749 T. Schmitt, E. Uffrecht
Assessing the educational status of clinicians concerning mechanical ventilation........................................................ 2753 D. Gottlieb, S. Lozano, J. Arntz, J. Guttmann and K. Möller
TheraGnosos: an Interactive Blended Learning, Simulation and Training System for Biomedical Engineering University Courses.............................................................................................................................................................. 2757 U. Morgenstern, A. Abdel-Haq, V. Barth, H. Dietrich, I. Rudolph
Biomedical Engineering Education: New Curricula, New Experience.......................................................................... 2760 L. Lhotska
Using Open Structure Database for Teaching Designing Health Information System ................................................ 2764 I. Patasiene, M. Patasius and R. Kregzdyte
Curriculum of Bachelor Studies - Biomedical Technician at the Czech Technical University.................................... 2768 J. Charfreitag and J. Hozman
Design and Development of Virtual Scenes using 3D models for Computer-Assisted Learning................................. 2772 L. Colmenares, A. Bosnjak, G. Montilla, H. Villegas, I. Jara
Author Index....................................................................................................................................................................... 2777
Subject Index ...................................................................................................................................................................... 2793
Risk Stratification in Ischemic Heart Failure Patients with Linear and Nonlinear Methods of Heart Rate Variability Analysis

A. Voss1, R. Schroeder1, M. Vallverdú2, H. Brunel2, I. Cygankiewicz3, R. Vázquez4, A. Bayés de Luna5 and P. Caminal2

1 University of Applied Sciences Jena / Department of Medical Engineering and Biotechnology, Carl-Zeiss-Promenade 2, Jena, Germany
2 Technical University of Catalonia / Biomedical Engineering Research Centre, Pau Gargallo 5, Barcelona, Spain
3 Medical University of Lodz / Institute of Cardiology, ul. Sterlinga 1/3, Lodz, Poland, from the MUSIC Trial
4 Valme University Hospital in Seville / Cardiology Unit, Bobby Deglané 5, Seville, Spain, from the MUSIC Trial
5 Hospital de la Santa Creu i Sant Pau Barcelona / Servicio de Cardiologia, St Antoni Ma Claret 167, Barcelona, Spain, from the MUSIC Trial

Abstract — Heart failure has a current prevalence of 14 million affected people in Europe and is thus a major and escalating public health problem in the industrialized countries with ageing populations. A five-year mortality rate between 62-75% in men and 38-42% in women related to the initial diagnosis of heart failure was documented in the Framingham study. The aim of this study was to investigate the suitability of linear (according to the Task Force recommendations) and nonlinear (symbolic dynamics - SD and detrended fluctuation analysis - DFA) methods of heart rate variability (HRV) analysis for risk stratification in patients with ischemic heart failure (IHF). From 221 low risk (LR: stable condition) and 35 high risk (HR: cardiac death) IHF patients, HRV from 24h long-term BBI time series was analyzed. Seven measures from all applied methods revealed significant differences (p < …) between LR and HR. …

… > 60 mm, LVEF - left ventricular ejection fraction < …, … 14 mm, or abnormal relaxation patterns characteristic of diastolic dysfunction, and sinus rhythm. Exclusion criteria were atrial
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 1–4, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
fibrillation, permanent pacemaker or implantable cardioverter defibrillator (ICD), sustained ventricular tachycardia, as well as ICD implantation or heart transplantation during the 2-year follow-up period. Furthermore, patients with a percentage of ectopic beats or artifacts higher than 10% within the 24-hour Holter ECG recordings (n=13) were excluded from this study to avoid filtering influences on the analysis results. All patients received optimal medical treatment with ACE inhibitors (73%), beta-blockers (75%) and diuretics (57%). Diabetes mellitus had been diagnosed in 118 (46%) of all IHF patients. In this study, 24-hour Holter ECGs (ELA Medical SyneFlash® - MMC, USA, sampling frequency = 200 Hz) from 256 evaluable patients with IHF (210 males, 46 females) were investigated. Artifacts and ectopic beats within the given beat-to-beat interval (BBI) time series were detected and replaced by interpolated beats applying a special adaptive variance estimation algorithm. With reference to the follow-up protocol, one subgroup including the low risk IHF patients (LR: n=221, without progression of IHF, stable condition) and one subgroup consisting only of high risk IHF patients (HR: n=35, death due to a cardiac event) were defined. Due to a small but significant difference (p=0.043) between the mean values of BBI in the groups LR and HR, a meanNN-comparable group LRNN was matched to the HR group (p=0.150) to exclude influences of the mean heart rate on the parameters and statistical results (Table 1).

Table 1 Group characteristics of high risk (HR) and low risk (LR) IHF patients and a meanNN matched low risk IHF group (LRNN); * mean ± standard deviation, ** number of medicated patients
Parameters                             HR              LR               LRNN
number of patients (males / females)   35 (30 / 5)     221 (180 / 41)   200 (171 / 29)
age [years]*                           65.5 ± 10.6     62.9 ± 9.6       63.4 ± 9.7
meanNN [ms]*                           829.3 ± 133.5   868.9 ± 131.9    852.7 ± 124.8
LVEF [%]*                              30.4 ± 7.6      35.4 ± 11.2      34.7 ± 10.8
LVDD [mm]*                             63.5 ± 7.5      61.5 ± 8.9       61.8 ± 9.1
NYHA [II / III]*                       2.5 ± 0.5       2.2 ± 0.4        2.2 ± 0.4
ACE inhibitors**                       23              164              148
Beta-blockers**                        19              174              153
Diuretics**                            26              121              109
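The artifact and ectopic-beat replacement described above uses a special adaptive variance estimation algorithm whose details are not given here. As a much simplified stand-in, outliers in the BBI series can be flagged against a local median and bridged by linear interpolation; the window size and tolerance below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def clean_bbi(bbi, win=11, tol=0.3):
    """Replace outlier beat-to-beat intervals by linear interpolation.
    Simplified stand-in for the adaptive variance estimation algorithm;
    `win` (local window, beats) and `tol` (relative deviation) are
    hypothetical choices."""
    bbi = np.asarray(bbi, dtype=float)
    half = win // 2
    ok = np.ones(len(bbi), dtype=bool)
    for n in range(len(bbi)):
        lo, hi = max(0, n - half), min(len(bbi), n + half + 1)
        med = np.median(bbi[lo:hi])
        if abs(bbi[n] - med) > tol * med:
            ok[n] = False          # flag as ectopic beat / artifact
    idx = np.arange(len(bbi))
    bbi[~ok] = np.interp(idx[~ok], idx[ok], bbi[ok])
    return bbi
```

A single ectopic interval in an otherwise regular series is then replaced by the value interpolated from its neighbours.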
B. Methodology Standard HRV was quantified calculating parameters from linear time and frequency domain by means of the filtered BBI time series according to the Task Force recommendations by the European Society of Cardiology
and the North American Society of Pacing and Electrophysiology [9]. Amongst others, the following time domain measures were evaluated: meanNN - mean value of the BBI time series [ms], sdNN - standard deviation of the BBI time series [ms] and rmssd - square root of the mean squared differences of consecutive BBIs [ms]. From the power spectra of the BBI time series (Fast Fourier Transform using a Blackman-Harris window) frequency domain parameters were extracted: LF - power in the frequency band 0.04-0.15 Hz [s²], HF - power in the frequency band 0.15-0.4 Hz [s²], LF/HF - ratio of LF to HF, and LFn as well as HFn - normalized LF [LF/(LF+HF)] and HF [HF/(LF+HF)], respectively. Symbolic dynamics (SD) was applied to classify dynamic changes within the BBI time series, estimating nonlinear measures by transformation of the time series into four symbols {0, 1, 2, 3} as presented in [10]. After symbol transformation, words consisting of three consecutive symbols were formed from the symbol strings, resulting in a total number of 64 possible word types (000, 001, ..., 333). Several SD parameters describing the word distribution were estimated, e.g. pW000 to pW333 - probability of the occurrence of each single word type (000 to 333) and wpsum13 - relative portion of words that contain only the symbols "1" and "3". In an additional mode of SD [11] the BBI time series were transformed into five symbols {0, 1, 2, 3, 4} according to Eq. (1):

S_j = \sum_{i=1}^{M-1} \begin{cases} 0: & |BBI_i^j - BBI_{i+1}^j| \ge a \cdot sd(j) \\ 1: & |BBI_i^j - BBI_{i+1}^j| < a \cdot sd(j) \end{cases}, \qquad j = 1, \dots, w.   (1)
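A sketch of the windowed five-symbol transformation of Eq. (1) and the derived tau1_p001 count, assuming M = 5 and a = 1 as stated in the text, and a window step of one beat:

```python
import numpy as np

def window_symbols(bbi, M=5, a=1.0, step=1):
    """Five-symbol transformation of Eq. (1): per window of M beats, count
    the successive differences smaller than a times the window's std."""
    bbi = np.asarray(bbi, dtype=float)
    symbols = []
    for start in range(0, len(bbi) - M + 1, step):
        win = bbi[start:start + M]
        diffs = np.abs(np.diff(win))
        symbols.append(int(np.sum(diffs < a * np.std(win))))
    return np.array(symbols)

def tau1_p001(symbols, n_types=5):
    """Number of the symbol types with a probability of occurrence > 1%."""
    probs = np.bincount(symbols, minlength=n_types) / len(symbols)
    return int(np.sum(probs > 0.01))
```

For a constant BBI series every window yields the symbol 0 (no difference falls below the zero standard deviation), so only one symbol type occurs.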
Thereby, a sliding window j of length M beats (in this study M=5) is shifted over the BBI time series and the difference of each pair of successive BBIs within the window is determined. The number of differences that are smaller than the a-scaled (a=1) standard deviation sd(j) of the values in the sliding window is determined and yields a symbol Sj. Afterwards, the window is shifted a fixed number of beats further (here one beat) and the procedure starts again, where w is the number of windows taken into account. Amongst others, the parameter tau1_p001 was determined by counting the number of the 5 symbol types with a probability of occurrence > 1%. Detrended fluctuation analysis (DFA) was first proposed and applied to physiologic signals by Peng et al. [12, 13] and quantifies the presence or absence of fractal correlation properties in non-stationary time series. First, the BBI time series of length N is integrated and afterwards divided into s equal and non-overlapping segments of length n. For the data in each segment a least-squares regression line representing the local trend yn(s) in that segment is fitted, and the integrated time series y(s) is detrended by subtraction of yn(s). Subsequently, the root-mean-square
fluctuation parameter F(n) of the resulting detrended BBI time series is estimated. In this study, this computation was repeated over segment lengths from n = 4 to 64 data points to characterize the relationship between F(n) and n. Finally, a double-log diagram of F(n) against n was plotted; it indicates the presence of fractal scaling if a linear relationship between both variables exists. In that case, fluctuations can be described by scaling exponents estimated as the regression slope of the double-log plot [14]. To characterize short- as well as long-term fluctuations, a short-term scaling exponent α1 was estimated between n = 4 and 16 and a long-term scaling exponent α2 was calculated over the range n = 16 to 64.
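The DFA procedure described above can be sketched as follows, with plain least-squares detrending per segment and non-overlapping segments (a simplification of the full method in [12, 13]):

```python
import numpy as np

def dfa_exponent(bbi, seg_lengths):
    """Detrended fluctuation analysis: slope of log F(n) versus log n.
    Sketch with non-overlapping segments and linear detrending."""
    bbi = np.asarray(bbi, dtype=float)
    y = np.cumsum(bbi - bbi.mean())                        # integrate the series
    log_n, log_f = [], []
    for n in seg_lengths:
        n_seg = len(y) // n
        rms = 0.0
        for s in range(n_seg):
            seg = y[s * n:(s + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            rms += np.mean((seg - trend) ** 2)
        log_n.append(np.log(n))
        log_f.append(np.log(np.sqrt(rms / n_seg)))
    return np.polyfit(log_n, log_f, 1)[0]                  # scaling exponent
```

Uncorrelated (white-noise) intervals should give an exponent near 0.5, which is a quick sanity check for the implementation.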
The SD parameter tau1_p001 was considerably decreased within the HR group (p=0.0008) compared to LR; it quantifies the distribution of 5 different symbol types with a probability of occurrence of more than 1% within a transformed symbol series. From DFA, only the scaling exponent α1 (p=0.0005) showed a significantly decreased amount of short-term correlations within the BBI time series of high risk IHF patients in comparison to low risk patients.

Table 2 Univariate significances (p) for discrimination between high risk (HR) and low risk (LR) IHF patients (* p < …)

For t ≫ τ the net value of both muscles working together is approximated by (1 − A)F_sum. A has a value close to 1, therefore the net muscle force is very small. Hence, the arm moves very slowly in one direction. An accelerometer signal measured during human movement consists of a part that represents accelerations due to the actual movements of the body and a part that represents the position of the sensor in relation to the gravity field. When there is no movement, the latter causes an offset in the signal between -1 and 1 g. During a change of posture, the position in relation to the gravity field can change, and thus the offset changes. For a simple 2-D planar rotation of the arm in the field of gravity, the acceleration A_t in the direction of the movement then yields:

A_t(t) = -R\,\alpha(t) - g \sin(\theta(t)),   (4)

where R is the distance between the elbow and the accelerometer, α(t) is the angular acceleration, g is the gravitational constant and θ(t) is the angular displacement. Using kinetic relations, α(t) can be replaced by:

\alpha(t) = \frac{4.5\,(F_{ag} - F_{ant})}{BM \cdot BL^2},   (5)

where BM is body mass and BL is body length. Using Eqs. 2-5 and t ≫ τ we get:

A_t(t) = \frac{4.5\,((1 - A)\,F_{sum})}{BM \cdot BL^2} - g \sin(\theta(t)).   (6)

Now it can be seen that the acceleration caused by movement is much smaller than the acceleration caused by gravity. Thus the typical block-like pattern is mainly caused by the gravity component that slowly changes.
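To see why the gravity term of Eq. (6) dominates, a small numeric check helps; the body mass, body length, co-contraction level A and summed force F_sum below are purely illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical values for evaluating Eq. (6); none are taken from the paper.
BM, BL = 75.0, 1.80          # body mass [kg], body length [m]
A, F_sum = 0.95, 400.0       # co-contraction level close to 1, summed force [N]
theta = math.radians(30.0)   # angular displacement

muscle_term = 4.5 * (1 - A) * F_sum / (BM * BL ** 2)   # net muscle acceleration
gravity_term = 9.81 * math.sin(theta)                  # gravity component
A_t = muscle_term - gravity_term                       # Eq. (6)
```

With A close to 1 the muscle term is an order of magnitude below the gravity term, consistent with the block-like pattern being dominated by the slowly changing gravity offset.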
B. Features for block-like pattern

For the calculation of features that represent the block-like pattern, first the posture is approximated from the accelerometer signal. To this end each ACM-signal xi is filtered with a first order low-pass filter with a cut-off frequency of 0.5 Hz, thus creating xi_slw; xi represents one of the three signals from a 3-D accelerometer, i ∈ {1, 2, 3}. Then the first derivative (jerk) and the variance of xi_slw are determined as measures for the change of posture. During a tonic seizure the movement is extremely slow, therefore the amplitude of the ACM-signal is between -1 and 1 g. During other movements there is more variation and the amplitude can be up to 2-3 g; therefore the distance between the minimal and maximal value of xi_slw is also a good indicator for tonic seizures. The jerk J_Yslw is defined as:

J_{Y_{slw}}[n] = \sqrt{\sum_{k=1}^{3} \left( \frac{x_{k_{slw}}[n] - x_{k_{slw}}[n-1]}{\Delta t} \right)^{2}},   (7)

where Δt is the sampling interval. During a tonic seizure the arm changes position very slowly, thus the value of J_Yslw is low. During other movement types the position changes are much faster, and so is J_Yslw. Per segment of N samples the mean magnitude of the jerk is calculated. For a segment length of 10 seconds with a sampling frequency fs of 100 Hz this means N = 1000 samples:

\overline{J_{Y_{slw}}} = \frac{1}{N} \sum_{n=1}^{N} J_{Y_{slw}}[n], \qquad N = 1000.   (8)

The magnitude Y_slw for the dominant arm sensor is:

Y_{slw}[n] = \sqrt{\sum_{k=1}^{3} x_{k_{slw}}^{2}[n]}.   (9)

The variance of the magnitude S²_Yslw for each segment is:

S_{Y_{slw}}^{2} = \frac{1}{N-1} \sum_{n=1}^{N} \left( Y_{slw}[n] - \overline{Y}_{slw} \right)^{2}, \qquad N = 1000,   (10)

with Ȳ_slw the mean magnitude:

\overline{Y}_{slw} = \frac{1}{N} \sum_{n=1}^{N} Y_{slw}[n], \qquad N = 1000.   (11)

Since J_Yslw is a linear measure and S²_Yslw quadratic, the square root of S²_Yslw is used. The hypothesis is that the change of posture is unnaturally slow, thus S_Yslw is lower for tonic seizures than for other movement types. For the distance between minimum and maximum signal values, the range R_y is defined as:

R_{y} = \sum_{k=1}^{3} \left| \max(x_{k_{slw}}[1:1+L]) - \min(x_{k_{slw}}[1:1+L]) \right|^{2}.   (12)

The range R_y between the maximum and minimum value lies in a smaller range for tonic seizures than for other movement types.
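The block-pattern features of Eqs. (7)-(12) can be sketched as follows. The first-order low-pass is a simple RC discretization standing in for whatever filter realization the authors used:

```python
import numpy as np

FS = 100.0  # sampling frequency [Hz], as in the paper

def lowpass(x, fc=0.5, fs=FS):
    """First-order IIR low-pass (simple RC discretization), cut-off fc."""
    dt = 1.0 / fs
    alpha = dt / (1.0 / (2.0 * np.pi * fc) + dt)
    y = np.empty(len(x))
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y

def block_features(acc, fs=FS):
    """Mean jerk magnitude (Eqs. 7-8), magnitude std (Eqs. 9-11) and range
    feature (Eq. 12) for one 3-D sensor; acc has shape (N, 3)."""
    slw = np.column_stack([lowpass(acc[:, k], fs=fs) for k in range(3)])
    jerk = np.sqrt(np.sum((np.diff(slw, axis=0) * fs) ** 2, axis=1))
    mag = np.sqrt(np.sum(slw ** 2, axis=1))
    rng = np.sum((slw.max(axis=0) - slw.min(axis=0)) ** 2)
    return jerk.mean(), mag.std(ddof=1), rng
```

A static posture (constant 1 g offset on one axis) should yield near-zero jerk, variance and range, matching the hypothesis that these features are small for very slow movements.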
Tamara M.E. Nijsen, Ronald M. Aarts, Johan B.A.M. Arends, Pierre J.M. Cluitmans
C. Features for tremor

The block-like pattern is often accompanied by a subtle tremor; therefore the fast signal component xi_fst is also used to calculate features that are indicative for tremor. To create xi_fst, xi_slw is subtracted from the original signal xi. Then the variance is also calculated for this fast component (S_Yfst).

D. Features for other movements

For a discriminative feature set, features need to represent characteristics of both tonic seizures and other movement types; these can also be motor seizures of another type. Therefore features based on our model for myoclonic and clonic seizures are included. The continuous wavelet transform (CWT) of a signal f(t) at scale a and position t is defined as:

CWT_h[f](t, a) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(\tau)\, h^{*}\!\left(\frac{t - \tau}{a}\right) d\tau,   (13)

where h(t) is the wavelet base and * denotes the complex conjugation. In this case the wavelet base h(t) is formed by our model:

h(t) = \frac{1}{A}\, t \left( e^{-t} - e^{-t/B} \right).   (14)

This function satisfies the admissibility condition if A = B², see [6] for more details. Then the scalograms

SC_h[x_i](n, a) = |CWT_h[x_i](n, a)|^{2}   (15)

of the three 1-D sensors are summed:

SC_T(n, a) = \sum_{i=1}^{3} SC_h[x_i](n, a).   (16)
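A sketch of the model wavelet of Eq. (14) and the scalogram of Eqs. (13) and (15); B = 2 and the truncated kernel support are illustrative assumptions, and scales are taken in samples:

```python
import numpy as np

def model_wavelet(t, B=2.0):
    """Wavelet base of Eq. (14), h(t) = (1/A) t (e^{-t} - e^{-t/B}), with
    A = B^2 (admissibility, see [6]); B = 2 is an illustrative choice."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0.0, t * (np.exp(-t) - np.exp(-t / B)) / B ** 2, 0.0)

def scalogram(x, scales):
    """Discretized |CWT|^2 (Eqs. 13 and 15) via direct convolution.
    Scales are in samples; the kernel support is truncated heuristically."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sc = np.empty((len(scales), n))
    for i, a in enumerate(scales):
        m = min(n, int(20 * a))                              # truncated support
        kernel = model_wavelet(np.arange(m) / a) / np.sqrt(a)
        sc[i] = np.convolve(x, kernel, mode='same') ** 2     # |CWT|^2
    return sc
```

The scalogram of a test sinusoid has one row per scale and is non-negative by construction.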
Frequencies of movements during daily activities dominantly lie between 0.3 and 3.5 Hz. Frequencies of clonic seizures typically lie in the range of 2-5 Hz [3] and accelerometer patterns of myoclonic seizures lie in the range of 4-6 Hz. For myoclonic (and clonic) seizures most of the power is in the range of scales 2-10. Our hypothesis is that during tonic seizures the power is concentrated in the higher scales (≤ 0.5 Hz) because of the slow change of posture. Hence the model-based wavelet is used to calculate a scalogram for the scales 1-50. For the detection of tonic seizures the ratio between the power in scales 2-10 and the total power (ERhigh) and the ratio between the power in scales 20-50 and the total power (ERlow) can be useful features:

ER_{high}[n] = \frac{\sum_{a=2}^{10} SC_T(n, a)}{\sum_{a=1}^{50} SC_T(n, a)},   (17)

ER_{low}[n] = \frac{\sum_{a=20}^{50} SC_T(n, a)}{\sum_{a=1}^{50} SC_T(n, a)}.   (18)

ERhigh is chosen because it is expected to be an important feature to discriminate between tonic movements and myoclonic, clonic and fast normal movements, and ERlow
because it is an important feature to distinguish slow (block-like) movements from the other movements. Per segment of 1000 samples the mean values of ERhigh and ERlow are determined:

\overline{ER}_{high} = \frac{1}{N} \sum_{n=1}^{N} ER_{high}[n], \qquad N = 1000,   (19)

\overline{ER}_{low} = \frac{1}{N} \sum_{n=1}^{N} ER_{low}[n], \qquad N = 1000.   (20)

III. CLASSIFICATION

To establish their value for the detection of tonic seizures the features are evaluated in a two-class detection setup. The two classes are 'tonic seizure' and 'other movements'. Hence, myoclonic, clonic, and normal movements are regarded as one class. As classification method Fisher's linear discriminant analysis is used [7].

IV. EVALUATION

A. Patient data

For evaluation, ACM-data are used from 36 mentally retarded patients who suffer from refractory epilepsy. The patients were monitored with the setup described in our previous clinical study [2]. Three experts divided the corresponding ACM-signals into classes using video and accelerometric information. Available classes were: no movement, myoclonic seizure waveform, tonic seizure waveform, clonic seizure waveform, normal movement, and unclear. The interrater agreement was computed for each pair of experts. For the evaluation study, only events on which two experts agreed were selected. Events marked as 'unclear' were excluded from the evaluation. For a seizure event to be included, the seizure also needed to be visible in the EEG-signal. This resulted in a data set containing data of 18 patients, 27 tonic seizures, 10 clonic seizures, 16 myoclonic seizures and 36 normal movements. The data is divided into three groups. We aim for an approach that is robust among patients; therefore the groups have no overlap in patients. From these three groups, three training sets are created that are composed of data of two groups. For each training set, the data of the remaining third group of patients is used for testing. The three training sets are also used for the determination of the optimal combination of features.
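The energy-ratio features of Eqs. (17)-(20) reduce to row sums over the scale axis of the summed scalogram. A minimal sketch, assuming SC_T is stored as a 50 x N array with row i holding scale i+1:

```python
import numpy as np

def energy_ratios(sc_t):
    """Segment means of ER_high and ER_low (Eqs. 17-20); sc_t is the summed
    scalogram as a 50 x N array, row i holding scale i+1."""
    total = sc_t.sum(axis=0)
    er_high = sc_t[1:10].sum(axis=0) / total     # scales 2-10 (9 rows)
    er_low = sc_t[19:50].sum(axis=0) / total     # scales 20-50 (31 rows)
    return er_high.mean(), er_low.mean()
```

For a uniform scalogram the ratios are simply the fractions of scales in each band, 9/50 and 31/50, which makes the indexing easy to verify.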
To this end, per training set the detection performance of each combination of features is calculated. The optimal feature set is the feature set for which all training sets obtain a PPV > 0.4 and for which the lowest sensitivity of the three training sets is maximal.

B. Performance measures

The performance per feature set is expressed in the sensitivity (SEN), the percentage of tonic seizures correctly classified, the number of false detections (FD), and the positive predictive value (PPV), which is the ratio between correctly detected tonic seizures and all events that are classified as a tonic seizure. Detected events are defined in a similar way as in [8], but with a time basis of 10 seconds instead of one second.
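The two-class Fisher discriminant named in Section III can be sketched as follows (pooled within-class scatter and a midpoint threshold; the training details used in the paper are in [7]):

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fisher's linear discriminant for two classes: projection vector w
    from the pooled within-class scatter, plus a midpoint threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold

def classify(X, w, threshold):
    """1 = 'tonic seizure' side of the discriminant, 0 = 'other movements'."""
    return (X @ w > threshold).astype(int)
```

On two well-separated synthetic feature clouds the discriminant recovers the class labels almost perfectly, which is a reasonable smoke test before applying it to real feature vectors.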
Automated detection of tonic seizures using 3-D accelerometry
V. RESULTS

A. Interrater agreement

Annotations were made based on information from both video and ACM. With a mean value of 0.50 the agreement of our experts can be considered moderate [9]. This result is in agreement with the findings of Parra et al. [10].

B. Detection performance

Table I shows the detection performance on the training data itself with the optimal feature set. It was found that the optimal feature set contains all features except ERhigh. This feature did not contribute much extra to the performance of the algorithm. It was added to describe the characteristics of fast normal movements as well, but it appears to be redundant. Sensitivities are high: all tonic seizures except one are detected. The positive predictive values lie around 0.40. Table II shows the detection performance on the three test sets. The values for SEN and PPV are slightly lower than in the training phase. 80% of the tonic seizures are detected with a PPV of 0.35. Analysis of the false detections shows that 42% of the false positives are also seizures.

TABLE I  DETECTION PERFORMANCE RESULTS ON TRAINING SETS

Training set   TP   FN   FD   Sen    PPV
1               7    1   14   0.88   0.33
2              12    0   19   1.00   0.39
3              14    0   18   1.00   0.44
overall        33    1   51   0.97   0.40
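The SEN and PPV values reported in the tables follow directly from the event counts, e.g. for the overall test-set row:

```python
def detection_metrics(tp, fn, fd):
    """Sensitivity and positive predictive value from event counts."""
    sen = tp / (tp + fn)
    ppv = tp / (tp + fd)
    return sen, ppv
```

With the overall test-set counts (TP = 24, FN = 5, FD = 45) this reproduces SEN = 0.83 and PPV = 0.35.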
TABLE II  DETECTION PERFORMANCE RESULTS ON TEST SETS

Test set   TP   FN   FD   FDsz   Sen    PPV
1          11    3   13     4    0.79   0.46
2           7    2   14     5    0.78   0.33
3           6    0   18    10    1.00   0.25
overall    24    5   45    19    0.83   0.35

VI. DISCUSSION

This paper shows the first quantitative results for the detection of tonic seizures based on 3-D accelerometry (ACM) recordings. It was possible to detect tonic seizures with a success rate around 0.80 and with a positive predictive value (PPV) of 0.35. Four of the five tonic seizures that were missed did not have the characteristic block-like appearance in the ACM-signals. During a tonic seizure both agonist and antagonist muscles contract heavily. Usually there is still a net force effect in one direction, and then the affected limbs move slowly, but it can also happen that the net effect is zero and that there is no movement effect. It can also be the case that the movement is blocked, because the limbs are fixed (for example against the body, or against furniture). In these cases, where the seizures are not clearly visible in ACM-signals, the measurement of the EMG might be more useful. The other missed seizure was very short in duration (< 1 s) and therefore difficult to detect with a window of 10 seconds. A positive predictive value of 0.35 implies that one out of three alarms is genuine. For offline analysis this is acceptable, especially when 42% of the false alarms are actually motor seizures of another type (myoclonic or clonic). Previously we reported that there are three characteristic pattern types visible during simple motor seizures [2]. It was also shown that a seizure can consist of a combination of these 'elementary' patterns. In our seizure detection setup we have chosen a modular approach, where patterns associated with myoclonic, tonic and clonic seizures are handled separately. Nevertheless there is a percentage of seizures that manifests in patterns that are a mix of the three types. Thus a part of the false positives from the separate modules will point to other motor seizure types. For automatic analysis this is not a problem, since these are events that are also clinically relevant. To separate these mixed forms (if this is of clinical interest) from the purely elementary movements is possible in a post-processing step using features like duration of event or amplitude. The false positives that were not of a mixed seizure type were movements that cannot be distinguished from tonic seizures based on information from one sensor alone. Experts had all five 3-D ACM-signals, video and EEG available. Using information from accelerometers placed on the other limbs might contribute to a higher SEN and PPV in these cases. In conclusion, the results show that our approach is useful for the automated detection of tonic seizures based on 3-D accelerometry and that it is a promising contribution in a complete multi-sensor seizure detection setup.
REFERENCES

[1] K. Amano, J. Takamatsu, A. Ogata, C. Miyazaki, H. Kaneyama, S. Katsuragi, M. Deshimaru, S. Sumiyoshi, and T. Miyakawa. Characteristics of epilepsy in severely mentally retarded individuals. Psychiatry and Clinical Neurosciences, 54:17-22, 2000.
[2] T.M.E. Nijsen, J.B.A.M. Arends, P.A.M. Griep, and P.J.M. Cluitmans. The potential value of 3-D accelerometry for detection of motor seizures in severe epilepsy. Epilepsy and Behavior, 7:74-84, 2005.
[3] H.O. Lüders and S.N. Noachtar. Epileptic Seizures, Pathophysiology and Clinical Semiology. Churchill Livingstone, New York, 2000.
[4] T.M.E. Nijsen, R.M. Aarts, J.B.A.M. Arends, and P.J.M. Cluitmans. Model for arm movements during myoclonic seizures. 29th Annual International Conference of the IEEE EMBS, pages 1582-1585, 2007.
[5] H.M. Hamer, H.O. Lüders, S. Knake, B. Fritsch, W.H. Oertel, and F. Rosenow. Electrophysiology of focal clonic seizures in humans: a study using subdural and depth electrodes. Brain, 126:547-555, 2003.
[6] T.M.E. Nijsen, A.J.E.M. Janssen, and R.M. Aarts. Analysis of a wavelet arising from a model for arm movements during epileptic seizures. ProRisc, 2007.
[7] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern Classification. Wiley-Interscience, New York, 2nd edition, 2001.
[8] T.M.E. Nijsen, P.J.M. Cluitmans, J.B.A.M. Arends, and P.A.M. Griep. Detection of subtle nocturnal motor activity from 3-D accelerometry recordings in epilepsy patients. IEEE Transactions on Biomedical Engineering, 54(11), 2007.
[9] B. Dawson and R.G. Trapp. Basic and Clinical Biostatistics. Lange Medical Books / McGraw-Hill, 2004.
[10] J. Parra, P.B. Augustijn, Y. Geerts, and W. van Emde Boas. Classification of epileptic seizures: a comparison of two systems. Epilepsia, 42(4):476-482, 2001.
IFMBE Proceedings Vol. 22
A Classification Attempt of COPD, ALI-ARDS and Normal Lungs of Ventilated Patients through Compliance and Resistance over Time Waveform Discrimination

A. Tzavaras¹,², B. Spyropoulos¹, E. Kokalis¹, A. Palaiologos¹, D. Georgopoulos³, G. Prinianakis³, M. Botsivaly¹ and P.R. Weller²

¹ Technological Educational Institute (TEI) of Athens, Medical Instrumentation Technology Department, Athens, Greece
² City University, Center for Health Informatics, London, United Kingdom
³ University General Hospital of Herakleion, Intensive Care Unit, Herakleion, Crete, Greece
Abstract — Ventilation management is the process of evaluating the adequacy of the supplied ventilation, based on patient needs, clinical personnel experience and expertise, and available protocols. Ventilation settings are adapted to patient pathology and lung mechanical properties. The aim of the present paper is to develop a simple method to rapidly classify patients, according to their lung mechanical properties, into three main categories, namely COPD, ALI-ARDS and normal lungs. Real patient flow and pressure ventilation data were recorded in two different ICUs. The data were classified, with the assistance of clinical personnel, into the three categories. A Matlab toolbox was employed for calculating, based on the recorded data, the dynamic changes in lung compliance (C) and resistance (R) during the ventilation cycle. The resulting waveforms of dynamic changes in C and R were analyzed making use of their visual presentation, their audio reproduction and their Fourier analysis, in order to identify the most appropriate approach for patient classification. Trials performed on the recorded data have shown that the visual presentation and audio reproduction of the acquired waveforms provide adequate information for classifying patients into one of the three lung-condition-related main categories.

Keywords — Ventilation management, Lung Compliance and Resistance, medical decision support tools.
I. INTRODUCTION

Mechanical ventilatory support is initiated when a patient's ability to maintain gas exchange has failed. ICU clinicians monitor and evaluate cardio-respiratory physiology parameters in order to evaluate the adequacy of mechanical ventilation. Since a patient's needs are continuously changing, clinicians have to adapt the ventilation strategy and drug administration on a regular basis. This ongoing process is described as ventilation-respiration management. Support of patients with respiratory failure is given by mechanical ventilators. The majority of mechanical ventilators provide the patient with a user-defined mixture of fresh gases, by applying positive pressure in the upper airways. Ventilation management strategy is determined by patient pathology. Lung pathology is characterized by abnormal lung mechanical properties. Therefore clinicians have to evaluate lung mechanics in the initiation phase of mechanical support.

II. BACKGROUND

Acute Lung Injury (ALI) and Acute Respiratory Distress Syndrome (ARDS) are clinical terms describing the condition of diffuse pulmonary inflammation [1]. ARDS was first described by Ashbaugh and co-workers in 1967 [2]. ALI is the less extreme manifestation of ARDS. The annual incidence of ALI-ARDS ranges from 8 to 70 cases per 100,000 people in developed countries [1], while mortality ranges from 30-40% for adults [3] and 30-75% for children [4]. ALI-ARDS is the disruption of the normal alveolar-capillary barrier [2]. Clinical manifestations are dyspnoea, severe hypoxemia due to mismatching of ventilation and perfusion, and lung stiffness manifested by decreased compliance and increased Work of Breathing (WOB). ALI-ARDS can be caused by direct or indirect injury to the lung [4]. Sepsis is the leading etiology of ARDS in ICUs. Patients with multiple risk factors commonly develop ARDS, and these factors, rather than the ARDS itself, are usually the cause of mortality. ALI-ARDS is usually treated with invasive mechanical ventilation and pharmacotherapeutic approaches. Ventilation strategy influences mortality. Strategies focus on Lung Volumes, fraction of Inspired Oxygen (FiO2), Positive End Expiratory Pressure (PEEP) and Ventilation Modes [1]. Adjuncts to traditional mechanical ventilation include prone positioning, recruitment maneuvers to prevent or recruit lung collapse, surfactant administration to reduce surface tension in the alveoli, high-frequency ventilation and non-invasive ventilation.

Chronic obstructive pulmonary disease (COPD) is "the airflow limitation due to narrowing and fibrosis of small airways and loss of airway alveolar attachment as a result of emphysema" [5].
Chronic airflow limitation is initiated by inflammation, airway hyperactivity, secretions and loss of the structural integrity of the lung parenchyma [6].
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 192–195, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
COPD affects 6% of the general population and is one of the top five causes of morbidity and mortality in the USA [7]. A large percentage of COPD patients are admitted to the ICU, and up to 74% of them receive mechanical ventilation support [8]. Ventilation is initiated to prevent hypoxia and to control acidosis and hypercapnia [9]. Patients with COPD are characterized by increased Work of Breathing (WOB) and ventilatory muscle dysfunction due to chronic airflow limitation from inflammation, airway hyperactivity, secretions and loss of the structural integrity of the lung parenchyma [6].

A different strategy is suggested according to patient pathology. Although protocols and guidelines have been developed, there are diverse methods for dealing with the same problem [10-12]. ARDS is approached mainly by two different strategies. The Open Lung approach aims to achieve a specific pressure by employing pressure-controlled ventilation [13]. A second approach, named the ARDS-net or Baby-Lung approach, focuses on limiting the Tidal Volume using volume-controlled ventilation [14].

Respiratory passageway Resistance (R), lung Compliance (C), Elasticity (E) and Alveolar Surface Tension are the factors that influence air flow and volume delivery to the lungs. Resistance to ventilation is due to the anatomical structure of the conductive airways, and to the tissue resistance of the lungs and adjacent structures. Resistance is defined as the change in pressure for a given flow. Lung Compliance is a measure of the change in lung volume that occurs with a change in intrapulmonary pressure. Respiratory system Static Compliance (CRS) is the usual measure of respiratory system compliance (eq. 1). Measurements of CRS require a passive patient, and thus controlled ventilation. CRS is used to adjust the ventilation strategy, either by changing the drug administration strategy (e.g. administration of bronchodilating drugs) or by changing the minute ventilation. Reduced CRS is often an indication of hyperinflation, suggesting lower volume delivery. PEEP values that maximize CRS allow for maximum oxygen transport with the lowest dead space. Respiratory system Resistance (RRS) is the sum of lung (RL) and chest wall (RCW) resistances. Resistance varies with respiration phase, lung volume and flow rate. Increased tidal volume expands airway diameter, thus decreasing resistance. At low flow rates resistance is linear, while at high flow rates turbulence and pressure friction losses increase, resulting in an exponential flow pattern. The inspiratory resistance RI is estimated from the airway pressures (eq. 2).

CRS = VT / (Pplat − PEEP)    (eq. 1)

RI = (PIP − Pplat) / FI      (eq. 2)

where VT: Tidal Volume; Pplat: Plateau Pressure; PIP: Peak Inspiratory Pressure; FI: Inspiratory Flow; PEEP: Positive End Expiratory Pressure.
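Equations 1 and 2 are direct ratios of bedside quantities; a minimal sketch (function and variable names are ours, units as in the text):

```python
# Sketch of eq. 1 and eq. 2.
# CRS = VT / (Pplat - PEEP)   [L/mbar]
# RI  = (PIP - Pplat) / FI    [mbar/(L/s)]

def static_compliance(vt: float, p_plat: float, peep: float) -> float:
    """Respiratory system static compliance (L/mbar) from tidal volume,
    plateau pressure and PEEP."""
    return vt / (p_plat - peep)

def inspiratory_resistance(pip: float, p_plat: float, flow_i: float) -> float:
    """Inspiratory resistance (mbar/(L/s)) from peak and plateau pressures
    and inspiratory flow."""
    return (pip - p_plat) / flow_i

# Example: VT = 0.5 L, Pplat = 20 mbar, PEEP = 5 mbar
print(round(static_compliance(0.5, 20.0, 5.0), 3))  # -> 0.033
```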
III. METHODS

We recorded seventy hours (70 h) of real-time flow and pressure data from nine (9) different patients. Recordings were performed at the University Hospital of Heraklio, Crete (PEPAGNI) and the Veterans' Hospital of Athens (NIMITS), with the use of commercial software [15]. Ethical approval was granted by the hospitals' ethics committees. The recorded data were classified, with the assistance of PEPAGNI ICU clinicians, into three lung categories, namely COPD, ALI-ARDS and Normal lungs. We developed a Matlab (Mathworks®) toolbox for retrieving the recorded data, and for analyzing and displaying waveforms (Fig. 1). The toolbox retrieves and displays pressure (P) and flow (F) over time waveforms. The volume (V) waveform was calculated as the integral of flow over time.
Fig. 1 Toolbox overview

We have calculated the dynamic changes in lung compliance and resistance according to equations 3 and 4. Recorded and calculated data were stored in spreadsheet format for further analysis.

C = ΔV / ΔP  (L/mbar)        (eq. 3)

R = ΔP / ΔF  (mbar/(L/s))    (eq. 4)
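Applied per sample, with the volume waveform obtained by integrating flow over time as described above, equations 3 and 4 might be sketched as follows (function names and the zero-denominator guard are our additions):

```python
# Sketch: dynamic compliance/resistance per eq. 3-4 from sampled pressure P
# (mbar) and flow F (L/s); volume is the running trapezoidal integral of flow.

def integrate_flow(flow, dt):
    """Volume waveform (L) as the cumulative trapezoidal integral of flow."""
    vol = [0.0]
    for i in range(1, len(flow)):
        vol.append(vol[-1] + 0.5 * (flow[i] + flow[i - 1]) * dt)
    return vol

def dynamic_c_r(pressure, flow, dt, eps=1e-9):
    """Per-sample C = dV/dP (L/mbar) and R = dP/dF (mbar/(L/s));
    None where the denominator is ~0."""
    vol = integrate_flow(flow, dt)
    c, r = [], []
    for i in range(1, len(pressure)):
        dp = pressure[i] - pressure[i - 1]
        df = flow[i] - flow[i - 1]
        dv = vol[i] - vol[i - 1]
        c.append(dv / dp if abs(dp) > eps else None)
        r.append(dp / df if abs(df) > eps else None)
    return c, r

# Example: constant flow against a linear pressure ramp gives constant C
c, r = dynamic_c_r([0.02 * i for i in range(5)], [0.5] * 5, dt=0.01)
```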
Calculation of the dynamic changes in lung compliance (eq. 3) should not be confused with dynamic lung compliance measurements, where zero flow is assumed during the measurement of pressure changes. In order to identify waveform characteristics that could be employed for a rapid coarse classification of a mechanically ventilated patient into one of the three lung-condition classes, namely COPD, ALI-ARDS and Normal, the following procedures were carried out.
First, a visual observation of the Compliance (C) and Resistance (R) waveforms over time was performed, and regularly repeated characteristic patterns were located. Second, a discrete Fast Fourier Transform (DFFT) analysis was applied to the Compliance and Resistance waveforms over time, and the frequency content of these waveforms was examined for all the available real-world patient sample recordings. Finally, from each pair of Compliance and Resistance waveforms over time, an audio signal was reproduced at a constant sampling rate. By hearing and comparing (subjective discrimination) the corresponding sounds from each patient's waveforms, an assignment to one of the above-mentioned three classes was made.

IV. RESULTS

The visual observation of the Compliance (C) and Resistance (R) waveforms over time led to the location of reproducible and regularly repeated characteristic patterns, as displayed in figures 2-4. The most striking features are the different Compliance waveform shapes and their peak values, which allow for discrimination between patients and the consequent classification into the COPD, ALI-ARDS and Normal coarse classes.
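The frequency-content examination in the second procedure can be sketched with a naive discrete Fourier transform in pure Python (O(N²); the DFFT analysis in the paper would use a fast FFT implementation instead):

```python
# Sketch: magnitude spectrum of a real-valued waveform via a naive DFT.
import cmath
import math

def dft_magnitude(signal):
    """Magnitude of the discrete Fourier transform of a real signal."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A pure sinusoid concentrates its energy in a single frequency bin:
n = 64
sig = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
spectrum = dft_magnitude(sig)
print(max(range(n // 2), key=lambda k: spectrum[k]))  # -> 5
```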
Fig. 2 COPD patient C & R waveforms
Fig. 4 Normal lungs patient C & R waveforms

In order to quantify the subjective results of the visual pattern recognition, we created three "training" sets of waveforms, corresponding to each of the three classes (COPD, ALI-ARDS and Normal), each containing about a dozen cases that had already been independently evaluated by a medical expert without taking the Compliance and Resistance waveforms into account. Four members of our group studied the grouped cases for one hour; then copies of the cases were mixed, and each member tried to discriminate and classify the cases presented in random order by another member of the group, who also kept a record of the successful "hits". The mean success rate of the group was 83%; however, it did not improve over repeated runs of the procedure, as shown in Table 1. We tried to improve this performance by applying discrete Fast Fourier Transform analysis to the Compliance and Resistance waveforms over time. However, the DFFT spectra within the same patient category display no resemblance, for all the available real-world patient sample recordings, as shown in figures 5-6. The audio-signal reproductions of the Compliance and Resistance waveforms over time, at a constant sampling rate, correlate fully with the visual patterns, and may constitute a complementary aid during the fast classification of ventilated patients. Finally, we are presently developing an automatic quantification of the evaluation procedure, by regarding the patterns as images [16] and employing an already developed neuro-fuzzy algorithm [17] for the training of the system.
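The audio reproduction at a constant sampling rate can be sketched with the standard library alone: render a (C or R) waveform as a normalized mono 16-bit WAV file. The 8 kHz rate, the normalization and the file name are our illustrative assumptions.

```python
# Sketch: write a real-valued waveform as a mono 16-bit WAV at a fixed rate.
import math
import struct
import wave

def waveform_to_wav(samples, path, rate=8000):
    """Normalize samples to full scale and write them as 16-bit PCM."""
    peak = max(abs(s) for s in samples) or 1.0
    with wave.open(path, "wb") as w:
        w.setnchannels(1)       # mono
        w.setsampwidth(2)       # 16-bit
        w.setframerate(rate)    # constant sampling rate
        frames = b"".join(struct.pack("<h", int(32767 * s / peak))
                          for s in samples)
        w.writeframes(frames)

# One second of a slow tone as a stand-in for a compliance waveform:
waveform_to_wav([math.sin(0.05 * t) for t in range(8000)], "c_waveform.wav")
```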
Table 1 COPD, ALI-ARDS and Normal discrimination Success Rate
Fig. 3 ALI-ARDS patient C & R waveforms
Discrimination Success Rate (Hits)

Run        Subject1   Subject2   Subject3   Subject4
Run1        8/10       7/10       9/10       8/10
Run2        7/10       9/10       8/10       8/10
Run3        8/10       8/10       9/10      10/10
Run4        9/10      10/10      10/10       8/10
Run5        9/10       8/10       7/10       7/10
Mean Rate  82.0%      84.0%      86.0%      82.0%
REFERENCES
Fig. 5 COPD Fourier analysis of C & R for 2 different patients (left patient 1, right patient 2).
Fig. 6 ALI-ARDS Fourier analysis of C & R for 2 different patients (left patient 1, right patient 2)
V. CONCLUSIONS

The developed method is not yet mature enough to provide high-certainty clinical classification support for COPD, ALI-ARDS and Normal lungs of ventilated patients through Compliance and Resistance over time waveform discrimination. There is still uncertainty, first, in the evaluation procedure and, second, in the number of the three patient groups. However, the first results indicate that the method possesses a promising discrimination potential for a fast, automatic and reliable coarse classification of ventilated patients.
ACKNOWLEDGMENTS We would like to express our appreciation for the valuable feedback and cooperation to the medical staff of the ICUs of the following Hospitals: University Hospital of Heraklio Crete (PEPAGNI), Konstantinoupoleio General Hospital of Athens, Veterans’ Hospital of Athens (NIMITS), Thriasion General Hospital of Attica, and General Hospital of Nikaia.
1. Bellingan G, Finney S.J (2006), "Acute Respiratory Distress Syndrome", in Encyclopedia of Respiratory Medicine, ed Laurent G.J & Shapiro S.D, Elsevier Ltd, pp 11-19.
2. Lechin A.E, Varon J (1994), "Adult Respiratory Distress Syndrome (ARDS): The Basics", The Journal of Emergency Medicine, vol 12, pp 63-68.
3. Zwischenberger J.B (2006), "Options for the Management of ARDS: Introduction", Thoracic & Cardiovascular Surgery, vol 8(1), pp 1.
4. Hammer J (2006), "ARDS - Long term follow up", Paediatric Respiratory Reviews, vol 75, pp 5192-5193.
5. Barnes P.J (2006), "Chronic Obstructive Pulmonary Disease", in Encyclopedia of Respiratory Medicine, ed Wedzicha J.A & Hurst J.R, Elsevier Ltd, pp 439-443.
6. Hess D.R, Kacmarek R.M (2002), "Essentials of Mechanical Ventilation", 2nd edition, McGraw-Hill, ISBN 0-07-135229-5.
7. Ambrosino N, Simonds A (2007), "The clinical management in extremely severe COPD", Respiratory Medicine, vol 101, pp 1613-1624.
8. Gursel G (2005), "Determinants of the Length of Mechanical Ventilation in Patients with COPD in the Intensive Care Unit", Respiration, vol 72, pp 61-67.
9. Plant P.K, Elliot M.W (2003), "Chronic Obstructive pulmonary disease: Management of ventilatory failure in COPD", Thorax, vol 58, pp 537-542.
10. Brochard L, Rauss A, Salvador B et al (1994), "Comparison of three methods of gradual withdrawal from mechanical support during weaning from mechanical ventilation", Am. J. Resp. Crit. Care Med., vol 150, pp 896-903.
11. Butler R, Keenan S.P, Inman K.J et al (1999), "Is there a preferred technique for weaning the difficult-to-wean patient? A systematic review of the literature", Crit. Care Med., vol 27, pp 2331-2336.
12. Horst H.M, Mouro D, Hall-Jenssens R.A, Pamukou N (1998), "Decrease in ventilation time with a standardized weaning protocol", Arch. Surg., vol 133, pp 483-489.
13. Amato M.B.P, Barbas C.S.V, Medeiros D.M et al (1998), "Effect of a protective ventilation strategy on mortality in acute respiratory distress syndrome", N. Engl. J. Med., vol 338(6), pp 347-354.
14. ARDS Network (2000), "Ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome", N. Engl. J. Med., vol 342, pp 1301-1308.
15. Nortis: http://www.nortis.net
16. Botsivaly M, Koutsourakis C, Spyropoulos B (2000), "Evaluation of a new technique for the detection of Ventricular Fibrillation and Ventricular Tachycardia", CD-ROM Proceedings of the World Congress on Medical Physics and Biomedical Engineering, TU-CXH-62, July 23-28, 2000, Chicago IL, USA.
17. Tzavaras A, Weller P.R, Spyropoulos B (2007), "A Neuro-Fuzzy Controller for the estimation of Tidal Volume and Respiration Frequency ventilator settings for COPD patients ventilated in control mode", 29th IEEE EMBS Conference, Lyon, France.

Corresponding Author: Professor Basile Spyropoulos, Technological Educational Institute of Athens, Medical Instrumentation Technology Department, Agiou Spyridonos & Dimitsanas, Egaleo, 12210 Athens, Greece. Email:
[email protected]
Modified Matching Pursuit algorithm for application in sleep screening

D. Sommermeyer¹, M. Schwaibold¹, B. Schöller¹, L. Grote² and J. Hedner²

¹ MCC Gesellschaft f. Diagnosesysteme in Medizin und Technik mbH & Co.KG, Karlsruhe, Germany
² Sleep Lab., Dept. of Pulmonary Medicine, University of Gothenburg, Sweden
Abstract — Sleep Apnea (SA) is considered to be a major, underdiagnosed public health problem, with a prevalence of 2% and 4% for middle-aged women and men, respectively. Therefore a reliable ambulatory screening test is required, which is easy to perform and does not necessarily demand profound knowledge of sleep medicine. In this paper a new Matching Pursuit based algorithm is presented that uses only information from one single pulse oximetry sensor (SpO2, pulse wave amplitude, pulse frequency) for detecting different sleep-disturbing events. Thus, sleep-diagnostic parameters are calculated which until now could only be determined by using multiple sensors. A signal decomposition algorithm based on a dictionary of time-frequency atoms (known as the "Matching Pursuit method") has been modified in order to analyze different patterns in the mentioned signals. The conventional Matching Pursuit procedure, using dictionaries of signal templates, is optimized for implementation in a standard embedded system. The algorithm was tested on 62 consecutive adult patients with suspected SA. All patients underwent standard overnight polygraphy (PG) with SOMNOcheck2 (WEINMANN, Germany), which is an established method for PG diagnosis. The correlation coefficient between the manually scored AHI and the automatic RDI, calculated by the new algorithm using only pulse oximetry channels, was r = 0.967. Bland-Altman analysis showed a mean difference of -0.6 between the two parameters. Using a cut-off value of RDI ≥ 15/h for SA classification, a sensitivity of 96.2% and a specificity of 91.7% were reached. This novel computer algorithm provides a simple and highly accurate tool for quantification of SA.

Keywords — Matching Pursuit, sleep screening, sleep apnea
I. INTRODUCTION

Sleep apnea (SA) is a common respiratory disorder which is characterized by repeated episodes of temporary absence or cessation of breathing during sleep. SA is often associated with morbid conditions such as heart failure, hypertension, cardiovascular disease and cerebrovascular disease. Therefore, there is growing awareness of SA as a potential risk factor for cardiovascular disease [1]. Furthermore, studies have shown that SA is often connected with increased traffic accident rates.

The gold-standard diagnostic test for sleep apnea is an overnight polysomnography (PSG) in a sleep laboratory, which is costly in terms of time and money [2], and its accessibility in some areas is limited. Although unattended PSG systems are available, the reliability of these systems varies; for unattended PSG devices, Gagnadoux et al. reported failure rates of 23% [3]. Therefore a reliable ambulatory screening test is required which is easy and practical to perform and does not necessarily demand profound knowledge of sleep medicine. Consequently a number of alternatives to the PSG system have been proposed for ambulatory use. Pulse oximetry is particularly attractive because of its widespread availability and ease of application [2]. A decrease of oxygen saturation detected by pulse oximetry is taken to indicate respiratory events. However, results from previous studies [4] show a wide range of sensitivity, from 40% to 100%, when comparing PSG and pulse oximetry. These results can be explained by the influence of other parameters on oxygen saturation, such as sleeping position, sleep state, breathing rate etc. [5]. It is hypothesised that combining SpO2 information with the results of pulse wave analysis may yield good diagnostic performance for the detection of SA if the algorithm is fully optimised.

The presented work is a novel online computer algorithm for photoplethysmographic signals that can also take information from an optional nasal flow signal into account. The algorithm uses a modified Matching Pursuit algorithm to analyze the SpO2 signal and pulse wave parameters. In this way sympathetic activations are detected, and the diagnostic value of slight desaturations, which may be caused by mild respiratory events like hypopneas or flattening, is evaluated. Moreover, intrathoracic pressure changes, which cause a frequency component in the distally measured pulse wave signal, are analyzed in order to distinguish apneas into obstructive and central events.

II. MATERIALS AND METHODS

A. Development of the algorithm

The fundamental idea of the algorithm is to combine information from several pulse oximetry signals derived from one single sensor at the finger of a patient. It is possible to take further information into account, e.g. from a flow
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 196–199, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
signal if available. In the calculation process, temporal relationships are considered as well as morphological parameters of the signals used. The first step of the presented algorithm is the calculation of the parameters pulse wave amplitude (PWA) and pulse rate (PR), which are taken from the unfiltered pulse wave signal. All three signals – SpO2, PWA and PR – are analyzed using a modified, online Matching Pursuit [6] procedure. Additionally, a filtered PWA signal is analyzed in order to differentiate respiratory events into obstructive and central. The original Matching Pursuit algorithm decomposes any signal into linear expansions of waveforms that are selected from a dictionary of functions. These waveforms are selected so as to best match the signal structures. Finally, the algorithm combines the patterns detected by the MP algorithm in order to conclude on sleep-disturbing events like respiratory events and autonomic arousals.
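The classic Matching Pursuit decomposition just described can be sketched as a greedy loop: pick the dictionary atom with the largest inner product with the residual, record its coefficient, and subtract its contribution. The toy dictionary and names below are ours; the paper's variant replaces the dictionary with a parametric three-part atom fitted by least squares.

```python
# Minimal sketch of classic Matching Pursuit over a fixed dictionary
# of unit-norm atoms (each atom a list of samples).

def matching_pursuit(signal, atoms, iterations=3):
    """Greedily decompose signal over atoms; returns ([(atom_index, coeff)],
    residual)."""
    residual = list(signal)
    decomposition = []
    for _ in range(iterations):
        # inner product of the residual with every atom
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        coeff = scores[best]
        decomposition.append((best, coeff))
        # subtract the selected atom's contribution
        residual = [r - coeff * a for r, a in zip(residual, atoms[best])]
    return decomposition, residual

# A signal equal to one atom is recovered in a single iteration:
atoms = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
decomposition, residual = matching_pursuit([0.0, 2.0, 0.0, 0.0], atoms,
                                           iterations=1)
print(decomposition)  # -> [(1, 2.0)]
```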
B. Preprocessing

Initially the parameters PWA and PR are calculated from the photoplethysmographic pulse wave, which is sampled at 100 data points per second. A 2nd-order Butterworth low-pass filter with a cut-off frequency of 10 Hz is used to remove noise from the measured pulse wave signal. Aiming at an efficient algorithm that can be used in an embedded environment, the filter coefficients are scaled to fixed-point arithmetic. The first derivative of the signal is calculated and used to detect the systolic and diastolic phase of each pulse wave, followed by simple peak detection. Thereby the PWA and PR signals are derived and saved at a sampling rate of 5 Hz.

Fig. 1 Calculation of pulse wave parameters pulse wave amplitude (PWA) and pulse rate (PR)

C. Modified Matching Pursuit algorithm

The next step is a signal decomposition by means of a modified and optimised Matching Pursuit (MP) procedure on all three signals, SpO2, PWA and PR. Instead of a complete dictionary of possible signal patterns ("atoms"), which would require a large amount of memory, a derived function that consists of three sub-functions is defined (Fig. 2). The function can be varied using five parameters A-E to adjust it to signal patterns in the mentioned signals. The three sub-functions, together with the associated parameters, are shown in Fig. 2.

Fig. 2 Used atom, derived from three sub-functions

Adapting the described atom at each sample of the signals, which would correspond to the classic MP algorithm [6], is much too computationally intensive. Hence, promising positions in the signals are first identified by searching for signal attenuations exceeding a predefined threshold. To evaluate the relevance of a signal pattern, a degree of quality is defined, based on the least squared error (LSE) between the original signal attenuation and the first sub-function. During the optimization procedure the parameters of the first sub-function are varied, and the parameter set that shows the best quality index is saved. If the LSE is below a certain threshold, the complete atom is adjusted; otherwise the pattern is ignored. In case the whole atom is adjusted, the next step is to determine the approximate duration of the pattern by means of derivative and amplitude criteria. The atom is fitted using the same LSE calculation that has already been used for the first part of the atom. In doing so, a quality measure for the fit of the complete atom is calculated and used to decide whether the signal pattern reflects an attenuation that is of interest for the detection of sleep events.

D. Effort calculation

As already described by Murray et al. in 1996 [7], the intrathoracic pressure changes during spontaneous breathing can be seen in the peripheral pulse trace. Accordingly, the activity of the component of the PWA signal in the typical breathing frequency range (~0.15-0.4 Hz) is analyzed in order to gain information about the obstructive or central character of respiratory events. For this purpose, transient attenuations in the PWA signal are removed by high-pass filtering. Afterwards, the strength of
the frequency component, which reflects breathing effort, is determined by calculating an envelope “effort” signal.
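The effort-signal idea can be sketched in pure Python: remove slow transients from the PWA signal (here a crude high-pass via moving-average subtraction, standing in for the paper's high-pass filter), take a rectified envelope of what remains, and compare the mean effort during an event against the window before it. The window lengths and function names are illustrative assumptions.

```python
# Sketch: effort envelope from the PWA signal and the "effort ratio".

def moving_average(x, win=25):
    """Trailing moving average, same length as the input."""
    return [sum(x[max(0, i - win):i + 1]) / len(x[max(0, i - win):i + 1])
            for i in range(len(x))]

def effort_signal(pwa, win=25):
    """Crude high-pass (subtract moving average), then rectified envelope."""
    baseline = moving_average(pwa, win)
    high_passed = [p - b for p, b in zip(pwa, baseline)]
    return moving_average([abs(v) for v in high_passed], win)

def effort_ratio(effort, event_start, event_end, lookback):
    """Mean effort during the event / mean effort in the window before it."""
    during = effort[event_start:event_end]
    before = effort[max(0, event_start - lookback):event_start]
    return (sum(during) / len(during)) / (sum(before) / len(before))
```

With this convention, a ratio clearly below 1.0 indicates reduced breathing effort during the event, i.e. a central rather than obstructive character, matching the decision rule described later in the text.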
Patient characteristics of the study subjects are listed in table 1.
E. Determination of physiological events out of combination of detected MP patterns
Table 1 Patient characteristics Subjects n
The algorithm uses two options for the detection of respiratory events: -
SpO2 drop of 4% with min. length of four seconds SpO2 drop of 2% or 3% and the simultanous occurrence of an autonomic arousal (definition see below)
In order to detect the existence or absence of respiratory effort, the effort signal derived from the PWA signal is analyzed during the suspected phase of the respiratory event and is regarded in relation to the mean amplitude 15 seconds before the event. The final decision is done regarding the “effort ratio” of the current event:
effort _ ratio =
mean _ effortDuringEvent mean _ effortBeforeEvent
Since the intrathoracic pressure changes often increase during obstructive events, effort ratios > 1.0 are possible. For the detection of a central event an effort ratio clearly smaller than 1.0 is required. The term "autonomic arousal" is used to describe transient reactions of the autonomic regulation. These events are not necessarily associated with an EEG arousal. There is considerable interest in using convenient, noninvasive markers for sympathetic activations in certain situations, e.g. during sleep-disturbing events like apneas/hypopneas [8]. In this work "autonomic arousals" are defined as pulse rate increases and/or pulse wave amplitude (PWA) decreases. There are three options for detecting an autonomic arousal:
1. obvious increase of pulse rate, or
2. obvious drop of pulse wave amplitude, or
3. slight increase of pulse rate AND slight decrease in PWA at the same time.

F. Subjects

The study group consisted of 62 consecutive adult patients with suspected SA who were referred to the Sleep Medicine Unit of Sahlgrenska University Hospital, Gothenburg.

Patients         62
Males            40
Age (yrs)        52.4 (±14.1)
BMI (kg/m2)      28.0 (±4.8)
AHI (events/h)   22.2 (±22.0)
ODI (events/h)   18.3 (±22.1)

All patients underwent a standard overnight polygraphy (PG) with a SOMNOcheck2 (WEINMANN, Germany), which is an established method for PG diagnosis. The PG data were manually scored by registered sleep specialists. Apneas were scored using standard criteria established by the American Academy of Sleep Medicine [11]. The sleep data were considered acceptable if the analysis period (AP) was more than 3 hours.

III. RESULTS

89 patients completed the PG study in the home environment. Due to technical problems or lost sensors, 27 subjects had no analysable data (evaluable flow and photoplethysmographic signals) for more than 3 hours.

A. Manually scored AHI compared with RDI

Since it cannot be determined whether an SpO2 drop was caused by an apnea or a hypopnea in the absence of a flow signal, a respiratory disturbance index (RDI) is calculated by means of the pulse oximetry signals alone. The correlation coefficient between the manually scored AHI and the automatic RDI was r = 0.967. Bland-Altman analysis showed a mean difference of minus 0.6 between the two parameters. The distribution of the differences can be seen in the Bland-Altman plot in Figure 3.
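The three options for detecting an autonomic arousal described in this paper can be sketched as follows. The thresholds (rate rise in bpm, relative PWA drop) and the baseline window are hypothetical placeholders; the paper does not state its numeric criteria:

```python
import numpy as np

def detect_autonomic_arousal(pulse_rate, pwa, baseline_s=10,
                             pr_big=10.0, pr_small=5.0,
                             pwa_big=0.30, pwa_small=0.15):
    """Flag autonomic arousals in per-second pulse rate (bpm) and pulse
    wave amplitude (PWA, arbitrary units) series.

    All thresholds are illustrative placeholders, not the paper's values:
      rule 1: pulse rate rises by more than pr_big bpm above baseline
      rule 2: PWA drops by more than pwa_big (relative) below baseline
      rule 3: a smaller rate rise AND a smaller PWA drop coincide
    """
    pr = np.asarray(pulse_rate, dtype=float)
    pwa = np.asarray(pwa, dtype=float)
    pr_base = np.median(pr[:baseline_s])
    pwa_base = np.median(pwa[:baseline_s])

    pr_rise = pr - pr_base                  # absolute rate increase
    pwa_drop = (pwa_base - pwa) / pwa_base  # relative amplitude decrease

    rule1 = pr_rise > pr_big
    rule2 = pwa_drop > pwa_big
    rule3 = (pr_rise > pr_small) & (pwa_drop > pwa_small)
    return rule1 | rule2 | rule3
```

Rule 3 is the interesting case: two sub-threshold changes that coincide are counted as one arousal, which is exactly how the algorithm rescues events that neither signal alone would flag.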
Fig. 3 Bland-Altman plot: comparison of manually scored AHI and automatically calculated RDI
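The Bland-Altman comparison shown in Fig. 3 (mean difference between the manually scored AHI and the automatic RDI, with limits of agreement) can be computed as in this short sketch; the example values in the test are illustrative, not study data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for two paired measurements (e.g. a
    manually scored AHI and an automatic RDI): per-pair means, the
    differences, the mean difference (bias) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean = (a + b) / 2.0          # x-axis of the Bland-Altman plot
    diff = a - b                  # y-axis of the Bland-Altman plot
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return mean, diff, bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Plotting `diff` against `mean` with horizontal lines at the bias and the two limits of agreement reproduces the layout of Fig. 3.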
IFMBE Proceedings Vol. 22
Modified Matching Pursuit algorithm for application in sleep screening
B. Diagnostic ability of RDI

Following the ICSD-2 guidelines [9], the existence of a sleep apnea syndrome was determined by a cut-off value of 15/h in order to validate the diagnostic sensitivity and specificity of the automatically calculated RDI. With 96.2% sensitivity and 91.7% specificity, both values were found to be > 90%.

C. Character of the disease

The Pearson correlation coefficient of the obstructive RDI and the manually scored obstructive AHI was 0.95; the correlation coefficient of the central RDI (cRDI) and the manually scored central AHI (cAHI) was 0.86. Mean errors between manual and automatic indices were minus 0.2/h and minus 0.1/h. The predominant character of the disease was detected correctly in all 62 cases.

IV. DISCUSSION

The presented algorithm combines information from several signals, all originating from one single pulse oximetry sensor. A resource-saving and precise online analysis, based on a Matching Pursuit procedure, delivers exact time and frequency information of the detected patterns. Based on several characteristics and their chronological relations, these patterns are combined into physiological, sleep-disturbing events such as respiratory events and autonomic arousals. The achieved results range clearly within the expected spread of manual AHI inter-scorer variability [10]. Especially in the case of slight respiratory events (hypopneas, flattening), which often cause only small SpO2 drops, known algorithms as well as manual scorers show the largest variability in their results [11]. The presented approach evaluates the diagnostic value of small SpO2 drops (2% or 3%) by verifying the relevance of the pattern through the existence of concomitant autonomic arousals. Such verification of the sleep-disrupting character of unclear events is a unique and important feature for screening algorithms. Thus automated scoring provides an opportunity to eliminate the problem of inter-scorer variability and regional differences in AHI criteria.

As such an algorithm can be used in simple screening devices at the patient's home, measuring multiple consecutive nights becomes possible. This gives the physician information about night-to-night variability as well as access to additional information not obtainable with a one-night study, e.g. by altering night-to-night variables such as alcohol intake or sleeping position.

V. CONCLUSIONS

We employed a novel algorithm which performs a sleep screening by means of pulse oximetry data alone. Compared with previous reports, the developed algorithm offers excellent diagnostic sensitivity (>95%) and specificity (>90%). Beyond that, it provides information about the obstructive or central character of the disease. The novel algorithm works online and is optimised in terms of memory consumption and processing power requirements, so that it could be implemented and tested in a standard embedded system. The presented technique allows a very simple, convenient and yet accurate sleep screening. In the future, multiple extensions and optimisation steps are possible: to determine the exact phase of respiratory events, information from a breathing flow sensor can be integrated, and the differentiation into obstructive and central events can be done by pulse wave information. Moreover, the relevance of unclear flow events can be evaluated using autonomic arousals.
ACKNOWLEDGMENT

This work was supported by a grant from the German Federal Ministry of Education and Research (BMBF).
REFERENCES

1. Shahar E, Whitney CW, Redline S et al. (2001) Sleep disordered breathing and cardiovascular disease: cross-sectional results of the Sleep Heart Health Study. Am J Respir Crit Care Med 163:19-25
2. Vázquez JC, Tsai WH, Flemons WW, Masuda A, Brant R, Hajduk E, Whitelaw WA, Remmers JE (2000) Automated analysis of digital oximetry in the diagnosis of obstructive sleep apnoea. Thorax 55:302-307
3. Gagnadoux F et al. (2002) Home unattended vs hospital telemonitored polysomnography in suspected obstructive sleep apnea syndrome: a randomized crossover trial. Chest 121:753-758
4. Flemons W et al. (2003) Home diagnosis of sleep apnea: a systematic review of the literature. Chest 124:1543-1579
5. Wang Y, Teschler T, Weinreich G, Hess S, Wessendorf TE, Teschler H (2003) Validierung von microMESAM als Screeningsystem für schlafbezogene Atmungsstörungen. Pneumologie 57:734-740
6. Mallat S, Zhang Z (1993) Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing 41:3397-3415
7. Murray et al. (1996) The peripheral pulse wave: information overlooked. Journal of Clinical Monitoring 12:365-377
8. Haba-Rubio J et al. (2005) Obstructive sleep apnea syndrome: effect of respiratory events and arousal on pulse wave amplitude measured by photoplethysmography in NREM sleep. Sleep Breath 9:73-81
9. American Academy of Sleep Medicine (2005) International classification of sleep disorders, 2nd ed.: diagnostic and coding manual. American Academy of Sleep Medicine, Westchester, IL
10. Collop N (2002) Scoring variability between polysomnography technologists in different sleep laboratories. Sleep Med 3:43-47
11. Redline S et al. for the Sleep Heart Health Research Group (2000) Effects of varying approaches for identifying respiratory disturbances on sleep apnea assessment. Am J Respir Crit Care Med 161:369-374
Wireless capsule endoscopic frame classification scheme based on higher order statistics of multi-scale texture descriptors

D. Barbosa(1), J. Ramos(2) and C. Lima(1)

(1) Industrial Electronics Department, University of Minho, Braga, Portugal
(2) Gastroenterology Department, Hospital dos Capuchos, Lisboa, Portugal
Abstract — The gastrointestinal (GI) tract is a long tube, prone to all kinds of lesions, and traditional endoscopic methods do not reach it in its entirety. Wireless capsule endoscopy is a diagnostic procedure that allows visualization of the whole GI tract: the capsule acquires video frames at a rate of two frames per second while it travels through the GI tract, propelled by peristalsis. These frames carry rich information about the condition of the stomach and intestine mucosa, expressed by the color and texture of the images, and these vital characteristics can be extracted by color texture analysis. Since texture information is present as middle and high frequency content in the original image, two new images are synthesized from the discrete wavelet coefficients at the lowest and middle scales of a two-level Discrete Wavelet Transform of the original frame. These synthesized images contain essential texture information at different scales, which can be extracted from statistical descriptors of the cooccurrence matrices, second-order representations of the synthesized images that encode color and spatial relationships among their pixels. Since human perception of texture is a complex multi-scale and multicolor process, the analysis of spatial color variation relationships is proposed as the source of the classification features. The multicolor texture information is modeled by the third order moments of the texture descriptors sampled at the different color channels; the HSV color space is closely related to human perceptual characteristics and was therefore used in this paper. The multi-scale texture information is modeled by the covariance of the texture descriptors within the same color channel of the two synthesized images, which contain texture information at different scales. The features are used in a classification scheme based on a multilayer perceptron neural network. The proposed method has been applied to real data taken from several capsule endoscopic exams and reaches 94.6% sensitivity and 93.7% specificity. These results support the feasibility of the proposed algorithm.

Keywords — Discrete Wavelet Transform, Texture Analysis, Capsule Endoscopy, Computer Aided Diagnosis
I. INTRODUCTION

Until the introduction of wireless capsule endoscopy, it was not possible to see the gastrointestinal (GI) tract in its entire length, since conventional endoscopy is limited at the duodenum in upper GI tract endoscopy and at the terminal jejunum in lower GI tract endoscopy. Therefore, the vast majority of the small bowel, which has a mean length of six meters, is not seen by these conventional techniques. Consequently, prior to the invention of CE, the small intestine was conventional endoscopy's last frontier, because it could not be internally visualized directly, or in its entirety, by any method [1]. Furthermore, the conventional endoscopic procedures are uncomfortable for the patient and require advanced technical skills from the operating physician in order to correctly navigate the flexible endoscope. Note also that there is a risk of injuring the GI tract walls with the tip of the endoscope [2]. The introduction of wireless Capsule Endoscopy (CE) into clinical practice provided a simple and effective diagnostic tool to observe GI mucosa abnormalities until then not easily seen by traditional imaging techniques [1]. The endoscopic capsule is a pill-like device, measuring only 11 mm × 26 mm, and includes a miniaturized camera, a light source and a wireless circuit for the acquisition and transmission of signals [3]. The acquired video frames are wirelessly transmitted to a receiver, which stores them on a hard disk drive. The camera captures images at a rate of two frames per second for about eight hours, resulting in more than 50,000 video frames per exam [4]. The average time taken by a physician to analyze a capsule endoscopic exam is between 40 and 60 minutes [5]. During the exam analysis, complete concentration is required of the doctor, since an abnormal frame can lie in the middle of a segment of normal frames. The analysis of a capsule endoscopic video is thus a time-consuming task, prone to errors, which calls for computational help.
Note also that having an expert physician analyze a capsule endoscopic exam for a long period is very costly, and therefore there is an important economic opportunity in developing a computer-assisted diagnosis tool for this task. The detection of abnormalities based on texture alterations of the intestine mucosa has been previously reported. In the work of Maroulis et al. [6][7], different classification schemes, based on textural features extracted from the Discrete Wavelet domain, were proposed to classify colonoscopy videos. Kodogiannis et al. [2] proposed two different schemes to extract features from texture spectra in the chromatic and achromatic domains, namely a structural approach, based on the theory of formal languages, and a statistical approach, where statistical texture descriptors are calculated from the histograms of the RGB and HSV color spaces of CE video frames. In the authors' previous work [8][9], different algorithms were proposed to classify capsule endoscopic video frames, based on textural descriptors taken from cooccurrence matrices, using the discrete wavelet transform to select the bands with the most significant texture information for classification purposes. In the present work an algorithm based on higher order statistics of multi-scale texture descriptors is proposed, in order to model the complex process of human perception of texture. The texture features are the input of a multilayer perceptron neural network, a well-known classifier in pattern recognition problems.

J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 200–203, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
II. PROPOSED ALGORITHM

It is known that human texture identification is a complex multi-scale process. In order to model this complex pattern recognition task, a method is proposed based on higher order statistics of textural descriptors taken from the cooccurrence matrices of images synthesized with the most relevant texture of an original capsule endoscopic video frame. The classification features extracted from each frame also include the covariance between texture descriptors taken from cooccurrence matrices of images synthesized from different scales of the Discrete Wavelet Transform (DWT) of the original frame, analyzing the variation of the information present at different scales in order to model the human multi-scale process of texture identification. Texture information is encoded as medium and high frequency content, since the low-frequency components of an image do not carry major texture information; therefore the lowest scales of the DWT of an image present the most relevant texture information. To reduce the final number of features per frame, new images are synthesized from the selected wavelet scales, each containing only the vital texture information present in that scale. In the present work, in order to model the multi-scale aspects of human texture identification, two new images are synthesized for each video frame, one containing the lowest-scale DWT coefficient information (DWT bands 1, 2 and 3) and the other containing the second-scale DWT coefficient information (DWT bands 4, 5 and 6). These two synthesized images therefore contain texture information at different detail levels, which allows the implementation of a multi-scale approach to this classification problem. The proposed algorithm can be decomposed into the following steps:

A. Wavelet coefficients selection and new image synthesis

Each capsule endoscopic video frame can be decomposed into three color channels:

I^i, i = 1, 2, 3   (1)

where i stands for the color channel. These three color channels are originally in the RGB color space, but are transformed to the HSV color space. Then a two-level discrete wavelet transform is applied to each color channel I^i. This transformation results in a new representation W^i of the original image, composed of a low-resolution image and the detail images. The wavelet bases used were the Daubechies bases. The new representation is defined as:

W^i = {L_n^i, D_l^i}, i = 1, 2, 3; l = 1, ..., 6   (2)

where l stands for the wavelet band and n is the decomposition level. Since we want to evaluate the relevant patterns at different scales, it is necessary to select the desired wavelet coefficients at the different DWT scales. In the present work the lowest DWT scale (DWT bands 1, 2 and 3) and the second DWT scale (DWT bands 4, 5 and 6) were selected. Therefore, let S_x^i be matrices that have the selected wavelet coefficients at the corresponding positions and zeros in all other positions:

S_x^i = {D_l^i}, i = 1, 2, 3; l = 1, 2, 3 ∨ 4, 5, 6; x = 1 ∨ 2   (3)

where l stands for the wavelet band, x for the selected DWT scale and i for the color channel. Note that l depends on the selected wavelet scale. The new images are then synthesized from the selected wavelet bands through the inverse wavelet transform. Let N_x^i be the reconstructed image for each color channel:

N_x^i = IDWT(S_x^i), i = 1, 2, 3; x = 1 ∨ 2   (4)

where i stands for the color channel, x for the selected DWT scale and IDWT(·) is the inverse wavelet transform. For each capsule endoscopic frame two images are calculated, reconstructed from the selected DWT scales, containing the essential textural patterns of the original image at different detail levels.
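The band-selection and image-synthesis step of eqs. (3)-(4) can be illustrated with a minimal sketch. For brevity it uses a one-level 2D Haar transform implemented directly in NumPy rather than the paper's two-level Daubechies DWT; the principle (zero the non-selected coefficients, then apply the inverse transform) is the same:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar DWT: returns the approximation band LL and the
    three detail bands (LH, HL, HH). img must have even dimensions."""
    a = (img[0::2] + img[1::2]) / 2.0   # rows: pairwise average
    d = (img[0::2] - img[1::2]) / 2.0   # rows: pairwise difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def synthesize_texture_image(channel):
    """Keep only the detail (texture) bands and zero the approximation,
    mirroring eqs. (3)-(4): S holds the selected coefficients, N = IDWT(S)."""
    LL, LH, HL, HH = haar2d(channel)
    return ihaar2d(np.zeros_like(LL), LH, HL, HH)
```

A smooth (texture-free) channel therefore synthesizes to an all-zero image, while edges and fine patterns survive the round trip, which is exactly the property the paper exploits.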
B. Cooccurrence matrix and texture descriptors

Although diverse methods exist to extract the texture information within an image, in the present work an approach based on cooccurrence matrices and statistical texture descriptors, originally proposed by Haralick [10], was chosen. The cooccurrence matrix encodes the spatial dependence of the synthesized image levels (for each color channel) based on the estimation of the second-order joint conditional probability density function f(i, j, d, α), which is computed by counting all pairs of pixels at distance d having wavelet coefficients of color levels i and j at a given direction α. These matrices capture spatial interrelations among the intensities within the reconstructed image levels and represent the spatial distribution dependence of the gray levels within an image, determining how often different combinations of pixel brightness values occur in the image. From these matrices, statistical descriptors can be calculated in order to extract texture information from the synthesized images. In the proposed algorithm only 4 statistical measures are considered among the 14 originally proposed by Haralick [10], namely angular second moment (F1), correlation (F2), inverse difference moment (F3), and entropy (F4). There are two synthesized images, and for each of them four cooccurrence matrices are calculated per color channel, which results in twelve cooccurrence matrices per image. For each cooccurrence matrix, four statistical measures are calculated, resulting in a total of 96 texture descriptors, 48 for each image:

F_m(C_α(N_x^i)), i = 1, 2, 3; x = 1 ∨ 2; α = 0, π/4, π/2, 3π/4; m = 1, 2, 3, 4   (5)

where i stands for the color channel, x for the DWT scale, m for the statistical measure and α for the direction in the cooccurrence computation.

C. Higher-order statistics of the texture descriptors and multi-scale texture covariance

The mean and variance of each F_m are calculated over α, for every color channel, resulting in a set of 24 components per synthesized image. In the authors' previous work, it is stated that higher order statistics can be used to model deviations from the Gaussian distribution, which are more accentuated in pathological cases for almost all the texture descriptors; this shift from the normal Gaussian distribution does not preferentially affect any particular descriptor. In the present work, and to reduce the size of the final feature set, only the third centered moment was used to model the non-Gaussianity, calculated for each F_m over α as:

γ_{x,m,i}^3 = (1/N_α) Σ_α [ F_m(C_α(N_x^i)) − E{F_m(C_α(N_x^i))} ]^3   (6)

where i stands for the color channel, x for the DWT scale, m for the statistical measure, α for the direction in the cooccurrence computation and N_α is the number of directions. The covariance between the same texture descriptors at different DWT scales is also calculated, in order to model the detail effect in the patterns within the original image, since the two synthesized images contain information at different detail levels. Let MSC be the multi-scale covariance of a texture descriptor in the two synthesized images:

MSC_{i,m} = Σ_α [ F_m(C_α(N_1^i)) − E{F_m(C_α(N_1^i))} ] · [ F_m(C_α(N_2^i)) − E{F_m(C_α(N_2^i))} ]   (7)

where i stands for the color channel and m for the statistical measure. Therefore, each capsule endoscopic video frame will be characterized by 84 features, which will be the input of the MLP network. The choice of a simple classification scheme was made so that the results are more representative of the effectiveness of the proposed algorithm than of the classifier itself.
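The descriptor pipeline of sections B and C can be sketched as follows. The sketch computes a normalized cooccurrence matrix at the four directions, three of the four Haralick descriptors (correlation is omitted for brevity), the third centered moment over directions in the spirit of eq. (6), and the multi-scale covariance of eq. (7); the distance, quantization level and input handling are illustrative choices:

```python
import numpy as np

# Pixel offsets for the four directions used in the paper: 0, 45, 90, 135 deg.
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def cooccurrence(img, angle, levels=32, d=1):
    """Normalized gray-level cooccurrence matrix C_alpha at distance d.
    img must already be quantized to integers in [0, levels)."""
    dr, dc = OFFSETS[angle]
    dr, dc = dr * d, dc * d
    C = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                C[img[r, c], img[r2, c2]] += 1
    return C / C.sum()

def descriptors(C):
    """Angular second moment (F1), inverse difference moment (F3) and
    entropy (F4) of a cooccurrence matrix; correlation (F2) is omitted."""
    i, j = np.indices(C.shape)
    asm = np.sum(C ** 2)
    idm = np.sum(C / (1.0 + (i - j) ** 2))
    ent = -np.sum(C[C > 0] * np.log(C[C > 0]))
    return np.array([asm, idm, ent])

def third_moment_over_angles(img, levels=32):
    """Eq. (6)-style statistic: third centered moment of each descriptor
    across the four directions."""
    F = np.array([descriptors(cooccurrence(img, a, levels))
                  for a in OFFSETS])            # shape (4 angles, 3 descriptors)
    return np.mean((F - F.mean(axis=0)) ** 3, axis=0)

def msc(Fa, Fb):
    """Eq. (7): covariance over angles of the same descriptors computed on
    the two synthesized images (arrays of shape (n_angles, n_descriptors))."""
    return np.sum((Fa - Fa.mean(axis=0)) * (Fb - Fb.mean(axis=0)), axis=0)
```

On a perfectly flat image every pixel pair has the same level, so the cooccurrence matrix collapses to a single cell: ASM and IDM are 1, entropy is 0, and the third moment over directions vanishes, which matches the intuition that a texture-free image carries no directional variation.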
III. IMPLEMENTATION AND RESULTS

The experimental dataset for the evaluation of the proposed method was constructed with frames from capsule endoscopic video segments of different patients' exams, taken at the Hospital dos Capuchos in Lisbon by Doctor Jaime Ramos. The training set was constructed with images from normal segments of capsule endoscopic videos, some of them taken from exams with pathological cases. The final dataset consisted of 500 normal frames and 150 abnormal frames, each group equally divided into two sets for the training and testing of the MLP network. Examples of the dataset frames can be observed in Figure 1.

Fig. 1 Examples of: a) normal intestine frame; b) abnormal intestine frame

From previous work [8][9], it was concluded that reducing the gradation levels of each color channel from 256 to 32 does not compromise the texture analysis process and improves the processing time per frame, which is about 2 seconds in MATLAB on a 3.2 GHz Pentium Dual Core processor with 1 GB of RAM. Note also that this reduction of gradation levels must be followed by a proper dispersion of the pixel values over the whole available range, in order to minimize the loss of classification performance.

Instead of measuring the rate of successfully recognized patterns, more reliable measures of classification performance are the sensitivity (true positive rate) and the specificity (100 − false positive rate), calculated as:

Sensitivity = d / (c + d) · 100 (%)   (8)

Specificity = (100 − b / (a + b) · 100) (%)   (9)

where a are the true negative patterns, b the false positive patterns, c the false negative patterns and d the true positive patterns.

To test the performance of the proposed algorithm, the classification improvement was evaluated when the third centered moment and the multi-scale covariance were added to the features extracted for each frame. For each video frame, feature set 1 was composed of the mean and variance of each F_m calculated over α; feature set 2 of the elements of feature set 1 plus the third centered moment, calculated through (6); feature set 3 of the elements of feature set 1 plus the multi-scale covariance parameters, calculated through (7); and feature set 4 of all 84 features.

Table 1 Classification performance of the proposed algorithm

Feature set       1          2          3          4
Specificity (%)   89.2±1.4   91.4±2.6   92.9±1.7   93.7±2.0
Sensitivity (%)   90.1±1.8   93.4±3.2   91.9±1.3   94.6±1.6

IV. DISCUSSION AND FUTURE WORK

From the presented results, it is clear that the proposed algorithm has the potential to be used in an automatic classification tool to reduce the time spent by the physician in the analysis of a capsule endoscopy exam, namely as a selection process that shows the physician only the most suspect frames. However, to ensure a robust application, the method has to be tested with a larger dataset, so future work will include increasing the available dataset. Different classification schemes will also be evaluated, to optimize the classification performance of the process. Dimensionality studies will be carried out on the proposed feature set, and different feature sets will also be considered. The main goal of the present research is the development of an automatic abnormality detection system for capsule endoscopy videos, covering the most common CE-detectable diseases.

REFERENCES

1. Herrerías J, Mascarenhas M (2007) Atlas of Capsule Endoscopy. Sulime Diseño de Soluciones, Sevilla
2. Kodogiannis V, Boulougoura M, Wadge E, Lygouras J (2007) The usage of soft-computing methodologies in interpreting capsule endoscopy. Engineering Applications of Artificial Intelligence 20:539-553
3. Iddan G, Meron G, Glukhovsky A, Swain P (2000) Wireless capsule endoscopy. Nature 405:417
4. Qureshi WA (2004) Current and future applications of capsule endoscopy. Nature Reviews Drug Discovery 3:447-450
5. Pennazio M (2006) Capsule endoscopy: where are we after 6 years of clinical use? Digestive and Liver Disease 38:867-878
6. Maroulis D, Iakovidis D, Karkanis A, Karras D (2003) CoLD: a versatile detection system for colorectal lesions in endoscopy video frames. Computer Methods and Programs in Biomedicine 70:151-166
7. Karkanis S, Iakovidis D, Maroulis D, Karras D, Tzivras M (2003) Computer-aided tumor detection in endoscopic video using color wavelet features. IEEE Trans. on Information Technology in Biomedicine 7(3):141-152
8. Lima C, Barbosa D et al. (2008) Classification of endoscopic capsule images by using color wavelet features, higher order statistics and radial basis functions. Proceedings of IEEE-EMBC 2008, to be published
9. Barbosa D, Ramos J, Lima C (2008) Detection of small bowel tumors in capsule endoscopy frames using texture analysis based on the discrete wavelet transform. Proceedings of IEEE-EMBC 2008, to be published
10. Haralick RM (1979) Statistical and structural approaches to texture. Proc. IEEE 67:786-804
One-class support vector machine for joint variable selection and detection of postural balance degradation

H. Amoud, H. Snoussi, D.J. Hewson and J. Duchêne

Institut Charles Delaunay, Université de technologie de Troyes, FRE CNRS 2848, Troyes, France
Abstract — The study of static posture is of great interest for the analysis of deficits in balance control. One method of balance analysis is to use a force platform, which makes it possible to extract the displacement of the centre of pressure (COP). The parameters extracted from COP time series are key variables for monitoring the degradation of balance. However, the irrelevance and/or redundancy of some of them makes effective detection of degradation difficult. The objective of this paper is the implementation of a detection method (SVDD) together with a procedure for selecting the relevant parameters able to detect a degradation of balance. The chosen selection criterion is the maximization of the area under the ROC curve (AUC).
10
8
6
Anteroposterior displacement (mm)
1
4
2
0
−2
−4
−6
−8
Keywords — One Class classification, Feature Selection, Detection, Support Vector Data Description, Posture.
−10 −10
−8
−6
−4
−2
0
2
4
6
8
10
Mediolateral displacement (mm) 10
I. INTRODUCTION
0
−5
−10
10
5 ML (mm)
Recently, extensive research has been devoted to the study of postural balance. The attraction of this field of research is primarily due to the importance of characterizing the risk of falling due to a deficit in balance in the elderly. Balance, along with gait problems, loss of muscle strength, and previous falls are among the most commonly cited risk factors for falls in the elderly [1]. Elderly fallers have decreased autonomy and independence, while the risk of subsequent falls increases, leading to a marked deterioration in mental and physical health. Falls in the elderly are thus a major cause of mortality. For instance, in France alone, the number of deaths attributed annually to falls is estimated to be more than 9000, with a resultant cost of more than two billion euros [2]. Balance, or postural equilibrium, is maintained by reacting to information from the different sensory systems, including vestibular, visual, and proprioceptive systems. It is possible to evaluate postural control using either clinical [3-5] or biomechanical tests [6, 7]. Clinical tests have been shown to be able to identify elderly at a greater risk of falling[3, 4], however they cannot detect the evolution of this risk over time. In contrast, biomechanical tests could provide a means to follow elderly subjects over time, thus making it possible to predict the risk of fall [4, 6].
AP (mm)
5
0
−5
−10
0
2
4
6
8
10
time (s)
Fig. 1: Stabilogram: displacement of the centre of pressure in the horizontal plane (top), displacement in the anteroposterior direction over time (middle) and displacement in the mediolateral direction over time (bottom). Postural stability is usually measured using a force plate, from which measures of centre of pressure displacement in the horizontal plane in both anteroposterior (AP) and mediolateral (ML) directions are obtained. The representation of the COP time series in AP and ML directions over time is known as a stabilogram (Figure 1). In order to study the quality of equilibrium in a static position, a range of parameters can be extracted from the
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 204–207, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
One-class support vector machine for joint variable selection and detection of postural balance degradation
procedure of selection of relevant parameters. These two components of the algorithm are briefly described below. 1. Concerning the detection method, we propose the using of Support Vector Data Description (SVDD), which was developed by [12-14]. SVDD is based on the Support Vector Machines of Vapnik[15], which finds the optimal separating hyperplane between data sets. In contrast, SVDD adapts SVM for only one class. The idea of this modification is to enclose the data of the target class by a sphere for which the volume is the smallest possible, so that the decision function can reject the data of the outlier class (Figure 2). This method is adapted in order to reject a predetermined fraction of the target data. The SVDD method gives comparable results with the Parzen density method [12, 13]. Moreover, the performance of SVDD is not degraded when the distribution of the training data differs from that of the target class, provided that it covers a large volume of the data space of the target class [12, 13]. x
*
stabilogram time series. Classical parameters include spatiotemporal (area of ellipse) and spectral parameters (median frequency, deciles) [6]. These types of parameters provide purely statistical information while ignoring the dynamic characteristics of the centre of pressure displacements. Recently, parameters that describe the fractal dynamics and provide information related to underlying physiological control processes have been extracted using fractal and nonlinear analyses. These new groups of parameters include the Hurst exponent [8, 9], reconstructed phase space [9], entropy [10] and univariate and bivariate Empirical Mode Decomposition[11]. An alteration of the postural control system should be reflected in changes in the characteristics of the COP. Consequently, the parameters extracted from the COP should provide a means of supervising the degradation of balance. However, the multitude of parameters (at least 65 parameters), the irrelevance and\or the redundancy of some of them, as well as the complexity of the function related them to the postural system make it difficult, if not impossible to effectively detect of a possible degradation in balance. The objective of this article is to present a method to select the relevant parameters that will enable degradation in balance to be detected, followed by an appropriate detection method, which takes into account the complexity of the relationship between the state of the postural control system (required information) and the stabilogram signal.
205
a
x R
II. STATEMENT OF THE PROBLEM The parameters extracted from the stabilogram are not all relevant, with a lot of redundancy between parameters. In addition, the link between the quality of postural balance and the parameters is not always clear. Many of these parameters have nonlinear characteristics and cannot be modeled by models known in the literature. Finally, within the framework of the detection of degradation, most data are for the normal case without degradation, whereas data for degraded case is difficult to obtain, and is thus lacking. The objective is therefore to select the parameters that make it possible to detect a degradation of balance using detection methods based on a one-class classification and a kernel core. III. PROPOSED ALGORITHM The algorithm developed in this work is based on an implementation of a kernel-based method of detection and a
Fig. 2: SVDD: the sphere enclosing the target class is described by its centre and radius. Four objects lie on the boundary (support vectors); two objects from the outlier class are outside the sphere.

2. Concerning the selection procedure, we propose the use of a supervised selection criterion based on the performance of the detection method. This performance is measured by the area under the receiver operating characteristic (ROC) curve (AUC). The ROC curve summarizes the specificity and sensitivity of the detection method [16, 17]; it is obtained by plotting the type II errors (false positives, outliers accepted) against the type I errors (false negatives, targets rejected) for a range of threshold values of the decision function of the detection method [16]. An exhaustive search for the optimal feature subset is computationally very difficult, to the point of being infeasible. In the proposed algorithm, iterative search methods, namely forward selection (FS) and recursive feature elimination (RFE), were used to select the pertinent features [18].
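As a concrete illustration of this detection-plus-AUC pairing, the sketch below scores a one-class detector on synthetic data. It is illustrative only: it assumes scikit-learn, and uses OneClassSVM with an RBF kernel, which is closely related to SVDD (for the RBF kernel the two formulations coincide up to parametrization); the data, gamma and nu values are assumptions, not those of the study.

```python
# Minimal sketch: one-class detection scored by AUC (assumes scikit-learn).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(100, 5))   # "no degradation" trials
outlier = rng.normal(2.5, 1.0, size=(30, 5))   # "degraded balance" trials

# Train on the target class only, as in one-class classification.
model = OneClassSVM(kernel="rbf", gamma=0.2, nu=0.1).fit(target)

# Decision scores: larger = more target-like.  Sweeping a threshold over
# these scores traces the ROC curve; AUC summarizes it in one number.
scores = model.decision_function(np.vstack([target, outlier]))
labels = np.r_[np.ones(len(target)), np.zeros(len(outlier))]
auc = roc_auc_score(labels, scores)
```

With well-separated synthetic classes such as these, the AUC is close to 1; on real stabilometric features the separation would of course be weaker.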
IFMBE Proceedings Vol. 22
H. Amoud, H. Snoussi, D.J. Hewson and J. Duchêne
IV. APPLICATION

Data resulting from a follow-up of the degradation of balance in the elderly are not yet available. In order to validate the algorithm, an artificial degradation of balance in a young adult was induced by applying vibration to the tibialis anterior tendon while the subject was in a static upright position. This vibration creates an illusion of a backwards tilt of the body, which causes the subject to tilt forward in order to correct it [19]. It has been demonstrated that this vibration induces a degradation in postural balance [19, 20]. The target class consisted of the stabilometric parameters obtained from experiments without degradation, whereas the outlier class consisted of the data from experiments with a degradation of balance following the application of vibration. Precise details of the experimental protocol can be found in [20]. Sixty-five stabilometric parameters were used in the application, divided into three subgroups: 18 spatiotemporal parameters, 18 spectral parameters and 29 nonlinear parameters. For more details about the parameters, see [6, 8, 10, 20-22].
The algorithm (selection with SVDD) was compared with other detection methods, namely the mixture of Gaussians (MOG) and K-centres, using the same selection criterion and the same feature-search method (FS). The results of FS with the three detection methods are presented in figure 3. The left column of figure 3 presents the results of feature selection using FS with the different detection methods during the validation phase: it shows the evolution of the selection criterion (AUC) as a function of the selected features for each detection method. The AUC value at abscissa j corresponds to the use of the first j features selected by the corresponding selection procedure and detection method, with the AUC values calculated on the validation set. The right column of figure 3 presents the results on the test set: it shows the evolution of AUC during the testing phase as a function of the features selected during the validation phase. The AUC values were calculated for each method, starting with the first selected feature and progressively adding the other selected features.
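The forward-selection loop driving this comparison is generic; here is a minimal sketch, assuming a user-supplied `evaluate_auc` callback (a hypothetical name) that trains the one-class detector on a candidate feature subset and returns the validation AUC. The toy weights in the usage example are made up for illustration.

```python
# Hedged sketch of forward selection (FS) driven by a validation-AUC
# criterion.  `evaluate_auc(features)` is a hypothetical callback: it
# should train the detector using only the given feature subset and
# return the AUC measured on the validation set.
def forward_selection(n_features, evaluate_auc, n_select):
    selected, remaining = [], list(range(n_features))
    for _ in range(n_select):
        # Greedily add the single feature that most improves validation AUC.
        best = max(remaining, key=lambda f: evaluate_auc(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy illustration: score a subset by summing made-up per-feature weights.
weights = [0.1, 0.9, 0.3]
order = forward_selection(3, lambda fs: sum(weights[f] for f in fs), 2)
```

Recursive feature elimination (RFE) would run the mirror-image loop, starting from all features and greedily removing the one whose removal degrades the criterion least.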
[Figure 3: six panels (a)-(f) plotting AUC (range 0.985-1) against the number of selected features (0-60); panels (a)/(d) MOG, (b)/(e) K-centres, (c)/(f) SVDD; left column validation phase, right column test phase.]
Fig. 3: Evolution of AUC according to the features selected by FS for the three detection methods during the validation phase (left). Evaluation of the feature selection realized by FS for the three detection methods during the test phase (right). The AUC value at abscissa j corresponds to the use of the first j features selected by FS.
One-class support vector machine for joint variable selection and detection of postural balance degradation
Only the SVDD method with FS provided results similar to those obtained during the validation phase (classification without errors). The AUC values are equal to one from the addition of the second feature through to the 19th feature. The other methods are not satisfactory, since their AUC does not reach one for a reasonable number of features, and their results clearly differ between the training-validation and test phases. The first 19 features selected are all parameters extracted from displacements in the AP direction, which is logical given that the vibrations were applied in this direction [20]. Moreover, these features come from all three types of parameters (spatiotemporal, spectral and nonlinear), which makes it possible to provide three types of information on the state of balance: spatiotemporal, spectral, and information related to the process of physiological control (nonlinear).

V. CONCLUSION
In conclusion, a reliable method to detect a degradation in balance was obtained using the Support Vector Data Description (SVDD) method. The chosen model used a reduced number of relevant parameters selected by a procedure based on a supervised selection criterion. Future work will test the algorithm within the framework of a longitudinal study of home-dwelling elderly.

ACKNOWLEDGMENT

This study was undertaken as part of the PréDICA research project (Prévision, Détection, Investigation contre la Chute des Personnes Agées) supported by the French ANR agency (Grant ANR-05-RNTS-01801), and the PARAChute research project (Personnes Agées et Risque de Chute), which was supported in part by the French Ministry of Research (Grant 03-B-254), the European Social Fund (Grant 3/1/3/4/07/3/3/011), the European Regional Development Fund (Grants 2003-2-50-0014 and 2006-2-20-0011), the Champagne-Ardenne Regional Council (Grant E200308251), and INRIA (Grant 804F04620016000081).

REFERENCES

1. L. Z. Rubenstein and K. R. Josephson, "The epidemiology of falls and syncope," Clin Geriatr Med, vol. 18, pp. 141-158, 2002.
2. Comité Français d'Éducation pour la Santé, "Les clés du « bien vieillir » : prévention des chutes chez les seniors," Caisse Nationale de l'Assurance Maladie des Travailleurs Salariés, 2001, 20 pp.
3. S. G. Brauer, Y. R. Burns, and P. Galley, "A prospective study of laboratory and clinical measures of postural stability to predict community-dwelling fallers," J Gerontol A Biol Sci Med Sci, vol. 55, pp. M469-M476, 2000.
4. B. E. Maki, P. J. Holliday, and A. K. Topper, "A prospective study of postural balance and risk of falling in an ambulatory and independent elderly population," J Gerontol, vol. 49, pp. M72-M84, 1994.
5. M. E. Tinetti, M. Speechley, and S. F. Ginter, "Risk factors for falls among elderly persons living in the community," N Engl J Med, vol. 319, pp. 1701-1707, 1988.
6. T. E. Prieto, J. B. Myklebust, et al., "Measures of postural steadiness: differences between healthy young and elderly adults," IEEE Trans Biomed Eng, vol. 43, pp. 956-966, 1996.
7. T. E. Prieto, J. B. Myklebust, and B. M. Myklebust, "Characterization and modeling of postural steadiness in the elderly: a review," IEEE Trans Rehabil Eng, vol. 1, pp. 26-34, 1993.
8. H. Amoud, M. Abadi, et al., "Fractal time series analysis of postural stability in elderly and control subjects," J NeuroEng Rehabil, vol. 4, 2007.
9. H. Snoussi, H. Amoud, et al., "Reconstructed phase spaces of intrinsic mode functions. Application to postural stability analysis," presented at the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), New York, USA, 2006.
10. H. Amoud, H. Snoussi, et al., "Intrinsic mode entropy for nonlinear discriminant analysis," IEEE Signal Processing Letters, vol. 14, pp. 297-300, 2007.
11. H. Amoud, H. Snoussi, et al., "Univariate and bivariate empirical mode decomposition for postural stability analysis," EURASIP J Adv Signal Process, vol. 2008, 11 pages, 2008.
12. D. M. J. Tax, "One-class classification," PhD thesis, Delft University of Technology, 2001.
13. D. M. J. Tax and R. P. W. Duin, "Support vector domain description," Pattern Recognition Letters, vol. 20, pp. 1191-1199, 1999.
14. D. M. J. Tax and R. P. W. Duin, "Support vector data description," Machine Learning, vol. 54, pp. 45-66, 2004.
15. V. N. Vapnik, Statistical Learning Theory. Wiley, 1998.
16. A. P. Bradley, "The use of the area under the ROC curve in the evaluation of machine learning algorithms," Pattern Recognition, vol. 30, pp. 1145-1159, 1997.
17. A. K. Jain, R. P. W. Duin, and J. Mao, "Statistical pattern recognition: a review," IEEE Trans Pattern Anal Mach Intell, vol. 22, pp. 4-37, 2000.
18. I. Guyon and A. Elisseeff, "An introduction to variable and feature selection," J Mach Learn Res, vol. 3, pp. 1157-1182, 2003.
19. J. P. Roll and J. P. Vedel, "Kinaesthetic role of muscle afferents in man, studied by tendon vibration and microneurography," Exp Brain Res, vol. 47, pp. 177-190, 1982.
20. V. Michel, H. Amoud, et al., "Identification of a degradation in postural equilibrium invoked by different vibration frequencies on the tibialis anterior tendon," presented at the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), New York, USA, 2006.
21. J. J. Collins and C. J. De Luca, "Random walking during quiet standing," Physical Review Letters, vol. 73, pp. 764-767, 1994.
22. D. J. Hewson, J. Duchêne, et al., "The PARAChute project: remote monitoring of posture and gait for fall prevention," EURASIP J Adv Signal Process, vol. 2007, 2007.
The influence of treatment on linear and non-linear parameters of autonomic regulation in patients with acute schizophrenia

S. Schulz1, K.J. Bär2 and A. Voss1

1 University of Applied Sciences, Department of Medical Engineering and Biotechnology, Jena, Germany
2 Department of Psychiatry, Friedrich-Schiller-University, Jena, Germany
Abstract — An increased cardiovascular mortality, up to three times higher than in the general population and associated with an increased heart rate, is reported in patients suffering from schizophrenia, suggesting a cardiac autonomic dysregulation in schizophrenic patients. The mechanisms that could be responsible for this increased cardiovascular mortality are under debate (unhealthy lifestyle, smoking, diabetes, adverse pro-arrhythmic effects of antipsychotic medication, altered autonomic function). The aim of this study was to determine whether medical treatment and/or schizophrenia itself causes cardiac autonomic dysregulation. We investigated 46 patients suffering from schizophrenia (acute non-medicated - G1, medicated - G2) and 23 matched (age, gender) healthy control subjects (CON). ECG and non-invasive blood pressure were recorded three times from every patient. Cardiac autonomic regulation was evaluated by linear and non-linear (symbolic dynamics, JSD) methods of heart rate variability (HRV), blood pressure variability (BPV) and baroreflex sensitivity (BRS). The results show that non-medicated schizophrenic patients differ significantly from CON (p < 0.0001).

This attests that the larger the delay in therapy application, the higher the AMI size.
2. Significance of p = 0.02 for men with inferior AMI, because their scores are lower with respect to those with anterior AMI.
2. In the context of a small injured myocardial area, wherein the score is lower in the inferior region than in the anterior, the QRS score was more relevant for male data (p = 0.02, which may be considered a significant value).
Logistic regression and least-squares regression.
2. The men under inferior AMI compared with those with anterior AMI.
The variables are mainly scores, anatomic localization of AMI and ECG characteristics.
Conclusions
For 10% of patients, p = 0.001, pointing out relevant PPTI increase as a function of the AMI localization. r values are not provided.
3. Significance between QRS score and the estimated time of death for coronary disease (considering all patients).
Conclusions
Major Biostatistical Results: values of p = 0.237, 0.5653 and 0.0998 in the case of, respectively, no variation, decrease and increase in PPTI (for 90% of patients).
Student's t-test, ANOVA, Tukey method, logistic and least-squares regression.
3. Significant correlation (p > 0.05) between QRS scores and elapsed time until death, due to coronary disease. r coefficients are positive with average value 0.40. “High Correlation” is associated with r > 0.7; “Low Correlation” requires r < 0.15. Significant p values are always lower than 0.1149.
1. QRS score presents better results for masculine data.
There are both positive and negative correlations, and the p values are generally significant, although they were not always provided along with the correlation coefficients.
S.A.F. Amorim, J.B. Destro-Filho, L.O. Resende, M.A. Colantoni, E.S. Resende
IV. CONCLUSIONS

The last line of Table 1 provides the general conclusions that could be drawn from the seven papers. The first remark is a general absence of detail on biostatistical issues in all papers. For example, correlation coefficients are not always provided along with p values, and only two papers explicitly stated which statistical hypothesis tests were used. These articles did not explain the reason for choosing such statistical tools, which may be considered very simple and inaccurate from a theoretical viewpoint. The hypothesis sets H0 and H1 were not provided in any paper. In brief, these preliminary conclusions point out practical difficulties in comparing these works of the literature. Nevertheless, from the last line of Table 1, an "excellent" score provides correlation coefficients greater than 0.7, associated with p < 0.1149, where these correlations involve the score under study and clinical indicators of the injury level (troponin and CK-MB concentrations, ejection fraction). In addition, the need for closer interaction between clinical trials and biostatistical analysis is clear. Future work involves the analysis of all articles, as well as the identification and use of more advanced biostatistical tools.

ACKNOWLEDGMENTS

The authors are indebted to Prof. G. S. Wagner, Duke University, USA. This work has been funded by FAPEMIG, the Research Agency of Minas Gerais Province, Brazil.

REFERENCES

1. Ulrika S. P., Bernard R. et al. (1998) Comparison of the Various Electrocardiographic Scoring Codes for Estimating Anatomically Documented Sizes of Single and Multiple Infarcts of the Left Ventricle. The American Journal of Cardiology, v. 81, pp. 809-815.
2. Aldrich H. R., Wagner N. B. et al. (1988) Use of initial ST-segment deviation for prediction of final electrocardiographic size of acute myocardial infarcts. American Journal of Cardiology, v. 61, n. 10, pp. 749-753, April.
3. Wilkins M. L., Anderson S. T. (1994) Variability of Acute ST-Segment Predicted Myocardial Infarct Size in the Absence of Thrombolytic Therapy. The American Journal of Cardiology, v. 74, pp. 174-177.
4. Wilkins M. L. et al. (1995) An Electrocardiographic Acuteness Score for Quantifying the Timing of a Myocardial Infarction to Guide Decisions Regarding Reperfusion Therapy. The American Journal of Cardiology, pp. 617-620, 15 March.
5. Merritt H. R. et al. (1996) Relation Between Symptom Duration Before Thrombolytic Therapy and Final Myocardial Infarct Size. Circulation, v. 93, n. 1, pp. 48-52, January 1.
6. Jones M. G. et al. (1990) Prognostic Use of a QRS Scoring System After Hospital Discharge for Initial Acute Myocardial Infarction in the Framingham Cohort. The American Journal of Cardiology, v. 66, p. 549, September 1.

Corresponding Author: Joao-Batista Destro-Filho, FEELT/UFU, Av. Joao Naves de Avila 2121, Santa Monica, 38400-902 Uberlândia MG, Brazil. E-mail: [email protected]
_______________________________________________________________
IFMBE Proceedings Vol. 22
_________________________________________________________________
A Principal Component Regression Approach for Estimation of Ventricular Repolarization Characteristics

J.A. Lipponen1, M.P. Tarvainen1, T. Laitinen2, T. Lyyra-Laitinen2 and P.A. Karjalainen1

1 University of Kuopio, Department of Physics, Kuopio, Finland
2 Kuopio University Hospital, Department of Clinical Physiology and Nuclear Medicine, Kuopio, Finland
Abstract — Ventricular repolarization duration (VRD) is known to be affected by heart rate (HR) and autonomic control (mainly through the sympathetic branch), and thus VRD varies in time in a similar way to HR. The time interval between Q-wave onset and T-wave offset in an electrocardiogram (ECG), i.e. the QT interval, corresponds to the total ventricular activity, including both depolarization and repolarization times; thus the QT interval may be used as an index of VRD. Due to the difficulty of fixing the Q-wave onset automatically in VRD determination, the RT interval is typically used instead of the QT interval. In this paper, we propose a robust method, based on principal component regression (PCR), for estimating ventricular repolarization characteristics such as the RT interval and the T-wave amplitude. In the method, T-wave epochs are first extracted from the ECG with respect to the R-wave fiducial points. Then, the correlation matrix of the extracted epochs is formed and its eigenvectors are computed. The most significant eigenvectors are fitted to the data to obtain noise-free estimates of the T-waves. Nonstationarities in the repolarization characteristics can also be modeled by updating the eigenvectors iteratively. The proposed method is tested with exercise ECG data measured from healthy subjects.
I. INTRODUCTION

Ventricular repolarization duration (VRD) is known to vary in time. This time variation is a result of autonomic control and of changes in heart rate [1]. In the electrocardiogram (ECG), the time between Q-wave onset and T-wave offset corresponds to the total ventricular activity, and thus the QT interval may be used as an index of VRD. It has been suggested that abnormal QT variability could be a marker for a group of severe cardiac diseases such as ventricular arrhythmias, and that QT variability may yield HR-independent information. In addition, the amplitude and shape of the T-wave may include important clinical information; for example, it has been observed that the QT interval increases and the T-wave amplitude decreases in hypoglycemia [2]. Automatic detection of the Q-wave onset is quite difficult, and thus the R-wave apex is typically used in VRD determination [3]. The RT interval can be defined as the time interval from the R-wave apex to the T-wave apex (RTapex) or from the R-wave apex to the T-wave offset (RTend). RTapex is typically
defined by fitting a parabola around the T-wave maximum [3]. The T-wave offset can be defined in many ways, but typically it is defined as the intercept with the T-wave of a threshold level or of a line fitted to the T-wave downslope [4]. In many cases, the RTapex measure has been found to give the most accurate results. However, the variability of the T-wave downslope has been found to hide important physiological information, and thus the use of the RTend measure may be advisable [5].

In this paper, we propose a new method for estimating the variation in the RT interval, based on principal component regression (PCR). In the method, each T-wave is modeled using the eigenvectors of the correlation matrix of the 60 previous T-waves. The first two eigenvectors are fitted to the T-wave epochs to get noise-free estimates of the T-wave, and the T-wave apex and offset are then fixed using this modeled T-wave. In order to take into account changes in the T-wave characteristics, the eigenvectors are updated at every heart beat. The proposed method is compared to traditional RT-interval estimation methods using three exercise ECG measurements. The main benefit of the method is its robustness to noise, so the T-wave apex and also the offset can be estimated with high accuracy.

II. MATERIALS AND METHODS

A. Data acquisition

The ECG data used in this paper consist of three exercise ECG measurements. In all measurements, the ECG electrodes were placed according to the conventional 12-lead system with the Mason-Likar modification; for the analysis we chose the chest lead V5. The exercise ECG recordings were performed using a Cardiovit™ CS-200 ergo-spirometer system (Schiller AG) with an Ergoline™ Ergoselect 200 K bicycle ergometer. The sampling rate of the ECG was 500 Hz. Three healthy male subjects were tested. First the subjects lay supine for three minutes and then sat up on the bicycle for the next three minutes.
After that, the subject started the actual exercise part, in which the load of the bicycle increased by 40 W every three minutes. The starting load was 40 W and the subject
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 385–388, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
J.A. Lipponen, M.P. Tarvainen, T. Laitinen, T. Lyyra-Laitinen and P.A. Karjalainen
continued exercise until exhaustion. After the subject indicated that he could not go on any more, the exercise test was stopped and a 10-minute recovery period was recorded.
B. Traditional RT interval measurements

Two commonly used RT interval measurement methods, RTend and RTapex, are introduced here. Baseline drifts within the ECG can disturb RT estimates, and therefore these low-frequency components should be removed before analysis. We used a 5th-order Butterworth highpass filter with a cutoff frequency of 1 Hz to remove the ECG baseline drifts. All RT interval estimation methods presume R-wave apex detection, which is accomplished by using a QRS detection algorithm [6]. Once the R-wave apexes are detected, T-waves can be extracted with the window function

    [100, 500] ms           if RRav > 700 ms
    [100, 0.7 · RRav] ms    if RRav ≤ 700 ms        (1)

where RRav is the average RR interval [7]. The RT interval can now be estimated from the extracted T-waves.

In the RTapex method, the time difference between the R- and T-wave apexes is estimated. First, the ECG is lowpass filtered with a 20 millisecond moving-average FIR filter (for a sampling rate of 500 Hz, the filter order is 10, with filter coefficients bj = 1/10 for all j = 1…10). Then, the T-wave maximum is located by fitting a parabola around the T-wave maximum within a 60 millisecond frame to reduce the effect of noise. The T-wave apex is defined as the maximum of the fitted parabola. The RTapex method is represented in Fig. 1 (middle row).

The second traditional RT interval measure utilizes a line fit in the T-wave offset determination. The T-wave is first lowpass filtered as above, and the line fit is obtained as the steepest tangent of the T-wave down-slope, as can be seen in Fig. 1 (first row). The T-wave offset is then fixed as the intercept of this tangent with the isoelectric line, which is obtained as the amplitude value corresponding to the highest peak in the ECG histogram. This RT interval parameter is here called RTend.

C. Principal component regression

In principal component regression, the vector containing the measured signal is presented as a weighted sum of orthogonal basis vectors. The basis vectors can be selected in many ways, but in PCR they are chosen to be the eigenvectors of either a data covariance or correlation matrix. The central idea in PCR is to reduce the dimensionality of the data set while retaining as much as possible of the variance in the original data [8].

Let M denote the number of T-waves within the whole ECG recording. The T-wave epochs are extracted using the T-wave searching window (1), where RRav is the average of these M RR intervals. The j:th T-wave epoch, whose length is N, is written as

    zj = [zj(1) … zj(N)]^T        (2)

As an observation model, we use the additive noise model

    zj = sj + ej        (3)

where sj is the noiseless T-wave epoch and ej is measurement noise. If we have M T-waves, they will span a vector space S of at most min{M, N} dimensions. Each epoch zj can be approximated as a linear combination of basis vectors yk:

    zj = Hs θj + ej        (4)

where Hs = (y1 … yK) is the N × K matrix of basis vectors which span a K-dimensional subspace of S, and θj is the K × 1 column vector of weights related to the j:th epoch. By placing each T-wave epoch zj in the measurement matrix Z = (z1 … zM) and the model weights in the parameter matrix θ = (θ1 … θM), we can write the observation model in the form

    Z = Hs θ + E        (5)

where E = (e1 … eM) is a matrix of error terms. The model basis vectors yk can be defined in many ways, but in PCR they are selected to be the eigenvectors νk of the data correlation matrix, which can be estimated as

    R = (1/M) Z Z^T        (6)

The eigenvectors can be solved from the eigendecomposition of R. The eigenvectors of the correlation matrix are orthonormal, and therefore the least-squares solution for the parameters θ is of the form

    θ̂PC = Hs^T z        (7)

and the T-wave estimates can be computed as

    ẑPC = Hs θ̂PC        (8)

The first eigenvector is the best mean-square fit of a single waveform to the entire set of epochs, and therefore it is similar to the mean of the epochs. The second eigenvector covers mainly the variation of T-wave latency and is expected to resemble the first derivative of the T-wave. As stated earlier, the idea in PCR is to reduce the dimensionality of the data set; this is done by evaluating
IFMBE Proceedings Vol. 22
___________________________________________
A Principal Component Regression Approach for Estimation of Ventricular Repolarization Characteristics
387
the parameters θ using equation (7) and calculating an approximation of the T-wave epoch zj by fitting the most significant eigenvectors. We used the first two eigenvectors, because the first eigenvector models the mean of the T-waves and the second eigenvector models the latency variation between successive T-waves. By using such a low number of eigenvectors, the denoising effect of the fit is maximized.

If the T-wave shape and position change significantly during the ECG measurement, eigenvectors evaluated on the whole dataset are not capable of modeling the changes in all T-waves. Thus, the eigenvectors need to be updated dynamically during the measurement. In this paper, we use the 60 previous T-wave epochs (zj-59 … zj) in the computation of the model basis vectors νk. We noticed that during 60 T-waves the changes in T-wave position and shape are not too large, while on the other hand 60 T-waves are enough to give the desired prior information on the T-wave characteristics.

By using the estimated T-wave ẑPC, the apex of the T-wave (required for RTapex^PC) can be found directly as the maximum value of ẑPC, because of the denoising effect of the model. The end of the T-wave (required for RTend^PC) can be estimated in the same way as in the RTend method. It has also been noted that in very low signal-to-noise ratio (SNR) conditions the first two eigenvectors can themselves be noisy; in such situations the approximation ẑPC can be lowpass filtered. The step-wise RTPC algorithm can be expressed as:

1. Get the T-wave epochs (zj-59 … zj) using the window function (1), where RRav is calculated as the mean of RRj-59 … RRj, and form the data matrix Z = (zj-59 … zj).
2. Calculate the data correlation matrix R given by (6) and solve for the eigenvectors ν1 and ν2.
3. Evaluate the model parameters θ̂PC using equation (7).
4. Compute the T-wave epoch estimates ẑPC using (8).
5. Calculate the RTend^PC and RTapex^PC estimates.
6. Get the next T-wave (j = j + 1) and return to step 1.

III. RESULTS

The proposed RT-interval estimation method was tested using three exercise ECG measurements and was compared to the traditional RT-interval estimation methods. Fig. 1 shows two different cases of RT-interval estimation. Case one (first column) is taken from the resting period; the SNR is clearly high and all four measures (RTapex, RTend, RTapex^PC and RTend^PC) seem to work nicely. In
Fig. 1: Different RT interval estimates (red lines: original data). First row: RTend method with filtered data (black), fitted tangent (black dashed line) and isoelectric line (thin black line). Second row: RTapex method with fitted parabola (green line). Third row: RTPC method with eigenvectors (green lines) and approximation (black line).
the second case (column 2), the SNR is low and some movement artefacts are present. The PCR method is based on prior information from earlier T-waves, so the fitted T-wave has the same characteristics as the earlier T-waves, which gives it an advantage over the traditional methods. The first eigenvector is similar to the mean of the earlier T-waves, and thus the fitted T-wave looks smooth although the original data are noisy.

The different RT interval estimation methods were then compared by applying them to the exercise ECG measurements of the three subjects (S1, S2 and S3). The results are shown in Fig. 2. In the first row, RTapex^PC is compared with the RTapex method. Both methods give similar results: the RT-interval shortens during exercise, as the RR-interval does, and the variation of the RT-interval is similar for the two RTapex estimates. The RTend and RTend^PC estimates are shown in the second row of Fig. 2. The trend of these two estimates is very similar, but during exercise the RTend^PC variation is smaller than that of the RTend measure. The steepest tangent of the T-wave is highly sensitive to noise, and thus the variation of these two estimates is partly due to noise in the ECG signal. Because RTend^PC is more
Fig. 2: RT interval estimates applied to exercise ECG data. First row: RTapex^PC (blue line) and RTapex (red line). Second row: RTend^PC (blue line) and RTend (red line). Third row: RR-interval time series.
robust to noise, its variation is smaller than that of the RTend estimate.

IV. DISCUSSION

Ventricular repolarization duration variability is a potential tool in cardiovascular research. Various algorithms for estimating the RT-interval have been presented. However, the detection of the smooth T-wave can be problematic in exercise and non-laboratory measurements, where the SNR is low. We have proposed a PCR-based method for estimating the RT-interval. The PCR method constructs a model for the T-wave using prior information received from earlier T-waves; because of this prior information, the method is quite robust to noise.

The presented method was compared to traditional RT-interval measures. Fig. 1 presents two cases of how the RT-searching methods work. As can be seen in the second case, the RTPC method works better in low-SNR situations, because the prior information from earlier T-waves is essential to reach a sufficient estimate of the T-wave offset. In situations where the T-wave position and shape vary significantly, two eigenvectors are not sufficient to model all changes in the whole data set, so the RTPC method can produce some bias in the estimation. However, this bias can be reduced by updating the eigenvectors dynamically. In Fig. 2, the RT-interval measures were compared with each other. The RTapex and RTapex^PC methods give similar results, but the variation in the RTend method is greater than in the RTend^PC method. The T-wave downslope has been found to hide important
physiological information [5]. Thus, RT offset measures are used although the traditional RTend method is more sensitive to noise than the RTapex method. The RTend^PC method seems to be less sensitive to noise, and thus it gives more accurate results for the RT offset measure.
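The projection idea behind the PCR method can be sketched as follows: a noisy T-wave is approximated by its projection onto a low-dimensional subspace built from earlier beats. This is an illustrative reconstruction, not the authors' implementation; the basis here is simply the mean beat plus one deviation vector, standing in for the first two eigenvectors of the paper, and all names are hypothetical.

```python
# Sketch: fit a noisy T-wave as a linear combination of basis waveforms
# derived from earlier (cleaner) T-waves. The basis (mean beat + one
# deviation vector) is an illustrative stand-in for the eigenvectors.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(a, s):
    return [x * s for x in a]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors (assumed linearly independent)."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            w = add(w, scale(u, -dot(w, u)))
        norm = dot(w, w) ** 0.5
        basis.append(scale(w, 1.0 / norm))
    return basis

def fit_t_wave(noisy, earlier_waves):
    """Project a noisy T-wave onto the subspace spanned by earlier beats."""
    n = len(earlier_waves)
    mean = [sum(w[i] for w in earlier_waves) / n for i in range(len(noisy))]
    deviation = [earlier_waves[0][i] - mean[i] for i in range(len(noisy))]
    basis = gram_schmidt([mean, deviation])
    fitted = [0.0] * len(noisy)
    for u in basis:
        fitted = add(fitted, scale(u, dot(noisy, u)))
    return fitted
```

Because the noise component lies mostly outside the subspace spanned by earlier beats, the fitted wave stays closer to the underlying T-wave shape than the raw noisy input.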
REFERENCES
1. W. Zareba and A. B. de Luna (2005) QT dynamics and variability. Ann Noninvasive Electrocardiol 10:256-262
2. T. Laitinen, T. Lyyra-Laitinen, et al. (2008) Electrocardiographic alterations during hyperinsulinemic hypoglycemia in healthy subjects. Ann Noninvasive Electrocardiol 13:97-105
3. M. Merri, M. Alberti, and A. Moss (1993) Dynamic analysis of ventricular repolarization duration from 24-hour Holter recordings. IEEE Trans Biomed Eng 40:1219-1225
4. A. Porta, G. Baselli, et al. (1998) Performance assessment of standard algorithms for dynamic R-T interval measurement: comparison between R-Tapex and R-Tend approach. Med Biol Eng Comput 36:35-42
5. P. Davey (1999) QT interval measurement: Q to Tapex or Q to Tend. J Internal Med 246:145-149
6. J. Pan and W. Tompkins (1985) A real-time QRS detection algorithm. IEEE Trans Biomed Eng 32:230-236
7. P. Laguna, N. Thakor et al. (1990) New algorithm for QT interval analysis in 24-hour Holter ECG: performance and applications. Med Biol Eng Comput 28:67-73
8. I. T. Jolliffe (1986) Principal Component Analysis. Springer-Verlag

Author: Jukka A. Lipponen
Institute: Department of Physics, University of Kuopio
Street: P.O. Box 1627
City: FI-70211 Kuopio
Country: Finland
Email: [email protected]
Diagnosis of Ischemic Heart Disease with Cardiogoniometry – Linear discriminant analysis versus Support Vector Machines

A. Seeck(1), A. Garde(2), M. Schuepbach(3,5), B. Giraldo(2), E. Sanz(3), T. Huebner(4), P. Caminal(2), A. Voss(1)

(1) University of Applied Sciences Jena, Department of Medical Engineering and Biotechnology, Jena, Germany
(2) Dept. of ESAII, Universitat Politècnica de Catalunya (UPC), Institut de Bioingenyeria de Catalunya (IBEC) and CIBER de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona, Spain
(3) Laboratory for Cardiology, Zäziwil, Switzerland
(4) enverdis GmbH, Jena, Germany
(5) Dept. of Neurology, University Hospital Bern, Switzerland and Centre d'Investigation Clinique CHU Salpêtrière, Paris, France
Abstract — Ischemic Heart Disease (IHD) is characterized by an insufficient blood supply to the myocardium, usually caused by an atherosclerotic disease of the coronary arteries (coronary artery disease, CAD). IHD and its consequences have become a leading health problem in the industrialized nations. The aim of this study was to evaluate a new diagnostic method, cardiogoniometry, using two different classification techniques: linear discriminant function analysis (LDA) and Support Vector Machines (SVM). Data of a group of 109 female subjects (62 healthy, 47 with IHD) were analyzed on the basis of parameters extracted from the three-dimensional vector loops of the heart. The LDA achieved an accuracy of 83.5% (sensitivity 78.7%, specificity 87.1%), whereas the SVM achieved an accuracy of 86% (sensitivity 80.5%, specificity 89.8%). It could be shown that cardiogoniometry, an electrophysiological diagnostic method performed at rest, detects variables that are helpful in identifying ischemic heart disease. As it is easy to apply, non-invasive, and provides an automated interpretation, it may become an inexpensive addition to the cardiologic diagnostic armamentarium, possibly useful for early diagnosis of IHD or CAD, as well as in patients who do not tolerate exercise testing. It was also shown that applying Support Vector Machines increases diagnostic precision in comparison to the conventional discriminant function analysis.

Keywords — Cardiogoniometry, Support Vector Machines, nonlinear classifier, linear discriminant analysis, vector loop.
I. INTRODUCTION

Ischemic Heart Disease (IHD) is characterized by an insufficient blood supply to the myocardium, usually caused by an atherosclerotic disease of the coronary arteries (coronary artery disease, CAD). It is the leading cause of myocardial infarction and/or chronic heart failure. Today it is the most common cause of death in most industrialized countries and a major cause of hospital admissions. Diagnosis of IHD is usually performed with ergometry, which bears a high risk of complications for the patient and achieves only moderate sensitivity and specificity [1]. Moreover, a number of patients do not tolerate exercise testing. There is a rapid development in the field of cardiac imaging (cardiac MRI, cardiac CT, scintigraphy), but these techniques are not available to general practitioners [2]. Current broadly available non-invasive cardiologic diagnostic tests, such as electrocardiography, are rather insensitive for IHD. Cardiogoniometry is a new vectorcardiographic technique to diagnose IHD at an early stage, with the advantage of easy application and of being performed under resting conditions [3]. Support Vector Machines (SVM) are binary classifiers based on statistical learning techniques [4]. Introduced by Vapnik [5] and studied by others [6, 7], SVM are a powerful class of learning procedures that can deal with nonlinear classification when combined with a kernel function. The SVM separate a set of objects having different class memberships by means of an optimal hyperplane that maximizes the margin between both classes and defines the decision boundaries [8-10]. The algorithm can be shown to correspond to a linear method in a high-dimensional feature space that is non-linearly related to the input space [11]. The aim of this study was to compare the classification of cardiogoniometric data using the linear discriminant function technique versus classification with Support Vector Machines, in order to differentiate between patients with IHD and healthy subjects.
II. DATA AND METHODS

A. Analyzed Data

In this study, cardiogoniometric data of 109 female subjects were analyzed, 47 of them suffering from IHD and 62 being healthy. The diagnostic findings of coronary angiography were used as the gold standard. An IHD was diagnosed if one of the three big coronary vessels (LAD,
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 389–392, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
RCA, and LCX) had a significant stenosis. The mean age of the group with IHD was 67.0±9.2 years and that of the healthy group 65.2±9.1 years.

B. Cardiogoniometry

Cardiogoniometry (CGM) uses four surface electrodes to register the vector loop. The potentials of the five bipolar leads are transformed into three orthogonal projections by applying equations 1 to 3 (Figure 1).
x = D · sin 45° − I    (1)
y = D · sin 45° + A    (2)
z = sin 45° · (Ve − Ho)    (3)
Fig. 1: Position of the electrodes and the resulting bipolar leads and planes for cardiogoniometry.

Vectorial addition of the three potentials measured in one plane results in a vector that describes the electric field in this plane. The vectors of two orthogonal planes allow the construction of the heart vector, which gives by its orientation the direction, and by its length the strength, of the electrical field generated by the heart. The differences to the conventional vector ECG are a) the measurement of the vector per constructionem and without intercalated compensatory resistances in CGM, and b) the projection planes of CGM, the frontal plane and the oblique sagittal plane, which are oriented according to the anatomy of the heart rather than the body planes. The examination was conducted for approx. 15 seconds under resting conditions, preferably without respiration. The sampling frequency was 500 Hz.

C. Parameters

The characteristics of the vector loop were analysed on the basis of different parameters, which were features such as the amplitude, the direction, the time, and the angles to the two different planes for significant landmarks (P-loop, R-loop, T-loop, etc.) in the ECG. Also the total area, the planarity and the shape of the loop were examined. On the one hand, these parameters were extracted from one calculated median loop; on the other hand, parameters were taken from each heart cycle and the mean value was determined. The coordinate system of the three orthogonal axes divides the space into eight octants related to the anatomy of the heart: octants 1 to 4 are apical, 5 to 8 are basal. The progression and the fractions of the vector loop in each octant were also taken as parameters. Altogether, approx. 500 parameters were developed. To reduce this number of parameters, a combination of the Mann-Whitney U test and a cross-correlation analysis was applied. First, the significance of each parameter for differentiating between patients with IHD and healthy subjects was calculated. Afterwards, the correlation between all parameters was evaluated. If the correlation between two parameters was higher than 0.8, the one with less significance was excluded. In this way the parameter pool was reduced to 240 parameters, which were used to perform the two different classification methods.

D. Linear Discriminant Analysis
The linear discriminant analysis is a method for classifying a set of observations into predefined classes, based on a set of parameters. The discriminant function is calculated using a set of observations for which the classes are known:

d = b1·x1 + b2·x2 + ... + bn·xn + c    (4)
where the b's are the discriminant coefficients, the x's are the input variables and c is a constant. This discriminant function is used to predict the class of a new observation of unknown class. For this study, a forward stepwise linear discriminant function with a maximum of seven parameters was applied. To assess the performance of the discriminant function, cross-validation was done.

E. Support Vector Machines

The SVM are learning systems for classification based on statistical learning theory. They separate a given set of two-class training data using a hyperplane that is maximally distant from them ("maximum margin hyperplane"). The separating hyperplane for linearly separable data is defined by
x · w0 + b = 0    (5)
where w0 is the normal to the hyperplane, obtained as a linear combination of a subset of the training data. Data are then classified by computing the sign of the above equation. However, data are normally not separable. In this case a non-linear decision function is needed, and an extension to non-linear boundaries is achieved by using a specific function called the kernel function [6]. The kernel function maps the data of the input space to a higher-dimensional space (feature space) by a non-linear transformation. The optimal hyperplane is then constructed in the feature space, creating a non-linear boundary in the input space. The decision function for non-linearly separable data is defined by:
f(x) = sign( Σi αi·ti·K(x, xi) + b )    (6)
At first, a search for the optimal C and σ was performed with the 240 developed parameters, comparing the classification accuracy for different combinations (C: 1–1000, σ: 0.1–50). With the best values, a feature selection process was applied to extract only the best 7 parameters; in all classifications a 10-fold cross-validation technique was used. The whole process is presented in Figure 2.
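The 10-fold cross-validation bookkeeping used in all classifications can be sketched as follows (indices only, classifier-agnostic; `kfold_indices` is an illustrative helper, not the authors' code):

```python
# Sketch of k-fold cross-validation index bookkeeping: every sample appears
# in exactly one test fold, and the remaining folds form the training set.

def kfold_indices(n_samples, n_folds=10):
    """Yield (train_idx, test_idx) pairs covering all samples exactly once."""
    sizes = [n_samples // n_folds + (1 if i < n_samples % n_folds else 0)
             for i in range(n_folds)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i, test_idx in enumerate(folds):
        train_idx = [j for k, f in enumerate(folds) if k != i for j in f]
        yield train_idx, test_idx
```

For the 109 subjects of this study this yields nine folds of 11 subjects and one fold of 10; in practice the samples would be shuffled or stratified first.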
The coefficients αi and b are determined by solving a large-scale quadratic programming problem, for which efficient algorithms exist that guarantee finding the global optimum. The ti are the targets, and Σi αi·ti = 0. The vectors xi are the support vectors, which determine the optimal separating hyperplane and correspond to the closest points of each class. N is the number of support vectors and 0 ≤ αi ≤ C, where C is a penalty parameter which allows some flexibility in separating the categories. It controls the trade-off between maximizing the margin and minimizing the classification error. Increasing the value of C forces the creation of a more accurate model that may not generalize well. The goal is to find the minimum value of C with which a minimum classification error is obtained. Furthermore, kernel functions must satisfy some constraints in order to be applicable (Mercer's conditions) [6]. The kernel function used in this work is a Gaussian basis function expressed as:
K(Xi, Xj) = exp[ −(1/2) · (‖Xi − Xj‖ / σ)² ]    (7)
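The Gaussian basis function of equation (7) translates directly into code; `gaussian_kernel` is an illustrative name:

```python
# Gaussian (RBF) kernel of equation (7): sigma is the kernel width.
import math

def gaussian_kernel(xi, xj, sigma):
    """K(xi, xj) = exp(-0.5 * (||xi - xj|| / sigma)**2)."""
    dist2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-0.5 * dist2 / sigma ** 2)
```

The kernel equals 1 for identical inputs and decays monotonically with the distance between them; larger σ gives a smoother, more linear decision boundary.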
Fig. 2: Working flow diagram of the presented study.
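The redundancy-reduction step of the workflow (Section II.C: parameters correlated above 0.8 are pruned, keeping the more significant one) can be sketched as follows. The p-values are taken as given here; in the paper they come from the Mann-Whitney U test, and all names are illustrative.

```python
# Sketch of the correlation-based parameter reduction: keep the most
# significant parameter of every highly correlated (>0.8) pair.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def reduce_parameters(params, p_values, threshold=0.8):
    """params: dict name -> list of values; p_values: dict name -> p-value."""
    ranked = sorted(params, key=lambda k: p_values[k])  # most significant first
    selected = []
    for name in ranked:
        if all(abs(pearson(params[name], params[s])) <= threshold
               for s in selected):
            selected.append(name)
    return selected
```

Greedily scanning from the most significant parameter downwards guarantees that whenever two parameters are redundant, the one with the smaller p-value survives.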
III. CONCLUSIONS

A. Results

For each classification method an optimal parameter set with seven parameters was found. The parameter set PS_LDA7 achieved a sensitivity of 78.7%, a specificity of 87.1% and a total accuracy of 83.5%. In comparison, PS_SVM7 achieved a sensitivity of 80.5%, a specificity of 89.8% and a total accuracy of 86.0% (Table 1). The optimal setting for C was 1000 and for σ it was 30. The following parameters are included:

PS_LDA7:
T Norm field – information on whether the maximum of the T loop vector is in the field of healthy subjects
SD Pmax/Tmax – standard deviation of the ratio between the maximum amplitude of the P loop and the T loop
Median betaTExi – angle of the vector at the end of the T loop relative to the oblique sagittal plane
Median T+vmax/T-vmax – ratio of velocity between the increasing and the decreasing part of the T loop
Median ROctPlusP[3] – mean amplitude of the vector from the beginning to the maximum of the R loop in the third octant
Median TOctPlusP[8] – mean amplitude of the vector from the beginning to the maximum of the T loop in the 8th octant
Mean TOct[4] – percentage of the total potential of the T loop in the 4th octant

PS_SVM7:
T Norm field – information on whether the maximum of the T loop vector is in the field of healthy subjects
SD Tmax – standard deviation of the amplitude of the maximum of the T loop
Mean tZJe/tZZ – ratio of the duration from Z-point to J-point to the total duration of a heart cycle
Median betaJe – angle of the vector at the J-point relative to the oblique sagittal plane
Median ROctMinusP[8] – mean amplitude of the vector from the maximum to the end of the R loop in the 8th octant
Mean phi – angle between the maximum vector of the R loop and the maximum vector of the T loop
Median PhiT – mean deviation of the solid angle of the T loop

Table 1: Achieved results for both parameter sets
                   PS_LDA7    PS_SVM7
Sensitivity         78.7%      80.5%
Specificity         87.1%      89.8%
Total accuracy      83.5%      86.0%
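The three reported figures follow from a standard confusion matrix; a minimal sketch with illustrative counts (chosen to be consistent with the reported LDA percentages for 47 patients and 62 controls, not taken from the paper):

```python
# Sensitivity, specificity and total accuracy from confusion-matrix counts.

def classification_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)          # fraction of IHD patients detected
    specificity = tn / (tn + fp)          # fraction of healthy correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

For example, 37 true positives out of 47 patients and 54 true negatives out of 62 controls reproduce the LDA row of Table 1 (78.7%, 87.1%, 83.5%).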
B. Discussion

The diagnosis of IHD is usually performed with ergometry, which carries a high risk for the patient, needs cost-intensive equipment, and achieves only a moderate mean sensitivity of 67% and specificity of 84%. Many patients cannot undergo exercise testing. As a first result, it was shown that cardiogoniometry has a high potential to differentiate between patients with IHD and healthy subjects. For the examined study group, both classification techniques, the linear and the nonlinear, achieved better results than the standard reference diagnostic method [12]. Cardiogoniometry has the advantage of being risk-free for the patient because no exercise is necessary. It is also very easy to apply; no cost-intensive special equipment is needed, only a modified ECG system and a PC. Therefore it can be a good support for the general practitioner and be used as a screening examination.
Secondly, it was shown that the SVM are a more powerful classifier compared to the traditional linear discriminant analysis. The SVM make it possible to obtain a maximum margin hyperplane that maximizes the distance between the different classes even when a high degree of overlap is present between the patterns. This difference between linear and non-linear methods can be useful in the analysis of the complexity of cardiologic data. A further evaluation of the performance of the method should be done by validating it on a larger number of patients.
ACKNOWLEDGMENT This work was supported by a grant of the University of Applied Sciences Jena, Germany.
REFERENCES
1. Dewey M, Richter WS, Lembcke A et al. (2004) Nichtinvasive Diagnostik der koronaren Herzkrankheit. Medizinische Klinik 99:57-64
2. Sanz E, Steger JP, Thie W (1983) Cardiogoniometry. Clin Cardiol 6:199-206
3. Gershlick AH, de Belder M, Chambers J et al. (2007) Role of non-invasive imaging in the management of coronary artery disease: an assessment of likely change over the next 10 years. A report from the British Cardiovascular Society Working Group. Heart 93:423-431
4. Cristianini N, Shawe-Taylor J (2000) An Introduction to Support Vector Machines. Cambridge University Press
5. Vapnik VN (1998) Statistical Learning Theory. John Wiley & Sons, New York
6. Burges CJC (1998) A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2:121-167
7. Veropoulos K, Cristianini N, Campbell C (1999) The application of support vector machines to medical decision support: a case study. In: Advanced Course in Artificial Intelligence (ACAI'99)
8. Crawford B, Miller K, Shenoy P, Rao R (2005) Real-time classification of electromyographic signals for robotic control. Proceedings of AAAI, pp 523-528
9. Furey TS, Cristianini N, Duffy N et al. (2000) Support vector machine classification of cancer tissue samples using microarray expression data. Bioinformatics 16(10)
10. Georgoulas G, Stylios CD, Groumpos PP (2006) Predicting the risk of metabolic acidosis for newborns based on fetal heart rate signal classification using support vector machines. IEEE Transactions on Biomedical Engineering 53(5)
11. Hearst M (1998) Trends and controversies: support vector machines. IEEE Intelligent Systems 13(4):18-28
12. Schüpbach WMM, Emese B, Loretan P, Mallet A, Duru F, Sanz E, Meier B (2008) Non-invasive diagnosis of coronary artery disease using cardiogoniometry performed at rest. Swiss Med Wkly 138(15-16):230-238
Enhancement of a QRS detection algorithm based on the first derivative, using techniques of a QRS detector algorithm based on non-linear transformations

C. Vidal(1), P. Charnay(2) and P. Arce(3)

(1) Universidad de Talca/Escuela de Ingeniería en BioInformática, Talca, Chile. [email protected]
(2) Universidad Autónoma de Chile/Escuela de Ingeniería Informática, Talca, Chile. [email protected]
(3) IBM Global Services, Edmonton, Alberta, Canada. [email protected]
Abstract — This work presents details of the implementation of a QRS complex detector algorithm based on the first derivative (the Holsinger algorithm), extended with characteristics of a more elaborate QRS detector based on non-linear transformations (the Hamilton–Tompkins algorithm). These extensions are the use of a refractory period for the search horizon, decision rules using adaptive thresholds for detecting the QRS complex, and a pre-processing of the signal using a band-pass filter, which maximizes the energy of the QRS complex. The performance of both algorithms is compared using some of the MIT-BIH Arrhythmia Database records.

Keywords — EKG, QRS detector, algorithm, Holsinger, Hamilton–Tompkins.
I. INTRODUCTION
A QRS complex detector algorithm allows for the identification of the temporal location of QRS complexes in an electrocardiographic signal. In the following, performance cases of the Holsinger algorithm in its original version, and of versions incorporating improvements present in the Hamilton–Tompkins algorithm, will be presented and analyzed. The former is an algorithm based on simple measurement techniques (first derivative), whose practical performance is not very effective, while the latter uses more complex measurement techniques (non-linear transformations) and performs more effectively. The practical use of both algorithm versions is assessed using the MIT-BIH Arrhythmia Database [1] to measure their effectiveness. The goal of this work is to show how to obtain better results with the Holsinger algorithm for detecting QRS complexes, using characteristics present in the Hamilton–Tompkins algorithm.
II. HOLSINGER ALGORITHM
This QRS complex detector algorithm, in its original form, has a high incidence of false positives [2]. However, thanks to the modifications proposed in [2], the algorithm improves its performance substantially. Just as shown in [2], the first derivative of the signal is calculated as y[n] = x[n+1] − x[n−1], where x[n] is the n-th sample of the registered signal. In its original version, this array is examined until one point surpasses the slope threshold, y[i] > 0.45. This point is considered a QRS complex candidate. A QRS complex candidate is effectively a QRS complex if one of the following three measured points allows the derivative to surpass the detection threshold, that is, if y[i+1] > 0.45 or y[i+2] > 0.45 or y[i+3] > 0.45. It is important to note that this algorithm, just as it is proposed in [2], works with a sampling rate of 250 Hz; therefore the search for a QRS complex candidate is done in the following 12 ms. The threshold value for a different sampling rate must be determined with an empirical approach, as must the derivative approximation and the number of points to be considered for the identification of the QRS complex candidate. In the case of the MIT-BIH Arrhythmia database, a fixed threshold of 32 has been established (integer sample values are used) and the first derivative approximation is obtained by y[n] = x[n+4] − x[n]. Once a QRS complex candidate is found, six points are examined to validate whether this point is truly a QRS complex (a bit more than 16 ms). In Table 1 the results for this algorithm are shown for 2 records of the MIT-BIH Arrhythmia database. The results are expressed as a function of the detection of false QRS (false positives, FP) and the non-detection of true QRS (false negatives, FN), just as proposed in [AUG1995].
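The MIT-BIH variant of the original Holsinger detector described above can be sketched as follows (an illustrative implementation, not the authors' code; the derivative y[n] = x[n+4] − x[n], the fixed integer threshold of 32, and the six-point confirmation window follow the text):

```python
# Original Holsinger detector (MIT-BIH variant): a derivative sample above
# the threshold is a candidate, confirmed if any of the next six derivative
# samples also exceeds the threshold. No refractory period yet, so several
# detections per beat (false positives) are expected.

def holsinger_original(x, threshold=32, confirm_window=6):
    deriv = [x[n + 4] - x[n] for n in range(len(x) - 4)]
    detections = []
    for i, d in enumerate(deriv):
        if d > threshold and any(v > threshold
                                 for v in deriv[i + 1:i + 1 + confirm_window]):
            detections.append(i)
    return detections
```

Running this on a single synthetic upstroke yields one detection per derivative sample on the rising edge, which illustrates the false-positive problem of Table 1.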
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 393–396, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Table 1. Results obtained with the Holsinger algorithm in its original version, for some of the MIT-BIH database records.

Signal          Number of     True          False         False         (FP+FN)/NP
                Pulses (NP)   Positives     Positives     Negatives
                              (TP)          (FP)          (FN)
Reg. 118-S. 1      2278          2278         79676            0         3497.63%
Reg. 118-S. 2      2278          2278         77216            0         3389.64%
Reg. 108-S. 1       562           562          8933            0         1589.50%
Reg. 108-S. 2       562           562         17299            0         3078.11%
In Table 1 it can be seen that the number of true positives certainly corresponds to the number of heartbeats, but a large number of false positives also appears. This is basically due to the absence of a refractory period once a detection occurs. Generally, the points surrounding a point that is declared a QRS complex also have an elevated slope; therefore the possibility of also considering them QRS complexes is high.

III. VERSION 1 OF THE MODIFIED HOLSINGER ALGORITHM: INCLUSION OF A REFRACTORY PERIOD
The first modification made to this algorithm can be considered the simplest of all the possible enhancements and consists of the inclusion of a refractory period. This refractory period is a time lapse in which a QRS complex cannot occur, and its duration is determined by the physiology of the human heart [3], [4]. The cited references use refractory periods of 100 ms and 200 ms, respectively. In this work a refractory period of 200 ms will be used. This simple modification notably improves the algorithm's performance, as it markedly reduces the number of false positives. In Table 2 the results of the modified Holsinger algorithm version, using the records selected for the first evaluation, are shown.

Table 2. Results obtained with the Holsinger algorithm in its modified version 1, for some of the MIT-BIH database records.

Signal          Number of     True          False         False         (FP+FN)/NP
                Pulses (NP)   Positives     Positives     Negatives
                              (TP)          (FP)          (FN)
Reg. 118-S. 1      2278          1558           874          720           69.97%
Reg. 118-S. 2      2278          1650           798          628           62.60%
Reg. 108-S. 1       562           346           246          216           82.20%
Reg. 108-S. 2       562           490           182           72           45.20%
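The version-1 behaviour summarized in Table 2 can be sketched by adding the 200 ms refractory period to the detector above; the 360 Hz sampling rate of the MIT-BIH records (so 72 refractory samples) is an assumption of this sketch.

```python
# Holsinger detector, version 1: after a confirmed detection, skip the
# refractory period (200 ms), during which no new QRS can be declared.

def holsinger_refractory(x, threshold=32, confirm_window=6,
                         fs=360, refractory_s=0.2):
    refractory = int(fs * refractory_s)  # 72 samples at 360 Hz
    deriv = [x[n + 4] - x[n] for n in range(len(x) - 4)]
    detections = []
    i = 0
    while i < len(deriv):
        if deriv[i] > threshold and any(
                v > threshold for v in deriv[i + 1:i + 1 + confirm_window]):
            detections.append(i)
            i += refractory  # suppress detections inside the refractory period
        else:
            i += 1
    return detections
```

On two well-separated synthetic beats this now yields exactly one detection per beat, instead of one per rising-edge sample.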
In Table 2 it can be seen that a clear improvement over the previous version is obtained, based on the selected records. The virtue of the algorithm must not only be measured by the number of hits, but also by the number of times that the false existence of a value is indicated.

IV. HOLSINGER ALGORITHM MODIFIED VERSION 2: DECISION RULES USING AN ADAPTIVE THRESHOLD
A palpable weakness of the Holsinger algorithm is the use of a pre-established detection threshold. This means that in the presence of signals with high variability (something very common in the universe of possible electrocardiographic signals), erroneous results are obtained. The signal's amplitude is an example of such a variable feature. A more complete QRS detector must include one or more thresholds that can be adjusted to the characteristics of the analyzed signal, as in the case of the Hamilton–Tompkins algorithm [3]. The latter uses 2 buffers that store the last noise peaks and QRS peaks obtained. In this case, 2 buffers will be kept: one that will store the amplitude values of the latest QRS complexes, and another that stores the noise values of the signal. Every time a new sample is obtained and it is not in the refractory period, the approximation of the first derivative is compared with the threshold (Umbral). If at this point the threshold is surpassed, we are in the presence of a QRS complex candidate, and the derivative of the following 4 points is analyzed in order to validate the candidate point. Finally, if this point is indeed a QRS complex, it is stored in the QRS complex buffer. Otherwise, the presence of a noise value is confirmed and it is added to the noise buffer. The lengths of these buffers are 8 and 100, respectively. During the refractory period, the signal values are considered noise values and are added to the noise buffer. The threshold is calculated as:

Umbral = α·EQRS + (1 − α)·ER, with 0 < α < 1    (Ec. 1)
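A sketch of the adaptive threshold of (Ec. 1): two buffers of lengths 8 and 100 hold recent QRS and noise peak amplitudes, as described above. Taking EQRS and ER as the buffer means, and the particular value of α, are assumptions of this sketch.

```python
# Adaptive threshold (Ec. 1): Umbral = alpha*E_QRS + (1 - alpha)*E_R,
# with E_QRS and E_R taken here as the means of the two peak buffers
# (an assumption of this sketch; buffer lengths 8 and 100 follow the text).
from collections import deque

class AdaptiveThreshold:
    def __init__(self, alpha=0.75):
        self.alpha = alpha
        self.qrs_buffer = deque(maxlen=8)      # last QRS peak amplitudes
        self.noise_buffer = deque(maxlen=100)  # last noise peak amplitudes

    def update(self, peak, is_qrs):
        (self.qrs_buffer if is_qrs else self.noise_buffer).append(peak)

    def value(self):
        e_qrs = sum(self.qrs_buffer) / len(self.qrs_buffer)
        e_r = sum(self.noise_buffer) / len(self.noise_buffer)
        return self.alpha * e_qrs + (1 - self.alpha) * e_r
```

As new QRS and noise peaks arrive, the threshold tracks the signal: a drop in QRS amplitude lowers the threshold accordingly instead of missing beats, which is the point of version 2.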
At t > 0, a pressure Pac is superimposed on the constant hydrostatic/equilibrium pressure P0. Assuming that the main source of nonlinearity is the nonlinear oscillation of the insonated bubbles, the propagation of US waves through the UCA can be described by the linear-wave equation for an inhomogeneous medium, given as

∂²P/∂x² − (1/c0²)·∂²P/∂t² = −ρ0·N·∂²V/∂t²    (6)

where V is the volume of a single bubble and N is the number of diluted bubbles per liter of liquid [7]. In the Burgers' equation (3), δ is the diffusivity of US, c0 is the speed of sound, ρ0 is the ambient density, and τ = t − x/c0 is the retarded time frame variable of an observer traveling with the wave front; the quantity Γ = βεk/α is the ratio of the nonlinearity to the attenuation. The volume change of a spherical microbubble during oscillation can be described as
J.J.F.A.H. Grootens, M. Mischi, M. Böhmer, H.H.M. Korsten, R.M. Aarts
∂²V/∂t² = (4π/3)·∂²R³/∂t² = 4π·(2R·Ṙ² + R²·R̈)    (7)
B. Simulations

The propagation of an US wave through nonlinear media can be simulated using the Burgers' equation (3). This is shown in Fig. 1 for different propagation distances; the solid, dotted and dashed curves represent an US wave propagating through a nonlinear medium for increasing distance from the transducer. When either the attenuation or the nonlinearity is zero, an exact solution of the Burgers' equation can be derived. Otherwise, the solution has to be approximated. For a single-frequency source, an approximated solution of the Burgers' equation is given by Hamilton [2] as

P(x, τ) = Σ_{n=1}^{∞} bn(x)·sin(n·ω0·τ)    (8)
where bn are the Fourier coefficients representing the spectral amplitudes of the n-th harmonic. In case the attenuation exceeds the nonlinearity (Γ < 1), the first two spectral coefficients can be approximated as

b1 = e^(−αx) − (1/32)·Γ²·e^(−αx)·(1 − e^(−2αx))² + O(Γ⁴),
b2 = (1/4)·Γ·(e^(−2αx) − e^(−4αx)) + O(Γ³).    (9)
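The truncated series in (9) is straightforward to evaluate; a sketch (higher-order terms dropped, names illustrative):

```python
# Spectral coefficients of Eq. (9) for the fundamental (b1) and second
# harmonic (b2), valid for gamma < 1 (attenuation dominating nonlinearity).
import math

def spectral_coefficients(gamma, alpha, x):
    e = math.exp(-alpha * x)
    b1 = e - (gamma ** 2 / 32.0) * e * (1 - e ** 2) ** 2
    b2 = 0.25 * gamma * (e ** 2 - e ** 4)
    return b1, b2
```

Fitting these expressions to the measured fundamental and second-harmonic amplitudes is what allows Γ, the nonlinearity-to-attenuation ratio, to be extracted from the data.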
Equation (9) is suitable for deriving the nonlinearity coefficient by fitting the approximated solution (8) to the experimental measurements. In fact, this curve fitting gives the spectral coefficients bn, from which Γ can be extracted. An analytical solution of equation (6) cannot be obtained; therefore it is solved numerically by discretizing P(x,t) and R(x,t) and mapping them to a spatial and temporal grid. Hence, equation (6) can be solved numerically using
P(i,j) = 2·P(i,j−1) − P(i,j−2) + c0²·Δt²·[P(i+1,j−1) − 2·P(i,j−1) + P(i−1,j−1)]/Δx² + 4π·ρ0·c0²·Δt²·[2R·Ṙ² + R²·R̈](i,j−1)    (10)
where P(i,j) and R(i,j) are the discretized pressure and bubble radius, respectively, at each point in space i and time j, and Δt and Δx are the temporal and spatial step sizes. The US wave propagation is then simulated by alternately solving equation (10) to obtain Pac and substituting this into equation (5) for each time step i to calculate the bubble radius and its derivatives [8].

Fig. 1: Simulated propagation of an US wave using the Burgers' equation.

C. Measurement setup

To validate the models, a measurement setup was built for the measurement of the nonlinear distortion of an US wave propagating through different dilutions of UCA (Fig. 2). This setup consists of a large tank filled with water. A single-element US transducer (Panametrics V360) with a nominal resonance frequency of 2.25 MHz ± 0.5 MHz is mounted through the wall of the tank. A Hanning-windowed pulse of 20 sine cycles is designed using Labview® (National Instruments) and uploaded to a waveform generator (Agilent 33220A). The waveform is amplified using an RF power amplifier (ENI 240L) before it is transmitted to the transducer. Acoustically transparent tubes (SpectraPor) containing different UCA concentrations are placed at the transducer focus D so as to have the maximal pressure in the contrast agent dilution. The diameter of these tubes is 22 mm. The focal distance, defined as D = r²/λ, with r the diameter of the transducer and λ the US wavelength, is equivalent to the transition between the Fresnel and the Fraunhofer field [6]. The UCA in the tubes consists of a dilution of Luminity™ in saline with concentrations up to 0.2%. Aligned with the transducer, a hydrophone (Onda HGL-0400) with a bandwidth of 250 kHz to 20 MHz is used to measure the US waves that have passed through the tube. The hydrophone is coupled to an amplifier (Onda AH-2010-025) and a National Instruments A/D interface (NI-5122). The data acquisition is programmed in Labview®.

Fig. 2: Schematic overview of the measurement setup.
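One time level of the update rule (10) can be sketched as follows, with the bubble source term 2R·Ṙ² + R²·R̈ passed in as a precomputed array; the fixed boundaries and the stability requirement c0·Δt/Δx ≤ 1 are simplifying assumptions of this sketch, and all names are illustrative.

```python
# One explicit finite-difference time step of Eq. (10): the new pressure is
# built from the two previous time levels, the spatial Laplacian, and the
# (precomputed) bubble-oscillation source term.
import math

def step_wave(p_prev, p_prev2, source, c0, dt, dx, rho0):
    n = len(p_prev)
    p_next = list(p_prev)  # boundary values kept fixed for simplicity
    for i in range(1, n - 1):
        lap = (p_prev[i + 1] - 2 * p_prev[i] + p_prev[i - 1]) / dx ** 2
        p_next[i] = (2 * p_prev[i] - p_prev2[i]
                     + c0 ** 2 * dt ** 2 * lap
                     + 4 * math.pi * rho0 * c0 ** 2 * dt ** 2 * source[i])
    return p_next
```

In the full scheme this step would alternate with a bubble-dynamics solve (equation (5)) that refreshes the source array from the local acoustic pressure.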
Modeling of ultrasound propagation through contrast agents
All measurements are performed with a low US mechanical index of 0.1 at the transducer focus to avoid bubble collapse. The frequency range is 0.5 – 3.5 MHz with steps of 0.5 MHz, which is around the resonance frequency of Luminity™.
The combination of the modified RPNNP and the linear-wave equations can provide a better prediction of the nonlinear propagation of US waves through UCA.
III. RESULTS

From the measurements it can be seen that the nonlinearity of the US wave increases with increasing UCA concentration. Furthermore, for frequencies close to the microbubble resonance (2-2.5 MHz) and UCA concentrations above 75 µL/L, a phase shift arises in the second harmonic of the US wave. This is shown in Fig. 3 for a high UCA concentration (300 µL/L) at a frequency close to the microbubble resonance frequency (2.5 MHz). Using the Burgers’ equation, the US wave propagation is simulated for different propagation distances as shown in Fig. 1. Using equation (9), the ratio of nonlinearity can be determined from a given US wave with an accuracy of 5% when the nonlinearity is in the range of 0.3

c_R1(x, y) = #{(u, v) ∈ R1 | f(x, y) − f(u, v) > q}, (1)

c_R2(x, y) = #{(u, v) ∈ R2 | f(x, y) − f(u, v) > q}, (2)

where f(x, y) denotes the pixel intensity.
Using the counters introduced above we can define a matrix depending on the threshold q,
M(q) = [α_ij], 0 ≤ i ≤ m, 0 ≤ j ≤ n, (3)

where m and n are the number of pixels in regions R1 and R2, and α_ij is the number of those pixels where the inner counter (the number of pixels in region R1 whose difference to the central pixel is greater than the threshold) equals i and the outer counter equals j (see Eq. 1 and Eq. 2),

α_ij = #{(x, y) | c_R1(x, y) = i ∧ c_R2(x, y) = j; (x, y) ∈ Lx × Ly}, (4)

where Lx × Ly is the image plane, i.e. the size of the input ROI. The originally suggested features extracted from the M matrix are basically moments. Four features were suggested: the Horizontal Weighted Sum (HWS), the Vertical Weighted Sum (VWS), the Diagonal Weighted Sum (DWS) and the Grid Weighted Sum (GWS). Due to page limitations only the HWS is shown here; for the other parameters see [2]. With the total number of counted pixels N and the normalized matrix r(i, j),

N = Σ_{i=0}^{m} Σ_{j=0}^{n} α(i, j), (5)

r(i, j) = α(i, j)/N if α(i, j) > 0, and 0 otherwise, (6)

the HWS is defined as

HWS = Σ_{i=0}^{m} Σ_{j=0}^{n} j² r(i, j). (7)

It turned out that good results could be achieved in ROI detection using this method with the moments as features, but the shape and size of the microcalcifications in the cluster cannot be measured based on these features. In the next chapter we will show that if we keep the concept of the M matrix but define different features, then the detection and the microcalcification characterization can be done using the same (new) parameters.

III. THE PROPOSED NEW FEATURES

As shown in the previous chapter, the originally suggested parameters can be used for ROI detection, but they are not well suited for analyzing the shape of the microcalcifications. For the sake of clarity, visualizations of the M matrices of artificial microcalcification clusters are shown in Fig. 2 and Fig. 4. Three different shapes were used to form clusters on a homogeneous background. The three shapes are common in real mammograms: the first was “punctate” (small, round), the second “fine linear branching”, the third “pleomorphic” (subtype “big round”). Two of the M matrices are shown in Fig. 2; one was derived from a cluster of punctate microcalcifications, the other from big regular ones.

Fig 2. Two artificial M-matrices (left: from a cluster of punctate microcalcifications, right: from a cluster of big regular microcalcifications)
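A minimal sketch of Eqs. 1-7 follows. It uses a simplified region geometry (a single centre pixel with two nested square regions) rather than the paper's exact w1/w2/w3 windows, and the window sizes and threshold are illustrative toy values:

```python
import numpy as np

def srdm_matrix(img, w1=3, w2=5, q=10):
    """Build the M(q) matrix: alpha[i, j] counts pixels whose inner-region
    counter equals i and outer-region counter equals j (Eqs. 1-4).
    Simplified geometry; w1 < w2 odd, values illustrative."""
    h, w = img.shape
    r1, r2 = w1 // 2, w2 // 2
    m = w1 * w1 - 1               # maximal inner count
    n = w2 * w2 - w1 * w1         # maximal outer count
    alpha = np.zeros((m + 1, n + 1), dtype=int)
    for y in range(r2, h - r2):
        for x in range(r2, w - r2):
            c = int(img[y, x])
            inner = img[y - r1:y + r1 + 1, x - r1:x + r1 + 1]
            outer = img[y - r2:y + r2 + 1, x - r2:x + r2 + 1]
            ci = np.count_nonzero(c - inner > q)       # inner counter c_R1
            co = np.count_nonzero(c - outer > q) - ci  # outer counter c_R2
            alpha[ci, co] += 1
    return alpha

def hws(alpha):
    """Horizontal Weighted Sum (Eq. 7) over the normalized matrix r = alpha/N."""
    r = alpha / alpha.sum()
    j = np.arange(alpha.shape[1])
    return float((r * j**2).sum())

img = np.zeros((32, 32), dtype=int)
img[16, 16] = 100                 # one bright "microcalcification" pixel
M = srdm_matrix(img, q=10)
print(hws(M))
```

The single bright pixel dominates both counters, so its vote lands in the lower-right part of M, exactly the region the new features of the next section exploit.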
It can be seen that for different microcalcification shapes, different parts of the M matrix are dominant. In these two cases the difference is so large that the moments of the M matrix (HWS, VWS, etc.) can properly characterize the difference.

B. Pataki, L. Lasztovicza

In Fig. 4 the M matrix of the artificial “fine-linear-branching” type microcalcification cluster is shown. The difference between this and, e.g., the punctate type is definite but not as large; therefore, the moments originally suggested for feature extraction do not reflect it properly.
Fig 4. Artificial M-matrix from “fine-linear-branching” type microcalcifications

We defined new parameters (features) which can reflect these smaller differences as well. The basic idea is to use the M matrices of different artificial microcalcifications (M_Mask) as masks for the M matrices calculated from real mammographic images (M_Img). The upper left corner of the matrix carries no real information; e.g., the upper left corner point corresponds to pixels which are not brighter than any of the surrounding pixels. Therefore the upper rows and the leftmost columns are discarded first; only the lower right quarter of the matrix is used in the feature extraction (in Eq. 8 the summation starts from i1 and j1). In the next step the sum of the points masked by the different artificial cluster M-matrices is compared to the total remaining sum. In Eq. 8, m_Mask(i, j) = 1 if α_Mask(i, j) > 0; otherwise its value is 0.

Feature_Img(Mask) = Σ_{i=i1}^{m} Σ_{j=j1}^{n} α_Img(i, j) m_Mask(i, j) / Σ_{i=i1}^{m} Σ_{j=j1}^{n} α_Img(i, j), (8)

This feature depends on the mask used; therefore different features can characterize the probability of the presence of punctate, linear branching, big round, etc. microcalcifications. If the mask used comes from, e.g., a punctate case (artificial or typical), then the feature shows the presence of such microcalcifications. These new parameters have a further nice property: they are nearly linear, in the sense that if a given background contains k microcalcifications of one type in a ROI and 2k of the same type in another ROI, then the parameter value will be nearly two times higher. (Of course this depends on the background and sometimes on the relative position of the calcifications, so it is only qualitatively correct.) Therefore we get some hints about the number of microcalcifications in the cluster.

IV. EXPERIMENTS AND RESULTS

A set of microcalcification clusters was formed from the DDSM database [4]. The set contains 49 image parts of size 350×350 pixels cut from the 50 μm/pixel resolution full mammographic images. An 8-bit gray dynamic range was used. The types of the clusters were taken as given in the database (specified by experienced radiologists). For comparison, some normal parts of the same images were cut as well.

Table 1 Cluster types in the test set

Type of cluster or normal           The original case benign    The original case malignant
Normal tissue                       8                           7
Punctate or amorphous               6                           9
Pleomorphic (big and round type)    -                           16
Fine linear branching               2                           1
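The mask feature of Eq. 8 can be sketched directly; the matrix sizes and the cut-off indices i1, j1 below are toy values, not those used in the experiments:

```python
import numpy as np

def mask_feature(alpha_img, alpha_mask, i1, j1):
    """Feature of Eq. 8: fraction of the lower-right-quarter votes of the
    image M-matrix that fall inside the support of a mask M-matrix."""
    a = alpha_img[i1:, j1:]
    m = (alpha_mask[i1:, j1:] > 0).astype(a.dtype)  # binary m_Mask(i, j)
    total = a.sum()
    return float((a * m).sum() / total) if total else 0.0

# Toy 6x6 matrices with i1 = j1 = 3, illustrative values only
alpha_img = np.zeros((6, 6), dtype=int)
alpha_img[4, 4] = 6
alpha_img[5, 3] = 2
alpha_mask = np.zeros((6, 6), dtype=int)
alpha_mask[4, 4] = 1                                # mask covers (4, 4) only
print(mask_feature(alpha_img, alpha_mask, 3, 3))    # 6 of 8 votes are masked
```

Because the feature is a ratio of vote sums, it stays in [0, 1] regardless of image size, which is what makes the near-linearity in the number of calcifications plausible.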
Extending Mammographic Microcalcification Detection Method to Cluster Characterization

The free parameters of the srdm method were chosen as follows: w1=11, w2=21, w3=31 and q=7. (In this case n=520, m=320.) The size of the surrounding regions was chosen according to the fact that microcalcifications under 0.5 mm in size are considered to be a sign of malignancy; this gives 11 pixels (at the 50 μm/pixel resolution) as the inner core. On the other hand, big microcalcifications (larger than 1.5-2 mm in diameter) are usually benign. The threshold was optimized and set to 7 (in the 8-bit dynamic range) in a pilot study. For calculating the new features, the lower right 150×150-element quarter of the M matrices was used (i1=170, j1=370). We calculated three parameters using the three M matrices of artificial clusters (punctate, big round and fine linear branching) as masks; the masks shown in Fig. 2 and Fig. 4 were used.

In Fig. 5 the features of different cluster types are shown. The feature showing the presence of punctate microcalcifications is on the horizontal axis; the feature showing the presence of big round ones is on the vertical axis. The different image types form groups in the picture. (Of course there are exceptions; e.g., one normal tissue sample has a relatively high “punctate microcalcifications present” parameter.) In Fig. 6 the features of the different cluster types are shown using another two of the three parameters. The feature showing the presence of punctate microcalcifications is on the horizontal axis again, but the feature showing the presence of linear branching microcalcifications is on the vertical axis. It can be seen that the different image types form groups in the picture. In Fig. 7 the presence-of-punctate-microcalcifications feature is shown for clusters having 0, 1, ..., 6 microcalcifications. The parameter value is nearly proportional to the number of microcalcifications present.

Fig 5. The features of different clusters using the punctate and the pleomorphic (big round) masks

Fig 6. Features of different clusters using the punctate and the linear branching masks

Fig 7. The feature using the punctate mask depending on the number of microcalcifications

V. CONCLUSIONS

The importance of the microcalcification clusters found in mammographic images depends heavily on the characteristics of the cluster (size, shape and number of the calcifications). In this paper we applied an effective method (srdm) originally developed for microcalcification detection purposes only. It was shown that, by modifying the features used, the method can provide features that characterize the type and even the number of the calcifications as well. Therefore the detection can be more robust, the probability of malignancy can be judged, and unimportant clusters can be omitted. Moreover, the resulting complex detection-analysis process can be faster, because we use the same parameters in the two phases.

REFERENCES

1. Papadopoulos A, Fotiadis DI, Likas A (2005) Characterization of clustered microcalcifications in digitized mammograms using neural networks and support vector machines. Artificial Intelligence in Medicine 34:141–150
2. Kim JK, Park HW (1999) Statistical textural features for detection of microcalcifications in digitized mammograms. IEEE Transactions on Medical Imaging 18(3):231–238
3. Lasztovicza L, Pataki B (2008) ROI Selection In Microcalcification. 19th Int. EURASIP Conf. BIOSIGNAL 2008, Brno, Czech Republic, June 29 – July 1, 2008, pp 789–792
4. Heath M, Bowyer K, Kopans D, Moore R, Kegelmeyer WP (2001) The digital database for screening mammography. In: Yaffe M (ed) Proceedings of the Fifth International Workshop on Digital Mammography. Medical Physics Publishing, pp 212–218

Author: Pataki B.
Institute: Budapest University of Technology and Economics, Dept. of Measurement and Information Systems
Street: Magyar Tudosok krt. 2.
City: Budapest
Country: Hungary
Email: [email protected]
Medical feature matching and model extraction from MRI/CT based on the Invariant Generalized Hough/Radon Transform

D. Hlindzich¹, R. Maenner¹

¹ Institute for Computational Medicine (ICM), University of Heidelberg, Mannheim, Germany
Abstract — In this paper we present a variation of the Generalized Hough Transform (GHT) for automatic feature matching and model extraction. We propose a two-dimensional algorithm with a two-reference-point parameterization (Dual-Point GHT) that is invariant to rotation and uniform scaling and uses the specificities of both the generalized Hough and Radon transforms. The method operates with two-dimensional accumulators, which strongly decreases the required memory size. We realize the algorithm on Graphics Processing Units (GeForce 8800GTX/nVidia CUDA) and apply it to MRI/CT cardiac shape extraction as an initial step for further medical image segmentation. Keywords — Medical object recognition, Generalized Hough Transform, Radon Transform, MRI.
I. INTRODUCTION

Object recognition is a vital problem of image analysis that has been intensively investigated over the last decades. A wide and developing branch of object recognition deals with Hough Transform based methods. While they have several advantages compared to common matching techniques, such as noise resistance, tolerance to boundary gaps and robustness to object overlap, they give rise to complications induced by memory consumption and computational complexity. Originally the Hough Transform was proposed to extract straight lines in a particle-track recognition procedure (Hough, 1962). It was popularized in image analysis after the work of Duda and Hart (1972). The authors proposed to use the Hough Transform for fitting more general curves and introduced the common rho-theta parameterization for representing lines that was already standard for the Radon Transform (Radon, 1917). Later, the Hough Transform was extended to the detection of quadratic curves and to the extraction of general shapes (Ballard, 1981). A number of optimizations and modifications have been proposed in this area up to the present, such as the Fast Hough Transform (Li et al., 1986), the Adaptive Hough Transform (Illingworth and Kittler, 1987), the Hierarchical Hough Transform (Princen et al., 1989) and the Randomized Hough Transform (Xu and Oja, 1993).
In this paper we consider a two-dimensional variation of the GHT for medical object recognition, the so-called Dual-Point GHT (DPGHT). The main feature of the approach is the usage of two reference points in the procedure of template encoding and object extraction. The idea of using two reference points for invariant generalized object recognition was first proposed by Yip et al. (1995), but unfortunately their algorithm contained a quite complicated accumulation procedure using supplementary invariant tables. Another drawback of the algorithm was the usage of two-dimensional arrays for the accumulation of actually four-dimensional parameters, which was a coarse projection approximation and gave rise to errors during the object extraction process in practice. Several modifications of the DPGHT were also introduced in the works of Chau and Siu (1999, 2004).

II. THE GENERALIZED HOUGH TRANSFORM

Let X ⊂ ℝᴺ be an N-dimensional set of spatial coordinates and 𝕀 = {I(x): X → ℝ} the space of all images defined on X. We also consider 𝒫 ⊂ ℝᴹ as the M-dimensional parameter space and the constraint function

C(x, p): (X, 𝒫) → ℝ,
that defines a desired template. The template represents a parametric subset of points in the set of spatial coordinates that satisfy the equation C(x, p) = 0 for p ∈ 𝒫. Each parameter value p ∈ 𝒫 defines some geometrical transformation of the template in the space X. The Radon Transform (RT) in its general form is a mapping from the image and parameter spaces into the set of real numbers: RT_C: (𝕀, 𝒫) → ℝ,
and may be defined by the formula:

RT_C(I, p) = ∫_X I(x) δ(C(x, p)) dx, (1)

J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 608–612, 2008
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
where δ(x) is the Dirac delta function and δ(C(x, p)) is the transformation kernel. Originally the RT was defined as the integral of a function over hyperplanes specified by the parameter p ∈ 𝒫. In the context of the Radon Transform, the GHT can be specified as its discretization in the case of binary images, i.e. when 𝕀 = {I(x): X → {0, 1}}. However, such a theoretical interpretation, first given by Deans (1981), does not explicitly consider the structure of the practical realization of the GHT and the idea of the voting process; so-called reading/writing paradigms (van Ginkel et al., 2004) have been proposed for a better explanation of the difference in the calculation of the RT and the GHT. The main feature of the practical realization of the GHT is the use of the voting mapping, a mapping from the set of spatial coordinates X to the space of all subsets of the parameter space:
H_C: X → 2^𝒫,  2^𝒫 = {P* : P* ⊂ 𝒫}.

We define the voting mapping by the formula:

H_C(x) = {p ∈ 𝒫 : C(x, p) = 0}, x ∈ X.
It means that for each point x ∈ X the voting mapping defines the set of parameters for which x has voted. Using this definition the GHT may be written in the form:

GHT_C(I, p) = ∫_X I(x) δ(ρ(H_C(x), p)) dx, (2)

where ρ(P*, p): (2^𝒫, 𝒫) → ℝ is the distance between the parameter p and the subset P* in the parameter space. In this context Eq. (2) defines the same transformation as Eq. (1), but this view will allow us to emphasize the features of the GHT calculation in practice. The algorithm that realizes the GHT in practice works with discrete spaces X and 𝒫. In this case the transformation (2) can be written in the form:

GHT_C(I, p) = Σ_{x ∈ X: I(x) = 1, p ∈ H_C(x)} 1. (3)

From Eq. (3) it can be seen that each value GHT_C(I, p) equals the number of votes given to the parameter p ∈ 𝒫 by all object points of the image, i.e. x ∈ X with I(x) = 1. Therefore, to optimize the calculation we build an accumulator array that stores all the votes using only one pass through the object points of the image. The subsequent calculation of the value GHT_C(I, p) for any p ∈ 𝒫 is then fulfilled in constant time by accessing the corresponding element of the accumulator array. This is the main idea of the Hough formalism, which makes the GHT a computationally effective but, on the other hand, memory-consuming method. It should also be noticed that the GHT is preferable for tasks where vote numbers need to be found for a large number of parameters (e.g. a global maximum search) or where the set of image object points is relatively sparse. The discrete RT approach is preferable when vote numbers need to be calculated only for certain parameters in combination with a large number of image object points.

III. THE DPGHT-BASED OBJECT RECOGNITION

Similarly to the conventional DPGHT (Yip et al., 1995), in the first step of the algorithm two reference points P1 = (P1x, P1y) ∈ X and P2 = (P2x, P2y) ∈ X are selected. A template model is represented in so-called (α, β)-table format. This means that every boundary point C ∈ X of the template model is encoded as a pair (α, β), where α is the angle between the vectors P1C and P1P2, and β is the angle between P2C and P1P2, as shown in Fig. 1.
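The voting form of Eq. (3) can be made concrete for one specific constraint. The sketch below uses circles of a known radius, C(x, p) = |x − p| − r = 0, so each object pixel votes for all centre candidates on a circle around it; the image, radius and arc are illustrative toy values, not from the paper:

```python
import numpy as np

def ght_circle_centres(binary_img, r):
    """Accumulate votes of Eq. (3) for circle centres of known radius r."""
    h, w = binary_img.shape
    acc = np.zeros((h, w), dtype=int)            # accumulator over p = (cy, cx)
    angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
    ys, xs = np.nonzero(binary_img)              # object points x with I(x) = 1
    for y, x in zip(ys, xs):
        cy = np.round(y + r * np.sin(angles)).astype(int)
        cx = np.round(x + r * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)      # one vote per member of H_C(x)
    return acc

# A 5-pixel arc of a circle centred at (20, 20) with r = 10
img = np.zeros((40, 40), dtype=np.uint8)
for a in np.linspace(0, np.pi / 2, 5):
    img[int(round(20 + 10 * np.sin(a))), int(round(20 + 10 * np.cos(a)))] = 1
acc = ght_circle_centres(img, 10)
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(peak)
```

The accumulator peak lands near the true centre even though only a short arc is present, which is the gap-tolerance property mentioned in the introduction.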
Fig. 1 Parameters used in the template model encoding of the GHT (left) and the scheme of (P1,P2) votes calculation (right)
The resulting pair is stored in the (α, β)-table in the row defined by the angle γ between the boundary tangent vector calculated at point C and the vector CP1.

Table 1 Example of an (α, β)-table
γ          Angles
0          (α11, β11) (α12, β12) ... (α1n1, β1n1)
Δγ         (α21, β21) (α22, β22) ... (α2n2, β2n2)
...        ...
(m−1)Δγ    (αm1, βm1) (αm2, βm2) ... (αmnm, βmnm)
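The (α, β)-table encoding just described can be sketched as follows; the template boundary, its tangents and the reference points are illustrative placeholders:

```python
import math

def angle_between(u, v):
    """Unsigned angle between two 2D vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    return math.acos(max(-1.0, min(1.0, dot / (math.hypot(*u) * math.hypot(*v)))))

def encode(boundary, tangents, p1, p2, d_gamma=math.radians(5)):
    """Store each boundary point C as (alpha, beta), indexed by the row
    k = floor(gamma / d_gamma), gamma = angle(tangent, C->P1)."""
    base = (p2[0] - p1[0], p2[1] - p1[1])        # vector P1->P2
    table = {}
    for c, t in zip(boundary, tangents):
        v1 = (c[0] - p1[0], c[1] - p1[1])        # P1->C
        v2 = (c[0] - p2[0], c[1] - p2[1])        # P2->C
        alpha = angle_between(v1, base)
        beta = angle_between(v2, base)
        gamma = angle_between(t, (p1[0] - c[0], p1[1] - c[1]))
        table.setdefault(int(gamma // d_gamma), []).append((alpha, beta))
    return table

# Unit-square template with both reference points inside (illustrative)
boundary = [(0, 0), (1, 0), (1, 1), (0, 1)]
tangents = [(1, 0), (0, 1), (-1, 0), (0, -1)]
tbl = encode(boundary, tangents, p1=(0.4, 0.5), p2=(0.6, 0.5))
print(sorted(tbl.keys()))
```

Because only angles relative to the baseline P1P2 and the tangent are stored, the encoding is unchanged under rotation and uniform scaling of the template, which is the invariance the method relies on.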
A. The DPGHT Projection Analysis

In the first step of the object recognition procedure, so-called P1- and P2-projection analyses are carried out. The P1- and P2-projections of the DPGHT are orthogonal integral projections of the function GHT_C(I, P), P = (P1x, P1y, P2x, P2y) ∈ 𝒫 = X², specified by Eq. (3), onto the subspaces Π1 = {(P1x, P1y)} and Π2 = {(P2x, P2y)} correspondingly, and are defined as follows:

GP_C(I, P_k) = Σ_{(P_k′x, P_k′y) ∈ Π_k′} GHT_C(I, P), k ∈ {1, 2}, k′ = 3 − k.

In order to calculate these projections, we allocate two 2D arrays for the accumulation of votes for P1 and P2. At every image object point a = (ax, ay) the boundary tangent vector T = (Tx, Ty) is calculated, and the coordinates of the reference points that should receive votes are defined by the following equations:

A1 P1x + B1 P1y + C1 = 0,  A2 P2x + B2 P2y + C2 = 0, (4)

where the coefficients are specified as:

(A1, B1)ᵀ = ( sin γ′  −cos γ′ ; cos γ′  sin γ′ ) (Tx, Ty)ᵀ,  C1 = −(A1 ax + B1 ay),
(A2, B2)ᵀ = ( sin γ″  −cos γ″ ; cos γ″  sin γ″ ) (Tx, Ty)ᵀ,  C2 = −(A2 ax + B2 ay),  γ″ = γ′ + β − α, (5)

for every pair (α, β) and the corresponding value γ′ = kΔγ, k = 0, 1, ..., m−1, from the (α, β)-table. Thus all reference points located on the two lines l1 and l2 defined by equations (4) and (5) receive votes. Geometrically this means that for every a, (α, β) and γ′ the equations reconstruct the lines intersecting at point a and directed towards P1 and P2, which are encoded relative to the tangent and defined by α, β and γ′. The lines specify the sets of potential positions of the reference points in the two accumulator arrays, invariantly to rotation and uniform scaling of the initial template model. Once this process is finished for all image object points, the areas of the arrays with a high number of intersections (more than a given threshold τ) indicate parameter regions with a high probability of an accumulator peak in four dimensions. We assign the corresponding sets of reference points as S1(τ) ⊂ Π1 and S2(τ) ⊂ Π2.

B. The four-dimensional DPGHT parameter analysis

In the second step we analyse the parameters P ∈ 𝒫 such that (P1x, P1y) ∈ S1(τ) and (P2x, P2y) ∈ S2(τ) in order to find the precise votes peak in four dimensions. The implementation of this procedure relates to the Radon Transform owing to the concept of direct calculation of the GHT values for specified P ∈ 𝒫, which is effective here due to the small number of parameter points under investigation and the low memory consumption. Using the (α, β)-encoding of the template model and combining the Radon and Hough paradigms, we significantly optimize this calculation in comparison with Eq. (1). At every fixed reference point P1 ∈ S1(τ) we accumulate votes for the pairs (P1, P2), P2 ∈ S2(τ), using a two-dimensional accumulator array in the following way. For every image object point a ∈ X we calculate the boundary tangent vector T and the angle γ between the vectors aP1 and T. For every pair (α, β) from the row of the (α, β)-table specified by γ, the reference point P2 that should receive a vote is defined as the intersection of the two lines passing through the points P1 and a with angles α and (β − α) to the vector aP1 correspondingly, as shown in Fig. 1. The maximum in the resulting array for the current P1 indicates a candidate pair (P1, P2), P2 ∈ S2(τ). Once the process is finished for all P1 ∈ S1(τ), the pair (P1*, P2*) with the maximum number of votes among all candidate pairs defines the desired reference points with the global votes peak in four dimensions. Thus, the algorithm analyses the votes of all K = |S1(τ)|·|S2(τ)| four-dimensional parameters in O(|S1(τ)|·N·M) operations, operating only with a two-dimensional accumulator array. Finally, having the estimates of the two reference points P1* = (P1x, P1y), P2* = (P2x, P2y) and the (α, β)-table, we reconstruct the points R = (Rx, Ry)ᵀ of the detected object as intersections of lines passing through P1 and P2 with angles α and β to the vector P2P1 correspondingly (see Fig. 1).

IV. RESULTS

We evaluate the proposed algorithm on short- and long-axis cross-sectional views of cardiac MRI. At first, a template model is formed and encoded using reference points that are chosen manually. We choose these points inside the heart template model depending on the oblongness and
other geometrical features of the shape. Before the object recognition procedure, the analyzed image is pre-processed using denoising and Canny edge detection filters.
Fig. 2 Template model of heart (short axis). Left and right ventricles

The accuracy of the algorithm also depends on the threshold τ that defines the sizes of the projection sets S1(τ) and S2(τ). Using high values of τ we can significantly increase the speed of the accumulator peak search, but this also increases the probability of missing the optimal pair (P1*, P2*).
A combination of both the Hough and Radon paradigms for vote calculation gives the algorithm low memory consumption, which is convenient for its implementation on Graphics Processing Units (GPUs). It should be said that the low memory usage is partially achieved at the expense of a decrease in computational speed. Though direct Hough-based vote accumulation may give better timing results, it is unfortunately unsuitable for parallelization on GPUs and for application on conventional computer systems because of its extremely high memory usage. We realized the proposed DPGHT-based algorithm on a GPU (GeForce 8800GTX) using nVidia CUDA™ technology and compared it with the direct approach realized on a Pentium 4 3.0 GHz system for a set of cardiac MRI images (128×128 pixels, for memory saving). The tests have shown that the proposed object recognition approach has a speed advantage by a factor of 50-60.
Fig. 5 Automatic heart model extraction from the cardiac MRI (short axis) using the proposed DPGHT approach
Fig. 3 Surface plots of the P1-projection (left) and P2-projection (right)

In our experiments we used a pair of thresholds τ1 and τ2 for S1 and S2 correspondingly, and selected their values as λ multiplied by the maximum projection value, where λ was chosen from the interval [0.75, 0.95].
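The selection of the candidate reference-point sets from a projection accumulator can be sketched in a few lines; the accumulator values below are toy numbers:

```python
import numpy as np

# Keep accumulator cells whose vote count exceeds lambda * max, with
# lambda in [0.75, 0.95] as described in the text.
def candidate_set(projection, lam=0.85):
    tau = lam * projection.max()
    ys, xs = np.nonzero(projection > tau)
    return list(zip(ys.tolist(), xs.tolist()))

proj = np.zeros((8, 8))
proj[3, 3], proj[3, 4], proj[6, 1] = 10.0, 9.0, 5.0   # toy vote counts
print(candidate_set(proj, lam=0.85))                  # -> [(3, 3), (3, 4)]
```

Lowering λ widens the candidate sets (and slows the four-dimensional search), which is exactly the speed/robustness trade-off the text describes.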
Fig. 4 The DPGHT Projection Analysis. Areas of the most probable reference point locations: P1-projection (left) and P2-projection (right)
V. CONCLUSIONS

In this paper we considered a two-dimensional variation of the Dual-Point GHT for medical object recognition. The main feature of the approach is the utilization of a two-reference-point parametrization and the specificities of both the generalized Hough and Radon transforms. We introduced a new object recognition procedure and an (α, β)-based model encoding. The accumulator peak search consists of two steps and estimates a global maximum in the four-dimensional parameter space using two-dimensional arrays. In the first step we find the areas of the most probable reference point locations using a projection analysis of the parameter space and the Hough formalism for vote accumulation. In the second step we calculate the precise values of the parameters with the maximum number of votes using a combination of the Radon and Hough paradigms. We evaluated the proposed
method on MRI/CT cardiac data. The algorithm was parallelized and realized on Graphics Processing Units using nVidia CUDA™ technology.
REFERENCES

1. Radon J., Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten. Berichte Sächsische Akademie der Wissenschaften, 69:262–277, Leipzig, 1917.
2. Hough P.V.C., Method and means for recognizing complex patterns. U.S. Patent 3069654, 1962.
3. Duda R.O. and Hart P.E., Use of the Hough Transformation to detect lines and curves in pictures. Comm. ACM, pp. 11–15, 1972.
4. Ballard D.H., Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition, pp. 111–122, 1981.
5. Deans S.R., Hough transform from the Radon transform. IEEE Transactions on Pattern Analysis and Machine Intelligence, 3(2):185–188, March 1981.
6. Li H., Fast Hough transform for multidimensional signal processing. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’86), pp. 2063–2066, 1986.
7. Illingworth J. and Kittler J., The Adaptive Hough Transform. IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 9, No. 5, pp. 690–698, 1987.
8. Leavers V.F., The Dynamic Generalized Hough Transform. Proceedings of the First European Conference on Computer Vision, pp. 592–594, 1990.
9. Xu L. and Oja E., Randomized Hough Transform (RHT): basic mechanisms, algorithms, and computational complexities. CVGIP: Image Understanding, vol. 57, no. 2, pp. 131–154, March 1993.
10. Yip R.K.K., Tam P.K.S., Leung D.N.K., Modification of Hough Transform for object recognition using a 2-dimensional array. Pattern Recognition 28(11), pp. 1733–1744, 1995.
11. Chau C.P. and Siu W.C., Generalized dual-point Hough transform for object recognition. International Conference on Image Processing 1999, Vol. 1, pp. 560–564, 1999.
12. Ecabert O. and Thiran J.-P., Adaptive Hough transform for the detection of natural shapes under weak affine transformations. Pattern Recognition Letters, Vol. 25(12), pp. 1411–1419, 2004.
13. Chau C.P. and Siu W.C., Adaptive dual-point Hough transform for object recognition. Computer Vision and Image Understanding, Vol. 96, pp. 1–16, 2004.
14. van Ginkel M., Hendriks C.L. and van Vliet L., A short introduction to the Radon and Hough transforms and how they relate to each other. Technical Report QI-2004-01, Quantitative Imaging Group, Delft University of Technology, 2004.
Image Segmentation of Cell Nuclei based on Classification in the Color Space

T. Wittenberg¹, F. Becher¹, M. Hensel² and D.G. Steckhan¹,³

¹ Fraunhofer-Institute for Integrated Circuits IIS, Dept. for Image Processing & Biomedical Engineering, Erlangen, Germany
² Institute for Microbiology, University Hospital Erlangen, Germany
³ International Max-Planck Research School for Optics and Imaging, Erlangen, Germany
Keywords— microscopy, image segmentation, color spaces, machine learning, statistical classification

Abstract— Many developments in the field of image processing and analysis have been motivated and driven by applications from microscopy. Unfortunately, today hardly any generally applicable set of image processing methods exists to support biomedical experts with a fully or partially automated analysis of micrographs and thereby strengthen their experiments. Hence, currently much image analysis and interpretation in this field has to be done manually. Otherwise, a background in image processing is needed to adapt available image analysis tools to the desired task, or to write scripts in image-processing toolboxes. Thus, using cervical nuclei as a first example, a novel image processing scheme is suggested to bridge the so-called ”semantical gap” between the analytical question of the biomedical experts on one side and the required set of image processing procedures on the other. In this approach, the biomedical expert interactively annotates the background and foreground pixels (nuclei) on a small subset of micrographs of cervical cells. Using this information, the image processing system can automatically find the optimal classifier to separate these two sets of pixels in color space. Machine-learning algorithms, namely K-Nearest Neighbor, KStar, Gaussian Mixture Models, K-Means and Expectation-Maximization, have been used and evaluated for the task of separating fore- and background pixels in the color space to yield an automated segmentation approach for cervical nuclei. Thus, the suggested approach is able to ”learn” the required segmentation task from the biomedical experts by interactively training the system on a small subset of images. After the self-organization and optimization procedure, the system is capable of applying the learned analysis mechanisms to other micrographs of cervical cells.
I. INTRODUCTION

In the past 30 years, many applications from (bright-field) microscopy have been the driving force for developments in the field of automated digital image processing and image analysis. Nevertheless, to support leading-edge research of medical and biomedical experts with a fully or partially automated analysis of micrographs, hardly any generally applicable set of image processing
Fig. 1: Building blocks of the proposed segmentation scheme
methods exists that can be adapted on the fly to frequently changing analysis tasks. Thus, much image analysis and interpretation in this field is still done manually. Otherwise, a strong background in automated image processing and image interpretation is needed to adapt available image analysis tools to the desired task, or to write new scripts in image-processing toolboxes. According to our observations, the current state of the art for fully automated or interactive micrograph image analysis comprises two approaches from opposite directions. On the one hand, knowledge-driven, top-down methods exist that are dedicated to very specific and narrow applications within the field of microscopic image analysis. On the other hand, several data-driven, procedure-oriented image-processing frameworks and toolboxes are available, which may be applied to the analysis of micrographs, but which are usually not dedicated to any specific application. Thus, to bridge the so-called "semantic gap" between the analytical questions regarding the interpretation of micrographs by medical and biomedical experts on one side and the required (and frequently changing) set of image processing procedures on the other, a novel image processing scheme is suggested within this work, cf. Fig. 1. Using the task of cervical nuclei detection and segmentation (see Fig. 2, top row) as an example, the steps of the proposed scheme are as follows: based on a small but representative reference image data set (1), an iconic annotation (2) is made by a biomedical expert for each of the micrographs, delineating and labeling the objects of interest
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 613–616, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
T. Wittenberg, F. Becher, M. Hensel and D.G. Steckhan
Fig. 3: Micrograph with cervical cells (left) and corresponding distribution of nuclei (blue) and background (red) pixels in the RGB (center) and HSV (right) color spaces
Fig. 2: Top row: typical micrographs of cervical cells with the task of identifying cell nuclei. Examples from the training (left) and testing (right) sets. Bottom row: corresponding manually annotated nuclei regions.
(here: cell nuclei, cf. Fig. 2, bottom row) in such a way that a formal description of the image content is achieved (3). Using the reference image data (1) and the formal image description (3) as input, a so-called segmentation engine is trained and optimized (4) in such a way that it is capable of achieving the segmentation results automatically on the reference data set as well as on similar micrographs of the same type. Several machine learning approaches used within the segmentation procedure are detailed in the next section. As a result, the segmentation parameters for this training set of micrographs are obtained (5). Based on these parameters (5), further micrographs (6) of the same type can now be segmented using the trained segmentation engine (7), yielding a set of segmented objects (8) as a result.

For the training and optimization of the above-mentioned segmentation engine for cervical cells, several statistical machine learning approaches have been applied. Since the stain of cervical nuclei usually corresponds to a tight accumulation of pixels with similar dark-brownish colors in the RGB or HSV color spaces (as exemplified in Fig. 3 on the right side), this information can be used to train adequate statistical classifiers. Furthermore, since for each pixel in the color space it is known from the image content description to which class (foreground, background, ...) it belongs, optimization approaches can be used to train statistical classifiers to discriminate between these classes in the color space.

Several applications in the field of computer vision have applied pixel classification in color space, such as the detection and tracking of people via their skin color [1, 2] or the image analysis of bark [3]. Within the field of microscopy, Lezoray and Cardot [4] as well as Charrier et al. [5] have suggested color pixel classification as a preliminary segmentation, which is then improved using region-growing techniques. Bergen et al. have applied color pixel classification as one step in an image processing chain for the segmentation of leukocytes [6].
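The overall workflow of Fig. 1 (annotate a few reference micrographs, train a pixel classifier in color space, then segment new micrographs) can be sketched as follows. This is an illustrative example only: the function names are our own, and a kNN classifier stands in for the several classifiers evaluated in the paper.

```python
# Hypothetical sketch of the annotate -> train -> segment workflow (steps 1-8 of Fig. 1).
# Names such as `train_color_classifier` are illustrative, not the authors' API.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_color_classifier(reference_images, annotation_masks, k=5):
    """Fit a pixel classifier from expert-annotated reference micrographs (steps 1-4)."""
    pixels, labels = [], []
    for img, mask in zip(reference_images, annotation_masks):
        pixels.append(img.reshape(-1, 3))   # each pixel is an RGB feature vector
        labels.append(mask.reshape(-1))     # 1 = nucleus (foreground), 0 = background
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(np.vstack(pixels), np.concatenate(labels))
    return clf                              # the learned "segmentation parameters" (step 5)

def segment(clf, micrograph):
    """Apply the trained engine to a new micrograph of the same type (steps 6-8)."""
    flat = micrograph.reshape(-1, 3)
    return clf.predict(flat).reshape(micrograph.shape[:2])
```

The same skeleton applies to any of the classifiers discussed below; only the `clf` object changes.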
II. METHODS

The main idea of this paper is that the foreground and background of an image can be segmented by separation in color space. For the example of cervical cells, it is obvious to the human eye that the nuclei have a different color than the background and that the variety of colors exhibited by nuclei lies in a small range. Our goal is to take advantage of this fact by analyzing a database of annotated images and finding either the border that best separates nucleus colors from background colors or a function that returns the probability that a specific color belongs to a nucleus. In this paper, different approaches are discussed and evaluated. The classification is always based solely on the color of each pixel. One main proposal of this work is a classification based on a Gaussian mixture model (GMM) combined with the Expectation-Maximization (EM) algorithm. Additionally, classifications based on k-Nearest Neighbors (kNN) and the kStar algorithm by Cleary and Trigg [7] are discussed. In the following, we give a short introduction to the methods used in this paper.

A. k-Nearest-Neighbor

The k-Nearest-Neighbor algorithm is a basic classification algorithm based on majority voting. It classifies a feature vector by analyzing the k closest examples in a training set. "Nearest" in the case of the standard kNN means that the k neighbors closest to the vector x under the Euclidean distance are used to predict the label of x. kNN is a so-called lazy learner, since it does not require any explicit training. One problem of nearest-neighbor methods is the choice of the neighborhood size k. Since k is a fixed value, this parameter needs to be chosen well. To avoid misclassification, different values
Image Segmentation of Cell Nuclei based on Classification in the Color Space
for k need to be evaluated on pre-classified data to find the optimal k for the given problem.
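The search for a suitable k on pre-classified data can be sketched with a small cross-validation loop. This is a minimal illustration, not the authors' procedure; the candidate list and fold count are assumptions.

```python
# Illustrative sketch: choose k for kNN by cross-validation on annotated pixels.
# The candidate values and fold count are assumptions for demonstration only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def best_k(X, y, candidates=(1, 3, 5, 7, 9), folds=3):
    """Return the k with the highest mean cross-validated accuracy."""
    scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                                 X, y, cv=folds).mean()
              for k in candidates}
    return max(scores, key=scores.get)
```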
B. kStar

The kStar algorithm by Cleary and Trigg [7] is closely related to the previously introduced kNN algorithm. It differs in that it does not use the Euclidean distance but instead applies an entropy-based metric. The key idea of this entropy-based distance metric is to define the distance between two instances as the complexity of transforming one instance into the other. Defining this metric for the different types of attributes is rather complex, and hence we refer to the work of Cleary and Trigg for the mathematical details.

C. Gaussian mixture model

The use of Gaussian mixture models (GMMs) to model the distribution of foreground colors is motivated by the interpretation that the Gaussian components represent color distributions distinct from those of the background colors. Another motivation is that Gaussian mixtures are capable of modeling arbitrary densities. GMMs are a combination of a number of Gaussian bell curves or, in our case, because of the three-dimensionality of the data, a combination of 3D Gaussian functions. The challenge with GMMs is to find the number of Gaussian distributions that best fits the given training set. A three-dimensional Gaussian distribution is given by

N(\mathbf{x} \mid \boldsymbol{\mu}, \Sigma) = \frac{1}{(2\pi)^{3/2} |\Sigma|^{1/2}} \exp\left\{ -\frac{1}{2} (\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu}) \right\}, \quad (1)

with \boldsymbol{\mu} denoting the mean vector and \Sigma the 3×3 covariance matrix. A Gaussian mixture model p(\mathbf{x}) is the weighted sum of K Gaussian distributions

p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \, N(\mathbf{x} \mid \boldsymbol{\mu}_k, \Sigma_k), \quad (2)

where the weights are denoted by \pi_k. The parameters of the GMM – namely the mean vectors, covariance matrices and mixture weights – are usually denoted by

\lambda = \{ \pi_k, \boldsymbol{\mu}_k, \Sigma_k \}, \quad 1 \le k \le K. \quad (3)

To fit the GMM, the Expectation-Maximization (EM) algorithm is used, which generates a maximum-likelihood solution. Since the EM algorithm is very sensitive to its starting values, we additionally use the k-Means algorithm to determine suitable starting values. Once a suitable GMM has been found, classification is simply a matter of applying Eq. 2, which returns the probability that a specific feature vector belongs to a certain class.
k-Means

The k-Means algorithm clusters a point set – in our case the nucleus training set – into k clusters by minimizing the variance inside the clusters. Given arbitrary starting values for the k cluster centroids, each point in the training set is assigned to the closest cluster centroid. Then, a new centroid is calculated for every cluster as the mean of its points, and the points are reassigned to the closest centroid. This loop is repeated until the centroids no longer change.

Expectation-Maximization

Given a training set, the GMM \lambda that best fits the data has to be estimated. This problem is usually approached with maximum-likelihood (ML) estimation, which maximizes the likelihood that a model fits the training data. The likelihood that a given feature vector set X = \{\mathbf{x}_n\}, 1 \le n \le N, fits a GMM model is given by

p(X \mid \lambda) = \prod_{n=1}^{N} p(\mathbf{x}_n \mid \lambda).

Maximizing this equation would yield the optimal parameters \lambda for the GMM. However, maximizing it directly is not possible, since it is a nonlinear function of \lambda. An alternative to the direct ML solution is the Expectation-Maximization (EM) algorithm. Expectation-Maximization by Dempster et al. [8] iteratively estimates the ML solution. Due to lack of space, we only present the basic idea here; for a detailed description of EM, the reader is referred to the work of Dempster et al. Starting from an initial guess of the parameters \lambda, a new model with a higher likelihood is estimated in each iteration. Each iteration has an expectation step, which determines the distribution of the unobserved variables, and a maximization step, which re-estimates the parameters. The algorithm stops upon convergence of the parameters \lambda. The expectation step is given by

p(k \mid \mathbf{x}_n, \lambda) = \frac{\pi_k \, N(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j \, N(\mathbf{x}_n \mid \boldsymbol{\mu}_j, \Sigma_j)}. \quad (4)
The new parameters for the next iteration are estimated in the maximization step:

\boldsymbol{\mu}_k^{new} = \frac{\sum_{n=1}^{N} p(k \mid \mathbf{x}_n, \lambda) \, \mathbf{x}_n}{\sum_{n=1}^{N} p(k \mid \mathbf{x}_n, \lambda)}, \quad (5)

\Sigma_k^{new} = \frac{\sum_{n=1}^{N} p(k \mid \mathbf{x}_n, \lambda) \, (\mathbf{x}_n - \boldsymbol{\mu}_k^{new})(\mathbf{x}_n - \boldsymbol{\mu}_k^{new})^T}{\sum_{n=1}^{N} p(k \mid \mathbf{x}_n, \lambda)}. \quad (6)
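One EM iteration, the E-step of Eq. 4 followed by the M-step re-estimates together with the mixture-weight update, can be sketched in numpy. This is a didactic sketch, not the authors' implementation; `gauss_pdf` mirrors the density of Eq. 1 for arbitrary dimension.

```python
# Didactic numpy sketch of one EM iteration for a GMM (Eqs. 4-6 plus weight update).
import numpy as np

def gauss_pdf(X, mean, cov):
    """Multivariate Gaussian density (as in Eq. 1) evaluated at each row of X."""
    d = X - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt(((2.0 * np.pi) ** X.shape[1]) * np.linalg.det(cov))
    return norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, inv, d))

def em_step(X, weights, means, covs):
    """One EM iteration: E-step (responsibilities), then M-step re-estimates."""
    N, K = X.shape[0], len(weights)
    # E-step: responsibilities p(k | x_n, lambda)
    dens = np.stack([weights[k] * gauss_pdf(X, means[k], covs[k])
                     for k in range(K)], axis=1)            # shape (N, K)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: new means, covariances and mixture weights
    Nk = resp.sum(axis=0)
    new_means = (resp.T @ X) / Nk[:, None]
    new_covs = np.stack(
        [(resp[:, k, None] * (X - new_means[k])).T @ (X - new_means[k]) / Nk[k]
         for k in range(K)])
    new_weights = Nk / N
    return new_weights, new_means, new_covs
```

Iterating `em_step` until the parameters stop changing yields the maximum-likelihood fit described in the text.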
\pi_k^{new} = \frac{1}{N} \sum_{n=1}^{N} p(k \mid \mathbf{x}_n, \lambda). \quad (7)

These equations are proven to guarantee a monotonic increase of the likelihood.

III. RESULTS

For the training and evaluation of the proposed methods, a set of 77 manually annotated cervical images with a spatial resolution of 1000 × 700 pixels has been used. All three machine learning approaches – kNN, kStar and the Gaussian mixture model – have been evaluated with different color spaces, including RGB, HSV, LUV, LAB and YUV. Here, some results obtained with the Gaussian mixture models are detailed. With the GMM, the classification of a yet unknown image from the testing set results in a gray-scale probability map, where dark pixels denote low and bright pixels high probabilities. Fig. 4 depicts exemplary results of the GMM-based image segmentation of the right micrograph in Fig. 2, based on different color spaces (RGB, HSV, LAB) as well as different settings for the number of mixtures. For the GMM, two different settings (k = 1 and k = 4) for the number of mixture components were tested. As can be seen in Fig. 4, the probability of classifying a pixel correctly as nucleus increases with a higher value of k. Furthermore, in the LAB color space there is a decreased probability of falsely classifying the cytoplasm as nucleus, compared with, e.g., the RGB and HSV color spaces.

Fig. 4: Example segmentation results for the test micrograph depicted in Fig. 2 in different color spaces: RGB (top), HSV (center), LAB (bottom). Left side: GMM with k = 1; right side: GMM with k = 4.

IV. DISCUSSION

A tool for assistance in the automatic segmentation of microscopic cellular images has been proposed. The segmentation scheme is based on several pixel classification techniques in color space. Our experiments with the described pixel-based image segmentation approaches, applied to the automatic detection and segmentation of cervical nuclei in bright-field micrographs, have been promising and have shown that the suggested approach is feasible in principle. The resulting segmentation images will be subject to post-processing methods, incorporating further information to eliminate single falsely classified pixels and to unite adjacent pixels into semantic units such as nuclei, cytoplasm, etc.

ACKNOWLEDGEMENTS

Parts of this work have been supported by the International Max-Planck Research School (IMPRS) for Optics and Imaging.
REFERENCES

1. Phung SL, Bouzerdoum A, Chai D (2005) Skin Segmentation Using Color Pixel Classification: Analysis and Comparison. IEEE Trans. Pattern Analysis and Machine Intelligence 27:148–154
2. Vandenbroucke N, Macaire L, Postaire JG (2003) Color image segmentation by pixel classification in an adapted hybrid color space: application to soccer image analysis. Computer Vision and Image Understanding 90:190–216
3. Huang Z, Wang ZF (2007) Bark Classification Using RBPNN in Different Color Space. Neural Inf. Processing Letters & Reviews 11:7–13
4. Lezoray O, Cardot H (2002) Cooperation of Color Pixel Classification Schemes and Color Watershed: A Study for Microscopic Images. IEEE Trans. Image Processing 11:783–789
5. Charrier C, Lebrun G, Lezoray O (2007) Evidential segmentation of microscopic color images with pixel classification posterior probabilities. J. of Multimedia 2:57–65
6. Bergen T, Steckhan D, Wittenberg T, Zerfass T (2008) Segmentation of leukocytes and erythrocytes in blood smear images. Proc's EMBC 2008, Vancouver, Canada, 20.–24.8.2008, in print
7. Cleary JG, Trigg LE (1995) K*: An instance-based learner using an entropic distance measure. Proc's 12th Int. Conf. on Machine Learning :108–114
8. Dempster AP, et al. (1977) Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Soc. 39:1–38

• Corresponding Author: Thomas Wittenberg, Fraunhofer IIS
• Address: Am Wolfsmantel 33, 91058 Erlangen, Germany
• Email:
[email protected]
Volume Estimation of Pathology Zones in 3D Medical Images

K. Krechetova, A. Glazs

Riga Technical University, Institute of Computer Control, Engineering and Technology, Latvia

Abstract — Medical images of the brain acquired with computed tomography (CT) or magnetic resonance (MR) imaging are widely used for patient diagnosis. Accordingly, the detection of pathology zones and the estimation of their volume is an emerging task in medical image analysis. To solve this task successfully, several problems have to be considered: 3D visualization of medical images, image segmentation, pathology zone extraction, and volume estimation of the extracted zone. Standard software for processing medical CT and MR images in many cases does not allow extraction of the three-dimensional pathology zone and estimation of its volume. Often the detection of a pathology zone and the estimation of its volume are so complex that physicians prefer to measure only the pathology zone's maximum axial and coaxial diameters in two-dimensional slices of the medical images, although precise volume estimation could clearly be of great assistance in patient diagnostics. In addition, standard medical imaging software is very specific: it is usually installed only on one workstation linked to the medical hardware, which is not always convenient. The problems described above complicate patient diagnosis for physicians. In this paper we propose several algorithms for 3D visualization of medical images, image segmentation, and volume estimation of the extracted pathology zone to solve these problems. Based on the proposed algorithms, new software has been developed for processing medical images in DICOM format acquired with CT and MR.

Keywords — Medical images, segmentation, visualization, volume estimation, pathology zone.
I. INTRODUCTION

The analysis of medical images of the brain acquired by CT or MRI is highly relevant in both clinical and research settings. As a clinical example, the extraction of pathology zones and their volumetric quantification can help in the diagnosis of various diseases. The most important requirements for medical imaging software are fast operation, precise analysis and ease of use. The existing medical imaging software provided with MRI or CT equipment in some cases does not meet these requirements (problems with pathology zone extraction, volume estimation and poor quality of 3D visualization). A further drawback of standard medical imaging software is that it is very specific: it is usually installed only on one workstation linked to the MRI or CT equipment. This greatly limits the research possibilities and is inconvenient for the physicians. Therefore, medical image processing and 3D visualization is an emerging task in medical image analysis, and the problem of quick and accurate image processing is highly relevant. This work aims to solve this task.

The objective of this work can be divided into several tasks: development of a segmentation algorithm for medical image processing, development of a volume estimation algorithm for evaluating the volume of pathology zones, and development of an algorithm for high-quality 3D visualization of medical images. The images used in this work are medical images of the brain in DICOM format, acquired with magnetic resonance imaging and provided by the Institute of Radiology, Riga Stradiņš University (using a General Electric GE Signa HDx 1.5T). Experimental results for 3D visualization of medical objects were obtained by A. Sisojevs [1].

II. PROPOSED METHODS

A. Segmentation

The goal of image segmentation is to categorize the pixels (or voxels) of an image into a certain number of regions that are homogeneous with respect to some characteristics (for example intensity, color or texture). In order to estimate the volume of the pathology zone and to visualize a medical object, image segmentation has to be done first. In most cases, standard medical imaging software provides manual image segmentation, which, although most accurate when done by an expert physician, is also very time-consuming. On the other hand, automatic image segmentation is often doubted by physicians, because a computer can never replace a doctor in diagnostics. There are many known approaches to medical image segmentation [2], [3]. However, these approaches are not without disadvantages: some require complex a priori information as input, some cannot clearly extract the pathology zone, and some require too many resources. Therefore, a simple and quick semi-automatic segmentation algorithm is proposed [4], with the possibility to manually adjust the region of interest.

The proposed method combines the advantages of grey-level thresholding, region-based and edge-based segmentation. The segmentation algorithm is combined with a developed image enhancement algorithm, based on image histogram enhancement, and an existing post-processing algorithm for the segmentation result. The structure of the algorithm is shown in Fig. 1.
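As a rough illustration of the grey-level-thresholding component alone (the region-based and edge-based steps and the histogram enhancement of the actual algorithm are not reproduced here), a minimal sketch:

```python
# Minimal sketch of grey-level thresholding on a 2D image slice; the default
# threshold (mean intensity) is an assumption for illustration only.
import numpy as np

def threshold_segment(slice2d, threshold=None):
    """Return a binary segmentation map (1 = candidate region of interest)."""
    img = np.asarray(slice2d, dtype=float)
    if threshold is None:
        threshold = img.mean()        # crude histogram-derived threshold
    return (img > threshold).astype(np.uint8)
```

In the proposed workflow, such a map would then be refined by the region- and edge-based steps and, if necessary, adjusted manually by the physician.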
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 617–620, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Fig. 1 Segmentation method

The algorithm automatically removes the background from the medical image and segments the data, providing a segmentation map of the image. From this point, the physician can select any region of interest and then manually adjust the borders of the region until the result is satisfactory.

B. Visualization

For visualizing medical objects in 3D, it is proposed to use polygonal visualization by means of the standard OpenGL graphics library. In this case, the 3D medical object is defined by control points acquired on the 2D slices of the CT or MRI medical image. The control points are taken from the segmented data using a simple clockwise searching algorithm (see Fig. 2). First, the edge between the medical object and the background is detected on every 2D slice. From the center of the medical object, vectors are drawn to the detected edge in a clockwise motion. When a vector crosses the edge of the object, the crossing point is added to the array of control points. Based on the acquired control points, a mesh (array of polygons) is built and an array of normals is calculated for each point of the mesh. The data is then visualized using the OpenGL library.

Fig. 2 Control points detection

C. Volume estimation

The volume estimation method uses the segmented data of the pathology zone. The area of the diseased zone is measured on every 2D slice where this zone appears. For this, the number of points (pixels) in each region of interest has to be calculated. The additional information in the DICOM file (image dimensions and distance between pixels) allows expressing the area in physical units (mm²). The volume of the pathology zone is calculated according to the formula

V = \sum_{i=1}^{n} v_i ,   (1)

where n is the number of slices in which the pathology zone is detected and v_i is the volume of the zone between the segments of two adjacent pathological slices. It is calculated according to the truncated pyramid volume formula, meaning that the segments are approximated with rectangles:

v_i = \frac{1}{3} H \left( S_1 + \sqrt{S_1 S_2} + S_2 \right),   (2)

where H is the distance between the 2D slices, S_1 is the area of the pathology zone in slice i, and S_2 is the area of the pathology zone in slice i+1. Figure 3 illustrates the volume estimation for each v_i.

Fig. 3. Volume estimation for v_i
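The slice summation of Eqs. 1 and 2 translates directly into code. A minimal sketch, assuming the per-slice areas (in mm²) and the slice spacing are already known from the segmentation and the DICOM metadata:

```python
# Slice-based volume estimate: truncated-pyramid volume between each pair of
# consecutive slice areas (Eq. 2), summed over all slice gaps (Eq. 1).
from math import sqrt

def pathology_volume(areas_mm2, slice_distance_mm):
    """areas_mm2: pathology-zone area on each consecutive slice, in mm^2."""
    H = slice_distance_mm
    return sum(H / 3.0 * (s1 + sqrt(s1 * s2) + s2)
               for s1, s2 in zip(areas_mm2, areas_mm2[1:]))
```

For two slices of equal area S the formula reduces, as expected, to the prism volume H·S.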
III. EXPERIMENTAL RESULTS

A. Segmentation

The proposed method is semi-automatic and allows physicians to detect pathology zones in images with greater ease. The results are shown in Fig. 4.
Fig. 4. Segmentation results. Top left: initial image. Top right: segmentation map. Bottom: extracted pathology zone (no manual adjustment).

The method builds a segmentation map, using which the expert can select a region of interest and then adjust it until it meets his requirements.

B. Visualization

Fig. 5 shows the result of the proposed visualization method. Because the visualization method uses the standard graphics library (OpenGL), the object is rendered in real time. This allows real-time interaction with the visualized medical object (rotation, transformation, etc.).

Fig. 5. Visualization results. Top: mesh of the medical object. Bottom: visualized medical object.

C. Volume estimation

The proposed method was compared with two known methods, trapezoidal estimation and Cavalieri's method, both described in [5]. All three methods (trapezoidal estimation, Cavalieri's method and the proposed method) were tested using an artificial computer-modeled object with known volume. The object was artificially "cut" several times, and then, using the acquired slices, the methods estimated the volume of the object. The results of the experiments are shown in Table 1. As seen from Table 1, the proposed method gives better results than the known methods: its volume estimation is more precise.

IV. CONCLUSIONS

Several methods are proposed for medical image segmentation, visualization and volume estimation of pathology zones. The proposed semi-automatic medical image segmentation algorithm combines the advantages of grey-level thresholding, region-based and edge-based segmentation. It is fast and efficient, while allowing manual adjustment of the region of interest if necessary.
Table 1 Volume estimation results (object's volume: 12141.06 mm³)

                             Cavalieri's   Trapezoidal   Proposed
                             Method        Estimation    Method
 3 slices   Volume, mm³      20461.19      16189.41      15068.64
            Inaccuracy, %    69            33            24
 5 slices   Volume, mm³      9879.68       13131.64      13088.92
            Inaccuracy, %    19            8             8
 7 slices   Volume, mm³      10977.51      13113.4       11554.93
            Inaccuracy, %    10            8             5
 10 slices  Volume, mm³      11532.63      11846.75      12288.77
            Inaccuracy, %    5             2             1

The 3D visualization of medical objects is based on the standard graphics library OpenGL. This reduces the application's requirements for computer resources and allows real-time rendering of the modeled object. The proposed volume estimation method was compared with several known methods (trapezoidal estimation and Cavalieri's method), and the results of the experiments show that the proposed method has better precision in volume estimation than the known algorithms. All the proposed methods were implemented in a practical tool for medical image processing. The developed program can be installed on any computer, which will allow physicians to diagnose medical images outside the hospital.

ACKNOWLEDGMENT

This work has been partly supported by the European Social Fund within the National Programme "Support for the carrying out doctoral study programmes and post-doctoral research", project "Support for the development of doctoral studies at Riga Technical University".

REFERENCES

1. Sisojevs A, Glazs A (2008) A new approach of visualization of free-form surfaces by a ray tracing method. IEEE MELECON Proceedings, 2008, pp 872–875
2. Dastidar P, Heinonen T, Vahvelainen T, Elovaara I, Eskola H (1999) Computerised volumetric analysis of lesions in multiple sclerosis using new semi-automatic segmentation software. Med. Biol. Eng. Comput., vol. 37, 1999, pp 104–107
3. Jiang C, Jiang L, Zhang X, Gevantmakher M, Meinel C (2004) A New Practical Tool for 3D Medical Image Segmentation, Visualization and Measurement. Proceedings of CCCT, 2004, Austin, USA
4. Krechetova K, Glaz A (2007) Development of a new segmentation method for medical images. Biomed. Eng. conference proc., Kaunas, Lithuania, 2007, pp 133–136
5. Smitha S, Revathy K, Kesavadas C (2006) Segmentation and Volume Estimation of Brain tissues from MR Images. IMECS, 2006, pp 543–547

Author: Katrina Krechetova, PhD student
Institute: Institute of Computer Control, Automation and Computer Engineering, Riga Technical University
Street: Meza 1/3-304 (3rd floor)
City: Riga
Country: Latvia
Email:
[email protected]
Estimation of blurring of optic nerve disc margin

M. Patašius¹, V. Marozas¹,², D. Jegelevičius¹,², D. Daukantaitė¹, A. Lukoševičius¹

¹ Biomedical Engineering Institute, Kaunas University of Technology, Kaunas, Lithuania
² Department of Signal Processing, Kaunas University of Technology, Kaunas, Lithuania
Abstract — It is known that optic nerve disc margins in the eye fundus become blurred in papilledema or neuritis. Doctors describe this feature of the optic nerve disc using the binary classification "clear margin" or "blurred margin". In our literature analysis, we have not found studies on the quantitative estimation of this phenomenon. We explored objective quantitative parameters that could describe the blurriness of optic nerve disc margins. Analysis of an optic nerve disc model led us to investigate several modifications of the Michelson and Weber contrasts. The study indicates the feasibility of automatic estimation and classification of the blurriness of the optic nerve disc margin. However, the accuracy of the classification needs to be improved.

Keywords — Eye fundus images, optic nerve disc, blurring.
I. INTRODUCTION

Ophthalmoscopic evaluation and tracking of changes of the eye fundus is an important diagnostic method in ophthalmology. Photography of the eye fundus helps in the documentation and follow-up of eye fundus conditions and also of other diseases. The evaluation of eye fundus images is complicated because of the variety of anatomical structures and the possible changes in case of eye diseases, and it requires good experience from the expert. The main components of the eye fundus imaged by photography are the optic nerve disc, the fovea and the blood vessels. The optic nerve disc (OND) is the exit site of all retinal nerve fibers. It is known that the optic nerve disc margins in the eye fundus become blurred in papilledema [1] or neuritis [2]. Doctors describe this feature of the optic nerve disc using the subjective binary classification "clear margin" or "blurred margin", see Fig. 1 (with intermediate states possible). However, in our literature analysis we have not found studies on the quantitative estimation of this phenomenon. In this study, we explored objective quantitative parameters that could describe the blurriness of the optic nerve disc margins. First, a two-dimensional model of the optic nerve disc and the surrounding background was proposed and investigated. Then two contrast-based features were applied to estimate the margin blurriness in both cases: on the model and on real images. ROC curves and the area under the ROC curves were calculated in order to investigate the
performance of the proposed features in the classification task on real images.

Fig. 1 Examples of optic nerve discs with clear (a) and blurred (b) margins

II. MATERIALS AND METHODS

The light intensity of the optic nerve disc image with the surrounding background can be modeled using a sigmoid function:

I(x, y) = \left( 1 - \frac{1}{1 + e^{-A \left( \sqrt{x^2 + y^2} - R \right)}} \right) \cdot (I_{OND} - I_B) + I_B .   (1)
Here R is the radius of the disc, A corresponds to the blurring level of the disc margin, I_B is the color intensity of the background and I_{OND} is the color intensity of the disc itself. Examples of intensity profiles and images are presented in Fig. 2. The model indicates that the disc margin is more blurred for lower values of A and a lower difference between I_B and I_{OND}. Since the model is symmetric with regard to the center of the image, we can further analyze only a one-dimensional profile of the modeled optic nerve disc intensity. The profile represents a ray from the optic nerve disc center. Intuitively, the blurring in a single segment of the margin can be estimated by the integral of the derivative of the image intensity through the segment (ab) of the ray (L) that can be considered the neighborhood of the intersection of the ray and the margin. That is equivalent to the
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 621–624, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
difference of the image intensities of two points on the ray on the both sides of the margin: a) 240
When real eye fundus images are explored, the estimate of the blurring of the whole optic disc margin can be produced by averaging the local estimates calculated for individual rays around the whole optic nerve disc. The data set of real eye fundus images consisted of 128 eye fundus color photographs obtained using a Canon C60UVi fundus camera at the Department of Ophthalmology, Institute for Biomedical Research, Kaunas University of Medicine, Lithuania. The image size was 3072x2048 pixels; the compression format was JPEG. An experienced ophthalmologist marked the optic nerve disc boundary, which was then approximated by ellipses [5] (see Fig. 3). Two ophthalmology experts classified the images into two classes: "clear margins" (50 images) and "blurred margins" (45 images). The remaining 33 images were judged to be of unacceptable quality, or the experts disagreed about them. 360 rays with a step of 1 degree from the center of the optic nerve disc ellipse were analyzed (Fig. 3). The intersections of the rays with the ellipse were found. Then the sets of pixels on the ray at a certain distance (described by the pixel range dR) from the ellipse were taken to represent, correspondingly, the optic disc and the background. The intensities of those pixels were averaged and used to calculate the Michelson and Weber contrasts.
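The ray-based sampling can be sketched as follows; the helper names and the axis-aligned-ellipse simplification are ours (the paper fits general ellipses [5]):

```python
import numpy as np

def ellipse_radius(theta, a, b):
    """Distance from the center of an axis-aligned ellipse with semi-axes
    a and b to its boundary along the direction theta."""
    return (a * b) / np.sqrt((b * np.cos(theta)) ** 2 + (a * np.sin(theta)) ** 2)

def ray_samples(image, cx, cy, theta, r_inner, r_outer, n=16):
    """Mean intensity of n pixels on a ray from (cx, cy) between radii
    r_inner and r_outer (the pixel range dR on one side of the margin)."""
    r = np.linspace(r_inner, r_outer, n)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    return float(image[ys, xs].mean())
```

Calling `ray_samples` once inside and once outside the boundary radius for each of the 360 ray directions yields the two averaged intensities that enter the contrast estimates.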
Fig. 2 Examples of modeled optic nerve disc intensity profiles for increasing A (a), and 2D representations of blurred (b) and clear (c) margins
$B = \int_L I'(x, y)\,dl = \int_a^b I'(r)\,dr = I(b) - I(a)$   (2)
However, such an estimate can be very sensitive to noise, so the average color intensities of the neighborhoods of these points (described by the pixel ranges dR) have been used instead. Furthermore, to allow comparisons between images of different luminance, the difference B in (2) is normalized by the sum of the internal and external image intensities (equivalent to the Michelson contrast [3]):

$B_{Mich} = \frac{I(b) - I(a)}{I(b) + I(a)}$   (3)

or by one of the individual intensities (equivalent to the Weber contrast [4]):

$B_{Web} = \frac{I(b) - I(a)}{I(a)}$   (4)

$B_{Web'} = \frac{I(b) - I(a)}{I(b)}$   (5)
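A minimal sketch of the two contrast estimates of Eqs. (3) and (4); we take I(b) as the disc-side mean intensity and I(a) as the background-side mean, following our reading of Eq. (2):

```python
def michelson_contrast(i_b, i_a):
    # Eq. (3): difference normalized by the sum of the two intensities,
    # which bounds the estimate to [-1, 1]
    return (i_b - i_a) / (i_b + i_a)

def weber_contrast(i_b, i_a):
    # Eq. (4): difference normalized by one of the individual intensities
    return (i_b - i_a) / i_a
```

For a bright disc (e.g. 200) against a darker background (e.g. 120) both contrasts are positive; where a dark blood vessel crosses the margin the sign can flip, which is the negative-contrast effect discussed in the Results.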
Fig. 3 Optic nerve disc and features used in estimation of margin blurriness
IFMBE Proceedings Vol. 22
Estimation of blurring of optic nerve disc margin
623
ROC curves were used to compare the results of subjective and objective classification of real images into the two groups "clear margins" and "blurred margins". We also checked how using different RGB components (instead of grayscale) would change the suitability of the blurriness estimates.

III. RESULTS

In order to investigate the characteristics of the proposed blurriness estimates B_Mich and B_Web, these estimates were explored using the modeled optic nerve disc intensity profile.

Fig. 4 Modeled blurring estimates B as functions of the parameters: A and increasing dR (1...25) (a), A and decreasing background intensity I_B (100...250, I_OND = const = 255) (b)

Fig. 4 represents some dependencies of the estimates B_Mich and B_Web on the model parameters. It can be seen that a clear margin (represented by high values of the parameter A) is indicated by high B_Mich and B_Web values. Increasing the calculation interval dR makes the dependency of the B estimates on the blurriness parameter A nonlinear. The Weber contrast based estimate tends to saturate towards 1.5, while the Michelson contrast based estimate saturates at 0.5; B_Web therefore has a larger dynamic range than B_Mich. Fig. 4 (b) shows the dependency of B_Mich and B_Web on the background intensity I_B: decreasing I_B increases both B estimates. Margins with larger intensity differences thus seem less blurred than those with smaller intensity differences.

Fig. 5 Variation of blurriness estimates around the optic nerve disc margin for the image in Fig. 3

Fig. 5 illustrates the variation of the blurriness estimates around the optic nerve disc margin for the image in Fig. 3. It can be noted that the contrast values vary along the optic disc boundary. Regions with negative contrast correspond to the places where blood vessels intersect the optic nerve disc margin.

Fig. 6 Sample of ROC curves for different estimates of blurring (Michelson contrast for pixel ranges 1-10 and 15-25, Weber contrast for 15-25, and the reference line)
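The area under a ROC curve can be computed directly from the two groups of contrast values via the Mann-Whitney statistic; this is a generic sketch of the evaluation, not the authors' implementation, and it assumes that higher contrast indicates a clearer margin:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a score from the
    positive group ('clear margin') exceeds a score from the negative
    group ('blurred margin'); ties count as one half."""
    n_pairs = len(scores_pos) * len(scores_neg)
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / n_pairs
```

An AUC of 0.5 corresponds to the reference line (no discrimination), while the values around 0.67 reported in Table 1 indicate a modest but real separation of the two classes.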
Fig. 6 shows some of the ROC curves for both the Michelson and Weber contrasts and varying dR. It can be observed that the obtained ROC curves are far from ideal. The proposed method for the automatic classification of fundus images according to the blurriness of the optic disc margin was evaluated by calculating the areas under the ROC curves. The results of the evaluation are shown in Table 1.

Table 1 Areas under the ROC curves for the Weber and Michelson contrasts using different pixel ranges to represent the internal and external color intensity
Pixel range   Weber contrast   Michelson contrast
1-5           0.634            0.636
1-10          0.645            0.648
1-20          0.663            0.657
1-25          0.666            0.662
1-29          0.660            0.658
5-25          0.664            0.665
10-25         0.667            0.666
15-25         0.671            0.665
20-25         0.667            0.663

It has been found that the modified Weber contrast for pixel range 15-25 has the largest area under the ROC curve (0.671). The modified Michelson contrast for pixel range 10-25 showed slightly lower classification performance (0.666). Different RGB components were tried in order to improve the classification. Using only the red channel for pixel range 15-25, the area under the ROC curve was found to be 0.6773 for the Michelson contrast estimate and 0.6756 for the Weber contrast estimate. Using the green channel, the areas under the ROC curve were 0.6067 and 0.6076, respectively. Using the blue channel, both areas under the ROC curves were found to be 0.5916. The higher suitability of the red channel might be explained by the lower visibility of the blood vessels interfering with the optic disc margin in that channel.

The most obvious weakness of the described method is the high level of distortion caused by blood vessels intersecting the optic disc margin. It might be possible to decrease this distortion by using the results of blood vessel detection or by giving different weights to various segments of the margin, for example by differentiating the temporal, nasal, inferior and superior parts of the optic nerve disc margin. The individual variation of blurriness around the optic nerve disc margin (see Fig. 5) could be used as a sensitive parameter for tracking optic nerve disc changes in time.

IV. CONCLUSIONS

The study indicates the feasibility of automatic estimation and classification of the blurriness of the optic nerve disc margin. However, the accuracy of the classification needs to be improved.

ACKNOWLEDGMENT

The authors would like to acknowledge the ophthalmologists Valerijus Barzdziukas and Dovile Buteikiene for providing the fundus images, marking the boundaries of the optic nerve discs and evaluating their blurring. This work was partially supported by the Lithuanian State Science and Study Foundation, project No. B-07019.

REFERENCES

1. Cullen JF (2001) The Swollen Optic Disc: Further Observations of a European Neuro-Ophthalmologist in Southeast Asia. Asian J Ophthalmol 3(2):10-13
2. Bluziene A, Jasinskas V et al. (2005) Ocular disease manual. A.S. Narbutas' publishing house, Siauliai (in Lithuanian)
3. Miura M, Elsner AE, Osako M, Yamada K, Agawa T, Usui M, Iwasaki T (2005) Spectral imaging of the area of internal limiting membrane peeling. Retina 25(4):468-472
4. McGregor B, Pfitzner J, Zhu G, Grace M, Eldridge A, Pearson J, Mayne C, Aitken JF, Green AC, Martin NG (1999) Genetic and environmental contributions to size, color, shape, and other characteristics of melanocytic naevi in a sample of adolescent twins. Genetic Epidemiology 16(1):40-53. DOI 10.1002/(SICI)10982272(1999)16:13.0.CO;2-1
5. Jegelevicius D, Buteikiene D, Barzdziukas V, Paunksnis A (2008) Parameterization of the Optic Nerve Disc in Eye Fundus Images. IFMBE Proceedings, 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, Vol. 20, pp 528-531. DOI 10.1007/978-3-540-69367-3

Corresponding author:
Author: Martynas Patasius
Institute: Kaunas University of Technology, Biomedical Engineering Institute
Street: Studentu str. 65-112
City: Kaunas
Country: Lithuania
Email:
[email protected]
Robust Data Driven Modeling of Time Intensity Curves

A. Maciak1, A. Kronfeld1, P. Stoeter1, T. Vomweg2, D. Mayer2, G. Seidel3, K. Meyer-Wiethe3

1 Institute for Neuroradiology, University Hospital of Mainz, Germany
2 CADMEI GmbH, Ingelheim, Germany
3 Department of Neurology, University Hospital of Lübeck, Germany
Abstract — This article describes a robust, stable and fast method to describe time intensity curves for pharmacological contrast agent uptake and washout (kinetics). The model is based on the physical behavior of bolus fluids. It is data driven and does not require any further knowledge about tissue compartments. It describes kinetic flow without an arterial input function (unlike pharmacokinetic compartment models). The model has been proven in two different types of clinical settings. Perfusion sonography relies on the flow of blood in brain tissue; the data are heavily affected by noise, movement and low SNR. In MR-mammography the challenges are to describe the whole contrast agent uptake and washout with few data points and to have a cheaply computable model because of the size of the 3D volumes. The bolus flow was measured in MRI with a high and a low time resolution.

Keywords — Modeling, Time Intensity Curve, Imaging, Contrast Agents, Pharmacokinetic.
I. INTRODUCTION

Different types of diagnostics rely on the accurate modeling of contrast agent behavior. Any kind of flow analysis requires numerical flow models to describe the speed, volume and transit time of flow [1-3]. Several flow models have been developed for specific clinical questions. Flow in a biological system is called perfusion. One of the most investigated topics is brain perfusion, which has been studied with different imaging modalities [4]. An overview of MRI-based perfusion imaging can be found in [5, 6]; for perfusion CT see [7, 8] and for ultrasound [9, 10]. Further research topics are myocardial and hepatic perfusion [11]. Both diagnostic methods rely on parametric images extracted from perfusion sequences. This requires robust methods to estimate the bolus of contrast agents. A second major topic requiring models is the measurement of contrast agent uptake in MRI (CE-MRI, T1w perfusion MRI, bolus tracking). Most lesions enhance contrast agents. Especially in contrast enhancement and late enhancement MRI it is important to describe the uptake and washout of contrast agents in tissue [12, 13]. In particular, some types of investigation procedures rely on an accurate representation of kinetics, for example the classification of lesions into benign and malignant in MR-mammography or
the myocardial contrast agent uptake in acute myocardial infarction tissue in late enhancement MRI. Several flow and pharmacokinetic models have been presented during the past years. Popular pharmacokinetic models are based on different biological compartments (indicator dilution techniques). These models require a-priori knowledge about the different biological compartments which affect the contrast agents [6, 14]. This makes these models dependent on echo techniques, types of contrast agents and the purpose of application. A short overview can be found in [15, 16, 17].

II. MATERIALS AND METHODS

Because of the elasticity of biological vessels, blood flow can be abstracted to simple models. We disregard the distinction between laminar and turbulent flows. Biological systems do have other requirements: they require a correct description of blood volume per time, flow velocity, contrast agent drift into the interstitium and enhancement of objects. The suggested physical model

$I(t) = a_0 + a_1 \frac{e^{-a_2 t}}{1 + e^{-a_3 (t - t_0)}}$   (1)
consists of three parts. The first parameter a_0 describes the flow baseline intensity. This baseline can be a continuous perfusion, a constant signal or noise. Second, there is a product consisting of two terms: the first describes the rise of flow or rise of enhancement, the second describes the washout. The logistic function $(1 + e^{-a_3 (t - t_0)})^{-1}$ gives the increase of the flow. Two parameters control this ascent: t_0 gives the time point of the beginning of the inflow, and a_3 denotes the mean slope. The exponential decay of the flow, $e^{-a_2 t}$, is scaled by the contrast agent saturation a_1; the half-life of the contrast agent in the plasma is characterized by a_2.
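A direct transcription of the model, assuming Eq. (1) reads as I(t) = a0 + a1 e^(-a2 t) / (1 + e^(-a3 (t - t0))); the function name and the parameter values in any call are ours:

```python
import math

def tic(t, a0, a1, a2, a3, t0):
    """Time intensity curve model of Eq. (1): baseline a0 plus an uptake
    (logistic rise with mean slope a3 and onset t0) scaled by the
    saturation a1 and an exponential washout with decay constant a2."""
    return a0 + a1 * math.exp(-a2 * t) / (1.0 + math.exp(-a3 * (t - t0)))
```

With a1 = 0 the curve reduces to the baseline a0, and for large t the washout term drives it back towards a0, matching the description of the three parts above.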
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 625–628, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
626
A. Maciak, A. Kronfeld, P. Stoeter, T. Vomweg, D. Mayer, G. Seidel, K. Meyer-Wiethe
Figure 1. Time intensity curve with the parameters LPI and TTP illustrated.

A typical model based time intensity curve and its signal change (differentiation) are shown in figure 1. The figure shows the model curve with three different parameters describing the curve in a physiological way. The imaging modality does not directly measure the time intensity curve; it measures some data points, which give only an indication of the contrast agent behavior. A flow describing model must be fitted to these data to obtain an analytical description of the time intensity curve. This is done by curve fitting, the process of estimating model parameters from measured data in an optimal way. Fitting methods belong to the class of optimization problems, and there are several numerical methods to solve curve fitting tasks [18]. Simple approaches are polynomial fitting methods. Non-linear models like most pharmacokinetic models or the suggested model (1) need more sophisticated methods for curve fitting. The Gauss-Newton method solves a non-linear optimization problem iteratively, resulting in a local minimum of the least squares residual. This local minimum need not be the global one; the outcome depends on the starting points. A second common method for curve fitting problems is the Levenberg-Marquardt method [19]. It is more robust than Gauss-Newton because it uses a gradient descent component to force decreasing residuals. Both methods minimize the sum of squared errors (least squares). A comprehensive overview of curve fitting and optimization is given in [18]. The suggested model (1) can easily be fitted with both methods. The variability of the model parameters is shown in figure 2, which gives a good impression of what happens when a parameter changes slightly. This model was validated on several clinical applications which require a valid representation of the contrast agent behavior.
On the basis of this representation, contrast agent uptake and washout descriptors can be extracted.
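The Gauss-Newton iteration discussed above can be sketched as follows; this is a generic textbook implementation with a numeric Jacobian, not the authors' code, and it is demonstrated on a toy exponential decay rather than the full five-parameter model:

```python
import numpy as np

def gauss_newton(f, t, y, p0, iters=50):
    """Gauss-Newton least-squares fitting: linearize the residual with a
    forward-difference Jacobian and solve the linear least-squares
    subproblem for the parameter update. It converges only from good
    starting points, the weakness Levenberg-Marquardt damping addresses."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(t, p)                          # current residuals
        J = np.empty((t.size, p.size))
        for j in range(p.size):                  # numeric Jacobian of f
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (f(t, p + dp) - f(t, p)) / 1e-6
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-12:
            break
    return p

# Toy example: recover the washout term of Eq. (1), y = p0 * exp(-p1 * t)
decay = lambda t, p: p[0] * np.exp(-p[1] * t)
t = np.linspace(0.0, 5.0, 40)
y = decay(t, np.array([2.0, 0.7]))
p_hat = gauss_newton(decay, t, y, [1.5, 0.5])
```

In practice a library routine such as SciPy's `curve_fit` (whose default method for unbounded problems is Levenberg-Marquardt) would be used instead of hand-rolled iterations.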
Figure 2. Variability of the presented model with different a1, a2, a3 and t0. The baseline a0 is obvious.
III. RESULTS

Perfusion Sonography

Harmonic imaging depends on a robust and valid representation of flow kinetics. It is necessary to analyze the cerebral blood flow, the cerebral blood volume and further parameters describing the perfusion (mean transit time, peak intensity, time to peak, area under the curve and full width at half maximum). These parameters are extracted from the flow kinetics. The problem of harmonic imaging is the very low signal to noise ratio, caused by the cranial bones, shadowing artifacts and heavy speckle noise. The model was validated on a study consisting of 26 patients using Bolus Harmonic Imaging. For each patient 30-60 images (560×480 px) were acquired with an inter-image distance of 1-5 seconds. Each coordinate of these images contributed 30-60 data points for computing a time intensity curve on the basis of the presented physical model. The Gauss-Newton method was used for fitting. This resulted in a mathematical model with 5 degrees of freedom. After that, the local peak intensity, time to peak and area under the curve were extracted analytically by simple curve sketching. The model has proved to be robust and fast. Parametric images extracted from the analytical description of the TIC seem to show a more accurate correlation to perfusion than parametric images based on the raw imaging data [20, 21]. On the basis of this model, a method for detecting stripe artifacts in ultrasound images has been presented [22].
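Extracting the perfusion parameters from the fitted model can be sketched by dense sampling; the paper derives them analytically by curve sketching, so this substitute and its names and defaults are ours:

```python
import math

def tic_features(a0, a1, a2, a3, t0, t_end=60.0, n=6000):
    """Local peak intensity, time to peak (TTP) and area under the curve
    (above the baseline a0) of the fitted time intensity curve of Eq. (1),
    approximated by sampling the analytic model on a dense grid."""
    dt = t_end / n
    ts = [i * dt for i in range(n + 1)]
    vals = [a0 + a1 * math.exp(-a2 * t) / (1.0 + math.exp(-a3 * (t - t0)))
            for t in ts]
    peak = max(vals)
    ttp = ts[vals.index(peak)]
    auc = sum(v - a0 for v in vals) * dt   # area above the baseline
    return peak, ttp, auc
```

Because the fitted model is an explicit analytic function, these descriptors are stable against the noise in the raw data points, which is the practical advantage reported for the parametric images.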
Figure 3. Different time intensity curves from typical perfusion sonography image coordinates. Well-perfused (red and blue) vs. low-perfusion (green) regions.
MR-Mammography

Contrast-enhanced MR-mammography depends on a robust estimation of the contrast enhancement and washout of contrast agents. MR-mammography measures only a few data points, so it is a high computational effort to estimate a dynamic curve from such a low amount of data. On the basis of the pharmacokinetic behavior of contrast agents, kinetic parameters like signal intensity, washout time and relative peak enhancement can be extracted to classify tissue into normal parenchyma, benign lesions and malignant lesions. The study in which the presented model was used relies on 142 patients with 130 malignant and 61 benign histologically proven lesions. One pre-contrast MRI volume and 3 to 6 post-contrast T1w 3D volumes were acquired, depending on the clinic (4 clinical settings, breast coils required, dimensions of 512 × 256 × 80 up to 460 × 640 × 96). This resulted in 4 to 7 data points for each picture coordinate. All 5 degrees of freedom were estimated with the Levenberg-Marquardt method for each coordinate. This resulted in 5 parametric 3D volumes describing the baseline, slope, enhancement, washout and uptake. After that, the baseline volume a0 was subtracted from each post-contrast volume, resulting in an optimized subtraction image; this also leads to an optimal image intensity homogenization. The signal enhancement and washout values are shown directly in the parametric volumes. Based on the segmentation result, morphological and kinetic features were extracted. The classification was done with simple feed-forward neural networks. Despite the small number of data points, this model has proven to be robust and easy to fit with well-known fitting methods [23].
Figure 4. TIC showing malignant enhancing behavior (red) vs. TIC showing benign enhancing behavior (green) vs. TIC showing typical non-enhancing behavior (blue, i.e. fat tissue).

Myocardial Perfusion

Myocardial perfusion MRI can be used to differentiate between hemodynamically relevant and non-relevant stenoses of coronary vessels and may thus substitute invasive angiographies. For the determination of myocardial perfusion, the first pass of a T1-shortening contrast agent through the myocardial tissue is measured. 3 to 5 ECG-gated short-axis images are acquired every heart beat for approx. 40 heart beats during one breath hold. Typical resolutions are 2-3 mm² with a slice thickness of 8 mm [24]. Normalized signal-intensity-time curves are derived from each voxel of the myocardial tissue or from regions of interest which divide the myocardium into segments, and are evaluated using the physical model (see fig. 4). The voxel-based evaluation method shows perfusion parameters with a higher spatial resolution. But since physiological motion caused by the beating of the heart or by breathing is not fully avoidable, the segmental evaluation method is in the majority of cases more stable and better comparable to other measurements. In a first-pass perfusion measurement, the blood flow is described best by the uptake term [25]. To assess the hemodynamic relevance of a stenosis, two measurements have to be done: one under rest conditions and one under pharmacologically induced stress. The quotient of the two corresponding uptake values gives information about the hemodynamic conditions in the corresponding area or segment [11]. The perfusion of the myocardial tissue depends strongly on the hemodynamic conditions in the left ventricle of the heart. Therefore, results could be more accurate if an arterial input function (AIF) were used [25]. Further validations of the model on first-pass perfusion data of the myocardial tissue have to be done.
Figure 4. TIC showing low enhancing behavior (red) vs. TIC showing normal enhancing behavior (green) vs. TIC showing typical late enhancing behavior (blue).

IV. DISCUSSION AND CONCLUSIONS

We have presented a simple model allowing the robust estimation of a bolus flow of contrast agents in the human body. The model is based on the physical behavior of fluids and does not need further physiological parameters such as arterial input functions, renal extraction coefficients or body compartment indices. It fits the main needs of perfusion imaging, i.e. the robust estimation of parametric images (peak intensity, time to peak, mean transit time). The limitation of such a multi-purpose physical model is the lack of power to quantify flow speed or flow volume. But these issues are not solved even by the use of compartment models and remain a main focus of research interest. In conclusion, the model fits very well to general bolus tracking applications in MRI and ultrasound, despite the different imaging modalities, difficulties and applications. It also gives an accurate description of the uptake and washout of contrast agents in CE-MRI-mammography. The practicability, robustness and descriptive power were confirmed in different clinical applications, though the results of the myocardial perfusion studies are pending at this moment.

REFERENCES

1. Zierler KL: Theoretical Basis of Indicator-Dilution Methods For Measuring Flow and Volume. Circulation Research 10 (1962) 393-407
2. Wagner JG: Pharmacokinetics for the Pharmaceutical Scientist. 1 edn. Technomic Publishing Co., Lancaster, Basel (1993)
3. Benet LZ et al.: Clinical Pharmacokinetics and Pharmacodynamics. In: University of California (ed.) Encyclopedia of Pharmacological Technology. Marcel Dekker Inc., San Francisco (2002)
4. Le Bihan D et al.: MR imaging of intravoxel incoherent motions: application to diffusion and perfusion in neurologic disorders. Radiology 161(2) (1986) 401-407
5. Rosen B et al.: Perfusion imaging with NMR contrast agents. Magnetic Resonance in Medicine 14 (1990) 249-265
6. Tofts PS: Quantitative MRI of the Brain - Measuring changes caused by disease. Wiley & Sons (2003)
7. Koenig M et al.: Perfusion CT of the Brain: Diagnostic Approach for Early Detection of Ischemic Stroke. Radiology 209 (1998) 85-93
8. Roether J et al.: Hemodynamic assessment of acute stroke using dynamic single-slice computed tomographic perfusion imaging. Arch Neur 57 (2000) 1161-1166
9. Wiesmann M et al.: Ultrasound perfusion imaging of the human brain. Stroke 31 (2000) 2421-2425
10. Harrer JU et al.: Comparison of Perfusion Harmonic Imaging and Perfusion MR Imaging for the Assessment of Microvascular Characteristics in Brain Tumors - Perfusion Harmonic Imaging in Brain Tumours. Ultraschall in der Medizin 58 (2007) 1-8
11. Jerosch M et al.: Analysis of Myocardial Perfusion MRI. Journal of Magnetic Resonance Imaging 19 (2004) 758-770
12. Weinmann HJ et al.: Contrast Agents. MR Imaging of the Skull and Brain. Springer, Berlin New York (1992)
13. Dawson P et al.: Textbook of Contrast Media. Isis Medical Media (1999)
14. Smith DA et al.: Pharmacokinetics and Metabolism in Drug Design. Wiley-VCH Verlag GmbH (2001)
15. Vijayakumar S et al.: Evaluation of Three Different Kinetic Models for Use with Myocardial Perfusion MRI Data. In: Proceedings of the 26th Annual International Conference of the IEEE EMBS, Vol. 4, San Francisco, CA, USA (2004) 1922-1924
16. Srikanchana R et al.: A Comparison of Pharmacokinetic Models of Dynamic Contrast Enhanced MRI. In: 17th IEEE Symposium on Computer Based Medical Systems, Proceedings (2004)
17. Meyer-Wiethe K et al.: Comparison of Different Mathematical Models to Analyse Diminution Kinetics of Ultrasound Contrast Enhancement in a Flow Phantom. Ultrasound in Medicine and Biology 31(1) (2005) 93-98
18. Schwarz HR: Numerische Mathematik. 4 edn. B.G. Teubner, Stuttgart (1997)
19. Marquardt DW: An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics 11(2) (1963) 431-441
20. Maciak A et al.: Robuste Ermittlung parametrischer Bilder für die Ultraschall-Perfusionsbildgebung basierend auf einem Model der Boluskinetik von Kontrastmitteln. In: Biomedizinische Technik, 26.-29. September 2007, Proceedings, Aachen (2007)
21. Maciak A et al.: Automatic Detection of Perfusion Deficits with Bolus Harmonic Imaging. European Journal of Ultrasound (2008) 29: eFirst. DOI 10.1055/s-2008-1027190
22. Maciak A et al.: Detecting Stripe Artifacts in Ultrasound Images. Journal of Digital Imaging (2007) 23(3): Suppl. 1. DOI 10.1007/s10278-007-9049-0
23. Mayer D et al.: Vollautomatisierte Tumordiagnose in der dynamischen MRT der weiblichen Brust. In: Bildverarbeitung für die Medizin 2007, 25.-27. März, München, Springer-Verlag, ISBN 3540710905, 86-90
24. Schreiber WG et al.: Dynamic contrast-enhanced myocardial perfusion imaging using saturation-prepared TrueFISP. Journal of Magnetic Resonance Imaging 16(6) (2002) 641-652
25. Schmitt M et al.: Quantification of myocardial blood flow and blood flow reserve in the presence of arterial dispersion: a simulation study. Magnetic Resonance in Medicine 47(7) (2002) 787-793
An optimization framework for classifier learning from image data for computer-assisted diagnosis

J. Mennicke1, C. Münzenmayer2, T. Wittenberg2, and U. Schmid1

1 Faculty Information Systems and Applied Computer Science, University of Bamberg, Bamberg, Germany
2 Dept. for Image Processing & Biomedical Engineering, Fraunhofer-Institute for Integrated Circuits IIS, Erlangen, Germany
Abstract—In computer-assisted medical diagnosis it is often hard or even impossible to obtain a valid set of rules for disease classification by classical knowledge engineering methods. Alternatively, machine learning methods are applied to obtain classifiers from sets of data pre-classified by medical experts. Typically in a medical context, available data sets are imbalanced with respect to the possible classifications. E.g., in dermatology there are only few data representing cases of malign melanoma vs. many cases representing benign nevi. Furthermore, different misclassification costs are assigned to different classes. E.g., it is much more critical (i.e. costly) to erroneously classify a malign melanoma as benign than the other way around. We propose a universally applicable optimization framework that successfully corrects the error-based inductive bias of classifier learning methods on image data. The framework integrates several common optimization techniques, such as modifying the optimization procedure for inducer-specific parameters, modifying input data by an arcing algorithm, and combining classifiers of several classifier learning methods (kNN, SVM and C4.5) with different settings according to locally-adaptive, cost-sensitive voting schemes. The framework is designed to make the learning process cost-sensitive and to enforce more balanced misclassification costs between classes. The framework was evaluated on image data for Barrett's esophagus with promising results compared to the base learners.

Keywords—Classifier Learning, Computer-Assisted Diagnosis, Medical Image Data.
I. INTRODUCTION

When designing a system for computer-assisted diagnosis, be it in medicine or in other contexts of application, the greatest challenge is to model the expert knowledge in such a way that (a) automated diagnosis meets the accuracy of human experts and (b) the criteria on which a diagnosis was based are transparent to the human expert. The traditional way to design a diagnosis system is knowledge engineering, that is, to employ empirical methods of knowledge acquisition and to build the knowledge base by formalizing this knowledge. This approach is known to be tedious and prone to errors and omissions, the so-called knowledge acquisition bottleneck [Cohen and Feigenbaum, 1982], [Cuilen and Bryman, 1988]. An alternative way to obtain
expert knowledge is to use machine learning methods [Mitchell, 1997]. Here, experts give their diagnoses for a sample of medical data. The data are represented as feature vectors and the diagnoses are coded as classes associated with the corresponding feature vectors. The great advantage of the learning approach to knowledge acquisition is that the experts are just doing what they are experts in, that is, coming up with a diagnosis in light of medical data, and there is no need to conduct lengthy interviews or other assessment methods. A classification learner will automatically extract the relevant feature combinations which best reproduce the diagnostic ratings of the experts, and the learned classifier can subsequently be used to generate diagnostic proposals. In many areas of medicine, the main data sources on which diagnoses are based are visual: in the simplest case, data result from direct inspection of skin, throat, teeth or other body parts. Often image data are gained from X-rays or camera snapshots (e.g. in endoscopy). Obtaining textural, color or form features from such image data is a much-researched topic in the domain of computer-assisted medical diagnosis [Münzenmayer, 2006]. Typically, the main effort in designing and implementing such diagnosis systems is to come up with fast and reliable algorithms for feature extraction from image data. In contrast, the modeling of the classification rules which are applied to such features to produce a diagnosis is often done in a rather ad hoc manner. This aspect is the focus of the work presented in this paper and, as argued above, we propose using machine learning methods to obtain classification rules by which a diagnosis can be gained from features extracted from image data. A common problem of machine learning approaches is that the quality of the obtained classifiers depends on a fairly even distribution of cases between classes.
In the context of medical diagnosis, severe illnesses typically occur very seldom and therefore the data sample is biased. Furthermore, in medical diagnostics faulty diagnoses vary in their degree of harmfulness for the patients: overlooking a severe illness is highly critical, while erroneously suspecting a severe illness which can be excluded after further examinations is relatively harmless. Finally, diagnostic classes often
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 629–632, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
J. Mennicke, C. Münzenmayer, T. Wittenberg, and U. Schmid
have varying degrees of hardness of class boundaries, that is, only for some types of data is diagnosis straightforward. To address these problems, we designed a framework considering several points for improvement of common optimization techniques, such as modifying the optimization procedure for inducer-specific parameters, modifying input data by means of an arcing algorithm, and combining classifiers of several classifier learning methods with different settings according to locally adaptive, cost-sensitive voting schemes. The framework is designed to make the learning process cost-sensitive and to enforce more balanced misclassification costs between classes. In the following we first introduce the medical image data used to evaluate our framework. We then briefly characterize the base machine learning methods used, present our learning framework, and finally report the learning results gained by using this framework versus the base learners. Details about the data, further data sets we investigated, the background of classifier learning, the framework, and the experimental settings and results are given in [Mennicke et al., 2008].
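To make the combination idea concrete, a locally adaptive, cost-sensitive vote over several classifiers can be sketched as follows. This is an illustrative toy implementation under assumed data and a simplified trust measure, not the exact scheme from [Mennicke et al., 2008]:

```python
import numpy as np

def per_class_trust(preds, y_true, cost, n_classes):
    """trust[m, c]: average cost incurred by model m on samples it assigns
    to class c. Lower cost means higher trust for that output class.
    """
    trust = np.zeros((len(preds), n_classes))
    for m, p in enumerate(preds):
        for c in range(n_classes):
            mask = p == c
            trust[m, c] = cost[y_true[mask], c].mean() if mask.any() else np.inf
    return trust

def locally_adaptive_vote(preds, trust):
    """For each sample, keep the prediction of the model with the lowest
    per-class cost for the class it predicts (locally adaptive selection)."""
    preds = np.asarray(preds)                          # (n_models, n_samples)
    costs = np.take_along_axis(trust, preds, axis=1)   # cost of each vote
    return preds[np.argmin(costs, axis=0), np.arange(preds.shape[1])]

# Toy data: cost matrix as in Table 2, two hypothetical base classifiers.
cost = np.array([[0, 0.25, 0.25], [0.75, 0, 1.0], [0.25, 0.5, 0]])
y = np.array([0, 0, 1, 1, 2, 2])
preds = [np.array([0, 0, 2, 1, 2, 2]),                 # model A: misses one BM
         np.array([0, 1, 1, 1, 2, 0])]                 # model B: good on BM
trust = per_class_trust(preds, y, cost, 3)
print(locally_adaptive_vote(preds, trust))
```

The combined vote follows model B only where B's predicted class is the one B handles cheaply, which is the locally adaptive aspect described above.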
Table 1 Characteristics of data sets

Data set  | Cases | Attributes | EP  | BM  | CC
I183P1035 | 482   | 180        | 182 | 122 | 178
I233P1017 | 300   | 48         | 97  | 80  | 123
Table 2 Cost matrix of Barrett's esophagus classification problems

True class c(x) \ Classified as h(x):   EP     BM     CC
EP                                      0      0.25   0.25
BM                                      0.75   0      1
CC                                      0.25   0.5    0
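To make the role of the cost matrix concrete, the average misclassification cost of a classifier can be computed from its confusion matrix and the cost matrix of Table 2; a minimal sketch (class order EP, BM, CC; the confusion matrix below is a hypothetical example, not a result from the paper):

```python
import numpy as np

# Cost matrix from Table 2: rows = true class, columns = predicted class.
# Missing a neoplastic BM case (predicting CC) carries the highest cost, 1.0.
COST = np.array([
    [0.00, 0.25, 0.25],   # true EP
    [0.75, 0.00, 1.00],   # true BM
    [0.25, 0.50, 0.00],   # true CC
])

def expected_cost(confusion, cost=COST):
    """Average misclassification cost per sample.

    confusion[i, j] = number of samples of true class i predicted as class j.
    """
    confusion = np.asarray(confusion, dtype=float)
    return float((confusion * cost).sum() / confusion.sum())

# Hypothetical confusion matrix with the class totals of data set I183P1035:
conf = np.array([
    [170, 6, 6],      # 182 EP cases
    [10, 100, 12],    # 122 BM cases, some missed
    [8, 10, 160],     # 178 CC cases
])
print(round(expected_cost(conf), 4))
```

An error-based learner minimizes the number of off-diagonal entries; a cost-sensitive learner minimizes this weighted sum instead.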
II. DATA COLLECTIONS

As data base we used two sets of endoscopic images from patients with suspected Barrett's mucosa [Münzenmayer, 2006]. The Barrett's esophagus classification problem (data sets I233P1017 and I183P1035) requires the framework to induce a classifier that successfully classifies mucous tissues into one of three different classes: class EP represents benign epithelium, class BM neoplastic Barrett's mucosa, and class CC contains mucosa of the cardia and corpus, as there is no clear boundary between these two. For feature extraction from the images, color texture algorithms [Münzenmayer, 2006] were used. After feature extraction, each image is represented by a real-valued feature vector. The two data sets differ in the features extracted from these images. Features for data set I233P1017 were extracted using spectral color correction, color shading correction and statistical geometrical features [Münzenmayer, 2006]. Features for data set I183P1035 were extracted using color shading correction and sum- and difference-histograms [Münzenmayer, 2006]. The characteristics of the data sets are given in Table 1. For the experiments, exemplary cost matrices were defined to our best knowledge (see Table 2). For real applications, these matrices should be defined by experts. The classes in these data sets are imbalanced. Furthermore, class boundaries have varying degrees of hardness that correlate with the cost matrix, meaning that hard-to-learn classification boundaries (high misclassifications in the misclassification matrix) are also the most costly. This poses classification problems which are difficult to handle by any error-based learning method.

III. CLASSIFIER LEARNERS

There exists a large variety of methods for classification learning, with different strengths and weaknesses [Michie et al., 1994]. We selected three well-known learning approaches which are highly different. The high variety in the characteristics of the methods is expected to contribute to an overall improvement of the classification performance when they are combined into a single voting classifier. Taking the local capabilities of the learners into account, such a meta-learner should be capable of overcoming individual weaknesses while combining only the best aspects of the individual classifiers. K-nearest neighbor learning (kNN) [Cover and Hart, 1967] is the most basic lazy (or instance-based) learning method [Mitchell, 1997]. Lazy learners, as opposed to eager learners, do not explicitly induce a representation of the target function. Their training phase rather consists of simply storing the training data. Generalization is postponed until an unseen query instance is to be classified. During the classification phase the learning algorithm examines the relationship of the new instance to the instances in the training data. This forms the basis for the decision about the assigned class. kNN learning can be expected to be less capable of defining class regions in feature space which suffer from the curse of dimensionality, meaning that many attributes are irrelevant
An optimization framework for classifier learning from image data for computer-assisted diagnosis
for defining the local decision boundary. Nevertheless, for class regions in feature space whose decision boundaries depend on many attributes, boundaries are possibly very complex and drawn diagonally to the attribute axes. Here kNN can be expected to outperform other methods, because the classification is based on local instances only rather than on a global approximation of the target function. When highly complex decision boundaries need to be approximated, the value of k must be small, thereby increasing the sensitivity to noise in the data. The degree of fit to the data can only be adjusted globally. Support vector machines (SVMs) [Schölkopf et al., 1999] are powerful kernel-based learning methods that belong to the class of eager learners. SVMs aim to induce a linear decision function in feature space. In a multi-dimensional feature space, such a linear decision function is represented by a hyperplane. The hyperplane is to be chosen in such a way that its margin is maximal. The margin is defined by the minimal distance of the hyperplane to those instances in the training data (support vectors) which are closest to the hyperplane. As opposed to kNN learning, SVMs can be expected to suffer less from the curse of dimensionality and from class regions in which instances of neighboring classes are imbalanced. They are able to draw arbitrary decision functions in input space which yield a global approximation of the target function. Decision tree (DT) inducers are eager learning methods which build symbolic hypotheses represented by a decision tree (or a set of if-then rules). With this type of hypothesis language, disjunctive concepts can be expressed, and the learned trees or rules are easily understandable to humans. One of the most common decision tree learning algorithms is C4.5 [Quinlan, 1993], which can be applied to problems with discrete as well as continuous-valued attributes.
DT learning can be expected to suffer least from the curse of dimensionality, as it is the only method that considers the local discriminative power of the attributes. It is robust to noise in the data, as most relevant attribute tests are based on many training instances, thereby reducing the effect of noise on the classification performance. The sensitivity of DTs to noise when classifying an unseen instance on continuous-valued attributes is eliminated by the application of soft thresholds in Quinlan's C4.5. The degree of fit to the data is adjusted locally by the pruning facilities according to the relevance of the attribute tests of each subtree. However, for complex decision boundaries that are drawn diagonally to the attribute axes, DTs will only perform well if sufficient data is available. Additionally, DT learning is usually sensitive to imbalanced data sets, because the selection of attribute tests will be biased by such distributions in the data.
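The qualitative contrasts between the three base learners can be made tangible on a small imbalanced toy problem. The sketch below uses scikit-learn as a stand-in (an assumption; the paper's own implementations and settings are documented in [Mennicke et al., 2008]):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Imbalanced 3-class toy problem standing in for the Barrett data sets.
X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           n_classes=3, weights=[0.5, 0.2, 0.3],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

learners = {
    "kNN": KNeighborsClassifier(n_neighbors=5),     # lazy, purely local
    "SVM": SVC(kernel="rbf", gamma="scale"),        # eager, global margin
    "DT":  DecisionTreeClassifier(random_state=0),  # symbolic, axis-parallel
}
for name, clf in learners.items():
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {acc:.3f}")
```

Plain accuracy, as printed here, is exactly the error-based view the framework below moves away from: it ignores both the class imbalance and the unequal misclassification costs.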
IV. OPTIMIZATION FRAMEWORK

To realize cost-sensitive learning and enforce more balanced misclassification costs between classes, we propose a framework which is organized in three stages:
• The base learning methods (here kNN, SVM, and DT) tuned by parameter selection with regard to error-based, cost-sensitive, and cost-balancing objectives (Level 1).
• A cost-sensitive, cost-balancing arcing (boosting-like) algorithm wrapped around each base learner that creates ensembles, modifying inputs of the base learner in a cost-sensitive, cost-balancing manner, and combines the models of the ensemble by a cost-sensitive, locally adaptive voting scheme. Several strategies are used for both input modifications and voting (Level 2).
• A combination of such boosted ensembles, again using cost-sensitive, locally adaptive voting schemes (Level 3).

Cost-sensitive means that costs from a misclassification cost matrix are to be minimized. Cost-balancing refers to the capability of the classification to deliver balanced average expected misclassification costs between classes. Locally adaptive can refer to the region in feature space, but in the experiments this only refers to being adaptive to the respective classes. Arcing (adaptively resampling and combining) [Domingos, 1999] algorithms are an approach to optimize learners by introducing resampling techniques, training different classifiers with different samples, and producing a classification via majority vote over these classifiers. For our framework we developed a new algorithm which allows for cost-sensitive and cost-balancing arcing and uses a locally adaptive combination scheme [Mennicke et al., 2008]. Resampling is realized via probabilities p(i) associated with the training data, making it more probable for a data vector with a high probability to be included in the sample. Cost-sensitive resampling is based on the following weight-updating formula:

p_{t+1}(i) = \frac{p_t(i) \cdot (1 + cst(c(x_i), h_t(x_i))^a)}{\sum_i p_t(i) \cdot (1 + cst(c(x_i), h_t(x_i))^a)}

Cost-sensitive and cost-balancing resampling is based on:

p_{t+1}(i) = \frac{p_t(i) \cdot (1 + cst(c(x_i), h_t(x_i))^a) \cdot (1 + (cst_{c(x_i)})^b)}{\sum_i p_t(i) \cdot (1 + cst(c(x_i), h_t(x_i))^a) \cdot (1 + (cst_{c(x_i)})^b)}

where

cst_{c(x_i)} = \frac{\sum_{x:\, c(x_i) = c(x)} cst(c(x_i), h(x))}{\sum_{x:\, c(x_i) = c(x)} 1}
Table 3 Results of error-based C4.5, SVM and kNN compared to the average meta-learner

Classifier       | Overall Error | Deviation Costs | Squared Sum Costs | Overall Costs
Error-based C4.5 | 0.245         | 0.182           | 0.162             | 0.159
Average Meta     | 0.136         | 0.130           | 0.074             | 0.100
Diff (%)         | 44.5          | 28.6            | 54.3              | 37.1
Error-based SVM  | 0.196         | 0.151           | 0.097             | 0.130
Average Meta     | 0.136         | 0.130           | 0.074             | 0.100
Diff (%)         | 30.6          | 13.9            | 23.7              | 23.1
Error-based kNN  | 0.174         | 0.166           | 0.116             | 0.123
Average Meta     | 0.136         | 0.130           | 0.074             | 0.100
Diff (%)         | 21.8          | 21.7            | 36.2              | 18.7
and cst(c(x_i), h_t(x_i)) represents the misclassification cost as predefined (see Table 2). We defined different voting schemes similar to a weighted product rule [Mennicke et al., 2008]. The simplest scheme is based on the actual costs caused by a classifier on all available data, that is, the classifier with the highest global trust is selected. To realize local adaptiveness, another voting scheme selects the classifier which performs best on a specific output class. By combining both schemes, a mixed trust scheme can be obtained, which was also used. Combining different strategies for parameter selection for the three base learners (Level 1) with different strategies for resampling and voting (Level 2) results in a collection of classifiers which are finally combined into a meta-classifier (Level 3). For the meta-classifier the same voting schemes as on Level 2 were realized.
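The cost-sensitive, cost-balancing weight update used for resampling can be sketched directly from the formulas above. This is a minimal sketch; the exponents a and b and the toy data are illustrative assumptions, and the per-class term cst_{c(x_i)} is implemented here as the mean cost over the current hypothesis:

```python
import numpy as np

def update_weights(p, true_cls, pred_cls, cost, a=1.0, b=1.0, balance=True):
    """One arcing step: raise the sampling probability of costly errors.

    p        : current sampling probabilities p_t(i), one per training vector
    true_cls : true class index c(x_i) per vector
    pred_cls : class h_t(x_i) predicted by the current ensemble member
    cost     : misclassification cost matrix, cost[true, predicted]
    """
    cst = cost[true_cls, pred_cls]          # cst(c(x_i), h_t(x_i))
    w = p * (1.0 + cst ** a)                # cost-sensitive factor
    if balance:
        # Per-class average cost cst_{c(x_i)} over vectors of the same class.
        cls_cost = np.array([cst[true_cls == c].mean()
                             for c in range(cost.shape[0])])
        w = w * (1.0 + cls_cost[true_cls] ** b)   # cost-balancing factor
    return w / w.sum()                      # normalized p_{t+1}(i)

# Toy example: cost matrix from Table 2, six training vectors, one costly
# BM -> CC error whose sampling probability should grow the most.
cost = np.array([[0, 0.25, 0.25], [0.75, 0, 1.0], [0.25, 0.5, 0]])
p = np.full(6, 1 / 6)
true_cls = np.array([0, 0, 1, 1, 2, 2])
pred_cls = np.array([0, 0, 2, 1, 2, 2])
p_next = update_weights(p, true_cls, pred_cls, cost)
print(p_next)
```

After the update, correctly classified vectors keep relatively low weights while the misclassified BM vector dominates the next sample, which is the intended cost-sensitive resampling behavior.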
V. RESULTS AND CONCLUSION

The specific settings for the base algorithms, as well as details on the sampling method and the evaluation measures used, are reported in [Mennicke et al., 2008]. Improvements achieved by the whole framework compared to the error-based learning base methods DT, SVM, and kNN on data sets I233P1017 and I183P1035 ranged from 13.9% to 54.3% across all performance measures (averaged over both data sets). The results are summarized in Table 3. In addition to the overall classification error, performance evaluation was done with respect to the costs associated with misclassifications. Beyond the data reported here, we also performed experiments with image data for malign melanoma classification and blood cell classification, with similar results.

Since medical image data have several characteristics which make it hard for standard error-based machine learning approaches to produce automated classifiers with acceptable accuracy rates, such methods are often ignored and computer-assisted diagnosis systems rely on hand-crafted classification rules instead. We proposed a learning framework which specifically addresses the specifics of medical image data and showed promising initial results.

REFERENCES

Cohen and Feigenbaum, 1982. Cohen, P. R. and Feigenbaum, E. A. (1982). The Handbook of Artificial Intelligence, volume 3. William Kaufmann, Los Altos, CA.
Cover and Hart, 1967. Cover, T. and Hart, P. (1967). Nearest Neighbor Pattern Classification. IEEE Transactions on Information Theory, 13:21–27.
Cullen and Bryman, 1988. Cullen, J. and Bryman, A. (1988). The knowledge acquisition bottleneck: Time for a reassessment? Expert Systems, 5(3):216–224.
Domingos, 1999. Domingos, P. (1999). MetaCost: A general method for making classifiers cost-sensitive. In KDD '99: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 155–164. ACM Press.
Mennicke et al., 2008. Mennicke, J., Münzenmayer, C., and Schmid, U. (2008). Classifier Learning for Imbalanced Data with Varying Misclassification Costs - A Comparison of kNN, SVM, and Decision Tree Learning. VDM, Saarbrücken. (Based on the diploma thesis of J. Mennicke, University of Bamberg, 2006, http://www.cogsys.wiai.unibamberg.de/theses/mennicke/mennicke.pdf)
Michie et al., 1994. Michie, D., Spiegelhalter, D., and Taylor, C. (1994). Machine Learning, Neural and Statistical Classification. Ellis Horwood.
Mitchell, 1997. Mitchell, T. M. (1997). Machine Learning. McGraw-Hill, New York.
Münzenmayer, 2006. Münzenmayer, C. (2006). Color Texture Analysis in Medical Applications. Der Andere Verlag, Tönning.
Quinlan, 1993. Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco.
Schölkopf et al., 1999. Schölkopf, B., Burges, C. J. C., and Smola, A. J. (1999). Advances in Kernel Methods – Support Vector Learning. MIT Press, Cambridge, MA.
Address of the corresponding author:

Author: Ute Schmid
Institute: Faculty WIAI, University of Bamberg
Street: Feldkirchenstraße 21
City: 96045 Bamberg
Country: Germany
Email: [email protected]
Classification of alveolar microscopy videos with respect to alveolar stability

D. Schwenninger¹, K. Moeller², H. Liu¹ and J. Guttmann¹

¹ Division of Experimental Anesthesiology, University Medical Center Freiburg, Germany
² Department of Biomedical Engineering, Furtwangen University, Villingen-Schwenningen, Germany
Abstract — With endoscopic microscopy, the in-situ and in-vivo analysis of alveolar dynamics during mechanical ventilation in animal models of lung diseases is possible [1]. In an animal study, microscopy videos of sub-pleural alveoli were recorded over several breathing cycles with different ventilation modes and settings. The alveolar stability might be useful to evaluate guidelines for respirator settings in the treatment of critically ill lungs. It can be obtained by analyzing the proportional change in alveolar size during ventilation [2]. This project aims to calculate numerical values that describe the alveolar stability without performing automated [3] or manual [2] segmentation. Instead, alveolar stability is measured by summing up the image variations in the videos to be classified. Since the alveoli can be identified in the videos by their border edges, the alveolar stability is assumed to correlate with the amount and change of edges in the video. Thus, in a preprocessing step, the edges of the video frames are extracted. High changes in amount and intensity of the detected edges (relative to the mean values) are supposed to correlate with low stability, and vice versa. As a basis for the evaluation of the calculated values, as well as for the training of a classifying neural network, some sort of "stability score" is required. The visual evaluation of 315 videos by 4 individuals is used for this purpose. The calculated values correlate with the mean of the stability score (correlation factor ~0.8). We thus developed a reproducible method for the classification of alveolar microscopy videos in an ARDS model in terms of alveolar stability. This method is fully automated and can be used in real-time to evaluate respirator settings on-line.

Keywords — alveoli, stability, lung mechanics, image processing, classification
I. INTRODUCTION

Ventilator-induced lung injury (VILI) is a serious problem in intensive care, with a significant mortality rate. In 2000, the ARDS (acute respiratory distress syndrome) Network study showed a correlation between ventilator settings and mortality [4]. This indicates that identifying the ventilation modes and settings with the least mortality is a matter of necessity. Evaluating available and new ventilation strategies, as well as individualizing these strategies to the corresponding patient, are promising approaches.
Alveolar microscopy can be used to obtain information about the alveoli's status and mechanical behavior during ongoing ventilation therapy. It can be used to monitor and evaluate ventilation maneuvers on the alveolar level. To this end, a microscopic endoscope was developed in our group that allows observing alveoli in-situ and in-vivo in animal models of lung diseases during mechanical ventilation [1]. One interesting parameter is the alveolar stability, relevant for avoiding pulmonary trauma due to atelectasis. A method to calculate alveolar stability already exists: it is defined by the proportional change in alveolar size during ventilation [2]. The size of the alveoli is measured manually, which makes the method unsuitable for real-time applications. There is also a method for measuring the alveolar size automatically [3]. However, this method depends on the clear visibility of the alveolus edges and requires an initialization. It is therefore better suited to processing recorded videos than to real-time monitoring. The aim of this project is to calculate global values from the videos to obtain information about alveolar stability in real-time while actually running respiratory maneuvers. These measures are supposed to express the alveolar stability as a single value in a logical and reproducible way. As a basis for the evaluation of the calculated values, as well as for the training of a classifying neural network, some sort of "stability score" is required. Therefore, the mean of 4 experts' evaluations was used. To obtain these evaluations, the subjects were advised to watch the alveolar videos and enter scores describing the alveolar stability based on their personal impression. The calculation of the global values is based on the alveoli in the available videos. These are defined by, and perceptible due to, their borders.
Thus, for the global values, the borders of the alveoli in the videos are extracted, and measures regarding their intensity and variability are calculated. Four different features are extracted from the borders. The correlation between the values and the mean of the subjects' stability scores is used as a measure of the values' classification utility. Furthermore, the four values were used to train a two-layer feedforward neural network (NN) to test whether the correlation can be enhanced by means of machine learning methods.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 633–636, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
II. MATERIAL

An available database of 315 recorded color videos with a resolution of 720x576 at 25 frames per second was used for the calculations. It includes recordings from healthy lungs and from models of lung disease (lavage lung). The videos were recorded using the alveolar microscope, as described in [1], on 11 different dissected rat lungs. While recording the videos, different ventilation settings were used for mechanical ventilation of the lungs, as listed in Table 1. The settings differ in the ventilation mode (volume or pressure controlled), in the positive end-expiratory pressure (PEEP) and in the tidal volume (VT).

Table 1 Used respirator settings

Ventilation mode    | PEEP [mbar] | VT [ml]
Pressure controlled | 5/10/20     | 6
Pressure controlled | 5/10/20     | 15
Volume controlled   | 5/10/20     | 6
Volume controlled   | 5/10/20     | 15

The audio tracks of the videos contain the sound of the switching ventilator valves.

III. METHODS

As outlined in the introduction, the aim is to calculate numeric values that represent a measure of alveolar instability and to evaluate them by using manually created scores as targets. As an alternative, a two-layer feedforward NN is trained with the calculated values to check whether the classification can be improved.

A. Classification by hand

To gain a basis for evaluation, stability scores were produced for the videos in the database. 4 experts were asked to watch the videos and grade them in terms of alveolar stability. A value between 0 (very stable) and 10 (most unstable) had to be assigned to every video. Before the evaluation by hand was started, all individuals were trained to evaluate the alveolar videos with respect to mechanical stability, so that they could establish a general idea of which alveoli are stable and which are unstable. While evaluating, the subjects had no knowledge of which ventilation settings were used for which video. The mean of the subjects' values is used to evaluate the automatically calculated values.

B. Calculation of values
To keep the calculated values comparable, the videos are limited to 2 respiration cycles. For this purpose, the audio track of the processed video is scanned for high values, since those appear only when the ventilator valves switch. A threshold is defined, and the video sequence is limited to those frames between the first and the fifth time-point at which the audio signal is above the threshold (one cycle = 2 valve actions). The selected video sequence is processed frame by frame. The resolution of the frames was thereby divided by 4 to increase the processing speed. To calculate the classifying values from a video sequence, the borders of the alveoli have to be extracted, which requires a transformation to a grey-value image. Since the borders of the alveoli are better perceptible in the color images than in the grey images of the video frames (if calculated by adding the color channels and dividing by their number), the L*a*b* color space is used, as it is based on human color perception. Especially the use of the normalized a* channel, which is created out of an image's red and green color information, led to a good performance. Equation 1 shows how the a* channel is calculated from the available RGB (red, green, blue) values utilizing the XYZ color space.
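The audio-based limiting of a video to two respiration cycles can be sketched as follows. The threshold value and the synthetic click track are illustrative assumptions:

```python
import numpy as np

def two_cycle_frame_range(audio, frame_of_sample, threshold):
    """Return the frame interval spanning two respiration cycles.

    Valve switches appear as loud clicks; the sequence is limited to the
    frames between the 1st and the 5th threshold crossing
    (one cycle = 2 valve actions, so two cycles = 4 click intervals).
    """
    above = np.abs(audio) > threshold
    # Indices where the signal newly rises above the threshold.
    onsets = np.flatnonzero(above & ~np.roll(above, 1))
    if len(onsets) < 5:
        raise ValueError("fewer than 5 valve clicks found")
    return frame_of_sample(onsets[0]), frame_of_sample(onsets[4])

# Synthetic audio: clicks every 2 s at an assumed 8 kHz rate, video at 25 fps.
rate, fps = 8000, 25
audio = np.zeros(rate * 12)
for t in (1, 3, 5, 7, 9, 11):
    audio[t * rate] = 1.0
start, end = two_cycle_frame_range(audio, lambda s: int(s / rate * fps), 0.5)
print(start, end)  # frames at 1 s and 9 s
```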
X = +2.3646 R − 0.51515 G + 0.00520 B
Y = −0.89653 R + 1.42640 G − 0.01441 B

a^* = 500 \cdot \left( \sqrt[3]{X / X_n} − \sqrt[3]{Y / Y_n} \right)    (1)
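Equation 1 can be applied per pixel as follows; a sketch using the coefficients given above, while the white-point constants Xn and Yn are assumptions (they are not stated in the excerpt):

```python
import numpy as np

def a_star(R, G, B, Xn=95.047, Yn=100.0):
    """a* channel per Eq. 1.

    R, G, B are arrays of color values; the matrix coefficients are those
    given in the text, the white points Xn, Yn are assumed here.
    """
    X = +2.3646 * R - 0.51515 * G + 0.00520 * B
    Y = -0.89653 * R + 1.42640 * G - 0.01441 * B
    # np.cbrt handles the cube root of negative intermediate values.
    return 500.0 * (np.cbrt(X / Xn) - np.cbrt(Y / Yn))

# A pure red pixel yields a positive a* (red-green opponent axis),
# a pure green pixel a negative one.
red = a_star(np.array([255.0]), np.array([0.0]), np.array([0.0]))
green = a_star(np.array([0.0]), np.array([255.0]), np.array([0.0]))
print(float(red[0]), float(green[0]))
```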
Extracting the borders from the grey image is done using the difference-of-Gaussians method. For this, the image is smoothed twice by convolving it with two-dimensional Gaussian kernels of different radii. Subtracting the more strongly smoothed image from the less smoothed one yields an image that contains only those edges which were not filtered out in the less smoothed image but were filtered out in the more strongly smoothed one. Only the positive values of the result are used further, since these values contain the edge data that corresponds to the alveolar borders. This result represents a new image that will further be referenced as E. Now, for every frame of the sequence, the value of every pixel is summed up and stored in the array ip. Furthermore, the image containing the extracted edges is, for every frame, subtracted from that of the previous frame. The standard deviation s, as defined in Equation 2, is calculated from the result of that subtraction and stored in the array sv.
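The difference-of-Gaussians edge extraction with positive-part clipping can be sketched with SciPy; the kernel radii and the toy frame are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edges(gray, sigma_small=1.0, sigma_large=3.0):
    """Difference-of-Gaussians edge image E.

    Edges surviving the light smoothing but removed by the heavy smoothing
    remain; only positive responses are kept, as described in the text.
    """
    small = gaussian_filter(gray.astype(float), sigma_small)
    large = gaussian_filter(gray.astype(float), sigma_large)
    return np.maximum(small - large, 0.0)

# Toy frame: a bright disc on a dark background produces a positive ring
# of edge responses around the disc border.
yy, xx = np.mgrid[:64, :64]
frame = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
E = dog_edges(frame)
print(E.max() > 0, E.min() == 0.0)
```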
s = \sqrt{ \frac{1}{x_n \cdot y_n − 1} \sum_{x=1}^{x_n} \sum_{y=1}^{y_n} \left( E(x, y) − \bar{E} \right)^2 }    (2)
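The bookkeeping described above, the per-frame edge sum ip and the standard deviation sv of the frame-to-frame edge differences (Eq. 2), together with the four summary values a, b, c and d of Eqs. 3 to 6, can be sketched as follows. Array-length conventions (sv has one entry fewer than ip) are an implementation choice of this sketch:

```python
import numpy as np

def stability_features(edge_frames):
    """Compute ip, sv and the summary values a, b, c, d from edge images."""
    E = np.asarray(edge_frames, dtype=float)   # shape (n, height, width)
    ip = E.sum(axis=(1, 2))                    # per-frame edge intensity
    diffs = E[1:] - E[:-1]                     # frame-to-frame changes
    sv = diffs.std(axis=(1, 2), ddof=1)        # Eq. 2 per difference image
    a = np.abs(np.diff(ip)).sum()              # Eq. 3: total change of edges
    b = sv.mean()                              # Eq. 4: mean frame-diff spread
    c = ip.mean()                              # Eq. 5: mean amount of edges
    d = ip.std(ddof=1)                         # Eq. 6: spread of edge amount
    return ip, sv, (a, b, c, d)

# Illustrative input: 5 tiny edge images with one flickering pixel, which
# maximizes the change measure a for its edge amount.
frames = np.zeros((5, 4, 4))
frames[::2, 1, 1] = 1.0                        # pixel on in frames 0, 2, 4
ip, sv, (a, b, c, d) = stability_features(frames)
print(a, c)
```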
Due to the limited number of videos, only 5 neurons were used to implement the hidden layer. The network was trained with the Levenberg-Marquardt algorithm [5].
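A two-layer feedforward network with 5 hidden neurons trained by Levenberg-Marquardt can be sketched without the Matlab toolbox via SciPy's least-squares solver. The synthetic data below stands in for the 315-video feature set; the network shape and seed are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.normal(size=(315, 4))                    # stand-ins for a, b, c, d
y = np.tanh(X @ np.array([1.0, 0.2, 0.3, 0.5]))  # synthetic target score

H = 5                                            # hidden neurons

def unpack(w):
    W1 = w[:4 * H].reshape(H, 4)
    b1 = w[4 * H:5 * H]
    W2 = w[5 * H:6 * H]
    b2 = w[6 * H]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1.T + b1) @ W2 + b2      # tanh hidden, linear output

def residuals(w):
    return forward(w, X) - y

w0 = rng.normal(scale=0.5, size=6 * H + 1)
fit = least_squares(residuals, w0, method="lm")  # Levenberg-Marquardt
pred = forward(fit.x, X)
corr = np.corrcoef(pred, y)[0, 1]
print(float(corr))
```

As in the paper's setup, the quality of the fit is judged by the correlation between network output and target scores.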
x_n is the width and y_n the height of E in pixels, and \bar{E} is the mean value of the image E. Equations 3 to 6 show how the four values a, b, c and d, which are to be evaluated, are calculated from the arrays ip and sv:

a = \sum_{i=2}^{n} \left| ip(i−1) − ip(i) \right|    (3)

b = \frac{1}{n} \sum_{i=1}^{n} sv(i)    (4)

c = \frac{1}{n} \sum_{i=1}^{n} ip(i)    (5)

d = \sqrt{ \frac{1}{n−1} \sum_{i=1}^{n} \left( ip(i) − c \right)^2 }    (6)

Thus a describes the total change of alveolar edges, b is the mean standard deviation of the frame-to-frame differences in the edge data, c is the mean amount of edges, and d is the standard deviation of the amount of edge data.

IV. RESULTS

Figure 1 depicts the NN's regression after training the network. Table 2 shows the correlation between the stability scores of the individual experts. Notice that all of them are highly correlated with the mean.

Table 2 Correlation of the individual stability scores
Individual | 1    | 2    | 3    | 4    | mean
1          | 1    | 0.84 | 0.89 | 0.90 | 0.96
2          | 0.84 | 1    | 0.80 | 0.86 | 0.93
3          | 0.89 | 0.80 | 1    | 0.88 | 0.94
4          | 0.90 | 0.86 | 0.88 | 1    | 0.96
mean       | 0.96 | 0.93 | 0.94 | 0.96 | 1
Table 3 shows the correlation between the calculated values a to d and the mean of the stability score. It is noticeable that a shows the highest correlation (0.64).

Table 3 Correlation of the values a, b, c and d with the mean stability score
Value | Correlation
a     | 0.64
b     | 0.17
c     | 0.24
d     | 0.47

C. NN approach

A fast approach to training a NN with the calculated values a, b, c and d was realized using the nftool in the Matlab Neural Network Toolbox. This tool allows implementing a two-layer feedforward NN in a fast and simple way. The values a, b, c and d from the 315 videos are used as the input vector, while the mean of the stability score is used as the target. The data was split into 60% training, 15% validation and 25% testing data.
Figure 1: Regression of the NN result. R is the correlation between NN outputs and targets.

Figure 1 shows the regression of the NN approach as outlined previously. The correlation between the test group
and their targets is above 0.8. The actual correlation with the training group, 0.83, is not significantly higher.

V. DISCUSSION

The correlation of the scores with the mean of the experts' opinion is high. This indicates that all users have a similar idea of what alveolar instability looks like. The fact that the correlation with the mean is, in all cases, higher than the correlation among the individual experts indicates that, following the law of large numbers, including the scores of more individuals might eventually lead to a stable mean value. The results of the correlation between the mean of the scores and the calculated values a to d indicate that there is a correlation between them and the alveolar instability. However, only the value a seems to have a significant correlation. Using the standard NN approach, as previously outlined, led to a significant improvement of the correlation. Figure 1 shows that there are no extreme outliers in the plot of NN output versus the used targets. Thus the values a to d contain information about the alveolar stability, and the best result can be achieved by extracting that information and combining it. It should be possible to further increase the achieved correlation by customizing a NN for this special case instead of using the standard Matlab NN. Applying this method to monitor alveolar stability on-line should be possible as long as a signal is provided that indicates the start and end of respiration cycles. Although this evaluation method is closely related to evaluating the videos by watching them, it has the advantages of reproducibility and automation; thus, evaluating the videos is independent of individual condition, and the monotonous task of manually evaluating videos while applying ventilation maneuvers is avoided. A known problem of the presented method is related to the extraction of the alveolar borders, since not only the alveoli's borders lead to edges. These artefacts are also taken into account in the calculation of the values. This problem also leads to the conclusion that calculating the edges differently will lead to a different performance of the method.
VI. CONCLUSION

The presented approach was used successfully on the available videos. We thus developed a reproducible and automated method of classifying microscopic endoscopy videos of sub-pleural alveoli, in terms of alveolar stability, in healthy lungs as well as in ARDS lungs. The method seems to have potential for improvement, which could be achieved by optimizing the detection of borders and the machine learning approach used. However, due to the small observable area, the calculated measure does not provide a stability index for the whole lung by itself.
ACKNOWLEDGMENT

This work is supported by grants of the MWK Baden-Württemberg and of the Deutsche Forschungsgemeinschaft DFG (Gu561/6-1). The authors acknowledge H. Liu, A. Wahl, and Z. Zhao for the manual evaluation of the video database.
REFERENCES

1. Stahl CA, Schumann S, Knorpp H, Guttmann J et al. (2006) Intravital endo-microscopy of alveoli: a new method to visualize alveolar dynamics. J Biomech 39(1): 598
2. DiRocco JD, Pavone LA, Nieman GF et al. (2006) Dynamic alveolar mechanics in four models of lung injury. Intensive Care Med 32:140–148. DOI 10.1007/s00134-005-2854-3
3. Schwenninger D, Möller K, Guttmann J et al. (2008) Determining alveolar dynamics by automatic tracing of area changes within microscopy videos. Proc. ICBBE 2008, vol. 2, International Conference on Bioinformatics & Biomedical Engineering, Shanghai, China, pp 2335–2338. DOI 10.1109/ICBBE.2008.916
4. ARDS Network (2000) Ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome. N Engl J Med 342:1301–1308
5. Moré JJ (1978) The Levenberg-Marquardt algorithm: implementation and theory. In: Numerical Analysis, Springer, Berlin, pp 105–116. DOI 10.1007/BFb0067690

Corresponding author:

Author: David Schwenninger, MSc.
Institute: University Medical Center Freiburg
Street: Hugstetter Str. 55
City: Freiburg
Country: Germany
Email: [email protected]
Automated Detection of Cell Nuclei in PAP stained cervical smear images using Fuzzy Clustering

M.E. Plissiti¹, E.E. Tripoliti¹, A. Charchanti², O. Krikoni² and D.I. Fotiadis¹

¹ Unit of Medical Technology and Intelligent Information Systems, Dept. of Computer Science, University of Ioannina, GR 45110 Ioannina, Greece
² Department of Anatomy-Histology and Embryology, Medical School, University of Ioannina, GR 45110 Ioannina, Greece

Abstract — In this work we present an automated method for cell nuclei detection in PAP stained cervical smear images. The method is based on the detection of regional minima in the image, followed by a two-phase clustering of the detected centroids. An empirical rule and the fuzzy C-means clustering algorithm are applied to the resulting centroids in order to reduce false positive findings. The number of classes in which the nuclei are classified is determined automatically for the dataset that is used. The proposed method is evaluated using cytological images of conventional PAP stained cervical smears, which contain 3085 recognized squamous epithelial cells.

Keywords — Nuclei detection, PAP stained cervical smear images, fuzzy clustering.
I. INTRODUCTION

The accurate detection of cell nuclei in cytological images is crucial for diagnostic decisions, because the nucleus is a very important structure within the cell and it presents significant changes when the cell is affected by a disease. However, the visual interpretation of these images is a difficult process, because they exhibit certain characteristics such as a high degree of cell overlapping, a lack of homogeneity in image intensity and variations in dye concentration.

In recent years, the automated analysis of cell images has been a subject of interest for several researchers. A large number of methods have been proposed [1-5] for this purpose, such as pixel classification schemes [1], morphological watersheds [2], active contours [3, 4], and methodologies based on fuzzy logic [5].

Our work aims at determining nuclei locations in conventional PAP stained cervical cell images. The proposed method consists of three individual steps: the preprocessing of the image, the detection of the centroids of the candidate cell nuclei and, finally, the clustering of these centroids into classes of interest. The size of the patterns, the kind of characteristics and the number of clusters are parameters which vary in our experiments. It should be noted that the optimal number of clusters is identified automatically. The proposed method is fully automated and can be applied to any microscopic cervical cell sample image.

II. MATERIALS AND METHODS

A. Study group

The dataset used in this work consists of conventional PAP stained cervical cell images, acquired through a CCD camera adapted on an optical microscope. Some images obtained with the DotSlide Image Analysis System (Soft Imaging System GmbH) were also included. We used a 10× magnification lens and images were stored in JPEG format. We collected 16 images from several slides; the total number of cell nuclei recognized by an expert is 3085.
B. Image preprocessing

The preprocessing step is necessary for the extraction of the background and the definition of smooth regions of interest. We perform contrast-limited adaptive histogram equalization and global thresholding on the red, green and blue components of the image. The resulting binary images are combined using a logical OR operator. In the final binary image, all particles with area smaller than a threshold t are removed by a morphological opening operation, in order to exclude objects that may interfere with the next steps.

C. Detection of candidate nuclei centroids

The parts of the image found in the preprocessing step contain either isolated cells or cell clusters. The detection of cell nuclei in both cases is based on gray-scale morphological reconstruction [6] in combination with the detection of regional minima [7] in the image. These minima indicate, among other findings, the positions of the cell nuclei. Considering that nuclei are darker than the surrounding cytoplasm, we search for intensity valleys in the red, green and blue channels of the color image. For the formation of homogeneous minima valleys we apply the h-minima transform. In this way, if the depth of each minimum exceeds
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 637–641, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
(a) (b) (c) Fig. 1: (a) The initial image, (b) the detected centroids of regional minima, indicated by the marker “x”, (c) the result of the first step of clustering of centroids.
or equals a given threshold h, then the minimum is treated as a marker; otherwise it is eliminated. The final image contains the regional minima whose depth is not less than h. The location of each candidate nucleus is determined by the centroid rc of each detected intensity valley, which is defined as:
    r_c = (x, y) = (1/N) Σ_{i=1..N} (x_i, y_i),        (1)

where N is the number of pixels in the valley.
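Eq. (1) simply averages the pixel coordinates over each detected minimum region. A minimal NumPy sketch of this step (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def region_centroids(minima_mask):
    """Centroid (Eq. 1) of each connected region in a binary mask.

    minima_mask: 2D boolean array marking detected regional minima.
    Returns a list of (x, y) centroids, one per 4-connected region.
    """
    mask = np.asarray(minima_mask, dtype=bool)
    visited = np.zeros_like(mask)
    centroids = []
    rows, cols = mask.shape
    for sx in range(rows):
        for sy in range(cols):
            if mask[sx, sy] and not visited[sx, sy]:
                # flood-fill one region, accumulating its pixel coordinates
                stack, pts = [(sx, sy)], []
                visited[sx, sy] = True
                while stack:
                    x, y = stack.pop()
                    pts.append((x, y))
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < rows and 0 <= ny < cols \
                                and mask[nx, ny] and not visited[nx, ny]:
                            visited[nx, ny] = True
                            stack.append((nx, ny))
                pts = np.array(pts)
                centroids.append(tuple(pts.mean(axis=0)))  # (1/N) * sum of (x_i, y_i)
    return centroids
```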
The list of image pixels found in this step indicates the locations of the cell nuclei in the image. However, a high number of false positive occurrences is detected (Fig. 1(b)), and further processing of the detected points is needed.

D. Cell nuclei centroids clustering

For the determination of the final nuclei locations, we follow two clustering steps. In the first step, we eliminate the existence of two or more centroids in an area whose radius is smaller than the mean radius of a normal nucleus. For this reason we apply the following rule to all the obtained centroids:

    For every p = (x, y) ∈ R_c:
        if there exists q = (x_q, y_q) with D(p, q) ≤ T,
        select r ∈ {p, q} such that I(r) = min{ I(p), I(q) },        (2)
where R_c is the set of all centroids, D is the Euclidean distance between two points, T is the threshold on the minimum radius and I(p) is the intensity of the image at the point p. This is an empirical rule based on the fact that points which belong to the area of a nucleus are usually darker than the surrounding points. With the application of this rule, a significant reduction of the total number of false positive centroids is achieved (Fig. 1(c)).

The next step is the application of the fuzzy C-means clustering algorithm on the remaining points. As our dataset of points derives from several images with different cervical samples, the pattern of each nucleus varies. For this reason, the fuzzy C-means algorithm is applied independently to each image.

E. Determination of the number of classes

Although our problem can be considered a 2-class problem (existence of a nucleus or not), the selection of a number of classes larger than the obvious two has the intuitive meaning of separating the nuclei from other findings and, furthermore, of distinguishing the nuclei in order to group those with common features in the same class. The optimal number of classes can be determined by calculating the fuzzy mean intra-class distance, as described in [8]. In the case of under-partitioned data, a
large fuzzy mean intra-cluster distance (FMICD) is maintained in at least one cluster, which rapidly decreases when we reach the optimal or an over-partitioned state. Moreover, at the over-partitioned state the inter-cluster minimum distance (ICMD) is very small, because at least one of the compact classes has possibly been subdivided. On the contrary, this measure achieves large values in the case of optimally or under-partitioned data. With the use of these two measures (FMICD and ICMD), the optimal number of classes can be obtained automatically. Since both functions have small values only at the optimal cluster number, an appropriate combination of the two functions produces the optimal number of clusters.

The optimal number of classes c is obtained as the smallest value of u_sv(·) for c_min = 2 up to the maximum permissible number of classes c_max, where

    u_sv(c, V; X) = u_υN(c, V; X) + u_oN(c, V),        (3)

the first term u_υN(c, V; X) being the normalized under-partition measure function and the second term u_oN(c, V) the normalized over-partition measure function. The sets V = [v_1, v_2, ..., v_c]^T and X = [x_1, x_2, ..., x_n]^T are the prototype matrix of size c × f and the dataset of size n × f respectively, where c is the number of classes, f is the number of features and n is the total number of centroids.

The under-partition measure function is the mean of the fuzzy mean intra-cluster distance over the c clusters:

    u_υ(c, V; X) = (1/c) Σ_{i=1..c} (1/n_i) Σ_{k=1..n} υ_ik^m ||x_k − u_i||²
                 = (1/c) Σ_{i=1..c} MD_i,    2 ≤ c ≤ c_max,        (4)

where MD_i is the fuzzy mean intra-cluster distance of cluster i, υ_ik is the membership value of feature vector x_k in cluster i, and n_i = Σ_{k=1..n} υ_ik^m. The normalized under-partition measure function is given as:

    u_υN(c, V; X) = (u_υ(c, V; X) − u_υmin) / (u_υmax − u_υmin),        (5)

where u_υmax = max_c u_υ(c, V; X) and u_υmin = min_c u_υ(c, V; X).

The over-partition measure function is given as the ratio of the cluster number to the minimum inter-cluster distance d_min = min_{i≠j} ||u_i − u_j|| (the minimum distance between cluster centers):

    u_o(c, V) = c / d_min.        (6)

The normalized over-partition measure function is given as:

    u_oN(c, V) = (u_o(c, V) − u_omin) / (u_omax − u_omin),        (7)

where u_omax = max_c u_o(c, V) and u_omin = min_c u_o(c, V).

III. RESULTS
For the evaluation of the method we examine the performance of the different steps, from the preprocessing step up to the application of the fuzzy C-means classification algorithm. In the preprocessing step, three nuclei in all images are missed, and the sensitivity of this step is 99.90%. The procedure for the detection of the candidate cell nuclei centroids misses the locations of 24 nuclei, as verified by the expert observer. The sensitivity of this step is 99.22%. The first clustering of the cell nuclei, which is based on the distance and the intensity of neighboring nuclei, yields a reduction of false positives by 35.03%. The sensitivity of this step is 98.86%.

For the evaluation of the performance of the fuzzy C-means clustering algorithm we have used several datasets, which contain intensity and texture features computed in neighborhoods of different sizes around each centroid. The intensity patterns contain the intensity values of the image at the pixels of the specific neighborhood of the centroids. The dataset that contains texture patterns is based on the calculation of texture parameters in a given neighborhood. We have used six texture measures: average gray level, average contrast, measure of smoothness, third moment, measure of uniformity and entropy.

In our experiments, we have tested our method using two or three classes for all our data, and we have compared the results with those obtained using the algorithm for the determination of the optimal number of classes. Moreover, we used the Euclidean and the diagonal distance for the calculation of the distance between each pattern and the centroid of each cluster. The results of the classification based on the intensity and textural characteristics are presented in Table 1 and Table 2, respectively.
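The fuzzy C-means step and the selection of the optimal number of classes (Eqs. 3-7) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; all names are illustrative:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means: returns cluster centers V (c x f) and
    memberships U (c x n)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                              # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)  # update cluster centers
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=0))
    return V, U

def optimal_classes(X, c_max, m=2.0):
    """Pick c minimizing u_sv(c) = normalized under-partition measure
    (Eqs. 4-5) plus normalized over-partition measure (Eqs. 6-7)."""
    u_under, u_over, cs = [], [], range(2, c_max + 1)
    for c in cs:
        V, U = fcm(X, c, m)
        W = U ** m
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
        md = (W * d2).sum(axis=1) / W.sum(axis=1)   # fuzzy mean intra-cluster dist.
        u_under.append(md.mean())                   # Eq. 4
        dmin = min(np.linalg.norm(V[i] - V[j])
                   for i in range(c) for j in range(i + 1, c))
        u_over.append(c / dmin)                     # Eq. 6
    u_under, u_over = np.array(u_under), np.array(u_over)
    norm = lambda u: (u - u.min()) / (u.max() - u.min() + 1e-12)
    usv = norm(u_under) + norm(u_over)              # Eq. 3
    return list(cs)[int(np.argmin(usv))]
```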
As we can see in both tables, the best sensitivity is obtained using three classes for the classification. However, with this choice the number of false positive occurrences is very large, resulting in an extremely small specificity. In the case of two classes, the specificity of the method is sufficient, but the sensitivity deteriorates. With the application of the algorithm for the determination of the optimal number of classes, the sensitivity of the classification is maintained at a high level, while the specificity is kept at an acceptable level.

Table 1: Results of classification obtained using intensity features.

INTENSITY FEATURES
                              2 classes       3 classes       Optimal number
Distance    Pattern         Sens    Spec    Sens    Spec    Sens    Spec
Euclidean   3×3×3           91.40   77.45   97.92   56.96   94.01   70.34
Euclidean   5×5×3           91.17   77.54   97.65   58.10   92.97   71.77
Euclidean   7×7×3           90.18   78.25   97.42   58.38   93.12   73.98
Euclidean   9×9×3           87.23   78.73   95.07   58.80   91.07   73.47
Diagonal    3×3×3           91.23   77.53   96.14   56.71   94.18   70.56
Diagonal    5×5×3           90.97   77.82   97.88   57.65   93.62   71.66
Diagonal    7×7×3           90.18   78.39   97.62   58.23   93.25   74.07
Diagonal    9×9×3           87.53   78.98   97.32   58.78   91.33   73.57
Table 2: Results of classification obtained using textural features.

TEXTURE FEATURES
                              2 classes       3 classes       Optimal number
Distance    Pattern         Sens    Spec    Sens    Spec    Sens    Spec
Euclidean   3×3×3           91.40   77.43   97.85   57.39   93.62   71.96
Euclidean   5×5×3           91.20   77.54   97.68   58.17   94.34   71.25
Euclidean   7×7×3           90.14   78.32   97.25   58.51   92.39   73.05
Euclidean   9×9×3           87.17   78.72   96.82   58.88   90.77   73.82
Diagonal    3×3×3           91.13   77.50   97.95   56.99   93.45   72.03
Diagonal    5×5×3           90.67   77.86   97.78   57.77   94.01   71.42
Diagonal    7×7×3           89.25   78.35   97.22   58.09   91.83   72.98
Diagonal    9×9×3           85.58   78.98   96.59   59.05   89.55   74.52
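The six texture measures used for the texture patterns are standard first-order (histogram) statistics. A sketch of how such a pattern could be computed for an 8-bit patch (illustrative, not the authors' code):

```python
import numpy as np

def texture_measures(patch, bins=256):
    """Average gray level, average contrast, smoothness, third moment,
    uniformity and entropy of an 8-bit gray-level patch (values < bins)."""
    z = np.arange(bins, dtype=float)
    p = np.bincount(np.asarray(patch, dtype=int).ravel(), minlength=bins)
    p = p / p.sum()                           # gray-level histogram -> probabilities
    mean = (z * p).sum()                      # average gray level
    var = ((z - mean) ** 2 * p).sum()
    contrast = np.sqrt(var)                   # average contrast (standard deviation)
    smoothness = 1 - 1 / (1 + var)            # 0 for constant regions
    third_moment = ((z - mean) ** 3 * p).sum()
    uniformity = (p ** 2).sum()               # 1 for constant regions
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return mean, contrast, smoothness, third_moment, uniformity, entropy
```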
More specifically, the method reaches 94.34% sensitivity and 71.25% specificity when we use textural characteristics with pattern size 5×5, and 93.25% sensitivity and 74.07% specificity when we use intensity-based characteristics with pattern size 7×7.

IV. DISCUSSION

The method is applied to conventional PAP stained cervical smear images without any observer interference. However, for the extraction of acceptable results, several parameters must be defined. First of all, the contrast-limited adaptive histogram equalization is performed in image regions of 8×8 pixels and the clip limit is set to 0.01. For the rejection of objects in the image that are not of interest we use a threshold t = 500, which is sufficient for the elimination of small image artifacts while preserving the isolated cells in the image. Nevertheless, some nuclei are rejected in this step because some cell cytoplasms are faintly stained and not distinguishable from the background; as a consequence, the nucleus is considered an isolated object and is removed by this step. For the selection of the intensity valleys we choose the threshold value h = 15, which produces the minimum loss of true positive centroids. For the centroids found in this step we calculate the minimum Euclidean distance from the neighboring centroids, and we apply rule (2) if the distance is less than 11 pixels. In this step, 35 nuclei are erroneously not detected, either because of the existence of an image artifact at a small distance from them, or because they are adjacent to other nuclei. The application of the fuzzy C-means algorithm is necessary for the classification of the detected centroids into classes of interest. In the case of three different classes, we sort the classes by the number of true positives in each class and, finally, we merge the two classes with the larger numbers of true positives into the nuclei class.
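The distance-intensity rule (2) with the 11-pixel threshold reported above can be sketched as a greedy darkest-first pass over the candidate centroids; this is one possible reading of the pairwise rule, and all names are illustrative:

```python
import numpy as np

def suppress_close_centroids(centroids, intensities, T):
    """Rule (2): whenever two centroids lie within distance T of each
    other, keep only the darker one (lower image intensity I(p)).

    centroids:   list of (x, y) points
    intensities: image intensity I(p) at each centroid
    T:           minimum allowed distance between nuclei centroids
    """
    # visit candidates darkest-first so that a kept point suppresses
    # its brighter neighbours
    order = sorted(range(len(centroids)), key=lambda i: intensities[i])
    kept = []
    for i in order:
        p = np.asarray(centroids[i], dtype=float)
        if all(np.linalg.norm(p - np.asarray(q)) > T for q in kept):
            kept.append(tuple(centroids[i]))
    return kept
```

With T = 11 this reproduces the merging radius used in the experiments above.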
V. CONCLUSIONS

The proposed method is fully automated and was applied without any observer interference. As verified by the results, the method is suitable for the detection of cell nuclei in PAP stained cervical smear images. It must be noted that several parameters must be defined to reach acceptable results. Although the achieved performance is satisfactory, the reduction of false positives must be addressed in the future.
REFERENCES

1. Lezoray O, Cardot H (2002) Cooperation of color pixel classification schemes and color watershed: a study for microscopic images. IEEE Trans Image Process 11:783-789
2. Costa JAF, Mascarenhas NDA et al. (1997) Cell nuclei segmentation in noisy images using morphological watersheds. SPIE Proc 3164, International Society for Optical Engineering, pp 314-324
3. Bamford P, Lovell B (1998) Unsupervised cell nucleus segmentation with active contours. Signal Processing 71:203-213
4. Plissiti ME, Charchanti A et al. (2006) Automated segmentation of cell nuclei in PAP smear images. ITAB Proc of International Special Topic Conference on Information Technology in Biomedicine, Ioannina, Greece, 26-28 October 2006
5. Begelman G, Gur E et al. (2004) Cell nuclei segmentation using fuzzy logic engine. ICIP Proc, vol. 5, Int. Conf. on Image Processing, Singapore, 2004, pp 2937-2940
6. Vincent L (1993) Morphological grayscale reconstruction in image analysis: applications and efficient algorithms. IEEE Trans Image Process 2:176-201
7. Breen EJ, Jones R (1996) Attribute openings, thinnings, and granulometries. Computer Vision and Image Understanding 64:377-389
8. Tripoliti E, Fotiadis DI et al. (2007) Automated segmentation and quantification of inflammatory tissue of the hand in rheumatoid arthritis patients using magnetic resonance imaging data. Artif Intell Med 40:65-85
Analysis of Capsule Endoscopy Images Related to Gastric Ulcer Using Bidimensional Empirical Mode Decomposition

Alexandra Tsiligiri and Leontios J. Hadjileontiadis

Aristotle University of Thessaloniki, Faculty of Technology, Department of Electrical & Computer Engineering, GR 541 24, Thessaloniki, Greece

Abstract — Capsule endoscopy (CE) is a novel technology that allows direct noninvasive visualization of the entire small intestine. CE permits a detailed examination in the ambulatory setting, allowing identification of clinically relevant lesions, and it is appealing to both patients and providers. In this context, advanced image processing could help the physician reach a diagnosis by providing appropriate classification indicators. In this direction, in this work the Bidimensional Empirical Mode Decomposition (BEMD) was applied to small intestine images generated by a CE system (i.e., the PillCam SB capsule) to extract their Intrinsic Mode Functions (IMFs). The latter could be used as a new classification domain, as they reflect the different modes included in the original signal that relate to the underlying pathology. BEMD is advantageous compared to other techniques (such as Fourier analysis, wavelets, AM-FM decomposition) due to its adaptation to the non-stationary character of the signals (most natural images exhibit such behaviour) and its extraction of global structures due to its better stability. In this paper, the BEMD analysis is focused on endoscopic images related to gastric ulcer, one of the most common diseases of the gastrointestinal tract. The corresponding IMFs reveal differences in structure and provide features from their finest to their coarsest scale, establishing a new analysis, recognition and classification domain.

Keywords — Capsule endoscopy, Ensemble Bidimensional Empirical Mode Decomposition, Intrinsic Mode Functions, gastric ulcer, classification
I. INTRODUCTION

Gastroenterology is said to be one of the most difficult medical fields due to the inaccessibility of the gastrointestinal tract (GT) and the complex nature of pathologic findings. This is the reason why many endoscopic methods have been developed through the years. Capsule Endoscopy (CE) is a novel technique that allows visualization of the whole GT in a comfortable, noninvasive and efficacious way [1]. The patient swallows the video capsule, which has the size of a common pill. The capsule moves through the GT via the physical peristalsis of the intestine and continuously captures images of every position that it takes. The generated video is wirelessly transmitted to a data recorder fitted on a belt worn by the patient. Finally, the doctor reviews the data with the aid of appropriate software to extract possible pathologic features.

The main disadvantage of CE is the lack of automatic diagnosis. That is, in order to find the problem, the clinician must examine the whole data mass, a video of approximately 55,000 pictures, even frame by frame in some cases. That renders the technique time consuming (about 3 hours are needed) and prevents its wide use. Previous attempts towards the automatic extraction of interesting pictures include the work of Vilarino et al. [2], who extract and reject images depicting intestinal juices, which are of no pathologic interest. Using Gabor filters they recognize the bubble texture produced by the turbid motion of juices and thus reduce the examination time, in some cases to 46% of the original. In addition, Cauvin et al. [3] created an intelligent atlas of indexed endoscopic lesions which is used as a reference for computer-assisted diagnosis. Their method is based on similarity metrics, and the classification algorithm utilizes four descriptors: anatomic location, shape, color and relief. Moreover, Coimbra et al. [4] exploit the characteristics of the MPEG-7 standard, which defines a series of visual descriptors that are used to classify the capsule endoscopy data. Useful descriptors for the classification are the local edge histogram, homogeneous texture and scalable color.

The aim of this work is to process the CE data with a new image processing tool, namely the Ensemble Bidimensional Empirical Mode Decomposition (EBEMD) [5], and to classify the results into normal and pathologic ones using lacunarity [6]. The pathologic situation that we focused on is ulceration, since peptic ulcer is one of the most common diseases of the gastrointestinal tract. It arises in the stomach and, more often, in the duodenum, and it is in essence a wound that develops on the mucous membrane. EBEMD-based analysis combined with lacunarity proved to be a promising tool for CE analysis and characterization.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 642–645, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
II. METHODOLOGY

A. EBEMD

The Bidimensional Empirical Mode Decomposition (BEMD) is the extension of the Empirical Mode Decomposition (EMD) to two dimensions, introduced by Huang et al. [7] as an alternative way of decomposition, intuitive and with no a priori analysis basis. Thus, it can be applied to both nonlinear and non-stationary time series. The EMD analysis estimates the intrinsic oscillatory modes, i.e., the Intrinsic Mode Functions (IMFs), that any data, even highly complicated data, embed. In the 2D case, the image is subjected to the sifting process until a stopping criterion is satisfied. The first 2D IMF is produced, and the same process is applied to the difference between the initial image and the first 2D IMF. The same procedure is followed until a fixed number of iterations is completed.

An enhancement of EMD has been introduced by Huang et al. [5] in order to eliminate the mode mixing phenomenon, that is, a single IMF either consisting of signals of disparate scales or a signal of similar scale residing in two or more IMFs. The proposed enhancement is a new noise-assisted EMD, the so-called ensemble EMD. In this case, the original EMD is applied to every component of an ensemble containing copies of the initial signal to which white noise of finite amplitude has been added. It has been shown that the added white noise forces the different scales of the signal to be projected onto the proper reference scales. Since the output of the procedure is the average of the corresponding IMFs, the noise cancels itself out [5].
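The sifting idea that BEMD extends to two dimensions can be illustrated with a toy 1D EMD. This is a simplified sketch (linear-interpolation envelopes, fixed sifting count); real implementations use spline envelopes and convergence criteria:

```python
import numpy as np

def _envelope(x, idx):
    # linear-interpolation envelope through the given extrema indices
    t = np.arange(len(x))
    if len(idx) < 2:
        return np.full_like(x, x.mean())
    return np.interp(t, idx, x[idx])

def emd(signal, n_imfs=3, siftings=10):
    """Toy 1D empirical mode decomposition: repeatedly sift out the
    fastest oscillation (an IMF) and continue on the residue."""
    x = np.asarray(signal, dtype=float)
    residue, imfs = x.copy(), []
    for _ in range(n_imfs):
        h = residue.copy()
        for _ in range(siftings):
            d = np.diff(h)
            # interior local maxima / minima of h
            maxima = np.where((np.hstack([d, [0]]) < 0) & (np.hstack([[0], d]) > 0))[0]
            minima = np.where((np.hstack([d, [0]]) > 0) & (np.hstack([[0], d]) < 0))[0]
            if len(maxima) + len(minima) < 4:
                break
            mean_env = 0.5 * (_envelope(h, maxima) + _envelope(h, minima))
            h = h - mean_env          # subtract local mean -> oscillatory part
        imfs.append(h)
        residue = residue - h
    return imfs, residue
```

By construction the IMFs and the residue sum back to the original signal, which is the defining property the ensemble version averages over noisy copies.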
c. The box moves one step to the right and the box mass is counted again.
d. The procedure iterates, with the box scanning the whole signal. A mass frequency distribution n(s, r) is generated for the specific box size.
e. The frequency distribution is converted into a probability distribution Q(s, r) by dividing n(s, r) by the number of boxes N(r) needed to scan the signal.
f. The first and second moments of Q(s, r) are calculated:
    Z(1) = Σ_s s · Q(s, r),    Z(2) = Σ_s s² · Q(s, r).
g. Lacunarity is given by L(r) = Z(2) / Z(1)².
h. All the previous steps are repeated for all possible values of the box size, producing an L-r curve.

In the case of numerical data, the previous procedure can still be applied if the data are first converted to binary by thresholding. Another version of lacunarity, introduced by Dong [9], is appropriate for 2D numerical data (e.g., grayscale images) without the need for thresholding. This version is called differential lacunarity. In this case, there is a gliding box of size r × r and a gliding window of size w × w (r ≤ w).
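The gliding-box procedure of steps c-h can be sketched for 2D binary data as follows (names are illustrative):

```python
import numpy as np

def lacunarity(binary, r):
    """Gliding-box lacunarity of a 2D binary array at box size r:
    slide an r x r box, build the box-mass probability distribution
    Q(s, r) and return L(r) = Z(2) / Z(1)**2."""
    a = np.asarray(binary, dtype=int)
    H, W = a.shape
    masses = [a[i:i + r, j:j + r].sum()
              for i in range(H - r + 1) for j in range(W - r + 1)]
    s, counts = np.unique(masses, return_counts=True)
    Q = counts / counts.sum()              # probability distribution Q(s, r)
    z1 = (s * Q).sum()                     # first moment
    z2 = (s ** 2 * Q).sum()                # second moment
    return z2 / z1 ** 2

def lacunarity_curve(binary, sizes):
    """The L-r curve over the given box sizes (step h)."""
    return [(r, lacunarity(binary, r)) for r in sizes]
```

A homogeneous pattern gives L(r) = 1, while clustered ("gappy") patterns give larger values, which is what makes the curve useful as a texture descriptor.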
    L(μ) = Σ_i ( −d_i e^(−⟨A_i μ⟩) − Y_i ⟨A_i μ⟩ ) + c_1,        (2)

with μ_j > 0, and forward projections

    ⟨A_i μ⟩ = Σ_j A_ij μ_j,        (3)

where d_i is the expected number of photons leaving the source along the i-th projection, A_ij are the elements of the system matrix, and c_1 is a constant. The function L(·) has to be maximized to find the best reconstructed image in the sense of this method. As described in Kamphuis et al. [10], an approximated solution of maximizing Eq. 2 leads to the following iterative step (n → n+1) of an OSML method:

    μ_j^(n+1) = μ_j^n + μ_j^n · [ Σ_{i∈S_m} A_ij w_i^c ( d_i e^(−⟨A_i μ^n⟩) − Y_i ) ] / [ Σ_{i∈S_m} A_ij w_i^c ⟨A_i μ^n⟩ d_i e^(−⟨A_i μ^n⟩) ],        (4)

where S_m is the m-th ordered subset of projections and w_i^c is the gating weight of projection i.

B. Region of interest and 4D image data set

In many applications, the ROI is much smaller than the volume that is irradiated. In our case, as an example, the ROI is defined by the RCA vessel of a clinical patient. In this paper, the iterative reconstruction method of a ROI for CT proposed by Ziegler et al. in [6] is used.

Fig. 1 Top: total FOV of one complete projection of the heart in a clinical case. Middle: total FOV with ROI removed. Bottom: ROI.

To determine the MVF, a 4D ROI image data set is needed. The 4D reconstruction is achieved by an aperture weighted cardiac reconstruction (AWCR) [13]. AWCR is a 3D algorithm of the filtered back-projection (FBP) type [14]. Here, the patient's ECG is recorded synchronously with the projection data. In order to perform a 4D image reconstruction, gated reconstructions of 3D images are performed at equidistant phase points φ_k^P throughout the entire cardiac cycle with the smallest possible gating window width [15]; a gating window function with width w_k is centered at each phase point.

C. Motion-vector field estimation

The proposed MC iterative reconstruction method needs an estimated MVF for all phases. For a cardiac reconstruction the MVF is represented by displacement vectors
A.A. Isola, A. Ziegler, T. Köhler, U. van Stevendaal, D. Schäfer, W.J. Niessen and M. Grass
m_j(x_j(φ_r^P), φ_r^P, φ) of the corresponding grid point x_j(φ_r^P), from a reference heart phase φ_r^P to a new grid position x_j*(φ) in an arbitrary heart phase φ (Fig. 2):

    x_j* = x_j*(φ) = x_j(φ_r^P) + m_j(x_j(φ_r^P), φ_r^P, φ).        (5)

Inserting the MVF of Eq. 5 into Eq. 1 yields

    f̃*(x) = Σ_{j=1..N} μ_j b(x − x_j*).        (6)

If we are interested in a single coronary segment, such as the RCA vessel, user interaction for determining the displacement vectors m_j is feasible. Manual motion tracking is performed by scrolling through the slices of the volume data set and looking for the outlet from the aorta and the first branching point of the RCA. This procedure is repeated for all volume data sets reconstructed at the different heart phases, and the m_j vectors from phase to phase are calculated for these two landmarks. The MVF at the corners of the ROI is set to zero, in order to suppress blobs that leave the ROI and are projected outside the ROI sinogram. Finally, these points are used as input for a TPS warping [5] in order to determine a dense MVF.

D. Forward projection step for MC iterative reconstruction

To perform the forward- and back-projection steps, the A_ij elements of the system matrix in Eq. 4 have to be determined. Ziegler et al. [16] proposed a method for calculating the A_ij weights. In a first step, the center of each blob is projected onto the detector. The footprint of the blob, which consists of all parallel line integrals through the volume element, is magnified and centered at the projected blob center on the detector. The magnification of the volume element is given by the ratio of the source-detector to the source-blob distance. In a last step, the convolution of the footprint with the detector pixels is performed, which determines the weights A_ij.

However, in the case of MC reconstruction of a moving object (e.g., the heart), the model proposed in [16] neglects the motion of the blob itself and the change of its volume caused by the existence of a divergent MVF (Eq. 5). In this paper, a modification of the model discussed above is used. We propose a two-step method which performs an efficient blob adaptation by changing the blob size and its footprint on the detector depending on the neighboring blobs. First, the width of the j-th blob is scaled by the factors

    r_j^x = (x*_{j+1} − x*_{j−1}) / (x_{j+1} − x_{j−1}),
    r_j^y = (y*_{j+Nx} − y*_{j−Nx}) / (y_{j+Nx} − y_{j−Nx}),        (7)
    r_j^z = (z*_{j+NxNy} − z*_{j−NxNy}) / (z_{j+NxNy} − z_{j−NxNy}),

in the X, Y and Z directions. Here x_j, y_j and z_j are the 3D coordinates of the j-th grid point, and x_j*, y_j* and z_j* are the corresponding coordinates after applying the MVF (Fig. 2).

Fig. 2 Sketch of the proposed volume adaptation of the blob footprint in a focus-centered detector: the blob j in a regular grid (left), in a non-equidistant grid after applying a divergent MVF and a volume scaling of its footprint (center), and the density plots of the footprints of this blob on the detector (right).

In a second step, the ratio of the footprint of the j-th blob before and after the scaling is calculated, depending on the actual source and detector position. This is given by

    r_j^f = sqrt( [ (r_j^x)² cos²(φ_j*) + (r_j^y)² sin²(φ_j*) ] cos²(θ_j*) + (r_j^z)² sin²(θ_j*) ),        (8)

where φ_j* is the angle in the XY-plane between the X-axis and the line going through the X-ray source position x_s and the modified center of the j-th basis function, and θ_j* is the angle between the plane of the gantry and a vector pointing from the source to the modified position of blob j. Moreover, a similar volume-dependent scaling to approximate the change of the j-th blob footprint over the detector pixels is achieved using the factors

    r_j^u = sqrt( (r_j^x)² sin²(φ_j*) + (r_j^y)² cos²(φ_j*) ),    r_j^v = r_j^z,        (9)

in the u and v detector axis directions, respectively (Fig. 2). A volume adaptation of the blob footprint with the factors r_j^f, r_j^u and r_j^v leads to volume-adapted A_ij weights which, inserted in Eq. 4, compensate the divergence of the MVF.
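The scale factors of Eqs. 7-9 can be sketched for a single blob as follows; the quadrature (square-root) combination in Eqs. 8-9 is an assumption of this reconstruction, and all names are illustrative:

```python
import numpy as np

def blob_scale_factors(x, x_star, y, y_star, z, z_star, phi, theta):
    """Per-blob scale factors of Eqs. 7-9.

    x = (x_{j-1}, x_{j+1}) etc. are the neighbour coordinates on the
    regular grid; x_star etc. are the same neighbours after applying
    the MVF; phi, theta are the source angles of the moved blob.
    """
    rx = (x_star[1] - x_star[0]) / (x[1] - x[0])       # Eq. 7, X direction
    ry = (y_star[1] - y_star[0]) / (y[1] - y[0])       # Eq. 7, Y direction
    rz = (z_star[1] - z_star[0]) / (z[1] - z[0])       # Eq. 7, Z direction
    rf = np.sqrt((rx**2 * np.cos(phi)**2 + ry**2 * np.sin(phi)**2)
                 * np.cos(theta)**2 + rz**2 * np.sin(theta)**2)    # Eq. 8
    ru = np.sqrt(rx**2 * np.sin(phi)**2 + ry**2 * np.cos(phi)**2)  # Eq. 9
    rv = rz                                                        # Eq. 9
    return rf, ru, rv
```

For an identity MVF (no motion) all three factors reduce to 1, so the footprint is left unchanged, which is the expected limiting behaviour.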
Motion compensated iterative reconstruction of a cardiac region of interest for CT
III. RESULTS

Once an MVF is obtained, an MC cardiac reconstruction of the RCA can be performed. The MC iterative reconstruction can be combined with the concept of gating, in order to remove residual motion blurring due to large gating windows. In this paper, a gated iterative reconstruction is compared with an MC gated iterative reconstruction for one clinical case. Patient data are acquired with a helical cardiac scan with a collimation of 64×0.625 mm (Brilliance CT Scanner, Philips Medical Systems). Table feed per rotation and scanner rotation time are chosen as 8 mm and 420 ms. To perform an MC gated iterative reconstruction at the reference phase point φ_r^P = 20% RR with a fixed gating window width w_k = 40% RR, the AWCR ROI images are generated in advance at phase points within the range from 0 to 40% RR in steps of 5% RR, with a gating window width of 20% RR. The MVF is determined from these AWCR images from phase to phase. The MC OSML reconstruction is performed using 12 subsets, each one filled with 600 projections. The order of the subsets is determined randomly. A simple cubic grid of blobs is used (0.5³ mm³). Finally, the results are presented after 15 iterations with a relaxation parameter of 0.8.

The image quality of gated reconstructions strongly depends on the cardiac phase. If, as in our case, a gated reconstruction without MC is performed at systole (20% RR), the reconstructed images are strongly blurred (Fig. 3). In contrast, the MC gated images are sharp and the blurring artifact is almost removed (Fig. 3), even in a phase of large motion. With the MC reconstruction, some anatomical features of the RCA, such as its ostium (Fig. 3.a, 3.d) and its acute marginal branch (Fig. 3.b, 3.d), are clearly visible. The assumption of a zero MVF (no motion) at the corners of the FOV has only slightly compromised the reconstruction result in the image regions close to the border of the ROI.

IV. CONCLUSIONS

We presented an MC iterative reconstruction method of a cardiac ROI for CT. It has been applied to a clinical case acquired in a helical acquisition mode with parallel ECG recording. The MVF estimation for the patient's RCA was derived by interactive landmark motion tracking on a set of precomputed 3D gated-reconstructed images. The MVF for the whole volume was determined by a TPS warping. Using the estimated MVF with the MC OSML iterative method, sharp RCA images are provided and their quality is significantly better than the quality of the gated iterative reconstructions.
ACKNOWLEDGEMENTS

This work received financial support from the European Community under a Marie Curie Host Fellowship for Early Stage Researchers Training, MEST-CT-2005-020424.
REFERENCES
Fig. 3 Axial and coronal images and 3D volume renderings of the patient's RCA. The gated OSML reconstructions (left column) and the MC gated OSML reconstructions (right column) are shown. The outlet of the RCA from the aorta (a), its first branching point (b), the RCA vessel (c), and the 3D volume rendering of the RCA vessel (d) are indicated. (at 20% RR, ROI radius = 32.5 mm, level = 200 HU, window = 900 HU)
1. Kachelriess M, Ulzheimer S, Kalender W A (2000) IEEE Trans. Med. Imaging 19:888-901
2. Grass M, Manzke R et al. (2003) Phys. Med. Biol. 48:3069-3084
3. Schäfer D, Jandt U et al. (2007) Proceedings of the Fully 3D'2007 Conference (Lindau, Germany) pp 245-248
4. Bookstein F (1989) IEEE Trans. Pattern Anal. Mach. Intell. 11(6):567-585
5. van Stevendaal U, Lorenz C et al. (2007) Proceedings of the Fully 3D'2007 Conference (Lindau, Germany) pp 437-440
6. Ziegler A, Nielsen T et al. (2008) Med. Phys. 35(4):1317-1327
7. Lewitt R M (1990) J. Opt. Soc. Am. A 7(10):1834-1846
8. Lewitt R M (1992) Phys. Med. Biol. 37(3):705-716
9. Matej S, Lewitt R M (1996) IEEE Trans. Med. Imaging 18(6):519-537
IFMBE Proceedings Vol. 22
A.A. Isola, A. Ziegler, T. Köhler, U. van Stevendaal, D. Schäfer, W.J. Niessen and M. Grass
10. Kamphuis C, Beekman F (1998) IEEE Trans. Med. Imaging 17(6):1101-1105 11. Lange K, Fessler J A (1995) IEEE Trans. Im. Proc. 4(10):1430-1450 12. Nielsen T, Manzke R et al. (2005) Med. Phys. 32(4):851-860 13. Koken P, Grass M (2006) Phys. Med. Biol. 51:3433-3448
14. Katsevich A (2001) Proceedings of the Fully 3D'2001 Conference (Asilomar, USA) pp 3-6
15. Manzke R, Grass M et al. (2003) Med. Phys. 30(12):3072-3080
16. Ziegler A, Köhler T et al. (2006) Med. Phys. 33(12):4653-4663
An Image Inpainting Based Surrogate Data Strategy for Metal Artifact Reduction in CT Images

M. Oehler and T.M. Buzug
Institute of Medical Engineering, University of Luebeck, Ratzeburger Allee 160, 23538 Luebeck, Germany

Abstract — The goal of this work is the reduction of metal artifacts in reconstructed CT images. Mathematically, those artifacts are caused by inconsistencies in the Radon space. The metal-artifact reduction (MAR) algorithm presented here is based on an idea adapted from image inpainting, a technique used to repair damaged films and photographs. Here, it is used to restore the inconsistent projection data in the Radon space in such a way that the gap, caused by discarding the inconsistencies, is undetectable in the end. This method is compared to the classically used one-dimensional linear interpolation within one projection view, and to a directional interpolation that takes the flow of the surrounding projection data into account when calculating the artificial sinogram data. The best result is achieved with the two-dimensional PDE-based image inpainting approach. Because the repaired sinogram data are still afflicted with residual inconsistencies, depending on the interpolation strategy used, a weighted MLEM algorithm is used to reconstruct the CT images, in which the artificially generated sinogram projections are weighted less. The proposed MAR method is evaluated on sinogram data from an anthropomorphic torso phantom marked with two steel markers. Raw data of the same cross section without the markers were also acquired and serve as ground truth in the evaluation of the metal-artifact reduction quality.

Keywords — Metal-artifact reduction, image inpainting, computed tomography, MLEM reconstruction, surrogate data.
I. INTRODUCTION

Inconsistencies in the Radon space caused by metal objects lead to streak artifacts in reconstructed CT images. Unfortunately, these inconsistencies are multi-factorial and, hence, a simple strategy for correcting the projection values based on the physics of photon-matter interaction is not available [1]. Therefore, over the last two decades, several sinogram restoration algorithms have been developed that discard corrupted sinogram data and replace them with artificially generated values. The majority of these methods are based on the idea of bridging the metal projection onto the detector with the neighboring projection values of the same projection angle by means of a one-dimensional interpolation (cf. e.g. [2,3]). Recently, however, it has been demonstrated that the artifact suppression result is improved when a directional one-dimensional interpolation strategy is used in Radon space [4].

In this work, the concept of directional interpolation in Radon space is further elaborated. The problem of the directional one-dimensional interpolation strategy is the complex fill-up pattern of potentially crossing interpolation directions. To overcome this problem, an inherently two-dimensional interpolation method is proposed here. The idea is borrowed from the image inpainting concept usually used to restore damaged photographs or films. The interpolation is based on partial differential equations (PDE) modeling an elastic data progression regularized by a diffusion process [5]. This novel strategy for CT is compared to a one-dimensional directional interpolation approach that takes the flow of the data surrounding the metal trace into account when calculating the surrogate data for restoration. Additionally, these methods are compared to the state-of-the-art one-dimensional artifact-suppression technique. Since the repaired sinogram data contain residual inconsistencies, it can be demonstrated that a gradually weighted Maximum-Likelihood Expectation-Maximization (λ-MLEM) algorithm [4] leads to a superior reconstruction quality compared with a data-ignoring strategy or the interpolation step alone.

II. MATERIALS AND METHODS
A. Data
Experiments were carried out on an experimental Philips Tomoscan M/EG scanner. On this system, sinogram data of an anthropomorphic torso phantom marked with two steel markers were acquired. The phantom used here (CIRS Inc., Computerized Imaging Reference Systems, Norfolk, Virginia, USA) is designed to provide an accurate simulation of an average male torso for medical imaging applications. The epoxy materials used to fabricate the phantom provide optimal tissue simulation in the diagnostic energy range (40 keV to 20 MeV). The phantom accurately simulates the physical density and linear attenuation of actual tissue to within 2% in the diagnostic energy range. The organs are placed so as to maintain their position when the phantom is placed upright. Simulated muscle material layers
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 651–654, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
the rib cage and vertebral column. The exterior envelope simulates a mix of 30% adipose and 70% muscle tissue. The phantom is sealed at the bottom by an acrylic plate. Water or a blood-mimicking fluid can be used to fill all interstitial voids [6]. To evaluate the image quality of the algorithm, sinogram data of the torso phantom were also acquired without the steel markers at the same cross section. These data serve as ground truth.

B. Methodology

Segmentation of the Metal Trace
The first step in reducing the existing metal artifacts consists of the inpainting-based repair of the Radon-space data. For this, metal objects are labeled in a preliminary filtered backprojection reconstruction using simple threshold segmentation. Then, a forward projection of the metal-only image is calculated, resulting in a sinogram mask. All sinogram values that are non-zero in this mask are assumed to be inconsistent.
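The thresholding and mask-generation steps just described can be sketched as follows. The threshold value and the toy rotate-and-sum parallel-beam projector are illustrative assumptions, not the scanner's geometry or the authors' code.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Toy parallel-beam forward projector: rotate the image and sum columns."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg], axis=1)

# Preliminary FBP reconstruction (here: a toy 64x64 image with one bright "metal" block)
recon = np.zeros((64, 64))
recon[30:34, 30:34] = 3000.0                 # metal, far above tissue values

# 1) Label metal objects by simple thresholding (threshold is illustrative)
metal_only = np.where(recon > 2000.0, recon, 0.0)

# 2) Forward-project the metal-only image into a sinogram mask
angles = np.arange(0.0, 180.0, 2.0)
metal_trace = forward_project(metal_only, angles)

# 3) Every non-zero sinogram value is flagged as inconsistent
mask = metal_trace > 1e-6
```

The boolean `mask` marks the sinusoidal metal trace that the subsequent restoration step must fill in.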
One-Dimensional Interpolation Strategies
The majority of methods are based on the idea of replacing the inconsistent data by interpolation within a single projection view. However, a better result is achieved with the one-dimensional directional interpolation [4]. This interpolation concept follows the 'flow' of the Radon-space data outside the metal projections. In the object domain, the direction of this interpolation corresponds to acquisition directions under slight angulations, able to 'squint' behind the metal markers.

Image Inpainting
Superior to the interpolation methods presented above is an inherently two-dimensional sinogram restoration technique based on a PDE approach adapted from image inpainting. The goal of image inpainting is to modify an image in a way that is as undetectable as possible. There are different applications of inpainting, such as the restoration of damaged paintings, photographs and films, or the removal of selected objects. Here, it is used to replace the inconsistent projection data in a meaningful way, i.e. to fill the gap inside the sinogram data, the so-called inpainting region Ω with boundary ∂Ω, in such a way that it is not detectable in the resulting image. The inpainting algorithm introduced by Bertalmio et al. [5] is used. It is based on the iteration equation

  p^(n+1)(k,l) = p^n(k,l) + Δt p_t^n(k,l),  ∀(k,l) ∈ Ω ,   (1)

where p^n is the sinogram gap to be inpainted and n is the iteration number. The coordinates of the pixels inside the sinogram data are given by (k,l), with k indexing the detector elements and l the views. In every iteration n, the update p_t^n of the image is taken into account with a rate of improvement Δt ∈ [0,1]. The goal of this strategy is to propagate the lines from outside the inpainting region into it in a sensible way until the gap is closed, i.e. a steady state is reached. Therefore, the update of the image is calculated using

  p_t^n(k,l) = δL⃗^n(k,l) · N⃗^n(k,l) .   (2)

Here, N⃗(k,l) is the propagation direction, i.e. the direction of the isophotes arriving at the boundary ∂Ω, which mathematically can be expressed by the normal to the image gradient,

  N⃗(k,l,n) = ∇⊥ p^n(k,l) = ( −p_y^n(k,l), p_x^n(k,l) ) .   (3)

L is an image smoothness estimator that represents the information to be propagated in the direction of the isophotes, defined by

  L^n(k,l) = p_xx^n(k,l) + p_yy^n(k,l) .   (4)

Hence, δL⃗(k,l) is a measure of the change in the information L^n(k,l). For a detailed description see [5]. After every 15 inpainting steps, one step of anisotropic diffusion is interleaved to ensure a correct evolution of the direction fields and to prevent the lines arriving at the boundary from crossing each other. Here, the Perona-Malik anisotropic diffusion [7] is used. Subsequently, the repaired sinogram data are reconstructed using the λ-MLEM algorithm, which is briefly described in the next subsection.
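The inpainting iteration of Eqs. (1) to (4) can be sketched in a few lines. Finite differences via `np.gradient`, the step size, and the toy sinogram are simplifying assumptions; this is not the authors' implementation, and the interleaved Perona-Malik diffusion step is omitted for brevity.

```python
import numpy as np

def inpaint_step(p, mask, dt=0.05, eps=1e-8):
    """One Bertalmio-style inpainting update (Eqs. 1-4), applied only
    inside the inpainting region given by 'mask'."""
    # Smoothness estimator L = p_xx + p_yy (Eq. 4)
    py, px = np.gradient(p)               # axis 0 = rows, axis 1 = columns
    pyy = np.gradient(py, axis=0)
    pxx = np.gradient(px, axis=1)
    L = pxx + pyy
    # Change of L, and propagation direction N = normal to the gradient (Eq. 3)
    dLy, dLx = np.gradient(L)
    norm = np.sqrt(px**2 + py**2) + eps
    Nx, Ny = -py / norm, px / norm
    # Information propagated along the isophotes (Eq. 2)
    update = dLx * Nx + dLy * Ny
    # Evolve only inside the inpainting region Omega (Eq. 1)
    out = p.copy()
    out[mask] += dt * update[mask]
    return out

# Toy "sinogram": a smooth ramp with a square gap zeroed out
p = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
mask = np.zeros_like(p, dtype=bool)
mask[12:20, 12:20] = True
p[mask] = 0.0
for _ in range(100):
    p = inpaint_step(p, mask)
```

Because the update is restricted to the masked region, the consistent data outside the gap remain untouched, exactly as required for sinogram restoration.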
λ-MLEM Reconstruction
As any surrogate data method has a weak underlying physical model, the interpolated data cannot be expected to fit the projections measured without metal objects perfectly. Recently, two key changes of the classical MLEM formula for transmission computed tomography [8] have been proposed to derive a weighted λ-MLEM algorithm [4]. In the first step, all rows of the system matrix A = {a_ij} corresponding to projections running through a metal object are weighted with a confidence parameter 0 ≤ λ_i ≤ 1. The second modification is based on the fact that the number of detected X-ray quanta is proportional to the
intensity of the radiation. Therefore, the projection sum p_i = Σ_{j=1..N} a_ij f_j must be weighted adequately as well. The adapted number of X-ray quanta is ñ_i = n_0 exp(−λ_i p_i), and the new fixpoint iteration reads

  f_r^{*(n+1)} = f_r^{*(n)} · [ Σ_{i=1..M} λ_i a_ir exp(−Σ_{j=1..N} λ_i a_ij f_j^{*(n)}) ] / [ Σ_{i=1..M} λ_i a_ir exp(−λ_i p_i) ] ,   (5)

with j = 1…N indexing the pixels in the reconstructed image. For a detailed derivation of the λ-MLEM algorithm see [4].

III. RESULTS AND DISCUSSION

Fig. 1 shows the sinogram data acquired from the anthropomorphic torso phantom without the two steel markers (cf. Fig. 1a) and with the two steel markers (cf. Fig. 1b). Here, the inconsistent projection data can clearly be seen in the form of the two white sinusoidal traces.
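For concreteness, one update of the fixpoint iteration in Eq. (5) can be sketched with a small dense system; the matrix, confidence values and starting image below are illustrative, not the paper's data.

```python
import numpy as np

def lambda_mlem_step(f, A, p, lam):
    """One iteration of the weighted lambda-MLEM update, Eq. (5):
    each projection i contributes with its confidence lam[i]."""
    forward = A @ f                                  # current projection sums (A f)_i
    num = A.T @ (lam * np.exp(-lam * forward))       # sum_i lam_i a_ir exp(-lam_i (A f)_i)
    den = A.T @ (lam * np.exp(-lam * p))             # sum_i lam_i a_ir exp(-lam_i p_i)
    return f * num / den

A = np.array([[1.0, 0.5],
              [0.3, 0.8],
              [0.6, 0.2]])
f_true = np.array([1.0, 2.0])
p = A @ f_true                       # consistent "measured" projection sums
lam = np.array([1.0, 0.5, 0.2])     # reduced confidence for repaired rows
f = lambda_mlem_step(np.array([0.5, 0.5]), A, p, lam)
```

A useful sanity check is the fixpoint property: if the current image already matches the measured projection sums, the multiplicative factor is one and the image is left unchanged, regardless of the chosen λ values.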
The image row at the bottom of Fig. 1 shows the results of the different repair techniques used to fill the gap of the inconsistent projection data: c) linear vertical interpolation, d) directional interpolation and e) image inpainting. It can be seen that all repaired sinogram data still contain residual inconsistencies. By visual evaluation, the best results are achieved with the directional interpolation and with the image inpainting technique.

Fig. 2 shows the reconstructions of the different repaired sinogram data with the λ-MLEM algorithm. Figs. 2a and 2b display the acquired raw data of the anthropomorphic torso phantom without (Fig. 2a) and with (Fig. 2b) the two steel markers, reconstructed with the λ-MLEM algorithm and a choice of λ = 1.0; this is identical to the result of the classical MLEM reconstruction. Figs. 2c-e show the reconstructions of the different repaired sinogram data: c) linear interpolation, d) directional interpolation and e) image inpainting, all reconstructed with the λ-MLEM algorithm and an appropriate choice of the confidence parameter λ. The choice of λ depends on the quality of the sinogram restoration: for the directional interpolation and the image inpainting technique the best result is achieved with λ = 0.5, and for the linear interpolation with λ = 0.2. The best overall result is obtained with the directional interpolation reconstructed with the λ-MLEM algorithm and λ = 0.5.

Fig. 1 Sinogram data: a) sinogram data of the torso phantom without markers (ground truth), b) sinogram data of the torso phantom marked with two steel markers, c) sinogram data repaired with the linear interpolation, d) sinogram data repaired with the directional interpolation, e) sinogram data repaired with the image inpainting approach.

Fig. 2 a)-e) Reconstructions with the λ-MLEM algorithm: a) ground truth data, b) torso phantom marked with two steel markers, reconstructed with λ = 1.0, c) sinogram data repaired with the linear interpolation, reconstructed with λ = 0.2, d) sinogram data repaired with the directional interpolation, reconstructed with λ = 0.5, and e) sinogram data repaired with the image inpainting approach, reconstructed with λ = 0.5.
The quality of the metal-artifact reduction is evaluated by calculating the correlation coefficient r. Fig. 3 shows the correlation coefficient versus the confidence parameter λ for the repaired torso phantom sinogram data reconstructed with the λ-MLEM algorithm (image inpainting, directional interpolation, and linear interpolation within a single
projection view). The •-curve represents the correlation obtained with a perfect set of surrogate sinogram data: in this case the information from the ground-truth data is simply copied into the gap of the inconsistent projection data, i.e. the •-curve marks the upper limit of the image quality, and the hatched area represents the region of correlation that can never be achieved with the λ-MLEM algorithm.
The first method is the linear interpolation inside the projection of one single view, the second is the directional interpolation taking the flow of the surrounding data into account, and the third technique is adapted from image inpainting. After filling the gap of the inconsistent projection data in a more or less meaningful way, these data were reconstructed using the λ-MLEM algorithm. It has been shown that a slight reduction of the confidence parameter λ leads to a better result in terms of metal artifacts in the reconstructed CT images.
REFERENCES
Fig. 3 Correlation coefficient versus confidence parameter λ for the interpolation methods: image inpainting, directional interpolation, and linear interpolation under a fixed angle; • - repaired with the original data of the ground truth.
It can be stated that a slight reduction of the weight λ in the modified MLEM reconstruction generally leads to better image quality. The best results are obtained with the sinogram data repaired with the image inpainting approach and with the directional interpolation, reconstructed with the λ-MLEM algorithm and a choice of λ = 0.5.

IV. SUMMARY AND CONCLUSION

Three different restoration techniques for filling the gap of inconsistent projection data inside a sinogram were tested.
1. Buzug T M (2008) Computed Tomography: From Photon Statistics to Modern Cone-Beam Systems, Springer, Berlin 2. Kalender W A, Hebel R, Ebersberger J (1987) Reduction of CT artifacts caused by metallic implants. Radiology 164: 576-577 3. Glover G H, Pelc N J (1981) An algorithm for the reduction of metal clip artifacts in CT reconstructions. Med. Phys. 8: 799-807 4. Oehler M, Buzug T M (2007) Statistical Image Reconstruction for Inconsistent CT Projection Data. Methods Inf. Med., 46(3): 261-269 5. Bertalmio M, Sapiro G, Ballester C, Caselles V (2000) Image Inpainting. Computer Graphics, SIGGRAPH 2000 6. CIRS Incorporated Tissue Simulation and Phantom Technology (homepage on the Internet). Computerized Imaging Reference Systems, Inc.; c2005 (updated 2005; cited Sep 30, 2006). Available from: http://www.cirsinc.com/602_ct_xray.html 7. Perona P, Malik J (1990) Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Patt. Anal. Mach. Intell. 12: 629-639 8. Lange K, Bahn M, Little R (1987) A theoretical study of some maximum likelihood algorithms for emission and transmission tomography. IEEE Trans. Med. Imag. 6: 106-114 Corresponding author: May Oehler Institute of Medical Engineering University of Luebeck Ratzeburger Allee 160 23538 Luebeck Germany E-mail:
[email protected]
Rodent Imaging with Helical μCBCT

D. Soimu, Z. Kamarianakis and N. Pallikarakis
Department of Medical Physics, School of Medicine, University of Patras, Greece

Abstract — Micro-computed tomography (μCT) is a nondestructive technique that provides high-resolution images of the internal structure of small objects or samples, and is a powerful imaging modality for small animals. The aim of this study was to investigate the quality of the generalized Feldkamp algorithm for helical μCBCT in rodent imaging. For this study, μCT projections of a digital (voxelized) phantom approximating a mouse were simulated for three different helical pitch factors of 0.5, 1 and 2. Due to its modest computational requirements and relative ease of implementation, a generalized FDK algorithm was used for 3D volume reconstruction. For the voxelized mouse phantom, volumes of 1024x1024x2048 voxels were reconstructed for all helical pitches. The normalized mean square error (NMSE) was computed in all three directions (x, y, z) as an objective image characteristic. Although the NMSE slightly increases for the pitch factor of 2, the overall image quality of the reconstructed tomograms is comparable with that obtained using a pitch factor of 0.5: soft tissue can be clearly distinguished from bone, and lungs, heart and intestines can be identified on the reconstructed tomograms. Based on these observations, we can state that using a pitch factor of 2 is preferable in longitudinal studies of small animals, in order to significantly reduce the radiation dose.

Keywords — helical CT, μCT, small animal imaging, generalized FDK
I. INTRODUCTION

The rapid growth in genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in small animal imaging. Micro-computed tomography (μCT) systems have in recent years become powerful imaging modalities for small animals. The earliest reported systems used X-ray image intensifiers as detectors [1], though this approach limits spatial resolution. Over the past three years, tremendous progress has been made in X-ray detectors, hardware, real-time volumetric CT algorithms, and computing techniques. The development of a volumetric μCT cone-beam fluoroscopic system with multiple X-ray sources had become feasible by 2001 [2]. A recently reported prototype μCT system based on a CMOS flat-panel detector [3] has been successfully demonstrated in small animal imaging. CMOS-based μCT systems have demonstrated advantages for whole-body
imaging of small animals as large as a laboratory rat. However, a fundamental limitation which should be considered, especially in experiments involving imaging the same animal over time, is the inherent use of ionizing radiation, which may approach the lethal dose for small rodents. In addition, systems designed for small animals are usually optimized for slightly reduced spatial resolution, typically with 50-100 μm voxel spacing. The image noise is proportional to (Δx)⁻² (for isotropic voxel spacing Δx) if the X-ray exposure to the animal is held constant [1]. Thus, extremely high-resolution imaging might necessitate unacceptably high whole-body X-ray doses for live animals. Because helical μCT can be used for rapid volumetric imaging with high longitudinal resolution, the development of exact and efficient algorithms for image reconstruction from spiral cone-beam projection data has been a subject of active research in recent years. Katsevich's filtered backprojection formula represents a significant breakthrough in this field [4]. A number of exact reconstruction algorithms have been developed for the reconstruction of cone-beam projection data acquired in a helical mode, starting from Katsevich's work. These algorithms are mathematically exact in the absence of noise and discretization (sampling) effects, and generally produce images of high quality when used on real data. However, the known exact reconstruction algorithms cover only a narrow range of helical pitches or translation speeds of the object, although higher pitches or translation speeds are sometimes required to meet certain clinical or inspection requirements. This motivates continuing research on approximate algorithms that can more easily be adapted to general configurations. This study investigates the quality of the generalized Feldkamp algorithm [5] for helical μCBCT in rodent imaging, considering a helical trajectory for three different helical pitch factors of 0.5, 1 and 2.

II. MATERIALS AND METHODS

A. Phantoms
For this study, noise-free projections of two simulated phantoms (an analytical test phantom and a voxelized mouse phantom) were used. The first phantom, used to test the contrast sensitivity, was a simplified version of the clock
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 655–659, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
phantom. It consists of a sphere containing a set of 8 balls with varying densities, providing background contrast variations of 10%, 15%, 20%, 25%, 30%, 35%, 40% and 50%. These inner spheres are placed symmetrically above and below the isocenter, arranged in a clockwise fashion and gradually offset in the z-direction. Micro-CT projections of a digital (voxelized) phantom approximating a mouse (Fig. 1) were also simulated. The phantom was derived from a digital mouse phantom [6]. The densities of the tissues composing the phantom were as follows: body: 1.05 g/cm³, intestines: 1.03 g/cm³, substance filling the intestines: 0.3 g/cm³, spine: 1.42 g/cm³, other bones (the hips): 1.92 g/cm³. The phantom was discretized onto a 1024 x 1024 x 2048 matrix with a voxel size of 0.2 mm. The phantom was completely contained within the X-ray beam of the simulated μCT system.
Fig. 1. The digital mouse phantom (derived from [6])

B. X-ray CT Simulations
The objects/organs of the two phantoms were set to model the distribution of attenuation coefficients for a 21 keV photon beam. Cone-beam projection data were simulated from the two phantoms using Simphan, an in-house software tool for radiographic imaging investigations [7, 8]. This investigative software tool can be used to simulate the entire radiological process, including the imaged object, imaging modalities, operating parameters and beam transport. It provides sufficient accuracy and flexibility to allow its use in a wide range of approaches, and is of particular help in the design of an experiment and in conducting first-level trials. We used simulated data because they are particularly useful for studying specific effects, being free of the distortions and other inaccuracies inherent to radiographic units. Projection images of the phantoms were generated considering a helical trajectory for three different helical pitch factors of 0.5, 1 and 2. The scanning parameters for the two phantoms are shown in Table 1.

Table 1: Cone-beam scanning parameters used in the numerical simulations

                        Clock phantom   Mouse phantom
  SID (mm)              500             1000
  SDD (mm)              700             1300
  views/rot.            90              180
  helical pitch         0.5 / 1 / 2     0.5 / 1 / 2
  x-size detector       256 pixels      1024 pixels
  x pixel size          1 mm            0.2 mm
  y-size detector       256 pixels      1024 pixels
  y pixel size          1 mm            0.2 mm
  slice width           1 mm            0.2 mm
  size of reco. matrix  256³            1024 x 1024 x 2048

C. Reconstruction of the μCT acquisitions
The full μCBCT 3D volume of the digital mouse was reconstructed into a 1024 x 1024 x 2048 array with a pixel width and slice thickness of 0.2 mm. Due to its modest computational requirements and relative ease of implementation, a generalized filtered-backprojection algorithm [9] (cosine pre-weighting of the projections, ramp filtering along the detector lines, and backprojection) was used for the 3D tomographic reconstructions. The FDK algorithm is the most widely implemented method for 3D cone-beam reconstruction from transmitted X-ray projections. In the helical acquisition scheme, planar projections Pβ(p, ζ) of an object f(x, y, z) are obtained at a number of angles β. Basically, the following three steps are followed to reconstruct a z-slice: collecting data from a spiral turn centered on the z-slice, filtering the data, and backprojecting the filtered data onto the z-slice. The formula for the generalized backprojection reconstruction is given by

  g(x, y, z) = (1/2) ∫₀^{2π} [P(β)² / (P(β) − v)²] ∫₋∞^{+∞} Pβ(p, ζ) · m(P(β)u / (P(β) − v) − p) · [P(β) / √(P(β)² + p² + ζ²)] dp dβ ,   (1)
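The cosine pre-weighting and the row-wise ramp filter m that enter the formula above can be sketched as follows; the detector geometry values are illustrative, and only the filtering stage (not the backprojection) is shown.

```python
import numpy as np

def fdk_preprocess(proj, P, du, dv):
    """Cosine pre-weighting and row-wise ramp filtering of one cone-beam
    projection, i.e. the weight P/sqrt(P^2 + p^2 + zeta^2) and the filter m
    of Eq. (1). Geometry values are illustrative."""
    nv, nu = proj.shape
    p = (np.arange(nu) - nu / 2 + 0.5) * du       # detector column coordinates
    zeta = (np.arange(nv) - nv / 2 + 0.5) * dv    # detector row coordinates
    pp, zz = np.meshgrid(p, zeta)
    weighted = proj * P / np.sqrt(P**2 + pp**2 + zz**2)
    # Ramp filter |f| applied in the Fourier domain along each detector line
    ramp = np.abs(np.fft.fftfreq(nu, d=du))
    return np.real(np.fft.ifft(np.fft.fft(weighted, axis=1) * ramp, axis=1))

filtered = fdk_preprocess(np.ones((16, 32)), P=1000.0, du=0.2, dv=0.2)
```

In practice the ramp is usually multiplied by an apodization window (e.g. Hamming, as the text notes) before the inverse transform to attenuate high-frequency noise.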
where (u, v) denotes the rotated coordinate system described by

  u = x cos β + y sin β,   v = −x sin β + y cos β ,   (2)

β is the rotational angle and Pβ(p, ζ) is the projection acquired at angle β, expressed in the local coordinate system (p, ζ). Furthermore,

  ζ = P(β) z′(β) / (P(β) − v) ,   (3)

and the local coordinate system [x, y, z′(β)] associated with the angle β is defined by

  z′(β) = z − h(β) ,   (4)

where h is the distance traveled by the source, with h′(β) > 0 so that h(β) is always increasing. The filter m is applied one-dimensionally in the detector plane, along lines parallel to the plane of the source trajectory. In this study, we combined the ramp filter with an apodization window, such as the Hamming window, to attenuate high-frequency noise. In any μCT system, several factors affect the spatial resolution of the reconstructed images/volumes. These factors include the inherent resolution of the X-ray detector, the geometric magnification, the focal spot size, the stability of the rotation mechanism and the filtering method used during filtered-backprojection reconstruction.

III. RESULTS

Figures 2 and 3 show the central reconstructed slices of the two phantoms for the different helical pitches. The skeleton of the full 3D reconstructed mouse for a pitch factor of 2 is shown in Fig. 4. In order to compare the quality of the images reconstructed from the various pitch data, the normalized mean square error (NMSE) was computed for all reconstructed images as an objective image characteristic. In Fig. 5, the NMSE along the three axes of the mouse phantom is plotted for pitch 0.5 and pitch 2. Table 2 presents the central-slice NMSE for both phantoms for all three helical pitches. The simulated projections were generated with the in-house Simphan simulator, and the reconstructions were performed on a Pentium 4 computer at 2.8 GHz using the IDL language. It took about 23 seconds to reconstruct a 1024 x 1024 pixel slice from the 2-pitch helical CT data, and about 87 seconds to reconstruct the same image from the 0.5-pitch data.

Fig. 3. Central reconstructed slice of the mouse phantom using images acquired with a 0.5 helical pitch (left) and a 2 helical pitch (right).

Fig. 2. Central reconstructed slice of the modified clock phantom from data acquired at pitch 0.5 (left), pitch 1 (center) and a helical pitch factor of 2 (right)
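The NMSE figure of merit used to compare the reconstructions can be computed as in the following sketch. Normalizing by the energy of the reference image is an assumption, since the paper does not state the exact normalization it used.

```python
import numpy as np

def nmse(recon, reference):
    """Normalized mean square error between a reconstruction and a
    reference; normalization by the reference energy is an assumption."""
    return np.sum((recon - reference) ** 2) / np.sum(reference ** 2)

ref = np.array([1.0, 2.0, 3.0, 4.0])
err = nmse(ref + 0.1, ref)   # small uniform deviation from the reference
```

Applied slice by slice along x, y and z, this yields the per-direction error curves of the kind plotted in Fig. 5.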
Fig. 4. Three orthogonal slices and a 3D skeleton view of the reconstructed mouse, using data acquired with a helical pitch factor of 2
Fig. 5. NMS error plots along the X0, Y0 and Z0 directions for pitch factors of 0.5 (continuous line) and 2 (dotted line)

Table 2: NMS errors for the central slices of the two phantoms

  Pitch   Central slice, clock phantom   Central slice, mouse phantom
  0.5     0.032                          0.025
  1       0.033                          0.028
  2       0.055                          0.032

IV. DISCUSSION

There are numerous exciting applications for μCT in small-animal laboratory investigation. However, a fundamental limitation which should be considered, especially in experiments involving imaging the same animal over time, is the inherent use of ionizing radiation. In scans that combine both high resolution and low noise, the X-ray exposure can approach the lethal dose for small rodents (~6 Gy). We compared the quality of images reconstructed from data acquired with different helical pitches, using the same radiation dose per acquired image. Using a larger pitch (i.e. of 2) means acquiring only a quarter of the projection images needed for a 0.5 pitch factor, which reduces the radiation dose by a factor of four. Small-animal CT is now at the point where systems manufactured by various vendors are based on largely identical designs, with characteristics such as reconstruction time being among the few things that differentiate them. There are as yet no specific performance measurements defined to characterize small-animal CT system performance, so comparing the various systems is difficult. In most of these systems, the resolution is quoted as being the reconstruction voxel size, and no discussion is made of contrast or noise in the images. Although the resolution remains quite poor due to the rather large pixels of the detector used, soft tissue can be clearly distinguished from bone: lungs, heart and intestines can be identified on the reconstructed tomograms. Using a larger pitch, the reconstructed images exhibit larger cone-beam artifacts surrounding the smaller, higher-contrast structures (Figs. 2 and 3), as well as shading artifacts, and the NMSE slightly increases. The low-contrast structures and their shapes are visible in all reconstructed slices, for all helical pitches.

V. CONCLUSIONS

Significant advances in the development of transgenic and knockout animal models of human disease have made whole-animal imaging an important new application for micro-CT. In many studies of genetically altered animals, investigators require a non-destructive 3D technique to characterize the phenotype of the animal. The effect of helical pitch on the 3D image reconstruction of μCT data was evaluated using two simulated phantoms, for various situations (low contrast, small animals). Based on the above observations, we can state that using a pitch factor of 2 is preferable in longitudinal studies of small animals, in order to significantly reduce the radiation dose.
ACKNOWLEDGMENT
The authors would like to express their thanks to Dr. W.P. Segars for the digital mouse phantom and Dr. K. Bliznakova for her valuable assistance concerning the Simphan tool. We also thank the PENED 2003 programme for funding the above work.
REFERENCES
1. Holdsworth D W et al. (1993) A high resolution XRII-based quantitative volume CT scanner. Med. Phys. 20:449-462
2. Liu Y et al. (2001) Half-scan cone-beam CT fluoroscopy with multiple X-ray sources. Med. Phys. 28:1466-1471
3. Lee S C, Kim H K et al. (2003) A flat-panel detector based micro-CT system: performance evaluation for small-animal imaging. Phys. Med. Biol. 48:4173-4185
4. Katsevich A (2004) An improved exact filtered backprojection algorithm for spiral computed tomography. Advances in Applied Mathematics 32(4):681-697
5. Feldkamp L A, Davis L C, Kress J W (1984) Practical cone-beam algorithm. J. Opt. Soc. Am. A 1(6):612-619
6. Segars P et al (2004) Development of a 4D digital mouse phantom for molecular imaging research. Mol. Imaging Biol. 6(3):149-159
IFMBE Proceedings Vol. 22
7. Bliznakova K (2003) Study and development of software simulation for X-ray imaging. PhD thesis, Patras University, Greece
8. Lazos D, Kolitsi Z, Pallikarakis N (2000) A software data generator for radiographic imaging investigations. IEEE Trans. Inf. Technol. Biomed. 4:76-79
9. Wang G, Lin T H, Cheng P C, Shinozaki D M (1993) A general cone-beam reconstruction algorithm. IEEE Trans. Med. Imaging 12:486-496
Address of the corresponding author:
Author: Delia Soimu
Institute: University of Patras, Dept. of Medical Physics, School of Medicine
City: Patras
Country: Greece
Email: [email protected]
Microcalcification Detection using Digital Tomosynthesis, Dual Energy Mammography and Cone Beam Computed Tomography: A Comparative Study
Z. Kamarianakis, D. Soimu, K. Bliznakova, N. Pallikarakis
Department of Medical Physics, School of Medicine, University of Patras, Greece

Abstract — The purpose of this study was to investigate and compare microcalcification detectability using Digital Tomosynthesis (DTS), Dual Energy Mammography (DEM) and Cone Beam Computed Tomography (CBCT) with a flat-panel detector. A simulated 3D uncompressed breast approximating a medium-sized breast, with 50% adipose and 50% glandular tissue, was used. Microcalcifications were modeled as calcium carbonate spheres/ellipsoids with sizes in the full range of 0.1 to 1 mm. An amorphous-silicon CsI flat-panel detector of 2048 x 2048 pixels at 0.1 mm resolution was modeled. Images of the above phantom were simulated using three acquisition protocols, for DTS, DEM and CBCT. For DTS, projection data were acquired over an acquisition arc from -20° to 20° with a 2° sampling step, and the 3D reconstructed volume was obtained using a Multiple Projection Algorithm (MPA), with a 0.1 mm width between successive tomograms. In the DEM case, two synthetic projection images at 21 and 52 keV respectively were simulated. A weighted subtraction of the low- and high-energy digital X-ray images was performed, allowing the removal of background morphology and enhancement of the obscured details. For the last method (CBCT), 181 projections acquired every 2° over a full circular scanning path were used for 3D breast volume reconstruction using the well-known FDK algorithm. The image quality of the reconstructed tomograms was visually assessed and quantified in terms of contrast-to-noise ratio (CNR), relative detail contrast (RC), as well as the Artifact Spread Function (ASF). The results show that microcalcifications with diameter equal to or greater than 0.1 mm can be detected with the CBCT and DTS techniques. Superior quality is demonstrated by the CBCT technique in comparison with DTS and DEM, by means of better microcalcification detection.

Keywords — dual energy mammography, tomosynthesis, cone beam breast CT
I. INTRODUCTION
X-ray mammography is currently the best method for early detection of breast carcinoma and has been shown to reduce breast cancer mortality [1]. However, the detection of cancers in dense breast tissue is limited because they are masked by radiographically dense fibroglandular tissue, which may be overlying or surrounding the tumor/microcalcifications. Microcalcifications are tiny flecks of calcium in the soft tissue of the breast that can sometimes indicate the presence of an early breast cancer. Microcalcifications (μCs) appear as single spots or as groups of small calcifications huddled together, called "clusters of microcalcifications". They can vary in size and shape, ranging from micrometers to centimeters. Big calcifications (i.e., macrocalcifications) are usually not associated with cancer.
The purpose of this study was to investigate and compare μC detectability using Digital Tomosynthesis (DTS), Dual Energy Mammography (DEM) and Cone Beam Computed Tomography (CBCT) with a flat-panel detector. For this purpose, a realistic breast phantom was designed with a cluster of μCs. Subsequently, this breast model was used in simulations of the CBCT, DTS and DEM techniques in order to generate a set of diagnostic images, subjected to multiple evaluations.

II. MATERIALS AND METHODS
A. Breast Phantom and Detector
The breast phantom is a synthesized model of the female breast that was reported previously by our group [2]. The simulated 3D uncompressed breast approximates a medium-sized breast, with 50% adipose and 50% glandular tissue. Figure 1 shows the model of the breast phantom used in the study. The model also contained one cluster of five μCs. They were simulated as calcium carbonate spheres/ellipsoids with sizes in the range of 100 μm to 1 mm. The μC sizes are specified in Table 1. The pectoralis muscle was not modeled, for reasons of comparison among the techniques.

Table 1: μC dimensions

μC   dimensions (mm)
#1   (0.31, 0.57, 0.62)
#2   (0.50, 0.71, 0.36)
#3   (0.58, 0.38, 0.89)
#4   (0.10, 0.61, 0.35)
#5   (0.47, 0.43, 0.91)
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 660–663, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Fig. 1 The 3D model of the breast used in simulations, with a cluster of five μCs shown as red ellipsoids
An amorphous-silicon CsI flat-panel detector with a thickness of 0.1 mm and a size of 2048 x 2048 pixels at a resolution of 10 pixels/mm was selected for image acquisition based on previous studies [3]. The detector was modeled as a photon counting absorber. The characteristics of the modeled materials are listed in Table 2.

Table 2: Density of the materials modeled in simulations

Material   Density (ρ, g/cm³)
Gland      1.02
Adipose    0.95
CaCO3      2.8
CsI        4.51
B. Simulation Techniques
The following imaging acquisition techniques were simulated: (a) CBCT, (b) DTS and (c) DEM. Photon fluences and entrance skin exposures were calculated for each simulated acquisition technique as listed in Table 3. The entrance skin exposure is given per acquired image.

Table 3: Simulation parameters for image acquisition

Simulated technique         Photons/pixel      Entrance skin exposure (mR)
Low energy image for DEM    3.450734 x 10^7    600
High energy image for DEM   5.657647 x 10^7    200
DTS                         2.191216 x 10^6    38.0952
CBCT                        2.542041 x 10^5    4.42
DEM simulation: The incident 'low' and 'high' monoenergetic photon beams had energies of 21 keV and 52 keV, respectively. The acquired images corresponded to cranio-caudal views. The simulated low and high energy
images were subtracted according to Ergun's formalism [4] in order to obtain DEM images.
DTS/CBCT simulation: Projection data of the breast phantom were simulated both for DTS with a stationary detector and for circular CBCT, using the XRAY Imaging Simulator, an in-house software tool for radiographic imaging investigations [5]. Table 4 summarizes the acquisition parameters used in the numerical simulations for the DTS, DEM and CBCT cases. SID stands for the source-to-center-of-rotation distance, while SDD is the source-to-detector distance. In the case of DTS imaging, the X-ray tube is rotated in 2° increments to acquire projection images at 21 different angles within the limited arc of ±20°. The detector is kept stationary during the DTS acquisition procedure. In the case of CBCT, the source-detector pair rotates along a full circle from 0° to 360° with an angular increment of 2°, so 181 projections are taken.

Table 4: Simulation parameters for image acquisition

Parameter                  DTS            CBCT           DEM
SID (mm)                   600            600            600
SDD (mm)                   800            800            800
Acquisition angle          [-20°:+20°]    [0°:360°]      0°
Acquisition step           2°             2°             -
x-size detector (pixels)   2700           1200           1200
x pixel size (mm)          0.1            0.1            0.1
y-size detector (pixels)   2700           1200           1200
y pixel size (mm)          0.1            0.1            0.1
slice width (mm)           0.1            0.1            0.1
size of reco. matrix       1100x700x700   1100x700x700   1200x1200
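The dual-energy subtraction step can be sketched as a generic weighted log-subtraction. This is a minimal illustration, not a reproduction of Ergun's exact formalism [4]; the image values and the weight below are our own toy choices:

```python
import numpy as np

# Minimal sketch of dual-energy weighted subtraction (a generic weighted
# log-subtraction; the exact weighting of Ergun's formalism [4] is not
# reproduced). I_low / I_high stand for the simulated 21 keV and 52 keV
# images, w is a tissue-cancellation weight.

def de_subtract(I_low, I_high, w=1.0):
    # Log-attenuation images let overlying background be cancelled by a
    # linear combination, leaving high-attenuation details enhanced.
    return np.log(I_high) - w * np.log(I_low)

# Toy example: uniform background plus one detail that attenuates more
# (lower transmitted intensity) at the low energy.
I_low = np.full((64, 64), 0.5)
I_high = np.full((64, 64), 0.8)
I_low[30:34, 30:34] *= 0.7
de = de_subtract(I_low, I_high, w=1.0)
```

With w chosen to cancel the background, the subtracted image is flat except where the detail lies, which is the mechanism behind the background-removal claim in the abstract.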
C. Reconstruction algorithms
DTS reconstruction with stationary detector: For the DTS reconstruction, a modified version of the Multiple Projection Algorithm [6] was used, adapted for the rotating source-stationary detector case [7]. For each acquired angle, the projection data are first geometrically projected onto the "image formation plane", then shifted depending on the position of the reconstructed plane, and normalized to the magnification of the isocenter plane.
CBCT reconstruction: In the case of circular CBCT, the well-known Feldkamp approximate algorithm [8] was chosen for 3D volume reconstruction.
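The shift-and-normalize idea behind the DTS reconstruction can be sketched with a plain shift-and-add scheme. This is a simplified stand-in for the actual MPA [6, 7]: the geometric projection onto the "image formation plane" and the isocenter-magnification normalization are omitted, and the shift model below is our own small-angle assumption:

```python
import numpy as np

# Shift-and-add sketch of limited-angle tomosynthesis (simplified stand-in
# for the Multiple Projection Algorithm [6,7]): each projection is shifted
# so that structures in the selected plane align, then the set is averaged.

def shift_add_plane(projections, angles_deg, z_mm, pixel_mm=0.1):
    """Reconstruct one tomogram at height z_mm above the isocenter plane."""
    recon = np.zeros_like(projections[0], dtype=float)
    for proj, ang in zip(projections, angles_deg):
        # Parallax shift (in pixels) of a structure at height z_mm when the
        # source is at angle ang; 0.1 mm pixel size as in Table 4.
        shift_px = int(round(z_mm * np.tan(np.deg2rad(ang)) / pixel_mm))
        recon += np.roll(proj, shift_px, axis=1)
    return recon / len(projections)

# Usage: 21 angles over the +/-20 degree arc of the DTS protocol; a point
# lying in the isocenter plane (z = 0) adds up coherently.
angles = np.arange(-20, 21, 2)
projs = []
for _ in angles:
    p = np.zeros((32, 32))
    p[16, 16] = 1.0
    projs.append(p)
recon = shift_add_plane(projs, angles, z_mm=0.0)
```

Structures in the selected plane reinforce while off-plane structures are smeared, which is the source of the ghosting artifacts quantified by the ASF later in the paper.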
D. Image Quality Evaluation
To quantitatively evaluate the quality of the reconstructed images, three parameters were calculated for all the features: (i) the Contrast-to-Noise Ratio (CNR), (ii) the relative contrast (RC) and (iii) the Artifact Spread Function (ASF). The CNR measures the detectability of a feature in a reconstruction plane. It is defined as:

CNR = (S_μC - S_bg) / σ_bg,   (1)

where S_μC is the mean pixel value of the μC, S_bg is the mean pixel value in a background area adjacent to the feature, and σ_bg is the standard deviation of pixel values in the background region of interest (ROI). The value of S_bg was measured in a moving window with a size of 50 x 50 pixels around the object under evaluation. The mean values of S_bg and σ_bg were estimated from a set of 6 moving windows around the object. The ROI for S_μC depends on the size of the investigated feature: the smallest was a 1x3-pixel area, while the largest was a 4x4-pixel area, located approximately at the center of the microcalcification.
The relative contrast (RC) is defined as:

RC = (S_μC - S_bg) / (S_μC + S_bg) · 100%   (2)

The artifacts that exist in reconstructions usually appear as ghosting artifacts due to the limited resolution along the Z axis. Artifacts from a real feature located in an image plane are usually observed in other image planes with an appearance close to that of the real feature [9]. The artifacts are typically stronger in image planes close to the plane in which the real feature is located. The ASF measures the intensity of the artifact relative to the intensity of the real feature that causes it. The ASF is defined as:

ASF(z) = (S_artifact(z) - S_bg(z)) / (S_μC(z0) - S_bg(z0)),   (3)

where z0 is the location of the in-focus plane of the real feature, z is the location of an off-focus plane, S_μC(z0) and S_bg(z0) are the average pixel intensities of the feature and the image background in the in-focus plane, respectively, and S_artifact(z) and S_bg(z) are the average pixel intensities of the artifact and the image background in the off-focus plane, respectively. The ASF was measured for both the DTS and CBCT reconstruction cases. S_bg(z0) was measured in two 60x60 ROIs in the in-focus plane and S_μC(z0) in a 2x2 ROI located approximately at the center of the microcalcification in the in-focus plane. S_artifact(z) and S_bg(z) were measured in ROIs at the same in-plane locations in the off-focus plane.

III. RESULTS
Figure 2 shows the DE image. A simple inspection of the zoomed region of this figure demonstrates that the smallest μC (i.e., #4) is not visible.

Fig. 2 DEM image

Figures 3 and 4 show the μCs reconstructed on slices using DTS and CBCT. Figures 3(d) and 4(d) show the reconstructed plane containing the μC with size 0.1 mm.

Fig. 3 DTS reconstructed tomograms at planes where the μCs are located (from left to right, a-d)

Fig. 4 CBCT reconstructed tomograms where the μCs are located (up-left to right, a-c; down-left to right, d, e)
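The evaluation metrics of Eqs. (1)-(3) can be sketched directly from their definitions. The ROI shapes and values below are illustrative; the paper's actual window placement (moving 50 x 50 windows, 60 x 60 background ROIs) is simplified away:

```python
import numpy as np

# Sketch of the metrics of Eqs. (1)-(3); ROI sizes and window placement
# are simplified relative to the protocol described in Section II.D.

def cnr(feature_roi, background_roi):
    # Eq. (1): CNR = (S_uC - S_bg) / sigma_bg
    return (feature_roi.mean() - background_roi.mean()) / background_roi.std()

def relative_contrast(feature_roi, background_roi):
    # Eq. (2): RC = (S_uC - S_bg) / (S_uC + S_bg) * 100%
    s_f, s_b = feature_roi.mean(), background_roi.mean()
    return (s_f - s_b) / (s_f + s_b) * 100.0

def asf(s_artifact, s_bg_off, s_feature_in, s_bg_in):
    # Eq. (3): off-focus artifact intensity relative to the in-focus feature.
    return (s_artifact - s_bg_off) / (s_feature_in - s_bg_in)

# Toy ROIs: a bright 2x2 feature over a background of mean 2, std 1.
feature = np.full((2, 2), 10.0)
background = np.array([1.0, 3.0, 1.0, 3.0])
```

An ASF close to 0 at a given off-focus distance means the feature leaves almost no ghost there, which is why a rapidly decaying ASF curve indicates better depth localization.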
Visually, the best μC detectability is demonstrated by the CBCT technique. The DTS slices exhibit the typical appearance of limited angle reconstruction methods, introducing shape distortions in the contours of the μCs. These effects are not present in CBCT, where the shapes of the μCs are quite accurately reconstructed. DTS and CBCT provide very high CNR and RC of the features in comparison to DEM. The RC improvement achieved by CBCT and DTS is 3 to 8 times compared to DEM. Results showed that the RC for DTS and CBCT are comparable. The CNR_CBCT was calculated to be 3 times larger than CNR_DTS. Also, CNR_DTS is approximately 5 times larger than that of DEM, while CNR_CBCT is 17 times larger, respectively. Figure 5 shows the comparison of the ASF for μC #5 in the cases of DTS and CBCT. A positive distance in this figure represents reconstructed images above the feature layer. The ASF values for the CBCT reconstruction are clearly lower compared to DTS.

Fig. 5 ASF curves for DTS (continuous line, triangles) and CBCT (dashed line, circles) for microcalcification #5

IV. DISCUSSIONS
The DEM failed to visualize the smallest μC. This is due to the fact that the breast is uncompressed and background subtraction is not completely achieved. DTS quality can be improved by noise removal techniques. Both CBCT and DTS demonstrate high contrast. While the relative contrast is similar in both techniques, CNR_CBCT is higher than in DTS because of the additional reconstruction artifacts in DTS. The effect of the artifacts depends on the distance from the feature's position in the z-direction, the contrast of the feature and the number of projections used for reconstruction. The MPA has a more smoothly decreasing ASF, demonstrating that there are strong artifacts from the corresponding feature in the neighboring reconstructed planes. On the contrary, the corresponding ASF for CBCT drops rapidly as the distance from the feature increases, due to the full coverage of the circular trajectory.

V. CONCLUSIONS
This paper investigated μC detectability using three X-ray imaging techniques. Though DTS succeeds in detecting all μCs, various noise removal algorithms will be needed for accurate 3D localization. CBCT provides more accurate results, for both detection and reconstruction of the μCs' shape, and can further be used as the technique of choice in the localization procedure of an abnormality.

ACKNOWLEDGMENT
The authors would like to thank the PENED 2003 programme for funding the above work.

REFERENCES
1. Kerlikowske K et al (1995) Efficacy of screening mammography. A meta-analysis. JAMA 273:149-154
2. Bliznakova K, Bliznakov Z, Bravou V, Kolitsi Z, Pallikarakis N (2003) A 3D breast software phantom for mammography simulation. Phys Med Biol 22:3699-3720
3. Bliznakova K, Kolitsi Z, Pallikarakis N (2006) Dual-energy mammography: simulation studies. Phys Med Biol 51:4497-4515
4. Ergun D et al (1990) Single-exposure dual-energy computed radiography: improved detection and processing. Radiology 174:243-249
5. Bliznakova K (2003) Study and development of software simulation for X-ray imaging. PhD dissertation, Dept. of Medical Physics, University of Patras, Greece
6. Kolitsi Z, Panayotakis G, Anastassopoulos V, Skodras A, Pallikarakis N (1992) A multiple projection algorithm for digital tomosynthesis. Med Phys 19:1045-1050
7. Soimu D, Kolitsi Z, Pallikarakis N (2004) 4th European Symposium in Biomedical Engineering and Medical Physics, Patras, Greece
8. Feldkamp LA, Davis LC, Kress JW (1984) Practical cone-beam algorithm. J Opt Soc Am A 1:612-619
9. Wu T, Moore RH, Rafferty EA, Kopans DB (2004) A comparison of reconstruction algorithms for breast tomosynthesis. Med Phys 31:2636-2647
Author: Zacharias Kamarianakis
Institute: University of Patras
Street: University Campus
City: Patras
Country: Greece
Email: [email protected]
Non-Minimum Phase Iterative Deconvolution of Ultrasound Images
N. Testoni, L. De Marchi, N. Speciale and G. Masetti
DEIS/ARCES, Università di Bologna, Bologna, Italy

Abstract — The strongest limitation to ultrasound image quality is the blurring effect on the back-scattered echo produced by the echographic transducer response. This effect significantly alters the received echo, reducing the resolution of the echographic image. Since, under fairly general assumptions, a convolution-based model can be used to represent the incoming radiofrequency echo signal, fast and robust deconvolution algorithms can be successfully employed to improve image quality by attenuating the unwanted transducer effects. In this work we propose an iterative deconvolution algorithm designed to deal with both non-minimum phase transducer impulse responses and scattering events not aligned with the sampling grid. This is achieved by means of analytically-designed all-pass filtering stages. Sparse solutions of the deconvolution problem, as well as Bernoulli-Gaussian output sequences, are particularly favoured by the adopted approach. Deconvolution performance, evaluated in terms of axial resolution gain, peak signal-to-noise ratio, quality index and contrast gain over a dataset of phantom and in-vivo images, shows that our algorithm achieves very good results when compared to others in the literature.

Keywords — non-minimum phase, iterative deconvolution, non-integer delay, all-pass filtering, ultrasound.
I. INTRODUCTION
Ultrasound (US) imaging is a relatively inexpensive, fast and radiation-free imaging modality. It is excellent for non-invasive imaging and diagnosis of a number of organs and conditions, without X-ray radiation; it is, however, often difficult to interpret: as a matter of fact, the results of diagnostics using conventional US images are highly dependent on the physician's skills. Echographic signals result from the interaction between the pressure wave generated by the transducer and the tissue structure. A comprehensive model for the received radiofrequency (RF) signal y(t) is discussed in [1], but under the assumptions of weak scattering, narrow US beam and linear propagation, the incoming echo signal can be expressed [2] as:

y(t) = h(t) ∗ x(t) + γ(t)   (1)

where x(t) is the tissue reflectivity function, h(t) the acquisition system point spread function (PSF) and γ(t) a zero-mean white noise term. Besides noise, it is clear from model (1) that the biggest limitation to ultrasound image quality is the blurring effect on the back-scattered echo produced by the transducer response. This effect significantly alters the received echo, reducing the resolution of the echographic image. In order to improve US image quality by attenuating the unwanted transducer effects, deconvolution techniques can be successfully employed. Two approaches are most common when dealing with US image deconvolution: type I algorithms usually incorporate the PSF estimation procedure within the deconvolution procedure, whereas in type II algorithms, PSF and true image estimation are two disjoint procedures. While the first approach usually allows for more robust estimates of the clean tissue response, it leads to computationally heavy algorithms not suited for real-time signal processing. The second approach is somewhat less robust, while allowing for more efficient computations. In fact, many commonly used type II algorithms require the estimated PSF to be minimum-phase to ensure reconstruction stability, hence discarding any phase information and allowing only for an approximate reconstruction of the incoming blurred signal. Moreover, it can be shown that whenever an echo is generated by an interface or a scatterer which is not aligned with the sampling grid implicitly defined by the time domain sampling, it is characterized by a different phase content with respect to the ones generated by interfaces aligned with the grid.
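The convolution model of Eq. (1) can be sketched numerically. The PSF shape and scatterer positions below are illustrative choices of ours, not estimates from real data:

```python
import numpy as np

# Sketch of the received-echo model of Eq. (1): the RF line is a sparse
# tissue reflectivity convolved with the system PSF plus white noise.
# PSF shape and scatterer layout are illustrative only.

rng = np.random.default_rng(1)
n = 256
x = np.zeros(n)                          # tissue reflectivity x(t)
x[[40, 100, 180]] = [1.0, -0.8, 0.6]     # isolated scatterers

t = np.arange(-16, 17)
h = np.exp(-(t / 4.0) ** 2) * np.cos(2 * np.pi * t / 6.0)  # band-pass PSF h(t)

y = np.convolve(x, h, mode="same") + 0.01 * rng.standard_normal(n)  # Eq. (1)
```

Each isolated reflector is smeared into a copy of h(t), which is exactly the blurring that the deconvolution algorithm of Section IV tries to undo.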
In this work we propose a type II deconvolution algorithm based on an iterative procedure which addresses the previously described limitations: by means of analytically-designed all-pass filtering techniques, our algorithm can deal with both minimum and non-minimum phase impulse responses and adjust their phase shift in order to improve the matching with the incoming signal. Section II of this work briefly describes the phenomenon of scattering from a target which is not aligned to the sampling grid; a filtering technique used to simulate the non-integer delay is presented in Section III, as well as the criteria for its optimization; Section IV discusses the iterative deconvolution algorithm. Results from the
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 664–668, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
comparisons of our algorithm with others in the literature are reported in Section V.

II. SCATTERING FROM MISALIGNED TARGETS
By considering the simplest possible tissue response in the continuous time domain (a Dirac delta at time θ0), calling h(t) the unsampled acquisition system impulse response and T0 the sampling period, and neglecting the noise, the following expression is obtained for each time-sample n:

y(nT0) = ∫_{-∞}^{+∞} h(nT0 - τ) δ(τ - θ0) dτ = h(nT0 - θ0)   (2)

Moving to the Z-domain and calling m the integer part of the ratio θ0/T0, the following equality holds:

Y(z) = z^(-m) z^(-(θ0/T0 - m)) H(z)   (3)

where Y(z) and H(z) are the Z transforms of the output signal y(t) and the impulse response h(t), respectively. Working in a discrete environment, any non-integer delay term is usually neglected, giving way to approximate estimations of the received echoes and increasing the output noise level. In order to improve deconvolution performance, it is possible to jointly estimate what is called the equivalent impulse response Heq(z):

Heq(z) = z^(-(θ0/T0 - m)) H(z)   (4)

The founding idea on which our algorithm is built is the option of decomposing any causal filter with no poles or zeros on the unit circle into a cascade of minimum-phase and all-pass filters [3], so:

Heq(z) = z^(-(θ0/T0 - m)) Hap(z) Hmin(z)   (5)

where Hmin is the minimum-phase estimate of H and Hap contains the discarded phase information. The cumulative effect of the non-integer delay plus Hap is modelled using all-pass filters, while Hmin carries amplitude-related information.

Figure 1 Evaluation of the MAE, expressed in decimal degrees, between the phase of the all-pass filter and the ideal delay term for different values of ω and τ

III. ALL-PASS FILTERING AS NON-INTEGER DELAY
In practice it is possible to model Hap using a cascade of simple all-pass filters [3]:

P(z) = (-a + z^(-1)) / (1 - a* z^(-1))   (6)

It is relatively simple to simulate a non-integer delay by means of (6): assuming that the filter parameter a is real and evaluating P(z) on the unit circle, one obtains:

P(e^(jω)) = [(a² + 1)cos(ω) - 2a + j(a² - 1)sin(ω)] / [(a² + 1) - 2a cos(ω)]   (7)

To ensure filter stability, the tuning parameter a must satisfy |a| < 1.

Figure 2 Subdivision of the delay/pulsation plane according to the best parameter choice for non-integer delay reconstruction
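The all-pass property of Eq. (6) and the closed form of Eq. (7) are easy to verify numerically. The parameter value and frequency grid below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical check of the first-order all-pass section of Eq. (6): for real
# |a| < 1 its modulus on the unit circle is 1, and its frequency response
# matches the closed form of Eq. (7).

def allpass_response(a, omega):
    z_inv = np.exp(-1j * omega)
    return (-a + z_inv) / (1 - a * z_inv)   # a real, so a* = a

a = 0.4
omega = np.linspace(0.01, np.pi - 0.01, 500)
P = allpass_response(a, omega)

# Eq. (7), specialized to real a
closed = (((a**2 + 1) * np.cos(omega) - 2 * a + 1j * (a**2 - 1) * np.sin(omega))
          / ((a**2 + 1) - 2 * a * np.cos(omega)))
```

Since |P| is identically 1, the filter changes only the phase of the signal, which is what makes it usable as a pure (fractional) delay element.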
While the modulus of P(e^(jω)) is always 1, independently of both a and ω, the analytical phase φP(ω, a) of P depends on both these parameters:

φP(ω, a) = tan^(-1)[(a² - 1)sin(ω) / ((a² + 1)cos(ω) - 2a)]   (8)

Following [3], it is easy to derive from (8) an analytical formula for the group delay τ = τg(ω, a) of the filter (6):

τg(ω, a) = (1 - a²) / (1 - 2a cos(ω) + a²)   (9)

Equipped with (9) and given an estimate of the transfer function of the echographic transducer, it is possible to optimize the filter parameter a so that (6) produces an overall delay that compensates both the effects of non-minimum phase and target misalignment. A useful approximation of this estimation can be obtained by evaluating (9) at ω = ω0, the pulsation with the highest energy spectral density of the PSF. The group delay (9) is a smooth function of the filter parameter a within its definition domain; the function shape depends on the pulsation ω and is symmetrical around ω = π/2. Solving (9) for a as a function of ω and τ, two solutions are obtained for each given input parameter pair:

a1,2 = [τ cos(ω) ± √(1 - τ² sin²(ω))] / (τ + 1)   (10)

where a1 and a2 are associated with the minus and plus sign, respectively. Thanks to the implicit bounds on ω and τ, filter parameters obtained in this way are always real numbers. It is of particular interest to estimate how similar the unwrapped phase of the all-pass filter is to the phase of the non-integer delay term z^(-q), with q being the difference between θ0/T0 and its integer part m. By setting τ = q and making use of (10) to estimate the all-pass filter parameter, we measured the Median Absolute Error (MAE) in decimal
Figure 3 Computational grid scheme: original grid points are filled in black, while refined points are filled in white. The grid scheme used in the naïve implementation is depicted in the top row, while the one exploited by the fast method is shown in the bottom row.
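The relation between Eqs. (9) and (10) admits a quick numerical round-trip check: pick a target group delay τ at a pulsation ω, solve for a, and verify that (9) reproduces τ. The specific ω and τ values are arbitrary test points of ours:

```python
import numpy as np

# Round-trip check of Eqs. (9)-(10): solve Eq. (10) for the all-pass
# parameter a given a target group delay tau at pulsation omega, then
# verify that Eq. (9) returns tau for both roots.

def group_delay(omega, a):                      # Eq. (9)
    return (1 - a**2) / (1 - 2 * a * np.cos(omega) + a**2)

def solve_a(omega, tau):                        # Eq. (10), both roots
    root = np.sqrt(1 - tau**2 * np.sin(omega)**2)
    return ((tau * np.cos(omega) - root) / (tau + 1),
            (tau * np.cos(omega) + root) / (tau + 1))

omega, tau = 1.0, 0.8                           # illustrative test point
a1, a2 = solve_a(omega, tau)
```

Both roots reproduce the requested delay; which one yields the smaller phase error over the whole band is what Figs. 1 and 2 map out.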
degrees over the whole pulsation range for different values of ω and τ. The results, shown in Figs. 1-2, allow us to say that the best results are obtained by making use of a1 over most of the delay/pulsation plane, except for a small region corresponding to high values of the independent variables.

IV. ITERATIVE DECONVOLUTION ALGORITHM
Having shown that an all-pass filter, with an appropriate setting of its parameter, can adequately emulate the behaviour of a non-integer delay term, a modified version of the CLEAN algorithm [4] is used to perform signal deconvolution. Given the estimated acquisition system impulse response h and the blurred and noisy signal y, the original algorithm steps can be summarized in this way:
1. Find the position pk for which h has the best correlation with y.
2. Optimize the amplitude gk of h so that gk · h becomes a good fit for y and subtract the final result from y.
3. Repeat steps 1 and 2, each time replacing y with the result of step 2, until its energy becomes lower than a fixed threshold or the maximum number of iterations is reached.
4. Return all the combinations of positions pk and amplitudes gk found evaluating steps 1 and 2.
For the purposes of our algorithm, the first step in this sequence is substituted with one capable of taking non-integer delays into account. The values which can be used for pk are usually taken from an integer grid: in practice, this grid is refined by adding N-1 equally spaced points between each pair of points. Within each interval, each of these new points is associated with a value of the parameter τ used to estimate the all-pass filter parameter a: in particular, for the j-th point τ = j/N is set. A naïve implementation of our algorithm uses the grid shown in the top row of Fig. 3: at each white point of the grid an equivalent acquisition system impulse response he is synthesized by filtering h with the corresponding all-pass filter, while at black points the original h is used.
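The original CLEAN loop (steps 1-4 above) can be sketched on the integer grid only, without the all-pass refinement. The pulse shape, signal layout and stopping values below are our own toy choices:

```python
import numpy as np

# Minimal CLEAN-style matching-pursuit sketch of steps 1-4 above,
# restricted to the integer grid (no non-integer delay refinement).

def clean(y, h, max_iter=20, threshold=1e-6):
    residual = y.astype(float).copy()
    positions, amplitudes = [], []
    for _ in range(max_iter):
        corr = np.correlate(residual, h, mode="valid")
        p = int(np.argmax(np.abs(corr)))            # step 1: best position
        g = corr[p] / np.dot(h, h)                  # step 2: LS amplitude
        residual[p:p + len(h)] -= g * h             # subtract the fitted pulse
        positions.append(p)
        amplitudes.append(g)
        if np.dot(residual, residual) < threshold:  # step 3: energy stop
            break
    return positions, amplitudes                    # step 4

# Usage: recover two scaled, shifted copies of a short pulse.
h = np.array([0.2, 1.0, 0.4])
y = np.zeros(30)
y[5:8] += 2.0 * h
y[15:18] += -1.5 * h
pos, amp = clean(y, h)
```

On this noiseless toy signal the loop recovers both scatterer positions and amplitudes and stops after two iterations; the modified algorithm replaces step 1 with a search over the refined grid described above.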
Correlation with the original signal is then computed and a vector is generated: the position pk within this vector, corresponding to the highest correlation, is then used along with the matching pulse. Although straightforward, this procedure is quite suboptimal and slow, as it needs to recalculate he at each point of the grid. An optimization of this policy is shown in the bottom row of Fig. 3: instead of evaluating he at each point, all the N-1 shifted versions of h are calculated in advance.
Table 1 Performance comparison of deconvolution algorithms applied on echographic images: on top, results related to phantom images; on bottom, results obtained from in-vivo images

Phantoms   Gax            PSNR [dB]       QI            CG
FWRD       0.84 ± 0.17    22.34 ± 2.09    0.79 ± 0.09   1.15 ± 0.07
WLSD       3.83 ± 0.95    22.56 ± 1.92    0.83 ± 0.07   2.56 ± 0.22
ADID       6.67 ± 0.42    21.91 ± 1.68    0.80 ± 0.06   3.33 ± 0.56

In-vivo    Gax            PSNR [dB]       QI            CG
FWRD       1.64 ± 0.22    26.54 ± 2.38    0.93 ± 0.04   0.83 ± 0.11
WLSD       3.38 ± 1.13    26.63 ± 1.96    0.93 ± 0.03   3.22 ± 0.28
ADID       8.73 ± 0.54    25.54 ± 2.06    0.91 ± 0.03   4.43 ± 0.86
Then, calling H the set formed by h and the so-generated he, the correlation between y and each element of H is estimated using the original grid. In fact, white points corresponding to the same j are separated by the very same interval as the black points. Thinking of the latter as corresponding to j = 0, each position pk is determined by the coordinates of the dot which achieves the best correlation with y. So, given the estimated acquisition system impulse response h and the blurred and noisy signal y, the final algorithm steps can be summarized in this way:
1. Set the number N of subdivisions of the base interval.
2. Generate the N-1 non-integer shifted versions he of h by filtering h with a properly set all-pass filter P(z).
3. Find the position pk for which there is the best correlation between y and any of the elements of H.
4. Optimize the amplitude gk of the corresponding element hk of H so that gk · hk becomes a good fit for y and subtract the final result from y.
5. Repeat steps 3 and 4, each time replacing y with the result of step 4, until its energy becomes lower than a fixed threshold or the maximum number of iterations is reached.
6. Return all the combinations of positions pk and amplitudes gk found evaluating steps 3 and 4.

V. PERFORMANCE EVALUATION
To evaluate the proposed algorithm as a de-blurring technique for biomedical images and to verify and compare its effectiveness against other type II algorithms in the literature (FWRD [5] and WLSD [6]), we used an RF US signal dataset comprising both synthetic phantom (CIRS Model 047) and in-vivo TRUS acquisitions of prostatic glands (264 frames), all obtained with commercial US equipment (MYLAB90, Esaote S.p.A.). To quantify resolution improvement, the axial Resolution Gain at 6 dB (Gax) [7] was measured; conversely, the Peak Signal to Noise Ratio (PSNR) and Quality Index (QI) were used to compute the dissimilarity between the original and
processed image, in terms of loss of correlation, luminance and contrast distortion [8]; finally, image contrast enhancement on phantoms was measured by means of the Contrast Gain (CG) [9]. All the images from the dataset were processed with the proposed deconvolution algorithm, driven by the pulse estimated through cepstral techniques following two procedures: in the first one, the pulse was estimated using the returning echo of a water-tank experiment, while in the second one the estimation was conducted directly on the acquired image. Image processing based on the second method provided overall better performance with respect to the considered evaluation metrics. This is due to the system response aberration caused by intrinsic inhomogeneities in the sound speed within the propagation medium, which can be accounted for only by estimating the pulse from the acquired frames. Table 1 reports the mean values and the standard deviations of the results obtained processing both phantom and in-vivo images. All the discussed algorithms provide a good resolution increase in the axial direction for both in-vivo and phantom acquisitions, with the only notable exception of FWRD applied to phantoms; better performance was recorded processing the in-vivo frames. Our algorithm features the best results on both phantoms and in-vivo frames, with a very low deviation from the mean value. The standard WLSD scores second on phantoms, while its adaptive delay version is second best on in-vivo images, however with a deviation higher than our algorithm's. The peak SNR is almost the same for all the algorithms, with fluctuations of less than 2 dB, both on phantoms and in-vivo frames. These fluctuations are comparable to the relative deviations from the mean values, and thus negligible. With the only noticeable exception being that in-vivo images are better processed compared to phantoms, the same happens regarding image quality.
Finally, the Contrast Gain estimations again favor our algorithm as far as the mean values are concerned. However, while these results are quite good, the same cannot be said concerning the ratio between mean and standard deviation: in this case WLSD is the best algorithm on in-vivo frames, while FWRD scores the best results on phantoms. Figure 4 visually compares the outputs of the different deconvolution algorithms. At visual inspection, the lumen at coordinates [5, 25] is best rendered by the FWRD algorithm, while the best overall improvement in image resolution is achieved by our algorithm. Although the best image background noise rejection is again obtained by FWRD, several anatomical structures that could not be seen in the original image become visible only after processing with our algorithm. WLSD seems to be a good
IFMBE Proceedings Vol. 22
N. Testoni, L. De Marchi, N. Speciale and G. Masetti
compromise solution, featuring the fastest computation time among all the discussed algorithms.
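The iterative steps listed earlier (correlate the residual with the bank H of sub-sample-shifted pulses, fit and subtract the best match, repeat) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: linear interpolation stands in for the all-pass filter P(z), and the threshold and iteration limit are arbitrary.

```python
import numpy as np

def fractional_shifts(h, n_sub):
    """Generate the sub-sample shifted versions of the pulse h.
    Linear interpolation stands in for the paper's all-pass filter P(z)."""
    t = np.arange(len(h))
    return [np.interp(t - k / n_sub, t, h, left=0.0, right=0.0)
            for k in range(n_sub)]

def clean_deconvolve(y, h, n_sub=4, max_iter=100, energy_thresh=1e-3):
    """Iteratively match shifted pulse replicas to the residual (matching-
    pursuit style); return (position, sub-shift index, amplitude) triples."""
    H = fractional_shifts(h, n_sub)
    residual = y.astype(float).copy()
    e0 = np.dot(residual, residual)
    scatterers = []
    for _ in range(max_iter):
        # step 3: best correlation between the residual and any element of H
        best = None
        for j, hj in enumerate(H):
            corr = np.correlate(residual, hj, mode='valid')
            p = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[p]) > abs(best[0]):
                best = (corr[p], p, j)
        c, p, j = best
        hj = H[j]
        # step 4: least-squares amplitude, then subtract the fitted replica
        g = c / np.dot(hj, hj)
        residual[p:p + len(hj)] -= g * hj
        scatterers.append((p, j, g))
        # step 5: stop when the residual energy drops below the threshold
        if np.dot(residual, residual) < energy_thresh * e0:
            break
    return scatterers, residual
```

Feeding in a signal containing a single scaled copy of the pulse recovers its position and amplitude in one iteration.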
Performance comparisons between our algorithm and other algorithms in the literature have been presented. In particular, we compared results obtained from different standard US image-quality metrics on phantom and in-vivo images, showing that our non-integer delay filtering technique performs better than other deconvolution methods, allowing noticeable improvements in image resolution.
ACKNOWLEDGMENT

The authors gratefully acknowledge Prof. L. Masotti and his group (University of Florence, Italy) for providing the RF dataset.
REFERENCES
Figure 4 Visual comparison of the deconvolution algorithms on echographic images; from top to bottom: a) original image, b) our proposed algorithm's output, c) FWRD output, d) WLSD output.
VI. CONCLUSIONS

In this work we presented an iterative deconvolution algorithm suitable for US image deblurring. By making use of analytically optimized all-pass filtering techniques and correlation estimations, it can fruitfully use as input both minimum and non-minimum phase impulse responses and address phase problems connected to scatterers not aligned with the sampling grid.
[1] Chen C, Hsu W-L, Sin S-K (1988) A comparison of wavelet deconvolution techniques for ultrasonic NDT. IEEE Proc. vol. 2, Intern. Conf. on Acous. Speech and Sign. Proc., 1988, pp 867–870
[2] Insana M F, Wagner R F, Brown D G et al. (1990) Describing small-scale structure in random media using pulse-echo ultrasound. J Acous Soc Am 87:179–192
[3] Manolakis D (2005) Statistical and Adaptive Signal Processing: Spectral Estimation. Artech House
[4] Högbom J A (1974) Aperture synthesis with a non-regular distribution of interferometer baselines. Astron Astrophys Suppl 15:417–426
[5] Neelamani R, Hyeokho C, Baraniuk R (2004) ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems. IEEE Trans Sign Proc 52:418–433
[6] Izzetoglu M, Onaral B, Bilgutay N (2000) Wavelet domain least squares deconvolution for ultrasonic backscattered signals. IEEE/EMBS Proc. vol. 1, Intern. Conf. Eng. Med. and Biol. Soc., 2000, pp 321–324
[7] Abeyratne U, Petropulu A, Reid J et al. (1997) Higher order versus second order statistics in ultrasound image deconvolution. IEEE Trans Ultrason Ferroelect Freq Contr 44:1409–1416
[8] Loizou C, Pattichis C, Christodoulou C et al. (2005) Comparative evaluation of despeckle filtering in ultrasound imaging of the carotid artery. IEEE Trans Ultrason Ferroelect Freq Contr 52:1653–1669
[9] Tang J, Peli E, Acton S (2003) Image enhancement using a contrast measure in the compressed domain. IEEE Sign Proc Lett 10:289–292
Dynamic Visualization of the Human Orbit for Functional Diagnostics in Ophthalmology, Cranio-maxillofacial Surgery, and Neurosurgery

C. Kober1, B.-I. Berg2,3, C. Kunz2,3, E.W. Radü4, K. Scheffler5, H.-F. Zeilhofer2,3, C. Buitrago-Téllez6 and A. Palmowski-Wolfe7

1 Faculty of Life Sciences, Hamburg University of Applied Sciences, Hamburg, Germany
2 Hightech Research Centre of Cranio-Maxillofacial Surgery, University Hospital Basel, Basel, Switzerland
3 Department of Cranio-Maxillofacial Surgery, University Hospital Basel, Basel, Switzerland
4 Department of Neuroradiology, University Hospital Basel, Basel, Switzerland
5 MR Physics, University Hospital Basel, Basel, Switzerland
6 Department of Radiology, Spital Zofingen, Zofingen, Switzerland
7 Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
Abstract — This project is dedicated to computer-aided support of functional diagnostics of the entire human orbit, inter alia with focus on the extra-ocular muscles and the optic nerve. For this purpose, radiological imaging by MRI is required. Three approaches were followed: firstly, highly resolved 3D-reconstruction of the ocular anatomy; secondly, so-called oculo-dynamic MRI with radiological acquisition of only one MRI slice but including near real-time eye movements; thirdly, sequential MRI with full 4D-visualization. The oculo-dynamic MRI has already been integrated in the clinical setting as part of routine MRI examination. The full 4D-approach, which was rated as very promising by clinical experts, has been successfully applied to a control subject.
Keywords — 4D-visualization, oculo-dynamic MRI, extra-ocular muscles, optic nerve, eye movements.
I. INTRODUCTION

As the ability to see is highly decisive for quality of life, the goal of this project is to provide a bundle of efficient tools to aid clinical analysis of the organ of vision, including adjacent hard and soft tissue structures. As the physiological functionality of human eye movement is based on the whole orbit, the project refers to the entire orbital cavity including the optic nerve, the extra-ocular muscles, and the orbital connective tissue (Fig. 1). Besides ophthalmology, clinical input from cranio-maxillofacial surgery and neurosurgery is needed. As standard ophthalmologic imaging methods are not sufficient for the entire orbit, radiological imaging is required. All this has motivated the installation of a highly multidisciplinary team including radiology and neuroradiology as further medical disciplines, as well as medical physics and computer science. Thereby, it is possible to cover the full chain ranging from image acquisition through image processing and medical visualization to clinical validation, as well as detailed research concerning diagnostic significance, clinical manageability, and acceptance.
Fig. 1 Anatomy of the human eye: (a) extra-ocular muscles, medial view, (b) sagittal cross section of the orbital cavity [1]
3D-reconstruction based on computed tomography (CT) or magnetic resonance imaging (MRI) has already been widely accepted as giving valuable insight into the individual – static – patient anatomy. Dynamic radiological monitoring of functional disorders, however, is still in its beginnings. As regards the dynamics of bony organs such as the human temporomandibular joint, a certain standard has already been reached, see inter alia [2]. First activities concerning the dynamics of the
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 669–672, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Fig. 2 Images from an axial od-MRI movie of a control, right gaze: (a) original MRI slice, (b) visualization

Fig. 3 Images from an axial od-MRI movie of a control, left gaze: (a) original MRI slice, (b) visualization
human eye go back to the early nineties [3]. Up to now, capturing the ocular dynamics has remained a subject of intensive research, see inter alia [4]. Therefore, our project is hierarchically structured, which means that three different visualization strategies are followed: firstly, high-resolution 3D-reconstruction of the orbital structures; further, near real-time oculo-dynamic MRI (od-MRI), standing for "2D over time"; finally, full 4D-visualization, namely "time dependent + 3D". In contrast to the od-MRI technique, the 4D-approach has not yet been realized in real time. The expected benefit of the project in clinical diagnostics is very high. Actual and future applications are post-traumatic adhesions and eye movement disorders, for example: ocular paralysis, myopia, entrapment of ocular muscles, tumors, periorbital fat oedema, and structural abnormality. Mechanical eye movement disorders can be differentiated from paralytic ones. Pathological movements and diplopia can be clearly classified. Finally, significant input to basic research with regard to extra-ocular muscle mechanics is expected.
II. MATERIALS AND METHODS

A. Radiology

Radiological imaging is required for imaging the entire orbit including the optic nerve and extra-ocular muscles. For radiological imaging of the orbital cavity with special focus on the ocular soft tissue, MRI has proved to be the appropriate means [3–6]. As mentioned before, within this project, three different visualization strategies are followed, namely high-resolution 3D-reconstruction of the orbital structures, further od-MRI standing for "2D over time", finally full 4D-visualization where time-dependent 3D-animations are shown. For highly resolved 3D-reconstruction, MRI data with various imaging protocols were referred to. Thanks to the precise radiological documentation of craniofacial cases practiced by the involved medical institutions, a detailed database with and without ophthalmologic pathologies is available. For better representation of the skeletal tissue, in many cases, CT data were superposed on the MRI data by rigid registration of the skull base. Though this kind of data gives excellent detail reconstruction of the patients' anatomy, with regard to soft tissue organs like the extra-ocular muscles, it only provides a snapshot of a probably never recurring situation. Therefore, inter alia by the od-MRI approach, research concerning dynamic radiological imaging of the human organ of vision was initiated. For this approach, which can be looked at as preparatory for full 4D, the radiological acquisition is focused on only one slice (with very high slice thickness) but includes near real-time movement, so it is "2D + time" (Fig. 2a and Fig. 3a). Horizontal eye movements were assessed using a central axial or a coronal slice,
Fig. 4 Original MRI-slices from an axial 4D-series referring to left and right gaze
Fig. 5 Frontal visualization of an axial 4D-series referring to left and right gaze

Fig. 6 Frontal visualization of an axial 4D-series with superimposed partial surface reconstruction of the skull referring to left and right gaze
up and down eye movements were assessed using a central sagittal slice. The applied slice thickness was 5 mm. A TrueFISP sequence with 180 ms/image was used. The image resolution was 1.3 mm × 1.3 mm. Though usually more than 10 cycles were acquired, the additional expenditure of time per sequence is below 30 s. The od-MRI has already been applied in the clinical setting, whereby more than 20 data sets are available for further analysis.
Full real-time 4D-radiology is still beyond the technical possibilities. Therefore, as regards the actual state of the project, so-called "quasi-continuous" MRI data acquisition was applied. In this context, a healthy volunteer (female, 28 years) continuously moved the focus of her eyes following a straight row of labeled points (distance 1 cm), axially from left to right, as well as, in another series, sagittally up and down. The acquisition time for each 3D-MRI data set was 1 minute. Each 4D-series is composed of 12–20 3D-MRI data sets comprising about 50 axial slices with an isotropic voxel size of 1 mm in x-, y-, and z-direction (Fig. 4). An SE imaging sequence was applied using a 1.5 T Siemens MRI machine. For the sake of validation of reproducibility, the axial 4D-series was taken twice with a delay of several weeks.

B. Visualization

Generally, for all levels within the project, after refined image processing, anatomical structures were visualized by direct volume rendering with especially designed transfer
functions (extra-ocular muscles, optic nerve) combined with shaded surface reconstruction (bulb, skeletal tissue). As regards the reconstruction of the skeletal tissue, the low contrast of bone tissue in MRI aggravates the analysis. For the 3D-approach, this was overcome by rigid registration of the CT data (if available) onto the MRI data. All image processing and visualization steps were performed using the visualization toolbox Amira 4.1 [7, 8]. Concerning the time-dependent approaches, the od-MRI and the full 4D, the first step was dedicated to compensation of head movements during data acquisition. The single MRI data sets, either 2D or 3D, were mutually registered with respect to the facial skull. The algorithm for affine registration available within Amira, which is based on an iterative optimization algorithm, delivered satisfactory results [7, 8]. As similarity measure, the Euclidean distance was chosen. For the od-MRI, improved visibility of the extra-ocular muscles, the optic nerve, and the sclera was achieved by a special image processing procedure, inter alia based on direct volume rendering (Fig. 2b and Fig. 3b). For the 4D-approach, after refined image processing, the anatomical structures were also visualized by a combination of direct volume rendering for the bulb and ocular soft tissue, superimposed with shaded surface reconstruction for the skeletal tissue (Fig. 5 and Fig. 6). Again, for both time-dependent approaches, continuous update for every time step provided dynamical sequences. In addition to conventional (mono) visualization, stereoscopic rendering using red-cyan anaglyphs was applied, aimed at facilitating the analysis of the visualization results [6].
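Red-cyan anaglyph rendering of the kind used for the stereoscopic display assigns one eye's view to the red channel and the other's to green and blue. A minimal sketch with two grayscale renderings follows; the exact compositing performed by the Amira pipeline is not described in the text, so this is only the standard channel assignment.

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine two 8-bit grayscale views (left/right eye) into a red-cyan
    anaglyph: red carries the left view, green and blue carry the right."""
    out = np.zeros(left.shape + (3,), dtype=np.uint8)
    out[..., 0] = left   # red   <- left eye
    out[..., 1] = right  # green <- right eye
    out[..., 2] = right  # blue  <- right eye
    return out
```

Viewed through red-cyan glasses, each eye then sees only its own rendering, producing the depth impression.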
IV. DISCUSSION AND OUTLOOK

This article refers to a multidisciplinary project dedicated to the evaluation of the functionality of the human organ of vision with focus on the extra-ocular muscles and the optic nerve. The project is hierarchically structured, where the od-MRI has already provided valuable diagnosis support in the clinical setting. All approaches stimulated new ophthalmologic and cranio-maxillofacial research. The full 4D-concept has been successfully realized for a control, so the next step will be application to clinical cases. Furthermore, reduction of the MRI acquisition time for the 4D-approach and automation of the data processing are subjects of intensive current research.
REFERENCES
III. RESULTS

The highly refined 3D-reconstruction of the orbital structures frequently provided valuable diagnosis support in clinical cases. For both time-dependent approaches, the improved visualization techniques developed within the project pushed the borders of tissue contrast and thus facilitated identification of sclera, periost, and muscle structures. The od-MRI has already been applied in the clinical setting (high myopia, strabismus, palsy and paralysis, tumors, periorbital fat oedema). As part of routine MRI examinations, it offers a fast (30 s) means to visualize eye movements in living patients. Actually, a database of more than 20 clinical cases is available. The 4D-concept has been successfully tested for a control. Dynamical sequences of moving extra-ocular muscles
and optic nerve were provided [5, 6]. One of the advantages of the 4D-visualization compared with od-MRI is the possibility of frontal visualization of the eye (Fig. 5 and Fig. 6). Further, the stereoscopic visualization was regarded as a valuable extension of the concept [6].
1. Gray H, Anatomy of the Human Body, http://www.bartleby.com/107/
2. Krebs M, Gallo LM, Airoldi RL et al. (1995) A new method for three-dimensional reconstruction and animation of the temporomandibular joint. Ann Acad Med Singapore 24(1):11–16
3. Cabanis EA, Iba-Zizen MT, Delmas V et al. (1990) The dynamic study of the human body using MRI. Bull Acad Natl Med 174(9):1289–1296
4. Piccirelli M, Luechinger R, Rutz AK et al. (2007) Extraocular muscle deformation assessed by motion-encoded MRI during eye movement in healthy subjects. J Vision 14/7:1–10
5. Kober C, Boerner BI, Buitrago Tellez C et al. (2007) 4D-visualization of the orbit based on dynamic MRI with special focus on the extra-ocular muscles and the optic nerves. Int J CARS 2(Suppl 1):26–28
6. Kober C, Boerner BI, Mori S et al. (2007) Stereoscopic 4D-visualization of craniofacial soft tissue based on dynamic MRI and 256 row 4D-CT. In: Buzug TM, Holz D, Weber S et al., eds. Advances in Medical Engineering, Springer Proc Physics 114, Berlin Heidelberg
7. Amira™ – Advanced 3D Visualization and Volume Modeling, www.amiravis.com
8. Stalling D, Westerhoff M, Hege HC (2005) Amira: A highly interactive system for visual data analysis. In: Hansen CD, Johnson CR, eds. The Visualization Handbook 38:749–67, Elsevier, Amsterdam

Author: C. Kober
Institute: Hamburg University of Applied Sciences
Street: Lohbruegger Kirchstr. 65
City: Hamburg
Country: Germany
Email: [email protected]
A Communication Term for the Combined Registration and Segmentation

Konstantin Ens1,2, Jens von Berg2 and Bernd Fischer1

1 Institute of Mathematics, University of Luebeck, Luebeck, Germany
2 Philips Research Europe, Hamburg, Germany
Abstract — Accurate image registration is a necessary prerequisite for many diagnostic and therapy planning procedures where complementary information from different images has to be combined. The design of robust and reliable non-parametric registration schemes is currently a very active research area. Modern approaches combine the pure registration scheme with other image processing routines such that both ingredients may benefit from each other. One of the new approaches is the combination of segmentation and registration ("segistration"). Here, the segmentation part guides the registration to its desired configuration, whereas on the other hand the registration leads to an automatic segmentation. By joining these image processing methods it is possible to overcome some of the pitfalls of the individual methods. Here, we focus on the benefits for the registration task. To combine segmentation and registration, a special communication or coupling term is needed. In this note we present a novel coupling term which overcomes the pitfalls of conventional ones. It turned out that not only were the achieved results better, but the overall scheme also converges much faster, resulting in a favorable computation time. The performance tests were carried out on magnetic resonance (MR) images of the brain, demonstrating the striking potential of the proposed method for real-life examples.

Keywords — segistration, medical image registration, segmentation, mathematical modeling, magnetic resonance imaging, neuro-imaging.
I. INTRODUCTION

Medical image registration and segmentation are two of the most challenging imaging problems. We start by defining the non-parametric image registration problem [1] in a variational setting. Given a reference image R and a template image T to be transformed, we wish to find a displacement field Y that minimizes the following functional

JREG(Y; R, T) = D(Y; R, T) + α1 S(Y).
(1)
Here, S denotes a regularizer for the displacement field Y. We have used the so-called elastic potential [2]. D is a
distance measure quantifying the similarity between the reference image and the deformed template image. We have chosen the cross-correlation (CC) [1] for our experiments. The parameter α1 may be used to emphasize the distance or the smoothness term. The role of the upcoming parameters is along the same line. Next, we would like to incorporate a segmentation part into the registration functional (1). In the light of this variational formulation, it appears very natural to use an energy functional for the formulation of the segmentation problem as well. To this end, we consider the functional

JSEG(C; I) = EINT(C) + α2 EEXT(C; I)
(2)
for the image I. Here, C [3] denotes a finite set of smooth, closed curves, which eventually define the segmentation. The particular representation of the curves is not in the focus of this work. For the examples to be presented, we have employed the widely used implicit representation via level sets. EINT acts solely on the curve and is therefore frequently called internal energy. It controls the smoothness of the curve. Typical smoothers are based on the length of or on the area enclosed by the curve. Here, we use the length of the curve. EEXT should drive the curve to its destination and is known as external energy. For our experiments we use a variation of the Mumford-Shah functional [5] introduced by Chan and Vese [4]. Their external energy reads as follows:

EEXT(C; I) = ∫in(C) |I − c1|² dx + ∫Ω\in(C) |I − c2|² dx,  (3)

where in(C) denotes the region enclosed by C and Ω the image domain. The constants c1 and c2 are the average gray values of the respective integration areas.

II. METHODS

The coupling of the registration and segmentation is given by the following functional

J(Y, CR; R, T) = JREG(Y; R, T) + β1 JSEG(CR; R)
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 673–675, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
+ β2 DC(Y, CR; R, T)
(4)
The aim of this functional is to find a displacement field Y for the registration and a segmentation CR of the objects in the reference image. The segmentation CT of the template image is assumed to be given. Actually, a segmentation of the reference is computed and then compared, via the computed displacement field, to the segmentation of the template. The term DC is responsible for the coupling between the registration and segmentation parts. In the literature, only one possibility for this measure [8] is reported:

DC(Y, CR; CT) = ∫in(CR) φT(Y(x)) dx
(5)
see [1]), β1 = 0.001 and β2 = 0.001. To obtain a benchmark registration, we next registered the two images by a plain elastic registration scheme based on (1). It is apparent from Figure 2(b) that after this registration the alignment is improved, but the ventricles are still not optimally registered. Next, we applied the segistration scheme based on (4) for both coupling terms (5) and (6). Both attempts produce superior results as compared to the plain registration. In particular, for the medically relevant area of the ventricle system, the obtained distance is very small. The corresponding segmentation results for the reference are shown in Figure 3. All three segmentations appear visually quite satisfying. However, it should be noted that the new scheme based on (6) requires only about 2/3 of the iteration steps as compared to the two other approaches.
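On a pixel grid, the proposed coupling term reduces to a sum of squared differences of region indicator functions, which explains its O(n) cost. The sketch below follows our indicator-function reading of (6); the warped template region is assumed to be given already as a boolean mask (the warping itself, e.g. with scipy.ndimage.map_coordinates, is omitted).

```python
import numpy as np

def coupling_term(inside_T_warped, inside_R):
    """Discrete coupling term: squared difference of the two region
    indicator functions, summed over all pixels -- O(n) in the pixel count.
    It simply counts the pixels in the symmetric difference of the regions."""
    chi_T = inside_T_warped.astype(float)  # 1 inside C_T(Y(x)), else 0
    chi_R = inside_R.astype(float)         # 1 inside C_R(x), else 0
    return np.sum((chi_T - chi_R) ** 2)
```

No signed distance function needs to be evaluated, in contrast to the conventional coupling term (5).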
Here φT(Y) is the implicit signed distance function of the contour CT, evaluated at the deformed positions Y(x). In our experience, the overall performance of this approach is not very impressive, as indicated by the presented example. On top of that, one has to compute the function φT(Y), which is of complexity O(n log(n)) [9], where n denotes the number of grid points. Here we propose a new distance measure for the coupling of the registration and segmentation:
Fig. 1: (a) reference image, (b) template image
DC(Y, CR; CT) = ∫Ω (χin(CT)(Y(x)) − χin(CR)(x))² dx,  (6)

where χin(C) denotes the indicator function of the region enclosed by C. Here the computation time is only O(n). Moreover, the convergence behavior is much better, as will be shown in the results section.

III. RESULTS

To demonstrate the performance of the described distance measures for real clinical problems, we applied them to the segmentation and registration of two magnetic resonance images of the brain used for diagnosis and therapy of Alzheimer's disease. The data used in this article was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (www.loni.ucla.edu/ADNI). For demonstration purposes, we comment on two-dimensional slices of the three-dimensional brain images. Figure 1(a) illustrates the reference and 1(b) the template image, respectively. The difference between the reference and the template image is displayed in Figure 2(a). It is apparent that the clinically highly relevant ventricles, as well as the skull, have quite different positions in the images. Next, we turn our attention to the coupling of registration and segmentation. In all experiments we selected the parameters α1 = 1, α2 = 1, λ = 0, μ = 1 (elastic potential, for details
Fig. 2: difference between the reference and (a) the template image, (b) the result of the plain elastic registration, (c) the result of the segistration with coupling term (5), (d) the result of the segistration with coupling term (6)
ACKNOWLEDGMENT
Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative. The authors are thankful to Stewart Young and Fabian Wenzel of Philips Research Europe - Hamburg for discussions with regard to magnetic resonance images of the brain. We would like to thank Jan Modersitzki of the McMaster University in Hamilton, Canada, Steffen Renisch and Ingwer Carlsen of Philips Research Europe - Hamburg for discussions with regard to the image registration problem.
REFERENCES
Fig. 3: segmentation of the reference image (a) without coupling, (b) with coupling and distance measure (5), (c) with coupling and distance measure (6)
IV. CONCLUSIONS 5.
We introduced a novel communication or coupling term for a joint registration and segmentation scheme. It has been shown that this term is not only computationally very attractive but also produces very favorable results. Currently, we are implementing this promising approach in conjunction with the early diagnosis of Alzheimers disease and thereby testing its robustness. First results are very encouraging.
_______________________________________________________________
1. Modersitzki J (2003) Numerical Methods for Image Registration. Oxford University Press
2. Broit C (1981) Optimal Registration of Deformed Images. PhD thesis, Computer and Information Science, University of Pennsylvania
3. Kass M, Witkin A, Terzopoulos D (1987) Snakes: Active contour models. International Journal of Computer Vision 1, pp 321–331
4. Mumford D, Shah J (1989) Optimal approximation by piecewise smooth functions and associated variational problems. Comm Pure Appl Math 42
5. Chan T, Vese L (1999) An active contour model without edges. Scale-Space LNCS
6. Liu J, Wang Y, Liu J (2006) A unified framework for segmentation-assisted image registration. ACCV, 2, pp 405–414
7. Unal C, Slabaugh G (2005) Coupled PDEs for non-rigid registration and segmentation. CVPR
8. Wang F, Vemuri B C (2005) Simultaneous registration and segmentation of anatomical structures from brain MRI. MICCAI, pp 17–25
9. Malladi R, Sethian J A (2002) A general framework for low level vision: fast methods for shape extraction in medical and biomedical imaging. Springer-Verlag 1–13
Elastic Registration of Optical Images showing Heart Muscle Contraction

M. Janich1, G. Seemann1, J. Thiele1 and O. Dössel1

1 Institute of Biomedical Engineering, Universität Karlsruhe (TH), Karlsruhe, Germany
Abstract — Image registration is used to reduce movement artifacts caused by the contracting heart muscle in transmembrane voltage measurements using fluorescence microscopy. The applied registration methods include Thin-Plate Splines (TPS) and Gaussian Elastic Body Splines (GEBS). Landmarks are established automatically using regional cross-correlation. These landmarks are then filtered for meaningful correspondences by requiring a minimum correlation coefficient and by clustering adjacent and identical displacements. Registration of an image sequence showing a contracting muscle is realized by spatially aligning the images at maximum contraction and at rest. For the other images, the movement of the muscle is interpolated using an analytical description of the contraction of heart muscle. TPS cause amplification of displacements at the image border, while GEBS restrict a landmark's influence to a local region. Over a set of 81 images, GEBS are shown to register images better and more robustly than TPS, which in some cases cannot reduce movements. Validation through visualization of transmembrane voltages on contracting muscle reveals that GEBS registration removes movement artifacts better than TPS. Image regions with prominent structures are successfully tackled by GEBS registration.

Keywords — elastic image registration, transmembrane voltage measurement, muscle contraction, Thin-Plate Splines, Gaussian Elastic Body Splines.
I. INTRODUCTION

Optical image acquisition using fluorescence microscopy poses a flexible approach to measuring transmembrane voltages with high temporal as well as spatial resolution [1]. It is based on a fluorescent marker which changes its emitted light spectrum proportionally to the cell's transmembrane voltage. After injection of an electrical current, the depolarization of some cells spreads over the whole tissue (the so-called action potential). Electro-mechanical coupling causes muscle movement, and consequently different tissue patches appear in front of one pixel in the camera. If tissue of different light intensity moves in front of the pixel, the signal is carried away. The tissue's light intensity dominates over the signal caused by fluorescence, resulting in a movement artifact. Using image post-processing, the movement can be reduced by bringing images from an image sequence over time into spatial correspondence, i.e. registering them.

Fig. 1: Detail from a set of landmarks established with regional cross-correlation on a regular grid (a). Landmarks are reduced to meaningful displacements by requiring a minimum cross-correlation coefficient (b), consistent directions (c), and clustering identical landmarks (d).

The contracting muscle has local displacements, requiring a model which can handle local distortions. Elastic image registration gives a local and smooth transformation. Landmark-based registration allows transformation and interpolation in one step, using corresponding points in the images to be registered. Registration is composed of finding landmarks, estimating the transformation model, and transformation including interpolation.
II. LANDMARKS

A measure of similarity between images is given by cross-correlation. When applied in small template regions, it can be used to establish point correspondences. Using regional cross-correlation on a regularly spaced grid of template kernels gives the movements at the grid points. The large amount of tracked image regions needs to be thinned out to meaningful landmarks. Filtering is done in three subsequent steps based on correlation coefficients, consistent directions, and clustering. These steps are visualized in fig. 1. The similarity between two signals is significant if its normalized cross-correlation coefficient is close to one. The first
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 676–679, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Elastic Registration of Optical Images showing Heart Muscle Contraction
filtering step discards all landmarks having a correlation coefficient below a threshold t. In order to track both small and large structures, regional cross-correlation is calculated with two different template sizes for each grid point. This assigns to each grid point two corresponding points in the other image. If these points are not identical, tracking in one or both cases was unsuccessful. In this filtering step, all landmarks at the same origin having different directions or norms (lengths of displacement) are removed. Adjacent landmarks with identical properties (direction and norm) are grouped using hierarchical clustering. First the landmarks are grouped by identical properties. Then each group is clustered, and clusters smaller than a threshold c are discarded. Each cluster is reduced to one landmark at its centroid location. Reliable landmarks originate from cluster sizes greater than two.
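The three filtering steps can be sketched as follows (a minimal Python sketch; the function names, the data layout, the default values for t and c, and the greedy single-linkage clustering used here in place of full hierarchical clustering are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def filter_landmarks(points, disp_small, disp_large, ncc, t=0.8, c=3):
    """Three-step landmark filtering (sketch).
    points     -- (N, 2) grid origins
    disp_small -- (N, 2) displacements tracked with the small template
    disp_large -- (N, 2) displacements tracked with the large template
    ncc        -- (N,)   normalized cross-correlation coefficients
    Returns a list of (centroid, displacement) landmarks."""
    # Step 1: keep landmarks whose correlation coefficient reaches t.
    keep = ncc >= t
    # Step 2: both template sizes must agree on direction and norm.
    keep &= np.all(np.isclose(disp_small, disp_large), axis=1)
    pts, disp = points[keep], disp_small[keep]

    # Step 3: group landmarks with identical displacement, cluster each
    # group by spatial adjacency, keep clusters of size >= c, and reduce
    # each cluster to one landmark at its centroid.
    out = []
    for d in np.unique(disp, axis=0):
        group = pts[np.all(disp == d, axis=1)]
        for cl in _adjacency_clusters(group, max_dist=1.5):  # grid units
            if len(cl) >= c:
                out.append((cl.mean(axis=0), d))
    return out

def _adjacency_clusters(pts, max_dist):
    """Greedy single-linkage clustering (stand-in for hierarchical
    clustering)."""
    remaining = list(range(len(pts)))
    clusters = []
    while remaining:
        stack, members = [remaining.pop()], []
        while stack:
            i = stack.pop()
            members.append(i)
            near = [j for j in remaining
                    if np.linalg.norm(pts[i] - pts[j]) <= max_dist]
            for j in near:
                remaining.remove(j)
            stack.extend(near)
        clusters.append(pts[members])
    return clusters
```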
III. TRANSFORMATION MODELS

The most often used elastic transformation functions in image processing are Thin-Plate Splines (TPS). They perform a global mapping but can handle locally varying geometric distortions. Basis functions with properties similar to body tissue are introduced with the Gaussian Elastic Body Splines (GEBS). The influence of each landmark decreases away from its origin. GEBS promise a better solution because they are derived from the displacements of elastic material under the influence of forces, which corresponds to the properties of body tissue. When using GEBS, a deformation in one direction affects the other directions as well; for example, a muscle that shortens in one direction thickens in the others. Both TPS and GEBS interpolate the displacements given by a set of landmarks, constrained to the physical model each is based on. The displacements at the locations of landmarks are identical with the displacements given by the landmarks, i.e. landmarks are mapped onto each other.
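For the 2-D case, landmark interpolation with TPS can be sketched as follows (an illustrative implementation using the standard kernel U(r) = r² log r; the function names are ours, and no bending-energy regularization term is included):

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline mapping src landmarks onto dst.
    Solves the standard TPS linear system [[K P],[P^T 0]] for the
    kernel weights and the affine part."""
    n = len(src)
    d2 = np.sum((src[:, None] - src[None, :]) ** 2, axis=2)
    # U(r) = r^2 log r = 0.5 * r^2 * log(r^2); zero on the diagonal.
    K = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-20), 0.0)
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    return src, np.linalg.solve(A, b)              # (centers, parameters)

def tps_apply(model, pts):
    """Evaluate the fitted spline at arbitrary points (landmarks are
    interpolated exactly, i.e. mapped onto each other)."""
    src, params = model
    d2 = np.sum((pts[:, None] - src[None, :]) ** 2, axis=2)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-20), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:-3] + P @ params[-3:]
```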
Fig. 2: Analytical description of muscle contraction (red, normalized), fitted to the SSD of each image with the image at rest (blue, normalized).
A. Thin-Plate Splines

The TPS model incorporates an infinitely thin metal plate fixed at the landmark points and constrained to lie above or below the ground depending on the movement of the landmark [2, 3]. A physical steel plate takes this form as long as the displacements are small. The mapping minimizes the "bending energy" which would have been required if the landmark displacements in question were normal to the plane of the image rather than within that plane.

B. Gaussian Elastic Body Splines

GEBS are based on the partial differential equations of Navier that describe equilibrium displacements of homogeneous, isotropic elastic material subjected to forces [4, 5]. They incorporate the elasticity parameter ν (Poisson ratio), which describes the extent to which the tissue is compressible. The development of the GEBS uses spatially limited Gaussian forces

f(x) = c_i · 1/(√(2π) σ)³ · e^(−r²/(2σ²))    (1)

where the distance from the origin is r = |x| = √(x² + y² + z²), to derive an analytic solution of the Navier equation. A small value of the standard deviation σ results in spatially limited forces and gives a transformation which is able to cope with local deformations. A larger value of σ makes the transformation more global.

IV. AUTOMATIC REGISTRATION OF SEQUENCE

After reading the whole image sequence of a contracting muscle, the movement of the muscle is estimated based on the change of the similarity measure sum of squared intensity differences (SSD). The SSD for two discrete images f(x, y) and g(x, y),

SSD(f, g) = (1/N) · Σ_{x,y} |f(x, y) − g(x, y)|²    (2)
is defined for all pixels x, y in the images’ overlap domain, where N is the number of pixels. The SSD of each image compared with an image at rest is assumed to be proportional to muscle movement. An analytical description of the movement of heart muscle [6] is fitted to these SSD values (see fig. 2). The function developed by Palladino et al. describes
IFMBE Proceedings Vol. 22
M. Janich, G. Seemann, J. Thiele and O. Dössel
Fig. 3: Optical image of the trabecula of a rat's heart at rest.

the time course of active force generation and is proportional to the movement of the contracting muscle. From the model fit we can derive the image pair with the greatest movement: the images at rest and at maximum contraction (here t = 0 and t = 106 ms, respectively). The image at maximum contraction is used as the source image to be registered with the image at rest, the reference image. The result of this registration gives a displacement vector for each pixel. For each image in the sequence, these vectors are scaled in length according to the normalized analytical description of the contraction and are then used to transform each image in the sequence. This interpolates the movement for the images between the registered image pair. Regional cross-correlation and filtering establishes a set of landmarks, which varies slightly for different parameters t and c. A reliable set of landmarks, comparable to manual selection by an expert user, is extracted through minimization of the SSD by testing 15 parameter variations. The GEBS parameters (σ and ν) are optimized sequentially using a golden-section search algorithm.
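The sequential 1-D optimization of σ and ν can be carried out with a textbook golden-section search; a minimal sketch (the interval bounds and tolerance in the example are illustrative, not the paper's):

```python
import math

def golden_section_min(f, a, b, tol=1e-4):
    """Minimize a unimodal function f on [a, b] by golden-section
    search (sketch of the 1-D optimizer applied to each GEBS
    parameter in turn)."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

Each call shrinks the bracketing interval by the constant factor 1/φ, so the number of SSD evaluations grows only logarithmically with the required precision.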
Fig. 4: Quantitative comparison of TPS (blue) and GEBS (red) registration applied to a sequence of images showing a contracting trabecula from a rat heart. GEBS reaches on average a normalized SSD 25.51% below TPS. Results before optimization of the GEBS Poisson ratio ν are plotted in green; its optimization reduces the SSD by 1.11%. On average, GEBS registration improves the SSD to 44.13%.

V. RESULTS AND DISCUSSION

Both elastic registration algorithms are tested on optical image data from the trabecula of a rat's heart. The data set consists of 152 images acquired 1 ms apart and showing one contraction. The images have a resolution of 32 × 172 pixels and a depth of 12 bit (see fig. 3). The calculation time for registration of an image pair, implemented in MATLAB (The MathWorks, Natick, MA, USA) on an Apple Macintosh G5 with 1 GB RAM, is less than 10 s for TPS and 600 s for GEBS.

A. Quantitative Comparison

A sequence of a contracting muscle under the microscope acquired with normal light is registered using TPS and GEBS (fig. 4). Each image in the sequence is registered to the same reference image at rest. The SSD serves as the measure for comparison. It is normalized so that unity corresponds to the image pair without registration. GEBS give a better result in every case. On average, GEBS reach a normalized SSD 26% below TPS. Registration with TPS adds extra movement in images 72-76, where SSD > 1. Even after optimizing the landmark finding parameters, TPS does not improve the SSD for these images. This is a major drawback of TPS compared to GEBS, because it shows that TPS is not robust. The GEBS transformation is motivated by its derivation from the physical model of an elastic material under forces, which is similar to body tissue. The Poisson ratio ν determines the incompressibility of the material. Its effect is plotted in fig. 4. On average, optimization of the elasticity parameter improves registration by 1%. Even though the impact of the Poisson ratio is limited, the elasticity properties are still advantageous.
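The normalized SSD used for this comparison can be written directly (a sketch; per frame, with the same unregistered frame providing the normalization so that a value of 1.0 means "no improvement over no registration"):

```python
import numpy as np

def normalized_ssd(registered, reference, unregistered):
    """SSD of a registered frame against the reference, normalized so
    that 1.0 corresponds to the same frame without registration."""
    ssd = lambda f, g: np.mean((f.astype(float) - g.astype(float)) ** 2)
    return ssd(registered, reference) / ssd(unregistered, reference)
```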
B. Action Potential

In order to compare how well the registration methods reduce movement artifacts in action potential measurements, three measurements under different conditions are made: A. under fluorescent light, showing the action potential on contracting muscle; B. under normal light, showing only the contraction; and C. under fluorescent light and under the influence of electro-mechanical decoupling, showing the action potential on muscle at rest. These image sequences are acquired one after the other while the object remains in the same position and is well nourished. Fig. 5 describes how the data sets A, B, and C are processed. B, not being superimposed by an action potential, is used to calculate the transformation function T of the image registration. A is then transformed according to T. The last data set, C, serves as reference. Attention must be paid to the effects on the action potential caused by the substance which decouples
Fig. 5: Comparison of normalized and filtered action potentials on the moving muscle Ā, the registered moving muscle Ā_T, and the muscle without movement C̄. Calculation of the transformation function T is based on an image sequence under normal light, B. Without excitation light, image B shows only muscle movement without an action potential.
Fig. 7: Plot of the action potential in a single point. C̄ (blue): reference signal without movement. Ā (green): darker tissue comes into the field of view, resulting in a signal drop. Ā_T,TPS (red): moving muscle registered with TPS. Ā_T,GEBS (turquoise): GEBS registration.
(a) without image registration: Ā
(b) TPS registration: Ā_T,TPS
(c) GEBS registration: Ā_T,GEBS

Fig. 6: Action potential on the trabecula from a rat's heart at the time of maximum contraction. The fluorescent signal is normalized to the resting membrane voltage. Values greater than unity are not physiological.

contraction from the action potential (i.e. C may not be the ideal case). Finally, all images are normalized and filtered [1]. Images of the action potential at maximum contraction are displayed in fig. 6. Compared to GEBS, TPS leave more areas of movement and even add movement. TPS add greater movement in the bottom left corner because there is no landmark outside the image border to reduce the movement intensity. The local influence of landmarks in GEBS registration shows its advantage here. GEBS registration better reduces movement artifacts. Movement remains in regions with low image contrast; image post-processing cannot correctly align images in low-contrast regions because there is too little information. A plot of an action potential in one pixel is displayed in fig. 7. Both registration methods bring the signal Ā closer to the reference C̄.

VI. CONCLUSION
For the registration of contracting muscles, Gaussian Elastic Body Splines have been shown to be superior to Thin-Plate Splines. The advantages of GEBS are: displacements in one direction affect the other directions, landmarks have locally restricted influence, and the elasticity derived from a physical model corresponds to tissue properties. TPS cause amplification of landmark displacements at the image border, while GEBS restrict landmark influence to a local region. Over a sequence of 81 images, GEBS are shown to register images more robustly than TPS, which in some cases cannot reduce movements. In all cases GEBS perform better than TPS. Visualization of action potentials on contracting muscle reveals that GEBS registration best matches the reference image without movement.
REFERENCES

1. Thiele J. (2008) Optische und mechanische Messung von elektrophysiologischen Vorgängen im Myokardgewebe. PhD thesis, Institute of Biomedical Engineering, Universität Karlsruhe (TH).
2. Grimson WEL. (1982) A Computational Theory of Visual Surface Interpolation. Philosophical Transactions of the Royal Society of London, Series B.
3. Bookstein FL. (1989) Principal Warps: Thin-Plate Splines and the Decomposition of Deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence 11:567–585.
4. Davis MH, Khotanzad A, Flamig DP, Harms SE. (1997) A physics-based coordinate transformation for 3-D image matching. IEEE Transactions on Medical Imaging 16:317–328.
5. Kohlrausch J, Rohr K, Stiehl HS. (2005) A New Class of Elastic Body Splines for Nonrigid Registration of Medical Images. Journal of Mathematical Imaging and Vision 23.
6. Palladino JL, Danielsen M, Noordergraaf A. (2000) An analytical description of heart muscle. Proceedings of the IEEE 26th Annual Northeast Bioengineering Conference, 45–46.
Automation of the preoperative image processing steps for ultrasound based navigation

C. Dekomien1, S. Winter1

1 Institut für Neuroinformatik, Ruhr University Bochum, Bochum, Germany
Abstract — Ultrasound based navigation is a flexible, fast, and robust method for intraoperative navigation, in which intraoperative 3D ultrasound is used for the registration procedure. To establish ultrasound based navigation in the clinical routine, it is necessary to automate the preoperative image processing steps. The task of this process is the extraction of the bone surface from the preoperative CT or MRI data. For the automation we developed an image processing pipeline. We designed a model of an ultrasound scan, which consists of a scan path and scan properties such as transducer shape and width. The scan path was attached to anatomical landmarks. Based on these landmarks, the model was registered within the preoperative image data and an ultrasound scan was simulated in the data to extract the bone surface visualised in ultrasound images. Additionally, for complex structures such as the lumbar spine it is necessary to separate single vertebrae. This segmentation was done by a shape-based level set method. The segmentation result was combined with the extracted bone surface to assign the correct surface points to each vertebra. The ultrasound registration with the described surface extraction method was evaluated by applying the proposed procedure to phantom and patient data. To estimate the overall accuracy, phantoms of the lumbar spine and the femur were used to compare the ultrasound registration with an accurate point-based registration. For this purpose, 100 ultrasound registrations were compared with the reference registration, and target registration errors were calculated for different anatomical regions. For instance, for the phantom of the femur the mean RMS error over all targets was 0.74 mm, of which 0.64 mm was the systematic and 0.36 mm the statistical error. The results lie within an admissible range for intraoperative navigation.

Keywords — Ultrasound Registration, Segmentation, Navigation, CT-data
I. MOTIVATION

The trend toward minimally invasive surgery leads to an increasing demand for surgical navigation systems. Requirements for navigation systems are low cost, flexibility, and easy handling [1, 2]. To become established, a navigation system needs to simplify the surgical workflow. Hence, a high degree of automation of the preoperative data processing for the navigation process is essential.
For image based navigated surgery, the coordinate system of the patient has to be registered with the coordinate system of the preoperative data. To overcome some of the problems with the common landmark based registration methods [2], we use intraoperatively acquired freehand ultrasound to represent the coordinate system of the patient. In previous work we developed a fast and robust algorithm to register the intraoperative ultrasound with the preoperative CT or MRI data [3-5]. This surface-volume algorithm requires the extraction of the bone surface from the preoperative data. The bone surface is projected into the ultrasound data and, by the use of an optimization process, the surface is transformed into its optimal position. The optimization criterion is the sum of gray values of the ultrasound voxels which are masked by the transformed surface. The preoperative process consists mainly of the extraction of the bone surface from the CT or MRI data. Our ambition is the automation of this process to keep the user interaction low. This makes the whole process simpler and less time consuming.

II. MATERIAL AND METHOD

A. Data

We acquired 3D ultrasound and CT data from a spine phantom and a femur phantom. Moreover, spiral CT and ultrasound patient data from the lumbar spine were obtained. We used a Telemed ultrasound device with a 9 MHz linear array transducer to record the phantom ultrasound data. For the patient ultrasound data acquisition, a Siemens Sonoline Omnia system and a 5 MHz curved array transducer were used. To obtain isotropic data, all datasets were resampled to a resolution of 0.5 × 0.5 × 0.5 mm³.

B. Image processing pipeline

For the automation of the preoperative bone surface extraction we developed an image processing pipeline (see Fig. 1).
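The resampling to 0.5 mm isotropic voxels described in section II.A can be sketched as follows (illustrative; nearest-neighbour lookup is used here for brevity, whereas a real pipeline would typically use (tri)linear interpolation, e.g. scipy.ndimage.zoom):

```python
import numpy as np

def resample_isotropic(volume, spacing, target=0.5):
    """Resample a 3-D volume to isotropic voxels of size `target` (mm).
    volume  -- 3-D numpy array
    spacing -- (sz, sy, sx) voxel size of `volume` in mm"""
    new_shape = tuple(int(round(s * sp / target))
                      for s, sp in zip(volume.shape, spacing))
    # For each output index, look up the nearest source voxel.
    idx = [np.minimum((np.arange(n) * target / sp).astype(int), s - 1)
           for n, sp, s in zip(new_shape, spacing, volume.shape)]
    return volume[np.ix_(idx[0], idx[1], idx[2])]
```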
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 680–683, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Fig. 1. Diagram of the preoperative image processing pipeline. After landmark identification, a scan path is matched with these landmarks. According to this scan path, the ultrasound scan is simulated in the preoperative data. Simultaneously, the landmarks are used for the model matching to segment single bones. The surfaces of single bones are extracted by combining the segmentation result with the extracted bone surface.

The precondition for the preoperative image processing pipeline is the design of models of ultrasound scan paths. These model paths should correspond approximately to the intraoperatively used ultrasound scan paths. Hence, the shapes of the model scan paths should closely approximate the skin surface in the data. Additionally, some scan attributes of the intraoperatively used ultrasound transducers were attached to the models. We distinguished between linear and curved array transducers and considered different transducer breadths and different angles for curved array transducers. For each anatomical region and for each ultrasound transducer, a scan path model was designed. Additionally, anatomical landmarks were attached to the scan path models. Thus, a model scan path consists of the path with additional scan and transducer features, and corresponding landmarks. The first step of the image processing pipeline (see Fig. 1) is the identification of predefined anatomical landmarks in the preoperative data. After this, the model of the scan path was matched with the preoperative data by the use of a rigid point registration based on anatomical landmarks. According to the registered scan path and the scan properties, the ultrasound scan was simulated in the preoperative data. To extract the bone surface, every single ray of the simulated ultrasound was followed and the first voxel with a defined intensity hit by the ray was added to a set of surface points. Because of the relative movement of bones it is important to register single bones. Therefore, in complex anatomical regions like the lumbar spine it is necessary to segment single vertebrae. For the separation of the bone surface of a single vertebra, the vertebra was segmented from the whole lumbar spine with a flexible shape-based level set segmentation technique [6], which is based on the algorithm of Tsai [7]. Finally, the extracted surface of the whole lumbar spine was combined with the segmentation result. By means of a distance function between the segmentation result and the surface, we assigned surface points to single vertebrae.

C. Evaluation
For the evaluation of the new bone surface extraction method, we compared the ultrasound registration with an accurate point registration. Two plastic phantoms were used for the evaluation experiments: a lumbar spine phantom and a femur phantom. Both contained a number of drill holes, which were used as reference points for a point-based registration. For the reference registration, the drill holes in the phantom were marked ten times with a pointer and a mean point set was built. For the ultrasound registration, we acquired ten 3D ultrasound datasets and created ten surfaces with the described image processing pipeline. With the ultrasound datasets and the surfaces we performed 100 ultrasound registrations. The target registration error between the 100 ultrasound registrations and the accurate reference registration was calculated for different point sets. At the lumbar spine, point sets of anatomical regions which are important for pedicle screw insertion were marked. These anatomical regions are the right and left pedicle of the vertebra and the ventral face of the vertebral body. At the femur, we marked anatomical regions which are important for an anterior cruciate ligament reconstruction, where a borehole is drilled for graft fixation. The first and second regions we chose are the insertions of the anterior and the posterior cruciate ligament (these points are near the entrance point of the drill). The third region is the intersection between the lateral epicondyle and the femur diaphysis, where the graft is fixated.

III. RESULTS

After performing 100 ultrasound registrations for each of our phantoms, the target registration error was calculated. The RMS error at the pedicle of the vertebra was 1.04 mm and at the ventral face of the vertebral body 1.11 mm. At the drill entrance point at the femur, the RMS error was 0.76 mm, and at the intersection between the lateral epicondyle and the femur diaphysis 0.69 mm.
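The point-based reference registration can be sketched with the standard SVD (Kabsch) solution for least-squares rigid alignment of corresponding point sets (an illustrative choice; the paper does not specify which rigid point registration algorithm was used):

```python
import numpy as np

def rigid_point_registration(P, Q):
    """Least-squares rigid registration (Kabsch): find R, t such that
    R @ P_i + t best matches Q_i.
    P, Q -- (N, 3) corresponding points, e.g. the mean drill-hole
            point set (average of the ten pointer markings) and the
            drill-hole positions in the preoperative data."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    return R, t
```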
The overall RMS error at the femur was 0.74 mm and at the lumbar spine 1.14 mm. All results are listed in Table 1. The target registration error consists of a systematic and a statistical error. The statistical error equates to the mean distance of all ultrasound registrations to the center of all ultrasound registrations, and the systematic error is the difference of the center of all ultrasound registrations to the accurate point registration.

Table 1 Target registration error at the femur and the lumbar spine; RMS: root mean square; MAX: maximal error; STA: statistical part of the error; SYS: systematic error. All values are measured in mm. The different anatomical regions are the insertions of the anterior (AC) and posterior (PC) cruciate ligament, the intersection between the lateral epicondyle and the femur diaphysis (ED), the left (LP) and right (RP) pedicle of the vertebra, and the ventral face of the vertebral body (VB).

anatomical region      RMS   MAX   STA   SYS
femur AC               0.77  1.43  0.37  0.66
femur PC               0.78  1.78  0.41  0.67
femur ED               0.70  1.42  0.36  0.61
entire femur           0.74  1.78  0.38  0.64
lumbar spine LP        1.04  1.69  0.21  1.04
lumbar spine RP        1.27  1.95  0.19  1.25
lumbar spine VB        1.11  2.37  0.20  1.09
entire lumbar spine    1.14  2.37  0.20  1.12
Over all regions at the femur, the mean statistical error was 0.36 mm; at the lumbar spine the statistical error was 0.2 mm. In Fig. 2 the deviation of all ultrasound registrations from the center of all ultrasound registrations is displayed. At the femur, 100% of the deviations were lower than 1 mm; at the lumbar spine, 99.6% of the cases were lower than 1 mm. The systematic error at the femur was 0.64 mm and at the lumbar spine 1.12 mm. The preoperative pipeline for the surface extraction was also evaluated with patient data. Because we could not measure the accuracy in patient data, we visually estimated the correspondence between the preoperative data and the ultrasound data. In Fig. 3 the registration result at the lumbar spine is illustrated. The registration showed a good correspondence of the bone surface in the ultrasound and the preoperative data.

IV. DISCUSSION

The extraction of the bone surface with the preoperative pipeline showed good results. It was possible to perform ultrasound registrations with patient and phantom data. The high reliability of the registration is reflected in the small statistical errors (see Table 1). The statistical errors of 0.38 mm at the femur and 0.2 mm at the lumbar spine were within an admissible range for intraoperative navigation. However, the main direction of the systematic error corresponds to the direction of the ultrasound propagation. After a systematic evaluation of the causes we will be able to reduce this error.
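The decomposition used here (statistical error = mean distance of the individual registrations to their common center; systematic error = distance of that center to the reference registration) can be sketched as:

```python
import numpy as np

def error_decomposition(target_positions, reference_position):
    """Split the target registration error into its statistical and
    systematic parts.
    target_positions   -- (N, 3) position of a target point after each
                          of the N ultrasound registrations
    reference_position -- (3,) position of the same target under the
                          reference (point-based) registration"""
    center = target_positions.mean(axis=0)
    statistical = np.mean(np.linalg.norm(target_positions - center, axis=1))
    systematic = np.linalg.norm(center - reference_position)
    return statistical, systematic
```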
The calculation process of the automatic surface extraction is preoperative, and therefore the computing time is not that relevant. The processing time was 3-5 minutes on an Intel Core 2 Duo 2.19 GHz CPU, of which the shape-based level set segmentation needed most of the time.

Fig. 2. The deviation of the ultrasound registrations from the center of all ultrasound registrations, sorted by error. The grey line represents the errors at the lumbar spine and the black line the errors at the femur.

Fig. 3. Registration result at the lumbar spine with patient data. The arrows mark the bone surface in the ultrasound and CT data. a) Ultrasound volume b) and c) Overlay of ultrasound and CT data d) CT data

V. CONCLUSION

We developed a preoperative image processing pipeline for ultrasound based navigation. Our preoperative process advances the automation of the whole navigation process. Therefore, this work enhances the possibility of applying ultrasound based navigation in the clinical routine. The next steps of our work will be the automatic detection of landmarks. These landmarks are necessary for the scan path matching and for the initial start position of the shape-based segmentation method. Furthermore, we want to
evolve the vertebra segmentation and expand the method to MRI segmentation of different anatomical regions.

ACKNOWLEDGMENT

This work was an activity of the OrthoMIT consortium. It was supported by the Bundesministerium für Bildung und Forschung (Az. 01EQ0424).

REFERENCES

1. Peters T (2000) Image-guided surgery: From X-rays to Virtual Reality. Computer Methods in Biomechanics and Biomedical Engineering 4(1):27–57
2. Yaniv Z, Cleary K (2006) Image-Guided Procedures: A Review. Technical Report, Georgetown University Imaging Science and Information Systems Center
3. Winter S, Brendel B, et al. (2008) Registration of CT and intraoperative 3D ultrasound images of the spine using evolutionary and gradient-based methods. IEEE Transactions on Evolutionary Computation 12(3):284–296
4. Winter S, Dekomien C, et al. (2007) Registrierung von intraoperativem 3D-Ultraschall mit präoperativen MRT-Daten für die computergestützte orthopädische Chirurgie. Zeitschrift für Orthopädie und Unfallchirurgie 145:586–590
5. Brendel B, Winter S, et al. (2002) Registration of 3D CT- and ultrasound-datasets of the spine using bone structures. Computer Aided Surgery 7:146–155
6. Dekomien C, Winter S (2007) Segmentierung einzelner Wirbel in dreidimensionalen CT-Daten mit einem modellbasierten Level Set Ansatz. Biomedizinische Technik 52 (Suppl.)
7. Tsai A, Yezzi A (2003) A shape-based approach to the segmentation of medical imagery using level sets. IEEE Transactions on Medical Imaging 22:137–154

Author: Claudia Dekomien
Institute: Institut für Neuroinformatik
Street: Universitätsstr. 150
City: Bochum
Country: Germany
Email: [email protected]
Elastic Registration of Functional MRI Data to Sensorimotor Cortex

T. Ball1,2, I. Mutschler3, D. Jäger4, M. Otte5, A. Schulze-Bonhage1,2, J. Hennig6, O. Speck7 and A. Schreiber8

1 Epilepsy Center, University Hospital, University of Freiburg, Germany
2 Bernstein Center for Computational Neuroscience, University of Freiburg, Germany
3 Department of Psychiatry, University of Basel, Switzerland
4 Department of Radiology, University of Freiburg, Germany
5 Department of Neurology, University Hospital, University of Freiburg, Germany
6 Department of Diagnostic Radiology, Medical Physics, University Hospital, University of Freiburg, Germany
7 Department of Biomedical Magnetic Resonance, Institute for Experimental Physics, Faculty of Natural Sciences, Otto-von-Guericke-University Magdeburg, Germany
8 Siemens Medical, Erlangen, Germany
Abstract — Functional MRI (fMRI) studies often critically build on assignments of task-related fMRI responses to brain anatomy, which presuppose accurate registration of the functional data to anatomical space. Registration methods proposed in the literature for this purpose range from rigid body transformation, which only allows for translation and rotation, over affine transformations additionally allowing for scale and shear, to high-dimensional non-linear methods such as elastic registration with hundreds of control parameters. There is little data on the impact of transformation model complexity on fMRI localization accuracy. Here we have therefore compared the spatial accuracy of rigid body registration, affine registration, and elastic registration based on Bezier-spline transformations. To this aim, we acquired fMRI data in subjects performing a hand movement task. When applying rigid body and affine registration, the response center of mass was erroneously assigned to the primary somatosensory instead of the primary motor cortex in 20% and 45% of cases, respectively, while such errors occurred in only 5% of elastically registered cases. Our findings demonstrate that sophisticated registration techniques can increase the assignment accuracy of fMRI responses to the sensorimotor cortex. Furthermore, we provide an outlook on methods based on high-resolution fMRI which circumvent the registration problem and, compared to methods based on structural-functional co-registration, are suitable to achieve higher localization accuracy.

Keywords — Motor cortex, functional MRI, registration, spline transformation, high resolution
I. INTRODUCTION

Function-to-anatomical-structure assignment is fundamental to many current neuroimaging studies. To accurately assign functional maps to anatomical data, a co-registration problem has to be solved if the anatomical information that can be derived from the functional images is not detailed enough and additional high-resolution anatomical data sets are therefore recorded. Then, the high-resolution anatomical data set must be brought into spatial correspondence with the lower-resolution functional data set. This is achieved by application of a registration algorithm finding a suitable spatial transformation from the source or ‘moving’ image to the reference or ‘target’ image. Registration algorithms applied in fMRI brain imaging studies for the EPI-to-anatomy registration problem are mostly linear methods, but non-linear approaches have also been proposed (for a review see ref. [3]). Linear models in the general sense are those models retaining collinearity of image points, such as rigid body registration allowing translation and rotation in x, y and z (corresponding to 6 independent parameters). Within the linear framework, up to 9 additional parameters can allow for rescaling and/or perspective transformations. In contrast, non-linear registration is based on high-dimensional transformation maps allowing for elastic (hundreds of parameters) or fluid (thousands of parameters) deformations. In the case of echo planar imaging (EPI) data, elastic registration may be useful for correcting geometric EPI distortions [4;8], which can be of magnitudes of several mm [2], potentially shifting functional responses into neighboring anatomical structures. There is, however, little data on whether registration methods using high-dimensional transformations have advantages for intra-individual registration of (distorted) EPI data to (relatively undistorted) anatomical MRI data. In the present study we have addressed this issue. Additionally, we give a short outlook of how precise anatomical localization can also be achieved by circumventing the functional-structural registration problem.

II. MATERIAL AND METHODS

In the present study we have therefore acquired hand movement related fMRI data from 2 subjects each taking
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 684–688, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Elastic Registration of Functional MRI Data to Sensorimotor Cortex
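The linear registration models described in the Introduction (6-parameter rigid body vs. a fuller affine model with rescaling) can be illustrated with a small sketch; the rotation angle, translation and scaling values below are arbitrary and only demonstrate the geometric properties:

```python
import numpy as np

def rigid_transform(theta_z, t):
    """4x4 rigid-body transform: rotation about z plus a 3D translation
    (a full rigid model has 6 parameters: 3 rotations, 3 translations)."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

def affine_transform(theta_z, t, scale):
    """Adds anisotropic rescaling to the rigid model (extra linear parameters)."""
    T = rigid_transform(theta_z, t)
    T[:3, :3] = T[:3, :3] @ np.diag(scale)
    return T

def apply(T, pts):
    """Apply a homogeneous 4x4 transform to an (N, 3) array of points (mm)."""
    homo = np.c_[pts, np.ones(len(pts))]
    return (homo @ T.T)[:, :3]

pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [20.0, 0.0, 0.0]])  # collinear voxels

rigid = apply(rigid_transform(0.3, [2.0, -1.0, 0.5]), pts)
affine = apply(affine_transform(0.3, [2.0, -1.0, 0.5], [1.1, 0.9, 1.0]), pts)

# Rigid registration preserves inter-point distances ...
assert np.isclose(np.linalg.norm(rigid[1] - rigid[0]), 10.0)
# ... the affine model does not, but both keep collinear points collinear.
assert np.allclose((affine[0] + affine[2]) / 2, affine[1])
```

Elastic registration, by contrast, would replace the single 4x4 matrix with a dense deformation field of hundreds of parameters or more.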
III. RESULTS
Each session of each of the two subjects yielded significant task-related signal changes in multiple cortical areas. Activation was found in the left and the right central region in the vicinity of the precentral knob, a characteristic Ω-shaped structure previously described as a landmark for the primary sensorimotor hand area [6]. There were, however, differences in the exact response locations with respect to the anatomical data when using the three registration techniques. The 3D coordinates of the centers of mass (CoMs) of activation in the sensorimotor hand region were significantly different depending on the registration method used. The mean difference for the rigid vs. affine pair was significantly smaller than that for the rigid vs. elastic and the affine vs. elastic pairs (t-test).

γ = [piecewise, MCC-coupled weighting factor; cases with threshold 0.2 and value 0 — full definition not recoverable from the source]   (5)
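Equation (5) defines a piecewise, MCC-coupled weighting factor γ; since its exact form is garbled in the source, the sketch below uses an illustrative choice (γ = 1 − MCC, applied to interior pixels only) to show how correlation-coupled regularization pulls unreliable displacement estimates toward their neighbourhood while leaving reliable ones essentially untouched:

```python
import numpy as np

def regularize(displacement, mcc):
    """Blend each pixel's displacement with the median of its 3x3 neighbourhood.

    gamma = 1 - MCC is an *illustrative* weighting (the paper's Eq. 5 defines
    its own MCC-coupled factor): estimates with a high correlation coefficient
    are trusted, low-MCC estimates are pulled toward their neighbours.
    """
    h, w = displacement.shape
    out = displacement.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh = np.median(displacement[i - 1:i + 2, j - 1:j + 2])
            gamma = 1.0 - mcc[i, j]
            out[i, j] = gamma * neigh + (1.0 - gamma) * displacement[i, j]
    return out

disp = np.zeros((5, 5)); disp[2, 2] = 3.0      # one outlier displacement estimate
mcc = np.full((5, 5), 0.95); mcc[2, 2] = 0.1   # ... flagged by a low MCC value

reg = regularize(disp, mcc)
assert reg[2, 2] < 0.5       # the unreliable outlier is strongly suppressed
assert abs(reg[1, 1]) < 0.2  # reliable pixels are barely changed
```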
After tracking all pixels of the ROI, the strain images were generated by accumulation of strain rate for each pixel.
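The accumulation step just described can be sketched per pixel as a running sum over frames; the array sizes and strain-rate values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical strain-rate images: (n_frames, height, width), strain per frame,
# e.g. derived from RF-based displacement estimates along each tracked pixel.
strain_rate = rng.normal(0.0, 0.01, size=(64, 32, 32))

# The strain image at frame k is the per-pixel running sum of the strain rate.
strain = np.cumsum(strain_rate, axis=0)

assert strain.shape == strain_rate.shape
assert np.allclose(strain[10], strain_rate[:11].sum(axis=0))
```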
3D Cardiac Strain Imaging using a Novel Tracking Method
Fig. 2 a) Axial elastograms for five different time points during the translation and deformation cycle (I-V); b) the mean axial strain for the box-shaped area (white dashed line in a-I).
Fig. 3 a) The initial grid of a ROI in the left ventricle (Sax view).
III. RESULTS
The vessel phantom was subjected to relatively large translations (up to 5.0 mm, Fig. 1a-b). The initial mesh (Fig. 1c) was compared with the resulting mesh after the entire cycle. Regularization using Eq. (2) revealed a large amount of region implosion (Fig. 1d). The use of neighbouring displacement values resulted in almost no implosion of the region, although a less uniformly distributed mesh after 64 frames (Fig. 1e). The results revealed high MCC values, so little regularization was necessary. The resulting axial elastograms are shown in Fig. 2a. The mean strain curve in Fig. 2b corresponds to a square ROI (Fig. 2a-I, white dashed line). The elastograms correspond to the time points I-V, indicated by circles. The axial strain patterns are in correspondence with previous studies [8]. The peak strain ranged from -10 to 10%. After removing the pressure, the resulting strain should be zero; the final strain image indeed revealed almost no residual strain, and no trend is observed in the curve. Fig. 3 shows the Sax view of the left ventricle of a canine heart. The initial ROI is again divided into an equally spaced mesh (Fig. 3a). To demonstrate the necessity of regularization, the resulting mesh after one entire cardiac cycle without the use of an internal force is shown in Fig. 3b. This mesh had an irregular shape and its points were non-uniformly distributed. The mesh with displacement regularization (Fig. 3c) revealed a smoother shape and a more uniform distribution of mesh points. The use of Eq. (1) again resulted in severe implosion (results not shown). Fig. 4 shows both the short-axis (a) and the long-axis (b) images of the left ventricle for one cardiac cycle. The captured ECG signal is shown in Fig. 4c; the black dashed lines indicate the time points at which images I to V were acquired. In both the Sax and Lax views, the meshes
Fig. 3 (continued): b) the resulting grid without any regularization; c) the resulting grid using displacement regularization as internal force.
return to their initial position, although with altered appearance. A region was indicated in the starting frame of the Sax view (white line). The mean strain in all three directions was calculated for this ROI; the three resulting strain curves are shown in Fig. 4d-f after accumulation, with (solid) and without (dashed) tracking.

IV. DISCUSSION
Tracking is necessary when estimating strain in actively deforming tissue, and a pixel-based approach is needed to assess local strain. The proposed approach uses RF-based displacement estimates. RF-based displacement estimation has the advantage of higher sensitivity, but may lack robustness. Unreliable displacement estimates can be caused either by large deformations or by low echogenicity. Hence, regularization of the tracked mesh is favored. The preliminary findings of the current study show that an additional regularization force based on the behavior of neighbouring pixels improved mesh uniformity and smoothness without implosion of the ROI. The main advantage of the MCC-coupled weighting factor lies in the fact that translations were not over-smoothed, since reliable displacement estimates were maintained. Since the linear data of the tube phantom had a high echogenicity and the maximum strain rate was moderate (± 0.5%), MCC values were high (0.9–1.0) and little regularization was necessary. In the cardiac data, high local strain rates (± 5–10%), low echogenicity and out-of-plane motion can be a significant source of error and could very well be the cause of the problems at 3 and 9 o'clock in the short-axis view (Fig. 3a). In particular, the measured lateral movement seemed to be underestimated. In addition, the mesh shape was altered, probably due to the aforementioned problems and the fact that the last acquired image was not exactly equal to the initial frame. However, the resulting strain curves were in accordance with previous studies [3,4]. The strain curves with tracking showed higher maximum strain values and
R.G.P. Lopata, M.M. Nillesen, I.H. Gerrits, H.H.G. Hansen, L. Kapusta, J.M. Thijssen and C.L. de Korte
Fig. 4 a) Sax images of the left ventricle with the tracked region for several time points during the cardiac cycle (I-V); b) Lax images of the left ventricle with the tracked region; c) the captured ECG signal with the time points indicated by black dashed lines; d) the mean radial (axial) strain with (solid) and without (dashed) tracking; e) the mean circumferential (lateral) strain; f) the mean longitudinal strain.

less trend, although considerable trend was still present in the circumferential strain (Fig. 4e). For further improvement of tracking, detrending or a two-way approach can be used [3]. Detrending each pixel's co-ordinates results in final meshes that are equal to the initial grids, but this may introduce a tracking lag during the systolic phase. In two-way tracking, tracking is repeated in reverse and the average of the forward and backward results is used. In our opinion, the latter is preferable.

V. CONCLUSIONS
Pixel-based tracking and assessment of local strain in actively deforming tissue are feasible using RF-based techniques.

ACKNOWLEDGMENT
This work is supported by Philips Medical Systems and the Dutch Technology Foundation (STW), project NKG 06466.

REFERENCES
1. Ophir J, Cespedes I et al. (1991) Elastography: a quantitative method for imaging the elasticity of biological tissues. Ultrason. Imag. 13(2): 111–134.
2. Bohs L, Trahey G (1991) A novel method for angle independent ultrasonic imaging of blood flow and tissue motion. IEEE Trans. Biomed. Eng. 38(3): 280–286.
3. Langeland S, d'Hooge J et al. (2004) RF-based two-dimensional cardiac strain estimation: a validation study in a tissue-mimicking phantom. IEEE Trans. UFFC 51(11): 1537–1546.
4. Lopata R, Nillesen M et al. (2006) In vivo 3D cardiac and skeletal muscle strain estimation. Proc. IEEE Ultrasonics Intern. Conf., Vancouver, Canada, 2006, pp. 744–747.
5. Kallel F, Ophir J (1997) A least-squares strain estimator for elastography. Ultrason. Imag. 19(3): 195–208.
6. Cespedes I, de Korte C et al. (1999) Echo decorrelation from displacement gradients in elasticity and velocity estimation. IEEE Trans. UFFC 46(4): 791–801.
7. Nillesen M, Lopata R et al. (2007) Segmentation of the heart muscle in 3-D pediatric echocardiographic images. Ultrasound Med. Biol. 33(9): 1453–1462.
8. Ribbers H, Lopata R et al. (2007) Noninvasive two-dimensional strain imaging of arteries: validation in phantoms and preliminary experience in carotid arteries in vivo. Ultrasound Med. Biol. 33(4): 530–540.

Author: R.G.P. Lopata
Institute: Clinical Physics Lab, Department of Pediatrics, Radboud University Nijmegen Medical Centre
Street: CUKZ 833, P.O. Box 9101, 6500 HB
City: Nijmegen
Country: the Netherlands
Email: [email protected]
SWI Brain Vessel Change Coincident with fMRI Activation
Mario Forjaz Secca 1,2, Michael Noseworthy 3,4, Henrique Fernandes 1 and Adrian Koziak 4
1 Cefitec, Dep. of Physics, Universidade Nova de Lisboa, Portugal
2 Ressonância Magnética de Caselas, Lisboa, Portugal
3 Brain Body Institute, Hamilton, Ontario, Canada
4 Electrical & Computer Engineering and The School of Biomedical Engineering, McMaster University, Hamilton, Ontario, Canada
E-mail: [email protected]

Abstract — fMRI has been extensively used for the last ten years; however, it is not fully understood what it really measures. To map brain function, fMRI makes use of a chain of physiological events, from neuronal activation to blood oxygenation, which gives rise to the BOLD signal. The SWI (Susceptibility Weighted Imaging) sequence allows us to see the blood content of small venous vessels in the brain. This led us to look for the possibility of observing the changes in small vessels that occur during the activation of a particular brain area, by acquiring one set of images at rest and another set while the paradigm task is being performed. By subtracting the two sets it should be possible to see changes in oxygenation at the vessel level. By comparing the SWI images with the fMRI activation maps obtained with the same paradigm, we can look for coincidences of blood vessel changes with the fMRI activations and thus validate in images part of the accepted chain of physiological events occurring during neuronal activation. Our 3D T1 images, functional images and SWI images were all obtained at the same axial locations so that we could compare them directly. The data obtained with SWI, BOLD activation and 3D FSPGR images for the same volunteers on the same axial plane showed very good spatial correlation between the relevant eloquent areas. After subtraction, the SWI image showed the changed vessels only at the same location as the fMRI activation areas. These corresponded to the cortical areas expected for the paradigms used, with very good spatial correlation between the three images. Our data seem to show that it is possible to see the vessel changes that occur during neuronal activation and to correlate their localisation with the BOLD activation area.
Keywords — fMRI, BOLD, SWI

I. INTRODUCTION
Although fMRI has been extensively used for the last ten years, it is not fully understood what it really measures (Logothetis, 2007). To map brain function, fMRI is known to make use of a chain of physiological events, from neuronal activation to blood oxygenation, which gives rise to the BOLD signal. It is generally accepted that neuronal activation leads to a local increase in glucose consumption, which in turn produces an increase in oxygen consumption. This induces an increase in rCBF and an increase in rCBV. In the blood there is a decrease in the oxygen extraction fraction, producing an increase in oxyhemoglobin and a decrease in deoxyhemoglobin. This decrease in deoxyhemoglobin produces a decrease in local microscopic field gradients, which in turn produces an increase in T2*, measured as an increase in signal by the BOLD sequence. The Susceptibility Weighted Imaging (SWI) sequence (Haacke, 2004) allows us to see the blood content of small venous vessels in the brain, since it is sensitive to the magnetic susceptibility difference between oxygenated and deoxygenated hemoglobin, showing the phase difference in regions containing deoxyhemoglobin. This led us to look for the possibility of observing the changes in small vessels that occur during the activation of a particular brain area, by acquiring one set of images at rest and another set while the paradigm task is being performed. By subtracting the two sets it should be possible to see changes in oxygenation at the vessel level. By comparing the SWI images with the fMRI activation maps obtained with the same paradigm, we can look for coincidences of blood vessel changes with the fMRI activations and thus contribute to the validation of part of the accepted chain of physiological events occurring during neuronal activation.

II. MATERIALS AND METHODS
All MRI images were obtained on a 3.0 T GE Healthcare Signa system. For the fMRI we acquired a 28-slice BOLD EPI sequence with an 8-channel phased-array head coil, using flip angle = 90°, TE = 35 ms, TR = 3 s, slice thickness = 5.0 mm, FOV = 24 × 24 cm² and a 64 × 64 matrix, with a total acquisition time of 282 s. During this, a motor activation paradigm consisting of simple closing and opening of the hand, in 30-second blocks of activation and rest, was performed. The fMRI post-processing was performed with the Neurolens software (Neurovascular Imaging Lab, UNF Montreal), after motion correction and spatial smoothing, using a linear model
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 701–704, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
with a positive Gamma function and 3rd-order polynomial terms. The activation map was then overlaid on a 3D FSPGR set of images for localisation. We also acquired two high-resolution, fully velocity-compensated 3D SWI gradient-echo sequences: one with the volunteer at rest and the other with the volunteer opening and closing the hand for the whole length of the sequence, in exactly the same way as for the fMRI paradigm. The SWI post-processing was performed on a GE Advantage Windows workstation. Following phase filtering, we subtracted the two sets of images that had been processed to enhance the venous signal: the set of processed SWI images acquired during execution of the activation paradigm from the set of SWI images acquired at rest. The SWI images were mathematically manipulated, rather than the filtered phase, as the SWI data show the venous (i.e., deoxyhemoglobin-related) signal change most strikingly. This provided a set of subtraction images that highlighted the changes in blood vessel size between rest and activation. Our 3D T1 images, functional images and SWI images were all obtained at the same axial locations so that we could compare them directly. Our male volunteers performed a motor task of closing and opening the hand and a visual stimulation task.
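The rest-minus-activation subtraction described above can be sketched as follows; the volume shape, intensities and the "changed vessel" region are all synthetic, and the two SWI sets are assumed to be already co-registered and phase-processed:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical processed SWI volume at rest (x, y, slice), arbitrary units.
rest = rng.normal(100.0, 2.0, size=(16, 16, 8))

# During activation, only a small vessel region changes signal (illustrative).
task = rest.copy()
task[5:8, 5:8, 3] += 10.0

# Voxel-wise subtraction of the two sets highlights the changed vessels;
# unchanged tissue cancels out.
diff = task - rest
changed = np.argwhere(np.abs(diff) > 5.0)

assert len(changed) == 9                       # only the 3x3x1 vessel region
assert np.allclose(diff[5:8, 5:8, 3], 10.0)
```

In practice the two acquisitions would first need motion correction / co-registration, which this sketch sidesteps by construction.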
To see whether the activated area was related only to the venous system, we also adjusted our SWI sequence processing for the arteries; the subtraction gave a blank result, showing no change with activation, as can be seen in Fig. 2.
Fig. 2: SWI subtraction image adjusted for the arteries during the right-hand motor activation paradigm.
By comparing the SWI images with the fMRI activation maps obtained with the same paradigm, as can be seen in Fig. 3, we were able to find clear coincidences of blood vessel changes with the fMRI activations, and thus gain a better understanding of the neuronal activation mechanism, since this validates in images part of the theoretical chain of physiological events occurring during this phenomenon.
Fig. 1: SWI subtraction image showing only one dilated vein during the right-hand motor activation paradigm.
Fig. 3: fMRI BOLD activation localization of the right-hand motor activation area.
To validate our data we repeated the same experiment with the same volunteer four months later and obtained very similar results, as we can see in Fig. 4.
Fig. 6: fMRI BOLD activation localization of the left-hand motor activation area for one volunteer.
Fig. 4: fMRI BOLD activation localization of the right-hand motor activation area, repeated 4 months after the first acquisition.
The 3D T1 images obtained at the same axial locations show that the activation area corresponds to a sulcus, where the small veins are (Fig. 5).
Fig. 7: fMRI BOLD activation localization of the left-hand motor activation area for another volunteer.
To further test our hypothesis on a different part of the brain, we tried the same procedure with a simple visual paradigm alternating dark periods with periods of continuously changing images. This produced very similar results, with coincidence of the SWI subtraction images and the fMRI activation areas, as can be seen in Fig. 8.

Fig. 5: 3D FSPGR anatomical image of the brain slice corresponding to the activation area.

We repeated the same experiment with a left-hand motor paradigm for the same volunteer (Fig. 6) and with a different volunteer (Fig. 7), obtaining similar results in both cases.

IV. CONCLUSIONS
The data obtained with SWI, BOLD activation and 3D FSPGR images for the same volunteers on the same axial plane showed very good spatial correlation between the relevant eloquent areas. After subtraction, the SWI images showed the changed vessels only at the same location as the fMRI activation areas and demonstrated that the arterial blood vessel diameters and intensity are the same in both states. This indicates that the activated area is only at the venous side of the vasculature, excluding any possible interference of arterial blood in the formation of the BOLD activation. These results corresponded to the cortical areas expected for the paradigms used, with very good spatial correlation between the images. Our data seem to show that it is possible to see the changes in veins that occur during neuronal activation and to correlate their localisation with the BOLD activation area.

Fig. 8: fMRI BOLD activation localization of a visual activation paradigm.

REFERENCES
1. Haacke EM, Xu Y, Cheng YC, Reichenbach JR (2004) Susceptibility weighted imaging (SWI). Magn Reson Med 52:612–618.
2. Logothetis N (2007) The ins and outs of fMRI signals. Nature Neuroscience 10:1230–1232.

Corresponding author:
Author: Mario Forjaz Secca
Institute: Cefitec, Physics Department, Universidade Nova de Lisboa
Street: Quinta da Torre
City: 2829-516 Caparica
Country: Portugal
Email: [email protected]
A Subspace Wiener Filtering Approach for Extracting Task-Related Brain Activity from Multi-Echo fMRI Data
C.W. Hesse 1, P.F. Buur 1 and D.G. Norris 1,2
1 F.C. Donders Centre for Cognitive Neuroimaging, Radboud University Nijmegen, Nijmegen, The Netherlands
2 Erwin L. Hahn Institute for Magnetic Resonance Imaging, Essen, Germany
Abstract — This work presents a novel application of a subspace Wiener filtering approach to multi-echo functional magnetic resonance imaging (fMRI) data from a cognitive neuroscience experiment, in order to extract task-related brain activity at each voxel. Subspace Wiener filtering maximizes the correlation of a linear combination of multiple echo time-series with the signal subspace spanned by a set of target waveforms, e.g., the condition-dependent modeled hemodynamic response derived from the design matrix. Compared with existing echo combination methods, evaluated against a single-echo baseline, subspace Wiener filtering leads to an increased selective enhancement of the signal components reflecting task-related BOLD activation, and could be useful for pre-processing multi-echo fMRI data.

Keywords — functional magnetic resonance imaging (fMRI), multi-echo fMRI, neuroimaging, Wiener filtering
I. INTRODUCTION
Many functional magnetic resonance imaging (fMRI) studies in the cognitive and clinical neurosciences measure experimentally induced or spontaneous changes in brain activity using the blood oxygenation level dependent (BOLD) contrast [1], and conventionally acquire a single T2*-weighted image at a fixed time after excitation using the gradient-echo echo-planar imaging (GE-EPI) sequence [2]. Recently developed parallel imaging technologies allow multiple images to be acquired at different times following a single excitation [3], and methods for combining multiple echo time-series at each voxel have been shown to enhance those BOLD signal components which reflect brain activity of interest [4-8]. However, existing echo combination approaches do not make explicit use of the design matrix, which is available in experimental fMRI studies and which provides additional information about the (expected) time course of BOLD activation in different conditions. From a signal processing perspective, the problem of finding a linear combination of several signals that minimizes the difference between the output and a single target signal can be addressed using Wiener filtering (see, e.g., [9]). When the desired response is parameterized by several signal components, the classic Wiener filtering approach can be extended to maximize the correlation with the subspace spanned by the target signals.
This paper examines the use and utility of the subspace Wiener filtering approach in a neuroimaging context, where it is applied to extract task-related brain activity from multi-echo fMRI data from a cognitive neuroscience experiment involving a paradigm with several (three) conditions. In comparison with existing methods for combining multiple echo time-series at each voxel, e.g., echo summation [4] and exponential fitting [7], and with individual echo time-series, the subspace Wiener filtered signals have superior signal to interference plus noise ratio (SINR) characteristics. In particular, the extracted signals have higher correlations with the signal subspace spanned by the design matrix and lower correlations with the subspace spanned by the matrix of parameters reflecting motion artifacts. Thus, for multi-echo fMRI data acquired in an experimental setting involving several different conditions, subspace Wiener filtering may provide a useful tool for enhancing the underlying signal components which reflect task-related BOLD activation prior to subsequent statistical analysis.

II. MATERIALS AND METHODS
A. Multi-Echo fMRI Data Acquisition
Multi-echo fMRI data were collected from six volunteer participants who gave informed consent in accordance with local ethical committee requirements. Images were acquired on a 3T TIM system (Siemens Medical Solutions, Erlangen, Germany) installed at the F. C. Donders Centre for Cognitive Neuroimaging, using a multi-echo pulse sequence that was developed in-house and based on the product GE-EPI sequence. Phase-encoding gradients were rewound between echoes so that distortion was identical across the images. 31 transversal slices were acquired (ascending slice order, 10% slice gap, 3.5 mm isotropic voxels, FOV 224 mm, 64 × 64 matrix). The product 12-channel head coil was used for signal reception to allow for accelerated parallel data acquisition.
Five echoes were collected at TE = 9.3, 21.1, 33, 45, and 56 ms using threefold acceleration with subsequent GRAPPA image reconstruction [10]. Other scanning parameters were: TR = 2 s, flip angle = 90°, receiver bandwidth = 2520 Hz/pixel.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 705–708, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
SINR characteristics were quantified with respect to the condition-dependent expected hemodynamic response (derived from the design matrix) and the motion parameters, using the subspace correlation coefficient

r = sqrt( (y^T U U^T y) / (y^T y) ) = ||U^T y|| / ||y||,   (1)
Fig. 1. Shown are the time-courses for the condition dependent expected hemodynamic response derived from the design matrix (A) and the motion parameters estimated from the first echo for one of the subjects (B).
B. Participants and Task
BOLD activation was induced by means of a color-word matching Stroop task [11], which elicits activation in several different brain regions, including visual areas, motor areas (cortex and cerebellum) as well as parietal and prefrontal cortex. The experiment consisted of 15 task blocks of 24 seconds, separated by 12 seconds of baseline (fixation cross), giving an experimental time of 10.5 minutes (310 volumes). The order of the blocks is illustrated in Fig. 1 A, which shows the modeled hemodynamic response for each of the three conditions. Subjects were instructed to respond to matching and non-matching stimuli by a button press with the right index or middle finger, respectively.

C. Multi-echo fMRI Data Motion Correction
Prior to the application of the various echo combination methods, the multi-echo fMRI data were corrected for head motion artifacts using SPM5 [12]. The motion parameters were estimated from the images acquired at TE = 9.3 ms (the first echo) and applied to realign all of the echo time-series. The time courses of the motion parameters from one subject are shown in Fig. 1 B.

D. Measures of Signal Quality
The aim of using methods for combining multiple echoes is to improve the quality of the signal components that reflect task-related BOLD activation, i.e., to increase the SINR. As the time-course of task-related brain activity ought to reflect the experimental manipulation, and head motion artifacts [13] – next to respiration and cardiac artifacts [14-15] – constitute a major source of interference in fMRI data, the
where the T × 1 vector y reflects the linear combination of the multi-echo time-series, and where the columns of the T × M matrix U form an orthonormal basis for the column space of the T × M matrix D containing T samples of the M target signal component waveforms. The matrix U can be obtained by taking the left singular vectors of the singular value decomposition (SVD; see, e.g., [16]) of the matrix D, which here reflects the hemodynamic response or the motion parameters, as appropriate. Thus, subspace correlation coefficients of the combined echo time-series with the design matrix (SCD) and with the measured motion parameters (SCM) were computed for all voxels within the brain volume. The SINR characteristics at each voxel were quantified in terms of the subspace correlation ratio SCD/SCM. Given these local, voxel-wise measures, there are several ways in which the overall signal quality of a combined-echo time-series may be quantified within and across subjects. One common approach is to compute mean values for one or more brain regions of interest (ROI), and then to average these over subjects. Although usually well motivated neuroscientifically, ROI-based analysis is inherently biased and potentially insensitive to important effects elsewhere in the brain. An alternative is to consider the mean values for subsets of task-related voxels, e.g., voxels whose SCD value exceeds some threshold, or voxels with the R largest SCD values; however, this can involve arbitrary thresholds and may lead to voxel selections with little or no overlap when comparing different echo combination methods. There are usually large differences across subjects in the quality of fMRI data, which are attributable to biophysical factors as well as cognitive processing and task compliance. The variability introduced by these inter-subject differences can mask the differences in signal quality between echo combination methods, and this can affect both performance and ROI-based measures.
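Under these definitions, the subspace correlation coefficient of Eq. (1) can be computed as in the following sketch; the data are synthetic, and the dimensions mirror the 310-volume experiment only for illustration:

```python
import numpy as np

def subspace_corr(y, D):
    """Correlation of a time-series y with the column space of a target matrix D.

    U is an orthonormal basis for the column space of D (its left singular
    vectors), so r = ||U^T y|| / ||y|| = sqrt(y^T U U^T y / y^T y).
    """
    U, s, _ = np.linalg.svd(D, full_matrices=False)
    U = U[:, s > 1e-10 * s.max()]       # keep only numerically non-null directions
    return np.linalg.norm(U.T @ y) / np.linalg.norm(y)

rng = np.random.default_rng(0)
T, M = 310, 3                           # 310 volumes, 3 modeled conditions
D = rng.normal(size=(T, M))             # stand-in for the design-matrix responses

y_in = D @ np.array([0.5, -1.0, 2.0])   # lies exactly inside the target subspace
y_out = rng.normal(size=T)              # an unrelated noise series

assert np.isclose(subspace_corr(y_in, D), 1.0)
assert subspace_corr(y_out, D) < 0.5
```

The same function applied with the motion-parameter matrix in place of D would yield the SCM value, and the ratio of the two gives the SINR measure used in the text.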
This problem of inter-subject variability may be mitigated by relative measures of signal quality computed with respect to a subject-specific baseline, and measures derived from a single echo time-series seem an appropriate datum for comparing the performance of different echo-combination methods. In the present context, echo 3 (at TE = 33 ms) had the highest overall signal quality (see Fig. 2) and was chosen as the baseline. Thus, global measures of signal quality were expressed as proportions of voxels with relative SCD and SINR increases, and the average magnitude of any increases.
Fig. 2. Comparisons of the overall signal quality of the individual echo time-series, relative to echo 1 at TE = 9.3 ms. The values reflect the mean and standard deviation for 6 subjects. The maxima are at echo 3.
E. Subspace Wiener Filtering
Wiener filtering is a multi-channel signal processing technique which seeks an optimal set of linear combination weights (filter coefficients) w = [w1, …, wN]^T such that the least-squares error between the filtered signal y(t) = w^T x(t), where x(t) = [x1(t), …, xN(t)]^T is the observed N-channel signal, and a desired output signal d(t) is minimized, i.e.,

w = arg min_w < ( d(t) − w^T x(t) )^2 >.   (3)

The well-known solution to this least-squares optimization problem (see, e.g., [9]) is given by

w = Rxx^−1 Rxd,   (4)

where Rxx^−1 is the inverse of the correlation matrix of the observed signal x(t), and Rxd is the cross-correlation vector (matrix) of x(t) and d(t). When the target is parameterized by (i.e., is an unknown linear combination of) M signal components, d(t) = a^T d(t), where a = [a1, …, aM]^T and d(t) = [d1(t), …, dM(t)]^T, the Wiener filter should maximize the subspace correlation of the output y(t) with the column space of the matrix D whose rows are the samples d(t)^T. This is effectively achieved by replacing the matrix Rxd in (4) with its first left singular vector, obtained from the SVD of Rxd. Subspace Wiener Filtering (SWF) is also referred to as Reduced-Rank Wiener Filtering (RRWF) in the signal processing literature [17-20], and is related to Canonical Correlation Analysis (CCA) [21] and Partial Least Squares (PLS) regression [22].
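The solution just described, the classic Wiener weights with the cross-correlation term replaced by the first left singular vector of Rxd, can be sketched as follows; the data are synthetic and the channel/condition counts are arbitrary:

```python
import numpy as np

def subspace_wiener(X, D):
    """Subspace Wiener filter weights for a T x N multi-channel signal X.

    Classic Wiener solution w = Rxx^{-1} r_xd, with the cross-correlation
    term replaced by the first left singular vector of Rxd = X^T D,
    where D (T x M) holds the target signal components.
    """
    Rxx = X.T @ X
    Rxd = X.T @ D                       # N x M cross-correlation with the targets
    u1 = np.linalg.svd(Rxd)[0][:, 0]    # first left singular vector
    return np.linalg.solve(Rxx, u1)

rng = np.random.default_rng(2)
T, N, M = 400, 5, 2
D = rng.normal(size=(T, M))             # target subspace (e.g. design matrix)
mix = rng.normal(size=(M, N))
X = D @ mix + 0.1 * rng.normal(size=(T, N))  # channels share the target components

w = subspace_wiener(X, D)
y = X @ w                               # the combined time-series

# The output should correlate strongly with the target subspace.
U = np.linalg.svd(D, full_matrices=False)[0]
r = np.linalg.norm(U.T @ y) / np.linalg.norm(y)
assert r > 0.9
```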
Fig. 3 shows the proportions of voxels with SCD and SINR increases and SCM decreases relative to echo 3, along with the average magnitudes of the respective changes. Compared with the SUM, EF1 and EF2 methods, the SWF approach yields the largest relative increases in SCD and SINR. While EF1 on average yields fewer voxels with an SCD increase than SUM, the mean magnitude of the increase in those voxels is larger, which together with a greater overall reduction in SCM results in a higher SINR. Thus, while SINR increases in the case of SWF are mainly due to a selective enhancement of task-related activity, the EF2 signal enhancement seems to arise more from selective suppression of the motion artifacts. Fig. 4 shows maps of the raw SCD values obtained using the SWF method for one subject. Consistent with this experimental paradigm, strongly task-related voxels are located in visual areas, primary motor cortex and cerebellum, as well as parietal and frontal cortical areas.
III. RESULTS Simple echo summation (SUM) and exponential fitting (which involves linear regression of log-transformed data onto the echo times) using only the slope (EF1) and both slope and intercept parameters (EF2) were compared with subspace Wiener filtering applied to the raw (SWF) and to log-transformed multi-echo data (SWFL).
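The SUM and exponential-fitting baselines can be sketched for a single voxel as follows; the signal model, T2* values and noise are synthetic, and EF1 corresponds to using only the fitted slope of log-signal vs. TE:

```python
import numpy as np

TE = np.array([9.3, 21.1, 33.0, 45.0, 56.0])   # echo times (ms), as acquired

rng = np.random.default_rng(3)
T = 310
# Hypothetical single-voxel multi-echo series: S(t, TE) = S0(t) * exp(-TE / T2*(t)).
S0 = 1000.0 + rng.normal(0.0, 5.0, T)
T2s = 45.0 + 2.0 * np.sin(np.linspace(0.0, 20.0, T))   # fluctuating T2* (ms)
S = S0[:, None] * np.exp(-TE[None, :] / T2s[:, None])

# SUM: simple summation across echoes.
sum_series = S.sum(axis=1)

# EF: linear regression of log-signal onto TE; the slope is -1/T2*.
A = np.c_[np.ones_like(TE), TE]
coef, *_ = np.linalg.lstsq(A, np.log(S).T, rcond=None)
t2s_fit = -1.0 / coef[1]               # per-volume T2* estimate from the slope

# With a noiseless exponential model the fit recovers T2* exactly.
assert np.allclose(t2s_fit, T2s, rtol=1e-6)
assert sum_series.shape == (T,)
```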
Fig. 3. Comparison of signal quality measures for each echo combination method relative to the single echo at TE = 33 ms. The values reflect the means and standard deviations for 6 subjects.
IFMBE Proceedings Vol. 22
C.W. Hesse, P.F. Buur and D.G. Norris
Fig. 4. Shown are "activation" maps of the raw subspace correlation coefficient values for the subspace Wiener-filtered multi-echo data, with the signal subspace spanned by the time-courses of the condition-dependent, modeled hemodynamic response derived from the design matrix (SCD).

IV. SUMMARY AND CONCLUSIONS

This work has presented a novel application of subspace Wiener filtering to multi-echo fMRI data from a cognitive neuroscience experiment involving a paradigm with several conditions, in order to selectively enhance signal components reflecting task-related BOLD activation at each voxel. In comparison with existing methods for combining information from multiple echoes, the subspace Wiener filtering method performs better in terms of the signal to interference plus noise ratio (SINR). Specifically, the extracted signals tend to have higher correlations with the signal subspace spanned by the design matrix and lower correlations with the signal subspace spanned by the parameters reflecting motion artifacts (except in the case of EF2). Thus, for multi-echo fMRI data acquired in an experimental setting involving several different conditions, subspace Wiener filtering may provide a useful preprocessing tool for extracting (enhancing) those signal components that reflect task-related BOLD activation.

ACKNOWLEDGMENT

The authors gratefully acknowledge funding from the Dutch Technology Foundation (STW): C.W. Hesse was supported by grant number NET.7050, and P.F. Buur by grant number NGT.6154.

REFERENCES

1. Ogawa S, Lee TM, Kay AR, Tank DW (1990) Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proc Natl Acad Sci U S A 87:9868-9872
2. Mansfield P (1977) Multi-planar image-formation using NMR spin echoes. Journal of Physics C: Solid State Physics 10:L55-L58
3. Larkman DJ, Nunes RG (2007) Parallel magnetic resonance imaging. Physics in Medicine and Biology 52:R15-R55
4. Posse S, Wiese S, Gembris D, Mathiak K, Kessler C, Grosse-Ruyken ML, Elghahwagi B, Richards T, Dager SR, Kiselev VG (1999) Enhancement of BOLD-contrast sensitivity by single-shot multi-echo functional MR imaging. Magnetic Resonance in Medicine 42:87-97
5. Weiskopf N, Klose U, Birbaumer N, Mathiak K (2005) Single-shot compensation of image distortions and BOLD contrast optimization using multi-echo EPI for real-time fMRI. Neuroimage 24:1068-1079
6. Poser BA, Versluis MJ, Hoogduin JM, Norris DG (2006) BOLD contrast sensitivity enhancement and artifact reduction with multiecho EPI: Parallel-acquired inhomogeneity-desensitized fMRI. Magnetic Resonance in Medicine 55:1227-1235
7. Speck O, Hennig J (1998) Functional imaging by I0- and T2*-parameter mapping using multi-image EPI. Magnetic Resonance in Medicine 40:243-248
8. Buur PF, Hesse CW, Norris DG (2008) Separating BOLD activation from stimulus-correlated motion by means of linear source extraction applied to multi-echo data. Proc. ISMRM 16th Scientific Meeting & Exhibition (ISMRM 2008), Toronto, Canada, 3-9 May 2008, p. 2491
9. Haykin S (1996) Adaptive Filter Theory, 3rd edition. Prentice Hall, Englewood Cliffs, NJ, USA
10. Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A (2002) Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine 47:1202-1210
11. Zysset S, Muller K, Lohmann G, von Cramon DY (2001) Color-word matching Stroop task: Separating interference and response conflict. Neuroimage 13:29-36
12. SPM5 Toolbox: http://www.fil.ion.ucl.ac.uk/spm/software/spm5/
13. Friston KJ, Williams S, Howard R, Frackowiak RS, Turner R (1996) Movement-related effects in fMRI time-series. Magnetic Resonance in Medicine 35:346-355
14. Raj D, Paley DP, Anderson AW, Kennan RP, Gore JC (2000) A model for susceptibility artefacts from respiration in functional echo-planar magnetic resonance imaging. Physics in Medicine and Biology 45:3809-3820
15. Dagli MS, Ingeholm JE, Haxby JV (1999) Localization of cardiac-induced signal change in fMRI. Neuroimage 9:407-415
16. Golub GH, van Loan CF (1996) Matrix Computations, 3rd ed. Johns Hopkins University Press, Baltimore
17. Scharf LL (1991) The SVD and reduced rank signal processing. Signal Processing 25:113-133
18. Scharf LL, Thomas JK (1998) Wiener filters in canonical coordinates for transform coding, filtering, and quantizing. IEEE Transactions on Signal Processing 46(3):647-654
19. Scharf LL, Mullis CT (2000) Canonical coordinates and the geometry of inference, rate, and capacity. IEEE Transactions on Signal Processing 48(3):824-831
20. Hua Y, Nikpour M, Stoica P (2001) Optimal reduced-rank estimation and filtering. IEEE Transactions on Signal Processing 49(3):457-469
21. Hotelling H (1936) Relations between two sets of variates. Biometrika 28:321-377
22. Höskuldsson A (1988) PLS regression methods. Journal of Chemometrics 2:211-228
An Elasticity Penalty: Mixing FEM and Nonrigid Registration D. Loeckx, L. Roose, F. Maes, D. Vandermeulen and P. Suetens Medical Image Computing (ESAT/PSI), Faculty of Engineering, Katholieke Universiteit Leuven, Belgium Abstract — Voxel-intensity based nonrigid image registration can be formulated as an optimization problem whose goal is to minimize a cost function consisting of two parts. One part characterizes the similarity between both images. The other part regularizes the transformation and/or penalizes improbable or impossible deformations. In this paper, we extend previous work on nonrigid registration by introducing a new penalty term expressing the elastic energy of the deformation, using the same expression as used in finite element modeling (FEM). We compare the new elasticity penalty, a volume penalty and a rigidity penalty with a biomechanical mass-tensor model (MTM), equivalent to FEM. Comparison is carried out on artificial images and volunteer breast MR images. We show that the results obtained using the elasticity penalty approximate the MTM registration to within less than 1 voxel for the artificial images and less than 3 voxels for the clinical images. The errors are mainly situated near the edges of the registered structures, and therefore can be attributed to differences in boundary conditions. We also show that the elasticity penalty, volume penalty and rigidity penalty give similar results. Keywords — image registration, FEM, B-splines, penalty
I. INTRODUCTION Image registration is a common task in medical image processing. For applications where a rigid or affine transformation is appropriate, several fast, robust and accurate algorithms have been reported and validated [1]. However, in many cases the images to be registered show local differences, such that overall affine registration is insufficient and nonrigid image matching is required for accurate local image alignment. Voxel-intensity based nonrigid image registration can be formulated as an optimization problem whose goal is to minimize a cost function consisting of two parts. The first part is the driving force behind the registration process and aims to maximize the similarity between the two images. The second part, which is often referred to as the regularization or penalty part, constrains the transformation between the source and target images to avoid impossible or improbable transformations. Several penalty terms have been proposed in the literature, e.g. modeling the deforming image as a thin plate [2] or penalizing (local) deviations from rigidity [3] or volume changes [4]. As an alternative to voxel-intensity based image registration, several authors have used surface-based registration
[5,6]. The deformation within the objects is then guided by the (assumed) material properties of the underlying tissue, in general represented by a finite element model (FEM). As boundary condition, the alignment of two surfaces or a set of landmark points is used. However, except for the boundary conditions, those approaches do not account for the underlying intensity information. Moreover, they require the segmentation of the boundary surface or landmark points, which is often cumbersome and error-prone. Therefore, we introduce an elasticity penalty term to combine the FEM-based and intensity-based approaches. The elasticity penalty expresses the elastic energy caused by the deformation field. This elastic energy is a physical property of the underlying tissue, whereas previous penalties either model a non-physical underlying structure [2] or model deviations from a physical property such as volume [4] or rigidity [3] that should be constant. Because the elasticity penalty is calculated in each voxel, there is no need to accurately segment the relevant boundaries in the images, as needed for FEM. We compare the results obtained using our penalty with the results obtained using a FEM-based registration model, both on artificial and on clinical images.

II. METHODS

A. Nonrigid Registration Model

To register a floating image F to a reference image R we need to determine the optimal set of parameters μ for the transformation g(xR;μ) such that F'(xR) = F(g(xR;μ)) is in correspondence with R. Several transformation models have been proposed for nonrigid image registration. We adopt a tensor-product B-spline model, as proposed by Rueckert et al. [2,7]. The B-spline model is situated between a global rigid registration model and a local nonrigid model at voxel scale. Using second-degree B-splines, the 3D transformation field is given by

g(xR; μ) = Σ_ijk μ_ijk β²_Δx(x − k_i^x) β²_Δy(y − k_j^y) β²_Δz(z − k_k^z)   (1)

with Δx, Δy, Δz the mesh spacing. The transformation is governed by the displacement vectors μ_ijk located at the tensor-product knots k_ijk = (k_i^x, k_j^y, k_k^z).
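As a minimal sketch of evaluating the tensor-product B-spline field of eq. (1) at a single point, the snippet below assumes the common normalization β²_Δ(u) = β²(u/Δ) and treats the sum as the B-spline displacement; the function names and array layout are illustrative, not the authors' implementation.

```python
import numpy as np

def beta2(u):
    """Second-degree (quadratic) B-spline basis function, support [-1.5, 1.5]."""
    u = np.abs(u)
    return np.where(u <= 0.5, 0.75 - u**2,
           np.where(u <= 1.5, 0.5 * (1.5 - u)**2, 0.0))

def bspline_displacement(x, mu, knots, spacing):
    """Evaluate the tensor-product B-spline sum of eq. (1) at 3-D point x.
    mu: (nI, nJ, nK, 3) displacement vectors at the tensor-product knots;
    knots: (kx, ky, kz) 1-D knot coordinate arrays; spacing: (dx, dy, dz)."""
    kx, ky, kz = knots
    dx, dy, dz = spacing
    bx = beta2((x[0] - kx) / dx)               # basis weights along each axis
    by = beta2((x[1] - ky) / dy)
    bz = beta2((x[2] - kz) / dz)
    w = np.einsum('i,j,k->ijk', bx, by, bz)    # tensor-product weights
    return np.einsum('ijk,ijkc->c', w, mu)     # weighted sum of mu_ijk
```

A useful sanity check is the partition-of-unity property of B-splines: setting all μ_ijk to the same vector must reproduce that vector at any interior point.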
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 709–712, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
B. Cost Function

The proposed cost function Ec consists of a similarity measure Es and one or more penalties. Some popular penalties model the rigidity [3] or volume preservation [4] of the transformation, which we denote by Er and Ev, respectively. Moreover, within this article, we introduce a new penalty term Ee that models the elastic energy in the transformation field. Thus, the total cost function becomes Ec = ws Es + wr Er + wv Ev + we Ee, with ws, wr, wv and we weight factors that determine the relative importance of each term. For the optimization, we adopt a limited-memory quasi-Newton method [8]. To avoid discretization errors, the derivatives of all terms of Ec are calculated analytically.

Elasticity Penalty. The main contribution of this paper is the introduction of an elasticity penalty. The penalty calculates the internal potential energy Ee of an elastic body that undergoes the transformation. This internal potential energy can be expressed as

Ee(μ) = ∫_{R∩F'} σ^T(x;μ) ε(x;μ) dx

with strain vector

ε = ( ∂gx/∂x, ∂gy/∂y, ∂gz/∂z, ∂gy/∂x + ∂gx/∂y, ∂gz/∂y + ∂gy/∂z, ∂gz/∂x + ∂gx/∂z )^T .   (2)

Using the B-spline derivative property dβ^d(u)/du = β^(d−1)(u+1/2) − β^(d−1)(u−1/2), (2) can easily be computed from (1). Assuming linear elasticity, σ = (σx, σy, σz, τxy, τyz, τxz)^T = Dε, with D the elasticity matrix representing the material properties. For an isotropic material, D is given by

          E(1−ν)        | 1  A  A  0  0  0 |
D = ----------------- × | A  1  A  0  0  0 |
     (1+ν)(1−2ν)        | A  A  1  0  0  0 |   (3)
                        | 0  0  0  B  0  0 |
                        | 0  0  0  0  B  0 |
                        | 0  0  0  0  0  B |

with A = ν/(1−ν), B = (1−2ν)/(2(1−ν)), ν the Poisson's ratio and E the Young's modulus. Thus, Ee(μ) = ∫_{R∩F'} ε^T(x;μ) D ε(x;μ) dx.

Replacing ν and E by ν(x) and E(x) extends the penalty to images consisting of multiple materials. The derivative of Ee(μ) with respect to the individual transformation parameters μijk is

∂Ee(μ)/∂μijk = ∫_{R∩F'} ( [∂ε(x;μ)/∂μijk]^T D ε(x;μ) + ε^T(x;μ) D ∂ε(x;μ)/∂μijk ) dx   (4)

and can easily be calculated using the B-spline derivatives. Similar to Rohlfing et al. [4], we compute the penalty term as a discrete approximation to the continuous integral, calculated over the set of sampled voxels contained in R ∩ F'.

Similarity Measure. We use mutual information of corresponding voxel intensities [9,10] as the similarity measure. To improve its smoothness and to make the criterion differentiable, we construct the joint histogram using Parzen windowing, as proposed by Thévenaz et al. [7,11].

C. Validation

To validate the proposed method, we compared transformation fields obtained by voxel-intensity based registration with transformation fields obtained from a biomechanically based registration method [6,12], which we consider as ground truth. The voxel-intensity based registrations were performed without a penalty and using a volume, rigidity and elasticity penalty with 8 different weights. The weights were varied using powers of 10; for each penalty the lowest weight was chosen such that the obtained deformation field was very close to the deformation field obtained without any penalty. The range was such that the highest weight was so strong that almost no registration took place. Thus, an optimum should be obtained between the maximal and minimal weight. This way, a range from 0.0001 to 1000 for the rigidity and elasticity penalties and from 0.01 to 100000 for the volume penalty was obtained. The biomechanically based method uses a mass tensor model (MTM) to represent the tissue [12]. MTM is an alternative and equivalent formulation of the classical FEM problem. Sliding contacts between the segmented boundary surfaces in the floating and the reference image are imposed as boundary constraints. The registration error is measured by the warping index ϖ [11], which is the root mean square of the local registration error in each voxel, i.e. the difference between the transformation field obtained using our method and the transformation field obtained using MTM. The warping index is expressed in voxels and calculated excluding the background.
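The per-voxel integrand of the elasticity penalty can be sketched as follows, given the 3×3 gradient of the deformation field at a voxel. This is an illustrative sketch of eqs. (2)-(3), not the authors' code; note that the physical elastic energy carries a factor 1/2, which here would simply be absorbed into the weight we.

```python
import numpy as np

def strain_vector(J):
    """Engineering strain vector of eq. (2) from the 3x3 gradient J,
    J[a, b] = d g_a / d x_b."""
    return np.array([J[0, 0], J[1, 1], J[2, 2],
                     J[1, 0] + J[0, 1],      # gamma_xy
                     J[2, 1] + J[1, 2],      # gamma_yz
                     J[2, 0] + J[0, 2]])     # gamma_xz

def elasticity_matrix(E, nu):
    """Isotropic 6x6 elasticity matrix D of eq. (3)."""
    A = nu / (1.0 - nu)
    B = (1.0 - 2.0 * nu) / (2.0 * (1.0 - nu))
    D = np.zeros((6, 6))
    D[:3, :3] = A
    np.fill_diagonal(D[:3, :3], 1.0)
    D[3, 3] = D[4, 4] = D[5, 5] = B
    return E * (1.0 - nu) / ((1.0 + nu) * (1.0 - 2.0 * nu)) * D

def energy_density(J, E=1.0, nu=0.45):
    """Integrand eps^T D eps of the elasticity penalty at one voxel."""
    eps = strain_vector(J)
    return eps @ elasticity_matrix(E, nu) @ eps
```

One property worth noting: because (2) is the linear (infinitesimal) strain, a small pure rotation yields a zero strain vector and hence zero penalty, while any dilation or shear is penalized.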
III. EXPERIMENTS A. Phantom experiments For the phantom experiments, we used a sphere as reference and an ellipsoid as floating image. In case (A), the volume of both objects was identical, while in (B) the ellipsoid volume was 10% larger than the sphere. All images were 250×250×250 voxels. We have registered the floating images to the reference images using a multiresolution strategy with four stages. In the first two stages, the images were downscaled twice in each dimension, in the last two stages they were downscaled once. In stage 1, a mesh spacing of 64 voxels was used.
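The warping index ϖ used for validation (root-mean-square local registration error in voxels, excluding the background) can be sketched as:

```python
import numpy as np

def warping_index(field_a, field_b, background_mask):
    """RMS voxel-wise difference between two transformation fields.
    field_a, field_b: (..., 3) displacement fields in voxel units;
    background_mask: boolean array, True where voxels are background."""
    err = np.linalg.norm(field_a - field_b, axis=-1)   # local registration error
    return np.sqrt(np.mean(err[~background_mask] ** 2))
```

For example, two identical fields give ϖ = 0, and a uniform one-voxel offset along one axis gives ϖ = 1.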
This was gradually reduced to 32 voxels in stages 2 and 3 and 16 voxels in stage 4. The calculation time was about 5 minutes per registration. Two different values of the Poisson's ratio, ν=0.20 and ν=0.45, were used. The penalty was applied over the volume of the sphere only. The transformation fields obtained from the voxel-intensity based registration were compared to the transformation field obtained by the MTM method, using the same Poisson's ratios. Results are shown in Fig. 1. It can be seen that, in the case of volume preservation (Fig. 1, (a) and (b)), the elasticity penalty with the correct ν yields good results, but slightly better results can be obtained using a volume or rigidity penalty. When the volume is not preserved, the best results are obtained using the appropriate elasticity penalty. Fig. 2 pictures the warping index in the central slice for the best results for each penalty in Fig. 1(b). The areas of larger error are mainly situated at the edges.

Fig. 1 Warping index ϖ for the artificial images (a,b) without and (c,d) with a volume difference between both images, for (a,c) ν=0.20 and (b,d) ν=0.45. For low weights, the warping index approaches the warping index without a penalty; for high weights, it is similar to the one obtained without registration. The optimum lies somewhere in between.

Fig. 2 Warping index in the central slice for the best results in Fig. 1(b).

B. Breast MR registration
Three normal volunteers were scanned twice using a clinical breast MR imaging protocol (a 3-D FLASH sequence with TR=9 ms, TE=4.7 ms, flip angle=25°, FOV=384 mm and axial slice orientation), resulting in images of 384×384×64 voxels with a voxel size of 1.04 mm × 1.04 mm × 2 mm. After the first scan, the subjects were asked to step out of the MR scanner, to stand upright, and to reposition themselves in the scanner. Since the two images were acquired immediately after each other, there is no anatomical change between the two images and all image differences are due to different positioning. Four multiresolution stages were used, downscaling the images twice in the X- and Y-directions and once in the Z-direction for the first two stages, and half of this in the last two stages. The initial mesh spacing was 64 for the X and Y dimensions and 32 for the Z dimension. This spacing was halved after stages 1 and 3. Total calculation time was about 30 minutes per image pair. The Poisson's ratio was kept constant at ν=0.45. The same penalty weights as for the phantom data were used, still leading to almost no deformation for the highest weights and almost no penalty for the lowest weights. To avoid the need for segmentation of the breast tissue, the penalty was applied to the whole image.
Fig. 3 Warping index for the volunteer images.

The transformation fields obtained from the voxel-intensity based registration were compared to the transformation field obtained by the MTM method with ν=0.45. The results for each image pair can be seen in Fig. 3. Again, the elasticity penalty leads to the best results, yet the difference with the volume and rigidity penalties is small. The warping error for the best results for Volunteer 2 is shown in Fig. 4. The major errors occur near the (inner) edges of the breasts.
Fig. 4 Warping index in the central slice for the best results of Volunteer 2.
IV. DISCUSSION

We have introduced a new penalty term for nonrigid image registration, modeling the elastic energy generated by the deformation. This energy is calculated from the same modeling equations as used in the standard FEM approach and depends on the Poisson's ratio ν and Young's modulus E of the underlying tissue. The artificial images show that the use of the correct ν leads to the best results, yielding an improvement in warping index of 0.25 voxels. As can be seen in (3), the penalty is proportional to E; thus the choice of E only influences the value of the optimal we.

We have compared the deformation fields obtained using several penalties with a ground truth calculated from a biomechanically based MTM registration. However, the MTM registration starts from segmented surfaces. As it is difficult to consistently define corresponding surfaces in medical images, the MTM method itself is not perfect. This, together with inhomogeneities caused by the meshing of the volumes to be registered, introduces errors in the deformation fields obtained from the MTM registration, mainly at the edges. As can be seen in Fig. 2 and Fig. 4, areas of large transformation difference between the FEM and our method are mainly situated at the edges. Therefore, we believe that a significant part of the warping index can be attributed to the MTM imperfections.

The main hurdle in applying the elasticity penalty, or any other penalty, in practice is the determination of the correct weight factor we. This factor expresses the ratio between the belief in the mechanical model and in the similarity measure, and is problem-dependent. For high-quality images the elasticity penalty weight can be quite low, as it will mainly be needed to normalize the transformation in homogeneous regions. For low-quality images, or images with significant inter-modality differences, the penalty weight should be higher, as it will have to avoid local optima or non-physical image correspondences.

Our experiments show that the elasticity, volume and rigidity penalties lead to similar results. However, we prefer the elasticity penalty, as it expresses a physical property of the deformation field. For example, a map of the local elastic energy can be used to identify the parts of the image in which the algorithm creates high transformation energy. The choice between a biomechanical registration approach and a B-spline approach using a FEM penalty is less straightforward. The FEM approach starts from corresponding surfaces and requires a mesh of the internal volume. Then, totally ignoring the intensities, it very quickly provides the registration results. The B-spline approach has no need for surfaces, and thus no need for segmentation, yet it requires a longer calculation time.

V. CONCLUSION

We have introduced an elasticity penalty for nonrigid image registration, modeling the mechanical properties of the underlying tissue. We show that the results obtained using this penalty approximate the MTM registration to within 1 voxel for the artificial images and 2 to 3 voxels for the clinical images. The errors are mainly situated at the edges of the registered structures, and can therefore largely be attributed to differences in boundary conditions. Although the elasticity penalty, volume penalty and rigidity penalty give similar results, we prefer the elasticity penalty as it models a physical property of the deformation.
REFERENCES

1. West J et al. (1997) Comparison and evaluation of retrospective intermodality brain image registration techniques. J Comput Assist Tomogr 21(4):554–566
2. Rueckert D, Sonoda LI, Hayes C, Hill DL, Leach MO, Hawkes DJ (1999) Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans. Med. Imag. 18(8):712–721
3. Loeckx D, Maes F, Vandermeulen D, Suetens P (2004) Non-rigid image registration using free-form deformations with a local rigidity constraint. MICCAI 2004, LNCS 3216:639–646
4. Rohlfing T, Maurer CR, Bluemke DA, Jacobs MA (2003) Volume-preserving nonrigid registration of MR breast images using free-form deformation with an incompressibility constraint. IEEE Trans. Med. Imag. 22(6):730–741
5. Ferrant M, Nabavi A, Macq B, Jolesz FA, Kikinis R, Warfield SK (2001) Registration of 3-D intraoperative MR images of the brain using a finite-element biomechanical model. IEEE Trans. Med. Imag. 20(12):1384–1397
6. Roose L, Loeckx D, Mollemans W, Maes F, Suetens P (2008) Adaptive boundary conditions for physically based follow-up breast MR image registration. MICCAI 2008, LNCS 5242:839–846 (in press)
7. Loeckx D (2006) Automated nonrigid intra-patient image registration using B-splines. PhD Thesis, K.U.Leuven, Leuven, Belgium, http://hdl.handle.net/1979/298
8. Byrd R, Lu P, Nocedal J, Zhu C (1995) A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput. 16(5):1190–1208
9. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P (1997) Multimodality image registration by maximization of mutual information. IEEE Trans. Med. Imag. 16(2):187–198
10. Viola P, Wells WM (1997) Alignment by maximization of mutual information. International Journal of Computer Vision 24(2):137–154
11. Thévenaz P, Unser M (2000) Optimization of mutual information for multiresolution image registration. IEEE Trans. Image Process. 9(12):2083–2099
12. Cotin S, Delingette H, Ayache N (2000) A hybrid elastic model allowing real-time cutting, deformations and force-feedback for surgery training and simulation. The Visual Computer 16(8):437–452

Corresponding Author: Dirk Loeckx
Email: [email protected]
Evaluation of the biodistribution of In-111 labeled cationic Liposome in mice using multipinhole SPECT Technique

S.O. Viehoever1, D. Buchholz1, H.W. Mueller1, O. Gottschalk2, A. Wirrwar1

1 Dept. of Nuclear Medicine, Heinrich-Heine University Düsseldorf, 40225 Düsseldorf, Germany
2 Department of Surgery, LMU Munich, Germany
Abstract — Today, the biodistribution of liposome-based agents after administration in small animals can be determined using multipinhole collimation SPECT imaging. With this imaging technique, liposomes labeled with a gamma (photon) emitting radionuclide can be monitored in vivo. This paper gives a brief account of the biodistribution and uptake of radiolabeled cationic liposomes in vivo in mice using multipinhole collimation SPECT imaging. Keywords — Cationic liposome, multipinhole collimator, SPECT, biodistribution and organ uptake
I. INTRODUCTION In vivo molecular imaging has changed the way in which researchers study biological processes. This kind of research calls for imaging with high sensitivity and high resolution. In the past decades, several advances in gamma camera design have enabled more preclinical and clinical testing of liposome agents in small animals. The distribution of liposomes in the organs, and the localization of the sites of radiolabeled liposome uptake for disease diagnosis in mice after in vivo administration, can be determined using multipinhole SPECT imaging. The science of liposomes as a delivery system for drugs and vaccines has evolved through many phases and reached a peak with the in vivo studies of injected liposome-based therapeutics. SPECT (single photon emission computed tomography) requires only small amounts of tracer (radioactivity), which do not interfere with the biodistribution of the labeled liposomes. Magnetic resonance imaging and computed tomography provide higher resolution images than SPECT, but they need larger amounts of contrast material to obtain image contrast, which can alter the normal biodistribution of the agent being tracked and also increases the risk of adverse reactions induced by the contrast agent. Compared with other imaging modalities, multipinhole SPECT has the ability to image small organs in the body, increasing the sensitivity to >1000 cps/MBq while maintaining the reconstructed resolution of a single pinhole of approximately 1 mm FWHM.
II. SPECT-SCANNER AND COLLIMATORS The SPECT scanner (Prism 2000S, Philips, Eindhoven, the Netherlands) consists of two detector heads with a NaI(Tl) crystal of 8 mm thickness and a rectangular field of view of 390 × 510 mm². The intrinsic spatial resolution is 3.8 mm and the energy resolution for Tc-99m is 10%. A 256 × 256 acquisition matrix was used to improve image quality. The multipinhole collimators are made of 12 mm lead shielding and an aperture made of 10 mm thick tungsten alloy. The height of each collimator is 150 mm. For our mouse studies, we used a collimator with 10 holes, each measuring 1.5 mm in diameter, which yields a resolution of 1.2 mm FWHM. III. CATIONIC LIPOSOME Cationic liposomes are considered one of the colloidal carrier systems for drug delivery; hence, they are drug delivery vehicles. The binding and uptake of cationic liposomes occur very fast, which is why their formulation development and in vivo implications are fundamentally different from those of long-circulating liposomes [1]. IV. LABELLING OF LIPOSOME 200 MBq of 111In-Cl3 was dissolved in 320 μl of NaCl solution. 175 μl of the 111InCl3 solution was mixed with 175 μl of sodium citrate dihydrate buffer (pH = 4.5). 320 μl of this mixture was incubated for 30 minutes with 320 μl of cationic liposome solution, yielding 640 μl of radiolabeled cationic liposomes (DOTAP/DOPC/DMPE-DTPA (50/49/1)) with 80 MBq of activity. Quality control was carried out with thin-layer chromatography, which showed 98% binding of the label to the liposomes in the injected solution. V. MATERIALS AND METHODS Six wild-type male Balb/c mice, 6-8 weeks old and each weighing 21.14±1.1 g, were used for the study. The mice
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 713–715, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
were anaesthetized by injecting 0.8 ml of Ketavet/Rompun, and a catheter was inserted into the tail vein for the infusion of the radiolabeled liposomes. Each mouse was positioned under the SPECT camera in a special holder. Radiolabeled liposomes (11 MBq) were injected, and dynamic images were acquired for 30 minutes at 47 seconds per projection on the first day for each mouse. On the second day, DPD was injected into the mice to allow identification of the mouse morphology. The mice were measured for 10 minutes at 80 seconds per projection to acquire static images, and on the third day each static measurement lasted 15 minutes at 120 seconds per projection. The acquired images were transferred and reconstructed using the HiSPECT software for the biodistribution study. ROI analysis was performed by placing regions around the lungs, liver and spleen, and the ROI statistics and mean deviations were evaluated using the Amide software.
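As an illustrative sketch (the function and ROI names are hypothetical, not from the paper), the percent uptake per organ reported below can be computed from a reconstructed volume and boolean ROI masks, here using one simple normalization: the ROI counts as a fraction of the total counts in the image.

```python
import numpy as np

def percent_uptake(image, rois):
    """Percent uptake per organ: ROI counts over whole-image counts.
    image: reconstructed SPECT volume (counts);
    rois: dict mapping organ name -> boolean mask of the same shape."""
    total = image.sum()
    return {name: 100.0 * image[mask].sum() / total
            for name, mask in rois.items()}
```

For example, a toy image in which 30% of all counts fall in a "lung" ROI and 70% in a "liver" ROI yields uptakes of 30% and 70%, respectively.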
Fig. 2. Binding images of cationic liposomes: lung uptake at 27.5 min p.i.; liver and spleen uptake at 24 h p.i.; bone uptake at 24 h p.i.
VI. RESULTS

A. Uptake of Liposome

During the first 30 minutes, most of the liposomes were trapped in the lungs; after 24 h and 48 h there was a higher uptake in the liver, spleen and bladder. The image on the right-hand side represents the distribution of Tc-99m labelled DPD, a clinical bone marker, which was injected two hours before the 48 h measurement of the liposomes. Hence, the binding and uptake of cationic liposomes in the organs occurred very fast.

VII. DISCUSSION

A fast uptake and biodistribution of liposomes was observed in the lungs (49.41%), which decreased gradually to 3.98%. In the liver there was a gradual uptake of 7.08%, which increased to 47.36% and then gradually decreased to 43.85%; in the spleen, an uptake of 0.13% was seen, which increased steadily to 5.10% (Fig. 3).

Fig. 3. Graph representation of liposome uptake (lung, liver and spleen) against time, from 2.5 min to 48 h p.i.
Fig. 1. Multipinhole SPECT images of the In-111 labelled liposome uptake behaviour in a male Balb/c mouse after 7.5 min, 15 min, 30 min, 24 h and 48 h
VIII. CONCLUSION
During the first 30 minutes, most of the cationic liposomes were trapped in the lungs; after 24 h and 48 h there was a higher uptake in the liver, spleen and bladder. Therefore, liposomes were released from the lungs into the liver.
Hence, we conclude that, with the aid of the multipinhole SPECT technique, the biodistribution and the fast uptake of radiolabeled cationic liposomes in the organs of mice can be clearly visualized without sacrificing image resolution or sensitivity.
ACKNOWLEDGEMENT

The authors would like to thank the Bundesministerium für Bildung und Forschung (BMBF), Germany, for supporting and sponsoring this project.

REFERENCES

1. Gregoriadis G (ed.) (2007) Liposomes Technology, 3rd edition, Vol. 3: Interactions of liposomes with the biological milieu. Informa Healthcare, New York, London
2. Wirrwar A, Nikolaus S, Viehöver S, Schramm NU, Mueller H-W (2006) Erste dynamische Studien des Rezeptorstoffwechsels von Ratten mit Multipinhole-SPECT. Supplement to the Annual Meeting of the Deutsche Gesellschaft für Nuklearmedizin (DGN), Berlin, Germany
3. Nikolaus S, Wirrwar A, Antke C, Kley K, Muller H-W (2005) State-of-the-art of small animal imaging with high-resolution SPECT. NuklearMedizin 44(6):257-266
Multimodal Medical Case Retrieval using Dezert-Smarandache Theory with A Priori Knowledge

G. Quellec1,2, M. Lamard3,2, G. Cazuguel1,2, B. Cochener3,2,4 and C. Roux1,2

1 INSTITUT TELECOM; TELECOM Bretagne; UEB; Dpt ITI, Brest, F-29200 France
2 Inserm, U650, IFR 148 ScInBioS - Science et Ingénierie en Biologie-Santé, Brest, F-29200 France
3 Univ Bretagne Occidentale, Brest, F-29200 France
4 CHU Brest, Service d'Ophtalmologie, Brest, F-29200 France
Abstract — In this paper, we present a Case Based Reasoning (CBR) system for the retrieval of medical cases made up of a series of images with semantic information (such as the patient age, sex and medical history). Indeed, medical experts generally need varied sources of information, which might be incomplete, uncertain and conflicting, to diagnose a pathology. Consequently, we derive a retrieval framework from the Dezert-Smarandache theory, which is well suited to handle those problems. The system is designed so that a priori knowledge and heterogeneous sources of information can be integrated in the system: in particular images, indexed by their digital content, and symbolic information. The method is evaluated on a classified diabetic retinopathy database. On this database, results are promising: the retrieval precision at five reaches 81.17%, which is almost twice as good as the retrieval of single images alone.

Keywords — Case based reasoning, Image indexing, Dezert-Smarandache theory, Contextual information, Diabetic Retinopathy.
I. INTRODUCTION

In medicine, the knowledge of experts is a mixture of textbook knowledge and experience of real-life clinical cases. Consequently, there is a growing interest in case-based reasoning (CBR), introduced in the early 1980s, for the development of medical decision support systems [1]. The underlying idea of CBR is the assumption that analogous problems have similar solutions, an idea backed up by physicians' experience. In CBR, the basic process of interpreting a new situation revolves around the retrieval of relevant cases in a case database. The retrieved cases are then used to help interpret the new one. We propose in this article a CBR system for the retrieval of medical cases made up of a series of images with contextual information. Textbook knowledge about the contextual information is integrated in the proposed system. It is applied to the diagnosis of Diabetic Retinopathy (DR). Indeed, to diagnose DR, physicians analyze series of multimodal photographs together with contextual information like the patient age, sex and medical history.

When designing a CBR system to retrieve such cases, several problems arise. We have to aggregate heterogeneous sources of evidence (images, nominal and continuous variables) and to manage missing information. These sources may be uncertain and conflicting. As a consequence, we applied the Dezert-Smarandache Theory (DSmT) of plausible and paradoxical reasoning, proposed in recent years [2], which is well suited to fuse such sources of evidence.

II. DIABETIC RETINOPATHY DATABASE

Fig. 1 Photograph series of a patient eye. Images (a), (b) and (c) are photographs obtained by applying different color filters on the camera lens. Images (d) to (j) form a temporal angiographic series: a contrast product is injected and photographs are taken at different stages (early (d), intermediate (e)-(i) and late (j)).

Diabetes is a metabolic disorder characterized by sustained inappropriately high blood sugar levels. This progressively affects blood vessels in many organs, including the retina, which may lead to blindness. The database is made up of 63 patient files containing 1045 photographs altogether. Patients have been recruited at Brest University Hospital since June 2003 and images were acquired by experts using a Topcon Retinal Digital Camera (TRC-50IA) connected to a computer. Images have a definition of 1280 pixels/line for 1008 lines/image. The contextual information available is the patients' age and sex and structured medical information (about the general clinical context, the diabetes context, eye symptoms and maculopathy). Thus, at most, patient records are made up of 10 images per eye (see
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 716–719, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
figure 1) and of 13 contextual attributes; 12.1% of these images and 40.5% of these contextual attribute values are missing. The disease severity level, according to the ICDRS classification [3], was determined by experts for each patient.

Table 1 Patient disease severity level distribution

Disease severity                 Number of patients
No apparent DR                    7
Mild non-proliferative DR        11
Moderate non-proliferative DR    18
Severe non-proliferative DR       9
Proliferative DR                  8
Treated / non-active DR          10
Twelve a priori rules about contextual information are available, including for instance: "After 12 years with diabetes type I, 90 to 95% of the patients have DR; among them, 40% have a proliferative DR."

III. THE DEZERT-SMARANDACHE THEORY

The Dezert-Smarandache Theory allows combining any type of independent sources of information represented in terms of belief functions. It generalizes the Dempster-Shafer theory. It is particularly well suited to fuse uncertain, highly conflicting and imprecise sources of evidence [2]. Let θ = {θa, θb, ...} be a set of hypotheses under consideration for the fusion problem; θ is called the frame of discernment. A belief mass m(A) is assigned to each element A of the hyper-power set D(θ), i.e. the set of all composite propositions built from elements of θ with the ∩ and ∪ operators, such that m(∅) = 0 and Σ_{A∈D(θ)} m(A) = 1. The belief mass functions specified by the user for each source of information, noted m_j, j = 1..N, are fused into the global mass function m_f, according to a given rule of combination. Several rules have been proposed to combine mass functions, including the hybrid rule of combination or the PCR (Proportional Conflict Redistribution) rules [2]. It is possible to introduce constraints in the model [2]: we can specify pairs of incompatible hypotheses (θu, θv), i.e. each subset A of θu ∩ θv must have a null mass, noted A ∈ C(θ). Once the fused mass function m_f has been computed, a decision function is used to evaluate the probability of each hypothesis: the credibility, the plausibility or the pignistic probability [2].
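The mass-function machinery can be illustrated on a two-hypothesis frame. DSmT's hybrid and PCR rules are more elaborate; the sketch below implements only the classical Dempster rule that DSmT generalizes, with illustrative masses (not values from the paper):

```python
from itertools import product

# Frame of discernment {Q, notQ}; focal sets are frozensets of hypotheses.
Q, NQ = "Q", "notQ"
THETA = frozenset({Q, NQ})

def dempster(m1, m2):
    """Combine two belief mass functions with Dempster's rule:
    m(A) = sum_{B & C = A} m1(B) m2(C) / (1 - K), K the conflicting mass."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc          # mass falling on the empty set
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

m1 = {frozenset({Q}): 0.6, THETA: 0.4}                        # source 1
m2 = {frozenset({Q}): 0.5, frozenset({NQ}): 0.2, THETA: 0.3}  # source 2
m12 = dempster(m1, m2)
```

The fused masses again sum to one; DSmT's PCR rules differ in how the conflicting mass `K` is redistributed rather than discarded proportionally.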
IV. DEZERT-SMARANDACHE THEORY BASED RETRIEVAL

Let x_q be a case placed as a query. We want to rank the cases in the database by decreasing order of relevance for x_q. For that purpose, we estimate for each case x in the database the belief mass function for the frame of discernment θ1 = {Q, Q̄}, where Q (resp. Q̄) means "x is relevant (resp. irrelevant) for x_q" (Q and Q̄ are incompatible hypotheses). First, a mass function m_j is defined for each feature F_j, where F_j denotes either an imaging modality or a contextual attribute; m_j is based on the similarity between the values taken by x and x_q for F_j (see section IV.B below). Then, another mass function m_apk is derived from the a priori knowledge about contextual information (see section IV.C below). Finally, all the mass functions are fused to estimate the belief degree in Q (see section IV.D). To define m_j, we first define a finite number of states f_jk for F_j and we compare the membership degrees α_jk of x and x_q to each state f_jk. If F_j is a discrete variable, we associate a state with each possible value of F_j. If F_j is an image, the following procedure is applied.

A. Integrating images in the system

To define a finite number of states for an image feature F_j (an imaging modality), we follow the usual steps of Content-Based Image Retrieval (CBIR) [4]: 1) building a signature for each image (i.e. extracting a feature vector summarizing its numerical content), and 2) defining a distance measure between two signatures. Thus, measuring the distance between two images comes down to measuring the distance between two signatures. Similarly, to define variable states, we cluster similar image signatures (according to the defined distance measure) and associate a state of F_j with each image cluster. In previous studies, we proposed to compute a signature for images from their wavelet transform (WT) [5]. These signatures model the distribution of the WT coefficients in each subband of the decomposition. The associated distance measure D [5] computes the divergence between these distributions. We used these signatures and distance measure to cluster similar images. Any clustering algorithm can be used, provided that the distance measure between feature vectors can be specified. We used FCM (Fuzzy C-Means) [6], one of the most common algorithms, and replaced the Euclidean distance by D.
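The clustering step can be sketched as plain FCM with the distance left as a parameter, which is the structure the substitution of D for the Euclidean distance requires. This is a generic sketch (the signature divergence D itself is not reproduced; a Euclidean placeholder stands in for it), and the weighted-mean center update is the standard FCM step, an assumption when a non-Euclidean distance is plugged in:

```python
import numpy as np

def fcm(X, n_clusters, dist, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy C-Means with a pluggable distance function `dist`."""
    rng = np.random.default_rng(seed)
    U = rng.random((n_clusters, X.shape[0]))
    U /= U.sum(axis=0)                     # memberships: columns sum to 1
    for _ in range(n_iter):
        Um = U ** m
        C = (Um @ X) / Um.sum(axis=1, keepdims=True)  # weighted-mean centers
        D = np.array([[dist(x, c) for x in X] for c in C]) + 1e-12
        p = 2.0 / (m - 1.0)
        U_new = 1.0 / (D ** p * (D ** -p).sum(axis=0))  # standard FCM update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, C

# Toy data: two well-separated 1-D "signatures".
X = np.array([[0.0], [0.1], [5.0], [5.1]])
U, C = fcm(X, 2, lambda a, b: np.linalg.norm(a - b))
labels = U.argmax(axis=0)                  # hard assignment per point
```

Each image cluster then becomes one state f_jk, and the membership column of a signature gives its degrees α_jk.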
B. Estimating the mass functions

To compute the mass function m_j for a given feature F_j, we first estimate a degree of match dm_j(x, x_q) between x and x_q. We assume that the cases of the same class are predominantly in a subset of states of F_j. So, in order to estimate dm_j(x, x_q), we use a correlation measure S_jkl between two feature states f_jk and f_jl, regarding the class of the cases at these states. To compute S_jkl, we first compute the mean membership D_jkc (resp. D_jlc) of the cases y in a given class c to the state f_jk (resp. f_jl):

  D_jkc = β · ( Σ_y δ(y, c) α_jk(y) ) / ( Σ_y δ(y, c) )
  Σ_{c=1..C} D_jkc² = 1, ∀(j, k)        (1)

where δ(y, c) = 1 if y is in class c, δ(y, c) = 0 otherwise, and β is a normalizing factor. S_jkl and dm_j(x, x_q) are given by equations (2) and (3), respectively:

  S_jkl = Σ_{c=1..C} D_jkc · D_jlc        (2)

  dm_j(x, x_q) = Σ_k Σ_l α_jk(x) · S_jkl · α_jl(x_q)        (3)

Then, we define a test T_j on the degree of match: T_j is true if dm_j(x, x_q) ≥ τ_j and false otherwise, 0 ≤ τ_j ≤ 1. The sensitivity (resp. the specificity) of test T_j represents the degree of confidence in a positive (resp. negative) answer to the test. Whether the answer is positive or negative, Q ∪ Q̄ is assigned the degree of uncertainty. The mass functions are then assigned according to equation (4) if T_j is true, or equation (5) otherwise:

  m_j(Q) = P(T_j | Q) = sensitivity(T_j)
  m_j(Q ∪ Q̄) = 1 − m_j(Q)
  m_j(Q̄) = 0        (4)

  m_j(Q) = 0
  m_j(Q ∪ Q̄) = 1 − m_j(Q̄)
  m_j(Q̄) = P(T̄_j | Q̄) = specificity(T_j)        (5)

To calibrate the retrieval system, we learn τ_j from the database so that T_j is both sensitive and specific. As τ_j increases, sensitivity increases and specificity decreases. So, we set τ_j at the intersection of the two curves "sensitivity according to τ_j" and "specificity according to τ_j"; these curves are built from each pair of examples in the database (one playing the role of x and the other the role of x_q). τ_j is searched for by the bisection method.

C. Integrating contextual a priori knowledge

The a priori knowledge about contextual information associates features with a severity level or a disjunction of severity levels: L0 = {no apparent DR}, L1 = {mild non-proliferative DR, moderate non-proliferative DR, severe non-proliferative DR}, L2 = {proliferative DR}, L3 = {treated DR}. From these rules, we want to derive a degree of match between x and x_q. First, we define a second frame of discernment θ2 = {L0, L1, L2, L3}. For each case y considered (either x or x_q), we fuse the conclusions of all the rules r_i applying to that case as follows. For each rule r_i with conclusion L_i ∈ θ2, we define a mass function m'_i as in equation (6):

  m'_i(L_i) = sensitivity(r_i)
  m'_i(L0 ∪ L1 ∪ L2 ∪ L3) = 1 − m'_i(L_i)        (6)

All the m'_i mass functions are then fused within θ2 and the credibility Bel(L_i) and plausibility Pl(L_i) of each hypothesis L_i, i ∈ {0, 1, 2, 3}, are computed. We define a credibility vector Bel(y) = (Bel(L0), Bel(L1), Bel(L2), Bel(L3)) and a plausibility vector Pl(y) = (Pl(L0), Pl(L1), Pl(L2), Pl(L3)) for y. From these two vectors, evaluated for x and x_q, we derive an estimation Bel(x, x_q) (resp. Pl(x, x_q)) of the credibility (resp. the plausibility) that x is relevant for x_q:

  Bel(x, x_q) = Bel(x) · Bel(x_q)^T
  Pl(x, x_q) = Pl(x) · Pl(x_q)^T        (7)

Finally, these values are translated into a mass function m_apk for the frame of discernment θ1 (defined in section IV). For that purpose, we use the two following equations, relating the belief functions and a mass function m within a frame of discernment with two exclusive hypotheses such as θ1:

  Bel(Q) = m(Q)
  Pl(Q) = m(Q) + m(Q ∪ Q̄)        (8)

Applying equation (8) to Bel(x, x_q), Pl(x, x_q) and m_apk, we obtain the following mass function:

  m_apk(Q) = Bel(x, x_q)
  m_apk(Q ∪ Q̄) = Pl(x, x_q) − Bel(x, x_q)
  m_apk(Q̄) = 1 − m_apk(Q) − m_apk(Q ∪ Q̄)        (9)
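Equations (8) and (9) amount to a change of representation from (Bel, Pl) to a mass function on the two-hypothesis frame θ1; a minimal sketch with illustrative credibility/plausibility values (not from the paper):

```python
def mass_from_bel_pl(bel_q, pl_q):
    """Invert eq. (8) on a frame {Q, notQ}: Bel(Q) = m(Q) and
    Pl(Q) = m(Q) + m(Q u notQ), which yields eq. (9)."""
    m_q = bel_q                  # m_apk(Q)
    m_unc = pl_q - bel_q         # m_apk(Q u notQ): the uncertain mass
    m_nq = 1.0 - m_q - m_unc     # m_apk(notQ): whatever remains
    return {"Q": m_q, "Q_or_notQ": m_unc, "notQ": m_nq}

# Illustrative values: Bel(x, x_q) = 0.6, Pl(x, x_q) = 0.8.
m_apk = mass_from_bel_pl(0.6, 0.8)
```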
D. Retrieving the most similar cases

All cases in the database are processed sequentially. For each case x, the mass functions for the frame of discernment θ1 are computed for each feature F_j available for both x and the query x_q (see section IV.B) and for the contextual rules (see section IV.C). The sources available for x_q are then fused with the PCR5 rule [2] and the pignistic probability of Q, noted betP(Q), is computed. The cases are then ranked in decreasing order of betP(Q) and the topmost five results are returned to the user.

Fig. 2 Robustness regarding missing values.
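For the two-hypothesis frame θ1 the pignistic transform used for ranking reduces to betP(Q) = m(Q) + m(Q ∪ Q̄)/2, i.e. the uncertain mass is split equally between the two hypotheses. A minimal sketch with illustrative fused masses (not values from the paper):

```python
def betP_Q(m):
    """Pignistic probability of Q on the frame {Q, notQ}:
    the mass on Q plus half of the uncertain mass m(Q u notQ)."""
    return m["Q"] + 0.5 * m["Q_or_notQ"]

# Illustrative fused mass functions for three database cases.
cases = {
    "case_a": {"Q": 0.5, "Q_or_notQ": 0.3, "notQ": 0.2},
    "case_b": {"Q": 0.7, "Q_or_notQ": 0.1, "notQ": 0.2},
    "case_c": {"Q": 0.2, "Q_or_notQ": 0.6, "notQ": 0.2},
}
# Rank cases by decreasing pignistic probability, as in section IV.D.
ranked = sorted(cases, key=lambda c: betP_Q(cases[c]), reverse=True)
```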
V. RESULTS

The mean precision at five (mp5) of the system, i.e. the mean number of relevant cases among the topmost five results, reaches 75.6%. As a comparison, the mp5 obtained by CBIR (when cases are made up of a single image), with the same image signatures, is 46.1% [5]. To evaluate the contribution of the proposed system for the retrieval of heterogeneous and incomplete cases, it is compared to a linear combination of heterogeneous distance functions managing missing values [7], which is the natural generalization of classic CBR to the studied cases. Its extension to vectors containing images is based on the distance D between image signatures (see section IV.A). An mp5 of 52.3% was achieved by this method. The most significant mass functions are the m_j functions defined for each attribute F_j, as described in section IV.B; indeed, adding the mass function m_apk derived from the contextual rules only leads to an increase of less than 2% in the mp5. To assess the robustness of the method regarding missing information, 1) we generated artificial cases from each case in the database by removing attributes, 2) we placed each artificial case sequentially as a query to the system and 3) we plotted in figure 2 the precision at five of these queries according to the number of available attributes.
VI. CONCLUSION

In this article, we introduce a method to include image series, with contextual information and contextual knowledge, in CBR systems. DSmT is used to fuse the output of several sensors (direct fusion) and of a priori knowledge (indirect fusion). On this database, the method largely outperforms our first CBIR algorithm (75.6% / 46.1%). This stands to reason, since an image alone is generally not sufficient for experts to correctly diagnose the disease severity level of a patient. Besides, this non-linear retrieval method is significantly more precise than a simple linear combination of heterogeneous distances on the DR database (75.6% / 52.3%). This study suggests that the a priori knowledge about diabetic retinopathy is not very useful for the retrieval system, either because it is too vague or because the rules are already found by the learning procedure. Finally, if we use a Bayesian network to infer the missing values prior to estimating the mass functions, the mp5 becomes 81.2%. It is thus a possible alternative to the decision tree based retrieval system we proposed previously [8] (showing a performance of 79.5% in mp5).

REFERENCES

1. Bichindaritz I, Marling C (2006) Case-based reasoning in the health sciences: What's next? Artif Intell Med 36(2):127-135
2. Smarandache F, Dezert J (2006) Advances and Applications of DSmT for Information Fusion II. Am Res Press, Rehoboth
3. Wilkinson C, Ferris F, Klein R et al. (2003) Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 110(9):1677-1682
4. Smeulders AWM, Worring M, Santini S et al. (2000) Content-based image retrieval at the end of the early years. IEEE T Pattern Anal 22(12):1349-1380
5. Lamard M, Cazuguel G, Quellec G et al. (2007) Content Based Image Retrieval based on Wavelet Transform coefficients distribution. 29th Annual International Conference of the IEEE Eng Med Biol Soc, Lyon, France, pp. 4532-4535
6. Bezdek JC (1973) Fuzzy Mathematics in Pattern Classification. Applied Math. Center, Cornell University, Ithaca
7. Wilson DR, Martinez TR (1997) Improved Heterogeneous Distance Functions. J Artif Intell Res 6:1-34
8. Quellec G, Lamard M et al. (2008) Recherche de cas médicaux multimodaux à l'aide d'arbres de décision. ITBM-RBM, DOI 10.1016/j.rbmret.2007.12.005

Author: Gwénolé Quellec
Institute: INSTITUT TELECOM; TELECOM Bretagne
Street: Technopôle Brest-Iroise - CS 83818
City: Brest Cedex 3
Country: France
Email: [email protected]
Noise properties of the 3-electrode skin admittance measuring circuit

S. Grimnes1,2, Ø.G. Martinsen1,2 and C. Tronstad1

1 Department of Clinical and Biomedical Engineering, Rikshospitalet, Oslo, Norway
2 Department of Physics, University of Oslo, Norway
Abstract — The 3-electrode skin admittance op.amp measuring circuit has the unique property of measuring the admittance of one single electrode (M) by the use of an additional reference electrode (R) in addition to the second current carrying electrode (CC). It is a monopolar set-up with the R and CC electrodes usually being considered uncritical. There are two special properties of the 3-electrode system. If an electrode pair or the skin itself generates a DC potential, the 3-electrode circuit sets up a substantial and continuous DC current flow. Electrolysis and skin irritation may be the result. An attractive feature of the 3-electrode circuit is that any external ground referenced noise signal capacitively coupled to the body is cancelled from the current reading channel. But if the CC electrode is small and the skin under the CC electrode is very dry, the voltage drive capability of op.amp.A may be insufficient. The product of the impedance of the CC electrode and the noise current has a critical value above which the op.amp is driven into saturation and the measurement is completely destroyed. Therefore the CC electrode sometimes is critical.

Keywords — Skin impedance, 3-electrode skin admittance measurement, bioimpedance instrumentation, bioelectric instrumentation.

I. INTRODUCTION

Fig. 1 shows the cross section of a human body model covered by a thin dead layer of SC (stratum corneum). The three electrodes of a measuring system are the measuring electrode (M), the reference electrode (R) and the current carrying (CC) electrode. A 3-electrode admittance measuring circuit [1],[2] is obtained if op.amp.A is connected to the electrodes as shown. Op.amp.A drives enough measuring current through the CC electrode, the body and the M electrode to obtain the same voltage on the R electrode as the excitation signal (30 mV). Op.amp.B is the current measuring transresistance amplifier. The output voltage from op.amp.B is proportional to the ac current i. The measured complex admittance Y is proportional to i according to the equation:

  Y = G + jB = i/v        (1)

Here v is the excitation voltage (30 mV in Fig. 1), G is conductance and B susceptance. The circuit is of special interest in skin measurements when the SC has a much higher complex resistivity than the living parts of the body. Ohm's law for a volume conductor is [2]:

  J = σE        (2)

where J is current density [A/m²], σ is conductivity [S/m] and E is electric field strength [V/m].

Fig. 1 3-electrode admittance measuring circuit. Dotted lines represent measuring current.

Eq. (2) can be rewritten with resistivity ρ [Ωm]:

  E = ρJ        (3)

Eq. (3) shows that E, the potential change per meter, is proportional to ρ and is therefore very high in the SC and low in the living body. Thus the living body is approximately equipotential, and all potential change occurs in the SC. Because the SC is so important, it is shown separately in Fig. 1 even though it is very thin. Under the prerequisite that the SC resistivity is much higher than the living body resistivity, the current between the M and CC electrodes results in a negligible voltage drop in the body but a substantial drop in the SC under both electrodes. The R electrode carries negligible current, and consequently there is no voltage drop in the SC under the R electrode. The R electrode picks up the potential of the isoelectric living body.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 720–722, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
The circuit of Fig. 1 has the attractive property that it measures the admittance under a single electrode (M), because the contribution of the CC electrode is eliminated by the third, reference electrode. It is a robust circuit because the R and CC electrodes are not critical. The R electrode is not current carrying and therefore not polarized. The CC electrode is also apparently uncritical because of the large amplification of op.amp.A. On the other hand, the circuit has two rather special properties with respect to a) DC currents and b) noise rejection. Even when the intended use of the circuit is for ac measurements, an electrode pair may generate DC potentials if different metals or different contact gels are used. The skin itself may also act as a DC voltage generator. Imagine for instance that electrode R generates a 50 mV DC voltage. Op.amp.A will then drive a DC current through CC and M large enough to create a voltage drop of 50 mV in the SC under M. In such cases the circuit of Fig. 1 sets up a substantial and continuous DC current flow caused by the generated DC potential. Electrolysis and skin irritation may be the result. Noise rejection is the topic that will now be treated in more detail.

II. EXTERNALLY IMPOSED NOISE
Fig.2 shows a power line cable and the distributed capacitance coupling a leakage current into our human body model. The power line wires are simplified to one wire carrying the mains voltage. The 3-electrode circuit is ground referenced, and therefore the capacitive coupling should drive a leakage current through M to ground, and this current should be read by op.amp.B and therefore disturb the admittance measurements. However, this will not happen. The ac current through M will create a voltage drop in the SC under M so that the body has a uniform ac voltage with respect to ground. The R electrode will pick up this voltage and op.amp.A will set up an ac current through CC so that the body voltage is close to zero. Op.amp.A clamps the body voltage to ground. As seen from eq.(3) the leakage current flow therefore creates a voltage gradient in the SC, but not in the living body. The living body is equipotential. Fig.2 illustrates how current driven by the distributed capacitance instead flows to the CC electrode. The CC current cancels the externally imposed leakage current through M. Of course this is not limited to power line noise. It is a very interesting feature of the 3-electrode circuit that any external ground referenced noise signal capacitively coupled to the body is cancelled from the current reading channel and accordingly does not disturb the admittance signal.
Fig.2 Distributed capacitance between an external mains power line and the body. Dotted lines represent leakage current.
III. LIMITS OF NOISE CANCELLATION

The leakage current flow lines shown in Fig. 2 come with very low voltage gradients in the living body, as shown by eq. (3). The current density of the noise current on the entrance side is very low because the capacitance is distributed over a large area. Under the CC electrode the noise current density is maximal, but because of the low resistivity of the living body the voltage gradient is still small there. The noise current path is also through the SC under the CC electrode, and an appreciable voltage gradient can exist across this part of the SC. Usually this is taken care of by the drive capability of the output stage of op.amp.A. But if the CC electrode is small and the skin under the CC electrode is very dry, the voltage drive capability of op.amp.A may be insufficient. Let us illustrate this with a practical example: assume a ±5 V power supply to the op.amps, a noise current of 1 μA rms, and very dry SC skin under a small CC electrode resulting in an SC resistance of 10 MΩ. The required peak voltage drive capability of op.amp.A is then about 14 V, far beyond the 5 V available from the power supply. Op.amp.A will leave the linear range and stop functioning. The product
of the impedance of the CC electrode and the noise current has a critical value above which the op.amp is driven into saturation and the measurement completely breaks down. Therefore the CC electrode sometimes actually is critical.

IV. CONCLUSIONS

The 3-electrode circuit has the attractive property that it measures the admittance under a single electrode (M) because the contribution of the CC electrode is eliminated by the third reference electrode. It is a robust circuit because the R and CC electrodes both are considered non-critical. The 3-electrode circuit has two rather special properties with respect to a) DC currents and b) noise rejection. If an electrode pair or the skin itself generates a DC potential the 3-electrode circuit sets up a substantial and
continuous DC current flow. Electrolysis and skin irritation may be the result. An attractive feature of the 3-electrode circuit is that any external ground referenced noise signal capacitively coupled to the body is cancelled from the current reading channel. But if the CC electrode is small and the skin under the CC electrode is very dry the voltage drive capability of the op.amp.A may be insufficient. The whole 3-electrode circuit function will then break down. Under these circumstances the CC electrode has become a critical component.
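The saturation example in section III is simple arithmetic; a quick check of the quoted 14 V figure, using the values stated in the text:

```python
import math

i_noise = 1e-6    # noise current [A rms], from the example in section III
r_sc = 10e6       # SC resistance under a small, dry CC electrode [ohm]
supply = 5.0      # op-amp supply rail [V]

v_rms = i_noise * r_sc          # rms voltage required across the SC
v_peak = math.sqrt(2) * v_rms   # required peak drive voltage
saturated = v_peak > supply     # op.amp.A leaves its linear range
```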
REFERENCES

1. Grimnes S (1983) Impedance measurement of individual skin surface electrodes. Med & Biol Eng & Comput 21:750-755
2. Grimnes S, Martinsen ØG (2008) Bioimpedance and Bioelectricity Basics. Academic Press / Elsevier
Thermal Imaging of Skin Temperature Distribution During and After Cooling: In-Vitro Experiments

M. Kaczmarek, J. Ruminski

Gdansk University of Technology, Department of Biomedical Engineering, Gdansk, Poland

Abstract — The goal of this work is to analyze the heat flow and temperature distribution in in-vitro skin tissue samples using an external cooling excitation. One interesting method is the application of active IR thermal imaging with external excitations. In the paper, modeling techniques and in-vitro experiments are described. The obtained results can be used to optimize the diagnostic procedure for skin burn evaluation and other ADT applications.

Keywords — Thermal imaging, skin burns, heat flow
I. INTRODUCTION

Analysis of skin properties is a very important aspect of diagnostics, and evaluation of skin burn depth in particular is still an open issue. The standard approach assumes that shallower burn wounds, which will heal spontaneously within three weeks of the burn trauma, should be treated conservatively, while deeper wounds need surgical intervention [1]. The task is to diagnose whether the wound will heal spontaneously or not. One common method is clinical assessment of burn wound depth based on visual inspection [2]. However, prognoses are typically accurate in only 50%-70% of cases [3]. Additionally, differentiation between IIa (superficial dermal) and IIb (deep dermal) wounds is problematic even for the most experienced practitioners. Histopathological assessment is considered to be the gold standard [4], but even this procedure lacks commonly accepted quantitative criteria and, being local and invasive, is not frequently used in clinical practice [5], [6]. There are many research activities in the field of evaluation of skin burn depth. Static thermography (ST) [7], [8], ultrasonography (USG) [9], reflection-optical multispectral imaging [10], [11], laser Doppler imaging (LDI) [12], [13] and indocyanine green (ICG) fluorescence [14] are the most popular in the literature; however, none has yet been widely accepted. One interesting method is the application of active IR thermal imaging. The general principle of functional diagnostics with IR thermal imaging is similar to that of modalities currently used in other functional imaging
techniques. The measured object is thermally excited and a characteristic response is measured. Assessment of the local functional behaviour of the tissue or organ tested, which is represented quantitatively by the values of parameters appropriate for the technique applied, is possible by analysing the response to a known excitation. The analysis enables the estimation of parameters that are related to local heat transients within the object tested. It is expected that the calculated parameter values can be used in imaging and diagnostic decisions. Different excitation methods can be used, including cold stress [15], pharmacological excitation [16] or even electrical stimulation [17]. We demonstrated promising results of skin burn depth evaluation using optical heating [18]. However, the results showed a problem with heat flow control for externally (enforced) and internally (natural) heated skin. Cooling is a promising external excitation which is also more comfortable for a patient with burned skin. The goal of this work is to analyse the heat flow and temperature distribution on and in in-vitro skin tissue samples using an external cooling excitation. The expected results can influence the prepared method of skin burn depth evaluation using IR thermal imaging and cooling excitation.

II. METHOD

A. Reference Experiment

A set of experiments was prepared. In the first, reference experiment, a solid object with known thermal parameters was used. It was made of aluminum (150 mm × 150 mm × 5 mm). Temperature distributions were measured at the front and back surfaces of the object using two identical Flir A320G cameras (FPA, spectral range 7.5-13 μm, resolution 320×240, 16 bit, thermal sensitivity 70 mK @ 30 °C, frame rate 60 Hz). Measurements were performed in a fixed environment (i.e. room temperature, no excitation) and during and after the excitation. As a cooling excitation a cryotherapy device using CO2 vapour was used. The excitation was applied for 40-50 s from
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 723–727, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
724
M. Kaczmarek, J. Ruminski
a distance of about 20 cm. The maximum achieved temperature gradient (object at room temperature vs. cooled object) was about 9 °C. The reference experiment was performed to establish a relationship between known thermal parameters and the observed object response to the excitation.

B. In-vitro Experiments

The next experiments were performed on pig and chicken skin tissue samples. First, the pig sample was used in two configurations: 1) thick sample: skin with fat/muscle layers up to 3 cm deep; 2) thin sample: skin with fat/muscle layers up to 1 cm deep. In this configuration an additional IR thermal camera was used (Flir SC 3000, QWIP FPA, spectral range 8-9 μm, resolution 320x240, 16 bit, thermal sensitivity 20 mK @ 30 °C, Stirling cooled, frame rate 60 Hz) with close-up lenses to observe in-depth heat flow. The experiment setup is presented in Fig. 1.
convection with the following parameters: air temperature Tamb = 22 °C, atmospheric pressure Patm = 101.351 kPa, gravitational acceleration g = 9.81 m/s².

Table 1  The thermal properties of the modeled objects [18]

                      d [mm]   k [W/m·K]   c [J/kg·K]   ρ [kg/m³]
Aluminum sample          5      180            913         2800
Pig skin sample         30        0.325       3200         1100
Chicken skin sample      3        0.23        3000         1000
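As a rough illustration of the unidirectional heat-flow model described above, the transient cooling of a slab with the Table 1 properties can be sketched with an explicit 1D finite-difference scheme. The convective coefficient h and the gas temperature used below are illustrative assumptions, not values measured in the experiment:

```python
import numpy as np

def cool_1d(k, c, rho, d, t_total, T0=22.0, T_gas=-10.0, h=50.0, nx=21):
    """Explicit 1D finite-difference model of a slab cooled by forced
    convection on the front face (x = 0) and insulated at the back.
    h [W/m^2K] and T_gas [degC] are illustrative assumptions."""
    dx = d / (nx - 1)
    alpha = k / (rho * c)            # thermal diffusivity [m^2/s]
    dt = 0.4 * dx**2 / alpha         # stable step (< dx^2 / (2 alpha))
    T = np.full(nx, T0)
    for _ in range(int(t_total / dt)):
        Tn = T.copy()
        # interior nodes: dT/dt = alpha * d^2T/dx^2
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
        # front face: convective heat loss to the cooling gas
        T[0] = (Tn[0] + 2 * alpha * dt / dx**2 * (Tn[1] - Tn[0])
                + 2 * h * dt / (rho * c * dx) * (T_gas - Tn[0]))
        # back face: insulated (zero-flux) boundary
        T[-1] = Tn[-1] + 2 * alpha * dt / dx**2 * (Tn[-2] - Tn[-1])
    return T

# chicken skin parameters from Table 1: d = 3 mm, k = 0.23, c = 3000, rho = 1000
T = cool_1d(k=0.23, c=3000.0, rho=1000.0, d=0.003, t_total=40.0)
print(T[0], T[-1])   # the front (excited) face cools faster than the back face
```

The scheme reproduces the qualitative behaviour discussed below: a delayed, damped response on the back face relative to the front face.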
The thermal properties of the skin model obtained by Henriques and Moritz in in-vitro experiments [19] are taken as a first approximation of the materials in the skin thermal model. The main problem is how to adapt parameter values measured in-vitro to living tissue models [18].

III. RESULTS

A. Reference Experiment

Figure 2 presents the temperature change in time, measured during and after excitation on both sides of the tested object.
Fig. 1. The three-camera experiment setup
C. Modeling and simulations
Experiments were modeled using the IDEAS 9 (SDRC) modeling and simulation software. The geometry and FEM mesh were built as homogeneous and isotropic objects, without veins, arteries, hair follicles or internal heat sources. The object temperature in the steady state is assumed to be the uniform ambient temperature (Tamb = 22 °C), and unidirectional heat flow is modeled. The thermal properties of the aluminum plate and of the pig and chicken skin tissue are brought together in Table 1. Thermal conductivity, specific heat, material density and thickness, denoted respectively by k, c, ρ and d, are applied in the models. The cooling process is realized as forced convection of a CO2 gas medium, and the heat exchange between the modeled object and the atmosphere as free
Fig. 2. Temperature change in time, measured during and after excitation on both sides of the tested object
The response character is exponential, suggesting an RC-like equivalent model of the tested object. If we assume the corresponding thermal parameters, k (thermal conductivity) and ρc (volumetric heat capacity), we can perform data-to-model fitting to estimate the object properties and then compare them to the well-known aluminum parameters.

B. In-vitro Experiments

Figures 3, 4 and 5 present the results achieved for the in-vitro experiments. Based on the experiments it can be
IFMBE Proceedings Vol. 22
___________________________________________
Thermal Imaging of Skin Temperature Distribution During and After Cooling: In-Vitro Experiments
stated that, for the measurement accuracy of the cameras used, the two-layer exponential model is sufficient to represent the measured data. If the temperature changes introduced by the excitation source are high enough, the two-layer model estimation correlates better with the measured data than the one-layer model.
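The data-to-model fitting used to estimate the time constants can be sketched as a least-squares fit of one- and two-exponential models to a re-warming curve. The curve below is synthetic, generated only to illustrate the procedure (SciPy is assumed to be available):

```python
import numpy as np
from scipy.optimize import curve_fit

def one_exp(t, dT, tau, Tinf):
    # single time constant: T(t) = Tinf - dT * exp(-t / tau)
    return Tinf - dT * np.exp(-t / tau)

def two_exp(t, dT1, tau1, dT2, tau2, Tinf):
    # two-layer equivalent model with two time constants
    return Tinf - dT1 * np.exp(-t / tau1) - dT2 * np.exp(-t / tau2)

# synthetic re-warming curve (tau1 = 5 s, tau2 = 90 s, Tinf = 22 degC) plus noise
t = np.linspace(0, 300, 301)
T = two_exp(t, 4.0, 5.0, 6.0, 90.0, 22.0) \
    + 0.02 * np.random.default_rng(0).standard_normal(t.size)

p1, _ = curve_fit(one_exp, t, T, p0=[10.0, 50.0, 22.0])
p2, _ = curve_fit(two_exp, t, T, p0=[5.0, 10.0, 5.0, 100.0, 22.0])
print("one-exp tau:", p1[1])
print("two-exp taus:", p2[1], p2[3])
```

With a clearly two-layered response, the two-exponential fit recovers both time constants, which is the basis of the comparison in Table 2.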
For both types of experiments the thermal response is exponential. Owing to the low thermal conductivity and the high thermal capacitance, the delay in heat transport is clearly visible for the in-vitro samples. The thermal inertia is also proportional to the sample thickness: for the pig skin the delay is almost 20 s (30 mm thick) and for the chicken skin nearly 10 s, while for the aluminum plate it is not measurable. There is also a temperature gradient (about 3 °C) between the front and the back side of the samples.

C. Modeling and simulations
Fig. 3. The thick pig skin sample, temperature distribution at the end of cooling: a) photo, b) front side, c) back side
As an example, Fig. 6 shows the numerical simulation results obtained for the chicken skin model. The temperature profiles are very similar to those of the reference experiment. Also, for the in-vitro experiments and the numerical simulations, the time delays in heat transport through the sample thickness are similar: 10 s for the chicken skin model and about 30 s for the pig skin model. However, the temperature gradient between the front and the back side is somewhat larger in simulation (about 5 °C for the chicken skin model and 10 °C for the pig skin model).
Fig. 4. Temperature change in time, measured during and after excitation on both sides of the tested thin pig skin sample (see Fig. 3)
Fig. 6. Temperature change in time, calculated for the chicken skin model according to Table 1, during and after excitation on both sides of the modeled chicken skin sample
D. Summary of results
Figures 2, 4, 5 and 6 show surface temperature profiles in time. Based on these profiles, after the external excitation is switched off (re-warming phase), a fitting procedure is applied to calculate the parameters presented in Table 2.
Fig. 5. Temperature change in time, measured during and after excitation on both sides of the tested chicken skin sample
Table 2  Calculated time constants of the equivalent model

                        Aluminum           Pig skin sample      Chicken skin sample
                      Front    Back       Front     Back        Front     Back
One-exponential equivalent model
  τ1 [s]              27.08    31.03      146.63    180.06      127.32    161.68
Two-exponential equivalent model
  τ1 [s]               9.86     9.59        4.13    180.06        3.73      6.86
  τ2 [s]              97.47    64.74      157.67    180.06      166.34    155.15
Numerical model, one-exponential equivalent model
  τ1 [s]              25.08    35.07       42.57    158.35       79.38     33.62
Numerical model, two-exponential equivalent model
  τ1 [s]               6.06    10.51       23.05     12.27        9.00     13.07
  τ2 [s]              90.61    59.46      134.49     82.62       92.89    113.65
We can differentiate between the different types of samples based only on the value of the time constant; however, the differences in time constant values between the numerical simulations and the experiments for biological tissues are quite large. This is probably because the exact values of the thermal parameters of the tissue are unknown; moreover, the surface of the pig and chicken skin was wet, which changes the heat exchange considerably. A second reason is that only a simple one-layer model was used here, whereas in our previous work on burn evaluation [18] a five-layer skin model was used, and there the experiment and simulation results were very similar.
IV. DISCUSSION AND CONCLUSIONS

The collected experimental results were compared with data obtained from numerical thermal models of the skin and metal samples. The correlation between the thermal model simulations and the experimental results is quite high, which confirms that the main mechanisms responsible for heat exchange in the investigated tissues are properly recognized. The challenge was to define a proper experimental methodology for ADT in burn diagnostics and to choose the equipment and experimental conditions, e.g. the duration of the excitation phase. The present paper reports results of experiments performed under standardized, well-established measurement conditions. We are therefore convinced that the examples shown are of proper diagnostic importance (high diagnostic quality), even though the number of discussed cases is small. The results of cooling clearly show the high value of this procedure in comparison to heating by optical excitation. An ADT examination is simple, non-contact and short. The required instrumentation is based on IR cameras already used in hospitals and now available at reduced prices. The results obtained in this work for the ADT method, evaluated through the thermal time constants, lead us to the conclusion that the ADT approach may be considered an innovative, effective method for burn depth discrimination.

ACKNOWLEDGMENT

This work was financed from MNiSW grant no. R13 027 01.

REFERENCES

1. L.H. Engrav, D.M. Heimbach, J.L. Reus, T.J. Harnar, J.A. Marvin (1983) "Early excision and grafting vs. nonoperative treatment of burns of indeterminate depth: a randomised prospective study," J. Trauma; 23, pp. 1001-1004
2. J.M. Converse, A.H.T. Robb-Smith (1994) "An anatomic classification of burns," Ann. Surg.; 120, pp. 873-885
3. P.G. Shakespeare (1992) "Looking at burn wounds: the A. B. Wallace Memorial Lecture," Burns; 18, pp. 287-295
4. D.M. Heimbach, M.A. Afromowitz, L.H. Engrav, J.A. Marvin, B. Perry (1984) "Burn depth estimation: man or machine," J. Trauma; 24, 5, pp. 373-377
5. A.M.J. Watts, P.P.H. Tyler, M.E. Perry, A.H.N. Roberts, D.A. McGrouther (2001) "Burn depth and its histological measurement," Burns; 27, pp. 154-160
6. A.J. Singer, L. Berruti, H.C. Thode, S.A. McClain (2000) "Standardized burn model using a multiparametric histologic analysis of burn depth," Acad. Emerg. Med.; 7, pp. 1-6
7. R.P. Cole, S.G. Jones, P.G. Shakespeare (1990) "Thermographic assessment of hand burns," Burns; 16, pp. 60-63
8. A. Renkielska, A. Nowakowski, M. Kaczmarek, M.K. Dobke, J. Grudzinski, A. Karmolinski, W. Stojek (2005) "Static thermography revisited – an adjunct method for determining the depth of the burn injury," Burns; 31, pp. 768-775
9. S. Iraniha, M.E. Cinat, V.M. VanderKam, A. Boyko, D. Lee, J. Jones, B.M. Achauer (2000) "Determination of burn depth with noncontact ultrasonography," J. Burn Care Rehab; 21, pp. 333-338
10. M.A. Afromowitz, J.B. Callis, D.M. Heimbach, L.A. DeSoto, M.K. Norton (1988) "Multispectral imaging of burn wounds: a new clinical instrument for evaluating burn depth," IEEE Trans. Biomed. Eng.; 35, pp. 842-850
11. W. Eisenbeiss, J. Marotz, J.P. Schrade (1999) "Reflection-optical multispectral imaging method for objective determination of burn depth," Burns; 25, pp. 697-704
12. C.L. Riordan, M. McDonough, J.M. Davidson, R. Corley, C. Perlov, R. Barton, J. Guy, L.B. Nanney (2003) "Noncontact laser Doppler imaging in burn depth analysis of the extremities," J. Burn Care Rehab; 24, pp. 177-186
13. J.S. Chatterjee (2006) "A Critical Evaluation of the Clinimetrics of Laser Doppler as a Method of Burn Assessment in Clinical Practice," Journal of Burn Care & Research; 27, pp. 123-130
14. J.M. Still, E.J. Law, K.G. Klavuhn, T.C. Island, J.Z. Holtz (2001) "Diagnosis of burn depth using laser-induced indocyanine green fluorescence: a preliminary clinical trial," Burns; 27, pp. 364-371
15. A.L. Herrick, S. Clark (1998) "Quantifying digital vascular disease in patients with primary Raynaud's phenomenon and systemic sclerosis," Ann Rheum Dis; 57, pp. 70-78
16. I. Fujimasa, T. Chinzei, K. Mabuchi (1995) "Converting algorithms for detecting physiological function changes from time sequential thermal images of skin surface," Proc. of Engineering in Medicine and Biology Society, IEEE 17th Annual Conf., pp. 1709-1710, vol. 2
17. A. Merla, L. Di Donato, G.L. Romani, P.M. Rossini (2002) "Infrared functional imaging evaluation of the sympathetic thermal response," Proc. 2nd European Medical and Biological Engineering Conference EMBEC'02, pp. 1610-1611
18. J. Ruminski, M. Kaczmarek, A. Renkielska, A. Nowakowski (2007) "Thermal parametric imaging in the evaluation of skin burn depth," IEEE Trans. on Biomedical Engineering; 54, 2, pp. 303-312
19. F.C. Henriques, A.R. Moritz (1947) "Studies of thermal injury. P. 1: The conduction of heat to and through skin and the temperatures attained therein: a theoretical and experimental investigation," Am. J. Pathol.; 23, pp. 531-549
Author: Mariusz Kaczmarek
Institute: Gdansk University of Technology
Street: Narutowicza 11/12
City: 80-952, Gdansk
Country: Poland
Email: [email protected]
Magnetic Resonance Electrical Impedance Tomography For Anisotropic Conductivity Imaging

E. Değirmenci and B.M. Eyüboğlu

Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey

Abstract — Magnetic Resonance Electrical Impedance Tomography (MREIT) brings high-resolution imaging of the true conductivity distribution to reality. MREIT images are reconstructed from measurements of the current density distribution and a surface potential value, induced by an externally applied current flow. Since biological tissues may be anisotropic, the isotropic conductivity assumption adopted in most MREIT reconstruction algorithms introduces reconstruction inaccuracy. In this study, a novel algorithm is proposed to reconstruct MREIT images of anisotropic conductivity. Relative values of anisotropic conductivity are reconstructed iteratively, using only measurements of the current density distribution. By measuring one surface potential or conductivity value, the true values of the anisotropic conductivity can be recovered. The technique is evaluated on simulated measurements with and without additive noise. The results show that anisotropic and isotropic conductivity distributions can be reconstructed successfully.

Keywords — Magnetic resonance, impedance, anisotropic conductivity, imaging, tomography.

I. INTRODUCTION

In-vivo imaging of tissue conductivity values and their variation with physiological activity has been realized and shown to be useful even at low spatial resolution [1]. Magnetic Resonance Electrical Impedance Tomography (MREIT) provides images of the conductivity distribution at high spatial resolution. Although it is known that a significant portion of the tissues in the human body have anisotropic conductivity values [2], most of the MREIT reconstruction algorithms proposed to date assume isotropic conductivity [3]. In this study, a novel MREIT reconstruction algorithm for anisotropic conductivity imaging is proposed. The proposed technique is based on the equipotential projection based MREIT algorithm [4]. The technique uses only the current density distribution, measured using Magnetic Resonance Current Density Imaging (MRCDI), for relative conductivity image reconstruction. A scaling factor to recover the true conductivity values can be obtained by means of a single potential or conductivity measurement. The reconstruction performance of the proposed algorithm is evaluated using simulated measurements with and without additive noise.

II. METHOD

A. Forward Problem

For a given conductivity distribution and probing current, the calculation of the current density distribution and surface potentials is referred to as the forward problem of MREIT. The relation between the conductivity and the potential field is given as follows, with Neumann boundary conditions:

∇ · (σ∇φ)(x, y) = 0,  (x, y) ∈ S   (1)

σ ∂φ/∂n = J at the positive electrode, −J at the negative electrode, and 0 elsewhere   (2)

Here, σ(x, y) is the 2D anisotropic electrical conductivity, defined as the tensor σ = [σxx σxy; σyx σyy], φ is the electrical potential, n is the outward unit normal and S is the imaging plane. After obtaining the φ distribution, the electric field and the interior current density distribution are obtained as:

E = −∇φ   (3)

J = σE = −[σxx σxy; σyx σyy] · [∇φx; ∇φy]   (4)

Here, Jx, Jy and ∇φx, ∇φy denote the components of the current density and the potential gradient in the x and y directions, respectively.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 728–731, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
B. Reconstruction of anisotropic conductivity

The proposed algorithm utilizes the measured current density distribution and needs the gradient of the potential field in the field of view (FOV) to reconstruct the anisotropic conductivity, as given in equation (4). Equipotential lines in the FOV are constructed using the current density distribution. Consequently, the potential values assigned to the equipotential lines can be any values which preserve the existing potential gradient field, so it is sufficient to find the potential gradient values at the starting points of the equipotential lines, i.e. at the boundary pixels. For this purpose, equation (4) is rearranged and solved for ∇φy:

∇φy = (Jy σxx − Jx σyx) / (σxy σyx − σxx σyy)   (5)

A similar equation is obtained for ∇φx. Known conductivity values are assigned to the boundary pixels of the FOV; in practice, this can be realized using a conductive belt with a known conductivity value. Hence, the potential gradient can be calculated at all boundary pixels of the FOV. Then, by assigning a potential value to one boundary pixel, the potential values at all boundary pixels can be calculated from the potential gradient at the boundary. Subsequently, potential values are assigned to the equipotential lines based on these boundary potentials.

In the case of isotropic conductivity, it is known that equipotential lines and current lines intersect perpendicularly. This is not true when the conductivity is anisotropic: the angle of intersection is then determined by the anisotropic conductivity. Equation (4) can be rearranged for the calculation of this angle as follows:

α∇φ(x, y) = tan⁻¹[ (Jy σxx − Jx σyx) / (Jx σyy − Jy σxy) ](x, y),  (x, y) ∈ S   (6)

Using the calculated angles, equipotential lines are constructed and the potential field is obtained for the entire FOV by projecting the boundary potentials along the equipotential lines. The potential gradient is then calculated using 3×3 Sobel operators. The intersection angle cannot be calculated at the first iteration, since neither the conductivity nor its anisotropy is known; therefore, at the first iteration the equipotential lines are assumed to be perpendicular to the current lines. A residual function is defined as follows:

R = ∫S ‖−σ∇φ − J‖₂² dS   (7)

Here ‖·‖₂ denotes the L2 norm. If R is minimized with respect to σ, the following equation system is obtained:

Jx^j = σxx^j ∇φx^j + σxy^j ∇φy^j
Jy^j = σyx^j ∇φx^j + σyy^j ∇φy^j   (8)

Here σ..^j denotes the anisotropic conductivity components of the jth element, and Jx^j and Jy^j denote the measured current density components. As seen from (8), only two equations with four unknowns are obtained for a single current injection pattern; therefore, at least two independent current distributions (i.e. current injection patterns) are needed to solve this equation system uniquely. Equation (8) can be written for N different current injection patterns as:

[Jx^1, Jx^2, …, Jx^N]ᵀ = G · [σxx, σxy]ᵀ,  [Jy^1, Jy^2, …, Jy^N]ᵀ = G · [σyx, σyy]ᵀ   (9)

where G is the N×2 matrix whose nth row is [∇φx^n, ∇φy^n]. The anisotropic conductivity values are then calculated as follows (for N > 2 the systems are solved in the least-squares sense):

[σxx, σxy]ᵀ = G⁻¹ · Jx ,  [σyx, σyy]ᵀ = G⁻¹ · Jy   (10)

All the steps described above are repeated iteratively. When the difference between the results of two consecutive iterations becomes lower than a predefined value ε, the iterations are terminated. At the end of the iterations, a relative conductivity distribution is obtained. By making a single potential measurement, or by utilizing the known conductivity value of one pixel, the distribution is scaled and the true conductivity values can be determined.
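The per-pixel solution of the system (9)-(10) can be sketched with an ordinary least-squares solve. The gradients and conductivity tensor below are synthetic values chosen only to illustrate the algebra:

```python
import numpy as np

def recover_sigma(grad_phi, J):
    """Per-pixel solve of the system (9)-(10).
    grad_phi: (N, 2) potential gradients for N current injection patterns;
    J: (N, 2) measured current density components. Returns the 2x2 tensor."""
    row_x, *_ = np.linalg.lstsq(grad_phi, J[:, 0], rcond=None)  # sigma_xx, sigma_xy
    row_y, *_ = np.linalg.lstsq(grad_phi, J[:, 1], rcond=None)  # sigma_yx, sigma_yy
    return np.vstack([row_x, row_y])

# synthetic pixel: an anisotropic tensor and two independent gradients
sigma_true = np.array([[2.0, 0.5],
                       [0.5, 1.0]])
grad_phi = np.array([[1.0, 0.0],
                     [0.3, 1.0]])
J = grad_phi @ sigma_true.T   # J_n = sigma . grad(phi_n), as in equation (8)
sigma_est = recover_sigma(grad_phi, J)
print(sigma_est)
```

With two independent injection patterns the 2x2 system is square and the tensor is recovered exactly; with N > 2 noisy patterns the same call returns the least-squares estimate.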
III. RESULTS

The two-dimensional computer model shown in Figure 1 is used to evaluate the performance of the technique. The target conductivity distribution is composed of two anisotropic objects placed in an isotropic background of 1 S/m. The conductivity values of the objects are given in Table 1. Four different current patterns are generated by means of eight electrodes placed on the surface of the model; these current injection patterns are defined in Table 2.

Figure 1  Two-dimensional computer model and eight electrodes placed on the boundary

Table 1  Conductivity values of the model (S/m)

              Object 1   Object 2
σxx               1          2
σxy = σyx         1          3
σyy               5          1

Table 2  Current amplitudes applied to the electrodes for current injection patterns I1, I2, I3 and I4. Values are in mA.

       I1    I2    I3    I4
E1      0     0   +20     0
E2    +20     0     0     0
E3      0     0     0   +20
E4      0   -20     0     0
E5      0     0   -20     0
E6    -20     0     0     0
E7      0     0     0   -20
E8      0   +20     0     0

The reconstruction error is calculated as in equation (11) for a quantitative evaluation of the performance of the technique:

ε_σu = (1/N) Σ_{j=1..N} [ (σ_jtu − σ_jru)² / σ_jtu² ] × 100 %   (11)

Here, u represents the anisotropic conductivity index, σ_jtu and σ_jru represent the true and the calculated conductivity values of the jth element, respectively, and N is the number of pixels in the grid. A similar error calculation is also made for the isotropic regions. Before these error calculations are executed, the true conductivity values are determined by utilizing one potential measurement.

Figure 2 shows the results after ten iterations for the noise-free case; the corresponding reconstruction errors are given in Table 3. In order to evaluate the performance of the technique under measurement noise, the noise model explained in [5] is adopted. For a 2 Tesla imaging magnet, an SNR of 30 dB is reported in [5]. The reconstruction results obtained for SNR = 30 dB are given in Figure 3, and the corresponding reconstruction errors are given in Table 4.

Figure 2  Anisotropic conductivity images for the noise-free case: (a) σxx, (b) σyy, (c) σxy, (d) σyx

Table 3  Reconstruction errors for the noise-free case

              ε_σxx (%)   ε_σyy (%)
Object 1         22.5        46.8
Object 2         29.2        21.3
Background       20.4        19.8

The σxy and σyx values for object 1 are calculated with 60.4% and 60.2% errors, respectively.
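The error measure of equation (11) is straightforward to compute over a region; a minimal sketch with made-up values:

```python
import numpy as np

def reconstruction_error(sigma_true, sigma_recon):
    """Equation (11): mean squared relative difference over a region, in percent."""
    st = np.asarray(sigma_true, dtype=float)
    sr = np.asarray(sigma_recon, dtype=float)
    return 100.0 * np.mean((st - sr) ** 2 / st ** 2)

print(reconstruction_error([1.0, 1.0], [1.0, 1.0]))   # exact reconstruction: 0.0
print(reconstruction_error([1.0, 2.0], [1.1, 2.2]))   # uniform 10% deviation, approx 1.0
```

Note that, because the relative differences are squared before averaging, a uniform 10% deviation yields 1%, not 10%; the table values should be read with this convention in mind.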
In the noise model, the amount of noise is calculated independently of the current strength; therefore, the noise strength relative to the current decreases with increasing SNR. Two current injection patterns are sufficient to reconstruct anisotropic conductivity images uniquely, but the effect of noise on the reconstruction accuracy becomes very high in regions where the current density is very low. The strength of the applied current is determined by patient safety regulations and must be kept at the lowest possible level. Thus, in order to obtain the best result for the entire FOV, using currents which are optimized to cover the FOV will decrease the reconstruction error. For in-vivo MREIT imaging, current levels must be reduced to a safe level; but when a lower current is applied, the effect of system noise on the current density measurement is higher. Therefore, using an MR system with lower system noise (i.e. with a higher magnetic field strength) will increase the accuracy of the technique.
Figure 3  Anisotropic conductivity images for SNR = 30 dB: (a) σxx, (b) σyy, (c) σxy, (d) σyx

Table 4  Reconstruction errors for SNR = 30 dB

              ε_σxx (%)   ε_σyy (%)
Object 1         25.8        52.2
Object 2         35.3        31.1
Background       29.8        30.5
The σxy and σyx values for object 1 are calculated with 74.6% and 66.8% errors, respectively.

ACKNOWLEDGMENT

Evren Değirmenci is currently studying for his Ph.D. degree at Middle East Technical University (METU), on leave from Mersin University. This study is part of E. Değirmenci's Ph.D. thesis, and M. Eyüboğlu is the thesis supervisor. This study is supported by METU Research Grant BAP-08-11-DPT2002K120510 and Turkish Scientific and Technological Research Council (TUBITAK) research grant 107E141.
IV. CONCLUSION

In this study, a novel reconstruction algorithm is proposed to image anisotropic conductivity. The proposed algorithm is evaluated with simulated measurements. The technique is iterative and based on the construction of equipotential lines at each pass. The obtained results show that anisotropic conductivities can be reconstructed with errors of less than 38% using the proposed algorithm, for noise-free data. Incorporating anisotropy into MREIT reconstruction improves the accuracy of the conductivity reconstruction. Reconstruction errors for noisy current density data increase with respect to noise-free data, since noise causes inaccurate calculation of the potential gradients and, subsequently, inaccurate construction of the equipotential lines.

REFERENCES

1. Eyuboglu B M (2006) Electrical Impedance Imaging: Injected Current Electrical Impedance Imaging. In Wiley Encyclopedia of Biomedical Engineering (Metin Akay, ed.), Vol. 2, pp. 1195-1205
2. Breckon W (1992) The problem of anisotropy in electrical impedance tomography. Proc. of IEEE EMBS Conf., vol 14, pp 1734-1735
3. Seo J K, Pyo H C, Park C et al (2004) Image reconstruction of anisotropic conductivity tensor distribution in MREIT: computer simulation study. Phys. Med. Biol., vol 49, pp 4371-4382
4. Özdemir M S, Eyüboğlu B M, Özbek O (2004) Equipotential projection-based magnetic resonance electrical impedance tomography and experimental realization. Phys. Med. Biol., vol 49, pp 4765-4783
5. Scott G C, Joy M L G, Armstrong R L et al (1992) Sensitivity of magnetic resonance current density imaging. J. Magn. Reson., vol 97, pp 235-254
Author: Evren Değirmenci
Institute: Dept. of Electrical and Electronics Engineering, Middle East Technical University
Street: Balgat - Çankaya
City: Ankara
Country: Turkey
Email: [email protected]
Electro-Magnetic Impedance Tomography – a sensitivity analysis

A. Janczulewicz, A. Bujnowski and J. Wtorek

Biomedical Engineering Department, Gdansk University of Technology, Gdansk, Poland

Abstract — Two different methods of sensitivity calculation are presented in the paper. The first approach is based on an analytical description of the potential created by current flowing between two point electrodes. The second approach is based on a finite element approximation of the Biot-Savart relationship. The former approach appears to be very fast and relatively accurate; it can especially be used in a one-step algorithm.

Keywords — Electroimpedance tomography, electro-magnetic conductivity imaging, Biot-Savart law.
I. INTRODUCTION

Electro-Magnetic Impedance Tomography (EMIT) is an extension of Electrical Impedance Tomography (EIT) [1-3]. With this method it is possible to obtain more information from the measurements, thanks to additional data on the internal composition of the object, here the spatial distribution of its electrical parameters. The technique can be especially useful in applications where it is not possible to increase the number of contacting electrodes. In this paper we discuss a sensitivity matrix formulation for the presented problem. The sensitivity matrix has to cover both the electric and the magnetic problem. Two different methods of magnetic sensitivity calculation are discussed. Both methods utilize the Biot-Savart law [2]. The first is based on the calculation of branch currents associated with each finite element, while the second is based on the calculation of the mean current flowing in a cubic element, computed using an analytic approximation of the node potentials. The sensitivity is then calculated using the knowledge of the mean electric field in each cubic element. A constant conductivity is assumed within each cubic element. The latter method results in a higher speed of the sensitivity calculation. Results of simulations and reconstructions are also presented.

II. METHOD

Although both electric and magnetic images are reconstructed, the presented method differs from the classical electromagnetic one in that it treats both parts separately. This is due to the assumption of a very low frequency of the
electromagnetic field. Thus, the electric field due to the current flow is described by the following relation:

∇ · (σ*∇φ) = 0   (1)

where σ* is the complex conductivity and φ is the potential. This is a mixed boundary value problem. The normal current vanishes on the part of the boundary excluding the current electrodes:

∂φ/∂n = 0 on S − Sei   (2)

while on the (metallic) electrodes a constant potential is assumed:

φ = Vei on Sei   (3)

Additionally, it is assumed that the potential vanishes at infinity:

φ = 0 for R → ∞   (4)
The sensitivity of the measured potential to conductivity changes may be calculated using the Geselowitz formula [4]:

ΔZ = −∫v Δσ (∇φ/Iφ) · (∇ψ/Iψ) dv   (5)

where φ and ψ are the potentials created by the current flowing between the current injection electrodes and the voltage measurement electrodes, respectively. The magnetic field is calculated using the formula proposed by Biot and Savart [5]:

B = (μ0/4π) ∫v j × R/R³ dv   (6)

where j is the current density and R is the distance between the centre of the coil and a point of the object. The sensitivity of the magnetic field to conductivity may be obtained from relation (6) in the following form:

∂B/∂σ = −(μ0/4π) ∫v (∇φ + σ∇(∂φ/∂σ)) × R/R³ dv   (7)
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 732–735, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
A. Analytic method

The sensitivity is calculated using an analytical approximation of the potential distribution [6,7]:

φ(O) = (1/2πσ) ∫Sei j(x, y, 0)/r dS   (8)

where r is the distance between the observation point O(x, y, z) and the points belonging to the electrodes, and j(x, y, 0) is the current density under the electrodes. Assuming that point electrodes are used and that only two electrodes source and sink the current, relation (8) may be rearranged into the following one:

φ(O) = (I/2πσ) (1/r1 − 1/r2)   (9)

where r1 and r2 are the distances between the observation point O and the current source and sink, respectively. Using relation (7) and taking into account the relationship between the magnetic flux and the voltage induced in the measuring coil, the following relationship is obtained:

∂v/∂σ = −(μ0 ω / 4π rOc³) (Sc × rOc) · ∇φ   (10)

Fig. 1  The array of electrodes and coils. Electrodes located on a circle of larger diameter (marked by * and o) are used for current injection, while those on a circle of smaller diameter (marked by + and *) are used for potential measurements, as are the coils (marked +).
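Equations (9) and (10) can be sketched directly in code. The electrode and coil positions below are arbitrary illustrative values, not the geometry of the paper's sensor array:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def potential(point, src, sink, I=1.0, sigma=1.0):
    """Equation (9): potential at `point` from a point current source/sink pair."""
    r1 = np.linalg.norm(point - src)
    r2 = np.linalg.norm(point - sink)
    return I / (2 * np.pi * sigma) * (1.0 / r1 - 1.0 / r2)

def coil_sensitivity(grad_phi, coil_center, coil_area_vec, point, omega):
    """Equation (10): d(v)/d(sigma) for one coil; r_Oc points from the
    object point to the coil centre, Sc is the coil area vector."""
    r = coil_center - point
    rn = np.linalg.norm(r)
    return -MU0 * omega / (4 * np.pi * rn**3) * np.dot(np.cross(coil_area_vec, r), grad_phi)

src = np.array([-0.05, 0.0, 0.0])   # current source [m]
sink = np.array([0.05, 0.0, 0.0])   # current sink [m]
p = np.array([0.0, 0.02, 0.01])     # point equidistant from both electrodes
print(potential(p, src, sink))      # 0 by symmetry (r1 == r2)
```

A point on the symmetry plane between the two electrodes has zero potential, which is a quick sanity check of the sign convention in (9).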
The sensor array consists of 32 electrodes placed on two circles and 16 coils forming a square (Fig. 1).

III. RESULTS

The sensitivity distribution calculated using the analytic description of the potential is presented in Figure 2.
where rOc is the distance between the point and the centre of the coil, Sc is the area of the coil, ω is the angular frequency of the measurement current, and φ is the potential distribution.

B. Numerical method

Another implementation is based on a discrete approximation of the Biot-Savart relationship, with the Finite Element Method used to calculate the currents [8]:

$$\mathbf{B} = -\frac{\mu_0}{4\pi} \sum_k i_k\, \frac{d\mathbf{l}_k \times \mathbf{R}_k}{R_k^3} \qquad (11)$$

where ik is the current flowing in the kth branch of a tetrahedral element, dlk is the vector between the nodes forming the kth branch, and Rk is the distance between the centre of the kth branch and the centre of the coil. Using this relationship, the sensitivity can be calculated as follows:

$$S_m = \sum_k C_{ijk}\,(V_i - V_j)\, \frac{R_x\, dl_y - R_y\, dl_x}{R_k^3} \qquad (12)$$

where Cijk is a geometrical coefficient for the branch formed by nodes i and j of the kth element, Vi and Vj are the potentials of the ith and jth nodes respectively, and R is the distance between the centres of the branch and the coil.

Fig. 2 The sensitivity distribution in the top four layers of the model: a) first layer from the top, b) second layer from the top, c) third layer from the top, d) fourth layer from the top. The sensitivity presented in each layer of the model has been obtained for a different combination of current-injecting electrodes and coil. A similar sensitivity distribution was obtained using the finite element approximation of the Biot-Savart relation (12) (Fig. 3).
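The branch sum of eq. (11) can be sketched in a few lines (a minimal implementation of our own, not the authors' code; the direction convention for R_k, from the coil centre to the branch midpoint, and the branch data are assumptions for illustration):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def biot_savart(branches, coil_centre):
    # branches: list of (i_k, branch midpoint, dl_k vector), per eq. (11)
    # B = -(mu0/4pi) * sum_k i_k * (dl_k x R_k) / R_k^3
    B = [0.0, 0.0, 0.0]
    for i_k, mid, dl in branches:
        R = tuple(m - c for m, c in zip(mid, coil_centre))
        R3 = math.dist(mid, coil_centre) ** 3
        c = cross(dl, R)
        for a in range(3):
            B[a] -= MU0 / (4 * math.pi) * i_k * c[a] / R3
    return B

# Example: one 1 A branch of length 1 mm along x, 10 cm from the coil centre
B = biot_savart([(1.0, (0.0, 0.0, 0.1), (0.001, 0.0, 0.0))], (0.0, 0.0, 0.0))
print(B[1])  # ~1e-8 T
```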
IFMBE Proceedings Vol. 22
A. Janczulewicz, A. Bujnowski and J. Wtorek
Results of reconstructions obtained for perturbations located centrally and eccentrically with respect to the coil array are presented in Figs. 4 and 5. These results were obtained considering only one component of the magnetic field, Bz.

Fig. 4 Reconstruction result for a perturbation located centrally, using only the Bz component of the magnetic field. The conductivity perturbation was equal to 10 S/m.

Fig. 5 Reconstruction result for a perturbation located eccentrically, using only the Bz component of the magnetic field. The conductivity perturbation was equal to 10 S/m.

The result of reconstruction using all components of the magnetic field is presented in Fig. 6.

Fig. 6 Result of one-step reconstruction using three components of the magnetic field. The conductivity perturbation was equal to 10 S/m, while the background conductivity was 1 S/m.

Fig. 3 The sensitivity distribution calculated using the finite element approximation of the Biot-Savart relationship: a) first layer from the top, b) second layer from the top, c) third layer from the top, d) fourth layer from the top.

IV. DISCUSSION

The sensitivity of the magnetic field depends on the potential distribution and its sensitivity to conductivity, and on the geometry and location of the coil (eq. (6)). In turn, the potential distribution depends on the electrode locations and the conductivity distribution. The same result can be obtained using the reciprocity theorem []. The flux in the coil is described by the following relation:

$$\Phi = \frac{1}{I} \int_v \mathbf{A}_c \cdot \mathbf{j}\, dv \qquad (13)$$
where Φ is the magnetic flux, Ac is the magnetic vector potential created by the reciprocal current I in the coil, and j is the current density injected into the conductive body by the electrodes. Using relation (13) together with the description of the magnetic vector potential for a coil of given geometry and location, one arrives at a relation similar to (10). Two different approaches to computing the magnetic sensitivity are presented in this paper. The first relies on calculating the potential gradient using a relation describing the potential distribution when a current flows between two point electrodes in a semi-infinite uniform medium. This relationship was used to calculate the gradient of the potential in the medium. Two different solutions were examined. In the first, the gradient of the potential was calculated at each node of the medium, divided into cubes, using the analytical relationship; the average value was then calculated over the whole cube. The second utilized the analytical description of the potential to calculate its value at each node. Next, the potential distribution inside each cube was calculated by applying the description used in the finite element method
for cubic elements. This allowed calculation of the potential gradient distribution in each cubic element. The potential was calculated at each node and used to estimate the sensitivity by means of relation (12). The analytic calculation of the potential used in this paper assumes point electrodes for current injection. However, the relation may easily be extended to circular electrodes using the approach proposed by Mueller et al. [6]. The former approach proved very fast and accurate when compared with the latter.
V. CONCLUSIONS

Two methods of sensitivity calculation are presented in this paper. The method based on potential calculation using an analytic approximation proved to be fast and accurate. This approach may be useful when one-step reconstruction algorithms are used.

ACKNOWLEDGMENT

This research was partially performed under grant T11F 022 30 from the Polish Ministry of Science and Higher Education.

REFERENCES

1. Levy S, Bresler Y (2002) Electromagnetic impedance tomography (EMIT): A new method for impedance imaging. IEEE Trans Med Imaging 21:676-687
2. Gencer N G, Acar C E (2004) Sensitivity of EEG and MEG measurements to tissue conductivity. Phys Med Biol 49:701-717
3. Gencer N G, Tek M N (1999) Electrical conductivity imaging via contactless measurements. IEEE Trans Med Imag 18:617-627
4. Geselowitz D B (1971) An application of electrocardiographic lead theory to impedance plethysmography. IEEE Trans Biomed Eng 18:138-141
5. Geselowitz D B (1970) On the magnetic field generated outside an inhomogeneous volume conductor by internal current sources. IEEE Trans Magn 6:346-347
6. Mueller J L, Isaacson D, Newell J C (1999) A reconstruction algorithm for electrical impedance tomography data collected on rectangular electrode arrays. IEEE Trans Biomed Eng 46:1379-1386
7. Kotre C J (1996) Subsurface electrical impedance imaging using orthogonal linear electrode arrays. Sci Meas Technol 143:41-46
8. Janczulewicz A, Wtorek J, Bujnowski A (2008) A CMT reconstruction algorithm for detection of objects buried in half-space. Proc. IFMBE, this issue

Author: Agnieszka Janczulewicz
Institute: Biomedical Engineering Department
Street: Narutowicza 11/12
City: Gdansk
Country: Poland
Email: [email protected]
A Feasibility Study on the Detectability of Edema Using Magnetic Induction Tomography and an Analytical Model

B. Dekdouk1, M.H. Pham2, D.W. Armitage1, C. Ktistis1, M. Zolgharni3 and A.J. Peyton1

1 School of Electrical and Electronic Engineering, The University of Manchester, PO Box 88, M60 1QD, UK
2 The Department of Electrical and Electronic Engineering, The University of Melbourne, Parkville, Victoria 3052, Australia
3 School of Medicine, Swansea University, Swansea, SA2 8PP, UK
Email:
[email protected]

Abstract — Magnetic induction tomography (MIT) is a low-frequency electromagnetic modality which aims to reconstruct conductivity changes from coupled-field measurements taken by inductive sensors. MIT is a subject of research for clinical medical applications, where several reports have shown that low-conductivity tissue structures can be detected. The aim of this paper is to analyze the sensitivity of a single MIT channel to a central edematous region in a simplified head model. A multilayer model was used, comprising concentric shells representing scalp, skull, cerebral spinal fluid, gray matter and white matter. An analytical solution of the electromagnetic field problem is presented and validated against a numerical model built in COMSOL (a commercial FE package). The size of the edema region is progressively increased and the relative sensitivity of the MIT channel is presented. The detectability of the edema with regard to the noise limitations of MIT systems is analyzed. Using outer-boundary information as an a priori condition in the inverse problem could improve solution stability. The effect of the shape scanner's resolution in measuring the boundary of the target on the MIT measurements is examined.

Keywords — Magnetic induction tomography, brain edema, forward problem.
I. INTRODUCTION

Cerebral edema, a life-threatening pathological condition commonly associated with stroke, involves an accumulation of fluid in the brain. In the UK, a report by the National Health Service (NHS) Confederation [1] indicates that every year an estimated 110,000 people in England suffer a stroke, and the condition is the third largest cause of death. In addition to being a medical emergency, fatalities may result not only from the immediate brain injury; progressive damage to brain tissue also develops over time [2]. In fact, 30 % of affected people will suffer long-term disability and 20 to 30 % will die within a month. It is also estimated that stroke costs the UK NHS about £2.8 billion p.a., which is about 66 % of the annual cost to the wider economy associated with lost productivity, disability and informal care. Evidence shows that timely, rapid diagnosis during the 24 hours following the stroke, including fast access to brain scanning and continuous monitoring of lesion development, can dramatically improve rates of survival and avoid potential neurological aggravations.

Brain edema can be broadly classified as extracellular vasogenic or intracellular cytotoxic in origin. The former, also called a hemorrhage, happens when a blood vessel bursts due to breakdown of the tight endothelial junctions which make up the blood-brain barrier (BBB). This allows blood to enter the white matter and spread extracellularly along fiber tracts, and it can also affect gray matter. The latter is caused by ischemia, which occurs when the blood supply to the brain cells is limited by clot formation in the vascular system. This leads to inadequate functioning of the sodium-potassium pump in the cell membrane; as a result there is cellular retention of sodium and water, leading to damage of brain tissue. Ischemic edema is usually treated with thrombolytic drugs, which operate by dissolving the blood clot to increase fluid circulation in the affected vessel. This treatment, however, increases the fatality of hemorrhagic edema by aiding blood leakage. The two syndromes show the same symptoms, and both require rapid medical intervention. Currently CT or MRI is used to diagnose the type of the edema; however, access to these facilities is often limited. Furthermore, the condition develops in a delayed fashion, so medical care ideally requires portable scanners which can be used at the patient bedside for frequent monitoring. The local fluid accumulation in the brain space changes the water composition of the tissues, leading to a change in the associated electrical conductivity distribution. Magnetic Induction Tomography (MIT) has therefore been proposed to detect the relatively highly conductive blood leakage and hence determine the type of the edema.
Not only is MIT portable, with a potentially high image-capture rate, but it is also contactless and non-ionizing, and magnetic fields can penetrate non-conducting barriers such as the skull [3]. MIT requires an array of excitation coils to energize the object under investigation with a sinusoidal time-varying magnetic field B0. The conductivity structure
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 736–739, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
of the target causes a change ΔB in the field due to the flow of induced eddy currents. Subsequently, ΔB is measured from multiple excitations by an array of detection coils. The data-capture process is followed by an image reconstruction scheme to reveal the conductivity distribution within the target [4]. So far, two methods of image reconstruction have been investigated, namely difference imaging and absolute imaging. The latter method suffers from low sensitivity to the biological perturbation and from noise problems, particularly if no means are provided to cancel out the effects of the large primary field B0, which can mask the ΔB of interest. This paper investigates the limits of absolute imaging in detecting a brain edema. For this purpose, we considered an MIT system with a single measurement channel and derived an analytical solution for an axially symmetric head model. The sensitivity of the MIT channel to various sizes of edema was analyzed for two cases: (i) the background signal is measured, and (ii) the background field is cancelled out. For image reconstruction, a priori knowledge of the shape boundary of the head is useful; therefore this paper also considers the sensitivity of the MIT channel to changes in the diameter of the head model, as an indication of the dimensional accuracy required from the shape-scanning system.
Fig. 1 MIT channel (excitation and detection coils) and analytical head model: layer radii a0 … aK with conductivities σ0 … σK, and σK+1 = 0 outside; the coils lie at distances r0 and r1 from the origin

Table 1 Electrical properties and dimensions of human head tissues

Tissue          k      Conductivity (S m-1)
Edema           0      0.8624
White matter    1      0.1582
Gray matter     2      0.2917
CSF             3      2.002
Skull           4      0.0828
Scalp           K      0.6168
Outside         K+1    0

Radius (mm): 80
II. MODEL AND VALIDATION

The multi-shell head model was a sphere with a radially changing piecewise conductivity profile (radii ak; conductivities σk; 0 ≤ k ≤ K+1) simulating the tissue layers: scalp, skull, cerebral spinal fluid (CSF), gray matter, white matter, and an inner sphere representing the edema. The transmitter and receiver coils are modeled as filamentary circular elements (radius 25 mm), positioned 141.5 mm and 131.5 mm from the origin respectively, as shown in Fig. 1. The excitation current was sinusoidal with 1 A amplitude at a frequency of 10 MHz. Note that the channel dimensions and excitation frequency correspond to the 16-channel MIT system (MK1) described in [5]. The electrical parameters and the radii of the tissue layers are extracted from [6],[7] and displayed in Table 1. The conductivity of the edema was computed from the white matter and blood conductivities, assuming the leaked blood occupies 3/4 of the affected tissue. Permeabilities are assumed to be the same for all layers for the sake of simplicity. Electromagnetic field coupling due to tissue permittivities is minute at this operating frequency and hence is neglected.
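The edema conductivity in Table 1 is consistent with a simple volume-fraction mixing rule (an assumption of ours; the paper does not state its exact formula), given a blood conductivity of roughly 1.1 S/m at 10 MHz, which is also our assumption:

```python
# Hedged sketch: edema conductivity from white-matter and blood conductivities,
# assuming the leaked blood occupies 3/4 of the affected tissue (as the paper
# states) and a linear volume-fraction mixing rule (our assumption).
sigma_wm = 0.1582      # white matter, S/m (Table 1)
sigma_blood = 1.0971   # assumed blood conductivity at 10 MHz, S/m
blood_fraction = 0.75  # blood occupies 3/4 of the affected tissue

sigma_edema = blood_fraction * sigma_blood + (1 - blood_fraction) * sigma_wm
print(f"{sigma_edema:.3f}")  # 0.862, close to the Table 1 value of 0.8624
```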
The electromagnetic field problem is described by the basic Maxwell's equations and the constitutive laws of electromagnetism, which combine into a Poisson-type equation written in spherical coordinates (r, θ, φ) as:

$$\frac{1}{r}\frac{\partial^2 (r E_\varphi)}{\partial r^2} + \frac{1}{r^2}\frac{\partial}{\partial\theta}\left(\frac{1}{\sin\theta}\frac{\partial(\sin\theta\, E_\varphi)}{\partial\theta}\right) = j\omega\mu\sigma E_\varphi \qquad [1]$$

Using the method of separation of variables, the general solution of (1) is given by

$$E_\varphi(r,\theta) = \sum_{n=1}^{\infty} R_n(r)\,\Theta_n(\theta) \qquad [2]$$

where Rn and Θn are written in terms of modified Bessel functions and associated Legendre polynomials:

$$R_n(r) = \frac{A_n}{\sqrt{r}}\, I_{n+1/2}(\alpha r) + \frac{B_n}{\sqrt{r}}\, K_{n+1/2}(\alpha r) \qquad [3]$$

$$\Theta_n(\theta) = \sqrt{\frac{2n+1}{2n(n+1)}}\, P_n^1(\cos\theta) \qquad [4]$$
where α = (1 + j)·\sqrt{ωμσ/2}. The limit of (3) when σ = 0 is given by:

$$R_n(r) = A_n r^n + \frac{B_n}{r^{n+1}} \qquad [5]$$
Solution (2) only satisfies the homogeneous equation. In the presence of a source −jωμ I0 δ(r − r0) δ(θ − θ0), the Green function associated with this source can be shown to be

$$G(r, r_0, \theta, \theta_0) = \sum_{n=1}^{\infty} \frac{\gamma}{\sqrt{r\, r_0}}\, I_{n+1/2}(\alpha r_<)\, K_{n+1/2}(\alpha r_>)\, \Theta_n(\theta)\, \Theta_n(\theta_0) \qquad [6]$$

where γ = jωμ I0 r0 sin θ0, r> = max(r, r0) and r< = min(r, r0). Since I_{n+1/2}(∞) and K_{n+1/2}(0) diverge, the component Rn(r) of the solution in each region is given by

$$R_n^0(r) = \frac{A_n^0}{\sqrt{r}}\, I_{n+1/2}(\alpha_0 r) \qquad [7]$$

$$R_n^k(r) = \frac{A_n^k}{\sqrt{r}}\, I_{n+1/2}(\alpha_k r) + \frac{B_n^k}{\sqrt{r}}\, K_{n+1/2}(\alpha_k r), \quad 1 \le k \le K \qquad [8]$$

$$R_n^{K+1}(r) = A_n^{K+1} r^n + \frac{B_n^{K+1}}{r^{n+1}} \qquad [9]$$

From (9), (6) and (2), the field outside the conductor [10] is the sum of a scattered term, decaying as Bn^{K+1}/r^{n+1}, and the primary-field term of the Green function. To compute the voltage change, we only need to integrate the scattered field due to the target, which is the first term on the right-hand side of that equation:

$$E_{sc}(r, \theta) = \sum_{n=1}^{\infty} \frac{B_n^{K+1}}{r^{n+1}}\, \Theta_n(\theta)\, \Theta_n(\theta_0) \qquad [11]$$

$$\Delta V = \oint_{C(\mathrm{receiver})} E_{sc}(r,\theta)\, dl = \int_0^{2\pi} E_{sc}(r_1, \theta_1)\, r_1 \sin\theta_1\, d\varphi = 2\pi r_1 \sin\theta_1\, E_{sc}(r_1, \theta_1) \qquad [12]$$

where the coordinates (r1, θ1) denote the location of the receiver coil. In order to validate the analytical solution, a finite element (FE) model of the MIT arrangement was constructed using a commercial 3D numerical simulator (COMSOL Multiphysics). The problem was meshed with 168,000 finite elements approximated with quadratic shape functions. The change in the induced voltage due to the head object was calculated analytically and numerically. The resulting discrepancy was estimated at 7.5 %, which could be due to the quality of the numerical mesh.

III. RESULTS AND DISCUSSION

For a first analysis, the aim is to analyze the sensitivity of the measurement channel to various sizes of the central edema and to compare it with the noise levels exhibited by the associated MIT instrumentation; from this we can deduce the smallest radius of edema that could be resolved in absolute imaging. Two cases were analyzed:

i) The background field is not cancelled and the induced voltage comprises V0 + ΔV. The sensitivity of the channel to the edema with respect to the background signal is expressed as the signal-to-background ratio (SBR):

$$\mathrm{SBR} = \frac{\Delta V_2 - \Delta V_1}{V_0} \qquad [13]$$

where V0 is the induced voltage correlated with the primary field, and ΔV2 and ΔV1 denote the induced voltages due to the head with and without edema, respectively. As expected, the sensitivity increases with the radius of the edema; this is explained by stronger induced eddy currents increasing the coupled perturbation field.

ii) The primary field is assumed to be perfectly eliminated. The signal-to-target ratio (STR) is evaluated with respect to the induced voltage due to the unhealthy head as:

$$\mathrm{STR} = \frac{\Delta V_2 - \Delta V_1}{\Delta V_2} \qquad [14]$$
The simulation results depicted in Fig. 2 show that the channel sensitivity to the edema is considerably improved. Taking the MK1 as a reference MIT system, with a reported signal-to-noise ratio (SNR = ΔV2/noise) of 40 dB, an edema of 27 mm radius and 5.45 contrast to the background could potentially be resolved. A new MIT prototype (MK2) is under research as part of a joint project by the Universities of Manchester, Glamorgan and Swansea and Philips Medical Research; a much better SNR is expected to be achieved, so that smaller edemas could be detected in the near future. In a second analysis, the MIT sensitivity to noise caused by error in shape scanning of the head boundary is analyzed. This analysis seeks to obtain a view of the impact of
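The detectability criterion above can be made concrete with a small sketch (our own illustration; the STR value used is hypothetical, not read from Fig. 2): an edema is resolvable in absolute imaging when its relative signal, expressed in dB as 1/STR, stays below the instrument SNR.

```python
import math

def db(x):
    # Voltage ratio in decibels, 20*log10, as used for the MK1 SNR figure.
    return 20 * math.log10(x)

snr_mk1 = 40.0     # reported MK1 signal-to-noise ratio, dB
str_value = 0.005  # assumed (dV2 - dV1)/dV2 for some hypothetical edema size

# Detectable only if the relative edema signal exceeds the noise floor.
detectable = db(1 / str_value) < snr_mk1
print(detectable)  # False: 1/STR is ~46 dB, above the 40 dB noise floor
```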
the shape-scan noise on MIT measurements of the head, and helps define the design requirements of the optical shape scanner. In this investigation we assumed that the background can be eliminated, and the SNR was simulated for different radii of the head. The results displayed in Fig. 3 indicate that a spatial deviation of 1.2 mm is equivalent to the system noise exhibited by the MK1.
Fig. 2 (1/SBR) and (1/STR), in dB, versus radius of the edema (mm)

Fig. 3 Simulated SNR (dB) from the shape scanner versus radius of the head

IV. CONCLUSIONS

This paper has reported an analytical expression of the eddy-current problem for a multilayer spherical head model, which can be used to assess the detectability limits of MIT. The analytical solution was validated against a numerical head model, and a 7.5 % error was recorded. Simulation results for a single measurement channel of the MK1 showed that a central cerebral edema of 27 mm radius could possibly be recovered if instrumentation is provided to eliminate the background field. Nevertheless, an edema that has reached such a detectable size is already critical. Improvement with regard to SNR is expected with future generations of MIT systems. The effects of the limitations of shape scanners have also been investigated and can be used as criteria for future designs. Future work will focus on incorporating the frequency dependency of biological tissues and analyzing the sensitivity in frequency-differential imaging.

ACKNOWLEDGMENT

The authors would like to thank both the Algerian government and the UK EPSRC (Ref. EP/E009158/1) for financial support.

REFERENCES

1. The NHS Confederation (2008) The national stroke strategy
2. Vinas F C (2001) Bedside monitoring techniques in severe brain-injured patients. Neurol Res 23:157-166
3. Brunner P, Merwa R, Missner A et al. (2006) Reconstruction of the shape spectra using multi-frequency magnetic induction tomography. Meas Sci Technol 27:S237-S248
4. Griffiths H (2001) Magnetic induction tomography. Meas Sci Technol 12:1126-1131
5. Watson S, Williams R J, Griffiths H et al. (2003) Magnetic induction tomography: phase versus vector-voltmeter measurement techniques. Physiol Meas 24:555-564
6. Gabriel S, Lau R W, Gabriel C et al. (1996) The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz. Phys Med Biol 41:2251-2269
7. Custo A, Wells W M III et al. (2006) Effective scattering coefficient of the cerebral spinal fluid in adult head models for diffuse optical imaging. Appl Opt 45(19)
Ventilatory Pattern Monitoring by Electrical Impedance Tomography (EIT) in Chronic Obstructive Pulmonary Disease (COPD) Patients

Marco Balleza1,2, Teresa Feixas1, Nuria Calaf1, Mercedes González1, Daniel Antón2, Pere J. Riu2 and Pere Casan1
1
Unitat de Funció Pulmonar. Pneumology Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain 2 Electronic Engineering Department. Universitat Politècnica de Catalunya, Barcelona, Spain.
Abstract — Introduction. The calibration equations obtained to validate TIE4sys depend on anthropometric parameters that affect the magnitude of the recorded impedance changes; thus the equations for healthy people may not be adequate for COPD patients. Objectives. To validate the procedure and equations developed for healthy volunteers in a group of COPD patients, and to find a model for the possible differences between the results obtained with the pneumotachograph and TIE4sys. Materials and Methods. A group of 30 COPD male patients (FEV1/FVC

Table 4 Microbiological assessment for coccus (table top)

Statistics   Before UV (#/m2/min)   After UV (#/m2/min)
Mean         2.2                    1.8
Std. Dev.    2.7                    2.8
Median       1.3                    0.7
Minimum      0                      0
Maximum      8.4                    7.8

z = 0.895   P = 0.371

Table 5 Microbiological assessment for Fungi (table top)

Statistics   Before UV (#/m2/min)   After UV (#/m2/min)
Mean         3.3                    1.4
Std. Dev.    5.3                    2.7
Median       2.6                    0
Minimum      0                      0
Maximum      18.1                   7.8

z = 1.166   P = 0.244

Table 6 Paired Sample t test results for particle counts at sampling locations 1-5

Location   > 0.3 um      > 0.5 um      > 0.7 um      > 1.0 um      > 5.0 um      > 7.0 um
           t     P       t     P       t     P       t     P       t     P       t     P
1          0.05  0.96    0.21  0.84    0.13  0.90    0.99  0.35    1.86  0.09    1.80  0.10
2          0.05  0.96    0.05  0.96    0.17  0.87    0.08  0.94    0.97  0.35    0.33  0.75
3          0.19  0.85    0.13  0.90    0.04  0.97    0.64  0.54    2.84  0.02    1.35  0.21
4          0.36  0.72    0.10  0.92    0.31  0.77    0.83  0.43    1.96  0.08    1.14  0.28
5          0.54  0.60    0.13  0.90    0.45  0.66    0.81  0.44    2.14  0.06    1.34  0.21
For fungi, there was no meaningful decrease on the floor considering all 12 sampling locations, except at location M on the operating table top (Table 5). To assess the statistical meaningfulness of the particle-counting measurements, the Paired Sample t test was applied at each sampling location with a 0.05 level of significance. Statistical results are shown in Table 6 for particle sizes of 0.3 μm, 0.5 μm, 0.7 μm, 1 μm, 5 μm and 7 μm. Particle diameters of 0.3 μm and 0.5 μm are of prime importance in assessing the level of cleanliness in operating rooms. Aseptic environments must prove that they meet the international standard for cleanliness, ISO 14644-1 Class 5, where no more than 3520 particles at > 0.5 μm are present per cubic meter of sampled air. However, in this study ISO Class 6 and 7 levels of cleanliness were measured in the operating room during the 11 weeks (Fig. 3). No noticeable UV improvement is observed for particle diameters of 0.5 μm, 0.7 μm and 1 μm. A statistically acceptable decrease in particle concentration is shown at particle size > 5 μm at sampling location 3, which is close to the UV lamp fixture.
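The Paired Sample t statistic used above is computed from the differences between matched before/after counts at each location; a small sketch (our own, with invented illustrative numbers, not the study's weekly data):

```python
import math
import statistics

def paired_t(before, after):
    # Paired-sample t statistic: mean of the paired differences divided by
    # their standard error; significance is then read from a t table with
    # n - 1 degrees of freedom.
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Illustrative paired counts at one location over six hypothetical sessions
before = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0]
after  = [2.9, 2.7, 3.1, 2.8, 3.2, 2.6]
t = paired_t(before, after)
print(f"t = {t:.2f}")
```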
Fig. 3 Average particle concentrations (x10^6/m3) at > 0.5 μm over the 11 weeks. ISO Class 6 and ISO Class 7 levels are separated by the solid line.
IV. CONCLUSION

The results of the sedimentation method, tested with the non-parametric Wilcoxon Signed Rank Test, show that for coccus 8 of 15 locations exhibited a significant decrease, whereas for fungi only one of 15 locations did. The results of the particle-counting method, tested with the Paired Sample t test, demonstrated that only one of the 5 locations showed a significant decrease, and only for the particle size > 5 μm.
Y. Ülgen and I. Tezer

REFERENCES

1. Streifel A J (2000) Hospital air quality monitoring. Infection Control
2. Fitzwater J (1961) Bacteriological effect of ultraviolet light on surgical instrument table. Public Health Reports 76:97-104
3. Hayes J S, Soule B M, La Rocco M T J, Jones M Jr, Houghton L et al. (1987) Nosocomial infections: an overview. In: Carson D, Bircher S (eds) Clinical and Pathogenic Microbiology. C V Mosby Company, St. Louis
4. Kingston D (1990) Cleaning the air: the theory and application of ultraviolet air disinfection. American Review of Respiratory Disease 142:1233-1237
5. Wong E S (1996) Surgical site infections. In: Mayhall C G (ed) Hospital Epidemiology and Infection Control. Williams & Wilkins, Baltimore
6. ISO 14644-1:1999(E) Cleanrooms and associated controlled environments - Part 1: Classification of air cleanliness

Author: Yekta Ülgen
Institute: Institute of Biomedical Engineering
Street: Bebek
City: Istanbul
Country: Turkey
Email: [email protected]
Soluble Gas Tight Capsules for use in Surgical Quality Testing J.B. Vorstius, G.A. Thomson and A.P. Slade University of Dundee, Division of Mechanical Engineering & Mechatronics, Dundee, UK
Abstract — During many surgical procedures, such as gastrointestinal or vascular surgery, it is necessary to excise and reconnect bodily conduits. Failed connections can have extremely serious consequences, and a method to help surgeons determine the integrity of connections has been proposed. This involves the detection of trace gas breaching an anastomosis. A key feature of this is establishing a method to place a controlled volume of gas into the conduit. A soluble capsule method is presented here. The requirements were that the capsule must be safe, that a volume of 0.5-1.0 ml of gas at 2 bar pressure should be contained, that the filling gas can be varied to suit the clinical application, and that the capsules have a shelf life of at least 5 days without showing loss of gas. A device and procedure were developed to meet these criteria. This consists of modified oral pharmaceutical capsules and a capsule-filling mechanism contained in a vacuum chamber. Sealing the capsules has been achieved via an alcohol/water mix coupled to capillary action. Results show performance which meets the design specification, and the capsules have shown themselves effective in tests using phantom scenarios.

Keywords — pharmaceutical capsules, gas, leakage, anastomoses

I. INTRODUCTION

Many forms of surgery involve the excision of internal tissue. Following the removal of the diseased or damaged tissue, restorative work will be performed. Where this tissue forms a conduit or bladder for bodily fluids, there can be particularly serious consequences in the event of a failure of the restoration. In such situations, even with modest leaks, fluid can hemorrhage through the tissue, resulting in loss of function, and/or infections can arise in either the tissue wall or the body lumen. A classic example of this occurs in gastrointestinal surgery. Surgery to the gastrointestinal tract is extremely common as a way to deal with a number of conditions, including colon cancer or physical injury [1-4]. Typically the diseased or damaged section is removed and the remaining healthy tissue is reconnected using sutures or staples to maintain function. Even slight failure of this anastomosis can provide a pathway for gut bacteria and faecal matter to escape into the abdominal lumen, where peritonitis can quickly develop. This can be fatal and often requires surgical correction, which poses further risk to the patient and cost to the healthcare provider.

At present there is no widely accepted way to test the quality of an anastomosis during surgery. Generally this is done by physical inspection, though with leakage figures of typically 4 % [5] this approach cannot be said to be particularly certain. Alternative approaches tried for colon surgery include applying pressurized fluid through the anus to check for leakage, or flooding the open lumen with saline while applying compressed air via the rectum to check for bubbles [6-11]. These methods are, however, difficult to apply, and the pressures used risk damaging the connections they purport to check. As a result these techniques are not commonly applied and tend only to be used in animal tests when evaluating new anastomosis techniques or equipment.

Fig. 1 Capsule being inserted into colon

An alternative approach which has been considered by our group is to use a small volume of low-pressure gas contained in a capsule placed into the colon prior to completion of suturing, see Fig. 1. Following closure of the anastomosis, the capsule releases the gas into the colon, and breaches of the suturing can be detected via an appropriate chemical sensor. A key feature of this method is the design of suitable capsules. This paper presents work done in developing a system to allow capsules to be reliably filled with a known gas at a known pressure, and demonstrating the use of the capsules in phantom scenarios.

II. MATERIALS AND METHODS

By considering the application to which the capsule would be applied, the following specification was drawn up for the capsule and its associated filling system.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 895–898, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
• The capsule should be safe for placement in the intestinal tract
• It should be able to house a gas volume of 0.5-1.0 ml
• It should be able to contain this gas at a pressure of 2 bar
• It should be possible to fill the capsule with a range of gases
• The gas should be released either through the capsule dissolving or by the capsule being physically breached; the exact mechanism is to be determined at a later stage
• The sealed capsules should show an acceptable shelf life without loss of gas
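The gas-quantity requirement in the specification above can be put in molar terms with the ideal gas law (a back-of-envelope sketch of our own; body temperature and the reading of 2 bar as absolute pressure are assumptions):

```python
# n = PV/(RT) for the specified 0.5-1.0 ml of gas at 2 bar.
R = 8.314   # gas constant, J/(mol K)
T = 310.0   # K, roughly body temperature (assumed)
P = 2.0e5   # Pa, taking the 2 bar specification as absolute pressure (assumed)

for volume_ml in (0.5, 1.0):
    n = P * (volume_ml * 1e-6) / (R * T)
    print(f"{volume_ml} ml -> {n * 1e6:.1f} micromol of gas")
# 0.5 ml -> 38.8 micromol of gas
# 1.0 ml -> 77.6 micromol of gas
```

These tens of micromoles set the scale of trace gas the downstream chemical sensor must be able to detect after release.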
While the placement of gas-containing capsules into a surgical site is novel, the use of capsules as drug delivery vehicles is routine via the oral route. Typically these pharmaceutical capsules take one of two forms: two-part capsules or soft-gel capsules. Both forms are made from gelatin, which is water soluble, and such capsules are extremely common, having been in use for over 150 years [12]. Two-part capsules are well suited to the containment of powder or granular material, and can be used for small-scale operations as special filling and sealing equipment is not necessarily required. They are, however, not naturally gas or liquid tight. Soft-gel capsules can also contain powder but, as they are factory sealed, are commonly used to contain oil-based liquids. They do, however, require specialist manufacturing and filling facilities.
For our work, while developing a bespoke solution was a possibility, it was felt that a modification of the two-part capsules might be an effective option. These were readily available, proven to be safe and offered an accessible base level from which to begin the research. Fig. 2 and Table 1 show a schematic of a capsule and details of various standard capsule sizes. Our requirements called for a gas volume of 0.5 ml to 1.0 ml; therefore capsule sizes "0" and "00" were selected for the trials. As previously stated, these capsules are generally used to carry powder and are administered orally. We required capsules which would be gas tight. It was therefore necessary to develop a filling rig which would allow us to fill and seal the capsules with a gas of our choosing. To this end, we developed a gas-tight box with movable adapters for the capsule parts, a mechanism to join them and a pipe system to spray the sealing solution. The prototype box held 5 capsules. The filling procedure begins with arranging the capsule halves in the adapters and bringing them loosely together, without engaging the locking rings (Fig. 3).
Fig. 3 Capsule halves in the adapters with pipe to spray the sealing solution
Fig. 2 Typical two part capsule showing locking rings to hold capsule together following filling
The box is then closed and a 90 percent vacuum created. In the next step, the gas to be sealed into the capsules is slowly admitted into the box up to a pressure of 2 bar. Through the pipe system, a solution of 50 percent ethanol and 50 percent water is sprayed onto the joint where the capsule body and capsule cap meet. The capsules are then joined together. After 30 minutes of drying, the filled and sealed capsules can be removed from the box (Fig. 4).
Table 1 Two part capsule sizes

Size   Outer Diameter (mm)   Locked Length (mm)   Volume (ml)
000    9.9                   26.1                 1.37
00     8.5                   23.3                 0.95
0      7.7                   21.7                 0.68
1      6.9                   19.4                 0.50
2      6.4                   18.0                 0.37
3      5.8                   15.9                 0.30
4      5.3                   14.3                 0.21
IFMBE Proceedings Vol. 22
Fig. 4 Filled and sealed capsule (sealing coloured)
Soluble Gas Tight Capsules for use in Surgical Quality Testing
Following the development of the filling mechanism, it was necessary to test whether the capsules were filled consistently and could hold the gas over a longer period of time without leakage. Only once these conditions were met would the capsules be fit for use in surgery. To measure the amount of gas in the capsules, CO2-filled capsules were placed in turn into a closed vessel and activated by injection of water into the vessel to dissolve the capsule shell. A sensor (Gascard, Edinburgh Instruments, Edinburgh, UK) measured the gas concentration in parts per million (ppm) inside the vessel. Several test series with capsules from the same and from different fillings showed that the amounts of gas in the capsules produced gas concentrations in the vessel that varied by only 150 to 200 ppm. Fig. 5 shows the amount of gas released by capsules of one filling, measured in ppm.
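For orientation, the measured concentrations can be compared against a simple ideal-gas estimate of the expected concentration rise. This sketch is ours, not from the paper, and the 1 L vessel volume is purely a hypothetical assumption (the actual vessel volume is not stated here):

```python
def expected_ppm(capsule_ml, fill_bar, vessel_l):
    """Ideal-gas estimate of the CO2 concentration rise (in ppm) when a
    capsule releases its contents into a sealed air-filled vessel.
    The capsule gas expands from fill_bar to ~1 bar (atmospheric) on release."""
    released_ml = capsule_ml * fill_bar   # equivalent gas volume at 1 bar
    return released_ml / (vessel_l * 1000.0) * 1e6

# Size "0" capsule (0.68 ml) filled to 2 bar, hypothetical 1 L vessel
print(round(expected_ppm(0.68, 2.0, 1.0)))  # → 1360
```

An estimate of this order is consistent with the ~1000–2500 ppm range of the measured traces, though the true vessel volume would be needed for a quantitative comparison.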
the colon phantom was approximately 60ml and initially contained air at atmospheric pressure. This was then placed in a sealed box with a gas sensor, this box having a volume of ??? and again containing air at atmospheric pressure. A small quantity of water was injected into the phantom colon via an end cap to dissolve the capsule and release the gas. Fig. 7 shows results of tests in which the capsule successfully dispersed its gas which was then either contained by an intact colon phantom or was leaked from the holed phantom.
III. RESULTS

Fig. 5 Gas release of capsules (gas concentration in ppm against time since capsule activation, 0–200 s; one trace per capsule, Capsules 1–4, with a 20 s offset between traces for clarity)
Fig. 6 shows the results from a batch of capsules tested over the course of a week. It can be seen that, while the results are not entirely consistent, the capsules appear to be effective at retaining the gas over this period. The capsules were also tested in a section of artificial colon. This would help check the overall feasibility of the system in determining leakage. A 2 bar size 0 (0.68 ml) CO2 capsule was placed in a section of phantom colon (Limbs & Things, Bristol, UK). This section of phantom colon had end caps to restrict the volume of colon exposed and to simulate the clamps used in surgery. The internal volume of
Fig. 6 Test of gas retention of capsules over a period of 5 days (gas concentration in ppm against time since capsule activation, 0–200 s; one trace per day, Mon–Fri)

Fig. 7 Tests of capsules in intact and holed (1 mm hole) phantom colon (gas concentration in ppm against time, 0–300 s)
IV. DISCUSSION These results indicate that standard hard gelatin capsules can be modified to carry gas for surgical leak detection.
There are, however, some outstanding issues still to be resolved, largely related to how these capsules would need to function in the clinical environment. A key issue is ensuring that the capsules deploy their gas only once the anastomosis has been completed. At present this has been achieved by injecting water into the cavity containing the capsule. Whether this is viable in a clinical setting, and whether residual moisture in the colon would cause premature gas deployment, are issues currently being explored. V. CONCLUSION Leakage following anastomosis of the gastrointestinal tract is a serious problem with major clinical and financial consequences for a great many patients and healthcare providers. The development of a safe gas capsule to aid in the detection of such leaks at the time of operation is an important step in helping reduce their number.
ACKNOWLEDGMENT The authors would like to acknowledge the UK Engineering and Physical Sciences Research Council for supporting this work through grant EP/D003040/1.
REFERENCES
1. Herrera F.A., Coimbra R., Easter D.W. (2008) Penetrating colon injuries: primary anastomosis versus diversion. Journal of Surgical Education 65(1):31–35
2. Rood L.K. (2007) Blunt colon injury sustained during a kickboxing match. Journal of Emergency Medicine 32(2):187–189
3. Boyle P., Ferlay J. (2005) Cancer incidence and mortality in Europe, 2004. Annals of Oncology 16:481–488
4. National Service Framework Assessments No. 1 (2001) NHS Cancer Care in England and Wales. Supporting Data: 1, Who gets cancer and their survival. Commission for Health Improvement, Audit Commission
5. Demetriades D. (2004) Colon injuries: new perspectives. Injury 35(3):217–222
6. Wheeler J.M.D., Gilbert J.M. (1999) Controlled intraoperative water testing of left-sided colorectal anastomoses: are ileostomies avoidable? Annals of the Royal College of Surgeons of England 81:105–108
7. Sugerman H.J. et al. (2000) Ileo pouch anal anastomosis without ileal diversion. Annals of Surgery 232(4):530–541
8. Griffith C.D.M., Hardcastle J.D. (1990) Intraoperative testing of anastomotic integrity after stapled anterior resection for cancer. Journal of the Royal College of Surgeons of Edinburgh 35:106–108
9. Beard J.D. et al. (1990) Intraoperative air testing of colorectal anastomoses: a prospective, randomized trial. British Journal of Surgery 77:1095–1097
10. Royle J.P., Phillips R.K. (1993) An inexpensive method of quality assessment in anastomosis workshops. Journal of the Royal Army Medical Corps 139:105–108
11. Gilbert J.M., Trapnell J.E. (1988) Intraoperative testing of the integrity of left-sided colorectal anastomoses: a technique of value to the surgeon in training. Annals of the Royal College of Surgeons of England 70:158–160
12. Dekker M. (1998) History of dosage forms and basic preparations. In: Encyclopedia of Pharmaceutical Technology 7. Informa Health Care, pp 304–306
Optimization of Ultrasonic Tool Performance in Surgery Yongqiang Qiu, Zhihong Huang, Alan Slade and Gareth Thomson School of Engineering, Physics and Mathematics, University of Dundee, Dundee, DD1 4HN, UK Abstract — This paper investigates the tool/material interface boundary conditions in an actual operating environment. The aim is to establish a fundamental understanding of the tool/material interfacial mechanisms using finite element methods and to predict how the vibration parameters and material properties might influence the overall system performance. In the simulation, a 3D FE model is developed using dynamic mechanical analysis to characterize the vibration parameters of the tool at its tuned frequency. A selection of the material and geometry required for the tool is examined in this analysis. An ultrasonic cutting system is then implemented to maximize the ultrasonic benefits. This knowledge is then used to quantitatively identify a close approximate model of the interfacial boundary conditions for estimating the effects on the die termination in the analytical stages of design. Keywords — Ultrasonic tool design and optimization, material/tool interaction
The mode of resonance is in the longitudinal direction, caused by the elongation and contraction of the ultrasonic transducer. II. DESIGN OF THE BLADE The blade is designed to operate at 35 kHz and vibrate in the longitudinal mode. Two main measures of performance should be taken into account when designing the horn: the uniformity of performance at the tip surface and the frequency separation [3]. The uniformity is defined as the ratio of the output surface's minimum amplitude to its maximum amplitude. The frequency separation is the frequency difference between the axial (longitudinal) resonant mode shape and the non-axial resonant shapes. To avoid mode coupling, a frequency separation of 1.2 kHz around the longitudinal working frequency is required.
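The two design measures can be expressed directly. The short sketch below is ours, not from the paper; it encodes the uniformity ratio and the 1.2 kHz separation check, using the aluminium-horn modal frequencies reported later in the text:

```python
def uniformity(amplitudes):
    """Uniformity of the output surface: minimum amplitude / maximum amplitude."""
    return min(amplitudes) / max(amplitudes)

def frequency_separation_ok(f_working, nearby_modes_hz, min_sep_hz=1200.0):
    """Check that every nearby non-axial mode is at least min_sep_hz
    away from the longitudinal working frequency."""
    return all(abs(f - f_working) >= min_sep_hz for f in nearby_modes_hz)

# Aluminium horn: longitudinal mode at 35077 Hz, neighbours at 32406 / 37347 Hz
print(frequency_separation_ok(35077, [32406, 37347]))  # → True
```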
I. INTRODUCTION There has recently been much research on medical treatment technology using ultrasound. The ultrasonic surgical knife is one such technology. It cuts tissue using ultrasonic vibrations of the blade and stops bleeding by coagulation at the same time. Its advantages include reduced operating time and better-quality geometric cuts; however, reliability issues arising from the inherent vibration characteristics of ultrasonic tools, such as non-linear behaviour and the problems associated with maintaining a tuned operating condition, have greatly limited the exploitation of ultrasonically assisted technology [1]. This is due to the absence of a clear understanding of the vibration characteristics required to produce the best results. Ultrasonic knives work in the frequency range from 20 to 100 kHz (e.g. Johnson & Johnson Harmonic Scalpel: 55.5 kHz; Olympus Sonosurg: 23.5/47 kHz; Aloka Sonop: 23/35 kHz [2]). In this work, the ultrasonic cutting system operates at a frequency of 35 kHz, generated by a piezoelectric transducer, which transforms electrical power into mechanical movement. In order to achieve maximum vibration amplitude at minimum power consumption, it is necessary to tune the system to one of its natural frequencies.
A range of different materials, sizes and shapes of horn is investigated. In this design, two materials are considered: aluminium with a density of 2800 kg/m³, a Young's modulus of 70 GPa and a Poisson's ratio of 0.33; and mild steel with a density of 7800 kg/m³, a Young's modulus of 210 GPa and a Poisson's ratio of 0.29. The typical design of the blade is shown in Fig. 1.
Fig. 1 Ultrasonic cutter design.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 899–902, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
III. SIMULATION OF THE BLADE
Finite element simulations were developed using a commercial finite element code, ANSYS. The ultrasonic horns were meshed using 3D solid elements. Using the solutions to the equation of motion for an undamped system, the natural frequencies of a system can be computed.
[M]{d²x/dt²} + [K]{x} = {0}    (1)

analyses were carried out to validate the theoretical results. The frequency response functions (FRFs) of both are shown in Fig. 3. Results from these two analysis methods generally agree well but always differ somewhat due to limitations of the computer software.
The mode shape required for ultrasonic cutting is the longitudinal mode shape, as it acts along the cutting axis of the tool. The calculation of this frequency provides the basis of the analysis for the ultrasonic horn. Choosing the length of the blade (L) to be half the wavelength (λ) results in an amplified displacement at the opposite end of the horn, making the output surface operation more efficient.
c = √(E/ρ),  λ = c/f,  L = λ/2 = c/(2f)    (2)

where E is the Young's modulus, ρ the density and f the working frequency.
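As a numerical check of equation (2), the following short script (our sketch, not part of the paper) computes the thin-rod sound speed and the half-wavelength blade length from the aluminium properties quoted above:

```python
import math

E = 70e9      # Young's modulus of aluminium, Pa
rho = 2800.0  # density of aluminium, kg/m^3
f = 35e3      # target longitudinal working frequency, Hz

c = math.sqrt(E / rho)   # thin-rod speed of sound, m/s
L = c / (2 * f)          # half-wavelength blade length, m

print(f"c = {c:.0f} m/s, L = {L * 1000:.1f} mm")  # c = 5000 m/s, L = 71.4 mm
```

A blade tuned near this length is consistent with the FE-computed longitudinal mode at 35077 Hz reported below.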
A number of resonant frequencies are recorded in ANSYS. The longitudinal axial mode shape of the aluminium horn occurs at 35077 Hz, as shown in Fig. 2. There are also other modes around the longitudinal mode, at 32406 Hz and 37347 Hz. The frequency separations are 2671 Hz and 2270 Hz, conforming to the 1.2 kHz requirement.
(b) Fig. 3 Frequency Response Function of the cutter by (a) FE simulation (b) experiments.
Furthermore, the stress distribution has been taken into consideration to avoid a high-stress region occurring at the thin blade end, as shown in Figs. 4 and 5.
Fig. 2 Longitudinal mode of aluminum horn.
The frequency response of the horn is then simulated in a pre-stressed condition with an external harmonic displacement load of 0.01 mm at 35077 Hz applied at one end. The high uniformity of vibration at the output surface in Fig. 6 shows that the blade is well designed.
If mild steel of the same size is used, the longitudinal mode shape occurs at 36393 Hz. Experimental modal
Fig. 7 Amplitude of the whole blade.
Fig. 4 Stress contour of the cutter.
(a)
Fig. 5 Stress contour of the hollow horn.
(b)
Fig. 6 Amplitude across blade output surface.
(c) Fig. 8 Temperature (a) depth (b) and cutting efficiency (c) history when subject to a long period of ultrasonic pulse.
Several preliminary tests have been undertaken using a purpose-designed ultrasonic generator that includes a sine wave generator and a pulse-width modulation circuit. The materials used for these tests were balsa wood and Sawbones samples. The "quality of cutting" is defined as the ratio between the depth of cutting and the degree of burning. The results for temperature, depth and cutting efficiency in Figs. 8 and 9 show that the ultrasonic system is controllable, and that an optimum configuration of the ultrasonic tool based on finite element analysis could be achieved by incorporating knowledge about optimum vibration performance and modal analysis.
(a)
IV. CONCLUSIONS In this paper, 3D FE simulation has been carried out to characterize the vibration parameters of the tool at its tuned frequency. A selection of the material and geometry required for the tool is examined in this analysis. The influence of the tool/material boundary conditions on the operation performance of an ultrasonic tool is investigated with a view to the design and construction of ultrasonic tooling. An optimum configuration of the ultrasonic tool based on finite element analysis is conducted incorporating knowledge about optimum vibration performance and modal analysis. Specific tuning based on the calculated resonant frequency is conducted for the system to achieve optimum efficiency. This knowledge is then used to maximize ultrasonic benefits, and to quantitatively identify a close approximate model of the interfacial boundary conditions for estimating the effects on the die termination in the analytical stages of design.
(b)
(c)
REFERENCES
Fig. 9 Temperature (a), depth (b) and cutting efficiency (c) histories when subject to a single ultrasonic pulse.
To find the nodal point for mounting, the amplitude variation along the whole blade has been plotted in Fig. 7, using 30 nodes sampled every 3 mm from the horn bottom to the tip. This allows the blade to be fixed without damping the vibrations. From Fig. 7, the zero-amplitude point is estimated to lie approximately 38.5 mm from the bottom of the blade.
1. Cardoni A., Lucas M., Cartmell M., Lim F. (2004) A novel multiple blade ultrasonic cutting device. Ultrasonics 42:69–74
2. Ebina K., Hasegawa H., Kanai H. (2007) Investigation of frequency characteristics in cutting of soft tissue using prototype ultrasonic knives. Japanese Journal of Applied Physics 46(7B)
3. Sherrit S., Badescu M., Bao X., Bar-Cohen Y., Chang Z. (2004) Novel horn designs for power ultrasonics. Proceedings of the IEEE Ultrasonics Symposium, Montreal, Canada
A parallel kinematic mechanism for highly flexible laparoscopic instruments A. Röse, H.F. Schlaak Technische Universität Darmstadt, Institute of Electromechanical Design, Darmstadt, Germany Abstract — Classic laparoscopic instruments suffer from their rigid structure and have only four degrees of freedom, limited by the pivot point at the access to the abdomen. Inside the body they are rigid. The surgeon has to choose the access carefully because a change in the required working direction might require a new access. Additionally, some operation tasks are impossible in laparoscopic surgery since the organs are not accessible in a straight line. Approaches to overcome these restrictions aim to add flexibility to the distal instrument tip, either by simple mechanically coupled gear mechanisms or by multiple wire-driven bending. This paper describes the development of a novel parallel kinematic mechanism that is mounted at the tip of a rigid instrument and is able to move in four degrees of freedom (DOF), driven by four electrically actuated driving rods. The instrument platform is designed to carry any instrument but will initially be used with dissection instruments. Thus it is possible to control the instrument tip in multiple working directions and move the working platform in a small space even if the instrument shaft is stationary. To develop the mechanism, technologies and design methods for parallel kinematic machines were used. Parallel kinematic mechanisms are difficult to control due to their closed-loop kinematic structure. The so-called inverse kinematic problem, whose solution is needed for control, has been solved, making it possible to provide the surgeon with an intuitively usable control element. The mechanism only contains joints with one degree of freedom. Hinges with one DOF can be produced in thermoplastic injection moulding technology. Thus the presented mechanism can be produced with a low-cost technology, allowing cost-effective and sterile disposables.
Keywords — laparoscopy, parallel kinematic mechanism, intuitive control, highly flexible
I. INTRODUCTION Much research work is being done in the field of laparoscopic surgery, aiming to provide tools and endoscopes with higher flexibility for minimally invasive interventions. In contrast to flexible endoscopes, laparoscopic instruments need to be able to resist much higher forces. In the case of laparoscopic liver surgery, forces up to 5 N were measured in a cholecystectomy scenario [1]. The currently available flexible laparoscopic instruments are either purely mechanically controlled, as in the case of the Radius surgical system [2], which demands special training skills from the operating surgeon, or driven by cables [3]. These cable-driven approaches come along with difficult sterilisation for laparoscopic application. All of these approaches provide only rotational movement of the distal tool tip. Merlet [4] proposes a parallel kinematic working platform with three degrees of freedom (Fig. 1).
Fig. 1: Three-DOF parallel kinematic instrument by Merlet et al. [4]
This is an interesting approach regarding both the working forces and the ability for intuitive control. The proposed platform could easily be controlled by a joystick because the working platform is driven by computer-controlled electric drives. The presented approach is to use a parallel kinematic mechanism with an improved kinematic structure that moves in a larger working space and can later be fabricated from injection-moulded thermoplastics due to its simple joint layout. Thus it will be a cost-effective disposable part that does not suffer from sterility problems. II. KINEMATIC STRUCTURE A. Overview of the moving capabilities In an extensive study, the requirements of laparoscopically working surgeons have been evaluated. The distal side of an instrument basically has to provide additional rotational degrees of freedom (DOF) for working in different directions without changing the trocar position (the abdominal access). If possible it should be positionable
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 903–906, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
within several cubic centimetres to accomplish small precision movements. According to the requirements the serial kinematic structure of our distal instrument side has been defined as shown in Fig. 2.
f_tot = n_DOF + 6 · (c − 1)
(1)
where f_tot is the total number of degrees of freedom to be placed in the mechanism joints, n_DOF is the number of degrees of freedom of the mechanism (equal to four in this case) and c is the number of kinematic chains (usually equal to the number of mechanism legs), which is also four here. Thus a total of 22 joint DOF is needed, four of which are represented by the driving actuators. This leads to a large class of possible realisations. Out of this large variety, the kinematic scheme shown in Fig. 3 has been chosen.
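The joint-DOF count from the Grübler formula (1) can be written as a one-line helper. This sketch is ours, for illustration only; it reproduces both DOF counts quoted in the paper:

```python
def grubler_joint_dof(n_dof: int, n_chains: int) -> int:
    """Total joint DOF to distribute over the mechanism,
    per equation (1): f_tot = n_DOF + 6 * (c - 1)."""
    return n_dof + 6 * (n_chains - 1)

# Four-DOF platform driven through four kinematic chains -> 22 joint DOF,
# four of which are the driving actuators
print(grubler_joint_dof(4, 4))  # → 22

# The two-DOF functional sample described later: 8 joint DOF in two chains
print(grubler_joint_dof(2, 2))  # → 8
```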
Fig. 2: Serial kinematic structure of the proposed instrument
The combination of two rotational DOF (2) and (3) and one linear DOF (1) guarantees a working space of some cm³ that can easily be controlled by a three-DOF joystick (up-down, left-right, forward-backward). The fourth DOF (4) is a rotational hinge in the same plane as (2) and thus extends the rotational capability to one side. It will be controlled by an additional 1-DOF control element. This provides a total deflection of 90°, allowing the instrument to work around a corner. The presented serial kinematic chain is later referred to as the "system chain". The kinematic structure was chosen to work well with dissection instruments, such as a monopolar HF current or laser dissector, that are being developed by our project partners.
Fig. 3: Kinematic scheme of the parallel kinematic mechanism
In contrast to Merlet's design, not all of the kinematic chains lead from the base (the instrument shaft) to the tool centre point. For reasons of lateral stability, a branched kinematic structure has been realized. C. Realization and Workspace
B. The parallel kinematic mechanism In order to be able to drive the system chain, linear actuators are fixed to the instrument shaft and passive driving chains are added to the mechanism. This leads to the earlier-mentioned parallel kinematic structure. A parallel mechanism contains closed-loop kinematic chains and one actuator per chain. Thus, placing the actuators in the instrument shaft, we obtain the desired movement with one single passive working mechanism: the parallel kinematic mechanism. To move a working platform in four degrees of freedom we need at least four kinematic chains, i.e. the system chain and three additional driving chains. The Grübler formula [5] gives the total number of degrees of freedom to be placed in mechanism joints for achieving exactly four DOF with four actuators:
Fig. 4 shows the first functional sample as a result of the design process. Four driving rods move in the direction of the instrument shaft while the tool centre point carrying the dissection instrument moves within the orientated workspace shown in Fig. 5. The small arrows point into the working direction while their bases represent the tool centre point position. An 11 cm3 workspace is achieved. A 90° working angle is possible. The realized workspace allows for small precision movements even when the instrument shaft is fixed relative to the organs. This seems to be of great importance for cutting instruments. Surgeons often do complex cutting geometries by successive small movements. That is exactly what is possible with the presented mechanism.
a small moving radius of the joint. One of the early papers that describes the calculation of monolithic joints is [7]. It gives a good overview of how to calculate flexure hinges during the design process.
10 mm
Fig. 4: First functional sample of the parallel kinematic mechanism
Fig. 5: Workspace of the presented mechanism as a convex hull with small arrows pointing in the working direction. The workspace is orientated according to Fig. 4
In the design process, attention has been paid to the fact that a monolithic implementation of the mechanism shall later be constructed. In most parallel kinematic mechanisms the degrees of freedom of the joints are concentrated in universal (3-DOF) or cardanic (2-DOF) joints, making the kinematic calculation much easier (or even analytically possible) due to geometric simplification. Here, hinges with just one rotational degree of freedom have been used. Hence it will later be possible to fabricate the whole mechanism as one injection-moulded plastic part. This approach follows Jungnickel's proposal for monolithic parallel kinematic mechanisms [6]. Fig. 6 shows a 1-DOF hinge and its equivalent monolithic representation. The monolithic hinge has to be manufactured from a flexible material such as a thermoplastic. Much of the literature refers to monolithic joints made from stiff materials like metal or epoxy for MEMS applications, though these materials come along with
Fig. 6: 1-DOF joint and its monolithic representation
First results have already been achieved in transferring a complex parallel kinematic mechanism into a monolithic one. Fig. 7 shows a two-DOF mechanism which was built as a first functional sample to show the working principle of the described class of mechanisms. This mechanism contains 8 joint degrees of freedom in 1-DOF joints, two of which are the driving actuators. This number of joints is needed to achieve a two-DOF movement with two actuators according to Grübler's formula (1). The shown monolithic mechanism has been produced by laser sintering, a rapid prototyping process that can be used for developing monolithic joints. Later, it is essential to use injection moulding technology for the mechanism because it is the only known technology for producing joints with large deflections suitable for many load cycles. Future work will concentrate on the transfer of the presented 4-DOF mechanism into injection moulding fabrication. In this way, low-cost production of disposable parts will be realized. D. Mechanism control With 22 DOF spread over the mechanism as 1-DOF joints, the kinematic calculation becomes much more difficult, or even analytically impossible, compared to a mechanism with concentrated joint DOF as in cardan or ball joints. This problem has been solved by the implementation of a numerical Newton-Raphson approximation solver [8] for the inverse kinematics of the developed mechanism. The inverse kinematics calculation is needed for steering the actuators according to a desired mechanism position. The solver produces a new solution every 10–20 ms on a 1.6 GHz Pentium processor, because each approximation starts from the last calculated solution, which remains close to the new solution at realistic moving velocities. The calculation speed can be
optimized using dedicated calculation hardware, e.g. field-programmable gate arrays (FPGAs). Some very remarkable results in calculating complex mechanical problems on dedicated hardware have been described in [9] and can lead to faster real-time calculation for the presented mechanism.
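The warm-started Newton-Raphson inverse kinematics can be sketched generically. The mechanism's actual forward kinematics is not given here, so the two-link planar arm below is a purely hypothetical stand-in; what the sketch illustrates is the solver structure (finite-difference Jacobian, iteration from the previous solution):

```python
import numpy as np

def newton_raphson_ik(forward, q0, target, tol=1e-9, max_iter=50):
    """Solve forward(q) = target for joint values q by Newton-Raphson
    with a finite-difference Jacobian. Warm-starting from the previous
    solution q0 keeps iteration counts low, as noted in the text."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        err = forward(q) - target
        if np.linalg.norm(err) < tol:
            break
        n = len(q)
        J = np.zeros((len(err), n))
        h = 1e-7
        for i in range(n):
            dq = np.zeros(n)
            dq[i] = h
            # central difference column of the Jacobian
            J[:, i] = (forward(q + dq) - forward(q - dq)) / (2 * h)
        q = q - np.linalg.solve(J, err)  # Newton step
    return q

# Hypothetical two-link planar arm as forward kinematics, for illustration only
def fk(q, l1=1.0, l2=1.0):
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

q = newton_raphson_ik(fk, [0.3, 0.5], np.array([1.2, 0.8]))
```

Starting each solve from the previous pose is what keeps the per-cycle cost low enough for the 10–20 ms update rate reported above.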
V. ACKNOWLEDGEMENT We want to thank the German Federal Ministry of Education and Research, which is supporting the presented work in the project FUSION, subproject INKOMAN (intracorporeal manipulator) [10] (reference number 16SV2023).
REFERENCES
Fig. 7: 2-DOF mechanism in a) precision mechanics and b) laser sintering technology.
III. CONCLUSION A new 4-DOF mechanism for laparoscopic instruments has been developed and built as a functional sample. It is well suited for complex cutting movements with dissection instruments. Technical problems, namely the calculation of the inverse kinematics for instrument control and the exclusive use of 1-DOF joints for future monolithic fabrication, have been solved. IV. OUTLOOK A laparoscopic instrument using the described 4-DOF mechanism for use in animal experiments has been built. Tests by surgeons are planned. The mechanism will then be optimized and transferred to an injection moulded monolithic plastic part in order to achieve a cost effective single use instrument tip.
1. Rausch J., Röse A., Werthschützky R., Schlaak H.F. (2006) INKOMAN – Analysis of mechanical behaviour of liver tissue during intracorporal interaction. Biomedizinische Technik, Proceedings, Gemeinsame Jahrestagung der Deutschen, Österreichischen und Schweizerischen Gesellschaften für Biomedizinische Technik, 06.–09.09.2006, ETH Zürich, Switzerland
2. Bueß G., Matern U., Kuner W., Rudinski A., Burghardt J. (2004) Wie beeinflusst die Technik die Entwicklung in der minimal-invasiven Chirurgie? Chir. Gastroenterol. 20:7–14
3. Nakamura R., Kobayashi E., Masamune K., Sakuma I., Dohi T., Yahagi N., Tsuji T., Hashimoto D., Shimada M., Hashizume M. (2000) Multi-DOF forceps manipulator system for laparoscopic surgery. Proc. Third International Conference on Medical Image Computing and Computer-Assisted Intervention, Pittsburgh, PA, USA, pp 11–14
4. Merlet J., INRIA S. (2001) Micro parallel robot MIPS for medical applications. Proceedings 2001 8th IEEE International Conference on Emerging Technologies and Factory Automation
5. Tsai L.-W. (1999) Robot Analysis. John Wiley & Sons, New York
6. Jungnickel U. (2004) Miniaturisierte Positioniersysteme mit mehreren Freiheitsgraden auf der Basis monolithischer Strukturen. Dissertation, TU Darmstadt, Germany
7. Paros J.M., Weisbrod L. (1965) How to design flexure hinges. Machine Design, Nov. 25 1965, pp 151–156
8. Stoer J., Bauer F.L. (2005) Numerische Mathematik 1. Springer, Berlin
9. Hildenbrand D., Lange H., Stock F., Koch A. (2008) Efficient inverse kinematics algorithm based on conformal geometric algebra using reconfigurable hardware. Intl. Conf. on Computer Graphics Theory and Applications (GRAPP)
10. Röse A., Kern T.A., Eicher D., Schemmer B., Schlaak H.F. (2006) INKOMAN – An intracorporal manipulator for minimally invasive surgery. Biomedizinische Technik, Proceedings, Gemeinsame Jahrestagung der Deutschen, Österreichischen und Schweizerischen Gesellschaften für Biomedizinische Technik, 06.–09.09.2006, ETH Zürich
Author: Andreas Röse
Institute: Technische Universität Darmstadt, Institute of Electromechanical Design
Street: Merckstraße 25
City: 64283 Darmstadt
Country: Germany
Email:
[email protected] IFMBE Proceedings Vol. 22
A Smart Ultrasonic Cutting System for Surgery
Anila Thampy, Zhihong Huang, Alan Slade and Victor Fernandez
School of Engineering, Physics and Mathematics, University of Dundee, Dundee, DD1 4HN, UK
Abstract — Ultrasonic cutting is widely used in food processing applications to produce a clean and accurate cut. However, it is yet to be adopted in orthopedic applications, mainly due to the high temperatures that can be generated at the cut site. In this paper a single-blade ultrasonic cutting device is used to study ultrasonic cutting of different materials: cheese, plastic and wood. Experimental and computed results for the relationship between forming force and tool displacement are compared and show close agreement. If deformation is performed under superimposed vibration, the mean stress necessary to maintain plastic flow decreases appreciably in comparison with that for purely static deformation, and this decrease is accurately predicted by the finite element (FE) models. Future work aims to perform the experiments at temperatures close to that of the body in order to explore whether it is possible to maintain cutting temperatures within safety limits by controlling the cutting parameters. Keywords — Ultrasonic cutting, Finite Element modeling, Surgery
I. INTRODUCTION The science of ultrasound has found usage in all aspects of the medical field, including diagnostic, therapeutic and surgical applications. Medical applications of ultrasonics include both low-intensity and high-intensity applications. Low-intensity ultrasound is of great value as a diagnostic agent and has an excellent safety record, such that in many hospitals consideration is being given to its routine use during pregnancy [1]. Sterilization and cleaning of surgical instruments, preparation of emulsions and preparation of pharmaceutical materials are proven benefits of ultrasonics to clinical practice. The principal characteristics of ultrasonic wave propagation in body tissues are velocity and attenuation. These are the basic factors that determine the effectiveness of both diagnostic and therapeutic applications of ultrasound. Attenuation is an important factor in medical diagnosis, for it can reveal much information concerning the properties of the tissue through which the wave propagates. Although not all mechanisms associated with the results obtained through ultrasonic therapy are known, there are certain possibilities which must be considered in order that
this form of energy may be used most beneficially and harmful effects avoided. The generation of heat is associated with absorption of ultrasound energy, and many physicians think of ultrasound as a means of producing heat within the body for therapeutic purposes [2]. Joint diseases (e.g., osteoarthritis) are common in the middle-aged and older population, resulting in pain and impaired mobility. In particular, osteoarthritis is the breakdown of a joint’s cartilage. Cartilage breakdown causes bones to rub against each other, causing pain and loss of movement. Hands, knees and hips are among the most affected joints. Non-surgical treatments, including thermal therapies, joint protectors and medicaments, help in strengthening the muscles, limiting stress levels in the joint and relieving pain [3]. Knee osteoarthritis may be extremely disabling and often needs surgical treatment. These interventions may involve total knee replacement (Figure 1). Given the relatively short life span of a total knee replacement (generally 12 years) and the limited number of revisions (typically two), this treatment may be unsuitable for young and middle-aged patients.
Figure 1: Total Knee Replacement
Osteotomy is an alternative solution for joint diseases in young and middle-aged patients. Knee osteotomy surgically repositions the joint, realigning the mechanical axis of the limb away from the diseased area. Arthritis can also cause misalignment of the knee (bow-leg). In a surgery process
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 907–910, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
including osteotomy, a resorbable implant (called a wedge) can be inserted in the upper tibia to correct the misalignment of the knee. Alternatively, the surgeon can perform a cut in the bone, wedge the bone open with bone graft, and then fix the newly aligned bone with a plate and screws [4]. Ultrasonic blades can offer an alternative bone-cutting process to conventional saws or drills. By vibrating (thus hammering) at a high frequency (often 20 to 40 kHz), an ultrasonic blade initiates and maintains a controlled, fatigue-induced crack-propagation cutting process [7]. Ultrasonic surgery is based simply on the destructive nature of ultrasound (its ability to rupture tissues) and its ability to promote erosion of hard tissues. Ultrasonic blades have been used for scale removal in dentistry [5] and soft-tissue cutting, such as in retina surgery, cataract removal, skin and muscle cutting, varicose vein removal and cauterization [6]. Ultrasonic osteotomy is not a novel concept, with devices dating back to 1957. However, limitations in tool and transducer design and the lack of suitable methods for tuning and power control restricted early development. In the last fifteen years, after many improvements in transducer design, interest has been renewed in ultrasonic surgical devices. The current challenge for ultrasonic bone cutting resides in the development of a tuned system capable of delivering sufficient acoustic power to cut hard tissue without exceeding temperatures that risk bone necrosis [7]. II. FINITE ELEMENT SIMULATION This work includes numerical analysis to simulate the crack growth due to the ultrasonic-vibration-induced residual stress field at the crack tip, and to predict the stresses acting during the cutting process and subsequently the power requirement necessary to perform a particular operation.
propagation, the crack growth is affected by interfacial boundary conditions, material properties, and loading conditions; thus any crack growth analysis must be based on careful consideration of fracture mechanics parameters such as the crack tip model, stress intensity factors, energy release rate and boundary conditions. Such parameters encapsulate and describe the local effect of the crack on a component [8]. The energy release rate and stress intensity are closely linked. The stress intensity factor describes the magnitude of the elastic stress field at the crack tip:

K = ( E·G / (1 − ν²) )^(1/2)   [1]

where K is the stress intensity factor, ν Poisson's ratio, G the (plane-strain) energy release rate and E Young's modulus. A model for the elastic-plastic finite element simulation in plane stress is presented. The crack growth simulations are based on the plane stress-strain curve of the node point near the crack tip. The displacement near the crack tip, the stress-strain curve and the stress redistribution along the crack plane are investigated. A software implementation using the Crack Tip Opening Displacement method in conjunction with ANSYS is presented. This implementation addresses the generation of cracked 2D meshes and crack growth prediction. The 8-node PLANE82 element type was used with the plane stress option. Nonlinear analysis was performed with an elastic-plastic material model under a static load and the ultrasonic load at 35 kHz. An elastic-perfectly plastic model was used for the material (aluminium) behaviour. Results demonstrate large-scale crack growth under generalized mixed-mode loading and the development of complex 3D crack surfaces, as shown in Figure 2.
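Equation [1] can be evaluated directly. The sketch below is illustrative only: the E, ν and G values are assumed placeholders, not data from the paper.

```python
import math

def stress_intensity_factor(E, G, nu):
    """Stress intensity factor K from the energy release rate G,
    Eq. [1] (plane-strain form): K = sqrt(E * G / (1 - nu**2))."""
    return math.sqrt(E * G / (1.0 - nu ** 2))

# Illustrative aluminium-like values (assumed, not from the paper):
E = 70e9    # Young's modulus, Pa
nu = 0.33   # Poisson's ratio
G = 20e3    # energy release rate, J/m^2
K = stress_intensity_factor(E, G, nu)
print(K / 1e6)  # K in MPa*sqrt(m), roughly 40
```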
In this study, a finite element analysis using ANSYS was conducted to calculate the plane strain and plane stress cases at different applied loading conditions for a 2-D model. Despite its limitations, the numerical analysis provides significant insight into the mechanism of the ultrasonically induced residual stress field. A crack is produced when one part of the material experiences a large permanent plastic deformation while other parts remain elastic. The crack growth continues under constant-amplitude cyclic loadings. During the crack
Figure 2: Crack growth in aluminum cube
III. THE CUTTING SYSTEM
frequency. The core of this circuit is the SWG integrated circuit ICL8038; the summing amplifier U03 provides the variable voltage (9 to 12 V) controlling the frequency, for a range from 25 to 45 kHz. The PWM signal, generated by a Siemens C167CR micro-controller, gives the ON/OFF information to pulse the amplified sine wave.
For the experiments an 800 W ultrasonic blade was used. The global impedance of the piezoelectric cutter, plotted from experimental data, is shown in Figure 3. Resonances occur when the impedance reaches its lowest values (e.g. nearly 50 ohms at 35, 42 and 62 kHz). As the global impedance is influenced by all the modes of vibration, several resonance regions appear; this plot does not identify the mode of each resonance (e.g. longitudinal at 35 kHz). Therefore FE simulation and experimental modal analysis were carried out to characterize the ultrasonic blade in terms of its natural frequencies, damping values and mode shapes. Figure 4 shows the longitudinal vibration mode of resonance at 34.7 kHz.
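The resonance search described above — picking impedance minima from a frequency sweep such as the one in Figure 3 — can be sketched as follows. The sweep data here are synthetic and the 100 Ω threshold is an assumption for illustration, not a value from the paper:

```python
import numpy as np

def resonance_frequencies(freqs_hz, impedance_ohm, max_ohm=100.0):
    """Frequencies where |Z| has a local minimum below max_ohm;
    resonances of the cutter appear as impedance minima."""
    z = np.asarray(impedance_ohm, dtype=float)
    f = np.asarray(freqs_hz, dtype=float)
    i = np.arange(1, len(z) - 1)
    is_min = (z[i] < z[i - 1]) & (z[i] <= z[i + 1]) & (z[i] < max_ohm)
    return f[i[is_min]]

# Synthetic sweep with impedance dips near 35 and 42 kHz (illustrative only):
f = np.linspace(25e3, 50e3, 501)
z = (500.0
     - 450.0 * np.exp(-((f - 35e3) / 400.0) ** 2)
     - 440.0 * np.exp(-((f - 42e3) / 400.0) ** 2))
peaks = resonance_frequencies(f, z)
print(peaks)
```

A mode-shape analysis (FE or experimental) is still needed to tell which of the returned minima is the longitudinal mode.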
As the PWM switch is extremely fast (50 ns), the power circuit would have to deal with a sudden change in the input, generating undesired overheating in the amplifier and inducing a loss of the resonance conditions in the cutter. In order to damp the PWM transient, a second-order low-pass filter interface was added, and the more universal analogue multiplier AD633 replaced the DG308A (the original DG308A was then bypassed by applying a continuous 5 VDC input). A schematic diagram of the pulsed ultrasonic signal generator is shown in Figure 5.
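The pulse-enveloped drive signal — a 35 kHz sine gated by a low-frequency PWM envelope — can be sketched numerically. The sample rate, pulse frequency and duty cycle below are illustrative assumptions, and the hard-edged gate used here is exactly what the second-order low-pass filter in the real circuit is there to soften:

```python
import numpy as np

fs = 1_000_000      # sample rate, Hz (assumed for the sketch)
f_carrier = 35_000  # ultrasonic sine frequency, Hz
f_pulse = 50        # pulse (envelope) frequency, Hz (in the 1-100 Hz region)
duty = 0.5          # PWM duty cycle (assumed)

t = np.arange(0, 0.1, 1 / fs)            # 100 ms of signal
carrier = np.sin(2 * np.pi * f_carrier * t)
envelope = ((t * f_pulse) % 1.0) < duty  # ON during the first half of each pulse period
signal = carrier * envelope              # gated sine sent to the cutter
```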
Figure 5: Schematic diagram of signal generator
Figure 3: The impedance of the ultrasonic cutter
An ultrasonic cutter is designed to work at – or as close as possible to – resonance conditions, in order to maximize the cutting power. In this system, the resonance condition is verified by analyzing the electrical parameters, specifically the amplifier voltage (voltage Va) and the cutter voltage (voltage Vb). The whole transducer circuit is shown in Figure 6.
Figure 6: Complete ultrasonic power circuit
IV. RESULTS
Figure 4: The longitudinal mode of the ultrasonic cutter at 34.7 kHz
The purpose-designed ultrasonic generator includes a sine wave generator (SWG) and a pulse width modulation (PWM) circuit, the ultrasonic sine wave (35 kHz) being sent within the pulses (typically in the region of 1 to 100 Hz). The SWG circuit controls the signal’s amplitude and
Resonance occurs when the cutter impedance reaches a minimum value under no cutting load. As the cutting load increases, the system moves away from resonance and voltage Vb leads voltage Va. The phenomenon is illustrated in Figure 7. The control system therefore has to continuously adjust the
sine frequency as the load conditions change, to keep the system working in resonance. When working with pulse-enveloped signals, the ultrasonic cutter develops a second-order electromechanical transient. The ultrasonic transducer cannot stop vibrating suddenly, and the transformer, resistor and cutter combination behaves as an RLC circuit. The cutter voltage increases during the transient, meaning a higher cutter impedance and so a lower cutter amplitude. Consequently, the cutting process is hindered while the transient lasts. During the off-phase of the pulse, the cutter returns the reactive energy to the circuit, this energy being dissipated in the series resistor. The ultrasonic sine amplitude can be controlled, or modulated, in order to optimize the cutting efficiency, as shown in Figure 8.
Figure 7: Change in resonance conditions with the load (amplitude vs. frequency, no load and with load)
Preliminary tests were carried out using balsa wood, sawbones and animal bones. Although the number of tests carried out is not sufficient for a comprehensive parametric analysis, they demonstrated the advantage of the PWM strategy over the conventional one. The test results show that the quality of cutting is hampered at a pulse frequency of 100 Hz, whereas tests conducted at pulse frequencies between 20 Hz and 50 Hz give good cutting quality. Some tests at 20 Hz gave better results than a continuous sine wave. V. CONCLUSIONS In this paper, frequency characteristics in the cutting of soft tissue were investigated using an ultrasonic blade operated at 20–75 kHz. FE simulations of the cutting performance of the blade have been carried out. A fully operational, controllable ultrasonic power generator has been designed and built to achieve the best quality of cutting. Different modulation strategies, such as PWM and triangle modulation, are discussed and analyzed. This preliminary work has focused on the design of a mechanical interface for ultrasonic cutting. Subsequent experiments will aim to improve the pulse signal strategy, monitor the thermal response occurring when the blade is in contact with the specimen, and explore strategies to minimize the effects of temperature.
REFERENCES
Figure 8: Cutting efficiency
[1] Dyson M (1986) A review of recent experimental evidence on the effects of diagnostic ultrasound on tissue. Physics in Medical Ultrasound
[2] Ensminger D (1988) Ultrasonics: Fundamentals, Technology and Applications
[3] Lucas M, Cardoni A, MacBeth A. Temperature effects in ultrasonic cutting of natural materials. Dept. of Mechanical Engineering, University of Glasgow
[4] Waplington P, Blunt L, Walmsley AD, Lumley PJ (1995) Dental hard tissue cutting characteristics of an ultrasonic drill. Int. J. Mach. Tools Manufacture 35(2):339–343
[5] Ebina K, Hasegawa H, Kanai H. Investigation of frequency characteristics in cutting soft tissue using prototype ultrasonic knives
[6] Ewalds HL, Wanhill RJH. Fracture Mechanics
[7] Toksvig-Larsen S, Ryd L, Lindstrand A. On the problem of heat generation during bone cutting. Journal of Bone and Joint Surgery (Br) 73(1):13–15
Simultaneous Stereo-Optical Navigation of Medical Instruments for Brachytherapy
K. Berthold1,2, D. Richter1, F. Schneider1 and G. Straßmann2
1 University of Applied Sciences, Department DCSM, Wiesbaden, Germany
2 University Hospital of Marburg, Medical Center for Radiology, Marburg, Germany
Abstract — The current procedure of positioning needles for irradiation of prostate cancer by interstitial brachytherapy implies some constraints due to the usage of a template grid. To extend the possibilities available to the radiologist, an infrared-based stereo-optical 3D navigation system is under development, which is able to track devices simultaneously. Infrared light emitting diodes (LEDs) are attached to a transrectal ultrasound probe and a needle tracking device. The probe is calibrated with respect to the position of the image plane by using a 3D calibration pattern. To determine the position and orientation of the tracking devices, the correspondences of the spots in both camera frames have to be computed. The 3D positions of the LEDs are then calculated, assuming that the nearest approaches of the lines of sight give the correct pairs of combinations of the LED spots. The position and orientation of each tracking device is then computed using a closed-form algorithm. With the calibration parameters, the position of the image plane and the needle path may be calculated. The intersection point between the needle path and the displayed ultrasound image is calculated and visualized. The localization of the tracking devices is achieved with an accuracy of ± 1 mm in a volume that is large enough for use in brachytherapy for prostate cancer. First measurements have shown that a better accuracy of the needle position relative to the ultrasound image is needed. Therefore the form and the thickness of the ultrasound image plane, which have until now been assumed to be zero, have to be taken into account. Keywords — brachytherapy, ultrasound, position detection, 3D navigation
I. INTRODUCTION The positioning of needles for intra-corporal irradiation of prostate cancer by interstitial brachytherapy is currently done by using a template grid placed in front of the perineum. The grid holes have distances of 5 mm in both spatial directions. The procedure implies inaccuracies of up to 5 mm with respect to the prostate position and constrains the needle application to parallel paths. Thus, some parts of the prostate might not be reachable due to anatomical conditions. To avoid these constraints and to improve positioning accuracy, an infrared-based (IR) stereo-optical 3D navigation system was developed to simultaneously track an ultrasound probe and the needle as well. Omitting the template grid, oblique needle paths may be applied
under control of ultrasound images with online visualization of the needle paths. II. METHODS For the 3D navigation two video cameras are used with a resolution of 1024×768 pixels, a frame rate of 30 images per second, IR long-pass filters with a cut-off wavelength of 830 nm mounted in front of the lenses, and a FireWire interface to the computer system. The stereo cameras are calibrated by using a single-plane method [1, 2]. They are mounted about 1.4 meters above the region of interest. Four infrared light emitting diodes (LEDs) with a wavelength of 895 nm are attached to a transrectal ultrasound (TRUS) probe for a 6-DoF navigation. This can be seen in Fig. 1. The probe is calibrated with respect to the position of the image plane by using a 3D calibration pattern in a water bath [3, 4, 5]. The TRUS probe is connected to an ultrasound scanner which has an analogue video output signal. This signal is digitized by a frame grabber board with a resolution of 768x756 pixels. A tracking device was built to track the position of the needle, with two LEDs mounted along the axis of the needle for a 5-DoF navigation. It is shown in Fig. 2. The construction enables detaching the tracking device after needle insertion into the tissue. To simultaneously track the two tracking devices it has to be ensured that the distances between the LEDs of both the ultrasound probe and the needle tracking device are unambiguous. The mutual distances define the geometry model of each tracking device. Due to the filters mounted in front of the lenses of the cameras, only the LEDs are visible in the images of the two cameras. They are represented as bright spots with a diameter of about 10 pixels. To determine the position and orientation of the tracking devices simultaneously, the unknown correspondences of the spots in both camera frames have to be computed for all possible combinations.
To reconstruct the 3D position of each corresponding pair of spots the local positions of the nearest approaches of the assumed lines of sight are calculated. By using a threshold for the distances between the lines of sight, invalid combinations of correspondences of spots are eliminated.
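The nearest-approach test can be sketched as follows: for two camera rays, compute the closest points on each line and their mutual distance. A small distance marks a plausible LED correspondence, and the midpoint is taken as the reconstructed 3D position. This is a generic sketch of the geometric step, not the authors' implementation:

```python
import numpy as np

def nearest_approach(p1, d1, p2, d2):
    """Closest points of lines p1 + t*d1 and p2 + s*d2; returns the
    midpoint (reconstructed 3D point) and the line-to-line distance."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    b = d1 @ d2
    denom = 1.0 - b * b          # d1, d2 are unit vectors
    if denom < 1e-12:            # parallel rays: no unique solution
        return None, np.inf
    t = (b * (d2 @ w) - (d1 @ w)) / denom
    s = ((d2 @ w) - b * (d1 @ w)) / denom
    q1, q2 = p1 + t * d1, p2 + s * d2
    return (q1 + q2) / 2.0, np.linalg.norm(q1 - q2)

# Two rays that intersect at (0, 0, 1):
mid, dist = nearest_approach(np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 1.0]),
                             np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 1.0]))
```

Thresholding `dist` is what eliminates the invalid spot pairings.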
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 911–913, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Fig. 1 Simultaneous usage of ultrasound probe and needle inserted into a prostate phantom (left image). Corresponding ultrasound image; the intersection point of the needle is the white spot in the lower-left part of the image (right image).
Due to occlusions, not all six LEDs might be visible to one or both of the cameras. Only those combinations are used which define at least five valid correspondences of spots, because at least three are needed for the ultrasound probe and two for the needle tracking device. The distances between the computed LED positions are compared to those of the geometry models of the tracking devices defined before. Once it is known which LED position belongs to which tracking device, the position and orientation of each tracking device is computed using a closed-form algorithm [6]. With the calibration parameters of the ultrasound probe the position of the image plane may be calculated. The relative position of the needle with respect to its tracking device is known from the construction data. Therefore the position of the needle path may be calculated as well. A program was developed which enables recording and displaying a data set of ultrasound images, each with its position and orientation. The data of the paths of the inserted needles may be stored as well for irradiation preplanning purposes. The intersection points between the current needle path or the stored needle paths and the displayed ultrasound image plane are calculated. These intersection points are visualized in the displayed image [7].
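The closed-form pose step can be sketched with the SVD (Kabsch) formulation; the paper cites Horn's quaternion solution [6], which yields the same rotation, and the 4-LED geometry below is a hypothetical example, not the real probe geometry:

```python
import numpy as np

def absolute_orientation(model, measured):
    """Rigid transform (R, t) with measured ~ R @ model + t.
    model, measured: (N, 3) arrays of corresponding LED positions.
    SVD-based equivalent of Horn's closed-form quaternion solution."""
    mc, dc = model.mean(axis=0), measured.mean(axis=0)
    H = (model - mc).T @ (measured - dc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                   # proper rotation, det = +1
    t = dc - R @ mc
    return R, t

# Hypothetical 4-LED geometry model (mm) and a rotated/translated measurement:
model = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0],
                  [0.0, 20.0, 0.0], [0.0, 0.0, 10.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
measured = model @ R_true.T + np.array([5.0, -2.0, 100.0])
R, t = absolute_orientation(model, measured)
```

Applying the recovered R and t to the geometry model reproduces the measured LED positions.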
III. RESULTS Different system parameters were evaluated to determine the accuracy of the method. One parameter is the accuracy of localizing the LEDs with the stereo-optical system. In a defined area of ± 100 mm in each of the x and y directions and ± 55 mm in the z direction, the 3D positions of the LEDs are reconstructed with an accuracy of ± 1 mm [8]. Another parameter is the accuracy of the calibration of the ultrasound probe, whose preciseness has therefore been measured. Three angles denote the rotation of the ultrasound image plane with respect to the tracking device along the centreline axis of the ultrasound probe. The results show that the standard deviations of the repeat accuracy for the angles of the rotation matrix are 1.05°, 0.31° and 2.41° [5]. To evaluate the overall accuracy of the complete system, i.e. the needle position relative to the ultrasound image, measurements were carried out. The cameras, the tracking devices and the ultrasound probe were calibrated as described in the methods section. Then the ultrasound probe and the needle, attached to the needle device, were inserted into a water bath. The position and orientation of each
Fig. 2 The needle tracking device with two infrared LEDs. In the background a section of the camera calibration pattern is shown.
device were determined by the system. The intersection point of the needle and the ultrasound plane was calculated with the help of the position information and the calibration data of the ultrasound probe. It was then compared to the coordinates of the point of the needle which appeared in the ultrasound image. This was done several times with each set of calibration parameters of the ultrasound probe, which was calibrated two times. The mean value of the results is 3.84 mm with a standard deviation of 1.88 mm. IV. DISCUSSION
The postulated accuracy for medical application of ± 1 mm to localize the LEDs of the tracking devices was achieved in a volume that is large enough for the use in interstitial transperineal brachytherapy for prostate cancer. The results for the determination of the position of the pixels of the grabbed ultrasound image in reference coordinates show that the values differ from the needle path in a range of about 2 mm – 5.5 mm. Several different conditions influence this value. Among these, the ones with the greatest impact are probably the leverage effect at the needle device (a detection error of the LEDs leads to a position error of the needle tip which is 3.5 times greater) and the form and thickness of the ultrasound plane.
V. CONCLUSIONS The system described above allows tracking two devices simultaneously. Due to inaccuracies mainly related to the character of the ultrasound images, further improvements are needed before using it in e.g. interstitial brachytherapy of the prostate carcinoma. The thickness of the ultrasound image has to be taken into account in the model of the system. This is one of the next steps for further investigations. In the future it will be possible to insert needles into the prostate along oblique paths. The virtual needle path is then visualized in the ultrasound image before actually inserting the needle under the supervision of the radiologist.
REFERENCES
1. Tsai RY (1986) An efficient and accurate camera calibration technique for 3D machine vision. Proc. of IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach FL, USA, 1986, pp 364–374
2. Posch S (1990) Automatische Tiefenbestimmung aus Grauwertstereobildern. Deutscher Universitäts-Verlag, Wiesbaden
3. Pagoulatos N, Haynor DR, Kim Y (2001) A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer. Ultrasound in Med. & Biol. 27:1219–1229
4. Bouchet LG, Meeks SL, Goodchild G, Bova FJ, Buatti JM, Friedman WA (2001) Calibration of three-dimensional ultrasound images for image-guided radiation therapy. Phys. Med. Biol. 46:559–577
5. Richter D, Voß H, Berthold K, Straßmann G (2005) Calibration and navigation of a transrectal ultrasound probe for prostate cancer therapy. IFMBE Proc. vol. 11, European Med. & Bio. Eng. Conference, Prague, Czech Republic, 2005
6. Horn BKP (1987) Closed-form solution of absolute orientation using unit quaternions. J. Opt. Soc. Am. A 4:629–642
7. Schneider F (2006) Simultane 3D-Navigation unabhängiger IR-markierter Tracker in der Brachytherapie. Diploma thesis, Department of Computer Science, University of Applied Sciences of Wiesbaden
8. Voß H (2005) Development of a stereo-optical 3-D navigation system for ultrasound-guided brachytherapy for prostate carcinoma. Diploma thesis, Dep. of Computer Science, University of Applied Sciences of Wiesbaden
Author: Karsten Berthold
Institute: DCSM, FH Wiesbaden
Street: Kurt-Schumacher-Ring 18
City: 65197 Wiesbaden
Country: Germany
Email:
[email protected] _________________________________________________________________
The impact of electrosurgical heat on optical force feedback sensors
J.A.C. Heijmans1, M.P.H. Vleugels2,3, E. Tabak1, T.v.d. Dool1, M.P. Oderwald1
1 TNO Science and Industry/Advanced Precision and Production Equipment, Delft, Netherlands
2 EFI BV, Maastricht, Netherlands
3 Rivierenland Hospital, Netherlands
Abstract — Electrosurgery enables cutting and coagulation (desiccation) of tissue in Minimally Invasive Surgery. Measurements performed by TNO with an infrared camera showed that the forceps of an endoscopic instrument can exceed 300°C. During electrosurgery the surgeon relies on the power control and endoscopic images to perform the procedure successfully. Manipulation of tissue with present forceps does not give accurate tissue information due to friction in the transmission mechanism; in turn, the force control of the operating surgeon is poor. The latest instruments incorporate sensors and actuators that enable better control of the force applied to the tissue and give the surgeon a better feeling of the tissue. TNO works for EFI BV on the development of a surgical instrument that senses and controls the gripping force even during electrosurgery. Electric sensors and actuators experience electromagnetic interference (EMI) during electrosurgery, making it impossible to control the force. The high temperature that arises at the forceps influences, and can destroy, sensors positioned near the heat source. Based on its experience with optical fiber sensors, TNO has developed an instrument that is immune to EMI and withstands temperatures up to 200°C. The optical sensor is based on a Fiber Bragg Grating (FBG). The FBG read-out system, named the interrogator, derives the local mechanical strain from the optical fiber signal. In this way the force exerted on the tissue, and its resistance, can be measured. However, this sensor system is also sensitive to temperature changes. To control the gripping force accurately, the measurement must be independent of temperature. Therefore the thermal load at the forceps was measured and analyzed. The results are used for the instrument design and the location of the sensors.
Keywords — Electrosurgery, coagulation, Minimally Invasive Surgery, Fiber Bragg Grating, optical force sensor
instruments has started. The majority of these instruments are based on existing instruments used in conventional surgical procedures. These evolved into devices which are difficult to handle and which lack force control. Moreover, information on the force created by the tissue resistance at the tip of the instrument is absent. These disadvantages, added to the poor visualization of the area of operation, decrease the usability of the instruments and hamper the introduction of minimally invasive surgery into areas with higher demands on accuracy. By introducing novel technologies from the field of robotics into the instruments, the mechanical disadvantages of the current instruments can be tackled. Innovation in medical technology is rather difficult due to the harsh environment in which the instrument must operate. However, thorough understanding of the medical problem and the issues related to human interfacing is just as important [1]. The cooperation of the medical company EFI BV with TNO Science and Industry covers both the medical and the technical field. The concept for this next generation of surgical instrumentation comes from EFI BV (Endoscopic Force-reflecting Instrument). The control issues related to haptic feedback, and the mechatronic and fiber optic (FO) issues, are all dealt with by TNO. II. SYSTEM OVERVIEW A. Instrument overview Figure 1 shows the design of an endoscopic instrument for surgery with haptic feedback. It consists of forceps for gripping tissue, a shaft with a rod inside for moving the
I. INTRODUCTION Minimally Invasive Surgery is performed with instrumentation that enables surgery through small incisions. This type of surgery enables faster recovery of the patient and reduces the chance of postoperative complications. The surgeon, however, sacrifices perception and ergonomics. With the introduction of minimally invasive surgical procedures, the development of a new range of surgical
Fig. 1 Conceptual design of the endoscopic instrument
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 914–917, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
forceps, and a handle that moves the rod relative to the shaft. This instrument is connected through a cable to an external controller (not shown). A regular handle is shaped like a simple pair of scissors. This handle has been replaced by a type that houses two actuators, one for actuating the forceps and one for actuating the trigger. With the present regular instruments for Minimally Invasive Surgery (MIS) the surgeon has little feeling of the tissue which he is gripping, due to friction in the handle, the rod inside the shaft, and hinges in the forceps. The newly developed forceps has fiber optic FBG sensors that accurately measure the gripping force. This force is relayed to the controller, which in turn sends signals to the actuators in the handle such that the desired force is exerted on the tissue while an exactly scaled force is exerted on the trigger at the same time, which enables the surgeon to feel the tissue characteristics. The force can also be limited automatically to safe values that prevent tissue damage. The measuring range of the clamping force is up to 20 N with a resolution of 10 mN. A further advantage of a “wired” MIS instrument is that it allows for independent scaling of force and displacement. This enables highly accurate operation on delicate tissue, as is the case to an extreme degree in neuro- and eye surgery. One of the challenges for the endoscopic instrument is the need to perform electrosurgery. This requires the instrument to accurately measure the force while the forceps is charged with electric signals at frequencies between 350 kHz and 3.3 MHz. Besides the EMC requirements, the instrument must be reusable and therefore allow demounting, cleaning and sterilization by autoclaving.
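The force handling described above — clamping the commanded force to a safe value and returning a scaled reaction force to the trigger — reduces to a few lines. The scale factor and limit below are illustrative assumptions, not the instrument's actual parameters:

```python
def trigger_feedback(measured_force_n, scale=0.5, safe_limit_n=15.0):
    """Clamp the commanded gripping force to a safe limit and return
    the scaled reaction force to reproduce on the trigger (sketch;
    all numeric values are assumed)."""
    commanded = min(measured_force_n, safe_limit_n)  # protect tissue
    trigger_force = scale * commanded                # scaled haptic feedback
    return commanded, trigger_force

print(trigger_feedback(18.0))  # exceeds the limit -> (15.0, 7.5)
print(trigger_feedback(4.0))   # within the limit  -> (4.0, 2.0)
```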
B. Fiber optic sensor, FBG

Fiber optic (FO) sensors are a relatively new type of sensor and their application fields are growing rapidly. Medical technology is one of these applications; here fiber optics is very attractive due to its insensitivity to electromagnetic fields, its small dimensions and its intrinsic electrical safety [2]. The FBG sensor in this application is used for force measurement at the distal end of a surgical instrument. Its function is very similar to that of an electrical strain gauge, with the difference that no electrical signal comes from the sensor. A grating situated in the fiber reflects a small band of wavelengths from the light that is sent into the fiber. A change in the grating results in a change of the reflected wavelength; this is in general caused by a length change of the sensor (strain) or by a temperature change. The sensitivity of an FBG sensor, expressed as wavelength shift, is typically 10 pm per Kelvin and 1.2 pm per μstrain (1 μstrain = 1×10^-6 [-]).

Multiple sensors can be used inside one fiber, known as multiplexing. The length of the grating is typically 1 to 10 mm. The diameter of the sensor is generally between 10 μm and 100 μm, corresponding to the fiber core and cladding respectively. The signal can be monitored with an optical instrument called an interrogator, which analyses the signal with interferometry or spectroscopy. To create a realistic haptic feedback system, an interrogator with a bandwidth of 19 kHz is used; such a relatively high bandwidth is commercially available nowadays [3].
III. EXPERIMENT

As the FO FBG sensor is sensitive to temperature changes, an experiment has been performed to determine the thermal effects of electrosurgery. In order to obtain the required force measurement accuracy, the temperature must be known to within 2°C. Even without electrosurgery the temperature varies from 20°C to 37°C, clearly indicating the need for temperature compensation. Therefore a second FBG is used that measures only the temperature at the forceps. This requires, however, that the temperature distribution in the forceps is sufficiently homogeneous. Since the aim is that the instrument can be used at all times, even during electrosurgery, experiments have been carried out to measure the thermal effects during electrosurgery. In figure 2 a schematic representation of the experimental setup is given. The Valleylab Force 30 (1) is used for both electrical cutting and desiccation. A handheld device (2) is used to initiate the electrosurgery. The device is electrically
Fig. 2 Schematic representation of the test setup
J.A.C. Heijmans, M.P.H. Vleugels, E. Tabak, T.v.d. Dool, M.P. Oderwald
connected (3) to the endoscopic instrument (4). Thermocouples (not depicted), glued to the forceps of the instrument, are monitored with a multimeter (5). A piece of chicken meat (6), connected to the return electrode (7), is used to simulate human tissue. When electrosurgery is performed, heat is generated at the contact area of the forceps with the tissue. Figure 3 depicts the actual setup. The temperature distribution is measured using an infrared camera (7), placed just above the tissue. First, the thermocouples (8) are calibrated offline; the results from the camera are then calibrated using the thermocouples.
IV. RESULTS

Measurements show that cutting results in slightly higher temperatures than desiccation. Figure 4 depicts a typical temperature distribution during cutting. Maximum temperatures of 160°C are measured at the tip of the forceps.
Fig. 4 Temperature distribution in the forceps and tissue during cutting with representative settings
The result of the second experiment (increased power) is depicted in figure 5. The maximum temperature at the tip is approximately 300°C.

Fig. 3 The actual test setup, in which the temperature distribution is measured using an infrared camera (7). A detail of the forceps and thermocouples (8) is depicted at the corner, far right
In order for the experiments to be representative, the measurement procedure was formulated in consultation with a surgeon (M. Vleugels). Both (monopolar) cutting and desiccation are performed. Cutting is done by applying a cut of approximately 3 to 4 centimeters; next, a waiting period of 3 seconds is introduced and then another cut is made. In the measurements four cuts are made. The power used for desiccation and cutting is depicted in figure 2. In a second experiment a high load situation is examined: the power is increased by a factor of 2 and is applied continuously for 20 instead of 3 seconds. It must be noted that the heat load depends on the power, the contact area between tissue and forceps, and the amount of moisture in the tissue. A third experiment is carried out with an FBG sensor fixed to the forceps. When cutting and desiccation are applied to this sample, the FBG signal is read out simultaneously. The purpose is to show the thermal effects experimentally and to verify that the sensor and interrogator are indeed insensitive to electromagnetic fields.
Fig. 5 Temperature distribution in the forceps and tissue during cutting with increased heat load
The result of the third experiment is shown in figure 6, which depicts the response of the FBG sensor to the heat that arises from electrosurgery. After nine seconds the forceps cuts into the tissue for three seconds; this is repeated four times. The FBG signal is clearly affected and therefore a
simultaneous temperature measurement must be taken to decouple the clamping force from the rise in temperature. Note that the signal from the FBG sensor is not affected by electromagnetic interference but is only sensitive to temperature changes.
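The temperature decoupling just described can be sketched as follows. The sensitivity values (10 pm/K, 1.2 pm/μstrain) are quoted earlier in the paper; the assumption that both FBGs share the same temperature sensitivity, and the function names, are mine:

```python
# Hypothetical sketch: subtract the thermal wavelength shift measured by the
# second, strain-free FBG from the shift of the force-sensing FBG.

K_STRAIN_PM_PER_USTRAIN = 1.2  # ~1.2 pm per microstrain (from the text)
K_TEMP_PM_PER_K = 10.0         # ~10 pm per Kelvin (from the text)

def compensated_strain_ustrain(d_lambda_force_pm, d_lambda_temp_pm):
    """Microstrain after removing the thermal contribution, assuming both
    FBGs see the same temperature and have equal temperature sensitivity."""
    return (d_lambda_force_pm - d_lambda_temp_pm) / K_STRAIN_PM_PER_USTRAIN

def temperature_rise_K(d_lambda_temp_pm):
    """Temperature rise seen by the temperature-only FBG."""
    return d_lambda_temp_pm / K_TEMP_PM_PER_K
```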
Fig. 6 Optical signal due to heating when performing electrosurgery

Finally, a study of different sensor fixations has been performed. Results from this study showed the impact of the geometry and materials on the reproducibility of the sensor signal over time and after autoclavation. This has led to the design of a stable fixation with a reproducibility error of less than 0.9% after 10 cycles of cleaning and autoclavation.

V. CONCLUSIONS

The realized demonstrator has shown the feasibility of force sensing during electrosurgery with a fiber optic sensor. Measuring the clamping force in the forceps with an optical strain sensor requires that the temperature be taken into account. Tests have shown that during electrosurgery temperatures can rise up to 160°C, and up to 300°C in high load situations. This heat load at the forceps causes a significant deviation in the strain readout signal of the FBG sensor. Therefore the temperature must be measured separately and its effect subtracted from the force measurement. Correct fixation of the FBG sensor is essential to make the characterization reproducible. The cooperation between the medical company EFI BV and the research institute TNO has been vital for the successful development of this high-tech surgical instrument.

VI. FUTURE DEVELOPMENTS

The FBG sensors, including the adhesive used to fix them, need to withstand the high mechanical, chemical, and thermal loads due to coagulation, cleaning and sterilization. Tests will therefore be conducted with washing (100 cycles) and sterilization (1000 cycles) to see if any degradation occurs. A first prototype of this MIS instrument is being realized. This instrument will be used for experimental surgery on cadavers to verify the advantages of such an instrument in practice. With the experience and knowledge acquired, a MIS product based on this technology will be realized.

REFERENCES

1. Wieringa F, Poley M, Dumay A et al. (2007) Avoiding pitfalls in the road from idea to certified product (and the harsh clinical environment thereafter) when innovating medical devices. 7th Belgian Day on Biomedical Engineering, Brussels, Belgium, 2007
2. Heijmans J, Cheng L, Wieringa F (2008) Optical fiber sensors for medical applications. IFMBE Proc., 4th European Congress on Med. & Biomed. Eng., Antwerp, Belgium, 2008
3. Cheng L, Groote Schaarsberg J, van Osnabrugge J et al. (2001) Novel Fiber Bragg Grating sensor system for high-speed structure monitoring. 3rd Int. Workshop on Struct. Health Mon., Stanford, 2001

Author: J.A.C. Heijmans
Institute: TNO Science and Industry
Street: Stieltjesweg 1
City: Delft
Country: Netherlands
Email: [email protected]
Classification and Data Mining for Hysteroscopy Imaging in Gynaecology

M.S. Neofytou1, A. Loizou1, V. Tanos2, M.S. Pattichis3, C.S. Pattichis1

1 Department of Computer Science, University of Cyprus, Nicosia, Cyprus
2 Aretaeion Hospital, Nicosia, Cyprus
3 Department of Electrical and Computer Engineering, University of New Mexico, NM, USA
Abstract—The objective of this study was to develop a CAD system for the classification of hysteroscopy images of the endometrium (with suspicious areas of cancer), based on two data mining procedures, the C4.5 and the Hybrid Decision Tree (HDT) algorithms. Twenty-six texture features were extracted with three texture feature algorithms: (i) Statistical Features (SF), (ii) Spatial Gray Level Dependence Matrices (SGLDM), and (iii) Gray Level Difference Statistics (GLDS). A total of 404 ROIs of the endometrium in RGB format were recorded (202 normal and 202 abnormal) from 40 subjects. Images were gamma corrected and converted to grey scale and to the HSV and YCrCb systems. Results show that abnormal ROIs had lower grey scale median and homogeneity values, and higher entropy and contrast values, when compared to the normal ROIs. The maximum average correct classification score was 72,2% and was achieved using the HDT algorithm with 26 texture features, for the Y channel. Similar performance was achieved with both the HDT and the C4.5 algorithms when trained with the YCrCb texture features. Although similar performance was also achieved using the SVM and PNN models, the decision tree algorithms investigated also facilitated rule extraction and the use of rules for classification. These models can help the physician especially in the assessment of difficult cases of gynaecological cancer. However, more cases have to be collected and analysed before the proposed CAD system can be exploited in clinical practice.

Keywords — Hysteroscopy imaging, gynaecological cancer, texture analysis, data mining, decision tree algorithms, classification, endometrium.
identifying patients with a low risk factor, when the operation is usually prophylactic [4]. The objective of this study was to apply data mining analysis using texture features for the automated classification of suspicious ROIs for gynaecological cancer. It is hoped that the proposed system will increase the diagnostic accuracy of the physician during a hysteroscopy examination for difficult cases of gynaecological cancer. Previous work carried out by our group documented the use of a standardized protocol for the preprocessing of hysteroscopy images based on gamma color correction [2]. Both grey level [3] and color [5] texture analysis was shown to differentiate between normal and abnormal ROIs of hysteroscopy images of the endometrium. Gray-level texture analysis is widely used in numerous image processing and analysis tasks [5]. New studies exploiting the usefulness of color texture have been presented by several researchers [6, 7]. In laryngoscopic imaging [6], suspect lesions were analyzed automatically using co-occurrence matrices with color differences between neighbouring pixels. A novel methodology for the extraction of color image features in colonoscopic video processing for the detection of colorectal polyps was developed in [7], utilizing the covariances of second-order statistical measures calculated over the wavelet transformation of different color bands. The rest of the paper is organized into the following sections: in sections II, III and IV we present the methodology, results and concluding remarks respectively.
I. INTRODUCTION

In laparoscopic/hysteroscopic imaging, the physician guides the telescope inside the uterine or abdominal cavity, investigating the internal anatomy in search of suspicious, cancerous lesions [1]. During the exam, the experience of the physician plays a significant role in identifying suspicious regions of interest (ROIs); in some cases important ROIs might be ignored and crucial information neglected [2]. The analysis of endoscopic imaging is usually carried out visually and qualitatively [3], based on the subjective expertise of the endoscopist. In terms of impact, laparoscopic/hysteroscopic procedures are especially significant in
II. METHODOLOGY

A. Video Recording

The CIRCON IP4.1 [8] camera was used. The analog output signal of the camera (PAL, 475 horizontal lines) was digitized at 720x576 pixels using 24-bit color at 25 frames per second, and was then saved in the AVI format [9].

B. Material

A total of 404 RGB hysteroscopy images of the endometrium were recorded from 40 subjects. ROIs of 64x64
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 918–922, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
pixels were manually cropped and classified into two categories: (i) normal ROIs (N=202) and (ii) abnormal ROIs (N=202), based on the physician's subjective criteria and the histopathological examination (see Fig. 1). All images were color corrected for camera sensor variations based on an extended gamma algorithm [2]. Furthermore, all ROIs were transformed to grey scale, HSV, and YCrCb.

Fig. 1 Endometrium images in (left to right) RGB, Y, Cr, and Cb space. Dotted blue and solid red boxes represent normal and abnormal ROIs respectively.

C. Feature Extraction

The following texture features were extracted for the grey scale image and for the RGB, HSV and YCrCb systems and their channels.

Statistical Features (SF): 1) Mean, 2) Variance, 3) Median, 4) Mode, 5) Skewness, 6) Kurtosis, 7) Energy and 8) Entropy.

Spatial Gray Level Dependence Matrices (SGLDM): The spatial gray level dependence matrices as proposed by Haralick et al. [12] are based on the estimation of the second-order joint conditional probability density functions that two pixels (k, l) and (m, n) with distance d in the direction specified by the angle θ have intensities of gray level i and gray level j. Based on the estimated probability density functions, the following 13 texture measures proposed by Haralick et al. were computed: 1) ASM, 2) Contrast, 3) Correlation, 4) Variance, 5) Homogeneity, 6) Sum Average, 7) Sum Variance, 8) Entropy, 9) Sum Entropy, 10) Dif. Variance, 11) Dif. Entropy, 12) Inf. Correlation1, and 13) Inf. Correlation2.

Gray Level Difference Statistics (GLDS): The GLDS algorithm is based on the assumption that useful texture information can be extracted using first order statistics of an image. The algorithm is based on the estimation of the probability density of image pixel pairs at a given displacement having a certain absolute gray level difference value. Coarse texture images result in low gray level difference values, whereas fine texture images result in interpixel gray level differences with large variances. The following features were computed: 1) Homogeneity, 2) Contrast, 3) Energy, 4) Entropy and 5) Mean.

D. Decision Tree Algorithms and ROI Classification

The diagnostic performance of the texture features was evaluated with two different classifiers: the C4.5 algorithm [13, 14] and the Hybrid Decision Tree (HDT) algorithm [15]. These classifiers were trained to classify the texture features into two classes: i) normal ROIs or ii) abnormal ROIs.
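The GLDS features described in section II.C can be sketched with NumPy as follows. This is a generic implementation of gray level difference statistics, not the authors' code; exact definitions of the individual measures vary somewhat in the literature:

```python
import numpy as np

def glds_features(img, dx=1, dy=0, levels=256):
    """Gray level difference statistics for displacement (dx, dy):
    build the histogram of absolute gray level differences, then
    compute homogeneity, contrast, energy, entropy and mean."""
    img = np.asarray(img, dtype=int)
    # Pairs of pixels separated by the displacement vector
    a = img[: img.shape[0] - dy, : img.shape[1] - dx]
    b = img[dy:, dx:]
    diff = np.abs(a - b).ravel()
    # Probability density of absolute differences
    p = np.bincount(diff, minlength=levels).astype(float)
    p /= p.sum()
    d = np.arange(levels)
    nz = p > 0
    return {
        "homogeneity": float(np.sum(p / (1.0 + d))),
        "contrast": float(np.sum(d ** 2 * p)),
        "energy": float(np.sum(p ** 2)),
        "entropy": float(-np.sum(p[nz] * np.log2(p[nz]))),
        "mean": float(np.sum(d * p)),
    }
```

A perfectly uniform image yields zero contrast and entropy and energy 1, matching the intuition above that coarse textures give low difference values.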
C4.5 algorithm

The method used in C4.5 [13, 14] for the construction of a decision tree is known as divide and conquer: the tree is generated from a set of training cases by successively dividing the set until all the subsets into which it is partitioned consist of cases belonging to a single class. This decision tree construction method is a non-backtracking, greedy algorithm. Once a test has been selected to partition the current set of training cases, usually on the basis of maximizing some local measure of progress, the choice is final and the consequences of alternative choices are not explored; the only information available for guidance is the distribution of classes in the set of training cases and its subsets. C4.5 can re-express the classification model as production rules induced from the already built decision tree, a format that appears to be more intelligible than trees.

Hybrid decision tree algorithm

The Hybrid Decision Tree algorithm [15] is a hybrid based on Quinlan's ID3 and the Classification and Regression Trees (CART) algorithms. It supports both classification and regression and works well for predictive modeling. In building a model, the algorithm examines how each input attribute in the dataset affects the result of the predicted attribute, and then uses the input attributes with the strongest relationship to create a series of splits called nodes, using an entropy metric (Ent) or Bayesian metrics (K2 prior, or the Dirichlet Equivalent method with Uniform prior). As new nodes are added to the model, a tree structure begins to form. The top node of the tree describes the breakdown of the predicted attribute over the overall population. Each additional node is created based on the distribution of states of the predicted attribute as compared to the input attributes.
If an input attribute is seen to cause the predicted attribute to favor one state over another, a new node is added to the model. The model continues to grow until none of the remaining attributes create a split that provides an improved prediction over the existing node. The model seeks a combination of attributes and their states that creates a disproportionate distribution of states in the predicted attribute, thereby allowing the outcome of the predicted attribute to be predicted.
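The entropy-based split selection that both tree algorithms rely on can be sketched as follows. This is a simplified illustration (C4.5 actually maximizes the gain ratio rather than plain information gain; function names are mine):

```python
import math
from collections import Counter

def entropy(labels):
    """Class entropy of a set of training cases, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Reduction in class entropy obtained by splitting `labels`
    into the subsets `groups` (each a list of labels)."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - remainder
```

A split that perfectly separates normal from abnormal ROIs achieves the maximum gain of 1 bit for a balanced two-class set, which is why the greedy algorithm prefers it.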
Table 2 tabulates the performance of the C4.5 and HDT classification models for training (TR) and evaluation (EV), based on texture features from hysteroscopy images. A total of 36 models are given for the classification of normal and abnormal ROIs of the endometrium, based on the texture features of the grey level image and of the RGB, HSV, and YCrCb colour systems. The performance measures are given as the average of 10 runs per model. The maximum %CC in the training procedure was 88,8% and was achieved using the C4.5 algorithm in the YCrCb system (model no. 33). The maximum %CC for the evaluation procedure was 72,2% and was achieved using the HDT algorithm for the Y channel (models no. 26-28). Very close performance (%CC = 72,1%) was achieved for the YCrCb HDT models (no. 34-36). Overall, similar performance was obtained for both the C4.5 and HDT algorithms. Tables 3 and 4 tabulate the rules extracted with the C4.5 algorithm for the classification of normal and abnormal ROIs, for model Y(1) with %CC=78,7 and for model YCrCb with %CC=77,2, respectively. These findings can be compared to previous results using the SVM and PNN classification algorithms, the same texture feature sets, and a similar data set; the highest %CC for these models was 79%, achieved with the combination of the SF+GLDS feature sets in the YCrCb system [5].
E. Training and Evaluation Procedure

Training and evaluation of the classifiers were done in grey level and in all color systems and their channels. For runs carried out for each channel separately, the SF, SGLDM, and GLDS texture features were used (a total of 26); similarly, for each color system 3x26 (a total of 78) texture features were used. For each channel or color system, a total of 10 runs were carried out and the average performance was computed. Training was carried out on 202 images (101 normal and 101 abnormal ROIs) and the performance of the classifiers was evaluated on the remaining 202 images (101 normal and 101 abnormal ROIs). The performance of the classifier systems was measured using the measures of the receiver operating characteristic (ROC) curve: true positives (TP), false positives (FP), false negatives (FN), true negatives (TN), positive predictive value (PPV), negative predictive value (NPV), sensitivity (SE), specificity (SP), and precision (PR). We also computed the percentage of correct classifications (%CC) based on the correctly and incorrectly classified cases.

III. RESULTS

Results show that abnormal ROIs had lower grey scale median and homogeneity values, and higher entropy and contrast values, when compared to the normal ROIs. Table 1 tabulates the texture characteristics of normal vs abnormal ROIs as obtained by interpretation of the texture feature values.

Table 1 Texture characteristics of normal vs abnormal ROIs of the endometrium as obtained by interpretation of the texture feature values

Parameter     Normal         Abnormal
Gray level    High           Slightly darker
Variance      Low            Very high
Contrast      Low            High
Homogeneity   Normal range   Slightly lower
Entropy       Normal range   Slightly higher
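The ROC-style measures used to evaluate the classifiers (section II.E) can be computed from the confusion matrix counts; a minimal helper (a sketch, with my function name; precision equals PPV for a binary confusion matrix):

```python
def performance_measures(tp, fp, fn, tn):
    """Percent correct classification, sensitivity, specificity,
    positive and negative predictive value from TP/FP/FN/TN counts."""
    total = tp + fp + fn + tn
    return {
        "%CC":  100.0 * (tp + tn) / total,
        "%SE":  100.0 * tp / (tp + fn),
        "%SP":  100.0 * tn / (tn + fp),
        "%PPV": 100.0 * tp / (tp + fp),
        "%NPV": 100.0 * tn / (tn + fn),
    }
```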
IV. CONCLUDING REMARKS

In this study, a CAD system based on color texture analysis for the classification of hysteroscopy images of the endometrium, in support of the early detection of gynaecological cancer, was investigated. The maximum average correct classification score was 72,2% and was achieved using the HDT algorithm with 26 texture features, for the Y channel. The best performance for a single model was %CC=78,7, achieved with the C4.5 algorithm in the Y channel. Similar performance was achieved with both the HDT and the C4.5 algorithms when trained with the YCrCb texture features. These results support the application of texture analysis for the assessment of normal and abnormal endometrium tissue. Although similar performance was also achieved using the SVM and PNN models, the decision tree algorithms investigated also facilitated rule extraction and the use of rules for classification. These models can help the physician especially in the assessment of difficult cases of gynaecological cancer. However, more cases have to be collected and analysed before the proposed CAD system can be exploited in clinical practice.
Table 2 Performance of the C4.5 and HDT classification models for training (TR) and evaluation (EV), based on texture features from hysteroscopy images (No. of ROIs for both training and evaluation: 101 normal and 101 abnormal = 202)

No.  Model  Feat.  Algorithm  %CC TR/EV   %PR TR/EV   %SE TR/EV   %SP TR/EV   %NPV TR/EV
1.   Grey    26    C4.5       74,0/57,1   90,1/69,4   70,1/55,9   87,2/65,5   57,9/44,8
2.   Grey    26    HDT Entr   63,7/57,8   88,1/80,9   59,3/55,5   76,9/64,9   39,3/34,7
3.   Grey    26    HDT BK2    65,2/55,6   89,4/86,1   60,2/53,5   79,6/64,5   40,9/25,1
4.   Grey    26    HDT BDEU   63,7/57,8   88,1/80,9   59,3/55,5   76,9/64,9   39,3/34,7
5.   R       26    C4.5       74,2/63,3   68,4/56,9   79,2/66,3   75,0/64,2   79,9/69,7
6.   R       26    HDT Entr   69,3/64,1   78,2/73,2   67,4/63,3   75,6/69,1   60,4/55,0
7.   G       26    C4.5       82,4/64,5   82,7/61,1   83,2/68,1   83,9/63,7   82,1/67,9
8.   G       26    HDT Entr   71,2/64,2   66,5/57,7   75,4/70,2   71,8/63,4   75,9/70,7
9.   B       26    C4.5       82,4/64,5   82,7/61,1   83,2/68,1   83,9/63,7   82,1/67,9
10.  B       26    HDT Entr   68,0/61,7   51,0/41,5   82,6/76,8   66,4/59,3   85,0/81,9
11.  RGB     78    C4.5       83,7/63,2   79,5/56,0   88,0/65,8   85,2/62,9   87,8/70,4
12.  RGB     78    HDT Entr   71,6/63,4   74,3/64,0   72,8/67,4   76,2/64,8   69,0/62,8
13.  RGB     78    HDT BK2    71,7/64,1   69,9/60,9   74,8/68,2   73,3/64,4   73,6/67,2
14.  RGB     78    HDT BDEU   70,8/64,2   66,2/59,5   75,0/69,1   71,7/64,2   75,3/68,9
15.  H       26    C4.5       75,5/59,1   66,2/50,1   82,6/63,0   72,3/57,5   84,9/68,1
16.  H       26    HDT Entr   65,2/56,2   53,1/51,8   73,1/58,8   63,5/55,8   77,2/60,5
17.  S       26    C4.5       83,5/62,4   82,7/59,9   84,7/65,7   84,9/62,8   84,4/65,0
18.  S       26    HDT Entr   70,3/64,9   54,5/49,0   81,2/73,4   65,8/61,5   86,1/80,8
19.  V       26    C4.5       74,2/63,3   68,4/56,9   79,2/66,3   75,0/64,2   79,9/69,7
20.  V       26    HDT Entr   69,3/64,1   78,2/73,2   67,4/63,3   75,6/69,1   60,4/55,0
21.  HSV     78    C4.5       77,4/61,2   61,3/46,3   82,9/66,4   76,7/60,1   93,5/76,1
22.  HSV     78    HDT Entr   71,4/62,8   70,6/61,0   73,9/66,3   73,4/64,1   72,3/64,7
23.  HSV     78    HDT BK2    70,6/62,0   70,8/60,9   73,9/66,7   74,0/64,3   70,4/63,2
24.  HSV     78    HDT BDEU   70,7/62,4   67,6/57,9   75,7/67,4   72,4/63,4   73,9/66,8
25.  Y       26    C4.5       85,9/71,4   84,1/69,4   87,8/73,0   84,9/70,7   87,8/73,5
26.  Y       26    HDT Entr   77,3/72,2   74,3/68,4   79,5/74,4   75,8/70,7   80,4/76,0
27.  Y       26    HDT BK2    77,3/72,2   74,3/68,4   79,5/74,4   75,8/70,7   80,4/76,0
28.  Y       26    HDT BDEU   77,3/72,2   74,3/68,4   79,5/74,4   75,8/70,7   80,4/76,0
29.  Cr      26    C4.5       73,1/56,9   53,0/37,2   88,7/63,5   68,2/55,4   93,1/76,5
30.  Cr      26    HDT Entr   67,4/63,2   50,4/45,6   79,2/72,3   64,6/59,8   84,5/80,8
31.  Cb      26    C4.5       75,0/61,8   65,9/52,5   82,6/66,3   72,1/60,1   84,2/71,2
32.  Cb      26    HDT Entr   63,5/57,8   32,9/24,0   85,8/77,7   58,6/54,7   94,1/91,6
33.  YCrCb   78    C4.5       88,8/68,5   96,7/76,6   84,0/66,5   96,1/72,1   80,9/60,4
34.  YCrCb   78    HDT Entr   76,9/72,1   71,7/66,8   80,1/75,2   74,5/70,1   82,1/77,3
35.  YCrCb   78    HDT BK2    76,9/72,1   71,7/66,8   80,1/75,2   74,5/70,1   82,1/77,3
36.  YCrCb   78    HDT BDEU   76,9/72,1   71,7/66,8   80,1/75,2   74,5/70,1   82,1/77,3
Table 3 Rules extracted with the C4.5 algorithm for the classification of normal and abnormal ROIs for model Y(1) with %CC=78,7 for the evaluation set; Table 4 gives the corresponding rules for the YCrCb model with %CC=77,2 (rules on sgldm_cor, glds_eng, sf_mode, sf_kurt, sf_mean and related features).

CTG recordings of more than 32 weeks of gestation are interpreted using the algorithm. For the same set of data, experts' interpretations are recorded and stored. The computerized interpretation results are compared with the results of manual CTG interpretation using Bland-Altman, a statistical method for method comparison. The entire process is tested on 15 CTG recordings. The results obtained this way show that CTG interpretation using the Kubli score is reliable and specific, especially when rating the amplitude of fluctuation and the frequency of fluctuation, and offers major advantages compared to subjective assessment. A scoring system for fetal surveillance, like the Kubli score, is a systematic way of interpreting antepartum cardiotocograph recordings.

Keywords — Fetal Heart Rate, Cardiotocograph, Scoring System, Kubli Score, Fetal Surveillance.
I. INTRODUCTION

The cardiotocograph has been used for fetal surveillance since the 1960s. It comprises two biosignals: Fetal Heart Rate (FHR) and Uterine Activity (UA). A typical cardiotocograph tracing is shown in Fig. 1. Heart Rate (HR) is known to contain reliable indications about the synergic activity of the autonomic nervous system (ANS), which regulates the heart dynamics [1]. Parameters from the HR signal can
differentiate pathological states, providing interesting hints about the generation of the disease condition [2]. HR measurements and analysis provide a quantitative tool for evaluating the synergetic control activity performed by the sympathetic and parasympathetic branches of the ANS. They represent a powerful method for establishing the development of the nervous system of the fetus during the last period of pregnancy, starting from the 25th week of gestation [3].
Fig. 1 A typical cardiotocograph tracing

It is strongly believed that FHR tracings convey much more information than what is actually interpreted [4]. Several features can be extracted from the FHR signal which can help obstetricians to predict fetal well-being. CTG analysis is currently done manually by obstetricians [5]. Therefore, CTG interpretation involves a high degree of inter-observer and intra-observer variability, and the potential value of the test is consequently affected. According to the literature, assessment of CTGs with a scoring system showed higher inter- and intra-observer reliability than subjective assessment [6]. Development of an application that can provide a systematic and objective method of analysis is thus highly desirable. In our work we have tried to achieve this by developing an automated method for CTG interpretation. The results show that the scoring system is indeed a systematic way of interpreting CTGs. In most of the South East Asian countries
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 962–965, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Computerized Interpretation of Cardiotocographs Using Kubli Score
where a scoring system for CTG analysis is seldom used, this work could be an initial step towards systemizing the CTG analysis process. The overall system design is shown in Fig. 2.

II. MATERIALS AND METHODS
A. System Design

Fig. 2 Work flow diagram

B. Pre-processing and Feature Extraction

Pre-processing: FHR is a noisy signal with spiky artifacts, which occur due to fetal movements or displacement of the transducer. In the pre-processing stage the biosignals are conditioned and the spiky artifacts are removed using a method described in [7]. This method first detects a stable FHR segment, i.e. an FHR segment where the difference between five adjacent samples is less than 10 bpm; whenever a difference between adjacent beats higher than 25 bpm is found, linear interpolation is applied between the last sample before the jump and the first sample of the next stable FHR segment. The number of interpolated points is used in measuring the signal quality [7, 8]. In addition to spiky segments, the FHR includes segments of missing values; in this case linear interpolation is applied as well. Samples below 50 bpm are counted as signal loss. Filtering of the UA is done with a 9-point moving average filter, eliminating most of the high frequency noise that impairs contraction detection [7].

Feature Extraction: Some important patterns are recognized from the FHR. In this phase the baseline of the FHR signal is estimated using an algorithm based on the number and continuity of occurrences of FHR values [9]. Acceleration and deceleration pattern information is extracted and stored in two vectors. This pattern information is used to remove accelerations and decelerations from the FHR signal. All the information obtained from this stage is passed as input to the next stage, the scoring system.

C. Scoring System

Fig. 3 Scoring algorithm design
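The artifact-removal rule in the pre-processing step can be sketched as follows. This is a simplified reading of the method in [7], not the authors' code; the parameter and function names are mine:

```python
import numpy as np

def remove_spikes(fhr, jump_bpm=25.0, stable_diff_bpm=10.0, stable_len=5):
    """When a beat-to-beat jump exceeds `jump_bpm`, linearly interpolate
    from the last good sample up to the start of the next stable segment
    (`stable_len` adjacent samples differing by less than `stable_diff_bpm`)."""
    fhr = np.asarray(fhr, dtype=float).copy()
    i = 0
    while i < len(fhr) - 1:
        if abs(fhr[i + 1] - fhr[i]) > jump_bpm:
            # Search for the start of the next stable segment
            j = i + 1
            while j + stable_len <= len(fhr):
                seg = fhr[j : j + stable_len]
                if np.all(np.abs(np.diff(seg)) < stable_diff_bpm):
                    break
                j += 1
            j = min(j, len(fhr) - 1)
            # Linear interpolation across the spiky region
            fhr[i : j + 1] = np.linspace(fhr[i], fhr[j], j - i + 1)
            i = j
        else:
            i += 1
    return fhr
```

The same interpolation can be reused for the missing-value segments mentioned above.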
The scoring system is based on the Kubli score. The algorithm scores five separate FHR components: baseline rate, amplitude of fluctuation, frequency of fluctuation, decelerations and accelerations. Each component is scored on a scale of 0 to 2 [10]. The design of the scoring algorithm is shown in Fig. 3. According to the Kubli score guidelines [11] the five components are defined as follows:
1. Baseline Rate: the average range of the FHR resting rate where there are no accelerations, decelerations or uterine activity.
2. Amplitude of Fluctuation: having identified the baseline portions, scrutinize a one-minute interval. Note the lowest and the highest fetal heart beat during that minute, calculate the range and score according to the parameter. Select two other one-minute intervals and repeat the same process, so that the overall average is an accurate representation of the entire fetal heart strip.
3. Frequency of Fluctuation: take the same three identified one-minute periods and estimate the number of cycles present in each minute. A cycle must last 5 seconds and have a measurable amplitude of 5 bpm to be judged a genuine cyclical cardiac movement and not machine artifact. Calculate the average cycles per minute and score accordingly.
4. Deceleration Pattern: scrutinize the graph for any identified pattern of either late or variable decelerations.
IFMBE Proceedings Vol. 22
_________________________________________________________________
964
B.N. Krupa, F.M. Hasan, M.A. Mohd. Ali and E. Zahedi
Classify the decelerations observed and score according to the persistence and severity of the pattern.

5. Acceleration Pattern: Measure the fetal heart accelerations from the baseline immediately preceding the acceleration. The fetal heart rate must increase by at least 15 beats per minute from the baseline and be sustained at that level or higher for at least 15 seconds in order to fulfill the criteria of one satisfactory acceleration. Measure all accelerations and score accordingly.

Kubli et al. and Lyons et al. proposed the usefulness of evaluating FHR patterns in the absence of contractions [10]. The scoring system algorithm, developed using the NetBeans IDE 6.0.1, extracts all five components, scores them according to Table 1, and calculates the Kubli score, which can range from 0 to 10. A maximum score of 10 is given for a reactive CTG tracing [12]. The Kubli score thus generated is given as an input to the next stage, the classification stage.
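The acceleration criterion quoted above (at least 15 bpm above baseline, sustained for at least 15 seconds) translates directly into a run-length check. The sketch below is our own illustration, not the authors' NetBeans implementation; the 4 Hz sampling rate is an assumption (typical for CTG traces, not stated in the text).

```python
import numpy as np

FS = 4.0   # assumed CTG sampling rate in Hz (not given in the paper)

def count_accelerations(fhr, baseline, rise_bpm=15.0, min_dur_s=15.0):
    """Count runs where FHR exceeds baseline by >= rise_bpm for >= min_dur_s."""
    above = np.asarray(fhr) >= baseline + rise_bpm
    min_len = int(min_dur_s * FS)
    count, run = 0, 0
    for flag in above:
        run = run + 1 if flag else 0
        if run == min_len:          # run just reached the required duration
            count += 1
    return count

fhr = np.full(2400, 140.0)     # 10 minutes at 4 Hz around a 140 bpm baseline
fhr[400:500] = 160.0           # 25 s elevation: one satisfactory acceleration
fhr[1200:1230] = 160.0         # 7.5 s elevation: too short to count
n_acc = count_accelerations(fhr, baseline=140.0)
```

Counting exactly once when the run length reaches the threshold avoids double-counting a long acceleration.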
III. RESULTS AND DISCUSSION

A set of 15 CTG recordings, each of 20 minutes duration and recorded at more than 32 weeks of gestation, is considered for testing the application. The same set of CTG recordings is given to 2 experts for visual interpretation. The automated CTG analysis results are compared with the experts' interpretations using the Bland-Altman statistical method for method comparison [13].
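Bland-Altman comparison reduces to the bias (mean of the paired differences) and the 95% limits of agreement at bias ± 1.96 standard deviations. A minimal sketch follows; the paired scores are made-up illustrations, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements a and b."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)                    # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired scores from the automated system and one expert
auto   = [10, 12, 14, 16]
expert = [11, 11, 15, 15]
bias, lo, hi = bland_altman(auto, expert)
```

If most differences fall inside [lo, hi] and the bias is near zero, the two methods can be considered interchangeable for practical purposes.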
Table 1 Kubli scoring system: each parameter (baseline rate, amplitude of fluctuation, frequency of fluctuation, decelerations, accelerations) is scored from 0 to 2.
Figure 3: Condition evaluation and user interface

IV. RESULTS
Figure 2: Event processing, showing the calculation of heartbeat intervals (Δt) and heart rate variability (σ(Δt)) from the beat event "queue"
The condition part also has a "listener" that listens to the calculated result from the event part. A healthy adult should have a time interval between heart beats of around 800 milliseconds (75 beats/min) [20]. So when the time interval becomes much smaller or larger than 800 milliseconds, this condition part will place the calculated time interval
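The event-processing chain of Figure 2, from beat timestamps to intervals Δt, their spread σ(Δt), and a condition check against the nominal 800 ms, can be sketched as follows. The 25% deviation threshold is our own illustrative assumption, not a value from the paper.

```python
import numpy as np

NOMINAL_MS = 800.0   # ~75 beats/min for a healthy adult [20]

def process_beats(beat_times_ms, tolerance=0.25):
    """From beat event timestamps (ms), compute the intervals, their standard
    deviation, and flag intervals deviating from the nominal by > tolerance."""
    dt = np.diff(np.asarray(beat_times_ms, dtype=float))   # delta-t per beat
    sigma = dt.std()                                       # sigma(delta-t), an HRV measure
    flags = [abs(iv - NOMINAL_MS) / NOMINAL_MS > tolerance for iv in dt]
    return dt, sigma, flags

dt, sigma, flags = process_beats([0, 800, 1600, 2400, 3900])
# the last interval (1500 ms) deviates by 87.5% and would trigger the condition part
```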
The context collection facility in this system is composed of three parts: data transportation, the algorithm, and the E-C-A rule parser. In agreement with the literature on Bluetooth data transportation, it was found that when the PDA is 30 meters away in line of sight from the simulator, it can no longer receive data over the Bluetooth channel. The beat detection algorithm, developed in Java, implements Kohler's zero-crossing algorithm [13], showing that it can work in real time with a low computational load and high detection accuracy. The E-C-A rule parser has a memory-efficient interface which uses a logger file. This file is used to record all the context information. When a suspicious heart beat interval is detected, the application asks the user to modify the logger file to input the current context information, and it also records the associated time information automatically.

V. CONCLUSIONS

This work presented a mobile intelligent context collection system and an accompanying healthcare scenario that can be generalized to represent a context information
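The E-C-A (Event-Condition-Action) pattern underlying the rule parser can be illustrated with a tiny rule engine. The rule, the interval threshold, and the logger format here are hypothetical, chosen only to show the structure, and do not reproduce the authors' Java implementation.

```python
import time

class EcaEngine:
    """Minimal Event-Condition-Action engine: on each posted event, every
    rule whose condition holds fires its action."""
    def __init__(self):
        self.rules = []   # list of (condition, action) pairs
        self.log = []     # stand-in for the logger file described in the text

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def post_event(self, event):
        for condition, action in self.rules:
            if condition(event):
                action(event, self)

def suspicious_interval(event):
    # condition: heartbeat interval far from nominal (threshold is illustrative)
    return event["type"] == "beat_interval" and abs(event["ms"] - 800) > 200

def record_context(event, engine):
    # action: record the interval and a timestamp, as the logger file would
    engine.log.append((event["ms"], time.time()))

engine = EcaEngine()
engine.add_rule(suspicious_interval, record_context)
engine.post_event({"type": "beat_interval", "ms": 810})    # condition false
engine.post_event({"type": "beat_interval", "ms": 1500})   # condition true, logged
```

Separating condition from action keeps rules declarative, which is what makes the E-C-A approach easy to extend to the other monitoring scenarios mentioned in the conclusions.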
IFMBE Proceedings Vol. 22
_________________________________________________________________
A mobile ECG monitoring system with context collection
collection to support other mobile self-wellness monitoring systems such as respiration, glucose level and EEG monitoring [21]. It can also in principle be applied to other time series signals, for example in power signature analysis and weather reporting. The authors' experience during the project indicates that contemporary mobile devices and wireless communications still suffer from certain limitations which may restrict the ability to store, process and transmit large volumes of multimedia clinical data in real time. However, while mobile devices still have limited memory and processing power, and are especially restricted by their battery life, these capacities are increasing rapidly and it is the authors' firm view that the future is bright for mobile healthcare. With the rapid advances in computational power and communications infrastructure, PDA-like devices with low memory usage and integrated context information collection, managed by the patient, have the potential to become an important and powerful tool that helps the doctor make better decisions about the user's health condition, either retrospectively or in remote monitoring.
REFERENCES

1. Kowey P.R., Kocovic D.Z. (1993) Ambulatory Electrocardiographic Recording. American Heart Association, ISSN: 0009-7322, Online 72514, pp 337-341
2. Kadish A.H., Buxton A.E., Kennedy H.L. (2001) ACC/AHA clinical competence statement on electrocardiography and ambulatory electrocardiography: a report of the ACC/AHA/ACP-ASIM task force on clinical competence. Circulation 104:3169-3178
3. Kennedy H.L., Podrid P.J. (2001) Role of Holter monitoring and exercise testing for arrhythmia assessment and management. In: Cardiac Arrhythmia. Lippincott Williams & Wilkins, Philadelphia, pp 165-193
4. Ambulatory ECG at http://www.americanheart.org/presenter.jhtml?identifier=4425
5. Chen G., Kotz D. (2000) A Survey of Context-Aware Mobile Computing Research. Computer Science Technical Report TR2000-381, Department of Computer Science, Dartmouth College, pp 1-3
6. Crawford M.H., Bernstein S.J., Deedwania P.C., DiMarco J.P., Ferrick K.J., Garson A. Jr (1999) ACC/AHA guidelines for ambulatory electrocardiography. Journal of the American College of Cardiology 34(4):1262-1347
7. Jones V.M., Mei H., Broens T.H.F., Widya I.A., Peuscher J. (2007) Context Aware Body Area Networks for Telemedicine. 8th Pacific Rim Conference on Multimedia, Hong Kong, China, pp 590-599
8. Fensli R., Gunnarson E., Hejlesen O. (2004) A Wireless ECG System for Continuous Event Recording and Communication to a Clinical Alarm Station. Medinfo, IOS Press, Amsterdam, pp 2208-2211
9. Rodríguez J., Goni A., Illarramendi A. (2005) Real-Time Classification of ECGs on a PDA. IEEE Transactions on Information Technology in Biomedicine 9(1):23-34
10. Holter Devices Comparison at http://www.medcompare.com/matrix/132/Holter-Monitor.html
11. Bluetooth at http://www.bluetooth.com/bluetooth/
12. Cronin A. (2005) The Investigation of a Wireless Medical Homecare Data Logger. MPhil Thesis, Dublin Institute of Technology, Department of Control Systems and Electrical Engineering, pp 53-63
13. Kohler B.U., Orglmeister R. (2003) QRS Detection Using Zero Crossing Counts. Progress in Biomedical Research 8(3):138-145
14. Kohler B.U., Hennig C., Orglmeister R. (2002) The Principles of Software QRS Detection. IEEE Engineering in Medicine and Biology Magazine 21(1):42-57, ISSN: 0739-5175
15. Oracle at www.oracle.com
16. Wu B., Dube K. (2001) Applying Event-Condition-Action Mechanism in Healthcare: a Computerised Clinical Test-Ordering Protocol System (TOPS). Cooperative Database Systems for Advanced Applications (CODAS), IEEE, ISBN: 0-7695-1128-7, pp 2-9
17. Bailey J. (2002) An Event-Condition-Action Language for XML. Proceedings of the 11th International World Wide Web Conference, Honolulu, Hawaii, USA, pp 486-495
18. Gatziu S. (1998) Unbundling Active Functionality. ACM SIGMOD Record 27(1):35-40
19. Zoumboulakis M. (2004) Active Rules for Sensor Databases. Proceedings of the 1st International Workshop on Data Management for Sensor Networks (in conjunction with VLDB), Toronto, Canada, pp 98-103
20. Rickards A.F. (1996) An Implantable Intracardiac Accelerometer for Monitoring Myocardial Contractility. Pacing and Clinical Electrophysiology 19(12):2066-2071
21. Papadelis C., Kourtidou-Papadeli C., Bamidis P.D., Chouvarda I. (2006) Indicators of Sleepiness in an Ambulatory EEG Study of Night Driving. Proceedings of the 28th IEEE EMBS Annual International Conference, New York City, USA
22. Podrid P.J., Kowey P. (2001) Cardiac Arrhythmia. Lippincott Williams & Wilkins, Philadelphia, pp 165-193
Identification of Signal Components in Multi-Channel EEG Signals via Closed-Form PARAFAC Analysis and Appropriate Preprocessing

Dunja Jannek1, Florian Roemer2, Martin Weis2, Martin Haardt2, and Peter Husar1

1 Ilmenau University of Technology, Institute of Biomedical Engineering and Informatics, Ilmenau, Germany
2 Ilmenau University of Technology, Communications Research Laboratory, Ilmenau, Germany
Abstract — It is a major task in EEG analysis to identify signal components based on time-frequency distributions. The main objective is to decompose a multichannel EEG into timefrequency-space atoms. A lot of work was done in the field of subspace estimation with two of the aforementioned three dimensions, e.g., by using an SVD, PCA or ICA as well as space-time filtering or beam-forming. A more powerful approach is the use of tensor decompositions. For example, PARAFAC (Parallel Factor) analysis decomposes a tensor into rank-one components and thereby represents a multidimensional extension of the SVD. This renders it an attractive approach for EEG signal analysis. The selection of an appropriate time-frequency preprocessing scheme improves the results of the PARAFAC analysis. In a first study, we have investigated several time-frequency preprocessing techniques to create a tensor in time, frequency, and space for multichannel EEG signals. The common approach in PARAFAC analysis is the use of a wavelet transformation based on the MORLET wavelet as a preprocessing step. In this paper, we show that preprocessing based on the Wigner distribution leads to much better results than a wavelet analysis. First results have been obtained by the use of EEG signals of evoked potentials. Keywords — PARAFAC analysis, MORLET Wavelet, Wigner-Ville Distribution, RID kernel, EEG Preprocessing.
I. INTRODUCTION Finding the components of activity in the brain from recorded EEG signals is a great challenge in the field of biomedical signal analysis. This knowledge can be used to detect and localize sources of epileptic seizures as well as sources of cognitive processing like speech or auditory handling. Unfortunately, the solutions to these types of inverse problems may not be unique: Different sources in the brain can produce the same EEG pattern on the scalp. Therefore, different approaches to find a suitable approximation have been developed. For example, LORETA is one out of a class of methods which resolve the ambiguity by assuming that neighboring neurons are active synchronously [1]. This guarantees that a set of bipolar sources exists over the whole cortical surface.
Another approach is based on the dipole model. It is assumed that there exists a limited number of dipoles as point sources in the brain. Dipole fitting methods estimate the location of these dipoles by iterative calculations. It has to be defined how many dipoles have to be estimated, where they can be located and how they interact in time. To improve the estimation process, preprocessing of the signal in the form of a subspace decomposition can be applied. There exist several contributions in the field of EEG processing applying techniques such as PCA, ICA and the SVD (which ignore the spatial information) or beam-forming strategies (which exploit the spatial information) [2]. However, not all the assumptions for these methods are fulfilled in the case of EEG signals. Moreover, not all dimensions (time, frequency, space) are integrated in the analysis. Tensor-based methods are a more natural approach to handle signals that vary in more than two dimensions (e.g., time, space, and frequency). The well-known PARAFAC decomposition (also known as CANDECOMP) is a powerful approach to decompose a tensor into components. In recent years much work has been done on applying PARAFAC to EEG signal analysis, e.g., for estimating the sources of cognitive processing using a wavelet decomposition [3], for ERP analysis [4] or for epileptic seizure localization [5]. It is well known that wavelet analysis is not always a suitable time-frequency decomposition because it may not provide adequate time and frequency resolution. For this reason we compare the results of a wavelet decomposition in an ERP analysis with a Wigner distribution. In this contribution, both methods are applied as preprocessing steps for a new closed-form PARAFAC solution [12, 13].

II. MATERIAL AND METHODS

A. Signal Component Analysis Preprocessing

The first step in the signal component analysis is applying preprocessing in the form of an appropriate time-frequency decomposition (TFD).
That means that the measured time signal from each channel is transformed into a time-frequency representation in order to resolve both the
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 1226–1230, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
temporal evolution as well as the frequency content of the measured signal. There exist a large number of methods to perform this task. An approach that is very often used is some form of wavelet analysis. In particular, the continuous wavelet transform (CWT) can be applied to decompose a signal into its time and frequency content [6]. The CWT at scale a and translation τ of a signal x(t) is defined as

C(a, \tau) = \int_{-\infty}^{+\infty} x(t)\, \varphi(a, t, \tau)\, dt    (1)

where φ represents the chosen wavelet. Common choices include the class of biorthogonal wavelets, Daubechies wavelets, and the MORLET wavelets. The disadvantage of CWT-based time-frequency preprocessing is the limited resolution, especially in the low-frequency region, which is very important in EEG signal analysis.

A more powerful approach to time-frequency analysis is given by the family of Wigner-Ville distribution functions, based on the seminal work by Wigner in 1932 and Ville in 1948. The distribution is based on the temporal correlation function (TCF) q_x(t, τ) of the complex signal x(t)

q_x(t, \tau) = x(t + \tau/2)\, x^{*}(t - \tau/2).    (2)

The Fourier Transform (FT) of the TCF with respect to the lag parameter τ leads to the Wigner-Ville Distribution (WVD) of x(t)

W_x(t, f) = \int_{-\infty}^{+\infty} q_x(t, \tau)\, e^{-j 2\pi f \tau}\, d\tau.    (3)

The main drawback of the TCF is that it produces cross terms in the WVD and in the Ambiguity Function (AF), which is the FT of the TCF with respect to t. On the other hand, its advantage is that time and frequency resolution can be adjusted separately. Cohen introduced a class of TFDs based on the WVD which allow the use of kernel functions for reducing cross terms [7]. There exists a great variety of TFDs for a large number of applications. To apply the Pseudo WVD (PWD), the WVD has to be filtered with a one-dimensional filter as a sliding window function. This leads to spectral leakage but has no effect on the cross terms in the time-frequency plane. One way to reduce the influence of these cross terms is the construction of a cross-shaped low-pass filter which allows high time and frequency resolution and has a large time and frequency support. This method is the Reduced Interference Distribution (RID), which can be combined with several window functions [8].

Three-Way PARAFAC

After the time-frequency analysis the overall signal comprises three dimensions: for every channel, the signal is represented by a time-frequency matrix. Therefore, the signal can be expressed as a three-dimensional tensor X of size

N_T \times N_F \times N_C    (4)

where N_T and N_F represent the number of samples in time and frequency and N_C the number of channels, respectively. In order to separate signal components in this tensor, three-dimensional extensions of the singular value decomposition can be used. The SVD has a long-standing history in signal component analysis in the form of PCA. In the tensor case, the PARAFAC decomposition is known as a multi-dimensional extension of the SVD that decomposes a tensor into a minimal sum of rank-one tensors. The underlying model can be represented in the following fashion

X_{i,j,k} = \sum_{n=1}^{d} u_n(t_i)\, v_n(f_j)\, w_n(k), \quad i = 1, \ldots, N_T,\; j = 1, \ldots, N_F,\; k = 1, \ldots, N_C    (5)

Here, u_n(t_i) and v_n(f_j) represent the sampled time and frequency responses of the n-th component at time instant t_i and frequency bin f_j, respectively. Also, w_n(k) represents the strength of the n-th component in the k-th channel. Moreover, d represents the number of signal components (i.e., the model order) of the signal. In practice, the measured tensor does not obey the model exactly for a number of reasons:

- There is noise in the system. For EEG data this noise is in general not Gaussian distributed and also not spatially uncorrelated.
- The individual components are not necessarily rank-one; each of them may have a higher rank.
- The superposition of components is not ideally linear; nonlinear couplings have been observed and are in general not included in the models.
- The observed process is not stationary. This issue can be partially taken care of by dividing the original signal into smaller time intervals and analyzing each interval on its own.
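The TCF and WVD defined above translate directly into a discrete sketch: at each time index n, form the lag product x[n+k]·x*[n−k] and take an FFT over the lag k. This minimal one-sided-lag version is our own illustration (a full discrete WVD uses symmetric lags and windowing); note the well-known factor of two on the frequency axis, so a tone at normalized frequency f0 peaks at bin 2·f0·K. Signal length, lag count, and the test tone are arbitrary choices.

```python
import numpy as np

def wvd_slice(x, n, K):
    """One time slice of a discrete Wigner-Ville distribution:
    FFT over lag k of the temporal correlation x[n+k] * conj(x[n-k])."""
    k = np.arange(K)
    q = x[n + k] * np.conj(x[n - k])   # discrete TCF q_x(n, k)
    return np.fft.fft(q)               # FT over the lag, as in the WVD

# complex exponential at normalized frequency f0 = 0.125
N, f0, K = 256, 0.125, 64
n = np.arange(N)
x = np.exp(2j * np.pi * f0 * n)
W = wvd_slice(x, n=128, K=K)
peak_bin = int(np.argmax(np.abs(W)))   # expect 2 * f0 * K = 16
```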
Dunja Jannek, Florian Roemer, Martin Weis, Martin Haardt, and Peter Husar
Therefore, we require an algorithm to compute an approximate fit of a measured tensor to a PARAFAC model. We also need an algorithm that is numerically stable enough to cope with the non-ideal conditions in practical data. The existing algorithms for computing approximate PARAFAC model fits can coarsely be divided into three categories. The first category comprises the iterative algorithms. These are based on the alternating least squares (ALS) idea [9]: in each iteration, two of the three factors are fixed and the third factor is computed via a least squares fit. Then this factor is fixed and the next one is optimized in a similar fashion. This iterative procedure is repeated until convergence is detected. While it can be shown that ALS converges monotonically, it is not guaranteed to reach the global optimum. Also, the number of iterations may be too large for practical purposes. A large amount of research has been dedicated to making ALS faster, either through smart initializations or optimized update rules. A fast implementation of ALS is given by the PARAFAC algorithm [10], which is available in the N-way toolbox. The second class of algorithms consists of suboptimal solutions that enable a closed-form solution through coarse approximations. Well-known methods in this category include the Generalized Rank Annihilation Method (GRAM) and the Direct Trilinear Decomposition (DTLD) [11]. While these methods are very fast, the obtained fit is usually not very satisfactory. Finally, the third class of algorithms for approximate PARAFAC model fitting is based on a framework introduced in [12, 13], which provides a class of closed-form solutions that can achieve very good performance without the need for ALS iterations. The approach is based on the Higher-Order SVD [14] and simultaneous matrix diagonalizations. The authors demonstrate the enhanced robustness of the Closed-Form PARAFAC scheme, which renders it an attractive approach for non-ideal EEG signals.
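The ALS idea (fix two factors, solve for the third by least squares, rotate) can be sketched in a few lines of numpy. This is a generic textbook illustration, not the PARAFAC algorithm of [10] or the closed-form scheme of [12, 13]; the dimensions, rank d = 3, and iteration count are arbitrary choices.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x d) and B (J x d) -> (I*J x d)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

def parafac_als(X, d, n_iter=300, seed=0):
    """Fit X[i,j,k] ~ sum_n U[i,n] V[j,n] W[k,n] by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    U, V, W = (rng.standard_normal((s, d)) for s in (I, J, K))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        # fix two factors, solve for the third via a least squares fit
        U = X1 @ khatri_rao(V, W) @ np.linalg.pinv((V.T @ V) * (W.T @ W))
        V = X2 @ khatri_rao(U, W) @ np.linalg.pinv((U.T @ U) * (W.T @ W))
        W = X3 @ khatri_rao(U, V) @ np.linalg.pinv((U.T @ U) * (V.T @ V))
    return U, V, W

# build an exact rank-3 tensor and try to recover it
rng = np.random.default_rng(1)
U0 = rng.standard_normal((20, 3))
V0 = rng.standard_normal((15, 3))
W0 = rng.standard_normal((8, 3))
X = np.einsum('in,jn,kn->ijk', U0, V0, W0)
U, V, W = parafac_als(X, d=3)
Xhat = np.einsum('in,jn,kn->ijk', U, V, W)
rel_err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
```

On noiseless exact-rank data like this, ALS usually converges quickly; on real EEG tensors the model mismatch listed above is exactly why the more robust closed-form schemes are attractive.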
Fig. 1: a) Time course of ERP from an occipital EEG channel; b) time course for all 64 channels – occipital channels show the response earlier than frontal ones

B. EEG Recording

The EEG signal is recorded from a 23-year-old, healthy, right-handed woman. The position of the 64 EEG electrodes is based on the international 10-10 system with earlobe reference [(A1+A2)/2]. The sampling frequency is chosen as 1000 sps. For preprocessing of the raw signal, several filters are applied: a 7 Hz high-pass, a 135 Hz low-pass and a band-stop filter between 45 and 55 Hz. Because we investigate effects in the field of evoked potentials, we use data taken from a visual stimulus. The subject sits in front of a hemispherical perimeter. The stimulus is a 20 ms central flash with a white LED, here presented to the right eye. The experiment is repeated 1600 times. The triggered EEG answers are averaged over all 1600 trials for all channels (see Fig. 1).

III. RESULTS

We have applied the signal component analysis scheme to the measured EEG data. For the results shown here, the Closed-Form PARAFAC algorithm is applied together with three preprocessing methods: a CWT with MORLET wavelets, the PWD, and the RID. The analysis is carried out on the windowed EEG signal. The window length is 80 ms with an overlap of 20 ms between adjacent windows. Hence, 47 windows are analyzed for the whole one-second signal. As stated before, the number of components has to be determined by hand. For the results shown here, three components are used.

Fig. 2: We display the three components for six adjacent time windows (from left to right) from 101 to 280 ms with MORLET as preprocessing step. For each component in each window the spatial distribution is indicated by a topographical plot, the time-frequency signature by the image below, and the bar visualizes the strength of the component.

Fig. 2 shows the six windows with MORLET preprocessing as time-frequency plots and component strengths for all three components. It is expected that there is a strong component in the right hemisphere right from the beginning of the signal. This was already observed from the potential mapping in a previous study. Because of the bad time resolution for low frequencies and the bad frequency resolution for high frequencies, the CWT cannot exactly localize the signal sources. Much better results are achieved with a WVD. Fig. 3 shows the results of the PWD. The localization is much more accurate than using the MORLET analysis. Cross terms are not reduced in this kind of analysis and hence spectral leakage occurs. The use of the RID leads to a reduction of the cross terms, which can be seen in Fig. 4. However, for the RID there is a leakage in time and frequency which influences the spatial localization of the sources.

Fig. 3: We display the three components for six adjacent time windows (from left to right) from 101 to 280 ms with preprocessing by the Pseudo Wigner-Ville Distribution.

Fig. 4: We display the three components for six adjacent time windows (from left to right) from 101 to 280 ms with preprocessing by the Reduced Interference Distribution.

The Closed-Form PARAFAC solution offers new possibilities in dipole fitting estimation. The next studies will show whether it is possible to take the non-stationary nature of EEG signals into account even better, for example by means of a sliding window. Furthermore, we have to develop a procedure to track the components in their temporal evolution. At the moment, components are ordered by their power, which may however vary over time.

IV. CONCLUSIONS

The choice of an appropriate preprocessing scheme is an important factor for the success of the entire EEG signal analysis process. We have observed that WVD-based methods enhance the spatial localization of components compared to wavelet-based methods since they have better time and frequency resolution than wavelet or STFT analysis. We have shown that RID-based preprocessing can be useful to reduce the cross terms; however, it also introduces unwanted leakage effects. Optimizing the cross-term suppression is hence an issue for future studies.

ACKNOWLEDGMENTS

The authors gratefully acknowledge the partial support of the internal excellence initiative of Ilmenau University of Technology.

REFERENCES

1. Pascual-Marqui RD, Michel CM, Lehmann D (1994) Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain. Int J Psychophysiol 18:49-65.
2. Husar P, Berkes S, Drozd M et al. (2002) An Approach to Adaptive Beamforming in Measurement of EEG. Proc EMBEC, Vienna, Austria, 2002, pp. 1438-1439.
3. Miwakeichi F, Martínez-Montes E, Valdés-Sosa PA et al. (2004) Decomposing EEG data into space-time-frequency components using Parallel Factor Analysis. Neuroimage 22(3):1035-1045.
4. Mørup M, Hansen LK, Hermann CS et al. (2006) Parallel Factor Analysis as an exploratory tool for wavelet transformed event-related EEG. NeuroImage 29(3):938-947.
5. De Vos M, De Lathauwer L, Vanrumste B et al. (2007) Canonical Decomposition of Ictal Scalp EEG and Accurate Source Localisation: Principles and Simulation Study. Comput Intell Neurosci 2007:58253.
6. Torrence C, Compo G (1998) A Practical Guide to Wavelet Analysis. Bull Amer Meteorol Soc 79(1):61-78.
7. Cohen L (1995) Time-Frequency Analysis: Theory and Applications. Prentice Hall, Upper Saddle River, NJ.
8. Akay M (1996) Detection and estimation methods for biomedical signals. Academic Press, Orlando, FL.
9. Kroonenberg PM, De Leeuw J (1980) Principal component analysis of three-mode data by means of alternating least squares algorithms. Psychometrika 45(1):69-97.
10. Bro R, Sidiropoulos N, Giannakis GB (1999) A fast least squares algorithm for separating trilinear mixtures. Proc Int Workshop on Independent Component Analysis for Blind Signal Separation, Aussois, France, 1999, pp. 289-294.
11. De Lathauwer L, De Moor B, Vandewalle J (2000) A multilinear singular value decomposition. SIAM J Matrix Anal Appl 21(4):1253-1278.
Author: Dr. Dunja Jannek
Institute: Ilmenau University of Technology, Institute of Biomedical Engineering and Informatics
Street: POB 100565
City: D-98684 Ilmenau
Country: Germany
Email: [email protected]
Analysis of Epileptic EEG Signals by Means of Empirical Mode Decomposition and Time-Varying Two-Sided Autoregressive Modelling

A. Kacha, G. Hocepied and F. Grenez

Laboratoires d'Images, Signaux et Dispositifs de Télécommunications, Université Libre de Bruxelles, Brussels, Belgium

Abstract — The presentation concerns the analysis of EEG recordings for epileptic seizure detection. The EEG signal is decomposed adaptively into oscillating components called intrinsic mode functions (IMFs) using the empirical mode decomposition (EMD) algorithm, and a time-frequency analysis is then carried out on the first two components using a parametric time-frequency distribution based on two-sided autoregressive modelling. The local frequency of the IMF extracted at a given iteration is lower than that of the IMF extracted at the previous iteration, which makes it possible to analyze the signal at different scales. The relative variation of the instantaneous frequency of the IMFs, estimated using the two-sided autoregressive model-based time-frequency distribution, is used as a feature for automatic seizure detection in the EEG recordings. The effectiveness of the proposed method is demonstrated on EEG signals recorded from 18 patients suffering from different kinds of epileptic seizures as well as on normal EEG data recorded from a control population.

Keywords — Epileptic EEG, seizure detection, empirical mode decomposition, two-sided autoregressive modelling, time-frequency analysis.
I. INTRODUCTION

Electroencephalogram (EEG) recordings are most often used to monitor and document the brain activity of epileptic patients. EEG signals of epileptic patients include transient signals called spikes, sharp waves and spike-and-wave activity. These transients can be regarded as superimposed on the more stationary EEG background activity [1]. Detection and assessment of epileptic seizures are usually based on visual inspection of EEG recordings. However, seizure detection based on visual screening of EEG recordings is subjective and requires highly trained professionals to obtain reliable results. Besides, this method becomes time-consuming for long-term EEG recordings, so that developing an analysis method for automatic and reliable epileptic seizure detection is of great importance. Many computer-based techniques for epileptic EEG analysis have been proposed in the literature. Conventional methods of epileptic activity detection that use time-domain descriptors are unreliable because of their high sensitivity to transients and artifacts. Frequency domain-based approaches for automatic seizure detection assume the stationarity of the EEG signal; however, EEG signals are known to be nonstationary, i.e. characterized by time-varying spectral contents, so that the analysis of such signals by means of spectral methods cannot reveal rapid transient events. In an attempt to reveal structures that neither time-domain methods nor frequency-domain methods alone can reveal, joint time-frequency (TF) analysis has been proposed [2]. TF representations map the one-dimensional signal in time into a two-dimensional function of time and frequency, either by using a TF distribution (e.g. the pseudo Wigner-Ville distribution and the smoothed-pseudo Wigner-Ville distribution) or by decomposing the signal onto a set of basis functions (e.g. matching pursuit and the wavelet transform). The major drawback of the TF approaches is that the time-frequency tiling pattern and the basis functions are fixed and therefore cannot be optimal for all EEG signals. Indeed, EEG signals of epileptic patients include different types of transients associated with different kinds of epileptic seizures or artifacts. Ideally, one would like to have signal-dependent TF tiling basis functions. In the present study, we propose an alternative approach for automatic detection of epileptic seizures in EEG recordings. The analysis method consists of two steps. In the first step, the EEG signal is decomposed adaptively into locally oscillating components called intrinsic mode functions (IMFs) via the empirical mode decomposition (EMD) algorithm developed by Huang et al. for the analysis of multicomponent nonlinear and nonstationary signals [3]. The IMFs can be regarded as basis functions extracted from the signal itself. The local oscillating frequency of the IMF extracted at a given iteration is lower than that of the IMF extracted at the previous iteration, which makes it possible to analyze the signal at different scales.
In the second step, a parametric time-frequency analysis of each IMF is carried out by fitting a time-varying two-sided autoregressive (TAR) model [4]. The instantaneous frequencies of the first two IMFs, estimated as the frequencies corresponding to the time-varying spectral lines of the TAR-based TF distribution, are used to extract features for automatic seizure detection.
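As a simple stand-in for the TAR-based estimate, the classical analytic-signal route (FFT-based Hilbert transform, then the derivative of the unwrapped phase) illustrates what an instantaneous-frequency feature measures. This is not the authors' TAR method; the test tone is an arbitrary choice.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero out negative frequencies
    (written for even-length real input)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0
    h[1:N // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x):
    """Normalized instantaneous frequency from the unwrapped phase derivative."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) / (2.0 * np.pi)

# a pure tone at normalized frequency 0.0625 should give f_inst ~ 0.0625
n = np.arange(512)
x = np.cos(2 * np.pi * 0.0625 * n)
f_inst = instantaneous_frequency(x)
```

For an IMF, a seizure-related change would show up as a relative variation of this curve over time, which is the kind of feature the paper extracts from its TAR-based distribution.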
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 1231–1235, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
A. Kacha, G. Hocepied and F. Grenez
The proposed method is physiologically relevant because of the nature of generation of EEG signals as a sum of oscillating modes. Its effectiveness in automatic seizure detection is demonstrated by analysing EEG signals recorded from 18 patients suffering from different kinds of epileptic seizures as well as normal EEG recordings of control population. The remainder of the paper is organized as follows. Section II introduces the empirical mode decomposition algorithm and TAR-based time-frequency distribution for estimating instantaneous frequency of the IMFs as well as the analysis method for epileptic seizure activity detection in EEG signals. Experimental results are presented and discussed in Section III and conclusions are given in Section IV. II. METHODS A. Empirical mode decomposition The EMD is a time-frequency analysis tool that does not require a priori fixed basis function like conventional timefrequency representations (e.g. Wigner-Ville distribution or the wavelet transform). It has been proposed initially in [3] to analyse nonlinear and nonstationary signals like ocean waves and has found applications in many fields such as geophysics and biomedical signal processing [5]. The EMD algorithm decomposes adaptively a given signal x(t) into oscillation modes namely the IMFs extracted from the signal itself. Each IMF component has a zeromean value and only one extremum between zero-crossings. The IMFs are obtained via the iterative sifting process which involves the following steps: 1. Initialize the algorithm: j=1, initial residue r0(t)=x(t) and fix the threshold δ 2. Extract local maxima and minima of rj-1(t) 3. Compute the upper envelope Uj(t) and lower envelope Lj(t) by cubic spline interpolation of local maxima and minima, respectively 4. Compute the mean envelope
mj(t) = [Uj(t) + Lj(t)] / 2

5. Compute the jth component hj(t) = rj-1(t) - mj(t)
6. hj(t) is now processed as rj(t) was: let hj,0(t) = hj(t) and let mj,k(t), k = 0, 1, ..., be the mean envelope of hj,k(t); then compute hj,k(t) = hj,k-1(t) - mj,k-1(t) until
SDk = Σ(t=0..T) |hj,k-1(t) - hj,k(t)|² / (hj,k-1(t))² < δ

7. When the criterion is met, define the jth IMF as IMFj(t) = hj,k(t)
8. Update the residue rj(t) = rj-1(t) - IMFj(t)
9. Increase the sifting index j and repeat steps 2 to 8 until the number of local extrema in rj(t) is less than 3

Each IMF is a narrowband AM-FM component that can be characterized by its instantaneous frequency. The signal can be reconstructed exactly by summing all the J IMFs:

x(n) = Σ(j=1..J) IMFj(n) + rJ+1(n)
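The sifting procedure above can be sketched compactly as follows. This is an illustrative implementation, not the authors' code: the handling of the envelopes at the record ends and the iteration limits are simplifying assumptions of this sketch.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def mean_envelope(x):
    """Mean of the upper and lower cubic-spline envelopes (steps 3-4),
    or None when the signal has fewer than 3 local extrema (stop criterion)."""
    n = len(x)
    maxima = [i for i in range(1, n - 1) if x[i - 1] < x[i] >= x[i + 1]]
    minima = [i for i in range(1, n - 1) if x[i - 1] > x[i] <= x[i + 1]]
    if len(maxima) + len(minima) < 3:
        return None
    idx = np.arange(n)
    # the end points are included so that both splines span the whole record
    up = CubicSpline([0] + maxima + [n - 1], x[[0] + maxima + [n - 1]], bc_type="natural")
    lo = CubicSpline([0] + minima + [n - 1], x[[0] + minima + [n - 1]], bc_type="natural")
    return (up(idx) + lo(idx)) / 2.0

def emd(x, sd_threshold=0.2, max_sift=50, max_imf=12):
    """Empirical mode decomposition by iterative sifting (steps 1-9).
    Returns the list of IMFs and the final residue; their sum restores x."""
    residue = np.asarray(x, dtype=float).copy()
    imfs = []
    while len(imfs) < max_imf:
        m = mean_envelope(residue)
        if m is None:                    # residue has < 3 extrema: stop
            break
        h = residue - m                  # step 5: candidate component
        for _ in range(max_sift):        # step 6: sift until SD_k < threshold
            m = mean_envelope(h)
            if m is None:
                break
            h_new = h - m
            sd = np.sum((h - h_new) ** 2 / (h ** 2 + 1e-12))
            h = h_new
            if sd < sd_threshold:
                break
        imfs.append(h)                   # step 7: store IMF_j
        residue = residue - h            # step 8: update the residue
    return imfs, residue
```

By construction the IMFs and the final residue sum back to the input signal exactly, and the first IMF carries the fastest oscillation present.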
B. TAR model-based time-frequency distribution

Two-sided autoregressive modelling was introduced in the stationary case to improve the performance of the usual AR spectral estimator, and extended to the nonstationary case in [4]. Let x(n) be a discrete-time signal. The pth-order TAR model estimates the present sample x(n) as a linear combination of its past and future values. To model nonstationary signals, the TAR coefficients are assumed to be time-dependent, which yields

x(n) = - Σ(i=1..p) bi(n) [ x(n-i) + x(n+i) ] + u(n)
where u(n) is a Gaussian, zero-mean white noise with variance σ2. The corresponding time-dependent power spectrum is expressed as
Px(n, f) = σ² / | 1 + 2 Σ(i=1..p) bi(n) cos(2πfi) |²
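For given time-varying coefficients bi(n) and noise variance σ², the time-dependent TAR spectrum Px(n, f) = σ² / |1 + 2 Σi bi(n) cos(2πfi)|² can be evaluated as in the sketch below; the coefficient values used in any call are illustrative, and their estimation (following the approach of [4]) is not shown.

```python
import numpy as np

def tar_spectrum(b, sigma2, freqs):
    """Time-dependent TAR power spectrum.

    b      : (N, p) array of time-varying coefficients b_i(n), i = 1..p
    sigma2 : variance of the driving white noise u(n)
    freqs  : normalised frequencies in [0, 0.5)
    Returns an (N, F) array with
    P_x(n, f) = sigma2 / (1 + 2 * sum_i b_i(n) * cos(2*pi*f*i))^2.
    """
    b = np.atleast_2d(np.asarray(b, dtype=float))
    lags = np.arange(1, b.shape[1] + 1)                     # lag index i = 1..p
    cos_terms = np.cos(2.0 * np.pi * np.outer(freqs, lags))  # shape (F, p)
    denom = 1.0 + 2.0 * b @ cos_terms.T                      # shape (N, F)
    return sigma2 / denom ** 2
```

Evaluating the spectrum row by row over n gives the time-frequency distribution from which the instantaneous frequency of each IMF is read off as the frequency of the spectral peak.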
0.05). However, the AUC values of the other 6 radiographers (0.74–0.62) were lower than the radiologist's AUC (0.86) (p < 0.05). The highest radiographer AUC (0.85; sensitivity, 78%; specificity, 84%) and the radiologist's AUC (0.86; sensitivity, 86%; specificity, 76%) were almost equal (p = 0.673). These results suggest that untrained radiographers with high film reading performance for detecting cancer could assist diagnosis in X-ray examination of the stomach, because the film reading performance of several untrained radiographers was comparable to that of a radiologist.

Keywords — radiographer, film reading, stomach, X-ray examination, cancer screening.
I. INTRODUCTION

The film reading performance of radiographers has been evaluated with a view to making better use of radiographers and rectifying the shortage of radiologists [1–5]. For example, in breast cancer screening, many studies have shown the effectiveness of film reading by radiographers, including studies on the equivalence of the film reading performance of radiographers and radiologists [2, 3]. Thus, the film reading performance of radiographers has attracted much attention over the years. Especially in the United Kingdom, the National Health Service Breast Screening Programme incorporates trained and certified radiographers as film readers [6]. Film reading by radiographers is expected to assist with the diagnosis of mammograms and radiographs of other areas. There have therefore been many studies of plain radiographs of accident and emergency patients [4] and chest radiographs [5], and radiographers have assisted in the diagnosis of these patients. However, little has been published on evaluating film reading by radiographers during X-ray examination of the stomach. To investigate whether radiographers can assist in X-ray examination of the stomach, we evaluated the film reading performance of untrained radiographers for detecting gastric cancer.

II. MATERIALS AND METHODS

A. Case study

The Institutional Review Board approved the present study, and informed consent was not required because of the retrospective use of the images. A test set of films of 100 patients who underwent X-ray examination of the stomach was evaluated. The films were selected from 192,404 patients (95,826 male and 96,578 female; mean age, 55 years; range, 19–92 years; 274 cases of cancer) who underwent gastric cancer screening. The screening was
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 1603–1606, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
H. Yatake, T. Katsuda, C. Kuroda, H. Yamazaki, T. Kubo, R. Gotanda, K. Yabunaka, K. Yamamoto, Y. Sawai and Y. Takeda
performed at a screening center in Osaka, Japan between April 2000 and March 2002. The test set of 100 cases (36 male and 64 female; mean age, 62 years; range, 33–78 years) consisted of 50 negative and 50 positive cases. The negative cases were selected at random from cases without cancer that were confirmed as negative in the following year or two. The positive cases were defined as cancer cases, selected from the 274 cases of cancer as follows. First, all cases of cancer were classified into five categories depending on the difficulty of film reading. Then 10 cases were selected from each of the five categories. The film reading difficulty was determined by reference to the screening findings of film reading by physicians. All cancer cases were confirmed by histological analysis.

B. Materials

In gastric cancer screening, a series of seven films (Table 1), as recommended by the Japanese Society of Gastroenterological Mass Survey in 1984 as a standardized method [7], was used. Gastric cancer screening was performed by 10 screening cars with indirect radiographic equipment (U-MA5N; Hitachi Medical, Tokyo, Japan) and two types of indirect radiographic film (MI-FA or MI-FG; Fujifilm Medical, Tokyo, Japan) with one processing unit (CEPROS M2; Fujifilm Medical, Tokyo, Japan). A bloating agent (4.0 g; Baros Effervescent-S; Horii Pharmaceutical, Osaka, Japan) and two kinds of barium sulfate formulation (200 mL barium sulfate, 145% w/v; Barytgen Sol 145; Fushimi Pharmaceutical, Osaka, Japan; and 200 mL barium sulfate, 150% w/v; Baritop Sol 150; Sakai Chemical Industry, Osaka, Japan) were used.

Table 1 Standard method of radiography in gastric cancer screening

The examinee is asked to take effervescent granules before the examination. The examiner makes 7 exposures in the following positions, using a roll of film 70–100 mm in width and 200–300 mL of barium (100 w/v%) as contrast medium.

Position  Method
1         Double-contrast study in prone position
2         Filling method in prone position
3         Double-contrast radiograph in supine position
4         Double-contrast radiograph in supine and right anterior oblique position
5         Double-contrast radiograph in supine and left anterior oblique positions
6         Double-contrast radiograph in semiupright and left anterior oblique positions
7         Filling method in upright sagittal projection
C. Image interpretation

Eleven radiographers and one radiologist participated in this study as film readers. All participating radiographers were male, aged 34–57 years (median, 42), and were certified as technologists in gastric cancer screening by the Japanese Society of Gastroenterological Cancer Screening. They did not receive training in film reading for this study. The radiographers had 12–35 years (median, 18) of experience in X-ray examination of the stomach. The participating radiologist (male, aged 57 years) was certified as a film reader in gastric cancer screening by the Japanese Society of Gastroenterological Cancer Screening. The radiologist had 30 years of work experience in X-ray examination of the stomach and had read films of over 10,000 cases per year. The radiographers and the radiologist interpreted the test set and scored the films on a five-point scale (1, negative; 2, probably benign; 3, indeterminate; 4, probably malignant; 5, malignant). A score of 2–5 was defined as positive, which required diagnostic workup for cancer.

D. Statistical analysis

A receiver operating characteristic (ROC) analysis was performed to evaluate the overall sensitivity and specificity of each scale, as a measure of interpretation by the radiographers or the radiologist. The area under the ROC curve (AUC) indicates the performance characteristics of a test and is the index of diagnostic performance in ROC analysis [8, 9]. The AUC was defined as the film reading performance of the radiographers or the radiologist for detecting cancer. The AUC values of the radiographers and the radiologist were compared. Values of p < 0.05 were considered to indicate a significant difference. The ROC analysis was performed according to the method of DeLong et al. [10], and was calculated and tested for significance using statistical software for Microsoft Excel (Analyse-it version 2.07; Analyse-it Software, Leeds, UK).

III. RESULTS

The sensitivity, specificity and AUC values of the 11 radiographers (A–K) and the radiologist are shown in Table 2. Five of the 11 radiographers' AUC values (0.85–0.79) were slightly lower than that of the radiologist (0.86); however, the difference was not significant (p > 0.05). The other six radiographers' AUC values (0.74–0.62) were lower than that of the radiologist (0.86), and this difference was significant (p < 0.05). The ROC curves for the radiologist and the radiographer with the highest performance to detect cancer are shown in
IFMBE Proceedings Vol. 22
The usefulness of film reading to detect cancer by untrained radiographer in X-ray examination of the stomach
Figure 1. The two curves crossed over because the radiographer with the highest performance in detecting cancer was partially more accurate than the radiologist. The radiographer with the highest film reading performance for detecting cancer (AUC 0.85) had almost the same performance as the radiologist (AUC 0.86), and no significant difference was found (p = 0.67). The ROC curves for the radiologist and the radiographer with the lowest performance in detecting cancer are shown in
Figure 2. The radiologist was consistently more accurate in both sensitivity and specificity than the radiographer. The AUC of the radiographer with the lowest ability to detect cancer (0.62) was much lower than that of the radiologist (0.86), and the difference was significant (p < 0.001). The ROC curves for the radiographers with the highest and lowest film reading performance for detecting cancer are shown in Figure 3. The two radiographers' AUC values are shown in Table 3. The radiographer with the highest
Table 2 Receiver operating characteristic analyses

Reader           Sensitivity    Specificity*   Area under the curve   p value†
Radiographer A   39/50 (78)     42/50 (84)     0.85                   0.673
B                39/50 (78)     42/50 (84)     0.83                   0.493
C                33/50 (66)     49/50 (98)     0.82                   0.401
D                35/50 (70)     42/50 (84)     0.80                   0.154
E                31/50 (62)     47/50 (94)     0.79                   0.127
F                25/50 (50)     49/50 (98)     0.74                   0.010
G                27/50 (54)     45/50 (90)     0.74                   0.017
H                44/50 (88)     31/50 (62)     0.72                   0.007
I                28/50 (66)     40/50 (80)     0.71                   0.001
J                23/50 (56)     45/50 (90)     0.69
az > 0:            αx = arctan(ax / az)
az < 0 ∧ ax ≥ 0:   αx = arctan(ax / az) + π
az < 0 ∧ ax < 0:   αx = arctan(ax / az) − π        (1)
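A minimal floating-point sketch of the quadrant correction in Eq. (1) follows; the firmware's fast integer look-up-table implementation is not reproduced here. The result is numerically equivalent to the two-argument arctangent.

```python
import math

def tilt_angle(ax, az):
    """Quadrant-corrected tilt angle alpha_x of Eq. (1), in (-pi, pi]."""
    if az > 0:
        return math.atan(ax / az)
    if az < 0 and ax >= 0:
        return math.atan(ax / az) + math.pi
    if az < 0:                                  # az < 0 and ax < 0
        return math.atan(ax / az) - math.pi
    return math.copysign(math.pi / 2, ax)       # az == 0: limit case
```

For all (ax, az) with az ≠ 0, tilt_angle(ax, az) agrees with math.atan2(ax, az), which is why the case analysis of Eq. (1) is needed when only a single-quadrant arctan table is available.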
Fig. 2 Calculating accelerometer offset corrections with the equation of a sphere from 4 points.
Although the equation above represents a simple mathematical task, for the 8-bit microcontroller used in our design it may be very time consuming using standard C libraries. Methods for fast integer calculation of trigonometric functions have to be used. Fast integer computation of the tilt angle was finally implemented using a method built around a 256-value-per-quadrant look-up table for the arctan calculation.

B. Accelerometer offset error correction

When sensing static acceleration forces, MEMS accelerometers often introduce a slight offset error in one or more axes. We eliminate this error by calculating these offsets at the time of initial setup of the device and including them in further computations as offset correction constants. The method used for offset calculation is based on the solution of the general equation of a sphere from 4 points on its surface. The principle is illustrated in Fig. 2. The coordinates of four points (P1 to P4) on the spherical surface are given by the acceleration force vector components measured stepwise in four different positions of the sensor. Solving the equation of the sphere, we get the coordinates of the sphere center C, which are in fact the offset correction values x0, y0, z0 for all three axes. The main advantage of this method over a classical approach based on computing average values from the minimum and maximum values in each axis is its ease of use in common practice. One does not need to search for limit values, but simply puts the whole tilt sensing device in 4 or more different positions with different tilt angles. The described procedure needs to be done only at the time of production or when the accelerometer is replaced.
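The sphere-from-4-points idea reduces to one small linear system: subtracting the sphere equation at P1 from those at P2 to P4 cancels the radius. The sketch below assumes exact (noise-free) measurements in four non-coplanar sensor positions.

```python
import numpy as np

def sphere_center(points):
    """Center of the sphere through 4 non-coplanar points, i.e. the
    accelerometer offset corrections (x0, y0, z0).

    Subtracting the sphere equation at P1 from those at P2..P4 removes the
    radius and leaves the linear system 2*(Pi - P1) . C = |Pi|^2 - |P1|^2.
    """
    p = np.asarray(points, dtype=float)            # shape (4, 3)
    A = 2.0 * (p[1:] - p[0])                       # three row equations
    rhs = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    return np.linalg.solve(A, rhs)                 # offsets x0, y0, z0
```

With more than four positions, the same system becomes overdetermined and a least-squares solve would average out measurement noise.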
III. RESULTS

A. Hardware design

A complete block diagram of the prototype device can be seen in Fig. 3. We are using an ATmega16 microcontroller from Atmel or, alternatively, an HC908GZ32 from Freescale for initial testing. These microcontrollers support in-system programming and are optimized for development in a high-level programming language. Both contain a 10-bit ADC for processing analog signals from accelerometers equipped with analog outputs. Currently we are experimenting with both analog and digital output accelerometers. The analog type used is the ADXL330 triple-axis MEMS accelerometer. The digital type used in the experiments is the LIS3LV02DL accelerometer with an I2C serial interface. To reduce noise, averaging of the input signals is implemented in the firmware. This low-pass filtering also reduces the number of possible false alarms caused by rapid movements or short-term deviations from the desired tilt angle. The achievable accuracy of angular measurements is approx. 1 degree, which is sufficient for clinical practice. An acoustic transducer is used as a warning indicator in case the patient moves the head so that its tilt is outside the preset desired range. This biofeedback application helps the patient maintain the optimal head position that was determined by the ophthalmologist at the time of the operation. The EEPROM memory integrated in the microcontroller is used for saving calibration data (maximum and minimum in every axis, allowed angles, etc.) and basic settings. An external FLASH memory is used for storing larger amounts of data obtained during regular head position measurements.
M. Cizek, J. Dlouhy, I. Vicha and J. Rozman
Fig. 4 Top side of the printed circuit board (approx. 3 × 5 cm) of a prototype device, showing the flash memory, RTC and RF module

C. Computer software
Fig. 3 A block diagram of the designed device

Accelerometer data is sampled every 5 seconds as the device regularly wakes up from its power-saving mode. Position data are stored into the external non-volatile memory every 5 minutes, or immediately in case of a deviation from the desired head position lasting longer than a defined amount of time. Each record in the memory contains a time stamp and position data. A 3 V lithium cell is used as a power supply. Thanks to the power-saving mode of the MCU, a single battery is sufficient for powering the device during the whole monitoring period. A photo of a prototype device is pictured in Fig. 4, where the communication module, real time clock (RTC) and external flash memory are visible. The microcontroller and accelerometer are placed on the other side of the board.
A computer software application is used for communicating with the device. It serves two main purposes: it sets the monitoring parameters and acquires measured data from the device. A screenshot of the main window can be seen in Fig. 5. The main window of the application is intended to serve as a quick setup interface.
B. Communication interface

Our system is based on the so-called thin-client philosophy. That means the tasks performed by the wearable monitoring device are reduced to a minimum. Complicated tasks like sensor calibration or initial setup for a new patient have to be done in collaboration with the server, a computer application with a graphical user interface. Therefore a communication interface is needed. Currently we use both cable and wireless interfaces. So far, serial cable, IrDA and ISM-band RF interfaces have been implemented.
Fig. 5 A screenshot of the computer software application used for setting up the device and data acquisition
Electronic Monitoring of Head Position after Vitrectomy
The user is able to monitor the actual measured angles of tilt. The current tilt angle of the sensor can be marked as the reference position, i.e. the optimal position that has to be maintained. After the maximum angular deviations from the optimum are entered, the configuration data can be sent to the monitoring device. The graphical representation of measured data can work in two basic modes. In the off-line mode, users can explore a set of measured values downloaded from the flash memory of a monitoring device. The real-time mode can be used for displaying a graph of currently received incoming data. The graphic view can be expanded to a more detailed full-screen window. Acquired sets of measured data can be exported in CSV file format for further processing and analysis using software such as MS Excel or Matlab.
The system we are developing for post-operative head tilt monitoring consists of two basic parts: a wearable intelligent tilt sensor capable of wireless data transfers, and a personal computer equipped with a communication interface and a special software application used for controlling the sensor and acquiring measured data. Prototypes of both the monitoring device and the computer software were successfully tested. Remote control and data acquisition from the monitoring device are possible through the RF interface, IrDA or a serial cable connection. The achievable accuracy of tilt angle measurements using the ADXL330 MEMS accelerometer with analog outputs is approx. 1 degree, which is sufficient for clinical practice. Future efforts will be focused on the overall mechanical design of the device. Recent research has been primarily focused on system functionality and electronic hardware.
D. Field of application

Our efforts are aimed at developing a wearable electronic device (intelligent sensor) for post-operative head tilt monitoring that would improve the overall quality of the recovery process after complicated ophthalmologic operations. Shortly after the operation, the patient is equipped with the monitoring device and the desired angles of tilt are set by the clinician. During the post-operative recovery period, the device helps the patient maintain the desired head position. The position data are stored regularly in the device's built-in flash memory. During the recovery period the patient regularly visits the clinic for examinations. At the time of these visits, the position data stored in the monitoring device can be downloaded to a personal computer equipped with the proper communication interface and our software. The clinician is then able to analyze the overall success rate of proper head position maintenance and compare it with the actual findings and treatment progress.
IV. CONCLUSIONS

Eye surgeons from clinics performing vitreoretinal surgery find proper head positioning crucial for a successful result of the operative treatment. Currently there are discussions on how many days after the operation the head needs to be positioned [1] [2] [3].

ACKNOWLEDGMENT

The work is supported by Czech Science Foundation project No. 102/08/1373.

REFERENCES

1. Cullen R. Macular hole surgery: helpful tips for preoperative planning and postoperative face-down positioning. J. Ophthal. Nursing Technol., 1998, 17, pp. 179-181.
2. Dahl A., Stöppler M. Retinal Detachment Causes, Symptoms, Signs, Treatment and Risks. MedicineNet.com. Online: . 2007
3. Jacobs P. M. Vitreous loss during cataract surgery: prevention and optimal management. Eye. Online: . Feb. 2008
4. Vitreous-retina-macula consultants of New York. Vitrectomy. Online: <http://www.vrmny.com/pe/vitrectomy.html>. New York, 2006

Corresponding author:
Author: Martin Cizek
Institute: Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology
Street: Kolejni 4
City: Brno, 612 00
Country: Czech Republic
Email: [email protected]
Investigation of Heart Rate Variability after Cardiopulmonary Resuscitation and Subsequent Hypothermia

J. Hopfe1, R. Pfeifer2, C. Ehrhardt2, M. Goernig2, H.R. Figulla2, A. Voss1

1 Department of Medical Engineering and Biotechnology, University of Applied Sciences Jena, Jena, Germany
2 Clinic of Internal Medicine I, University Clinic of Friedrich-Schiller-University Jena, Jena, Germany
Abstract — The time between cardiac arrest and cardiopulmonary resuscitation (CPR) determines the intensity of neuronal cell damage. Induction of mild hypothermia (cooling the body core to 33°C for approx. 24 hours) directly after cardiopulmonary resuscitation improves the patient's neurological outcome. Therefore, the objective of this study was to investigate whether there are different patterns of heart rate regulation in cardiopulmonary resuscitated patients during mild hypothermia, differentiating between survivors (rated by the Glasgow outcome score, GOS 4-5) and deceased patients (GOS 1). Long-term ECG was monitored for 30 minutes from patients in the intensive care unit directly after normothermia was achieved. 18 CPR patients (5 patients with GOS 4, age 59.8±19.1, and 4 patients with GOS 5, age 56.2±14.5, vs. 9 patients with GOS 1, age 66.7±7.7) with stable cardiovascular circulation and cardiac sinus rhythm were enrolled in this study. Autonomic regulation was assessed by applying heart rate variability (HRV) analysis. Six HRV parameters were calculated and revealed significant differences in the autonomous regulation of heart rate between the two groups during the hypothermia period directly after thermal recovery: sdNN (40.2±19.5 vs. 10.9±4.1, p=0.013), cvNN (0.06±0.03 vs. 0.02±0.007, p=0.01), sdaNN1 (21.2±9.3 vs. 7.5±2.7, p=0.008), renyi0.25 (4.2±0.6 vs. 2.8±0.4, p=0.01), renyi2 (3.5±0.6 vs. 1.9±0.4, p=0.008) and, finally, shannon (3.7±0.6 vs. 2.2±0.4, p=0.008). Applying HRV analysis in this study, we could demonstrate the occurrence of different patterns in the autonomic cardiovascular regulation of cardiopulmonary resuscitated patients after mild hypothermia treatment. Reduced HRV was recognised in the deceased patients, which suggests an association with the final neurological outcome. Therefore, the HRV measures might support the prognosis of beneficial effects of mild hypothermia and thus the prediction of the patient's outcome.
Keywords — Hypothermia, cardiopulmonary resuscitation, heart rate variability, ICU.
I. INTRODUCTION

Considerable improvements in the organisation and structure of in-hospital and out-of-hospital rescue services have contributed distinctly to a higher rate of successful cardiopulmonary resuscitation (CPR) after cardiac arrest. Promptly initiated resuscitation and cardiovascular stabilization essentially determine the patient's survival perspectives and the extent of cerebral cell damage [1]. Mild hypothermia (i.e. cooling the body core temperature to 33°C for approx. 24 hours) initiated directly after CPR aims to support the inhibition of degenerative biochemical reactions in cerebral cells and thus the improvement of the neurological outcome [2]. In western countries, cardiac arrest is estimated to be caused by underlying cardiac diseases in approx. 78% of cases. To this date, the beneficial effects of hypothermia in cardiopulmonary resuscitated patients remain uncertain. In this study, all resuscitated patients underwent hypothermia treatment and were grouped depending on the neurological outcome (rated by the Glasgow Outcome Score, GOS [3]). The objective of this study was to examine the alteration and thermal adaptation of the autonomous heart rate regulation during hypothermal exposure. For that purpose, heart rate variability parameters were determined from ECG tachograms retrieved in specific hypothermia periods [4]. In the statistical analysis, parameters were optimized to classify the patient groups. In future intensive care monitoring, the assessment of dynamic cardiovascular measures should enable individual risk stratification and prognosis of therapeutic benefits even during hypothermia.

II. MATERIALS AND METHODS

From the collective of resuscitated patients admitted to the clinical intensive care unit by the emergency care service, 18 patients with stable cardiovascular circulation and cardiac sinus rhythm underwent mild hypothermia treatment and analysis of cardiovascular dynamics. Patients were sedated and mechanically ventilated. The body core temperature was decreased to 33°C within approx. 3 hours and maintained for 24 hours. In case of critical bradycardia or cardiac dysrhythmia, the temperature was increased up to 34°C. After hypothermia, the temperature recovered passively to 36°C within approx. 5 hours.
From the continuous biosignal monitoring system, the ECG was extracted for offline signal processing. 30-minute data sequences were extracted from six characteristic phases of hypothermia: Δt1 at hypothermia initiation, Δt2 within the cooling phase, Δt3 within stable hypothermia, Δt4 within
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 1762–1764, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
autonomous temperature recovery, Δt5 immediately after achievement of normothermia and, finally, Δt6 approx. 3 hours after normothermia achievement. The principle of the 30-minute data period extraction according to the specific temperature circumstances is sketched in Figure 1.

Fig. 1 Sketch of the 30-minute data period extraction (Δt1 to Δt6) from the ECG monitoring; body core temperature [°C] between 33 and 36 versus time [h]

In signal processing, the ECG data were filtered by an adaptive mean filtering algorithm to eliminate artefacts such as ectopic beats. Subsequently, the time series of the interbeat intervals were extracted, and long-term as well as short-term heart rate variability parameters from the time and frequency domains were determined. The Mann-Whitney U-test was applied to identify significant parameters (p < 0.05). The classifying HRV parameters are summarized in Table 1 for survivors (GOS 4-5) versus deceased patients (GOS 1).

Table 1 Classifying HRV parameters: range and p-value
[Table entries not recoverable from the extracted text; the parameters include sdNN]
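Standard forms of the time-domain and entropy HRV measures named above (sdNN, cvNN, shannon, renyi) can be sketched as follows. The histogram bin width and the use of the plain NN-interval histogram are assumptions of this sketch, and parameters such as sdaNN1 from the authors' toolbox are not reproduced.

```python
import numpy as np

def hrv_params(nn_ms, bin_ms=8.0):
    """sdNN, cvNN and histogram entropies of a series of NN intervals (ms).

    renyi_q = log(sum p^q) / (1 - q); Shannon entropy is its q -> 1 limit.
    The 8 ms bin width is an assumption of this sketch.
    """
    nn = np.asarray(nn_ms, dtype=float)
    sdnn = nn.std(ddof=1)
    edges = np.arange(nn.min() - bin_ms / 2.0, nn.max() + bin_ms, bin_ms)
    counts, _ = np.histogram(nn, bins=edges)
    p = counts[counts > 0] / counts.sum()     # empirical bin probabilities

    def renyi(q):
        return np.log(np.sum(p ** q)) / (1.0 - q)

    return {"sdNN": sdnn,
            "cvNN": sdnn / nn.mean(),
            "shannon": -np.sum(p * np.log(p)),
            "renyi0.25": renyi(0.25),
            "renyi2": renyi(2.0)}
```

A narrower NN-interval distribution (reduced HRV, as seen in the deceased group) drives all of these measures toward smaller values.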
with variance inflation > 2 were omitted from the model. Heart size (end-diastolic volume and LV mass) and body size (BMI) were considered as potential effect modifiers, and thus included in the regression model. Regression coefficients reflect the relative effect magnitude of each determinant.

III. RESULTS
Table 3 summarizes the univariate relations of typical diastolic function indices in men and women. All indices are correlated with one or more arterial function parameters. The relationship between E' and DC was most pronounced, with r = 0.349 and r = 0.402 in men and women, respectively (all p < 0.001). Moreover, all indices are age-dependent. In particular, a strong inverse relationship was noted between E' and age (r = -0.422 and r = -0.539 in men and women, respectively).

B. LV relaxation rate versus arterial loading

A multiple linear regression analysis showed E' to be independently related to age, BMI, DC, LVM, PP and Ea in men, and to age, LVM, Ea, BMI and SBP in women. These models were able to explain 31.9 and 45.0% of the variance in E', respectively (both p < 0.001). In both men and women, age was the primary determinant of E' (highest coefficient), followed by LV mass. The remaining

[Regression table residue; recoverable entries: columns coeff, p, adj R²; Men: Constant 151.49 (8.52); Age (y) -0.91 (0.09), -0.288]
III. RESULTS

We can calculate the mean flow from Fig. 4 and find a mean oscillatory tidal volume of V = 17.5 l for inspiration and V = 22.0 l in expiration mode in the case of low pressure, and V = 67.3 l and V = 127.8 l in the case of high application pressure, respectively. The determined high-frequency flows for different blasting pressures are pictured in Fig. 4. If the blasting pressures are small, the applied high-frequency sinus-shaped pressure curve is mirrored in the measured data. The interpretation of the data from the setup in Fig. 4(a) yields Reynolds numbers of Re = 1303 and Re = 1985 for the small pressures and flows used. In the setup seen in Fig. 4(b), higher blasting pressures are used; the result is therefore even higher Reynolds numbers of Re = 2481 in the case of expiration and Re = 4838 in the case of inspiration. In Fig. 5 the resulting exponents α from the analysis of the flows from the setup of Fig. 2 are illustrated.
paraboloid, and the maximum velocity vmax has twice the value of the mean velocity. The pressure drop Δp is proportional to the flow, Δp ~ V̇, where the proportionality is realized via the flow resistance R: Δp = R · V̇.

Fig. 5 Measured exponential factor α depending on the flow V̇ (α between 0.75 and 2.25 for flows from 0 to 160 l/min)
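The two quantities driving this analysis can be sketched as follows: the Reynolds number of a flow through a circular tube, and the exponent α obtained by fitting Δp = R · V̇^α to measured pressure-flow pairs in log-log space. The air properties and tube diameter in any call are illustrative assumptions, not the Spirolog sensor's actual geometry.

```python
import numpy as np

def reynolds(flow_lpm, diameter_m, rho=1.2, mu=1.8e-5):
    """Reynolds number Re = rho * v * d / mu for a volume flow (l/min)
    through a circular tube; rho and mu default to room-temperature air."""
    q = flow_lpm / 1000.0 / 60.0                  # l/min -> m^3/s
    v = q / (np.pi * diameter_m ** 2 / 4.0)       # mean velocity in the tube
    return rho * v * diameter_m / mu

def fit_alpha(flow, dp):
    """Exponent alpha of dp = R * flow**alpha, by least squares in log-log space."""
    slope, _intercept = np.polyfit(np.log(np.asarray(flow, dtype=float)),
                                   np.log(np.asarray(dp, dtype=float)), 1)
    return slope
```

Purely laminar data give α near 1 and fully turbulent data give α near 2, which is how the measured α values in Fig. 5 indicate the degree of turbulence.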
M. Wurm, A. Drauschke, J. Mader, K. Stiglbrunner, M. Weingant, J. Bawitsch and P. Krösl
IV. DISCUSSION

The geometric dimensions of the Spirolog sensor used resemble those of the upper tracheal region. Thus the occurring flow characteristics can be assumed to be similar. In particular, it becomes clear that the turbulence originating from the constriction within the Spirolog sensor will also occur in the tracheal region. These turbulences affect the ventilation characteristics decisively. It could be demonstrated that, within an open system and in the case of a superimposed high-frequency jet, a higher flow and thus a higher tidal volume is found in the inspiration phase than in the expiration phase. Furthermore, Fig. 4 shows that at higher blasting pressures the flow no longer forms an almost sinus-shaped time-dependent profile. We take this behaviour as a consequence of an increased occurrence of turbulence in the flow. On this account, the degree of turbulence in the flow was studied within the realized geometry. The interpretation of the measured data is validated both by the determined high Reynolds numbers and by the identified exponent α. The examined cases of low blasting pressures suggest that the flow lies in the transition between laminar and turbulent. High blasting pressures, however, induce a highly turbulent flow. These strong turbulences moreover mean that no almost sinus-shaped time-dependent flow behaviour (as seen in Fig. 4(a)) exists, even though the jet pressure generated by the apparatus was varied sinusoidally. These observations lead to the flow characteristics shown in Fig. 4(b), which show a behaviour similar to compliance. As presented in chapter II, the value of the exponent α depends on the degree of turbulence of the flow. It could hence be expected that small blasting pressures, and thus small flow velocities, result in laminar flows. The higher the flow velocity, the more turbulent the observed flow. This behaviour could in principle be validated, as seen in Fig. 5. With rising flow, the exponential factor α increases within the defined range between 1 for small flows and 2 for large flows.

V. CONCLUSIONS

In the presented paper, a difference in oscillatory tidal volume between the inspiration and expiration phases using SHFJV® could be demonstrated. The origin of this difference lies in the turbulences occurring in the examined flows. The artificial ventilation works in an optimal way if the needed gas exchange can be realized using preferably small blasting pressures, as higher pressures could lead to complications and side effects for the patient. In order for the ventilation to be as protective as possible for the lung, the gas flows should be applied in a laminar way. If turbulences occur, the applied volume of gas is decreased because of the elevated flow resistance in the trachea. As was shown, such turbulences in the gas can take place at comparatively small flows when using SHFJV®.
ACKNOWLEDGMENT The authors used the TwinStream™ Multi Mode Respirator donated by Carl Reiner GmbH. This work is supported by the MA27 Projekt 04-06 "Stärkung des Kompetenzbereiches Beatmungstechnik mit Schwerpunkt Hochfrequenzbeatmung" (strengthening of the competence area ventilation technology with a focus on high-frequency ventilation) of the Fachhochschul-Förderung of the government of Vienna.
REFERENCES
1. Oczenski A, Singer M, Oczenski W, Andel H, Werba A (1997) Breathing and Mechanical Support: Physiology of Respiration and Mechanical Methods of Artificial Ventilation. Blackwell Publishing, ISBN 3894123877/9783894123871
2. Carl Reiner GmbH (2006) Operation Manual for TwinStream® Multimode Respirator for SHFJV®-respiration. Vienna
3. Chung CH, Won KO (1987) High frequency ventilation. Yonsei Medical Journal 28(3)
4. Schragl E, Donner A, Grasl MC, Zimpfer M, Aloy A (2000) Superimposed high frequency jet ventilation for laryngeal and tracheal surgery. Arch Otolaryngol Head Neck Surg 126:40-44
5. Bacher A, Pichler K, Aloy A (2000) Supraglottic combined frequency jet ventilation vs. subglottic monofrequent jet ventilation in patients undergoing microlaryngeal surgery. Anesth Analg 85:460-465
6. Geschke D (ed) (2000) Physikalisches Praktikum. Mit multimedialen Ergänzungen. (Physics laboratory course, with multimedia supplements.)
IFMBE Proceedings Vol. 22
Effect of Viscoelastic Constraints to Kinematic Parameters during Human Gait
T. Miyoshi1, N. Sasagawa1, S.-I. Yamamoto1, T. Komeda1 and K. Nakazawa2
1 Department of Mechanical Engineering, Shibaura Institute of Technology, Saitama, Japan
2 Research Institute, National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Japan
Abstract — The aim of this study was to investigate the effect of viscoelastic factors on human gait forms, limb segment angles, and endpoint trajectories during walking in water relative to their effect during walking on land. Basic motion analysis methods were applied for each walking condition, and the joint angular displacements are shown in a three-dimensional (3D) plot. The drag force had little effect on the human gait form and the angle between the planar plane of the stance and swing phases in the 3D plot. These results suggest that the joint angular displacement controller, which consisted of three inputs and the feedback signals of the joint torque, was adopted in the central nervous system. In addition, the foot contact information might include the transition of the joint angular displacement controller from the stance / swing phase to the swing / stance phase. Keywords — Foot trajectory, Drag force, walking in water, Joint angular displacements, Planar plane
I. INTRODUCTION
Human locomotion control can be regarded as a multi-jointed segmental control problem. Limb segment angle covariation might be related to global kinematic variables, as introduced in a series of studies using cats [1]. In addition, limb segment rotations in the sagittal plane covary in human treadmill walking; as a result, the changes in the three-dimensional (3D) trajectories of lower limb joint angular displacements converge onto a single plane [2]. This planar constraint of intersegmental coordination during human walking holds for forward versus backward walking [3] and for different levels of partial body weight support [4]. The question arises, then, of whether environmental factors, such as viscoelastic variables, might affect the limb endpoint trajectories during walking. Viscoelastic components might affect joint stiffness or require the generation of a propulsive force against fluid resistance, e.g., a drag force. The purpose of this study was to investigate the effect of viscoelastic factors on gait forms and to evaluate whether or not the patterns of the limb segment angles and the endpoint trajectories observed while walking on land are preserved during walking in water.
II. MATERIALS AND METHODS
Twenty healthy subjects, whose age, height, and body weight were, on average, 23.2 ± 2.3 years, 173.5 ± 2.6 cm, and 62.0 ± 7.2 kg, respectively, participated in this study. All subjects gave informed consent to the experimental procedures, which had been approved by the ethics committee of the National Rehabilitation Center for Persons with Disabilities. A 3D four-segment body model consisting of the trunk (trunk + pelvis), thigh, shank, and foot was defined using five markers at the following landmarks: the midpoint on the iliac wing between the anterior and the posterior superior iliac spine, the greater trochanter, the lateral femoral condyle, the lateral malleolus, and the fifth metatarso-phalangeal joint. All markers were placed on the right side of the leg and trunk. On land, normal walking and treadmill walking (1600PRO VEGAS, Fitcrew Inc.) were recorded at 100 Hz using the Optotrak system (model 3020, Northern Digital Inc.). In water, normal walking, treadmill walking (Aquagator model 1104, Ferno Japan Inc.), and upright-standing data were recorded with a two-camera video-based system (30 Hz). The water temperature was set to 34 °C, and the water depth was set to a level corresponding to about 20 % of the body weight, which was approximately at the axillary level. Motion analysis software (Dipp-Motion XD, DITECT Co., Ltd.) was used to reconstruct the marker positions into 3D coordinates. Subjects were asked to walk at a self-determined pace along the walkway during normal walking on land and in water, and these trials were repeated 10 times in each case. During the treadmill walking sessions, the gait speed was set at 2.0 km/h, and the data were recorded for one minute. A fourth-order zero-lag Butterworth filter was applied to reduce the noise in the video data (cut-off frequency: 3 Hz for water sessions, 10 Hz for land sessions).
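The zero-lag filtering step can be sketched as follows, applying a fourth-order Butterworth low-pass forward and backward; the synthetic signal and the SciPy usage are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# 4th-order Butterworth low-pass, applied forward and backward with
# filtfilt for zero phase lag; 3 Hz cut-off at the 30 Hz video rate.
fs, fc = 30.0, 3.0
b, a = butter(4, fc / (fs / 2))            # cut-off normalized to Nyquist

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)
smoothed = filtfilt(b, a, raw)             # gait component (0.5 Hz) passes
```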
Body segment kinematics and the hip, knee, and ankle joint angular displacement data were calculated in the sagittal plane throughout each gait cycle. Angular displacements were defined as positive when the hip and knee moved in the flexion direction and when the ankle joint moved in the plantarflexion direction, and these data were expressed in relation to the 100 % gait cycle under each walking condition.
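As stated in the abstract, the joint-angle trajectories are summarized by best-fit planes in the 3D angle space and compared via the angle between their normals. A minimal sketch of that computation with hypothetical point sets:

```python
import numpy as np

def plane_normal(points):
    """Normal of the best-fit plane through 3D points (one point per row)."""
    centered = points - points.mean(axis=0)
    # right-singular vector belonging to the smallest singular value
    return np.linalg.svd(centered)[2][-1]

def plane_angle_deg(n1, n2):
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# hypothetical "trajectories": one set in the z = 0 plane, one in x = 0
stance = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [2, 1, 0]])
swing = np.array([[0., 0, 0], [0, 1, 0], [0, 0, 1], [0, 2, 1]])
print(plane_angle_deg(plane_normal(stance), plane_normal(swing)))  # ~90
```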
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2061–2064, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
We constructed a 3D plot of the hip angular displacement (z-axis) as a function of the ankle and the knee angular displacements. The ensemble-averaged trajectory throughout one gait cycle, during the stance phase, and during the swing phase can each be expressed as a single plane in the 3D plot. The angle between the normal vectors of the respective planes was calculated under each condition. ANOVA was applied to determine significant differences between the angles obtained in water and on land; differences were accepted as statistically significant when p was below the chosen significance level.

The following pseudocode shows a DICS, here a pixel-wise thresholding:

    for i = 1 to height
      for j = 1 to width
        // begin of DICS
        if g_in(i,j) > g_thres
          g_out(i,j) = 1
        else
          g_out(i,j) = 0
        endif
        // end of DICS
      endfor
    endfor

The result remains the same for all image coordinates (i,j) even for randomly chosen sequences of i and j, and the DICSs can therefore be processed in a random order. Most image processing algorithms contain a lot of DICSs. They can be
processed in parallel on a hardware architecture which offers independent processors. Single instruction multiple data (SIMD) systems are an example of such an architecture [1]. Parallel DICS processing on a SIMD system can potentially reduce computing time and increase the so-called speedup, i.e. the ratio of program execution time on a single-processor system to that on a multi-processor system. As parallel programming assumes special knowledge about hardware as well as software implementation, a prediction of the expected speedup is helpful to avoid time- and cost-inefficient developments.

The total execution duration of a program can mainly be decomposed into:
1. a duration τ_s for processing a serial (data-dependent, non-parallelizable) instruction sequence,
2. a duration τ_p for processing a parallel (data-independent, parallelizable) instruction sequence,
3. a problem size n ∈ N which determines how often the parallel sequence is executed (e.g. the number of input data n), and
4. an overhead time T_o(n) which depends on the processor system and is generated by e.g. flow control, additional data transfer, etc.

Thus, the total execution time on a single-processor system is

    T_s,tot(n) = τ_s + n·τ_p + T_s,o(n).    (1)
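For illustration, the thresholding DICS shown earlier collapses into a single data-parallel operation when it is vectorized; a NumPy sketch with a hypothetical image:

```python
import numpy as np

# The pixel-wise thresholding DICS as one data-parallel (SIMD-style)
# operation over all image coordinates; image values are hypothetical.
g_in = np.array([[10, 200, 90],
                 [130, 40, 255]])
g_thres = 100
g_out = (g_in > g_thres).astype(np.uint8)
print(g_out)
```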
For the determination of the single-processor system execution time T_s,tot(n), the overhead time T_s,o(n) can be disregarded. For the multi-processor system execution time T_p,tot(n), however, major time overheads must be respected [2]:
• computational overhead duration T_p,o,comp: additional processing steps due to parallelization, e.g. preparation or initialization of parallel execution, load balancing, memory administration
• communication overhead duration T_p,o,comm: e.g. time for data transfer or message exchange between different processors
• synchronization overhead duration T_p,o,sync: e.g. idle times while waiting until other processors have finished their tasks.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2556–2559, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
A mathematical speedup prediction model for parallel vs. sequential programs
The total overhead time in a multi-processor system therefore equals

    T_p,o(n,p) = T_p,o,comp + T_p,o,comm + T_p,o,sync .
Similar to the total execution time on a single-processor system (1), a general assumption for the total execution time on a multi-processor system is:

    T_p,tot(n,p) = τ_s + ⌈n/p⌉·τ_p + T_p,o(n,p).    (2)

The factor s(n,p) = ⌈n/p⌉ indicates that the number of processors p technically limits the parallel execution of n data-independent instruction sequences; it shall therefore be denoted the technical parallelizability scaling factor. The overhead times imply that parallelizable algorithms profit from parallel processing only if T_p,tot(n,p) < T_s,tot(n,p), i.e. if

    T_p,o(n,p) < (n − ⌈n/p⌉)·τ_p + T_s,o(n,p)    (3)

or if the ratio of serial to parallel processing time, the so-called speedup S,

    S(n,p) = T_s,tot(n,p) / T_p,tot(n,p) = (τ_s + n·τ_p + T_s,o(n,p)) / (τ_s + ⌈n/p⌉·τ_p + T_p,o(n,p)),    (4)

is greater than 1.

The mathematical model for speedup prediction shall be developed for NVIDIA's Compute Unified Device Architecture (CUDA), a triple of NVIDIA's GeForce GPU hardware architecture, specific C programming language extensions and the compiler NVCC. Thus, CUDA is a platform to implement and execute SIMD code. As the analysis will show, the parallel processing time model above does not fit CUDA's execution model; accordingly, the total processing time of a parallel program shall be remodeled and its parameters identified. The model shall then be used to predict the processing duration and speedup of a parallel CUDA program relative to a serial C code. It shall be evaluated for a typical image processing task, the application of a correlation filter of adjustable size.
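Equations (1), (2) and (4) can be written down directly; a sketch with assumed timing parameters (τ_s, τ_p and the overheads are illustrative, not benchmark values):

```python
import math

def t_serial(n, tau_s, tau_p, t_so=0.0):
    # total single-processor execution time, eq. (1)
    return tau_s + n * tau_p + t_so

def t_parallel(n, p, tau_s, tau_p, t_po=0.0):
    # total multi-processor execution time, eq. (2);
    # ceil(n/p) is the technical parallelizability scaling factor s(n, p)
    return tau_s + math.ceil(n / p) * tau_p + t_po

def speedup(n, p, tau_s, tau_p, t_so=0.0, t_po=0.0):
    # speedup S(n, p), eq. (4)
    return t_serial(n, tau_s, tau_p, t_so) / t_parallel(n, p, tau_s, tau_p, t_po)

# 1000 DICSs on 8 processors with assumed timings
print(round(speedup(1000, 8, tau_s=1.0, tau_p=0.1, t_po=2.0), 2))  # 6.52
```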
II. MATERIAL AND METHODS
To establish a mathematical model, the time-consuming steps in the CUDA program flow have to be identified and investigated to determine their individual execution times depending on hardware and algorithm parameters:
1. a serial part with execution time τ_serial,1 for non-parallelizable command sequences,
2. a serial preparation overhead τ_prep for program steps which prepare data for parallel processing,
3. a transfer time τ_H2D for data transfer from the host computer to the CUDA processing device (host to device, H2D),
4. an initialization time τ_call for program initialization on the processing device,
5. a time τ_parallel for the total execution of all DICSs,
6. a transfer time τ_D2H for result-data transfer from device back to host (D2H),
7. a post-processing time τ_post for additional result processing (e.g. result merging), and
8. a second serial program part for non-parallelizable program steps after execution of the DICSs with the duration τ_serial,2.

The sum of these single execution times is the total program execution time on a CUDA device T_p,tot(n,p) (2). Whereas the times for data transfers, initialization, preparation and post-processing can be determined by parameters like problem size or transfer bandwidth, the identification of τ_parallel is more complicated. This is due to three reasons:
• The time for instruction execution can differ between a common CPU and CUDA processing units (PUs).
• CUDA GPUs distinguish between different memories (global, shared, local, register) which have different access times.
• CUDA GPUs consist of a number of multiprocessors (MPs) N_mp, each representing an independent SIMD architecture with N_PU = 8 PUs and one instruction unit. Therefore, the technical parallelizability scaling factor s(n,p) can take values which are multiples of N_PU. An instruction is processed for 4 data-independent threads, i.e. an MP always processes one instruction on N_th = N_PU × 4 = 32 threads [3].
The first two aspects are due to differences in the hardware properties in combination with algorithm-specific usage and compiler optimization. The third is related to CUDA's algorithm structure and the CUDA execution model.
H.M. Overhoff, S. Bußmann and D. Sandkühler
The execution of the DICSs for all data is coordinated by the Thread Execution Manager (TEM), which acts like an operating system and distributes hardware resources to the processing tasks. A processing task in CUDA is a DICS which is processed at a specific time with specific data; a combination of a DICS and specific data is called a thread. Every thread is individualized by its own ID, which is important for the main purpose of the TEM: to optimize throughput and reduce idle times, the TEM can replace threads which are waiting for data from memory by threads which are ready for processing [3, 4]. Fig. 1 shows the data throughput increase due to this switching strategy.
Fig. 1 Difference of test kernel throughputs (in Mbit/s) with enabled and disabled task switching, plotted over the number of blocks and the number of warps (× 32 threads)
Because instructions are executed on MPs, and an MP processes groups of N_th = 32 threads, these threads form an organizational unit for the TEM, called a warp. A number of N_warps warps is grouped into a block, which offers benefits like synchronization and shared memory access due to its complete execution on one MP. A group of N_bl blocks forms a grid. This grid structure (N_bl blocks, N_warps warps within every block) is called an execution configuration and has to be defined by the user [3]. As a consequence, the pure processing time for n DICSs in (2) must be replaced for CUDA:

    ⌈n/p⌉·τ_p = ⌈n / (N_th · min(N_mp, N_bl))⌉ · τ_p,cuda .    (5)
Thus, the total execution time on a CUDA device is composed as follows:

    T_p,tot = τ_serial,1 + τ_prep + τ_H2D + τ_call + ⌈n / (N_th · min(N_mp, N_bl))⌉ · τ_p,cuda + τ_D2H + τ_post + τ_serial,2 .
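The composed CUDA time model can be sketched as a small function; the phase durations in the example call are assumed values for illustration, not benchmark figures from this paper:

```python
import math

N_TH = 32  # threads per multiprocessor instruction (one warp: 8 PUs x 4)

def t_cuda_total(n, n_mp, n_bl, tau):
    """Total CUDA execution time; tau holds the eight phase durations."""
    parallel = math.ceil(n / (N_TH * min(n_mp, n_bl))) * tau["p_cuda"]
    return (tau["serial1"] + tau["prep"] + tau["h2d"] + tau["call"]
            + parallel + tau["d2h"] + tau["post"] + tau["serial2"])

tau = dict(serial1=0.01, prep=0.02, h2d=0.05, call=0.001,
           p_cuda=0.0005, d2h=0.05, post=0.01, serial2=0.01)
print(round(t_cuda_total(10**6, n_mp=16, n_bl=64, tau=tau), 3))  # 1.128
```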
Table 1 Test platforms

                                graphics board GB1    graphics board GB2
GPU                             GeForce 8500 GT       GeForce 8800 GTX
Number of multiprocessors N_mp  2                     16
Video Memory                    512 MB                869 MB
Memory Interface                128 bit               384 bit
Memory Type                     DDR2                  DDR3
RAMDAC                          2 × 400 MHz           2 × 400 MHz
Interface                       PCI-Express × 16      PCI-Express × 16
Core Clock                      450 MHz               575 MHz
Memory Clock                    800 MHz               1800 MHz
To predict execution time and speedup, the hardware must be investigated and benchmarked to determine the individual durations for a certain algorithm. Whereas this can be done easily in most cases, the influence of task switching and of the optimization of memory access operations by the CUDA compiler NVCC makes it nearly impossible to determine τ_p,cuda. Modeling the time consumption of these two strategies necessitates deep insight into the CUDA technology. A conservative estimate can be derived instead if both effects are eliminated: optimization of memory access can be avoided by addressing non-consecutive memory addresses, and block switching can be suppressed by configuring a large shared memory consumption. Two CUDA-capable graphics boards (GBs) have been investigated; Table 1 shows their technical specifications. The host PC for the GBs was based on an ASUS P5K-E mainboard equipped with a 3 GHz Intel® Core™2 Duo E6850 CPU and 4 × 1024 MB RAM. The front side bus frequency was 1333 MHz and the memory operated at 400 MHz. The subject of the test implementation for the model verification was a convolution filtering algorithm. A prediction of execution time and speedup was made twice: once with parameters like memory access operations derived from the serial C code, and once with details like the usage of fast constant memory taken from the CUDA implementation.
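The benchmark algorithm class can be sketched as a naive correlation filter in which every output pixel is one DICS; the data are hypothetical and the sketch is in Python rather than the C/CUDA used in the paper:

```python
import numpy as np

# Naive correlation filter: each output pixel is one DICS, so the
# two nested loops parallelize over (i, j).
def correlate2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)
print(correlate2d(img, np.ones((3, 3))))  # [[45. 54.] [81. 90.]]
```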
This replacement takes into account the hardware properties and the fourfold instruction execution.
III. RESULTS The measurement results of the execution times for the implementation of the convolution filtering algorithm in C and CUDA on both GBs as well as the predicted times including C algorithm parameters and more detailed CUDA parameters are presented in Table 2. The execution time on the GBs has been measured twice, once with activated task switching (TS) and once without. All times are listed for three different pattern sizes.
Table 2 Measured and predicted execution times in seconds

                          pattern size
                          3×3       6×6       9×9
CPU                       0.2668    0.5299    0.9834
GB1                       0.1793    0.4078    0.7325
GB1 (without TS)          0.2397    0.6639    1.2282
GB1 prediction (C)        0.6544    2.4174    5.3255
GB1 prediction (CUDA)     0.6395    1.0989    2.1360
GB2                       0.0617    0.1294    0.1925
GB2 (without TS)          0.0785    0.1309    0.2086
GB2 prediction (C)        0.0776    0.2360    0.4982
GB2 prediction (CUDA)     0.0762    0.1166    0.2093

For most pattern sizes, the execution time with task switching is lower than without. Also, the predicted times based on CUDA implementation-specific parameters are always shorter than the ones predicted only with knowledge about the C algorithm. For this reason, the times predicted with CUDA parameters are closer to the smallest measured times for each pattern size. Nevertheless, the predicted times for GB1 are much higher than the actually measured times. For GB2, the times predicted with CUDA parameters are very close to the ones measured without task switching. The respective speedup S computed with these times is presented in Table 3.

Table 3 Measured and predicted speedup for pattern matching

                                  pattern size
                                  3×3       6×6       9×9
GB1 prediction (C parameters)     0.4077    0.219     0.1864
GB1 prediction (CUDA parameters)  0.4172    0.4822    0.4604
GB1 measured maximum              1.488     1.299     0.8006
GB2 prediction (C parameters)     3.438     2.245     1.9739
GB2 prediction (CUDA parameters)  3.501     4.544     4.6985
GB2 measured maximum              4.336     4.096     5.108

The speedup prediction for GB1 is much lower than the effectively achieved speedup. The prediction for GB2 is much closer to the actually measured value. Except for the pattern size of 9×9 in combination with GB1, all speedups are greater than one, i.e. the CUDA implementation is faster than the one in C.

IV. DISCUSSION
As the results show, the speedup is never precisely predictable. In most cases, the predicted speedup is lower than the actually achieved speedup. The main problem for an exact prediction is the determination of the time for the DICSs, τ_p,cuda. The influence of the TEM and the compiler optimizations on τ_p,cuda is very strong. Partially, this is reflected by the agreement between the execution time predicted with CUDA parameters and the time measured without task switching. But the magnitude in which this time is affected depends on the individual algorithm structure. Because these strategies are essential for the performance gains achievable with CUDA, the current model can only be used for a kind of worst-case prediction. Under consideration of the effects of task switching and compiler optimization, and with some experience in CUDA programming, this is still sufficient to get an impression of whether an implementation would be efficient. Because of the necessary replacement of τ_p by the much more complex τ_p,cuda for CUDA, it follows that the common model is not sufficient for a speedup prediction. Apart from the predictions, the achieved speedups, in particular for GB2, show that CUDA is capable of speeding up even simple algorithms on standard graphics boards.

REFERENCES
1. Flynn MJ, Rudd KW (1996) Parallel architectures. ACM Comput Surv 28:67-70
2. Juhas Z (1998) An analytical method for predicting the performance of parallel image processing operations. J Supercomput 12:157-174
3. NVIDIA Corporation (2007) CUDA - Compute Unified Device Architecture - Programming Guide 1.1, at www.nvidia.com
4. NVIDIA Corporation (2006) NVIDIA GeForce 8800 GPU Architecture Overview, at www.nvidia.com

Author: Heinrich Martin Overhoff
Institute: Medical Engineering Laboratory, University of Applied Sciences Gelsenkirchen
Street: Neidenburger Strasse 43
City: 45877 Gelsenkirchen
Country: Germany
Email: [email protected]
Model-Based Method of Non-Invasive Reconstruction of Ectopic Focus Locations in the Left Ventricle
D. Farina1, Y. Jiang1, O. Dössel1, C. Kaltwasser2 and W. R. Bauer3
1 Institute of Biomedical Engineering, University of Karlsruhe (TH), Karlsruhe, Germany
2 University Hospital, Friedrich-Alexander University Erlangen-Nuremberg, Erlangen, Germany
3 University Hospital, Julius-Maximilian University Würzburg, Würzburg, Germany
Abstract— The knowledge of ventricular ectopic focus locations is important for the effective treatment of ventricular arrhythmias. As the introduction of a catheter into the left ventricle is a complicated task, a non-invasive approach to reconstruct the locations of ectopic foci has to be developed. In this paper the applicability of a model-based method to perform this reconstruction is investigated. The method employs the optimization of a cardiac model in order to minimize the difference between the measured and simulated ECGs. The model computes a sequence of transmembrane voltage distributions for a single premature ventricular heart beat (PVB). The optimized parameters include the location of the ectopic focus as well as excitation conduction velocities in various cardiac tissues. In order to compute the simulated ECG, the forward problem of electrocardiography is solved, employing a realistic anatomical model of the patient's thorax. The proposed method results in a personalized model of the patient's heart, which might be important for cardiologists, e.g. to evaluate the stability of the heart function. It has been applied to two PVBs detected in real patient data. The initial estimation of the ectopic beat location has been performed by solving the inverse problem of electrocardiography. Afterwards it was optimized using the model-based approach. The resulting simulated ECGs are close to the measured ones, which indicates the correct estimation of the pacing site. Keywords— Cardiac modeling, inverse problems, model-based methods.
I. INTRODUCTION
Ventricular arrhythmias are among the main causes of mortality. A common factor significantly increasing the probability of life-threatening arrhythmia is the combination of a cardiac disease (ventricular ischemia or infarction) with frequent premature ventricular beats (PVB). Thus, by ablation of the ectopic center, the stability of the heart function can be considerably increased. For the planning of such an intervention, the preliminary localization of the ectopic focus is important. In the case of atrial arrhythmias, catheter measurements are employed. But the introduction of a catheter into the ventricles can be quite complicated. Therefore several non-invasive approaches have recently been developed which allow the PVB focus location to be estimated from body surface potential measurements. First, the inverse problem of electrocardiography can be solved in order to reconstruct the activation time distribution within the ventricles [1, 2]. This approach works well under ideal conditions. However, if the patient suffers from various cardiac diseases, the number of system parameters drastically increases, which makes the solution time-consuming and unstable. Another approach consists of estimating the PVB focus using the critical point theory [3]. Investigations performed by our work group show that the results of such estimations are very vulnerable to ECG measurement noise [4]. In the current work a model-based approach is considered. A personal anatomical model of the patient's thorax has been previously built from magnetic resonance images. An electrophysiological model of the patient's heart delivers a first estimate of the distributions of transmembrane voltage during a single PVB. The forward problem of electrocardiography is solved, the simulated ECG is recorded and compared with the measured one, and the parameters of the cardiac model are optimized until the simulated and measured ECGs are similar. This approach has been tested with simulated reference ECGs by several groups [5, 6] with quite convincing results. In this work the method is applied to real patient data.
II. METHODS
A. Data Acquisition
The model-based method is tested on patient data obtained from the University Hospital of Würzburg, Germany. A male patient, age 61, with one posterolateral and one posterior infarction is considered. The 64-channel ECG has been recorded using the BioSemi ActiveTwo measurement system. The electrode locations have been acquired using the Polhemus FASTRAK positioning system. A band-pass filter in the frequency domain is applied to the ECG signals to suppress the measurement noise.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2560–2563, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Fig. 1: Left: anatomical model of the patient's thorax. Body surface, lungs, liver, heart, kidneys can be seen. Right: heart model containing both ventricles and the excitation conduction system. LV and RV mark the left and right ventricles, respectively.

In order to build an anatomical model of the patient, two MRI data sets have been created using a Siemens Magnetom Vision 1.5 T device. The first of these data sets is a whole-thorax scan with a resolution of 4 × 4 × 4 mm³. Its segmentation has resulted in the anatomical model of the patient's thorax defined on a tetrahedral mesh (figure 1, left). This model contains 80 086 nodes and 489 833 tetrahedra. The distance between nodes varies between 2 mm within the ventricles and up to 20 mm at the body surface. The second data set contains a short-axis image of the ventricles in the diastolic phase with a resolution of 2.27 × 2.27 × 4 mm³. Based on this image, a cardiac model has been created on a regular cubic voxel grid with a resolution of 1 × 1 × 1 mm³ (figure 1, right). The ventricular myocardium is anisotropic; the fiber orientation has been generated using the rules described in [7].

B. Electrophysiological Cardiac Model
The electrophysiological cardiac model employed in this study is based on the cellular automaton principle. If some voxel of the model gets excited, it triggers the activation of the neighboring voxels. The time delay between these events is defined by the predefined excitation conduction velocity (ECV) and the angle to the direction of fiber orientation. After the excitation, the transmembrane voltage (TMV) in each voxel changes according to the corresponding action potential curve. Its shape has been previously computed from the ten Tusscher model of the human ventricular cell [8]. The transmural heterogeneity of ventricular cells has been taken into account. As the patient suffers from infarctions, an area with a slower excitation conduction and a smaller depolarization amplitude has been created within the left ventricle. In the center of this area the ventricular tissue is not excitable; at the periphery a partial excitability is assumed. An external stimulus emulates the ectopic focus. Its location within the left ventricular myocardium is defined in a spherical coordinate system relative to the center of gravity and the main axis of the left ventricle.

C. Forward Problem of Electrocardiography
The TMV distributions generated by the cardiac model are interpolated onto the tetrahedral mesh of the volume conductor. The bidomain model [9, 10] is used to compute the corresponding potential distributions within the volume conductor. The two distributions are connected by the following equation:

    ∇ · ((σ_e + σ_i)∇ϕ_e) = −∇ · (σ_i ∇V_m),    (1)

where ϕ_e is the distribution of the extracellular potential within the thorax, V_m represents the transmembrane voltage within the ventricles, and σ_e and σ_i denote the extra- and intracellular conductivity tensors, respectively. The conductivity values have been taken from [11], whereas the intracellular conductivity σ_i is assumed to be zero everywhere except inside the cardiac myocardium. The extracellular potential ϕ_e is subject to a Dirichlet boundary condition at the point where the reference ECG electrode is located, as well as Neumann boundary conditions on the rest of the body surface [12].

D. Optimization
The simulated potentials are recorded at the locations of the ECG electrodes and compared to the measured signals. Both multichannel ECGs, measured and simulated, are first normalized by their absolute amplitudes. The root-mean-square of the signal difference in each channel is computed, and its mean value over all channels is used as the criterion for the cardiac model optimization. The following parameters of the cardiac model are optimized:
• location of the ectopic focus (3 parameters);
• excitation conduction velocity (ECV) in different cardiac tissues (2 or 4 parameters).
One of the two ectopic beats considered in this work has taken place immediately after the previous repolarization of the heart. The Purkinje fibers are therefore assumed to have still been in the refractory state, possessing an ECV of zero m/s. This has reduced the number of ECV parameters. The downhill simplex optimization method has been utilized in this work; the optimization has been restarted after every 100 iterations. The initial estimation of the ectopic focus location is made by the solution of the inverse problem of electrocardiography.
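The optimization loop described above can be sketched as follows; `simulate_ecg` is a hypothetical stand-in for the cellular-automaton and forward models, all numbers are illustrative, and SciPy's Nelder-Mead is used here as an implementation of the downhill simplex method:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the cellular automaton + forward problem: maps the
# optimized parameters to a simulated 64-channel "ECG" (hypothetical).
def simulate_ecg(params):
    target = np.array([1.0, -2.0, 0.5])   # pretend true focus parameters
    return np.outer(np.ones(64), params - target)

measured = np.zeros((64, 3))              # normalized measured "ECG"

def cost(params):
    diff = simulate_ecg(params) - measured
    return np.sqrt((diff ** 2).mean(axis=1)).mean()  # mean per-channel RMS

res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print(np.round(res.x, 2))  # recovers the assumed parameters [1, -2, 0.5]
```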
Fig. 2: Multichannel ECGs (potential in mV over time in s) recorded for the first (left) and second (right) premature ventricular beats (PVB). In the first case the end of the previous T-wave can be seen.

The inverse problem is formulated in terms of activation times [1]; the following equation is solved:

    A · τ = −(1/ΔV_m) ∫_{t_b}^{t_e} ϕ_ECG dt,    (2)
where A is the transfer matrix connecting the TMV distributions and the body surface potential maps (BSPMs), computed as described in [12]; τ is the vector of activation times in the cardiac nodes; ΔV_m = 80 mV is the constant amplitude of the activation function; t_b and t_e represent the time instants of the beginning and the end of the QRS complex, respectively; and ϕ_ECG is the vector of signals in the BSPM electrodes changing with time. The problem described by the matrix equation (2) is ill-posed, which means that small ECG measurement noise can lead to arbitrarily large errors in the solution τ. Second-order Tikhonov regularization has been used to stabilize the solution [13]. The L-curve criterion has been utilized to determine the optimal regularization parameter [14].
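Tikhonov regularization replaces the unstable least-squares solution by a penalized one; a zeroth-order sketch with hypothetical matrices (the paper uses the second-order variant, where the identity below becomes a discrete second-derivative operator):

```python
import numpy as np

# Tikhonov-regularized solution of an ill-posed system A @ tau = b:
# minimize ||A tau - b||^2 + lam^2 * ||tau||^2 (zeroth-order penalty).
def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])   # nearly singular "transfer matrix"
b = np.array([2.0, 2.0001])
print(tikhonov(A, b, lam=1e-3))  # close to the true solution [1, 1]
```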
III. RESULTS

During the recording of the 64-channel ECG for the considered patient, two PVBs have been detected. In both cases the part of the ECG corresponding to the depolarization of the ventricles has been cut out, and the signal in each channel has been integrated over time. The resulting vector of 64 elements has been divided by −80 mV, the negative amplitude of the ventricular action potential. Afterwards, equation (2) has been solved. The original 64-channel ECGs as well as the resulting estimations of the activation time distributions are shown in figures 2 and 3, respectively.

From the solutions of the inverse problem, the initial estimations of the ectopic focus location have been made for each PVB. These locations have been transferred to the cardiac model. For each case an area of posterior infarction has been created. Afterwards, the optimization of the PVB focus location and of the ECV in the various cardiac tissues has been started.

In figures 4 and 5 the measured and simulated standard leads of the ECG are compared. Although the amplitudes of the signals may differ due to errors in the conductivity estimations, the overall polarity and form of the QRS-complexes are quite similar. Figure 6 demonstrates the isochrones resulting from the optimization. In case 1 the optimization process has ended up with isochrones quite different from the initial estimation (see figure 3). This difference can be explained as follows: equation (2) assumes that the amplitude of excitation ΔVm is the same all over the heart, which is not the case here because the patient suffers from infarction. Still, the initial estimation using the same approach delivers good results for case 2.

Fig. 3: Initial estimations of activation time distributions for the first (top) and second (bottom) PVBs computed from the measured ECGs by solving equation (2). Markers denote the estimated locations of the ectopic foci.
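The criterion and downhill-simplex loop of section II.D can be sketched as follows; `simulate(params)` stands in for the full forward model, and all names are illustrative, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def ecg_mismatch(params, simulate, measured):
    """Mean per-channel RMS difference between amplitude-normalized
    multichannel ECGs (the optimization criterion of section II.D)."""
    simulated = simulate(params)                  # shape: (channels, samples)
    sim = simulated / np.abs(simulated).max()     # normalize by absolute amplitude
    mea = measured / np.abs(measured).max()
    rms_per_channel = np.sqrt(np.mean((sim - mea) ** 2, axis=1))
    return rms_per_channel.mean()

def optimize_focus(simulate, measured, x0, restarts=3, iters_per_run=100):
    """Downhill simplex (Nelder-Mead) restarted after every block of
    iterations, as described in the text."""
    x = np.asarray(x0, dtype=float)
    for _ in range(restarts):
        res = minimize(ecg_mismatch, x, args=(simulate, measured),
                       method="Nelder-Mead",
                       options={"maxiter": iters_per_run})
        x = res.x
    return x
```

With a cheap surrogate forward model the loop behaves as expected: the mismatch at the returned parameter vector is lower than at the starting estimate.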
IV. CONCLUSION

The exact locations of the ectopic foci have not been measured within the scope of the current project. Still, the similarity of the measured and simulated ECGs in figures 4 and 5 indicates that the model is close to reality. The authors are aware that modeling errors (such as segmentation errors, errors in the estimated conductivity values, discretization errors, etc.) may affect the quality of the reconstruction. Nevertheless, the proposed approach results in a 4D electrophysiological model of the patient's heart, enabling a physician to develop an effective way to treat cardiac diseases (e.g. in [15]) or to estimate the stability of the heart function.
Model-Based Method of Non-Invasive Reconstruction of Ectopic Focus Locations in the Left Ventricle

Fig. 4: Einthoven leads I, II and III as well as Wilson leads V1-V4 and V6 for case 1: measured (left) vs. simulated (right).

Fig. 5: Einthoven leads I, II and III as well as Wilson leads V1-V4 and V6 for case 2: measured (left) vs. simulated (right).

Fig. 6: Isochrones reconstructed using the model-based approach: case 1 (left), case 2 (right).

REFERENCES

1. Huiskamp G., van Oosterom A. (1988) The Depolarization Sequence of the Human Heart Surface Computed from Measured Body Surface Potentials. IEEE Trans. Biomed. Eng. 35:1047-1058.
2. Fischer G., Pfeifer B., Seger M. (2005) Computationally Efficient Noninvasive Cardiac Activation Time Imaging. Methods Inf. Med. 44:674-686.
3. Pullan A. J., Cheng L. K., Nash M. P., Bradley C. P., Paterson D. J. (2001) Noninvasive Electrical Imaging of the Heart: Theory and Model Development. Ann. Biomed. Eng. 29:817-836.
4. Reimund V., Farina D., Jiang Y., Dössel O. (2008) Reconstruction of Ectopic Foci using the Critical Point Theory: Simulation Study. Proc. EMBEC.
5. He B., Li G., Zhang X. (2003) Noninvasive imaging of cardiac transmembrane potentials within three-dimensional myocardium by means of a realistic geometry anisotropic heart model. IEEE Trans. Biomed. Eng. 50:1190-1202.
6. Farina D., Dössel O. (2008) Non-Invasive Model-Based Localization of Ventricular Ectopic Centers from Multichannel ECG. Proc. OIPE.
7. Streeter D. D. (1979) Gross morphology and fiber geometry of the heart. In: Handbook of Physiology: The Cardiovascular System, Bethesda B., ed., 1:61-112. American Physiology Society.
8. ten Tusscher K. H. W. J., Noble D., Noble P. J., Panfilov A. V. (2004) A Model for Human Ventricular Tissue. Am. J. Physiol. 286:H1573-H1589.
9. Geselowitz D. B. (1989) On the Theory of the Electrocardiogram. Proc. of the IEEE 77:857-876.
10. Geselowitz D. B. (1992) Description of cardiac sources in anisotropic cardiac muscle. Application of bidomain model. J. Electrocardiol. 25:65-67.
11. Gabriel S., Lau R. W., Gabriel C. (1996) The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz. Phys. Med. Biol. 41:2251-2269.
12. Skipa O. (2004) Linear inverse problem of electrocardiography: epicardial potentials and transmembrane voltages. Helmesverlag Karlsruhe.
13. Hansen P. C. (1998) Rank-deficient and discrete ill-posed problems: Numerical aspects of linear inversion. SIAM.
14. Hansen P. C. (2001) The L-curve and its use in the numerical treatment of inverse problems. In: Computational Inverse Problems in Electrocardiography, ch. 4, pp. 119-142. WIT Press, Advances in Computational Bioengineering.
15. Reumann M., Farina D., Miri R., Lurz S., Osswald B., Dössel O. (2007) Computer model for the optimization of AV and VV delay in cardiac resynchronization therapy. Medical & Biological Engineering & Computing 45:845-854.

• Author: Dr. Dmytro Farina
• Institute: University of Karlsruhe (TH)
• Street: Kaiserstraße 12
• City: Karlsruhe
• Country: Germany
• Email: [email protected]
Engineering Support in Surgical Strategy for Ventriculoplasty

Y. Shiraishi1, T. Yambe1, Y. Saijo2, S. Masuda3, G. Takahashi3, K. Tabayashi3, T. Fujimoto4 and M. Umezu5

1 Tohoku University, Institute of Development, Aging and Cancer, Sendai, Japan
2 Tohoku University, Graduate School of Biomedical Engineering, Sendai, Japan
3 Tohoku University, Department of Cardiovascular Surgery, Sendai, Japan
4 Shibaura Institute of Technology, Tokyo, Japan
5 Waseda University, Tokyo, Japan
Abstract — Endoventricular patch plasty, known as the Dor procedure, is performed as a heart reconstruction for the surgical treatment of patients with severe heart failure associated with a large anteroapical myocardial infarction. The authors have been establishing a new engineering method for the individual simulation of the operative procedure, in order to determine the optimal left ventricular size and shape and to estimate the volumetric reduction after surgical ventricular restoration in each patient. In this study, three individual ventricular shapes were fabricated from numerically resampled data obtained from diagnostic magnetic resonance imaging in each subject. Prior to the fabrication of the models, the epi- and endocardial envelope curves were outlined without papillary muscles or tendinous cords in the end-diastolic and end-systolic phases, for compatible reference in echocardiographic, computed tomography or magnetic resonance imaging investigations. Mechanical silicone rubber models were then made by female moulding for the discussion of each subject among surgeons. In this paper, we examine a methodology for the simulation of ventriculoplasty using a silicone rubber model and evaluate its capability for the quantitative expression of ejection fraction.
Keywords — Ventriculoplasty, simulation, silicone rubber model, magnetic resonance imaging, ejection fraction
I. INTRODUCTION

Ventricular aneurysm is one of the causes of severe heart failure. Surgical procedures such as the Dor operation are applied to patients with this disease for the reconstruction of the ventricular shape as well as of cardiac function [1-2]. In general, these surgical procedures and the way the hearts are reconstructed are clinically decided from the diagnostic information provided by 2D/3D echocardiography, computed tomography (CT), or magnetic resonance imaging (MRI). However, as these imaging technologies provide only virtual pictures, it is difficult to estimate the postoperative function as well as the structure [3].
Fig. 1 Sagittal plane view of a patient's heart with ventricular aneurysm taken by magnetic resonance imaging; panel A (top) shows the preoperative heart, and the hatched area in panel B (bottom) indicates the region to be patched surgically.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2564–2567, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Moreover, ventriculoplasty implies not only volumetric reduction at the lesion of the aneurysm but also reconstruction of the internal structure, such as the papillary muscle positions against the mitral valve. It is therefore anticipated that an excessive reconstitution of the ventricular configuration may cause degradation of cardiac function as a postoperative complication, and that the effective volumetric reduction may not be achieved due to patching insufficiency. The purpose of this study was to reproduce and provide a tangible diseased ventricular model which was identical to each patient's heart and could simulate physical images of those surgical procedures quantitatively. In this study, we established a new rapid prototyping method for the fabrication of an elastic diseased ventricular model and examined its reproducibility with respect to the ejection fraction.
Fig. 2 Cross-sectional view (diastole and systole) for the preoperative diagnostic evaluation of the patient's ventricular shape at the mitral cord-papillary muscle position; the inside and outside lines were defined by surgeons for the evaluation of ejection fraction.
II. METHODS

A. Ventricular aneurysm and ventriculoplasty

Figure 1(A) shows the sagittal chest view of a patient with a severe aneurysm. The wall thickness of the left ventricle at the lesion was less than 6 mm, and akinetic or dyskinetic properties could be seen in the MRI investigation. The ventriculoplasty was then designed to make a patch covering the extended epicardial portion, as shown in Figure 1(B). The ejection fraction of the subject was 22.6% by the Teichholz method from the measurement of diameters in the MRI images (Figure 2) [4]. The morphological characteristics of the lesion were diagnosed by 3D echocardiography (Philips, Sonos, QLAB), as shown in Figure 3. The so-called modified Dor procedure for volume reduction was to be applied to the lesion using an ePTFE patch.
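The Teichholz estimate mentioned above follows directly from its standard formula V = 7.0·D³/(2.4 + D), with D the internal LV diameter in cm and V in mL; the diameters used in the example below are hypothetical, not the patient's measurements:

```python
def teichholz_volume(d_cm):
    """LV volume (mL) from internal diameter D (cm): V = 7.0*D^3/(2.4+D)."""
    return 7.0 * d_cm ** 3 / (2.4 + d_cm)

def ejection_fraction(edd_cm, esd_cm):
    """Ejection fraction from end-diastolic and end-systolic diameters,
    both converted to volumes by the Teichholz formula."""
    edv = teichholz_volume(edd_cm)
    esv = teichholz_volume(esd_cm)
    return (edv - esv) / edv

# Hypothetical diameters for illustration: a dilated ventricle with little
# systolic diameter change yields a severely reduced ejection fraction.
ef = ejection_fraction(7.0, 6.4)
```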
Fig. 3 Abnormal left ventricular cavity reconstructed by 3-D measurement of echocardiography in the subject shown in Fig. 1 and 2.
B. Measurement of ventricular dimensions

Based on the measurement of ventricular configuration and motion, we extracted two phases of the contraction: the end-diastolic and end-systolic shapes. The MRI data were obtained every 7-8 mm in short-axis view, and around twelve layers were extracted for the reconstruction. Inside and outside edges were determined and these envelopes were outlined as shown in Figure 4 (left). Each shape of the lines was decided in discussions with surgeons, and the bulging portion at the papillary muscles and tendinous cords was excluded to keep the surgical investigation consistent with the images seen by the operator.

Fig. 4 An example of a short-axis view of the patient's heart with the lines of the internal and external edges of the ventricular wall for the 3D tangible modeling (left); the ventricular walls were outlined as vector data, and the data were converted to smoothed DXF format. The distance between layers in the patient's MRI data was 7.69 mm. A schematic illustration of the patient's heart configuration (end-diastolic phase) reconstructed in the 3D CAD software (right).
C. Numerical and mechanical reconstruction of ventricular shape

All the configuration data were imported into the 3D NURBS modelling software (McNeel North America, Rhinoceros) for the numerical reconstruction. Each layer of data was
repositioned at the appropriate distance, as shown in Figure 4 (right), and smoothed surfaces were then formed approximately. At every stage of the calculation, we compared the result with the original MR images in order to verify the fusion of the configurations in the process. The recalculated lumens were then obtained as shown in Figure 6(a), and the data were transferred to a CAM system (Mimaki & Toki Corp, NC-5 Machining Star) for the cutting process. Each layer was cut out from a plastic plate of 2-5 mm in thickness, and the apex portion was carved out with a 1 mm thickness contour. The plates were integrated and laminated accurately, and the male moulds of the end-diastolic and end-systolic forms were constructed (Figures 6 and 7). An elastic tangible ventricular model of each patient could then be cast in silicone rubber (Shin-etsu, KE-1300T) using the female moulds, and the elasticity of each model was adjusted according to the surgeons' request.
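The lumen volume implied by such a layered reconstruction can be approximated by summing contour areas times the inter-slice spacing; this is a rough sketch (not the CAD pipeline itself), using the shoelace formula for each planar contour:

```python
import numpy as np

def polygon_area(xy):
    """Shoelace formula for the area of a closed planar contour,
    given as an (n, 2) array of vertices."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def stacked_volume(contours, spacing_mm):
    """Volume (mm^3) of a lumen reconstructed from parallel MRI slices:
    area of each slice contour times the inter-slice distance
    (7.69 mm for the patient data described above)."""
    return spacing_mm * sum(polygon_area(np.asarray(c)) for c in contours)
```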
III. RESULTS AND DISCUSSION

A. Tangible diseased heart

Figure 8 shows examples of the patient's heart with ventricular aneurysm. The lesion of the silicone models was investigated, and it could represent features similar to the information given by echocardiography, CT or MR imaging. Furthermore, the surgeons could examine their techniques several times with scissors and needles on identical elastic models cast from the same moulds. The total processing time from the data calculation to fabrication was around 72 hours. The model was also useful for the pathophysiological and surgical explanation to patients using their own tangible disease.

Fig. 5 An example of the surfaces formed; these data were calculated by the interpolation of the measured data obtained in MR images.

Fig. 6 Schematic illustrations of CAD/CAM data for cutting: (a) end-diastolic lumen (left) and contours (right) for cutting layers; (b) contour shapes and the male moulds for the end-diastolic and end-systolic lumen shapes.

Fig. 7 Female mould of the patient's heart; the internal and external profiles consisted of integrated layered plastic plates, and the silicone elastic models were able to be moulded.
Fig. 8 End-diastolic (left) and end-systolic (right) silicone models moulded: (a) septal-apical view of the models; (b) frontal view of the models.

B. Preoperative investigation using the model

These models seemed useful for preoperative quantitative investigation. The results of the volumetric investigations by MRI and by the model, for the patient shown in Figure 8, are given in Figure 9. End-diastolic and end-systolic blood volumes were 172.5 and 133.6 mL by the Teichholz method, respectively, whereas the values of the silicone model measured by the water-weighing method were 133.0 and 88.5 mL. The ejection fraction derived from MRI was 23%, and that obtained from the model was 33%. Although the structural changes in the contractile process of the diseased left ventricle are complicated, the silicone model was useful for a more quantitative examination of the reconstructive surgery before the operation.

IV. CONCLUSION

A sophisticated method for the rapid prototyping of a tangible diseased heart was established. The models fabricated in this study were applied in three clinical cases for the preoperative investigation of the modified Dor procedure. This modelling method was useful for surgical planning based on quantitative evidence, as well as for the clinical explanation to patients.

Fig. 9 Comparison of the changes in ventricular blood volume obtained from a subject, calculated by the Teichholz method (MRI) and from the elastic models fabricated.

ACKNOWLEDGMENT

The authors acknowledge the support of Grants-in-Aid for Scientific Research of the Ministry of Education, Culture, Sports, Science and Technology (17790938, 19689029, 20659213).

REFERENCES

1. Kreitmann P (1982) Surgical treatment of primitive ventricle and complex congenital heart malformation with total exclusion of the right heart: report of a case, J Thorac Cardiovasc Surg 84(1):150
2. Adams JD, et al. (2001) Does preoperative ejection fraction predict operative mortality with left ventricular restoration?, Ann Thorac Surg 82(5):1715-9
3. Swillens A, et al. (2008) Effect of an abdominal aortic aneurysm on wave reflection in the aorta, IEEE Trans Biomed Eng 55(5):1602-1611
4. Kihara Y, et al. (2006) Standard measurement of cardiac function indexes, Jpn J Med Ultrasonics 33(3):371-381

Author: Yasuyuki Shiraishi
Institute: Tohoku University, Institute of Development, Aging and Cancer
Street: 4-1 Seiryo-machi, Aoba-ku
City: Sendai 980-8575
Country: Japan
Email: [email protected]
Simulation-based femoro-popliteal bypass surgery

M. Willemet1,2, G. Compère1,2, J.F. Remacle1 and E. Marchandise1

1 Université Catholique de Louvain/Institute of Materials, Mechanicals and Civil Engineering, Louvain-la-Neuve, Belgium
2 Fonds National de la Recherche Scientifique, rue d'Egmont 5, 1000 Bruxelles, Belgium
Abstract — We present a comparative analysis of different models whose goal is to help surgeons in peripheral vascular bypass surgery planning. The models considered are based on the coupling between 3D, 1D and 0D models. These are supplemented with different boundary conditions computed on the basis of physiological patient-specific measurements. Such multiscale models allow a more accurate analysis of the hemodynamics responsible for intimal hyperplasia development.

Keywords — Blood flow, multiscale modeling, bypass prediction, patient-specific
I. INTRODUCTION

Femoro-popliteal bypasses are among the most common types of vascular surgery. Such interventions are performed on patients suffering from atherosclerosis, a widespread pathology: 14.5% of the population older than 70 years suffers from the resulting lack of vascularization in the lower-limb arteries. This is due to an occlusion of the native femoral artery caused by the deposit of proteins and fat along its arterial wall.

Nowadays, the surgeon's decision to perform the bypass surgery is based on the patient's morphology, the surgeon's own experience and advice from the literature. No numerical tool provides objective results from hemodynamic simulations to help choose the bypass that will provide the best patency rate. After 3 years, 45% of the performed bypass surgeries present a graft occlusion, mainly located at the anastomosis. This is mainly due to abnormal hemodynamics and wall mechanics that are responsible for intimal hyperplasia development, artery wall remodeling and thrombus formation.

This paper begins with a brief description of the mathematical models used: the 1D and 3D models supplemented with different boundary conditions. Details about the 3D mesh generation are also given. Some results are then introduced and discussed.

II. MATHEMATICAL MODELS
The one-dimensional equations for the blood flow in arteries are the conservation of mass and momentum, associated with a tube law that relates the pressure difference across the wall to the cross-sectional area (equation 1) [2,4]. This model assumes an incompressible and Newtonian fluid. The governing equations are solved with a high-order Runge-Kutta Discontinuous Galerkin method.

p(x,t) = p0 + β (√A − √A0),   β = β(x) = 4√π h0 E(x) / (3 A0)   (1)

This 1D model of the cardiovascular circulation has been extensively developed and validated in human arteries [4,5]. Despite good agreement between predicted and measured flows, such a model cannot provide a detailed study of the hemodynamics in more complex geometries (anastomoses), where hyperplasia mostly develops.

Three-dimensional models that solve the 3D Navier-Stokes equations [1,6] can then be used to study, for example, the hemodynamics at the distal anastomosis of the bypass. A few illustrations and further details on the mesh generation are given in the next section.

For the 1D and 3D models, we have implemented different types of inlet and outlet boundary conditions, as listed hereafter. First, we consider zero-dimensional models as outlet boundary conditions: a resistance R [3], an impedance Z(t) [3], a constant pressure [3] or a lumped RCR windkessel model [7]. Physiological measurements performed on patients are used to compute the resistance R and compliance C parameters. At the inlet, the velocity or mass flux is prescribed using defective boundary conditions [8]. Second, we consider the coupling of the 3D or 1D models with an electric-analog closed loop of the cardiovascular system [9]. Third, a global multiscale 3D-1D-0D model is considered [1,6,10]. By using these different types of boundary conditions, we draw conclusions on the model layout that best supports bypass hemodynamic predictions.

J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2568–2570, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
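Equation (1) can be evaluated directly; the sketch below assumes the square-root form of the tube law of [2], with illustrative SI values for the wall thickness h0, Young modulus E and reference area A0 (not patient data):

```python
import math

def beta(h0_m, e_pa, a0_m2):
    """Wall stiffness coefficient of equation (1):
    beta = 4*sqrt(pi)*h0*E / (3*A0)."""
    return 4.0 * math.sqrt(math.pi) * h0_m * e_pa / (3.0 * a0_m2)

def tube_law_pressure(a_m2, a0_m2, p0_pa, h0_m, e_pa):
    """Transmural pressure from the tube law
    p = p0 + beta * (sqrt(A) - sqrt(A0))."""
    b = beta(h0_m, e_pa, a0_m2)
    return p0_pa + b * (math.sqrt(a_m2) - math.sqrt(a0_m2))

# Illustrative values: A0 = 0.5 cm^2, wall 0.5 mm, E = 400 kPa, p0 = 10 kPa.
p = tube_law_pressure(5.5e-5, 5.0e-5, 1.0e4, 5.0e-4, 4.0e5)
```

As expected from the formula, the pressure equals p0 at the reference area and increases monotonically with A.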
III. THREE-DIMENSIONAL MESH GENERATION OF THE PATIENT-SPECIFIC ARTERIAL NETWORK
In this section, we describe the main steps of the tetrahedral 3D mesh generation. Using the commercial software Amira, a segmentation of the computed tomography data of the patient (CT scan) is performed. We highlight here some points that require particular attention. In the case study, the bypass graft material is an "in situ" vein, i.e. a vein that has not been extracted from its muscular environment. When a vein is used as a bypass, some surgical treatments need to be performed. One is the removal of the vein valves, which do not allow blood flow in the downward direction. Another is the suture of all collateral vessels of the vein in order to avoid blood leakage. These sutures are realized with metal clips that are visible on the CT-scan Dicom images. These volume extrusions need to be removed manually in order to perform a proper segmentation.
Figure 1 represents the result of the segmentation of the arterial network of the leg: a 2D surface mesh. Notice that, due to the smoothing algorithm, the inlet and outlet faces are not flat. Such a geometry is very large and will require a large computational time when one wants to solve the 3D Navier-Stokes equations. Besides, it is more worthwhile to focus on the details of the blood flow at the anastomosis (Figure 2).
Fig. 2: Detailed mesh of the anastomosis.

Before the volume mesh generation, the inlet and outlet faces need to be flattened in order to guarantee optimal convergence of the boundary conditions. To do so, the surface mesh is exported in .stl format to another commercial software, SolidWorks. By using the intersection of plane surfaces and the surface mesh, flat faces are obtained. The refinement of the surface triangular mesh is then obtained with the YAMS software. Finally, the tetrahedral volume mesh generation is done with the GMSH software [11]. These steps are illustrated in Figure 3.
Fig. 1: Arterial network of the patient's left leg with an in-situ vein (the arrows indicate the two sites of anastomosis). The occlusion of the superficial artery has been bypassed from the common femoral artery to the popliteal artery.
Fig. 3: Mesh generation of the inlet face of the anastomosis. The face obtained after the segmentation (a) is first flattened (b). Then, the mesh is refined (c)
IV. RESULTS

Considering the predictive patient-specific goal of our work, we show that taking into account the patient-specific measurements is of great importance when computing the boundary conditions.

As an example, let us consider the comparison between the velocities that can be prescribed at the inlet of the 3D domain. Figure 4 shows velocities measured by US Doppler at the common femoral artery in different patients. These are compared with the velocity at the same artery computed by a whole 1D-0D arterial tree when a physiological flow rate is imposed at the heart inlet and when the bypass is taken into account (Figure 5) [4,12]. We see that the systolic velocity value of the 1D-0D model is lower than those measured on the patients. From the study of a 3D domain, we also analyze the development of the wall shear stress (WSS) on the arterial wall; those results are compared with the mean WSS computed from the 1D model [13].

V. CONCLUSIONS

We have presented an analysis of multiscale models and their boundary conditions that allow optimal patient-specific hemodynamic simulations. With this effective tool, we can help surgeons in their practice by predicting the best hemodynamic bypass parameters.

Fig. 4: Velocity in the common femoral artery prescribed as the inlet boundary condition.

Fig. 5: 1D-0D model of the whole arterial tree. The legend indicates the diameter of the arteries.

REFERENCES

1. Formaggia L, Gerbeau J, Nobile F, Quarteroni A (2001) On the coupling of 3D and 1D Navier-Stokes equations for flow problems in compliant vessels, CMAME, 191:561-582
2. Sherwin S, Franke V, Peiro J, Parker K (2003) One-dimensional modelling of a vascular network in space-time variables, J Eng Math, 47:217-250
3. Vignon-Clementel I, Figueroa C, Jansen K, Taylor C (2006) Outflow boundary conditions for three-dimensional finite element modeling of blood flow and pressure in arteries, CMAME, 195:3776-3796
4. Marchandise E, Willemet M, Lacroix V (2008) A numerical hemodynamic tool for predictive vascular surgery, Med Eng Phys, in press
5. Steele B, Wan J, Ku J, Hughes T, Taylor C (2003) In vivo validation of a one-dimensional finite-element method for predicting blood flow in cardiovascular bypass grafts, IEEE Trans Biomed Eng, 50:649-656
6. Urquiza S, Blanco P, Venere M, Feijoo R (2006) Multidimensional modelling for the carotid artery blood flow. CMAME, 195:4002-4017
7. Stergiopulos N, Young D, Rogge T (1992) Computer simulation of arterial flow with application to arterial and aortic stenoses, J Biomechanics, 25:1477-1488
8. Formaggia L, Gerbeau JF, Nobile F, Quarteroni A (2001) Numerical treatment of defective boundary conditions for the Navier-Stokes equations. Rapports de recherche INRIA, 4093, Unité de recherche de Rocquencourt
9. Avanzolini G, Barbini P, Cappello A, Cevenini G (1988) CADCS simulation of the closed-loop cardiovascular system. Int J Biomed Comput, 22:39-49
10. Taylor C A, Draney M T, Ku J P, Parker D, Steele B N, Wang K, Zarins C K (1999) Predictive Medicine: Computational Techniques in Therapeutic Decision-Making, Computer Aided Surgery, 4:231-247
11. GMSH at http://www.geuz.org/gmsh/
12. Lamponi D (2004) One dimensional and multiscale models for blood flow circulation, Ecole polytechnique fédérale de Lausanne (EPFL), PhD Thesis
13. Bessems D (2007) On the propagation of pressure and flow waves through the patient-specific arterial system, TU Eindhoven, PhD Thesis
An applicability of Impedance Technique in evaluation of cardiac resynchronization therapy

M. Lewandowska1, J. Wtorek1 and L. Mierzejewski2

1 Biomedical Engineering Department, Gdansk University of Technology, Gdansk, Poland
2 Cardiac Rehabilitation Centre, Szymbark, Poland
Abstract — An impedance measurement method is proposed for the evaluation of cardiac resynchronization therapy. Impedance changes are calculated for an electrode array enabling multiple-channel measurements. It has been shown that the spatial sensitivity to conductivity changes occurring inside the thorax can be modified by an appropriate geometry of the electrode array. As a result, selected measurement channels may be sensitive mainly to the heart chambers, while others respond mainly to conductivity changes localized in the lungs.

Keywords — Electrical bioimpedance, heart failure, cardiac resynchronization.
I. INTRODUCTION

Congestive heart failure (CHF) is an imbalance in the heart's pump function: the heart fails to maintain the circulation of blood in a physiological manner. Pulmonary edema may develop when this imbalance causes an increase in lung fluid secondary to leakage from pulmonary capillaries into the interstitium and alveoli of the lung [1]. Cardiac resynchronization therapy (CRT) is a relatively new therapy for patients with symptomatic heart failure resulting from systolic dysfunction; it can relieve CHF symptoms by making heart contractions more physiological. CRT is achieved by simultaneously pacing both the left and right ventricles. Theoretically, biventricular pacing resynchronizes the timing of global left ventricular depolarization and as a result improves mechanical contractility and mitral regurgitation [2]. A few methods are used to evaluate the therapy, based on the PQ interval, the QRS duration and echocardiography [3,4]. Patient classification for further CRT treatment is mainly based on the QRS width. However, this may not always correlate well with mechanical dyssynchrony, which is the main abnormality treated by CRT.

The main aim of the study presented in this paper is the evaluation of the suitability of multi-electrode impedance measurements to support the diagnosis of patients with congestive heart failure (CHF). Electrical impedance measurement may appear as the method of choice in this application, as it is sensitive to mechanical heart activity
[5,6]. However, as this sensitivity is indirect, via responses to geometry and conductivity changes, appropriate processing of the measured signal should be applied [7,8,9].

II. METHOD

Measurements of electrical impedance are utilized to evaluate the mechanical activity of the heart in impedance cardiography [5]. It is a four-electrode method of impedance measurement (Fig. 1): the two outer electrodes (marked C) are used for applying current while the other two, marked P, measure the resulting voltage. Using this method a mutual impedance Z_t is measured, described by the following relation [7]:

Z_t = U_P−P / I_C−C    (1)

where U_P−P denotes the measured voltage and I_C−C the applied current. In general, the impedance value depends on the configuration of the electrode array, the shape of the object and the internal distribution of conductivity [7].
Fig. 1 Configuration of electrode array used in classical impedance cardiography
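The four-electrode principle of Fig. 1 and relation (1) can be sketched numerically. In the sketch below a sinusoidal carrier current is applied through the C electrodes and the voltage picked up by the P electrodes is synchronously demodulated to recover the mutual impedance; the 50 kHz carrier, current amplitude and impedance value are made-up illustration numbers, not values from the paper.

```python
import numpy as np

# Illustrative sketch of the four-electrode measurement and eq. (1):
# a carrier current I is driven through the C electrodes, the voltage U on
# the P electrodes is synchronously demodulated, and Z_t = U_PP / I_CC.
# All numeric values here are hypothetical.

fs = 1_000_000                      # sample rate (Hz)
f_c = 50_000                        # carrier frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal

I0 = 1e-3                           # applied current amplitude (A)
Zt_true = 25.0                      # mutual impedance to recover (ohm)

i_cc = I0 * np.sin(2 * np.pi * f_c * t)   # applied current
u_pp = Zt_true * i_cc                     # measured voltage (noise-free)

# Synchronous (lock-in) demodulation: multiply by the reference carrier and
# low-pass by averaging; the amplitude of U is twice that mean.
u_amp = 2 * np.mean(u_pp * np.sin(2 * np.pi * f_c * t))
Zt = u_amp / I0                           # eq. (1)
print(Zt)
```

In a real instrument the demodulation also recovers the phase of Z_t; the sketch keeps only the in-phase (resistive) component for brevity.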
When the conductivity distribution inside the thorax changes from σ(x,y,z) to σ(x,y,z)+Δσ(x,y,z), the change of mutual impedance ΔZ_t is given by the relationship [10]:

ΔZ_t = − ∫_V Δσ [∇φ(σ+Δσ)/I_φ] · [∇ψ(σ)/I_ψ] dv    (2)

J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2571–2574, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
M. Lewandowska, J. Wtorek and L. Mierzejewski
where ψ indicates the potential distribution resulting from the current I_ψ flowing between the current electrodes and φ is the hypothetical potential distribution associated with a current I_φ flowing between the voltage pair of electrodes. The integration is done over the region where the conductivity change is non-zero, Δσ(x,y,z) ≠ 0. The potential distribution φ is calculated after the conductivity change Δσ(x,y,z) has occurred. L_ψ = ∇ψ/I_ψ and L_φ^t = ∇φ/I_φ are called the lead fields; the superscript t indicates that the lead field L_φ^t is to be evaluated following the change in conductivity. The scalar product of the two lead fields is called a sensitivity function. To shorten the equations, the spatial dependence of conductivity is omitted in equation (2) and the following ones. If the thorax is divided into regions of constant conductivity change Δσ_i, equation (2) can be expressed as [5,7,8]:

ΔZ_t = Σ_{i=1..I} K_i Δσ_i    (3)

where

K_i = − ∫_{V_i} L_ψ · L_φ^t dv    (4)
ΔZ_t is the total impedance change, Δσ_i the regional (local) conductivity change, and I the number of regions. Relation (3) allows division of ΔZ_t into components ΔZ_i arising from a chosen region (organ) or phenomenon. To solve relation (3), the potential distribution in the volume conductor is needed; it is described by Maxwell's equations. Taking into account that the thorax is a source-free region, that biological tissues are mainly conductive and anisotropic, and that magnetic fields can be neglected, the resulting relationship reduces to:

∇ · j = 0    (5)

j = −σ∇φ    (6)
where j is the current density, φ the potential and σ the conductivity of the medium. In order to solve equation (5), boundary conditions have to be assumed. The boundary conditions reflect, among others, the type of technique used to measure impedance. In this case the boundary conditions state that the current density normal to the outer surface is zero everywhere except where the current electrodes are attached:
∂φ/∂n = 0 on S − S_ei    (8)

At the surface points in contact with the current electrodes a constant potential is assumed:

V = V_ei on S_ei    (9)
where i is the number of the current electrode, S the outer surface of the volume conductor, S_ei the surface of the i-th electrode, and n the vector normal to S. An electrode array different from the standard one has been examined in our study (Fig. 2). It contains two electrodes (marked black) to apply current and fourteen for voltage measurements; the latter form seven measurement ports and are marked using different colors. FEM has been used to calculate the potential distribution. The model of the thorax has been constructed from horizontal cross-sections parallel to the x, y plane (slices) taken from anatomical maps and tomographic pictures, using six-node (pentahedral) elements. Impedance cardiography is able to evaluate the mechanical activity of the heart and thus seems suitable for classification of CRT candidates. To assess the usability of the method, a three-dimensional FEM model of the human thorax was constructed. It consisted of 230 000 tetrahedral elements in 30 layers; an example layer is presented in Fig. 2. Each layer contained different regions (in the presented one 7 regions are marked) described by appropriate conductivities [11]. Conductivity changes in selected regions, e.g. the right or left ventricle of the heart, resulted in impedance changes measured at different locations on the chest. The number of independent
Fig. 2 Configuration of electrode array (marked by big letters) and assumed regions of conductivity changes. Black and broken line
IFMBE Proceedings Vol. 22
An applicability of Impedance Technique in evaluation of cardiac resynchronization therapy
III. RESULTS

The potential distribution for a selected pair of electrodes is shown in Fig. 5. The sensitivity to conductivity changes associated with this potential distribution is not uniform and exhibits the greatest values in the area adjacent to the electrodes (Fig. 6). Using the sensitivity distributions obtained for each measurement channel, the impedance changes have been calculated. First, impedance changes were calculated for conductivity changes created only in the heart chambers (Fig. 7). Then, conductivity changes have been localized both in the heart chambers and the lungs and the corresponding impedance changes have been calculated (Fig. 8). Fig. 3 Simulated changes of conductivity over time in the heart chambers (marked 1, 3, 4 and 5 in Fig. 2)
measurements was equal to the number of impedance change sources. The sensitivity of each measurement port to a certain source was calculated using the Geselowitz formula (2). This allowed the total measured impedance change to be decomposed into components associated with each region of conductivity change using relations (3) and (4). It is assumed that conductivity changes in the lung are created by blood ejected from the right ventricle: blood flowing through the lungs' vessels modifies their apparent conductivity. As a first approximation, it is assumed that this modification is uniform throughout the lung tissue. The breathing phenomenon is also omitted. Changes of lung conductivity are much lower than the changes associated with the heart, because the lungs have a very large volume in comparison to the stroke volume. It is thus assumed that the conductivity of the lung tissue can be approximated as changing uniformly.
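The decomposition in relations (3) and (4) can be sketched numerically: each measurement channel sees the total impedance change as a weighted sum of regional conductivity changes. The sensitivity coefficients and conductivity waveforms below are invented for illustration; the paper's FEM-derived values are not reproduced.

```python
import numpy as np

# Illustrative sketch of equations (3) and (4): per channel,
# dZ_ch(t) = sum_i K[ch, i] * ds_i(t). The matrix K and the waveforms
# are hypothetical stand-ins, not the paper's results.

t = np.linspace(0.0, 1.0, 200)               # one cardiac cycle (s)

# Regional conductivity changes: ventricles (large) and lungs (small, delayed)
ds_ventricle = 0.05 * np.sin(np.pi * t) ** 2
ds_lung = 0.005 * np.sin(np.pi * (t - 0.1)) ** 2

# Sensitivity matrix K[channel, region] in the sense of eq. (4); made-up values.
K = np.array([[0.8, 0.1],    # channel mostly sensitive to the heart
              [0.1, 0.9]])   # channel mostly sensitive to the lungs

dZ = K @ np.vstack([ds_ventricle, ds_lung])  # eq. (3), one waveform per channel

# A heart-weighted channel is dominated by the ventricular component.
heart_fraction = (K[0, 0] * ds_ventricle.max()) / dZ[0].max()
print(heart_fraction)
```

This is exactly the separation the paper aims at: if K is known from the model, the measured multi-channel ΔZ_t can be decomposed back into its regional components.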
Fig. 4 Simulated changes of conductivity in the lungs (marked 6 and 7).
Fig. 5 Potential distribution for channel 2
Fig. 6 Distribution of sensitivity to conductivity changes for channel 2
Fig. 7 Impedance changes measured in each channel in response to conductivity changes localized in the heart's chambers. Channels are ordered from top (the first, 1) to bottom (the last, 7).

Fig. 8 Impedance changes measured in each channel in response to conductivity changes localized in the heart's chambers and lungs. The first two waveforms from the top present conductivity changes while the others present impedance changes.

IV. DISCUSSION

It follows from the performed simulation studies that the spatial distribution of sensitivity is different for each measurement channel. This means that the impedance waveform obtained from each channel can be modeled as a sum of contributions from different regions, as follows from relationships (3) and (4). However, this holds only as long as the conductivity changes are low enough not to modify the sensitivity distribution inside the thorax.

V. CONCLUSIONS

It follows from the performed simulation studies that it may be possible to extract the component arising from the conductivity changes associated with heart (ventricle) contraction using appropriate methods.

REFERENCES
1. Martin DO, Stolen KQ, Brown S, et al (2007) Pacing evaluation – atrial support study in cardiac resynchronization therapy (PEGASUS CRT): design and rationale. Am Heart J 153(1):7-13
2. Strickberger SA, Conti J, Daoud EG, et al (2005) Patient selection for cardiac resynchronization therapy: from the Council on Clinical Cardiology Subcommittee on Electrocardiography and Arrhythmias and the Quality of Care and Outcomes Research Interdisciplinary Working Group in collaboration with the Heart Rhythm Society. Circulation 111:2146-2150. DOI: 10.1161/01.CIR.0000161276.09685.4A
3. Pitzalis MV, Iacoviello M, Romito R, et al (2002) Cardiac resynchronization therapy tailored by echocardiographic evaluation of ventricular asynchrony. JACC 40:1615-1622
4. Abraham WT, Fisher WG, Smith AL, et al (2002) Cardiac resynchronization in chronic heart failure. N Engl J Med 346:1845-1853
5. Patterson RP (1989) Fundamentals of impedance cardiography. IEEE Eng Med Biol Mag 3:35-38
6. Patterson RP (1985) Sources of the thoracic cardiogenic electrical impedance signal as determined by a model. Med Biol Eng Comput 23:411-417
7. Wtorek J (2000) Relations between components of impedance cardiogram analyzed by means of finite element model and sensitivity theorem. Ann Biomed Eng 28:1352-1361
8. Kauppinen PK, Hyttinen JA, Malmivuo JA (1998) Sensitivity distributions of impedance cardiography using band and spot electrodes analyzed by a three-dimensional computer model. Ann Biomed Eng 26:694-702
9. Kim DW, Baker LE, Pearce JA, et al (1988) Origins of the impedance change in impedance cardiography by a three-dimensional finite element model. IEEE Trans Biomed Eng 35:993-1000
10. Geselowitz DB (1971) An application of electrocardiographic lead theory to impedance plethysmography. IEEE Trans Biomed Eng 18:38-41
11. Geddes LA, Baker LE (1967) The specific resistance of biological material – a compendium of data for the biomedical engineer and physiologist. Med Biol Eng 5:271-293
Author: Magdalena Lewandowska
Institute: Biomedical Engineering Department
Street: Narutowicza 11/12
City: Gdansk
Country: Poland
Email: [email protected]
Porcine model for CPR artifact generation in ECG signals

A.C. Mendez1, M. Roehrich2 and H. Gilly2,3

1 Biomedical Engineering, Univ Applied Sciences - FH Technikum Wien, Vienna, Austria
2 Dept. Anaesthesia and Intensive Care Medicine, Medical University Vienna, Vienna, Austria
3 L. Boltzmann Institute for Anaesthesia and Intensive Care, Vienna, Austria
Abstract — Interruption of cardiopulmonary resuscitation (CPR) worsens the chance for a successful defibrillation with stable return of spontaneous circulation. Therefore "no-flow times" (NFT) have to be minimized. However, with the ECG-analysis algorithms currently implemented in (automatic external) defibrillators, analysis of the electrocardiogram (ECG) for fibrillation detection requires interruption of CPR. In contrast, ECG analysis during ongoing cardiac massage could considerably reduce NFT. New analysis algorithms should be optimized for removal of CPR artifacts from the ECG, and in order to test these algorithms "corrupted" ECGs are needed. We have designed a pig experimental model for generating CPR artifacts in the ECG. Either the pig's sinus rhythm ECG or the pig's ventricular fibrillation ECG (or any ECG previously recorded in human victims) is fed into the pig's thorax; the (dead) animal thereby acts as an "ECG generator", which allows recording of the electrical potential changes induced during CPR at the defibrillator pad electrodes. Performing CPR at the same time, we were able to generate ECGs with true CPR artifacts. The corrupted ECG signal as well as the corresponding reference signal (pressure, force or equivalent) can be recorded simultaneously. The electrical signals recorded via defibrillator pads were nearly identical to the pig's live ECG. Using a black-box modeling approach (MATLAB) we were able to define an appropriate transfer function. When analyzing the transfer function, the pig was identified to act as a high-pass filter considerably attenuating frequencies below 1-2 Hz. Using inverse transformation we could reconstitute the "true" corrupted ECG signal. Our experimental approach provides a sound basis for generating the data needed for extensive testing of artifact removal algorithms. We are able to generate ECG artifacts even resembling situations in which lay people or non-professional rescuers perform CPR.
Keywords — CPR, artifact reduction, ECG analysis, pig model, transfer function.
I. INTRODUCTION

When recorded from the defibrillator pads, cardiopulmonary resuscitation (CPR) produces baseline variations in the ECG. In order to analyze the ECG on-line and to decide on a shockable or non-shockable rhythm,
currently available (semi-)automatic external defibrillators (AEDs) require an interruption of CPR chest compressions. The longer the period during which resuscitation is stopped (commonly referred to as "hands-off" time), the lower the chances for a good outcome of the victim [1, 2]. Present research efforts [3] aim at developing methods to remove CPR-produced noise from ECG signals. For validation of suitable algorithms and methods, CPR-corrupted ECG data of sufficient quality are needed. The model commonly used for the artificial corruption of ECG data is a linear mathematical approach, called the "additive model", where the CPR-generated noise is simply added to the ECG signal. The additive model provides a quantitative measure, the signal-to-noise ratio (SNR), for comparison when analyzing different CPR removal algorithms. However, one major drawback of the additive model is that it does not include all the variables present in real CPR. The present work presents a porcine model using a new approach to generate CPR-corrupted data.

II. MATERIALS AND METHODS

A. ECG recordings

ECG signals were obtained from human victims in emergency cases as well as from dedicated pig experiments. These digitized signals (10 bit, 250 Hz sampling rate) represented ECG sequences of about 15 s. A microprocessor-controlled signal generator was used for DA conversion and continuous output of the saved ECG data ("prearranged signal"). Figure 1 shows the block diagram of the experimental setup to generate CPR-corrupted ECG signals. ECG data: The signals to be corrupted were obtained from different sources. The origin is detailed as follows: • Normal ECG recorded from the pig while anesthetized (ECG lead II) and VF ECG after induced fibrillation. These signals were then processed and, after asystole, fed into the pig. • Human test data: ECG data recorded during emergency cases by a Welch Allyn PIC 50 defibrillator in Innsbruck in the period from 2003 to 2006.
This dataset was fed in to generate and record CPR corruption.
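The conventional "additive model" that this work contrasts itself with can be sketched in a few lines: CPR noise is simply summed with a clean ECG, which makes the SNR directly controllable. All waveforms below are synthetic stand-ins, not recordings from the study.

```python
import numpy as np

# Sketch of the additive model: corrupted = ECG + CPR noise.
# Toy sinusoids stand in for the real waveforms.

fs = 250                                  # sampling rate of the ECG data (Hz)
t = np.arange(0, 15, 1 / fs)              # ~15 s sequences, as in the recordings

ecg = np.sin(2 * np.pi * 1.2 * t)         # toy "ECG" (not a real waveform)
cpr = 0.5 * np.sin(2 * np.pi * 1.8 * t)   # toy CPR artifact (~108/min)

corrupted = ecg + cpr                     # the additive model

def snr_db(sig, noise):
    """SNR of the additive mixture, in dB."""
    return 10 * np.log10(np.sum(sig ** 2) / np.sum(noise ** 2))

print(snr_db(ecg, cpr))
```

Because the "noise" is known exactly, the SNR before and after artifact removal can be compared directly; the porcine model trades this convenience for physically realistic artifacts.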
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2575–2578, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
A.C. Mendez, M. Roehrich and H. Gilly
In the porcine experiments all ECG signals were recorded using defibrillator pads.
from noisy time-series data (System Identification Toolbox) was used to estimate a transfer function that describes the filtering behavior of the pig on the fed-in signal. The identification process consists of repeatedly selecting a model structure, computing the best model in that structure, and evaluating the model's properties. In addition to the transfer function estimation, a validation test was designed to prove that the inverse transfer function of the model could reconstruct the originally fed-in signal from the electrical signal as measured experimentally on the pig's surface.

III. RESULTS
Fig. 1: Outline of the pig model: The ECG is fed into the dead pig via the feed-in stage (consisting of a level shifter for ECG baseline correction, a constant current source and the needle electrodes) in such a way that a "real" ECG signal can be recorded from the pig's thorax (standard Einthoven bipolar three-lead ECG). CPR is performed on the pig (corrupting action), distorting the ECG. The corrupting signal is measured using a fluid-filled compression pad connected to a pressure transducer. The corrupting and corrupted signals are recorded using a data acquisition system (National Instruments Data Acquisition Board NI6036E). The DAQ board is connected to a PC with datalogger software (P. Hamm, Univ. Innsbruck) for off-line evaluation.
A. Changes in the ECG signal due to electrical properties of the pig

These effects can be summarized as amplitude attenuation and frequency filtering. Fig. 2 gives an example of the decrease in amplitude in the low frequency range.

B. Corrupting signal (second channel)
The corrupting action (CPR compressions) was monitored using a self-built fluid-filled compression pad (size: approx. 8 × 10 cm), the area of which does not change by more than about 5 %. Therefore, the force of compression can be monitored by measuring the bag pressure (0 to 1.4 bar for 0 to 700 N) with a physiological pressure transducer (TruWave, Edwards Lifesciences, USA).
C. Frequency characteristics and estimation of transfer function
In order to characterize the effects of the equipment's input ECG filters on the relevant frequencies of the signals under analysis (corrupted and corrupting signals), a sine sweep test (0.05 Hz to 100 Hz) was performed. A sine wave generator was directly connected to the different devices (ISODAM-B isolated biological amplifier, World Precision Instruments, USA; Corpuls 08/16 defibrillator/monitor system, GS Elektromedizinische Geräte, Germany; and Philips Heartstart MRx defibrillator/monitor system, Philips, Germany) in their respective configurations. Input and output were compared in the frequency domain to characterize the frequency response of the recording equipment. A MATLAB (MathWorks, Natick, MA, USA) add-on for building accurate, simplified models of complex systems
Fig. 2: Effects of Heartstart MRx and the experimental pig on the fed-in ECG signal. The very low frequency component of the original signal (caused by mechanical ventilation of the pig) is partially filtered by the Heartstart MRx monitor and completely removed after the signal has been fed into the pig.
The signal distortion exerted by the animal on the input signal (fed-in ECG) may be described by the following transfer function:

G(s) = K · (1 + Tz·s) / [(1 + Tp1·s)(1 + Tp2·s)]    (1)
With K = 0.075, Tp1 = 0.010111, Tp2 = 0.001, Tz = 0.20523
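The reported transfer function (1) with its fitted constants can be evaluated directly; the sketch below reproduces its magnitude response in plain NumPy (it is not the authors' identification code) and shows the high-pass-like behaviour noted in the abstract.

```python
import numpy as np

# Magnitude response of G(s) = K (1 + Tz s) / ((1 + Tp1 s)(1 + Tp2 s))
# with the constants reported in the paper, evaluated at s = j*2*pi*f.

K, Tz, Tp1, Tp2 = 0.075, 0.20523, 0.010111, 0.001

def gain(f_hz):
    """Gain magnitude of G at frequency f_hz (Hz)."""
    s = 1j * 2 * np.pi * f_hz
    return abs(K * (1 + Tz * s) / ((1 + Tp1 * s) * (1 + Tp2 * s)))

gains_db = [20 * np.log10(gain(f)) for f in (0.1, 1.0, 10.0)]
print(gains_db)   # gain rises with frequency, i.e. high-pass-like behaviour
```

The gain increases monotonically over this range, consistent with the paper's observation that the pig attenuates frequencies below 1-2 Hz.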
The gain K can be adjusted to numerically fit the amplification factor for a given experiment.

B. Corrupted/corrupting signal

Corruption of the ECG is due to the CPR manoeuvre itself, causing impedance changes in the thorax and movement of the electrodes with concomitant changes in the electrode-skin interface due to the compression. The CPR parameter used in the experiments was the force exerted when performing CPR on the pig's chest. The accuracy of the force measurement is about 8 %, which is sufficient for monitoring purposes. The dynamic response is flat up to 12 Hz. Fig. 3 shows a typical CPR compression cycle.
Fig. 3: Compression pattern: force of compression (red), compression depth as monitored using the Philips Q-CPR device, which calculates the depth from the acceleration signal (blue). Note that the peaks of both signals remain synchronized. The rising slopes are nearly identical whereas the discrepancies in the falling edge are probably due to the higher frequency characteristic of the force transducer.
IV. CONCLUSIONS

The porcine model presented in this work is a completely new approach to the problem of CPR-corrupted data generation. A clear advantage of the porcine model is the generation of "true" CPR artifacts in any given ECG signal, in contrast to the additive model where the artifacts are mathematically processed and added to the SR or VF ECG signal, as shown in Fig. 4. Also, the model and setup allow the simultaneous recording of a corrupting signal (second channel), in the present experiments the compression force. Another advantage of the porcine model is the straightforward way in which different CPR scenarios can be simulated. For example, it can be examined how the noise generated by CPR differs between well-trained rescue personnel and untrained lay people. The proposed porcine model would provide a basis for the extensive testing and refinement of CPR removal algorithms.
Fig. 4: Spectrograms of a nonshockable signal (idioventricular rhythm) and the corresponding CPR-corrupted signal (lower panel). Graph taken from T. Werther: A Comparative Study of Two-Channel Methods for Removing CPR Artifacts. Master Thesis, Medical Univ Vienna (unpublished)
However, before widespread use of the porcine model to systematically corrupt ECG signals, several considerations have to be taken into account, such as the selection of a representative second-channel signal, the insertion position of the needle electrodes and the anatomical similarities between human and pig, possible signal distortion, and, to a minor extent, the frequency characteristics of the monitoring/recording equipment. We could establish a quantifiable relationship between the signal that is fed into the pig via needle electrodes and the signal that is recorded with the defibrillator pads
placed on the pig's chest, the relationship being given by the transfer function. From the recordings obtained from the pig and the Bode plot of the estimated transfer function, it can be concluded that the pig attenuates the low frequencies similarly to a high-pass/band-pass filter. Nonetheless, it needs to be demonstrated in a series of experiments whether this transfer function of the model is valid across animals or whether the parameters are peculiar to the animal under test (4). In addition, it should be clarified to what extent (changing) resistive, capacitive and conductive properties of the tissue are involved (5,6,7). The feed-in setup cannot simulate changes in the electrical axis of the heart, that is, the direction and magnitude of the overall dipole of the heart at any given moment during CPR. At present this is a drawback inherent to the technique used. So far, the original emergency data fed into the pig must always come from recordings performed using ECG lead II or adhesive defibrillator pads placed in the same axis (right pectoral region and below the left axilla); this guarantees a consistent simulation of the original ECG within the pig. The principal drawback to overcome in the presented model is the low-frequency filtering that affects the fed-in signal. As already discussed, this is an effect caused by the dielectric properties of the pig and, possibly, the skin-electrode interface. To resolve this problem, a new approach to the present model is proposed: the main change consists in manipulating the original signal with the inverse transfer function of the model before it is fed into the pig via needle electrodes. At least theoretically, this manipulation should abolish the effect of the pig on the signal, providing the original waveform to be corrupted by the CPR compressions.
Still, more experiments have to be performed with all variables fixed in order to generate sufficient data for a comprehensive statistical analysis of the model's performance.
ACKNOWLEDGMENT The study was supported in part by a grant from the Austrian Science Fund (Grant L288-N13) as well as from the University Jubilee Fund. The Department of Biomedical research provided the facilities for the animal experiments; colleagues from the Department of Emergency Medicine helped in performing these experiments. Tobias Werther, PhD, provided the ECG data. The Datalogger software was kindly provided by Peter Hamm, Medical Univ. Innsbruck.
REFERENCES
1. Berg RA, Sanders AB, Kern KB, et al (2001) Adverse hemodynamic effects of interrupting chest compressions for rescue breathing during cardiopulmonary resuscitation for ventricular fibrillation cardiac arrest. Circulation 104:2465-2470
2. Sato Y, Weil MH, Sun S, et al (1997) Adverse effects of interrupting precordial compression during cardiopulmonary resuscitation. Crit Care Med 25:733-736
3. Werther T, Klotz A, Kracher G, Baubin M, Feichtinger HG, Gilly H, Amann A (2008) CPR artifact removal in ventricular fibrillation ECG signals using Gabor multipliers. IEEE Trans Biomed Eng, in press
4. Goldberger JJ, Subacius H, Sen-Gupta I, et al (2007) A new method to determine the electrical transfer function of the human thorax. Am J Physiol Heart Circ Physiol 293:H3440-H3447
5. Rush S, Abildskov JA, McFee R (1963) Resistivity of body tissues at low frequencies. Circ Res 12:40-50
6. Schwan HP, Kay CF (1957) Capacitive properties of body tissues. Circ Res 5:439-443
7. Schwan HP, Kay CF (1956) The conductivity of living tissues. Ann NY Acad Sci 65:1007-1013
Author: Dr. Hermann Gilly
Institute: Dept. Anaesthesia & General Intens. Care (B)
Street: Waehringerguertel 18-20
City: Vienna
Country: Austria
Email: [email protected]
Design and Assessment of Fuzzy Rules by Multi Criteria Optimization to Classify Anaesthetic Stages

R. Baumgart-Schmitt, C. Walther and K. Backhaus

University of Applied Sciences Schmalkalden, Faculty of Electrical Engineering, Germany

Abstract — Fuzzy rules have been developed to model and predict the time-dependent depth profile during induction and maintenance of anesthesia. Features of time series of the frontal EEG measured in 47 patients in the operating theatre were extracted. Expert knowledge has been used to define 5 fuzzy sets for each of the 62*5 linguistic variables. Five anesthetic stages have been represented by 55 fuzzy rules, selected from a large pool of rules. The non-dominated sorting genetic algorithm (NSGA-II) controlled a multi-criteria selection. To evaluate the concordance between the expert and the rule-assisted classification during the optimization procedure, five different criteria were included. This multi-criteria evaluation proved to be useful because of the imbalance in the number of depth-specific EEG segments and the aim of obtaining good generalization properties. The basic structures of the fuzzy rules which were automatically obtained by multi-criteria optimization show strong similarities to the rules applied by experts. The generalization ability of the rules to separate the different stages has been assessed in a second step. Patterns of labeled EEG records were generated to test the degree of concordance with the results of the binary coded rules for all five stages. According to the design aims, the degree of concordance, represented by the Pareto set in the four-dimensional space of fitness criteria, allowed us to infer the classifying power. In cooperation with neural networks, sets of fuzzy rules adapted to different feature sets should be implemented in hybrid fuzzy-neural software running on a small microcontroller to support the anesthetist during operating tasks.
The results of the adapted fuzzy rules working as classifiers are compared with two alternative approaches: populations of optimized neural networks and support vector machines. Despite multi-criteria optimization the performance is slightly lower, but the structure is more transparent for experts. Keywords — Depth of anaesthesia, fuzzy modelling, multi-criteria optimization.
I. INTRODUCTION

The depth of anesthesia should be estimated and predicted to control the induction and maintenance of anesthesia in an optimal way. The EEG was measured at the forehead of awake and anesthetized patients. Different approaches of multi-objective learning classifier systems were developed and compared by means of this frontal
EEG. The final classifier of the favorite approach should be implemented and run on a mobile phone or PDA platform to assist the anesthetist during operations. Different strategies of induction and maintenance of anesthesia, inter-individual differences in the mental and physical load of the patients and the length of operations can influence the features of the frontal EEG. This means the classifier should be robust against these deflections. Furthermore there are some sources of technical and biological artifacts which should be recognized as outliers and have to be eliminated to prevent misclassification. The time-dependent depth of anesthesia can be visually presented by a so-called profile of anesthesia. These profiles were generated from expert knowledge, data from the operation protocol and autonomous data like heart rate and blood pressure. To train the learning classifier systems, profiles of anesthesia were used to label the records of EEG data; therefore supervised learning procedures could be applied. Each multi-objective learning classifier system should distinguish between the four different stages of anesthesia called A1, A2, A3, A4 and the wake stage before and after the operation. The durations of the stages can be extremely different, therefore a strong class imbalance problem has to be solved. To prevent results biased toward the strongly represented or most common class, the data were preprocessed by oversampling the minority classes, by undersampling the majority classes or by aggregation of data from concordant operations. The performance and the training effort of the three approaches used, characterized by populations of topologically optimized neural networks, populations of sets of fuzzy inference rules and support vector machines with different kernels, are strongly dependent on the balancing method. The data base of the supervised learning scheme was divided into three subsets: the training set, the test set and the validation set.
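The class-rebalancing step described above can be sketched as random oversampling of the minority classes so that each anaesthesia stage contributes equally many EEG epochs. The epoch counts below are invented for illustration, not the study's actual class sizes.

```python
import random
from collections import Counter

# Minimal sketch of random oversampling: resample minority-class epochs
# (with replacement) until every class matches the majority class size.
# Labels follow the paper's stage names; counts are hypothetical.

random.seed(0)
epochs = ([("wake", i) for i in range(500)] +
          [("A1", i) for i in range(80)] +
          [("A2", i) for i in range(1200)] +
          [("A3", i) for i in range(300)] +
          [("A4", i) for i in range(40)])

counts = Counter(label for label, _ in epochs)
target = max(counts.values())          # size of the majority class

balanced = list(epochs)
for label, n in counts.items():
    pool = [e for e in epochs if e[0] == label]
    balanced += random.choices(pool, k=target - n)   # resample minority epochs

print(Counter(label for label, _ in balanced))
```

Undersampling the majority classes, the alternative mentioned in the text, would instead draw `min(counts.values())` epochs per class without replacement.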
The training set was used to adapt the parameters of the classifier. The test set served to calculate the criterion to stop the adaptation and prevent overfitting. The third set was employed to calculate the performance of the classifier. The structure of the fuzzy rules is based on the features used by the optimized neural networks and support vector machines. The number and the parameters of the fuzzy rules were selected by a multi-objective classifier system. The core of our system consists of evolutionary multi-objective optimization techniques. In accordance with
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2579–2582, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
R. Baumgart-Schmitt, C. Walther and K. Backhaus
Bernado-Mansilla et al. [1], the developed system tried to maximize generalization and accuracy of the rules. Some further criteria were implemented and evaluated by genetic algorithms to find Pareto sets which represent an optimal compromise in forecasting strongly and weakly represented classes. We have gathered some experience in dealing with imbalance problems by classifying anesthesia and sleep data. Markowska-Kaczmar and Mularczyk [2] discuss the problem of multimodality and multi-objectiveness in genetic algorithms and made some proposals to find the global optimum. In a further step we decided to combine different fuzzy rules trained and tested by different data sets. This approach corresponds to our experience with populations of neural networks and the proposals by Ishibuchi and Nojima [3], who reported excellent results with ensemble classifiers: they combined multiple fuzzy classifiers and found an improved performance compared to single optimized classifiers.
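The ensemble idea credited to Ishibuchi and Nojima can be sketched as simple majority voting over several classifiers. The three stand-in "classifiers" below are plain threshold functions for illustration only; in the paper's setting each voter would be a fuzzy rule set trained on a different data split.

```python
from collections import Counter

# Toy sketch of ensemble (majority-vote) classification. The stand-in
# classifiers and the threshold values are hypothetical.

def clf_a(x): return "A2" if x > 0.4 else "A1"
def clf_b(x): return "A2" if x > 0.6 else "A1"
def clf_c(x): return "A2" if x > 0.5 else "A1"

def ensemble_predict(x, classifiers):
    """Return the label chosen by the majority of the classifiers."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

print(ensemble_predict(0.55, [clf_a, clf_b, clf_c]))
```

The combined decision can be more robust than any single voter, which is the effect the cited ensemble work exploits.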
Rule R_i: IF (x_1^i is X_11^i or x_1^i is X_12^i … or x_1^i is X_15^i)
and (x_2^i is X_21^i or x_2^i is X_22^i … or x_2^i is X_25^i)
…
and (x_N^i is X_N1^i or x_N^i is X_N2^i … or x_N^i is X_N5^i)
THEN y_i is A_i        (1)
were implemented. The fuzzy sets X_kl^i with k = 1, 2, …, N, l = 1, 2, …, 5 and i = 1, 2, …, 5 are introduced with the membership functions given in (2).
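A minimal sketch of how such a rule base can be evaluated: Gaussian memberships per fuzzy set, OR within a linguistic variable as the maximum, AND across variables as the minimum, and class assignment by the maximum rule strength. The two-feature setup and all parameter values are illustrative, not the optimized values from the paper:

```python
import math

def membership(x, a, b):
    # Gaussian membership function mu(x, a, b) = exp(-0.5*((x-a)/b)^2)
    return math.exp(-0.5 * ((x - a) / b) ** 2)

# One rule per class; each linguistic variable carries up to five fuzzy sets,
# each with its own (a, b) pair. The values below are purely illustrative.
RULES = {
    "A2": [[(0.3, 0.1), (0.5, 0.2)],   # fuzzy sets for feature 1
           [(0.7, 0.15)]],             # fuzzy sets for feature 2
    "A3": [[(0.8, 0.1)],
           [(0.2, 0.1), (0.4, 0.2)]],
}

def rule_strength(sets_per_feature, features):
    # OR within a variable = max of memberships; AND across variables = min
    return min(max(membership(x, a, b) for (a, b) in fsets)
               for fsets, x in zip(sets_per_feature, features))

def classify(features):
    # aggregate the class-specific rules by the maximum membership value
    return max(RULES, key=lambda c: rule_strength(RULES[c], features))
```

With these toy parameters, `classify([0.3, 0.7])` favors the "A2" rule and `classify([0.8, 0.2])` the "A3" rule.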
II. METHODS

The EEG and vegetative parameters of 47 patients were recorded during surgery in the cooperating hospitals of Schmalkalden and Zella-Mehlis (Germany). The complete data pool of all patients was divided into three independent parts: n = 14038 EEG epochs of 10 s length from 22 patients were included in the training data set, m = 8720 epochs from 15 patients were used to test the performance of the fuzzy rules, and v = 6324 epochs from 10 patients served as the validation set. Anaesthesia was induced either by a mixture of ketamine and propofol or by rapifen and propofol. The hypnotic stage was maintained by the administration of sevofluran. The EEG was measured bipolarly near Fp1 and Fp2 and sampled at a rate of 505 values per second by our mobile recorder system Quisi mini. This system uses a 16 bit Sigma-Delta analog-to-digital converter and a flash card to store the data in two-byte mode. At least four stages and wake should be distinguished to guide the anaesthesia: the label A1 marks the transitional stage of sedation or light anaesthesia, A2 the stage of moderately deep anaesthesia or surgical tolerance, A3 the stage of deep anaesthesia, and A4 the stage of very deep anaesthesia with the occurrence of so-called burst suppression patterns. The classification problem could be solved by optimized populations of neural networks and support vector machines (Baumgart-Schmitt et al. [5] and Walther et al. [6]). Both classification approaches have the disadvantage that it is difficult for the user to recognize how
a decision was reached by the machine [4]. To convince the expert, in our case the anaesthetist, we tried to extract guidelines from the neural networks and support vector machines to design and optimize fuzzy rules. According to the four stages of anesthesia and wake, five rules R_i, i = 1, 2, …, 5 of the form shown in (1)
μ(x, a, b) = exp(−0.5·((x − a)/b)²)        (2)
with a and b as real-valued parameters which are included in the multi-criteria optimization; that means each fuzzy set has its own individual parameter set. The number of linguistic variables is N, and each feature of the EEG is connected with one linguistic variable. The maximum of two membership functions is used to unify two fuzzy sets (or operator) and the minimum serves for the intersection (and operator). The non-dominated sorting genetic algorithm (NSGA-II) [7] generates individuals and evaluates their fitness by four different criteria. The value of each criterion reflects one aspect of the concordance between the expert and the rule concerning class assignment and the potential ability for generalization. One individual is defined by a vector of N·5·5 Boolean and N·5·5·2 real-valued variables. The variables are changed by evolutionary operators such as recombination and mutation. The aggregation of the five class-specific rules is performed according to the maximum membership value. Different combinations of learning and test data sets serve as the basis for the multi-criteria optimization runs. The multi-criteria optimization has been performed in a four-dimensional criteria space, and the fitness of each individual is characterized by the tuple of criteria (q1, q2, q3, q4). The criterion q1 is calculated as the quotient of the number of correctly classified 10 s EEG epochs and the total number of included epochs. The average of the class-specific quotients is represented by q2. The lowest value of the class-specific quotients is q3. The fourth criterion q4 is equivalent to the number of unused fuzzy sets in the
IFMBE Proceedings Vol. 22
Design and Assessment of Fuzzy Rules by Multi Criteria Optimization to Classify Anaesthetic Stages
rule set. Additionally, the criterion q5 represents the classification quality measured on the second test set, which is completely independent of the learning set; it is used as a stopping criterion in the optimization process but not by the genetic optimization algorithm. To obtain final populations of fuzzy rules for classification of the validation data set, certain Pareto-optimal tuples from the different training runs were selected and combined into a set of Pareto-optimal rules. The final selection of rules from the Pareto set to obtain an optimal ensemble classifier was supported by the weighted sum of the first four criteria values; the rules with the highest accumulated values were preferred. The results of the rules in the final population were aggregated by the median operator to assign the class label to each epoch.

III. RESULTS

Confusion matrices have been used to analyze and compare the classification performance of the ensembles of optimized fuzzy rules with the results of the populations of neural networks and the support vector machines. Tables 1 to 3 contain the degrees of concordance and the deviations in percent between the manually scored 10 s EEG epochs and the results of the three approaches. The bold diagonal values present the degrees of concordance. The deviations are presented in the other table cells and are more or less critical for the controlling of the operation. The complete validation database consisting of 6324 records from 10 patients was used. The features were selected in the frequency range up to 64 Hz. Table 1 shows the confusion matrix of the results received by an ensemble of 11 different fuzzy rules for each class. The leftmost column indicates the decision of the expert and the top row marks the class-specific outcomes of the rules.
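The four fitness criteria can be sketched directly from a confusion matrix of per-class counts. This is a simplified illustration of the definitions above; q4, the count of unused fuzzy sets, is passed in directly here:

```python
def criteria(confusion, unused_sets):
    """confusion[i][j]: number of epochs of true class i assigned to class j.
    Returns the tuple (q1, q2, q3, q4) described in the text."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    q1 = correct / total                                  # overall accuracy
    per_class = [row[i] / sum(row) for i, row in enumerate(confusion)]
    q2 = sum(per_class) / len(per_class)                  # mean class-specific accuracy
    q3 = min(per_class)                                   # worst class-specific accuracy
    q4 = unused_sets                                      # parsimony: unused fuzzy sets
    return q1, q2, q3, q4
```

Under class imbalance, q1 can remain high while q3 collapses, which is why the optimization treats them as separate objectives rather than a single score.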
To compare the performance of the adapted fuzzy rules with the results of optimized populations of neural networks, Table 2 was included. The results of the support vector machines are shown in Table 3. In Walther et al. [6] the superiority of the support vector machines for solving selected two-class problems on the same EEG data set was shown. We used this feature to separate outliers, caused for instance by electrically supported operations, from the undisturbed EEG data. All tables have an identical structure. The bold diagonal values indicate the concordance in percent between the expert's opinion and the automatic classification. Confusions between Wake and A1 are non-critical. A confusion between A2 and A3 also seems to be harmless. Therefore we highlighted these sub-confusion matrices grey within all three tables. The classification of burst suppression patterns, i.e. the very deep anaesthesia class A4, is very important; there should be few confusions with the other classes, so this class is highlighted in grey alone.

Table 1: Confusion matrix with the results received by an ensemble of fuzzy rules (FR) for each class. The EEG frequency range which served as the primary basis of feature extraction was restricted to 64 Hz. The training database contains 10160 records from 10 different patients, the test database 7807 records from 8 patients and the validation database 6324 records from 10 other patients.

%    | Wake (FR) | A1 (FR) | A2 (FR) | A3 (FR) | A4 (FR)
Wake |   66.7    |  22.5   |   0.9   |   1.8   |   8.1
A1   |   21.0    |  45.2   |  20.2   |   6.5   |   7.3
A2   |    0.1    |   7.3   |  61.8   |  30.2   |   0.6
A3   |    0.0    |   3.4   |  37.8   |  57.5   |   1.3
A4   |    0.3    |  11.6   |  18.9   |  25.6   |  43.6
Table 2: Confusion matrix generated by optimized populations of neural networks (NN). The same validation data set as in Table 1 was employed. The classification was done by 55 networks, all topologically optimized by the evolutionary shell of SASCIA (a training tool developed by the authors [5]). They use different subspaces of the complete feature space. Each value in this table represents the median of the values from each patient of the validation set; therefore the sum of the values in each row might not be equal to 100%. The training database contains 14038 records from 22 different patients, the test database 8720 records from 15 patients and the validation database 6324 records from 10 other patients.

%    | Wake (NN) | A1 (NN) | A2 (NN) | A3 (NN) | A4 (NN)
Wake |   95.1    |   0.0   |   0.0   |   0.0   |   0.0
A1   |   14.2    |  40.8   |   1.4   |   3.5   |   1.9
A2   |    0.0    |   3.2   |  54.7   |  28.1   |   0.0
A3   |    0.0    |   0.0   |  15.8   |  68.7   |   5.3
A4   |    0.0    |   0.0   |   2.0   |   2.1   |  92.9
Table 3: Confusion matrix produced by the support vector machines (SVM). An optimized radial basis function kernel was applied. The same validation data set as in Table 1 was employed. The training database contains 8980 records from 9 different patients, and cross-validation was used. The validation database is represented by 6324 records from 10 other patients.

%    | Wake (SVM) | A1 (SVM) | A2 (SVM) | A3 (SVM) | A4 (SVM)
Wake |   45.1     |  51.4    |   0.9    |   0.9    |   1.8
A1   |   10.9     |  59.7    |   8.9    |  12.5    |   8.1
A2   |    0.1     |   3.5    |  54.7    |  34.4    |   7.3
A3   |    0.0     |   0.8    |  46.6    |  45.9    |   6.6
A4   |    0.0     |   2.5    |   5.6    |  10.9    |  81.0
IV. CONCLUSION

The results of the fuzzy rules adapted by multi-criteria optimization were supported by the experience gained in training neural networks and support vector machines. Repeated training with different subsets of the complete learning set and different initial parameters, followed by aggregation of the results, appears to be an effective way to obtain powerful classifiers while keeping the training effort low. The three supervised learning approaches were applied to provide useful generalization, and the included features should contribute to explaining the results. The population of neural networks offers the best separation of the anesthetic depths, but at the cost of extraordinary computing time. The support vector machine achieves lower, but still good, classification results with minimal effort. The adapted fuzzy rules rank third in generalization performance (essentially because of the low classification accuracy for A4), but they offer the best opportunities to explain the separation.

REFERENCES

1. Bernado-Mansilla E, Llora X, Traus I (2006) Multi-objective Learning Classifier Systems. In: Jin Y (ed) Multi-objective Machine Learning. Springer
2. Markowska-Kaczmar U, Mularczyk K (2006) GA-Based Pareto Optimization for Rule Extraction from Neural Networks. In: Jin Y (ed) Multi-objective Machine Learning. Springer
3. Ishibuchi H, Nojima Y (2006) Fuzzy Ensemble Design through Multi-Objective Fuzzy Rule Selection. In: Jin Y (ed) Multi-objective Machine Learning. Springer
4. Diederich J (2008) Rule Extraction from Support Vector Machines. In: Diederich J (ed) Rule Extraction from Support Vector Machines. Springer
5. Baumgart-Schmitt R, Walther C, Backhaus K, Reichenbach R, Sturm K-P, Jaeger U (2008) Robust Nonlinear Adaptive Network Classification of Anaesthesia. Proceedings of the IAPR Workshop on Cognitive Information Processing, June 9–10, 2008, Santorini, Greece
6. Walther C, Baumgart-Schmitt R, Backhaus K (2008) Support Vector Machines and Optimized Neural Networks – Adaptive Tools for Monitoring and Controlling the Depth of Anaesthesia. The 3rd International Conference on Electrical and Control Technologies, May 8–9, 2008, Kaunas, Lithuania
7. Laabidi K, Bouani F, Ksouri M (2008) Multi-criteria optimization in nonlinear predictive control. Mathematics and Computers in Simulation 76(5–6):363–374

First corresponding author:
Author: R. Baumgart-Schmitt
Institute: University of Applied Sciences Schmalkalden, Faculty of Electrical Engineering
Street: Blechhammer 4-9
City: 98574 Schmalkalden
Country: Germany
Email: [email protected]

Second corresponding author:
Author: C. Walther
Institute: University of Applied Sciences Schmalkalden, Faculty of Electrical Engineering
Street: Blechhammer 4-9
City: 98574 Schmalkalden
Country: Germany
Email: [email protected]
Impact of the hERG Channel Mutation N588K on the Electrical Properties of the Human Atrium

P. Carrillo¹, G. Seemann¹, E. Scholz², D.L. Weiss¹ and O. Dössel¹

¹ Institute of Biomedical Engineering, Universität Karlsruhe (TH), Germany
² Department of Internal Medicine III, University Hospital Heidelberg, Germany
Abstract— Atrial fibrillation is the most common cardiac arrhythmia in humans. The precise cellular mechanisms underlying atrial fibrillation are still poorly understood. Recent studies have identified several genetic defects as predisposing factors for this pathology. One of them is the mutation N588K, which affects the cardiac IKr channel. Genetic variants in this channel have been shown to modify ventricular repolarization. The aim of this work is to investigate the effect of this mutation on atrial repolarization and the predisposition to atrial fibrillation. Measured whole-cell voltage clamp data of the wild-type and mutated hERG channel were implemented in the Courtemanche et al. ionic model. For this purpose, channel kinetics and channel density of the model were adjusted by parameter fitting to the measured data. In this way, the effects of the mutation in the hERG channel could be analyzed both in the whole cell and in tissue. The channel mutation N588K showed a gain-of-function effect, causing faster repolarization and consequently a shortening of the action potential duration. Computer simulations of a schematic anatomical model of the right atrium were then carried out to investigate the excitation propagation and the repolarization. The action potential duration of the mutant cell was reduced to 116 ms and the effective refractory period to 220 ms. Both factors are linked to a shortening of the wavelength, indicating that the mutation N588K predisposes to the initiation and perpetuation of atrial fibrillation.

Keywords— Arrhythmia, Atrial fibrillation, hERG, IKr
I. INTRODUCTION

Atrial fibrillation (AF) is the most common cardiac arrhythmia. It is characterized by an abnormally rapid activation of the atrial muscle, which results in a reduction of its contractility. The atrial activity is normally under the control of the cardiac pacemaker function of the sinus node (SN) with 60–80 beats per minute (bpm) and is increased to 400–600 bpm during AF [1]. Sustained AF has severe effects, such as congestive heart failure, thromboembolism, ventricular arrhythmia and electrical remodeling of the atria, which favors the maintenance of AF [1]. The occurrence of AF increases with age. Several predisposing factors have been identified,
e.g. coronary artery disease, congestive heart failure, pericarditis and hypertension. However, AF has also been recognized as a heritable disorder. One of the identified genetic defects is the mutation N588K in the gene KCNH2, which encodes the hERG protein. This protein forms the α-subunit of the myocardial rapid delayed rectifier potassium channel IKr, which is crucial to repolarization. The mutation N588K causes a gain-of-function effect, leading to a shortening of the action potential duration (APD) and of the effective refractory period (ERP). Both effects support the initiation and perpetuation of AF. In this work, the gating and kinetic properties of this mutant channel were implemented into the Courtemanche et al. ionic model of human cardiomyocytes [2]. The effects were then visualized in a single cell and in a schematic anatomical model of the right atrium. In these environments the APD and ERP were analyzed.
II. MATERIALS AND METHODS

A. Implementation of the measured data in the electrophysiological model

McPate et al. [3] recorded the mutant channel function by whole-cell patch clamping in hamster ovary cells. The current-voltage relationship was measured by a step protocol: starting from a holding potential of –80 mV, voltage steps of 2 s duration were applied up to 100 mV in 10 mV increments. The mutated currents showed inactivation at significantly more positive voltages than the wild-type currents. This factor contributes to an increased repolarizing current earlier in the action potential. The Courtemanche et al. model was used to describe the channel properties in the human atrium. The model describes the electrical behavior of human atrial cells with a set of nonlinear coupled ordinary differential equations that reconstruct ion concentrations, ionic currents, intracellular structures and the transmembrane voltage [2]. Parameters of the electrophysiological model were adjusted to the measured data using an optimization algorithm called Particle Swarm Optimization (PSO) [4]. Hereby, only the parameters of the rate constants, the maximum conduc-
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2583–2586, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
tance of IKr and the intracellular potassium concentration were modified. The other parameters from the original mathematical description of the channel were left unchanged. The environmental conditions were adopted from the measurement conditions: temperature was set to 37 °C and the intracellular potassium concentration was set to 4 nM.

B. Computer simulations

In order to determine the excitation propagation and the ERP, a tissue model was used which anatomically represents the atrium and describes the electrical coupling of excitable cells. The anatomical model includes the superior caval vein (SVC), the terminal crest (CT), some pectinate muscles (PM) and the atrial working myocardium (AWM), as shown in Fig 2 A. The action potential heterogeneities of CT and PM were considered by using adapted electrophysiological parameters [5]. AWM was set to isotropic, CT and PM to anisotropic conductivity. For CT and PM the anisotropy factor was adapted so that both anatomical structures presented a physiological excitation velocity in the longitudinal direction. In this way, CT had an excitation velocity of 1.2 m/s and PM of 1.68 m/s. This fast longitudinal direction was set parallel to the main axis of each CT and PM. The stimulus for the excitation propagation was set on the right side of the CT under the SVC, simulating the position of the SN. To investigate the predisposition for rotating waves, an S1–S2 protocol was used. During the refractory period of the first activation, a second stimulus (S2) was applied, temporally shifted and in a different place, where part of the tissue was still refractory and the rest excitable. This produced a unidirectional block in the excitation propagation. A monodomain model was used to represent the electrical coupling of cardiomyocytes. This model calculates the intercellular current flow through the gap junctions and through the intracellular space.
The monodomain model was calculated from Poisson’s equation for stationary electrical fields combined with a finite difference method.
III. RESULTS

A. Cell Simulations

The best fit of the simulation of the mutant IKr to the measured data is given by:

IKr = gKr · xr · (V − EK) / (1 + exp((V + 33.13)/50.673))

xr(∞) = [1 + exp(−(V + 0.00758)/16.93)]⁻¹
Fig. 1: (A) Steady state current-voltage relationship of the measured wild-type (red) and mutated (green) IKr channel. Action potential of (B) physiological, and (C) mutant cells of the Courtemanche et al. model at different stimulation frequencies.
αx(r) = 0.00177 · (V + 0.09494) / (1 − exp(−(V + 0.09494)/1.4883))

βx(r) = 7.3898·10⁻⁵ · (V − 0.1346) / (exp((V − 0.1346)/0.04446) − 1)
with a maximum conductance gKr of 0.3682 nS/pF, an intracellular potassium concentration of 100 nM, the gating variable xr, the transmembrane voltage V, the Nernst potential of potassium EK, the gating variables αx(r) and βx(r), and the steady-state value xr(∞). These characteristics were integrated into the Courtemanche et al. model. The simulations showed that mutation N588K shifted the inactivation of hERG towards more positive voltages, causing an increase of IKr (Fig 1 A). This results in faster repolarization and thus a shortening of the APD. The physiological and mutated action potentials (AP) for different frequencies are shown in Fig 1 B and C. The APD90 of the mutant cell was reduced from
Fig. 2: (A) Schematic anatomical model of the right atrium. Excitation propagation during depolarization at (B) 5 ms, (C) 15 ms, and (D) 30 ms after the stimulation. Red color indicates +10 mV, blue color –80 mV.
302 ms to 116 ms at 1 Hz. Both cell types showed a similar frequency adaptation.

B. Tissue Simulations

The excitation propagation in the schematic model of the atrium is shown in Figure 2. The velocity of the excitation was not modified by the mutation, except at basic cycle lengths (BCL) shorter than 200 ms. The main differences between the wild-type and the mutated case are visible in the repolarization propagation and in the APD of the tissue. Repolarization in the mutant model is much faster than in the physiological one. This results in a decrease of the wavelength, which is defined as the product of ERP and conduction velocity. The ERP describes the shortest interval at which a premature beat can start an excitation propagation. It was simulated by applying three beats at 1 Hz and then a premature one. In the physiological case the ERP was 317 ms, whereas in the mutant model it was reduced to 233 ms. Furthermore, the APD of the mutant tissue showed a dependence on the voltage reached during depolarization. Apart from the difference in the maximum upstroke due to electrotonic coupling, the APD in the single-cell and tissue environments did not differ in the physiological case. In contrast, in the mutant model the APD in tissue was significantly larger than the APD in the single-cell environment (Fig 3). Due to
Fig. 3: Action potential in tissue of (top) physiological, and (bottom) mutant cells of the Courtemanche et al. model. In contrast to the physiological case, the APD of the mutated case differs from the APD of the single-cell.
electrotonic coupling, the voltage amplitude reached during depolarization in tissue is not as large as in a single cell. As a result, IKr did not show the expected increase that caused the drastic shortening of the APD. The effective APD was 220 ms. The S1–S2 protocol produced a unidirectional block that led to a rotating wave only in the mutant case (Fig 4). 135 ms after the normal stimulation (S1), a second impulse (S2) was set in a different place, where part of the tissue was refractory and the rest was not. The wave could propagate only in one direction and rotated around the non-excitable tissue generated by S2. The refractory period of these cells was shorter than the time the wave needed to return to its starting point. As a result, a new AP could be activated, generating a new propagating wave. However, this new wave could not produce the same effects as the first one, as it soon encountered still refractory tissue and died out.
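The wavelength argument reduces to simple arithmetic. With an assumed conduction velocity of 0.8 m/s (an illustrative value, not one taken from the simulations), the simulated ERPs translate to:

```python
def wavelength_mm(erp_ms, cv_m_per_s):
    # wavelength = ERP x conduction velocity; ms * m/s yields mm directly
    return erp_ms * cv_m_per_s

physiological = wavelength_mm(317, 0.8)  # ~253.6 mm with the assumed CV
mutant = wavelength_mm(233, 0.8)         # ~186.4 mm with the same CV
```

Whatever the actual conduction velocity, a reduced ERP scales the wavelength down proportionally, which is the substrate-for-reentry argument made here.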
IV. DISCUSSION AND CONCLUSION

Mutation N588K showed a gain-of-function effect on IKr caused by the shifted inactivation of hERG towards more positive potentials. In a single cell, this resulted in a significant shortening of the APD to 116 ms. This factor is linked to a shortening of the wavelength, building a substrate for AF.
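Assuming the gating variable sits at its steady state, the fitted mutant IKr reported in the Results can be evaluated as in the following sketch. EK is supplied by the caller; the constants are the fitted values quoted in the text, with units as in the fit:

```python
import math

G_KR = 0.3682  # maximum conductance g_Kr in nS/pF (fitted value from the text)

def xr_inf(v):
    """Steady-state value of the gating variable x_r for the fitted mutant."""
    return 1.0 / (1.0 + math.exp(-(v + 0.00758) / 16.93))

def i_kr(v, e_k):
    """Quasi-steady-state I_Kr: conductance x activation x driving force,
    scaled by the voltage-dependent inactivation term of the fit."""
    return G_KR * xr_inf(v) * (v - e_k) / (1.0 + math.exp((v + 33.13) / 50.673))
```

Evaluating `i_kr` over a voltage ramp reproduces the qualitative point made above: the inactivation term only suppresses the current strongly at more positive potentials, so the mutant's shift changes repolarization mainly near the plateau.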
Although the effect of the mutation in tissue is not as strong as in a single cell, it still differs from the physiological case, so that the predisposition for the cardiac arrhythmia is present. Furthermore, the ERP was also reduced in the tissue simulations, from 317 ms in the physiological case to 233 ms in the pathological model. Combined with the fact that the conduction velocity decreases at short BCL, this effect builds a substrate for the initiation and perpetuation of AF. As shown in the simulations, the effects of mutation N588K can produce rotating waves, in contrast to the physiological case. The anatomical model used in this work was a schematic model of the human atrium. Factors like conductivity and curvature, which play a very important role for the conduction velocity, were not considered. We expect that in a model including curvature, the conduction velocity will decrease, resulting in a further shortening of the wavelength. In this way, the model could produce not only one rotating wave, but flutter or fibrillation. In future work, we will transfer these findings into a complete anatomical model of the atria, in order to investigate the effects of this mutation in AF.
REFERENCES

1. Nattel S (2002) New ideas about atrial fibrillation 50 years on. Nature 415:219–226
2. Courtemanche M, Ramirez RJ, Nattel S (1998) Ionic mechanisms underlying human atrial action potential properties: Insights from a mathematical model. Am. J. Physiol. 275:301
Fig. 4: Excitation propagation induced by an S1–S2 protocol. A second stimulus is set temporally shifted in a different place. The wave can only propagate in one direction (A) and rotates around the non-excitable tissue generated by S2 (B), (C). Before the wave returns to its starting point, the cells located there are no longer refractory. This allows a new AP to be produced there, which leads to another propagating wave (D), (E). The new stimulation dies out after a few ms, as it encounters still refractory tissue (F). Red color indicates +10 mV, blue color –80 mV.
However, our results also indicate that the behavior of the mutant channel in tissue differs from that in a single cell, as it depends on the maximum upstroke reached during depolarization. This results from the characteristic effect of mutation N588K on the hERG channel. As shown in Fig 1, both physiological and mutant currents react in a similar way at negative voltages; the increase of IKr is significant only at more positive potentials. Due to the electrotonic coupling, the maximal upstroke in a multi-cell environment is 0 V. As a result, mutated hERG channels do not have the expected effect, resulting in a longer APD in tissue than in a single cell. This behavior is shown in Fig 3.
3. McPate MJ, Duncan RS, Milnes JT, Witchel HJ, Hancox JC (2005) The N588K-HERG K+ channel mutation in the 'short QT syndrome': mechanism of gain-in-function determined at 37 degrees C. Biochemical and Biophysical Research Communications 334:441–449
4. Lurz S (2008) Multidimensional Adaption of Electrophysiological Cell Models To Experimentally Characterized Pathologies. Master's thesis, Universität Karlsruhe
5. Seemann G, Höper C, Houghton L et al. (2006) Heterogeneous three-dimensional anatomical and electrophysiological model of human atria. Phil. Trans. Roy. Soc. A 364:1465–1481
• Author: Paola Carrillo
• Institute: Institute of Biomedical Engineering, Universität Karlsruhe (TH)
• Street: Kaiserstr. 12
• City: 76131 Karlsruhe
• Country: Germany
• Email: [email protected]
The Effect of Laser Characteristics in the Generation and Propagation of Laser Generated Guided Waves in Layered-skin Model

Adèle L'Etang and Zhihong Huang

School of Engineering, Physics and Mathematics, University of Dundee, Dundee, DD1 4HN, UK

Abstract — This paper concerns the use of surface acoustic waves for accurate characterization of human skin. It presents a Finite Element (FE) study of laser-generated ultrasonic waves in a 3-layered model of human skin, examining the effects that laser characteristics have on the properties of the generated Surface Acoustic Waves (SAWs). Using the commercially available FE code ANSYS, the effects of laser beam width, pulse rise time and laser wavelength on the generated guided waves in the skin models have been taken into consideration. The simulation is a sequential coupled-field analysis: the heating of the multilayered skin model due to a short laser pulse is simulated by a dynamic thermal analysis with the laser pulse represented as a volumetric heat generation. The results of the thermal analysis are then applied as a load in the mechanical analysis, where the out-of-plane displacement histories and stress fields are analyzed. The two analyses can be assumed to be uncoupled in this work, as on the timescale of interest the elastic effects do not feed back into the thermal problem. In order to keep the generation of ultrasonic waves in the thermoelastic regime, the energy incident on the tissue must be kept below the threshold at which irreversible changes occur, yet be sufficient to produce thermoelastic waves that can be readily detected. Results show that the laser-generated SAW in the skin models is dominated by the optical penetration depth, which is determined by the properties of the material and the laser wavelength used.

Keywords — Laser Ultrasonics, FEM, Surface Waves, Skin Characterization
I. INTRODUCTION

Laser-generated ultrasound has been widely utilized in industry for the non-destructive evaluation of layered materials. From velocity measurements of the generated ultrasonic bulk or surface waves, information can be deduced regarding layer thickness and the mechanical properties of the layers in a material [1,2]. The work on laser-generated acoustic waves by Wu and Chen [2] discussed the dispersion of laser-generated surface waves in an epoxy-bonded copper-aluminum layered specimen. The results show a clear influence of the bonding layer thickness on the surface wave dispersion, and the method could be applied to the non-destructive evaluation of bonding properties.
When a tissue sample is illuminated with an ultra-short laser pulse, the absorbed radiation results in a rapid localized increase in temperature of the irradiated area. This increase in temperature causes rapid thermal expansion and results in the generation and propagation of mechanical waves. In the non-destructive thermoelastic regime various ultrasonic waves can be generated, including longitudinal and transverse waves, surface waves and Lamb waves [3]. In biomedical applications of laser ultrasonics, temperature changes of the tissue due to short laser pulses have to be limited to degrees or fractions of a degree in order not to destroy or damage the skin. The characteristics of laser ultrasonic waves depend strongly on the optical penetration depth, thermal diffusion, and the elastic and geometrical features of the tissue, as well as on the parameters of the exciting laser pulse, including the shape, focus spot and pulse width; they can therefore be used to characterize tissue properties.

II. FINITE ELEMENT ANALYSIS

Due to its capability of obtaining full-field approximate numerical solutions and its flexibility in modeling complicated geometry, the ANSYS code is used to calculate the laser-induced excitation process, where thermal diffusion and the optical penetration depth of the laser irradiation are considered. The modeling technique presented is a sequential coupled-field analysis in which the thermal and mechanical analyses are treated separately, as the effect of the stress field on the temperature field can be assumed to be negligibly small. The skin models in this paper consist of three uniform layers: the epidermis, dermis and subcutaneous fat. We assume in this work that the thicknesses of the different layers are constant on a small scale. The meshes are placed parallel according to the layer thicknesses and are assumed to be bonded together.
The heating of the skin model by a laser pulse is simulated by a dynamic thermal analysis; the nodal temperatures obtained from the thermal analysis are input as loads in the subsequent mechanical analysis, and the time-dependent out-of-plane displacement histories at various locations on the surface of the model are analyzed.
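The sequential coupling described above can be illustrated with a toy one-dimensional example (a sketch of the data flow, not the ANSYS skin model): an explicit thermal solve supplies nodal temperatures, which are then converted into thermal-strain loads for a static mechanical solve. All dimensions and material values below are illustrative placeholders.

```python
import numpy as np

# Toy sequentially coupled-field analysis of a heated 1D rod.
# Step 1: explicit finite-difference thermal solve -> nodal temperatures.
# Step 2: temperature rises become thermal-strain loads for a static solve.
# All values are illustrative placeholders, not skin properties.

n, dx, dt, steps = 11, 1e-3, 1e-4, 200    # nodes, spacing (m), time step (s)
kappa = 1e-4                               # thermal diffusivity (m^2/s)
T = np.zeros(n)
T[0] = 1.0                                 # pulse heats the surface node by 1 K

for _ in range(steps):                     # thermal analysis (explicit, stable)
    T[1:-1] += kappa * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# mechanical analysis: axial bar elements with thermal loads E*A*alpha*dT
E, A, alpha = 1e5, 1e-6, 3e-4
K = np.zeros((n, n))
f = np.zeros(n)
for e in range(n - 1):                     # assemble stiffness and thermal loads
    k = E * A / dx
    K[e:e + 2, e:e + 2] += k * np.array([[1, -1], [-1, 1]])
    dT_e = 0.5 * (T[e] + T[e + 1])         # mean element temperature rise
    f[e] -= E * A * alpha * dT_e           # equivalent thermal nodal forces
    f[e + 1] += E * A * alpha * dT_e

u = np.zeros(n)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])  # fix node 0, solve K u = f
print(u[-1])                               # free-end displacement (m)
```

The free end moves outward because every element expands in proportion to its temperature rise, mirroring how the mapped nodal temperatures drive the displacement field in the paper's mechanical analysis.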
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2587–2591, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Adèle L’Etang and Zhihong Huang
Table 1. Thermal and Mechanical Properties of Skin Layers used in FE Simulations.

Property                              | Epidermis | Dermis   | Subcutaneous Fat
Density (g mm-3)                      | 1.2x10-3  | 1.2x10-3 | 1.0x10-3
Specific heat (J g-1 K-1)             | 3.590     | 3.300    | 1.900
Thermal conductivity (W mm-1 K-1)     | 2.4x10-4  | 4.5x10-4 | 1.9x10-4
Young's modulus (Pa)                  | 1.36x105  | 8.0x104  | 3.4x104
Poisson's ratio                       | 0.499     | 0.499    | 0.499
Thermal expansion coefficient (K-1)   | 3.0x10-4  | 3.0x10-4 | 9.2x10-4
Due to symmetry a 2D axisymmetric model is employed to reduce computer run times. For laser induced heat-transfer, heat loss by convection and radiation is neglected in this study. The classical heat conduction equations for the finite element analysis can be expressed as:
$[K]\{T\} + [C]\{\dot{T}\} = \{p_1\} + \{p_2\}$   (1)

with [C] the heat capacity matrix, [K] the conductivity matrix, {p1} the heat flux vector, {p2} the heat source vector, {T} the temperature vector and {Ṫ} the temperature rate vector. Ignoring damping, the governing finite element equation for wave propagation can be expressed as:

$[M]\{\ddot{U}\} + [K]\{U\} = \{f_{ext}\}$   (2)

where [M] is the mass matrix, [K] the stiffness matrix, {U} the displacement vector, {Ü} the acceleration vector and {f_ext} the external force vector. For thermoelasticity the external force vector for an element can be described as:

$\{f_{ext}\} = \int_{V_e} [B]^T [E] \{\varepsilon_o\}\, dV$   (3)

with {ε_o} the thermal strain vector and [B]^T the transpose of the derivative of the shape functions.
In the thermal analysis, the mesh is constructed using 4-node axisymmetric quadrilateral elements. The model dimensions are 20 mm in length, with an epidermis depth of 0.08 mm, a dermis depth of 1 mm and a subcutaneous fat depth of 10 mm. The thermal properties of the skin layers used in the simulations are given in Table 1. These property values represent an average data set of the thermal and elastic properties of skin in the normal range. The heating of the multilayered skin model due to a short laser pulse is simulated by a dynamic thermal analysis with the laser pulse represented as a volumetric heat generation. The laser beam is modeled as having Gaussian spatial and temporal distributions, with the intensity of the beam decreasing with depth according to the Beer-Lambert law, as shown in Eq. 4. A cylindrical coordinate system is set up with the origin at the centre of the point of incidence and the z-axis directed into the material. The irradiated energy is assumed to be vertically incident on the material surface, so the distributions of physical quantities such as fluence and heat generation are symmetric about the axis of the incident light.
$\varphi(r,z) = E_o \exp[-2r^2/r_o^2]\, \exp[-(\mu_a + \mu_s) z]$   (4)

where φ(r,z) is the laser fluence, E_o the radiant exposure at the tissue surface, r the radial coordinate, z the coordinate that describes the depth below the surface, and μ_a
III. THERMAL ANALYSIS

Accurate estimation of the laser-induced temperature distribution is crucial for developing the laser ultrasonic surface wave technique into an effective method of quantitative characterization. Since an accurate representation of light propagation into skin would require a model that characterizes the spatial and size distribution of the tissue structures, their absorbing qualities and their refractive indices, a number of assumptions and simplifications have been made. It is assumed that the thermal expansion due to the laser heating occurs over a time span close to that of the pulse duration, and that thermal losses due to radiation and convection are neglected.
Fig. 1 Schematic diagram of laser-irradiated sample.
IFMBE Proceedings Vol. 22
The Effect of Laser Characteristics in the Generation and Propagation of Laser Generated Guided Waves in Layered-skin Model
is the absorption coefficient, μ_s the scattering coefficient and r_o the beam radius. The highly forward-scattering nature of soft tissue suggests that most of the scattered light travels in the same direction as the collimated beam. It is therefore possible to improve Beer's law by replacing the scattering coefficient with the effective scattering coefficient μ'_s = μ_s(1 − g), where g is the average cosine of the scattering angle. Thus Beer's law can be improved, for laser wavelengths where there is considerable scattering, as:
$\varphi(r,z) = E_o \exp[-2r^2/r_o^2]\, \exp[-(\mu_a + \mu_s(1-g)) z]$   (5)

This is still an approximation: it considers only the collimated light and the forward-scattered light travelling in the z-direction; light scattered in all other directions is neglected. The source term Q(r,z) describes the rate of heat deposition in tissue due to laser irradiation. The heat source term is the product of the absorption coefficient and the laser fluence, so the rate of heat deposition per unit volume is:

$Q(r,z) = \mu_a \varphi(r,z)$   (6)
The temporal distribution of the laser irradiation is also assumed to be Gaussian in nature, given by:

$g(t) = \frac{t}{t_o} \exp\!\left(-\frac{t}{t_o}\right)$   (7)

where t is time and t_o is the rise time of the laser pulse. The volumetric heat generation boundary condition used to simulate the laser in the skin model is therefore given as:
$Q(r,z,t) = \mu_a \varphi(r,z)\, g(t)$   (8)
The temperature fields in the multilayered skin models are calculated with time steps of the order of 0.1 ns for the duration of the laser pulse, and the time steps are allowed to increase thereafter. The thermal analysis is run for 0.1 s in order to provide a complete temperature history for the entire duration of the mechanical analysis.
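Putting Eqs. (4)-(8) together, the volumetric heat source used as the thermal load can be sketched as follows. The laser and optical parameter values below are assumed illustrative numbers, not the settings of the paper's four simulations.

```python
import numpy as np

# Volumetric heat source of Eqs. (4)-(8): Gaussian beam profile, Beer-Lambert
# decay with the effective scattering coefficient mu_s' = mu_s*(1 - g), and
# the temporal pulse g(t) = (t/t0)*exp(-t/t0).
# All parameter values are illustrative assumptions.

E0   = 1.0e-3   # radiant exposure at the surface (J/mm^2), assumed
r0   = 0.1      # beam radius (mm), assumed
mu_a = 20.0     # absorption coefficient (1/mm), assumed
mu_s = 10.0     # scattering coefficient (1/mm), assumed
g    = 0.9      # anisotropy: average cosine of the scattering angle, assumed
t0   = 10e-9    # rise time of the laser pulse (s), assumed

def fluence(r, z):
    """Eq. (5): phi = E0 exp(-2 r^2/r0^2) exp(-(mu_a + mu_s(1-g)) z)."""
    return E0 * np.exp(-2 * r**2 / r0**2) * np.exp(-(mu_a + mu_s * (1 - g)) * z)

def heat_source(r, z, t):
    """Eq. (8): Q(r,z,t) = mu_a * phi(r,z) * (t/t0) * exp(-t/t0)."""
    return mu_a * fluence(r, z) * (t / t0) * np.exp(-t / t0)

# the temporal factor peaks at t = t0; Q decays with depth and radius
print(heat_source(0.0, 0.0, t0))
```

In an FE setting this function would be evaluated per element centroid and time step to build the volumetric heat-generation load table.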
The minimum element size is chosen in the same manner, so that the propagating waves are spatially resolved: as a rule, more than 20 nodes per minimum wavelength (λmin) are used.
V. RESULTS

Fig. 2 shows the contour plots of the model at the end of the laser pulse. In Fig. 2(a), for the highly absorbing laser pulse, the heat-affected zone is localized: the energy is absorbed within a 0.08 mm depth, with a maximum temperature increase of 0.482 K at the centre of the laser pulse. This is the characteristic temperature distribution for the three simulations using this type of laser source. This is in contrast to Fig. 2(b), which shows the contour plot for the more highly scattering laser wavelength. At this wavelength the laser energy is absorbed over a much larger volume of the tissue due to the scattering of the energy. The maximum temperature increase obtained here is 0.206 K; this is a smaller increase in temperature than in the CO2 laser simulation, but there is a greater possibility of damage to the skin in this case due to the larger affected area. The distribution of optical energy in the skin model corresponds to the shape of an acoustic transmitter and has a strong influence on the characteristics of the generated ultrasonic waves. Fig. 3 shows the simulated out-of-plane displacement histories at different points on the surface of the skin models for the four simulations. SAWs penetrate into a solid by a few wavelengths, with amplitudes that decay exponentially with depth and a penetration depth that varies with the wavelength of the elastic wave. As the SAW travels along the model, the characteristics of the waveform change due to dispersion. It is clear that the amplitudes, frequency and shape of the
IV. MECHANICAL ANALYSIS

The mesh used in the mechanical analysis is identical to the mesh used in the thermal analysis and is composed of four-node axisymmetric quadrilateral elements. The nodal temperatures from the thermal analysis are mapped onto the mesh as the loads for the mechanical analysis. In the mechanical analysis the temporal and spatial resolution are critical for accurate convergence of the numerical results. The rule applied is that time steps should be small enough to resolve 20 points per cycle of the highest frequency component (fmax).
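The two resolution rules translate directly into a time step and a maximum element size. A minimal sketch; the maximum frequency and wave speed below are assumed illustrative values, not the paper's settings.

```python
# Resolution rules for the mechanical analysis:
#   - time step: at least 20 points per cycle of the highest frequency f_max
#   - element size: more than 20 nodes per minimum wavelength lambda_min
# f_max and the wave speed are assumed illustrative values for soft tissue.

f_max = 5.0e6                      # highest frequency of interest (Hz), assumed
c_min = 20.0                       # slowest (surface) wave speed (m/s), assumed

dt = 1.0 / (20.0 * f_max)          # time step: 20 points per cycle of f_max
lambda_min = c_min / f_max         # minimum wavelength (m)
h_max = lambda_min / 20.0          # maximum element size: 20 nodes/wavelength

print(f"dt <= {dt:.2e} s, element size <= {h_max:.2e} m")
```

With these assumed numbers the criteria demand a sub-10-ns time step and sub-micron elements, which is why the mesh and time stepping dominate the cost of such simulations.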
Fig. 2 Temperature distribution at 20ns after (a) 1mJ laser in which absorption predominates and (b) 50mJ laser in which scattering dominates.
Fig. 3 Out-of-plane displacement histories recorded on the surface of the skin models using the 4 different laser sources described in Table 2.
generated waveforms vary considerably depending on the laser pulse characteristics. Higher-frequency waveforms, which are confined closer to the surface, are produced by using smaller laser beam radii and rise times, and by using a strongly absorbing laser source instead of one that penetrates further into the skin layers. It can also be seen from the above results that waves of larger amplitude can be generated when there is a large acoustic transmitter in the tissue; however, when the temperature increase is larger or affects a larger section of
Table 2. Laser Properties used in simulations [4,5,6]
the sample, the possibility of thermal damage has to be considered, as well as the possibility of mechanical damage due to disruption caused by the generation and propagation of the elastic wave.
VI. CONCLUSIONS

The simulated waveforms in multilayered skin models using four different laser sources have been presented in this paper. To use laser-generated surface waves to characterize skin properties, it is necessary to generate waves that can be easily measured using interferometric techniques. In order to generate SAWs efficiently in the skin model, the laser needs to be strongly absorbed in the sample rather than strongly scattered; this results in more elastic wave energy being generated closer to the surface of the material being tested, and in higher-frequency waves being produced. A shorter laser pulse also results in the generation of higher-frequency SAWs, which are closely confined to the surface of the sample; the same is true of smaller laser beam radii.
REFERENCES
1. C.B. Scruby, L.E. Drain, Laser Ultrasonics: Techniques and Applications, Adam Hilger, Bristol, 1990.
2. T.-T. Wu, Y.-C. Chen, Ultrasonics, Vol. 34 (1996), pp. 793-799.
3. B. Xu, Z. Shen, X. Ni, J. Lu, Numerical simulation of laser-generated ultrasound by the finite element method, J. Appl. Phys., Vol. 94 (4) (2004), pp. 2116-2122.
4. S.C. Jiang, N. Ma, H.J. Li, X.X. Zhang, Burns, 28 (2002), pp. 713-717.
5. G.J. Gerling, G.W. Thomas, The Effect of Fingertip Microstructures on Tactile Edge Perception, WHC First Joint Eurohaptics Conference and Symposium (2005), pp. 63-67.
6. N.M. Thalmann, P. Kalra, J.L. Lévêque, R. Bazin, D. Batisse, B. Querleux, IEEE Transactions on IT in Biomedicine, Vol. 6 (4) (2002), pp. 317-323.
A Mesh-Based Model for Prediction of Initial Tooth Movement

K. De Bondt1, A. Van Schepdael1, J. Vander Sloten1
1 K.U.Leuven, Division of Biomechanics and Engineering Design, Belgium
Abstract — Orthodontics is evolving from a highly experience-based treatment towards computer-assisted, patient-specific therapy. The development of models that can predict tooth movement is therefore a critical research topic. Many existing models, FEM as well as analytical, simplify the calculations by assuming that the tooth root can be approximated by a paraboloïd or an elliptical paraboloïd. Other studies state that these approximations are only valid within a certain range for tipping movements, and that for bodily movement the real root and the approximation can give very different FEM results. This paper presents a model based on an analytical approach using paraboloïds; the analytical approach is chosen to avoid FEM analysis and to limit calculation time. The developed model expands the applicability so that the calculations can be performed for a mesh representation of the root. As a validation, a set of elliptical paraboloïds is constructed and the results of the analytic and the mesh-based procedures are compared, by comparing the forces necessary to cause a certain displacement or rotation of the root. Both methods produced the same results except for small deviations that can be attributed to the mesh accuracy and to the accuracy of the numerical integration used within the analytical approach. This makes the presented model an interesting tool for evaluating the correctness of an approximating paraboloïd. It is also suitable for the calculation of initial tooth movement as part of a model for the simulation of orthodontic tooth movement. Keywords — mesh, modeling, movement, prediction, tooth
I. INTRODUCTION

Orthodontics is still a highly experience-based treatment, and a 'perfect dentition' is often reached through a process of trial and error. There is a tendency towards movement-predictive modeling and simulation to avoid this trial-and-error procedure and to shorten treatment time and cost. FEM models and analytical models have therefore been built to get a clearer view of the factors that determine movement. Many authors use a paraboloïd as an approximation of the tooth root, with its height and diameter as adaptable parameters [1], [2], [3]. Paraboloïds are not suited to describing all root geometries exactly; for this reason elliptical paraboloïds were introduced as a common representation of the root geometry [4]. Few studies report on the accuracy of these paraboloïds to predict the
movement of a real tooth. Some [5] state, based on FEM analysis, that for tipping movements the results differ only within a range of 10%, but that the differences are much higher for bodily movement. As a contribution to this research topic, this paper presents a model that predicts the stress in the periodontal ligament for a triangle mesh of a tooth root and derives the initial displacement of the root. The use of FEM analysis is avoided and calculations are based on analytical formulas to keep the calculation time low. This makes the model applicable as an evaluation tool to compare real root geometries to paraboloïd approximations, and as a model for initial tooth movement prediction.

II. MATERIALS AND METHODS

The developed model is based on the approach of Provatidis [1] and Van Schepdael [4]. Provatidis reports the calculation of the one-directional movement of a single-rooted tooth induced by a force applied at the centre of resistance (CRe) of the paraboloïd representation (Figure 1). The paraboloïd geometry is described by two parameters, height and diameter. He assumes the PDL has a constant thickness of 0.229 mm, with Young's modulus E = 0.68 MPa and a Poisson coefficient of 0.49, which makes the PDL nearly incompressible. In [4] the force system (F, M) applied at the tooth bracket is transformed to the apex of the root and a stiffness matrix (K) of the tooth root is calculated. The initial displacement vector u can then be found from the equation F = K.u.
Figure 1 The geometry of the tooth[1]
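Once the stiffness matrix K is assembled, finding the initial displacement from F = K.u is a single linear solve. A minimal numpy sketch; the diagonal K below is a placeholder (derived as force/displacement from the eccentricity-0 column of Table 1, with assumed units of N/mm and Nmm/rad), not a real, fully coupled tooth stiffness matrix.

```python
import numpy as np

# Initial tooth displacement from F = K.u, with the 6x6 stiffness matrix of
# the root assembled at the apex as in [4]. The diagonal K is an assumed
# placeholder: entries are F/u ratios from Table 1's eccentricity-0 column.

K = np.diag([5362.5, 833.0, 5362.5, 443015.0, 1965.0, 443015.0])
F = np.array([1.0725, 0.0, 0.0, 0.0, 0.0, 0.0])  # force along x at the apex

u = np.linalg.solve(K, F)   # initial displacement/rotation vector
print(u[0])                 # ~2e-4 mm, the displacement behind Table 1's entry
```

A real K has off-diagonal coupling between translations and rotations, which is exactly why the force system is first transformed to the apex before the solve.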
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2592–2595, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
The model which introduced a combined calculation method for translation and rotation also makes use of elliptical paraboloïd geometries. This adds an extra degree of freedom for the root geometry description, namely the eccentricity. The equation of a two-dimensional ellipse with its centre at (0, 0) in polar coordinates (r, φ) is

$r^2 = \frac{b^2}{1 - e^2 \cos^2\varphi}$   (1)

where b is the short axis of the ellipse, and e is its eccentricity. The relation between the short axis and the long axis is expressed by:

$b^2 = a^2 (1 - e^2)$   (2)
The maximum allowed height for points in the defined grid was 15 mm; all calculated y-coordinates higher than 15 mm were set to 15 mm. The mesh was exported as an stl-file and adapted in Magics©, where the upper side, above 13 mm, was cut. This was done to avoid the inaccuracies that showed up in the transition at y = 15 mm. The resulting geometry was an accurate representation of the elliptical paraboloïd with height 13 mm. The constructed meshes were simulated with the mesh-based model, and the forces necessary to induce a displacement on this root geometry were calculated. Three different translations were induced, namely ux = uy = uz = 2·10-4 mm, and three different rotations, θx = θy = θz = 2·10-5 rad. Secondly, the results were compared to those of the elliptical paraboloïd model [4].
III. RESULTS
where a is the long axis length. In the case of an 'elliptic' paraboloid, the long axis varies with the height y:

$a^2 = \frac{R^2 y}{h}$   (3)

where R is the length of the long axis at y = h. The y-coordinate of every point in the xz-plane is then defined by:

$y = \frac{4 h r^2 (1 - e^2 \cos^2\varphi)}{D^2 (1 - e^2)}$   (4)

where D = 2R is the long-axis diameter at y = h.
In both the described analytical models the stiffness matrix is built starting from the strains defined in a local curvilinear system. By transforming the strains to the global axes and substituting them into Hooke's law, the stresses are calculated. Together with the normals to the surface, the stresses define the tractions along the tooth surface. The stiffness matrix results from integrating the tractions over the total root geometry. The idea of this study was to integrate the appropriate formulas over a triangle mesh instead of an analytically described paraboloïd; in that way all existing types of root geometries can be simulated. To make this possible, the formulas were adapted to be applicable to one arbitrary triangle, the integration boundaries were defined for one arbitrary triangle, and the values of all individual triangles were added. To validate the methods, a comparison was made between the values resulting from the analytical model and those of the introduced mesh-based approach. Therefore an elliptical paraboloïd stl-set was created in Matlab©, defining a grid of 0.05 mm x 0.05 mm in the xz-plane and calculating the appropriate y-coordinate (height) according to equation 4 for an elliptical paraboloïd.
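The height-map construction for the stl-set can be sketched as follows (a Python stand-in for the Matlab procedure, using equation 4; the diameter D, eccentricity and grid extent are assumed illustrative values):

```python
import numpy as np

# Elliptical-paraboloid height map on a regular xz-grid, per Eq. (4),
# with heights capped at 15 mm as described in the text.
# D, e and the grid extent are assumed values; the pitch follows the text.

h, D, e = 15.0, 10.0, 0.3                 # height (mm), diameter (mm), eccentricity
x = np.linspace(-6.0, 6.0, 241)           # 0.05 mm pitch in the xz-plane
z = np.linspace(-6.0, 6.0, 241)
X, Z = np.meshgrid(x, z)

r2 = X**2 + Z**2                          # squared radius in the xz-plane
# cos^2(phi) measured from the long (x) axis; define 0 at the origin
cos2_phi = np.divide(X**2, r2, out=np.zeros_like(r2), where=r2 > 0)

# Eq. (4): y = 4 h r^2 (1 - e^2 cos^2 phi) / (D^2 (1 - e^2))
Y = 4.0 * h * r2 * (1.0 - e**2 * cos2_phi) / (D**2 * (1.0 - e**2))
Y = np.minimum(Y, 15.0)                   # cap heights at 15 mm

print(Y.min(), Y.max())
```

Triangulating this height field and writing the facets out would then give the stl mesh that is cut at 13 mm in the mesh-processing step.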
Table 1 Force necessary to cause a given displacement for elliptic paraboloïds with rising eccentricity

eccentricity | 0      | 0.1    | 0.2    | 0.3    | 0.4    | 0.5    | 0.6
ux-Fx        | 1.0725 | 1.066  | 1.0463 | 1.0129 | 0.9647 | 0.9    | 0.816
uy-Fy        | 0.1666 | 0.1656 | 0.1626 | 0.1575 | 0.1503 | 0.1406 | 0.1282
uz-Fz        | 1.0725 | 1.0738 | 1.078  | 1.0852 | 1.0958 | 1.1104 | 1.1301
thx-Mx       | 8.8603 | 8.8607 | 8.8625 | 8.8668 | 8.8759 | 8.8936 | 8.9262
thy-My       | 0.0393 | 0.039  | 0.0381 | 0.0367 | 0.0346 | 0.0319 | 0.0287
thz-Mz       | 8.8603 | 8.8063 | 8.6429 | 8.3652 | 7.9647 | 7.4272 | 6.73
The forces necessary to cause a displacement ux = uy = uz = 2·10-4 mm or a rotation θx = θy = θz = 2·10-5 rad were calculated for elliptical paraboloïd meshes with increasing eccentricity. The long axis length at height y = 13 mm was kept constant for all the geometries. The results are listed in Table 1. Due to the axisymmetric shape of the paraboloïd (eccentricity = 0), with the y-axis as the central axis, the forces necessary for a displacement in the x- and z-directions are equal, as are the moments Mx and Mz. With ascending eccentricity (decreasing z-axis length, i.e. short axis length) the needed force diminishes in the x-direction while it increases in the z-direction. This can be explained by the decrease in resisting surface for the x-direction and by the fact that the tooth gets flatter when moving in the z-direction. The change of the moments Mx and Mz can also be explained by these facts. In the y-direction the needed force Fy also decreases, due to the decrease in resisting surface. The moment around the y-axis was expected to increase, but instead shows a diminishing trend. An
explanation can be found in the two effects that have an impact on the moment My. The first is the deviation from a perfect circle to a more elliptic shape, which makes the geometry more resistant to torsion moments.
Figure 2 Forces Fy as calculated for an elliptic root shape set with constant volume
Figure 4 Stresses in the paraboloïd along the xy-plane, analytical results (blue,’line’) compared to the mesh results (green,’dots’)
The second is the decrease of the total volume that must be rotated. To eliminate the second effect, a new set of elliptic paraboloïds was assessed in which the total volume remained constant. The moment My of this geometry set showed the expected increase, as can be seen in Figure 2, while the other moments and forces changed in the same way as previously described. As an evaluation, the forces necessary to cause a displacement were also calculated with the analytical model, for the same displacements and rotations. Results for the forces Fx and Fz are plotted in Figure 3. One can see that the trends are the same: the analytical results (blue 'x') are approximately the same as the mesh-based results (red 'o'). Small disturbances can be attributed to the mesh accuracy and to the accuracy of the numerical integration in the analytical model.
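Keeping the volume of the paraboloïd set constant fixes how the axes must scale with eccentricity: the volume of an elliptic paraboloid of height h and top semi-axes a, b is V = πabh/2, and with b = a√(1−e²) (Eq. 2) a constant volume requires a ∝ (1−e²)^(−1/4). A short sketch; a0 and h below are assumed illustrative values, not the paper's geometry.

```python
import math

# Constant-volume set of elliptic paraboloids.
# Volume: V = pi*a*b*h/2 (cross-section area pi*a*b*y/h integrated over y).
# With b = a*sqrt(1 - e^2), constant V gives a = a0 * (1 - e^2)**(-1/4).
# a0 and h are assumed illustrative values.

a0, h = 5.0, 13.0
V0 = math.pi * a0 * a0 * h / 2.0             # volume of the axisymmetric case

for e in [0.0, 0.2, 0.4, 0.6]:
    a = a0 * (1.0 - e**2) ** -0.25           # long semi-axis for this e
    b = a * math.sqrt(1.0 - e**2)            # short semi-axis (Eq. 2)
    V = math.pi * a * b * h / 2.0
    assert abs(V - V0) < 1e-9 * V0           # volume is indeed unchanged
    print(f"e={e:.1f}: a={a:.3f} mm, b={b:.3f} mm")
```

This is one way the constant-volume geometry set could have been parameterized; any scaling with a·b held fixed gives the same volume.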
Figure 3 Forces Fx, Fz as calculated with analytical (blue,’x’) and mesh based (red,’o’)
The overall calculation time was limited to a few seconds for a triangle mesh consisting of ~40,000 triangles. A second evaluation was done based on the stresses reported by Provatidis [1], [6]. The stress patterns of the analytical model are compared to those of the mesh-based model in Figure 4. Again the presented model shows the same trends as the analytical approach.

IV. DISCUSSION

This study presents a model that calculates initial displacements from an applied force for tooth root geometries in a triangle mesh representation. Its concept is based on the work of Provatidis [1] and Van Schepdael [4], building a stiffness matrix for a paraboloïd tooth root. To validate the model, the forces necessary to induce a displacement were calculated for a mesh set of elliptical paraboloïds. This shows that the model is applicable to meshes of root geometries, with results similar to the analytical model. One drawback for the applicability of the model is its dependency on the mesh accuracy; but as the calculation time remains limited to a few seconds, it is possible to use large and accurate mesh representations. The presented method is a contribution to the modeling of tooth movement that is needed to make orthodontics more predictive and less based on trial and error. It can also be a valuable tool for the evaluation of tooth root approximations. Most authors use a paraboloïd as a root representation, but as stated by Vollmer and Bourauel [5], results for bodily
movement can differ significantly. To get a view of the correctness of an approximating paraboloïd, the presented mesh-based model can be a valuable tool.
ACKNOWLEDGMENT

This research was funded by IWT-Vlaanderen and Materialise Dental NV (Belgium). An Van Schepdael is a research assistant of the Research Foundation Flanders (FWO-Vlaanderen). Clinical input was provided by Prof. Dr. G. Willems of the Departments of Orthodontics and Forensic Odontology (Leuven).

REFERENCES
1. Christopher G. Provatidis (2001) An analytical model for stress analysis of a tooth in translation. Int. J. Eng. Sci. 39: 1361-1381.
2. Pedersen E., Andersen K., Gjessing P. E. (1990) Electronic determination of centres of rotation produced by orthodontic force systems. Eur. J. Orthod. 12: 272-280.
3. Burstone C.J., Pryputniewicz R.J., Bowley W.W. (1978) Holographic measurement of tooth mobility in three dimensions. J. Periodontal Res.: 283-294.
4. A. Van Schepdael, J. Vander Sloten (2008) [In Press] Effect of root form on stress patterns in the periodontal ligament. Conference Proceedings CMBBE.
5. D. Vollmer, C. Bourauel, K. Maier, A. Jäger (1999) Determination of the centre of resistance in an upper human canine and idealized tooth model. Eur. J. Orthod. 21: 633-648.
6. Christopher G. Provatidis, Demetrios T. Venetsanos (2007) Estimation of the flexibilities of tooth support of an ellipsoidal shape. 2nd International Conference on Experiments/Process/System Modelling/Simulation & Optimization (SDC).
Corresponding Author: Kris De Bondt
Institute: K.U.Leuven
Street: Celestijnenlaan 300C
City: 3001 Heverlee
Country: Belgium
Email: [email protected]
Recipe Suggestion System

Satoshi Morita1, Yasuyuki Shimada1, Tsutomu Matsumoto1, Shigeyasu Kawaji2, and Timothy Teo Zhong Hon3
1 Kumamoto National College of Technology, Koshi, Kumamoto, Japan
2 Graduate School of Science and Technology, Kumamoto University, Kumamoto, Japan
3 School of Information Technology, Temasek Polytechnic, Singapore
Abstract — This paper describes the concept and implementation of a ''Recipe Suggestion System'' (RSS) to enhance Quality of Life (QOL). The system presents an appropriate meal menu to the user based on medical information, body condition, eating history, favorite foods and market prices. Keywords — smart home, recipe suggestion system (RSS), intelligent electric appliances, lifestyle-related diseases, individualized system
I. INTRODUCTION

In 2003, the Japanese Ministry of Public Management, Home Affairs, Posts and Telecommunications reported that more than 80% of Japanese households have Internet access and mobile phones [1]. The ''smart home'', which centrally controls electronic and electrical appliances connected to a home network, has become popular in Japan.
For example, residents use a mobile phone as a key to the front door, and a user can check and change the operating state of the air-conditioner and room lights by mobile phone. Other applications let an outside user control the washing machine, microwave oven and HDD/DVD recorder. The smart home is not only convenient but also safe for residents. For example, an electric pot sends e-mail to the family whenever their aged parents use it; a security system alerts residents by e-mail if visitors or strangers arrive; and a medical examination device communicates with a distant doctor who takes care of the user [2][3]. Several intelligent electric facilities have been studied; the following are examples. An intelligent floor checks health condition from the movement of a resident. An intelligent bathroom checks the actions of a user and protects the user from accidents. An intelligent refrigerator manages the quantity and freshness dates of the food it stores [4]. These intelligent appliances can improve quality of life (QOL). In this paper, we propose a ''Recipe Suggestion System'' (RSS) to enhance the QOL. The RSS stores information from a medical examination device and an intelligent refrigerator, and generates recipes according to the user's desired methods. Currently, the RSS can generate recipes using 4 kinds of information: health condition, preferred ingredients, market price and eating history. For example, the RSS shows a low-calorie, low-salt meal menu for a person suffering from high blood pressure.

II. SYSTEM DESIGN
A. Scenario
Fig. 1 Scenario flow
This section explains the scenario of the RSS that we assume (Fig. 1). Each user has a profile containing, for example, the user's name, age, illnesses and favorite foods. The user profile is used not only for generating recipes but also for managing the eating history. Several techniques that obtain information about users automatically have been developed. One example is the intelligent refrigerator, which manages the foods inside by
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2596–2599, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
applying RFID tags. In the same way, the RSS can obtain the food information and update it automatically. The medical examination device checks the pulse rate, heartbeat rate, blood pressure and body temperature every day, so it can automatically update the health condition; the device also provides a feedback loop to the RSS. The manager of a supermarket updates the price list of foods; this information helps users to live economically. The above information is stored in the database, and the RSS suggests recipes to the user (Fig. 1). In this paper, we explain a user's flow.

B. User Interface

This section explains the user interface of the RSS.
• Creating a User Account
Fig. 3 Create User Profile form
The RSS needs user profile data to generate recipes. On the Login page window (Fig. 2), clicking the ''Create User Profile'' button brings the user to the Create User Profile form (Fig. 3), where the user enters his/her information details. The following values are required when creating a new account: ''User name'', ''Birth day'', ''Pulse rate'', ''Heart beat rate'', ''Type of Illness'' and ''Preferred Ingredients''. The ''Check Validity'' button makes the RSS check the database to see whether the username is available for use. This button also enables the ''Confirm'' button, which brings the user to the next form, where the user confirms his/her personal information inputs (Fig. 4). The ''Create New Profile!'' button makes the RSS create a new user account and insert it into the database. The Account Creation Successful dialog is shown in Fig. 5.
Fig. 5 The Account Creation Successful Dialog
Fig. 2 Login page window

Fig. 6 The Invalid Username Dialog
• Logging In
The logging-in process is simple: the user just needs to input a username, and the RSS checks the database to see whether such a username exists, prompting the user if the login is unsuccessful (Fig. 6).
• Update User Profile
After logging in with the registered account, the user sees the main menu form, which allows the user to choose what to do (Fig. 7). To change personal information, the user clicks the ''Update User Profile!'' button, which brings up the Update User Profile form (Fig. 8). In this form, the user can edit his/her personal information; however, the username cannot be changed, as it is used to log in. The Illness textbox field has a method that checks that the illness being entered is valid. Upon clicking the ''Edit!'' button, the RSS edits all personal information in the database.
Fig. 9 Choose method type(s) form

• Generating Recipe(s)
From the Main Menu form (Fig. 7), the user can click the ''Generate Recipe!'' button to generate a desired recipe. The form in Fig. 9 is then shown, allowing the user to choose the method type(s) used to generate the recipe:

1. Generate By Health: if the nutrition values of a dish are below the user's nutrition limits, the RSS recommends that recipe.
2. Generate By Preferred Ingredients: the RSS retrieves the user's preferred ingredients and matches them against the ingredients of each recipe in the recipe data.
3. Generate By Market Price: this method retrieves the cheapest main ingredient from the supermarket application (currently a database) and checks it against the recipe ingredients in the recipe data.
4. Generate By Eating History: the RSS retrieves the last three generated meals and recommends only meals that have not been tried recently.

Fig. 7 Main menu form

When the user clicks ''Confirm!'', the RSS displays the next form, which lists all recipes for the specific meal (Fig. 10). On this form, the user can view the details of every listed recipe by selecting a recipe name and clicking the respective button (e.g. Main Dish Details, Side Dish 1 Details, etc.).
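The four generation methods described above can be sketched as simple filters over the recipe data. The following Python sketch is an illustration only: the `Recipe` structure, the nutrition-limits dictionary and the eating-history list are hypothetical stand-ins, not the actual RSS implementation.

```python
from dataclasses import dataclass

@dataclass
class Recipe:
    name: str
    ingredients: set      # e.g. {"fish", "salt"}
    nutrition: dict       # e.g. {"sodium": 1.0}
    price: float          # price of the main ingredient

def by_health(recipes, limits):
    # Recommend recipes whose nutrition values stay below the user's limits.
    return [r for r in recipes
            if all(r.nutrition.get(k, 0) <= v for k, v in limits.items())]

def by_preferred_ingredients(recipes, preferred):
    # Match the user's preferred ingredients against each recipe's ingredients.
    return [r for r in recipes if preferred & r.ingredients]

def by_market_price(recipes):
    # Keep only recipes using the cheapest main ingredient (market data).
    cheapest = min(r.price for r in recipes)
    return [r for r in recipes if r.price == cheapest]

def by_eating_history(recipes, history, n=3):
    # Do not recommend the last n generated meals.
    recent = set(history[-n:])
    return [r for r in recipes if r.name not in recent]
```

The filters can be chained, which mirrors the user choosing several method types at once on the form in Fig. 9.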
Fig. 8 Update User Profile form
Recipe Suggestion System
Upon clicking one of those buttons, the corresponding dialog is shown (Fig. 11).
Once the user has selected the recipes and clicked the ''Confirm Meals'' button, the RSS inserts an entry into the eating history and displays the Recipe Information form (Fig. 12). On this form, the user can view the recipe details (e.g. energy, sodium, salt, cholesterol, etc.). From this form, the user can return to the Main Menu or print out the various recipes.

III. CONCLUSIONS
Fig. 10 Recipe list form
In this paper, we proposed the concept and design of an integrated information system that assists in planning well-balanced meals. The RSS is still at the laboratory stage: it requires more convenient functions and a simpler design, for example a function to generate recipes by staple food (bread / rice / noodles) and a clearer presentation of the output. Only fundamental functions are implemented, and the RSS is not yet easy for ordinary people to use. Furthermore, to be practical the RSS should cooperate with external systems, for example checking the quantity of foods in an intelligent refrigerator, obtaining pulse rate and heart rate from a medical examination device, and obtaining market prices from a supermarket. We also plan to make the RSS usable from a mobile phone outdoors. Finally, we must consider how to evaluate the system.
ACKNOWLEDGMENT

This project is funded by the TATEISI SCIENCE AND TECHNOLOGY FOUNDATION. The authors would like to thank the foundation.

Fig. 11 Recipe details dialog
Fig. 12 Recipe information form
An Object-oriented Model of the Cardiovascular System with a Focus on Physiological Control Loops

A. Brunberg1, D. Abel1 and R. Autschbach2

1 Institute of Automatic Control, RWTH Aachen University, Aachen, Germany
2 Department for Cardiac and Thorax Surgery, University Hospital Aachen, Aachen, Germany
Abstract — As a means for the analysis of physiological control loops, but also to synthesize technological support (e.g. an artificial heart) and to help deduce therapeutic measures, an object-oriented model in the form of an open and expandable library is developed. This modeling method has many advantages compared to existing signal-oriented simulation models, as will be shown in this paper. The simulation model shows good correspondence with physiological data, and offers a wide field of possible applications, such as research, development and training.

Keywords — Modeling, simulation, cardiovascular system, physiological control loops
I. INTRODUCTION

The proper function of the cardiovascular system is an important prerequisite not only for the survival but also for the well-being of a human being. If this system is severely disturbed, for example in patients with advanced, irreversible heart failure, appropriate measures have to be taken to sustain the necessary functions. In case of heart failure that can no longer be treated sufficiently with medication, a total artificial heart (TAH) or a ventricular assist device (VAD) is sometimes used as a temporary fixture until a suitable donor heart can be found. Ideally, such an assist device could adapt to physiological stress, e.g. climbing stairs or increased ambient temperature, using an automatic closed-loop control system, and thus sustain not only survival but also the well-being of the patient. However, implementation of a control algorithm calls for detailed knowledge of the process to be controlled: the circulatory system including physiological control mechanisms. A time-dependent mathematical model implemented in a simulation platform allows a better understanding of physiological control processes as well as the design and development of controls for technological assist devices. In this paper, an object-oriented model of the human cardiovascular system, based on a library structure in the design and simulation environment Modelica / Dymola and including physiological control mechanisms, will be presented. The advantages of this method of implementation will be exemplified by simulation results.
II. MATERIALS AND METHODS

A. Model requirements

Simulation models and platforms should fulfill several criteria to be suitable for controller design for TAH or VAD systems:

• flexibility, i.e. a fast and simple way to change simulated variables,
• expandability, i.e. an opportunity to integrate new (e.g. more refined) model components into the existing model,
• a multidisciplinary (e.g. graphical) representation of the model, and
• integration of Hardware-in-the-loop simulations.
A mathematical description of the cardiovascular system has been the object of many studies in the past decades. These models comprise descriptions of fast-changing processes (e.g. pressure-flow relations in the vessels, coupled with short-term regulatory mechanisms [1, 2]), analyses of long-term processes (e.g. the extensive model of physiological control mechanisms developed by Guyton et al. [3]), as well as combinations of both kinds of models. However, all of these models have a signal-oriented modeling approach as a common feature. Signal-oriented means that data between two components is exchanged in the form of a time-variant signal via a directional interface. Thus, cause and effect are given a direction, and output signals have to be set prior to simulation. This complicates fulfilling the requirements for flexibility and expandability. Another problem is caused by direct, i.e. undelayed, feedback of a signal, as numerical difficulties may occur during simulation. In contrast to the signal-oriented approach, an object-oriented way of modeling is better suited as a variable simulation platform for controller design, and it has been applied successfully to technological problems [4].

B. Object-oriented library

Using the simulation tool Modelica/Dymola, an object-oriented component library (see Fig. 1) is implemented,
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2600–2603, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Fig. 1 Library structure in Modelica/Dymola with an exemplary model of the cardiovascular system
Fig. 2 Fluid mechanic model of the cardiovascular system.
containing models of single organs (e.g. the heart), partial organs (e.g. vascular compartments) and models of nerval pathways and relevant parts of the central nervous system, in order to provide an opportunity to simulate physiological control mechanisms. Additionally, models of exemplary pathological changes and models of technological assist devices can be integrated as library components [5]. Based on this library, users can easily generate customized simulation models, test the function of individual components, and include their own models of organs, regulation mechanisms or assist devices. The models are built according to the main concepts of object-oriented programming, e.g. the definition of classes comprising the abstract characteristics of an object, the use of subclasses which inherit properties of more general classes, and the encapsulation of functional details inside a class. The library components communicate via non-directional interfaces, so-called connectors. In these, variables are linked depending on their classification as flow variables or as potentials. The latter have to be equal on both sides of the interface, while the former have to sum up to zero. Two main classes of connectors are defined in the model library: a blood connector used to connect vascular compartments, and one for nerval signals.
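The connector semantics described above (potential variables equal across a connection, flow variables summing to zero) can be illustrated outside Modelica. The following Python sketch is a conceptual analogy only, not Dymola code; `BloodConnector` and `check_connection` are hypothetical names.

```python
# Sketch of non-directional (acausal) connector semantics: at a connection
# point, potential variables (here: pressure) must be equal on all attached
# connectors, while flow variables (here: volume flow) must sum to zero.

class BloodConnector:
    def __init__(self, pressure, flow):
        self.pressure = pressure  # potential variable [mmHg]
        self.flow = flow          # flow variable [ml/s], positive into the component

def check_connection(connectors, tol=1e-9):
    reference = connectors[0].pressure
    same_potential = all(abs(c.pressure - reference) < tol for c in connectors)
    zero_net_flow = abs(sum(c.flow for c in connectors)) < tol
    return same_potential and zero_net_flow

# A branching point where one inflow splits into two outflows:
junction = [BloodConnector(95.0, +80.0),
            BloodConnector(95.0, -50.0),
            BloodConnector(95.0, -30.0)]
```

Because the interface carries no direction of cause and effect, components joined this way can be exchanged freely, which is the flexibility argument made above.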
C. Fluid mechanic description of the cardiovascular system

In order to study short-term effects in blood pressure regulation, a hydraulic model of the cardiovascular system based on [6] is implemented, as shown in Fig. 2. It includes the heart and five large vascular compartments (systemic arteries, splanchnic and extrasplanchnic circulation, thoracic veins, and pulmonary circulation). In the splanchnic, extrasplanchnic and pulmonary circulations, a distinction is made between peripheral and venous circulation; furthermore, pulmonary arteries are included. The vascular compartments can be described as a combination of resistive and capacitive terms (hydraulic resistance and compliance). In the large vessels in proximity to the heart, the inertial effects of the blood that is accelerated with each heart beat are taken into account as well. The model of the heart shows pulsatility by means of a periodic change in the ratio of pressure to volume (elastance) in the ventricle [2, 7, 8, 9, 10]; the contractile activity of the atria is not considered. The heart valves are modeled as unidirectional valves. A more detailed description of the model can be found in [5].
Pthor, intrathoracic pressure, Pabd, intraabdominal pressure.
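A single resistive-capacitive vascular compartment of the kind described above behaves like a two-element windkessel, C·dP/dt = Q_in(t) − P/R. The Python sketch below illustrates the idea; the parameter values and the half-sine inflow are purely illustrative assumptions, not the values used in the library.

```python
import math

# Two-element windkessel sketch of one vascular compartment:
#   C * dP/dt = Q_in(t) - P / R
# All parameter values are illustrative, not taken from the library.
R = 1.0        # hydraulic resistance [mmHg*s/ml]
C = 1.5        # compliance [ml/mmHg]
HR = 70 / 60   # heart rate [beats/s]

def q_in(t):
    # Crude pulsatile inflow: half-sine ejection during the first 30 % of
    # each cardiac cycle, zero during diastole.
    phase = (t * HR) % 1.0
    return 300.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0

def simulate(T=20.0, dt=1e-3, P0=80.0):
    # Explicit Euler integration of the compartment pressure.
    P, trace = P0, []
    for k in range(round(T / dt)):
        P += dt * (q_in(k * dt) - P / R) / C
        trace.append(P)
    return trace
```

After the initial transient, the pressure settles into a periodic waveform whose mean equals R times the mean inflow, which is the sense in which resistance and compliance characterize a compartment.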
D. Physiological Control Mechanisms

The baroreflex, as an example of a physiological control loop, has been included in the model. It acts as a means for short-term blood-pressure stabilization in the presence of disturbances such as blood loss or sudden changes of posture. The baroreceptors, stretch-sensitive sensors, register changes in blood pressure and react with a change in the frequency of nerval action potentials. In the central nervous system, the sympathetic and parasympathetic branches of the autonomous nervous system are activated and inhibited, respectively, to influence the contractility of the heart, heart rate and vasoconstriction, and thus stabilize blood pressure. The simulation model includes both arterial and cardiopulmonary baroreceptors. The former are modeled as a concentrated sensor located in the carotid
sinus; the latter ones are concentrated in the right atrium [2, 6]. Additionally, the model includes the effect of respiratory volume (measured with lung stretch receptors) on cardiovascular parameters [11].
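The principle of this control loop can be sketched with a toy proportional feedback model. Everything below (the set-point, the gains, and the crude algebraic pressure relation) is an illustrative assumption, far simpler than the baroreflex model in the library.

```python
# Toy baroreflex sketch: baroreceptors compare arterial pressure with a
# set-point; on a pressure drop, sympathetic activation raises heart rate
# and peripheral resistance, restoring pressure. Set-point, gains and the
# algebraic pressure relation are illustrative assumptions only.
P_SET = 93.0   # target mean arterial pressure [mmHg]

def baroreflex_step(hr, resistance, pressure, g_hr=0.5, g_r=0.005):
    error = P_SET - pressure
    return hr + g_hr * error, resistance + g_r * error

def simulate_blood_loss(steps=200):
    hr, resistance, volume = 70.0, 1.0, 1.0
    for k in range(steps):
        if k == 50:
            volume = 0.9   # sudden 10 % blood loss disturbs the loop
        # crude algebraic model: pressure scales with heart rate,
        # blood volume and peripheral resistance
        pressure = P_SET * (hr / 70.0) * volume * resistance
        hr, resistance = baroreflex_step(hr, resistance, pressure)
    return pressure, hr, resistance
```

After the disturbance the loop settles with elevated heart rate and resistance and arterial pressure back near the set-point, qualitatively matching the behaviour reported for the full model.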
Table 1 lists a number of typical hemodynamic parameters. It can be seen that the simulation data approximates the textbook data well in most cases.
E. Integration of a New Component in the Existing Model

Venous vessels can basically be described as tubes that can collapse under certain conditions. For positive transmural pressure, they show a nearly linear P-V curve; however, negative transmural pressure decreases the cross-sectional area significantly, resulting in increased resistance to flow. Fig. 3 shows the P-V curve for a collapsible vessel based on [12]. This model for a collapsible venous vessel is now integrated into the simulation model in place of the previously used library component with constant compliance. Both vessel models have blood connectors as their only interface to other components, and the code describing the properties of each component is encapsulated within it. For this reason, one component can be replaced with another one (having the same interfaces) easily and comfortably, without changing any other part of the simulation model. This principle works for more complex components (e.g. the heart) as well, as the concepts of object-oriented programming are applied in the entire library structure.
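The collapsible-vessel P-V relation can be sketched as a piecewise function: linear for positive transmural pressure, smoothly approaching a small residual volume for negative pressure. All shape parameters below are illustrative, not the values behind Fig. 3 or [12].

```python
import math

def vessel_volume(p_tm, v0=100.0, c_lin=2.0, v_min=5.0, k=0.5):
    # p_tm: transmural pressure [mmHg]; returns vessel volume [ml].
    # Positive transmural pressure: nearly linear P-V curve with constant
    # compliance c_lin. Negative transmural pressure: the cross-section
    # collapses and the volume approaches a small residual value v_min.
    # All shape parameters here are illustrative assumptions.
    if p_tm >= 0.0:
        return v0 + c_lin * p_tm
    return v_min + (v0 - v_min) * math.exp(k * p_tm)
```

Because both vessel models expose the same blood-connector interface, swapping the constant-compliance component for one based on such a curve changes no other part of the simulation model.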
Fig. 4 Comparison of simulation and textbook data [13] of the distributions of total blood volume in percent.
Table 1 Comparison of main hemodynamic parameters in simulation and textbook data.

Hemodynamic Parameter                 | Textbook Data [13] | Constant Compliance Model | Collapsible Vein Model
Cardiac Output [l/min]                | 5–6                | 6.3                       | 6
Stroke Volume [ml]                    | 80                 | 101                       | 75
Ejection Fraction                     | 0.67               | 0.56                      | 0.55
Systolic Duration [sec]               | 0.2                | 0.23                      | 0.24
Aortic Pressure (syst./diast.) [mmHg] | 120 / 80           | 102 / 80                  | 123 / 78
Heart Rate [beats/minute]             | 70                 | 60                        | 72
Peak Aortic Flow Rate [ml/sec]        | 500                | 800                       | 680
Fig. 3 Pressure-volume curve for a collapsible vessel: Compliance (slope of the P-V curve) changes significantly for negative transmural pressure.
III. RESULTS

A. Hemodynamics

In order to verify the simulation models, the main hemodynamic parameters have been determined from simulation data and compared to textbook data [13]. Fig. 4 shows the allocation of blood volume to the vessel compartments. The simulation results are close to the textbook data; however, in both models, the blood volume contained in the heart is too high by several percent.
Fig. 5 In reaction to a loss of blood volume at t = 100 sec heart rate increases, whereas the unstressed volume of the peripheral venous vessels is lowered. This stabilizes aortic blood pressure.
B. Baroreflex

Fig. 5 shows the results of a test of the baroreflex. Both simulation models show similar results; for the sake of brevity, the results of the model with constant venous compliance have been omitted. As a reaction to blood loss (10 % of total volume within 10 sec) at t = 100 sec, the heart rate rises, and the unstressed volume of the peripheral venous vessels falls. Thus, arterial pressure is stabilized.

IV. DISCUSSION
The results presented show a good correspondence to physiologically expected behavior. This can be seen considering the close match between the main hemodynamic parameters in simulation and literature on the one hand, and the analysis of the baroreflex control loop on the other hand. Some differences to physiological data remain; however, better parameterization of the simulation model is likely to correct this. Furthermore, the simulation model can be improved by using more refined models of individual components (e.g. the heart), and by adding other parts of the cardiovascular system, such as pulmonary gas exchange and respiratory mechanics, blood gases and their influence on hemodynamics.

V. CONCLUSIONS

In this paper, a new way of modeling the human cardiovascular system was presented: an object-oriented component library implemented with the modeling and simulation tool Modelica / Dymola. The library comprises models to simulate the pulsating heart, the vascular system and the baroreflex as an example of physiological control. The advantages of this modeling method were shown by exchanging a part of the simulation model for a more detailed description of this component. First results show a good correspondence to the relevant physiological effects and data. Modeling and graphical representation in Dymola are oriented on physiological structures, and thus allow easier understanding of the complete model than pure programming code or signal-oriented block diagrams. A simulation model like the one presented could, e.g., be used in the education and training of medical personnel as well as in the design of technological assist devices.

REFERENCES

1. Avolio AP (1980) Multi-branched model of the human arterial system. Med & Biol Eng & Comput 18: 709-718
2. Ursino M (1998) Interaction between carotid baroregulation and the pulsating heart: a mathematical model. Am J Physiol 275: H1733-H1747
3. Guyton AC, Coleman TG, Granger HJ (1972) Circulation: overall regulation. Ann Rev Physiol 34: 13-44
4. Nötges T, Hölemann S, Bayer Botero N, Abel D (2007) Objektorientierte Modellierung, Simulation und Regelung dynamischer Systeme am Beispiel eines Oxyfuel-Kraftwerksprozesses. at – Automatisierungstechnik 55: 236-243
5. Brunberg A, Autschbach R, Abel D (2007) Ein objektorientierter Ansatz zur Modellierung des menschlichen Herz-Kreislauf-Systems. at – Automatisierungstechnik, in press
6. Magosso E, Biavati V, Ursino M (2001) Role of the Baroreflex in Cardiovascular Instability: A Modeling Study. Cardiovascular Engineering 1: 101-115
7. Suga H, Sagawa K (1974) Instantaneous pressure-volume relationships and their ratio in the excised, supported canine left ventricle. Circ Res 35: 117-126
8. Gaasch WH, Cole JS, Quinones MA, Alexander JK (1975) Dynamic determinants of left ventricular diastolic pressure-volume relations in man. Circulation 51: 317-323
9. Piene H (1984) Impedance matching between ventricle and load. Ann Biomed Eng 12: 191-207
10. Hunter WC, Janicki JS, Weber KT, Nordergraaf A (1983) Systolic mechanical properties of the left ventricle. Effects of volume and contractile state. Circ Res 52: 319-327
11. Ursino M, Magosso E (2003) Role of short-term cardiovascular regulation in heart period variability: a modeling study. Am J Physiol Heart Circ Physiol 284: H1479-H1493
12. Lu K, Clark JW, Ghorbel FH, Ware DL, Bidani A (2001) A human cardiopulmonary system model applied to the analysis of the Valsalva maneuver. Am J Physiol Heart Circ Physiol 281: H2661-H2679
13. Klinke R, Pape H, Silbernagl S (2003) Physiologie. Georg Thieme Verlag, Stuttgart
Corresponding author: Dipl.-Ing. Anja Brunberg, M.S.
Institute of Automatic Control, RWTH Aachen
Steinbachstr. 54
52074 Aachen, Germany
Email: [email protected]
Computer Simulations of a Blood Flow Behavior in Simplified Stenotic Artery Subjected to Strong Non-Uniform Magnetic Fields

S. Kenjeres1 and R. Opdam1

1 Department of Multi-Scale Physics and J.M. Burgerscentre for Fluid Dynamics, Delft University of Technology, Delft, The Netherlands
Abstract — The paper reports the details of the derivation of a comprehensive mathematical model for the behaviour of a bio-magnetic fluid (human blood) subjected to a strong non-uniform magnetic field. The model consists of a set of Navier-Stokes equations accounting for the Lorentz and magnetization forces, and a simplified set of Maxwell's equations (Biot-Savart / Ampere's law) for treating the imposed magnetic fields. The hydrodynamic and electromagnetic properties of oxygenated and deoxygenated blood (which react differently to an external magnetic field) are collected from the literature. The model is then validated by performing numerical simulations of blood flow in a simplified healthy artery and an artery with stenosis, both subjected to external magnetic fields of different strengths and orientations. It is shown that for sufficiently strong magnetic fields (already for |B0| > 3 T at Re = 50) a significant reorganization of flow structures takes place, resulting in an increase of the wall-shear-stress (WSS). It is concluded that an imposed non-uniform magnetic field can create significant changes in the secondary flow patterns, thus making it possible to use this technique for the optimization of magnetically targeted drug delivery (MDT) as well as for understanding the blood flow patterns influenced by the magnetic fields of the new generation of MRI scanners (|B0| > 3 T).
Keywords — blood flow, magnetic field, magnetization force, Lorentz force, stenotic aorta, magnetic drug targeting

I. INTRODUCTION

One of the main problems of chemotherapy is often not the lack of efficient drugs, but the inability to deliver and concentrate these drugs in the affected areas. Failure to provide localized targeting results in an increase of toxic effects on neighbouring organs and tissues. One promising method to accomplish precise targeting is magnetic drug targeting (MDT). Here, a drug is bound to a magnetic compound and injected into the blood stream. The targeted areas are subjected to an external magnetic field that is able to affect the blood stream, and in these regions the drug is slowly released from the magnetic carriers. Consequently, relatively small amounts of a drug magnetically targeted to the localized disease site can replace large amounts of a freely circulating drug. At the same time, drug concentrations at the targeted site will be significantly higher than those delivered by standard (systemic) delivery methods. The potential of MDT in loco-regional cancer treatment has been demonstrated in a series of works by Alexiou et al. (2000, 2002, 2003, 2005). In addition to the MDT concept, interesting medical applications where interactions between blood flow and electromagnetic fields take place can be found in the new generation of magnetic resonance imaging (MRI) scanners that operate in strong magnetic field regimes (|B0| > 3 T). We believe that mathematical and computer modelling and simulations can provide many important insights into the underlying blood flow / magnetic field interactions, and can significantly contribute to further advancement of the MDT technique for patient-based treatments as well as to a better understanding of the influence of MRI scanner magnetic fields on blood flow. Since human blood is slightly electrically conductive and behaves as a paramagnetic (attracted by a magnetic field) or diamagnetic (repulsed by a magnetic field) fluid (deoxygenated and oxygenated blood, respectively), it is important to build a mathematical model that properly mimics the effects of the magnetization and Lorentz forces on the blood flow.

II. MATHEMATICAL MODEL

The equations describing a laminar incompressible flow of an electrically conducting bio-fluid (blood) subjected to external electromagnetic fields consist of the extended Navier-Stokes equations:
\[ \frac{\partial \mathbf{V}}{\partial t} + (\mathbf{V} \cdot \nabla)\mathbf{V} = \nu \nabla^2 \mathbf{V} + \frac{1}{\rho}\left( -\nabla P + \mathbf{F}_L + \mathbf{F}_M \right) \qquad (1) \]
where the additional forces caused by the imposed electromagnetic fields are the Lorentz force and the magnetization force, respectively. The Lorentz force is generated by the movement of an electrically conductive fluid in a magnetic field, whereas the magnetization force is the fluid's response to magnetic field gradients. They are calculated as:
\[ \mathbf{F}_L = \mathbf{J} \times \mathbf{B}, \qquad \mathbf{F}_M = \mu_0 (\mathbf{M} \cdot \nabla) \mathbf{H} \qquad (2) \]

J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2604–2608, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
In order to obtain a mathematically closed (solvable) system of equations, additional equations describing the distributions of the imposed electromagnetic fields and the generated electric potential are included as well (a simplified set of Maxwell's equations):
\[ \nabla \times \mathbf{H} = \mathbf{J}, \qquad \mathbf{J} = \sigma\left( -\nabla \Phi + \mathbf{V} \times \mathbf{B} \right), \qquad \nabla^2 \Phi = \nabla \cdot (\mathbf{V} \times \mathbf{B}) \qquad (3) \]

Fig. 1 Geometry of the simulated set-up: simplified stenotic artery; the magnetic field originates from a perpendicular wire carrying a strong electric current.
In blood vessels with diameters exceeding 0.1 mm, blood can be regarded as a practically homogeneous fluid, since the scales of the microstructures (with typical diameters of 8 μm for red and white cells and 2–4 μm for platelets) are much smaller than the flow scales, Pedley (1980). In this work we adopted a relatively simple model for magnetization, which assumes a linear dependency between magnetization and magnetic field intensity, giving the following final expression:
\[ \mathbf{F}_M = \mu_0 \chi \, |\mathbf{H}| \, \nabla |\mathbf{H}| \qquad (4) \]
where χ is the magnetic susceptibility. It has been experimentally determined that the magnetic susceptibility of human blood depends strongly on its local oxygenation, Haik (1999). Oxygenated blood behaves as a diamagnetic material (χ_oxyg = -6.6x10-7) and deoxygenated blood behaves as a paramagnetic material (χ_deoxyg = 3.5x10-6), Berkovsky et al. (1993), Haik et al. (1999), Fujii et al. (1999). This change in magnetic susceptibility is caused by the binding of oxygen to the blood protein haemoglobin, which is responsible for the transport of oxygen within the human body. The equation system Eqs. (1)–(4) is discretized using a second-order finite-volume method for general non-orthogonal geometries, and the numerical solver can be run on a single processor or on multiple processors using MPI directives, Kenjeres (2008). In order to reduce numerical diffusion, the second-order central-differencing scheme is used for all terms in the equations.
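For the single-wire configuration studied in this paper, Eq. (4) can be evaluated in closed form, since |H| = I/(2πr) for a straight wire. The Python sketch below shows the sign of the resulting radial force density for the two blood states; the current and radius in the example are arbitrary illustrative values.

```python
import math

# Closed-form evaluation of Eq. (4), F_M = mu0 * chi * |H| * grad|H|, for
# the field of a straight wire, where |H| = I / (2*pi*r). Current and
# radius values used in the tests are arbitrary illustrations.
MU0 = 4e-7 * math.pi      # vacuum permeability [T*m/A]
CHI_DEOXY = 3.5e-6        # deoxygenated blood: paramagnetic
CHI_OXY = -6.6e-7         # oxygenated blood: diamagnetic

def magnetization_force(r, current, chi):
    # Radial component of the magnetization force density [N/m^3];
    # negative values point toward the wire (direction of decreasing r).
    h = current / (2.0 * math.pi * r)              # |H| [A/m]
    dh_dr = -current / (2.0 * math.pi * r ** 2)    # d|H|/dr [A/m^2]
    return MU0 * chi * h * dh_dr
```

Deoxygenated (paramagnetic) blood is pulled toward the wire while oxygenated (diamagnetic) blood is pushed away, which is the mechanism behind the different flow reorganizations for the two blood states.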
III. RESULTS

The simulated geometry that mimics a stenotic artery is shown in Fig. 1. The simplified geometry is selected in order to focus the investigation on easily observable flow reorganizations under the influence of an imposed magnetic field. This study can also be used for the design of new experimental setups for investigating electromagnetic interactions with bio-magnetic fluids. In this study, we focus on a configuration with a single wire perpendicular to the artery. This orientation will create
Fig. 2. The 3D view of re-circulative flow patterns for neutral and situations with active magnetic fields perpendicular onto the flow direction: deoxygenated and oxygenated blood, Re=50 and stenosis severity of 50%.
strongly non-uniform magnetic field distributions. The configuration with a single wire extending along a simplified artery without stenosis was investigated in our previous work, Kenjeres (2008). The numerical mesh consisting of 82×82×122 control volumes proved to be sufficient for objective assessment and analysis of the obtained
Fig. 3 The flow patterns and corresponding locations of the imposed magnetic fields, shown in the central vertical plane (panels: neutral case, no magnetic field; magnetic field on at location 1, deoxygenated blood; magnetic field on at location 2, deoxygenated blood). The imposed magnetic field (|B0| = 10 T) originates from a perpendicular single wire at different locations in a stenotic artery (Ds/D = 50%, Re = 50).

(Fig. 4 panel labels: neutral, no magnetic field; magnetic field on, deoxygenated blood; magnetic field on, oxygenated blood.)
Fig. 4. The wall-shear-stress (WSS) distributions along the top and bottom line of a simplified stenotic artery subjected to external magnetic field, Re=50.
results. In contrast to similar numerical studies in the literature, Tzirtzilakis (2005, 2008), Khashan and Haik (2006), our geometry includes both the three-dimensionality of the phenomenon and a stenotic segment, Berger and Jou (2000). Different scenarios have been investigated for a range of Reynolds numbers (50
II. MODEL
of the problem. These two equations are coupled and have to be solved simultaneously. Namely, heating power per unit volume due to the Joule effect:
Fig. 1 Geometrical model of plate electrodes and tissue. Electrode length 21 mm, width 6 mm, thickness 0.5 mm; interelectrode distance: 4.4 mm.
A Multiphysics Model for Studying the Influence of Pulse Repetition Frequency on Tissue Heating During Electrochemotherapy
Fig. 2 Geometrical model of needle electrodes and tissue. Needle diameter: 0.7 mm; insertion depth: 7 mm; interelectrode distance: 8 mm.
Fig. 3 Temperature in selected points during pulse delivery for the model of plate electrodes (see Fig. 1). Pulse repetition frequency: 1 Hz.
electrodes and 1200 V for needle electrodes. The pulse repetition frequency was 1 Hz (standard) and 1 kHz (high). The contact of tissue and electrodes with the surrounding air was modeled as a convective boundary condition. The initial tissue temperature was 37 ºC. All simulations were performed within the COMSOL Multiphysics environment. For the 1 Hz repetition frequency, simulations were run for 10 s. Since the duty cycle is very low (100 μs / 1 s = 10⁻⁴), special attention was given to the control of time steps in the variable-step solver. For the 1 kHz repetition frequency, simulations were run for 8.5 ms.

III. RESULTS

For the model with plate electrodes we selected two characteristic points to track the time course of temperature during pulse delivery: 1 – in the tissue, exactly in the middle between the electrodes; and 2 – in the tissue, near the edge of the electrode (Fig. 1). The temperature in these points for nominal (0.126 S/m) and increased conductivity (0.504 S/m) for the pulse train with a repetition frequency of 1 Hz is shown in Fig. 3; the same for the repetition frequency of 1 kHz is shown in Fig. 4. Note the different time scales (s vs. ms). For the 1 kHz case, the tissue temperature is higher. However, the increase of the bulk temperature (T1), even for a fourfold increase in tissue conductivity, is below 3 ºC. Thus, the increase of the pulse repetition frequency from 1 Hz to 1 kHz can be considered safe for electrochemotherapy. High-frequency electrochemotherapy has several advantages for patients (a single unpleasant contraction instead of a number of individual contractions, and a shorter treatment time).
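The thermal argument above can be backed by a rough adiabatic estimate: each pulse deposits q = σE²·t_pulse of energy per unit volume, raising the local temperature by q/(ρc) before any conduction. In the Python sketch below, only the nominal conductivity and the 100 μs pulse duration come from the text; the tissue density, heat capacity and example field strength are illustrative assumptions, not the COMSOL model parameters.

```python
# Rough adiabatic estimate of tissue heating by electroporation pulses.
# Each pulse deposits q = sigma * E^2 * t_pulse [J/m^3], giving a local
# temperature rise dT = q / (rho * c_p) when conduction is neglected.
# rho, c_p and the example field strength are illustrative assumptions.
SIGMA = 0.126      # nominal tissue conductivity [S/m] (from the model)
RHO = 1000.0       # tissue density [kg/m^3] (assumed)
C_P = 3600.0       # tissue specific heat [J/(kg*K)] (assumed)
T_PULSE = 100e-6   # pulse duration [s]

def delta_t_per_pulse(e_field, sigma=SIGMA):
    # e_field: local electric field magnitude [V/m]
    return sigma * e_field ** 2 * T_PULSE / (RHO * C_P)

def duty_cycle(rep_freq_hz):
    # Fraction of time the field is on: 1e-4 at 1 Hz, 0.1 at 1 kHz.
    return T_PULSE * rep_freq_hz
```

At 1 Hz the tissue has roughly a second to conduct heat away between pulses; at 1 kHz the pulses arrive almost adiabatically, which is why the bulk temperature rise is larger but, as the simulations show, still modest.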
Fig. 4 Temperature in selected points during pulse delivery for the model of plate electrodes (see Fig. 1). Pulse repetition frequency: 1 kHz.
Nevertheless, tissue near the electrode edge (T2) is at higher risk of thermal damage (the temperature reaches 50 °C at the end of the pulse train if the tissue conductivity is 0.504 S/m).

For the model with needle electrodes we selected three characteristic points: 1 – in the tissue, exactly in the middle between the electrodes; 2 – in the tissue, 2 mm radially from the electrode surface (in the direction of the opposite electrode); and 3 – exactly at the contact between the electrode and the tissue; all at a depth of 3.5 mm (Fig. 2). The temperature at these points for nominal (0.126 S/m) and increased conductivity (0.504 S/m) for the pulse train with a repetition frequency of 1 Hz is shown in Fig. 5. The same for a repetition frequency of 1 kHz is shown in Fig. 6. As for the model with plate electrodes, the increase of the pulse repetition frequency to 1 kHz does not cause tissue overheating.

Fig. 5 Temperature in selected points during pulse delivery for the model of needle electrodes (see Fig. 2). Pulse repetition frequency: 1 Hz.

Fig. 6 Temperature in selected points during pulse delivery for the model of needle electrodes (see Fig. 2). Pulse repetition frequency: 1 kHz.

IV. CONCLUSIONS

For studying the influence of pulse repetition frequency on tissue heating during electrochemotherapy we developed a multiphysics model that involves electro-thermal interaction. The results of our simulations for two electrode geometries (plates, needles) show that the increase of the pulse repetition frequency from 1 Hz to 1 kHz causes an increase of bulk tissue temperature that is still low (< 3 °C) and unlikely to induce thermal damage. Overall, the results of this study further warrant the use of pulses with repetition frequency in the kHz range for electrochemotherapy.

ACKNOWLEDGMENT

This work was funded within the program of bilateral scientific cooperation between the Republic of Croatia and the Republic of Slovenia, and by national research grants.

REFERENCES

1. Neumann E, Sowers AE, Jordan CA (1989) Electroporation and electrofusion in cell biology. Plenum Press, New York
2. Teissié J, Golzio M, Rols MP (2005) Mechanisms of cell membrane electropermeabilization: A minireview of our present (lack of ?) knowledge. Biochim Biophys Acta - General Subjects 1724:270-280
3. Mir LM (2000) Therapeutic perspectives of in vivo cell electropermeabilization. Bioelectrochem 53:1-10
4. Serša G (2006) The state-of-the-art of electrochemotherapy before the ESOPE study: advantages and clinical uses. Eur J Cancer Suppl 4:52-59 doi:10.1016/j.ejcsup.2006.08.007
5. Colombo GL, Di Matteo S, Mir LM (2008) Cost-effectiveness analysis of electrochemotherapy with Cliniporator vs other methods for the control and treatment of cutaneous and subcutaneous tumors. Ther Clin Risk Manag 4:541-548
6. Marty M, et al. (2006) Electrochemotherapy – An easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: Results of ESOPE study. Eur J Cancer Suppl 4:3-13
7. Daskalov I, Mudrov N, Peycheva E (1999) Exploring new instrumentation parameters for electrochemotherapy. Attacking tumors with bursts of biphasic pulses instead of single pulses. IEEE Eng Med Biol Mag 18:62-66
8. Zupanic A, Ribaric S, Miklavcic D (2007) Increasing the repetition frequency of electric pulse delivery reduces unpleasant sensations that occur in electrochemotherapy. Neoplasma 54:246-250
9. Pucihar G, Mir LM, Miklavcic D (2002) The effect of pulse repetition frequency on the uptake into electropermeabilized cells in vitro with possible applications in electrochemotherapy. Bioelectrochem 57:167-173
10. Davalos RV, Rubinsky B, Mir LM (2003) Theoretical analysis of the thermal effects during in vivo tissue electroporation. Bioelectrochem 61:99-107
11. Pliquett U (2003) Joule heating during solid tissue electroporation. Med Biol Eng Comput 41:215-219
12. Lackovic I, Magjarevic R, Miklavcic D (2005) Analysis of tissue heating during electroporation based therapy: A 3D FEM model for plate electrodes. IFMBE Proc. vol. 8, Tsukuba, Japan, 2005
13. Lackovic I, Magjarevic R, Miklavcic D (2007) Analysis of tissue heating during electroporation based therapy: A 3D FEM model for a pair of needle electrodes. IFMBE Proc. vol. 16, Ljubljana, Slovenia, pp. 631-634 doi:10.1007/978-3-540-73044-6_164
14. Miklavcic D, Semrov D, Mekid H, Mir LM (2000) A validated model of in vivo electric field distribution in tissues for electrochemotherapy and for DNA electrotransfer for gene therapy. Biochim Biophys Acta 1523:233-239
15. Sel D, Cukjati D, Batiuskaite D, Slivnik T, Mir LM, Miklavcic D (2005) Sequential finite element model of tissue electropermeabilization. IEEE Trans Biomed Eng 52:816-827
16. Duck FA (1990) Physical properties of tissue: A comprehensive reference book. Academic Press, London

I. Lacković, R. Magjarević and D. Miklavčič
A Multiphysics Model for Studying the Influence of Pulse Repetition Frequency on Tissue Heating During Electrochemotherapy
IFMBE Proceedings Vol. 22
Author: Igor Lackovic
Institute: University of Zagreb FER
Street: Unska 3
City: Zagreb
Country: Croatia
Email: [email protected]
Transient Simulation of the Blood Flow in the Thoracic Aorta Based on MRI Data by Fluid-Structure Interaction

Dipl.-Ing. Markus Bongert1, Prof. Dr.-Ing. Marius Geller1, Dr. med. Werner Pennekamp2, Dr. med. Daniela Roggenland2, Prof. Dr. med. Volkmar Nicolas2

1 Center of Research in Computer Simulation in Mechanical Engineering, University of Applied Sciences and Arts Dortmund, Germany
2 Institute of Radiology, Universitätsklinikum Bergmannsheil, Ruhr-University Bochum, Germany
Abstract — High-grade heart valve stenoses and insufficiencies are treated with prosthetic heart valves. In 2006, the number of heart valve operations in Germany increased to 20,000. Numerous medical questions demand precise knowledge of the effect of aortic prosthetic valves and patient-specific aortic anatomy on blood flow. A network of cardio-thoracic surgeons, radiologists, cardiologists and engineers has developed a simulation model designed to investigate flow-induced effects preoperatively using the engineering method of fluid-structure interaction (FSI). Both methods, Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA), are coupled bidirectionally within a virtual simulation model. In this research project, Magnetic Resonance Imaging (MRI) is used for scanning with a cine SSFP sequence with bright-blood view. The whole heart and thoracic aorta are acquired in axial layers; each layer contains geometrical information at every time of acquisition. For further use, the structure of the 4D DICOM data has to be changed into several 3D data blocks of the whole geometry, one for every point of time of acquisition. After reconstruction, the geometry must still be identical to the anatomy. Physiological boundary conditions are used for the simulation model; in particular, the elasticity of the aorta (the so-called Windkessel effect) is considered. Among other results, the transient simulation visualises the distribution of velocity, pressure and wall shear stress. The calculated radial displacement of the aortic wall could be verified by MRI. In previous studies, the engineering method (CFD) was validated by reference measurements using MRI. They document the effect on blood flow of aortic prosthetic valves and particularly of patient-specific aortic anatomy. Data acquisition via MRI is essential because it spares the patient the effective dose of an ECG-controlled CT scan of up to 10 mSv. This technology reproduces the anatomy without compromising quality.

In the future, transient computer simulation will be available as a suitable tool for the clinician to analyse, preoperatively and non-invasively, patient-specific blood flow and its impact on e.g. the vascular wall.

Keywords — FSI, CFD, MRI, Patient-specific, Aortic Blood Flow
I. INTRODUCTION

In the case of a stenosis of a heart valve (Fig. 1), the left ventricle has to produce a higher pressure to guarantee a full supply of blood. A slow ascent of the pressure load causes a thickening of the myocardial muscle (hypertrophy). Among other things, atherosclerosis or a bacterial disease of the heart valves can bring about an aortic valve insufficiency. Because the aortic valve is no longer able to close completely, blood flows from the aorta back into the left ventricle during ventricular diastole. This effect causes an enlargement of the heart by expansion of its interior (dilatation). High-grade valve stenoses and insufficiencies are treated with prosthetic heart valves. In 2006, heart valve operations nationwide increased to 20,000 [1]. This is an increase of 4.7% over the previous year, caused by the growing number of aortic valve operations on older patients. There are many open questions about the effect of aortic prosthetic valves and patient-specific aortic anatomy on blood flow [2, 3, 4].

Fig. 1 Aortic valve (healthy, stenosis)

II. MATERIAL AND METHODS

For this reason, the use of the engineering method fluid-structure interaction (FSI) is recommended. Thereby the two methods, Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM), are coupled bidirectionally within a virtual simulation model.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2614–2618, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
A. Acquisition

Until now, the acquisition of patient-specific anatomy has been done by computed tomography (CT). In this research project a new way is chosen to scan the anatomy: Magnetic Resonance Imaging (MRI) is now used to obtain the necessary data. For this purpose, a cardio-MRI examination from clinical practice is applied. The examination is carried out with the cine SSFP sequence with bright-blood view (SSFP = steady state free precession). With this kind of sequence, a contrast medium is not necessary. Data acquisition via MRI spares the patient the effective dose (ED) of an ECG-controlled CT scan of up to 10 mSv.

The whole heart and thoracic aorta are acquired in axial layers. Each layer contains geometrical information at every time of acquisition (Fig. 2).

Fig. 2 MRI scan in layers

B. Separation

After acquisition, the information of the scanned individual anatomy is available in DICOM format. For further use, the structure of these DICOM data has to be changed (Fig. 3). Special software by MHGS builds, from the time-dependent geometrical data of each layer, several 3D blocks of the whole geometry, one for every point of time of acquisition.

Fig. 3 Changing the DICOM structure

C. Processing

Segmentation: Each 3D block is imported into the software Mimics©. An efficient as well as easy segmentation and editing of the individual layers is enabled by this software tool (Fig. 4). This work has to be done very carefully because the created geometry model should be identical to the scanned anatomy.

Fig. 4 Segmented aortic arch

Conversion: Every segmented layer has to be followed up, whereby the originality of the geometry has to be preserved while editing the segmented layers (Fig. 5). The geometry of the patient-specific aorta is exported in the CAD format STL (stereolithography format). This work step is also done with the software Mimics©.

Fig. 5 Aorta (segmented / edited)

Fluid / Solid: As a result of the conversion by Mimics©, the fluid model is generated (Fig. 6, left). The commercial software 3-Matic© thickens the fluid model in order to create the solid model (Fig. 6, right). The value of 1.59 mm is used as the average wall thickness of the aorta ascendens [5]; it correlates with MRI measurements. The line of sight in Fig. 6 (right) is from the aorta ascendens to the aortic arch. The inner surface of the solid model is the interface between the two domains (fluid / solid) during the FSI simulation.

Fig. 6 Fluid model (left) and Solid model (right)
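The separation step described above (regrouping the 4D DICOM series into one 3D block per acquisition time point) can be sketched as follows. The array layout and helper name are illustrative assumptions on our part; the actual restructuring is performed by proprietary MHGS software.

```python
import numpy as np

# Sketch of the "Separation" step: regroup a 4D cine-MRI series
# (one stack of slices, each acquired over the cardiac cycle) into
# one 3D volume per time point. Shapes and names are illustrative
# assumptions, not the MHGS tool's actual data model.

def split_into_time_blocks(series):
    """series: array of shape (n_slices, n_times, ny, nx).
    Returns a list with one (n_slices, ny, nx) volume per time point."""
    n_slices, n_times, ny, nx = series.shape
    return [series[:, t, :, :] for t in range(n_times)]

# toy example: 30 axial slices, 20 cardiac phases, 16x16 pixels
series = np.zeros((30, 20, 16, 16))
blocks = split_into_time_blocks(series)
assert len(blocks) == 20 and blocks[0].shape == (30, 16, 16)
```

Each resulting block is then a static 3D geometry that can be segmented independently, which is exactly what the subsequent processing step requires.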
D. Meshing

A surface mesh consisting of tetrahedral elements is laid onto the fluid model. Based on the surface mesh, which exactly represents the contour of the aorta, a hybrid volume mesh is created and visualised in Fig. 7. The grid is made up of hexahedral elements (core flow), prismatic elements (boundary layer) and tetrahedral elements between the two layers (Fig. 8).

Fig. 7 Grid on region INLET

Fig. 8 Assembly of the inner hybrid mesh

The volume mesh of the solid model is composed of tetrahedral elements; it is visualised in detail in Fig. 9.

Fig. 9 Solid mesh in detail

E. Boundary Conditions

Physiological boundary conditions such as the inlet velocity are used for the transient simulation (Fig. 10, left). The viscosity of blood as a non-Newtonian fluid is calculated by a functional relation with the shear strain rate as parameter (Fig. 10, right).

Fig. 10 Profile of velocity (INLET) and viscosity (blood)

The statically determinate bearing is important for the mechanical part of the FSI simulation (CSM). For each plane (inlet, outlets), coordinate systems are defined such that all nodes are able to move in the same z-direction (Fig. 11). The three-dimensional motion of the aorta is damped by the surrounding viscera, so shell elements with a defined stiffness are used to account for this inner resistance. The elasticity of the aorta (the so-called Windkessel effect) is also considered.

Fig. 11 Coordinate systems of the solid model
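The boundary-condition section computes the blood viscosity as a function of the shear strain rate but does not name the model. The Carreau law below, with commonly used literature coefficients, is an illustrative assumption rather than the authors' actual relation.

```python
# Shear-thinning blood viscosity as a function of shear strain rate.
# Carreau model with commonly cited literature coefficients; this is
# an assumed stand-in for the unnamed relation used in the paper.
MU_0 = 0.056      # Pa*s, zero-shear viscosity (assumed)
MU_INF = 0.00345  # Pa*s, infinite-shear viscosity (assumed)
LAM = 3.313       # s, relaxation time (assumed)
N = 0.3568        # power-law index (assumed)

def blood_viscosity(shear_rate):
    """Effective viscosity [Pa*s] for a shear strain rate [1/s]."""
    return MU_INF + (MU_0 - MU_INF) * (1.0 + (LAM * shear_rate) ** 2) ** ((N - 1.0) / 2.0)

low = blood_viscosity(0.01)     # near the zero-shear plateau (~0.056 Pa*s)
high = blood_viscosity(1000.0)  # near the Newtonian limit (~0.004 Pa*s)
```

At the high shear rates near the aortic wall the viscosity approaches its Newtonian limit, while the low-shear core flow is noticeably more viscous, which is why a constant-viscosity assumption would distort the wall shear stress results.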
III. RESULTS

There are manifold possibilities in postprocessing. Within this paper, the visualisation of the results of the transient simulation is done for dedicated points of time only (Fig. 12). The analysis of the simulation calculations is done exemplarily for the distribution of velocity, pressure and wall shear stress (Fig. 13-16). The results of the simulation require checking against measured values to ensure that the mesh quality is adequate and the correct boundary conditions have been selected. In previous studies [3, 4, 6], the engineering method (CFD) was validated by reference measurements using Magnetic Resonance Imaging (MRI). The calculated radial displacement of the aortic wall, of approximately 4 mm, could be verified by MRI. In Fig. 17 the minimum and maximum of the diameter of the aorta ascendens are shown.
Fig. 12 Dedicated points of time of the transient simulation

Fig. 13 Velocity distribution in plane INLET

Fig. 14 Velocity distribution in longitudinal section

Fig. 15 Pressure on the aortic wall

Fig. 16 Wall shear stress on the aortic wall

Fig. 17 Diameter of the aorta ascendens (min/max)

IV. CONCLUSIONS

Previous studies document the effect on blood flow of aortic prosthetic valves and particularly of patient-specific aortic anatomy. Data acquisition via MRI is essential because it spares the patient the effective dose of an ECG-controlled CT scan of up to 10 mSv. This technology reproduces the anatomy without compromising quality. The software from MHGS and Materialise used in this study has proved itself during the reconstruction of the patient-specific anatomy, its conversion into a CAD format and its processing. These software products close the gap between the imaging process and simulation techniques. In the future, transient computer simulation will be available as a suitable tool for the clinician to analyse, preoperatively and non-invasively, patient-specific blood flow and its impact on e.g. the vascular wall.

ACKNOWLEDGMENT

Kindly supported by Materialise GmbH and MHGS.

REFERENCES

1. idw (2007) at http://www.idw-online.de/pages/de/news228818
2. Pennekamp W. (2003) Aortale Flussbestimmung nach Aortenklappenersatz. http://www.bergmannsheil.de/503.0.html?&L=0%20class%3Dl
3. Pennekamp W., Laczkovics A., Nicolas V. (2004) Vergleich von AO-Klappen-Flowprofilen durch Through-plane PC-MRT in-vivo – erste Ergebnisse. Journal Fortschr Röntgenstr, DOI 10.1055/s-2004-827931
4. Bongert M., Geller M., Pennekamp W., Nicolas V. (2007) Simulationsmodell mit patientenspezifischer Anatomie auf Basis von MRT-Daten zur Berechnung der arteriellen Blutströmung mittels CFD. DGBMT Proc., Jahrestagung der Deutschen Gesellschaft für Biomedizinische Technik
5. Jeltsch M. et al. (2006) Messung der thorakalen Aortenwanddicke mittels 40-Schicht-Spiral-CT als möglicher subklinischer Parameter der koronalen Atherosklerose: Vergleich zwischen gesunden Probanden und KHK-Patienten. Journal Fortschr Röntgenstr, DOI 10.1055/s-2006-940735
6. Bongert M., Geller M., Pennekamp W., Nicolas V. (2006) Modell zur Simulation der Blutströmung nach einer künstlichen Aortenklappe mittels CFD. DGBMT Proc., Jahrestagung der Deutschen, Österreichischen und Schweizerischen Gesellschaften für Biomedizinische Technik

Author: Dipl.-Ing. M. Bongert
Institute: University of Applied Sciences and Arts Dortmund
Street: Sonnenstr. 43
City: Dortmund
Country: Germany
Email: [email protected]
Micro-gripping of Small Scale Tissues R.E. Mackay1, H.R. Le1, K. Donnelly1 and R.P. Keatch1 1
University of Dundee, Division of Mechanical Engineering & Mechatronics, Dundee, UK
Abstract — This paper describes the design and simulation of an integrated micro-electro-mechanical system (MEMS) to be used for small-scale tissue manipulation. The micro-grippers are to be used to test the mechanical cell adhesion properties of the gut epithelium. In the majority of sporadic colon cancers, the Adenomatous Polyposis Coli (APC) protein is mutated or missing. Mutations of APC occur extremely early in the development of cancer, before the formation of polyps. Micro-grippers were designed and finite element analysis (FEA) was used to find actuation displacements, tip temperature and stresses. Monolayers of gut epithelial tissue will be grown on collagen substrates and stretched under tensile force. Ni micro-grippers will be used to grip the substrate because of their high gripping stiffness and force resolution. SU-8 micro-grippers will be used to grip directly on the cell membrane to analyze cell adhesion forces with APC present or absent. The following paper shows the design of the system and the FEA of the micro-grippers.

Keywords — Micro-electro-mechanical systems, micro-gripping, finite element simulation, biomechanics, cell adhesion
I. INTRODUCTION

The majority of sporadic colon cancers occur when the adenomatous polyposis coli (APC) protein is mutated or lost [1]. The mutation of APC is also found in familial adenomatous polyposis (FAP) [2]. The APC protein is involved in a number of functions that control epithelial layers, including Wnt signalling, which supports the differentiation of cells by regulation of β-catenin. β-catenin and E-cadherin are needed for cell adhesion. APC also regulates cytoskeletal proteins and therefore acts in cell adhesion, migration and mitosis [3]. APC loss occurs very early in the progression of colon cancer, before the formation of polyps [4]. This project aims to characterize the mechanical properties of gut epithelium cells and tissue; in particular, cell adhesion forces with APC turned on or off will be examined. Micro-gripping of constructs smaller than 50 μm in diameter is a challenging problem. Several types of micro-grippers have been developed. Shape memory alloy grippers can only be used for a limited number of cycles before the shape memory alloy fails or loses its shape memory effects [5]. Piezoelectric actuators show high gripping forces with accurate displacement; however, they need high operating
voltages (10-100 V) and require amplification methods to obtain large displacements [6]. Thermoelectric bimorph micro-grippers are therefore of great interest. Luo et al. [7] developed three types of micro-grippers, including bimorph grippers, and examined their temperature rise and displacement. The results show small actuations of bimorph micro-grippers, which operate in a small power range; however, the Type III micro-gripper, initially developed by Lin et al. [8], with two horizontal hot arms, yields the largest displacement for a given power input. This paper adjusts the design of the Type III tweezers to allow them to be used under tissue-engineering conditions. Two materials will be investigated, Ni and SU-8. SU-8 is a non-conductive polymer; however, by adding a conductive metal path to the micro-grippers, actuation is possible [9, 10]. FEA will be used to find the design constraints; this will give an insight into structure optimization and operating conditions. Ni has poor biocompatibility; this will be overcome using a PTFE coating, which should also help reduce the tip temperature.

II. DESIGN CONCEPTS

A. Design Concept for the System

The design concept is shown in Figure 1. The structure will be manufactured using well-established MEMS processes on a silicon chip. The structure includes a stage with incorporated micro-grippers for holding tissue samples. A piezoelectric actuator will be used to load the tissues. The displacement will be measured using an optical fiber sensor; critical forces can then be derived. Removal of the substrate under the micro-grippers and springs will allow for smooth actuation. A collagen substrate will be used to grow the epithelial layers. The nickel system will be used to stretch the collagen substrate to examine the cell connectivity, while the SU-8 micro-system will be used to grip directly on the cell membrane.

B. Design of the Micro-Gripping System

A schematic of the micro-tweezers is shown in Figure 2. Assuming the hinges have minimal bending resistance, the tip displacement due to thermal heating can be derived using a simple lever relationship,
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2619–2622, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Figure 1. Schematic of the system (panel labels: electrode, micro-grippers, optic fibre, tissue, springs)

Figure 2. Design of the micro-grippers

δ = (LT / LH) · L · α · ΔT    (1)

where L is the horizontal hot arm length, α is the thermal expansion coefficient of the material, ΔT is the average temperature rise of the hot arm, LT is the total length of the gripping arm and LH is the length of the hot section of the gripping arm. For example, for Ni micro-tweezers with a hot arm length L = 750 μm, total arm length LT = 2 mm, hot section length LH = 50 μm and an average temperature rise of 50 °C, the tip displacement is about 50 μm. Although this simple model gives a good estimate, the displacement of the tweezers is also affected by the bending of the hot arms, the bending of the hinges and the strains in the hot arms. The gripping stiffness can be derived by assuming that the bending of the thick section is negligible, so that the arm rotation is determined by the deflection of the thinner section:

S = E·I1 / [L1 (LT² − LT·L1 + L1²/3)]    (2)

where E is the Young's modulus, and L1 and I1 are the length and second moment of bending of the thinner section, as indicated in Figure 2. For the dimensions given above, a thickness of 50 μm and a width of 20 μm, this gives a gripping stiffness of 224 N/m for Ni and 4.7 N/m for SU-8. The nickel micro-grippers would generate a gripping force of 1 mN when one arm is displaced by 4.47 μm, whilst the SU-8 arm would need a displacement of 213 μm. Again, there will be effects of the strains in the structure, which can be predicted accurately using FEA. Failure of the structure could occur through the following mechanisms: hot-arm overheating and hinge breakage. A full thermo-mechanical model is therefore required, and a robust FE model was developed to examine the displacement,
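Equation 2 can be checked numerically against the quoted force/displacement pair (1 mN at about 4.5 μm for Ni). The length L1 of the thin hinge section is not stated explicitly; taking it equal to the 50 μm hot-section length is our assumption, and with the Table 1 moduli it reproduces the quoted numbers to within a few percent.

```python
# Numerical check of the gripping-stiffness relation (Eq. 2) using
# the paper's dimensions. L1 is assumed equal to the 50 um hot-section
# length; this assumption is ours, not stated in the paper.
E_NI = 210e9       # Pa, Young's modulus of Ni (Table 1)
E_SU8 = 4.02e9     # Pa, Young's modulus of SU-8 (Table 1)
L_T = 2e-3         # m, total gripping-arm length
L1 = 50e-6         # m, thin (hinge) section length (assumed)
w, t = 20e-6, 50e-6  # m, beam width and thickness

I1 = w * t**3 / 12.0  # second moment of the thin section

def gripping_stiffness(E):
    """Eq. 2: S = E*I1 / [L1*(L_T**2 - L_T*L1 + L1**2/3)]"""
    return E * I1 / (L1 * (L_T**2 - L_T * L1 + L1**2 / 3.0))

S_ni = gripping_stiffness(E_NI)    # ~224 N/m
S_su8 = gripping_stiffness(E_SU8)  # ~4.3 N/m
# displacement required for the 1 mN gripping force quoted in the text
d_ni = 1e-3 / S_ni    # ~4.5 um, matching the quoted 4.47 um
d_su8 = 1e-3 / S_su8  # ~0.23 mm, close to the quoted 213 um
```

The SU-8 values land slightly off the quoted 4.7 N/m, which is consistent with the L1 assumption being only approximate.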
stresses, maximum temperature and tip temperature of the two devices.

III. MODELING AND OPTIMIZATION

Simulation of the micro-grippers was done using the finite element package ANSYS. The dimensions are in accordance with those given in the previous section. Directly coupled electro-thermo-mechanical elements were chosen (SOLID98). Tetrahedral elements of thickness 50 μm were used for meshing the model. The elements at the hinges were refined, as this was the area predicted to undergo the highest stress. Two materials were examined, Ni and SU-8; their material properties are shown in Table 1. The SU-8 micro-grippers were designed to be actuated by a 100 nm copper layer and a 10 nm chromium layer. The SU-8 model was designed in accordance with Nguyen et al. [9]; the electrical properties of the two thin metal layers and the thermal properties of SU-8 were used to model the micro-gripper. The resistivity of the two metal layers was calculated using Eq. 3 [9]. The micro-grippers are normally closed, so the tweezers cool on closing. The Ni and SU-8 micro-grippers were modeled in air, and convection to the surrounding fluid was included in the analysis. Tip temperature is an important constraint in the design of micro-grippers used to manipulate biological cells and tissues; it should not exceed 40 °C.
ρ = ρ1·ρ2·(h1 + h2) / (ρ1·h2 + ρ2·h1)    (3)
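Eq. 3 is the parallel combination of the two stacked metal films. A quick numerical check with textbook bulk resistivities (assumed values; thin films are usually somewhat more resistive) shows that the thick copper layer dominates the result.

```python
# Eq. 3: effective resistivity of the stacked Cu/Cr actuation layers
# on the SU-8 gripper (parallel conduction through a 100 nm Cu film
# and a 10 nm Cr film). Bulk resistivities are textbook values used
# here as assumptions; real thin films are typically more resistive.
RHO_CU, H_CU = 1.7e-8, 100e-9   # ohm*m, m
RHO_CR, H_CR = 1.3e-7, 10e-9    # ohm*m, m

def effective_resistivity(rho1, rho2, h1, h2):
    """rho = rho1*rho2*(h1+h2) / (rho1*h2 + rho2*h1)"""
    return rho1 * rho2 * (h1 + h2) / (rho1 * h2 + rho2 * h1)

rho = effective_resistivity(RHO_CU, RHO_CR, H_CU, H_CR)
# the thick, low-resistivity Cu layer dominates the parallel combination,
# so rho stays close to the copper value
```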
An input voltage is applied across the electrodes so that current can pass through the two hot arms. The micro-grippers open due to the Joule heating occurring in the hot arms. The opening of the micro-grippers can be seen in Figure 3; for an input voltage of 0.1 V the displacement is 19.9 μm.

Figure 3. Nickel displacement plot

Table 1 – Material Properties

Material | Young's modulus (GPa) | Thermal conductivity (W/mK) | Poisson ratio | Resistivity (Ω m) | CTE (K⁻¹) | Thermal convection in air (W/m²K)
Nickel [7] | 210 | 83 | 0.31 | 7.92×10⁻⁸ | 12.7×10⁻⁶ | 10
SU-8 [9] | 4.02 | 0.2 | 0.22 | 20×10⁻⁸ | 52×10⁻⁶ | 10

IV. RESULTS AND DISCUSSION

Various input voltages were applied to the micro-grippers to examine the effects of the materials on the displacement, temperature rise and stresses.

A. Effect of Materials

The micro-grippers were modeled in air. The effect of the input voltage on the relative displacement and tip temperature of SU-8 is shown in Figure 4 (a). Input voltages were 0.0025, 0.005, 0.0075 and 0.01 V. Large displacements were seen, from 66.4 μm to 242 μm. The tip temperature was kept low; the maximum increase in tip temperature was 1.4 °C, i.e. the tip temperature was 26.4 °C when the maximum voltage of 0.01 V was applied. The effect of the input voltage on tip displacement and temperature for Ni is shown in Figure 4 (b). Input voltages of 0.05, 0.1, 0.15 and 0.2 V were applied to the micro-grippers. The tip displacement increased with rising voltage from 10.2 μm to 112.4 μm, and the tip temperature from 45.1 °C to 209.7 °C. Ni micro-grippers show a much higher temperature rise at the tips than SU-8: Ni has a thermal conductivity around 400 times that of SU-8, while convection to air is only 10 W/m²K, so the high conductivity of Ni causes a high tip temperature.

Figure 4. Comparison of the displacement and maximum temperature of SU-8 (a) and Ni (b)

B. Effect of Porosity

The original models had no porosity. To decrease the tip temperature, various porous models were built and studied using FEA. Five models, solid (1), 50 μm pores (2), 20 μm pores (3), T-junction (4) and the final structure (5), were evaluated (Fig. 5). The tip temperature and displacement of each structure are shown in Fig. 5 for a voltage of 0.1 V; temperatures were reduced from 83.92 to 68.48 °C.

Figure 5. Porosity vs tip temperature

C. Stresses in the Structure

The hinge design has a large impact on the maximum von Mises stress of the structure (Fig. 6). The maximum stress was up to 390 MPa at 0.1 V without hinges. With hinges present, the von Mises stress was reduced to 259 MPa. These hinges showed a low gripping stiffness at the tips. To increase the gripping stiffness, the hinge radius was decreased from 8.5 μm to 6.5 μm; however, the von Mises stress increased. The maximum stress occurs solely at the hinges, as shown in Figure 7. A number of voltage inputs, 0.1, 0.15, 0.2 and 0.25 V, were investigated. Results for Ni are presented.

Figure 6. Von Mises stress for varying hinge designs

Figure 7. Von Mises stress at the central hinge and maximum stress at the LHS

V. CONCLUSIONS

An electro-thermo-mechanical model has been developed to investigate the displacement, tip temperature and stresses of a micro-gripping device. An operating voltage of 0.1 V will be used for micro-gripping of the collagen substrate. The results show that SU-8 has a large displacement for small operating voltages and a low temperature rise at the tips. The Ni micro-grippers showed slightly smaller displacements but had high tip temperatures. Porosity helped reduce the tip temperature from 80.1 °C to 68.5 °C. The tip temperature will be decreased further by a biocompatible PTFE coating, and cooling will occur during micro-gripper closure. SU-8 is biocompatible but has a much lower stiffness than nickel, which could cause difficulties in tissue handling: large displacements of the gripping arms will be needed to obtain the required gripping force, so the operating voltage will be 0.01 V.

ACKNOWLEDGMENT

Financial support of EPSRC and IDB Technologies Ltd. towards a PhD studentship for R.E.M. is acknowledged. The authors want to thank Prof. I. Nathke for biological input to the project.

REFERENCES

1. Nathke IS (1996) The adenomatous polyposis coli tumor suppressor protein localizes to plasma membrane sites involved in active cell migration. J of Cell Biology 134:165-79
2. McCartney BM, Nathke IS (2008) Cell regulation by the Apc protein: Apc as master regulator of epithelia. Current Opinion in Cell Biology 20:186-93
3. Dikovskaya D, Zumbrunn J, Penman GA et al. (2001) The adenomatous polyposis coli protein: in the limelight out at the edge. Trends in Cell Biology 11:378-84
4. Nathke IS (2004) The adenomatous polyposis coli protein: the Achilles heel of the gut epithelium. Annual Review of Cell and Developmental Biology 20:337-66
5. Kohl M, Just E, Pfleging W et al. (2000) SMA microgripper with integrated antagonism. Sensors and Actuators A: Physical 83:208-13
6. Nah SK, Zhong ZW (2007) A microgripper using piezoelectric actuation for micro-object manipulation. Sensors and Actuators A: Physical 133:218-24
7. Luo JK, Flewitt AJ, Spearing SM et al. (2005) Comparison of microtweezers based on three lateral thermal actuator configurations. Journal of Micromechanics and Microengineering 15:1294-302
8. Lin LL, Howe RT, Pisano AP (1993) A passive, in situ micro strain gauge. MEMS '93 Proc. An Investigation of Micro Structures, Sensors, Actuators, Machines and Systems, Fort Lauderdale, FL, USA, pp. 201-6
9. Nguyen NT, Ho SS, Low CL-N (2004) A polymeric microgripper with integrated thermal actuators. Journal of Micromechanics and Microengineering 14:969-74
10. Chronis N, Lee LP (2005) Electrothermally activated SU-8 microgripper for single cell manipulation in solution. Journal of Microelectromechanical Systems 14:857-63
Optimizing drug delivery using non-uniform magnetic fields: a numerical study J.W. Haverkort1 and S. Kenjereš1,2 1
Delft University of Technology / Multi-Scale Physics, Delft, The Netherlands 2 J. M. Burgerscentre for Fluid Dynamics, The Netherlands
Abstract — A comprehensive computational model for simulating magnetic drug targeting was developed and extensively tested in a cylindrical geometry. The efficiency of particle capture in a given magnetic field and geometry was shown to depend on a single dimensionless number. The effects of secondary flows, a non-Newtonian viscosity and oscillatory flow were quantified. Simulations under the demanding flow conditions of the left coronary artery were performed. Using the properties of present-day magnetic carriers and superconducting magnets, approximately one third of 4 μm particles could be captured with an external field. These promising results could open up the way to a minimally invasive treatment of coronary atherosclerosis.

Keywords — Magnetic Drug Targeting (MDT), Coronary Artery, Particle Capture, Inhomogeneous Magnetic Fields, Atherosclerosis

I. INTRODUCTION

The targeting of drugs to a specific location inside the human body can be significantly enhanced using magnetic fields. Drugs attached to a magnetic particle can be slowed down or even captured from the bloodstream in the presence of a non-uniform magnetic field. This promising magnetic drug targeting (MDT) technique for improving the specificity of, for example, chemotherapy has recently been successfully applied to human patients. We believe that numerical simulations can form a valuable tool for optimizing and estimating in advance the effectiveness of a treatment. Our goal is to be able to perform such simulations on a patient-specific basis; this paper reports some of the recent developments in that direction.

II. THEORY

A. Magnetization Force

Various materials acquire a net magnetic dipole moment when placed in an external magnetic field, in reaction to which a force is exerted on the material given by

F_M = μ0 V M·∇H    (1)

with V the volume of the material, μ0 = 4π·10⁻⁷ N/A² the magnetic permeability of vacuum, M the magnetization (magnetic dipole moment per unit volume) and H the applied (auxiliary) magnetic field. For many materials the magnetization can be taken approximately proportional to the applied magnetic field up to a certain value. Beyond this saturation magnetization M_sat all the constituent dipoles are aligned with the field and no further increase in magnetization is possible:

M = χH        for H < M_sat/χ
M = M_sat     for H ≥ M_sat/χ    (2)

For fully oxygenated blood the proportionality constant, the magnetic susceptibility, is approximately −6.6·10⁻⁷. This small and negative number implies, via Eqn. 1, a slight repulsive force from inhomogeneous magnetic fields.

B. Equations of motion

For the blood velocity u the Navier–Stokes equations for an incompressible fluid (∇·u = 0) are solved, augmented with the magnetization force:

ρ(∂u/∂t + u·∇u) = −∇p + η∇²u + F_M/V    (3)

The relative importance of the inertial forces compared to the viscous forces is denoted by the dimensionless Reynolds number Re = ρνl/η, with ν and l characteristic velocity and length scales of the flow under consideration. The particle trajectories r(t) and velocities u_p = dr(t)/dt of particles with mass m can be obtained from

m d²r(t)/dt² = F_D + F_M    (4)

J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2623–2627, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
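As a minimal numerical sketch of Eqns. 1 and 2, the piecewise magnetization law and the force magnitude (assuming collinear magnetization and field gradient) can be written as follows; the susceptibility and saturation values are illustrative choices, not properties of a specific carrier material.

```python
import math

MU0 = 4e-7 * math.pi  # magnetic permeability of vacuum (N/A^2)

def magnetization(H, chi, M_sat):
    """Piecewise magnetization law of Eqn. 2: M = chi*H below saturation,
    M = M_sat once all constituent dipoles are aligned."""
    if H < M_sat / chi:
        return chi * H
    return M_sat

def magnetization_force(V, M, gradH):
    """Magnitude of the magnetization force of Eqn. 1 (N), assuming the
    magnetization and the field gradient are collinear."""
    return MU0 * V * M * gradH

# illustrative values: chi = 3 and M_sat = 1e6 A/m
M_low = magnetization(1e5, 3.0, 1e6)   # unsaturated regime: 3e5 A/m
M_high = magnetization(1e6, 3.0, 1e6)  # saturated regime:   1e6 A/m
```

The same saturation switch is what justifies treating the iron carriers of Sec. IV.B as fully saturated.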
For small particle Reynolds numbers (with ν = |u − u_p| and l = D, the particle diameter) the drag force is F_D = 3πηD(u − u_p), and usually the particle acceleration is negligible such that F_D + F_M = 0. This balance between magnetization force and drag force shows that the relevant dimensionless quantity to be maximized for particle capture is

Mnp ≡ |F_M|/|F_D| = μ0 V M·∇H / (3πηuD) = μ0 D² M·∇H / (18ηu)    (5)
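Eqn. 5 is straightforward to evaluate. The sketch below, with an assumed field gradient of 10⁷ A/m² and magnetization of 10⁶ A/m chosen only for illustration, checks the scaling stated in the text: doubling the particle diameter permits a four times higher velocity at equal Mnp.

```python
import math

MU0 = 4e-7 * math.pi

def capture_number(D, M, gradH, eta, u):
    """Mnp of Eqn. 5: magnetization force over Stokes drag force."""
    return MU0 * D**2 * M * gradH / (18.0 * eta * u)

eta = 3.5e-3  # Pa s, the Newtonian blood viscosity used in the paper
# illustrative field values; double the diameter, quadruple the velocity
base = capture_number(2e-6, 1e6, 1e7, eta, 0.1)
scaled = capture_number(4e-6, 1e6, 1e7, eta, 0.4)  # same Mnp as base
```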
Fig. 1 A schematic overview of the geometry used. The line with a dot indicates the location of a current-carrying wire. The magnetization force and drag force on a particle are indicated. A tube length of 7 cm (larger for high Re) and a diameter of 7 mm have been used throughout.
This shows, for example, that for twice as large particles the flow velocity can be four times as high, or the field gradient four times as small, to capture the same amount of particles.

III. METHODS

The fluid and particle equations were solved in the Fluent 6.3 software by ANSYS, Inc., a linear multigrid finite volume solver. Eqn. 3 is solved fully implicitly with a quadratic upwind (QUICK) discretization of the nonlinear term. Various user-defined functions were written in C to extend the functionality of material properties, boundary conditions and external forces. Eqn. 4 is numerically integrated using a sixth order Runge–Kutta scheme whenever the first term is significant and an implicit scheme otherwise. Alternative schemes, various time steps and accuracy options were tested with no appreciable differences, providing confidence that the integration is performed accurately. To validate the implementation a comparison was made with an analytical result, for which satisfactory correspondence was obtained. For the complex geometry of the left coronary artery a mesh obtained from average data of the angiographies of 83 healthy patients [1] is used.

IV. RESULTS
A. Cylindrical geometry

Fluid magnetization: Due to its negative magnetic susceptibility, oxygenated blood experiences a magnetization force opposing an existing magnetic field gradient. Simulations in the cylindrical geometry of Fig. 1 showed that when an infinitely long current-carrying wire was placed halfway along the tube, fluid motion perpendicular to the main flow was induced, as shown in Fig. 2. These secondary flow patterns arise as fluid is pushed away in the axial direction,
Fig. 2 Secondary motions induced by the magnetization force of a wire carrying a current of 10⁵ A, placed at a distance of 1 cm from the cylinder axis. The coloring indicates the secondary flow velocity as a fraction of the average main velocity of 0.1 mm/s (Re = 0.2).
fluid is drawn in by continuity below the wire, where an opposing pressure gradient arises. In order to investigate the importance of these secondary motions for magnetic drug targeting, particles were inserted homogeneously distributed over the circular cross-section of the domain. It was found that the overall capture efficiency ((in − out)/in) decreased due to the downward fluid motion along the tube wall dragging especially the smaller, less attracted particles away from the magnet. The effect was found to be fairly small though, and dependent on the particle size. Even when the secondary flow became of the same order as the main flow, the capture efficiency was influenced by typically only 15%. The strength of the secondary motions was found to depend mainly on the strength of the magnetization force relative to inertial forces, and thus, for a given magnetic field, on the Reynolds number. Only for physiologically low Reynolds numbers do these secondary flows become of the same order of magnitude as the main flow.

Non-Newtonian viscosity: The viscosity influences the drag force on a particle directly, but also through its effect

Table 1 Material properties used for cylindrical geometry

Material property              Blood         Particles
Density (kg/m³)                1000          5150
Magnetic susceptibility (−)    −6.6·10⁻⁷     3
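The overdamped particle tracking behind these capture-efficiency results can be illustrated with a strongly simplified 2D stand-in for the Fluent simulations: a parabolic pipe profile plus the attraction of an infinite straight wire. Every numerical value below is illustrative, not taken from the paper's runs.

```python
import math

MU0 = 4e-7 * math.pi

def track_particle(y0, D, M, I, a, R, u_avg, eta, dt=1e-4, t_max=5.0, L=0.07):
    """Overdamped tracking (F_D + F_M = 0, cf. Sec. II.B) of a saturated
    particle in a 2D slice of a pipe: parabolic axial flow plus the radial
    attraction of an infinite wire at height a above the axis. Returns True
    if the particle reaches the wall (is captured) before leaving the tube."""
    x, y = 0.0, y0
    V = math.pi * D**3 / 6.0
    mobility = 1.0 / (3.0 * math.pi * eta * D)  # inverse Stokes drag coefficient
    t = 0.0
    while t < t_max and x < L:
        u_x = 2.0 * u_avg * (1.0 - (y / R) ** 2)        # Poiseuille profile
        d = a - y                                        # distance to the wire
        # infinite wire: H = I/(2 pi d), so |grad H| = I/(2 pi d^2)
        F_y = MU0 * V * M * I / (2.0 * math.pi * d**2)   # attraction toward wire
        x += u_x * dt
        y += mobility * F_y * dt
        if y >= R:
            return True
        t += dt
    return False

# under these illustrative conditions a 4 um particle is captured,
# a 250 nm one is swept out of the tube first
captured_big = track_particle(0.0, 4e-6, 1e6, 1e5, 0.01, 3.5e-3, 0.1, 3.5e-3)
captured_small = track_particle(0.0, 2.5e-7, 1e6, 1e5, 0.01, 3.5e-3, 0.1, 3.5e-3)
```

Because the drift velocity scales as D², the small particle crosses the tube far too slowly to be captured before it exits, which is the same size dependence the paper reports.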
Fig. 3 The capture efficiency as a function of Mnp for blood flow in a cylindrical pipe using a GPL viscosity model [2] compared to a Newtonian model for various Reynolds numbers.
on the flow field. Blood has a higher viscosity at smaller shear rates, existing e.g. near the centerline of a straight pipe flow. Accordingly, the flow profile becomes flattened compared to the parabolic profile arising for a constant viscosity. This flattening is most pronounced for low Reynolds numbers, and a decrease in centerline velocity by over 10% for Reynolds numbers below 200 was obtained using the generalized power law (GPL) model of [2]. In order to be able to compare the effect on the capture of particles for various Reynolds numbers, we used different particle sizes such that the quantity Mnp (evaluated using the average flow velocity and the field gradient at the centerline below the wire) was equal in all simulations. So for ten times higher Reynolds numbers we used ten times larger particles. From Fig. 3 we see that for a Newtonian viscosity model (with η = 3.5·10⁻³ Pa·s, as used up to now) the capture efficiency curves for all three simulated Reynolds numbers almost perfectly overlap. So using a two times larger particle diameter and at the same time a four times higher flow velocity in this case yields exactly the same capture efficiency. This implies that the characterization of the capture efficiency in terms of Mnp is a very good one. Equivalently, one can conclude that particle inertia is indeed negligible in this case. We also see, however, that using the same constant viscosity for Mnp does not suffice and that fewer particles are captured due to an increased viscosity at lower Reynolds numbers.

Oscillatory flow: To investigate what impact oscillatory flow has on the magnetic capturing of particles, a superposition of a steady and a harmonically oscillating flow of equal amplitude was used. For the inlet velocity the analytical solution that exists for unsteady periodic cylindrical pipe flow was used.
Periods of 5.5 s, 1.375 s and 0.34375 s were used to yield Womersley numbers of α ≡ R·√(2πρ/(ηT)) = 2, 4 and 8, representative for small arteries to very large arteries respectively.
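The quoted Womersley numbers follow directly from these periods, using the 3.5 mm tube radius and the blood properties given above:

```python
import math

def womersley(R, T, rho=1000.0, eta=3.5e-3):
    """Womersley number alpha = R*sqrt(2*pi*rho/(eta*T)) for period T (s),
    with the blood density and Newtonian viscosity used in the paper."""
    return R * math.sqrt(2.0 * math.pi * rho / (eta * T))

R = 3.5e-3  # tube radius (m), half the 7 mm diameter used throughout
alphas = [round(womersley(R, T), 2) for T in (5.5, 1.375, 0.34375)]
```

Halving the period twice doubles α twice, which is why the three periods give α = 2, 4 and 8.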
Fig. 4 The capture efficiency vs. particle diameter (μm) for various values of the Womersley parameter α, zero being a steady flow.
It was found that depending on the time of injection fewer or more particles can be captured compared to steady flow conditions, but that, averaged over one cycle, these differences are not significant (see Fig. 4). A similar simulation with elastic cylinder walls, extending up to 5% radially at the highest pressure, led to similar results. Although these conclusions might not hold in more complex geometries, they do show that the specifics of the transient flow profile might not be of crucial importance for the magnetic capturing of particles compared to the average flow.
B. Left Coronary Artery

We next test the capabilities of the magnetic drug targeting technique under the demanding flow conditions of a left coronary artery, using presently available magnetic fields and materials. We use state-of-the-art cylindrical superconducting magnets [3] in which a supercurrent flows primarily near the surface. Owing to this physical origin, a good fit to the field is obtained using that of a circular line current (diameter 4.5 cm) halfway up the magnet (i.e. 7.5 mm below its surface) for a current of 2.1·10⁵ A (see Fig. 5).
Fig. 5 The magnetic field B (T) and magnetization force per unit volume F/V (N/m³). The magnitude, vertical and radial components 5 to 7 cm below the circular current loop (right), and the magnitudes at the location of the coronary artery (left), are displayed.
Fig. 6 The distribution of particles, in seconds after the first injection of particles, as seen from the inlet. In the absence of a magnetic field the particles of different diameter overlap, as they follow approximately the same path due to negligible inertia. In the case of an applied magnetic field, especially the heavier particles are attracted towards the sidewalls, where they can exchange any attached drugs with the arterial wall.

Fig. 7 The inlet velocity u(t) (m/s) vs. time (s), and the mesh. A somewhat flattened inlet velocity profile u(r) = (4/3) u(t) (1 − (r/R)⁶) was used.

For the properties of the particles we resorted to the high-susceptibility, high-saturation-magnetization material iron, which has been successfully made into magnetically susceptible drug carriers using carbon coatings [4]. With 67.5 weight % iron, the particles have a density of approximately 6450 kg/m³ and a saturation magnetization of 10⁶ A/m. As the particles are almost completely saturated even for fields as low as 0.05 T, they are assumed saturated throughout the calculation. Particles with diameters of 250 nm, 500 nm, 1 μm, 2 μm and 4 μm were inserted homogeneously distributed over the inlet and spread over one flow cycle of 1 s at consecutive intervals of 0.1 s. The particle velocity was assumed to vanish after impact with the vessel wall. The cylindrical superconducting magnet was assumed to be positioned approximately at the patient's chest (its center at 5 cm from the vessel), directed towards the outer curvature of the left main coronary artery (the arrow in Fig. 5). The region opposite flow dividers is a well-known location of low endothelial shear stress, correlated with the formation of atheromatous plaques. A possible future application could include loading the carbon-coated particles with thrombolytic agents to prevent a threatening plaque rupture. The results are summarized in Fig. 6, where the effect of the magnetic field can be clearly seen from the distribution of primarily the largest particles near the arterial wall. A significant fraction of 34% of the 4 μm particles could be captured, with this efficiency rapidly decreasing for smaller diameters. Note that Mnp evaluated at the target location is approximately 1/20 for D = 4 μm, but still many particles can be captured owing to the wide extent of the magnetic field.
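The line-current fit of the superconducting magnet can be evaluated on the magnet axis with the standard circular-loop formula; the off-axis components of Fig. 5 would require elliptic integrals and are omitted. This is an illustrative on-axis estimate of the field and force level only.

```python
import math

MU0 = 4e-7 * math.pi

def loop_axis_B(I, a, z):
    """On-axis flux density (T) of a circular line current of radius a (m)
    at axial distance z (m): B = mu0*I*a^2 / (2*(a^2 + z^2)^(3/2))."""
    return MU0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

def force_per_volume(M_sat, I, a, z, dz=1e-5):
    """Axial magnetization force per unit volume (N/m^3) on a saturated
    particle, F/V = M_sat * dB/dz, via a central difference."""
    dBdz = (loop_axis_B(I, a, z + dz) - loop_axis_B(I, a, z - dz)) / (2.0 * dz)
    return M_sat * dBdz

I_fit, a_fit = 2.1e5, 0.0225  # fitted current (A) and loop radius (m)
B_5cm = loop_axis_B(I_fit, a_fit, 0.05)             # field 5 cm below the loop
FV_5cm = force_per_volume(1e6, I_fit, a_fit, 0.05)  # negative: toward the loop
```

At 5 cm this gives a field of roughly 0.4 T, comfortably above the 0.05 T needed to saturate the carriers, with a force per volume on the order of 10⁷ N/m³ directed toward the magnet.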
V. CONCLUSIONS

The efficiency of magnetic particle capture was found, for a given geometry and magnetic field, to be effectively described by a single dimensionless number representing the ratio between the magnetization and the drag force. A higher viscosity, however, should be used at smaller shear rates to take into account the non-Newtonian behavior of blood. Secondary and oscillatory flows were found to be of little importance. Simulations of the left coronary artery showed that present-day materials can be used to magnetically capture a significant fraction of 4 μm particles using external fields. These results could lead the way to a minimally invasive treatment of coronary atherosclerosis.
ACKNOWLEDGMENT

Much gratitude goes out to Johannes V. Soulis (Democritus University of Thrace, Greece) for providing the left coronary artery mesh and pulsatile flow profile.
REFERENCES

1. Giannoglou GD, Soulis JV, Farmakis TM, Louridas GE (2003) Molecular Viscosity Distribution in the Left Coronary Artery Tree. Comp in Cardiol 30:641–644
2. Johnston BM, Johnston PR, Corney S, Kilpatrick D (2003) Non-Newtonian blood flow in human right coronary arteries: steady state simulations. J Biomech 37:709–720
3. Takeda S-i, Mishima F, Fujimoto S et al. (2007) Development of magnetically targeted drug delivery system using superconducting magnet. J Magn Magn Mat 311:367–371
4. Cao H, Huang G, Xuan S, Wu Q et al. (2008) Synthesis and characterization of carbon-coated iron core/shell nanostructures. J Alloys Comp 448:272–276
Author: J.W. (Willem) Haverkort
Institute: Delft University of Technology, Department of Multi-Scale Physics
Street: Prins Bernhardlaan 6
City: Delft
Country: The Netherlands
Email: [email protected]
A real bicycle simulator in a virtual reality environment: the FIVIS project

O. Schulzyk1, U. Hartmann1, J. Bongartz1, T. Bildhauer1, R. Herpers2,3

1 Department of Mathematics and Technology, RheinAhrCampus, University of Applied Sciences, Remagen, Germany
2 Department of Computer Science, University of Applied Sciences Bonn-Rhein-Sieg, Sankt Augustin, Germany
3 Department of Computer Science and Engineering, York University, Toronto, Canada
Abstract — For almost all modern means of transportation (car, train, airplane) driving simulators exist that provide realistic models of complex traffic situations under defined laboratory conditions. For many years, these simulators have been successfully used for drivers' training and education and have contributed considerably to overall road safety. Unfortunately, there is no such advanced system for the bicycle, although the number of bike accidents has been increasing against the common trend during the last decade. Hence the objective of this project is to design a real bicycle simulator that is able to generate any desired traffic situation within an immersive visualization environment. For this purpose the bike is mounted onto a motion platform with six degrees of freedom that enables a close-to-reality simulation of external forces acting on the bike. This system is surrounded by three projection walls displaying a virtual scenario. A physical model is developed in order to compute the bike's mechanical behavior corresponding to the visualized traffic and the reaction of the driver. In order to validate the model, an off-the-shelf mountain bike is equipped with a set of physical sensors (e.g. acceleration, steering angle, declination) to monitor the mechanics of the bike during real test drives. This data is also used to feed the motion platform with real measurements (e.g. to model a bumpy street). As the driver in our bike simulator experiences both controllable physical and visual stimuli, this system facilitates a range of completely new applications in the field of safety at work, in the area of neuropsychological research and in road safety education.

Keywords — bicycle, simulator, motion platform, virtual reality, physical model, immersive.
I. INTRODUCTION

The objective of this project is to develop a real bicycle simulator with the ability to represent real-life traffic situations as a virtual scenario within an immersive environment. The bicycle is fixed onto a motion platform to enable a close-to-reality simulation of turns and balance situations. The platform is therefore fed with data that was previously recorded during real bicycle test drives, and also with accelerations that have been computed using a model of the bike's physical behaviour. The inputs to this model are the actions of the cyclist and the virtual environment acting on the bike. Different sensors measure forces, step rate and steering angle of the bike, and the visualisation software sends a protocol with track information related to the front and rear tire. Because the platform cannot provide unlimited movement due to its limited workspace, and because centrifugal forces cannot be simulated, solutions must be found that trick the cyclist's perception. A common trick is to use the force of gravity to substitute for translational inertial acceleration. As the projection plane of the visual scenario is fixed to the ground while the platform is moving, such an approach cannot be used extensively here. Therefore the focus is placed on the initial movement in order to boost the visual perception. A classic washout filter then moves the platform back to its starting position. To obtain a realistic simulation, a smooth interaction between the individual parts must be arranged. The layout is shown with a brief explanation of its functionality.
II. SIMULATION SETUP

A. Immersive visualization environment

The platform (including bike and cyclist) is placed within a set of three projection walls. Each rear-projection screen (1.36 m x 1.02 m) seamlessly connects to the next at an angle of 120°. The field of vision is almost completely covered. This arrangement provides an immersive visualization environment and uses the Immersion Square technology [1, 2]. The whole assembly is 5 m wide, 3.5 m deep and 2.5 m high. Figure 1 shows a sketch of the simulation setup; Figure 2 shows a photograph taken during a presentation, which gives an impression of the immersive effect generated by the enveloping screens. The intensity of this effect depends on the individual and in some situations can even lead to dizziness and great excitement. As different basic visual stimuli can be added to the virtual environment, it is possible to examine the impact of visual perception on physical and mental performance.
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2628–2631, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
C. The bicycle

The bicycle is firmly attached to the platform and is equipped with a set of actuators and sensors:

• Pedaling resistance: The rear wheel drives a roll. Its drag is controllable. It is realized by an eddy current brake and a flywheel.
• Steering torque: Two types of reaction forces are transferred to the handle bar that need to be simulated, the active and passive reaction forces. During a bike ride forces act on the front wheel and generate torque in the handle bar. Thus, two pneumatic muscles are attached on either side of the fork and act like adjustable springs.
• Speed: Magnetic switches are used to count the revolutions of the rear wheel.
• Front brake sensor: A load cell is situated between the shoes of the front brake.
• Steering angle: An angle transmitter is mounted between shaft and frame.

Fig. 1 Motion platform and virtual environment

III. COMMUNICATION PROTOCOL
A. The protocol
Fig. 2 Ride through a virtual scenario

B. The motion platform

An off-the-shelf bike is rigidly fixed to a motion platform. This platform performs all the movements of the bike. The chosen hexapod design offers robustness plus six degrees of freedom. It consists of a lower and an upper triangular frame, connected by hydraulic actuators. Their combined changes in length lead to the desired change of position. So far, the simulation of the bicycle ride does not require all degrees of freedom: rotation about the vertical axis and sideways translation will not be implemented.
The exchange of information between the physical unit (bike, sensors and platform) and the visual simulation is essential for a homogeneous interplay, and a sine qua non for a realistic handling sensation of the bike. A UDP protocol is used for the transmission of this information. The message sent contains data that gets updated by its associated partner. The update frequency is set by the physical unit, and the visual simulation sends its response directly after receiving a message. Currently the transmission frequency is set at 25 Hz. Because of the continuous updates, the variation between successive messages is small and no crosscheck whether a packet has arrived correctly needs to be implemented. Table 1 summarizes what kind of information is sent by the physical unit and the visualization. The visual simulation basically receives two inputs: speed and steering angle. These determine the direction of travel and the

Table 1 Data members of the communication protocol

Data from bike                         Data from visualization
Steering angle                         Friction
Speed                                  Position vector front wheel bottom
Platform rotation (pitch, yaw, roll)   Position vector front wheel front
Platform translation (x, y, z)         Position vector rear wheel bottom
                                       6D acceleration vector of bike
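A possible binary layout for the bike-to-visualization message of Table 1 can be sketched as below; the field order and float32 encoding are assumptions for illustration, not the actual FIVIS wire format.

```python
import struct

# hypothetical layout of the message from the physical unit (Table 1):
# steering angle, speed, platform rotation (3 values), platform
# translation (3 values), all little-endian float32
BIKE_FORMAT = "<8f"

def pack_bike_message(steering, speed, rotation, translation):
    """Serialize one message from the physical unit."""
    return struct.pack(BIKE_FORMAT, steering, speed, *rotation, *translation)

def unpack_bike_message(data):
    """Deserialize a received message into named fields."""
    v = struct.unpack(BIKE_FORMAT, data)
    return {"steering": v[0], "speed": v[1],
            "rotation": v[2:5], "translation": v[5:8]}

# a sender would transmit this every 40 ms (25 Hz) over a UDP socket, e.g.
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, (host, port))
msg = pack_bike_message(0.1, 5.0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

A fixed-size datagram makes the "no crosscheck needed" design workable: a lost packet is simply superseded by the next 40 ms update.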
velocity, i.e. the frame rate. Information about the position of the platform can be helpful in order to adjust projection settings, as the viewer moves together with the platform. The information sent by the visualization unit delivers a description of the track that is relevant to bicycle handling. A vector can describe the orientation of the ground with reference to the gravitational vector. As there are two contact points of the bike with the ground, this is done for the back and the front wheel. A collision with some object, however, in most cases first occurs at the front end of the bike; hence one more vector describes this location. In order to capture further forces that act on the bike, rotational and translational accelerations are also delivered. The condition of the track is represented by a numerical value describing its inherent friction.

B. Software Setup

A data acquisition board constantly records the different sensor parameters. Its resolution for analog signals is 12 bit at a sample rate of 1.25 MHz. It also possesses digital inputs and outputs, two 24-bit counters and two analog outputs. The data acquisition runs within its own timer thread, and every 10 ms the different sensor values are taken. Furthermore, a server thread continuously listens for incoming messages from the visualization unit. This periodic update of bicycle and visualization parameters permits a continuous computation of the state variables by the physical model. These state variables are then sent to the platform and the visual simulation, where further calculation is carried out, e.g. the computation of the platform's inverse kinematics transforms the desired position and orientation of the platform into linear displacements of its pistons. The code is written in C++ to guarantee easy portability and reusability.

IV. PHYSICAL MODEL

Both the visualization software and the program that controls the platform and the actuators need their own physical model.
Within the virtual environment, collisions need to be detected, the field of vision has to be calculated and the behavior of the virtual bicycle has to be defined. For this, a commercial physics engine (PhysX SDK from AGEIA) is used. The physics that controls the movement of the real bike is implemented as a C++ program.

A. Velocity and pedaling resistance

Measuring the current velocity is simple: small permanent magnets move over a magnetic switch while the
rear wheel turns. The counts in relation to time and wheel diameter lead to the momentary speed. The power required to overcome the aerodynamic drag is given by

P = F·v = ½ ρ ν³ A C_D    (1)
Here ρ is the density of the air, ν is the bike's speed, A the reference area and C_D the drag coefficient. As this drag again affects the travelling speed, a closed-loop system is set up. But the pedaling resistance derives from more forces that need to be considered. Data from the virtual scenery provides information on the inclination of the track and the rolling resistance. Driving uphill generates a force that points in the direction opposite to the motion. An additional force that also contributes to the pedaling resistance occurs when the front brake is applied. Combining these forces finally leads to the output value for the eddy current brake.

B. Curve radius and lean

The geometry of the bike and the steering angle define the turning radius r. In order to turn, the bike must lean to balance the relevant forces. The leaning angle is found by using the laws of circular motion (g is the acceleration of gravity):

θ = arctan(ν²/(g r))    (2)
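Eqns. 1 and 2 are easy to check with representative numbers; the air density, frontal area and drag coefficient below are illustrative assumptions, not values from the FIVIS setup.

```python
import math

def aero_power(rho_air, v, A, c_d):
    """Power needed to overcome aerodynamic drag, Eqn. 1: P = 1/2 rho v^3 A C_D."""
    return 0.5 * rho_air * v**3 * A * c_d

def lean_angle(v, r, g=9.81):
    """Lean angle from circular motion, Eqn. 2: theta = arctan(v^2 / (g r))."""
    return math.atan(v**2 / (g * r))

# illustrative numbers: 5 m/s through a 10 m radius turn,
# frontal area 0.5 m^2 and C_D = 1.0 for an upright cyclist
P = aero_power(1.2, 5.0, 0.5, 1.0)               # 37.5 W
theta_deg = math.degrees(lean_angle(5.0, 10.0))  # about 14.3 degrees
```

The cubic dependence on ν in Eqn. 1 is why the eddy-current brake load must be recomputed continuously in the closed loop.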
Tests have shown that, due to the lack of centrifugal forces during a simulated ride, the calculated lean angle cannot be adopted as it stands. In order to avoid sliding off the bike, the rider leans in the opposite direction. Satisfactory results have been achieved when the platform tilts away from the center of its circular path, so that the driver needs to lean inwards. Although in this case the simulation behaves contrary to a bike ride outdoors, the sensation the driver gets is nevertheless closer to reality.

C. Accelerations and washout filter

The visualization software features its own physics engine. It is based on both internal data and information received from the physical bike via the UDP protocol. Accelerations in various directions can be caused by obstacles, a bumpy street, a jump off the road curb and so forth. They are sent to the computer that drives the platform and the actuators of the bike. Here, a second physical model evaluates these accelerations and, in case the current position of the platform permits further displacement, initiates a dynamic impulse. This initial boost by itself has a major
impact on the human perception of accelerations [3]. After each translational displacement, a basic washout filter causes the platform to slowly return to the initial point.

V. DATA RECORDING SYSTEM

A standard mountain bike is equipped with a set of different sensors and a data logging unit [4]. During real test drives a wide variety of data can be recorded for a later verification of platform motion. For instance, information on the acceleration of the bike's frame and about the swept volume of the suspension fork while driving on tracks of different conditions can be used to provide realistic ride characteristics within the simulation. Figure 3 shows a mountain bike with the described setup.
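The washout behaviour of Sec. IV.C (apply the commanded impulse, then drift back to the neutral position) can be sketched as a first-order decay; the time constant is an illustrative choice, not the filter actually used in FIVIS.

```python
def washout_step(position, impulse, dt, tau=2.0):
    """One update of a basic first-order washout filter: apply the commanded
    displacement impulse, then decay the platform position back toward
    neutral with time constant tau (s, illustrative value)."""
    position += impulse
    return position * (1.0 - dt / tau)

# a single 5 cm surge impulse, followed by 10 s of quiet decay at 100 Hz
p = washout_step(0.0, 0.05, 0.01)
for _ in range(1000):
    p = washout_step(p, 0.0, 0.01)
```

The sub-threshold return motion keeps the platform inside its limited workspace while preserving the perceptually dominant initial boost.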
Currently the measured data include:

• Horizontal and vertical acceleration of the frame
• The angle of lean and pitch
• Steering angle
• The swept volume of the suspension fork
• Forces of the front and rear brake
• Speed and driven distance
Furthermore, these measured values can be combined with data from a video camera and a GPS system, both taken during the bike ride.
ACKNOWLEDGMENT The authors gratefully acknowledge the financial support of the BMBF- FH³ program "Angewandte Forschung an Hochschulen im Verbund mit der Wirtschaft"; project “FIVIS” grant: 1736A05.
REFERENCES

1. Herpers R, Hetmann F, Hau A, Heiden W (2005) The Immersion Square, Aktuelle Methoden der Laser- und Medizintechnik, Berlin
2. Hetmann F, Herpers R (2002) The Immersion Square – Immersive VR with Standard Components, VEonPC02 Proc., Protvino, St. Petersburg
3. Forsstrom KS, Doty J, Cardullo FM (1985) Using human motion perception models to optimize flight simulator motion algorithms, Technical Papers (A85-40551 19-09), Flight Simulation Technologies Conference, New York 1985, pp 46–51
4. Bildhauer T, Schulzyk O, Hartmann U (2005) Zur Mechanik des Fahrradfahrens, Aktuelle Methoden der Laser- und Medizinphysik, Berlin
Author: Oliver Schulzyk
Institute: Fachbereich Mathematik und Technik, RheinAhrCampus
Street: Suedallee 2
City: 53424 Remagen
Country: Germany
Email: [email protected]

Fig. 3 Mountain bike with data logging system and sensors
Influence of body worn wireless mobile devices on implanted cardiac pacemakers

Sebastian Seitz1 and Olaf Dössel1

1
Institute of Biomedical Engineering, Universitaet Karlsruhe (TH), Karlsruhe, Germany
Abstract— The number of implanted cardiac pacemakers and defibrillators is constantly increasing. At the same time, more and more of these patients use wireless mobile communication devices. The aim of this work was the development of a pacemaker-electrode model and its "implantation" into a detailed, anatomically correct voxel model. Additionally, generic body models were examined. These consist of several layers with varying thickness and conductivity/permittivity values corresponding to different tissue types; this approach was chosen to avoid numerical errors at tilted boundaries. The excitation sources were modeled as generic dipoles and as plane waves operating in the frequency range normally used by cellular phones and wireless networks (900 to 2450 MHz). The dipoles were designed to provide maximum radiation efficiency at the frequencies of interest. Finally, numerical calculations of the fields induced by external signal sources were conducted. The results were then evaluated regarding compliance with the guidelines of ICNIRP and a draft by DIN/VDE. For the Visible Man model, the computed specific absorption rate (SAR) values were well below the thresholds, both for single- and multi-antenna setups and for all frequencies of interest, if the power did not exceed the regulatory specifications. The same results were obtained for the electric field values determined at commonly used implantation sites for pacemakers. For some tissue configurations in the generic model, higher SAR values than allowed by regulations could be observed.

Keywords— Electromagnetic fields, pacemaker, electrode, SAR
I. INTRODUCTION

Demographic developments lead to an increasing number of implanted electronic devices such as cardiac pacemakers and defibrillators. At the same time, body worn wireless communication devices such as cell phones and personal electronic devices are becoming more popular. Several reports in the literature describe harmful side effects of the fields emitted by those devices on the function of pacemaker systems [1, 2]. Some of the disturbances were observed only during the programming phase of the implanted devices [3]. It has also been observed that recent pacemaker designs are immune to the electromagnetic fields emitted by cellular phones [4, 5]. These studies examined the problem from the physician's point of view and did not take the underlying technical aspects into account. We therefore conducted a numerical study to determine the fields occurring inside the human body at sites commonly used for placing the pacemaker casing and the electrode. The results were then evaluated against the relevant regulations.
II. METHODS

All computations described in this paper were performed with the commercially available software packages Microwave Studio (MWS) from Computer Simulation Technology, Darmstadt, Germany, and SEMCAD from SPEAG, Zurich, Switzerland. MWS implements the finite integration technique (FIT) to solve the discretized form of Maxwell's equations, whereas SEMCAD employs a finite-difference time-domain (FDTD) approach. For this work, the boundary conditions of the calculation domain were configured as perfectly matched layers (PML) in all directions unless stated otherwise.
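The leapfrog update at the heart of the FDTD approach can be illustrated with a minimal one-dimensional sketch in normalized units. This is illustrative only and is not the MWS or SEMCAD implementation; in particular, the absorbing PML boundary is omitted for brevity (the grid ends are simply left at zero, acting as perfect reflectors):

```python
import numpy as np

# Minimal 1-D FDTD sketch in normalized units (Courant number 0.5).
# Illustrative only -- not the actual MWS/SEMCAD solver.
n = 400                 # number of grid cells
ez = np.zeros(n)        # electric field, sampled at integer grid points
hy = np.zeros(n)        # magnetic field, sampled at half-integer points

for t in range(600):
    # update H from the spatial difference (discrete curl) of E
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
    # update E from the spatial difference (discrete curl) of H
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])
    # soft Gaussian pulse source injected in the middle of the grid
    ez[n // 2] += np.exp(-((t - 60) / 20.0) ** 2)

print(ez.max())
```

The two staggered updates, executed alternately in time, are the essence of the Yee scheme that FDTD solvers generalize to three dimensions and inhomogeneous material grids.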
A. Anatomical models

The calculation of numerical field distributions in biological tissue requires a detailed description of the dielectric properties of the exposed volume. Furthermore, at the frequencies of interest (900-2450 MHz), the values of the permittivity ε and the conductivity σ vary significantly. Gabriel et al. proposed a method to calculate these values for arbitrary frequencies [6]; Table 1 lists the values for the examined frequencies. Because it is nearly impossible to reproduce the complexity of the real anatomy, experimental compliance tests are conducted with homogeneously filled phantoms using a body tissue simulating liquid (BTSL) [7]. The basis for all simulations on anatomical data was the Visible Human data set [8] with a resolution of 1 mm3. In the upper chest, it comprises 23 different tissue types, some of which are shown in Table 1. The changing dielectric properties can form boundary layers, for example in the region of the heart, that may lead to standing wave effects [9].
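The parametric model of Gabriel et al. expresses the complex relative permittivity as a sum of Cole-Cole dispersion terms plus an ionic conductivity term; the tissue values in Table 1 follow from such a fit. A minimal sketch of the evaluation is shown below. The single dispersion term and its parameters are hypothetical placeholders chosen for illustration only; the published tissue fits use four terms with tissue-specific parameters:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def cole_cole(f, eps_inf, terms, sigma_i):
    """Complex relative permittivity of a multi-term Cole-Cole model.
    `terms` is a list of (delta_eps, tau, alpha) dispersion parameters,
    `sigma_i` the static ionic conductivity [S/m]."""
    w = 2 * np.pi * f
    eps = eps_inf + sigma_i / (1j * w * EPS0)
    for d_eps, tau, alpha in terms:
        eps += d_eps / (1 + (1j * w * tau) ** (1 - alpha))
    return eps

# Hypothetical single-term parameters for illustration only --
# they are NOT the published fit for any real tissue.
f = 900e6
eps_hat = cole_cole(f, eps_inf=4.0,
                    terms=[(50.0, 8.0e-12, 0.1)],
                    sigma_i=0.7)
permittivity = eps_hat.real                          # relative permittivity
conductivity = -2 * np.pi * f * EPS0 * eps_hat.imag  # effective sigma [S/m]
print(permittivity, conductivity)
```

Evaluating this function at 900, 1800 and 2450 MHz with the published per-tissue parameters yields tables of σ and ε such as Table 1.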
J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2632–2635, 2008 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2009
Table 1: Dielectric properties of tissue types at different frequencies ([σ] = 1 S/m)

Tissue        900 MHz            1800 MHz           2450 MHz
              σ        ε         σ        ε         σ        ε
Blood         1.5379   61.360    2.044    59.372    2.545    58.264
Bone          1.538    61.360    0.275    11.781    0.394    11.381
BoneMarrow    0.040    5.504     0.069    5.3716    0.095    5.297
Fat           0.0510   5.462     0.078    5.349     0.105    5.280
Heart         1.230    59.893    1.7712   56.323    2.256    54.814
Lung          0.457    22.00     0.637    20.946    0.804    20.477
Muscle        0.943    55.032    1.341    53.549    1.7389   52.729
Skin          0.867    41.405    1.1847   38.872    1.464    38.007
BTSL          1.05     55.0      1.52     53.3      1.95     52.7
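The SAR compliance evaluation referred to in the abstract rests on the standard point definition SAR = σ|E|²/ρ. A minimal sketch using the muscle conductivity at 1800 MHz from Table 1; the RMS field value and the mass density are assumed for illustration and are not given in the text:

```python
def point_sar(sigma, e_rms, rho):
    """Point SAR [W/kg] from conductivity sigma [S/m], RMS electric
    field magnitude e_rms [V/m] and mass density rho [kg/m^3]."""
    return sigma * e_rms ** 2 / rho

# Muscle at 1800 MHz (sigma from Table 1); the field strength of
# 30 V/m and density of 1050 kg/m^3 are assumed example values.
sar = point_sar(sigma=1.341, e_rms=30.0, rho=1050.0)
print(round(sar, 3))   # -> 1.149 (W/kg)
```

In practice such point values are averaged over 10 g of tissue before comparison with the basic restrictions of the ICNIRP guidelines.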
1800 MHz and 100 mW for 2450 MHz according to the regulations effective in Germany. To reduce the return loss at the frequency of interest, the lengths of the dipoles were adjusted iteratively, for example for 1800 MHz to l = 7.3 cm (