IFMBE Proceedings Series Editors: R. Magjarevic and J. H. Nagel
Volume 23/1
The International Federation for Medical and Biological Engineering (IFMBE) is a federation of national and transnational organizations representing internationally the interests of medical and biological engineering and sciences. The IFMBE is a non-profit organization fostering the creation, dissemination and application of medical and biological engineering knowledge and the management of technology for improved health and quality of life. Its activities include participation in the formulation of public policy and the dissemination of information through publications and forums. Within the field of medical, clinical, and biological engineering, IFMBE’s aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. The objectives of the IFMBE are scientific, technological, literary, and educational. The IFMBE is a WHO-accredited NGO covering the full range of biomedical and clinical engineering, healthcare, and healthcare technology and management. Through its 58 member societies, it represents some 120,000 professionals involved in the various issues of improved health and health care delivery.

IFMBE Officers
President: Makoto Kikuchi
Vice-President: Herbert Voigt
Former President: Joachim H. Nagel
Treasurer: Shankar M. Krishnan
Secretary-General: Ratko Magjarevic
http://www.ifmbe.org
Previous Editions:
IFMBE Proceedings ICBME 2008, “13th International Conference on Biomedical Engineering”, Vol. 23, 2008, Singapore, CD
IFMBE Proceedings ECIFMBE 2008, “4th European Conference of the International Federation for Medical and Biological Engineering”, Vol. 22, 2008, Antwerp, Belgium, CD
IFMBE Proceedings BIOMED 2008, “4th Kuala Lumpur International Conference on Biomedical Engineering”, Vol. 21, 2008, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings NBC 2008, “14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics”, Vol. 20, 2008, Riga, Latvia, CD
IFMBE Proceedings APCMBE 2008, “7th Asian-Pacific Conference on Medical and Biological Engineering”, Vol. 19, 2008, Beijing, China, CD
IFMBE Proceedings CLAIB 2007, “IV Latin American Congress on Biomedical Engineering 2007, Bioengineering Solution for Latin America Health”, Vol. 18, 2007, Margarita Island, Venezuela, CD
IFMBE Proceedings ICEBI 2007, “13th International Conference on Electrical Bioimpedance and the 8th Conference on Electrical Impedance Tomography”, Vol. 17, 2007, Graz, Austria, CD
IFMBE Proceedings MEDICON 2007, “11th Mediterranean Conference on Medical and Biological Engineering and Computing 2007”, Vol. 16, 2007, Ljubljana, Slovenia, CD
IFMBE Proceedings BIOMED 2006, “Kuala Lumpur International Conference on Biomedical Engineering”, Vol. 15, 2006, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings WC 2006, “World Congress on Medical Physics and Biomedical Engineering”, Vol. 14, 2006, Seoul, Korea, DVD
IFMBE Proceedings BSN 2007, “4th International Workshop on Wearable and Implantable Body Sensor Networks”, Vol. 13, 2007, Aachen, Germany
IFMBE Proceedings ICBMEC 2005, “The 12th International Conference on Biomedical Engineering”, Vol. 12, 2005, Singapore, CD
IFMBE Proceedings EMBEC’05, “3rd European Medical & Biological Engineering Conference, IFMBE European Conference on Biomedical Engineering”, Vol. 11, 2005, Prague, Czech Republic, CD
IFMBE Proceedings ICCE 2005, “The 7th International Conference on Cellular Engineering”, Vol. 10, 2005, Seoul, Korea, CD
IFMBE Proceedings NBC 2005, “13th Nordic Baltic Conference on Biomedical Engineering and Medical Physics”, Vol. 9, 2005, Umeå, Sweden
IFMBE Proceedings APCMBE 2005, “6th Asian-Pacific Conference on Medical and Biological Engineering”, Vol. 8, 2005, Tsukuba, Japan, CD
IFMBE Proceedings BIOMED 2004, “Kuala Lumpur International Conference on Biomedical Engineering”, Vol. 7, 2004, Kuala Lumpur, Malaysia
IFMBE Proceedings MEDICON and HEALTH TELEMATICS 2004, “X Mediterranean Conference on Medical and Biological Engineering”, Vol. 6, 2004, Ischia, Italy, CD
Chwee Teck Lim · James C.H. Goh (Eds.)
13th International Conference on Biomedical Engineering (ICBME 2008)
3–6 December 2008, Singapore
IFMBE Proceedings Vol. 23/1
Chwee Teck Lim · James C.H. Goh (Eds.)
13th International Conference on Biomedical Engineering (ICBME 2008)
3–6 December 2008, Singapore
IFMBE Proceedings Vol. 23/2
Chwee Teck Lim · James C.H. Goh (Eds.)
13th International Conference on Biomedical Engineering (ICBME 2008)
3–6 December 2008, Singapore
IFMBE Proceedings Vol. 23/3
Editors

Chwee Teck Lim
Division of Bioengineering & Department of Mechanical Engineering, Faculty of Engineering, National University of Singapore, 7 Engineering Drive 1, Block E3A #04-15, Singapore 117574
Email: [email protected]

James C.H. Goh
Department of Orthopaedic Surgery, YLL School of Medicine & Division of Bioengineering, Faculty of Engineering & NUS Tissue Engineering Program, Life Sciences Institute, Level 4, DSO (Kent Ridge) Building, 27 Medical Drive, Singapore 117510
Email: [email protected]

ISSN 1680-0737
ISBN-13 978-3-540-92840-9
e-ISBN-13 978-3-540-92841-6
DOI 10.1007/978-3-540-92841-6
Library of Congress Control Number: 2008944088

© International Federation of Medical and Biological Engineering 2009

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permissions for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The IFMBE Proceedings is an Official Publication of the International Federation for Medical and Biological Engineering (IFMBE)

Typesetting: Data supplied by the authors
Production: le-tex publishing services oHG, Leipzig
Cover design: deblik, Berlin

Printed on acid-free paper
9 8 7 6 5 4 3 2 1
springer.com
About IFMBE The International Federation for Medical and Biological Engineering (IFMBE) was established in 1959 to provide medical and biological engineering with a vehicle for international collaboration in research and practice of the profession. The Federation has a long history of encouraging and promoting international cooperation and collaboration in the use of science and engineering for improving health and quality of life. The IFMBE is an organization with a membership of national and transnational societies and an International Academy. At present there are 52 national members and 5 transnational members representing a total membership in excess of 120,000 worldwide. An observer category is provided for groups or organizations considering formal affiliation. Personal membership is possible for individuals living in countries without a member society. The International Academy includes individuals who have been recognized by the IFMBE for their outstanding contributions to biomedical engineering.
Objectives The objectives of the International Federation for Medical and Biological Engineering are scientific, technological, literary, and educational. Within the field of medical, clinical and biological engineering its aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. In pursuit of these aims the Federation engages in the following activities: sponsorship of national and international meetings, publication of official journals, cooperation with other societies and organizations, appointment of commissions on special problems, awarding of prizes and distinctions, establishment of professional standards and ethics within the field, as well as other activities which in the opinion of the General Assembly or the Administrative Council would further the cause of medical, clinical or biological engineering. It promotes the formation of regional, national, international or specialized societies, groups or boards, the coordination of bibliographic or informational services, and the improvement of standards in terminology, equipment, methods and safety practices, and the delivery of health care. The Federation works to promote improved communication and understanding in the world community of engineering, medicine and biology.
Activities Publications of IFMBE include: the journal Medical and Biological Engineering and Computing, the electronic magazine IFMBE News, and the Book Series on Biomedical Engineering. In cooperation with its international and regional conferences, IFMBE also publishes the IFMBE Proceedings Series. All publications of the IFMBE are published by Springer-Verlag. The Federation has two divisions: Clinical Engineering and Health Care Technology Assessment. Every three years the IFMBE holds a World Congress on Medical Physics and Biomedical Engineering, organized in cooperation with the IOMP and the IUPESM. In addition, annual, milestone and regional conferences are organized in different regions of the world, such as Asia Pacific, Europe, the Nordic-Baltic and Mediterranean regions, Africa and Latin America. The Administrative Council of the IFMBE meets once a year and is the steering body for the IFMBE. The Council is subject to the rulings of the General Assembly, which meets every three years. Information on the activities of the IFMBE can be found on the web site at: http://www.ifmbe.org.
Foreword

On behalf of the organizing committee of the 13th International Conference on Biomedical Engineering, I extend our warmest welcome to you. This series of conferences began in 1983 and is jointly organized by the YLL School of Medicine and the Faculty of Engineering of the National University of Singapore, together with the Biomedical Engineering Society (Singapore).

First of all, I want to thank Mr Lim Chuan Poh, Chairman of A*STAR, who amidst his busy schedule kindly agreed to be our Guest of Honour and give the Opening Address. I am delighted to report that the 13th ICBME has more than 600 participants from 40 countries. We received papers of very high quality, and inevitably had to turn down some. We have invited very prominent speakers, each an authority in their field of expertise, and I am grateful to each of them for setting aside their valuable time to participate in this conference.

For the first time, the Biomedical Engineering Society (USA) will be sponsoring two symposia, i.e. “Drug Delivery Systems” and “Systems Biology and Computational Bioengineering”. I am thankful to Prof Tom Skalak for his leadership in this initiative. I would also like to acknowledge the contribution of Prof Takami Yamaguchi in organizing the NUS-Tohoku Global COE workshop within this conference. Thanks also to Prof Fritz Bodem for organizing the symposium “Space Flight Bioengineering”. This year’s conference proceedings will be published by Springer in the IFMBE Proceedings Series.

Finally, the success of this conference lies not only in the quality of the papers presented but also, to a large extent, in the dedicated team efforts of the many volunteers, in particular members of the Organizing Committee and the International Advisory Committee. Their dedication, diligence and encouragement have been exemplary. I would also like to thank the staff at Integrated Meetings Specialist, who have given their best to ensure the smooth running of the conference.

Last but not least, I extend sincere thanks to our sponsors, supporters and exhibitors. To all our delegates, I hope the 13th ICBME 2008 will be memorable not only from the scientific perspective but also for the joy of meeting old friends and making new ones. Do take time to experience Singapore, especially during this year-end festivity.

Best wishes,
Prof James Goh
Chairman, 13th ICBME Organising Committee
Conference Details

Organising Committee
Conference Advisors: Yong Tien Chew, Eng Hin Lee
Chair: James Goh
Co-Chair: Siew Lok Toh
Secretary: Peter Lee
Asst Secretary: Sangho Kim
Treasurer: Martin Buist
Program: Chwee Teck Lim
Exhibition & Sponsorship: Michael Raghunath
Publicity: Peck Ha Khoo-Tan
Members: Johnny Chee, Chuh Khiun Chong, Chu Sing Daniel Lim, Mei Kay Lee, Stephen Low, Teddy Ong, Fook Rhu Ong, Subbaraman Ravichandran
International Advisory Committee
An Kai Nan, Mayo Clinic College of Medicine
Leendert Blankevoort, Orthopaedic Research Center
Friedrich Bodem, Mainz University
Cheng Cheng Kung, National Yang Ming University
Cheng Dong, Penn State University
Shu Chien, University of California, San Diego
Barthes-Biesel Dominique, University of Technology of Compiègne
David Elad, Tel Aviv University
Fan Yu-Bo, Beihang University
Peter J. Hunter, University of Auckland
Walter Herzog, University of Calgary
Fumihiko Kajiya, Okayama University
Roger Kamm, Massachusetts Institute of Technology
Makoto Kikuchi, National Defense Medical College
Kim Sun I, Hanyang University
Chandran Krishnan B, University of Iowa
Kam Leong, Duke University
Lin Feng-Huei, National Taiwan University
Lin Kang Pin, Chung Yuan Christian University
Marc Madou, University of California, Irvine
Banchong Mahaisavariya, Mahidol University
Karol Miller, University of Western Australia
Bruce Milthorpe, University of New South Wales
Yannis F. Misirlis, University of Patras
Joachim Nagel, University of Stuttgart
Mark Pearcy, Queensland University of Technology
Robert O. Ritchie, University of California, Berkeley
Savio L. Y. Woo, University of Pittsburgh
Takami Yamaguchi, Tohoku University
Ajit P. Yoganathan, Georgia Institute of Technology & Emory University
Zhang Yuan-ting, City University of Hong Kong
Acknowledgments
History of ICBME In 1983, a number of academics from the Faculty of Medicine and Faculty of Engineering at the National University of Singapore (NUS) organized a Symposium on Biomedical Engineering (chaired by N Krishnamurthy). It was held on the NUS Kent Ridge Campus. The scientific meeting attracted great interest from members of both faculties and facilitated cross-faculty research collaboration. The 2nd Symposium (chaired by J Goh) was held in 1985 at the Sepoy Lines Campus of the Faculty of Medicine, with the aim of strengthening collaboration between the two faculties. The keynote speaker was Dr GK Rose, Oswestry, UK. In 1986, the 3rd Symposium (chaired by K Bose & N Krishnamurthy) was organized to promote the theme of “Biomedical Engineering: An Interdisciplinary Approach”, and it attracted regional participation. The keynote speaker was Prof JP Paul, Strathclyde, UK. It was highly successful and motivated the creation of the International Conference on Biomedical Engineering (ICBME). In order to maintain historical continuity, the ICBME series began in 1987 with the 4th ICBME. From 1997 onwards, the ICBME series was jointly organized by the Faculty of Engineering & YLL School of Medicine, NUS and the Biomedical Engineering Society (Singapore). The following table shows the growth of the ICBME series over the years:
| Conference  | Year | Chairs & Co-Chairs | Invited Speakers | Oral Papers | Poster Papers | Total |
|-------------|------|--------------------|------------------|-------------|---------------|-------|
| 4th ICBME   | 1987 | AC Liew, K Bose    | 8                | 29          | 10            | 47    |
| 5th ICBME   | 1988 | K Bose, K Ong      | 7                | 67          | 18            | 92    |
| 6th ICBME   | 1990 | YT Chew, K Bose    | 10               | 132         | 34            | 176   |
| 7th ICBME   | 1992 | K Bose, YT Chew    | 19               | 140         | 37            | 196   |
| 8th ICBME   | 1994 | YT Chew, K Bose    | 25               | 137         | 20            | 182   |
| 9th ICBME   | 1997 | K Bose, YT Chew    | 39               | 178         | 23            | 240   |
| 10th ICBME  | 2000 | SH Teoh, EH Lee    | 34               | 208         | 60            | 302   |
| 11th ICBME  | 2002 | J Goh, SL Toh      | 44               | 261         | 150           | 455   |
| 12th ICBME  | 2005 | SL Toh, J Goh      | 23               | 370         | 123           | 516   |
| 13th ICBME  | 2008 | J Goh, SL Toh      | 36               | 340         | 299           | 675   |
ICBME 2008 Young Investigator Award Winners

YIA (1st Prize)
A Biofunctional Fibrous Scaffold for the Encapsulation of Human Mesenchymal Stem Cells and its Effects on Stem Cell Differentiation
S. Z. Yow 1, C. H. Quek 2, E. K. F. Yim 1, K. W. Leong 2,3, C. T. Lim 1
1. National University of Singapore, Singapore; 2. Duke University, North Carolina, USA; 3. Duke-NUS Graduate Medical School, Singapore

YIA (2nd Prize)
Multi-Physical Simulation of Left-ventricular Blood Flow Based On Patient-specific MRI Data
S. B. S. Krittian, S. Höttges, T. Schenkel, H. Oertel
University of Karlsruhe, Germany

YIA (3rd Prize)
Landing Impact Loads Predispose Osteocartilage to Degeneration
C. H. Yeow 1, S. T. Lau 1, P. V. S. Lee 1,2,3, J. C. H. Goh 1
1. National University of Singapore, Singapore; 2. Defence Medical and Environmental Research Institute, Singapore; 3. University of Melbourne, Australia

YIA (Merit)
Investigating Combinatorial Drug Effects on Adhesion and Suspension Cell Types Using a Microfluidic-Based Sensor System
S. Arora 1, C. S. Lim 1, M. Kakran 1, J. Y. A. Foo 1,2, M. K. Sakharkar 1, P. Dixit 1,3, J. Miao 1
1. Nanyang Technological University, Singapore; 2. Singapore General Hospital, Singapore; 3. Georgia Institute of Technology, Atlanta, USA

YIA (Merit)
Synergic Combination of Collagen Matrix with Knitted Silk Scaffold Regenerated Ligament with More Native Microstructure in Rabbit Model
X. Chen, Z. Yin, Y.-Y. Qi, L.-L. Wang, H.-W. Ouyang
Zhejiang University, China

YIA (Merit)
Overcoming Multidrug Resistance of Breast Cancer Cells by the Micellar Drug Carriers of mPEG-PCL-graft-cellulose
Y.-T. Chen 1, C.-H. Chen 1, M.-F. Hsieh 1, A. S. Chan 1, I. Liau 2, W.-Y. Tai 2
1. Chung Yuan Christian University, Taiwan; 2. National Chiao Tung University, Taiwan

YIA (Merit)
Three-dimensional Simulation of Blood Flow in Malaria Infection
Y. Imai 1, H. Kondo 1, T. Ishikawa 1, C. T. Lim 2, K. Tsubota 3, T. Yamaguchi 1
1. Tohoku University, Sendai, Japan; 2. National University of Singapore, Singapore; 3. Chiba University, Chiba, Japan
ICBME Award Winners
ICBME 2008 Outstanding Paper Award Winners

Oral Category

Assessment of the Peripheral Performance and Cortical Effects of SHADE, an Active Device Promoting Ankle Dorsiflexion
S. Pittaccio 1, S. Viscuso 1, F. Tecchio 2, F. Zappasodi 2, M. Rossini 3, L. Magoni 3, S. Pirovano 3
1. Unità staccata di Lecco, Italy; 2. Unità MEG, Ospedale Fatebenefratelli, Italy; 3. Centro di Riabilitazione Villa Beretta, Costamasnaga, Italy

Amino Acid Coupled Liposomes for the Effective Management of Parkinsonism
P. Khare, S. K. Jain
Dr H S Gour Vishwavidyalaya, Sagar, MP, India

Transcutaneous Energy Transfer System for Powering Implantable Biomedical Devices
T. Dissanayake 1, D. Budgett 1,2, A. P. Hu 1, S. Malpas 1,2, L. Bennet 1
1. University of Auckland, New Zealand; 2. Telemetry Research, Auckland, New Zealand

Multi-Wavelength Diffuse Reflectance Plots for Mapping Various Chromophores in Human Skin for Non-Invasive Diagnosis
S. Prince, S. Malarvizhi
SRM University, Chennai, India

Application of the Home Telecare System in the Treatment of Diabetic Foot Syndrome
P. Ladyzynski 1, J. M. Wojcicki 1, P. Foltynski 1, G. Rosinski 2, J. Krzymien 2
1. Polish Academy of Sciences, Warsaw, Poland; 2. Medical University of Warsaw, Warsaw, Poland

Nano-Patterned Poly-ε-caprolactone with Controlled Release of Retinoic Acid and Nerve Growth Factor for Neuronal Regeneration
K. K. Teo, E. K. F. Yim
National University of Singapore, Singapore

Organic Phase Coating of Polymers onto Agarose Microcapsules for Encapsulation of Biomolecules with High Efficiency
J. Bai 1, W. C. Mak 2, X. Y. Chang 1, D. Trau 1
1. National University of Singapore, Singapore; 2. Hong Kong University of Science and Technology, China

Patch-Clamping in Droplet Arrays: Single Cell Positioning via Dielectrophoresis
J. Reboud 1, M. Q. Luong 1,2, C. Rosales 3, L. Yobas 1
1. Institute of Microelectronics, Singapore; 2. National University of Singapore, Singapore; 3. Institute of High Performance Computing, Singapore
Statistical Variations of Ultrasound Backscattering From the Blood under Steady Flow
C.-C. Huang 1, Y.-H. Lin 2, S.-H. Wang 2
1. Fu Jen Catholic University, Taiwan; 2. Chung Yuan Christian University, Taiwan

Postural Sway of the Elderly Males and Females during Quiet Standing and Squat-and-Stand Movement
G. Eom 1, J.-W. Kim 1, B.-K. Park 2, J.-H. Hong 2, S.-C. Chung 1, B.-S. Lee 1, G. Tack 1, Y. Kim 1
1. Konkuk University, Choongju, Korea; 2. Korea University, Seoul, Korea

Computational Simulation of Three-dimensional Tumor Geometry during Radiotherapy
S. Takao, S. Tadano, H. Taguchi, H. Shirato
Hokkaido University, Sapporo, Japan

Flow Imaging and Validation of MR Fluid Motion Tracking
K. K. L. Wong 1,2, R. M. Kelso 1, S. G. Worthley 2, P. Sanders 2, J. Mazumdar 1, D. Abbott 1
1. University of Adelaide, Australia; 2. Royal Adelaide Hospital, Adelaide, Australia

Successful Reproduction of In-Vivo Fracture of an Endovascular Stent in Superficial Femoral Artery Utilizing a Novel Multi-loading Durability Test System
K. Iwasaki, S. Tsubouchi, Y. Hama, M. Umezu
Waseda University, Tokyo, Japan

Fusion Performance of a Bioresorbable Cage Used In Porcine Model of Anterior Lumbar Interbody Fusion
A. S. Abbah 1, C. X. F. Lam 1, K. Yang 1, J. C. H. Goh 1, D. W. Hutmacher 2, H. K. Wong 1
1. National University of Singapore, Singapore; 2. Queensland University of Technology, Australia

A Low-Noise CMOS Receiver Frontend for NMR-based Surgical Guidance
J. Anders 1, S. Reymond 1, G. Boero 1, K. Scheffler 2
1. Ecole Polytechnique Fédérale de Lausanne, Switzerland; 2. University of Basel, Basel, Switzerland

Pseudoelastic Alloy Devices for Spastic Elbow Relaxation
S. Viscuso 1, S. Pittaccio 1, M. Caimmi 2, G. Gasperini 2, S. Pirovano 2, S. Besseghini 1, F. Molteni 2
1. Unità staccata di Lecco, Lecco, Italy; 2. Centro di Riabilitazione Villa Beretta, Costamasnaga, Italy

Small-world Network for Investigating Functional Connectivity in Bipolar Disorder: A Functional Magnetic Images (fMRI) Study
S. Teng 1,2, P. S. Wang 1,2,3, Y. L. Liao 4, T. C. Yeh 1,2, T. P. Su 1,2, J. C. Hsieh 1,2, Y. T. Wu 1,2
1. National Yang-Ming University, Taiwan; 2. Taipei Veterans General Hospital, Taiwan; 3. Taipei Municipal Gan-Dau Hospital, Taiwan; 4. National Cheng Kung University, Taiwan

Finite Element Modeling Of Uncemented Implants: Challenges in the Representation of the Press-fit Condition
S. E. Clift
University of Bath, UK
Poster Category

Optimization and Characterization of Sodium MRI Using 8-channel 23Na and 2-channel 1H RX/TX Coil
J. R. James 1,2, C. Lin 1, H. Stark 3, B. M. Dale 4, N. Bansal 1,2
1. Indiana University School of Medicine, Indianapolis, USA; 2. Purdue University, West Lafayette, USA; 3. Stark Contrast, MRI Coils Research, Erlangen, Germany; 4. Siemens Medical Solutions, Cary, NC, USA

A Test for the Assessment of Reaction Time for Narcotic Rehabilitation Patients
S. G. Patil, T. J. Gale, C. R. Clive
University of Tasmania, Hobart, Australia

Development of Noninvasive Thrombus Detection System with Near-Infrared Laser and Photomultiplier for Artificial Hearts
S. Tsujimura 1, H. Koguchi 1, T. Yamane 2, T. Tsutsui 3, Y. Sankai 1
1. University of Tsukuba, Tsukuba, Japan; 2. National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan; 3. Institute of Clinical Medicine, University of Tsukuba, Tsukuba, Japan

Magnetic Field Transducers Based on the Phase Characteristics of GMI Sensors and Aimed at Biomedical Applications
E. Costa Silva, L. A. P. Gusmao, C. R. Hall Barbosa, E. Costa Monteiro
Pontifícia Universidade Católica do Rio de Janeiro, Brazil

Windowed Nonlinear Energy Operator-based First-arrival Pulse Detection for Ultrasound Transmission Computed Tomography
S. H. Kim 1, C. H. Kim 2, E. Savastyuk 2, T. Kochiev 2, H.-S. Kim 2, T.-S. Kim 1
1. Kyung Hee University, Korea; 2. Samsung Electro-Mechanics Co. Ltd., Korea

Microfabrication of High-density Microelectrode Arrays for in vitro Applications
L. Rousseau 1,2, G. Lissorgues 1,2, F. Verjus 3, B. Yvert 4
1. ESIEE, France; 2. ESYCOM, Université Paris-EST, Marne-La-Vallée, France; 3. NXP Caen, Talence, France

Packaging Fluorescent Proteins into HK97 Capsids of Different Maturation Intermediates: A Novel Nano-Particle Biotechnological Application
R. Huang 1,2, K. Lee 2, R. Duda 3, R. Khayat 2, J. Johnson 1,2
1. University of California, USA; 2. The Scripps Research Institute, USA; 3. University of Pittsburgh, USA

Design and Implementation of Web-Based Healthcare Management System for Home Healthcare
S. Tsujimura, N. Shiraishi, A. Saito, H. Koba, S. Oshima, T. Sato, F. Ichihashi, Y. Sankai
University of Tsukuba, Tsukuba, Japan

Sensitivity Analysis in Sensor HomeCare Implementation
M. Penhaker 1, R. Bridzik 2, V. Novak 2, M. Cerny 1, M. Rosulek 1
1. Technical University of Ostrava, Czech Republic; 2. University Hospital Ostrava, Czech Republic
Fabrication of Three-Dimensional Tissues with Perfused Microchannels
K. Sakaguchi 1, T. Shimizu 2, K. Iwasaki 1, M. Yamato 2, M. Umezu 1, T. Okano 2
1. Waseda University, Tokyo, Japan 2. Tokyo Women’s Medical University, Twins, Tokyo, Japan

A Serum Free Medium that Conserves the Chondrogenic Phenotype of In Vitro Expanded Chondrocytes
S. T. B. Ho 1, Z. Yang 1, H. P. J. Hu 1, K. W. S. Oh 2, B. H. A. Choo 2, E. H. Lee 1
1. National University of Singapore, Singapore 2. Bioprocessing Technology Institute, Singapore

Estimating Mechanical Properties of Skin Using a Structurally-Based Model
J. W. Y. Jor, M. P. Nash, R. M. F. Nielsen, P. J. Hunter
University of Auckland, New Zealand

The Motility of Normal and Cancer Cells in Response to the Combined Influence of Substrate Rigidity and Anisotropic Nanostructure
T. Tzvetkova-Chevolleau, A. Stephanou, D. Fuard, J. Ohayon, P. Schiavone, P. Tracqui
Centre National de la Recherche Scientifique, France

Accurate Estimation of In Vivo Knee Kinematics from Skin Marker Coordinates with the Global Optimization Method
T.-W. Lu, T.-Y. Tsai
National Taiwan University, Taiwan

A Brain-oriented Compartmental Model of Glucose-Insulin-Glucagon Regulatory System
G.-H. Lu, H. Kimura
Institute of Physical and Chemical Research, Nagoya, Japan

Mechanical Loading Response of Human Trabecular Bone Cores
M. Kratz, P. Hans, J. David
Universität Marburg, Germany
Content

Track 1: Bioinformatics; Biomedical Imaging; Biomedical Instrumentation; Biosignal Processing; Digital Medicine; Neural Systems Engineering

Electroencephalograph Signal Analysis During Ujjayi Pranayama..................................................................................... 1 Prof. S.T. Patil and Dr. D.S. Bormane
A Study of Stochastic Resonance as a Mathematical Model of Electrogastrography during Sitting Position ................. 5 Y. Matsuura, H. Takada and K. Yokoyama
Possibility of MEG as an Early Diagnosis Tool for Alzheimer’s Disease: A Study of Event Related Field in Missing Stimulus Paradigm................................................................................................................................................. 9 N. Hatsusaka, M. Higuchi and H. Kado
New Architecture for NN Based Image Compression for Optimized Power, Area and Speed .................................... 13 K. Venkata Ramanaiah, Cyril Prasanna Raj and Dr. K. Lal Kishore
A Statistical Model to Estimate Flow Mediated Dilation Using Recorded Finger Photoplethysmogram....................... 18 R. Jaafar, E. Zahedi, M.A. Mohd Ali
Automatic Extraction of Blood Vessels, Bifurcations and End Points in the Retinal Vascular Tree.............................. 22 Edoardo Ardizzone, Roberto Pirrone, Orazio Gambino and Francesco Scaturro
Recent Developments in Optimizing Optical Tools for Air Bubble Detection in Medical Devices Used in Fluid Transport .................................................................................................................................................................. 27 S. Ravichandran, R. Shanthini, R.R. Nur Naadhirah, W. Yikai, J. Deviga, M. Prema and L. Clinton
General Purpose Adaptive Biosignal Acquisition System Combining FPGA and FPAA ................................................ 31 Pedro Antonio Mou, Chang Hao Chen, Sio Hang Pun, Peng Un Mak and Mang I. Vai
Segmentation of Brain MRI and Comparison Using Different Approaches of 2D Seed Growing.................................. 35 K.J. Shanthi, M. Sasi Kumar and C. Kesavdas
SQUID Biomagnetometer Systems for Non-invasive Investigation of Spinal Cord Dysfunction .................................... 39 Y. Adachi, J. Kawai, M. Miyamoto, G. Uehara, S. Kawabata, M. Tomori, S. Ishii and T. Sato
Human Cardio-Respiro Abnormality Alert System using RFID and GPS - (H-CRAAS) ............................................... 43 Ahamed Mohideen, Balanagarajan
Automatic Sleep Stage Determination by Conditional Probability: Optimized Expert Knowledge-based Multi-Valued Decision Making ............................................................................. 47 Bei Wang, Takenao Sugi, Fusae Kawana, Xingyu Wang and Masatoshi Nakamura
A Study on the Relation between Stability of EEG and Respiration ................................................................................. 51 Young-Sear Kim, Se-Kee Kil, Heung-Ho Choi, Young-Bae Park, Tai-Sung Hur, Hong-Ki Min
The Feature-Based Microscopic Image Segmentation for Thyroid Tissue........................................................................ 55 Y.T. Chen, M.W. Lee, C.J. Hou, S.J. Chen, Y.C. Tsai and T.H. Hsu
Heart Disease Classification Using Discrete Wavelet Transform Coefficients of Isolated Beats..................................... 60 G.M. Patil, Dr. K. Subba Rao, K. Satyanarayana
Non-invasive Techniques for Assessing the Endothelial Dysfunction: Ultrasound Versus Photoplethysmography.......................................................................................................................... 65 M. Zaheditochai, R. Jaafar, E. Zahedi
High Performance EEG Analysis for Brain Interface......................................................................................................... 69 Dr. D.S. Bormane, Prof. S.T. Patil, Dr. D.T. Ingole, Dr. Alka Mahajan
Denoising of Transient Visual Evoked Potential using Wavelets ....................................................................................... 73 R. Sivakumar
A Systematic Approach to Understanding Bacterial Responses to Oxygen Using Taverna and Webservices............... 77 S. Maleki-Dizaji, M. Rolfe, P. Fisher, M. Holcombe
Permeability of an In Vitro Model of Blood Brain Barrier (BBB)..................................................................................... 81 Rashid Amin, Temiz A. Artmann, Gerhard Artmann, Philip Lazarovici, Peter I. Lelkes
Decision Making Algorithm through LVQ Neural Network for ECG Arrhythmias ....................................................... 85 Ms. T. Padma, Dr. Madhavi Latha, Mr. K. Jayakumar
A Low-Noise CMOS Receiver Frontend for NMR-based Surgical Guidance................................................................... 89 J. Anders, S. Reymond, G. Boero and K. Scheffler
Automated Fluorescence as a System to Assist the Diagnosis of Retinal Blood Vessel Leakage ..................................... 94 Vanya Vabrina Valindria, Tati L.R. Mengko, Iwan Sovani
A New Method of Extraction of FECG from Abdominal Signal........................................................................................ 98 D.V. Prasad, R. Swarnalatha
Analysis of EGG Signals for Digestive System Disorders Using Neural Networks......................................................... 101 G. Gopu, Dr. R. Neelaveni and Dr. K. Porkumaran
A Reliable Measurement to Assess Atherosclerosis of Differential Arterial Systems..................................................... 105 Hsien-Tsai Wu, Cyuan-Cin Liu, Po-Chun Hsu, Huo-Ying Chang and An-Bang Liu
An Automated Segmentation Algorithm for Medical Images .......................................................................................... 109 C.S. Leo, C.C. Tchoyoson Lim, V. Suneetha
Quantitative Assessment of Movement Disorders in Clinical Practice............................................................................ 112 Á. Jobbágy, I. Valálik
Design and Intra-operative Studies of an Economic Versatile Portable Biopotential Recorder ................................... 116 V. Sajith, A. Sukeshkumar, Keshav Mohan
Comparison of Various Imaging Modes for Photoacoustic Tomography ....................................................................... 121 Chi Zhang and Yuanyuan Wang
Ultrasonographic Segmentation of Cervical Lymph Nodes Based on Graph Cut with Elliptical Shape Prior............ 125 J.H. Zhang, Y.Y. Wang and C. Zhang
Computerized Assessment of Excessive Femoral and Tibial Torsional Deformation by 3D Anatomical Landmarks Referencing ....................................................................................................................... 129 K. Subburaj, B. Ravi and M.G. Agarwal
Modeling the Microstructure of Neonatal EEG Sleep Stages by Temporal Profiles ...................................................... 133 V. Kraja, S. Petránek, J. Mohylová, K. Paul, V. Gerla and L. Lhotská
Optimization and Characterization of Sodium MRI Using 8-channel 23Na and 2-channel 1H RX/TX Coil ................ 138 J.R. James, C. Lin, H. Stark, B.M. Dale, N. Bansal
Non-invasive Controlled Radiofrequency Hyperthermia Using an MR Scanner and a Paramagnetic Thulium Complex ................................................................................................................................................................. 142 J.R. James, V.C. Soon, S.M. Topper, Y. Gao, N. Bansal
Automatic Processing of EEG-EOG-EMG Artifacts in Sleep Stage Classification ........................................................ 146 S. Devuyst, T. Dutoit, T. Ravet, P. Stenuit, M. Kerkhofs, E. Stanus
Medical Image Registration Using Mutual Information Similarity Measure ................................................................. 151 Mohamed E. Khalifa, Haitham M. Elmessiry, Khaled M. ElBahnasy, Hassan M.M. Ramadan
A Feasibility Study of Commercially Available Audio Transducers in ABR Studies .................................................... 156 A. De Silva, M. Schier
Simultaneous Measurement of PPG and Functional MRI................................................................................................ 161 S.C. Chung, M.H. Choi, S.J. Lee, J.H. Jun, G.M. Eom, B. Lee and G.R. Tack
A Study on the Cerebral Lateralization Index using Intensity of BOLD Signal of functional Magnetic Resonance Imaging............................................................................................................................................................... 165 M.H. Choi, S.J. Lee, G.R. Tack, G.M. Eom, J.H. Jun, B. Lee and S.C. Chung
A Comparison of Two Synchronization Measures for Neural Data ................................................................................ 169 H. Perko, M. Hartmann and T. Kluge
Protein Classification Using Decision Trees With Bottom-up Classification Approach ................................................ 174 Bojan Pepik, Slobodan Kalajdziski, Danco Davcev, Member IEEE
Extracting Speech Signals using Independent Component Analysis ............................................................................... 179 Charles T.M. Choi and Yi-Hsuan Lee
Age-Related Changes in Specific Harmonic Indices of Pressure Pulse Waveform......................................................... 183 Sheng-Hung Wang, Tse-Lin Hsu, Ming-Yie Jan, Yuh-Ying Lin Wang and Wei-Kung Wang
Processing of NMR Slices for Preparation of Multi-dimensional Model......................................................................... 186 J. Mikulka, E. Gescheidtova and K. Bartusek
Application of Advanced Methods of NMR Image Segmentation for Monitoring the Development of Growing Cultures............................................................................................................................................................. 190 J. Mikulka, E. Gescheidtova and K. Bartusek
High-accuracy Myocardial Detection by Combining Level Set Method and 3D NURBS Approximation................... 194 T. Fukami, H. Sato, J. Wu, Thet-Thet-Lwin, T. Yuasa, H. Hontani, T. Takeda and T. Akatsuka
Design of a Wireless Intraocular Pressure Monitoring System for a Glaucoma Drainage Implant ............................. 198 T. Kakaday, M. Plunkett, S. McInnes, J.S. Jimmy Li, N.H. Voelcker and J.E. Craig
Integrating FCM and Level Sets for Liver Tumor Segmentation .................................................................................... 202 Bing Nan Li, Chee Kong Chui, S.H. Ong and Stephen Chang
A Research-Centric Server for Medical Image Processing, Statistical Analysis and Modeling .................................... 206 Kuang Boon Beh, Bing Nan Li, J. Zhang, C.H. Yan, S. Chang, R.Q. Yu, S.H. Ong, Chee Kong Chui
An Intelligent Implantable Wireless Shunting System for Hydrocephalus Patients ...................................................... 210 A. Alkharabsheh, L. Momani, N. Al-Zu’bi and W. Al-Nuaimy
Intelligent Diagnosis of Liver Diseases from Ultrasonic Liver Images: Neural Network Approach............................. 215 P.T. Karule, S.V. Dudul
A Developed Zeeman Model for HRV Signal Generation in Different Stages of Sleep.................................................. 219 Saeedeh Lotfi Mohammad Abad, Nader Jafarnia Dabanloo, Seyed Behnamedin Jameie, Khosro Sadeghniiat
Two-Wavelength Hematocrit Monitoring by Light Transmittance Method .................................................................. 223 Phimon Phonphruksa and Supan Tungjitkusolmun
Rhythm of the Electromyogram of External Urethral Sphincter during Micturition in Rats ...................................... 227 Yen-Ching Chang
Higher Order Spectra based Support Vector Machine for Arrhythmia Classification ................................................. 231 K.C. Chua, V. Chandran, U.R. Acharya and C.M. Lim
Transcutaneous Energy Transfer System for Powering Implantable Biomedical Devices............................................ 235 T. Dissanayake, D. Budgett, A.P. Hu, S. Malpas and L. Bennet
A Complexity Measure Based on Modified Zero-Crossing Rate Function for Biomedical Signal Processing............. 240 M. Phothisonothai and M. Nakagawa
The Automatic Sleep Stage Diagnosis Method by using SOM ......................................................................................... 245 Takamasa Shimada, Kazuhiro Tamura, Tadanori Fukami, Yoichi Saito
A Development of the EEG Telemetry System under Exercising .................................................................................... 249 Noriyuki Dobashi, Kazushige Magatani
Evaluation of Photic Stimulus Response Based on Comparison with the Normal Database in EEG Routine Examination .............................................................................................................................................. 253 T. Fukami, F. Ishikawa, T. Shimada, B. Ishikawa and Y. Saito
A Speech Processor for Cochlear Implant using a Simple Dual Path Nonlinear Model of Basilar Membrane........... 257 K.H. Kim, S.J. Choi, J.H. Kim
Mechanical and Biological Characterization of Pressureless Sintered Hydroxyapatite-Polyetheretherketone Biocomposite.......................................................................................................... 261 Chang Hengky, Bastari Kelsen, Saraswati, Philip Cheang
Computerized Cephalometric Line Tracing Technique on X-ray Images ...................................................................... 265 C. Sinthanayothin
Brain Activation in Response to Disgustful Face Images with Different Backgrounds.................................................. 270 Takamasa Shimada, Hideto Ono, Tadanori Fukami, Yoichi Saito
Automatic Segmentation of Blood Vessels in Colour Retinal Images using Spatial Gabor Filter and Multiscale Analysis........................................................................................................................................................ 274 P.C. Siddalingaswamy, K. Gopalakrishna Prabhu
Automated Detection of Optic Disc and Exudates in Retinal Images .............................................................................. 277 P.C. Siddalingaswamy, K. Gopalakrishna Prabhu
Qualitative Studies on the Development of Ultraviolet Sterilization System for Biological Applications .................... 280 Then Tze Kang, S. Ravichandran, Siti Faradina Bte Isa, Nina Karmiza Bte Kamarozaman, Senthil Kumar
From e-health to Personalised Medicine ............................................................................................................................ 284 N. Pangher
Quantitative Biological Models as Dynamic, User-Generated Online Content............................................................... 287 J.R. Lawson, C.M. Lloyd, T. Yu and P.F. Nielsen
Development of Soft Tissue Stiffness Measuring Device for Minimally Invasive Surgery by using Sensing Cum Actuating Method ................................................................................................................................................................. 291 M.-S. Ju, H.-M. Vong, C.-C.K. Lin and S.-F. Ling
A Novel Method to Describe and Share Complex Mathematical Models of Cellular Physiology ................................. 296 D.P. Nickerson and M.L. Buist
New Paradigm in Journal Reference Management ........................................................................................................... 299 Casey K. Chan, Yean C. Lee and Victor Lin
Incremental Learning Method for Biological Signal Identification ................................................................................. 302 Tadahiro Oyama, Stephen Karungaru, Satoru Tsuge, Yasue Mitsukura and Minoru Fukumi
Metal Artifact Removal on Dental CT Scanned Images by Using Multi-Layer Entropic Thresholding and Label Filtering Techniques for 3-D Visualization of CT Images .............................................................................. 306 K. Koonsanit, T. Chanwimaluang, D. Gansawat, S. Sotthivirat, W. Narkbuakaew, W. Areeprayolkij, P. Yampri and W. Sinthupinyo
A Vocoder for a Novel Cochlear Implant Stimulating Strategy Based on Virtual Channel Technology ..................... 310 Charles T.M. Choi, C.H. Hsu, W.Y. Tsai and Yi Hsuan Lee
Towards a 3D Real Time Renal Calculi Tracking for Extracorporeal Shock Wave Lithotripsy.................................. 314 I. Manousakas, J.J. Li
A Novel Multivariate Analysis Method for Bio-Signal Processing................................................................................... 318 H.H. Lin, S.H. Change, Y.J. Chiou, J.H. Lin, T.C. Hsiao
Multi-Wavelength Diffuse Reflectance Plots for Mapping Various Chromophores in Human Skin for Non-Invasive Diagnosis .................................................................................................................................................. 323 Shanthi Prince and S. Malarvizhi
Diagnosis of Diabetic Retinopathy through Slit Lamp Images......................................................................................... 327 J. David, A. Sukesh Kumar and V.V. Vineeth
Tracing of Central Serous Retinopathy from Retinal Fundus Images ............................................................................ 331 J. David, A. Sukesh Kumar and V. Viji
A Confidence Measure for Real-time Eye Movement Detection in Video-oculography ................................................ 335 S.M.H. Jansen, H. Kingma and R.L.M. Peeters
Development of Active Guide-wire for Cardiac Catheterization by Using Ionic Polymer-Metal Composites............. 340 B.K. Fang, M.S. Ju and C.C.K. Lin
Design and Development of an Interactive Proteomic Website........................................................................................ 344 K. Xin Hui, C. Zheng Wei, Sze Siu Kwan and R. Raja
A New Interaction Modality for the Visualization of 3D Models of Human Organ ....................................................... 348 L.T. De Paolis, M. Pulimeno and G. Aloisio
Performance Analysis of Support Vector Machine (SVM) for Optimization of Fuzzy Based Epilepsy Risk Level Classifications from EEG Signal Parameters .......................................................................................................... 351 R. Harikumar, A. Keerthi Vasan, M. Logesh Kumar
A Feasibility Study for the Cancer Therapy Using Cold Plasma ..................................................................................... 355 D. Kim, B. Gweon, D.B. Kim, W. Choe and J.H. Shin
Space State Approach to Study the Effect of Sodium over Cytosolic Calcium Profile ..................................................................................................................................................................... 358 Shivendra Tewari and K.R. Pardasani
Preliminary Study of Mapping Brain ATP and Brain pH Using Multivoxel 31P MR Spectroscopy............................. 362 Ren-Hua Wu, Wei-Wen Liu, Yao-Wen Chen, Hui Wang, Zhi-Wei Shen, Karel terBrugge, David J. Mikulis
Brain-Computer Interfaces for Virtual Environment Control ........................................................................................ 366 G. Edlinger, G. Krausz, C. Groenegress, C. Holzner, C. Guger, M. Slater
FPGA Implementation of Fuzzy (PD&PID) Controller for Insulin Pumps in Diabetes ................................................ 370 V.K. Sudhaman, R. HariKumar
Position Reconstruction of Awake Rodents by Evaluating Neural Spike Information from Place Cells in the Hippocampus.............................................................................................................................................................. 374 G. Edlinger, G. Krausz, S. Schaffelhofer, C. Guger, J. Brotons-Mas, M. Sanchez-Vives
Heart Rate Variability Response to Stressful Event in Healthy Subjects........................................................................ 378 Chih-Yuan Chuang, Wei-Ru Han and Shuenn-Tsong Young
Automatic Quantitative Analysis of Myocardial Perfusion MRI ..................................................................................... 381 C. Li and Y. Sun
Visualization of Articular Cartilage Using Magnetic Resonance Imaging Data............................................................. 386 C.L. Poh and K. Sheah
A Chaotic Detection Method for Steady-State Visual Evoked Potentials........................................................................ 390 X.Q. Li and Z.D. Deng
Speckle Reduction of Echocardiograms via Wavelet Shrinkage of Ultrasonic RF Signals............................................ 395 K. Nakayama, W. Ohyama, T. Wakabayashi, F. Kimura, S. Tsuruoka and K. Sekioka
Advanced Pre-Surgery Planning by Animated Biomodels in Virtual Reality................................................................. 399 T. Mallepree, D. Bergers
Computerized Handwriting Analysis in Children with/without Motor Incoordination ................................................ 402 S.H. Chang and N.Y. Yu
The Development of Computer-assisted Assessment in Chinese Handwriting Performance ........................................ 406 N.Y. Yu and S.H. Chang
Novel Tools for Quantification of Brain Responses to Music Stimuli.............................................................................. 411 O. Sourina, V.V. Kulish and A. Sourin
An Autocorrection Algorithm for Detection of Misaligned Fingerprints ........................................................................ 415 Sai Krishna Alahari, Abhiram Pothuganti, Eshwar Chandra Vidya Sagar, Venkata Ravi kumar Garnepudi and Ram Prakash Mahidhara
A Low Power Wireless Downlink Transceiver for Implantable Glucose Sensing Biosystems....................................... 418 D.W.Y. Chung, A.C.B. Albason, A.S.L. Lou and A.A.S. Hu
Advances in Automatic Sleep Analysis ............................................................................................................................... 422 B. Ahmed and R. Tafreshi
Early Cancer Diagnosis by Image Processing Sensors Measuring the Conductive or Radiative Heat......................... 427 G. Gavriloaia, A.M. Ghemigian and A.E. Hurduc
Analysis of Saccadic Eye Movements of Epileptic Patients using Indigenously Designed and Developed Saccadic Diagnostic System ....................................................................................................................... 431 M. Vidapanakanti, Dr. S. Kakarla, S. Katukojwala and Dr. M.U.R. Naidu
System Design of Ultrasonic Image-guided Focused Ultrasound for Blood Brain Barrier disruption ......................... 435 W.C. Huang, X.Y. Wu, H.L. Liu
A Precise Deconvolution Procedure for Deriving a Fluorescence Decay Waveform of a Biomedical Sample............. 439 H. Shibata, M. Ohyanagi and T. Iwata
Laser Speckle Contrast Analysis Using Adaptive Window............................................................................................... 444 H.-Y. Jin, N.V. Thakor, H.-C. Shin
Neural Decoding of Single and Multi-finger Movements Based on ML .......................................................................... 448 H.-C. Shin, M. Schieber and N. Thakor
Maximum Likelihood Method for Finger Motion Recognition from sEMG Signals ..................................................... 452 Kyoung-Jin Yu, Kab-Mun Cha and Hyun-Chool Shin
Cardiorespiratory Coordination in Rats is Influenced by Autonomic Blockade............................................................ 456 M.M. Kabir, M.I. Beig, E. Nalivaiko, D. Abbott and M. Baumert
Influence of White Matter Anisotropy on the Effects of Transcranial Direct Current Stimulation: A Finite Element Study ........................................................................................................................................................ 460 W.H. Lee, H.S. Seo, S.H. Kim, M.H. Cho, S.Y. Lee and T.-S. Kim
Real-time Detection of Nimodipine Effect on Ischemia Model......................................................................................... 465 G.J. Lee, S.K. Choi, Y.H. Eo, J.E. Lim, J.H. Park, J.H. Han, B.S. Oh and H.K. Park
Windowed Nonlinear Energy Operator-based First-arrival Pulse Detection for Ultrasound Transmission Computed Tomography..................................................................................................... 468 S.H. Kim, C.H. Kim, E. Savastyuk, T. Kochiev, H.-S. Kim and T.-S. Kim
Digital Dental Model Analysis ............................................................................................................................................. 472 Wisarut Bholsithi, Chanjira Sinthanayothin
Cervical Cell Classification using Fourier Transform ...................................................................................................... 476 Thanatip Chankong, Nipon Theera-Umpon, Sansanee Auephanwiriyakul
An Oscillometry-Based Approach for Measuring Blood Flow of Brachial Arteries ...................................................... 481 S.-H. Liu, J.-J. Wang and K.-S. Huang
Fuzzy C-Means Clustering for Myocardial Ischemia Identification with Pulse Waveform Analysis........................... 485 Shing-Hong Liu, Kang-Ming Chang and Chu-Chang Tyan
Study of the Effect of Short-Time Cold Stress on Heart Rate Variability....................................................................... 490 J.-J. Wang and C.-C. Chen
A Reflection-Type Pulse Oximeter Using Four Wavelengths Equipped with a Gain-Enhanced Gated-Avalanche-Photodiode .............................................................................................................................................. 493 T. Miyata, T. Iwata and T. Araki
Estimation of Central Aortic Blood Pressure using a Noninvasive Automatic Blood Pressure Monitor ..................... 497 Yuan-Ta Shih, Yi-Jung Sun, Chen-Huan Chen, Hao-min Cheng and Hu Wei-Chih
Design of a PDA-based Asthma Peak Flow Monitor System............................................................................................ 501 C.-M. Wu and C.-W. Su
Development of the Tongue Diagnosis System by Using Surface Coating Mirror ......................................................... 505 Y.J. Jeon, K.H. Kim, H.H. Ryu, J. Lee, S.W. Lee and J.Y. Kim
Design a Moving Artifacts Detection System for a Radial Pulse Wave Analyzer ........................................................... 508 J. Lee, Y.J. Woo, Y.J. Jeon, Y.J. Lee and J.Y. Kim
A Real-time Interactive Editor for 3D Image Registration............................................................................................... 511 T. McPhail and J. Warren
A Novel Headset with a Transmissive PPG Sensor for Heart Rate Measurement ......................................................... 519 Kunsoo Shin, Younho Kim, Sanggon Bae, Kunkook Park, Sookwan Kim
Improvement on Signal Strength Detection of Radio Imaging Method for Biomedical Application............................ 523 I. Hieda and K.C. Nam
Feature Extraction Methods for Tongue Diagnostic System ............................................................................................ 527 K.H. Kim, J.-H. Do, Y.J. Jeon, J.-Y. Kim
Mechanical-Scanned Low-Frequency (28-kHz) Ultrasound to Induce Localized Blood-Brain Barrier Disruption..... 532 C.Y. Ting, C.H. Pan and H.L. Liu
Feasibility Study of Using Ultrasound Stimulation to Enhance Blood-Brain Barrier Disruption in a Brain Tumor Model ...................................................................................................................................................... 536 C.H. Pan, C.Y. Ting, C.Y. Huang, P.Y. Chen, K.C. Wei and H.L. Liu
On Calculating the Time-Varying Elastance Curve of a Radial Artery Using a Miniature Vibration Method .......... 540 S. Chang, J.-J. Wang, H.-M. Su, C.-P. Liu
A Wide Current Range Readout Circuit with Potentiostat for Amperometric Chemical Sensors ............................... 543 W.Y. Chung, S.C. Cheng, C.C. Chuang, F.R.G. Cruz
Multiple Low-Pressure Sonications to Improve Safety of Focused-Ultrasound Induced Blood-Brain Barrier Disruption: In a 1.5-MHz Transducer Setup ..................................................................................................................... 547 P.H. Hsu, J.J. Wang, K.J. Lin, J.C. Chen and H.L. Liu
Phase Synchronization Index of Vestibular System Activity in Schizophrenia .............................................................. 551 S. Haghgooie, B.J. Lithgow, C. Gurvich, and J. Kulkarni
Constrained Spatiotemporal ICA and Its Application for fMRI Data Analysis............................................................. 555 Tahir Rasheed, Young-Koo Lee, and Tae-Seong Kim
ARGALI: An Automatic Cup-to-Disc Ratio Measurement System for Glaucoma Analysis Using Level-set Image Processing ....................................................................................................................................... 559 J. Liu, D.W.K. Wong, J.H. Lim, H. Li, N.M. Tan, Z. Zhang, T.Y. Wong, R. Lavanya
Validation of an In Vivo Model for Monitoring Trabecular Bone Quality Changes Using Micro CT, Archimedes-based Volume Fraction Measurement and Serial Milling........................................................................... 563 B.H. Kam, M.J. Voor, S. Yang, R. Burden, Jr. and S. Waddell
A Force Sensor System for Evaluation of Behavioural Recovery after Spinal Cord Injury in Rats............................. 566 Y.C. Wei, M.W. Chang, S.Y. Hou, M.S. Young
Flow Imaging and Validation of MR Fluid Motion Tracking .......................................................................................... 569 K.K.L. Wong, R.M. Kelso, S.G. Worthley, P. Sanders, J. Mazumdar and D. Abbott
High Frequency Electromagnetic Thermotherapy for Cancer Treatment ..................................................................... 574 Sheng-Chieh Huang, Chih-Hao Huang, Xi-Zhang Lin, Gwo-Bin Lee
Real-Time Electrocardiogram Waveform Classification Using Self-Organization Neural Network............................ 578 C.C. Chiu, C.L. Hsu, B.Y. Liau and C.Y. Lan
The Design of Oximeter in Sleep Monitoring..................................................................................................................... 582 C.H. Lu, J.H. Lin, S.T. Tang, Z.X. You and C.C. Tai
Integration of Image Processing from the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) in Java Language for Medical Imaging Applications........................................................................................................ 586 D. Gansawat, W. Jirattiticharoen, S. Sotthivirat, K. Koonsanit, W. Narkbuakaew, P. Yampri and W. Sinthupinyo
ECG Feature Extraction by Multi Resolution Wavelet Analysis based Selective Coefficient Method ......................... 590 Saurabh Pal and Madhuchhanda Mitra
Microarray Image Denoising using Spatial Filtering and Wavelet Transformation...................................................... 594 A. Mastrogianni, E. Dermatas and A. Bezerianos
Investigation of a Classification about Time Series Signal Using SOM........................................................................... 598 Y. Nitta, M. Akutagawa, T. Emoto, T. Okahisa, H. Miyamoto, Y. Ohnishi, M. Nishimura, S. Nakane, R. Kaji, Y. Kinouchi
PCG Spectral Pattern Classification: Approach to Cardiac Energy Signature Identification...................................... 602 Abbas K. Abbas, Rasha Bassam
Characteristic of AEP and SEP for Localization of Evoked Potential by Recalling....................................................... 606 K. Mukai, Y. Kaji, F. Shichijou, M. Akutagawa, Y. Kinouchi and H. Nagashino
Automatic Detection of Left and Right Eye in Retinal Fundus Images........................................................................... 610 N.M. Tan, J. Liu, D.W.K. Wong, J.H. Lim, H. Li, S.B. Patil, W. Yu, T.Y. Wong
Visualizing Occlusal Contact Points Using Laser Surface Dental Scans ......................................................................... 615 L.T. Hiew, S.H. Ong and K.W.C. Foong
Modeling Deep Brain Stimulation....................................................................................................................................... 619 Charles T.M. Choi and Yen-Ting Lee
Implementation of Trajectory Analysis System for Metabolic Syndrome Detection ..................................................... 622 Hsien-Tsai Wu, Di-Song Yang, Huo-Ying Chang, An-Bang Liu, Hui-Ming Chung, Ming-Chien Liu and Lee-Kang Wong
Diagnosis of Hearing Disorders and Screening using Artificial Neural Networks based on Distortion Product Otoacoustic Emissions ......................................................................................................... 626 V.P. Jyothiraj and A. Sukesh Kumar
Detection of Significant Biclusters in Gene Expression Data using Reactive Greedy Randomized Adaptive Search Algorithm ................................................................................................................................................. 631 Smitha Dharan and Achuthsankar S. Nair
Development of Noninvasive Thrombus Detection System with Near-Infrared Laser and Photomultiplier for Artificial Hearts .............................................................................................................................................................. 635 S. Tsujimura, H. Koguchi, T. Yamane, T. Tsutsui and Y. Sankai
Using Saliency Features for Graphcut Segmentation of Perfusion Kidney Images........................................................ 639 Dwarikanath Mahapatra and Ying Sun
Low Power Electrocardiogram QRS Detection in Real-Time .......................................................................................... 643 E. Zoghlami Ayari, R. Tielert and N. Wehn
Analytical Decision Making from Clinical Data - Diagnosis and Classification of Epilepsy Risk Levels from EEG Signals - A Case Study ............................................................................................................................... 647 V.K. Sudhaman, Dr. (Mrs.) R. Sukanesh, R. HariKumar
Magnetic Field Transducers Based on the Phase Characteristics of GMI Sensors and Aimed at Biomedical Applications............................................................................................................................. 652 E. Costa Silva, L.A.P. Gusmão, C.R. Hall Barbosa, E. Costa Monteiro
Effects of Task Difficulty and Training of Visuospatial Working Memory Task on Brain Activity ............................ 657 Takayasu Ando, Keiko Momose, Keita Tanaka, Keiichi Saito
Retrieval of MR Kidney Images by Incorporating Shape Information in Histogram of Low Level Features............. 661 D. Mahapatra, S. Roy and Y. Sun
Performance Comparison of Bone Segmentation on Dental CT Images ......................................................................... 665 P. Yampri, S. Sotthivirat, D. Gansawat, K. Koonsanit, W. Narkbuakaew, W. Areeprayolkij, W. Sinthupinyo
Multi Scale Assessment of Bone Architecture and Quality from CT Images.................................................................. 669 T. Kalpalatha Reddy, Dr. N. Kumaravel
An Evolutionary Heuristic Approach for Functional Modules Identification from Composite Biological Data.......................................................................................................................................... 673 I.A. Maraziotis, A. Dragomir and A. Bezerianos
An Empirical Approach for Objective Pain Measurement using Dermal and Cardiac Parameters ............................ 678 Shankar K., Dr. Subbiah Bharathi V., Jackson Daniel
A Diagnosis Support System for Finger Tapping Movements Using Magnetic Sensors and Probabilistic Neural Networks ..................................................................................................................................... 682 K. Shima, T. Tsuji, A. Kandori, M. Yokoe and S. Sakoda
Increasing User Functionality of an Auditory P3 Brain-Computer Interface for Functional Electrical Stimulation Application............................................................................................................. 687 A.S.J Bentley, C.M. Andrew and L.R. John
An Electroencephalogram Signal based Triggering Circuit for controlling Hand Grasp in Neuroprosthetics........... 691 G. Karthikeyan, Debdoot Sheet and M. Manjunatha
A Novel Channel Selection Method Based on Partial KL Information Measure for EMG-based Motion Classification ................................................................................................................................ 694 T. Shibanoki, K. Shima, T. Tsuji, A. Otsuka and T. Chin
A Mobile Phone for People Suffering from the Locked-In Syndrome .............................................................................. 699 D. Thiagarajan, Anupama V. Iyengar
Generating Different Views of Clinical Guidelines Using Ontology Based Semantic Annotation ................................ 701 Rajendra Singh Sisodia, Puranjoy Bhattacharya and V. Pallavi
A High-Voltage Discharging System for Extracorporeal Shock-Wave Therapy............................................................ 706 I. Manousakas, S.M. Liang, L.R. Wan
Development of the Robot Arm Control System Using Forearm SEMG ........................................................................ 710 Yusuke Wakita, Noboru Takizawa, Kentaro Nagata and Kazushige Magatani
Tissue Classification from Brain Perfusion MR Images Using Expectation-Maximization Algorithm Initialized by Hierarchical Clustering on Whitened Data................................................................................................................... 714 Y.T. Wu, Y.C. Chou, C.F. Lu, S.R. Huang and W.Y. Guo
Enhancement of Signal-to-noise Ratio of Peroneal Nerve Somatosensory Evoked Potential Using Independent Component Analysis and Time-Frequency Template ...................................................................... 718 C.I. Hung, Y.R. Yang, R.Y. Wang, W.L. Chou, J.C. Hsieh and Y.T. Wu
Multi-tissue Classification of Diffusion-Weighted Brain Images in Multiple System Atrophy Using Expectation Maximization Algorithm Initialized by Hierarchical Clustering ..................................................... 722 C.F. Lu, P.S. Wang, B.W. Soong, Y.C. Chou, H.C. Li, Y.T. Wu
Small-world Network for Investigating Functional Connectivity in Bipolar Disorder: A Functional Magnetic Images (fMRI) Study.................................................................................................................... 726 S. Teng, P.S. Wang, Y.L. Liao, T.-C. Yeh, T.-P. Su, J.C. Hsieh, Y.T. Wu
Fractal Dimension Analysis for Quantifying Brain Atrophy of Multiple System Atrophy of the Cerebellar Type (MSA-C) ......................................................................................................................................... 730 Z.Y. Wang, B.W. Soong, P.S. Wang, C.W. Jao, K.K. Shyu, Y.T. Wu
A Novel Method in Detecting CCA Lumen Diameter and IMT in Dynamic B-mode Sonography............................... 734 D.C. Cheng, Q. Pu, A. Schmidt-Trucksaess, C.H. Liu
Acoustic Imaging of Heart Using Microphone Arrays...................................................................................................... 738 H. Kajbaf and H. Ghassemian
Statistical Variations of Ultrasound Backscattering From the Blood under Steady Flow ............................................. 742 Chih-Chung Huang, Yi-Hsun Lin, and Shyh-Hau Wang
Employing Microbubbles and High-Frequency Time-Resolved Scanning Acoustic Microscopy for Molecular Imaging ......................................................................................................................................................... 746 P. Anastasiadis, A.L. Klibanov, C. Layman, W. Bost, P.V. Zinin, R.M. Lemor and J.S. Allen
Application of Fluorescently Labeled Lectins for the Visualization of Biofilms of Pseudomonas Aeruginosa by High-Frequency Time-Resolved Scanning Acoustic Microscopy................................................................................ 750 P. Anastasiadis, K. Mojica, C. Layman, M.L. Matter, J. Henneman, C. Barnes and J.S. Allen
A Comparative Study for Disease Identification from Heart Auscultation using FFT, Cepstrum and DCT Correlation Coefficients ...................................................................................................................................... 754 Swanirbhar Majumder, Saurabh Pal and Pranab Kishore Dutta
Multi Resolution Analysis of Pediatric ECG Signal .......................................................................................................... 758 Srinivas Kachibhotla, Shamla Mathur
3D CT Craniometric Study of Thai Skulls Relevance to Sex Determination Using Logistic Regression Analysis......... 761 S. Rooppakhun, S. Piyasin and K. Sitthiseripratip
Analysis of Quantified Indices of EMG for Evaluation of Parkinson’s Disease ............................................................. 765 B. Sepehri, A. Esteki, G.A. Shahidi and M. Moinodin
A Test for the Assessment of Reaction Time for Narcotic Rehabilitation Patients......................................................... 769 S.G. Patil, T.J. Gale and C.R. Clive
Track 2: Biosensors, Biochips & BioMEMs; Nanobiotechnology
Microdevice for Trapping Circulating Tumor Cells for Cancer Diagnostics ................................................................. 774 S.J. Tan, L. Yobas, G.Y.H. Lee, C.N. Ong and C.T. Lim
In-situ Optical Oxygen Sensing for Bio-artificial Liver Bioreactors................................................................................ 778 V. Nock, R.J. Blaikie and T. David
Quantitative and Indirect Qualitative Analysis Approach for Nanodiamond Using SEM Images and Raman Response .......................................................................................................................... 782 Niranjana S., B.S. Satyanarayana, U.C. Niranjan and Shounak De
Non-invasive Acquisition of Blood Pulse Using Magnetic Disturbance Technique ........................................................ 786 Chee Teck Phua, Gaëlle Lissorgues, Bruno Mercier
Microfabrication of High-Density Microelectrode Arrays for In Vitro Applications ......................................................... 790 Lionel Rousseau, Gaëlle Lissorgues, Fabrice Verjus, Blaise Yvert
A MEMS-based Impedance Pump Based on a Magnetic Diaphragm ............................................................................. 794 C.Y. Lee, Z.H. Chen, C.Y. Wen, L.M. Fu, H.T. Chang, R.H. Ma
Sample Concentration and Auto-location With Radiate Microstructure Chip for Peptide Analysis by MALDI-MS................................................................................................................................... 799 Shun-Yuan Chen, Chih-Sheng Yu, Jun-Sheng Wang, Chih-Cheng Huang, Yi-Chiuen Hu
The Synthesis of Iron Oxide Nanoparticles via Seed-Mediated Process and its Cytotoxicity Studies .......................... 802 J.-H. Huang, H.J. Parab, R.S. Liu, T.-C. Lai, M. Hsiao, C.H. Chen, D.-P. Tsai and Y.-K. Hwu
Characterization of Functional Nanomaterials in Cosmetics and its Cytotoxic Effects................................................. 806 J.-H. Huang, H.J. Parab, R.S. Liu, T.-C. Lai, M. Hsiao, C.H. Chen and Y.K. Hwu
Design and Analysis of MEMS based Cantilever Sensor for the Detection of Cardiac Markers in Acute Myocardial Infarction ........................................................................................................................................... 810 Sree Vidhya & Lazar Mathew
Integrating Micro Array Probes with Amplifier on Flexible Substrate .......................................................................... 813 J.M. Lin, P.W. Lin and L.C. Pan
Investigating Combinatorial Drug Effects on Adhesion and Suspension Cell Types Using a Microfluidic-Based Sensor System ........................................................................................................................ 817 S. Arora, C.S. Lim, M. Kakran, J.Y.A. Foo, M.K. Sakharkar, P. Dixit, and J. Miao
Organic Phase Coating of Polymers onto Agarose Microcapsules for Encapsulation of Biomolecules with High Efficiency ............................................................................................................................................................. 821 J. Bai, W.C. Mak, X.Y. Chang and D. Trau
LED Based Sensor System for Non-Invasive Measurement of the Hemoglobin Concentration in Human Blood ...... 825 U. Timm, E. Lewis, D. McGrath, J. Kraitl and H. Ewald
Amperometric Hydrogen Peroxide Sensors with Multivalent Metal Oxide-Modified Electrodes for Biomedical Analysis........................................................................................................................................................ 829 Tesfaye Waryo, Petr Kotzian, Sabina Begić, Petra Bradizlova, Negussie Beyene, Priscilla Baker, Boitumelo Kgarebe, Emir Turkušić, Emmanuel Iwuoha, Karel Vytřas and Kurt Kalcher
Patch-Clamping in Droplet Arrays: Single Cell Positioning via Dielectrophoresis ........................................................ 834 J. Reboud, M.Q. Luong, C. Rosales and L. Yobas
Label-free Detection of Proteins with Surface-functionalized Silicon Nanowires........................................................... 838 R.E. Chee, J.H. Chua, A. Agarwal, S.M. Wong, G.J. Zhang
Bead-based DNA Microarray Fabricated on Porous Polymer Films............................................................................... 842 J.T. Cheng, J. Li, N.G. Chen, P. Gopalakrishnakone and Y. Zhang
Monolithic CMOS Current-Mode Instrumentation Amplifiers for ECG Signals .......................................................... 846 S.P. Almazan, L.I. Alunan, F.R. Gomez, J.M. Jarillas, M.T. Gusad and M. Rosales
Cells Separation by Traveling Wave Dielectrophoretic Microfluidic Devices ................................................................ 851 T. Maturos, K. Jaruwongrangsee, A. Sappat, T. Lomas, A. Wisitsora-at, P. Wanichapichart and A. Tuantranont
A Novel pH Sensor Based on the Swelling of A Hydrogel Membrane ............................................................................. 855 K.F. Chou, Y.C. Lin, H.Y. Chen, S.Y. Huang and Z.Y. Lin
Simulation and Experimental Study of Electrowetting on Dielectric (EWOD) Device for a Droplet Based Polymerase Chain Reaction System.................................................................................................. 859 K. Ugsornrat, T. Maturus, A. Jomphoak, T. Pogfai, N.V. Afzulpurkar, A. Wisitsoraat, A. Tuantranont
A Label-Free Impedimetric Immunosensor Based On Humidity Sensing Properties of Barium Strontium Titanate ............................................................................................................................................. 863 M. Rasouli, O.K. Tan, L.L. Sun, B.W. Mao and L.H. Gan
Physical Way to Enhance the Quantum Yield and Analyze the Photostability of Fluorescent Gold Clusters............. 867 D.F. Juan, C.A.J. Lin, T.Y. Yang, C.J. Ke, S.T. Lin, J.Y. Chen and W.H. Chang
Biocompatibility Study of Gold Nanoparticles to Human Cells ....................................................................................... 870 J.H. Fan, W.I. Hung, W.T. Li, J.M. Yeh
Gold Nanorods Modified with Chitosan As Photothermal Agents................................................................................... 874 Chia-Wei Chang, Chung-Hao Wang and Ching-An Peng
QDs Capped with Enterovirus As Imaging Probes for Drug Screening.......................................................................... 878 Chung-Hao Wang, Ching-An Peng
Elucidation of Driving Force of Neutrophile in Liquid by Cytokine Concentration Gradient...................................... 882 M. Tamagawa and K. Matsumura
Development of a Biochip Using Antibody-covered Gold Nano-particles to Detect Antibiotics Resistance of Specific Bacteria ............................................................................................................................................................... 884 Jung-Tang Huang, Meng-Ting Chang, Guo-Chen Wang, Hua-Wei Yu and Jeen Lin
Photothermal Ablation of Stem-Cell Like Glioblastoma Using Carbon Nanotubes Functionalized with Anti-CD133 ................................................................................................................................................................... 888 Chung-Hao Wang, Yao-Jhang Huang and Ching-An Peng
A Design of Smart Dust to Study the Hippocampus.......................................................................................................... 892 Anupama V. Iyengar, D. Thiagarajan
Determination of Affinity Constant from Microfluidic Binding Assay ........................................................................... 894 D. Tan, P. Roy
Nucleic Acid Sample Preparation from Dengue Virus Using a Chip-Based RNA Extractor in a Self-Contained Microsystem......................................................................................................................................... 898 L. Zhang, Siti R.M. Rafei, L. Xie, Michelle B.-R. Chew, C.S. Premchandra, H.M. Ji, Y. Chen, L. Yobas, R. Rajoo, K.L. Ong, Rosemary Tan, Kelly S.H. Lau, Vincent T.K. Chow, C.K. Heng and K.-H. Teo
In-Vitro Transportation of Drug Molecule by Actin Myosin Motor System .................................................................. 902 Harsimran Kaur, Suresh Kumar, Inderpreet Kaur, Kashmir Singh and Lalit M. Bharadwaj
Track 3: Clinical Engineering; Telemedicine & Healthcare; Computer-Assisted Surgery; Medical Robotics; Rehabilitation Engineering & Assistive Technology
Tumour Knee Replacement Planning in a 3D Graphics System ...................................................................................... 906 K. Subburaj, B. Ravi and M.G. Agarwal
Color Medical Image Vector Quantization Coding Using K-Means: Retinal Image ..................................................... 911 Agung W. Setiawan, Andriyan B. Suksmono and Tati R. Mengko
Development of the ECG Detector by Easy Contact for Helping Efficient Rescue Operation...................................... 915 Takahiro Asaoka and Kazushige Magatani
A Navigation System for the Visually Impaired Using Colored Guide Line and RFID Tags........................................ 919 Tatsuya Seto, Yuriko Shiidu, Kenji Yanashima and Kazushige Magatani
A Development of the Equipment Control System Using SEMG..................................................................................... 923 Noboru Takizawa, Yusuke Wakita, Kentaro Nagata, Kazushige Magatani
The Analysis of a Simultaneous Measured Forearm’s EMG and f-MRI ........................................................................ 927 Tsubasa Sasaki, Kentaro Nagata, Masato Maeno and Kazushige Magatani
Development of A Device to Detect SPO2 which is Installed on a Rescue Robot ............................................................ 931 Yoshiaki Kanaeda, Takahiro Asaoka and Kazushige Magatani
An Estimation Method for Muscular Strength During Recognition of Hand Motion...................................................... 935 Takemi Nakano, Kentaro Nagata, Masahumi Yamada and Kazushige Magatani
The Navigation System for the Visually Impaired Using GPS ......................................................................................... 938 Tomoyuki Kanno, Kenji Yanashima and Kazushige Magatani
Investigation of Similarities among Human Joints through the Coding System of Human Joint Properties—Part 1 .................................................................................................................................... 942 S.C. Chen, S.T. Hsu, C.L. Liu and C.H. Yu
Investigation of Similarities among Human Joints through the Coding System of Human Joint Properties—Part 2 .................................................................................................................................... 946 S.C. Chen, S.T. Hsu, C.L. Liu and C.H. Yu
Circadian Rhythm Monitoring in HomeCare Systems ..................................................................................................... 950 M. Cerny, M. Penhaker
Effects of Muscle Vibration on Independent Finger Movements ..................................................................................... 954 B.-S. Yang and S.-J. Chen
Modelling Orthodontal Braces for Non-invasive Delivery of Anaesthetics in Dentistry ................................................ 957 S. Ravichandran
Assessment of the Peripheral Performance and Cortical Effects of SHADE, an Active Device Promoting Ankle Dorsiflexion................................................................................................................ 961 S. Pittaccio, S. Viscuso, F. Tecchio, F. Zappasodi, M. Rossini, L. Magoni, S. Pirovano, S. Besseghini and F. Molteni
A Behavior Mining Method by Visual Features and Activity Sequences in Institution-based Care............................. 966 J.H. Huang, C.C. Hsia and C.C. Chuang
Chronic Disease Recurrence Prediction Model for Diabetes Mellitus Patients’ Long-Term Caring............................ 970 Chia-Ming Tung, Yu-Hsien Chiu, Chi-Chun Shia
The Study of Correlation between Foot-pressure Distribution and Scoliosis ................................................................. 974 J.H. Park, S.C. Noh, H.S. Jang, W.J. Yu, M.K. Park and H.H. Choi
Sensitivity Analysis in Sensor HomeCare Implementation............................................................................................... 979 M. Penhaker, R. Bridzik, V. Novak, M. Cerny, M. Rosulek
Fall Detection Unit for Elderly ............................................................................................................................................ 984 Arun Kumar, Fazlur Rahman, Tracey Lee
Reduction of Body Sway Can Be Evaluated By Sparse Density during Exposure to Movies on Liquid Crystal Displays ................................................................................................................................. 987 H. Takada, K. Fujikake, M. Omori, S. Hasegawa, T. Watanabe and M. Miyao
Effects of Phototherapy to Shangyingxiang Xue on Patients with Allergic Rhinitis ...................................................... 992 K.-H. Hu, D.-N. Yan, W.-T. Li
The Study of Neural Correlates on Body Ownership Modulated By the Sense of Agency Using Virtual Reality ....... 996 W.H. Lee, J.H. Ku, H.R. Lee, K.W. Han, J.S. Park, J.J. Kim, I.Y. Kim and S.I. Kim
Diagnosis and Management of Diabetes Mellitus through a Knowledge-Based System .............................................. 1000 Morium Akter, Mohammad Shorif Uddin and Aminul Haque
Modeling and Mechanical Design of a MRI-Guided Robot for Neurosurgery ............................................................. 1004 Z.D. Hong, C. Yun and L. Zhao
The Study for Multiple Security Mechanism in Healthcare Information System for Elders ...................................... 1009 C.Y. Huang and J.L. Su
Individual Movement Trajectories in Smart Homes ....................................................................................................... 1014 M. Chan, S. Bonhomme, D. Estève, E. Campo
MR Image Reconstruction for Positioning Verification with a Virtual Simulation System for Radiation Therapy........................................................................................................................................................ 1019 C.F. Jiang, C.H. Huang, T.S. Su
Experimental Setup of Hemilarynx Model for Microlaryngeal Surgery Applications .................................................... 1024 J.Q. Choo, D.P.C. Lau, C.K. Chui, T. Yang, S.H. Teoh
Virtual Total Knee Replacement System Based on VTK................................................................................................ 1028 Hui Ding, Tianzhu Liang, Guangzhi Wang, Wenbo Liu
Motor Learning of Normal Subjects Exercised with a Shoulder-Elbow Rehabilitation Robot................................... 1032 H.H. Lin, M.S. Ju, C.C.K. Lin, Y.N. Sun and S.M. Chen
Using Virtual Markers to Explore Kinematics of Articular Bearing Surfaces of Knee Joints.................................... 1037 Guangzhi Wang, Zhonglin Zhu, Hui Ding, Xiao Dang, Jing Tang and Yixin Zhou
Simultaneous Recording of Physiological Parameters in Video-EEG Laboratory in Clinical and Research Settings ........................................................................................................................................................ 1042 R. Bridzik, V. Novák, M. Penhaker
Preliminary Modeling for Intra-Body Communication .................................................................................................. 1044 Y.M. Gao, S.H. Pun, P.U. Mak, M. Du and M.I. Vai
Application of the Home Telecare System in the Treatment of Diabetic Foot Syndrome............................................ 1049 P. Ladyzynski, J.M. Wojcicki, P. Foltynski, G. Rosinski, J. Krzymien, B. Mrozikiewicz-Rakowska, K. Migalska-Musial and W. Karnafel
In-vitro Evaluation Method to Measure the Radial Force of Various Stents................................................................ 1053 Y. Okamoto, T. Tanaka, H. Kobashi, K. Iwasaki, M. Umezu
Motivating Children with Attention Deficiency Disorder Using Certain Behavior Modification Strategies ............. 1057 Huang Qunfang Jacklyn, S. Ravichandran
Development of a Walking Robot for Testing Ankle Foot Orthosis - Robot Validation Test....................................... 1061 H.J. Lai, C.H. Yu, W.C. Chen, T.W. Chang, K.J. Lin, C.K. Cheng
Regeneration of Speech in Voice-Loss Patients................................................................................................................ 1065 H.R. Sharifzadeh, I.V. McLoughlin and F. Ahmadi
Human Gait Analysis using Wearable Sensors of Acceleration and Angular Velocity................................................ 1069 R. Takeda, S. Tadano, M. Todoh and S. Yoshinari
Deformable Model for Serial Ultrasound Images Segmentation: Application to Computer Assisted Hip Arthroplasty........................................................................................................ 1073 A. Alfiansyah, K.H. Ng and R. Lamsudin
Bone Segmentation Based On Local Structure Descriptor Driven Active Contour ..................................................... 1077 A. Alfiansyah, K.H. Ng and R. Lamsudin
An Acoustically-Analytic Approach to Behavioral Patterns for Monitoring Living Activities ................................... 1082 Kuang-Che Liu, Gwo-Lang Yan, Yu-Hsien Chiu, Ming-Shih Tsai, Kao-Chi Chung
Implementation of Smart Medical Home Gateway System for Chronic Patients ........................................................ 1086 Chun Yu, Jhih-Jyun Yang, Tzu-Chien Hsiao, Pei-Ling Liu, Kai-Ping Yao, Chii-Wann Lin
A Comparative Study of Fuzzy PID Control Algorithm for Position Control Performance Enhancement in a Real-time OS Based Laparoscopic Surgery Robot................................................................................................... 1090 S.J. Song, J.W. Park, J.W. Shin, D.H. Lee, J. Choi and K. Sun
Investigation of the Effect of Acoustic Pressure and Sonication Duration on Focused-Ultrasound Induced Blood-Brain Barrier Disruption................................................................................. 1094 P.C. Chu, M.C. Hsiao, Y.H. Yang, J.C. Chen, H.L. Liu
Design and Implementation of Web-Based Healthcare Management System for Home Healthcare ......................... 1098 S. Tsujimura, N. Shiraishi, A. Saito, H. Koba, S. Oshima, T. Sato, F. Ichihashi and Y. Sankai
Quantitative Assessment of Left Ventricular Myocardial Motion Using Shape–Constraint Elastic Link Model ..... 1102 Y. Maeda, W. Ohyama, H. Kawanaka, S. Tsuruoka, T. Shinogi, T. Wakabayashi and K. Sekioka
Assessment of Foot Drop Surgery in Leprosy Subjects Using Frequency Domain Analysis of Foot Pressure Distribution Images ............................................................................................................................... 1107 Bhavesh Parmar
The Development of New Function for ICU/CCU Remote Patient Monitoring System Using a 3G Mobile Phone and Evaluations of the System............................................................................................... 1112 Akinobu Kumabe, Pu Zhang, Yuichi Kogure, Masatake Akutagawa, Yohsuke Kinouchi, Qinyu Zhang
Development of Heart Rate Monitoring for Mobile Telemedicine using Smartphone................................................. 1116 Hun Shim, Jung Hoon Lee, Sung Oh Hwang, Hyung Ro Yoon, Young Ro Yoon
Cognitive Effect of Music for Joggers Using EEG........................................................................................................... 1120 J. Srinivasan, K.M. Ashwin Kumar and V. Balasubramanian
System for Conformity Assessment of Electrocardiographs .......................................................................................... 1124 M.C. Silva, L.A.P. Gusmão, C.R. Hall Barbosa and E. Costa Monteiro
The Development and Strength Reinforcement of Rapid Prototyping Prosthetic Socket Coated with a Resin Layer for Transtibial Amputee ................................................................................................................... 1128 C.T. Lu, L.H. Hsu, G.F. Huang, C.W. Lai, H.K. Peng, T.Y. Hong
A New Phototherapy Apparatus Designed for the Curing of Neonatal Jaundice......................................................... 1132 C.B. Tzeng, T.S. Wey, M.S. Young
Study to Promote the Treatment Efficiency for Neonatal Jaundice by Simulation...................................................... 1136 Alberto E. Chaves Barrantes, C.B. Tzeng and T.S. Wey
Low Back Pain Evaluation for Cyclist using sEMG: A Comparative Study between Bicyclist and Aerobic Cyclist ........................................................................................ 1140 J. Srinivasan and V. Balasubramanian
3D Surface Modeling and Clipping of Large Volumetric Data Using Visualization Toolkit Library ........................ 1144 W. Narkbuakaew, S. Sotthivirat, D. Gansawat, P. Yampri, K. Koonsanit, W. Areeprayolkij and W. Sinthupinyo
The Effects of Passive Warm-Up With Ultrasound in Exercise Performance and Muscle Damage........................... 1149 Fu-Shiu Hsieh, Yi-Pin Wang, T.-W. Lu, Ai-Ting Wang, Chien-Che Huang, Cheng-Che Hsieh
Capacitive Interfaces for Navigation of Electric Powered Wheelchairs ........................................................................ 1153 K. Kaneswaran and K. Arshak
Health Care and Medical Implanted Communications Service ..................................................................................... 1158 K. Yekeh Yazdandoost and R. Kohno
MultiAgent System for a Medical Application over Web Technology: A Working Experience ................................. 1162 A. Aguilera, E. Herrera and A. Subero
Collaborative Radiological Diagnosis over the Internet.................................................................................................. 1166 A. Aguilera, M. Barmaksoz, M. Ordoñez and A. Subero
HandFlex ............................................................................................................................................................................. 1171 J. Selva Raj, Cyntalia Cipto, Ng Qian Ya, Leow Shi Jie, Isaac Lim Zi Ping and Muhamad Ryan b Mohamad Zah
Treatment Response Monitoring and Prognosis Establishment through an Intelligent Information System ........... 1175 C. Plaisanu, C. Stefan
A Surgical Training Simulator for Quantitative Assessment of the Anastomotic Technique of Coronary Artery Bypass Grafting ................................................................................................................................ 1179 Y. Park, M. Shinke, N. Kanemitsu, T. Yagi, T. Azuma, Y. Shiraishi, R. Kormos and M. Umezu
Development of Evaluation Test Method for the Possibility of Central Venous Catheter Perforation Caused by the Insertion Angle of a Guidewire and a Dilator ......................................................................................... 1183 M. Uematsu, M. Arita, K. Iwasaki, T. Tanaka, T. Ohta, M. Umezu and T. Tsuchiya
The Assessment Of Severely Disabled People To Verify Their Competence To Drive A Motor Vehicle With Evidence Based Protocols............................................................................................ 1187 Peter J. Roake
Track 4: Artificial Organs; Biomaterials; Controlled Drug Delivery; Tissue Engineering & Regenerative Medicine
Viscoelastic Properties of Elastomers for Small Joint Replacements ............................................................................ 1191 A. Mahomed, D.W.L. Hukins, S.N. Kukureka and D.E.T. Shepherd
Synergic Combination of Collagen Matrix with Knitted Silk Scaffold Regenerated Ligament with More Native Microstructure in Rabbit Model ........................................................................................................ 1195 Xiao Chen, Zi Yin, Yi-Ying Qi, Lin-Lin Wang, Hong-Wei Ouyang
Preparation, Bioactivity and Antibacterial Effect of Bioactive Glass/Chitosan Biocomposites .................................. 1199 Hanan H. Beherei, Khaled R. Mohamed, Amr I. Mahmoud
Biocompatibility of Metal Sintered Materials in Dependence on Multi-Material Graded Structure ......................... 1204 M. Lodererova, J. Rybnicek, J. Steidl, J. Richter, K. Boivie, R. Karlsen, O. Åsebø
PHBV Microspheres as Tissue Engineering Scaffold for Neurons................................................................................. 1208 W.H. Chen, B.L. Tang and Y.W. Tong
Fabrication of Three-Dimensional Tissues with Perfused Microchannels .................................................................... 1213 Katsuhisa Sakaguchi, Tatsuya Shimizu, Kiyotaka Iwasaki, Masayuki Yamato, Mitsuo Umezu, Teruo Okano
The Effects of Pulse Inductively Coupled Plasma on the Properties of Gelatin............................................................ 1217 I. Prasertsung, S. Kanokpanont, R. Mongkolnavin and S. Damrongsakkul
Dosimetry of 32P Radiocolloid for Radiotherapy of Brain Cyst...................................................................................... 1220 M. Sadeghi, E. Karimi
Overcoming Multidrug Resistance of Breast Cancer Cells by the Micellar Drug Carriers of mPEG-PCL-graft-cellulose............................................................................................................................................ 1224 Yung-Tsung Chen, Chao-Hsuan Chen, Ming-Fa Hsieh, Ann Shireen Chan, Ian Liau, Wan-Yu Tai
Individual 3D Replacements of Skeletal Defects.............................................................................................................. 1228 R. Jirman, Z. Horak, J. Mazanek and J. Reznicek
Brain Gate as an Assistive and Solution Providing Technology for Disabled People................................................... 1232 Prof. Shailaja Arjun Patil
Compressive Fatigue and Thermal Compressive Fatigue of Hybrid Resin Base Dental Composites......................... 1236 M. Javaheri, S.M. Seifi, J. Aghazadeh Mohandesi, F. Shafie
Development of Amphotericin B Loaded PLGA Nanoparticles for Effective Treatment of Visceral Leishmaniasis................................................................................................................................................... 1241 M. Nahar, D. Mishra, V. Dubey, N.K. Jain
Swelling, Dissolution and Disintegration of HPMC in Aqueous Media......................................................................... 1244 S.C. Joshi and B. Chen
A Comparative Study of Articular Chondrocytes Metabolism on a Biodegradable Polyesterurethane Scaffold and Alginate in Different Oxygen Tension and pH ......................................................................................................... 1248 S. Karbasi
Effect of Cryopreservation on the Biomechanical Properties of the Intervertebral Discs ........................................... 1252 S.K.L. Lam, S.C.W. Chan, V.Y.L. Leung, W.W. Lu, K.M.C. Cheung and K.D.K. Luk
A Serum Free Medium that Conserves The Chondrogenic Phenotype of In Vitro Expanded Chondrocytes ........... 1255 Saey Tuan Barnabas Ho, Zheng Yang, Hoi Po James Hui, Kah Weng Steve Oh, Boon Hwa Andre Choo and Eng Hin Lee
High Aspect Ratio Fatty Acid Functionalized Strontium Hydroxyapatite Nanorod and PMMA Bone Cement Filler........................................................................................................................................ 1258 W.M. Lam, C.T. Wong, T. Wang, Z.Y. Li, H.B. Pan, W.K. Chan, C. Yang, K.D.K. Luk, M.K. Fong, W.W. Lu
The HIV Dynamics is a Single Input System.................................................................................................................... 1263 M.J. Mhawej, C.H. Moog and F. Biafore
Evaluation of Collagen-hydroxyapatite Scaffold for Bone Tissue Engineering ............................................................ 1267 Sangeeta Dey, S. Pal
Effect of Sintering Temperature on Mechanical Properties and Microstructure of Sheep-bone Derived Hydroxyapatite (SHA) ....................................................................................................................................................... 1271 U. Karacayli, O. Gunduz, S. Salman, L.S. Ozyegin, S. Agathopoulos, and F.N. Oktar
Flow Induced Turbulent Stress Accumulation in Differently Designed Contemporary Bi-leaflet Mitral valves: Dynamic PIV Study ............................................................................................................................................................ 1275 T. Akutsu, and X.D. Cao
A Biofunctional Fibrous Scaffold for the Encapsulation of Human Mesenchymal Stem Cells and its Effects on Stem Cell Differentiation ..................................................................................................................... 1279 S.Z. Yow, C.H. Quek, E.K.F. Yim, K.W. Leong, C.T. Lim
Potential And Properties Of Plant Proteins For Tissue Engineering Applications ...................................................... 1282 Narendra Reddy and Yiqi Yang
Comparison the Effects of BMP-4 and BMP-7 on Articular Cartilage Repair with Bone Marrow Mesenchymal Stem Cells .................................................................................................................. 1285 Yang Zi Jiang, Yi Ying Qi, Xiao Hui Zou, Lin-Lin Wang, Hong-Wei Ouyang
Local Delivery of Autologous Platelet in Collagen Matrix Synergistically Stimulated In-situ Articular Cartilage Repair................................................................................................................................................. 1289 Yi Ying Qi, Hong Xin Cai, Xiao Chen, Lin Lin Wang, Yang Zi Jiang, Nguyen Thi Minh Hieu, Hong Wei Ouyang
Bioactive Coating on Newly Developed Composite Hip Prosthesis................................................................................ 1293 S. Bag & S. Pal
Development and Validation of a Reverse Phase Liquid Chromatographic Method for Quantitative Estimation of Telmisartan in Human Plasma...................................................................................................................................... 1297 V. Kabra, V. Agrahari, P. Trivedi
Response of Bone Marrow-derived Stem Cells (MSCs) on Gelatin/Chitosan and Gelatin/Chitooligosaccharide films............................................................................................................................ 1301 J. Ratanavaraporn, S. Kanokpanont, Y. Tabata and S. Damrongsakkul
Manufacturing Porous BCP Body by Negative Polymer Replica as a Bone Tissue Engineering Scaffold ................. 1305 R. Tolouei, A. Behnamghader, S.K. Sadrnezhaad, M. Daliri
Synthesis and Characterizations of Hydroxyapatite-Poly(ether ether ketone) Nanocomposite: Acellular Simulated Body Fluid Conditioned Study ....................................................................................................... 1309 Sumit Pramanik and Kamal K. Kar
Microspheres of Poly (lactide-co-glycolide acid) (PLGA) for Agaricus Bisporus Lectin Drug Delivery.................... 1313 Shuang Zhao, Hexiang Wang, Yen Wah Tong
Hard Tissue Formation by Bone Marrow Stem Cells in Sponge Scaffold with Dextran Coating............................... 1316 M. Yoshikawa, Y. Shimomura, N. Tsuji, H. Hayashi, H. Ohgushi
Inactivation of Problematic Micro-organisms in Collagen Based Media by Pulsed Electric Field Treatment (PEF)........................................................................................................................ 1320 S. Griffiths, S.J. MacGregor, J.G. Anderson, M. Maclean, J.D.S. Gaylor and M.H. Grant
Development, Optimization and Characterization of Nanoparticle Drug Delivery System of Cisplatin.................... 1325 V. Agrahari, V. Kabra, P. Trivedi
Physics underling Topobiology: Space-time Structure underlying the Morphogenetic Process ................................. 1329 K. Naitoh
The Properties of Hexagonal ZnO Sensing Thin Film Grown by DC Sputtering on (100) Silicon Substrate ............ 1333 Chih Chin Yang, Hung Yu Yang, Je Wei Lee and Shu Wei Chang
Multi-objective Optimization of Cancer Immuno-Chemotherapy................................................................................. 1337 K. Lakshmi Kiran, D. Jayachandran and S. Lakshminarayanan
Dip Coating Assisted Polylactic Acid Deposition on Steel Surface: Film Thickness Affected by Drag Force and Gravity...................................................................................................... 1341 P.L. Lin, T.L. Su, H.W. Fang, J.S. Chang, W.C. Chang
Some Properties of a Polymeric Surfactant Derived from Alginate .............................................................................. 1344 R. Kukhetpitakwong, C. Hahnvajanawong, D. Preechagoon and W. Khunkitti
Nano-Patterned Poly-ε-caprolactone with Controlled Release of Retinoic Acid and Nerve Growth Factor for Neuronal Regeneration ................................................................................................................................................ 1348 K.K. Teo, Evelyn K.F. Yim
Exposure of 3T3 mouse Fibroblasts and Collagen to High Intensity Blue Light .......................................................... 1352 S. Smith, M. Maclean, S.J. MacGregor, J.G. Anderson and M.H. Grant
Preparation of sericin film with different polymers ........................................................................................................ 1356 Kamol Maikrang, M. Sc., Pornanong Aramwit, Pharm.D., Ph.D.
Fabrication and Bio-active Evolution of Mesoporous SiO2-CaO-P2O5 Sol-gel Glasses................................................ 1359 L.C. Chiu, P.S. Lu, I.L. Chang, L.F. Huang, C.J. Shih
The Influences of the Heat-Treated Temperature on Mesoporous Bioactive Gel Glasses Scaffold in the CaO - SiO2 - P2O5 System ........................................................................................................................................ 1362 P.S. Lu, L.C. Chiou, I.L. Chang, C.J. Shih, L.F. Huang
Influence of Surfactant Concentration on Mesoporous Bioactive Glass Scaffolds with Superior in Vitro Bone-Forming Bioactivities ................................................................................................................................. 1366 L.F. Huang, P.S. Lu, L.C. Chiou, I.L. Chang, C.J. Shih
Human Embryonic Stem Cell-derived Mesenchymal Stem Cells and BMP7 Promote Cartilage Repair .................. 1369 Lin Lin Wang, Yi Ying Qi, Yang Zi Jiang, Xiao Chen, Xing Hui Song, Xiao Hui Zou, Hong Wei Ouyang
Novel Composite Membrane Guides Cortical Bone Regeneration ................................................................................ 1373 You Zhi Cai, Yi Ying Qi, Hong Xin Cai, Xiao Hui Zou, Lin Lin Wang, Hong Wei Ouyang
Morphology and In Vitro Biocompatibility of Hydroxyapatite-Conjugated Gelatin/Thai Silk Fibroin Scaffolds .... 1377 S. Tritanipakul, S. Kanokpanont, D.L. Kaplan and S. Damrongsakkul
Development of a Silk-Chitosan Blend Scaffold for Bone Tissue Engineering ............................................................. 1381 K.S. Ng, X.R. Wong, J.C.H. Goh and S.L. Toh
Effects Of Plasma Treatment On Wounds ....................................................................................................................... 1385 R.S. Tipa, E. Stoffels
Effects of the Electrical Field on the 3T3 Cells ................................................................................................................ 1389 E. Stoffels, R.S. Tipa, J.W. Bree
99mTc(I)-tricarbonyl Labeled Histidine-tagged Annexin V for Apoptosis Imaging ...................................................... 1393 Y.L. Chen, C.C. Wu, Y.C. Lin, Y.H. Pan, T.W. Lee and J.M. Lo
Cell Orientation Affects Human Tendon Stem Cells Differentiation............................................................................. 1397 Zi Yin, T.M. Hieu Nguyen, Xiao Chen, Hong-Wei Ouyang
Synthesis and Characterization of TiO2+HA Coatings on Ti-6Al-4V Substrates by Nd-YAG Laser Cladding ........ 1401 C.S. Chien, C.L. Chiao, T.F. Hong, T.J. Han, T.Y. Kuo
Computational Fluid Dynamics Investigation of the Effect of the Fluid-Induced Shear Stress on Hepatocyte Sandwich Perfusion Culture..................................................................................................................... 1405 H.L. Leo, L. Xia, S.S. Ng, H.J. Poh, S.F. Zhang, T.M. Cheng, G.F. Xiao, X.Y. Tuo, H. Yu
Preliminary Study on Interactive Control for the Artificial Myocardium by Shape Memory Alloy Fibre ............... 1409 R. Sakata, Y. Shiraishi, Y. Sato, Y. Saijo, T. Yambe, Y. Luo, D. Jung, A. Baba, M. Yoshizawa, A. Tanaka, T.K. Sugai, F. Sato, M. Umezu, S. Nitta, T. Fujimoto, D. Homma
Synthesis, Surface Characterization and In Vitro Blood Compatibility Studies of the Self-assembled Monolayers (SAMs) Containing Lipid-like Phosphorylethanolamine Terminal Group.............................................. 1413 Y.T. Sun, C.Y. Yu and J.C. Lin
Surface Characterization and In-vitro Blood Compatibility Study of the Mixed Self-assembled Monolayers.......... 1418 C.H. Shen and J.C. Lin
Microscale Visualization of Erythrocyte Deformation by Colliding with a Rigid Surface Using a High-Speed Impinging Jet ...................................................................................................................................................................... 1422 S. Wakasa, T. Yagi, Y. Akimoto, N. Tokunaga, K. Iwasaki and M. Umezu
Development of an Implantable Observation System for Angiogenesis ........................................................................ 1426 Y. Inoue, H. Nakagawa, I. Saito, T. Isoyama, H. Miura, A. Kouno, T. Ono, S.S. Yamaguchi, W. Shi, A. Kishi, K. Imachi and Y. Abe
New challenge for studying flow-induced blood damage: macroscale modeling and microscale verification............ 1430 T. Yagi, S. Wakasa, N. Tokunaga, Y. Akimoto, T. Akutsu, K. Iwasaki, M. Umezu
Pollen Shape Particles for Pulmonary Drug Delivery: In Vitro Study of Flow and Deposition Properties ............... 1434 Meer Saiful Hassan and Raymond Lau
Effect of Tephrosia Purpurea Pers on Gentamicin Model of Acute Renal Failure ...................................................... 1438 Avijeet Jain, A.K. Singhai
Successful Reproduction of In-Vivo Fracture of an Endovascular Stent in Superficial Femoral Artery Utilizing a Novel Multi-loading Durability Test System ................................................................................................. 1443 K. Iwasaki, S. Tsubouchi, Y. Hama, M. Umezu
Star-Shaped Porphyrin-polylactide Formed Nanoparticles for Chemo-Photodynamic Dual Therapies ................... 1447 P.S. Lai
Enhanced Cytotoxicity of Doxorubicin by Micellar Photosensitizer-mediated Photochemical Internalization in Drug-resistant MCF-7 Cells .......................................................................................................................................... 1451 C.Y. Hsu, P.S. Lai, C.L. Pai, M.J. Shieh, N. Nishiyama and K. Kataoka
Amino Acid Coupled Liposomes for the Effective Management of Parkinsonism ....................................................... 1455 P. Khare and S.K. Jain
Corrosion Resistance of Electrolytic Nano-Scale ZrO2 Film on NiTi Orthodontic Wires in Artificial Saliva ........... 1459 C.C. Chang and S.K. Yen
Stability of Polymeric Hollow Fibers Used in Hemodialysis........................................................................................... 1462 M.E. Aksoy, M. Usta and A.H. Ucisik
Estimation of Blood Glucose Level by Sixth order Polynomial...................................................................................... 1466 S. Shanthi, Dr. D. Kumar
Dirty Surface – Cleaner Cells? Some Observations with a Bio-Assembled Extracellular Matrix .............................. 1469 F.C. Loe, Y. Peng, A. Blocki, A. Thomson, R.R. Lareu, M. Raghunath
Quantitative Immunocytochemistry (QICC)-Based Approach for Antifibrotic Drug Testing in vitro...................... 1473 Wang Zhibo, Tan Khim Nyang and Raghunath Michael
Fusion Performance of a Bioresorbable Cage Used In Porcine Model of Anterior Lumbar Interbody Fusion ........ 1476 A.S. Abbah, C.X.F. Lam, K. Yang, J.C.H. Goh, D.W. Hutmacher, H.K. Wong
Composite PLDLLA/TCP Scaffolds for Bone Engineering: Mechanical and In Vitro Evaluations........................... 1480 C.X.F. Lam, R. Olkowski, W. Swieszkowski, K.C. Tan, I. Gibson, D.W. Hutmacher
Effects of Biaxial Mechanical Strain on Esophageal Smooth Muscle Cells................................................................... 1484 W.F. Ong, A.C. Ritchie and K.S. Chian
Characterization of Electrospun Substrates for Ligament Regeneration using Bone Marrow Stromal Cells........... 1488 T.K.H. Teh, J.C.H. Goh, S.L. Toh
Cytotoxicity and Cell Adhesion of PLLA/keratin Composite Fibrous Membranes ..................................................... 1492 Lin Li, Yi Li, Jiashen Li, Arthur F.T. Mak, Frank Ko and Ling Qin
Tissue Transglutaminase as a Biological Tissue Glue ..................................................................................................... 1496 P.P. Panengad, D.I. Zeugolis and M. Raghunath
The Scar-in-a-Jar: Studying Antifibrotic Lead Compounds from the Epigenetic to the Extracellular Level in One Well.......................................................................................................................................................................... 1499 Z.C.C. Chen, Y. Peng and M. Raghunath
Engineering and Optimization of Peptide-targeted Nanoparticles for DNA and RNA Delivery to Cancer Cells ..... 1503 Ming Wang, Fumitaka Takeshita, Takahiro Ochiya, Andrew D. Miller and Maya Thanou
BMSC Sheets for Ligament Tissue Engineering.............................................................................................................. 1508 E.Y.S. See, S.L. Toh and J.C.H. Goh
In Vivo Study of ACL Regeneration Using Silk Scaffolds In a Pig Model .................................................................... 1512 Haifeng Liu, Hongbin Fan, Siew Lok Toh, James C.H. Goh
Establishing a Coculture System for Ligament-Bone Interface Tissue Engineering.................................................... 1515 P.F. He, S. Sahoo, J.C. Goh, S.L. Toh
Effect of Atherosclerotic Plaque on Drug Delivery from Drug-eluting Stent................................................................ 1519 J. Ferdous and C.K. Chong
Perfusion Bioreactors Improve Oxygen Transport and Cell Distribution in Esophageal Smooth Muscle Construct................................................................................................................................................................ 1523 W.Y. Chan and C.K. Chong
Track 5: Biomechanics; Cardiovascular Bioengineering; Cellular & Molecular Engineering; Cell & Molecular Mechanics; Computational Bioengineering; Orthopaedics, Prosthetics & Orthotics; Physiological System Modeling
Determination of Material Properties of Cellular Structures Using Time Series of Microscopic Images and Numerical Model of Cell Mechanics.......................................................................................................................... 1527 E. Gladilin, M. Schulz, C. Kappel and R. Eils
Analysis of a Biological Reaction of Circulatory System during the Cold Pressure Test – Consideration Based on One-Dimensional Numerical Simulation –........................................................................... 1531 T. Kitawaki, H. Oka, S. Kusachi and R. Himeno
Identification of the Changes in Extent of Loading the TM Joint on the Other Side Owing to the Implantation of Total Joint Replacement ................................................................................................................................................ 1535 Z. Horak, T. Bouda, R. Jirman, J. Mazanek and J. Reznicek
Effects of Stent Design Parameters on the Aortic Endothelium..................................................................................... 1539 Gideon Praveen Kumar & Lazar Mathew
Multi-Physical Simulation of Left-ventricular Blood Flow Based On Patient-specific MRI Data.............................. 1542 S.B.S. Krittian, S. Höttges, T. Schenkel and H. Oertel
Spinal Fusion Cage Design................................................................................................................................................. 1546 F. Jabbary Aslani, D.W.L. Hukins and D.E.T. Shepherd
Comparison of Micron and Nano Particle Deposition Patterns in a Realistic Human Nasal Cavity.......................... 1550 K. Inthavong, S.M. Wang, J. Wen, J.Y. Tu, C.L. Xue
Comparative Study of the Effects of Acute Asthma in Relation to a Recovered Airway Tree on Airflow Patterns ............................................................................................................................................................ 1555 K. Inthavong, Y. Ye, S. Ding, J.Y. Tu
Computational Analysis of Stress Concentration and Wear for Tibial Insert of PS Type Knee Prosthesis under Deep Flexion Motion ............................................................................................................................................... 1559 M. Todo, Y. Takahashi and R. Nagamine
Push-Pull Effect Simulation by the LBNP Device............................................................................................................ 1564 J. Hanousek, P. Dosel, J. Petricek and L. Cettl
An Investigation on the Effect of Low Intensity Pulsed Ultrasound on Mechanical Properties of Rabbit Perforated Tibia Bone ....................................................................................................................................... 1569 B. Yasrebi and S. Khorramymehr
Influence of Cyclic Change of Distal Resistance on Flow and Deformation in Arterial Stenosis Model .................... 1572 J. Jie, S. Kobayashi, H. Morikawa, D. Tang, D.N. Ku
Kinematics analysis of a 3-DOF micromanipulator for Micro-Nano Surgery .............................................................. 1576 Fatemeh Mohandesi, M.H. Korayem
Stress and Reliability Analyses of the Hip Joint Endoprosthesis Ceramic Head with Macro and Micro Shape Deviations .............................................................................................................................................. 1580 V. Fuis, P. Janicek and L. Houfek
Pseudoelastic alloy devices for spastic elbow relaxation ................................................................................................. 1584 S. Viscuso, S. Pittaccio, M. Caimmi, G. Gasperini, S. Pirovano, S. Besseghini and F. Molteni
Musculoskeletal Analysis of Spine with Kyphosis Due to Compression Fracture of an Osteoporotic Vertebra ....... 1588 J. Sakamoto, Y. Nakada, H. Murakami, N. Kawahara, J. Oda, K. Tomita and H. Higaki
The Biomechanical Analysis of the Coil Stent and Mesh Stent Expansion in the Angioplasty.................................... 1592 S.I. Chen, C.H. Tsai, J.S. Liu, H.C. Kan, C.M. Yao, L.C. Lee, R.J. Shih, C.Y. Shen
Effect of Airway Opening Pressure Distribution on Gas Exchange in Inflating Lung................................................. 1595 T.K. Roy, MD PhD
Artificial High – Flexion Knee Customized for Eastern Lifestyle .................................................................................. 1597 S. Sivarasu and L. Mathew
Biomedical Engineering Analysis of the Rupture Risk of Cerebral Aneurysms: Flow Comparison of Three Small Pre-ruptured Versus Six Large Unruptured Cases............................................................................... 1600 A. Kamoda, T. Yagi, A. Sato, Y. Qian, K. Iwasaki, M. Umezu, T. Akutsu, H. Takao, Y. Murayama
Metrology Applications in Body-Segment Coordinate Systems ..................................................................................... 1604 G.A. Turley and M.A. Williams
Finite Element Modeling Of Uncemented Implants: Challenges in the Representation of the Press-fit Condition ................................................................................................................................................... 1608 S.E. Clift
Effect of Prosthesis Stiffness and Implant Length on the Stress State in Mandibular Bone with Dental Implants .......................................................................................................................................................... 1611 M. Todo, K. Irie, Y. Matsushita and K. Koyano
Rheumatoid Arthritis T-lymphocytes “Immature” Phenotype and Attempt of its Correction in Co-culture with Human Thymic Epithelial Cells ........................................................................................................................................ 1615 M.V. Goloviznin, N.S. Lakhonina, N.I. Sharova, V.T. Timofeev, R.I. Stryuk, Yu.R. Buldakova
Axial and Angled Pullout Strength of Bone Screws in Normal and Osteoporotic Bone Material............................... 1619 P.S.D. Patel, D.E.T. Shepherd and D.W.L. Hukins
Experimental Characterization of Pressure Wave Generation and Propagation in Biological Tissues ..................... 1623 M. Benoit, J.H. Giovanola, K. Agbeviade and M. Donnet
Finite Element Analysis into the Foot – Footwear Interaction Using EVA Footwear Foams...................................... 1627 Mohammad Reza Shariatmadari
Age Effects On The Tensile And Stress Relaxation Properties Of Mouse Tail Tendons ............................................. 1631 Jolene Liu, Siaw Meng Chou, Kheng Lim Goh
Do trabeculae of femoral head represent a structural optimum? .................................................................................. 1636 H.A. Kim, G.J. Howard, J.L. Cunningham
Gender and Arthroplasty Type Affect Prevalence of Moderate-Severe Pain Post Total Hip Arthroplasty............... 1640 J.A. Singh, S.E. Gabriel and D. Lewallen
Quantification of Polymer Depletion Induced Red Blood Cell Adhesion to Artificial Surfaces........................................................................................................................................................... 1644 Z.W. Zhang and B. Neu
An Investigation on the Effect of Low Intensity Pulsed Ultrasound on Mechanical Properties of Rabbit Perforated Tibia Bone ....................................................................................................................................... 1648 B. Yasrebi and S. Khorramymehr
Integrative Model of Physiological Functions and Its Application to Systems Medicine in Intensive Care Unit ...... 1651 Lu Gaohua and Hidenori Kimura
A Brain-oriented Compartmental Model of Glucose-Insulin-Glucagon Regulatory System ...................................... 1655 Lu Gaohua and Hidenori Kimura
Linkage of Diisopropylfluorophosphate Exposure and Effects in Rats Using a Physiologically Based Pharmacokinetic and Pharmacodynamic Model............................................................................................................. 1659 K.Y. Seng, S. Teo, K. Chen and K.C. Tan
Blood Flow Rate Measurement Using Intravascular Heat-Exchange Catheter............................................................ 1663 Seng Sing Tan and Chin Tiong Ng
Mechanical and Electromyographic Response to Stimulated Contractions in Paralyzed Tibialis Anterior Post Fatiguing Stimulations ....................................................................................................................................................... 1667 N.Y. Yu and S.H. Chang
Modelling The Transport Of Momentum And Oxygen In An Aerial-Disk Driven Bioreactor Used For Animal Tissue Or Cell Culture................................................................................................................................... 1672 K.Y.S. Liow, G.A. Thouas, B.T. Tan, M.C. Thompson and K. Hourigan
Investigation of Hemodynamic Changes in Abdominal Aortic Aneurysms Treated with Fenestrated Endovascular Grafts ............................................................................................................................. 1676 Zhonghua Sun, Thanapong Chaichana, Yvonne B. Allen, Manas Sangworasil, Supan Tungjitkusolmun, David E. Hartley and Michael M.D. Lawrence-Brown
Bone Morphogenetic Protein-2 and Hyaluronic Acid on Hydroxyapatite-coated Porous Titanium to Repair the Defect of Rabbit's Distal Femur .................................................................................................................. 1680 P. Lei, M. Zhao, L.F. Hui, W.M. Xi
Landing Impact Loads Predispose Osteocartilage To Degeneration ............................................................................. 1684 C.H. Yeow, S.T. Lau, Peter V.S. Lee, James C.H. Goh
Drug Addiction as a Non-monotonic Process: a Multiscale Computational Model...................................................... 1688 Y.Z. Levy, D. Levy, J.S. Meyer and H.T. Siegelmann
Adroit Limbs....................................................................................................................................................................... 1692 Pradeep Manohar and S. Keerthi Vasan
A Mathematical Model to Study the Regulation of Active Stress Production in GI Smooth Muscle.......................... 1696 Viveka Gajendiran and Martin L. Buist
Design of Customized Full Contact Shoulder Prosthesis using CT-data & FEA.......................................................... 1700 D. Sengupta, U.B. Ghosh and S. Pal
An Interface System to Aid the Design of Rapid Prototyping Prosthetic Socket Coated with a Resin Layer for Transtibial Amputee..................................................................................................................................................... 1704 C.W. Lai, L.H. Hsu, G.F. Huang and S.H. Liu
Correlation of Electrical Impedance with Mechanical Properties in Models of Tissue Mimicking Phantoms.......... 1708 Kamalanand Krishnamurthy, B.T.N. Sridhar, P.M. Rajeshwari and Ramakrishnan Swaminathan
XLIV
Content
Biomechanical Analysis of Influence of Spinal Fixation on Intervertebral Joint Force by Using Musculoskeletal Model ....................................................................................................................................... 1712 H. Fukui, J. Sakamoto, H. Murakami, N. Kawahara, J. Oda, K. Tomita and H. Higaki
Preventing Anterior Cruciate Ligament Failure During Impact Compression by Restraining Anterior Tibial Translation or Axial Tibial Rotation ................................................................................................................................ 1716 C.H. Yeow, R.S. Khan, Peter V.S. Lee, James C.H. Goh
The Analysis and Measurement of Interface Pressures between Stump and Rapid Prototyping Prosthetic Socket Coated With a Resin Layer for Transtibial Amputee ......................................................................................... 1720 H.K. Peng, L.H. Hsu, G.F. Huang and D.Y. Hong
Analysis of Influence Location of Intervertebral Implant on the Lower Cervical Spine Loading and Stability ....... 1724 L. Jirkova, Z. Horak
Computational Fluid Analysis of Blood Flow Characteristics in Abdominal Aortic Aneurysms Treated with Suprarenal Endovascular Grafts.............................................................................................................................. 1728 Zhonghua Sun, Thanapong Chaichana, Manas Sangworasil and Supan Tungjitkusolmun
Measuring the 3D-Position of Cementless Hip Implants using Pre- and Postoperative CT Images ........................... 1733 G. Yamako, T. Hiura, K. Nakata, G. Omori, Y. Dohmae, M. Oda, T. Hara
Simulation of Tissue-Engineering Cell Cultures Using a Hybrid Model Combining a Differential Nutrient Equation and Cellular Automata....................................................................................................................... 1737 Tze-Hung Lin and C.A. Chung
Upconversion Nanoparticles for Imaging Cells ............................................................................................................... 1741 N. Sounderya, Y. Zhang
Simulation of Cell Growth and Diffusion in Tissue Engineering Scaffolds................................................................... 1745 Szu-Ying Ho, Ming-Han Yu and C.A. Chung
Simulation of the Haptotactic Effect on Chondrocytes in the Boyden Chamber Assay............................................... 1749 Chih-Yuan Chen and C.A. Chung
Analyzing the Sub-indices of Hysteresis Loops of Torque-Displacement in PD’s ........................................................ 1753 B. Sepehri, A. Esteki and M. Moinodin
Relative Roles of Cortical and Trabecular Thinning in Reducing Osteoporotic Vertebral Body Stiffness: A Modeling Study ............................................................................................................................................................... 1757 K. McDonald, P. Little, M. Pearcy, C. Adam
Musculo-tendon Parameters Estimation by Ultrasonography for Modeling of Human Motor System ..................... 1761 L. Lan, L.H. Jin, K.Y. Zhu and C.Y. Wen
Mechanical Vibration Applied in the Absence of Weight Bearing Suggests Improved Fragile Bone .......................... 1766 J. Matsuda, K. Kurata, T. Hara, H. Higaki
A Biomechanical Investigation of Anterior Vertebral Stapling ..................................................................................... 1769 M.P. Shillington, C.J. Adam, R.D. Labrom and G.N. Askin
Measurement of Cell Detaching force on Substrates with Different Rigidity by Atomic Force Microscopy ............. 1773 D.K. Chang, Y.W. Chiou, M.J. Tang and M.L. Yeh
Estimation of Body Segment Parameters Using Dual Energy Absorptiometry and 3-D Exterior Geometry ............ 1777 M.K. Lee, M. Koh, A.C. Fang, S.N. Le and G. Balasekaran
A New Intraoperative Measurement System for Rotational Motion Properties of the Spine...................................... 1781 K. Kitahara, K. Oribe, K. Hasegawa, T. Hara
Binding of Atherosclerotic Plaque Targeting Nanoparticles to the Activated Endothelial Cells under Static and Flow Condition ............................................................................................................................................................ 1785 K. Rhee, K.S. Park and G. Khang
Computational Modeling of the Micropipette Aspiration of Malaria Infected Erythrocytes...................................... 1788 G.Y. Jiao, K.S.W. Tan, C.H. Sow, Ming Dao, Subra Suresh, C.T. Lim
Examination of the Microrheology of Intervertebral Disc by Nanoindentation ........................................................... 1792 J. Lukes, T. Mares, J. Nemecek and S. Otahal
Effects of Floor Material Change on Gait Stability ......................................................................................................... 1797 B.-S. Yang and H.-Y. Hu
Onto-biology: Inevitability of Five Bases and Twenty Amino-acids .............................................................................. 1801 K. Naitoh
An Improved Methodology for Measuring Facet Contact Forces in the Lumbar Spine ............................................. 1805 A.K. Ramruttun, H.K. Wong, J.C.H. Goh, J.N. Ruiz
Multi-scale Models of Gastrointestinal Electrophysiology.............................................................................................. 1809 M.L. Buist, A. Corrias and Y.C. Poh
Postural Sway of the Elderly Males and Females during Quiet Standing and Squat-and-Stand Movement............. 1814 Gwangmoon Eom, Jiwon Kim, Byungkyu Park, Jeonghwa Hong, Soonchul Chung, Bongsoo Lee, Gyerae Tack, Yohan Kim
Investigation of Plantar Barefoot Pressure and Soft-tissue Internal Stress: A Three-Dimensional Finite Element Analysis ................................................................................................................ 1817 Wen-Ming Chen, Peter Vee-Sin Lee, Sung-Jae Lee and Taeyong Lee
The Influence of Load Placement on Postural Sway Parameters................................................................................... 1821 D. Rugelj and F. Sevšek
Shape Analysis of Postural Sway Area ............................................................................................................................. 1825 F. Sevšek
Concurrent Simulation of Morphogenetic Movements in Drosophila Embryo ............................................................ 1829 R. Allena, A.-S. Mouronval, E. Farge and D. Aubry
Application of Atomic Force Microscopy to Investigate Axonal Growth of PC-12 Neuron-like Cells ....................... 1833 M.-S. Ju, H.-M. Lan, C.-C.K. Lin
Effect of Irregularities of Graft Inner Wall at the Anastomosis of a Coronary Artery Bypass Graft........................ 1838 F. Kabinejadian, L.P. Chua, D.N. Ghista and Y.S. Tan
Mechanical Aspects in the Cells Detachment................................................................................................................... 1842 M. Buonsanti, M. Cacciola, G. Megali, F.C. Morabito, D. Pellicanò, A. Pontari and M. Versaci
Time Series Prediction of Gene Expression in the SOS DNA Repair Network of Escherichia coli Bacterium Using Neuro-Fuzzy Networks ............................................................................................................................................ 1846 R. Manshaei, P. Sobhe Bidari, J. Alirezaie, M.A. Malboobi
Predictability of Blood Glucose in Surgical ICU Patients in Singapore ........................................................................ 1850 V. Lakshmi, P. Loganathan, G.P. Rangaiah, F.G. Chen and S. Lakshminarayanan
Method of Numerical Analysis of Similarity and Differences of Face Shape of Twins ................................................ 1854 M. Rychlik, W. Stankiewicz and M. Morzynski
Birds’ Flap Frequency Measure Based on Automatic Detection and Tracking in Captured Videos.......................... 1858 Xiao-yan Zhang, Xiao-juan Wu, Xin Zhou, Xiao-gang Wang, Yuan-yuan Zhang
Effects of Upper-Limb Posture on Endpoint Stiffness during Force Targeting Tasks ................................................ 1862 Pei-Rong Wang, Ju-Ying Chang and Kao-Chi Chung
Complex Anatomies in Medical Rapid Prototyping ........................................................................................................ 1866 T. Mallepree, D. Bergers
Early Changes Induced by Low Intensity Ultrasound in Human Hepatocarcinoma Cells.......................................... 1870 Y. Feng, M.X. Wan
Visual and Force Feedback-enabled Docking for Rational Drug Design ...................................................................... 1874 O. Sourina, J. Torres and J. Wang
A Coupled Soft Tissue Continuum-Transient Blood flow Model to Investigate the Circulation in Deep Veins of the Calf under Compression.......................................................................................................................................... 1878 K. Mithraratne, T. Lavrijsen and P.J. Hunter
Finite Element Analysis of Articular Cartilage Model Considering the Configuration and Biphasic Property of the Tissue......................................................................................................................................................................... 1883 N. Hosoda, N. Sakai, Y. Sawae and T. Murakami
Principal Component Analysis of Lifting Kinematics and Kinetics in Pregnant Subjects .......................................... 1888 T.C. Nguyen, K.J. Reynolds
Evaluation of Anterior Tibial Translation and Muscle Activity during “Front Bridge” Quadriceps Muscle Exercise................................................................................................................................................................... 1892 M. Sato, S. Inoue, M. Koyanagi, M. Yoshida, N. Nakae, T. Sakai, K. Hidaka and K. Nakata
Coupled Autoregulation Models ....................................................................................................................................... 1896 T. David, S. Alzaidi, R. Chatelin and H. Farr
Measurement of Changes in Mechanical and Viscoelastic Properties of Cancer-induced Rat Tibia by using Nanoindentation .................................................................................................................................................. 1900 K.P. Wong, Y.J. Kim and T. Lee
Surface Conduction Analysis of EMG Signal from Forearm Muscles .......................................................................... 1904 Y. Nakajima, S. Yoshinari and S. Tadano
A Distributed Revision Control System for Collaborative Development of Quantitative Biological Models............. 1908 T. Yu, J.R. Lawson and R.D. Britten
Symmetrical Leg Behavior during Stair Descent in Able-bodied Subjects ................................................................... 1912 H. Hobara, Y. Kobayashi, K. Naito and K. Nakazawa
Variable Interaction Structure Based Machine Learning Technique for Cancer Tumor Classification ................... 1915 Melissa A. Setiawan, Rao Raghuraj and S. Lakshminarayanan
Assessing the Susceptibility to Local Buckling at the Femoral Neck Cortex to Age-Related Bone Loss .................... 1918 He Xi, B.W. Schafer, W.P. Segars, F. Eckstein, V. Kuhn, T.J. Beck, T. Lee
Revealing Spleen Ad4BP/SF1 Knockout Mouse by BAC-Ad4BP-tTAZ Transgene..................................................... 1920 Fatchiyah, M. Zubair, K.I. Morohashi
The Impact of Enzymatic Treatments on Red Blood Cell Adhesion to the Endothelium in Plasma-like Suspensions...... 1924 Y. Yang, L.T. Heng and B. Neu
Comparison of Motion Analysis and Energy Expenditures between Treadmill and Overground Walking .............. 1928 R.H. Sohn, S.H. Hwang, Y.H. Kim
Simultaneous Strain Measurements of Rotator Cuff Tendons at Varying Arm Positions and The Effect of Supraspinatus Tear: A Cadaveric Study ........................................................................................... 1931 J.M. Sheng, S.M. Chou, S.H. Tan, D.T.T. Lie, K.S.A. Yew
Tensile Stress Regulation of NGF and NT3 in Human Dermal Fibroblast ................................................................... 1935 Mina Kim, J.W. Hong, Minsoo Nho, Yong Joo Na and J.H. Shin
Influence of Component Injury on Dynamic Characteristics on the Spine Using Finite Element Method................ 1938 J.Z. Li, Serena H.N. Tan, C.H. Cheong, E.C. Teo, L.X. Guo, K.Y. Seng
Local Dynamic Recruitment of Endothelial PECAM-1 to Transmigrating Monocytes............................................... 1941 N. Kataoka, K. Hashimoto, E. Nakamura, K. Hagihara, K. Tsujioka, F. Kajiya
A Theoretical Model to Mechanochemical Damage in the Endothelial Cells................................................................ 1945 M. Buonsanti, M. Cuzzola, A. Pontari, G. Irrera, M.C. Cannatà, R. Piro, P. Iacopino
Effects of Mechanical Stimulus on Cells via Multi-Cellular Indentation Device ....................................................... 1949 Sunhee Kim, Jaeyoung Yun and Jennifer H. Shin
The Effect of Tumor-Induced Bone Remodeling and Efficacy of Anti-Resorptive and Chemotherapeutic Treatments in Metastatic Bone Loss................................................................................................................................. 1952 X. Wang, L.S. Fong, X. Chen, X. Yang, P. Maruthappan, Y.J. Kim, T. Lee
Mathematical Modeling of Temperature Distribution on Skin Surface and Inside Biological Tissue with Different Heating........................................................................................................................................................ 1957 P.R. Sharma, Sazid Ali and V.K. Katiyar
Net Center of Pressure Analysis during Gait Initiation in Patient with Hemiplegia.................................................... 1962 S.H. Hwang, S.W. Park, H.S. Choi and Y.H. Kim
AFM Study of the Cytoskeletal Structures of Malaria Infected Erythrocytes.............................................................. 1965 H. Shi, A. Li, J. Yin, K.S.W. Tan and C.T. Lim
Adaptive System Identification and Modeling of Respiratory Acoustics ...................................................................... 1969 Abbas K. Abbas, Rasha Bassam
Correlation between Lyapunov Exponent and the Movement of Center of Mass during Treadmill Walking .......... 1974 J.H. Park and K. Son
The Development of an EMG-based Upper Extremity Rehabilitation Training System for Hemiplegic Patients .... 1977 J.S. Son, J.Y. Kim, S.J. Hwang and Youngho Kim
Fabrication of Adhesive Protein Micropatterns In Application of Studying Cell Surface Interactions ..................... 1980 Ji Sheng Kiew, Xiaodi Sui, Yeh-Shiu Chu, Jean Paul Thiery and Isabel Rodriguez
Modeling of the Human Cardiovascular System with Its Application to the Study of the Effects of Variations in the Circle of Willis on Cerebral Hemodynamics ............................................................................................................ 1984 Fuyou Liang, Shu Takagi and Hao Liu
Low-intensity Ultrasound Induces a Transient Increase in Intracellular Calcium and Enhancement of Nitric Oxide Production in Bovine Aortic Endothelial Cells...................................................................................... 1989 S. Konno, N. Sakamoto, Y. Saijo, T. Yambe, M. Sato and S. Nitta
Evaluation of Compliance of Poly (vinyl alcohol) Hydrogel for Development of Arterial Biomodeling .................... 1993 H. Kosukegawa, K. Mamada, K. Kuroki, L. Liu, K. Inoue, T. Hayase and M. Ohta
People Recognition by Kinematics and Kinetics of Gait ................................................................................................. 1996 Yu-Chih Lin, Bing-Shiang Yang and Yi-Ting Yang
Site-Dependence of Mechanosensitivity in Isolated Osteocytes ...................................................................................... 2000 Y. Aonuma, T. Adachi, M. Tanaka, M. Hojo, T. Takano-Yamamoto and H. Kamioka
Development of Experimental Devices for Testing of the Biomechanical Systems....................................................... 2005 L. Houfek, Z. Florian, T. Bezina, M. Houfek, T. Návrat, V. Fuis, P. Houška
Stability of Treadmill Walking Related with the Movement of Center of Mass ........................................................... 2009 S.H. Kim, J.H. Park, K. Son
Stress Analyses of the Hip Joint Endoprosthesis Ceramic Head with Different Shapes of the Cone Opening .......... 2012 V. Fuis and J. Varga
Measurement of Lumbar Lordosis using Fluoroscopic Images and Reflective Markers............................................. 2016 S.H. Hwang, Y.E. Kim and Y.H. Kim
Patient-Specific Simulation of the Proximal Femur’s Mechanical Response Validated by Experimental Observations .......................................................................................................................................... 2019 Zohar Yosibash and Nir Trabelsi
Design of Prosthetic Skins with Humanlike Softness ...................................................................................................... 2023 J.J. Cabibihan
Effect of Data Selection on the Loss of Balance in the Seated Position.......................................................................... 2027 K.H. Kim, K. Son, J.H. Park
The Effect of Elastic Moduli of Restorative Materials on the Stress of Non-Carious Cervical Lesion ....................... 2030 W. Kwon, K.H. Kim, K. Son and J.K. Park
Effect of Heat Denaturation of Collagen Matrix on Bone Strength ............................................................................... 2034 M. Todoh, S. Tadano and Y. Imari
Non-linear Image-Based Regression of Body Segment Parameters............................................................................... 2038 S.N. Le, M.K. Lee and A.C. Fang
A Motion-based System to Evaluate Infant Movements Using Real-time Video Analysis........................................... 2043 Yuko Osawa, Keisuke Shima, Nan Bu, Tokuo Tsuji, Toshio Tsuji, Idaku Ishii, Hiroshi Matsuda, Kensuke Orito, Tomoaki Ikeda and Shunichi Noda
Cardiorespiratory Response Model for Pain and Stress Detection during Endoscopic Sinus Surgery under Local Anesthesia ...................................................................................................................................................... 2048 K. Sakai and T. Matsui
A Study on Correlation between BMI and Oriental Medical Pulse Diagnosis Using Ultrasonic Wave...................... 2052 Y.J. Lee, J. Lee, H.J. Lee, J.Y. Kim
Investigating the Biomechanical Characteristics of Transtibial Stumps with Diabetes Mellitus ................................. 2056 C.L. Wu, C.C. Lin, K.J. Wang and C.H. Chang
A New Approach to Evaluation of Reactive Hyperemia Based on Strain-gauge Plethysmography Measurements and Viscoelastic Indices............................................................................................................................ 2059 Abdugheni Kutluk, Takahiro Minari, Kenji Shiba, Toshio Tsuji, Ryuji Nakamura, Noboru Saeki, Masashi Kawamoto, Hidemitsu Miyahara, Yukihito Higashi, Masao Yoshizumi
Electromyography Analysis of Grand Battement in Chinese Dance ............................................................................. 2064 Ai-Ting Wang, Yi-Pin Wang, T.-W. Lu, Chien-Che Huang, Cheng-Che Hsieh, Kuo-Wei Tseng, Chih-Chung Hu
Landing Patterns in Subjects with Recurrent Lateral Ankle Sprains ........................................................................... 2068 Kuo-Wei Tseng, Yi-Pin Wang, T.-W. Lu, Ai-Ting Wang, Chih-Chung Hu
The Influence of Low Level Near-infrared Irradiation on Rat Bone Marrow Mesenchymal Stem Cells .................. 2072 T.-Y. Hsu, W.-T. Li
Implementation of Fibronectin Patterning with a Raman Spectroscopy Microprobe for Focal Adhesions Studies in Cells .................................................................................................................................................................... 2076 B. Codan, T. Gaiotto, R. Di Niro, R. Marzari and V. Sergo
Parametric Model of Human Cerebral Aneurysms......................................................................................................... 2079 Hasballah Zakaria and Tati L.R. Mengko
Computational Simulation of Three-dimensional Tumor Geometry during Radiotherapy........................................ 2083 S. Takao, S. Tadano, H. Taguchi and H. Shirato
Finite Element Modeling of Thoracolumbar Spine for Investigation of TB in Spine................................................... 2088 D. Davidson Jebaseelan, C. Jebaraj, S. Rajasekaran
Contact Characteristics during Different High Flexion Activities of the Knee............................................................. 2092 Jing-Sheng Li, Kun-Jhih Lin, Wen-Chuan Chen, Hung-Wen Wei, Cheng-Kung Cheng
Thumb Motion and Typing Forces during Text Messaging on a Mobile Phone........................................................... 2095 F.R. Ong
Oxygen Transport Analysis in Cortical Bone through Microstructural Porous Canal Network................................. 2099 T. Komeda, T. Matsumoto, H. Naito and M. Tanaka
Identification of Microstructural Mechanical Parameters of Articular Cartilage ....................................................... 2102 T. Osawa, T. Matsumoto, H. Naito and M. Tanaka
Computer Simulation of Trabecular Remodeling Considering Strain-Dependent Osteocyte Apoptosis and Targeted Remodeling.................................................................................................................................................. 2104 J.Y. Kwon, K. Otani, H. Naito, T. Matsumoto, M. Tanaka
Fibroblasts Proliferation Dependence on the Insonation of Pulsed Ultrasounds of Various Frequencies ................. 2106 C.Y. Chiu, S.H. Chen, C.C. Huang, S.H. Wang
Acromio-humeral Interval during Elevation when Supraspinatus is Deficient ............................................................ 2110 Dr. B.P. Pereira, Dr. B.S. Rajaratnam, M.G. Cheok, H.J.A. Kua, Md. D. Nur Amalina, H.X.S. Liew, S.W. Goh
Can Stretching Exercises Reduce Your Risks of Experiencing Low Back Pain? ......................................................... 2114 Dr. B.S. Rajaratnam, C.M. Lam, H.H.S. Seah, W.S. Chee, Y.S.E. Leung, Y.J.L. Ong, Y.Y. Kok
Streaming Potential of Bovine Spinal Cord under Visco-elastic Deformation.............................................................. 2118 K. Fujisaki, S. Tadano, M. Todoh, M. Katoh, R. Satoh
Probing the Elasticity of Breast Cancer Cells Using AFM ............................................................................................. 2122 Q.S. Li, G.Y.H. Lee, C.N. Ong and C.T. Lim
Correlation between Balance Ability and Linear Motion Perception............................................................................ 2126 Y. Yi and S. Park
Heart Rate Variability in Intrauterine Growth Retarded Infants and Normal Infants with Smoking and Non-smoking Parents, Using Time and Frequency Domain Methods.................................................................... 2130 V.A. Cripps, T. Biala, F.S. Schlindwein and M. Wailoo
Biomechanics of a Suspension of Micro-Organisms........................................................................................................ 2134 Takuji Ishikawa
Development of a Navigation System Included Correction Method of Anatomical Deformation for Aortic Surgery .............................................................................................................................................................. 2139 Kodai Matsukawa, Miyuki Uematsu, Yoshitaka Nakano, Ryuhei Utsunomiya, Shigeyuki Aomi, Hiroshi Iimura, Ryoichi Nakamura, Yoshihiro Muragaki, Hiroshi Iseki, Mitsuo Umezu
Bioengineering Advances and Cutting-edge Technology ................................................................................................ 2143 M. Umezu
Effect of PLGA Nano-Fiber/Film Composite on HUVECs for Vascular Graft Scaffold ............................................. 2147 H.J. Seo, S.M. Yu, S.H. Lee, J.B. Choi, J.-C. Park and J.K. Kim
Muscle and Joint Biomechanics in the Osteoarthritic Knee ........................................................................................... 2151 W. Herzog
Bioengineering Education Multidisciplinary Education of Biomedical Engineers.................................................................................................... 2155 M. Penhaker, R. Bridzik, V. Novak, M. Cerny and J. Cernohorsky
Development and Measurement of High-precision Surface Body Electrocardiograph ............................................... 2159 S. Inui, Y. Toyosu, M. Akutagawa, H. Toyosu, M. Nomura, H. Satake, T. Kawabe, J. Kawabe, Y. Toyosu, Y. Kinouchi
Biomedical Engineering Education Prospects in India ................................................................................................... 2164 Kanika Singh
Measurement of Heart Functionality and Aging with Body Surface Electrocardiograph........................................... 2167 Y. Toyosu, S. Inui, M. Akutagawa, H. Toyosu, M. Nomura, H. Satake, T. Kawabe, J. Kawabe, Y. Toyosu, Y. Kinouchi
Harnessing Web 2.0 for Collaborative Learning ............................................................................................................. 2171 Casey K. Chan, Yean C. Lee and Victor Lin
Special Symposium – Tohoku University Electrochemical In-Situ Micropatterning of Cells and Polymers................................................................................... 2173 M. Nishizawa, H. Kaji, S. Sekine
Estimation of Emax of Assisted Hearts using Single Beat Estimation Method ............................................................... 2177 T.K. Sugai, A. Tanaka, M. Yoshizawa, Y. Shiraishi, S. Nitta, T. Yambe and A. Baba
Molecular PET Imaging of Acetylcholine Esterase, Histamine H1 Receptor and Amyloid Deposits in Alzheimer Disease .......................................................................................................................................................... 2181 N. Okamura, K. Yanai
Shear-Stress-Mediated Endothelial Signaling and Vascular Homeostasis .................................................................... 2184 Joji Ando and Kimiko Yamamoto
Numerical Evaluation of MR-Measurement-Integrated Simulation of Unsteady Hemodynamics in a Cerebral Aneurysm .................................................................................................... 2188 K. Funamoto, Y. Suzuki, T. Hayase, T. Kosugi and H. Isoda
Specificity of Traction Forces to Extracellular Matrix in Smooth Muscle Cells........................................................... 2192 T. Ohashi, H. Ichihara, N. Sakamoto and M. Sato
Cochlear Nucleus Stimulation by Means of the Multi-channel Surface Microelectrodes ............................................ 2194 Kiyoshi Oda, Tetsuaki Kawase, Daisuke Yamauchi, Hiroshi Hidaka and Toshimitsu Kobayashi
Effects of Mechanical Stimulation on the Mechanical Properties and Calcification Process of Immature Chick Bone Tissue in Culture ..................................................................................................................... 2197 T. Matsumoto, K. Ichikawa, M. Nakagaki and K. Nagayama
Regional Brain Activity and Performance During Car-Driving Under Side Effects of Psychoactive Drugs ............. 2201 Manabu Tashiro, MD. Mehedi Masud, Myeonggi Jeong, Yumiko Sakurada, Hideki Mochizuki, Etsuo Horikawa, Motohisa Kato, Masahiro Maruyama, Nobuyuki Okamura, Shoichi Watanuki, Hiroyuki Arai, Masatoshi Itoh, and Kazuhiko Yanai
Evaluation of Exercise-Induced Organ Energy Metabolism Using Two Analytical Approaches: A PET Study....... 2204 Mehedi Masud, Toshihiko Fujimoto, Masayasu Miyake, Shoichi Watanuki, Masatoshi Itoh, Manabu Tashiro
Strain Imaging of Arterial Wall with Reduction of Effects of Variation in Center Frequency of Ultrasonic RF Echo ........................................................................................................................................................ 2207 Hideyuki Hasegawa and Hiroshi Kanai
In Situ Analysis of DNA Repair Processes of Tumor Suppressor BRCA1.................................................................... 2211 Leizhen Wei and Natsuko Chiba
Evaluating Spinal Vessels and the Artery of Adamkiewicz Using 3-Dimensional Imaging ......................................... 2215 Kei Takase, Sayaka Yoshida and Shoki Takahashi
Development of a Haptic Sensor System for Monitoring Human Skin Conditions...................................................... 2219 D. Tsuchimi, T. Okuyama and M. Tanaka
Fabrication of Transparent Arteriole Membrane Models .............................................................................................. 2223 Takuma Nakano, Keisuke Yoshida, Seiichi Ikeda, Hiroyuki Oura, Toshio Fukuda, Takehisa Matsuda, Makoto Negoro and Fumihito Arai
Normal Brain Aging and its Risk Factors – Analysis of Brain Magnetic Resonance Image (MRI) Database of Healthy Japanese Subjects ............................................................................................................................................ 2228 H. Fukuda, Y. Taki, K. Sato, S. Kinomura, R. Goteau, R. Kawashima
Motion Control of Walking Assist Robot System Based on Human Model .................................................................. 2232 Yasuhisa Hirata, Shinji Komatsuda, Takuya Iwano and Kazuhiro Kosuge
Effects of Mutations in Unique Amino Acids of Prestin on Its Characteristics ............................................................ 2237 S. Kumano, K. Iida, M. Murakoshi, K. Tsumoto, K. Ikeda, I. Kumagai, T. Kobayashi, H. Wada
The Feature of the Interstitial Nano Drug Delivery System with Fluorescent Nanocrystals of Different Sizes in the Human Tumor Xenograft in Mice.......................................................................................................................... 2241 M. Kawai, M. Takeda and N. Ohuchi
Three-dimensional Simulation of Blood Flow in Malaria Infection............................................................................... 2244 Y. Imai, H. Kondo, T. Ishikawa, C.T. Lim, K. Tsubota and T. Yamaguchi
Development of a Commercial Positron Emission Mammography (PEM)................................................................... 2248 Masayasu Miyake, Seiichi Yamamoto, Masatoshi Itoh, Kazuaki Kumagai, Takehisa Sasaki, Targino Rodrigues dos Santos, Manabu Tashiro and Mamoru Baba
Radiological Anatomy of the Right Adrenal Vein: Preliminary Experience with Multi-detector Row Computed Tomography .......................................................................................................... 2250 T. Matsuura, K. Takase and S. Takahashi
Atrial Vortex Measurement by Magnetic Resonance Imaging....................................................................................... 2254 M. Shibata, T. Yambe, Y. Kanke and T. Hayase
Fabrication of Multichannel Neural Microelectrodes with Microfluidic Channels Based on Wafer Bonding Technology............................................................................................................................... 2258 R. Kobayashi, S. Kanno, T. Fukushima, T. Tanaka and M. Koyanagi
Influence of Fluid Shear Stress on Matrix Metalloproteinase Production in Endothelial Cells.................................. 2262 N. Sakamoto, T. Ohashi and M. Sato
Development of Brain-Computer Interface (BCI) System for Bridging Brain and Computer ................................... 2264 S. Kanoh, K. Miyamoto and T. Yoshinobu
First Trial of the Chronic Animal Examination of the Artificial Myocardial Function............................................... 2268 Y. Shiraishi, T. Yambe, Y. Saijo, K. Matsue, M. Shibata, H. Liu, T. Sugai, A. Tanaka, S. Konno, H. Song, A. Baba, K. Imachi, M. Yoshizawa, S. Nitta, H. Sasada, K. Tabayashi, R. Sakata, Y. Sato, M. Umezu, D. Homma
Bio-imaging by functional nano-particles of nano to macro scale.................................................................................. 2272 M. Takeda, H. Tada, M. Kawai, Y. Sakurai, H. Higuchi, K. Gonda, T. Ishida and N. Ohuchi
Author Index....................................................................................................................................... 2275 Subject Index ...................................................................................................................................... 2289
Electroencephalograph Signal Analysis During Ujjayi Pranayama
S.T. Patil1 and D.S. Bormane2
1 Computer Department, B.V.U. College of Engineering, Pune, India ([email protected])
2 Principal, Rajarshi Shahu College of Engineering, Pune, India ([email protected])
Abstract — Ujjayi pranayama is one part of pranayama, which, as traditionally conceived, involves much more than merely breathing for relaxation. Ujjayi pranayama is a term with a wide range of meanings: "the regulation of the incoming and outgoing flow of breath with retention." Ujjayi pranayama also denotes cosmic power. Because of this connection between breath and consciousness, pranayama practice uses ujjayi pranayama to stabilize energy and consciousness. A wavelet transformation is applied to electroencephalograph (EEG) records from persons performing ujjayi pranayama, and the correlation dimension, largest Lyapunov exponent, approximate entropy and coherence values are analyzed. This model and software are used to keep track of the improvement of a person's mind, aging, balance, flexibility, personal values, mental values, social values, love, sex, knowledge, weight reduction and body fitness. Keywords — Ujjayi pranayama, approximate entropy, EEG, coherence, largest Lyapunov exponent, correlation dimension, wavelets.
I. INTRODUCTION A. Ujjayi pranayama The word ujjayi pranayama denotes stretch, extension, expansion, length, breadth, regulation, prolongation, restraint and control to create energy: when the self-energizing force embraces the body, fast inhalation and fast exhalation are followed by inhaling through the right nostril, performing kumbhaka with bandhas, and exhaling through the left nostril. Patanjali has said that one develops concentration and clarity of thought by practicing ujjayi pranayama. It helps in increasing the mental and physical powers of endurance. It is the path to deeper relaxation and meditation and is a scientific method of controlling the breath. It provides complete relaxation to the nervous system and relief from pain caused by the compression of nerve endings. It helps in increasing the oxygen supply to the brain, which in turn helps in controlling the mind. B. Electroencephalography The brain generates rhythmical potentials, which originate in the individual neurons of the brain.
The electroencephalograph (EEG) is a representation of the electrical activity of the brain. Numerous attempts have been made to define a reliable spike detection mechanism. However, all of them have faced the lack of a specific characterization of the events to detect. One of the best known descriptions of an interictal "spike" is offered by Chatrian et al. [1]: "a transient, clearly distinguished from background activity, with pointed peak at conventional paper speeds and a duration from 20 to 70 msec". This description, however, is not specific enough to be implemented in a detection algorithm that will isolate spikes from all the other normal or artifactual components of an EEG record. Some approaches have concentrated on measuring the "sharpness" of the EEG signal, which can be expected to soar at the "pointed peak" of a spike. Walter [2] attempted the detection of spikes through analog computation of the second time derivative (sharpness) of the EEG signals. Smith [3] attempted a similar form of detection on the digitized EEG signal. His method, however, required a minimum duration of the sharp transient to qualify it as a spike. Although these methods involve the duration of the transient in a secondary way, they fundamentally consider "sharpness" a point property, dependent only on the very immediate context of the time of analysis. More recently, an approach has been proposed in which the temporal sharpness is measured over different "spans of observation", involving different amounts of temporal context. True spikes will have significant sharpness at all of these different spans. The promise shown by that approach has encouraged us to use a wavelet transformation to evaluate the sharpness of EEG signals at different levels of temporal resolution. C. Data Collection A Medic-aid Systems (Chandigarh, India) machine was used to acquire the 32-channel EEG signal with the international 10-20 electrode placement.
The sampling frequency of the device is 256 Hz with 12-bit resolution, and the data are stored on hard disc. The 32-channel EEG data were recorded simultaneously for both referential and bipolar montages. Recordings were made before, during and after each person performed ujjayi pranayama, and EEG data were also recorded after one, two and three months of the same
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1–4, 2009 www.springerlink.com
persons practicing ujjayi pranayama. Data from 10 such persons were collected for analysis.

II. PARAMETERS

The present work pertains to the analysis of the EEG signal using characteristic measures such as the correlation dimension (CD), largest Lyapunov exponent (LLE), Hurst exponent (HE) and approximate entropy (AE).

A. Correlation Dimension

The dimension of a graph can give much information about the nature of the signal. The Grassberger–Procaccia algorithm is used:

C(r) = (2 / (N(N − 1))) Σ_{i<j} Θ(r − |x_i − x_j|)

where N is the number of data points, Θ is the Heaviside function, r is the radial distance, and points x_j fewer than w time steps away from x_i are excluded.

B. Approximate Entropy

Approximate entropy measures the amount of disorder in the system — the amount of information stored in a more general probability distribution. The Steyn-Ross algorithm is used:

AE(m, r, L) = (1 / (L − m)) Σ_{i=1}^{L−m} log [C_i^m(r) / C_i^{m+1}(r)]

where m is the pattern length (= 2), r is the noise threshold (= 15 %), L is the time interval between two data sets, and C_i(r) is the correlation integral.

C. Largest Lyapunov Exponent

The largest Lyapunov exponent is the rate at which the trajectories of a signal separate from one another. Wolf's algorithm is used:

|δZ(t)| = e^{λt} |δZ_0|

where δZ_0 is the initial separation and n = 1, 2, 3, … phase spaces are considered.

D. Hurst Exponent

The Hurst exponent evaluates the presence or absence of long-range dependence and its degree. Hurst's algorithm is used:

H = log(R/S) / log(T)

where R/S is the rescaled range and T is the duration of the sample of data.

III. RESULTS

Artifactual currents may cause linear drift to occur at some electrodes. To detect such drifts, we designed a function that fits the data to a straight line and marks the trial for rejection if the slope exceeds a given threshold. The slope is expressed in microvolts over the whole epoch (50, for instance, would correspond to an epoch in which the straight-line fit might be 0 μV at the beginning of the trial and 50 μV at the end). The minimal fit between the EEG data and a line of minimal slope is determined using a standard R-squared measure. We usually apply the measures described above to the activations of the independent components of the data. As independent components tend to concentrate artifacts, we have found that bad epochs can be more easily detected using independent component activities. The functions described above work exactly the same when applied to data components as when they are applied to the raw channel data.
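To make one of the measures above concrete, here is a minimal numpy sketch of approximate entropy. The function name and the expression of r as a fraction of the signal's standard deviation are our own choices (the text fixes m = 2 and a 15 % noise threshold), so treat it as an illustration rather than the authors' implementation:

```python
import numpy as np

def approximate_entropy(x, m=2, r_frac=0.15):
    """Approximate entropy AE(m, r) of a 1-D signal.

    m      -- pattern length (the paper uses m = 2)
    r_frac -- noise threshold as a fraction of the signal's
              standard deviation (the paper uses 15 %)
    """
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def phi(m):
        n = len(x) - m + 1
        # All overlapping length-m patterns of the signal.
        patterns = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of patterns.
        dist = np.max(np.abs(patterns[:, None, :] - patterns[None, :, :]),
                      axis=2)
        # C_i(r): fraction of patterns within r of pattern i.
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A regular signal (e.g. a sine wave) yields a value near zero, while a disordered one yields a larger value, which is the sense in which the conclusion below reads a decrease in AE as the EEG becoming less complex.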
Fig. 1: Time-frequency components
It is more interesting to look at time-frequency decompositions of component activations than of separate channel activities, since independent components may directly index the activity of one brain EEG source, whereas channel activities sum potentials volume-conducted from different parts of the brain. We visualize only frequencies up
to 30 Hz. Decompositions using FFTs allow computation of lower frequencies than wavelets, since they compute as low as one cycle per window, whereas the wavelet method uses a fixed number of cycles (three by default) for each frequency. The resulting time-frequency window appears in Fig. 1. The ITC image (lower panel) shows strong synchronization between the component activity and stimulus appearance, first near 15 Hz and then near 4 Hz. The ERSP image (upper panel) shows that the 15-Hz phase-locking is followed by a 15-Hz power increase, and that the 4-Hz phase-locking event is accompanied by, but outlasts, a 4-Hz power increase.

Fig. 2: Correlation dimension

IV. CONCLUSION

From this model and software we conclude that:
- the EEG signal after ujjayi pranayama becomes less complex;
- the correlation dimension, largest Lyapunov exponent, approximate entropy and Hurst exponent decrease;
- there is less parallel functional activity of the brain;
- the predictability of the EEG signal increases.

V. DISCUSSION

To determine the degree of synchronization between the activations of two components, we may plot their event-related cross-coherence, as shown in Fig. 3 (a concept first demonstrated for EEG analysis by Rappelsberger). Even though independent components are (maximally) independent over the whole time range of the training data, they may become transiently (partially) synchronized in specific frequency bands. In the cross-coherence window the two components become synchronized (top panel) around 11.5 Hz. The upper panel shows the coherence magnitude (between 0 and 1, with 1 representing two perfectly synchronized signals). The lower panel indicates the phase difference between the two signals at the time/frequency points where the cross-coherence magnitude (in the top panel) is significant. In this example, the two components are synchronized with a phase offset of about 120 degrees (this phase difference can also be plotted as a latency delay in ms, using the minimum-phase assumption).

Fig. 3: Cross-coherence

Channel statistics may help determine whether or not to remove a channel. When the statistical characteristics of one channel are computed and plotted, some estimated variables of the statistics are printed as text in the lower panel to facilitate graphical analysis and interpretation: the signal mean, standard deviation, skewness, and kurtosis (technically, the first four cumulants of the distribution), as well as the median. The last text output displays the Kolmogorov-Smirnov test result (estimating whether or not the data distribution is Gaussian) at a significance level of p = 0.05. The upper right panel shows the empirical quantile-quantile plot (QQ-plot): the quantiles of the data are plotted against the quantiles of a standard Normal (i.e., Gaussian) distribution. The QQ-plot visually helps to determine whether the data sample is drawn from a Normal distribution: if the data samples do come from a Normal distribution (same shape), even one shifted and re-scaled from the standard normal (different location and scale parameters), the plot is linear.

During the ujjayi pranayama meditation technique, individuals often report the subjective experience of transcendental consciousness, or pure consciousness, the state of least excitation of consciousness. This study found that many experiences of pure consciousness were associated with periods of natural respiratory suspension, and that during these respiratory suspension periods individuals displayed higher mean EEG coherence over all frequencies and brain areas, in contrast to control periods in which subjects voluntarily held their breath. The results agreed with the assessments of doctors in 98 % of cases.
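The event-related cross-coherence discussed above is computed trial-by-trial in time-frequency space; as a simplified stand-in, the sketch below estimates ordinary magnitude-squared (Welch) coherence between two synthetic components that share an 11.5-Hz rhythm with a roughly 120-degree phase offset. All signal parameters (sampling rate, frequency, noise level) are illustrative values echoing the discussion, not data from the study:

```python
import numpy as np
from scipy.signal import coherence

fs = 256                          # Hz, the EEG sampling rate used above
t = np.arange(0, 8, 1 / fs)       # 8 s of synthetic data
rng = np.random.default_rng(0)

# Two "component activations" sharing an 11.5-Hz rhythm, plus
# independent noise; y lags x by about 120 degrees.
x = np.sin(2 * np.pi * 11.5 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 11.5 * t - 2 * np.pi / 3) \
    + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence from 1-s Welch segments (1-Hz bins).
f, cxy = coherence(x, y, fs=fs, nperseg=fs)
peak = f[np.argmax(cxy)]          # frequency of maximal coherence
```

Coherence near 1 at the shared rhythm and near 0 elsewhere mirrors the upper panel of Fig. 3; the phase difference itself would come from the cross-spectral density (e.g. scipy.signal.csd).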
REFERENCES

[1] S. T. Patil, D. S. Bormane, "EEG Analysis Using FFT", BVCON, Sangli, March 2005.
[2] S. T. Patil, D. S. Bormane, "Broadband Multi-carrier based air interface", BVCON, Sangli, March 2005.
[3] S. T. Patil, "Network security against cyber attacks", BVCON, Sangli, March 2005.
[4] S. T. Patil, D. S. Bormane, "Dynamic EEG Analysis Using Multi-resolution Time & Frequency", BIOCON, Pune, September 2005.
[5] S. T. Patil, D. S. Bormane, "Fast Changing Dynamic & Highly Non-stationary EEG Signal Analysis Using Multi-resolution Time & Frequency", Government College of Engineering, Aurangabad, January 2006.
[6] S. T. Patil, "Wi-Fi", Sanghavi College of Engineering, Mumbai, February 2006.
[7] S. T. Patil, D. S. Bormane, "EEG Analysis during Ujjai Pranayama", JNEC, Aurangabad, April 2006.
[8] S. T. Patil, "Clustering Technology", Bharati Vidyapeeth College of Engineering, New Delhi, April 2006.
[9] S. T. Patil, "Distributed Storage Management", College of Engineering, Kopargaon, July 2006.
[10] S. T. Patil, D. S. Bormane, "EEG Analysis during Kapalbhati", ICEMCC, PESIT International Conference, Bangalore, August 2006.
[11] S. T. Patil, D. S. Bormane, "EEG Analysis during Bramari using wavelet", CIT International Conference, Bhubaneshwar, December 2006.
[12] S. T. Patil, D. S. Bormane, "EEG Analysis during Ujjai Pranayama using wavelet", CODEC International Conference, Calcutta, December 2006.
[13] S. T. Patil, D. S. Bormane, "EEG Analysis during Ujjai Pranayama using wavelet", ADCOM, NIT Suratkal International Conference, Mangalore, December 2006.
[14] S. T. Patil, "Enhanced adaptive mesh generation for image representation", NCDC-06 National Conference, Pune, September 2006.
[15] S. T. Patil, "Enhanced adaptive mesh generation for image representation", ETA-2006 National Conference, Rajkot, October 2006.
[16] S. T. Patil, D. S. Bormane, "EEG Analysis during Kapalbhati", BIOTECH-06 International Conference, Nagpur, December 2006.
[17] Chatrian et al., "A glossary of terms most commonly used by clinical electroencephalographers", Electroenceph. and Clin. Neurophysiol., 1994, 37:538-548.
[18] D. Walter et al., "Semiautomatic quantification of sharpness of EEG phenomena", IEEE Trans. on Biomedical Engineering, Vol. BME-20, No. 3, pp. 53-54.
[19] J. Smith, "Automatic analysis and detection of EEG spikes", IEEE Trans. on Biomedical Engineering, 1999, Vol. BME-21, pp. 1-7.
[20] Barreto et al., "Intraoperative focus localization system based on spatio-temporal ECoG analysis", Proc. XV Annual Intl. Conf. of the IEEE Engineering in Medicine and Biology Society, October 2003.
[21] Lin-Sen Pon, "Interictal spike analysis using stochastic point process", Proceedings of the International Conference, IEEE, 2003.
[22] Susumu Date, "A grid application for an evaluation of brain function using ICA", Proceedings of the International Conference, IEEE, 2002.

BIOGRAPHIES

Prof. S.T. Patil — Completed B.E. Electronics from Marathwada University, Aurangabad, in 1988, and M.Tech. Computer from Visvesvaraya Technological University, Belgaum, in July 2003. Pursuing a Ph.D. in Computer from Bharati Vidyapeeth Deemed University, Pune. Has 19 years of teaching experience as a lecturer, Training & Placement Officer, Head of Department and Assistant Professor. Presently working as an Assistant Professor in the Computer Engineering & Information Technology department of Bharati Vidyapeeth Deemed University College of Engineering, Pune (India). Has presented 14 papers at national and international conferences.

Dr. D.S. Bormane — Completed B.E. Electronics from Marathwada University, Aurangabad, in 1987, M.E. Electronics from Shivaji University, Kolhapur, and a Ph.D. in Computer from Ramanand Tirth University, Nanded. Has 20 years of teaching experience as Lecturer, Assistant Professor, Professor and Head of Department. Currently working as Principal of Rajarshi Shahu College of Engineering, Pune (India). Has published 24 papers in national and international conferences and journals.
A Study of Stochastic Resonance as a Mathematical Model of Electrogastrography during Sitting Position
Y. Matsuura1,2, H. Takada3 and K. Yokoyama1
1 Graduate School of Natural Science, Nagoya City University, Nagoya, Japan
2 JSPS Research Fellow, Tokyo, Japan
3 Department of Radiology, Gifu University of Medical Science, Seki, Japan
4 Graduate School of Design and Architecture, Nagoya City University, Nagoya, Japan

Abstract — Electrogastrography (EGG) is an abdominal surface measurement of the electrical activity of the stomach. It is very important clinically to record and analyze multichannel EGGs, which provide more information on the propagation and co-ordination of gastric contractions. This study measured gastrointestinal motility with the aim of obtaining a mathematical model of the EGG and identifying factors that describe the diseases resulting from constipation and erosive gastritis. The waveform of the electric potential in the Cajal cells is similar to the graphs of numerical solutions to the van der Pol equation. Hence, we added to the van der Pol equation a periodic function and random white noises, which represented the intestinal motility and other biosignals, respectively. We rewrote the stochastic differential equations (SDEs) as difference equations, and the numerical solutions to the SDEs were obtained by the Runge–Kutta–Gill formula as the numerical calculus, where we set the time step and initial values to be 0.05 and (0, 0.5), respectively. Pseudorandom numbers were substituted in the white noise terms. In this study, the pseudorandom numbers were generated by the Mersenne Twister method. These numerical calculations were divided into 12000 time steps. The numerical solutions and EGG were extracted after every 20 steps. The EGG and numerical solutions were compared and evaluated by the Lyapunov exponent and translation error. The EGG was well described by the stochastic resonance in the SDEs. Keywords — Electrogastrography (EGG), numerical analysis, stochastic resonance
I. INTRODUCTION It is known that attractors can be reconstructed by dynamical equation systems (DESs) such as the Duffing equation, Henon map, and Lorenz differential equation. It is very interesting to note that the structure of an attractor is also derived from time series data in a phase space. The DESs were obtained as mathematical models that regenerated time series data. Anomalous signals are introduced by nonstationary processes, for instance, the degeneration of singular points in the potential function involved in the DESs; their degree of freedom increases or stochastic factors are added to them. The visible determinism in the
latter case would be different from that in the case where random variables do not exist. It is well known that an empirical threshold translation error of 0.5 is used to classify mathematical models as either deterministic or stochastic generators [1]; however, the estimated translation error is generally not the same in cases of smaller signal-to-noise (S/N) ratios. Takada (2008) [2] quoted an example of analyzing numerical solutions to the nonlinear stochastic differential equations (SDEs)

ẋ = y − λ grad f(x) + μw₁(t),   (1.1)
ẏ = −x + μw₂(t),   (1.2)

subject to

f(x) = (1/12)x⁴ − (b/2)x²,   (2)

where w₁(t) and w₂(t) are independent white noise terms and μ = 0, 1, …, 20. By increasing μ in eq. (1), numerical solutions for smaller S/N ratios can be obtained.

Percutaneous electrogastrography (EGG) is a simple method to examine gastrointestinal activity without constraint. EGG is a term generally applied to the measurement of human gastric electrical activity. In 1921, Walter C. Alvarez reported performing EGG for the first time in humans [3]. In EGG, the electrical activity of the stomach is recorded by placing electrodes on the surface of the abdominal wall [4]. In the stomach, a pacemaker on the side of the greater curvature generates electrical activity at a rate of 3 cycles per minute (3 cpm); the electrical signal is then transferred to the pyloric side [5]–[7]. Previously, it was difficult to measure this electrical activity because the EGG signal was composed of low-frequency components and high-frequency noise caused by the electrical activity of the diaphragm and heart. However, the accuracy of EGG measurements has improved recently, and gastroenteric motility can be evaluated by spectrum analysis of the EGG signals [8]–[9]. Many previous studies on EGG have been reported, and most of these studies pertain to the clinical setting [10], e.g.,
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 5–8, 2009 www.springerlink.com
evaluation of the effects of hormones and drugs on EGG and the relationship between EGG and kinesia. EGG has been used to study the effects of warm compresses (for the improvement of constipation) on gastrointestinal activity [11], the usefulness of warm compresses in the epigastric region for the improvement of constipation [12], and the characterization of intestinal activity in patients with chronic constipation [13]. Gastric electrical potential is generated by interstitial cells of Cajal (ICCs) [14]. ICCs are pacemaker cells that spontaneously depolarize and repolarize at a rate of 3 cpm. They demonstrate low-amplitude, rhythmic, and circular contractions only if the electrical potential is over a threshold. Human gastric pacemaker potential migrates around the stomach very quickly and moves distally through the antrum in approximately 20 seconds, resulting in the normal gastric electrical frequency of 3 cpm. This moving electrical wavefront is recorded in EGG, in which the gastric myoelectrical activity is recorded using electrodes placed on the surface of the epigastrium. However, the electrogastrogram also contains other biological signals, for instance, the electrical activity of the heart, intestinal movements, and myenteric potential oscillations in general. In the present study, gastrointestinal motility was measured with the aim of obtaining a mathematical model of the EGG and identifying factors that describe the diseases resulting from constipation and erosive gastritis.

II. METHODS

A. Mathematical Model and the Numerical Simulations
As a mathematical model of the EGG during the sitting position, we propose the following SDEs, in which a periodic function is added to eq. (1.1):

ẋ = y − λ grad f(x) + s(t) + μw₁(t),   (3.1)
ẏ = −x + μw₂(t).   (3.2)

The function s(t) and the white noise terms wᵢ(t) (i = 1, 2) represent intestinal movements and other biosignals (for instance, myenteric potential oscillations that are weak and random), respectively. In most cases there is an optimum noise amplitude, which has motivated the name stochastic resonance (SR) for this rather counterintuitive phenomenon. In other words, SR occurs when the signal-to-noise ratio (SNR) of a nonlinear system is maximized for a moderate value of noise intensity [15]. In this study, we numerically solve eq. (3) and verify the SR in the SDEs. We also investigate the effect of the SR and evaluate the SDEs as a mathematical model of the EGG.

B. Physiological Procedure

The subjects were 14 healthy people (7 male and 7 female) aged 21–25 years. A sufficient explanation of the experiment was provided to all subjects, and written consent was obtained from them. EGGs were obtained for 30 min in the sitting position at 1 kHz by using an A/D converter (AD16-16U (PCI) EH; CONTEC, Japan). EGGs were amplified using a bioamplifier (MT11; NEC Medical, Japan) and recorded using a tape recorder (PC216Ax; Sony Precision Technology, Japan). To remove noise from the time series of EGG data obtained at 1 kHz, resampling was performed at 0.5 Hz. For the analysis, the resampled time series was obtained as follows:

x_i = (1/2000) Σ_{j=0}^{1999} y(2000 i + j),   (i = 0, 1, …, 1799)

In this experiment, 9 disposable electrodes (Blue Sensor; Medicotest Co. Ltd., Ølstykke, Denmark) were affixed at ch1–ch8 and e, as shown in Fig. 1; the electrode affixed at e was a reference electrode. Prior to the application of the electrodes, the skin resistance was sufficiently reduced using SkinPure (Nihon Kohden Inc., Tokyo, Japan). Several methods have been proposed for analyzing EGG data. The EGG data obtained at ch5, the position closest to the pacemaker of gastrointestinal motility, were analyzed in this study.
Fig. 1 Positions of the electrodes
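The resampling rule in II.B is a block average: each 0.5-Hz output sample is the mean of 2000 consecutive 1-kHz input samples. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def block_average(y, block=2000):
    """Downsample y by averaging non-overlapping blocks of `block` samples.

    With block = 2000 and a 1-kHz input, each output sample covers 2 s,
    i.e. an output rate of 0.5 Hz:
        x_i = (1/2000) * sum_{j=0}^{1999} y[2000 i + j].
    """
    y = np.asarray(y, dtype=float)
    n = len(y) // block                  # number of complete blocks
    return y[:n * block].reshape(n, block).mean(axis=1)
```

Averaging before decimation also acts as a crude low-pass filter, which is consistent with its use here to suppress the high-frequency cardiac and diaphragm activity mentioned in the introduction.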
C. Calculation Procedure

1. We rewrote eq. (3) as difference equations and obtained numerical solutions to them by the Runge–Kutta–Gill formula as the numerical calculus; the initial values were (0, 0.5). Pseudorandom numbers were substituted for the white noise terms wᵢ(t); the pseudorandom numbers used here were generated using the Mersenne Twister [16]. These numerical calculations were performed over N = 12000 time steps, with a time-step unit of 0.05.
2. Values in the numerical solutions were recorded every 40 time steps, which corresponds to a signal sampling rate of 0.5 Hz.
3. The autocorrelation function was calculated for each component of the numerical solution.
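The procedure can be sketched as follows. The paper integrates the difference equations with the Runge–Kutta–Gill formula; for brevity this sketch uses the simpler Euler–Maruyama scheme, and the coefficients (λ = b = 1, a unit-amplitude 6-cpm drive s(t), and a small noise intensity μ = 0.5) are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def simulate_model(mu=0.5, lam=1.0, b=1.0, dt=0.05, n_steps=12000,
                   record_every=40, seed=0):
    """Euler-Maruyama integration of eqs. (3.1)-(3.2).

    f(x) = x**4 / 12 - (b / 2) * x**2, so grad f(x) = x**3 / 3 - b * x.
    s(t) is a 6-cpm sinusoid standing in for intestinal movements.
    """
    rng = np.random.default_rng(seed)
    omega = 2 * np.pi * 6 / 60          # 6 cycles per minute, t in seconds
    x, y = 0.0, 0.5                     # initial values from the paper
    sqrt_dt = np.sqrt(dt)
    xs = []
    for k in range(n_steps):
        s = np.sin(omega * k * dt)
        grad_f = x ** 3 / 3 - b * x
        w1, w2 = rng.standard_normal(2)
        # Both updates use the state from the previous step.
        x, y = (x + (y - lam * grad_f + s) * dt + mu * w1 * sqrt_dt,
                y - x * dt + mu * w2 * sqrt_dt)
        if k % record_every == 0:       # record every 40 steps (0.5 Hz)
            xs.append(x)
    return np.array(xs)
```

Note that with λ = b = 1 and no noise, eqs. (3.1)–(3.2) reduce to the Liénard form of the van der Pol equation mentioned in the abstract, so the deterministic part of the sketch produces a bounded limit-cycle oscillation.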
III. RESULTS AND DISCUSSION

Over the 12000 time steps there was no case in which the numerical solutions diverged. For μ = 0, 1, …, 20, the value derived from the first component of the numerical solution did not differ from that derived from the second component. With regard to eq. (3), the SR occurred under the condition of an appropriate coefficient μ. Some biosystems are based on the nonlinear phenomenon of SR, in which the detection of small afferent signals can be enhanced by the addition of an appropriate amount of noise [15]. Furthermore, this mechanism would facilitate behavioral, perceptive, and autonomic responses in animals and humans, for instance, information transfer in crayfish mechanoreceptors [17], tactile sensation in rats [18], availability of electrosensory information for prey capture [19], human hearing [20], vibrotactile sensitivity in the elderly [21], and spatial vision of amblyopes [22].

Here, we examined whether the SR generated by eq. (3) can describe the EGG time series (Fig. 4). A cross-correlation coefficient ρ̂_xs between the observed signal x(t) and the periodic function s(t) was calculated as a substitute for the SNR used in previous studies in which the occurrence of the SR was investigated. Fig. 2 shows ρ̂_xs between the numerical solutions and the periodic function in eq. (3.1). The cross-correlation coefficient was maximized for a moderate value of noise intensity, μ = 12. Thus, the SR could be generated by eq. (3) with μ = 12, which is regarded as a mathematical model of the EGG in this study. We then compared this numerical solution with the EGG data (Fig. 3). Temporal variations in the numerical solutions were similar to those in the EGG data (Fig. 4). Numerical solutions involved in the SR were highly correlated with the periodic function s(t), which represented intestinal movements (6 cpm). Gastric electrical activity in a healthy person might synchronize with the intestinal activity.

Fig. 2 The cross-correlation coefficient for each component of the numerical solution

Fig. 3 An example of numerical solutions (μ = 12)
Fig. 4 A typical EGG time series. (Sampling frequency is 0.5 Hz.)
In the next step, we would quantitatively evaluate the affinity by using translation errors [23] and Lyapunov exponents [24]-[26] in embedding space. Translation error (Etrans) measures the smoothness of flow in an attractor, which is assumed to generate the time-series data. In general, the threshold of the translation error for classifying the time-series data as deterministic or stochastic is 0.5, which
is half of the translation error resulting from a random walk. Chaotic processes are sensitively dependent on initial conditions and can be quantified using the Lyapunov exponent [26]. If the Lyapunov exponent has a positive value, the dynamics are regarded as a chaotic process.
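To illustrate what the translation error measures, the sketch below is a rough Wayland-style estimate [23] on a delay-embedded series. The embedding dimension, delay, and neighbour counts are illustrative choices of ours, and an actual analysis would follow the double-Wayland algorithm cited earlier:

```python
import numpy as np

def translation_error(x, dim=3, tau=1, n_ref=50, k=4, seed=0):
    """Median Wayland translation error of a time series.

    Values near 0 suggest a smooth deterministic flow; values around 1
    are typical of random data (cf. the 0.5 threshold in the text).
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau - 1          # last index with an image point
    emb = np.array([x[i:i + dim * tau:tau] for i in range(n + 1)])
    rng = np.random.default_rng(seed)
    refs = rng.choice(n - 1, size=min(n_ref, n - 1), replace=False)
    errs = []
    for i in refs:
        d = np.linalg.norm(emb[:n] - emb[i], axis=1)
        d[i] = np.inf                         # exclude the point itself
        idx = np.append(np.argsort(d)[:k], i) # k neighbours plus the point
        v = emb[idx + 1] - emb[idx]           # one-step translation vectors
        vbar = v.mean(axis=0)
        errs.append(np.mean(np.linalg.norm(v - vbar, axis=1))
                    / np.linalg.norm(vbar))
    return float(np.median(errs))
```

For a deterministic flow the translation vectors of neighbouring points are nearly parallel, so the error is small; for white noise they point in unrelated directions and the error sits well above the 0.5 threshold.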
ACKNOWLEDGMENT This work was supported in part by the JSPS Research Fellowships for Young Scientists, 07842, 2006.
REFERENCES

1. Matsumoto T, Tokunaga R, Miyano T, Tokuda I (2002) Chaos and time series prediction, Baihukan, Tokyo, 49–64 (in Japanese).
2. Takada H (2008) Effect of S/N ratio on translation error estimated by double-Wayland algorithm, Bulletin of Gifu University of Medical Science, 2, 135–140.
3. Alvarez W C (1922) The electrogastrogram and what it shows, J Am Med Assoc, 78, 1116–1118.
4. Hongo M, Okuno H (1992) Evaluation of the function of gastric motility, J. Smooth Muscle Res., 28, 192–195 (in Japanese).
5. Couturier D, Roze C, Paolaggi J, Debray C (1972) Electrical activity of the normal human stomach: a comparative study of recordings obtained from the serosal and mucosal sides, Dig. Dis. Sci., 17, 969–976.
6. Hinder R A, Kelly K A (1977) Human gastric pacesetter potentials: site of origin, spread, and response to gastric transection and proximal gastric vagotomy, Amer. J. Surg., 133, 29–33.
7. Kwong N K, Brown B H, Whittaker G E, Duthie H L (1970) Electrical activity of the gastric antrum in man, Br. J. Surg., 57, 913–916.
8. Van Der Schee E J, Smout A J P M, Grashuis J L (1982) Application of running spectrum analysis to electrogastrographic signals recorded from dog and man, in Motility of the Digestive Tract, ed. M. Wienbeck, Raven Press, New York.
9. Van Der Schee E J, Grashuis J L (1987) Running spectrum analysis as an aid in the representation and interpretation of electrogastrographic signals, Med. Biol. Eng. & Comput., 25, 57–62.
10. Chen J D, McCallum R W (1993) Clinical applications of electrogastrography, Am. J. Gastroenterol., 88, 1324–1336.
11. Nagai M, Wada M, Kobayashi Y, Togawa S (2003) Effects of lumbar skin warming on gastric motility and blood pressure in humans, Jpn. J. Physiol., 53, 45–51.
12. Kawachi N, Iwase S, Takada H, Michigami D, Watanabe Y, Mae N (2002) Effect of wet hot packs applied to the epigastrium on electrogastrogram in constipated young women, Autonomic Nervous System, 39, 433–437 (in Japanese).
13. Matsuura Y, Iwase S, Takada H, Watanabe Y, Miyashita E (2003) Effect of three days of consecutive hot wet pack application to the epigastrium on electrogastrography in constipated young women, Autonomic Nervous System, 40, 406–411 (in Japanese).
14. Cajal S R (1911) Histologie du système nerveux de l'homme et des vertébrés, 2:942, Maloine, Paris.
15. Benzi R, Sutera A, Vulpiani A (1981) The mechanism of stochastic resonance, Journal of Physics A, 14, L453–L457.
16. Matsumoto M, Nishimura T (1998) Mersenne Twister: a 623-dimensionally equidistributed uniform pseudorandom number generator, ACM Transactions on Modeling and Computer Simulation, 8(1), 3–30.
17. Douglass J K, Wilkens L, Pantazelou E, Moss F (1993) Noise enhancement of information transfer in crayfish mechanoreceptors by stochastic resonance, Nature, 365, 337–340.
18. Collins J J, Imhoff T T, Grigg P (1996) Noise enhanced tactile sensation, Nature, 383, 770.
19. Russell E V, Israeloff N E (2000) Direct observation of molecular cooperativity near the glass transition, Nature, 408, 695–698.
20. Zeng F G, Fu Q J, Morse R (2000) Human hearing enhanced by noise, Brain Res., 869, 251–255.
21. Liu W, Lipsitz L A, Montero-Odasso M, Bean J, Kerrigan D C, Collins J J (2002) Noise-enhanced vibrotactile sensitivity in older adults, patients with stroke, and patients with diabetic neuropathy, Arch Phys Med Rehabil, 83, 171–176.
22. Levi D M, Klein S A (2003) Noise provides some new signals about the spatial vision of amblyopes, J Neurosci, 23, 2522–2526.
23. Wayland R, Bromley D, Pickett D, Passamante A (1993) Recognizing determinism in a time series, Phys. Rev. Lett., 70, 580–582.
24. Lyapunov A M (1892) The general problem of the stability of motion, Comm. Soc. Math. Kharkow (in Russian); reprinted in English: Lyapunov A M (1992) The general problem of the stability of motion, International Journal of Control, 55(3), 531–534.
25. Sato S, Sano M, Sawada Y (1987) Practical methods of measuring the generalized dimension and the largest Lyapunov exponent in high dimensional chaotic systems, Prog. Theor. Phys., 77, 1–5.
26. Rosenstein M T, Collins J J, De Luca C J (1993) A practical method for calculating largest Lyapunov exponents from small data sets, Physica D, 65, 117–134.

Author: MATSUURA Yasuyuki
Institute: Graduate School of Natural Sciences, Nagoya City University and JSPS Research Fellow
Street: 1 Yamanohata, Mizuho-cho, Mizuho-ku
City: Nagoya
Country: Japan
Email: [email protected]
_________________________________________________________________
Possibility of MEG as an Early Diagnosis Tool for Alzheimer's Disease: A Study of Event Related Field in Missing Stimulus Paradigm

N. Hatsusaka, M. Higuchi and H. Kado

Applied Electronics Laboratory, Kanazawa Institute of Technology, Kanazawa, Japan

Abstract — We are investigating a diagnostic method for Alzheimer's disease (AD), a form of dementia and one of the most prominent neurodegenerative disorders, using magnetoencephalography (MEG). MEG is a non-invasive technique for investigating brain function using an array of superconducting quantum interference device (SQUID) sensors arranged around the head. In this study, we observed the event-related field in the 'missing stimulus' paradigm. The subjects were presented with short beep tones at a fixed interval. Some tones were omitted randomly from the sequence, and each omission is called a 'tone-omitted event'. We focused on the specific magnetic field component induced by the tone-omitted event. 32 patients with early AD and 32 age-matched controls were examined with a 160-channel whole-head MEG system. The MEG signals related to the tone-omitted events were collected from each subject. The amplitude of the averaged waveform in the AD group was significantly smaller than that in the control group. This result suggests that MEG is useful for AD diagnosis.

Keywords — MEG, Alzheimer's disease, auditory stimulus, Event Related Field, missing stimulus paradigm
I. INTRODUCTION

Magnetoencephalography (MEG) is a noninvasive method for investigating neural activity in the brain, based on biomagnetic measurement. MEG detects the magnetic field generated by the electrical neural activity of the brain, which is induced by postsynaptic activity in the cortex. The intensity of the magnetic field elicited from the brain is on the order of femtotesla to picotesla; such a small magnetic field can be detected only by superconducting quantum interference device (SQUID) sensors. A recent whole-head MEG system is equipped with a head-shaped array of more than one hundred SQUID sensors.

There are several brain imaging techniques other than MEG for measuring brain function. Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) observe brain function using a radioactive isotope administered to the subject, and functional MRI observes brain function using a strong magnetic field. These systems measure the chemical changes caused by the metabolic activity of nerves. MEG and electroencephalography (EEG), in contrast, can directly observe the neural electrical activity. EEG has lower spatial resolution than MEG because the electric potential distribution on the scalp is influenced by the differences in conductivity among the tissues, such as the skull and scalp. The magnetic field distribution, on the other hand, is not distorted, because the permeability of body tissues is constant and almost the same as that of free space. MEG therefore has higher temporal and spatial resolution than the other techniques.

Alzheimer's disease (AD) is one of the most severe neurodegenerative disorders. AD diagnosis is mostly based on clinical features, and it is difficult to detect the disease in its early stages. MEG is expected to become an early diagnosis tool for cerebrodegenerative disorders. We are developing diagnostic protocols to detect the early stage of AD using an MEG system.

II. MISSING STIMULUS PARADIGM

The intensity of the MEG signal in response to a single stimulus is usually too small and is submerged in noise. Therefore, the stimulus is given repeatedly, and the MEG signals evoked by each stimulus are averaged to improve the signal-to-noise (S/N) ratio. For example, in a typical auditory evoked magnetic field measurement a tone must be repeated more than one hundred times, and it takes several minutes to acquire signals with a good S/N ratio. It is often difficult for AD patients to keep their attention focused on the stimulus during the MEG measurement. The missing stimulus paradigm is a kind of passive attention task, in which subjects do not need to pay attention to the stimuli. The subjects are presented with short beep tones at a fixed interval; some tones are omitted randomly from the sequence, and these omissions are called 'tone-omitted events'. In EEG studies with the missing tone stimulus paradigm, it has been reported that a specific response with a latency of about 200 ms, called N200, is evoked by the tone-omitted event [1][2]. The N200 response is one of the event-related potentials (ERPs).
It is considered a component related to stimulus discrimination and is used to assess cognitive function [3]. The missing tone stimulus paradigm is expected to be useful for the diagnosis of dementia because the subject only has to receive the tone sequence with the omissions passively.
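The trial-averaging step described above can be sketched with synthetic data; the sampling rate, epoch window, trial count and field amplitudes below are illustrative assumptions, not values from this study:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000                          # sampling rate in Hz (assumed for illustration)
n_events = 100                     # number of tone-omitted events
t = np.arange(-100, 400) / fs      # epoch: -100 ms to +400 ms around each event

# Hypothetical evoked response: a small peak near 200 ms (N200-like latency),
# buried in sensor noise that is much larger than the response itself.
evoked = 50e-15 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
trials = evoked + 200e-15 * rng.standard_normal((n_events, t.size))

# Averaging across trials: the evoked response adds coherently while the
# noise amplitude shrinks roughly as 1/sqrt(n_events).
average = trials.mean(axis=0)

noise_single = (trials[0] - evoked).std()
noise_avg = (average - evoked).std()
```

With 100 events the residual noise in the average is roughly one tenth of the single-trial noise, which is why more than one hundred repetitions are typically needed to resolve an evoked field.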
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 9–12, 2009 www.springerlink.com
In this study, we measured the event-related magnetic field (ERF), the magnetic counterpart of the ERP, in the missing tone stimulus paradigm. AD patients and age-matched controls (NC) were examined. We compare the results of the two groups and discuss the possibility of MEG as an early diagnosis tool.

III. MATERIALS AND METHODS
Table 1 Demographic, clinical and neuropsychological data of AD and NC

                       AD group                 NC group
                       mean ± S.D. (range)      mean ± S.D. (range)
Number of subjects     32                       32
Age                    70.8 ± 9.2 (54-86)       71.4 ± 4.7 (64-83)
Sex (males/females)    19/13                    13/19
MMSE                   21.72 ± 3.95 (14-30)*    28.56 ± 1.72 (24-30)
CDR                    0.96 ± 0.26 (0.5-2)      -

Mean value ± S.D. and range of the neuropsychological data for each group. MMSE, Mini Mental State Examination; CDR, Clinical Dementia Rating; *, P < 0.05.

The coefficients were not significant (p > 0.05 in all situations) for age, gender and risk for CVD. However, the coefficients for BA diameter and peak PPG AC were significant (p < 0.05). Therefore, age, gender and risk for CVD were removed from the model, and the coefficients of the new model are shown in Table 1. In this model, only the baseline BA diameter and the peak PPG AC are independent predictors for estimating the peak FMD. All coefficients remain significant after the removal of the three variables from the initial model. The model residuals (Table 2), which are the differences between the observed and the predicted values, for the new model have no outliers (minimum and maximum standardized residuals within ± 3), are independent (the Durbin-Watson statistic lies between 0 and 4.0, Table 3) and are normally distributed [5]. There was no multicollinearity between BA diameter and peak PPG AC, as indicated by the high tolerance, which gives the strength of the linear relationship among the independent variables [5]. Therefore, this model can be accepted and considered as the final model. The coefficients (B) of the final model represent the values of β0, β1 and β2 of the generalized linear multiple regression represented by equation 1. The final model can be written as:

peak FMD = 35.72 - 4.71 × BA diameter + 0.03 × peak PPG AC    (2)

The adjusted R square (Table 3) indicates that the BA diameter and peak PPG AC taken together explain 44.4 % of the variation of the peak FMD in the model.
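As a sketch, the final model in equation (2) can be wrapped in a small function; the example inputs below are hypothetical and the units follow those used in the study:

```python
def estimate_peak_fmd(ba_diameter, peak_ppg_ac):
    """Peak FMD (%) from equation (2); intercept and slopes from Table 1."""
    return 35.72 - 4.71 * ba_diameter + 0.03 * peak_ppg_ac

# Hypothetical subject: baseline BA diameter of 3.5 and peak PPG AC
# amplitude of 400, both in the units used in the study (illustrative only).
fmd = estimate_peak_fmd(3.5, 400.0)
```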
The estimated model outputs were then computed and compared to the measured data. The correlation between the model output (estimated using equation 2) and the measured data, as well as the mean absolute error, which is the absolute difference between the measured and estimated data, were evaluated to provide information regarding the model performance and the group statistics (Table 4).

Table 1 Model(a) coefficients

               Unstd. Coeff.         Std. Coeff.
               B       Std. Error    Beta     t       Sig.      Tolerance
Constant       35.72   3.99          -        8.95    < 0.001   -
BA diameter    -4.71   1.02          -0.43    -4.63   < 0.001   0.761
peak PPG AC    0.03    0.01          0.35     3.82    < 0.001   0.761

(a) Dependent variable: peak FMD. Predictors: Constant, BA diameter, peak PPG AC
Table 2 Residual statistics

                       Minimum   Maximum   Mean    SD     N
Predicted Value        15.47     34.40     24.34   4.62   86
Residual               -11.67    11.45     0.00    5.03   86
Std. Predicted Value   -1.92     2.18      0.00    1.00   86
Std. Residual          -2.29     2.25      0.00    0.99   86
Table 3 Model summary

R       R Square   Adjusted R Square   Std. Error of the Estimate   Durbin-Watson
0.676   0.457      0.444               5.093                        1.630
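The adjusted R square reported in Table 3 follows from the R square via the standard correction for the number of predictors; a quick check with n = 86 observations and k = 2 predictors:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 for n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

adj = adjusted_r2(0.457, 86, 2)   # matches the 0.444 reported in Table 3
```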
Table 4 Group statistics for the final model

                      Healthy group (N = 43)   Risk group (N = 43)   t-test
                      Mean ± SD                Mean ± SD             p value
Measured peak FMD     26.4 ± 7.67              22.3 ± 5.19           0.005
Estimated peak FMD    26.6 ± 4.29              22.4 ± 4.21           < 0.001
Model error           0.16 ± 0.115             0.17 ± 0.120          0.728*

* Not significant
The measured peak FMD for the healthy group (26.4 ± 7.67 %) was significantly higher than that of the risk group (22.3 ± 5.19 %), p = 0.005; a similar trend was observed in the estimated data, healthy group (26.6 ± 4.29 %) versus risk group (22.4 ± 4.21 %), p < 0.001. There was no difference between the two groups in terms of the absolute model error (0.16 ± 0.115 versus 0.17 ± 0.120), p = 0.728. Thus, the model absolute error can be represented as the total mean
A Statistical Model to Estimate Flow Mediated Dilation Using Recorded Finger Photoplethysmogram
error of 0.16 ± 0.117. The model output (estimated peak FMD) correlates with the measured peak FMD over the sample population (R = 0.725) (Fig. 2). The receiver operating characteristic (ROC) curves for the estimated model output and the measured data are shown in Fig. 3. The model shows better performance, as the total area under the ROC curve for the model output is larger than that for the measured data.
IV. CONCLUSIONS

In this paper, we demonstrated an exercise in statistical modeling. The results show that a statistical model built by linear multiple regression can predict the peak BA FMD. The model uses the baseline BA diameter and the peak PPG AC of a person to calculate an estimate of that person's peak FMD. The model thus provides a means of estimating the peak BA FMD, allowing for an alternative technique for evaluating endothelial function.
ACKNOWLEDGMENT

This work has been supported by the Science Fund grant (01-01-02-SF0227) from the Ministry of Science, Technology and Innovation, Malaysia. We would like to thank Noraidatulakma Abdullah for her kind assistance with the statistical data analysis.
Fig. 2 Regression of the estimated and measured data

Fig. 3 ROC curve for the model (estimated) output and measured data

REFERENCES
1. Vanhoutte P M (1989) Endothelium and control of vascular function: State of the art lecture. Hypertension 13: 658-667
2. Abularrage C J, Sidawy A N, Aidinian G, et al. (2005) Evaluation of macrocirculatory endothelium-dependent and endothelium-independent vasoreactivity in vascular disease. Perspectives in Vascular Surgery and Endovascular Therapy 3: 45-53
3. Zahedi E, Jaafar R, Mohd Ali M A, et al. (2008) Finger photoplethysmogram pulse amplitude changes induced by flow mediated dilation. Physiological Measurement 29: 625-637
4. Brace N, Kemp R, Snelgar R (2006) SPSS for Psychologists. Psychology Press, London
5. Chan Y H (2004) Biostatistics 201: Linear Regression Analysis. Singapore Medical Journal 45(2): 55-61

Author: Rosmina Jaafar
Institute: University Kuala Lumpur-British Malaysian Institute
Street: Jalan Sg Pusu
City: Gombak, Selangor
Country: Malaysia
Email: [email protected]
Automatic Extraction of Blood Vessels, Bifurcations and End Points in the Retinal Vascular Tree

Edoardo Ardizzone, Roberto Pirrone, Orazio Gambino and Francesco Scaturro

Università degli Studi di Palermo, Dipartimento di Ingegneria Informatica, Building 6, 3rd floor, 90128 Palermo, Italy

Abstract — In this paper we present an effective algorithm for the automated extraction of the vascular tree in retinal images, including the detection of bifurcations, crossovers and end points. Correct identification of these features in the ocular fundus helps the diagnosis of important systemic diseases, such as diabetes and hypertension. The pre-processing consists of artefact removal based on an anisotropic diffusion filter. A matched filter is then applied to enhance the blood vessels. The filter uses a fully adaptive kernel, because each vessel has its own orientation and thickness, so the kernel must be rotated over all possible directions; a suitable kernel has been designed to meet this requirement. The maximum filter response is retained for each pixel and the contrast is increased again to facilitate the next step. A threshold operator is applied to obtain a binary image of the vascular tree. Finally, a length filter produces a clean and complete vascular tree by removing isolated pixels, using connected-component labelling. Once the binary image of the vascular tree is obtained, we detect vascular bifurcations, crossovers and end points using a cross-correlation based method. We measured the algorithm's performance by evaluating the area under the ROC curve, computed by comparing the blood vessels recognized by our approach with those labelled manually in the DRIVE database. This curve is also used for threshold tuning.

Keywords — Anisotropic Diffusion, Matched Filter, Retinal Vessels, ROC curve.
I. INTRODUCTION

The retinal image is easily acquired with a medical device called a fundus camera, consisting of a powerful digital camera with dedicated optics [1]. The two main anatomical structures of the retinal image involved in the diagnostic process are the blood vessels and the optic disc. In particular, the vascular tree is a very important structure, because the analysis of the vascular intersections makes it possible to discover lesions of the retina and to perform clinical studies [2]. We propose a vessel extraction method that has been tested on the fundus images provided by the DRIVE database [6][4]. It contains 20 test images for which three observers performed manual extraction of the retinal vessels, along with the corresponding ROIs to distinguish the foreground from the background. In [10] the vessel extraction is performed using an adaptive threshold followed by a multi-scale analytical scheme based on Gabor filters and scale multiplication. The method described in [11] makes use of a tracking technique and a classification phase based on fuzzy c-means for vessel cross-section identification. Moreover, a greedy algorithm is used to delete false vessels, and a final step is devoted to detecting bifurcations and crossing points. In [12] the optic nerve and the macula are detected using images acquired with digital red-free fundus photography; after the vascular tree extraction, spatial features such as the average thickness and average orientation of the vessels are measured. Our method starts by removing artefacts using an anisotropic diffusion filter. Then a suitable matched filter kernel has been developed, which is used together with a contrast stretching operator to highlight the vessels with respect to the background. A threshold is used to obtain a binary image of the vasculature; the threshold has been experimentally tuned using the ROC curve, which describes the efficiency of the vessel classification with respect to the reference provided by the DRIVE dataset. The last step is the use of a length filter to erase small, isolated objects. End, bifurcation and crossover points are detected as in [8]. In the rest of the paper the processing steps are detailed and the performance evaluation set-up is described. Finally some conclusions are drawn.

II. BLOOD VESSELS EXTRACTION

A. Artifacts removal

The retinal image is corrupted by strong intensity variations. A circle-shaped luminance peak can be seen in the region of the optic nerve; the fovea region exhibits intensity attenuation, because it is a depression on the retinal surface; and a diffuse shading artifact afflicts the whole eye fundus. Before applying any segmentation task, an intensity compensation phase must be performed. Since the retinal image is an RGB image, we consider the G channel of the RGB triad as the intensity component (see fig. 1-a). The most common way to suppress intensity variations is the application of the homomorphic filter, but it generates a halo artifact on the
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 22–26, 2009 www.springerlink.com
boundary between the retina and the background (see fig. 1-b). The filter must take into account a suitable Region Of Interest (ROI) to select the retina, thus avoiding this undesirable phenomenon. The G channel image is filtered using an anisotropic diffusion filter [13], tuned in correspondence of the region selected by the ROI. The ROI is created as a binary image in which the region surrounded by the boundary of the retina is filled with 1s (see fig. 1-c); the boundary is extracted using a Canny edge detector. The resulting image (see fig. 1-d) is obtained using the following formula:

R(i,j) = G(i,j) - Gf(i,j)

where Gf(i,j) is the filtered version of the original image G(i,j). We wanted a low-pass behavior for the anisotropic diffusion filter D, so we adopted the Gaussian-like diffusion function instead of the Lorentzian-like one:

D(∇G) = exp(-(∇G/k)^2)

Being an edge-preserving filter aimed at noise removal, the value of the parameter k must be high enough to warrant the low-pass behavior, and it has to be selected as a function of the retinal intensity gradient ∇G in correspondence of the ROI. The parameter k is chosen proportional to the standard deviation of ∇G over the ROI; a proportionality factor of 5 and 50 iterations have been used for the whole dataset. The intensities of the resulting image R(i,j) have been normalized to the interval [0,1] to be independent of the input dynamics. A side effect of the adopted filtering is a decrease in contrast, so a contrast stretching is performed, as shown in fig. 1-e. The target dynamics to be obtained by the stretching operator has been selected as [μ - α·σ, μ + σ], where μ and σ are the mean and the standard deviation, respectively. To avoid a residual halo phenomenon, the output dynamics has been limited to 60% of the available one, and the input interval is unbalanced by the coefficient α, as can be seen above. The value of α has been set to 1.5 for the whole dataset. A final noise removal step based on the anisotropic filter is performed, obtaining the image in fig. 1-f. We determined the parameters for this task experimentally: k = 0.3 and 3 iterations.

Fig. 1 a) Green channel of the RGB original image; b) the image in a) filtered with a standard homomorphic filter; c) the ROI; d) filtered image R(i,j) using anisotropic diffusion; e) contrast stretching applied to the image in d); f) noise removal.

B. Matched Filter

The matched filter is a spatial filter whose kernel is the template of the vessel cross section. The kernel can be rotated so that it follows the vessel orientation; it is convolved with the image to enhance the blood vessels. A suitable definition of the kernel is fundamental to obtain good performance. We used a kernel defined as follows:

x_θ = x·cos(θ) + y·sin(θ)
y_θ = -x·sin(θ) + y·cos(θ)

f1(x_θ, y_θ) = 1 - exp(-0.5·[(x_θ/σx)^2 + (y_θ/σy)^2]^esp)
f2(x_θ, y_θ) = 1 - exp(-0.5·[(x_θ/(k1·σx))^2 + (y_θ/(k2·σy))^2]^esp)
f(x_θ, y_θ) = f1(x_θ, y_θ) - f2(x_θ, y_θ)

Here θ is the rotation angle, σx and σy control the length along the main directions, esp determines the roundness of the inverted Gaussian, k1 and k2 control the lateral elongation along x and y, and the mean value of f is subtracted. We obtain 12 kernels by varying θ [2]. The best results have been obtained using a 15×15-pixel kernel and the following parameters: σx = 100, σy = 0.18, esp = 2, k1 = k2 = 3 (see fig. 2). As in [8], a contrast stretching is performed on the Matched Filter Response (MFR) in the range [x̄ - σ, x̄ + 2σ], where x̄ is the mean value of the dynamics and σ is the standard deviation.
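The rotate-and-take-maximum scheme can be sketched as follows. The kernel used here is a generic zero-mean Gaussian line template rather than the paper's exact inverted-Gaussian kernel, and the size, sigma and orientation count are illustrative assumptions:

```python
import numpy as np

def line_kernel(size=15, sigma=2.0, theta=0.0):
    """Zero-mean template matched to a dark line (vessel) at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)   # distance across the line
    k = -np.exp(-x_theta ** 2 / (2 * sigma ** 2))     # dark vessel, bright background
    return k - k.mean()                               # zero response on flat regions

def correlate(image, kernel):
    """Plain 2-D correlation with edge padding (the kernel is symmetric)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.zeros(image.shape)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + image.shape[0], j:j + image.shape[1]]
    return out

def max_response(image, n_angles=12):
    """Correlate with kernels at n_angles orientations; keep the per-pixel maximum."""
    responses = [correlate(image, line_kernel(theta=np.pi * a / n_angles))
                 for a in range(n_angles)]
    return np.max(responses, axis=0)
```

A dark vertical vessel on a bright background then produces a large positive response along its centerline and a near-zero response on the flat background, which is what makes a global threshold feasible afterwards.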
Fig. 2 Top: the kernel used for the matched filter. Bottom: the matched filter result (left) and after contrast stretching (right).

C. Threshold operator

A sliding threshold operator has been applied to the MFR image to obtain a binary image, which allows detecting blood vessels as connected components. The threshold value has been tuned by comparing our result with the corresponding segmented image in the dataset. A ROC (Receiver Operating Characteristic) curve [7] has been drawn, that is, a Sensitivity vs. (1-Specificity) diagram obtained while moving the threshold. These quantities are defined as follows:

Sens = TP / (TP + FN);   Spec = TN / (FP + TN)

True positives (TP) are the recognized vessels, true negatives (TN) are non-vessel objects, and false positives (FP) are binary objects erroneously considered as vessels. Finally, we considered the background component in both images as the false negatives (FN). The threshold value is selected in correspondence of the point of the ROC curve closest to the ideal corner (sensitivity = 1, 1-specificity = 0) (see fig. 4).

D. Small objects removal

The length filter cleans the image obtained by applying the threshold operator, deleting small and isolated objects. Each object is labeled using the 8-connected component criterion and its area is measured; objects whose area is less than a threshold are cancelled. We fixed this value to 150 pixels.

Fig. 3 Binary image obtained using the threshold (left), and after small object removal (right).

III. PERFORMANCE EVALUATION

The efficiency of the method can be measured by the area under the ROC curve [7]. A ROC curve has been obtained for each test image in the DRIVE database by varying the threshold value. The mean sensitivity-specificity values have been used to draw the ROC curve in fig. 4. The area under the ROC curve with our method is 0.953. Table 1 shows a comparison between different methods in the literature (see [9] for a review) and our approach.

Fig. 4 ROC curve, mean(sensitivity) vs. 1-mean(specificity), computed on the test images of DRIVE. The red circle indicates the best cut-off point used to select the threshold value.

Tab. 1 Comparison of detection methods (Az = area under the ROC curve)

Detection method                                     Az
Matched filter; Chaudhuri et al. [14]                0.91
Adaptive local thresholding; Jiang et al. [3]        0.93
Ridge-based segmentation; Staal et al. [4]           0.95
Single-scale Gabor filters; Rangayyan et al. [5]     0.95
Our method                                           0.953
Multiscale Gabor filters; Oloumi et al. [9]          0.96
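The threshold tuning described above (sensitivity and specificity at each threshold, then the cut-off closest to the ideal corner of the ROC plane) can be sketched as follows; the function and variable names are illustrative:

```python
import numpy as np

def sens_spec(pred, truth):
    """Sens = TP/(TP+FN), Spec = TN/(FP+TN) for boolean vessel masks."""
    tp = np.sum(pred & truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (fp + tn)

def best_threshold(response, truth, thresholds):
    """Return the threshold whose (1-Spec, Sens) point is closest to (0, 1)."""
    best_t, best_d = None, np.inf
    for t in thresholds:
        sens, spec = sens_spec(response >= t, truth)
        d = np.hypot(1.0 - spec, 1.0 - sens)   # distance to the ideal corner
        if d < best_d:
            best_t, best_d = t, d
    return best_t
```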
IV. FEATURE POINTS DETECTION

End-points, bifurcations, and crossover points are detected as in [8]. Briefly, the morphological skeleton filter is applied to the binary image of the vascular tree, and dedicated kernels are convolved with the resulting image to detect the feature points (see fig. 5 and fig. 6).

V. CONCLUSIONS AND FUTURE WORK
A method for vessels extraction on retinal images has been presented. The performance is comparable with the most recent methods presented in literature, as shown in tab.1. The pre-processing plays a fundamental role so that better artifacts removal techniques and local filters must be developed to increase the contrast between vessels and background. In this way the Matched filter will be able to enhance the vessels in a better way.
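The feature point detection of Section IV relies on dedicated kernels convolved with the skeleton [8]; an equivalent simplified sketch classifies each skeleton pixel by its number of 8-connected neighbours (one neighbour marks an end-point, three or more mark a bifurcation/crossover candidate). The implementation below is an illustrative stand-in, not the paper's kernels:

```python
import numpy as np

def feature_points(skeleton):
    """Classify 1-pixel-wide skeleton pixels by their 8-neighbour count."""
    sk = skeleton.astype(np.uint8)
    padded = np.pad(sk, 1)
    # Sum the eight shifted copies of the skeleton to count neighbours.
    neighbours = sum(
        padded[1 + dy:1 + dy + sk.shape[0], 1 + dx:1 + dx + sk.shape[1]]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    end_points = (sk == 1) & (neighbours == 1)
    branch_points = (sk == 1) & (neighbours >= 3)
    return end_points, branch_points
```

On a T-shaped skeleton this marks the three free extremities as end-points and the junction as a branch point.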
Fig. 5 Some examples of vascular trees extracted by the algorithm.

Fig. 6 Feature points detected on the trees depicted in fig. 5.

REFERENCES
1. Patton N, Aslam T M, MacGillivray T, Deary I J, Dhillon B, Eikelboom R H, Yogesan K, Constable I J (2006) Retinal image analysis: Concepts, applications and potential. Progress in Retinal and Eye Research 25: 99-127
2. Chanwimaluang T, Fan G (2003) An efficient blood vessel detection algorithm for retinal images using local entropy thresholding. Proc. 2003 IEEE International Symposium on Circuits and Systems, Bangkok, Thailand, May 25-28, 2003
3. Jiang X, Mojon D (2003) Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(1): 131-137
4. Staal J, Abramoff M D, Niemeijer M, Viergever M A, van Ginneken B (2004) Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging 23(4): 501-509
5. Rangayyan R M, Oloumi Faraz, Oloumi Foad, Eshghzadeh-Zanjani P, Ayres F J (2007) Detection of blood vessels in the retina using Gabor filters. Proc. 20th Canadian Conference on Electrical and Computer Engineering (CCECE 2007), Vancouver, BC, Canada, 22-26 April 2007
6. DRIVE: Digital Retinal Images for Vessel Extraction, http://www.isi.uu.nl/Research/Databases/DRIVE/, accessed on October 5, 2006
7. Metz C E (1978) Basic principles of ROC analysis. Seminars in Nuclear Medicine VIII(4): 283-298
8. Ardizzone E, Pirrone R, Gambino O, Radosta S (2008) Blood vessels and feature points detection on retinal images. 30th Annual International IEEE EMBS Conference, Vancouver, British Columbia, Canada, August 20-24, 2008
9. Oloumi F, Rangayyan R M, Oloumi F, Eshghzadeh-Zanjani P, Ayres F J (2007) Detection of blood vessels in fundus images of the retina using Gabor wavelets. 29th Annual International Conference of the IEEE EMBS, Cité Internationale, Lyon, France, August 23-26, 2007
10. Li Q, Zhang L, Zhang D (2006) A new approach to automated retinal vessel segmentation using multiscale analysis. 18th International Conference on Pattern Recognition (ICPR'06), Volume 4, pp. 77-80
11. Grisan E, Pesce A, Giani A, Foracchia M, Ruggeri A (2004) A new tracking system for the robust extraction of retinal vessel structure. Proc. 26th Annual International Conference of IEEE-EMBS, pp. 1620-1623, IEEE, New York
12. Tobin K W, Chaum E, Govindasamy V P, Karnowski T P (2007) Detection of anatomic structures in human retinal imagery. IEEE Transactions on Medical Imaging 26(12)
13. Perona P, Malik J (1990) Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(7): 629-639
14. Chaudhuri S, Chatterjee S, Katz N, Nelson M, Goldbaum M (1989) Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Transactions on Medical Imaging 8: 263-269
IFMBE Proceedings Vol. 23
___________________________________________
Recent Developments in Optimizing Optical Tools for Air Bubble Detection in Medical Devices Used in Fluid Transport

S. Ravichandran(1), R. Shanthini(2), R.R. Nur Naadhirah(2), W. Yikai(2), J. Deviga(2), M. Prema(2) and L. Clinton(2)

(1) Faculty, Temasek Engineering School, Temasek Polytechnic, Singapore
(2) Student, Temasek Engineering School, Temasek Polytechnic, Singapore

Abstract — Modelling techniques for optimizing infrared bubble detection tools used in conjunction with a drug transport mechanism for the delivery of intravenous fluids in clinical practice have been evaluated qualitatively using simulation techniques. Sensors used in signal processing often have limitations when they must work in noisy environments. Noise in the working environment can come from various sources, such as the power line, high-frequency RF fields and background luminance. For certain optical sensors, the coupling scheme and interference from devices in close proximity to the sensors can also contribute significant noise. Qualitative studies on the effect of background luminance noise have provided a good understanding of the degree of susceptibility of the optoelectronic receiver to various luminance noises, which cause interference in the output of a tool developed for the detection of air bubbles in fluid pathways, called the "Optical Bubble Detection Tool". Experience gained from the earlier studies has provided the knowledge required for developing a design that is less susceptible to external disturbances. The dynamic excitation method for the Optical Bubble Detection Tool was introduced by modifying the electronic interface of the earlier tool; this was easily achieved by understanding the requirements of the pulsed current used to energize the transmitter. As the frequency of excitation and the intensity of the emitted infrared light are the factors that decide the resolution and sensitivity of the Optical Bubble Detection Tool, it was optimized for several operating environments under the influence of other external radiation in the visible and non-visible range.

Keywords — Bubble detection tool, Background luminance noise, Optical sensors
I. INTRODUCTION

The rate of infusion in intravenous fluid delivery depends on the selected drug, the dosage and the pathological condition of the subject receiving the infusion. It is important that any drug transported into the body through the venous circulation be free from air bubbles, as the accidental introduction of an air bubble into the vascular system can cause serious complications and is sometimes fatal [1].
Air bubble detection in a drug transport system is an important safety parameter, and there are a few ways of detecting air bubbles in the tubing of the drug transport system. Our studies focused on two popular tools, namely the "Ultrasound Bubble Detection Tool" and the "Optical Bubble Detection Tool". As the coupling factor between the transducer and the tubing is very critical in the Ultrasound Bubble Detection Tool, it demands specific tubing and interfaces for reliable detection. This coupling problem is, however, not a serious issue with the Optical Bubble Detection Tool designed for detecting air bubbles in the drug transport system. Studies on the development of the Optical Bubble Detection Tool were conducted with matched emitters and receivers working in the infrared band, and we optimized the receiver and the transmitter based on the optical band suitable for intravenous drug delivery applications [1]. Sensors used in signal processing often have limitations when they must work in noisy environments. Noise in the working environment can come from various sources, such as the power line, high-frequency RF fields and, in the case of certain optical sensors, background luminance, the coupling scheme and interference from electromechanical devices in close proximity to the sensors. The noise components associated with the electronic detector circuit and the system electronics have been discussed in detail by several authors [2, 3]; they are mostly Johnson noise, flicker noise and shot noise. Most of the problems related to electrical noise and RF interference can be overcome by using suitable signal conditioners and proper shielding techniques. However, noise caused by background luminance and other optical interference cannot easily be handled by simple electronic signal conditioning, which necessitates alternative techniques to overcome the limitations [1].
Based on our experimental study, we have modeled the "Optical Bubble Detection Tool" to improve its performance in real-time applications. We have carefully analyzed, in a simulation study, the factors critical to operating the "Optical Bubble Detection Tool" in a given infrared (IR) band, and have also assessed the performance of the tool under various conditions [4].
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 27–30, 2009 www.springerlink.com
S. Ravichandran, R. Shanthini, R.R. Nur Naadhirah, W. Yikai, J. Deviga, M. Prema and L. Clinton
II. MATERIALS AND METHODS

Before evaluating the modeling techniques for this system, it is important to have some idea of the system architecture and the various modules it contains to meet the requirements of the system as a whole. The architecture contains an optoelectronic module, a microcontroller module and an electromechanical module, integrated in such a way that the user can configure a model-specific application protocol for drug delivery. Each module is discussed in detail in this paper.

III. MODELING PROTOCOLS

The conventional excitation mode, incorporating static excitation, is the most common mode of excitation seen in earlier systems. A steady source of light in the infrared or near-infrared range is generated with the help of a constant current drawn from the common power supply. The steady beam of light, after passing through the fluid transport tubing, is translated into voltage variations reflecting the optical density of the medium transported. The advantages of the conventional excitation mode are that it is easy to tune the system to the range of excitation required for capturing the signals corresponding to a specific optical density, and it is convenient to optimize the wavelength of a given optical pathway for the detection of an air bubble in the fluid transport tubing [5]. Though this excitation mode can easily be configured for the detection of air bubbles, it has many disadvantages in real-time applications: it is usually more susceptible to the ambient noise present around the source and the receiver. The system therefore requires very reliable optical shielding around the optical pathway to prevent interference from external luminance, which is present in any clinical setup.
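In code, static excitation reduces to a fixed threshold on the receiver voltage. The sketch below is a minimal illustration, not the authors' circuit; the threshold and voltage values are assumptions loosely based on the logic levels reported later, and the last line shows how an additive ambient-light offset produces a false alarm:

```python
def detect_bubble_static(receiver_voltage, threshold=1.5):
    """Static excitation: an air bubble attenuates the IR beam less than fluid,
    so the received level (and hence the receiver voltage) rises above a fixed
    threshold when a bubble is present."""
    return receiver_voltage > threshold

# Clean, shielded conditions: fluid ~0.6 V, bubble ~2.7 V (illustrative levels).
print(detect_bubble_static(0.6))   # False: fluid only
print(detect_bubble_static(2.7))   # True: bubble present

# Unshielded sensor: ambient luminance adds a DC offset and defeats the threshold.
ambient_offset = 2.1
print(detect_bubble_static(0.6 + ambient_offset))  # True: false positive
```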
Qualitative studies on the effect of background luminance noise have provided a good understanding of the degree of susceptibility of the optoelectronic receiver to the various luminance noises that cause interference in the output of the "Optical Bubble Detection Tool". The performance of the "Optical Bubble Detection Tool" under various operating conditions related to the external environment and the flow pattern has shown no major deviation in the flow-related linearity of the system in the static excitation mode. However, in this mode the "Optical Bubble Detection Tool" was found to be highly susceptible to the ambient noise created by the surrounding background luminance. A simplified scheme of the system is shown in Fig. 1.
Fig. 1 Scheme for modelling “Optical Bubble Detection Tool” designed for Drug Transport System.
Studies conducted on the performance of the optoelectronic interface under various conditions have provided valuable data on the static excitation technique.

Table 1 Results for Static Excitation Mode (output in the presence of ambient noise)

Test Condition             Air Bubble Present   Air Bubble Absent
Heavily Shielded Sensor    Logic 1 (2.7 V)      Logic 0 (<0.6 V)
Unshielded Sensor          Logic 1 (2.7 V)      Logic 1 (2.7 V)
Recent Developments in Optimizing Optical Tools for Air Bubble Detection in Medical Devices Used in Fluid Transport

The limitations of the "static excitation" method provided the basis for implementing "dynamic excitation" methods in the "Optical Bubble Detection Tool". This was further improved by the modeling techniques investigated during the development of the tool for the drug transport system.

A. Dynamic Excitation Mode

In general, parameters such as the optical wavelength, the modulation frequency and the receiver's capture threshold decide the overall efficiency of the bubble detection system. The dynamic excitation mode can be achieved in several ways; the most efficient is to customize these parameters for the specific task of detecting bubbles in the presence of the external interference common to the given clinical setup. The dynamic excitation method for the "Optical Bubble Detection Tool" was first introduced by modifying the electronic interface of the existing tool, which was easily achieved by providing a pulsed current to energize the transmitter. As the excitation frequency and the intensity of the emitted infrared light decide the resolution and sensitivity of the "Optical Bubble Detection Tool", they must be optimized for an operating environment under the influence of other external radiation in the visible and non-visible ranges. The microcontroller module, the heart of the central controller, is fully supported by essential interfaces such as the system console and LCD display for setting the rate of infusion of the intravenous fluid and for selecting an appropriate model for the "Optical Bubble Detection Tool" [5].

Table 2 Results for Dynamic Excitation Mode (output in the presence of ambient noise)

Test Condition             Air Bubble Present   Air Bubble Absent
Heavily Shielded Sensor    Logic 1 (2.7 V)      Logic 0 (<0.6 V)
Unshielded Sensor          Logic 1 (2.7 V)      Logic 0 (<0.6 V)
It can be seen that the results obtained using the dynamic excitation mode are more promising in the absence of an air bubble, especially where the output must be realized in the presence of ambient noise with optical sensors that are not fully shielded.

IV. OPERATING PRINCIPLES

Flow of fluids in the drug transport mechanism for therapeutic applications is assisted by the electromechanical module, which provides controlled peristalsis of the intravenous fluid through the transport tubing in a pre-programmed fashion, set up through the system console. The system console is user friendly and helps the user set the parameters for treatment. The "Optical Bubble Detection Tool", interfaced to the central controller, sits at the entry point of the fluid transport mechanism and monitors for any air bubbles in the contents transported through the tubing.

A. Microcontroller Module

The basic program flow chart of the microcontroller architecture is shown in Fig. 2.
Fig. 2 Flowchart of the microcontroller module.

The intravenous fluid delivery mechanism is precisely controlled by a microcontroller with interfaces to the optical bubble detection system, the system console for setting parameters, the LCD display for displaying them, and the electromechanical module containing the linear peristalsis mechanism that regulates the fluid flow rate. The microcontroller ports interfacing the system console allow the user to enter data on flow volume and flow rate, and to activate the "keep vein open function" (KVOF) if required.

B. System Operations

After running the initialization routine, the microcontroller displays a welcome message on the LCD display and prompts the user to set the parameters for Flow Rate and Flow Volume. These parameters are entered through the system console.
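The control flow described above can be sketched as a small state machine; the state names and transition logic below are our own simplification for illustration, not taken from the actual firmware:

```python
from enum import Enum, auto

class State(Enum):
    INIT = auto()            # power-up, show welcome message
    SET_PARAMETERS = auto()  # user enters Flow Rate / Flow Volume on the console
    INFUSING = auto()        # peristalsis running at the programmed rate
    BUBBLE_ALARM = auto()    # bubble detected: halt and alert before infusion
    KVO = auto()             # "keep vein open" trickle flow after delivery
    DONE = auto()

def next_state(state, bubble_detected=False, volume_delivered=False, kvo_enabled=False):
    """One step of the controller loop sketched in Fig. 2 (simplified)."""
    if state is State.INIT:
        return State.SET_PARAMETERS
    if state is State.INFUSING and bubble_detected:
        return State.BUBBLE_ALARM
    if state is State.INFUSING and volume_delivered:
        return State.KVO if kvo_enabled else State.DONE
    if state is State.SET_PARAMETERS:
        return State.INFUSING
    return state

print(next_state(State.INFUSING, bubble_detected=True))  # State.BUBBLE_ALARM
```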
Once the infusion process is activated, it continues until the set amount of fluid has been delivered at the pre-programmed rate. At the end of fluid delivery it is possible to activate the KVOF, which ensures that the fluid pathway inside the intravenous needle does not become occluded by venous blood flow. If an air bubble should accidentally be present in the infused intravenous fluid, the "Optical Bubble Detection System" will detect the bubble and activate an alarm so that the user can remove it before it is infused into the venous system.

V. PRELIMINARY STUDIES

A. Reliability studies of flow rate and volume

The Linear Peristalsis Driver has been tested in real time. We tested the reliability of the device, after setting a given Flow Rate and Flow Volume, by running the system with a standard infusion package consisting of the infusion bag and intravenous tubing containing the drip chamber. By validating the settings over repeated trials at various flow rates and flow volumes, the reliability of the system was established under conditions existing in a clinical setup.

B. Reliability studies on bubble detection

The sensitivity of the "Optical Bubble Detection Tool" was also checked under various conditions to record the immunity offered by the bubble detection circuitry in the presence of several luminance backgrounds. The luminance backgrounds were simulated and the output of the "Optical Bubble Detection Tool" recorded for qualitative assessment.

VI. RESULTS

In our studies we modeled the "Optical Bubble Detection Tool" with the help of the central controller for the dynamic excitation mode, and we found pulsed frequencies from 2 kHz to 10 kHz suitable for designing an application-specific model for a given working environment. It was also possible to optimize the intensity of excitation based on the selected model working under a given situation.

VII. CONCLUSION

Studies carried out with the "Optical Bubble Detection Tool" in clinical practice have clearly indicated that the dynamic excitation mode, modeled with the help of the central controller, is more promising in providing reliable data in a noisy clinical environment. The electromechanical module of the system was found reliable in providing linear peristalsis of fluid over long durations during the preliminary studies.

REFERENCES

1. Muhammad Bukhari Bin Amiruddin, S. Ravichandran, Teo Xu Lian Eunice, Low Wei Song Klement, Tho Wee Liang, Lim Yu Sheng Edward and Oon Siew Kuan, "Studies on modelling techniques for optimizing optical bubble detection tools (OBDT) in drug delivery", Proceedings of the 12th International Conference on Biomedical Engineering (ICBME), December 2005, Singapore.
2. Watts, R., "Infrared Technology Fundamentals and System Applications: Short Course", Environmental Research Institute of Michigan, June 1990.
3. Holst, G., "Electro-Optical Imaging System Performance", Orlando, FL: JCD Publishing, 1995, p. 146.
4. Ronald G. Driggers, Paul Cox, Timothy Edwards (1999), "Introduction to Infrared and Electro-Optical Systems", Artech House, Norwood, MA, pp. 132-134.
5. S. Ravichandran, R. Shanthini, R.R. Nur Naadhirah, W. Yikai, J. Deviga, M. Prema and L. Clinton, "Modelling and Control of Intravenous Drug Delivery System Aided by Optical Bubble Detection Tools", i-CREATe 2008, 13-15 May 2008, Bangkok, Thailand.

Author: S. Ravichandran
Institute: Temasek Polytechnic
Street: 21 Tampines Ave 1
City: Singapore 529757
Country: Singapore
Emails: [email protected], [email protected]
General Purpose Adaptive Biosignal Acquisition System Combining FPGA and FPAA

Pedro Antonio Mou, Chang Hao Chen, Sio Hang Pun, Peng Un Mak and Mang I. Vai

Department of Electrical and Electronics Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China

Abstract — This paper introduces a general purpose intelligent adaptive biosignal acquisition system using both a Field Programmable Gate Array (FPGA) and a Field Programmable Analog Array (FPAA). The designed system inherits the powerful properties of these devices to provide a stable and adaptive platform for processing various biosignals such as ECG, EMG and EEG. The implementation addresses the complexity of biosignal sample acquisition in home healthcare and the stability needed for long-term monitoring. It can be dynamically reconfigured to acquire different biosignals without changing any hardware, and it provides a simplified testing platform into which different algorithms for processing real-time biosignals can be embedded. In addition, an RS232 serial data port (or USB) connects to a personal computer to ease and speed up the collection of biosignal samples or test results for further analysis. Alternatively, the system can output acquired and/or processed biosignals to a PDA/PC in real time for visual inspection of results.

Keywords — adaptive, biosignal, acquisition, FPAA, FPGA
I. INTRODUCTION

According to the World Health Organization (WHO), the percentage of death and disability caused by chronic diseases (including cardiovascular diseases) will soar from 43% in 2002 to 76% in 2020 [1]. WHO and governments around the world foresee this great demand for medical services and recognize the importance of reducing the rapidly growing pressure on limited medical resources. The growing concern of people for their own health also adds to the pressure on medical resources and to the demand for home healthcare systems. Home healthcare and telemedicine are effective ways to help reduce the pressure on public medical resources by distributing health care and monitoring to patients' homes or other remote locations. Cost in money and time can be kept low (it can be very high in rural areas or developing countries) while providing more convenience to patients in reading their own health status. Such systems also make it feasible to log long-term health status, which is rare but useful in today's medical system for the treatment and prevention of diseases, including chronic diseases.
The emergence and rapid development of home healthcare systems make affordable and portable home healthcare devices available to patients. On the other hand, existing or proposed systems are usually signal-dependent (for example, ECG only), as in [2], [3]. One important reason, and the difficulty for these systems in extending their usage to a wider range of biosignals, is the fundamental difference between the various common biosignals: different biosignals require frontends with different characteristics (amplification, bandwidth, etc.) for acquisition, which is an essential step before any processing or analysis. This difference restricts the possibility of a single general purpose system that is efficient and capable of acquiring different biosignals while offering higher flexibility, lower cost and smaller size for non-clinical medical usage. In this paper, a novel concept is proposed: a general purpose adaptive biosignal acquisition system combining a Field Programmable Gate Array (FPGA) and a Field Programmable Analogue Array (FPAA) to act as an intelligent, re-configurable general frontend for various biosignals including electrocardiogram (ECG), electromyography (EMG) and electroencephalography (EEG). This adaptive frontend provides a simplified platform for integrating different algorithms for processing different biosignals in real time in home healthcare.

II. SYSTEM ARCHITECTURE

In designing the acquisition system, the characteristics and inherent properties of several hardware options were studied and analyzed. To support a re-configurable environment and the opportunity to convert to an SOC in the future, an FPGA was chosen for the digital part and an FPAA for the analog part, with an ADC between them.

A.
Field Programmable Gate Array (FPGA)

A Field Programmable Gate Array, or FPGA, is a semiconductor device with programmable, re-configurable logic blocks and interconnects that can build up digital components such as basic logic gates (AND, OR, etc.) and more complex functions such as decoders or other mathematical blocks. Because of this freely re-configurable characteristic, it can be programmed to perform different functions in parallel and independently, using different sections of one FPGA. For example, the control of biosignal acquisition can work independently of an ECG QRS detection algorithm and of sending acquired data to a host computer, without any of them affecting each other. In our system, the FPGA acts as the main controller managing the different components and the I/O between them.

The work presented in this paper is supported by The Science and Technology Development Fund of Macau under grant 014/2007/A1 and by the Research Committee of the University of Macau under Grants RG051/05-06S/VMI/FST, RG061/06-07S/VMI/FST, and RG075/07-08S/VMI/FST.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 31–34, 2009 www.springerlink.com
B. Field Programmable Analogue Array (FPAA)

A Field Programmable Analogue Array, or FPAA, is the analog counterpart of the FPGA: it contains configurable analog blocks instead of logic blocks, which can be programmed to implement analog signal processing functions such as amplification, differentiation, integration, subtraction, addition, multiplication, log and exponential, and can be configured to build up operational amplifiers and filters. The feasibility and performance of processing biosignals with an FPAA were studied and tested in [4] with acceptable results. In the proposed system, the FPAA acts as a general frontend dynamically programmed by the FPGA.

C. Biosignal Acquisition

The characteristics of different bioelectric signals vary widely [5]. As illustrated in Fig. 1, the frequency range of the electrocardiogram (ECG) is 0.05-100 Hz with a dynamic range of 1-10 mV. For surface electromyography (surface EMG), the frequency range is 2-500 Hz with a 50 µV-5 mV dynamic range, while for the electroencephalogram (EEG) the dynamic range is 2 µV-0.1 mV although the frequency range, 0.5-100 Hz, is similar to that of the other signals. Comparing only these three commonly discussed biosignals, the great variance in dynamic range among different biosignals poses the design challenge for a general frontend for multiple-biosignal acquisition.

Fig. 1 – Signal Amplitude (V) against Frequency (Hz) of different biosignals: ECG, EMG and EEG

D. General Purpose Adaptive Biosignal Acquisition System

To build a general frontend for the acquisition of multiple biosignals, the FPGA and FPAA are combined in the proposed system.
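The amplitude figures quoted for ECG, EMG and EEG imply very different frontend gains. Assuming a 1 V ADC full-scale input (our illustrative assumption, not a figure from the paper), the required gain per signal can be estimated:

```python
# Full-scale amplitudes from the ranges quoted in the text.
SIGNAL_MAX_AMPLITUDE_V = {
    "ECG": 10e-3,    # 1-10 mV
    "EMG": 5e-3,     # 50 uV - 5 mV (surface EMG)
    "EEG": 100e-6,   # 2 uV - 0.1 mV
}

ADC_FULL_SCALE_V = 1.0  # assumed ADC input range

def required_gain(signal):
    """Gain that maps the signal's full-scale amplitude onto the ADC range."""
    return ADC_FULL_SCALE_V / SIGNAL_MAX_AMPLITUDE_V[signal]

for sig in SIGNAL_MAX_AMPLITUDE_V:
    print(f"{sig}: gain ~{required_gain(sig):,.0f}x")
# ECG: gain ~100x, EMG: gain ~200x, EEG: gain ~10,000x
```

The two-orders-of-magnitude spread between ECG and EEG gains is exactly why a fixed-gain frontend cannot serve all three signals and a re-configurable FPAA stage is attractive.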
_________________________________________
Fig. 2 – Architecture overview of the acquisition system

In this adaptive acquisition system, illustrated in Fig. 2, the biosignal input is amplified and filtered by the FPAA, digitized by the Analog-to-Digital Converter (ADC) and fed into the FPGA. The FPAA is under the control of the FPGA, and its inherent re-configurability allows its amplification and filtering to be dynamically re-configured through a serial interface. Appropriately selected parameters for the amplifiers and filters are programmed into the FPAA in order to retrieve a less tainted biosignal from the noisy human body. This lets the acquisition frontend adapt different biosignals to a common, acceptable input range of the ADC before they are processed digitally inside the FPGA; in this way, signals are always within the operating input range of the ADC. Acquired signal samples can be transferred to the PDA/PC for post-processing and real-time display, while the FPGA provides a platform for embedding different signal processing algorithms to manipulate the
acquired data. It allows single or multiple algorithms to be embedded for processing biosignals in parallel and independently. A local display is used as an indicator for verifying the results of the processed biosignals.

III. IMPLEMENTATION

In building the adaptive acquisition system, we used a DE2 FPGA evaluation board from Altera, an AN231 FPAA board from Anadigm and a 24-bit ADC, the AD7764, from Analog Devices. Standard VHDL was chosen as the programming language for the initial framework of the whole system because it leaves open the opportunity of migrating the whole system to a System-On-Chip (SOC) in the future. The 24-bit ADC is employed mainly to improve the dynamic range of the system so that it can cope with unexpected inputs; it also has an adjustable sampling rate for controlling power consumption (slower is lower). As this system mainly targets non-clinical medical devices for home healthcare or portable devices, a low-cost FPGA and FPAA combination with VHDL was chosen. This combination provides the system with numerous advantages including, but not limited to, higher flexibility, lower power consumption and smaller size. In implementing the re-configurable frontend, the FPAA configuration data for acquiring different biosignals are stored inside the FPGA's non-volatile memory, preventing data loss when power is lost. The configuration code is loaded into the FPAA according to the input biosignal, allowing the user to easily switch the frontend between an ECG, an EMG or even an EEG configuration in real time. Once the FPAA is configured correctly, the FPGA can start acquiring signal samples through the ADC. Through this dynamically programmable frontend and the freely programmable property of the FPGA, the area inside the FPGA can be designed as two separate sections.
The first part is for configuring the FPAA and acquiring samples from the ADC, while the other part is reserved for embedding different signal processing algorithms, acting as the backend system for analyzing real-time data, which miniaturizes the system. Transmission of acquired samples to a personal computer (PC) over RS232 (or USB in the future) is considered in the design for extended usage requiring higher computational power for complicated algorithms that cannot easily be migrated to the FPGA.

IV. RESULTS

A general purpose adaptive biosignal acquisition system has been built for evaluation in this paper. For each biosignal (ECG, EMG, EEG), a separate configuration is designed to be programmed into the adaptive FPAA frontend and is stored inside the FPGA. The 24-bit ADC is configured to work at a sampling rate of around 507 Hz, which is limited by the combination of the DE2 onboard 27 MHz clock generator and the decimation rate of the AD7764. Although this is not the preferred sampling rate, it is enough for our experiment on verifying the system functionality. In the experiment, a PS-420 ECG patient simulator from Metron is used instead of a human subject to provide a raw ECG signal. An EEGSIM EEG simulator from Glass Technologies has also been used to provide a raw EEG signal, while the EMG signal is acquired from a human subject. A simple PC program was written to receive and display the acquired samples from the FPGA through RS232 in real time, as shown in Fig. 3, for visual inspection. A simple algorithm [6] was also tuned and migrated into the FPGA in standard VHDL to verify the real-time signal processing capability with embedded algorithms. This algorithm performs QRS detection and calculates the beat rate of the ECG signal. The local display on the FPGA evaluation board shows the beat rate in beats per minute (BPM). Algorithms for processing EMG and EEG are not implemented; the acquisition of these signals is verified only by the real-time output of the acquired data. From Fig. 3, an ECG signal can be acquired using the system with clear characteristics, and the embedded algorithm successfully retrieves the heart beat rate. Comparing the acquired EEG signals, the signal acquired using our system (Fig. 5) is similar to the signal from the EEG simulator captured using an ADI PowerLab (Fig. 4). Evaluative testing of the system shows that it is possible to achieve an adaptive frontend with acceptable performance using existing devices.
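The configuration-switching flow described in the implementation can be mocked up as below. The byte values and function names are invented purely for illustration and do not correspond to real AN231 bitstreams or the authors' VHDL:

```python
# Hypothetical per-signal FPAA bitstreams, as they might be held in storage
# and streamed to the FPAA over a serial interface on a user selection.
FPAA_CONFIGS = {
    "ECG": bytes([0x01, 0x64]),  # pretend encoding of gain/filter settings
    "EMG": bytes([0x02, 0xC8]),
    "EEG": bytes([0x03, 0xFF]),
}

def configure_frontend(signal_type, send_byte):
    """Stream the stored configuration for `signal_type` byte by byte."""
    config = FPAA_CONFIGS[signal_type]
    for b in config:
        send_byte(b)  # in hardware: shift the byte out on the serial link
    return len(config)

# Capture the "serial link" traffic in a list for demonstration.
sent = []
n = configure_frontend("ECG", sent.append)
print(n, sent)  # 2 [1, 100]
```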
The acquired signal samples are similar to the input signal, with an amplification that makes post-processing possible, even for microvolt-level signals. The combination of FPGA and FPAA also proved able to cooperate with embedded algorithms to retrieve useful information from the input biosignals, for example showing the correct beat rate of the input ECG. The final FPGA resource usage is around 10% of the "Logic Elements" for the system, not counting the embedded algorithms in the post-processing part. The addition of a simple QRS detection algorithm (migrated to the FPGA using standard VHDL) consumed a total of over 50% of the "Logic Elements". This is considered a weakness, because it means that signal processing algorithms need effort to be tuned and optimized when migrated to the FPGA using standard VHDL.
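The beat-rate computation can be illustrated with a drastically simplified R-peak counter. This is only a stand-in for the full Pan-Tompkins algorithm of [6]; the threshold and refractory period are assumed values:

```python
def beats_per_minute(ecg, fs, threshold=0.6, refractory_s=0.25):
    """Count R-peaks by threshold crossing with a refractory period, then
    convert the mean R-R interval to beats per minute. A much-simplified
    stand-in for the QRS detector of [6]."""
    peaks = []
    last_peak = -10**9  # far in the past so the first crossing counts
    for i, v in enumerate(ecg):
        if v > threshold and (i - last_peak) > refractory_s * fs:
            peaks.append(i)
            last_peak = i
    if len(peaks) < 2:
        return 0.0
    mean_rr = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / fs  # seconds per beat
    return 60.0 / mean_rr

# Synthetic ECG: one 1.0 V spike per second at 500 Hz sampling -> 60 BPM.
fs = 500
ecg = [1.0 if i % fs == 0 else 0.0 for i in range(5 * fs)]
print(beats_per_minute(ecg, fs))  # 60.0
```

A real implementation adds band-pass filtering, differentiation, squaring and adaptive thresholds; those stages are what make the hardware migration expensive in logic elements.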
Fig. 3 – Simple program to receive and display the acquired samples in real-time for quick visual inspection (ECG)

Fig. 4 – EEG signal from EEGSIM captured using ADI PowerLab

Fig. 5 – Simple program to receive and display the acquired samples in real-time for quick visual inspection (EEG)

V. CONCLUSIONS

A general purpose adaptive biosignal acquisition system is proposed and built for evaluation. The results of the evaluative testing are presented as a proof of concept: existing devices such as an FPGA and an FPAA can be combined into a simple, single, low-cost platform for evaluating biosignal processing algorithms embedded in the FPGA against multiple real-time biosignals. This platform can also serve as the essential part of non-clinical medical devices for home healthcare, or of portable devices with inherently low power consumption and small size, such as a long-term ECG monitoring device. In designing the system, transformation from the existing hardware to an SOC was also considered, and standard VHDL was therefore chosen to provide maximum flexibility in the future. On the other hand, the difficulty and effort of migrating algorithms into the system for post-processing still need to be analyzed, and automatic FPAA configuration selection is considered a useful feature to add in the current status.

ACKNOWLEDGMENT

I would like to express my gratitude to all those who gave me the possibility to complete this paper. I want to specially thank my supervisors, the Department of Electrical and Electronics Engineering of the University of Macau, and The Science and Technology Development Fund of Macau.

REFERENCES

1. World Health Organization (WHO) at http://www.who.int/topics/chronic_diseases/en/
2. Ying-Chien Wei, Yu-Hao Lee, Ming-Shing Young (2008) A Portable ECG Signal Monitor and Analyzer, ICBBE 2008, The 2nd International Conference on Bioinformatics and Biomedical Engineering, pp 1336-1338
3. Borromeo S., Rodriguez-Sanchez C., Machado F., Hernandez-Tamames J.A., de la Prieta R. (2007) A Reconfigurable, Wearable, Wireless ECG System, EMBS 2007, 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp 1659-1662
4. Chan U.F., Chan W.W., Sio Hang Pun, Mang I Vai, Peng Un Mak (2007) Flexible Implementation of Front-End Bioelectric Signal Amplifier using FPAA for Telemedicine System, EMBS 2007, 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp 3721-3724
5. Joseph D. Bronzino (1995) The Biomedical Engineering Handbook. CRC Press, IEEE Press, p 808
6. Pan, Jiapu, Tompkins, Willis J. (1985) A Real-Time QRS Detection Algorithm, IEEE Transactions on Biomedical Engineering, BME-32(3), pp 230-236

Author: Pedro Antonio MOU
Institute: Department of Electrical and Electronics Engineering, University of Macau, Macau SAR, China
Street: University of Macau, Av. Padre Tomas Pereira, Taipa, Macau SAR, China
City: Macau SAR
Country: China
Email: [email protected]
Segmentation of Brain MRI and Comparison Using Different Approaches of 2D Seed Growing

K.J. Shanthi1, M. Sasi Kumar2 and C. Kesavdas3

1 SCT College of Engg., Department of Electronics & Comm. Engg., Assistant Prof., Trivandrum, India
2 Marian College of Engg., Department of Electronics & Comm. Engg., Prof., Trivandrum, India
3 SCT Institute of Medical Sciences & Tech., Radiology Department, Associate Prof., Trivandrum, India

Abstract — Automatic segmentation of the human brain from MRI scan slices without human intervention is the objective of this paper. DICOM images are used for segmentation. Segmentation here is the process of extracting the white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) from the MRI slices. Volumetric calculations are carried out on the segmented cortical tissues; the accuracy of the computed volumes depends on the correctness of the segmentation algorithm. Two different methods of seed growing are proposed in this paper.

Keywords — Skull stripping, Segmentation, Seed growing, White matter, Gray matter.
I. INTRODUCTION

Segmentation has a wide range of applications in image processing, and in medical imaging in particular. Imaging modalities such as CT and MRI, together with image processing, have revolutionized diagnosis and treatment. In brain MRI, segmentation helps determine the volume of different brain tissues such as white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). Volumetric changes in these brain tissues help in the study of neural disorders such as multiple sclerosis, Alzheimer's disease and epilepsy. Brain MRI segmentation also helps in the detection of tumours [4], and many research papers have been published in this area. Automatic segmentation also helps in guiding neurosurgery. We present two different methods for segmentation based on region growing and compare them in this paper. The remainder of this paper is organized into the following sections: 2. Segmentation and related work in the area; 3. Overview; 4. Method 1, region growing based on seed value and connectivity; 5. Method 2, region growing based only on seed value; 6. Experimental results; 7. Comparison and conclusion.
II. SEGMENTATION

Segmentation, or classification of data, is defined as extracting homogeneous data from a wider set of data: pixels with similar intensity or texture belong to the same group, and classification is based on looking for such similar properties in an image. Numerous segmentation techniques are applied to medical imaging. Region growing is an important technique for segmentation, and one of the first region growing techniques is seed growing. Seed growing begins by choosing some pixels as seed points. The input seed points can be chosen on the basis of a particular threshold value; spatial information can also be specified along with the threshold gray value when choosing a seed pixel. A region of interest (ROI) can then be grown from the seed and extracted. The result of the segmentation depends on the correctness of the chosen seed values. Segmentation techniques based on seed growing can be fully automatic, or semi-automatic and in need of intervention; in some cases only the result needs to be confirmed by the operator. Computer vision saves much of this time, since manual segmentation of a single image can take hours.

A lot of work has been done in the area of MR image segmentation. [2] uses edge-based techniques combined with spatial intensity. Many image models are used for classification, such as the hidden Markov random field model [5]. Much work has been published using classification algorithms based on fuzzy methods such as fuzzy K-means and FCM [7,8,9,10]; fuzziness in classification helps classify data better than hard clustering does. Neural network and neuro-fuzzy algorithms have also been applied [11]. [3] uses the geometric snake model to extract the boundary and combines it with fuzzy clustering. The latest development in MR image segmentation [13] propagates a charged-fluid model through the contours of the imaging data; this method requires no prior knowledge of the anatomic structure for the segmentation.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 35–38, 2009 www.springerlink.com
K.J. Shanthi, M. Sasi Kumar and C. Kesavdas
III. OVERVIEW

Fig. 1 shows the different stages of the system we developed; the overview applies to both methods.
Fig. 1 System Overview

A. Skull Stripping

Volumetric analysis of the brain requires separating the cortical tissues from the non-cortical tissues; removing the non-cortical tissues is termed skull stripping. Surrounding extra-cortical tissues such as fat, skin and the eyeballs are removed and separated from the brain tissues, so skull stripping classifies the image into two classes, brain and non-brain tissue. It forms the first processing step in the segmentation of the brain, and the skull-stripped MR images are used for the further classification of the brain tissues into White Matter, Gray Matter and Cerebrospinal Fluid. T1-weighted axial MR images have a distinct dark ring surrounding the brain tissues; we exploited this spatial property to perform skull stripping. The result is shown in Fig. 2: Fig. 2(a) is the original MR image and Fig. 2(b) the skull-stripped image.

Fig. 2(a) Original image   Fig. 2(b) Skull Stripped

B. Preprocessing

The MR images show variations and noise due to the inhomogeneity of the RF coils. Various filter transformations developed to enhance MR brain images are discussed in [1]. We used a median filter with a 3 × 3 mask, which retains edge information while avoiding blurring.

C. Segmentation of Brain Tissues

After skull stripping and filtering, the next step is to segment the brain into its constituent tissues: White Matter (WM), Gray Matter (GM) and Cerebrospinal Fluid (CSF). We developed two different methods based on two-dimensional seed growing; both are fully automatic.

IV. METHOD 1: BASED ON SEED VALUE AND CONNECTIVITY

We used a priori information from the DICOM images regarding the intensity values of the different tissues. In the skull-stripped image, the maximum intensities correspond to WM pixels, GM pixels have intensities somewhat lower than WM, and CSF pixels have the lowest intensities. These histogram values determine the thresholds used for selecting the seed values; this holds in both methods.

A. Algorithm for Segmenting White Matter

The algorithm chooses the seed pixels automatically rather than manually. We computed the regional maxima from the histogram of the skull-stripped image to obtain the seed value for WM. Pixels within a small deviation of this seed value were grown through consecutive iterations; the number of iterations depends on the size of the image, and the iterations must be exhaustive so that all pixels are checked. To make the algorithm more stable we checked connectivity along with the seed value: four-neighbourhood connectivity ensures that the region is well connected, and the test can be extended to eight-point connectivity. A mask image of the WM was grown through iterations satisfying both input criteria, seed value within a minimum deviation and four-connectivity, and the white matter was then extracted from the skull-stripped image using the WM mask and image arithmetic.
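As an illustration, growth under the combined seed-value and four-connectivity criteria can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the authors' MATLAB implementation; the seed coordinate, tolerance and toy image are illustrative assumptions:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a 4-connected region from `seed`, accepting pixels whose
    intensity lies within `tol` of the seed pixel's intensity."""
    target = int(img[seed])
    candidates = np.abs(img.astype(int) - target) <= tol
    mask = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if mask[r, c] or not candidates[r, c]:
            continue
        mask[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbourhood
            rr, cc = r + dr, c + dc
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]:
                queue.append((rr, cc))
    return mask
```

Extending the offset tuple with the four diagonal neighbours would give the eight-point connectivity mentioned in the text.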
IFMBE Proceedings Vol. 23
Segmentation of Brain MRI and Comparison Using Different Approaches of 2D Seed Growing
B. Segmenting Gray Matter

Study of the DICOM images shows that pixel values at a small offset below the white matter intensities represent the gray matter. The algorithm for segmenting the gray matter is the same as in the previous step except for the seed pixel value; following the same procedure, we segment the gray matter.

C. Segmenting Cerebrospinal Fluid

After removing the white matter and the gray matter from the skull-stripped image, we are left with the third constituent of the cortical tissue, the cerebrospinal fluid. It is extracted with simple image arithmetic from the skull-stripped image after WM and GM removal. The tissues segmented with this method are shown in Fig. 3(a) to 3(c).
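The image arithmetic of this step amounts to a mask subtraction; a minimal sketch with assumed toy masks (not real MRI data):

```python
import numpy as np

# Assumed toy masks: skull-stripped brain, segmented WM and GM.
brain = np.array([[1, 1, 1],
                  [1, 1, 0]], dtype=bool)
wm    = np.array([[1, 0, 0],
                  [0, 0, 0]], dtype=bool)
gm    = np.array([[0, 1, 0],
                  [0, 0, 0]], dtype=bool)

# What remains of the brain after removing WM and GM is taken as CSF.
csf = brain & ~wm & ~gm
```
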
Results using Method 1. Fig. 3(a) Segmented White matter, Fig. 3(b) Segmented Gray matter
V. METHOD 2: BASED ON SEED VALUE

The second method we developed is based only on the intensity value: pixels are checked only for intensity, and neighbourhood connectivity is not a criterion for seed growing.
The algorithm begins from the maximum intensity value, which is taken as the seed point for the white matter region; all pixels with this value are added to the region. In the next iteration the intensity value is decremented by one and the newly matching pixels are added to the white matter region. Simultaneously, the gray matter region is grown starting from a low intensity and incrementing the intensity by one. This process continues through the iterations. After every iteration an index, which we call the segmentation index, is measured:

Segmentation index = (Total count up to the last iteration - Count in the present iteration) / Total count

As the regions grow the index value changes. As the index decreases, we test for common pixels between the two regions, and the iterations are terminated when common pixels are found.

VI. EXPERIMENTAL RESULTS
We used T1-weighted axial-view MR brain images. The algorithms were implemented in MATLAB. The template image used was one of the mid slices of the MRI, slice no. 44.

Fig. 3(c) Segmented CSF of MRI slice No.44

Results using Method 2. Fig. 4(a) White matter, Fig. 4(b) Gray matter

VII. COMPARISON AND CONCLUSION

Both methods yield the same segmentation result. The first method, because it considers neighbourhood connectivity, takes considerably more computation time for a single slice; the connectivity test could also be extended across 3D images. The second method is faster because it picks up all similar pixels simultaneously, and the number of iterations varies from one image to another. It also ensures that all pixels are properly classified and that every pixel belongs to one and only one class. In both methods the seed values are taken from the histogram, so both are fully automatic. For region growing we allow offsets around the seed values; this accounts for the variation of intensity values within the same class caused by the inhomogeneity of the RF coils of the MRI scanner. Table 1 shows a comparison of the iterations for an image of size 512 × 512.
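As a concrete illustration of Method 2, the two-way intensity sweep can be sketched as follows. This is a hypothetical reconstruction in Python (the paper's implementation is in MATLAB); the stopping rule and the `gm_start` threshold are assumptions, and the segmentation-index bookkeeping is omitted for brevity:

```python
import numpy as np

def sweep_segment(img, gm_start):
    """Grow WM downward from the maximum intensity and GM upward from
    `gm_start`, one grey level per iteration, stopping just before the
    two intensity ranges would claim the same pixels."""
    hi = int(img.max())      # WM threshold, swept downward
    lo = int(gm_start)       # GM threshold, swept upward
    while hi - 1 > lo + 1:   # next step still leaves the ranges disjoint
        hi -= 1
        lo += 1
    wm = img >= hi
    gm = (img >= gm_start) & (img <= lo)
    return wm, gm
```

Because the two intensity ranges never overlap, every pixel at or above `gm_start` lands in exactly one class, matching the one-and-only-one-class behaviour described above.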
Table 1 Comparison of Methods (image size 512 × 512)

Method      No. of iterations    Common count
Method 1    512 × 512            180 (Slice No. 44)
Method 2    Maximum 20-25        Zero

ACKNOWLEDGEMENT

The authors acknowledge the financial support rendered by the Technical Education Quality Improvement Programme, Govt. of India, and would like to thank Vipin and Priyadarshan for their valuable contributions to the work.

REFERENCES

1. H. M. Zadeh, J. P. Windham, D. J. Peck and A. E. Yagle, "A Comparative Analysis of Several Transformations for Enhancement and Segmentation of Magnetic Resonance Images," IEEE Transactions on Medical Imaging, 1994.
2. J. L. Lee and J. J. Rodriguez, "Edge Based Segmentation of 3D Magnetic Resonance Images," IEEE, 1994.
3. J. S. Suri, "Two-Dimensional Fast Magnetic Resonance Brain Segmentation," IEEE Engineering in Medicine and Biology, July/August 2001.
4. N. Moon, E. B., "Model Based Brain and Tumour Segmentation," IEEE, 2002.
5. Y. Zhang, M. Brady and S. Smith, "Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm," IEEE Transactions on Medical Imaging, vol. 20, pp. 45-57, 2001.
6. R. K. Justice and E. M. Stokely, "3D Segmentation of MR Brain Images Using Seeded Region Growing," Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Amsterdam, 1996.
7. R. J. Almeida and J. M. C. Sousa, "Comparison of fuzzy clustering algorithms for classification," International Symposium on Evolving Fuzzy Systems, September 2006.
8. M. N. Ahmed, S. M. Yamany, N. Mohamed, A. A. Farag and T. Moriarty, "A modified fuzzy C-means algorithm for bias field estimation and segmentation of MRI data," IEEE Transactions on Medical Imaging, 2002.
9. C. Ciofolo, C. Barillot and P. Hellier, "Combining Fuzzy Logic and Level Set Methods for 3D MRI Brain Segmentation," IEEE, 2004.
10. Y. Zhou, H. Chen and Q. Zhu, "The Research of Classification Algorithm Based on Fuzzy Clustering."
11. M. N. Ahmed and A. A. Farag, "Two-Stage Neural Network for Volume Segmentation of Medical Images," IEEE, 1997.
12. T. Song, E. D. Angelini, B. D. Mensh and A. Laine, "Comparison Study of Clinical 3D MRI Brain Segmentation Evaluation," Proceedings of the 26th Annual International Conference of the IEEE EMBS, San Francisco, CA, USA, September 1-5, 2004.
13. H.-H. Chang, D. J. Valentino, G. R. Duckwiler and A. W. Toga, "Segmentation of Brain MR Images Using a Charged Fluid Model," IEEE Transactions on Biomedical Engineering, vol. 54, no. 10, October 2007.

Author: Shanthi K. J.
Institute: Sree Chitra Thirunal College of Engineering
City: Pappanamcode, Trivandrum
Country: India
Email: [email protected]
SQUID Biomagnetometer Systems for Non-invasive Investigation of Spinal Cord Dysfunction

Y. Adachi1, J. Kawai1, M. Miyamoto1, G. Uehara1, S. Kawabata2, M. Tomori2, S. Ishii2 and T. Sato3
1 Applied Electronics Laboratory, Kanazawa Institute of Technology, Kanazawa, Japan
2 Section of Orthopaedic and Spinal Surgery, Tokyo Medical and Dental University, Tokyo, Japan
3 Department of System Design and Engineering, Tokyo Metropolitan University, Tokyo, Japan
Abstract — We are investigating the application of biomagnetic measurement to the non-invasive diagnosis of spinal cord function. Two multichannel superconducting quantum interference device (SQUID) biomagnetometer systems for measuring the evoked magnetic field from the spinal cord were developed as hospital-use apparatuses: one optimized for sitting subjects, the other for supine subjects. Both systems are equipped with an array of vector SQUID gradiometers. The conduction velocity, an important quantity for the functional diagnosis of the spinal cord, was estimated non-invasively from the magnetic measurements.
Keywords — Biomagnetism, SQUID, biomagnetometer, medical device, non-invasive diagnosis, spinal cord.

I. INTRODUCTION

Biomagnetic measurement is a method for investigating the behavior of the nervous system, muscles, and other living organs by observing the magnetic fields generated by their activity. The intensity of the magnetic field elicited from the body is quite small, on the order of several femtotesla to picotesla, and only SQUID (superconducting quantum interference device) based magnetometers can detect such weak fields. One of the major applications of SQUID biomagnetic measurement is MEG (magnetoencephalography) [1,2]. An MEG system detects the magnetic field produced by brain nerve activity with a sensor array of more than one hundred SQUID sensors arranged around the head; it non-invasively provides information on brain activity at high temporal and spatial resolution and is already used in many hospitals and brain research institutes.

In orthopaedic surgery and neurology there is a strong demand for the non-invasive diagnosis of spinal cord dysfunction. Conventional lesion localization of spinal cord disease relies on image findings from MRI or X-ray CT in addition to clinical symptoms, physical findings and neurological findings. However, doctors are often troubled by false-positive findings, because abnormal findings on the images are not always symptom-related. Evaluation of spinal cord function based on electrophysiological examination is therefore also necessary for the accurate diagnosis of spinal cord disease. We are investigating SQUID biomagnetic measurement systems as a non-invasive functional diagnosis tool for the spinal cord [3-5]. In this paper, the multichannel SQUID measurement systems recently developed for hospital use, and preliminary examinations with those systems, are described.

II. INSTRUMENTATION

A. System Configuration

We focused on measurement of the cervical spinal cord evoked magnetic field (SCEF), because spinal cord disease occurs mainly at the cervical level. Two cervical SCEF measurement systems were developed: one optimized for sitting subjects, the other for supine subjects. The sitting mode system needs a comparatively small footprint, and space-saving is a large advantage when installing the system in a common hospital. The supine mode system, on the other hand, can be applied to patients even with severe spinal cord disease, and provides a larger observation area than the sitting mode system. Fig. 1 shows the configuration of the supine mode system; basically, both systems have the same configuration. An X-ray imaging apparatus was integrated into the SQUID system to acquire anatomical structure images of the cervical region. The two most distinctive components of the systems are the sensor arrays of vector SQUID gradiometers and the uniquely shaped cryostats, which are described in detail in this section.

Fig. 1 Configuration of the supine mode system.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 39–42, 2009 www.springerlink.com

B. Sensor array

The sensor arrays are composed of the vector SQUID gradiometers [6] shown in Fig. 2(a). A vector SQUID gradiometer detects three orthogonal components of the magnetic field at once thanks to a combination of three gradiometric pickup coils, as shown in Fig. 2(b). Fig. 2(c) shows the sensor array of the sitting mode system: a 5 × 5 matrix-like arrangement covering an observation area of 80 mm × 90 mm. The sensor array of the supine mode system has an 8 × 5 arrangement covering 140 mm × 90 mm. The sensors are positioned along a cylindrical surface to fit the posterior of the cervical region. All SQUID sensors are driven by flux locked loop (FLL) circuits to linearize the output and improve the dynamic range [7]. The resulting sensitivity and typical noise level in the white-noise region are about 1-1.5 nT/V and 2.3 fT/Hz^1/2, respectively.
Fig. 2 Vector SQUID sensor array. (a) Conceptual drawing of a vector SQUID gradiometer. (b) Structure of the pickup coils. (c) Appearance of the sensor array of the sitting mode system.
C. Cryostat

The cryostat is a double-layered vessel with a vacuum thermal insulation layer that holds the SQUID sensors in liquid helium and keeps them in the superconducting state. The cryostats and their inner parts are made of glass fiber reinforced plastic to avoid interference from magnetic materials. Fig. 3 shows the inner structure and appearance of the cryostats. The cryostats of the sitting mode and supine mode systems share the same uniquely designed structure: a cylindrical main body that stores liquid helium, with a protrusion from its side surface. The sensor array of the sitting mode system is installed in the protrusion oriented horizontally; that of the supine mode system is installed in the protrusion oriented vertically upward. In both systems the surface in contact with the subject's cervical region has a cylindrical curve, as does the front of the sensor array, and the cool-to-warm separation of this surface is less than 7 mm. The cryostats are supported by non-magnetic gantries and can be tilted so that the relative position between the sensor array and the cervical region is readily adjusted. The capacities of the cryostats of the sitting and supine mode systems are about 25 liters and 70 liters, respectively, and the intervals between liquid helium refills are 84 hours and 120 hours, respectively.

Fig. 3 (a) Inner structure and (b) appearance of the cryostat of the sitting mode system. (c) Inner structure and (d) front view of the cryostat of the supine mode system.

III. SCEF MEASUREMENT

A. Material and method

To verify the performance of the systems, preliminary cervical SCEF measurements of normal subjects were carried out in a magnetically shielded room. Three male volunteers, TS, MT, and KK, aged 23-30, were examined with the sitting mode system; subject TS was also examined in the supine mode. None of the subjects had cervical dysfunction. In the sitting mode measurement, the subjects sat on a chair in a reclining position with the posterior of the cervical region tightly fitted to the protrusion of the cryostat, as shown in Fig. 4(a). In the supine mode measurement, the subject lay on a bed in the supine position with the neck and head extending off the edge of the bed; the cervical region was in close contact with the upper surface of the protrusion of the cryostat, as shown in Fig. 1 and Fig. 4(b). Fig. 4(c) and (d) are lateral X-ray images showing the relative positions of the cervical spine and the sensor array, and the orientation of the coordinate system. The median line of the subject's body was positioned roughly at the center of the observation area in each measurement.
Fig. 4 SCEF measurement (a) in the sitting mode and (b) in the supine mode. Lateral X-ray image (c) in the sitting mode and (d) in the supine mode.
Electric stimulation was applied to the median nerve at the left wrist with skin surface electrodes. The stimuli were repetitive square current pulses with an intensity of 6-8 mA and a duration of 0.3 ms; the repetition rate was 17 Hz or 8 Hz. Signals from all SQUID sensors were filtered with 100-5000 Hz band-pass filters before digital data acquisition at a sampling rate of 40 kHz. The stimulus was repeated 4000 times and all responses were averaged to improve the S/N ratio. After the data acquisition, a 1290 Hz digital low-pass filter was applied to the averaged data.

B. Result and discussion

SCEF signals were successfully detected from every subject, and the pattern transition of the SCEF distribution over the cervical region was clearly observed about 10 ms after the stimulation; it showed the same tendency in every subject. Fig. 5(a) and (b) show the transition of the SCEF distribution for subject TS acquired by the sitting mode system and the supine mode system, respectively. The maps are views from the posterior of the subject, with the upper and lower sides corresponding to the cranial and caudal directions. The arrow maps show the orientation and intensity of the SCEF components tangential to the body surface; the contour maps represent the distribution of the radial component. In both Fig. 5(a) and (b), a large outward component appears on the left side of the observation area in the early stage of the SCEF transition. After that, the zero-field line of the radial component rotates anti-clockwise, and the tangential component turns together with it. This rotation is in good agreement with the result of a preceding study [8]. On the right side of the observation area, an inward component was clearly found to progress along the y-axis, parallel to the spinal cord, from the lower side to the upper side.
The inward component was followed by an outward component. This was interpreted as a partial observation of a quadrupole distribution, the specific magnetic field pattern corresponding to an axonal action potential [9]. The whole behavior of the inward and outward extrema was observed in Fig. 5(b), while part of the extrema was missing in Fig. 5(a); the wide observation area of the supine mode system is thus effective for surveying the complicated pattern transition of the cervical SCEF induced by brachial nerve stimulation. The conduction velocity, which is significant for the functional diagnosis of the spinal cord, was estimated from the movement of the extrema to be about 60-90 m/s, a value within the normal physiological range.
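The velocity estimate itself is simple arithmetic: fit the extremum's position along the spinal cord against latency. A minimal sketch follows; the positions and latencies are made-up illustrative numbers, not the measured data:

```python
import numpy as np

# Assumed (illustrative) extremum positions along the y-axis and latencies.
positions_mm = np.array([10.0, 25.0, 40.0])   # caudal-to-cranial positions
latencies_ms = np.array([10.0, 10.2, 10.4])   # time after stimulation

# The slope of a least-squares line gives the propagation speed;
# mm/ms is numerically equal to m/s.
velocity_m_per_s = np.polyfit(latencies_ms, positions_mm, 1)[0]
```
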
IV. CONCLUSION

Two SQUID spinal cord evoked field measurement systems, one sitting mode and one supine mode, were developed with hospital use in mind. The systems are equipped with an array of vector SQUID gradiometers and a uniquely shaped cryostat optimized for sitting or supine subjects. Using the developed systems, SCEF signals were successfully detected and their specific pattern transitions observed. The supine mode system has a wider observation area and was more suitable for acquiring the complicated SCEF distribution induced by brachial peripheral nerve stimulation. Signal propagation along the spinal cord was found and its velocity estimated non-invasively. This indicates that SCEF measurement can become a powerful tool for the non-invasive diagnosis of spinal cord dysfunction.
ACKNOWLEDGMENT

This work was partly supported by the CLUSTER project, MEXT, Japan.
Fig. 5 Transition of the SCEF distribution between 10 ms and 13.5 ms in latency (a) in the sitting mode and (b) in the supine mode. Plain, dotted, and bold lines in the contour maps represent outward, inward, and zero magnetic fields, respectively. The interval between contour lines is 5 fT.

REFERENCES

1. Hämäläinen M, Hari R, Ilmoniemi RJ et al. (1993) Magnetoencephalography – theory, instrumentation, and application to noninvasive studies of the working human brain. Reviews of Modern Physics 65:413–498
2. Kado H, Higuchi M, Shimogawara et al. (1999) Magnetoencephalogram system developed at KIT. IEEE Trans Applied Supercond 9:4057–4062
3. Kawabata S, Komori H, Mochida K, Ohkubo H, Shinomiya K (2003) Visualization of conductive spinal cord activity using a biomagnetometer. Spine 27:475–479
4. Adachi Y, Kawai J, Miyamoto M, Kawabata S et al. (2005) A 30-channel SQUID vector biomagnetometer system optimized for reclining subjects. IEEE Trans Applied Supercond 15:672–675
5. Tomizawa S, Kawabata S, Komori H, Hoshino Fukuoka Y, Shinomiya K (2008) Evaluation of segmental spinal cord evoked magnetic fields after sciatic nerve stimulation. Clin Neurophys 119:1111–1118
6. Adachi Y, Kawai J, Uehara G, Kawabata S et al. (2003) Three-dimensionally configured SQUID vector gradiometers for biomagnetic measurement. Supercond Sci Technol 16:1442–1446
7. Drung D, Cantor R, Peters M, Scheer HJ, Koch H (1990) Low-noise high-speed dc superconducting quantum interference device magnetometer with simplified feedback electronics. Appl Phys Lett 57:406–408
8. Marckert BM, Burghoff M, Hiss L-H, Nordahn M et al. (2001) Magnetoneurography of evoked compound action currents in human cervical nerve roots. Clin Neurophys 112:330–335
9. Hashimoto I, Mashiko T, Mizuta T et al. (1994) Visualization of a moving quadrupole with magnetic measurements of peripheral nerve action fields. Electroencephalogr Clin Neurophysiol 93:459–467
Human Cardio-Respiro Abnormality Alert System using RFID and GPS (H-CRAAS)

Ahamed Mohideen, Balanagarajan
Syed Ammal Engineering College, Anna University, Ramanathapuram, India

Abstract — The most crucial minute in a human's life is the minute in which he oscillates between life and death, and deaths caused by failure of the heart and respiratory mechanisms for lack of timely medical assistance are increasing. This paper gives an insight into a wireless, RFID (Radio Frequency Identification) enabled system in which the victim's actual location is integral to providing a valuable medical service. It demonstrates the use of wireless telecommunication systems and miniature sensor devices such as passive RFID tags, smaller than a grain of rice and equipped with a tiny antenna, to capture and wirelessly transmit a person's vital body-function data, such as pulse/respiration rate or body temperature, to an integrated ground station. In addition, the antenna at the ground station receives information on the location of the individual from the GPS (Global Positioning System). Both sets of data, medical information and location, are then made available to save lives by remotely monitoring the medical condition of at-risk patients and providing emergency rescue units with the person's exact location. The paper presents a general model for a heart and respiration abnormality alert system and discusses in detail the stages involved in tracking the victim's exact location with this technology.

Keywords — RFID tags, RFID reader, GPS, H-CRAAS.
I. INTRODUCTION

InformID is an out of sight, out of mind medical technology that can save lives. The device consists of two parts. The first is a small tag, shown in Fig. 1, that is easily concealed on an individual and carries crucial medical information. The second is a reading device that, in the case of an emergency, a paramedic or doctor could use hands-free to access this information. The protected medical information would allow doctors and paramedics to quickly and easily diagnose and identify medical conditions and emergencies on a patient-by-patient basis. InformID is a truly unique medical system because it is powerless and wire-free.

Fig 1- Grain sized RFID tag

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 43–46, 2009 www.springerlink.com

I.A. Overview: Medical applications of RFID are quickly becoming accepted as a safe and effective means of tracking patients in hospitals and of keeping track of pets and children. Once this technology gains mass appeal, what other potential venues are available for such a cheap, simple and effective identification technology? It has yet to be used effectively as a preventative measure against the medical complications of an average live-at-home individual. Many devices have come and gone that claim to help the elderly in medical situations, but they are poor in concept and highly ineffective at both preventing and solving medical crises in a time-effective manner. To better understand the intention of this project and the underlying problem, below is a sample scenario worth solving with the current state of RFID technology.

"An individual at their personal residence needs medical assistance and dials 9-1-1 but does not remain conscious for long. As paramedics arrive on scene, they are faced with many questions. Who is the individual in need of assistance and what is the nature of their problem? Does the medical emergency require medical assistance elsewhere or immediate resuscitation on the scene? What are the victim's vital signs? Is there an important medicinal history that could be vital to solving the medical problem?"

The above situation is quite often the brunt of a paramedic's work during an average day, and the time taken to answer those questions can ultimately determine the victim's chances of survival. Our goal is to aid a paramedic's decision-making process by potentially answering some of these questions using RFID technology. The key to a viable solution lies mainly in the transparency of the technology involved: a totally passive system that can be consulted only if the paramedic has the time or needs the information.

Radio Frequency Identification (RFID) is a system for transmitting a unique encrypted number wirelessly between a tag and a transponder (reader). The number is 96 bits long, giving a vast space of unique combinations. RFID is interesting and unique for a variety of reasons. Reading embedded tags does not need line-of-sight transmission like a barcode reader; instead, multiple tags may be read simultaneously just by being within a few feet of the reader. RFID tags come in two flavors, passive and active. A battery of some sort, allowing the tag to be read at long ranges, usually powers active tags. A passive tag, like the one used in the proposed system, requires no power source at all; in exchange, the range at which the tag can be read is very limited (sometimes less than six inches). RFID is safe and effective for maintaining privacy: each tag is encrypted so that only a specific reader or set of readers can access its information.

Table 1 Different classes of RFID tags

RFID tag class   Definition                              Programming
Class 0          Read-only passive tags                  Programmed by the manufacturer
Class 1          "Write-once, read-many" passive tags    Programmed by the customer; cannot be reprogrammed
Class 2          Rewritable passive tags                 Reprogrammable
Class 3          Semi-passive tags                       Reprogrammable
Class 4          Active tags                             Reprogrammable
Class 5          Readers                                 Reprogrammable

II. METHODOLOGY

Fig 2- Block description of H-CRAAS (heartbeat and respiration sensors feed the RFID tag; the RFID receiver combines the tag data with GPS location and alerts the rescue unit by SMS)

II.A. Sensors: Separate sensors continuously monitor the heartbeat and respiration rate. The sensor outputs are fed, in digitized form, to separate ports on the RFID tag.

II.B. RFID tag: The RFID tag works on the principle of Code Division Multiple Access (CDMA). Since the RFID tag is a multiport tag, the two sensor outputs are applied separately. The tag analyzes the 8-bit digital inputs; if either value exceeds its predefined threshold, the corresponding reading is encoded as a 4-bit code and transmitted to the RFID receiver. The passive RFID systems we used couple the tag to the reader inductively or radiatively, depending on whether the tags operate in the near or far field of the reader, respectively.

II.C. Global Positioning System (GPS): GPS is used to obtain the victim's exact location, so that the rescue unit can easily reach the victim with the required medical assistance.

II.D. RFID receiver: Given the location of a victim in a critical condition, the reader alerts the rescue unit so that they can reach the victim quickly.

II.E. Rescue unit: On receiving the alert from the reader, the nearby rescue unit should reach the location as soon as possible.
IFMBE Proceedings Vol. 23
Human Cardio-Respiro Abnormality Alert System using RFID and GPS - (H-CRAAS)
III. EXPERIMENTAL RESULTS
Fig 3- Output of normal respiration rate
Fig 5- Output of abnormal heartbeat and respiration rate
Fig 4- Output of normal heart beat rate

IV. CONCLUSION

This new technology opens up a new direction in the field of biomedical engineering. This paper aimed to provide an alert and thereby bring the required medical assistance to the victim; we achieved this using RFID and GPS technology. The technology will probably become cheaper in the future, and we hope that in the near future it will reduce deaths due to heart and respiratory abnormalities.

V. ACKNOWLEDGEMENT
First of all, our sincere gratitude goes to the Almighty, because "without Him we are nothing"; next, our unfeigned thanks go to our beloved parents, the backbone of all our endeavours. Our heartfelt thanks go to our Correspondent, Principal, Vice Principal, HOD, and all other staff members, who supported us in every possible way to complete this paper. Finally, we thank our friends for their tireless encouragement and dedicated support.
Author: A. Ahamed Mohideen
Institute: Syed Ammal Engineering College
Street: Dr.E.M.Abdullah campus
City: Ramanathapuram
Country: India
Email: [email protected]

Author: M. Balanagarajan
Institute: Syed Ammal Engineering College
Street: Dr.E.M.Abdullah campus
City: Ramanathapuram
Country: India
Email: [email protected]
Automatic Sleep Stage Determination by Conditional Probability: Optimized Expert Knowledge-based Multi-Valued Decision Making

Bei Wang1,4, Takenao Sugi2, Fusae Kawana3, Xingyu Wang4 and Masatoshi Nakamura1

1 Department of Advanced Systems Control Engineering, Saga University, Saga, Japan
2 Department of Electrical and Electronic Engineering, Saga University, Saga, Japan
3 Department of Clinical Physiology, Toranomon Hospital, Tokyo, Japan
4 Department of Automation, East China University of Science and Technology, Shanghai, China
Abstract — The aim of this study is to develop a knowledge-based automatic sleep stage determination system that can be optimized for different cases of sleep data at hospitals. The multi-valued decision making methodology comprises two modules. The first is a learning process that constructs an expert knowledge database: visual inspection by a qualified clinician is used to obtain the probability density functions of the parameters for each sleep stage, and parameter selection is introduced to find the optimal parameters for variable sleep data. The second is the automatic sleep stage determination process, in which the sleep stage decision is made by conditional probability. The results showed close agreement with the visual inspection. The developed system is flexible enough to learn from any clinician and can meet customized requirements in hospitals and institutions.

Keywords — Automatic sleep stage determination, Expert knowledge database, Multi-valued decision making, Parameter selection, Conditional probability.
I. INTRODUCTION

There are two sleep states: rapid eye movement (REM) sleep and non-rapid eye movement (NREM) sleep. NREM sleep consists of stage 1 (S1), stage 2 (S2), stage 3 (S3) and stage 4 (S4). A further stage, awake, is often included for the period during which a person falls asleep. The best-known criteria for sleep stage scoring were published by Rechtschaffen and Kales (the R&K criteria) in 1968 [1]. Sleep stage scoring is now widely used for evaluating the condition of sleep and diagnosing sleep-related disorders in hospitals and institutions. Automatic sleep stage determination can free clinicians from the heavy task of visually inspecting sleep stages. Rule-based waveform detection methods following the R&K criteria can be found in many studies. The waveform detection method was first applied by Smith et al. [2], [3]. Waveform and phasic event detection with a hybrid of rule-based and case-based reasoning was proposed in [4]. An expert system based on characteristic waveforms and background EEG activity using a decision tree was described in [5]. The limitations of the R&K criteria have also been noted [6].
Their insufficiency is that they only provide the typical characteristic waveforms of healthy, normal persons for staging. Although various methodologies have been developed, an effective technique is still needed for clinical application. Sleep data are inevitably affected by various artifacts [7], and individual differences commonly exist even under the same recording conditions. The sleep data of patients with sleep-related disorders have particular characteristics. Recorded sleep data containing complex, stochastic factors increase the difficulty of applying computerized sleep stage determination techniques in clinical practice. In this study, sleep stage determination is treated as a multi-valued decision making problem in the clinical field. The main methodology, proposed in our previous studies, has proved successful for sleep stage determination [8], [9]. The aim of this study is to develop a flexible technique that adapts to different cases of sleep data and can meet customized requirements in hospitals and institutions. Visual inspection by a qualified clinician is used to obtain the probability density functions of the parameters during the learning process of expert knowledge database construction, and a parameter selection process is introduced to make the algorithm flexible. Automatic sleep stage determination is then carried out based on conditional probability.

II. METHODS

A. Data acquisition

The sleep data investigated in this study were recorded in the Department of Clinical Physiology, Toranomon Hospital, Tokyo, Japan. Four subjects, aged 49-61 years, participated. These patients had a breathing disorder during sleep (Sleep Apnea Syndrome). Their overnight sleep data were recorded after treatment with Continuous Positive Airway Pressure (CPAP), based on polysomnographic (PSG) measurement.
The PSG measurement used in Toranomon Hospital included four EEG (electroencephalogram) recordings, two EOG recordings and one
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 47–50, 2009 www.springerlink.com
EMG (electromyogram) recording. EEGs were recorded over the central and occipital lobes with reference to the opposite earlobe electrode (C3/A2, C4/A1, O1/A2 and O2/A1) according to the International 10-20 System [10]. EOGs were derived from the left outer canthus and right outer canthus with reference to earlobe electrode A1 (LOC/A1 and ROC/A1). The EMG was obtained from the muscle areas on and beneath the chin and is termed chin-EMG. EEGs and EOGs were recorded at a sampling rate of 100 Hz, with a high-frequency cutoff of 35 Hz and a time constant of 0.3 s. Chin-EMG was recorded at a sampling rate of 200 Hz, with a high-frequency cutoff of 70 Hz and a low-frequency cutoff of 10 Hz.

B. Visual inspection

A qualified clinician, F.K., of Toranomon Hospital scored sleep stages on the overnight sleep recordings of the subjects. In total, seven types of stages were inspected. In the awake stage, predominant rhythmic alpha activity (8-13 Hz) can be observed in the EEGs (O1/A2, O2/A1) when the subject is relaxed with the eyes closed; this rhythmic EEG pattern attenuates significantly with attention level, as well as when the eyes are open. The waking EOG consists of rapid eye movements and eye blinks when the eyes are open, and few or no eye movements when the eyes are closed. The clinician determined awake with eyes open (O(W)) or awake with eyes closed (C(W)) according to the alpha activity (8-13 Hz) in the O1/A2 and O2/A1 channels and the existence of eye movements in the EOG channels. REM sleep was scored by episodic REMs and low-voltage EMG. NREM sleep was categorized into the S1, S2, S3 and S4 stages. S1 was scored by low-voltage slow wave activity of 2-7 Hz. S2 was scored by the existence of sleep spindles or K-complexes. According to the R&K criteria, S3 is usually defined when 20% to 50% of an epoch contains slow wave activity (0.5-2 Hz), and S4 when more than 50% does. For elderly persons, the S3 and S4 deep sleep stages may not be clearly distinguishable.
The clinician therefore inspected S3 and S4 based on the relative presence of slow wave activity (0.5-2 Hz) within an epoch.
C. Multi-valued decision making

1) Expert knowledge database construction

The overnight sleep recordings from the subjects were divided into consecutive 30-s epochs for training. Each epoch was subdivided into 5-s segments. A set of characteristic parameters, extracted from the periodograms of the EEGs, EOGs and EMG, was calculated for each segment. There are three types of parameters: ratio, amplitude and amount; 20 parameters were calculated in total. The parameters of the constituent segments were averaged to derive the parameter value of one epoch. The epochs were classified into sleep stage groups according to the clinician's visual inspection. A histogram of each parametric variable was created for each sleep stage, and the probability density function (pdf) was approximated by fitting a Cauchy distribution to the histogram. The pdf of parameter y in stage ζ is expressed by

f(y | ζ) = b / (π((y − a)² + b²)),   (1)

where a is the location and b is the scale of the Cauchy distribution; a is determined by the median and b by the quartile deviation [11]. The distance between the pdfs of stage i and stage j was calculated as

d(i, j) = |a_i − a_j|.   (2)

A larger distance indicates smaller overlap between the pdfs, and a parameter is selected when

d(i, j) > max{3b_i, 3b_j},   (3)

i.e., when the distance is larger than three times the scale of the probability density functions of both stages.

2) Automatic sleep stage determination

The overnight sleep recordings of the test subjects were divided into epochs and segments of the same length as the training data, and the values of the selected parameters were calculated for each segment. Initially, the predicted probability P_{1|0} of the first segment was shared equally among the sleep stages with a value of 1/n, where n is the number of types of sleep stages. The joint probability of the parameters for the current segment k was calculated as

f(y_k | ζ_i) = ∏_{l=1}^{m} f(y_k^l | ζ_i),   (4)

where y_k = {y_k^1, y_k^2, …, y_k^m} is the parameter vector and ζ_i denotes the sleep stage. In Eq. (4), the parameters in y_k are assumed to be mutually independent. The conditional probability of segment k was calculated by the Bayesian rule,

P_{k|k}(ζ_i) = f(y_k | ζ_i) P_{k|k−1}(ζ_i) / Σ_{j=1}^{n} f(y_k | ζ_j) P_{k|k−1}(ζ_j),   (5)

where P_{k|k−1}(ζ_i) is the predicted probability of the current segment.
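The Cauchy fitting and parameter selection of Eqs. (1)-(3) can be sketched in a few lines of Python. Reading "quartile" as the quartile deviation (half the interquartile range) is our assumption; all sample values are illustrative.

```python
import math

def cauchy_fit(samples):
    """Fit a Cauchy pdf as in Eq. (1): location a = median,
    scale b = quartile deviation (half the interquartile range).
    The half-IQR reading of "quartile" is an assumption."""
    xs = sorted(samples)
    n = len(xs)
    def quantile(q):
        # Linear interpolation between order statistics.
        pos = q * (n - 1)
        lo = int(math.floor(pos))
        hi = min(lo + 1, n - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])
    a = quantile(0.5)
    b = (quantile(0.75) - quantile(0.25)) / 2.0
    return a, b

def cauchy_pdf(y, a, b):
    """Eq. (1): f(y | stage) = b / (pi * ((y - a)^2 + b^2))."""
    return b / (math.pi * ((y - a) ** 2 + b ** 2))

def is_separated(ai, bi, aj, bj):
    """Eqs. (2)-(3): select the parameter for this stage pair when the
    location distance exceeds three times the scale of both stages."""
    return abs(ai - aj) > max(3 * bi, 3 * bj)

# Two well-separated stage distributions pass the selection criterion.
a1, b1 = cauchy_fit([0.9, 1.0, 1.1, 1.0, 0.95])
a2, b2 = cauchy_fit([4.8, 5.0, 5.2, 5.1, 4.9])
print(is_separated(a1, b1, a2, b2))  # True
```

In the paper the fit is made per parameter and per stage over the training epochs; the sketch above shows a single parameter for two stages.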
The sleep stage ζ* was determined by choosing the maximum among the conditional probabilities of the various sleep stages,

ζ* : max_i P_{k|k}(ζ_i).   (6)

The predicted probability P_{k+1|k}(ζ_i) of the next segment k+1 was given by

P_{k+1|k}(ζ_i) = Σ_{j=1}^{n} t_{ij} P_{k|k}(ζ_j),   (7)

where t_{ij} denotes the probability of transition from stage i to stage j.

III. RESULT

A. Probability density function

The overnight sleep recordings of two subjects were used as the training data for expert knowledge database construction. The pdfs of the selected parameters are illustrated in Fig. 1. In the ratio of parameter 1 (0.5-2 Hz) in the EEGs, S3 and S4 of deep sleep had larger location values, separated from the other stages; S3 and S4 were only slightly separated from each other among the training subjects, who were elderly. In the amplitude of parameter 2 (2-7 Hz) in the EEGs, REM and light sleep (S1, S2) showed relatively large location values compared with the others. The ratio of parameter 3 (8-13 Hz) in the EEGs can serve as evidence for C(W). The amplitude of parameter 4 (25-35 Hz) in the EEGs indicated that the awake stages O(W) and C(W) were separated from the other stages. In the amount of parameter 5 (2-10 Hz) in the EOGs, S2 showed a larger location value than the other stages. In the amount of parameter 6 (25-100 Hz) in the EMG, REM had a smaller location value than the other sleep stages. The combination of these selected parameters was used for the automatic sleep stage determination.

B. Sleep stage determination
The overnight sleep recordings of another two subjects, different from the training data, were analyzed. The automatic sleep stage determination results were evaluated against the visual inspection, and the recognition of sleep stages was satisfactory: the average accuracy over the two test subjects was 85.8% for stage awake, 76.2% for stage REM, 80.6% for light sleep (S1 and S2) and 95.7% for deep sleep (S3 and S4).

IV. DISCUSSION

A. Expert knowledge database

Rule-based methods designed according to the R&K criteria can be found in many studies of computerized sleep stage scoring. The R&K criteria provide rules for sleep stage scoring with the typical, normal waveforms of healthy persons. Additionally, the conventional rule-based
Fig. 1 Probability density functions of the Cauchy distribution for the selected parameters, for each sleep stage. The horizontal axis indicates the stage types; the vertical axis is the parameter value. The two marker types denote the location and the scale of the Cauchy distribution.
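The recursive decision making of Eqs. (4)-(7) can be sketched as follows. The two-stage likelihoods and transition probabilities below are illustrative toy numbers, not values from the paper.

```python
def bayes_stage_step(likelihoods, predicted):
    """Eq. (5): posterior P_{k|k}(i) from the joint likelihoods of
    Eq. (4) and the predicted probabilities P_{k|k-1}(i)."""
    joint = [f * p for f, p in zip(likelihoods, predicted)]
    total = sum(joint)
    return [j / total for j in joint]

def determine_stage(posterior):
    """Eq. (6): choose the stage with the maximum conditional probability."""
    return max(range(len(posterior)), key=lambda i: posterior[i])

def predict_next(posterior, t):
    """Eq. (7): P_{k+1|k}(i) = sum_j t[i][j] * P_{k|k}(j)."""
    n = len(posterior)
    return [sum(t[i][j] * posterior[j] for j in range(n)) for i in range(n)]

# Toy run with n = 2 stages; all numbers are illustrative.
predicted = [0.5, 0.5]        # P_{1|0}: equal shares of 1/n
likelihoods = [0.9, 0.1]      # Eq. (4) products for the current segment
posterior = bayes_stage_step(likelihoods, predicted)
t = [[0.8, 0.3],
     [0.2, 0.7]]              # t[i][j]: stage-transition probabilities
print(determine_stage(posterior), predict_next(posterior, t))
```

Iterating these three calls segment by segment reproduces the recursive structure of the paper's determination process.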
methods did not consider artifacts or the surrounding circumstances of clinical practice. Sleep data, however, are inevitably affected by internal and external influences, which are complex and variable. Using the R&K criteria alone, rule-based methods may succeed on sleep data recorded from healthy persons under ideal conditions, but not on sleep data recorded from patients under the usual conditions at hospitals. Unlike rule-based methods, our method is expert knowledge-based. Visual inspection by a qualified clinician plays an important role in the learning process of expert knowledge database construction. The clinician performed the visual inspection not only with reference to the R&K criteria but also considering the artifacts and surrounding circumstances in the hospital. The visual inspection by a qualified clinician can therefore be relied upon to construct the knowledge database of parameter probability density functions and to drive the automatic sleep stage determination.

B. Parameter selection

Parameter selection is one component of our learning process of expert knowledge database construction. Its principle is to decrease the positive and negative errors of sleep stage determination. In our study, a single parameter is not expected to distinguish all the sleep stages from one another; the pdfs of some stages may overlap. If the pdf of a stage is separated from the others, that parameter can be selected, and the next parameter is selected if it can distinguish the stages that overlapped under the previous parameters. A distance of three times the scale, which covers 99% of the pdf, is adopted as the criterion. The combination of the selected parameters is optimized for the automatic sleep stage determination algorithm.

C. Clinical significance

In this study, the patients were from Toranomon Hospital, which is renowned for the diagnosis and treatment of Sleep Apnea Syndrome.
A qualified clinician, F.K., from Toranomon Hospital performed the visual inspection of sleep stages, and the expert knowledge database was constructed according to that inspection. The automatic sleep stage determination results showed close agreement with the visual inspection, so our system can satisfy the sleep stage scoring requirements of Toranomon Hospital. Moreover, our method is flexible enough to learn from any clinician; accordingly, the developed automatic sleep stage determination system can be optimized to meet the requirements of different hospitals and institutions.
V. CONCLUSION An expert knowledge-based method for sleep stage determination was presented. The process of parameter selection enhanced the flexibility of the algorithm. The developed automatic sleep stage determination system can be optimized for clinical practice.
ACKNOWLEDGMENT

This study is partly supported by the National Natural Science Foundation of China (NSFC) under grants 60543005 and 60674089, and by Shanghai Leading Academic Discipline Project B504.
REFERENCES

1. Rechtschaffen A, Kales A (1968) A manual of standardized terminology, techniques and scoring system for sleep stages of human subjects. UCLA Brain Information Service/Brain Research Institute, Los Angeles
2. Smith J R, Karakan I (1971) EEG sleep stage scoring by an automatic hybrid system. Electroencephalogr Clin Neurophysiol 31(3):231-237
3. Smith J R, Karakan I, Yang M (1978) Automated analysis of the human sleep EEG, waking and sleeping. Electroencephalogr Clin Neurophysiol 2:75-82
4. Park H J, Oh J S, Jeong D U, Park K S (2000) Automated sleep stage scoring using hybrid rule- and case-based reasoning. Comput Biomed Res 33(5):330-349
5. Anderer P et al. (2005) An E-health solution for automatic sleep classification according to Rechtschaffen and Kales: validation study of the Somnolyzer 24 x 7 utilizing the Siesta database. Neuropsychobiology 51(3):115-133
6. Himanen S L, Hasan J (2000) Limitations of Rechtschaffen and Kales. Sleep Med Rev 4(2):149-167
7. Anderer P, Roberts S, Schlogl A et al. (1999) Artifact processing in computerized analysis of sleep EEG - a review. Neuropsychobiology 40(3):150-157
8. Nakamura M, Sugi T (2001) Multi-valued decision making for transitional stochastic event: determination of sleep stages through EEG record. ICASE Transactions on Control, Automation and Systems Engineering 3(2):1-5
9. Wang B, Sugi T, Kawana F, Wang X, Nakamura M (2008) Multi-valued decision making of sleep stages determination based on expert knowledge. Proc International Conference on Instrumentation, Control, and Information Technology, Chofu, Japan, 2008, pp 3194-3197
10. Jasper H H (1958) Ten-twenty electrode system of the international federation. Electroencephalogr Clin Neurophysiol 10:371-375
11. Spiegel M R (1992) Theory and Problems of Probability and Statistics. McGraw-Hill, New York
Author: Bei Wang
Institute: Department of Advanced Systems Control Engineering, Saga University
Street: Honjoh machi 1
City: Saga 840-8502
Country: Japan
Email: [email protected]
A Study on the Relation between Stability of EEG and Respiration

Young-Sear Kim1, Se-Kee Kil2, Heung-Ho Choi3, Young-Bae Park4, Tai-Sung Hur5, Hong-Ki Min1

1 Dept. of Information & Telecom. Engineering, Univ. of Incheon, Korea
2 Dept. of Electronic Engineering, Inha Univ., Korea
3 Dept. of Biomedical Engineering, Inje Univ., Korea
4 School of Oriental Medicine, Kyunghee Univ., Korea
5 Dept. of Computing & Information System, Inha Technical College, Korea

[email protected], 82-32-770-8284

Abstract — In this paper we analyze the relation between respiration and EEG. We acquired the EEG, ECG and respiration signals synchronously, with accurate time alignment, and defined two quantities: SSR, the ratio of alpha-wave to beta-wave power in the EEG, and Macrate, the heartbeat count per single unit of respiration. SSR and Macrate allow the two signals to be compared and analyzed quantitatively. The examination results for 10 subjects support our proposal.

Keywords — EEG, SSR, Macrate, respiration
I. INTRODUCTION

Traditionally, it is said that the alpha wave appears strongly when a person is in a stable state [1], and in oriental medicine it is said that long respiration corresponds to a more stable state than short respiration [2]. In this paper, we therefore tried to analyze the relationship between EEG and respiration from the viewpoint of the stable state. There is, however, no established method for comparing the degree of stability between EEG and respiration quantitatively. For this reason, we defined SSR and Macrate, which respectively denote the ratio of stable-state to active-state activity in the EEG and the heartbeat count per period of respiration. To obtain SSR and Macrate from the biomedical signals, it is essential to extract the alpha and beta waves of the EEG and the feature points of the ECG and respiration signals. The wavelet transform is widely used in signal processing, compression and decompression, neural networks, etc. Compared with the Fourier transform, it has the strong advantage of not losing time-location information when transforming data from the spatial domain to the frequency domain [3]. Noting the occurrence times of particular events while processing data in the frequency domain can be significant, so the wavelet transform is an appropriate method for this paper, which aims to track the variation of the EEG spectrum over time.
Using the wavelet transform, we decomposed the EEG signal into signals of different frequency bands, such as the alpha wave and beta wave, and estimated the power spectra of the EEG, alpha wave and beta wave to calculate the SSR. For the Macrate, the respiration count and the heartbeat count from the ECG are required: we used the zero-crossing method to extract the respiration count per minute and a wavelet-based algorithm [4] to extract the heartbeat count per minute.
II. DEFINITION OF MACRATE AND SSR

We define SSR (Stable State Ratio) by the following equation (1). The alpha wave is said to be strong when the state is stable, and the beta wave strong when the state is active. Since everybody has his own baseline amount of alpha wave, the alpha wave alone cannot express the degree of stability of the EEG accurately; we therefore use the relative ratio of the stable and active components of the EEG:

SSR = (power spectrum of alpha wave) / (power spectrum of beta wave).   (1)

We take the most stable period to be the period whose single respiration unit is longest in the whole 20 minutes of respiration data. Macrate is defined by equation (2):

Macrate = (beat count per minute) / (respiration count per minute).   (2)

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 51–54, 2009 www.springerlink.com
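Equations (1) and (2) can be sketched directly, with the respiration count obtained by the zero-crossing method the paper uses. The synthetic respiration trace and the assumption that one full respiration cycle produces two zero crossings are illustrative.

```python
import math

def zero_crossing_count(signal):
    """Count sign changes in the signal; each full respiration cycle
    crosses zero twice (a simplifying assumption for a clean trace)."""
    crossings = 0
    for prev, cur in zip(signal, signal[1:]):
        if prev < 0 <= cur or prev >= 0 > cur:
            crossings += 1
    return crossings

def ssr(alpha_power, beta_power):
    """Eq. (1): ratio of alpha-band to beta-band power spectrum."""
    return alpha_power / beta_power

def macrate(beats_per_minute, respirations_per_minute):
    """Eq. (2): heartbeats per single unit of respiration."""
    return beats_per_minute / respirations_per_minute

# One minute of a synthetic 15-cycle/min respiration trace, sampled at
# 10 Hz (601 samples); the small phase offset avoids samples landing
# exactly on zero.  All numbers are illustrative.
resp = [math.sin(2 * math.pi * 15 * t / 600 + 0.1) for t in range(601)]
cycles = zero_crossing_count(resp) // 2
print(cycles, macrate(75, cycles))  # 15 cycles -> Macrate 5.0
```

In the paper the beat count comes from wavelet-based QRS detection on the ECG rather than from a fixed number as here.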
III. FEATURE POINT EXTRACTION AND RELATIONAL ANALYSIS

To compute the SSR, we must extract the alpha and beta waves from the EEG and estimate their power spectra. The EEG data used in this paper come from the frontal lobe (the sixth channel) of an eight-channel EEG recording, sampled at 256 Hz; by the Nyquist sampling principle the valid frequency range is 0 ~ 128 Hz. Table 1 shows the seven-level wavelet decomposition of the EEG data.

Table 1: Frequency band of the EEG signal according to the decomposition level.

Level  A     Frequency      D     Frequency
1      cA1   0 ~ 64 Hz      cD1   64 ~ 128 Hz
2      cA2   0 ~ 32 Hz      cD2   32 ~ 64 Hz
3      cA3   0 ~ 16 Hz      cD3   16 ~ 32 Hz
4      cA4   0 ~ 8 Hz       cD4   8 ~ 16 Hz
5      cA5   0 ~ 4 Hz       cD5   4 ~ 8 Hz
6      cA6   0 ~ 2 Hz       cD6   2 ~ 4 Hz
7      cA7   0 ~ 1 Hz       cD7   1 ~ 2 Hz

Generally, brain waves are classified by frequency into the alpha wave (8~12 Hz), beta wave (13~35 Hz), theta wave (4~7 Hz), delta wave (0.3~3.5 Hz), etc. The alpha wave is the most prominent of the various brain wave patterns; it generally appears over the frontal lobe or occipital region when the eyes are closed, in a mentally stable state and quiet surroundings [1]. As shown in Table 1, the frequency band of the alpha wave falls within component cD4. Decomposing that component again with the wavelet transform yields cD4A1 (8 ~ 12 Hz) and cD4D1 (12 ~ 16 Hz). Because the alpha band is about 8 ~ 12 Hz, we chose the cD4A1 component as the alpha wave and the cD3 component as the beta wave. With the extracted alpha and beta waves, we estimated their power spectra to obtain the SSR, which represents the degree of stability quantitatively. There are three common methods of power spectral analysis: the correlation function (Blackman-Tukey) method, the FFT method, and the linear prediction model method. The B-T and FFT methods are the most widely used; the linear prediction model is used when the acquired data length is relatively short [5]. We used the FFT method in this paper because the EEG signal, at 20 minutes, was sufficiently long.

Next, to obtain the Macrate, the heartbeat count from the ECG and the respiration count per minute are needed. First, to recognize the QRS complex of the ECG and obtain the heartbeat count per minute, we used a wavelet method [6]. Second, to recognize the respiration period, we used the zero-crossing method.

Relational analysis is used to find the linear relationship between two different variables; generally the correlation coefficient of equation (3) is used:

r = s_XY / sqrt(s_XX · s_YY),   −1 ≤ r ≤ 1.   (3)

In this paper, we aimed to find the quantitative relation between EEG and respiration from the viewpoint of the stable state; with the computed SSR and Macrate, we performed relational analysis.

IV. RESULTS

The data used in this paper were the EEG, ECG and respiration signals, accurately time-synchronized, recorded from 10 subjects for 20 minutes each. Figure 1 shows an acquired EEG signal and the alpha and beta waves extracted by the wavelet method. Figure 2 shows the moving spectral ratios of the alpha wave spectrum over the EEG spectrum, the beta wave spectrum over the EEG spectrum, and the alpha wave spectrum over the beta wave spectrum for 20 minutes; the horizontal axis is time and the vertical axis the rate of each spectrum.
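The relational analysis of Eq. (3) amounts to computing a correlation coefficient, sketched below in plain Python. The SSR/Macrate pairs are illustrative toy values, not the paper's data.

```python
import math

def pearson_r(x, y):
    """Correlation coefficient of Eq. (3): r = s_XY / sqrt(s_XX * s_YY),
    bounded by -1 <= r <= 1."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Toy SSR/Macrate pairs (illustrative values only):
ssr_vals = [1.0, 1.2, 1.1, 1.5, 1.7, 1.6]
macrate_vals = [4.0, 4.4, 4.1, 5.0, 5.6, 5.3]
r = pearson_r(ssr_vals, macrate_vals)
print(round(r, 3))  # strong positive relation, r close to 1
```

The paper obtains the same statistic (with its P-value) from SPSS; the sketch shows only the coefficient itself.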
Figure 1: Extraction of alpha wave and beta wave.
Figure 2: The power spectrum results of the EEG.
The following figure 3 shows the SSR during the period of highest Macrate in the whole respiration signal. In figure 3, the upper trace is the SSR and the lower trace the Macrate; the two signals are accurately time-synchronized.
Figure 4: Scatter plot of the result shown in Table 2. It can be seen that the SSR grows as the Macrate increases.
The following Table 3 gives the results for all 10 subjects. As shown in the table, the SSR and Macrate correlate at between 20.5% and 43.1%.

Table 3: The result of relational analysis for the 10 subjects

Subject  Result    Subject  Result
1        0.247     6        0.268
2        0.239     7        0.380
3        0.220     8        0.431
4        0.356     9        0.292
5        0.205     10       0.237
Average: 0.290
Figure 3: The SSR and Macrate.
The following Table 2 gives the relational analysis between the SSR and Macrate for one subject, and figure 4 is the scatter plot of Table 2. The relational analysis in this paper was performed with SPSS version 12K.

Table 2: The result of relational analysis between SSR and Macrate

                    SSR      Macrate
SSR      r          1        .431
         P-value             .000
         N          114      114
Macrate  r          .431     1
         P-value    .000
         N          114      114

V. CONCLUSION

In this paper, we analyzed the relationship between the SSR and Macrate in order to find a quantitative relation between EEG and respiration from the viewpoint of the stable state. To this end, we extracted the feature points of the bio-signals (EEG, ECG and respiration) from 10 subjects, and defined and calculated the SSR and Macrate as quantitative indices of the stable state. From the relational analysis, we conclude that EEG and respiration are related to each other. Future work includes more extensive experiments.
REFERENCES

1. J. J. Carr, "Introduction to Biomedical Equipment Technology," Prentice Hall Press, pp 369-394, 1998
2. B. K. Lee, "Diagnostics in oriental medicine," the schools of oriental medicine of Kyung Hee University, 1985
3. Akram Aldroubi & Michael Unser, "Wavelets in Medicine and Biology," CRC Press, 1996
4. S. K. Kil, "Concurrent recognition of ECG and human pulse using wavelet transform," Journal of KIEE, Vol 55D, No. 7, pp. 75-81, 2006
5. N. H. Kim, "A Study on the Estimation of Power Spectrum of the Heart Rate using Autoregressive Model," Ph.D. thesis, Inha Univ., 2001
The Feature-Based Microscopic Image Segmentation for Thyroid Tissue

Y.T. Chen1, M.W. Lee1, C.J. Hou1, S.J. Chen2, Y.C. Tsai1 and T.H. Hsu1

1 Institute of Electrical Engineering, Southern Taiwan University, Tainan, Taiwan
2 Department of Radiology, Buddhist Dalin Tzu Chi General Hospital, Chia-Yi, Taiwan
Abstract — Thyroid diseases are prevalent among endocrine diseases. The microscopic image of thyroid tissue is a necessary and important material for investigating thyroid functional mechanisms and diseases. A computerized system has been developed in this study to characterize the textured image features of microscopic images of typical thyroid tissues, and then to classify and quantify the compositions in heterogeneous thyroid tissue images. Seven image features were implemented to characterize the histological structure representation for tissue types including blood cells, colloid, fibrosis tissue, and follicular cells. Statistical discriminant analysis was implemented for classification, to determine which features discriminate between two or more occurring classes (types of tissue). The microscopic image was divided into contiguous grid images and the image features of each grid image were evaluated. Multiple discriminant analysis was used to classify each grid image into the appropriate tissue type, and Markov random fields were then employed to modify the results. 100 randomly selected clinical image samples were employed in the training and testing procedures for the evaluation of system performance. The results show that the accuracy of the system is about 96%. Keywords — Image segmentation, Markov random fields, Thyroid nodule, Feature classification
I. INTRODUCTION

Thyroid diseases including nodules, goiters, adenomas, even carcinomas, and other related diseases are prevalent among endocrine diseases [1,2]. A thyroid nodule is the common initial manifestation of most thyroid tumors. There are many clinical examination methods for thyroid diseases. Observation and examination of histological tissue images can help in understanding the cause and pathogenesis of thyroid diseases. The benignancy or malignancy of thyroid nodules and tumors can be discriminated by microscopic observation of tissue image features. From the viewpoint of morphological architecture in histopathology, many specific image features are important indexes and references for thyroid diseases [3]. The functional unit of the main endocrine system in the thyroid is the follicle, a closed spheroid structure lined by a single layer of epithelial cells and filled with colloid. With the onset of nodular goiter, the number of follicles is increased and the epithelial cells are enlarged and columnar. The follicle is a colloid-rich area. Epithelial cells lining the follicles are flat, and most of the follicles are distended with colloid. Fig. 1(a) shows the typical image of nodular goiter. Fig. 1(b) shows the architectural characteristics of papillae in papillary carcinoma. The cells are arranged around well-developed papillae, and the stroma is represented by loose connective tissue. Fig. 1(c) shows the microscopic image of follicular adenoma. These types of tumors exhibit morphological evidence of follicular cell differentiation. A detail of the adenoma depicted in Fig. 1(c) shows a regular microfollicular and solid architectural pattern with little cytological atypia. Medullary carcinoma usually has clear microscopic evidence of infiltration into the surrounding thyroid parenchyma. Blood vessel invasion is common, and involvement of lymph vessels and regional lymph nodes is also frequently found.

Fig. 1 Typical tissue microscope images of thyroid disease (H&E staining, 200X): (a) thyroid nodular goiter, (b) papillary carcinoma, (c) follicular adenoma

Because of the histological complexity and diversity, image examination and the related clinical practice are laborious and time-consuming. Furthermore, the quality of clinical reading depends heavily on the experience of the clinical practitioner. Therefore, a computerized system has been developed in this study to characterize the textured image features of microscopic images of typical thyroid tissues, so that the compositions in a heterogeneous thyroid tissue image can be classified and quantified. The image features and morphological differentiation of follicles within the thyroid are the determining characteristics for various diseases. With advancements in digital image processing methodologies, digital images can assist in deducing meaningful revelations from implicit details. Armed with the image characteristics of thyroid tissue, the theorems and technologies of digital image processing and feature classification can be implemented for characterizing and quantifying thyroid tissue microscopic image features. In this paper, methodologies using image and texture analysis techniques for the characterization and quantification of typical microscopic images of follicles, colloid, blood cells and stroma are proposed. The results of identical-tissue segmentation from heterogeneous images, obtained by implementing statistics-based image segmentation with the MRF modification technique, are presented as well.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 55–59, 2009 www.springerlink.com
II. MATERIALS AND METHODS

A. Image Acquisition
Slices of histopathological thyroid tissue were made through routine paraffin embedding and hematoxylin-eosin (H&E) staining. The digitized tissue images were obtained from a microscope (Nikon 80i) at a magnification of 100x using a DSLR camera (Nikon D80). The image resolution was set to 3872×2592 pixels at 24 bits per pixel. The aperture and shutter of the camera were kept consistent between samples.

B. Statistical Image Feature Analysis and Tissue Characterization
Seven image features including pixel-level features and high-order texture features were used in this study to evaluate the image characteristics of five typical histological thyroid tissues. The statistical pixel-level features are mean of brightness, standard deviation of brightness (SDB), mean of hue, entropy, and energy of the selected image areas. These features provide quantitative information about the pixels within a segmented region [4]. The Hurst value derived from fractal analysis reveals the roughness of the selected image areas. The high-order texture feature is the regularity of the statistical feature matrix (SFM). Statistical stepwise selection was implemented to exclude insignificant features, and multiple discriminant analysis was then used for the classification of features. Our previous study on tissue classification with these image and texture features was presented in [5].

C. Markov Random Fields
Markov random field (MRF) theory is a stochastic model-based approach to texture analysis. The image and texture features are considered as random states. For every latticed image, with the neighborhood system and the property of joint conditional probability density, the texture data can be fitted with defined stochastic models, and the one which gives the best approximation to the data can be determined. This approach is widely used for image recovery and denoising [6-9]. Based on the concept of MRF, the states of the sites in the system have to be limited. After evaluating the conditional probabilities between neighboring sites, these probabilities are applied to determine the state of each site. With a finite state space, the transition probability distribution can be represented by a matrix, called the transition probability matrix (TPM). The TPM for sites in n states takes the following basic form:

P = [p_{ij}], \quad p_{ij} \ge 0, (1)

\sum_{j=0}^{\infty} p_{ij} = 1 \quad \text{for any } i. (2)

• Neighborhood System and Clique
Let S = \{s_{11}, s_{12}, \ldots, s_{ij}, \ldots, s_{nn}\} be a set of sites (grid images in this study). A set NSS = \{NSS_{s_{ij}}\}, s_{ij} \in S, is called a neighborhood site set on the set of sites S if, for 0 \le i, j \le n,

s_{ij} \notin NSS_{s_{ij}}, (3)

and

s_{ij} \in NSS_{s_{ml}} \iff s_{ml} \in NSS_{s_{ij}}. (4)

The size of NSS is determined by the geometrical configuration. A subset c \subset S is a clique if every pair of distinct sites in c are neighbors; C denotes the set of all cliques. Fig. 1 shows two kinds of neighborhood system of different order, with their cliques.

(a) First order
(b) Second order
Fig. 1 Neighborhood systems and cliques
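To make the TPM of Eqs. (1) and (2) concrete, the sketch below (ours, not the authors' implementation; the grid, the two labels and the first-order neighborhood are illustrative assumptions) counts label transitions between first-order neighbors on a latticed image and row-normalizes the counts:

```python
def estimate_tpm(labels, n_states):
    """Estimate a transition probability matrix from a 2-D grid of class labels.

    Each pair of first-order neighbors (right and down) contributes a count in
    both directions, so the neighborhood relation stays symmetric (cf. Eq. 4).
    """
    counts = [[0] * n_states for _ in range(n_states)]
    rows, cols = len(labels), len(labels[0])
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0)):  # right and down neighbors
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    a, b = labels[i][j], labels[ni][nj]
                    counts[a][b] += 1
                    counts[b][a] += 1
    tpm = []
    for row in counts:
        total = sum(row)
        # row-normalize so each row sums to 1 (Eq. 2); uniform if unseen
        tpm.append([c / total if total else 1.0 / n_states for c in row])
    return tpm

# Hypothetical 4x4 lattice with two tissue classes (0 and 1)
grid = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 1, 1]]
tpm = estimate_tpm(grid, 2)
```

Because tissue classes form contiguous regions, same-label transitions dominate each row, which is what makes the TPM useful for correcting isolated misclassified grids.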
• Gibbs Distributions
Let \Lambda = \{1, 2, 3, \ldots, L\} denote the common state space, where L is the number of states. In this study, \Lambda labels the classified types of tissue, so every site in the MRF takes a value in \Lambda. Let \Omega = \{\omega = (\omega_{s_1}, \ldots, \omega_{s_N}) : \omega_{s_i} \in \Lambda, 1 \le i \le N\} be the set of all possible configurations. X is an MRF with respect to NSS if

P(X = \omega) > 0 \quad \text{for all } \omega \in \Omega; (6)

P(X_s = x_s \mid X_r = x_r, r \ne s) = P(X_s = x_s \mid X_r = x_r, r \in NSS_s). (7)

The probability distribution P(X = \omega) in Eq. (6) is uniquely determined by these conditional probabilities. The Gibbs distribution and the Hammersley-Clifford theorem were proposed to characterize this probability distribution [8]. The Gibbs distribution is a probability measure on \Omega of the form

p(X = \omega) = \frac{1}{Z} e^{-U(\omega)}, (8)

where Z is the normalizing constant

Z = \sum_{\omega \in \Omega} e^{-U(\omega)}, (9)

and

U(\omega) = \sum_{c \in C} V_c(\omega). (10)

The function U(\cdot) is called the energy function, and V_c(\cdot) is a potential function determined from the relevant entries of the TPM, depending on the values \omega_s of \omega for s \in c.

D. System Operation
The aforementioned theories were implemented in this system to train its recognition ability and to improve its performance for tissue classification in heterogeneous tissue microscopic images. Fig. 2 shows the schematic flowchart of the proposed system. The image samples were divided into contiguous grid images. Two phases, the training phase and the recognition phase, were then established in the procedure; they are described in the following paragraphs.

• Training phase
Two steps were included in this phase. The first step was performed to evaluate the feature weightings for the discriminant classification rules. The tissue type of every grid image was manually assigned, and the aforementioned statistical texture features were calculated. Discriminant analysis was then implemented to evaluate the feature weightings of the discriminant classification rules for tissue types including blood cells, colloid, fibrosis tissue, and follicular cells. The second training step was performed to estimate the transition probability matrix for the MRF-based class modification. The images in the training database were extracted and latticed, and all of the grid images were roughly classified by applying the discriminant analysis. The misclassified grid images were manually corrected. Finally, the MRF algorithm was applied to the grid images to establish the TPM.

• Recognition phase
For every grid image of a latticed microscopic image, the aforementioned image and texture features were used to characterize the histological structure representation of the tissue types. Using the feature weightings, statistical discriminant analysis was implemented to determine the tissue type of each grid image. The MRF with the established TPM was then employed to correct the misclassified grid images.
Fig. 2 Schematic flowchart of this proposed system
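The latticing step and a few of the per-grid statistical features can be sketched as follows. This is our illustrative Python, not the authors' implementation; the grid size, the toy image and the choice of three of the seven features (mean brightness, SDB, entropy) are assumptions:

```python
import math

def lattice(image, size):
    """Split a 2-D intensity image into contiguous, non-overlapping grid images."""
    grids = []
    for i in range(0, len(image), size):
        for j in range(0, len(image[0]), size):
            grids.append([row[j:j + size] for row in image[i:i + size]])
    return grids

def grid_features(grid):
    """Mean brightness, standard deviation of brightness (SDB) and entropy
    of one grid image (three of the paper's seven features)."""
    pixels = [p for row in grid for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    sdb = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
    return mean, sdb, entropy

# Hypothetical 4x4 image: dark tissue on the left, bright colloid on the right
image = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
grids = lattice(image, 2)
features = [grid_features(g) for g in grids]
```

Each feature vector would then be fed to the discriminant classifier to assign a tissue type to its grid.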
III. RESULTS AND DISCUSSION
Fig. 3 shows an example of image segmentation using the proposed system. The image in Fig. 3(a) mainly contains two types of tissue: follicular cells and colloid blocks. Because of problems in tissue sample preparation and preservation, some empty areas and fissures appear between normal tissues. Fig. 3(b) shows the result of image classification using discriminant analysis; some grid images were misclassified as blood cells (red blocks). As shown in Fig. 3(c), after the process of MRF-based modification, most of the misclassified grids were corrected.

(a) Original image with follicular cells and colloid

Sensitivity, specificity, and accuracy of four approaches with different combinatorial methods were evaluated. The methods used in these four approaches were: 1) discriminant analysis (DA), 2) discriminant analysis with empirical rules (DA+ER), 3) discriminant analysis with empirical rules and MRF-based modification (DA+ER+MRF), and 4) discriminant analysis with MRF-based modification (DA+MRF). The empirical rules were determined from the statistical distribution of the image features of the tissue classes. The images of every kind of tissue were selected by our clinical fellow. 100 randomly selected clinical image samples were employed in the training and testing procedures for the evaluation of system performance: 50 images were used for training and the others for testing. The performance evaluations of the proposed methodologies are listed in Table 1. The results show that the two approaches using MRF have higher sensitivity and specificity. The DA+MRF approach has the highest performance, with an accuracy of about 96%.
Table 1 Performance measures of the proposed methods

             DA      DA+ER   DA+ER+MRF   DA+MRF
Sensitivity  0.576   0.561   0.655       0.793
Specificity  0.919   0.917   0.951       0.977
Accuracy     0.897   0.896   0.936       0.966
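The entries of Table 1 follow the standard confusion-matrix definitions. A minimal sketch (ours, with made-up counts, not the paper's data):

```python
def performance(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true positives among actual positives
    specificity = tn / (tn + fp)          # true negatives among actual negatives
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical grid-image counts for one tissue class
sens, spec, acc = performance(tp=46, fn=12, tn=127, fp=3)
```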
IV. CONCLUSIONS
(b) The results of classification after discriminant analysis
In this paper, the microscopic images of thyroid heterogeneous tissues were characterized and classified by combinatorial approaches applying image feature classification, statistical discriminant analysis and MRF-based class modification. The results show that the algorithm has good performance and high capability for classification of pathological tissues of thyroid nodules. We hope that the information provided by these succeeding studies will offer a reliable means for the related clinical analysis of thyroid diseases.
ACKNOWLEDGMENT This work was supported in part by the National Science Council, ROC, under Grant NSC 96-2221-E-218-053.
(c) The results of modification after using MRF
Fig. 3 Examples of microscopic image segmentation with heterogeneous tissues
REFERENCES
[1] Hamburger JI (1989) Diagnostic Methods in Clinical Thyroidology. Springer-Verlag, NY
[2] Wynford-Thomas D, Williams ED (1989) Thyroid Tumours: Molecular Basis of Pathogenesis. Churchill Livingstone, London
[3] Ljungberg O (1992) Biopsy Pathology of the Thyroid and Parathyroid. Chapman & Hall Medical, London
[4] Dhawan AP (2003) Medical Image Analysis. Wiley-IEEE Press, NJ
[5] Chen YT, Hou CJ, Lee MW, Chen SJ, Tsai YC, Hsu TH (2008) The image feature analysis for microscopic thyroid tissue classification. 30th Annual International Conference of the IEEE EMBS, Vancouver, Canada, 2008, pp 4059-4062
[6] Geman S, Geman D (1984) Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans Pattern Anal Mach Intell PAMI-6(6):721-741
[7] Cai J, Liu ZQ (2002) Pattern recognition using Markov random field models. Pattern Recognit 35:725-733
[8] Szirányi T, Zerubia J, Czúni L, Geldreich D, Kato Z (2000) Image segmentation using Markov random field model in fully parallel cellular network architectures. Real-Time Imaging 6:195-211
[9] Wilson R, Li CT (2003) A class of discrete multiresolution random fields and its application to image segmentation. IEEE Trans Pattern Anal Mach Intell 25(1):42-56

Author: Yen-Ting Chen
Institute: Department of Electrical Engineering, Southern Taiwan University
Street: No.1, Nan-Tai St.
City: Yung-Kung City, Tainan County
Country: Taiwan
Email: [email protected]
Heart Disease Classification Using Discrete Wavelet Transform Coefficients of Isolated Beats

G.M. Patil1, Dr. K. Subba Rao2, K. Satyanarayana3

1 Dept. of I.T., P.D.A. College of Engineering, Gulbarga-585 102 (Karnataka) INDIA
2 Dept. of E&CE, UCE, Osmania University, Hyderabad-500 007 (A.P.) INDIA
3 Dept. of BME, UCE, Osmania University, Hyderabad-500 007 (A.P.) INDIA
Abstract — In this work the authors have developed and evaluated a new approach for the feature analysis of normal and abnormal beats of the electrocardiogram (ECG) based on discrete wavelet transform (DWT) coefficients using Daubechies wavelets. In the first step, real ECG signals were collected from normal and abnormal subjects. DWT coefficients of each data window were calculated in the Matlab 7.4.0 environment using the Daubechies wavelet of order 5 to a scale level of 11. The detail information of levels 1 and 2 was discarded, as the frequencies covered by these levels were higher than the frequency content of the ECG. Thus, for a scale-level-11 decomposition, the coefficients associated with approximation level 11 and detail levels 3 to 11 were retained for further processing. The recorded ECG was classified into normal sinus rhythm (NSR) and three different disease conditions, namely atrial fibrillation (AF), acute myocardial infarction (AMI) and myocardial ischaemia, based on the discrete wavelet transform coefficients of beats isolated from the multi-beat ECG record. Keywords — ECG, Discrete wavelet transform (DWT), atrial fibrillation (AF), acute myocardial infarction (AMI), myocardial ischaemia.
I. BACKGROUND Cardiac arrhythmias can be catastrophic and life-threatening. A special kind of arrhythmia, atrial fibrillation (AF), has a characteristic pattern in the shape of the ECG; it perturbs the electrocardiogram and at the same time complicates the automatic detection of other kinds of arrhythmia [1]. The problem has been described as a challenge by both Computers in Cardiology and PhysioNet. Permanent and paroxysmal AF is a risk factor for the occurrence and recurrence of stroke, which can occur as its first manifestation. However, its automatic identification is still unsatisfactory [2]. Atrial fibrillation is an arrhythmia associated with asynchronous contraction of the atrial muscle fibres. It is the most prevalent cardiac arrhythmia in the western world and is associated with significant morbidity. Heart diseases, in particular acute myocardial infarction (AMI), are the primary arrhythmic events in the majority of patients who present with sudden cardiac death
[3]. Myocardial ischaemia is yet another cardiac disorder that needs urgent medical attention. The single most common cause of death in Western culture is ischaemic heart disease, which results from insufficient coronary blood supply [4]. Approximately 35 percent of all people who suffer from cardiac disorders in the United States die of congestive heart failure, the most common cause of which is progressive coronary ischaemia [5]. II. INTRODUCTION Detection of cardiac arrhythmias is important because they signal emergency, life-threatening conditions. Heart abnormalities are not identifiable at a very early stage in the electrocardiogram (ECG) recorded from the surface of the chest; they become visible in the ECG only after the disease is established. If certain heart diseases are not diagnosed, evaluated and treated at an early stage, the condition may lead to the risk of sudden cardiac death. Since heart diseases can be treated, early recognition is important. In particular, AF beats, AMI and myocardial ischaemia indicate susceptibility to life-threatening conditions. Moreover, the probability of recovery is often greatest with proper treatment during the first hour of a cardiac disturbance [6]. Measurement of the width or duration of ECG waves, widely used to define abnormal functioning of the heart [7], to detect myocardial damage and to stratify patients at risk of cardiac arrhythmias, is not only time-consuming and slow but also inadequate. We therefore need a faster and more accurate method of cardiac analysis using wavelet coefficients. In this study, a new wavelet-based technique is proposed for the identification, classification and analysis of arrhythmic ECG signals. In recent times the wavelet transform has emerged as a powerful time–frequency signal analysis tool widely used for the interrogation of nonstationary signals.
Its application to biomedical signal processing has been at the forefront of these developments, where it has been found particularly useful in the study of these often problematic signals, none more so than the ECG. The method is particularly useful for the analysis of transients, aperiodicity and other nonstationary signal features where, through the interrogation of the transform coefficients, subtle changes in signal morphology may be highlighted over the scales of interest. The method proposed by the authors makes use of isolated beats from real ECG records by importing the ECG data files into the Matlab 7.4.0 environment.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 60–64, 2009 www.springerlink.com

III. METHODS
The authors collected ECG records from 233 subjects; these were examined with the help of an expert cardiologist and are used throughout this study. The control data of 120 records, comprising 80 normals (NOR) and 40 abnormals (ABN) with arrhythmias, were sorted and analyzed using the Daubechies wavelet transform. A validation group of 70 subjects' ECG records, known as the test data, was then studied using these criteria. Of the 70 records, 40 were taken from normal persons and 30 from subjects with pathological heart beats; their analysis allowed the authors to establish the specific cardiac-cycle characteristics that differentiate healthy sinus rhythm from abnormal cardiac functioning using wavelet analysis. The ECG data files were converted to text (.txt) files and later to Mat (.mat) files. The Mat files were imported into the Matlab workspace, and the decomposition coefficients were computed with code written in Matlab version 7.4.0. This method decomposes the ECG into transformed signals at 11 different scales and makes use of scales 3 to 11 of the wavelet transform, as these proved to give optimum results. A derived criterion is applied to a combination of such scales to determine the abnormal cardiac waves, in addition to the specific level of coefficients, as discussed hereafter.
For this study, the Daubechies wavelet family was retained, as it represents the most commonly used family of orthogonal wavelets for detecting ECG events. The member of this family to use, DB5, was then picked out: on the one hand by relying on a thorough investigation of the related literature, and on the other by analyzing the results of the applications. The complete separation of relevant waves from the ECG signals prior to interpretation is still an open problem because of the noisy nature of the input data.

A. Experimental setup and data acquisition
A battery-operated, portable and cost-effective ECG acquisition system has been designed and developed as a
separate part of this project [8]. The module thus developed is used for ECG recording. The ECG is amplified, filtered (0.5-120 Hz) and converted into digital form before being processed. The signal is sampled at 2 kHz. A digital signal processing system, a PC with Matlab 7.4.0, was used for data acquisition, processing and storage. Thirty seconds of signal, comprising 36 cycles for a normal heart rate of 72 beats per minute, were recorded and stored for analysis. The authors used the discrete wavelet transform (DWT) to extract decomposition coefficients as classifier features, allowing the heart diseases to be differentiated from sinus rhythm (SR).

B. Selection of the wavelet functions
There is no absolute rule to determine the most adequate analyzing wavelet; the choice must always be specific to the application as well as to the analysis requirements. The efficiency in extracting a given signal based on wavelet decomposition depends greatly on the choice of the wavelet function and on the number of decomposition scales [9]. In this study, a straightforward approach to wavelet selection was based on: 1) literature where wavelets have already been used for ECG processing, and 2) the suitability of a particular member of the family of wavelets for the analysis of specific cardiac abnormality signals. The Daubechies wavelet DB5 has shown itself to be very adequate for the present analysis, based on the similarity between ECG samples and the selected member of the considered family. These two approaches resulted in an optimal group of results. Table 1 shows the ECG waves, their morphology and the suggested diseases.

Table 1 ECG waves, morphology and the suggested diseases

Wave         Morphology   Disease
P wave       Absent       Atrial fibrillation
S-T segment  Elevated     Acute myocardial infarction
S-T segment  Depressed    Myocardial ischaemia
T wave       Too tall     Acute myocardial infarction
T wave       Inverted     Myocardial ischaemia
IV. RESULTS AND VALIDATION
The authors used their own ECG database to carry out tests on the method, which is aimed at helping doctors analyze patient records. Real ECG records were used to determine the DWT coefficients. The method proposed by the authors makes use of isolated beats from real ECG records. The sample numbers of the onset of the P wave and the offset of the T wave were provided in the Matlab code for single-cycle extraction, and each ECG cycle, both normal and abnormal, was viewed before the corresponding mat file was used for computation of the DWT coefficients. The DWT coefficient values corresponding to the P and T waves and the S-T segment appearing with certain morphologies were analyzed and classified. The performance of the presented method was tested on the control data group and on the validation database, namely the test data group. Wavelet decomposition coefficients computed for representative signals were classified and tabulated for analysis. Real ECG signals recorded from healthy and pathological persons provided a very interesting basis for heart beat evaluation, given that the total ECG vector was viewed in the Matlab 7.4.0 environment and the positions of the arrhythmias in the signal were known a priori, and hence could be isolated and compared to the result of the normal cycle extraction. The decomposition coefficients cd3 to cd11 of a normal sinus beat, whose intervals and amplitudes are within normal limits, are tabulated for analysis. Single normal cardiac cycles are shown in figures 1 and 2; the corresponding decomposition coefficients cd3 to cd11 are shown in table 2 for subject 1, cycle 1.
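The single-cycle extraction step amounts to slicing the sampled record between the marked fiducial points. A sketch (ours, not the authors' Matlab code; the index values are hypothetical):

```python
def extract_cycle(ecg, p_onset, t_offset):
    """Isolate one beat given the sample numbers of P-wave onset and T-wave offset."""
    if not 0 <= p_onset < t_offset < len(ecg):
        raise ValueError("fiducial points out of range")
    return ecg[p_onset:t_offset + 1]

# At 2 kHz sampling, a 0.8 s beat spans about 1600 samples
record = [0.0] * 4000
beat = extract_cycle(record, p_onset=500, t_offset=2100)
```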
Figure 1 Normal cardiac cycle
Figure 2 Normal cardiac cycle

Table 2 Detail coefficients cd3-cd11 (subject 1, cycle 1)

Sub No  Cd11      Cd10     Cd9     Cd8      Cd7      Cd6      Cd5      Cd4      Cd3
Sub1c1  -30.5687  -5.2672  -1.085  -0.7601  -0.4279  -0.1813  -0.0518  -0.5416  -0.0335

All the coefficients cd3 to cd11 are found to be negative for the normal cardiac cycle. Single cardiac cycles with the P wave absent, indicating atrial fibrillation, are shown in figures 3 and 4. The corresponding detail coefficients cd8 to cd11, pertaining to the low frequencies of the P wave, for one cycle each of subjects 1 and 2, are shown in table 3.

Figure 3 Atrial fibrillation
Figure 4 Atrial fibrillation

Table 3 Detail coefficients cd8-cd11

Sub No  Cd11    Cd10     Cd9      Cd8
Sub1c1  10.120  -12.948  -1.2119  -0.6643
Sub2c1  12.938  -6.9889  -0.0571  -0.0877

The positive value of cd11 indicates the abnormal (P wave absent) condition. The negative values of cd8 to cd10, with cd10 the most significant as examined by the authors, represent the presence of a normal T wave. S-T segment elevation merges with the upstroke of the T wave and makes it too tall, both being suggestive of acute myocardial infarction, as shown in figures 5 and 6. The corresponding detail coefficients cd8 to cd11 are shown in table 4. In this particular abnormality it is again seen that the coefficients cd8 to cd11 are negative, indicating the presence of a normal P wave and a too-tall T wave. Applying the derived criterion combining the scales' coefficients, the authors determined that cd10(11) is positive, viz. 1353.417 and 1404.941 for subjects 1 and 2 respectively. S-T segment depression overlaps with the upward deflection of the T wave, tending to invert it, which suggests myocardial ischaemia, as shown in figures 7 and 8.

Figure 5 Acute myocardial infarction
Figure 6 Acute myocardial infarction

Table 4 Detail coefficients cd8-cd11

Sub No  Cd11     Cd10     Cd9      Cd8
Sub1c1  -50.254  -10.701  -1.5729  -1.4563
Sub2c1  -54.196  -14.072  -2.4537  -1.5028
Figure 7 Myocardial ischaemia
Figure 8 Myocardial ischaemia
The corresponding detail coefficients cd8 to cd11, pertaining to the low frequencies of the T wave, for one cycle each of subjects 1 and 2, are shown in table 5. The negative value of cd11 indicates the presence of a normal P wave, while the positive coefficients cd8 to cd10 represent the abnormal (inverted) T wave. Scales 8, 9 and 10 were found to characterize the ECG signals well enough for the detection of the T wave, whereas scale 11 was used for the detection of the P wave.

Table 5 Detail coefficients cd8-cd11

Sub No  Cd11     Cd10    Cd9     Cd8
Sub1c1  -14.392  1.0497  0.0689  0.0742
Sub2c1  -12.257  1.6625  0.1379  0.1069
The classification of the recorded ECG into normal cycles and diseases such as atrial fibrillation, acute myocardial infarction and myocardial ischaemia based on the DWT coefficients is summarized in table 6.

Table 6 Classification of NSR and cardiac diseases based on DWT coefficients

                      Cd8-Cd10  Cd11      Cd10(11)
Normal *              Negative  Negative  Negative
AF                    Negative  Positive  -
AMI                   Negative  Negative  Positive
Myocardial ischaemia  Positive  Negative  -

* For normal ECG cycles all DWT coefficients including cd3-cd7 (table 2) are found to be negative.

V. DISCUSSION AND CONCLUDING REMARKS
The wavelet transform has emerged over recent years as a key time–frequency analysis and coding tool for the ECG. The wavelet transform allows a powerful analysis of nonstationary signals, making it ideally suited for the high-resolution interrogation of the ECG over a wide range of applications. It provides the basis of powerful methodologies for partitioning pertinent signal components, which serve as a basis for potent diagnostic strategies. Much work has been conducted over recent years into AF, AMI and myocardial ischaemia, centered on attempts to understand the pathophysiological processes occurring in sudden cardiac death, to predict the efficacy of therapy, and to guide the use of alternative or adjunct therapies to improve resuscitation outcomes. The authors have achieved 99.1% sensitivity for discriminating AF episodes, AMI and myocardial ischaemia beats. The final structure of the proposed Matlab code is short and computationally very efficient, and easily lends itself to real-time implementation. In conclusion, it has been shown that the wavelet transform is a flexible time–frequency decomposition tool that can form the basis of useful signal analysis and coding strategies. It is envisaged that the future will see further application of the wavelet transform to the ECG as the emerging technologies based on it are honed for practical purposes. Detecting and separating the P wave, T wave and S-T segment can be a difficult task. This technique provided a basis for distinguishing healthy patients from those presenting with atrial fibrillation, acute myocardial infarction and myocardial ischaemia. The study of the changes in DWT coefficients on a beat-by-beat basis provided important information about the state of heart mechanisms in both physiological and pathological conditions. The authors found that wavelet analysis was superior to time-domain analysis for identifying patients at increased risk of clinical deterioration. The approaches shown here have proved to yield results of comparable significance with other current methods and will continue to be improved.
ACKNOWLEDGEMENTS

The author is highly indebted to Sri. Basavaraj S. Bhimalli, President, H.K.E. Society, Gulbarga, for all the encouragement and support. The author is highly thankful to Dr. S.S. Chetty, Administrative Officer, H.K.E. Society, Gulbarga, for the inspiration. The author is thankful to Dr. L.S. Birader, Principal, P.D.A. College of Engineering, Gulbarga, for the help. The author is profusely thankful to Dr. R.B. Patil, Professor, M. R. Medical College, Gulbarga, for the expert opinions. I thank Sri. Rupam Das for all the help. The author wishes to thank Sri. Dharmaraj M. for providing the technical help.
IFMBE Proceedings Vol. 23
G.M. Patil, K. Subba Rao, K. Satyanarayana

Address of corresponding author:
Author: Prof. G. M. Patil
Institute: Head (I.T.), P. D. A. College of Engineering
City: GULBARGA - 585 102 (Karnataka State)
Country: INDIA
Email:
[email protected] IFMBE Proceedings Vol. 23
Non-invasive Techniques for Assessing the Endothelial Dysfunction: Ultrasound Versus Photoplethysmography

M. Zaheditochai1, R. Jaafar1, E. Zahedi2

1 Dept. of Electrical, Electronic & System Engineering, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
2 Medical Engineering Section, University Kuala Lumpur - British Malaysian Institute, Gombak, Selangor, Malaysia
3 School of Electrical Engineering, Sharif University of Technology, PO Box 11365-9363, Tehran, Iran
Abstract — Endothelial dysfunction, which can be non-invasively assessed by flow-mediated vasodilation (FMD), predicts an increased rate of adverse cardiovascular events. Endothelial dysfunction is considered a leading cause of the development and progression of the atherosclerotic process. The main aim of this study is to review different non-invasive methods for assessing endothelial dysfunction and to propose enhancements to a new method based on photoplethysmography (PPG). First, non-invasive techniques developed for the evaluation of peripheral endothelial function, including Doppler ultrasound (US), are reviewed. Although non-invasive, US-based techniques present a few disadvantages. To remedy these disadvantages, another technique based on pulse wave analysis is introduced. Although amplitude-based features from the photoplethysmogram have produced encouraging results, there are cases where complete equivalence with US-based FMD measurement cannot be established. Therefore, more elaborate features combined with data-processing techniques are proposed, which seem promising enough for the PPG-based technique to be used as a replacement for US-FMD measurement. The ultimate aim is to assess endothelial function and the presence of significant atherosclerotic events leading to peripheral vascular disease using a practical, simple, low-cost, operator-independent and non-invasive technique in a clinical setting.

Keywords — Endothelial function, Flow mediated vasodilation, Photoplethysmography.
I. INTRODUCTION

Endothelial cells play a critical role in controlling vascular function. They are multifunctional cells which regulate vascular tone, smooth muscle cell growth, the passage of molecules across their cell membranes, the immune response, platelet activity and the fibrinolytic system [1,2]. They are located in the vascular wall, which consists of three layers: the intima, closest to the blood lumen; the media, in the middle; and the adventitia, the outermost. The endothelium is a thin layer of cells lining the interior surface of the intima. The strategic location of the endothelium allows it to sense changes in hemodynamic forces through membrane receptor mechanisms and to respond to physical and chemical stimuli, which provoke the endothelium to release nitric oxide (NO) with subsequent vasodilation [1,2,3].
Endothelial dysfunction (ED) is implicated in the pathogenesis and clinical course of the majority of cardiovascular diseases. It is thought to be an important factor in the development of atherosclerosis, hypertension and heart failure, and is strongly correlated with all the major risk factors for cardiovascular disease (CVD). ED, by definition, is any alteration in the physiology of the endothelium that produces a decompensation of its regulatory functions, representing a systemic disorder that affects the vasculature [4].
Many blood vessels respond to an increase in flow, or more precisely shear stress, by dilating. This phenomenon is designated flow-mediated vasodilation (FMD): vascular dilation occurring in response to agents which stimulate the secretion of nitric oxide. FMD appears to be mediated primarily by the action of endothelium-derived nitric oxide (NO) on vascular smooth muscle. Generation of NO depends on the activation of the enzyme endothelial nitric oxide synthase (eNOS); inhibition of this enzyme abolishes FMD in arteries.
In recent years, various non-invasive techniques have been developed for the evaluation of coronary and peripheral endothelial function. Doppler ultrasonography is one such non-invasive technique; it is based on detecting alterations in arterial vasoreactivity under different physiological and pharmacological stimuli [4].

II. NON-INVASIVE TECHNIQUES

A. FMD Ultrasound

Flow-mediated vasodilation is an endothelium-dependent process that reflects the relaxation of the artery when it is faced with increased flow following the shear stress raised during post-occlusive reactive hyperemia.
In this technique the ultrasound (US) system must be equipped with two-dimensional (2D) imaging, color and spectral Doppler, an internal electrocardiogram (ECG) monitor and a high-frequency vascular linear-array transducer with a minimum frequency of 7 MHz. Timing of each
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 65–68, 2009 www.springerlink.com
image frame with respect to the cardiac cycle is obtained via simultaneous ECG recording on the US video monitor. The flow stimulus in the brachial artery (BA) is produced by a blood pressure (BP) cuff placed above the antecubital fossa. A 5-min arterial occlusion is created by cuff inflation to suprasystolic pressure (typically 50 mmHg above the systolic pressure). Both BA diameter and ECG should be measured simultaneously during image acquisition to define when the artery is largest. Based on the cardiac cycle, the diameter of the BA should be measured at the beginning of the R-wave, which identifies the start of systole. In this phase the vessel expands to accommodate the increase in pressure and volume generated by left ventricular contraction.
Although this technique is non-invasive, it has some disadvantages. Firstly, it is subject to variations due to the subjectivity of the operator performing the experiment and requires a skilled operator. Secondly, accurate analysis of BA reactivity is highly dependent on the quality of the US images and is sensitive to the US probe location. Thirdly, arteries smaller than 2.5 mm in diameter are difficult to measure, and vasodilation is generally more difficult to perceive in vessels larger than 5.0 mm in diameter. Blockage of the blood supply to the BA for a relatively long period of time (up to 5 minutes) is another issue: the blockage has to last long enough to ensure that the dilation is large enough for the imaging system to detect; otherwise image artifacts will prevent the investigator from having a clear view of the amount of dilation. This is why the main focus of this study is the method introduced in previous research [5], and its enhancement to obtain comparable results. Another limitation of US-FMD is that, due to the size of the equipment, the patient needs to be physically transferred to the laboratory where the test can be performed.
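The dilation itself is reported as a percent change of the peak post-release diameter over the baseline diameter. A minimal sketch of that ratio (the function name and the example diameters are illustrative, not taken from the paper):

```python
def fmd_percent(baseline_mm: float, peak_mm: float) -> float:
    """Flow-mediated dilation expressed as percent change from the baseline diameter."""
    return (peak_mm - baseline_mm) / baseline_mm * 100.0

# Hypothetical example: baseline 3.60 mm, post-release peak 3.85 mm
print(round(fmd_percent(3.60, 3.85), 1))  # -> 6.9
```

The operator-dependence discussed above enters through `baseline_mm` and `peak_mm`, both of which are read off the US images.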
B. FMD Photoplethysmography

Photoplethysmography (PPG) is a non-invasive technique that measures relative blood volume changes optically, making it possible to measure the pulsation of the blood. During systole, when the artery's diameter increases, the amount of transmitted light drops due to hemoglobin absorption; the reverse occurs during diastole. The result is a pulsatile signal varying in time with the heart beat, comprising both a low-frequency (DC) and a high-frequency (AC) component. The pulsatile signal is superimposed on a DC level related to respiration, sympathetic nervous system activity and thermoregulation, whereas the high-frequency component is due to cardiac-synchronous changes in skin micro-vascular blood volume with each heart beat [9].
As the plethysmographic signal carries very rich information about cardiovascular regulation, the use of advanced signal analysis techniques for the study of plethysmographic signals seems justified. There are several studies based on pulse wave analysis that consider the properties of arteries as well as large-artery damage, a common cause of mortality in industrialized countries. They showed significant correlations between PWV (pulse wave velocity) and cardiovascular risk factors such as hypertension, high cholesterol level, diabetes and smoking [6]. Pulse wave analysis is a well-recognized way to evaluate aortic stiffness and, consequently, could be useful to evaluate the vascular effects of aging, hypertension and atherosclerosis [7,8]. Moreover, the multi-site PPG concept has considerable clinical potential. Some of these studies are based on disease diagnosis in the lower limbs, investigating the clinical value of objective PPG pulse measurements collected simultaneously from the right and left great toes [9].
The previous study [5], which is improved upon here, investigated whether there is any relationship between the US measurement and the finger PPG pulse amplitude (AC) change in response to FMD of the corresponding BA.

C. Simultaneous exams of PPG and Ultrasound

The ultrasound data were gathered with a high-resolution ultrasound (US) system with a 7.5 MHz linear-array transducer. The subjects were asked to abstain from food, alcohol and caffeine for at least 8 hours prior to the experiment. The US images were captured while subjects were in the supine position at rest, with the transducer placed a few centimeters above the elbow on the right arm. The arterial occlusion was created by cuff inflation to 50 mmHg above systolic BP for 4 minutes. The baseline diameter was obtained from the average of 3 measurements before introduction of the blockage. The shear stress causing endothelium-dependent dilation was produced by suddenly releasing the cuff.
BA diameter was measured from the longitudinal image of the artery using a wall-tracking technique, manually, at intervals of approximately 30 seconds for approximately 5 minutes following the release of the blood flow blockage. Besides the US equipment, two PPG systems, each consisting of sensors, software and hardware, were used to record PPG signals from the right and left index fingers respectively.
By comparing the diameter changes obtained with the two methods, PPG and US, we can see that the AC of the PPG signal behaves similarly to the US-FMD. However, this similarity is not observed for all subjects, probably for the following reasons:
1- The US-FMD dilation is not properly measured. It should be emphasized that the artery is being imaged with a high-resolution scanner, but the viewing angle and position of the imaging probe held by the operator play an important role in the correct evaluation of the dilation.
2- Currently, the measurement is done manually, whereby images taken during the US-FMD experiment are examined one by one and the diameter is measured by positioning the cursor at chosen landmarks. It has been widely reported in the literature that this can be one of the main sources of error.
3- The AC of the PPG alone is not sufficient to explain the FMD response. This is why we propose to complement this value with other demographic parameters.
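The AC and DC components of a PPG segment can be estimated very simply: a crude but common choice is the peak-to-trough range for AC and the mean for DC. A minimal sketch on a synthetic signal (the helper names are illustrative, not the authors' code):

```python
import math

def ppg_ac(samples):
    """AC amplitude: peak-to-trough range of the pulsatile component."""
    return max(samples) - min(samples)

def ppg_dc(samples):
    """DC level: mean of the signal (slow baseline)."""
    return sum(samples) / len(samples)

# Synthetic PPG-like segment: DC level 2.0 with a 1 Hz pulsatile swing of +/-0.05
fs = 100  # samples per second
x = [2.0 + 0.05 * math.sin(2 * math.pi * n / fs) for n in range(3 * fs)]
print(round(ppg_ac(x), 3), round(ppg_dc(x), 3))  # -> 0.1 2.0
```

During occlusion the AC estimated this way drops toward zero on the occluded arm, which is the behaviour described in the Results section.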
Fig. 1 PPG signals of the left and right arm before blockage
In this study both the ultrasound FMD and the PPG FMD techniques were analyzed simultaneously. Furthermore, we focus on the role of clinical risk factors for cardiovascular disease (CVD) in the PPG FMD responses.

Fig. 2 PPG signals of the left and right arm after release of blockage
III. RESULTS OF PPG AND ULTRASOUND CROSS STUDY

Eighty-one subjects aged 21 to 76 years were included in this study. The subjects were divided into three groups: Group 1 comprises healthy individuals (25 subjects), Group 2 individuals having any one risk factor (31 subjects) and Group 3 individuals having more than one risk factor (25 subjects). Healthy subjects are free from any major CVD risk factors. In this study the risk factors include obesity, assessed by BMI (body mass index); diabetes, assessed by HbA1c and glucose level; hypertension, assessed by systolic and diastolic blood pressure; and hypercholesterolemia, assessed by LDL (low-density lipoprotein cholesterol) and total cholesterol. To allow a clear comparison graph of all risk factors, they were normalized in each group between 0 and 1, i.e. for each risk factor the minimum value maps to 0 and the maximum value to 1. The border line of each risk factor is identified in Table 1. In the signal processing, the amplitude of the PPG signal (AC value) was extracted from the left and right index fingers, before blockage (Figure 1) and after release (Figure 2).
The PPG signal of the left arm remains the same before blockage and after release and can be used as the baseline. On the other hand, the amplitude of the PPG signal of the right arm decreases to zero during occlusion and increases after release. Both PPG and ultrasound signals are normalized in amplitude between 0 and 1 and are shown along with the clinical data. In both the PPG and the ultrasound exam, all subjects show reactions to BA occlusion. Examples of the responses from each group are shown in Figures 3, 4 and 5, respectively. The dotted line shows the ultrasound response and the continuous line the PPG response. The clinical data refer to BMI, systolic BP, diastolic BP, heart rate, glucose level, HbA1c, total cholesterol, HDL, LDL, triglyceride, age and gender (female = 1, male = 0), from left to right. Risk is shown by a marker on top of the identified risk factor. In 70% of the subjects in the first group (healthy) and the second group (having only one risk factor),
Table 1 Risk factors

Risk factor        Border line    Normalized value   Min value    Max value
Systolic BP        >140 mmHg      0.57               100 mmHg     170 mmHg
Diastolic BP       >90 mmHg       0.63               51 mmHg      112 mmHg
Glucose level      >6 mmol/L      0.17               4.1 mmol/L   15.2 mmol/L
HbA1c              >6.5 %         0.25               4.5 %        12.5 %
Total cholesterol  >5.2 mmol/L    0.57               2 mmol/L     7.59 mmol/L
LDL                >3 mmol/L      0.42               1.2 mmol/L   5.47 mmol/L
BMI                >30            0.49               18.98        41.15
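The normalization behind Table 1 is an ordinary min-max mapping, so the "Normalized value" column (the border line mapped onto [0, 1]) can be checked numerically: for systolic BP, (140 - 100)/(170 - 100) is approximately 0.57. A minimal sketch (`minmax` is an illustrative helper name):

```python
def minmax(value, vmin, vmax):
    """Linearly map value from [vmin, vmax] onto [0, 1]."""
    return (value - vmin) / (vmax - vmin)

# Normalized border-line values, matching Table 1 to two decimals
print(round(minmax(140, 100, 170), 2))   # systolic BP   -> 0.57
print(round(minmax(6, 4.1, 15.2), 2))    # glucose level -> 0.17
print(round(minmax(6.5, 4.5, 12.5), 2))  # HbA1c         -> 0.25
```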
Fig. 3 Normalized (a) PPG and ultrasound responses and (b) clinical data (healthy subject)
both responses follow each other (Figures 3 and 4). In Figure 4 the risk factor is hypercholesterolemia (high total cholesterol and LDL). In the third group, comprising subjects with more than one risk factor, the PPG and ultrasound responses follow each other in only 50% of the subjects (Figure 5). In this group the risk factors are hypercholesterolemia and diabetes.
Fig. 4 Normalized (a) PPG and ultrasound responses and (b) clinical data (subject with only one risk factor)

Fig. 5 Normalized (a) PPG and ultrasound responses and (b) clinical data (subject with more than one risk factor)

IV. CONCLUSIONS

In summary, the results of this study show that the PPG FMD and ultrasound FMD responses are similar in most of the healthy subjects as well as among those having just one risk factor. However, it cannot be said that, as the number of risk factors increases, the PPG and ultrasound responses still follow each other. This finding could explain the non-similarity observed in some of the subjects in the earlier study.

ACKNOWLEDGMENT

This work has been supported by the Science Fund grant (01-01-02-SF0227) from the Ministry of Science, Technology and Innovation, Malaysia. It is also supported by the Technical BME Laboratory of Sharif University of Technology, Iran.

REFERENCES

1. J.A. Vita, J.F. Keaney (2002) Endothelial Function: A Barometer for Cardiovascular Risk?, American Heart Association, Inc. 106:640-642
2. M.E. Widlansky, N. Gokce, J.F. Keaney, J.A. Vita (2003) The Clinical Implications of Endothelial Dysfunction, Journal of the American College of Cardiology, Vol. 42, No. 7
3. Corretti M C, Anderson T J, Benjamin E J, Celermajer D, Charbonneau F, Creager M A, Deanfield J, Drexler H, Gerhard-Herman M and Herrington D (2002) Guidelines for the ultrasound assessment of endothelial-dependent flow-mediated vasodilation of the brachial artery - a report of the International Brachial Artery Reactivity Task Force, J. Am. Coll. Cardiol. 39:257-65
4. J.P. Tomas, J.L. Moya, R. Campuzano, V. Barrios, A. Megras, S. Ruiz, P. Catalan, M.A. Recarte, A. Muriel (2004) Noninvasive Assessment of the Effect of Atorvastatin on Coronary Microvasculature and Endothelial Function in Patients With Dyslipidemia, Rev Esp Cardiol 57(10):909-15
5. E. Zahedi, R. Jaafar, M.A. Mohd Ali, A.L. Mohamed, O. Maskon (2008) Finger photoplethysmogram pulse amplitude changes induced by flow mediated dilation, Physiological Measurement, Vol. 29(5), pp 625-637
6. L.A. Bortolotto, J. Blacher, T. Kondo, K. Takazawa, M.E. Safar (2000) Assessment of Vascular Aging and Atherosclerosis in Hypertensive Subjects: Second Derivative of Photoplethysmogram Versus Pulse Wave Velocity, American Journal of Hypertension 13:165-171
7. S.R. Alty, N. Angarita-Jaimes, S.C. Millasseau, P.J. Chowienczyk (2006) Predicting Arterial Stiffness from the Digital Volume Pulse Waveform, IEEE
8. R. Asmar (2007) Effects of pharmacological intervention on arterial stiffness using pulse wave velocity measurement, Journal of the American Society of Hypertension 1(2):104-112
9. J. Allen, C.P. Oates, T.A. Lees, A. Murray (2005) Photoplethysmography detection of lower limb peripheral arterial occlusive disease: a comparison of pulse timing, amplitude and shape characteristics, Physiol. Meas. 26:811-821
Author: Mojgan Zaheditochai
Institute: Dept. of Electrical, Electronic & System Engineering, Universiti Kebangsaan Malaysia
Street: 43600 UKM Bangi, Selangor
City: Kuala Lumpur
Country: Malaysia
Email:
[email protected] IFMBE Proceedings Vol. 23
High Performance EEG Analysis for Brain Interface

Dr. D.S. Bormane1, Prof. S.T. Patil2, Dr. D.T. Ingole3, Dr. Alka Mahajan4

1 Rajarshi Shahu College of Engineering, Pune, India, [email protected]
2 B.V.U. College of Engineering, Pune, India, [email protected]
3 Ram Meghe Institute of Technology and Research, Badnera, [email protected]
4 Aurora Technological Institute, Hyderabad, [email protected]

Abstract — A successful brain interface (BI) system enables individuals with severe motor disabilities to control objects in their environment (such as a light switch, a neural prosthesis or a computer) using only their brain signals. Such a system measures specific features of a person's brain signal that relate to his or her intent to effect control, then translates them into control signals that are used to control a device. Recently, successful applications of the discrete wavelet transform have been reported in brain interface (BI) systems with one or two EEG channels. For a multi-channel BI system, however, the high dimensionality of the generated wavelet feature space poses a challenging problem. In this paper, a feature selection method that effectively reduces the dimensionality of the feature space of a multi-channel, self-paced BI system is proposed. The proposed method uses a two-stage feature selection scheme to select the most suitable movement-related potential features from the feature space. The first stage employs mutual information to filter out the least discriminant features, resulting in a reduced feature space. Then a genetic algorithm is applied to the reduced feature space to further reduce its dimensionality and select the best set of features. An offline analysis of the EEG signals (18 bipolar EEG channels) of four able-bodied subjects showed that the proposed method achieves low false positive rates at a reasonably high true positive rate. The results also show that the features selected from different channels varied considerably from one subject to another. The proposed hybrid method effectively reduces the high dimensionality of the feature space. The variability in features among subjects indicates that a user-customized BI system needs to be developed for individual users.

Keywords — EEG, Multiresolution Wavelet, Fuzzy C-means, Brain, BCI, Ensemble classifier
I. INTRODUCTION

The brain generates rhythmical potentials, which originate in the individual neurons of the brain. The electroencephalogram (EEG) is a representation of the electrical activity of the brain. Numerous attempts have been made to define a reliable spike detection mechanism; however, all of them have faced the lack of a specific characterization of the events to detect. One of the best-known descriptions of an interictal "spike" is offered by Chatrian et al. [1]: "a transient, clearly distinguished from background activity, with pointed peak at conventional paper speeds and a duration from 20 to 70 msec". This description, however, is not specific enough to be implemented in a detection algorithm that will isolate the spikes from all the other normal or artifactual components of an EEG record. Some approaches have concentrated on measuring the "sharpness" of the EEG signal, which can be expected to soar at the pointed peak of a spike. Walter [2] attempted the detection of spikes through analog computation of the second time derivative (sharpness) of the EEG signals. Smith [3] attempted a similar form of detection on the digitized EEG signal; his method, however, required a minimum duration of the sharp transient to qualify it as a spike. Although these methods involve the duration of the transient in a secondary way, they fundamentally consider "sharpness" as a point property, dependent only on the very immediate context of the time of analysis. More recently, an approach has been proposed in which the temporal sharpness is measured over different "spans of observation", involving different amounts of temporal context; true spikes will have significant sharpness at all of these different spans. The promise shown by that approach has encouraged us to use a wavelet transformation to evaluate the sharpness of EEG signals at different levels of temporal resolution.
A Bio-kit datascope machine is used to acquire the 32-channel EEG signal with the international 10-20 electrode coupling. The sampling frequency of the device is 256 Hz with 12-bit resolution, and the data are stored on hard disc. 32-channel EEG data were recorded simultaneously for both referential and bipolar montages. Recordings are made before, while and after the person performs anulom vilom, and the EEG data were also recorded after one, two and three months of the same persons practising anulom vilom. Data from 10 such persons were collected for analysis.

II. PROBLEM FORMULATION

Visual analysis and diagnosis of the EEG signal using time-domain analysis is a very time-consuming and tedious task, and it may vary from person to person. Frequency-domain analysis also has limitations: more samples have to be analyzed to get accurate results, more memory space is required for data storage, more processing time and a longer filter length are needed, the phase is non-linear, and it lacks artifact removal, baseline rejection, data-epoch rejection and visualization of data info, event fields and event values. The greedy method suffers from decomposition problems, and orthogonal matching involves complex computations. EEG has several limitations, the most important being its poor spatial resolution. EEG is most sensitive to a particular set of post-synaptic potentials: those generated in the superficial layers of the cortex, on the crests of gyri directly abutting the skull and radial to the skull. Dendrites deeper in the cortex, inside sulci, in midline or deep structures, or producing currents tangential to the skull, contribute far less to the EEG signal. The meninges, cerebrospinal fluid and skull "smear" the EEG signal, obscuring its intracranial source. It is mathematically impossible to reconstruct a unique intracranial current source for a given EEG signal; this is referred to as the inverse problem.

III. PROBLEM SOLUTION

A Bio-kit data acquisition system (Mumbai, India) is used to acquire the 32-channel EEG signal with the international 10-20 electrode coupling. The sampling frequency of the device is 256 Hz with 12-bit resolution, and the data are stored on the hard disc of a computer. The aim is to study the effect of long-term (six months or greater) practice of different techniques of ujyai, the resting state and the pre-examination state on the EEG signals of young to middle-aged males and females. 32-channel EEG data were recorded simultaneously for both referential and bipolar montages, and the electrical waveforms were obtained for all the subjects in the different groups mentioned above. The research work in this paper proposes analysis of four different types of wavelets: the Daubechies wavelets with 4 (Db4) and 8 (Db8) vanishing moments, symlets with 5 vanishing moments (Sym5), and the quadratic B-spline wavelets (Qbs).
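Since the device samples at 256 Hz, each additional DWT level halves the analysed band, which is how the dyadic frequency bands discussed in this paper arise: the detail coefficients at level j nominally cover fs/2^(j+1) to fs/2^j Hz. A sketch of that arithmetic (not the authors' code; the helper name is illustrative):

```python
fs = 256  # sampling frequency of the EEG recorder (Hz)

def detail_band(level, fs=fs):
    """Nominal frequency band (Hz) of the DWT detail coefficients at a given level."""
    return (fs / 2 ** (level + 1), fs / 2 ** level)

for level in range(1, 9):
    lo, hi = detail_band(level)
    print(f"D{level}: {lo:g}-{hi:g} Hz")
# D4 covers 8-16 Hz, D5 4-8 Hz, D6 2-4 Hz and D7 1-2 Hz; the approximation
# at level 7 covers 0-1 Hz, matching the bands analysed in the text.
```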
The quadratic B-spline wavelet was chosen due to its reported suitability for analyzing ERP data in several studies. Db4 and Db8 were chosen for their simplicity and general-purpose applicability in a variety of time-frequency representations, whereas Sym5 was chosen due to its similarity to the Daubechies wavelets with an additional near-symmetry property. Different mother wavelets have been employed in the analysis of the signals, and the performances of six frequency bands (0-1 Hz, 1-2 Hz, 2-4 Hz, 4-8 Hz, 8-16 Hz and 0-4 Hz) have been individually analyzed. The proposed pearl-ensemble-based decision is designed, implemented and compared to decisions based on a multilayer perceptron and an AdaBoost classifier; and, most importantly, the earliest possible diagnosis of Alzheimer's disease is targeted. Some expected, and some interesting, outcomes were observed with respect to each parameter analyzed. The aims are to exploit the information on the time-frequency structure using the different wavelet transforms (Db4, Db8, Sym5 and Qbs) along with the proposed pearl-ensemble-based decision during meditation, and to estimate deterministic-chaos measures such as the correlation dimension, largest Lyapunov exponent, approximate entropy and Hurst exponent for 207 persons (subjects) in the pre-examination state, the normal resting state and the meditation state.
Figure 1 shows the result of the classification of one experimental subject. Here the optimal number of clusters is 3. Every feature vector has been normalized into the range 0 to 1. From the figure we can see that alpha increased in the middle interval and decreased in the late interval of ujyai. After ujyai, the appearance of cluster #3 (centered on Cz) increased. As EEG is normally characterized by its frequency, EEG patterns are conveniently classified into four frequency ranges: delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz) and beta (13-25 Hz). The meditation EEG signals, although composed of these standard rhythmic patterns, are found to orchestrate
Fig. 1 Three clusters
Fig. 2 Selected samples and the centre of cluster #1
Fig. 5 Selected samples and the centre of cluster #2
Fig. 6 Selected samples and the centre of cluster #3
symphonies of certain tempos. After systematic study by applying the FCM-merging strategies to a number of meditation EEG data sets, results of clustering indicate that rhythmic patterns reflecting various meditationstates normally involve five patterns: IV. CONCLUSION During the Meditationtechnique individuals often report the subjective experience of Transcendental Consciousness or pure consciousness, the state of least excitation of consciousness. This study found that many experiences of pure consciousness were associated with periods of natural respiratory suspension, and that during these respiratory suspension periods individuals displayed higher mean EEG coherence over all frequencies and brain areas, in contrast to control periods where subjects voluntarily held their breath. Results are 98 % true when discussed with experts and doctors. In this study we developed a scheme to investigate the spatial distribution of alpha power, and we adopted this procedure to analysis this characteristics of meditators and normal subjects. The results show that alpha waves in the
central and frontal regions appear more frequently in the experimental group. From a previous study, the enhancement of frontal alpha during meditation may be related to activation of the Anterior Cingulate Cortex (ACC) and medial Prefrontal Cortex (mPFC). The ACC has outflow to the autonomic, visceromotor and endocrine systems. Previous findings suggested that some changes of autonomic patterns and hormones during meditation are related to the ACC. Furthermore, the ACC and mPFC are considered to modulate internal emotional responses by controlling the neural activities of the limbic system; that is, they may function via diffusing alpha waves. Besides, the trends of non-alpha activity differ between the two groups. In the control group the alpha wave decreased during relaxation, and in the post-session it returned to the same level as in the pre-session; the likely reason for this trend is drowsiness. The proposed multiresolution M-band wavelet is most effective for EEG analysis in terms of a high degree of accuracy with a low computational load. This paper reports a novel idea of understanding various meditation scenarios via EEG interpretation. The experimental subjects' narration further corroborates, from the macroscopic viewpoint, the results of the EEG interpretation obtained by the FCM-merging strategies. The FCM clustering method automatically identifies the significant features to be used as the meditation EEG interpreting protocol. The results show that the state of mind becomes more stable with ujyai, whereas tense conditions act inversely, causing disturbance in the mind.
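For reference, the fuzzy c-means (FCM) updates behind the FCM-merging strategy can be written in a few lines. The following is an illustrative 1-D toy in pure Python, not the authors' implementation; the sample data, fuzzifier m = 2 and cluster count are assumptions:

```python
import random

def fcm(data, c=3, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on 1-D values (illustrative sketch only)."""
    centers = random.Random(seed).sample(data, c)
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - v) or 1e-12 for v in centers]  # avoid division by zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # Center update: v_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return centers, u

# Normalized feature values falling into three loose groups
data = [0.10, 0.12, 0.50, 0.52, 0.90, 0.95]
centers, u = fcm(data)
print(sorted(round(v, 2) for v in centers))
```

Each membership row sums to 1, so every epoch belongs to all clusters to some degree; a crisp cluster label, where needed, is the index of the largest membership.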
BIOGRAPHIES

Prof. S.T. Patil completed the B.E. in Electronics from Marathwada University, Aurangabad, in 1988 and the M.Tech. in Computer from Visvesvaraya Technological University, Belgaum, in July 2003, and is pursuing the Ph.D. in Computer from Bharati Vidyapeeth Deemed University, Pune. He has 19 years of teaching experience as a Lecturer, Training & Placement Officer, Head of Department and Assistant Professor. He is presently working as an Assistant Professor in the Computer Engineering & Information Technology Department of Bharati Vidyapeeth Deemed University, College of Engineering, Pune (India), and has presented 14 papers at national and international conferences.

Dr. D.S. Bormane completed the B.E. in Electronics from Marathwada University, Aurangabad, in 1987, the M.E. in Electronics from Shivaji University, Kolhapur, and the Ph.D. in Computer from Ramanand Teerth University, Nanded. He has 20 years of teaching experience as a Lecturer, Assistant Professor, Professor and Head of Department.
D.S. Bormane, S.T. Patil, D.T. Ingole, Alka Mahajan
Currently he is working as the Principal of Rajarshi Shahu College of Engineering, Pune (India), and has published 24 papers at national and international conferences and in journals.
Denoising of Transient Visual Evoked Potential using Wavelets

R. Sivakumar
ECE Department, RMK Engineering College, Kavaraipettai, Tamilnadu, India

Abstract — Transient Visual Evoked Potential (TVEP) is an important diagnostic test for specific ophthalmological and neurological disorders. The clinical use of VEP is based on the amplitudes and the latencies of the N75, P100 and N145 peaks. The amplitudes and latencies of these peaks are measured directly from the signal. Quantification of these latency changes can contribute to the detection of possible abnormalities. We have applied the wavelet denoising method to 100 pre-recorded signals using all the available wavelets in the MATLAB signal processing toolbox. From the results it is clear that the positive peak is clearer in the denoised version of the signal using the wavelets Sym5 and Bior3.5. In contrast to previous studies, however, our study clearly shows that the output using the former wavelet brings out the P100 more effectively than the latter. The first negative peak N75 is clear in the denoised version of the signal using the wavelets Bior5.5, Bior6.8 and Coif4. The second negative peak N145 is clear using all the above wavelets. All three peaks are fairly clear in the denoised output using the wavelet Sym7.

Keywords — Transient Visual Evoked Potential, Latency, denoising, wavelets, MATLAB.
I. INTRODUCTION

Evoked Potentials (EPs) are alterations of the ongoing EEG due to stimulation. They are time-locked to the stimulus and have a characteristic pattern of response that is more or less reproducible under similar experimental conditions. In order to study the response of the brain to different tasks, sequences of stimuli can be arranged according to well-defined paradigms. This allows the study of different sensory functions, states, etc., making EPs an invaluable tool in neurophysiology. Transient Visual Evoked Potential (TVEP) is an important diagnostic test for specific ophthalmologic and neurological disorders. The clinical use of VEP is based on the amplitudes and the latencies of the N75, P100 and N145 peaks. Quantification of these latency changes can contribute to the detection of possible abnormalities [1-3]. Due to the low amplitudes of EPs in comparison with the ongoing EEG, they are hardly seen in the raw EEG signal, and therefore several trials are averaged in order to enhance the evoked responses. Since EPs are time-locked to the stimulus, their contributions add, while the ongoing EEG tends to cancel out. However, when averaging, information related to variations between the single trials is lost. This information could be relevant for studying behavioral
and functional processes. Moreover, in many cases a compromise must be made when deciding on the number of trials in an experiment. Taking a large number of trials optimizes the EP/EEG ratio, but if the number of trials is too large we may have to deal with effects such as tiredness, which eventually corrupt the averaged results. This problem can be partially solved by taking sub-ensemble averages. However, in many cases the success of such a procedure is limited, especially when not many trials can be obtained or when the characteristics of the EPs change from trial to trial. Several methods have been proposed in order to filter averaged EPs. The success of such methods would imply that fewer trials are needed and would eventually allow the extraction of single-trial EPs from the background EEG. Although averaging has been used since the mid-1950s, up to now none of these attempts has been successful in obtaining single-trial EPs, at least at a level where they could be applied to different types of EPs and implemented in clinical settings. Most of these approaches involve Wiener filtering (or a minimum mean-square-error filter based on auto- and cross-correlations) and have the common drawback of considering the signal as a stationary process. Since EPs are transient responses with specific time and frequency locations, such time-invariant approaches are not likely to give optimal results. Using the wavelet formalism can overcome these limitations, as well as the ones related to time-invariant methods [4-6]. The wavelet transform is a time-frequency representation that has an optimal resolution both in the time and frequency domains and has been successfully applied to the study of EEG-EP signals. The objective of the present study is to follow a previously proposed idea and to present a very straightforward method based on the wavelet transform to obtain the evoked responses at the single-trial level.
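The effect of time-locked averaging discussed above can be illustrated with synthetic data. In this sketch the response shape, noise level, seed and trial count are illustrative assumptions, not values from the study:

```python
import math
import random

# A small deterministic "evoked response" is buried in zero-mean noise several
# times its size. Averaging N time-locked trials preserves the response while
# the noise shrinks roughly as 1/sqrt(N).
rnd = random.Random(42)
n_samples, n_trials = 64, 200
ep = [math.exp(-((t - 32) ** 2) / 20.0) for t in range(n_samples)]  # bump-like EP

trials = [[ep[t] + rnd.gauss(0.0, 2.0) for t in range(n_samples)]
          for _ in range(n_trials)]
avg = [sum(tr[t] for tr in trials) / n_trials for t in range(n_samples)]

# Worst-case deviation from the true response, single trial vs. average
resid_single = max(abs(trials[0][t] - ep[t]) for t in range(n_samples))
resid_avg = max(abs(avg[t] - ep[t]) for t in range(n_samples))
print(round(resid_single, 2), round(resid_avg, 2))
```

The averaged residual is far smaller than the single-trial residual, which is exactly why averaging is used; the cost, as the text notes, is the loss of trial-to-trial variability.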
The key point in the denoising of EPs is how to select, in the wavelet domain, the activity representing the signal (the EPs) and then eliminate the activity related to noise (the background EEG) [7-10]. In fact, the main difference between our implementation and previous related approaches is in the way that the wavelet coefficients are selected. Briefly, this choice should consider latency variations between the single-trial responses, and it should not introduce spurious effects in the time range where the EPs are expected to occur. In this
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 73–76, 2009 www.springerlink.com
respect, the denoising implementation we propose will allow the study of variability between single trials, information that could have high physiological relevance.

II. MATERIAL AND METHODS

Experiments were carried out with subjects in the Neurology Department of a leading medical institute. TVEP was performed in a specially equipped electrodiagnostic procedure room (darkened, sound-attenuated). Initially, the patient was made to sit comfortably approximately 1 m away from the pattern-shift screen. Subjects were placed in front of a black-and-white checkerboard pattern displayed on a video monitor. The checks alternate black/white at a rate of approximately twice per second. Every time the pattern alternates, the patient's visual system generates an electrical response that was detected and recorded by surface electrodes placed on the scalp overlying the occipital and parietal regions, with reference electrodes on the ears. The patient was asked to focus his gaze onto the center of the screen. Each eye was tested separately (monocular testing). Scalp recordings were obtained from the left occipital (O1) electrode (near the location of the visual primary sensory area) with linked-earlobes reference. The sampling rate was 250 Hz and, after band-pass filtering in the range 0.1-70 Hz, 2 s of data (256 data points pre- and post-stimulation) were saved on a hard disk (Figure 1). The average TVEP is decomposed using the wavelet multiresolution decomposition. The wavelet coefficients that are not correlated with the average VEP are identified and
Figure 1 Single Trial TVEP
set to zero. The inverse transform is applied to recover the denoised signal. The same procedure is extended to all single trials. The denoising method was applied to a number of pre-recorded signals using all the available wavelets to analyze the peaks (N75, P100 and N145).

III. RESULTS AND DISCUSSION

The results show that the positive peak P100 is clearer in the denoised version of the signal using the wavelets Sym5 and Bior3.5 (Figure 2). The first negative peak N75 is clearer in the denoised version using the wavelets Bior5.5, Bior6.8 and Coif4. The second negative peak N145 is clear with all of these wavelets. All the peaks were clear in the denoised output using the wavelet Sym7. This would greatly help medical practitioners. In fact, there are many different functions suitable as wavelets, each one having different characteristics that are more or less appropriate depending on the application. Irrespective of the mathematical properties of the wavelet of choice, a basic requirement is that it looks similar to the patterns we want to localize in the signal. This allows a good localization of the structures of interest in the wavelet domain and, moreover, minimizes spurious effects in the reconstruction of the signal via the inverse wavelet transform. For this reason, previous analyses chose quadratic biorthogonal B-splines as mother functions due to their similarity with the evoked response [7-9]. B-splines are piecewise polynomials that form a basis in L2. But our analysis shows that there are more wavelets that can be used which are similar to TVEPs. We presented a method for extracting TVEPs from the background EEG. It is not limited to the study of TVEP/EEG, and a similar implementation can be used for recognizing transients even in signals with a low signal-to-noise ratio.
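The decompose / select coefficients / reconstruct procedure can be sketched with a one-level Haar transform. The study itself uses Sym5, Bior3.5, etc. and selects coefficients by correlation with the average VEP; here plain Haar and magnitude thresholding are used only so the sketch needs no wavelet library, and the signal and threshold are illustrative:

```python
# One-level Haar analysis: pairwise averages (approximation) and differences (detail)
def haar_forward(x):
    approx = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def denoise(x, threshold):
    approx, detail = haar_forward(x)
    # zero the detail coefficients regarded as noise (here: small magnitude)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_inverse(approx, detail)

signal = [0, 0, 0, 0, 4, 4, 4, 4]  # step-like "evoked" component
noise = [0.1, -0.1, 0.05, -0.05, 0.1, -0.1, 0.05, -0.05]
noisy = [s + n for s, n in zip(signal, noise)]
print(denoise(noisy, 0.2))  # the alternating noise is removed, the step survives
```

In the actual method the same forward/zero/inverse pattern is applied, but over a multiresolution decomposition with a smooth mother wavelet and a correlation-based (not magnitude-based) selection rule.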
The denoising of EPs allowed the study of the variability between the single responses, information that could have a high physiological relevance in the study of different brain functions, states, etc. It could also be used to eliminate artifacts that do not appear in the same time-frequency ranges as the relevant evoked responses. Our results proved that the proposed method gives better averaged EPs due to the high time-frequency resolution of the wavelet transform, which is hard to achieve with conventional Fourier filters. Moreover, trials with good evoked responses can be easily identified. These advantages could significantly reduce the minimum number of trials necessary in a recording session, something of high importance for avoiding behavioral changes during the recording (e.g., effects of tiredness) or, even more interesting, for obtaining EPs under strongly varying conditions, as with children or patients with attention problems.
Figure 2 Denoised TVEP
REFERENCES
1. Nuwer (1998) Fundamentals of evoked potentials and common clinical applications today. Electroencephalography and Clin. Neurophysiol. 106:142-148.
2. Kalith J, Misra U.K (1999) Clin. Neurophysiol. Churchill Livingstone Pvt Ltd, New Delhi, India.
3. Nogawa T, Katayama K, Okuda H, Uchida M (1991) Changes in the latency of the maximum positive peak of visual evoked potential during anesthesia. Nippon Geka Hokan 60:143-153.
4. Burrus C.S, Gopinath R.A, Guo H (1998) Introduction to Wavelets and Wavelet Transforms: A Primer. Prentice Hall, NJ, USA.
5. Kaiser G (1994) A Friendly Guide to Wavelets. Birkhauser, Boston.
6. Raghuveer M.R, Ajit S.B (1998) Wavelet Transforms: Introduction to Theory and Applications. Addison Wesley Longman, Inc.
7. Quiroga R.Q, Garcia H (2003) Single-trial event-related potentials with wavelet denoising. Clin. Neurophysiol. 114:376-390.
8. Quiroga R.Q, Sakowicz O, Basar E, Schurmann M (2001) Wavelet transform in the analysis of the frequency composition of evoked potentials. Brain Res. Protocols 8:16-24.
9. Quiroga R.Q (2000) Obtaining single stimulus evoked potentials with wavelet denoising. Physica D 145:278-292.
10. Dvorak I, Holden A.V (eds) (1991) Mathematical Approaches to Brain Functioning Diagnostics. Proceedings in Nonlinear Science. Manchester University Press.

Author: Dr. R. Sivakumar
Institute: RMK Engineering College
City: Kavaraipettai, Tamilnadu
Country: India
Email: [email protected]
A Systematic Approach to Understanding Bacterial Responses to Oxygen Using Taverna and Webservices

S. Maleki-Dizaji1, M. Rolfe3, P. Fisher2, M. Holcombe1
1 The University of Sheffield, Computer Science, Sheffield, United Kingdom
2 The University of Manchester, Computer Science, Manchester, United Kingdom
3 The University of Sheffield, Department of Molecular Biology and Biotechnology, Sheffield, United Kingdom
Abstract — Escherichia coli is a versatile organism that can grow at a wide range of oxygen levels; although heavily studied, no comprehensive picture of the physiological changes at different oxygen levels exists. Transcriptomic studies have previously examined gene regulation in E. coli grown at different oxygen levels, and during transitions such as from an anaerobic to an aerobic environment, but have tended to analyse data in a user-intensive manner to identify regulons, pathways and relevant literature. This study looks at gene regulation during an aerobic to anaerobic transition, which has not previously been investigated. We propose a data-driven methodology that identifies the known pathways and regulons present in a set of differentially expressed genes from a transcriptomic study; these pathways are subsequently used to obtain a corpus of published abstracts (from the PubMed database) relating to each biological pathway.

Keywords — E. coli, Microarray, Taverna, Workflows, Web Services
I. INTRODUCTION

Escherichia coli has been a model system for understanding metabolic and bio-energetic principles for over 80 years and has generated numerous paradigms in molecular biology, biochemistry and physiology [1]. E. coli is also widely used for industrial production of proteins and speciality chemicals of therapeutic and commercial interest. A deeper understanding of oxygen metabolism could improve industrial high-cell-density fermentations and process scale-up. Knowledge of the oxygen regulation of gene expression is important in other bacteria during pathogenesis, where oxygen acts as an important signal during infection [2], and thus this project may underpin better antimicrobial strategies and the search for new therapeutics. However, current approaches have generally been increasingly reductionist, not holistic. Too little is known of how molecular modules are organised in time and space, and how control of respiratory metabolism is achieved in the face of changing environmental pressures. Therefore, a new systems-level approach is needed, which integrates data from all spatial and temporal domains. Many transcriptomic studies using microarrays have analysed data in a user-intensive manner to identify regulons, pathways and
relevant literature. Here, a two-colour cDNA microarray dataset comprising a time-course experiment on Escherichia coli cells during an aerobic to anaerobic transition is used to demonstrate a data-driven methodology that identifies known pathways from a set of differentially expressed genes. These pathways are subsequently used to obtain a corpus of published abstracts (from the PubMed database) relating to each biological pathway identified. In this research, Taverna and Web Services were used to achieve this goal.

II. TAVERNA AND WEB SERVICES

Web services provide programmatic access to data resources in a language-independent manner. This means that they can be successfully connected into data analysis pipelines or workflows (Figure 1). These workflows enable us to process a far greater volume of information in a systematic manner. Unlike manual analysis, which is severely limited by human resources, we are only limited by the processing speed, storage space and memory of the computer executing these workflows. Still, the major problems with current bioinformatics investigations remain: the lack of recording of experimental methods, including the software applications used, the parameters used, and the use of hyperlinks in web pages. The use of workflows limits issues surrounding the manual analysis of data, i.e. the bias introduced by researchers when conducting manual analyses of microarray data. Processing data through workflows also increases the productivity of the researchers involved in the investigations, allowing more time to be spent on investigating the true nature of the detailed information returned from the workflows. For the purpose of implementing this systematic pathway-driven approach, we have chosen to use the Taverna workbench [3,4]. The Taverna Workbench allows bioinformaticians to construct complex data analysis pipelines from components (or web services) located on both remote and local machines.
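In miniature, the pipeline idea is simply an ordered list of components, each consuming the previous component's output. This toy Python sketch stands in for Taverna's service chaining; the two step functions below are hypothetical, not real web services:

```python
# Toy workflow: each step's output feeds the next step's input.
def normalise(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def keep_high(values, cutoff=0.5):
    return [v for v in values if v >= cutoff]

def run_workflow(steps, data):
    for step in steps:
        data = step(data)  # chain the components in order
    return data

print(run_workflow([normalise, keep_high], [2, 4, 6, 8, 10]))  # [0.5, 0.75, 1.0]
```

Because every step is an explicit, named component, the same chain can be re-run on new data unchanged, which is the repeatability argument made for workflow systems above.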
These pipelines, or workflows, can then be executed over a set of unique data values, producing results that can be visualised within the Taverna workbench itself.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 77–80, 2009 www.springerlink.com

Figure 1 – Workflow Diagram

Advantages of the Taverna workbench include repeatability, re-usability, and the limiting of user bias by removing intermediate manual data analysis; in addition, a greater volume of data can be processed in a reduced time period. We propose a data-driven methodology that identifies the known pathways from a set of differentially expressed genes from a microarray study (Figure 1). This workflow consists of three parts: microarray data analysis; pathway extraction; and PubMed abstract retrieval. This methodology is implemented systematically through the use of web services and workflows.

A. Microarray Data Analysis

Despite advances in microarray technology that have led to increased reproducibility and substantial reductions in cost, the successful application of this technology is still elusive for many laboratories. The analysis of transcriptome data in particular presents a challenging bottleneck for many biomedical researchers. These researchers may not possess the necessary computational or statistical knowledge to address all aspects of a typical analysis methodology; indeed, this can be time consuming and expensive, even for experienced service providers with many users. Currently available transcriptome analysis tools include both commercial software (GeneSpring [5], ArrayAssist [6]) and non-commercial software (Bioconductor [7]). The open-source Bioconductor package [7] is one of the most widely used suites of tools among biostatisticians and bioinformaticians in transcriptomics studies. Although highly powerful and flexible, users of Bioconductor face a steep learning curve, which requires them to learn the R statistical scripting language as well as the details of the Bioconductor libraries. The high overheads in using these tools bring a number of disadvantages for the less experienced user: the requirement for expensive bioinformatics support; considerable effort in training; less than efficient utilisation of data; difficulty in maintaining consistent standards and methodologies, even within the same facility; difficult integration of additional analysis software and resources; and limited re-usability of methods and analysis frameworks. The aim of this work was to limit these issues. Users will, therefore, be able to focus on advanced data analysis and interpretational tasks, rather than common repetitive tasks. We have observed that there is a core of microarray analysis tasks common to many microarray projects. Additionally, we have identified a need for microarray analysis software that supports these tasks with minimal training costs for inexperienced users, and can increase the efficiency of experienced users. The Microarray Data Analysis part provides support to construct a full data analysis workflow, including loading, normalisation, T-testing and filtering of microarray data. In addition to returning normalised data, it produces a range of diagnostic plots of array data, including histograms, box plots and principal components analysis plots, using R and Bioconductor.

B. Pathway extraction

This part of the workflow searches for genes found to be differentially expressed in the microarray data, selected based on a given p-value from the Microarray Data Analysis part. Gene identifiers from this part are subsequently cross-referenced with KEGG gene identifiers, which allows KEGG gene descriptions and KEGG pathway descriptions to be returned from the KEGG database.

C. PubMed abstract retrieval

In this part, the workflow takes in a list of KEGG pathway descriptions. The workflow then extracts the biological pathway process from the KEGG-formatted pathway description output. A search is then conducted over the PubMed database (using the eSearch web service) to identify up to 500 abstracts related to the chosen biological pathway.
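The PubMed query step can be sketched as URL construction for the NCBI E-utilities eSearch service. This is a sketch only: the endpoint, parameters and `[MeSH Terms]` field tag come from NCBI's public E-utilities documentation, not from the paper, and no request is actually sent here. `retmax=500` mirrors the "up to 500 abstracts" limit:

```python
from urllib.parse import urlencode

def pubmed_esearch_url(pathway_term, retmax=500):
    """Build (but do not send) an E-utilities eSearch query for a pathway term."""
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {"db": "pubmed",
              "term": f"{pathway_term}[MeSH Terms]",  # MeSH tag to cut false positives
              "retmax": retmax}
    return base + "?" + urlencode(params)

print(pubmed_esearch_url("glycolysis"))
```

In the workflow, the PMIDs returned by such a search would then be passed on to retrieve the abstract records themselves.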
At this stage, a MeSH tag is assigned to the search term in order to reduce the number of false-positive results returned from this initial search. All identified PubMed identifiers (PMIDs) are then passed to the eFetch function to retrieve the corresponding records from PubMed. The abstracts found are then returned to the user along with the initial query string – in this case, the pathway [3].

D. Pie chart

At present, results from transcriptional profiling experiments (lists of significantly regulated genes) have largely been interpreted manually, or using gene analysis software (e.g. GeneSpring, GenoWiz) that can provide links to databases that define pathways, functional categories and gene ontologies. Many databases, such as EcoCyc [8] and RegulonDB [9], contain information on transcriptional regulators and regulons (genes known to be regulated by a particular transcription factor), but automatic interpretation of a transcriptional profiling dataset using these databases is still in its infancy. When applied to the results of a transcriptional profiling experiment, this may confirm the importance of a regulator that is already known, or suggest a role for a previously unknown regulator, which may be investigated further. The pie chart produced indicates the number of genes in a dataset that are regulated by a known transcriptional regulator, or by a combination of regulators, and can suggest previously unknown regulatory interactions. The information for each regulon comes from files created manually from the EcoCyc database.

III. CASE STUDY: ESCHERICHIA COLI

Escherichia coli is a model laboratory organism that has been investigated for many years due to its rapid growth rate, simple growth requirements, tractable genetics and metabolic potential [10]. Many aspects of E. coli are well characterised, particularly with regard to the most familiar strain K-12, with a sequenced genome [11], widespread knowledge of gene regulation (RegulonDB [9]) and well-documented metabolic pathways (EcoCyc [8]). Indeed, it has been said that more is known about E. coli than about any other organism [1], and for these reasons E. coli stands out as a desirable organism on which to work (Mori, 2004).

A. Growth conditions

Escherichia coli K-12 strain MG1655 was grown to steady state in a Labfors-3 bioreactor (Infors HT; Bottmingen, Switzerland) under the following conditions: vessel volume 2 L; culture volume 1 L; Evans medium pH 6.9 [12]; stirring 400 rpm; dilution rate 0.2 h-1.
To create an aerobic culture, 1 L min-1 air was sparged through the chemostat, whilst for anaerobic conditions 1 L min-1 of 5% CO2 / 95% N2 (v/v) was passed through the chemostat. For steady state to be reached, continuous flow was allowed for at least 5 vessel volumes (25 hours) before cultures were used. Gas transitions were carried out on steady-state cultures by switching the gas supply as required.

B. Isolation of RNA

A steady-state chemostat culture was prepared and samples were removed from the chemostat for RNA extraction
just prior to the gas transition and 2, 5, 10, 15 and 20 minutes after the transition. Samples were taken by direct elution of 2 ml culture into 4 ml RNAprotect (Qiagen; Crawley, UK), and RNA was extracted using the RNeasy RNA extraction kit (Qiagen; Crawley, UK) following the manufacturer's instructions. RNA was quantified spectrophotometrically at 260 nm.

C. Transcriptional Profiling

16 µg RNA for each time point was labelled with Cyanine3-dCTP (Perkin-Elmer; Waltham, USA) using Superscript III reverse transcriptase (Invitrogen; Paisley, UK). The manufacturer's instructions were followed for a 30 µl reaction volume, using 5 µg random hexamers (Invitrogen; Paisley, UK), with the exception that 3 nmoles Cyanine3-dCTP and 6 nmoles of unlabelled dCTP were used. Each Cyanine3-labelled cDNA sample was hybridised against 2 µg of Cyanine5-labelled K-12 genomic DNA produced as described by Eriksson [13]. Hybridisation took place on Ocimum OciChip K-12 V2 microarrays (Ocimum; Hyderabad, India) at 42 °C overnight, and slides were washed according to the manufacturer's instructions. Slides were scanned on an Affymetrix 428 microarray scanner at the highest PMT voltage possible that did not give excessive saturation of microarray spots. For each time point, two biological replicates and two technical replicates were carried out.

IV. RESULTS

To analyze the transcriptional dataset, the proposed workflow was applied; this workflow can accept raw transcriptional data files and ultimately generates outputs of differentially regulated genes, relevant metabolic pathways and transcriptional regulators, and even potentially relevant published material. This has many advantages compared to standard transcript profiling analyses. From a user aspect, it is quicker than the time-consuming analysis that currently occurs, and it ensures that the same stringency and statistical methods are used in all analyses, hence making analyses more user-independent.
It can also remove any possibility that users subconsciously 'manipulate' the data. In order to run the workflow, the following parameters were set: NormalizationMethod = rma, StatisticalTestMethod = limma, p-value = 0.05, foldChange = 1, geneNumber = 100. The workflow progresses directly from a microarray file to outputs in the form of plots or text files – in the case of published abstracts, from the PubMed database (Figure 2). Tables displaying the processed data can also be visualised. From the outputs, the relevance of the transcriptional regulators FNR, ArcA and PdhR was immediately noticeable.
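The gene-filtering stage driven by these parameters (adjusted p < 0.05, |logFC| ≥ 1) can be sketched as follows. This is a sketch, not the limma implementation: the ybfD and emrY rows reuse values from the workflow output in Figure 2(d), while geneX is a hypothetical gene added only to show a row failing both cut-offs:

```python
# Filter differentially expressed genes by adjusted p-value and fold change,
# then sort so the strongest down-regulation comes first.
genes = [
    {"id": "ybfD", "logFC": -6.4251, "adj_p": 1.59e-16},
    {"id": "emrY", "logFC": -6.34166, "adj_p": 3.78e-17},
    {"id": "geneX", "logFC": -0.4, "adj_p": 0.20},  # hypothetical; fails both cut-offs
]

def filter_de(genes, p_cutoff=0.05, fc_cutoff=1.0):
    hits = [g for g in genes
            if g["adj_p"] < p_cutoff and abs(g["logFC"]) >= fc_cutoff]
    return sorted(hits, key=lambda g: g["logFC"])

for g in filter_de(genes):
    print(g["id"], g["logFC"])
```

The surviving gene list is what feeds the pathway-extraction and PubMed-retrieval parts of the workflow.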
Figure 2 – Workflow outputs. The workflow produces several outputs at the end of each stage, shown (left-right, top-bottom): (a) raw data image; (b) MA plot after normalization; (c) box plots of the summary data pre- and post-normalisation; (d) filtered and sorted list of differentially expressed genes; (e) title list of relevant papers from PubMed; (f) regulon pie charts of FNR and ArcA.

The filtered and sorted list of the most differentially expressed genes from the workflow output:

Gene.ID   logFC     AveExpr   t-value   P.Value   adj.P.Val   B
ybfD      -6.4251   12.53531  -20.9753  8.60E-20  1.59E-16    34.97318
emrY      -6.34166  9.717566  -22.7469  8.21E-21  3.78E-17    37.19892
bglG      -5.74389  11.21273  -20.1239  2.83E-19  1.84E-16    33.83456
b0309     -5.53166  11.29123  -18.5888  2.72E-18  3.62E-16    31.65645
hemA      -5.52479  11.21758  -18.469   3.27E-18  3.62E-16    31.47933
ydcC      -5.48627  12.03441  -19.6757  5.40E-19  1.91E-16    33.21571
ykgH      -5.44609  11.20526  -17.5063  1.48E-17  5.91E-16    30.01513
melB      -5.3852   11.64157  -19.8371  4.27E-19  1.84E-16    33.44017
insB_2    -5.3752   13.0089   -20.8395  1.04E-19  1.59E-16    34.79474

V. DISCUSSION

This workflow has successfully been used to interrogate a transcriptomic dataset and identify regulators and pathways of relevance. This has been demonstrated using experimental conditions in which the major regulators are known; however, in the study of less-characterised experiments the resulting outputs may be exciting and unanticipated. From a knowledge point of view, an investigator's experience usually covers a limited research theme; hence only regulators and pathways already well known to a researcher tend to be examined in detail. This workflow allows easy interrogation of a dataset to identify the role of potentially every known E. coli transcriptional regulator or metabolic pathway. This can suggest relevant transcriptional networks and unexpected aspects of physiology that would otherwise have been missed by conventional analysis methods.

ACKNOWLEDGMENT

We thank the SUMO team for very useful discussions, and SysMO and the BBSRC for financial support.

REFERENCES

1. Neidhardt, F.C. (Ed. in Chief), Curtiss III, R., Ingraham, J.L., Lin, E.C.C., Low, K.B., Magasanik, B., Reznikoff, W.S., Riley, M., Schaechter, M. and Umbarger, H.E. (eds) (1996) Escherichia coli and Salmonella: Cellular and Molecular Biology. American Society for Microbiology. 2 vols, 2898 pages.
2. Rychlik, I. and Barrow, P.A. (2005) Salmonella stress management and its relevance to behaviour during intestinal colonisation and infection. FEMS Microbiology Reviews 29(5) 1021-1040.
3. Fisher, P., Hedeler, C., Wolstencroft, K., Hulme, H., Noyes, H., Kemp, S., Stevens, R. and Brass, A. (2007) A systematic strategy for large-scale analysis of genotype-phenotype correlations: identification of candidate genes involved in African Trypanosomiasis. Nucleic Acids Research 35(16) 5625-5633.
4. Taverna [http://taverna.sourceforge.net]
5. GeneSpring [http://www.chem.agilent.com]
6. ArrayAssist [http://www.stratagene.com]
7. Bioconductor [http://www.bioconductor.org]
8. Keseler, I.M., Collado-Vides, J., Gama-Castro, S., Ingraham, J., Paley, S., Paulsen, I.T., Peralta-Gil, M. and Karp, P.D. (2005) EcoCyc: a comprehensive database resource for Escherichia coli. Nucleic Acids Research 33 D334-D337.
9. Gama-Castro, S., Jiménez-Jacinto, V., Peralta-Gil, M., Santos-Zavaleta, A., Peñaloza-Spinola, M.I., Contreras-Moreira, B., Segura-Salazar, J., Muñiz-Rascado, L., Martínez-Flores, I., Salgado, H., Bonavides-Martínez, C., Abreu-Goodger, C., Rodríguez-Penagos, C., Miranda-Ríos, J., Morett, E., Merino, E., Huerta, A.M., Treviño-Quintanilla, L. and Collado-Vides, J. (2008) RegulonDB (version 6.0): gene regulation model of Escherichia coli K-12 beyond transcription, active (experimental) annotated promoters and Textpresso navigation. Nucleic Acids Research 36 D120-D124.
10. Hobman, J.L., Penn, C.W. and Pallen, M.J. (2007) Laboratory strains of Escherichia coli: model citizens or deceitful delinquents growing old disgracefully? Molecular Microbiology 64(4) 881-885.
11. Blattner, F.R., Plunkett, G., Bloch, C.A., Perna, N.T., Burland, V., Riley, M., Collado-Vides, J., Glasner, J.D., Rode, C.K., Mayhew, G.F., Gregor, J., Davis, N.W., Kirkpatrick, H.A., Goeden, M.A., Rose, D.J., Mau, B. and Shao, Y. (1997) The complete genome sequence of Escherichia coli K-12. Science 277(5331) 1453-1474.
12. Evans, C.G.T., Herbert, D. and Tempest, D.W. (1970) The continuous culture of microorganisms. 2. Construction of a chemostat. In: Norris, J.R. and Ribbons, D.W. (eds) Methods in Microbiology, vol 2. Academic Press, London/New York, pp 277-327.
13. Eriksson, S., Lucchini, S., Thompson, A., Rhen, M. and Hinton, J.C. (2003) Unravelling the biology of macrophage infection by gene expression profiling of intracellular Salmonella enterica. Molecular Microbiology 47(1) 103-118.
Permeability of an In Vitro Model of Blood Brain Barrier (BBB)

Rashid Amin1,2,3, Temiz A. Artmann1, Gerhard Artmann1, Philip Lazarovici3,4, Peter I. Lelkes3

1 Aachen University of Applied Sciences, Germany; 2 COMSATS Institute of Information Technology, Lahore, Pakistan; 3 Drexel University, Philadelphia, USA; 4 The Hebrew University, Israel
Abstract — The blood brain barrier (BBB) is an anatomical structure composed of endothelial cells, basement membrane and glia that prevents drugs and chemicals from entering the brain. Our aim is to engineer an in vitro BBB model in order to facilitate neurological drug development that will ultimately benefit patients. Tissue engineering approaches are useful for the generation of an in vitro BBB model. Our experimental approach is to mimic the anatomical structure of the BBB on polyethylene terephthalate (PET) cell culture inserts. Endothelial cells derived from brain capillaries, different peripheral blood vessels and epithelial cells (MDCK) were cultured on the apical side of the filter. Different concentrations of thrombin were applied to compact monolayers of MDCK. Physiological function of this BBB model was evaluated by measuring the transendothelial electrical resistance (TEER) using an EndOhm™ electrode. The epithelial cytoskeletal organization was observed by staining with BBZ-phalloidin. Epithelial monolayer formation and its later distortion by thrombin were confirmed by fluorescence microscopy. TEER measurements generated values up to 2020 ohm·cm2. A dose response to thrombin was observed, showing the permeability changes in the epithelial cells (MDCK). A relationship between permeability (TEER) values and cytoskeletal organization was observed.

Keywords — Blood Brain Barrier; Permeability; MDCK; Epithelial cell; Thrombin; Transendothelial Electrical Resistance; TEER; Thrombin receptors on MDCK
I. INTRODUCTION

The presence of a barrier separating blood from brain and vice versa was described for the first time more than 100 years ago by Paul Ehrlich (1885) and confirmed later by Edwin Goldman (1909). Both researchers showed that trypan blue, an albumin-binding dye, dispersed throughout the whole body except the brain following intravenous injection. On the other hand, direct subarachnoidal injection selectively stained only the brain. Neurological disorders include a large variety of brain diseases which may be treated with drugs. Unfortunately, many drugs do not cross the Blood Brain Barrier (BBB). To evaluate the pharmacokinetic properties of neurological drugs, mainly animal studies are performed, which render drug development a long, high-cost process. Therefore, there is a need to develop an in vitro BBB model to measure the permeability of novel drugs which are under research and development. The aim of our research was to develop and characterize a new in vitro model involving endothelial and epithelial cells (MDCK). A novel approach was adopted to study the effects of thrombin on the permeability of a monolayer on a transwell cell culture insert. The results obtained from TEER measurements and changes in the cytoskeleton were then examined.

II. MATERIALS AND METHODS

The cells and equipment used in our experiments are listed in Table 1 and Table 2.

Table 1 Cells
BBMCEC   Bovine Brain Microvascular Capillary Endothelial Cells
MCEC     Microvascular Capillary Endothelial Cells
HAEC     Human Aortic Endothelial Cells
RAOEC    Rat Aortic Endothelial Cells
MDCK     Madin-Darby Canine Kidney (Epithelial Cells)

Table 2 Equipment
Equipment                  Application                               Manufacturer
Cell culture inserts       Cell culture & permeability measurement   BD
EndOhm™ chambers           TEER measurement                          WPI
EVOM                       TEER measurement                          WPI
CytoFluor R Series 4000    Fluorescence plate reader                 Biosystems
LSM 510 microscope         Imaging                                   Zeiss
Permeability analyzer      Permeability measurement                  Cellular Engineering Lab., FH Aachen
A. In Vitro BBB Model

Minimum Essential Medium (MEM) with high glucose (4.5 mg/ml) (ATCC), 10% fetal calf serum (FCS) or fetal bovine serum, and 1% penicillin and streptomycin was used to culture MDCK cells at 80,000 cells/cm2 on transwell cell culture inserts. Ready-to-use phosphate buffered saline (PBS) and sterile 0.25% trypsin solution with 1:2000 ethylene-diamino-tetra-acetate (EDTA) were used to detach the cells from the surface of the flask. Transepithelial transport [6] was measured by transepithelial electrical resistance, TEER [1], to quantify any increase or decrease in the permeability of our model. MDCK cells need minimal ingredients to produce a high TEER.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 81–84, 2009 www.springerlink.com
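Converting a raw EndOhm/EVOM reading into a unit-area TEER is a two-step calculation: subtract the resistance of a blank (cell-free) insert, then multiply by the membrane growth area. The sketch below uses illustrative numbers only, not values from this study.

```python
def teer_ohm_cm2(r_total_ohm, r_blank_ohm, membrane_area_cm2):
    """Unit-area TEER: subtract the blank (cell-free) insert resistance,
    then multiply by the membrane growth area."""
    return (r_total_ohm - r_blank_ohm) * membrane_area_cm2

# Illustrative numbers (not from the paper): a 1.12 cm2 insert reading
# 1900 ohm against a 120 ohm blank insert.
print(round(teer_ohm_cm2(1900, 120, 1.12), 1))  # 1993.6
```

Because TEER is normalised by area, readings from inserts of different sizes become directly comparable.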
Fig. 1 TEER for multiple cell lines at 48 and 72 hours (log scale).

Table 4 TEER for different cell lines
           BBMCEC   STDEV   MCEC     STDEV   RAOEC   STDEV
48 HOUR    10.111   0.850   0.722    0.000   2.722   0.000
72 HOUR    1.556    0.485   19.333   0.758   2.000   0.428
96 HOUR    1.389    0.323   24.167   0.686   1.722   0.548

Table 5 TEER for different cell lines

C. Statistical Analysis
Non-parametric statistical tests were chosen because the group size was n=9. The Kruskal-Wallis test and Mann-Whitney U test were applied. The difference between the groups was accepted as significant when p < 0.05.
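The Mann-Whitney U test used here is built on a rank-sum statistic, which can be sketched in a few lines (in practice one would call scipy.stats.mannwhitneyu and scipy.stats.kruskal rather than hand-roll the test; this sketch computes only the U statistic, not the p-value).

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic via rank sums (midranks for ties)."""
    combined = sorted((v, g) for g, vals in enumerate((x, y)) for v in vals)
    n = len(combined)
    rank_list = [0.0] * n
    i = 0
    while i < n:                         # assign midranks to tied values
        j = i
        while j + 1 < n and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            rank_list[k] = midrank
        i = j + 1
    r1 = sum(r for r, (_, g) in zip(rank_list, combined) if g == 0)
    n1, n2 = len(x), len(y)
    u1 = r1 - n1 * (n1 + 1) / 2          # U for the first sample
    return min(u1, n1 * n2 - u1)         # conventional two-sided U

print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0 (complete separation)
```

A U of zero means the two groups do not overlap at all; interleaved groups give larger U values.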
In PWV-DVP and PWV-DVPE calculation, li is the difference in distance between the measuring sites, which can introduce measurement error. For example, PWV-DVP requires measuring the difference in distance from the sternal notch to the finger and to the toe before measuring hand-foot PWV. If both the subject's hand and foot vessels become stiff, then the difference of

Fig. 1 Measured distances (l1, l2, l3)

III. METHOD

In this experiment, we did not eliminate any subject. The study group comprised 33 healthy subjects (25 male / 8 female); the mean age was 25 years (range 19 to 47), and none of the subjects had any history of cardiovascular disease. Table 1 describes the characteristics of the study population. Blood pressure was measured with the subjects in a sitting position after 5 minutes in a quiet, temperature-controlled environment set at 24°C. Then we measured each subject's distance from the sternal notch to the left index finger (l1), left second toe (l2), and left ear lobe (l3); moreover, we placed the ECG (Lead-II) sensors on the subject's right wrist (negative), left ankle (positive), and right ankle (ground). Before
A Reliable Measurement to Assess Atherosclerosis of Differential Arterial Systems
measuring PWV, the subject rested in the supine position for 4 minutes; we then obtained and stored the ECG-finger pulse wave, ECG-toe pulse wave, and ECG-ear pulse wave at the same time. Afterwards the system automatically calculated ECG_H-PWV, ECG_F-PWV, and ECG_E-PWV. Overall we recorded data 20 times and averaged them as one result. The ECG_H-PWV, ECG_F-PWV, and ECG_E-PWV formulas are as follows:

ECG_H-PWV = l1 / Δt3   (2)
ECG_F-PWV = l2 / Δt4   (3)
ECG_E-PWV = l3 / Δt5   (4)

Table 1 The characteristics of the study population
                                       Male (n=25)   Female (n=8)
Age, yr                                26            22.1
Body Height, cm                        171.8         158.1
Body Weight, kg                        68.16         52
Body Mass Index (BMI), kg/m2           23.1          20.7
Heart Rate, bpm                        79.4          92.2
Systolic Blood Pressure (SBP), mmHg    115           115.9
Diastolic Blood Pressure (DBP), mmHg   69.9          72.9

IV. RESULTS

As shown in Table 2, this study measured five types of PWV for the 33 healthy subjects. The minimum value occurred for ECG_E-PWV and did not exceed 2 m/s, and the maximum value was ECG_F-PWV. As shown in Table 3, these three methods were correlated with statistical significance. As shown in Fig. 4, ECG_F-PWV and PWV-DVP had a statistically significant relationship, but PWV-DVP and PWV-DVPE did not (r=-0.41, p=0.819).

Table 2 PWV of the study population
                 Male   Female   All Subjects
ECG_H-PWV, m/s   4.6    4.3      4.5
ECG_F-PWV, m/s   6.1    5.5      6
ECG_E-PWV, m/s   1.4    1.3      1.4
Table 3 Statistical analysis of the ECG_H-PWV, ECG_F-PWV and ECG_E-PWV (pairwise correlations; e.g. ECG_H-PWV vs ECG_F-PWV: r=0.698, statistically significant)
IV. RESULTS

The following figure shows the results of applying the described methods to a neonatal EEG record. At the top is the compressed EEG signal (80 min); below are the temporal profiles, under them the curves corresponding to steps 1-5 from Fig. 3, and at the bottom the final detection curve.

Fig. 3. The temporal profile computing and processing. 1 - compressed original EEG (80 min), 8 channels. 2 - temporal profiles. 3 - average of 8 profiles. 4 - profile variance (subtraction of the mean and squaring). 5 - variance smoothing (solid black line). 6 - sleep stage detection by threshold crossing. 7 - visual evaluation by a physician (AW-awake, QS-quiet sleep, AS-active sleep). Full-term neonate of 40 weeks PCA.

ARTEFACTS

Artefacts cause problems in automatic EEG analysis. We show a simple method for artefact detection and elimination. Movement artefacts are classified into the classes with the highest value of the amplitude feature. If we cut off the last classes exhibiting the highest amplitude values in the temporal profile, and process the profiles without such graphoelements, the artefacts are eliminated and the quality of the detection curve is improved. An example is shown in Fig. 4. In the original EEG there are amplitude artefacts in the
The lengths of the MA filters were found experimentally. They differ for preterm and full-term neonates (Tab. 1). Nevertheless, once they were set (one set of values for one group of the same age, the other for the group of a different age), the parameters remained the same for each group (age must be known in advance). Another parameter which contributed to the detection curve smoothing was the width of the raster used for summing and averaging the class membership for each channel (WLBin). Its value was constant for all age groups.
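The profile-to-detection-curve pipeline named in steps 3-6 of Fig. 3 (average the per-channel profiles, subtract the mean and square, smooth with an MA filter, threshold) can be sketched as below. Representing each temporal profile as a plain list of numeric class-membership values is an assumption made for illustration, not the authors' data format.

```python
def moving_average(x, wl):
    """Causal MA filter of window length wl (the smoothing step)."""
    return [sum(x[max(0, i - wl + 1):i + 1]) / len(x[max(0, i - wl + 1):i + 1])
            for i in range(len(x))]

def detection_curve(profiles, wl_smooth):
    """Steps 3-5 of Fig. 3: average the per-channel temporal profiles,
    turn the average into a variance signal, then smooth it."""
    n = len(profiles[0])
    avg = [sum(p[i] for p in profiles) / len(profiles) for i in range(n)]
    mean = sum(avg) / n
    var = [(v - mean) ** 2 for v in avg]          # subtract mean, square
    return moving_average(var, wl_smooth)

def detect_stages(curve, threshold):
    """Step 6: threshold crossing marks one sleep stage vs. the other."""
    return [1 if v > threshold else 0 for v in curve]

# Two identical toy channels that jump between class levels:
print(detect_stages(detection_curve([[0, 0, 4, 4], [0, 0, 4, 4]], 1), 1.0))
```

With real profiles the smoothing window (WLSmooth) would be hundreds of samples, as in Tab. 1.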
Fig. 4. Elimination of the artefacts by reducing the number of classes in the profile. The last three classes of 14 total, containing artefacts, were eliminated; they are not present in the relevant profiles (B). A - original profiles, 14 classes. B - artefact elimination, only 10 classes processed.
V. Krajča, S. Petránek, J. Mohylová, K. Paul, V. Gerla and L. Lhotská
upper part of the picture. Below (4A) is the temporal profile with uncorrected class membership (14 classes). The artefacts caused a false increase in the detection curve. If we use only the parts of the profile with class membership less than 11, the noisy part of the profile (with the highest class membership, containing artefacts) is eliminated (4B). The results of the computerized analysis compared with the visual evaluation are presented in Tab. 2. In most cases there was good agreement. In some individual cases, the above-mentioned artefact correction had to be used. Two records of neonates of 32 weeks postconceptional age exhibited no visible changes in the detection curve.

Tab. 2. The three groups of neonates and the agreement with the visual evaluation; the numbers of cases are presented.
PCA (weeks)                31-33   36-38   >40   Total / %
Number of records          10      13      17    40 [100.0%]
Agreement with expert      8       13      12    33 [82.5%]
Correction of artefacts    0       0       5     5 [12.5%]
Not possible to evaluate   2       0       0     2 [5.0%]
V. DISCUSSION

Adaptive segmentation parameters. In all analyzed graphs the same adaptive segmentation parameters were used (window length 256 samples, segmentation threshold 0.5, and minimum segment length 128 samples).

Optimal number of classes. The optimal number of classes was estimated with Bezdek's criterion of validity for the fuzzy c-means algorithm [10]. The optimal values were in the range of 12-18 classes for both term and preterm newborns.

Parameters for profile processing. The only optional parameters for the temporal profiles were the window lengths for creating and smoothing the detection curve. The parameters were kept fixed during all experiments. It was very interesting (and it corresponds to our knowledge of brain maturation) that we had to use two different sets of parameters for full-term and preterm neonates. Once we selected the parameters, their values were kept constant within both groups.

The resolution of stages. The transition from one sleep stage to another is continuous; it does not occur abruptly. There is a question of the delay with which we are able to detect the step-like changes in the detection curve. It depends on the width of the relevant windows and can be expressed as Delay = WLBin * WLSmooth (in samples). Tab. 1 shows the values for a sampling frequency of 256 Hz. The delay affects the initial part of the detection curve processing, and also the transition from one stage to another (Fig. 5).
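The delay relation Delay = WLBin * WLSmooth can be checked numerically; converting samples to minutes reproduces the delay quoted in Fig. 5.

```python
def stage_delay_minutes(wl_bin, wl_smooth, fs_hz):
    """Delay = WLBin * WLSmooth samples, converted to minutes."""
    return wl_bin * wl_smooth / fs_hz / 60

# Values from Fig. 5: WLBin = 100, WLSmooth = 240, fs = 128 Hz.
print(stage_delay_minutes(100, 240, 128))  # 3.125, i.e. about 3 minutes
```

This matches the roughly 3-minute lag visible at stage transitions in Fig. 5.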
Fig. 5. The delays in stage resolution at the beginning of the signal and during the step-like change of sleep stage. Record length 28 min; the delay is 3 minutes (sampling frequency 128 Hz); window lengths are WLBin = 100 samples and WLSmooth = 240 samples.
VI. CONCLUSIONS

The examples proved a good ability of the proposed methodology to model the sleep microstructure for groups of children of different ages ranging from 31 to 40 weeks of PCA, even in cases where the sleep transition was not evident in the original EEG records. The best results were observed for the newborns of 36-38 weeks of PCA. The older children (40 weeks) also exhibited the structure of EEG stages, but because of their activity the EEG was more contaminated by movement artefacts. Nevertheless, QS could be detected very reliably, and the transitions from QS to AS and vice versa were clearly visible. The age of the children (brain maturation) was reflected in the values of the parameters necessary for successful distinction of the sleep stages in the detection curve: there were two stable sets of parameters, one for full-terms and another for preterms. If the EEG analysis did not prove distinctive with the relevant parameter set, consultation with the physician disclosed that the EEG of the neonate had been visually described as immature. The methodology also includes the extraction of quantitative parameters (segment boundaries detected by adaptive segmentation, features describing each segment) which reflect the differences between quiet and active sleep and the related neonatal brain maturation.
ACKNOWLEDGMENT

This work was supported by grants IGA 1A8600-4 and AV 1ET101210512.
Modeling the Microstructure of Neonatal EEG Sleep Stages by Temporal Profiles

REFERENCES

1. Scher M.S. (2004) Automated EEG-sleep analyses and neonatal neurointensive care. Sleep Medicine 5, pp. 533-540.
2. Jansen B.H., Hasman A., Lenten R., Pikaar R. (1980) Automatic sleep staging by means of profiles. In: Lindberg D.A.B., Kaihara S. (eds) MEDINFO 80, Amsterdam: North-Holland, pp. 385-389.
3. Bodenstein G., Praetorius H.M. (1977) Feature extraction from the electroencephalogram by adaptive segmentation. Proc. of the IEEE 65(5), pp. 642-652.
4. Barlow J.S. (1985) Computer characterization of tracé alternant and REM patterns in the neonatal EEG by adaptive segmentation - an exploratory study. Electroenceph. Clin. Neurophysiol. 60, pp. 163-173.
5. Krajca V., Petranek S., Paul K., Matousek M., Mohylova J., Lhotska L. (2005) Automatic detection of sleep stages in neonatal EEG using the structural time profiles. In: EMBC05, Proceedings of the 27th Annual International Conference of the IEEE-EMBS, September 1-4, Shanghai, China.
6. Paul K., Krajca V., Roth Z., Melichar J., Petranek S. (2006) Quantitative topographic differentiation of the neonatal EEG. Clinical Neurophysiology 117, pp. 2050-2058.
7. Gerla V., Paul K., Lhotska L., Krajca V. (2008) Multivariate analysis of the full-term neonatal polygraphic data. IEEE Trans. Inf. Tech. in Biomed. (in press).
8. Krajca V., Petranek S., Patakova I., Varri A. (1991) Automatic identification of significant graphoelements in multi-channel EEG recordings by adaptive segmentation and fuzzy clustering. Int. J. Biomed. Comput. 28, pp. 71-89.
9. Anderberg M.R. (1974) Cluster Analysis for Applications. Academic Press, New York.
10. Bezdek J.C. (1981) Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York.

Author: Vladimír Krajča
Institute: Faculty Hospital Na Bulovce
Street: Budínova 2
City: Prague 6
Country: Czech Republic
Email: [email protected]
Optimization and Characterization of Sodium MRI Using 8-channel 23Na and 2-channel 1H RX/TX Coil

J.R. James1,2, C. Lin1, H. Stark3, B.M. Dale4, N. Bansal1,2

1 Department of Radiology, Indiana University School of Medicine, Indianapolis, IN, USA
2 School of Health Sciences, Purdue University, West Lafayette, IN, USA
3 Stark Contrast, MRI Coils Research, Erlangen, Germany
4 Siemens Medical Solutions, Cary, NC, USA
Abstract — The initial results of in vivo 23Na magnetic resonance imaging (MRI) of the human torso at 3 Tesla using an 8-channel dual tuned 23Na and 1H transmit/receive coil for various body applications are presented. We are able to obtain 23Na images of the human torso with 0.3 cm spatial resolution and ~20 SNR in ~15 min. These images were acquired with imaging parameters optimized under the specific absorption rate limit for human scans. Because a trans-membrane sodium gradient is critical for cell survival, the ability to perform 23Na MRI of the torso in clinical settings would be useful to non-invasively detect and diagnose a number of diseases of the liver, gallbladder, pancreas, kidney, spleen and other body organs.

Keywords — sodium, MRI, in vivo, SAR
I. INTRODUCTION

Non-invasive sodium (23Na) MRI has the ability to advance MR imaging beyond the routinely used proton (1H) MRI because it can provide profound information about tissue physiology and metabolism at the cellular level. The motivation for 23Na MRI is based on the fact that in certain diseased states there is an increase in tissue [Na+] due to interstitial and/or cytotoxic edema. Interstitial edema is caused by inflammation and vascular changes, and results in an increased relative extracellular space, where the Na+ concentration is ~10 times higher than the intracellular concentration. Cytotoxic edema is caused by plasma membrane disruption and improper functioning of the Na+-K+ pump, and results in increased intracellular [Na+]. The impact of 23Na MRI has been overshadowed by existing technical difficulties when performing 23Na MR scans. These difficulties arise from the inherent biological and MR properties of the 23Na nucleus. The tissue 23Na MR signal is ~10^-4 times weaker than the water signal because of its lower tissue concentration and MR receptivity compared to 1H. In addition, 23Na has a very short transverse relaxation time (T2: ~10 ms), which results in significant signal loss from excitation to imaging data acquisition. These properties of 23Na translate to long image data acquisition times, limited spatial resolution and low signal-to-noise ratio (SNR).

Some of the areas that could be improved upon for obtaining better quality 23Na MRI include: 1) increased main magnetic field strength, 2) efficient RF coil design, and 3) pulse sequence and imaging protocol optimization. Our goal in this work was to enhance the quality of 23Na MRI of the human torso using a 3-Tesla Siemens MR scanner and an 8-channel phased-array dual tuned 23Na and 1H transmit (Tx)/receive (Rx) coil. A gradient-echo imaging sequence, commonly available on all clinical scanners, was optimized for the best possible combination of SNR, spatial resolution, and scan time without exceeding the US Food and Drug Administration (FDA) recommended specific absorption rate (SAR).

II. METHODS

A. Coil design

The dual tuned 23Na/1H coil design for developing 23Na imaging on a Siemens 3T TIM Trio scanner was adapted from Lanz et al. [1]. The coil array consisted of two identical top and bottom plates (30 x 30 cm2) as shown in Figure 1. The top plate was slightly curved to fit the curvature of the human torso. Each plate consisted of one 23Na Tx loop and four 23Na Rx elements that covered a field-of-view (FOV) of 20 x 24 cm2. The two large Tx loops provided a relatively homogeneous radio-frequency (RF) excitation and deep penetration, and the eight Rx elements provided a high filling factor and SNR. Each coil plate also consisted of a 1H Tx/Rx loop arranged around the 23Na Rx arrays for 1H MRI for co-registration purposes without having to use the in-built body coil. The flexibility to use the in-built body coil when the 8-channel dual tuned 23Na/1H coil is plugged in was made possible by cable traps and detuning units. A number of low-noise pre-amplifiers were also incorporated to reduce noise in the image. Two sets of polyethylene tubes (diameter 6 mm) filled with 150 mM NaCl were placed around the top and bottom 23Na Rx elements covering 20 x 24 cm2. These tubes served as fiduciary markers for co-registration purposes, signal intensity normalization and RF inhomogeneity correction.

Fig. 1: Design of a dual tuned 8-channel 23Na and 1H torso coil (plates 30 x 30 cm; Rx field-of-view 20 x 24 cm).

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 138–141, 2009 www.springerlink.com

B. Pulse sequence optimization

A 10 L plastic carboy filled with 50 mM sodium chloride (NaCl) was used for pulse sequence development and protocol optimization. A modified 3D gradient-echo sequence was used for acquiring trans-axial 23Na images with the 8-channel dual tuned 23Na/1H coil. A very short echo time (TE) was used to minimize signal loss due to transverse relaxation. The short TE was achieved by using asymmetric echoes combined with volumetric interpolation. The minimum allowed receiver bandwidth (BW) was used to reduce the noise level. The RF transmitter was calibrated by measuring 23Na signal intensity as a function of transmitter voltage, and a flip angle that matched the Ernst angle condition was used for achieving maximum SNR. Use of a short repetition time (TR) allowed collection of a large number of signal averages over a reasonable time period. Elliptical acquisition was applied to further improve the SNR and to maintain the spatial resolution while reducing imaging time. The 10 L phantom was imaged using both the 8-channel dual tuned 23Na/1H coil and the scanner's in-built body coil for SNR comparisons in the 1H images. Trans-axial 1H images were acquired using a Half Fourier Acquisition Single Shot Turbo Spin Echo (HASTE) sequence with clinically approved and optimized parameters.
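The Ernst angle condition used for the flip-angle calibration is θ = arccos(exp(−TR/T1)). The sketch below evaluates it; the tissue 23Na T1 of ~35 ms is an assumed illustrative value, not a number from this paper.

```python
import math

def ernst_angle_deg(tr_ms, t1_ms):
    """Ernst angle: the flip angle maximising spoiled gradient-echo
    signal for a given TR/T1 ratio."""
    return math.degrees(math.acos(math.exp(-tr_ms / t1_ms)))

# With the in vivo TR of 12 ms and an assumed tissue 23Na T1 of ~35 ms,
# the Ernst angle comes out near 45 degrees, broadly consistent with the
# 50-degree flip angle used in the protocol.
print(round(ernst_angle_deg(12, 35)))  # 45
```

Short-T1 nuclei such as 23Na tolerate large flip angles at short TR, which is part of why rapid averaging works well here.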
The centers of the slices for both 23Na and 1H images were matched for co-registration and further quantification. Shimming of the magnet was done using the water 1H signal prior to acquiring any MR images. An automatic 3D field map technique and manual shimming were employed to achieve the minimum full width at half maximum (FWHM) of the signal in the frequency domain over the entire volume, to improve SNR.

C. Specific Absorption Rate (SAR) evaluation of the coil for patient safety

SAR for the 8-channel dual tuned 23Na/1H coil was characterized using the FDA-recognized 'Calorimetric Method' described in the National Electrical Manufacturers Association (NEMA) Standards [2]. The SAR measurements were performed at both 23Na and 1H frequencies using the optimized pulse sequence and imaging parameters to be used for in vivo human imaging. The same phantom was used as for pulse sequence optimization. The 50 mM NaCl filling solution yielded a coil loading equivalent to that of a human. The phantom also contained 0.5 mM Omniscan Gadodiamide (Nycomed Inc, Princeton, NJ) to shorten the 1H relaxation times. The phantom was wrapped in two layers of plastic sheet to avoid heat loss. A Digi-Sense standard temperature controller with a Type-T high temperature thermocouple probe (Barnant Co., Barrington, IL) was used to measure the temperature of the phantom filler material. The accuracy of this temperature measuring device was 0.1 °C over -200 to 400 °C. The phantom was equilibrated with the surrounding temperature in the magnet bore for at least 2 hrs with the fan in the scanner turned on. The initial temperature of the phantom (Ti) was noted using the temperature sensor. The phantom was then placed at the iso-center of the magnet for scanning. The 23Na and 1H MRI sequences were run repeatedly for ~1 hr each to achieve a significant increase in temperature. The final temperature (Tf) of the phantom was noted after completing each of the 23Na and 1H scans. The energy, E (in joules), absorbed by the phantom of mass, M (in kilograms), and specific heat, c (in joules/kg·°C), was calculated using the equation E = M × c × (Tf - Ti). The average power, P (in watts), during the total scan time, τ (in seconds), was calculated using the equation P = E/τ. The SAR was calculated using the equation SAR = P/M.

D. Feasibility studies for in vivo imaging

In vivo 3D 23Na images of healthy volunteers were acquired using the 8-channel dual tuned 23Na/1H coil after obtaining informed consent.
The subjects were positioned supine on the bottom plate of the coil with the liver positioned in the centre of the coil. The curved top plate was placed on top of the torso and aligned with the bottom plate using laser markers. No respiratory or cardiac gating was used. 3D trans-axial 23Na images were acquired with the following optimized parameters: pulse sequence: FLASH, TR = 12 ms, TE = 2.81 ms, number of transits = 128, BW = 130 Hz/px, flip angle = 50°, data matrix = 128 × 128, FOV = 40 × 40 cm2, number of slices = 12, slice thickness = 20 mm. Total imaging time was 14.15 min. 3D multi-slice 1H images were acquired using the dual tuned 23Na/1H coil without moving the patient for anatomical comparison. The following imaging parameters were used: pulse sequence: HASTE, TR = 1000 ms, TE =
105 ms, number of transits = 1, slice orientation = trans-axial, data matrix = 512 × 512, FOV = 40 × 40 cm2, number of slices = 24, slice thickness = 8 mm, and slice gap = 2 mm. Total imaging time was 1.25 min. SNR for all images was estimated by dividing the average signal in a given region of interest (ROI) by the standard deviation of the background noise.

III. RESULTS

1H MR images of the 10 L NaCl phantom collected using the HASTE imaging sequence with the 8-channel 23Na/1H coil and the in-built body 1H coil are compared in Figure 2. The 8-channel dual-tuned 23Na/1H coil gave approximately two times better average SNR than the in-built body coil. The 8-channel 23Na/1H coil also provided better shimming than the body coil with both the automatic 3D phase mapping technique and manual adjustment of shim currents. 23Na images of the same phantom collected using the 8-channel 23Na/1H coil with the 3D FLASH imaging sequence and the optimized imaging parameters had an average SNR of ~20.
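The SNR estimate defined above (mean ROI signal divided by the standard deviation of the background noise) can be sketched directly; the pixel values below are toy numbers chosen to land at the SNR of ~20 reported for the optimised phantom images.

```python
def snr(roi_values, background_values):
    """SNR = mean signal in the ROI / standard deviation of the
    background noise (population standard deviation)."""
    mean_sig = sum(roi_values) / len(roi_values)
    mu_bg = sum(background_values) / len(background_values)
    std_bg = (sum((v - mu_bg) ** 2 for v in background_values)
              / len(background_values)) ** 0.5
    return mean_sig / std_bg

# Toy pixel values: ROI mean 200 against background noise with std 10.
print(snr([190, 200, 210], [0, 20, 0, 20]))  # 20.0
```

In practice both regions would be drawn on the magnitude image, with the background ROI placed well away from the object and any ghosting.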
Fig. 2: Proton images of the 10 L NaCl phantom acquired with (a) the dual tuned 8-channel 23Na/1H coil (SNR: 73) and (b) the in-built body coil (SNR: 37) for SNR comparison

SAR measurements using the FDA-recognized NEMA standards “Calorimetric Method” showed that the 8-channel 23Na/1H coil produced a temperature increase of 1.1 °C during 23Na MRI and 1.0 °C during 1H MRI over one hour. These temperature changes correspond to an SAR of 1.1 W/kg at the 23Na frequency and 1.2 W/kg at the 1H frequency. The accuracy of the thermocouple used for measuring the phantom temperature was ±0.1 °C, which introduced approximately 10% error in the SAR measurements. These experimentally determined SAR values are approximately one-fourth of the maximum safe SAR of 4 W/kg recommended by the FDA for torso and head MRI. The feasibility of acquiring 23Na images of the human torso with the 8-channel 23Na/1H coil is shown in Figure 3. A series of representative trans-axial 23Na images and the corresponding 1H images of the torso are shown in the figure. The left and right ventricles and the septum of the heart can be seen in the first slice of the 23Na images. The ventricles appear hyper-intense in 23Na MRI due to the high [Na+] in blood. The liver can be seen in slices 2-5 with relatively homogeneous signal intensity. The gallbladder is seen as a hyper-intense feature between the liver lobes in slices 5 and 6. The kidneys in the last three slices appear hyper-intense because of the high [Na+] in the medulla. The vertebral canal and spine also show a hyper-intense signal in all slices due to the high [Na+] in the cerebrospinal fluid. The in-plane resolution of the 23Na images was 3.1 × 3.1 mm² and the average SNR in the images was ~20. The corresponding 1H images collected for anatomical comparison had a resolution of 0.08 × 0.08 cm².
Fig. 3: Selected trans-axial 23Na images (A) and the corresponding 1H image (B) of a healthy volunteer acquired on the 3-T system using the 8-channel dual tuned 23Na/1H coil.
IV. DISCUSSION

The ability to obtain in vivo 23Na images of the torso at 3 T using the 8-channel 23Na/1H coil appears very promising. These images were acquired in less than 15 minutes, a time period acceptable for clinical imaging. The images had reasonably good resolution and SNR considering that the 23Na signal in tissue is ~10^4 times weaker than the water 1H signal. 3D imaging of the whole volume was employed for better coverage and for shorter imaging time, since it aided in
Optimization and Characterization of Sodium MRI Using 8-channel 23Na and 2-channel 1H RX/TX Coil
reducing the TE considerably by allowing the use of a non-selective RF excitation pulse. The various changes made to the pulse sequence included the use of asymmetric k-space sampling combined with volumetric interpolation, use of the minimum allowed receiver BW, a flip angle that matched the Ernst angle condition, and a short TR with a large number of signal averages. These optimizations were made to the modified 3D GE pulse sequence to obtain 23Na images with maximum SNR.

For in vivo human imaging, the FDA recommends that the SAR should not exceed 8 W/kg in any gram of tissue (head or torso) and 4 W/kg averaged over the whole body. SAR is directly proportional to the square of the RF amplitude (B1). For a given flip angle, B1 is inversely proportional to the gyromagnetic ratio (γ) of the nucleus, so a 23Na pulse requires a B1 roughly four times larger than a 1H pulse. But SAR is also directly proportional to the square of the Larmor frequency (ω²). Because ω is correspondingly lower for 23Na than for 1H, the SAR values computed for 23Na and 1H are almost identical [3]. The results of our SAR measurements show that the SAR values for 23Na and 1H MRI were 1.1 ± 0.1 and 1.2 ± 0.1 W/kg, respectively. These values are approximately one-fourth of the maximum safe SAR of 4 W/kg recommended by the FDA. The heat deposition is expected to be less than 1.1-1.2 W/kg during in vivo 23Na MRI experiments because of heat dissipation by blood flow and perfusion.

Though the efficient coil design with eight receive elements allowed us to image a large torso region in all directions with optimum SNR, a significant B1 inhomogeneity artifact was present in the images because the sensitivity of a coil decreases as a function of distance from the coil elements. There was an exponential drop in signal intensity towards the centre of the image, which made the central liver region appear darker (Figure 3). We plan to overcome this problem by applying RF field inhomogeneity corrections using fiduciary markers and a reference placed near the torso.
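The cancellation argued above — a larger B1 needed for 23Na being offset by its lower Larmor frequency — can be checked numerically. This sketch assumes the simple scaling SAR ∝ (B1·ω)² and uses standard gyromagnetic ratios in MHz/T.

```python
# Relative SAR of 23Na vs 1H at fixed B0 and fixed flip angle (illustrative).
# For a given flip angle, B1 scales as 1/gamma; the Larmor frequency
# omega = gamma * B0 scales as gamma, so the nucleus dependence cancels.
GAMMA_H = 42.577   # 1H gyromagnetic ratio, MHz/T
GAMMA_NA = 11.262  # 23Na gyromagnetic ratio, MHz/T

b1_ratio = GAMMA_H / GAMMA_NA        # ~3.8x larger B1 needed for 23Na
omega_ratio = GAMMA_NA / GAMMA_H     # ~3.8x lower Larmor frequency for 23Na
sar_ratio = (b1_ratio * omega_ratio) ** 2  # nucleus dependence cancels
```

Under this scaling the SAR ratio comes out to 1, consistent with the nearly identical measured values for the two nuclei.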
There are numerous possible applications of in vivo 23Na MRI in the abdominal region. 23Na MRI could be useful for detecting and assessing ischemic damage, which results in an increase in intracellular [Na+] [4]. It could aid in the diagnosis of hepatitis and cirrhosis, which produce changes in intracellular [Na+] and extracellular matrix structure [5]. 23Na MRI may also prove useful for distinguishing between benign and malignant tumors, since rapidly proliferating
tumor cells have been shown to have a high intracellular [Na+] [6]. This technique could also be used for monitoring response to cancer therapy. 23Na MRI is expected to be useful for the diagnosis of many acute and chronic renal diseases because the kidneys play a key role in maintaining the sodium homeostasis of the body. Thus, the feasibility of high-quality 23Na MRI demonstrated here encourages us to develop 23Na MRI for a wide range of clinical applications.

V. CONCLUSIONS

With the 8-channel 23Na/1H coil and optimized imaging parameters, 23Na MR images can be acquired with an in-plane spatial resolution of 0.3 cm and an SNR of ~20 within 15 min at 3 T without exceeding the SAR limit for human imaging. The obtained 23Na images allow clear delineation of the different abdominal organs and their sub-regions. This technique is likely to yield useful information for studying normal and abnormal physiology because of the great physiological significance of the trans-membrane sodium gradient. Future development of 23Na MRI will focus on evaluating the technique for disease diagnosis and for monitoring therapy response.
REFERENCES

1. Lanz T., Mayer M., Robson M.D., Neubauer S., Ruff J., Weisser A. (2007) An 8-channel 23Na heart array for application at 3 T. Proceedings of the International Society for Magnetic Resonance in Medicine.
2. Characterization of the Specific Absorption Rate for Magnetic Resonance Imaging Systems (1997) National Electrical Manufacturers Association, NEMA Standards Publication, at http://www.nema.org/stds/ms8.cfm.
3. Perman W.H., et al. (1986) Methodology of in vivo human sodium MR imaging at 1.5 T. Radiology 160(3):811-820.
4. Babsky A.M., et al. (2008) Evaluation of extra- and intracellular apparent diffusion coefficient of sodium in rat skeletal muscle: effects of prolonged ischemia. Magn Reson Med 59(3):485-491.
5. Hopewell P.N., Bansal N. (2008) Noninvasive evaluation of nonalcoholic fatty liver disease (NAFLD) using 1H and 23Na magnetic resonance imaging and spectroscopy in a rat model. Proceedings of the International Society for Magnetic Resonance in Medicine.
6. Nagy I.Z., et al. (1981) Intracellular Na+:K+ ratios in human cancer cells as revealed by energy dispersive x-ray microanalysis. J Cell Biol 90(3):769-777.
Non-invasive Controlled Radiofrequency Hyperthermia Using an MR Scanner and a Paramagnetic Thulium Complex J.R. James1,2, V.C. Soon1, S.M. Topper1, Y. Gao1, N. Bansal1,2 1
Department of Radiology, Indiana University School of Medicine, Indianapolis, Indiana, USA 2 School of Health Sciences, Purdue University, West Lafayette, Indiana, USA
Abstract — An MR technique has been developed to administer controlled radiofrequency (RF) hyperthermia (HT) to treat sc-implanted tumors using an MR scanner and its components. The method uses the 1H chemical shift of TmDOTA- to monitor tumor temperature non-invasively. The desired HT temperature is achieved and maintained using a feedback loop with a proportional-derivative (PD) controller. The RF HT technique was able to heat the tumor from 33 to 45 °C in ~10 min and to maintain the tumor temperature within ±0.2 °C of the target temperature. Simultaneous monitoring of the metabolic changes during RF HT using multinuclear MRS techniques showed a significant increase in [Nai+] as measured by TQF 23Na MRS and a significant decrease in cellular bioenergetics and pH as measured by 31P MRS.

Keywords — RF hyperthermia, MRI, sodium, pH, cellular energy
I. INTRODUCTION

Radiofrequency (RF) hyperthermia (HT) in combination with radiotherapy or chemotherapy has proven useful for treating some human cancers. In HT therapy, the temperature of a tumor is raised a few degrees (to ~42-45 °C) above the normal body temperature. HT can cause changes in cellular membrane permeability and ion exchange processes that could play a key role in cellular damage and death [1,2]. A magnetic resonance (MR) spectrometer, being equipped with an RF system to excite spins, enables in-magnet HT application with simultaneous non-invasive temperature monitoring and control. Combining RF HT with multinuclear MR measurements enables investigation of the metabolic and physiological effects of HT treatment on tumors. Our goals in this work were: a) to develop a robust non-invasive method for delivering in-magnet controlled RF HT to subcutaneously- (sc-) implanted tumors using the same RF volume coil as used for MR data collection, and to incorporate this RF HT technique with 23Na and 31P MR spectroscopy (MRS) data collection; and b) to apply the developed controlled HT technique to monitor the effects of HT on total and intracellular Na+ measured by single-quantum (SQ) and multiple-quantum-filtered (MQF) 23Na
MRS, and on cellular energy status (ATP/Pi) and intra- and extracellular pH (pHi and pHe, respectively) measured by 31P MRS, in sc-implanted 9L-glioma in rats.

II. METHODS

A. In-magnet controlled RF HT during 23Na and 31P MRS

All MR experiments were performed on a Varian 9.4-T, 31 cm diameter horizontal bore scanner. A 20 mm diameter slotted tube resonator, dual tuned to 400 MHz for 1H and either 106 MHz for 23Na or 163 MHz for 31P, was used for MR data collection and heating. Initial development and testing were performed on a 2 ml bottle containing 2 mM TmDOTA-, 10 mM sodium phosphate and 0.9% NaCl in 6% agarose gel. The temperature of the sample was derived from the 1H chemical shift of TmDOTA-. RF heating was irradiated at 400 MHz during 23Na or 31P MRS data collection. The phantom was initially heated at different constant RF power levels to obtain open loop temperature curves. These curves were used to model in-magnet RF heating in a simulation program designed to evaluate the optimum values of the proportional (Kp), integral (Ki) and derivative (Kd) constants for the proportional-integral-derivative (PID) controller. The simulation program was developed in Simulink, Matlab 7.0 (MathWorks, MA). As shown in Fig 1, the program used a feedback loop to adjust the RF power, u(t), based on the error between the target temperature and the current sample temperature, e(t), using the following equation:
u(t+1) = u(t) + Kp·e(t) + Kd·de(t)/dt + Ki·∫e(t)dt    (1)
The following criteria were used for choosing the optimal values of Kp, Ki and Kd: (a) a rise time of 10-15 min to reach the target temperature, (b) a maximum overshoot of 0.5 °C, and (c) a maximum settling time of 5 min after achieving the target temperature. After optimizing the PID controller with computer simulations, the controller was implemented on the Varian MR scanner and tested using the same phantom and dual tuned volume coil as used for the open loop curves. 1H and
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 142–145, 2009 www.springerlink.com
23Na or 31P spectra were continuously collected during the heating experiment. The RF power was set to a maximum of 33 W and modulated between 0 and 4095 DAC (digital-to-analog) units. The PID controller constants were set to the optimal values determined from the simulation results.
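The feedback loop can be illustrated with a positional PID update driving a crude first-order heating model. This is a sketch only: the plant constants (heating gain, cooling time constant) and the controller gains are invented for illustration and are not the paper's optimized values or the incremental form of Eq. (1).

```python
# Positional PID controlling a first-order thermal plant (all constants illustrative).
def simulate_pid(target, t0=30.0, kp=50.0, ki=0.0, kd=50.0,
                 steps=600, dt=1.0, u_max=4095.0):
    temp, integ, prev_err = t0, 0.0, None
    for _ in range(steps):
        err = target - temp
        integ += err * dt
        deriv = 0.0 if prev_err is None else (err - prev_err) / dt
        # RF power command, clipped to the DAC range as in the experiment
        u = max(0.0, min(u_max, kp * err + ki * integ + kd * deriv))
        prev_err = err
        # first-order plant: heating proportional to u, cooling toward baseline
        temp += dt * (0.002 * u - (temp - t0) / 300.0)
    return temp

final_temp = simulate_pid(45.0)
```

With proportional-dominated gains like these, the loop settles slightly below the target (steady-state error), which is one motivation for tuning Ki and Kd in simulation before running on the scanner.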
Fig 1 Block diagram of the PID controller for maintaining tumor temperature during RF HT

B. Controlled Hyperthermia with MR Thermometry and Metabolic Measurements

The above in-magnet controlled RF HT technique combined with 23Na and 31P MRS data collection was applied to investigate the effects of HT (45 °C for 30 min) on [Nat+] and [Nai+], cellular energy status (ATP/Pi), and pHi and pHe in sc-implanted 9L tumors before, during and after HT treatment. 9L glioma cells (4 × 10⁶) were subcutaneously implanted and grown in Fischer rats that were 6 weeks of age (70-135 g). The tumors were grown to ~2 cc before they were used for MR experiments. A nephrectomy was performed to avoid renal clearance of TmDOTA- during imaging experiments. A jugular vein was cannulated through a midline neck incision to inject 3-4 ml of 100 mM TmDOTA- prior to the MR experiments. A rectal fiber-optic probe (FOP) was inserted to monitor core body temperature. The rat was positioned on the cradle and the tumor was placed inside the 20 mm diameter slotted tube resonator through a copper tape placed above the coil to avoid contamination from muscles around the tumor. A pneumatic pillow sensor (SA Instruments Inc., NY) was placed under the animal for monitoring respiration during the MR experiments. The whole setup with the cradle was then positioned inside the horizontal bore magnet. The animal core body temperature was maintained at ~35 °C by blowing warm air with a commercial hair dryer into the magnet bore. The protocols for 23Na and 31P MR data collection during in-magnet controlled RF HT experiments are shown in Fig 2. After baseline data collection, the RF power was turned on at 400 MHz during 23Na or 31P data collection. The target temperature was set to 45 °C and the PID controller was set to the optimal values determined from computer simulations. The RF heating was turned off after 30 min of HT and the tumor temperature was allowed to return to the baseline temperature. The heating and cooling rate constants were estimated by exponential fitting using PSI Plot (Poly Software International, UT). The thermal dose at 43 °C (tdm43) in degree-minutes (°C min) was calculated from the time interval, t, the initial temperature, T0, and the average temperature, Tav, during t [3].

a. Effect of HT on SQ and MQF 23Na MRS
A 1 ml vial containing 500 mM NaCl and 2.5 mM TmDOTP5- in 10% agarose gel was placed inside the dual tuned 1H/23Na coil and used as a SQ and TQF 23Na signal reference. TmDOTP5- was used to shift the reference 23Na signal away from the tumor signal, and agarose was used to generate a TQF 23Na signal from the reference. 23Na MRS data were collected at 106 MHz using a 170 μs excitation pulse followed by an acquisition of 1,000 data points over a spectral width of 5 kHz. The data collection time was 13 s for a SQ 23Na spectrum and 1 min and 24 s for a TQF 23Na spectrum. 23Na SQ and TQF spectra were transferred to and processed with NMR Utility Transform Software (NUTS; Acorn NMR, CA). 23Na free-induction decays (FIDs) were
baseline corrected, multiplied by a single exponential corresponding to 10 Hz line broadening, and Fourier transformed. The signal intensities of the 23Na SQ and TQF tumor and reference peaks were determined by integration.

Fig 2 Experimental protocols for SQ and MQF 23Na (top) and 31P (bottom) MRS data collection during controlled RF HT

b. Effect of HT on pHi, pHe and cellular energy status

31P MRS experiments were performed on a separate group of rats. The effects of HT on pHi, pHe and cellular energy status (ATP/Pi) in 9L-glioma were examined before, during, and after HT treatment. The animals were prepared for 31P MRS experiments in a similar manner as described previously, except that a solution containing 75 mM 3-aminopropylphosphonate (3-APP) and 100 mM TmDOTA- was infused through the jugular vein. The 31P signal from 3-APP was used for monitoring pHe. A 1 ml vial containing 100 mM methylphosphonic acid (MPA) was placed inside the dual tuned 1H/31P coil and used as a 31P signal reference. The protocol for 31P MRS data collection during controlled RF HT and temperature monitoring was similar to that used for the 23Na experiments, as shown in Fig 2. 31P MRS data were collected at 165 MHz using a 100 μs excitation pulse followed by an acquisition of 4,000 data points over a spectral width of 20 kHz. The total data collection time for each 31P spectrum was 1 min and 11 s. After data acquisition, four sets of 31P spectra were added together to obtain better SNR and quantification. 31P FIDs were baseline corrected, multiplied by a single exponential corresponding to 25 Hz line broadening, and Fourier transformed. The various signal intensities from the corresponding peaks in the 31P spectra were determined by integration. pHi was estimated using the chemical shift of the Pi signal referenced to the α-ATP signal [4]. pHe was calculated from the shift of the 3-APP signal referenced to α-ATP.

III. RESULTS

The PID-based temperature controller simulation yielded optimal values Kp = 1000, Kd = 1000, and Ki = 0. The controlled RF HT technique was able to heat the phantom from 30 to 45 °C in ~10 min and maintain the phantom temperature at 45.0 ± 0.1 °C for over 60 minutes. The feasibility of controlling and maintaining the temperature in sc-implanted tumors using the PID controller is shown in Fig 3. The tumor temperature was raised from 33.7 to 45.0 °C in about 10-15 minutes.
After the target temperature was achieved, the RF heating power was adjusted automatically as per Eq. 1 to maintain the tumor temperature at 45.0 °C. The RF power varied between 4095 and 2717 DAC units during heating. The rise
time for heating the tumor was slightly longer than that for the phantom due to heat dissipation by blood. In spite of this, the PID controller was able to maintain the tumor temperature at 44.9° ± 0.2 °C. After 30 minutes of HT, the RF power was turned off and the tumor returned to its baseline temperature (33.4 °C) in ~ 1 hr. The average cooling rate constant was estimated to be 16.9 ± 3.4 min. The tumor tdm43 was estimated to be 57 ± 2 min. The rectal temperature was 35 ± 1 °C throughout the experiment including during the HT treatment.
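The thermal dose tdm43 cited above follows the Sapareto-Dewey formulation [3]. A sketch of that calculation: the R values used here are the commonly quoted ones (0.5 above the 43 °C breakpoint, 0.25 below), and the temperature intervals in the example are illustrative, not this experiment's data.

```python
# Thermal dose in equivalent minutes at 43 degC:
# tdm43 = sum over intervals of t * R**(43 - Tav),
# with R = 0.5 at or above 43 degC and R = 0.25 below.
def tdm43(intervals):
    """intervals: list of (duration_min, avg_temp_degC) pairs."""
    dose = 0.0
    for t, temp in intervals:
        r = 0.5 if temp >= 43.0 else 0.25
        dose += t * r ** (43.0 - temp)
    return dose

# Illustrative: 10 min ramping at ~41 degC, then 30 min at 45 degC.
dose = tdm43([(10.0, 41.0), (30.0, 45.0)])
```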
Fig 3 Changes in heating RF power level and tumor temperatures during controlled HT with the PID controller
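The pHi and pHe values reported in this section derive from 31P chemical shifts through a Henderson-Hasselbalch-type relation. A sketch with illustrative constants — the pKa and shift limits below are generic placeholders, not the calibration of ref. [4]:

```python
import math

# pH from a 31P chemical shift (delta, ppm) between its protonated and
# deprotonated limits: pH = pKa + log10((delta - d_acid) / (d_base - delta)).
# pKa, d_acid and d_base are illustrative calibration constants.
def ph_from_shift(delta_ppm, pka=6.77, d_acid=3.29, d_base=5.68):
    return pka + math.log10((delta_ppm - d_acid) / (d_base - delta_ppm))

ph = ph_from_shift(4.5)
```

The same functional form applies to the 3-APP resonance used for pHe, with its own calibration constants.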
Relative changes in SQ and TQF 23Na signal intensity averaged over all tumors (n = 5) are shown in Fig 4A. Controlled RF HT produced a gradual increase in both SQ and TQF 23Na SI. There was a 30-40% increase in TQF 23Na SI compared to only a 12% increase in SQ 23Na SI. TQF 23Na SI continued to increase even after the heating was stopped and the tumor temperature returned to the baseline value. The effects of HT on pHi and pHe and on relative ATP/Pi from 31P spectra averaged over all tumors are shown in Fig 4B and C, respectively. Prior to HT, the tumor pHi and pHe were 7.08 ± 0.10 and 6.89 ± 0.05, respectively. This transmembrane pH gradient in the tumor is opposite to that seen in normal tissues, which have a pHi of 7.1 and a pHe of 7.3-7.4. As the tumor temperature increased with the onset of HT, pHi decreased by ~0.2 units (p < 0.05) but then returned to the normal value within 15 minutes during HT. pHi remained at the baseline value during the remaining experimental period, including the post-HT period. pHe decreased by ~0.15 units (p < 0.05) with the onset of HT, but 10-12 min after the decrease in pHi. pHe remained decreased during the HT period but gradually returned to the baseline value (6.9 ± 0.1) when the
heating was turned off. ATP/Pi decreased by 40% (p < 0.05) during HT and remained depressed after HT.

IV. DISCUSSION

We have been able to develop an in-magnet controlled RF HT technique with the following features: 1) RF HT was delivered using the same RF coil and MR system hardware as used for exciting the nuclear spins. 2) The tumor temperature was monitored from the 1H chemical shifts of TmDOTA-, which are 50-100 times more sensitive to temperature than the water 1H signal. 3) The tumor temperature was maintained within ±0.2 °C of the target temperature with the integrated PD-controller-based RF heating technique. 4) 23Na or 31P MRS data were collected consecutively with 1H MRS data during controlled RF HT for monitoring the metabolic and
physiological effects of the treatment. Among all the NMR parameters that were simultaneously monitored with RF HT in the sc-implanted tumors, Nai+ signal intensity and cellular bioenergetics were the most sensitive to temperature. The dramatic increase in TQF 23Na SI suggests that HT causes an increase in [Nai+]. The initial increase in [Nai+] during HT may be a metabolic response to heating. This increase may result from 1) a decrease in the activity of Na+/K+-ATPase due to reduced cellular ATP levels or 2) an increase in the activity of the Na+/H+ antiporter, which helps maintain pHi. The continued late increase in [Nai+] after HT may result from cellular membrane damage leading to potential cell death. The decrease in ATP/Pi suggests a compromised cellular energy metabolism, which could reduce the activity of Na+/K+-ATPase and lead to an increase in [Nai+]. The decrease in pHi can also increase the activity of the Na+/H+ antiporter to maintain cellular pH homeostasis. This explains the initial drop in pHi followed by its recovery during HT and a consecutive decrease in pHe. In summary, the data presented show that HT causes a marked increase in [Nai+] in sc-implanted 9L glioma due to both a decrease in cellular energy status and increased acid production.

V. CONCLUSIONS

The PD-based RF HT technique presented above, which uses an MR scanner for both RF HT application and MR data acquisition, provides a robust non-invasive method for delivering controlled in-magnet RF HT to sc-implanted tumors using the same RF volume coil as used for MR data collection. Simultaneous measurements of sodium and cellular energetics during HT treatment show that 23Na and 31P MRS will prove very useful for monitoring therapy responses during HT treatment. The in vivo tumor data presented show that HT causes a dramatic increase in [Nai+] due to significant decreases in cellular ATP and pH.

REFERENCES
Fig 4 Effects of HT on A) Nat+ as measured from SQ and Nai+ as measured from TQF 23Na signal intensity, B) pHi and pHe, and C) ATP/Pi measured from 31P MRS

1. Amorino G.P., Fox M.H. (1996) Heat-induced changes in intracellular sodium and membrane potential: lack of a role in cell killing and thermotolerance. Radiat Res 146(3):283-292.
2. Babsky A., Hekmatyar S.K., Wehrli S., Nelson D., Bansal N. (2005) Hyperthermia-induced changes in intracellular sodium, pH and bioenergetic status in perfused RIF-1 tumor cells determined by 23Na and 31P magnetic resonance spectroscopy. Int J Hyperthermia 21(2):141-158.
3. Sapareto S.A., Dewey W.C. (1984) Thermal dose determination in cancer therapy. Int J Radiat Oncol Biol Phys 10(6):787-800.
4. Kost G.J. (1990) pH standardization for phosphorus-31 magnetic resonance heart spectroscopy at different temperatures. Magn Reson Med 14(3):496-506.
Automatic Processing of EEG-EOG-EMG Artifacts in Sleep Stage Classification S. Devuyst1, T. Dutoit1, T. Ravet1, P. Stenuit2, M. Kerkhofs2, E. Stanus3 1
Faculté Polytechnique de Mons, TCTS Lab, Avenue Copernic, 1, B-7000 Mons, Belgium,
[email protected] 2 Sleep Laboratory, CHU Vésale, Montigny-le-Tilleul, Belgium 3 Tivoli Hospital, La Louvière, Belgium
Abstract — In this paper, we present a series of algorithms for dealing with artifacts in electroencephalograms (EEG), electrooculograms (EOG) and electromyograms (EMG). The aim is to apply artifact correction whenever possible in order to lose a minimum of data, and to identify the remaining artifacts so as not to take them into account during sleep stage classification. Nine procedures were implemented to minimize cardiac interference and slow undulations, and to detect muscle artifacts, failing electrodes, 50/60 Hz mains interference, saturations, abrupt transitions, EOG interferences and artifacts in EOG. Detection methods were developed in the time domain as well as in the frequency domain, using adjustable parameters. A database of 20 excerpts of polysomnographic sleep recordings scored for artifacts by an expert was available for developing (excerpts 1 to 10) and testing (excerpts 11 to 20) the automatic artifact detection algorithms. We obtained a global agreement rate of 96.06%, with sensitivity and specificity of 83.67% and 96.47%, respectively.

Keywords — Artifacts processing, EEG, EOG, EMG, ECG.
I. INTRODUCTION

While trying to automatically classify sleep stages, one is generally faced with the problem of artifacts. Indeed, artifacts contained in the analyzed polysomnographic signals introduce spurious components during feature extraction, which lead to incorrect interpretation of the results [1]. Dealing with artifacts is therefore mandatory before any other classification operation. In the literature, three main approaches have been proposed to detect and correct them. The first approach is based on autoregressive modeling [2, 3, 4] and is used for two purposes: (i) identifying transient events like muscle or movement artifacts by locating abrupt variations of the estimated parameters; (ii) removing artifacts from the EEG by estimating the parameters of a mathematical model that describes the recorded EEG as an overlap of the real EEG and the artifact interference. The second approach uses standard voltage thresholds (overflow check) [4-5]. While these thresholds can sometimes be fixed (e.g. 50 μV), it is generally accepted that using values related to the energy distribution of the signal
in the frequency or time domain is preferable, since voltage levels can vary strongly between subjects and recordings. Finally, some authors investigated the use of independent component analysis (ICA) to remove the artifacts [6-7]. Unfortunately, their methods often required many EEG channels and required visually selecting the origin of the interference among the estimated sources. In the present study, we introduce algorithms for processing artifacts on EEG, EOG and EMG which are suitable for automatic sleep stage classification. The strategy is to imitate human behavior by locating the short duration artifacts so as to ignore them during the feature extraction stage of sleep stage classification. However, cardiac interference and slow undulations (e.g. caused by breathing or sweating) could not be processed using this strategy because these artifacts can last several hours. This is why we also developed two artifact correction algorithms in order to minimize the loss of data. Nine procedures were finally implemented to remove cardiac interference and slow undulations, and to detect muscle artifacts, failing electrodes, 50/60 Hz mains interference, saturations, abrupt transitions, EOG interferences and artifacts in EOG. The performances of the algorithms were evaluated on a database of 20 polysomnographic sleep recordings scored for artifacts by an expert.

II. MATERIALS AND METHODS

A. Data

Data used in this study were recorded at the Sleep Laboratory of the André Vésale hospital (Montigny-le-Tilleul, Belgium). They are composed of 20 excerpts of 15 minute-long polysomnographic (PSG) sleep recordings carried out during the night. The recordings were taken from 20 patients (15 males and 5 females aged between 31 and 73) with different pathologies (dyssomnia, restless legs syndrome, insomnia, apnoea/hypopnoea syndrome). The sampling rates were 50, 100 and 200 Hz. The 20 excerpts were visually examined by an expert to identify the various artifacts.
Then they were separated into two groups for developing (excerpts 1 to 10) and testing (excerpts 11 to 20) the automatic artifact detection algorithms.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 146–150, 2009 www.springerlink.com

B. Artifacts detection/correction processes
Such artifacts are sometimes obtained at the end of the nights when the electrodes are disconnected. P6. Highlights abrupt transitions detection on EEG (Atf_transE). Highlights abrupt transitions such as spikes are identified by locating slopes above some threshold. Let
Two procedures were developed to minimize cardiac interference and slow ondulations (P1-P2) and seven other procedures were implemented to indentify the remaining short artifacts (P3-P9). These detection algorithms operate on fixed length epochs (1.25 second by default). They are mainly binary: if any of the parameters exceeds the corresponding threshold, the epoch is marked as an artifact. As the signal energy distribution in the frequency or time domain varies strongly between subjects, we have chosen thresholds relative to the statistical properties of the considered signal. For example:
threshold
mean( EEG ) k * std ( EEG )
where k is a factor of proportionality and std is the standard deviation. The various procedures are the following (for more details, see http://tcts.fpms.ac.be/publications/techreports/DEA_sd.pdf): P1. Cardiac interference detection and correction on EEG (Atf_cardE) and on EOG (Atf_cardO). The basis of the method for removing cardiac interference was presented in [8]. It is based on a modification of the independent component analysis algorithm which gives promising results while only using a single-channel EEG (or EOG) and the ECG. P2. Slow ondulations detection and correction on EEG (Atf_ondE) and on EOG (Atf_ondO). Slow ondulation artifacts are generally due to breathing or sweating. Their frequencies are lower than those of the slowest waves of the sleep (rhythm delta). Therefore, their extraction can be realized by a simple filtering, with cut-off frequency adjusted to the smallest frequency of the delta band. P3. Saturations detection on EEG (Atf_satE), on EOG (Atf_satO) and on EMG (Atf_satM). The basic idea of this procedure is to locate epochs where the EEG signal remains at its maximal value of saturation during a sufficient time. P4. Unusual increase of EEG detection (Atf_highE). These artifacts can for example be caused by EOG interferences. If the amplitude of the EEG signal exceeds a first threshold for any of the epochs, the onset and the offset of the artifact are researched. These are defined as the instant after which the amplitude of the EEG becomes lower than a second threshold (lower than the first threshold). Then the corresponding epochs are marked as artifact epochs. P5. Failing electrode detection on EEG (Atf_noE) and on EOG (Atf_noO).This procedure locates the relatively constant amplitude (near to zero) of the signals EEG or EOG.
Fig. 1 Examples of muscle or movement artifacts

P6. Detection of abrupt transitions of the EEG (Atf_transE). Let us note that epileptic spikes are not actually artifacts (they have no artificial origin), but they can nonetheless obstruct sleep stage classification, which is why we identify them as artifacts. P7. 50/60 Hz mains interference detection on the EEG (Atf_50E). This mains interference can easily be detected from the evident peak around 50 Hz (or 60 Hz) in the Fourier transform. P8. Muscle or movement artifact detection on the EEG (Atf_mvtE). This algorithm detects a temporary increase of muscular tone accompanied by disturbances of the EEG. These disturbances are of two types: a voltage increase, as illustrated in Fig. 1a, or a change of rhythm of the EEG activity, as illustrated in Fig. 1b. P9. Detection of in-phase movement artifacts in the EOG (Atf_phaseO). As ocular movements are binocular and synchronous, the EOG recordings should appear in phase opposition when the electrodes are placed on the lateral canthi with the same mastoid reference. Ocular artifacts are therefore easily identifiable, since they correspond to in-phase movements of the two EOGs.
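The P7 mains check can be sketched as a comparison of the spectral power around 50/60 Hz against a background estimate. Everything below (the function name, the 1 Hz band, the power ratio of 20, and the median as background estimate) is an assumption for illustration, not the paper's code.

```python
import numpy as np

def has_mains_interference(signal, fs, mains=50.0, band=1.0, ratio=20.0):
    """Flag a signal whose power spectrum shows an evident peak around
    the mains frequency (50 or 60 Hz), relative to the median power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= mains - band) & (freqs <= mains + band)
    peak = spectrum[in_band].max()                 # power near the mains line
    background = np.median(spectrum[freqs > 1.0])  # robust background, skip DC/drift
    return bool(peak > ratio * background)
```

Using the median of the spectrum as background keeps the test insensitive to a few strong physiological peaks elsewhere in the spectrum.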
IFMBE Proceedings Vol. 23
S. Devuyst, T. Dutoit, T. Ravet, P. Stenuit, M. Kerkhofs, E. Stanus
III. RESULTS

A. Content of the artifact database

On the basis of the artifact scoring carried out by the expert, we first examined the content of the database in terms of short-duration artifacts (Table 1).
excerpts were visually examined to count the number of corrected peaks; we found a correction rate of 91.1%. The removal of slow undulation artifacts was checked only visually by the expert; the artifact appears to be well corrected without distorting the EEG, as can be seen in Fig. 2.
Table 1 Content of the artifact database based on the visual artifact scoring

Code   Type                                             Number of seconds   %
P3-a   Saturations of EEG (Atf_satE)                    0                   0
P3-b   Saturations of EOG (Atf_satO)                    0                   0
P3-c   Saturations of EMG (Atf_satM)                    0                   0
P4     Unusual increases of EEG (Atf_highE)             639.180             3.551
P5-a   Failing electrode on EEG (Atf_noE)               118.610             0.659
P5-b   Failing electrode on EOG (Atf_noO)               118.520             0.658
P6     Abrupt transitions of EEG (Atf_transE)           46.790              0.260
P7     50/60 Hz on EEG (Atf_50E)                        23.980              0.133
P8     Muscle or movement artifacts on EEG (Atf_mvtE)   494.600             2.748
P9     In-phase artifacts in EOG (Atf_phaseO)           566.270             3.146
O-a    Other artifacts on EEG                           129.73              0.72
O-b    Other artifacts on EOG                           6.190               0.034
       All short artifacts on EEG                       1014.930            5.639
       All short artifacts on EOG                       690.980             3.839
       All short artifacts on EMG                       0.000               0.000
The difference between the total duration of all short artifacts on the EEG and the sum of the durations of each type of artifact on this signal (P3-a + P4 + P5-a + P6 + P7 + P8) is due to the presence of multiple artifacts in some epochs. As can be seen, 5.64% of the total recorded EEG time contains artifacts. Among those, the most frequent are "unusual increases of EEG", followed by "in-phase artifacts in EOG" and then "muscle or movement artifacts". No "saturation" artifact was found in the database; we therefore used other polysomnographic signals to tune the parameters of this procedure. These signals were not scored by the expert but simply examined visually by the authors.

B. Results of the minimization procedures

The ECG artifact removal procedure was previously tested on 10 excerpts of polysomnographic sleep recordings containing ECG artifacts and other typical artifacts [8]. It was shown to be robust to various waveforms of cardiac interference and to the presence of other artifacts. Two hundred successive interference peaks in each of these
Fig. 2 Removal of slow undulation artifacts on the EEG
C. Results of the detection procedures

Concerning the short-duration artifacts, the concordance between the expert scoring and the automatic detection procedures with the default thresholds was examined as follows: 1) a true positive (TP) was counted when an artifact was automatically detected in an epoch also marked as an artifact by the expert; 2) a false positive (FP) when an artifact was automatically detected in an epoch classed as non-artifact by the expert; 3) a true negative (TN) when no artifact was detected either automatically or visually by the expert; 4) a false negative (FN) when no artifact was automatically detected in an epoch marked as artifact by the expert. We then computed the agreement rate = (TP+TN)/(TP+TN+FP+FN), the sensitivity = TP/(TP+FN) and the specificity = TN/(TN+FP). The results obtained for each detection procedure on the central EEG are presented in Fig. 3, together with the global results of detecting any artifact on this EEG; the results corresponding to the EOGs and the EMG are shown in Fig. 4. If an artifact type is not present in the training database (according to the expert), the sensitivity figure is meaningless; such cases are indicated by an asterisk (*). As can be seen, the procedures corresponding to the most frequent artifacts (i.e. unusual increases of EEG, in-phase artifacts on EOG and muscle artifacts) are unfortunately those with the lowest sensitivities. As a whole, however, the results show an acceptable agreement between our software package and the human scoring, with agreement rates of 92.18%, 95.98% and 100% on the EEG, EOG and EMG respectively. Without distinguishing between the various signals, these results correspond to a
Automatic Processing of EEG-EOG-EMG Artifacts in Sleep Stage Classification
global agreement rate of 96.06%, a global sensitivity of 83.67% and a global specificity of 96.47%. Finally, varying the length of the analysis epoch from 1.25 s to 1 s introduced only small changes in the expert/software concordance (agreement rate = 96.46%, sensitivity = 82.33%, specificity = 96.90%).
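The epoch-wise concordance measures defined above can be computed directly from the two scorings; a minimal sketch follows (the function name and the boolean encoding of "artifact epoch" are our choices).

```python
def concordance_metrics(expert, detector):
    """Epoch-wise agreement between expert scoring and automatic
    detection; True means the epoch is marked as an artifact."""
    tp = sum(e and d for e, d in zip(expert, detector))
    fp = sum((not e) and d for e, d in zip(expert, detector))
    tn = sum((not e) and (not d) for e, d in zip(expert, detector))
    fn = sum(e and (not d) for e, d in zip(expert, detector))
    agreement = (tp + tn) / (tp + tn + fp + fn)
    # sensitivity is undefined when the expert marked no artifact at all
    # (the asterisk case in Figs. 3 and 4)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return agreement, sensitivity, specificity
```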
rate. This is due to the fact that the false positives (FP) introduced by the various algorithms are not always located in the same epochs, whereas the true positives (TP) are often detected in the same epochs (Fig. 5). There is thus an increase in FP (proportionally to the number of TP) in the global detection on the EEG, which explains why the total agreement rate is lower while the sensitivity is only slightly modified. Fortunately, the number of epochs classified as non-artifact remains sufficient for feature extraction and sleep stage classification.

V. CONCLUSIONS
Fig. 3 Results of the detection procedure on the central EEG
Fig. 4 Results of the detection procedures on the EOGs and on the EMG
In conclusion, our findings showed that the proposed artifact minimization procedures and detection algorithms (although rather simple, since they are mainly binary) are reliable in the context of sleep stage classification. They give promising and repeatable results (agreement rate = 96.06%, sensitivity = 83.67%, specificity = 96.47%) without requiring any human intervention. The approach nevertheless has some limitations: (i) it runs on epochs of fixed length rather than locating the actual onset and offset of the artifacts; however, using fixed-length epochs dramatically simplifies the algorithm, and the remaining part of the signal is generally sufficient for sleep stage classification. (ii) Although the parameters are calculated from the statistical properties of the signal, their values, once determined, remain unchanged for the whole duration of the recording; this can be inappropriate in sleep recordings with a fluctuating tonicity level. The use of adaptive rather than fixed thresholds could therefore be investigated in future work.
ACKNOWLEDGMENT This work was partly supported by the Région Wallonne (Belgium) and the DYSCO Interuniversity Attraction Poles.
REFERENCES
Fig. 5 Illustrative example of detection process
IV. DISCUSSION
Looking at Fig. 3, one might be surprised that the total agreement rate on the EEG is only 92.18%, whereas each separate procedure has a higher agreement
1. Anderer P et al. (1999) Artifact processing in computerized analysis of sleep EEG - a review. Neuropsychobiology 40:150-157
2. Schlögl A et al. (1999) Artefact detection in sleep EEG by the use of Kalman filtering. EMBEC'99 Proc, Medical & Biological Engineering & Computing, Supplement 2, November 4-7 1999, Vienna, Austria, pp 1648-1649
3. Van den Berg-Lenssen MM et al. (1989) Correction of ocular artifacts in EEGs using an autoregressive model to describe the EEG - a pilot study. Electroencephalogr Clin Neurophysiol 73:72-83
4. Durka PJ et al. (2003) A simple system for detection of EEG artifacts in polysomnographic recordings. IEEE Transactions on Biomedical Engineering 50(4):526-528
5. Moretti DV et al. (2003) Computerized processing of EEG-EOG-EMG artefacts for multicentric studies in EEG oscillations and event-related potentials. International Journal of Psychophysiology 47(3):199-216
6. Iriarte J, Urrestarazu E, Valencia M (2003) Independent component analysis as a tool to eliminate artifacts in EEG: a quantitative study. Journal of Clinical Neurophysiology 20(4):249-257
7. Delorme A et al. (2007) Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis. NeuroImage 34(4):1443-1449
8. Devuyst S et al. (2008) Removal of ECG artifacts from EEG using a modified independent component analysis approach. EMBC Proc, 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, Aug 20-24 2008, SaET1.1, pp 5204-5207
Medical Image Registration Using Mutual Information Similarity Measure Mohamed E. Khalifa1, Haitham M. Elmessiry2, Khaled M. ElBahnasy3, Hassan M.M. Ramadan4 1
Dean, Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
2 Assistant Professor, Computer Science Department, Ain Shams University, Cairo, Egypt
3 Assistant Professor, Information Systems Department, Ain Shams University, Cairo, Egypt
4 TA, Faculty of Computer and Information Sciences, Minia University, Egypt
[email protected]

Abstract — Medical imaging plays an increasing role in patient treatment: it not only aids physicians in reaching the correct diagnosis, but also reduces overall cost and speeds up treatment planning. Medical image registration is a vital medical imaging application that aligns one image to another to obtain a registered image containing the information of both. The purpose of this paper is to survey medical image registration methods based on mutual information, covering implementation techniques, challenges, and optimization approaches. Mutual information has been shown to be an accurate and robust similarity measure for registering images, especially multimodal images taken from different imaging devices and/or modalities.

Keywords — Image registration, image fusion, mutual information
I. INTRODUCTION

Image registration is the process of determining the optimal spatial transformation that brings two or more images into alignment with each other [1]. In image processing, images must often be spatially aligned in order to perform quantitative analyses. The applications of image registration grow daily: it is used in computer vision, pattern recognition, target identification, remote sensing, and medicine (the focus of the current research). The transformation or warping function is the function used to warp the sensed image into the geometry of the reference image, T(x_A) = x_B. In choosing the transformation function and the registration method or similarity metric, we should consider a set of image characteristics, such as [2]:

- Dimensionality (2D/3D)
- Transformation domain (global vs. local)
- Transformation type (rigid, affine, curved, projective)
- Subject of registration (head, knee, heart, etc.)
- Image modality (CT, MR, PET, etc.)
- Extrinsic vs. intrinsic features: extrinsic methods add other objects to the image to detect features (e.g. frames on the patient's head), while intrinsic methods extract and operate on image features such as edges, curves and intensity values.
Fully automated algorithms based on intensity similarity measures have been shown to be accurate and robust at registering images compared to feature-based methods. The registration process seeks the optimal spatial and intensity transformation functions so that the images are matched under a similarity measure. The choice of an image similarity measure depends on the nature of the images to be registered; common examples include cross-correlation, mutual information, mean-square difference and ratio image uniformity. Mutual information and its variant, normalized mutual information, are the most popular similarity measures for registration of multimodality images, while cross-correlation, mean-square difference and ratio image uniformity are commonly used for registration of images of the same modality. Collignon [3] and Viola [4] proposed the widely used intensity-based similarity measure of mutual information (MI). Registration based on mutual information is robust and data-independent and can be used for a large class of mono-modality and multimodality images. The method requires estimating the joint histogram of the two images and therefore demands a high computation time, which is its main drawback; other implementation challenges are local extrema and image noise, which may arise from interpolation during the registration process. This paper illustrates the mutual-information-based registration approach: Section II demonstrates the methodology of the mutual information technique (concept, properties and framework), Section III discusses implementation issues and suggested optimization techniques, Section IV compares some MI registration methods, and finally we present our conclusion.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 151–155, 2009 www.springerlink.com
Two random variables are considered to be independent if:
II. METHOD

A. Mutual Information Based Image Registration
H(X,Y) = H(X) + H(Y)
Mutual information (MI) is usually used to measure the statistical dependence between two random variables, or the amount of information that one variable contains about the other. The method applies mutual information to measure the information redundancy between the intensities of corresponding pixels in the two images, which is assumed to be maximal if the images are geometrically aligned. Important technical issues remain to be solved, such as how to compute MI more accurately and how to maximize it, along with implementation issues such as subsampling, interpolation and outlier strategy. The combination of these computational techniques with the search strategy leads to fast and accurate multi-modality image registration.
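A common way to compute MI between two images, and the one this estimation problem usually reduces to, is via the joint intensity histogram; a minimal sketch follows, assuming 32 bins and natural logarithms (both are our choices, not the paper's).

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI(A, B) = H(A) + H(B) - H(A, B), estimated from the joint
    histogram of corresponding pixel intensities."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()      # joint probability estimate
    p_a = p_ab.sum(axis=1)          # marginal of A
    p_b = p_ab.sum(axis=0)          # marginal of B

    def entropy(p):
        p = p[p > 0]                # 0 * log 0 taken as 0
        return -np.sum(p * np.log(p))

    return entropy(p_a) + entropy(p_b) - entropy(p_ab.ravel())
```

MI is maximal when the two images are identical up to a one-to-one intensity mapping (the geometrically aligned case) and close to zero for unrelated images, which is what makes it usable as a registration criterion.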
The mutual information MI between two random variables X and Y is given by [5]:

MI(X,Y) = H(Y) - H(Y|X) = H(X) + H(Y) - H(X,Y)   (5)

Maximizing the mutual information is equivalent to minimizing the joint entropy H(X,Y). An advantage of mutual information over the joint entropy alone is that it includes the entropies of the individual inputs, so it behaves better in regions of image background (low contrast).

C. Some Properties of Mutual Information [6]
B. Entropy and Mutual Information Let: X be a random variable (R.V) , P(X) be the probability distribution of X , p(x) be the probability density of X. Then The entropy of X, H(X) is defined by: H(X) = -EX[ log(P(X)) ]
(4)
x
Mutual information is symmetric: I(X,Y) = I(Y,X) (6), and I(X,X) = H(X) (7); furthermore, I(X,Y) ≥ 0.

ν > 0 is a fixed parameter weighting the length term in the energy, and H is the Heaviside function, H(z) = 1 if z ≥ 0 and 0 if z < 0. Φ is a vector of level set functions: the boundary of a region is given by the zero level set of a scalar Lipschitz-continuous function φ (the level set function), a typical example being the signed distance function to the boundary curve. Figures 2 and 3 show the principle of region classification by the Heaviside function: there are two initial level set functions, φ1 and φ2, and their zero levels (the boundaries of the regions). Only log2 n level set functions are needed to recognize n segments with complex topologies. In our case, for n = 4 (and therefore m = 2), we obtain the four-phase energy:
F_4(c, \Phi) = \int_{\Omega} (u_0 - c_{11})^2 H(\phi_1) H(\phi_2)\, dx\, dy + \int_{\Omega} (u_0 - c_{10})^2 H(\phi_1)(1 - H(\phi_2))\, dx\, dy + \int_{\Omega} (u_0 - c_{01})^2 (1 - H(\phi_1)) H(\phi_2)\, dx\, dy + \int_{\Omega} (u_0 - c_{00})^2 (1 - H(\phi_1))(1 - H(\phi_2))\, dx\, dy + \nu \int_{\Omega} |\nabla H(\phi_1)| + \nu \int_{\Omega} |\nabla H(\phi_2)|, \quad (2)

where c = (c_{11}, c_{10}, c_{01}, c_{00}) is a constant vector and \Phi = (\phi_1, \phi_2). We can express the output image function u as:

u = c_{11} H(\phi_1) H(\phi_2) + c_{10} H(\phi_1)(1 - H(\phi_2)) + c_{01} (1 - H(\phi_1)) H(\phi_2) + c_{00} (1 - H(\phi_1))(1 - H(\phi_2)). \quad (3)

Fig. 2 Two initial level set functions and their zero levels

Fig. 3 Zero level sets and region classification by the signs of the level set functions

By minimizing the energy functional (2) with respect to c and \Phi we obtain the Euler-Lagrange equations:

\frac{\partial \phi_1}{\partial t} = \delta(\phi_1) \left\{ \nu\, \mathrm{div}\!\left( \frac{\nabla \phi_1}{|\nabla \phi_1|} \right) - \left[ (u_0 - c_{11})^2 - (u_0 - c_{01})^2 \right] H(\phi_2) - \left[ (u_0 - c_{10})^2 - (u_0 - c_{00})^2 \right] (1 - H(\phi_2)) \right\}, \quad (4)

\frac{\partial \phi_2}{\partial t} = \delta(\phi_2) \left\{ \nu\, \mathrm{div}\!\left( \frac{\nabla \phi_2}{|\nabla \phi_2|} \right) - \left[ (u_0 - c_{11})^2 - (u_0 - c_{10})^2 \right] H(\phi_1) - \left[ (u_0 - c_{01})^2 - (u_0 - c_{00})^2 \right] (1 - H(\phi_1)) \right\}, \quad (5)

where t is an artificial time and \delta is the Dirac function (the derivative of the Heaviside function). By applying a finite-difference scheme, these Euler-Lagrange equations are approximated numerically for the iterative implementation.

III. IMPLEMENTATION
In view of the image properties, the segmentation can be performed after appropriate pre-processing. The first step was convolution with a sharpening mask to enhance the contrast:
J. Mikulka, E. Gescheidtova and K. Bartusek
H = \frac{1}{\alpha + 1} \begin{bmatrix} -\alpha & \alpha - 1 & -\alpha \\ \alpha - 1 & \alpha + 5 & \alpha - 1 \\ -\alpha & \alpha - 1 & -\alpha \end{bmatrix}, \quad (6)
where the coefficient α determines the form of the Laplace filter used; a suitable value, α = 0.001, was established experimentally. The next step consists in smoothing the sharpened image; for this purpose, the simplest 3×3 averaging mask was used:
H = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}. \quad (7)
The pre-processed image was then subjected to the above-mentioned four-phase level set segmentation. The partial differential equations were transformed into corresponding difference equations, which are solved iteratively. The number of iterations was controlled by following the derivatives of the energy function, which converged to zero as regions with similar intensities were successively segmented.
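The two convolution steps of the pre-processing (sharpening mask (6), then the 3×3 averaging mask (7)) can be sketched as follows. The paper used Matlab; this Python version with edge-replicating padding is only an illustration of the kernels, not the authors' code.

```python
import numpy as np

def convolve3x3(img, kernel):
    """'Same'-size 3x3 convolution with edge replication. The kernels
    used here are centro-symmetric, so convolution equals correlation."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def preprocess(img, alpha=0.001):
    """Sharpening mask of Eq. (6) followed by the averaging mask of Eq. (7)."""
    sharpen = np.array([[-alpha, alpha - 1, -alpha],
                        [alpha - 1, alpha + 5, alpha - 1],
                        [-alpha, alpha - 1, -alpha]]) / (alpha + 1)
    average = np.full((3, 3), 1.0 / 9.0)
    return convolve3x3(convolve3x3(img, sharpen), average)
```

Both kernels sum to one (unit DC gain), so flat regions keep their intensity while edges are first emphasized and the amplified noise is then smoothed away.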
Fig. 4 Image A segmentation
IV. EXPERIMENTAL RESULTS

The result of segmentation is shown for two NMR images in Figs. 4 and 5. These are two slices of the human head in the region of the temporomandibular joint. At the top of each figure is a slice of the original image and the result of segmenting the pre-processed image, with the contours of the segmented regions marked out; at the bottom, the image segmented by four-phase segmentation is given. The intensity of each region is given by the mean intensity value of the individual pixels in the respective region of the original image.

Table 1 Parameters of segmentation (Celeron 1.4 GHz, 768 MB RAM, Windows XP, Matlab 7.0.1)

Image   Number of iterations   Duration [s]
A       14
B       12
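The piecewise-constant output of the four-phase model (four regions selected by the signs of φ1 and φ2, each painted with its mean intensity c_ij) can be sketched as follows; the function name and the ≥ 0 Heaviside convention are our assumptions, not the authors' implementation.

```python
import numpy as np

def four_phase_reconstruction(u0, phi1, phi2):
    """Classify pixels into four phases by the signs of phi1/phi2
    (Heaviside), take each phase's mean of the original image u0 as
    c_ij, and build the piecewise-constant output image of Eq. (3)."""
    H1 = (phi1 >= 0).astype(float)          # Heaviside H(phi1)
    H2 = (phi2 >= 0).astype(float)          # Heaviside H(phi2)
    u = np.zeros_like(u0, dtype=float)
    for h1 in (0.0, 1.0):
        for h2 in (0.0, 1.0):
            mask = (H1 == h1) & (H2 == h2)  # one of the four phases
            if mask.any():
                u[mask] = u0[mask].mean()   # c_ij = mean intensity of phase
    return u
```

In the full algorithm this reconstruction alternates with the gradient-descent updates of φ1 and φ2 from Eqs. (4) and (5) until the energy derivative converges to zero.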
Fig. 5 Image B segmentation

V. 3D MODELING

The aim of further work is the creation of a 3D model of the tissue from the segmented 2D slices. This process consists of three steps [3]:
1. Change of the data description – from a discrete to a vectorial description, by the widely used "marching cubes" method for fully automated creation of geometrical models.
Processing of NMR Slices for Preparation of Multi-dimensional Model
2. Smoothing – e.g. by the Laplace operator, because the change of description leaves the geometrical model stratified. 3. Decimation – elimination of small triangles with maximal geometry preservation, for surface simplification.
the mean values of the pixel intensities of the original image. Segmenting the temporomandibular joint in several slices by the above method can be used, for example, to construct a three-dimensional model. With the multiphase segmentation method a more precise model can be obtained, because with several levels per two-dimensional slice the resulting model can be approximated with greater precision.
ACKNOWLEDGMENT

This work was supported within the framework of the research plan MSM 0021630513 and projects of the Grant Agency of the Czech Republic 102/07/1086 and GA102/07/0389.
REFERENCES

Fig. 6 Process of 3D creation (tooth): a) example of a "marching cubes" model, b) smoothed model, c) decimated model [3]
VI. CONCLUSIONS 3.
The paper describes the application of a modern segmentation method with a suitable combination of pre-processing of NMR image of the human head. The images used are of low contrast and low resolution. The region of temporomandibular joint that was the subject of segmentation is of a mere 60 x 60 pixels, which makes precise processing difficult. The output of the algorithm used is an image made up of regions with four levels of gray. These levels correspond to
_______________________________________________________________
1. Aubert G, Kornprobst P (2006) Mathematical problems in image processing. Springer, New York
2. Vese L, Chan F (2002) A multiphase level set framework for image segmentation using the Mumford and Shah model. International Journal of Computer Vision 50(3):271-293, at www.math.ucla.edu/~lvese/PAPERS/IJCV2002.pdf
3. Krsek P (2005) Problematika 3D modelovani tkani z medicinskych obrazovych dat [The problems of 3D modelling of tissues from medical image data]. Neurologie v praxi 6(3):149-153

Author: Jan Mikulka
Institute: Brno University of Technology, Dept. of Theoretical and Experimental Electrical Engineering
Street: Kolejni 4
City: Brno
Country: Czech Republic
Email:
[email protected]
Application of Advanced Methods of NMR Image Segmentation for Monitoring the Development of Growing Cultures J. Mikulka1, E. Gescheidtova1 and K. Bartusek2 1
Brno University of Technology, Dept. of Theoretical and Experimental Electrical Engineering, Kolejni 4, 612 00 Brno, Czech Republic 2 Institute of Scientific Instruments, Academy of Sciences of the Czech Republic, Kralovopolska 147, 612 64 Brno, Czech Republic
Abstract — The paper describes the pre-processing and subsequent segmentation of NMR images of growing tissue cultures. The images obtained by the NMR technique show three separately growing cultures, and the aim of the work was to follow the speed of their development. The images provided by the NMR device used are of very low resolution and contrast, and there are no sharp edges between regions, so processing them can be quite difficult. A suitable algorithm was found, consisting of pre-processing of the image followed by multiphase level set segmentation. The proposed method segments the image based on the intensity of the regions sought and is suitable for NMR images in which there are no sharp edges. The method is described by partial differential equations, which were transformed into corresponding difference equations and solved numerically. Processing the acquired images and measuring the sequence of NMR data yield a graph of the growth of the examined tissue cultures, compared with manual measurement of their content.

Keywords — NMR imaging, image segmentation.
I. INTRODUCTION

MRI is useful for determining the number of hydrogen nuclei in biological tissue and for following the growth of cultures. MR techniques [1] were used to examine the rate of growth, the rise in the percentage of protons, and the cluster shape of somatic germs. These measurements were part of research verifying the hypothesis that the percentage of water in a growing tissue culture increases in the case of cadmium contamination. The measured tissue is placed in the working space of the tomograph, the right slicing plane is chosen, and the MR image is measured in this plane. The image is weighted by spin density, and the pixel intensity is proportional to the number of proton nuclei in the chosen slice; the MR image is thus a map of the proton distribution in the measured cluster of growing tissue culture [2]. The same technique was used to characterize the growth of early spruce germs contaminated by lead and zinc. An intensity integral characterizing the number of protons in the growing cluster was computed, and the changes of this value during growth were determined from the MR images.
The spin-echo method was used for the measurements because, in contrast to the gradient-echo technique, the influence of base magnetic field inhomogeneity is eliminated and the images have a better signal-to-noise ratio. The signal-to-noise ratio depends on the chosen slice width: with thinner slices, the number of nuclei generating the signal is smaller and the signal-to-noise ratio decreases. A minimal slice width is nevertheless desirable for tissue culture imaging; an optimum width of 2 mm was found. The image size must be chosen with a view to the size of the tissue clusters and of the operating probe. The signal-to-noise ratio can be improved by repeating the measurement and averaging the results, but this is more time-consuming, and the measurements cannot be repeated quickly given the relaxation times of water (T1 ≈ 2 s, T2 ≈ 80 ms). It is appropriate to choose a repetition cycle equal to the spin-lattice relaxation time, TR ≈ T1; in our case, for an image size of 256 × 256 pixels, the measurement time is 256 × TR. A small flask filled with deionized water was placed in the field of view to suppress instability of the tomograph parameters during the long-term measurement, and the intensities of each image were scaled according to the intensity of the water in the flask (Fig. 1).
Fig. 1 Example of an obtained image with six clusters and a small flask filled with water for checking and scaling the image; at the top are the clusters contaminated by Zn, at the bottom the clusters contaminated by Pb
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 190–193, 2009 www.springerlink.com
interferential image at the zero point, i.e. for kx = ky = 0. The integral of the MR image after the DFT can thus be computed as:
Fig. 2 Example of the growth of one cluster contaminated by lead (1000 mg/l)
An MR tomograph with a horizontal magnet (magnetic field 4.7 T) and a working space 120 mm in diameter was used for the described experiments. Actively shielded gradient coils generate a maximal gradient field of 180 mT/m. The data were first processed in the MAREVISI software, where the cluster surface was computed manually from the diffusion image and the intensity integral of the clusters from the spin-density-weighted images. The spin-density-weighted images were then filtered by the wavelet transform and segmented by the region level set method, and the surface and intensity integral were computed from the segmented clusters as well. Both methods were compared.
I_i = \frac{x_{\max}\, y_{\max}}{M\, N}\, s_i(0, 0). \quad (4)
The results of both methods are identical, with an error of less than 1%. This approach is useful in the case of measuring a single tissue cluster. For verification of the results, the images were filtered by means of the wavelet transform and subsequently segmented by the four-phase level set method. Fig. 3 shows an example of an image processed by the described approach. The surface and intensity integral are then computed only from the bounded clusters, so the result is not degraded by the noise around the clusters. The results are compared in the next chapter.
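The agreement of the two approaches is no accident: the DFT of a discrete image evaluated at zero spatial frequency, s_i(0, 0), is exactly the sum of all pixel intensities, so Eqs. (1) and (4) must agree up to the common (xmax·ymax)/(M·N) scale factor. A few lines of numpy confirm this; the check is purely illustrative, not the MAREVISI processing itself.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((8, 8))              # stand-in for a spin-density image
dc_term = np.fft.fft2(image)[0, 0]      # s_i(0, 0), the zero-frequency term
print(np.isclose(dc_term.real, image.sum()))  # True; imaginary part is ~0
```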
II. MEASUREMENT METHODS

The integral of the image data can be established by two approaches. The first is the sum of the intensities in the chosen part of the image containing the contaminated clusters, divided by the number of pixels:

I_i = \frac{x_{\max}\, y_{\max}}{M\, N} \sum_{m=1}^{M} \sum_{n=1}^{N} s_i(m, n), \quad (1)
where Ii is the intensity integral and xmax and ymax are the maximum image dimensions along the x and y axes. The second approach uses properties of the Fourier transform and the relation between the MR image and the interferential image, which consists of the acquired complex data. This relation can be described by the following equation [3]:
s_i(k_x, k_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \rho(x, y)\, e^{-i\, 2\pi (k_x x + k_y y)}\, dx\, dy, \quad (2)
where kx and ky are the axes of the measured interferential image, called spatial frequencies, and ρ(x, y) is the spin-density distribution in the MR image. For kx = ky = 0 we obtain:

s_i(0, 0) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \rho(x, y)\, dx\, dy = I_i. \quad (3)
The number of proton nuclei in the measured sample is proportional to the integral Ii, which is equal to the intensity of the
Fig. 3 Example of the image processing: left, the original image; middle, the wavelet-filtered image; right, the cluster segmented by four-phase level set segmentation (green curve)
III. RESULTS

The relation between the intensity of a cluster (and thereby the relative number of protons) and the time of growth is shown for various Zn and Pb contaminations in Figs. 4 and 5. Clearly, the proton concentration in a cluster of tissue culture grows throughout, independently of the culture's capability to grow; the capability of growth decreases dramatically to a minimum after 14-20 days. The relation between the number of protons in tissue cultures contaminated with Zn or Pb and the level of contamination is shown in Figs. 6 and 7. A concentration can be found at which the percentage of protons is highest throughout the growth: for zinc contamination the optimal concentration is 250 mg/l, and for lead it is 50 mg/l. In the diffusion images the clusters are more precisely bounded and the evaluation of the cluster surface is more accurate; however, this does not reflect the concentration of proton nuclei, and the results differ from the intensity integral measurement. The cluster surfaces were evaluated from the spin-density-weighted images by wavelet filtering and subsequent four-phase region level set segmentation.
Fig. 6 Measurement of the intensity integral of clusters for various Zn concentrations; top: result of the manual method, bottom: result of the segmented-image processing
[Figure panels: intensity integral (–) versus time (day, 0–40) and versus Pb concentration (0–1200 mg/l) for cultures contaminated with 0, 50, 250, 500 and 1000 mg/l Pb; time points 3–38 days.]
Fig. 4 Measurement of the intensity integral of clusters over time for various Zn concentrations (top: manual method; bottom: segmented image processing)
[Figure panels: intensity integral (–) versus time (day, 0–40) and versus Pb concentration (0–1200 mg/l).]
Fig. 5 Measurement of the intensity integral of clusters over time for various Pb concentrations (top: manual method; bottom: segmented image processing)
Fig. 7 Measurement of the intensity integral of clusters for various Pb concentrations (top: manual method; bottom: segmented image processing)
Application of Advanced Methods of NMR Image Segmentation for Monitoring the Development of Growing Cultures
IV. CONCLUSIONS
[Figure panels: size of cluster (pixels, 0–5000) versus time (day, 0–40) for 0, 50, 250, 500 and 1000 mg/l Zn; manual method (top) and segmented image processing (bottom).]
The MRI technique is useful for observing the growth of spruce germs and for verifying the hypothesis that the amount of water in growing tissue cultures increases with metal contamination, and thereby that they elutriate faster. Basic measurements were taken and the data were processed by two different methods; the aim of this work was to measure the cluster surface and the intensity integral over time. First, the data were processed manually in the MAREVISI software, by measuring the cluster surface in the diffusion images and the intensity integral in the spin-density-weighted images. The spin-density-weighted images were then processed by the wavelet transformation and segmented by the four-phase level set method, and both monitored values were obtained in Matlab. Both methods give similar results, which verifies the measurement.
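The wavelet filtration step can be illustrated with a single-level 2-D Haar transform and soft thresholding of the detail subbands (a self-contained NumPy sketch; the authors' actual wavelet and threshold choices are not given in the text, so the Haar basis and the threshold value are assumptions):

```python
import numpy as np

def haar2d(x):
    """One level of the 2-D Haar wavelet transform (image sides must be even)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # average of adjacent row pairs
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # detail of adjacent row pairs
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # approximation subband
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def soft(c, t):
    """Soft thresholding of detail coefficients."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(img, t=0.1):
    """Suppress small detail coefficients, keep the approximation subband."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

With `t = 0` the image is reconstructed exactly, which is a convenient correctness check for the transform pair.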
ACKNOWLEDGMENT
Fig. 8 Measurement of cluster size for various Zn concentrations (top: manual method; bottom: segmented image processing)
This work was supported within the framework of the research plan MSM 0021630513 and by project GA102/07/0389 of the Grant Agency of the Czech Republic.
REFERENCES
1. Supalkova V et al. (2007) Multi-instrumental Investigation of Affecting of Early Somatic Embryos of Spruce by Cadmium(II) and Lead(II) Ions. Sensors 7:743–759
2. Callaghan P T (1991) Principles of Nuclear Magnetic Resonance Microscopy. Clarendon Press, Oxford
3. Liang Z-P, Lauterbur P C (1999) Principles of Magnetic Resonance Imaging. IEEE Press, New York
4. Vese L, Chan T F (2002) A multiphase level set framework for image segmentation using the Mumford and Shah model. International Journal of Computer Vision 50(3):271–293, at www.math.ucla.edu/~lvese/PAPERS/IJCV2002.pdf

Author: Jan Mikulka
Institute: Brno University of Technology, Dept. of Theoretical and Experimental Electrical Engineering
Street: Kolejni 4
City: Brno
Country: Czech Republic
Email: [email protected]

[Figure panels: size of cluster (pixels, 0–6000) versus time (day, 0–40) for 0, 50, 250, 500 and 1000 mg/l Pb; manual method (top) and segmented image processing (bottom).]
Fig. 9 Measurement of cluster size for various Pb concentrations (top: manual method; bottom: segmented image processing)
High-accuracy Myocardial Detection by Combining Level Set Method and 3D NURBS Approximation

T. Fukami1, H. Sato1, J. Wu2, Thet-Thet-Lwin2, T. Yuasa1, H. Hontani3, T. Takeda2 and T. Akatsuka1
1 Yamagata University, Department of Bio-System Engineering, Yonezawa, Japan
2 University of Tsukuba, Graduate School of Comprehensive Human Sciences, Tsukuba, Japan
3 Nagoya Institute of Technology, Department of Computer Science and Engineering, Nagoya, Japan

Abstract — Accurate detection of the myocardium is very important in the diagnosis of heart disease. In this study, we propose a myocardial detection method combining the level set method on 2D images with 3D NURBS approximation. We first extracted the epi- and endocardial walls by the level set method on each 2D image; slice-wise processing keeps the calculation cost low. For this extraction we exploited the near-circular shape of the left ventricle and set the initial circle in the myocardial region surrounding the endocardium. We then approximated the extracted walls with a 3D NURBS (non-uniform rational B-spline) model using third-order B-spline basis functions. We applied the method to T1-weighted cardiac MRI images of 10 subjects (5 normal subjects and 5 patients with apical hypertrophic cardiomyopathy). The pixel size was 1.62 × 1.62 mm, and the slice interval and the number of slices were 6.62 mm and 18, respectively. The method was evaluated by comparison with manual detection by two cardiologists. The endocardial detection error was about the same as or smaller than the difference between the cardiologists, whereas the epicardial detection error was larger; we infer that the endocardial contour is clearer than the epicardial one. The average detection error of the method combining the level set method and NURBS approximation (endocardium: 2.58 mm / epicardium: 2.71 mm) was smaller than or about the same as that of the level set method alone (2.77 mm / 2.51 mm). The variance of the error, however, was clearly smaller for the combined method (0.58 mm / 0.59 mm) than for the level set method alone (1.07 mm / 1.06 mm).
These results show that the NURBS approximation suppressed the variation in detection accuracy. Keywords — level set method, NURBS approximation, myocardial wall thickness.
I. INTRODUCTION

In recent years, cardiac disease has become one of the most common causes of death, so quantitative evaluation of myocardial function is very important in diagnosis, including disease prevention. We have previously proposed a method to extract the left ventricle (LV) in apical hypertrophic cardiomyopathy and to build a wall thickness map from cardiac magnetic resonance imaging (MRI) [1]. In that method, however, we used slice-based processing and linear approximation between slices to obtain the myocardial volume. In this study, we extend the method by introducing a NURBS (non-uniform rational B-spline) model that takes the 3-D shape into account.

Several applications of NURBS models to heart images have been reported. Segars et al. [2] developed a realistic, patient-based, flexible geometry-based heart phantom to create a realistic whole-body model; polygon surfaces were fitted to points extracted from the surfaces of the heart structures for each time frame and smoothed, and 4-D NURBS surfaces were fitted through these surfaces. The same research group (Tsui [3]) also investigated the effects of upward creep and respiratory motion in myocardial SPECT using NURBS modeling of the above phantom. Tustison et al. [4] used NURBS for biventricular deformation estimated from tagged cardiac MRI, based on four model types with Cartesian or non-Cartesian NURBS assigned by cylindrical or prolate spheroidal parameters. Lee et al. [5] developed hybrid male and female newborn phantoms that take advantage of both stylized and voxel phantoms: NURBS surfaces replace the limited mathematical surface equations of stylized phantoms, while the voxel phantom provides a realistic anatomical framework.

The method we have already proposed has the problem that errors due to locally misdetected contours make the final map non-smooth. In this study, to suppress local misdetection at the slice level, we introduce a NURBS model reflecting the 3-D structural myocardial shape into the previous method.

II. METHODS

In this study, MRI images were acquired using a Philips Gyroscan NT. T1 images (256 × 256 pixels) at LV end-diastole were obtained in synchronization with the electrocardiogram at an echo time of 40 ms, covering the whole heart region. The pixel size was 1.62 × 1.62 mm² and the slice thickness was 5 mm. The slice interval and the number of slices were 6.62 mm and 18, respectively. Short-axis slices were acquired through the heart,
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 194–197, 2009 www.springerlink.com
perpendicular to the line connecting the cardiac apex and the base. We first detected the contours of the endocardium and epicardium on each short-axis slice by the level set method, and then constructed 3-D curved surfaces of each myocardial wall by NURBS approximation. Applying NURBS absorbs the misdetections made by the level set method in the slice-based processing. Below we describe the cardiac wall detection, NURBS fitting and image registration in detail, and finally the construction of the bull's eye map for easy visual understanding.
A. Cardiac Wall Detection

The level set function we used is the model introduced by Malladi et al. [6], which takes curvature into account; we chose it because myocardial walls can be assumed to have smooth contours. In the MRI images we manually set the initial circle of the level set in the myocardial region, exploiting the near-circular shape of the LV in short-axis cardiac images, and then applied the level set method to detect the endocardial and epicardial walls. An example of an extracted result is shown in Fig. 1.

Fig. 1 Extraction of myocardial walls by level set method

We implemented the processing on 2-D images because it allows stable extraction of the myocardial contours: the number of updates of the level set function, described below, can be determined on every slice even when image contrast varies between slices. The method uses a dynamic contour model that iteratively deforms the contour, beginning from the initial contour, toward regions of increasing gradient in the pixel values. The surface is represented as an equipotential level of the function φ_t(x, y), whose zero crossings form the contour as it is updated. With the boundary surface at time t + δt defined as φ_{t+δt}(x, y), the update equation is

  φ_{t+δt}(x, y) = φ_t(x, y) + δt (1 − εκ) V(x, y) |∇φ_t(x, y)|     (1)

Here, δt and κ are the time step and the curvature, respectively; we set ε = 0.5. The curvature is

  κ = ∇ · (∇φ / |∇φ|) = (φ_xx φ_y² − 2 φ_x φ_y φ_xy + φ_yy φ_x²) / (φ_x² + φ_y²)^{3/2}     (2)

The function V(x, y) on the right-hand side of (1) adjusts the growth of the boundary surface. In this study, we used the velocity function

  V(x, y) = 1 / (1 + |∇(G_σ * I(x, y))|)     (3)

where I(x, y) is the pixel value at coordinates (x, y) and G_σ is a Gaussian smoothing filter with standard deviation σ. The initial function φ_0(x, y) is

  φ_0(x, y) = (x − x_0)² + (y − y_0)² − r_0²   (for endocardium detection)
  φ_0(x, y) = −(x − x_0)² − (y − y_0)² + r_0²   (for epicardium detection)     (4)

Updating by Eq. (1) was stopped when the variation of the summation of φ(x, y) over the region enclosed by the boundary reached its minimum.

B. NURBS Fitting

A 3-D NURBS surface of degree p in the u direction and degree q in the v direction is defined as a piecewise ratio of B-spline polynomials; in this study we set both p and q to 3. The NURBS surface S(u, v) is

  S(u, v) ≡ (x(u, v), y(u, v), z(u, v))
          = Σ_{i=0}^{n} Σ_{j=0}^{m} N_{i,p}(u) N_{j,q}(v) ω_ij P_ij / Σ_{i=0}^{n} Σ_{j=0}^{m} N_{i,p}(u) N_{j,q}(v) ω_ij     (5)
  (0 ≤ u ≤ 1, 0 ≤ v ≤ 1)

where P_ij are the control points defining the surface, ω_ij are weights determining a point's influence on the shape of the surface, and N_{i,p}(u) and N_{j,q}(v) are the non-rational B-spline basis functions defined on the knot vectors
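The per-slice update of Eqs. (1)–(3) can be sketched in a few lines of NumPy (a simplified illustration of the scheme, not the authors' implementation; central finite differences stand in for the continuous derivatives, and the Gaussian pre-smoothing G_σ * I is assumed to have been applied to the image already):

```python
import numpy as np

def curvature(phi):
    """Curvature of the level sets of phi, Eq. (2), via finite differences."""
    gy, gx = np.gradient(phi)          # derivatives along axis 0 (y) and axis 1 (x)
    gxy, gxx = np.gradient(gx)
    gyy, _ = np.gradient(gy)
    num = gxx * gy**2 - 2.0 * gx * gy * gxy + gyy * gx**2
    den = (gx**2 + gy**2) ** 1.5 + 1e-12   # small epsilon avoids division by zero
    return num / den

def speed(smoothed_image):
    """Edge-stopping function V(x, y) of Eq. (3); the argument stands in
    for the pre-smoothed image G_sigma * I."""
    gy, gx = np.gradient(smoothed_image)
    return 1.0 / (1.0 + np.hypot(gx, gy))

def level_set_step(phi, V, dt=0.1, eps=0.5):
    """One update of Eq. (1): phi <- phi + dt (1 - eps*kappa) V |grad phi|."""
    gy, gx = np.gradient(phi)
    return phi + dt * (1.0 - eps * curvature(phi)) * V * np.hypot(gx, gy)
```

Starting from the signed initial function of Eq. (4), repeated calls to `level_set_step` move the zero-crossing contour until the stopping criterion is met.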
  U = [0, …, 0, u_{p+1}, …, u_n, 1, …, 1]   (p + 1 leading zeros, p + 1 trailing ones)
  V = [0, …, 0, v_{q+1}, …, v_m, 1, …, 1]   (q + 1 leading zeros, q + 1 trailing ones)     (6)

The B-spline basis functions are calculated using the Cox–de Boor recurrence relation:

  N_{i,0}(t) = 1 if t_i ≤ t < t_{i+1}, and 0 otherwise
  N_{i,m}(t) = (t − t_i) / (t_{i+m} − t_i) · N_{i,m−1}(t) + (t_{i+m+1} − t) / (t_{i+m+1} − t_{i+1}) · N_{i+1,m−1}(t)     (7)
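The Cox–de Boor recurrence of Eq. (7) translates directly into code (an illustrative sketch; the clamped cubic knot vector below is an assumed example, with terms of zero denominator taken as zero, as usual):

```python
def n_basis(i, m, t, knots):
    """B-spline basis function N_{i,m}(t) by the Cox-de Boor recurrence, Eq. (7)."""
    if m == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + m] != knots[i]:
        left = (t - knots[i]) / (knots[i + m] - knots[i]) * n_basis(i, m - 1, t, knots)
    if knots[i + m + 1] != knots[i + 1]:
        right = ((knots[i + m + 1] - t) / (knots[i + m + 1] - knots[i + 1])
                 * n_basis(i + 1, m - 1, t, knots))
    return left + right

# clamped cubic (m = 3) knot vector of the form in Eq. (6): four 0s, four 1s
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
t = 0.3
# the n + 1 = 5 basis functions form a partition of unity on [0, 1)
total = sum(n_basis(i, 3, t, knots) for i in range(5))
```

The rational NURBS point of Eq. (5) is then the weighted combination of control points using these basis values in both the u and v directions.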
In this study, we acquired points on the myocardial contour by searching radially from the center, defined as the intersection of the LV long axis with each short-axis slice. We then calculated the control points P_ij from the points on the myocardial contour, and obtained the NURBS surface from the control points using Eq. (5). An example of a 3-D NURBS surface constructed from the MRI myocardial contours obtained in the cardiac wall detection is shown in Fig. 2; the figure was drawn with the software Real INTAGE (KGT Inc.).
Fig. 2 Application of the NURBS model to the endocardium and epicardium detected from MRI images

B. Calculation of Wall Thickness

Here we describe the method used to calculate the blood flow volume per unit LV myocardial volume and to construct the bull's eye map. The myocardial volume is calculated using the wall thickness, defined as the distance between the endocardial point P_end(u, v) and the epicardial point P_epi(u, v) with the same u and v, as shown in Fig. 3.

Fig. 3 Definition of myocardial wall thickness

The wall thickness W(u, v) is

  W(u, v) = |P_end(u, v) − P_epi(u, v)|     (8)

III. RESULTS AND DISCUSSIONS

We applied our method to 5 normal cases and 5 APH cases; here we show the average maps of the 5 normal cases in Figs. 4 and 5. Figure 4 is the bull's eye map based on the wall thickness extracted by the level set method alone, and Fig. 5 is the map extracted by the level set method combined with NURBS approximation. To examine the performance of the NURBS approximation, we compared the detected myocardial walls with those manually extracted by two cardiologists; for this comparison the NURBS surface was resliced at the original short-axis slices.
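Given the two NURBS surfaces sampled on a common (u, v) grid, Eq. (8) is a pointwise Euclidean distance (a minimal NumPy sketch with toy planar "surfaces" standing in for the fitted walls):

```python
import numpy as np

def wall_thickness(p_end, p_epi):
    """W(u, v) = |P_end(u, v) - P_epi(u, v)|, Eq. (8).

    p_end, p_epi: arrays of shape (nu, nv, 3) sampling the endocardial and
    epicardial NURBS surfaces at the same (u, v) parameter grid.
    """
    return np.linalg.norm(p_end - p_epi, axis=-1)

# toy check: two parallel planes 5 mm apart give a uniform 5 mm thickness
u, v = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4), indexing="ij")
endo = np.stack([u, v, np.zeros_like(u)], axis=-1)
epi = np.stack([u, v, np.full_like(u, 5.0)], axis=-1)
print(wall_thickness(endo, epi))  # 4x4 array of 5.0
```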
Fig. 4 Bull’s eye map based on wall thickness extracted by level set method
Fig. 5 Bull’s eye map based on wall thickness extracted by both level set method and NURBS approximation.
Fig. 4 shows some striping because the 3D myocardial shape is not considered; the 3D NURBS approximation eliminates this striping. Table 1 shows that the standard deviation of the error decreases when NURBS approximation is added to the level set method, confirming that the NURBS approximation suppresses the variation of the detection at the slice level. Our method shows relatively good performance for endocardium detection; the epicardium detection error, on the other hand, was larger than the difference between the cardiologists, i.e. the method is weaker for epicardium detection.
Table 1 Detection error of the myocardial wall

                                            endocardium (mm)   epicardium (mm)
  Level set method (LSM)                    2.77 ± 1.07        2.51 ± 1.06
  Combined method (LSM + NURBS)             2.58 ± 0.58        2.71 ± 0.59
  Difference between the two cardiologists  3.44 ± 1.57        2.00 ± 0.71

IV. CONCLUSIONS

In this study, we proposed a method combining the level set method and NURBS approximation for myocardial detection and calculation of wall thickness. We first extracted the epi- and endocardial walls by the level set method on 2D images; slice-wise processing keeps the calculation cost low. For this extraction we exploited the near-circular shape of the left ventricle and set the initial circle in the myocardial region surrounding the endocardium. We then approximated the extracted walls with a 3D NURBS (non-uniform rational B-spline) model. We applied the method to T1-weighted cardiac MRI images of 10 subjects (5 normal subjects and 5 patients with apical hypertrophic cardiomyopathy) and evaluated it by comparison with manual detection by two cardiologists. The endocardial detection error was about the same as or smaller than the difference between the cardiologists, whereas the epicardial detection error was larger. Bull's eye maps of the wall thickness also show that the NURBS approximation suppressed the variation of the detection.

REFERENCES
1. Fukami T, Sato H, Wu J et al. (2007) Quantitative evaluation of myocardial function by a volume-normalized map generated from relative blood flow, Phys. Med. Biol., 52:4311–4330. doi: 10.1088/0031-9155/52/14/019
2. Segars W P, Lalush D S, Tsui B M W (1999) A realistic spline-based dynamic heart phantom, IEEE Trans. Nucl. Sci., 46:503–506
3. Tsui B M W, Segars W P, Lalush D S (2000) Effects of upward creep and respiratory motion in myocardial SPECT, IEEE Trans. Nucl. Sci., 47:1192–1195
4. Tustison N J, Abendschein D, Amini A A (2004) Biventricular myocardial kinematics based on tagged MRI from anatomical NURBS models, Proc. of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 2:514–519
5. Lee C, Lodwick D, Hasenauer D et al. (2007) Hybrid computational phantoms of the male and female newborn patient: NURBS-based whole-body models, Phys. Med. Biol., 52:3309–3333
6. Malladi R, Sethian J A, Vemuri B C (1994) Evolutionary fronts for topology-independent shape modeling and recovery, Proc. of Third European Conference on Computer Vision, 800:3–13

Author: Tadanori Fukami
Institute: Yamagata University
Street: Jonan 4-3-16
City: Yonezawa
Country: Japan
Email: [email protected]
Design of a Wireless Intraocular Pressure Monitoring System for a Glaucoma Drainage Implant

T. Kakaday1, M. Plunkett2, S. McInnes3, J.S. Jimmy Li1, N.H. Voelcker3 and J.E. Craig4
1 School of Computer Science, Engineering and Mathematics, Flinders University, Adelaide, Australia
2 Ellex Medical Lasers R & D, 82 Gilbert Street, Adelaide, Australia
3 School of Chemistry, Physics and Earth Sciences, Flinders University, Adelaide, Australia
4 Department of Ophthalmology, Flinders Medical Centre, Adelaide, Australia
Abstract — Glaucoma is a common cause of blindness, and wireless, continuous monitoring of intraocular pressure (IOP) is an important, unsolved goal in managing it. An IOP monitoring system incorporated into a glaucoma drainage implant (GDI) avoids the design complexity of incorporating a similar system in the more confined space within the eye. The device consists of a micro-electro-mechanical systems (MEMS) based capacitive pressure sensor combined with an inductor printed directly onto a polyimide printed circuit board (PCB), and is designed to be placed onto the external plate of a therapeutic GDI. The resonance frequency changes as a function of IOP and is tracked remotely using a spectrum analyzer. A theoretical model of the reader antenna, including high-frequency effects, was developed to enable maximal inductive coupling with the IOP sensor implant. Pressure chamber tests indicate that the device has adequate sensitivity in the IOP range with excellent reproducibility over time. Additionally, we show that the sensor sensitivity does not change significantly after encapsulation with polydimethylsiloxane (PDMS), which protects the device from the aqueous environment. In vitro experiments showed that the signal measured wirelessly through sheep corneal tissue was adequate, indicating the system's potential for use in human subjects. Keywords — glaucoma, intraocular pressure, glaucoma drainage implant, micro-electro-mechanical systems (MEMS), capacitive pressure sensor
I. INTRODUCTION The most commonly used technique, and current gold standard for intraocular pressure (IOP) measurement is applanation tonometry. In addition to requiring topical anesthetic and a skilled operator, a major disadvantage of applanation tonometry is that it is influenced by many variables, thereby providing only a surrogate measure of true IOP. Additionally, diurnal measurements are difficult to obtain, particularly overnight. Remote continuous monitoring of IOP has long been desired by clinicians and the development of such technology has the prospect of revolutionizing glaucoma care. Several groups have described an active remote measuring device
that can be incorporated into the haptic region of an intraocular lens (IOL) [1-3]. IOLs are universally used to replace the natural lens in cataract surgery and are in direct contact with the aqueous humor inside the anterior chamber, thus providing an accurate measurement of IOP. However, the IOL's size and weight constraints require the implant to be miniaturized, and therefore require on-chip circuitry to process signals (i.e. active telemetry). Whilst active devices are accurate and sensitive, their complexity and manufacturing price are potentially major obstacles to widespread use. In another approach, Leonardi et al. [4] described an indirect method in which a micro strain gauge embedded in a soft contact lens measures changes in corneal curvature, which correlate with IOP; however, the correlation between corneal curvature and IOP is not universally accepted. A glaucoma drainage implant (GDI) is a device implanted to lower IOP in severe glaucoma cases. The explant plate of a GDI, which is implanted under the conjunctiva, is directly connected to the anterior chamber of the eye via a tube. The plate provides a larger surface area than an IOL, allowing greater flexibility in design, so a passive telemetry approach is suitable. This reduces the fabrication complexity, making the device low cost and simple to fabricate; in addition, there are no active parts in the system, which is desirable for implantation. A feasibility study (beyond the scope of this paper) verified that the explant plate of a GDI is a suitable location for an IOP measuring system. In this paper, an IOP monitoring device is designed and implemented on the explant plate of a Molteno GDI, and an optimal design for the reader antenna, maximizing coupling between it and the sensor implant, is proposed. This paper focuses on the Molteno GDI; however, the method can be adapted to other GDIs available on the market.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 198–201, 2009 www.springerlink.com
II. MATERIALS AND METHODS

A. Sensor implant

The sensor implant is designed to be placed onto the explant plate of the Molteno GDI (Molteno Ophthalmic Limited, New Zealand), as shown in Figure 1.

Figure 1: The IOP sensor implant placed on the explant plate of a Molteno GDI.

The IOP sensor implant consists of a MEMS capacitive pressure sensor (Microfab Bremen, Germany) and a planar inductor printed directly onto a flexible, biocompatible polyimide printed circuit board (PCB) (Entech Electronics, Australia), which together form a parallel resonant circuit. The planar sensor inductor and the capacitive pressure sensor were both characterized using a Vector Impedance Meter (VIM, Model 4815A, Hewlett Packard). The sensor implant was encapsulated with the biomaterial polydimethylsiloxane (PDMS) to protect it from the aqueous environment; all PDMS coatings were prepared from a Sylgard® Brand 184 Silicone Elastomer Kit.

B. Wireless communication

The wireless communication between the sensor implant and the external reader antenna is an important determinant in implantable systems. When the sensor implant is brought into the vicinity of the external reader antenna, a portion of the RF energy is absorbed by the sensor implant, creating a 'dip' in the signal observed on the spectrum analyzer; a greater 'dip' signifies greater coupling between the reader antenna and the sensor implant. The sensor implant was placed inside a pressure chamber connected to an inflation cuff at one end and a sphygmomanometer at the other, separated from the reader antenna by a 4 mm thick non-conducting PDMS layer. The pressure was varied over the desired IOP range (5–50 mmHg) using the inflation cuff and the shift in the resonance peak was recorded. Experiments were repeated using explanted sheep scleral and corneal tissue to determine whether wireless communication is possible through biological tissue.

C. Reader antenna design

Designing the reader antenna involves maximizing its read range by optimizing its coupling with the sensor implant. The coupling coefficient k is given by [5, 6]

  k = M / √(L1 L2)     (1)

where M is the mutual inductance and L1 and L2 are the inductances of the sensor and reader inductor coils, respectively. For the best system characteristics, k should be designed toward unity. The inductances L1 and L2 can be determined directly with the VIM, and M is calculated from the solution provided by Zierhofer et al. [7]. To determine the optimal reader antenna size for a specified read range, the mutual inductance between the two spiral coil geometries, given by the expression below, needs to be maximized [8]:

  M = μ0 π N1 N2 a² b² / (2 (a² + r²)^{3/2})     (2)

where a and b are the radii of the reader and sensor coils, N1 and N2 are the numbers of turns in the reader and sensor coils, respectively, and r is the distance between the two coils. Maximizing (2) with respect to a reduces to

  a = √2 · r     (3)

To validate the described model by experiment, four planar inductors of different size, inductance and number of turns were printed directly onto a PCB; their properties are listed in Table 1. The experimental setup consists of an LC
Table 1 Antenna coil properties

                                    Coil A   Coil B   Coil C   Coil D
  Number of turns                   18       15       7        15
  Inductance (μH)                   2.14     4.45     2.5      1.75
  Self-resonating frequency (MHz)   63       49       73       102
  Diameter (mm)                     9.4      32.51    32.5     16.76
resonant circuit comprising a planar sensor inductor (L) and a 22 pF chip capacitor (C), mounted on vernier calipers at a set distance from the reader antenna, which is connected to a spectrum analyzer. To determine the maximum read range, the distance between the reader antenna and the LC resonant circuit was increased until no further 'dip' in the signal was observed; the experiment was then repeated with the sensor implant in place of the LC resonant circuit.

III. RESULTS AND DISCUSSION

A. Sensitivity of the sensor implant in the IOP range

The VIM results showed that the quality factor (Q) of the capacitive pressure sensor is quite low at the resonant frequency of the sensor implant (8.71 at 38.61 MHz), compared with its maximum Q of 57.29 at 10 MHz. Pressure chamber tests were repeated randomly (50 iterations) over a period of one week; the results are shown in Figure 2. The current resolution of the sensor implant in the IOP range is 10 mmHg, limited by the sensitivity and Q of the MEMS capacitive sensor.
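The coupling model of Eqs. (1)–(3) and the parallel-LC resonance that the reader tracks can be sketched numerically (a hedged illustration, not the authors' code; the coil turn counts, the sensor coil radius, and the ~8 pF sensor capacitance below are assumed example values, while the 2.14 μH inductance and the 1.167 × 10⁻³ pF/mmHg sensitivity are taken from the text):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

def mutual_inductance(n1, n2, a, b, r):
    """Eq. (2): M = mu0*pi*N1*N2*a^2*b^2 / (2*(a^2 + r^2)^(3/2)),
    for coaxial planar coils of radii a (reader) and b (sensor)
    separated by a distance r (all lengths in metres)."""
    return MU0 * math.pi * n1 * n2 * a**2 * b**2 / (2.0 * (a**2 + r**2) ** 1.5)

def coupling_coefficient(m, l1, l2):
    """Eq. (1): k = M / sqrt(L1 * L2)."""
    return m / math.sqrt(l1 * l2)

def optimal_reader_radius(r):
    """Eq. (3): the reader radius maximizing M at read distance r."""
    return math.sqrt(2.0) * r

def resonant_frequency(l, c):
    """Resonance of the parallel LC sensor circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l * c))

# sanity check that Eq. (3) really maximizes Eq. (2) over the reader radius
r = 4e-3                                   # 4 mm read distance
a_opt = optimal_reader_radius(r)
m_opt = mutual_inductance(15, 10, a_opt, 2e-3, r)
assert m_opt >= mutual_inductance(15, 10, 0.9 * a_opt, 2e-3, r)
assert m_opt >= mutual_inductance(15, 10, 1.1 * a_opt, 2e-3, r)

# resonance shift for a 10 mmHg pressure change, using the reported
# sensitivity of 1.167e-3 pF/mmHg and an assumed sensor capacitance
L, C = 2.14e-6, 8e-12                      # coil A inductance; capacitance assumed
dC = 1.167e-15 * 10.0                      # 10 mmHg of capacitance change, in farads
shift_hz = resonant_frequency(L, C) - resonant_frequency(L, C + dC)
```

For these assumed values the shift is on the order of tens of kilohertz, which illustrates why the low Q of the sensor limits the achievable pressure resolution.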
25
42.45 42.40 42.35 42.30 42.25 42.20 42.15 42.10
Coil A Coil B Coil C Coil D
20 15 10 5 0 0
2
4
6
8
Distance (mm) Figure 3: Experimental evaluation of reader coil geometries at varying distances from the LC resonant circuit. The better coupling results in a greater ‘dip’ height which decreases with increasing distance from the LC resonating circuit.
0.6
Coupling Coefficient
Frequency (MHz)
42.50
‘dip’ height are presented in Figure 3. The theoretical coupling coefficient k between the reader antenna and sensor implant is shown in Figure 4. The theoretical model closely follows the trend from the experimental results (Figure 3).
'Dip' Height (dbm)
200
Coil A Coil B Coil C Coil D
0.5 0.4 0.3 0.2 0.1 0.0
0
10
20
30
40
50
60
0
Atmospheric Pressure (mmHg)
1
2
3
4
Distance (mm)
Figure 2: The resonance frequency response of the IOP sensor implant. The error bars indicate the standard deviation.
Figure 4: Theoretical coupling coefficients of reader coils at varying distances from the IOP sensor implant calculated from (3).
In vitro experiments were carried out by substituting the PDMS barrier with explanted sheep corneal and scleral tissue in aqueous environment. The results showed that the signal measured was adequate, indicating potential for using such a system in human subjects.
A summary of results showing the performance of each reader coil with the LC resonant circuit and sensor implant are presented in Table 2. The maximum read range between the reader antenna and sensor implant is greatly attenuated due to its low Q as compared to a high Q LC resonant circuit. The effect of self resonating frequency (SRF) on the reader coil performance is evident when coupled with the sensor implant. The SRF is the frequency beyond which the inductor starts to behave as a capacitor. Although coil B gives a maximum read range of 10 mm with the LC reso-
B. Reader antenna performance Experimental results showing coupling between the reader antenna and LC resonant circuit represented by the
_________________________________________
IFMBE Proceedings Vol. 23
___________________________________________
Design of a Wireless Intraocular Pressure Monitoring System for a Glaucoma Drainage Implant Table 2: Summary of results Coil A
Coil B
Coil C
Coil D
Theoretical read range (mm) (3)
4
10
10
6
Experimental range LC circuit (mm)
6
8
8
7
Experimental range – sensor implant (mm)
3
0
0
4
Although coil B gives a good read range with the LC resonant circuit, it produces zero 'dip' when coupled with the sensor implant. This is due to (a) the low Q of the sensor implant, and (b) the SRF of coil B lying in close proximity to the resonant frequency of the sensor implant. Coil D gave the maximum read range of 4 mm with the sensor implant. This result is anticipated for a number of reasons: (a) its SRF is at least twice the resonant frequency of the sensor implant; (b) its number of turns is maximized to increase the mutual inductance, which is directly proportional to k; and (c) coil D is larger than the sensor implant, allowing for lateral and angular misalignments.

C. Encapsulation of sensor with PDMS

The results from encapsulating the capacitive pressure sensor with the biomaterial PDMS, as obtained from the VIM, show that the sensitivity of the sensor does not change significantly upon encapsulation. The minor offset due to the influence of the silicone coating can be compensated by calibrating each sensor after encapsulation. The average sensitivity of the sensor in the IOP range, determined over several repeated measurements, was 1.167x10-3 pF/mmHg.

IV. CONCLUSION

An IOP sensor implant comprising a MEMS-based capacitive pressure sensor and a planar inductor was designed to be incorporated into the explant plate of a Molteno GDI. The IOP sensor implant has been shown to have a reasonable resolution (10 mmHg) in the IOP range; such a resolution is able to differentiate between normal and high IOP. In addition, the sensor implant showed excellent repeatability over time. Better resolution of small differences in IOP will be desirable in future iterations. In addition, sensor sensitivity was not significantly affected by encapsulation with the PDMS bio-coating. The reader antenna is designed to maximize its coupling with the sensor implant. Theoretical models are proposed to predict the coupling of the reader antenna coils with a resonant circuit of interest, including the optimal coil size for a specified read range. We found that the theoretical predictions closely followed the experimental results. Signals from the sensor implant were measured through 4 mm thick PDMS biomaterial and through explanted sheep scleral and corneal tissue, indicating high potential for using the system in human subjects. The IOP monitoring system incorporated into a GDI avoids the design complexity and associated costs of incorporating such a system in an IOL. Such a device will open new perspectives, not only in the management of glaucoma, but also in basic research into the mechanisms of glaucoma.

REFERENCES
1. Walter P, Schnakenberg U, Vom Bogel G, et al. Development of a Completely Encapsulated Intraocular Pressure Sensor. Ophthalmic Research 2000;32:278-84.
2. Schnakenberg U, Walter P, Bogel Gv, et al. Initial investigations on systems for measuring intraocular pressure. Sensors and Actuators 2000;85:287-91.
3. Eggers T, Draeger J, Hille K, et al. Wireless Intra-Ocular Pressure Monitoring System Integrated into an Artificial Lens. 1st Annual International IEEE-EMBS Special Topic Conference on Microtechnologies in Medicine & Biology, Lyon, France, 2000.
4. Leonardi M, Leunberger P, Bertrand D, Bertsch A, Renaud P. First Steps toward Non-invasive Intraocular Pressure Monitoring with a Sensing Contact Lens. Investigative Ophthalmology & Visual Science 2004;45:3113-7.
5. Ong KG, Grimes CA, Robbins CL, Singh RS. Design and application of a wireless, passive, resonant-circuit environmental monitoring sensor. Sensors and Actuators A 2001;93:33-43.
6. Akar O, Akin T, Najafi K. A wireless batch sealed absolute capacitive pressure sensor. Sensors and Actuators A 2001;95:29-38.
7. Zierhofer CM, Hochmair ES. Geometric Approach for Coupling Enhancement of Magnetically Coupled Coils. IEEE Transactions on Biomedical Engineering 1996;43:708-14.
8. Reinhold C, Scholz P, John W, Hilleringmann U. Efficient Antenna Design of Inductive Coupled RFID-Systems with High Power Demand. Journal of Communications 2007;2:14-23.
Author: Tarun Kakaday
Institute: Flinders University of South Australia
Street: Sturt Road, Bedford Park
City: ADELAIDE - 5042
Country: AUSTRALIA
Email: [email protected]
Integrating FCM and Level Sets for Liver Tumor Segmentation

Bing Nan Li1, Chee Kong Chui2, S.H. Ong3,4 and Stephen Chang5

1 Graduate Programme in Bioengineering, National University of Singapore, Medical Drive 28, Singapore
2 Department of Mechanical Engineering, National University of Singapore, Engineering Drive 1, Singapore
3 Department of Electrical and Computer Engineering, National University of Singapore, Engineering Drive 3, Singapore
4 Division of Bioengineering, National University of Singapore, Engineering Drive 1, Singapore
5 Department of Surgery, National University Hospital, Kent Ridge Wing 2, Singapore
Abstract — Liver and liver tumor segmentation are very important for a contemporary liver surgery planning system. However, both remain a grand challenge in clinical practice. In this paper, we propose an integrated paradigm combining fuzzy c-means (FCM) and the level set method for computerized liver tumor segmentation. An innovation of this paper is to interface the initial segmentation from FCM with the fine delineation by the level set method through morphological operations. Results on real medical images confirm the effectiveness of this integrated paradigm for liver tumor segmentation.

Keywords — liver tumor segmentation, fuzzy c-means, level set methods, medical image processing
I. INTRODUCTION

Information and computer technologies have a great impact on liver tumor treatment. For instance, physicians can now inspect liver components and plan liver surgery in an augmented-reality environment. To this end, one essential step is to capture the profiles of the internal components of the human body and reconstruct them accurately. Medical imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI), are among the most popular in this field. Take CT for example: its results have been applied to computerized planning systems for liver tumor treatment in various publications [1-2]. In conventional liver surgery planning systems, physicians have to inspect a series of CT images and analyze liver components by hand, which is clearly not an easy job. Segmentation is a field of technology oriented to the computerized distinguishing of anatomical structures and tissue types in medical images. It yields useful information such as the spatial distributions and pathological regions of physiological organs in a medical image. Thus, one essential component of a contemporary planning system is accurate liver segmentation, in particular of hepatic vessels and liver tumors.

The underlying objective of image segmentation is to separate the regions of interest from their background and from other components. A typical paradigm of image segmentation either allocates homogeneous pixels or identifies the boundaries among different image regions [3]. The former often takes advantage of pixel intensities directly, while the latter depends on intensity gradients. Beyond intensity information, it is also possible to segment an image by utilizing model templates and evolving them to match the objects of interest. In addition, various soft computing methods have been applied to image segmentation. It is noteworthy, however, that there is as yet no universal method for image segmentation; the specific application and the available resources usually determine an individual method's strengths and weaknesses.

Among the state-of-the-art methods, active contours or deformable models are among the most popular for image segmentation. The idea behind them is quite straightforward: the user specifies an initial guess and then lets the contour or model evolve by itself. If the initial model is parametrically expressed, it operates as a snake [3]. In contrast, level set methods do not follow a parametric model and thereby have better adaptability. However, without the parametric form, level set methods often suffer from a few refractory problems, for example boundary leakage and excessive computation [4]. A good initialization is therefore very important for level set image segmentation. In reference [5], the authors utilized a fast marching approach to propagate the initial seed point outwards, followed by a level set method to fine-tune the results. In this paper, we propose to initialize the level set evolution with the segmentation from fuzzy c-means (FCM), which has gained esteem in medical image segmentation. In other words, an integrated technique is presented for liver tumor segmentation from CT scans. The first part is based on unsupervised clustering by FCM, whose results are then selected for a series of morphological operations. The final part is an enhanced level set method for fine delineation of liver tumors.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 202–205, 2009 www.springerlink.com
II. INITIAL SEGMENTATION BY FCM

Segmentation is a classical topic in the field of medical image processing and analysis. Consider a medical image as a function u₀ defined on the image domain Ω ⊂ R². Image segmentation seeks the optimal subsets Ω_i such that Ω = ∪_i Ω_i and the image has a nearly constant property within each Ω_i. It is hereby possible to segment a medical image based on its pixel intensities or on variational boundaries. In the latter case, image segmentation may be formulated as Ω = ∪_i Ω_i ∪_j Γ_j, where the Γ_j are the boundaries separating the Ω_i. However, due to intrinsic noise and discontinuities, neither approach is universally robust [7]. As a consequence, an integrated technique is proposed in this paper that utilizes FCM, based on image intensity information, for initial segmentation, and the level set method, based on the variational information of image boundaries, for object refinement.

FCM has been widely utilized for medical image segmentation. In essence, it originates from the classical k-means algorithm. In the k-means algorithm, however, every image pixel is assigned to one and only one of the k clusters, which is not appropriate for medical image segmentation: generally speaking, each pixel in a medical image may arise from the superimposition of different human organs, so it is usually not appropriate to assign a pixel to a single organ or organ component. Instead, FCM utilizes a membership function μ_ij to indicate the belongingness of the jth object to the ith cluster, and its results are thereby more justifiable in medicine. The objective function of FCM is:

J = Σ_{j=1..N} Σ_{i=1..c} μ_ij^m ‖x_j − v_i‖²    (1)

where μ_ij represents the membership of pixel x_j in the ith cluster, v_i is the ith cluster center, and m (m > 1) is a constant controlling the fuzziness of the resulting segmentation. The membership functions are subject to the constraints Σ_{i=1..c} μ_ij = 1, 0 ≤ μ_ij ≤ 1, and Σ_{j=1..N} μ_ij > 0. In accordance with (2)-(3), the membership functions μ_ij and the centroids v_i are updated iteration by iteration:

μ_ij = ‖x_j − v_i‖^(−2/(m−1)) / Σ_{k=1..c} ‖x_j − v_k‖^(−2/(m−1))    (2)

v_i = Σ_{j=1..N} μ_ij^m x_j / Σ_{j=1..N} μ_ij^m    (3)
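As an illustrative sketch (our own NumPy code with hypothetical names, not the authors' implementation), the alternating updates (2)-(3) for 1-D pixel intensities can be written as follows; the deterministic quantile-based initialization is an assumption, not the paper's exact scheme:

```python
import numpy as np

def fcm_1d(x, c, m=2.0, iters=100, tol=1e-5):
    """Fuzzy c-means on a 1-D array of N pixel intensities.

    Returns memberships mu of shape (c, N) and centroids v of shape (c,).
    Centroids are initialized from intensity quantiles, loosely echoing a
    histogram-based initialization.
    """
    x = np.asarray(x, dtype=float)
    v = np.quantile(x, np.linspace(0.1, 0.9, c))      # deterministic init
    for _ in range(iters):
        d = np.abs(x[None, :] - v[:, None]) + 1e-12    # (c, N) distances
        inv = d ** (-2.0 / (m - 1.0))                  # terms of eq. (2)
        mu = inv / inv.sum(axis=0, keepdims=True)      # so sum_i mu_ij = 1
        w = mu ** m
        v_new = (w @ x) / w.sum(axis=1)                # eq. (3) centroid update
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return mu, v
```

For a CT slice, `x` would be the flattened pixel intensities and each row of `mu` a per-cluster membership map.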
The system is optimized when pixels close to their cluster's centroid are assigned high membership values, and pixels far from the centroid are assigned low membership values. The performance of FCM for medical image segmentation depends substantially on the prefixed number of clusters and on the initial clusters. We have found in practice that random initialization is not robust enough for liver tumor segmentation in CT scans. Other investigators have successfully carried out optimal initialization by histogram analysis for brain segmentation in MRI [7], but there is only a tiny and variable discrepancy between the histograms of the liver and of liver tumors in CT images. In our experiments, we empirically designated the initial cluster centroids by averaging the histogram of the CT image.

III. TUMOR DELINEATION BY LEVEL SET METHODS

The level set method, proposed by Osher and Sethian, is a versatile tool for tracing the interfaces that separate an image into different parts. The main idea is to characterize the interface Γ(t) implicitly by a Lipschitz function φ:

φ(t, x, y) > 0  if (x, y) is inside Γ(t)
φ(t, x, y) = 0  if (x, y) is at Γ(t)    (4)
φ(t, x, y) < 0  if (x, y) is outside Γ(t)

In other words, the interface Γ(t) is implicitly obtained as the zero-level curve of the function φ(t, x, y) at time t. In general, Γ(t) evolves in accordance with the following nonlinear partial differential equation (PDE):

∂φ/∂t + F|∇φ| = 0,  φ(0, x, y) = φ₀(x, y)    (5)

where F is a velocity field normal to the curve Γ(t), and the set {(x, y) | φ₀(x, y) = 0} is the initial contour.

A few limitations of standard level set methods have also been recognized. First of all, the level set function has to be reinitialized periodically in order to guarantee the stability and usability of the solution; otherwise the evolution often faces the threat of boundary leakage. Moreover, standard level set methods are carried out on the entire image domain, whereas for image segmentation only the zero level set is eventually of interest; it is therefore meaningful to limit the computation to a narrow band near the zero level.
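As a small self-contained illustration (the grid size, circle and band width below are arbitrary choices, not taken from the paper), the sign convention of (4), one explicit Euler step of (5) with a constant normal speed F, and a narrow band can be set up as:

```python
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n].astype(float)

# Implicit interface following the convention of eq. (4):
# phi > 0 inside the curve, phi = 0 on it, phi < 0 outside.
phi = 30.0 - np.sqrt((x - 64.0) ** 2 + (y - 64.0) ** 2)

# One explicit step of eq. (5): phi_next = phi - dt * F * |grad phi|.
# With F > 0 and this sign convention, the zero-level circle shrinks.
F, dt = 1.0, 0.5
gy, gx = np.gradient(phi)
phi_next = phi - dt * F * np.sqrt(gx ** 2 + gy ** 2)

# Narrow band: in practice only cells with |phi| < width need updating,
# instead of the entire image domain.
band = np.abs(phi) < 5.0
```

The band typically covers only a small fraction of the grid, which is the computational saving the text refers to.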
In this paper, we follow the approach proposed in reference [6] for a distance-preserving level set evolution. First, the authors revised the level set model as:

∂φ/∂t = μ[Δφ − div(∇φ/|∇φ|)] + λ δ(φ) div(g ∇φ/|∇φ|) + ν g δ(φ)    (6)

where μ > 0 is the weight of the internal energy term, which controls the smoothness of the level set curve; λ > 0 and ν are constants controlling the external contraction; g is the edge indicator function defined in [6]; and δ is the univariate Dirac function. In practice, δ is regularized as:

δ_ε(x) = [1 + cos(πx/ε)] / (2ε)  if |x| ≤ ε;  δ_ε(x) = 0  if |x| > ε    (7)

The distance-preserving level set model eliminates the iterative reinitialization of the standard level set method. Another benefit of this model is that it admits a more general initialization than the signed distance function. The authors proposed an efficient bi-value, region-based initialization: given an arbitrary region Ω₀ in an image, the initial level set may simply be defined as

φ₀(x, y) = −c  if (x, y) is inside Ω₀;  φ₀(x, y) = c  otherwise    (8)

where c is a constant larger than the regularization parameter ε in (7). The initial region Ω₀ may come from a manually specified region of interest or from computer algorithms, for example thresholding, region growing, and so on.

IV. INTERFACING BY MORPHOLOGICAL OPERATIONS

This paper seeks to combine the advantages of fuzzy clustering and the level set method for image segmentation. On one hand, each FCM cluster, theoretically speaking, represents a significant object in the image. On the other hand, the initialization strategy does affect the final performance of level set segmentation. Intuitively, the initial segmentation by FCM may serve as the initial guess for the level set evolution. However, unlike the original medical image, whose pixels are intensities, the FCM clusters are a series of membership functions relative to their centroids. Converting them to a grayscale image amplifies noise effects, so they are not suitable for initiating the level set evolution directly. Meanwhile, as FCM attends to intensity information only, its results inevitably suffer from image noise and background inhomogeneity. In other words, the initial segmentation by
FCM is discrete and scattered, which poses a great challenge to the subsequent level set evolution.

In this paper, we suggest processing the initial FCM segmentation with morphological operations, which involve only a few simple graphic instructions. In essence, a morphological operation modifies an image based on predefined rules and templates: the state of any image pixel is determined by its template-defined neighborhood. Morphological operations generally run very fast because they involve only simple computation. For instance, the template is often defined as a matrix with elements 0 and 1, where a template shape such as a disk or diamond is approximated by the 1 elements; only max and min operations are then needed to expand or shrink regions in the image. Take image shrinkage for example: the value of each pixel becomes the minimum among that pixel and its neighbors. The purpose of the morphological operations here is to filter out the spurious objects in the FCM clusters caused by noise while preserving the genuine image components. In our practice, most noise objects are small and scattered, so the morphological operations first shrink the FCM segmentation to remove small objects, and then expand it to recover the objects of interest.

In this paper, the goal is to detect and delineate liver tumors in CT scans. Although liver tumors are amorphous, they are nearly round in most cases, so a disk-like template was empirically chosen in our experiments. Three real liver CT images, all deemed to contain tumors, were randomly selected for tumor detection and delineation. The first two images in Fig. 1 were from the Department of Surgery, National University Hospital, Singapore; the remaining one was adopted from the 3D Liver Tumor Segmentation Challenge 2008 [8]. As evidenced in Fig. 1, the results of FCM can reflect the rough locations and extents of liver tumors in an unsupervised manner.
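The shrink-then-expand cleanup described above corresponds to a morphological opening, which can then feed the bi-value initialization of eq. (8). A sketch using SciPy (the disk radius and the constant c are arbitrary illustrative values, and this is not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def interface_fcm_to_levelset(mask, radius=3, c=4.0):
    """Clean a binary FCM cluster mask by morphological opening
    (erosion = min over a disk neighborhood removes small objects,
    dilation = max then recovers the surviving objects), and build
    the bi-value initial level set of eq. (8): -c inside, c outside."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx ** 2 + yy ** 2) <= radius ** 2   # disk-like template of 0s and 1s
    cleaned = ndimage.binary_dilation(
        ndimage.binary_erosion(mask, structure=disk), structure=disk)
    phi0 = np.where(cleaned, -c, c)
    return cleaned, phi0
```

Because opening is anti-extensive, the cleaned mask is contained in the original object, so speckle smaller than the disk disappears while round tumor-like blobs survive.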
Nevertheless, as noise and artifacts increase, the initial segmentation by FCM gradually becomes misleading, as shown in Fig. 1(a). Fig. 1(b) illustrates the effect of morphological operations on liver tumor delineation: the result is initialized by the FCM segmentation and finalized by level set evolution. Whether by visual inspection or numerical comparison, the latter results are clearly more robust and credible.

Fig. 1 Interfacing FCM and level sets by morphological operations (red line: computerized delineation by level set evolution; green dashed line: manual delineation from reference [14])

V. CONCLUSIONS

In this paper, we presented an integrated system for computerized liver tumor segmentation. In the first part, FCM was adopted for initial segmentation and liver tumor detection; the second part was based on morphological operations for segmentation refinement; the final part was a level set method for fine delineation of liver tumors. In experiments with various methods, the performance of thresholding and of the method proposed in this paper was better than that of the others. However, it is ultimately difficult to find a set of effective thresholds due to the intensity variety of liver tumors. In contrast, FCM separates different objects in an unsupervised manner and is thus able to detect liver tumors regardless of intensity variety.

As aforementioned, the results of liver tumor segmentation are oriented to an integrated surgical planning system for liver tumor treatment by radio-frequency (RF) ablation. Their accuracy and reliability are of vital importance: a false positive incurs the risk of impairing healthy hepatic tissue, while a false negative risks leaving liver tumors untreated. Thus, up to now, it is unlikely that a fully automated method or system is reliable enough for a surgical planning system for liver tumor treatment. Rather than pursuing fully automated segmentation, our methods and systems operate in an essentially semi-automated manner.

In summary, liver tumor segmentation is far more challenging than expected. As illustrated in our experiments, neither intensities nor morphological features alone are robust enough for computerized segmentation and recognition. In addition, our current work focused only on segmenting tumors from the liver; segmenting the liver itself from abdominal CT scans is a problem of comparable difficulty (www.sliver07.org). In short, many steps remain before we can set up a high-fidelity model for surgical planning of liver tumor ablation.
ACKNOWLEDGMENT

This research is supported by grants from the National University of Singapore (R-265-000-270-112 and R-265-000-270-133).
REFERENCES

1. Glombitza G, Lamade W, Demiris AM et al (1999) Virtual planning of liver resections: image processing, visualization and volumetric evaluation. International Journal of Medical Informatics 53:225-237
2. Meinzer HP, Thorn M, Cardenas CE (2002) Computerized planning of liver surgery - an overview. Computers and Graphics 26:569-576
3. Bankman IN (2000) Handbook of Medical Imaging: Processing and Analysis. Academic Press, San Diego
4. Suri J, Liu L, Singh S et al (2002) Shape recovery algorithms using level sets in 2-D/3-D medical imagery: a state-of-the-art review. IEEE Transactions on Information Technology in Biomedicine 6(1):8-28
5. Malladi R, Sethian JA, Vemuri B (1995) Shape modeling with front propagation: a level set approach. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(2):158-175
6. Li CM, Xu CY, Gui CF et al (2005) Level set evolution without re-initialization: a new variational formulation. IEEE CVPR 2005 Proc., vol. 1, pp. 430-436
7. Pham DL, Xu C, Prince JL (2000) Current methods in medical image segmentation. Annual Review of Biomedical Engineering 2:315-337
8. 3D Liver Tumor Segmentation Competition 2008 at http://lts08.bigr.nl/

Author: Bing Nan Li
Institute: Graduate Programme in Bioengineering, National University of Singapore
Street: Medicine Drive 28
City: Kent Ridge 117456
Country: Singapore
Email:
[email protected]
A Research-Centric Server for Medical Image Processing, Statistical Analysis and Modeling

Kuang Boon Beh1, Bing Nan Li2, J. Zhang1, C.H. Yan1, S. Chang4, R.Q. Yu4, S.H. Ong1, Chee Kong Chui3

1 Department of Electrical and Computer Engineering, National University of Singapore, Engineering Drive 3, Singapore
2 Graduate Programme in Bioengineering, National University of Singapore, Medical Drive, Singapore
3 Department of Mechanical Engineering, National University of Singapore, Engineering Drive 1, Singapore
4 Department of Surgery, National University Hospital, Kent Ridge Wing 2, Singapore
Abstract — Dealing with large numbers of images is a routine task for medical imaging researchers. With the growing volume of medical image data, effective management and interactive processing of imaging data become essential. This scenario challenges the research community to construct effective data management and processing systems that promote collaboration, maintain data integrity and avoid resource redundancy. In this paper, we present a Medical Image Computing Toolbox (MICT) for Matlab® as an extension to the Picture Archive and Communication System (PACS). MICT is oriented to providing interactive image processing and archiving services to medical researchers. In a nutshell, MICT is intended to be a cost-effective, collaboration-enriched solution for medical image sharing, processing and analysis in research communities.

Keywords — Medical Image Computing, Picture Archive and Communication System (PACS), Computed Tomography (CT)
I. INTRODUCTION

With the growing volume of medical image data, effective management of imaging data becomes more and more important. This scenario poses many challenges to medical imaging researchers, and in some sense explains the popularity of Picture Archive and Communication Systems (PACS) [1-3]. Moreover, there are more and more open-source PACS and simple extensions for image processing [4-5]. All of them evidence the essential need of researchers for an effective image management system. Such systems should provide integration, archiving, distribution and presentation of medical images.

As a matter of fact, there have been several long-standing PACS and DICOM toolboxes, most of them open source and freely downloadable. Some merely fulfill the basic functionality [1-2], some are powerful enough to offer PACS image management and a DICOM image viewer [3-5], and some have opened their source code for academic research [6-7]. However, all of them have limited capability for effective image processing and analysis.

The medical image computing toolbox (MICT) for Matlab® presented in this paper originates from our earlier work, the Virtual Spinal Workstation (VSW), for managing and processing over 8,000 computed tomography (CT) images. In essence, VSW is a program developed for studying spine diseases, including spine segmentation and virtualization. MICT is an ongoing project that aims not only to manage large volumes of image data but also to process and analyze them. Its underlying goal is to provide a cost-effective, collaboration-enriched, research-centric medical image management and processing toolbox for academic researchers.

The rest of this paper is organized as follows. Section II provides the overall infrastructure of MICT, design considerations and implementation details. We give an example application of MICT for medical image studies in Section III. The final parts, Sections IV and V, cover discussion and concluding remarks on system performance, future work and possible improvements.

II. INFRASTRUCTURE AND DESIGN CONSIDERATIONS

Currently MICT has three component modules: (1) an image pre-processing module for extraction, conversion and classification of DICOM images based on their meta information; (2) several image processing modules, including an image viewer, annotation tools and segmentation tools; and (3) a database module in charge of image archiving, process enquiry and data retrieval. All of them are programmed, compiled and run in Matlab®. Their infrastructure is shown in Fig. 1 in deep blue; the three blocks colored olive green belong to a PACS system, and the block colored light blue is the communication layer that mediates between MICT and the PACS system.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 206–209, 2009 www.springerlink.com
Fig. 1 Overall architecture of MICT, consisting of three modules: 1) the image pre-processing module; 2) the image archiving module; and 3) the image processing module.

A. Image Preprocessing Module

This module carries out image preprocessing before an image is archived into the database. It consists of the classification, extraction and conversion of DICOM CT or MRI images. The classification component sits at the top of the whole process: it scans the DICOM meta information, which is embedded with the images, to classify and categorize CT or MRI images. Our module fully utilizes the DICOM tools available in Matlab® to acquire the DICOM meta information. The classification component then fetches all related images together, for example those from the same patient or the same scan. Furthermore, the preprocessing module groups all images with similar physician studies into the same group for each patient, in sequence.

B. Image Processing Module

The image processing module includes an image viewer, an annotation tool, a segmentation tool and volume virtualization. It acts as an extension module whose tools provide image processing functionality to academic researchers. All of them are developed with Matlab® and make full use of the Matlab® image processing and statistics toolboxes. To minimize user intervention, the image processing module is integrated into the graphic user interface (GUI) layer of the database management.

The annotation tool utilizes the GUI drawing toolkit to place an annotation mask on top of an image. Conventional annotation masks have been implemented so far, for example circle, rectangle, text and freehand drawing. In addition, annotations may be interactively modified and saved as regions of interest (ROI) in the image database. Such interactive annotation is necessary for computerized medical image segmentation, annotation and classification, since accurate annotation by hand is time-consuming and error-prone, while computerized processing is not yet robust and reliable. As a matter of fact, a common computer-aided medical image processing procedure may be described as follows: one or more researchers provide representative images as the training set for the computer programs, carefully annotating and editing the ROI. During performance evaluation, the researchers monitor the results of the computer programs and modify them manually; the enhanced results may then serve as a new training set to improve the programs. Fig. 2 shows the integrative GUI, including data management, the image viewer and the interactive annotation tools.

Currently, MICT is equipped with two automatic segmentation tools: one for spinal cord segmentation and the other for active contour segmentation. The spinal cord segmentation tool is a knowledge-based program [8]. It utilizes clinical knowledge and experience to locate the spinal cord; for instance, the spinal cord resides inside the spinal column and above the lamina, which has relatively high density compared with other tissues. After a potential spinal cord location is identified, a region growing method is applied to find the spinal cord boundary.

C. Database Management Module

The database management module is in charge of data storage, process enquiry and image retrieval in a text-based manner.
In essence, this module includes two parts: 1) a relational database which controls, organizes, stores and retrieves images and their associated meta information; and 2) an interface between that database and the Matlab® modules. Our current design is built on the MySQL® database management system. MySQL® provides open-source interfaces and supports Open Database Connectivity (ODBC). In our current system, MySQL® acts as an agent managing image data and various related information; the contents are reachable via standard SQL statements. At present, there are four major tables, named in accordance with their usage: patient ID, DICOM meta information, image data, and processed image data.
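For illustration only — the deployed module uses MySQL® over ODBC, and every table and column name below is hypothetical, not taken from the actual system — the four-table layout might be sketched as follows (shown here with Python's built-in sqlite3 so the sketch is runnable):

```python
import sqlite3

# In-memory database standing in for the MySQL server.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id TEXT PRIMARY KEY,
    name       TEXT
);
CREATE TABLE dicom_meta (                      -- DICOM meta information
    image_uid  TEXT PRIMARY KEY,
    patient_id TEXT REFERENCES patient(patient_id),
    series_uid TEXT,
    modality   TEXT
);
CREATE TABLE image_data (                      -- original pixel data
    image_uid  TEXT PRIMARY KEY REFERENCES dicom_meta(image_uid),
    pixels     BLOB
);
CREATE TABLE processed_image_data (            -- ROI and analysis results
    result_id  INTEGER PRIMARY KEY,
    image_uid  TEXT REFERENCES dicom_meta(image_uid),
    roi        BLOB,
    note       TEXT
);
""")
conn.execute("INSERT INTO patient VALUES ('P001', 'anonymous')")
rows = conn.execute("SELECT patient_id FROM patient").fetchall()
```

The same text-based retrieval described above then reduces to standard SQL SELECT statements over these tables.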
Fig. 2 The integrative GUI allows image management, interactive annotation and modification.

III. EXAMPLE APPLICATION: INTERACTIVE IMAGE PROCESSING, ANALYSIS AND MANAGEMENT
MICT lends itself to various academic research applications, including image analysis workstations, machine learning and training workstations, and image-based teaching and performance evaluation tools. To demonstrate the competence of MICT, we present in this section our experience of using it in the previously reported VSW project. The first phase is medical imaging. In the second phase, the researchers annotate the collected medical images, where an interactive GUI is often of great help. The third phase is computerized analysis. In the fourth phase, the researchers check the performance of the computer programs. In the VSW project, we successfully dealt with a dataset of 8,000 CT images from roughly 100 patients. The overall process is illustrated in Fig. 3. All three modules of MICT were used in that project, including CT image archiving, interactive image segmentation and text-based image management.

1) Image preprocessing module: It was used to obtain a representative dataset. The module read the DICOM meta information and classified the CT images. The meta information was identified as the key reference for categorizing spine column types and segments (Fig. 4).
Fig. 3 Overall process flow of the MICT example application, promoting interactive image processing and data management.
2) Image processing module: It was used for segmentation and performance evaluation. This module provided the interactive GUI to ease segmentation. A scalable rectangle was used to manually define the ROI, and the segmentation algorithms were then applied to refine it. During performance evaluation, the interactive annotation tool was used to further refine the ROIs, which helped in collecting robust and reliable medical image segmentations. 3) Data management module: It was used to manage both the original images and the analytical results collaboratively. The relational database and text-based retrieval made image and result management particularly easy. Such an open infrastructure caters to the needs of academic researchers.
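The data-management step above can be sketched with a relational store. A minimal sketch using an in-memory SQLite database; the schema and annotation strings are hypothetical (MICT itself is Matlab-based), chosen only to illustrate text-based retrieval of original and analytical results:

```python
import sqlite3

# Hypothetical schema for storing image records and analysis results.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE images (
    id INTEGER PRIMARY KEY,
    patient_id TEXT,
    region TEXT,
    annotation TEXT)""")
rows = [
    ("P001", "lumbar", "L3 vertebra segmented"),
    ("P001", "cervical", "C2 vertebra segmented"),
    ("P002", "lumbar", "L4 vertebra, revised ROI"),
]
conn.executemany(
    "INSERT INTO images (patient_id, region, annotation) VALUES (?, ?, ?)", rows)

# Text-based retrieval: fetch all lumbar-spine results.
lumbar = conn.execute(
    "SELECT patient_id, annotation FROM images WHERE region = ?",
    ("lumbar",)).fetchall()
```

Parameterized text queries of this kind are what makes result management "particularly easy" in the sense described above: any collaborator can retrieve by patient, region or annotation keyword.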
Fig.4. The image pre-processing module reads DICOM header information and performs image classification.

IV. DISCUSSION AND CONCLUSION

In the previous section, we demonstrated how MICT provides a cost-effective and enriched collaborative solution for image management, processing and analysis. In particular, MICT has been used in the VSW project, where it has successfully archived and processed over 8,000 CT scans so far. MICT made it easy to process and analyze those medical images, and its interactive annotation module allowed the researchers to focus on their core studies. MICT is written, compiled and run in Matlab®. It makes full use of the built-in toolboxes such as image processing, database, statistics and GUI libraries, which guarantee its compatibility; it is also friendly to third-party modules written in Matlab®. MICT is a work in progress, and several points remain to be improved: 1) Cross-platform deployment: the demonstrated MICT is a prototype that currently runs in Matlab® only. We intend to make it a cross-platform, component-based toolbox, for example one supporting various PACS systems, that is also friendly to third-party components; as a consequence, we will port MICT to Java®. 2) Content-based image retrieval: it is desirable to provide a language-like environment for image archiving and management. Currently MICT operates in a content-based manner, which is still far from language-like archiving and management. 3) Intelligent analysis module: the ability to perform intelligent analysis, such as statistical analysis and data mining, would interest most academic researchers. A GUI for intelligent image analysis will be developed in future work.

ACKNOWLEDGMENT

This research is supported by grants from the National University of Singapore (R-265-000-270-112 and R-265-000-270-133).

REFERENCES
1. Muto K, Emoto Y, Katohji T, Nageshima H, Iwata A, Koga S (2000) PC-based web-oriented DICOM server. Proceedings of the Radiological Society of North America (RSNA), Chicago, IL
2. Rainbow Fish Software at http://www.pacsone.net/index.htm
3. Herck M, Zjip L (2005) Conquest DICOM software website at http://www.xs4all.nl/~ingenium/dicom.html
4. Mini Web PACS at http://miniwebpacs.sourceforge.net/
5. My Free PACS at http://pacsoft.com/
6. Bui AAT, Morioka C, Dionisio JDN, Johnson DB, Sinha U, Ardekani S, Taira RK, Aberle DR, Suzie ES, Kangarloo H (2007) OpenSource PACS: an extensible infrastructure for medical image management. IEEE Transactions on Information Technology in Biomedicine 11(1):94-109
7. DIOWave Visual Storage at http://diowave-vs.sourceforge.net
8. Archip N, Erard PJ, Michael EP, Haefliger JM, Germond JF (2002) A knowledge-based approach to automatic detection of the spinal cord in CT images. IEEE Transactions on Medical Imaging 21(12):1504-1516
IFMBE Proceedings Vol. 23
An Intelligent Implantable Wireless Shunting System for Hydrocephalus Patients

A. Alkharabsheh, L. Momani, N. Al-Zu’bi and W. Al-Nuaimy
Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, UK

Abstract — Hydrocephalus is a neurological disorder in which the cerebro-spinal fluid (CSF) surrounding the brain is improperly drained, causing severe pain and swelling of the head. Existing treatments rely on passive implantable shunts with differential-pressure valves; these have many limitations, and life-threatening complications often arise. In addition, the inability of such devices to adapt autonomously and spontaneously to the needs of the patient results in frequent hospital visits and shunt revisions. This paper proposes replacing the passive valve with a mechatronic valve and an intelligent microcontroller that communicates wirelessly with a hand-held device equipped with a GUI and an RF interface for communicating with the patient and the implantable shunt respectively. This would deliver a personalised treatment aimed at eventually reducing or eliminating shunt dependence. The system would also enable a physician to monitor and modify the treatment parameters wirelessly, thus reducing, if not eliminating, the need for shunt revision operations. To manage the shunt, four methods were investigated, simulated and compared, and one was selected on the basis of performance. This method involves an implantable pressure sensor and intelligent software, which cooperate in monitoring and determining vital parameters that support a decision on the optimal valve schedule: either modifying the schedule or contacting the external device for consultation. Initial results are presented, demonstrating different valve regulation scenarios and the wireless interaction between the external and implanted sub-systems.
Also presented are important parameters extracted from the ICP data that would help in optimising system resources. To conclude, an intelligent shunting system is seen as the future of hydrocephalus treatment, potentially reducing hospitalisation periods and shunt revisions significantly. Furthermore, a new technique was investigated that would help to circumvent the problem of updating software stored in read-only memories.

Keywords — Hydrocephalus, shunt, mechatronic valve, wireless programming.
I. INTRODUCTION

A. Hydrocephalus

The word hydrocephalus comes from the Greek ‘hydro’, meaning water, and ‘cephalus’, meaning head. Human brains constantly produce and absorb about a pint of CSF every day, and the brain keeps a delicate balance between the
amount of CSF it produces and the amount that is absorbed. Hydrocephalus results from a disruption of this balance, caused by the inability of CSF to drain away into the bloodstream. The number of people who develop hydrocephalus or who are currently living with it is difficult to establish, since there is no national registry or database of people with the condition; however, experts estimate that hydrocephalus affects approximately 1 in every 500 children [1]. Since the 1960s the usual treatment for hydrocephalus has been to insert a shunting device into the patient’s CSF system [2]. Shunting controls the pressure by draining excess CSF from the ventricles of the brain to other areas of the body, preventing the condition from becoming worse. A shunt is simply a device that diverts the accumulated CSF around the obstructed pathways and returns it to the bloodstream. It consists of a flexible tube with a valve to control the rate of drainage and prevent back-flow. The valves used are typically mechanical, opening when the differential pressure across the valve exceeds some predetermined threshold. This passive operation causes problems such as overdraining and underdraining. Overdraining occurs when the shunt allows CSF to drain from the ventricles more quickly than it is produced; it can cause the ventricles to collapse, tearing blood vessels and causing headache, haemorrhage (subdural haematoma) or slit-like ventricles (slit ventricle syndrome). Underdraining occurs when CSF is not removed quickly enough and the symptoms of hydrocephalus recur. These problems may have dramatic effects on patients, including brain damage. Moreover, current shunts cannot handle real-time patient discomfort and emergency situations, satisfying fewer than 50% of patients [3]. In addition, some complications can lead to other problems, the most common being shunt blockage, which places the patient’s life and cognitive faculties at risk.
There has not been a significant improvement in the rate of blockages in recent years. The rate of shunt blockage is highest in the first year after insertion, when it can be of the order of 20-30%, decreasing to approximately 5% per year thereafter [4]. Currently, shunt blockage cannot be detected without invasively revising the shunt. Whilst symptoms and additional investigations such as a CT scan, plain X-rays and a shunt tap may be decisive, a definitive diagnosis is sometimes only possible through surgery
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 210–214, 2009 www.springerlink.com
[5]. Furthermore, shunts are subject to other problems that sometimes require them to be revised, such as cracked and disconnected catheters. The longer the shunt system is in place, the more prone it is to some form of structural degradation [6]. All these problems and more have created the need for a shunt that responds to the dynamic needs of the patient and at the same time can achieve a gradual weaning of the patient off the shunt wherever possible. A further need is for the shunt to be able to carry out self-diagnostic tests, monitoring all implantable components and autonomously detecting shunt malfunctions such as blockage and disconnected catheters. To develop such a shunt, a mechatronic valve that is electrically controlled by software, and that communicates wirelessly with the physician, is needed.

B. Proposed Intelligent Shunting

In this paper we present an intelligent implantable wireless shunting system for hydrocephalus patients, with features that help to reduce or eliminate the problems of current shunts. The shunting system would consist of hardware and software components; the overall system is shown in Figure 1. The implanted hardware would mainly consist of a microcontroller, an electronic valve [7], an ICP sensor and a transceiver. This implantable shunting system would communicate wirelessly with a hand-held Windows Mobile-based device operated by the patient, or on the patient’s behalf by a clinician or guardian. The device would have a graphical user interface and an RF interface to communicate
with the user and the implantable wireless e-shunt respectively. The main tasks of the implantable embedded software are summarised below. One task would be receiving ICP data from the sensor, analysing it and regulating the valve accordingly. Another would be wirelessly receiving modifications from the physician through the external patient device; such modifications might concern ICP management parameters such as the pressure threshold or the valve schedule. In the other direction, the implantable shunting system would send a report, either on a regular basis or upon request, to the physician through the external device. Such reports would contain information useful for understanding the particular patient’s case, which might help in achieving shunt weaning in the long run and in improving the understanding of hydrocephalus in a way that benefits other patients. The implantable embedded code would also handle self-testing of the implanted shunt components such as the valve, ICP sensor, microcontroller and transceiver; this task mainly involves detecting shunt malfunctions such as valve blockage or disconnected catheters, to verify that the shunt is unobstructed and fully functional. One of the important tasks that make the proposed system unique is handling emergencies. In an emergency, the implantable shunting system would receive requests, from either the patient or the physician through the external patient device, to open or close the valve or to collect ICP readings instantaneously. As a result of monitoring the shunt components, the implantable system might itself request help when facing a problem, for example when the valve is open but the ICP remains high, meaning the ICP is not responding to the opening and closing of the valve because of a valve malfunction.
Fig. 1: Overall shunting system (implantable system: pressure sensor, mechatronic valve, microcontroller and 402 MHz MICS transceiver; external system: smartphone with transceiver, linked via mobile communication and the Internet to a database server and the medical-centre database).

II. MATERIALS AND METHOD

Two sources of ICP data were used to test the design of the shunting software. One is real data collected at a 125 Hz sampling rate [8]; the other is a model producing simulated ICP data for hydrocephalus patients. To reach an optimal design of the implanted embedded software, four scenarios for the implantable code have been investigated and simulated.

A. Fixed-Time Schedule Scenario
In this scenario, the implanted shunt system would consist of a mechatronic valve, a microcontroller and an RF transceiver. The valve would permit fluid flow only according to a fixed time schedule, i.e. it would open at specific times for certain periods irrespective of ICP. The implanted valve
manager would be changed remotely by a physician, who determines at what times during the day or night the shunt is opened or closed. The problem with this scenario is a mismatch between what is required and what is delivered. This mismatch would cause serious drawbacks, e.g. overdrainage or underdrainage. In addition, the scheme cannot handle real-time patient discomfort or emergency situations (e.g. headache, sneezing) because of the dynamic nature of ICP in the same patient. This scenario has been simulated and tested using real ICP data; Figure 2 illustrates the resulting problems, i.e. overdraining and underdraining.
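The fixed-time behaviour can be expressed as a simple membership test over scheduled open intervals. A minimal sketch, with illustrative (non-clinical) interval values:

```python
def valve_open(t_hours, schedule):
    """Return True if the valve is scheduled to be open at time t_hours.
    `schedule` is a list of (start, end) open intervals in hours; the
    values below are illustrative, not clinical settings."""
    return any(start <= t_hours < end for start, end in schedule)

# Example schedule: open for 15 minutes every 6 hours, irrespective of ICP.
schedule = [(h, h + 0.25) for h in range(0, 24, 6)]
```

The valve opens at fixed times regardless of the measured ICP, which is precisely the mismatch between required and delivered drainage that this scenario suffers from.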
Fig. 2: Fixed-time schedule problems (ICP in mmHg versus time in hours; the normal limits are marked).

B. Fixed-Time Schedule Scenario with Pressure Sensor

This scenario differs from the previous one in utilising an implanted pressure sensor. The sensor would collect ICP data, and the readings would be sent wirelessly via the RF transceiver to the external patient device. These data would help the physician modify the fixed time schedule to make it more suitable for the patient; the new fixed schedule would then be uploaded remotely to the implanted shunting system.
C. Closed Loop Scenario

A closed-loop shunt would consist of a mechatronic valve, a microcontroller and a pressure sensor. In this scenario the valve would be managed instantaneously (opened or closed) according to the measured ICP: the collected ICP would be analysed by the implantable software on the microcontroller to decide whether it is an appropriate time to open or close the valve. Many, but not all, problems can be solved by this scenario, e.g. overflow and underflow. Figure 3 illustrates the resulting ICP waveform for the closed-loop shunting system. In this scenario the collected ICP would be utilised only within the implanted shunting system. Such data
Fig. 3: The resulting ICP waveform for closed-loop shunting.
would not reach the physician, since there is no means of sending it outside the patient’s body.

D. Dynamic Shunting System Scenario

In this scenario, the implanted shunt system includes the mechatronic valve, microcontroller, RF transceiver, ICP sensor and smart software. From the investigation of the previous scenarios, their drawbacks would be eliminated if the shunting system performed the following tasks:
1. ICP analysis: the ICP readings would be analysed to extract important parameters such as the ICP waveform components. These parameters would be useful for autonomously modifying the valve schedule internally.
2. Self-testing: this task involves testing all implanted shunt components. For example, ICP readings collected while the valve is open would be analysed and parameters calculated to help detect shunt malfunctions such as blockage or a disconnected catheter. This task would also check the capacity of the implanted battery and the functioning of the ICP sensor.
3. Emergency call: this task is responsible for all emergency cases that might arise during shunt operation. For example, it would signal the external device when a shunt malfunction is detected. It would also handle any emergency signals received from the physician through the external device to open or close the valve or to request ICP readings.
4. Updating: the implantable ICP sensor and the smart software would cooperate in monitoring and determining vital parameters that would help in modifying and optimising the valve schedule.
5. Report generation: this task involves generating an ICP information report consisting of the ICP waveform components, valve status, real ICP readings with their corresponding times, mean ICP, and the shunt self-test results. The report would be stored in the implanted memory, and a copy would be sent remotely to the physician through the external device, regularly or upon request. Such a report would be a useful tool for the physician when deciding on modifications to the valve schedule, and would also be helpful in understanding hydrocephalus in general.
6. ICP compression: a peak-detection algorithm has been designed and tested to overcome the limited size of the implantable memory. With this algorithm, only the upper and lower peaks of the ICP waveform are stored; the output waveform of the algorithm was observed to give a good estimate of the original waveform.
7. Wireless updating: an algorithm was designed and tested to enable the physician, through the external device, to wirelessly modify various implanted parameters such as the valve schedule and the ICP threshold value. Such algorithms are rarely mentioned in the literature, especially for implanted microcontrollers. A self-learning packet technique is used to access the implanted memory address of each parameter to be modified. The packet is made up of the packet length, patient identification, packet identification, and the modified parameter values with their addresses, as shown in Figure 4(b).
8. Power management: a power-consumption algorithm has been designed and tested to minimise the power needed by the implanted shunt. A sleep mode for the implantable microcontroller and RF transceiver reduces the power needed by these components by more than 90%; a wake-up signal sent by the physician through the external device wakes them when required.

Most of these tasks were implemented in assembly and C. The MSP430 development kits shown in Figure 4(a) were used to test them; the packet format shown in Figure 4(b) was exchanged between two kits through their transceivers.
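The peak-based ICP compression of task 6 can be sketched by keeping only the turning points of the trace. This is an illustrative reconstruction, not the authors' algorithm, and it does not reproduce their reported 93% reduction figure:

```python
def compress_icp(samples):
    """Keep only local maxima and minima (plus the endpoints) of an ICP
    trace, as (time index, value) pairs -- a sketch of peak-detection
    compression, not the paper's actual implementation."""
    if len(samples) < 3:
        return list(enumerate(samples))
    kept = [(0, samples[0])]
    for i in range(1, len(samples) - 1):
        prev, cur, nxt = samples[i - 1], samples[i], samples[i + 1]
        # A turning point is strictly above or below both neighbours.
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            kept.append((i, cur))
    kept.append((len(samples) - 1, samples[-1]))
    return kept

# Toy ICP trace (arbitrary units), not real patient data.
trace = [10, 12, 15, 13, 11, 14, 18, 16, 12, 13]
peaks = compress_icp(trace)
```

Storing only the (index, value) pairs of the turning points lets the waveform envelope be reconstructed later by interpolation, which is the sense in which the compressed trace "gives a good estimate" of the original.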
Fig. 4: Hardware and software tools for software testing. (a) MSP430 microcontrollers with RF transceivers: the packet is sent wirelessly between the implanted and external transceiver modules. (b) Packet format: packet length, patient ID, packet ID, followed by parameter/address pairs (parameter 1, address 1; parameter 2, address 2; ...; parameter n, address n).
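The self-learning packet of Fig. 4(b) can be sketched with fixed-width fields. The byte widths and little-endian layout below are assumptions made for illustration, not the authors' actual wire format:

```python
import struct

def build_packet(patient_id, packet_id, updates):
    """Assemble [length | patient ID | packet ID | (address, value)...].
    Widths assumed: length 1 byte, patient ID 2 bytes, packet ID 1 byte,
    each address and value 2 bytes, little-endian throughout."""
    body = b"".join(struct.pack("<HH", addr, value)
                    for addr, value in updates)
    header = struct.pack("<BHB", 4 + len(body), patient_id, packet_id)
    return header + body

def parse_packet(packet):
    """Recover the header fields and the (address, value) update pairs."""
    length, patient_id, packet_id = struct.unpack_from("<BHB", packet, 0)
    updates = [struct.unpack_from("<HH", packet, off)
               for off in range(4, length, 4)]
    return patient_id, packet_id, updates

# Hypothetical update: new ICP threshold and valve-schedule slot.
pkt = build_packet(patient_id=7, packet_id=1,
                   updates=[(0x0200, 25), (0x0204, 180)])
```

Carrying the target memory address alongside each value is what lets the implanted side apply updates to individual parameters without reflashing the whole program, which is the point of the self-learning packet technique.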
III. RESULTS AND DISCUSSION

The results of simulating a fixed-time schedule are presented in Figure 2; a mismatch between what is required and what is delivered by such a shunt can be observed. The simulation results for the closed-loop shunting system, shown in Figure 3, illustrate its efficacy in keeping the ICP within the normal range. On the other hand, other problems of current shunts, such as the difficulty of detecting shunt malfunction, are not solved by the closed-loop shunt. These problems could be solved by using a dynamic shunting system having a degree of "intelligence". One of the most difficult challenges in using an implantable microcontroller in medical applications is how to access, modify and replace the implanted program. An updating algorithm is used to remotely modify parameters embedded in the microcontroller via RF transceivers. A peak-detection algorithm for the ICP waveform reduces the size of the ICP data by 93%, thus overcoming the implantable memory size limitation.

IV. CONCLUSION

An innovative, intelligent implantable wireless shunting system for the treatment of hydrocephalus was introduced in this paper. We attempted to replace the passive mechanical shunt with a dynamic shunt that maximises the potential quality of life of each patient, reduces hospitalisation periods and reduces shunt revisions. Furthermore, a new technique was investigated that would help to circumvent the problem of updating software remotely through an RF transceiver.

ACKNOWLEDGEMENT

The authors thank Connor Mallucci and Mohammed Al-Jumaily for their fruitful input.
REFERENCES
1. Hydrocephalus at http://www.medicinnet.com
2. Aschoff A, Kremer B, Hashemi B (1999) The scientific history of hydrocephalus and its treatment. Neurosurg Rev 22:67–93
3. Momani L, Alkharabsheh A, Al-Nuaimy W (2008) Design of an intelligent and personalised shunting system for hydrocephalus. IEEE EMBC Personalized Healthcare through Technology, Vancouver, Canada
4. Association for Spina Bifida and Hydrocephalus at http://www.yourvoiceyouth.com
5. Watkins L, Hayward R, Andar U, Harkness W (1994) The diagnosis of blocked cerebrospinal fluid shunts: a prospective study of referral to a paediatric neurosurgical unit. Child's Nerv Syst 10:87–90
6. Shunt malfunctions and problems at http://www.noahslifewithhydrocephalus.com
7. Miethke C (2006) A programmable electronical switch for the treatment of hydrocephalus. XX Biennial Congress of the European Society for Paediatric Neurosurgery, Martinique, France
8. Biomedical Signal Processing Laboratory, Portland State University, at http://bsp.pdx.edu

Author: Abdel Rahman Alkharabsheh
Institute: Department of Electrical Engineering and Electronics, University of Liverpool
Street: Brownlow Hill
City: Liverpool L69 3GJ
Country: UK
Email: [email protected]
Intelligent Diagnosis of Liver Diseases from Ultrasonic Liver Images: Neural Network Approach

P.T. Karule¹, S.V. Dudul²
¹ Department of Electronics and Communication Engineering, YCCE, Nagpur, INDIA
² Department of P.G. Studies in Applied Electronics, SGB Amravati University, INDIA
Abstract — The main objective of this study is to develop an optimal neural-network-based DSS aimed at the precise and reliable diagnosis of chronic active hepatitis (CAH) and cirrhosis (CRH). A multilayer perceptron (MLP) neural network is carefully designed for the classification of these diseases. The neural network is trained on eight quantified texture features, extracted from five regions of interest (ROIs) uniformly distributed in each B-mode ultrasonic image of normal liver (NL), CAH and CRH. The proposed MLP NN classifier is an efficient learning machine able to classify all three cases of diffuse liver disease with an average classification accuracy of 96.55%: 6 of 7 cases of cirrhosis (6/7), all 7 cases of chronic active hepatitis (7/7) and all 15 cases of normal liver (15/15). The advantages of the proposed MLP NN based decision support system (DSS) are its hardware compactness and computational simplicity.

Keywords — Chronic Active Hepatitis, Cirrhosis, Liver diseases, Decision Support System, Multi Layer Perceptron, Ultrasound imaging
I. INTRODUCTION

Medical diagnosis is a difficult and complex visual task, mostly done reliably only by expert doctors, and it is an important application of artificial neural networks. Chronic infection with hepatitis virus (HV) has been a major health problem and is associated with over 10,000 deaths a year in the United States [1]. In its early stages, HV tends to be asymptomatic and can be detected only through screening. Ultrasonography is a widely used medical imaging technique and the safest method for imaging human organs and their functions. The attenuation of the sound wave and differences in acoustic impedance in the organ yield a complicated texture in ultrasound B-mode images. Ultrasonography is commonly used for the diagnosis of diffuse liver diseases, but visual criteria provide low diagnostic accuracy, which depends on the ability of the radiologist. To address this problem, tissue characterization with ultrasound has become an important topic of research. For quantitative image analysis, many feature parameters have been proposed and used in developing automatic diagnosis systems [2]-[4]. Several quantitative
features are used in diagnosis by ultrasonography. K. Ogawa [5, 7] developed a classification method that used an artificial neural network to diagnose diffuse liver diseases. Other work [6]-[13] presents classifiers for diagnosing normal liver (NL), chronic active hepatitis (CAH) and cirrhosis (CRH) more accurately. The quantitative tissue characterization technique (QTCT) is gaining acceptance and appreciation in the ultrasound diagnosis community, and has the potential to significantly assist radiologists seeking a second opinion. Grey-scale ultrasound images contribute significantly to the diagnosis of liver diseases; however, at this resolution it is difficult to distinguish active hepatitis and cirrhosis from normal liver [10, 11]. A pattern recognition system can be considered in two stages: the first is feature extraction and the second is classification [12]. This paper presents a newly designed optimal MLP NN based decision support system for the diagnosis of diffuse liver diseases from ultrasound images.

II. MATERIAL AND METHODS

A. Data acquisition

Ultrasound images used in our research were obtained on a Sony (US) model ALOKA-SSD-4000 ultrasonic machine with a 2-5 MHz multi-frequency convex abdominal transducer. All images were 640 × 480 pixels with 8-bit depth. The ultrasound images of different liver cases were taken from patients with known histology and accurately diagnosed by an expert radiologist from the Midas Institute of Gastroenterology, Nagpur, INDIA. Three sets of images were taken: normal liver, chronic active hepatitis and cirrhosis, with 22, 10 and 10 images respectively.

System outline: Fig. 1 shows the overall sketch of the proposed system; our approach is divided into three parts. The first step is selection of the region of interest (ROI); the second is texture feature extraction from the ROI to create a database; the third is the use of a neural network to classify these images into one of the categories, i.e. normal liver, chronic active hepatitis or cirrhosis.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 215–218, 2009 www.springerlink.com
Fig. 1 Overall scheme for classification of liver images

Defining the region of interest (ROI): Selection of the ROI is one of the first and most important steps in the process. In order to identify and quantify regions accurately, they should be as free as possible from the effects of the imaging system; for example, their depth and location in the beam should be such that the effects of side lobes, beam diffraction and acoustic shadowing are minimal. It is also important not to include major vascular structures in the ROI. In the system we freely choose any of five regions of interest (ROIs) from the given liver image. Four ROIs are located so that each overlaps half of the "center ROI". Fig. 2 shows the location of the five ROIs; each ROI is 32 × 32 pixels, about 1 × 1 cm² in size.
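The five-ROI layout can be sketched as follows. The exact placement convention (four ROIs shifted by half a side so each overlaps half of the centre ROI) is inferred from the text and is an assumption:

```python
def five_rois(cx, cy, size=32):
    """Top-left corners of five size x size ROIs: a centre ROI around
    (cx, cy) plus four ROIs shifted by half a side, so each shares half
    its area with the centre ROI. Layout inferred, not the authors' code."""
    half = size // 2
    centre = (cx - half, cy - half)
    offsets = [(-half, 0), (half, 0), (0, -half), (0, half)]
    return [centre] + [(centre[0] + dx, centre[1] + dy) for dx, dy in offsets]

rois = five_rois(100, 100)
```

Each shifted ROI covers exactly half of the centre ROI, so the five windows jointly sample the neighbourhood of the chosen liver region while staying clear of structures outside it.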
Fig. 2 Location of the five ROIs of 32 × 32 pixels

Texture analysis of liver: Texture contains important information that humans use in the interpretation and analysis of many types of images. Texture refers to the spatial interrelationship and arrangement of the basic elements of an image. The grey-level difference method (GLDM) is very powerful for statistical texture description in medical imaging. The texture features are extracted within the 32 × 32 pixel ROIs selected in the liver region by the method introduced by Haralick et al. [9]. The following parameters were calculated from the ROI: variance (V), coefficient of variation (CV), annular Fourier power spectrum (AFP) and longitudinal Fourier power spectrum (LFP). These parameters have been used in a diagnostic system [7] for chronic hepatitis with an artificial neural network. K. Ogawa et al. [5] also added three more parameters for the diagnosis of CAH and cirrhosis: variation of the mean (VM) and the parameters ASM and CON derived from a co-occurrence matrix. VM was calculated from five ROIs, that is, the "center ROI" and the other four ROIs generated by the system around it. The parameters ASM and CON were calculated from only the "center ROI". The following are the definitions of the parameters, where N (= 32) is the size of the ROIs, f(i,j) is the density of a pixel in an ROI, and lf(I,J) is the Fourier power spectrum of f(i,j). We selected one more parameter, entropy (ent), calculated from the grey-level co-occurrence matrix.

1) Variance (V):

V = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left( f(i,j) - m \right)^2,   (1)

where

m = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} f(i,j).   (2)

2) Coefficient of variation (CV):

CV = \frac{\sqrt{V}}{m}.   (3)

3) Annular Fourier power spectrum (AFP):

AFP = \sum_{R=2}^{4} \left( \sum_{R^2 \le I^2 + J^2 < (R+1)^2} \frac{lf(I,J)}{R} \right),   (4)

where R = \sqrt{I^2 + J^2}.

4) Longitudinal Fourier power spectrum (LFP):

LFP = \sum_{J=3}^{5} \sum_{I=1}^{N} lf(I,J).   (5)

5) Variation of the mean (VM):

VM = \frac{1}{5} \sum_{k=1}^{5} (m_k - m_0)^2,   (6)

where m_k is the mean value of the k-th ROI and

m_0 = \frac{1}{5} \sum_{j=1}^{5} m_j.   (7)

6) Angular second moment (ASM): this parameter is extracted from the grey-level co-occurrence matrix (GLCM) obtained from the selected ROI. It is based on the estimation of the second-order joint conditional probability of one grey level a occurring with another grey level b at inter-sample distance d and a direction given by angle θ. The GLCM captures repeated spatial patterns; it is obtained by counting the number of times a pair of pixels at some defined separation has a given pair of intensities. The GLCM is denoted C(a, b; d, θ). For calculating ASM from C(a, b; 1, 0°) we have

C(a, b; 1, 0^\circ) = \mathrm{card}\left\{ [(k,l),(m,n)] \in \mathrm{ROI} : \text{the displacement from } (m,n) \text{ to } (k,l) \text{ is } (d,\theta) = (1, 0^\circ),\; f(k,l) = a,\; f(m,n) = b \right\},   (8)

ASM = \sum_{a=0}^{M-1} \sum_{b=0}^{M-1} \left[ C(a, b; 1, 0^\circ) \right]^2.   (9)

7) Contrast (con):

con = \sum_{a=0}^{M-1} \sum_{b=0}^{M-1} (a - b)^2 \, C(a, b; 1, 90^\circ).   (10)

8) Entropy (ent):

ent = -\sum_{a,b \in G} c(a,b) \log\left[ c(a,b) \right].   (11)

Fig. 3 MLP NN model for classification
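The co-occurrence features of Eqs. (8)-(11) can be checked with a small plain-Python sketch. Note one assumption: entropy here is computed from normalised co-occurrence frequencies (one reading of c(a,b)), while ASM and contrast use the raw counts, matching the count-based form of Eqs. (9)-(10):

```python
import math
from collections import Counter

def glcm(img, dx, dy):
    """Grey-level co-occurrence counts C(a, b) for displacement (dx, dy).
    Under a (row, col) convention, 0 deg maps to (0, 1) and 90 deg to
    (1, 0) -- a convention chosen here, not stated in the paper."""
    counts = Counter()
    rows, cols = len(img), len(img[0])
    for i in range(rows):
        for j in range(cols):
            k, l = i + dx, j + dy
            if 0 <= k < rows and 0 <= l < cols:
                counts[(img[i][j], img[k][l])] += 1
    return counts

def asm(counts):
    # Eq. (9): sum of squared co-occurrence counts.
    return sum(c * c for c in counts.values())

def contrast(counts):
    # Eq. (10): intensity differences weighted by co-occurrence counts.
    return sum((a - b) ** 2 * c for (a, b), c in counts.items())

def entropy(counts):
    # Eq. (11), using normalised frequencies as c(a, b).
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Toy 3x3 "ROI" with grey levels 0..2.
roi = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 2]]
c0 = glcm(roi, 0, 1)  # horizontal (0 degree) neighbours
```

On this toy ROI the horizontal pairs are (0,0), (0,1) twice, (1,2) twice and (2,2), so ASM and contrast can be verified by hand.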
The eight features mentioned above are extracted from five ROIs of each image. The descriptive statistics of the eight extracted features are shown in Table 1. These texture features are used by the neural network as input parameters for classification.

Table 1 Descriptive statistics of features of ultrasonic images

Sr. No.  Feature  Minimum  Maximum  Mean   Std. deviation
1        V        0.002    0.030    0.006  0.005
2        CV       0.110    1.265    0.271  0.240
3        VM       0.003    0.329    0.083  0.071
4        LFP      0.000    0.239    0.037  0.047
5        AFP      0.000    0.151    0.039  0.037
6        ASM      0.167    6.382    1.613  2.313
7        con      0.098    0.559    0.252  0.098
8        ent      0.148    6.969    1.958  2.386
Table 2 Optimal parameters of MLP NN DSS

S.N.  Parameter            Hidden Layer #1  Output Layer
1     Processing Elements  3                3
2     Transfer Function    LinearTanh       LinearTanh
3     Learning rule        DeltaBarDelta    DeltaBarDelta
Table 2 indicates the optimal parameters used for the MLP NN based DSS.

Performance measure: The learning and generalization ability of the estimated neural network based decision support system is assessed on the basis of performance measures such as the MSE and the confusion matrix [31]. For a DSS, the confusion matrix is the most crucial measure.

MSE (Mean Square Error): The MSE is defined by Eq. (12) as follows:

MSE = ( Σ_{j=1}^{P} Σ_{i=1}^{N} (d_ij − y_ij)² ) / (N · P)   (12)

where P is the number of output neurons, N is the number of exemplars in the dataset, y_ij is the network output for exemplar i at neuron j, and d_ij is the desired output for exemplar i at neuron j.
B. Design of MLP NN classifier

Artificial neural networks (ANNs) have been successfully applied in many fields of medical imaging [29, 32]; they make it easier for less experienced doctors to reach a correct diagnosis by generalizing to new inspections from past experience. We construct a fully connected neural network as shown in Fig. 3: a conventional three-layer feedforward network with 8 input units, 3 hidden units and 3 output units. The network is trained using the well-known backpropagation (BP) algorithm [31]. After establishing the relationship function between the inputs and outputs, we can apply the ANN to doctors' routine inspections to test its generalization ability.
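A minimal NumPy sketch of the 8-3-3 tanh network's forward pass together with the MSE of Eq. (12). The random weights stand in for a trained network; the paper's training procedure (backpropagation with the DeltaBarDelta rule) is omitted here, so this is an illustration of the architecture and error measure only.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 inputs -> 3 tanh hidden units -> 3 tanh output units, as in the paper's MLP
W1, b1 = rng.normal(size=(8, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 3)), np.zeros(3)

def forward(X):
    """Forward pass of the three-layer feedforward network (tanh activations)."""
    return np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)

def mse(D, Y):
    """Eq. (12): squared output error averaged over N exemplars and P neurons."""
    N, P = D.shape
    return np.sum((D - Y) ** 2) / (N * P)
```

Each row of `X` would be one exemplar of the eight texture features; `D` holds the desired one-hot class outputs.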
C. Data partitioning

The 42 sets of 8 features each are then used as inputs to the neural networks for the classification of liver disease. Different split ratios of training to testing exemplars were tried: the percentage of exemplars used for training was varied from 10% to 90%, with a corresponding 90% to 10% variation in the exemplars used for testing. With only the first 30% of samples (1:13) used for training and the remaining 70% (14:42) for testing, the classifier delivers optimal performance with respect to MSE and classification accuracy. This confirms the remarkable learning ability of the MLP NN classifier with its single hidden layer. Table 3 shows the data partition scheme employed in designing the classifier.

Table 3 No. of exemplars in training and testing data set

Sr. No.  Data Set            No. of Exemplars  Cirrhosis  Chronic Active  Normal
1        Training Set (30%)  13 (1:13)         3          3               7
2        Testing Set (70%)   29 (14:42)        7          7               15
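The reported per-class and average accuracies follow directly from the confusion-matrix counts of Table 4; a quick check, with rows taken as network outputs and columns as desired classes:

```python
import numpy as np

# Confusion matrix from Table 4: rows = network output, columns = desired class
classes = ["Cirrhosis", "Chronic Active", "Normal"]
cm = np.array([[6, 0, 0],
               [1, 7, 0],
               [0, 0, 15]])

per_class = np.diag(cm) / cm.sum(axis=0)   # column-wise accuracy: 6/7, 7/7, 15/15
overall = np.trace(cm) / cm.sum()          # fraction of the 29 test exemplars correct
```

The diagonal holds the correctly classified exemplars, so `overall` reproduces the 96.55% average accuracy (28 of 29) and `per_class` the 85.71%, 100% and 100% values.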
The confusion matrix of the MLP NN based classifier for the testing data set (70% of samples) is shown in Table 4. The average classification accuracy achieved is 96.55%.

Table 4 Confusion matrix for MLP NN based DSS

Output / Desired         Cirrhosis  Chronic Active  Normal
Cirrhosis                6          0               0
Chronic Active           1          7               0
Normal                   0          0               15
Classification accuracy  85.71%     100%            100%

Table 5 displays the important performance measures of the MLP NN classifier: MSE and classification accuracy.

Table 5 Performance measures of MLP NN based DSS

Performance              Cirrhosis  Chronic Active  Normal
MSE                      0.03450    0.04942         0.01580
Classification accuracy  85.71%     100%            100%

III. CONCLUSIONS

After rigorous and careful experimentation, an optimal MLP model was selected: an MLP NN with 8 input PEs, one hidden layer with 3 PEs, and an output layer with 3 PEs. The results on the testing data set show that the MLP NN classifier is able to classify the three cases of diffuse liver disease with an average accuracy of 96.55%. In the testing phase, 85.71% (6/7) of cirrhosis, 100% (7/7) of chronic active hepatitis and 100% (15/15) of normal liver cases were classified correctly. The results also indicate that the proposed MLP NN based classifier can provide a valuable "second opinion" tool for radiologists in the diagnosis of liver diseases from ultrasound images, improving diagnostic accuracy and reducing the validation time needed.

ACKNOWLEDGMENT

The authors would like to thank the doctors of the Midas Institute of Gastroenterology, Nagpur, India, for providing the ultrasound images of diffuse liver disease in patients admitted at their hospital.

REFERENCES

1. Pratt Daniel and Kaplan Marshall, Evaluation of abnormal liver enzyme results in asymptomatic patients, New England Journal of Medicine, Vol. 347(17), April 2000, 1266-1271
2. Abou zaid Sayed Abou zaid and Mohamed Waleed Fakhr, Automatic Diagnosis of Liver Diseases from Ultrasound Images, IEEE Transactions on Medical Imaging (2006) 313-319
3. Y-N Sun and M-H Horng, Ultrasonic image analysis for Liver Diagnosis, Proceedings of IEEE Engineering in Medicine and Biology (Nov 1996) 93-101
4. A. Takaishi, K. Ogawa, and N. Hisa, "Pattern recognition of diffuse liver diseases by neural networks in ultrasonography," in Proc. of the IEICE (The Institute of Electronics, Information and Communication Engineers) Spring Conference 1992 (March), p. 6-202
5. K. Ogawa and M. Fukushima, Computer-aided Diagnostic System for Diffuse Liver Diseases with Ultrasonography by Neural Networks, IEEE Transactions on Nuclear Science (Vol. 45-6, 1998) 3069-3074
6. Y.M. Kadah, Statistical and neural classifiers for ultrasound tissue characterization, in Proc. ANNIE-93, Artificial Neural Networks in Engineering, Rolla, MO, 1993
7. K. Ogawa, N. Hisa, and A. Takaishi, "A study for quantitative evaluation of hepatic parenchymal diseases using neural networks in B-mode ultrasonography," Med Imag Technol, Vol. 11, pp. 72-79, 1993
8. M. Fukushima and K. Ogawa, Quantitative Tissue Characterization of Diffuse Liver Diseases from Ultrasound Images by Neural Network, IEEE Transactions on Medical Imaging (Vol. 5, 1998) 1233-1236
9. R.M. Haralick, K. Shanmugam, and I. Dinstein, Texture features for image classification, IEEE Transactions on Systems, Man and Cybernetics (Vol. SMC-3, 1973) 610-621
10. Elif Derya Übeyli and İnan Güler, Feature extraction from Doppler ultrasound signals for automated diagnostic systems, Computers in Biology and Medicine (Vol. 35, Issue 9, November 2005) 735-764
11. Stavroula G. Mougiakakou and Ioannis K. Valavanis, Differential diagnosis of CT focal liver lesions using texture features, feature selection and ensemble driven classifiers, Artificial Intelligence in Medicine (Vol. 41, Issue 1, September 2007) 25-37
12. Elif Derya Übeyli and İnan Güler, Improving medical diagnostic accuracy of ultrasound Doppler signals by combining neural network models, Computers in Biology and Medicine (Vol. 35, 2005) 533-554
13. Y.M. Kadah, A.A. Farag, J.M. Zurada, A.M. Badawi, and A.M. Youssef, Classification algorithms for quantitative tissue characterization of diffuse liver diseases from ultrasound images, IEEE Transactions on Medical Imaging (Vol. 15, No. 4, 1996) 466-477

Author: Pradeep T. Karule
Institute: Yeshwantrao Chavan College of Engineering
Street: Wanadondri, Hingna Road
City: Nagpur – 441 110
Country: INDIA
Email: [email protected]
A Developed Zeeman Model for HRV Signal Generation in Different Stages of Sleep

Saeedeh Lotfi Mohammad Abad1, Nader Jafarnia Dabanloo2, Seyed Behnamedin Jameie3, Khosro Sadeghniiat4

1,2 Department of Biomedical Engineering, Science & Research Branch, Islamic Azad University, Tehran, Iran
Email: [email protected], [email protected]
3 Neuroscience Lab, CMRC/IUMS, Tehran, Iran
Email: [email protected]
4 Tehran University of Medical Sciences, Tehran, Iran
Email: khosro sadeghniiat @ tums.ac.ir
Abstract — Heart rate variability (HRV) is a sophisticated measure of an important and fundamental aspect of an individual's physiology. HRV measurement is an important tool in cardiac diagnosis that can provide clinicians and researchers with a 24-hour noninvasive measure of autonomic nervous system activity. HRV is analyzed in two ways: over time (time domain) or in terms of the frequency of changes in heart rate (frequency domain). A preliminary study of the effects of different sleep stages on the HRV signal can be useful for understanding the influence of the autonomic nervous system (ANS) on heart rate. In this paper, we consider the HRV signal of one normal subject in different sleep stages: stage 1, stage 2, stage 3 and REM. We apply the FFT to the HRV signal and show the differences between the various stages, evaluating them both quantitatively and qualitatively. This model can serve as a basis for developing models that generate artificial HRV signals.

Keywords — autonomic nervous system, HRV, sleep, stage, signals.
I. INTRODUCTION

The electrocardiogram (ECG) signal is one of the most obvious effects of the operation of the human heart. The oscillation between the systole and diastole states of the heart is reflected in the heart rate (HR). The surface ECG is the recorded potential difference between two electrodes placed on the surface of the skin at pre-defined points. The largest amplitude of a single cycle of the normal ECG is referred to as the R-wave, manifesting the depolarization of the ventricle. The time between successive R-waves is referred to as an RR-interval, and an RR tachogram is a series of RR-intervals. The development of a dynamical model for the generation of ECG signals with appropriate HRV spectra is a subject that has been widely investigated. Such a model provides a useful tool to analyse the effects of various physiological conditions on the profile of the ECG. Model-generated ECG signals with various
characteristics can also be used as signal sources for the assessment of diagnostic ECG signal-processing devices. In constructing a comprehensive model for generating ECG signals there are two steps. Step one is producing an artificial RR-tachogram with an HRV spectrum similar to experimental data; the RR-tachogram shows where the R-waves of the ECG are actually placed. Step two is constructing the actual shape of the ECG. A method using Gaussian functions for generating the ECG can also be considered [8]. Here, we develop a new model based on modifying the original Zeeman model to produce the RR-tachogram signal, which now incorporates the effects of sympathetic and parasympathetic activities to generate the appropriate significant peaks in the power spectrum of the HRV. By using a neural network approach based upon a modified McSharry model, the actual shape of the ECG in a single cycle can be successfully reproduced using our model-generated power spectrum of RR time intervals.
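One way to realize "step two" with Gaussian functions, in the spirit of [8] and of McSharry-style models, is to sum one Gaussian per characteristic wave. The amplitudes, centers and widths below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Illustrative Gaussian parameters (amplitude in mV, center in s, width in s)
# for the P, Q, R, S and T waves; these values are ad hoc, not from the paper.
WAVES = {"P": (0.15, -0.20, 0.025),
         "Q": (-0.10, -0.04, 0.010),
         "R": (1.00,  0.00, 0.010),
         "S": (-0.20,  0.03, 0.010),
         "T": (0.30,  0.25, 0.040)}

def ecg_beat(t):
    """One synthetic ECG cycle as a sum of Gaussians centered around the R-wave."""
    return sum(a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
               for a, mu, s in WAVES.values())

t = np.linspace(-0.4, 0.6, 1000)   # one cardiac cycle, R-wave at t = 0
beat = ecg_beat(t)
```

Stretching or compressing the time axis of each beat according to the RR-tachogram then yields a full ECG record with the desired HRV.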
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 219–222, 2009 www.springerlink.com
pressure. The autonomic nervous system (ANS) is responsible for short-term regulation of the blood pressure. The ANS is a part of the central nervous system (CNS). The ANS uses two subsystems, the sympathetic and parasympathetic systems. The HR may be increased by sympathetic activity or decreased by parasympathetic (vagal) activity. The balance between the effects of the sympathetic and parasympathetic systems is referred to as the sympathovagal balance and is believed to be reflected in the beat-to-beat changes of the cardiac cycle (McSharry et al., 2002). Spectral analysis of HRV is a useful method to investigate the effects of sympathetic and parasympathetic activities on the heart rate (HR). The afferent nerves provide the feedback information to the CNS. The sympathetic system is active during stressful conditions, which can increase the HR up to 180 beats per minute (bpm). When sympathetic activity increases, after a latent period of up to 5 s a linearly dependent increment in HR begins and reaches its steady state after about 30 s. This affects the low-frequency (LF) component (0.04–0.15 Hz) in the power spectrum of the HRV, and slightly alters the high-frequency (HF) component (0.15–0.4 Hz). The parasympathetic activity can decrease the HR down to 60 bpm.
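The LF (0.04–0.15 Hz) and HF (0.15–0.4 Hz) band powers described above can be estimated from an RR series by resampling it onto a uniform time grid and integrating a periodogram over each band. This sketch assumes a 4 Hz resampling rate and a plain rectangular-window FFT; both choices are conventional but not specified in the paper:

```python
import numpy as np

def band_power(rr_times, rr_intervals, band, fs=4.0):
    """Power of an RR tachogram in a frequency band (Hz), via uniform resampling."""
    t = np.arange(rr_times[0], rr_times[-1], 1.0 / fs)
    x = np.interp(t, rr_times, rr_intervals)        # resample to a uniform grid
    x = x - x.mean()                                # remove the DC component
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)      # one-sided periodogram
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    lo, hi = band
    return psd[(f >= lo) & (f < hi)].sum()
```

For example, `band_power(t, rr, (0.04, 0.15))` and `band_power(t, rr, (0.15, 0.40))` give the LF and HF powers, whose ratio is a common index of sympathovagal balance.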
III. SLEEP

Sleep is a dynamic state of consciousness characterized by rapid variations in the activity of the autonomic nervous system. The understanding of autonomic activity during sleep is based on observations of heart rate and blood pressure variability. However, conflicting results exist about the neural mechanisms responsible for heart rate variability (HRV) during sleep. Zemaityte and colleagues [1] found that the heart rate decreased and the respiratory sinus arrhythmia increased during sleep. On the other hand, two other studies [2, 3] revealed high sympathetic activity during deep sleep. Thus a complete understanding of autonomic activity during sleep is still elusive. Moreover, there are five different sleep stages, and the HRV varies differently with these stages [4]. The comparison of HRV across different sleep stages is outside the scope of this paper. The causes of heart rate variability have been widely researched because of its ability to predict heart attack and survivability after an attack. The power spectral density (PSD) of the heart rate has been found to vary with the rate of respiration, changes in blood pressure, and psychosocial conditions such as anxiety and stress [5, 6]. All these phenomena are in turn related to the activity of the autonomic nervous system. The objective of this project was to understand how sleep affects the body in terms of what is already understood about the heart rate PSD. There are also disorders of the sleep cycle which, if they persist, can be signs of problems such as irritability and lack of concentration; notable among them are disorders at sleep onset and at the end of sleep, and if they are not treated they can lead to serious problems.

IV. METHODS
Fig. 1. A single cycle of a typical ECG signal with the important points labeled—i.e. P, Q, R, S and T.
The parasympathetic system is active during rest conditions. There is a linear relationship between the decrease in HR and the parasympathetic activity level, without any considerable latency. This affects only the HF component of the HRV power spectrum, so the power in the HF section of the spectrum can be considered a measure of parasympathetic activity. Our proposed model artificially produces a typical power spectrum as shown in Fig. 2; for different illnesses the model also has the capability to alter both the magnitudes and the central frequencies of the peaks of the power spectrum.
In this paper we use the developed Zeeman model to generate the HRV signal in different stages of sleep. For the details of the model, see [7]:
(1)
where x (which can be negative) is related to the length of the heart-muscle fiber, e is a positive scalar, b is a parameter representing an electrochemical control, and the parameter a is related to the tension in the muscle fiber. It is easy to see that the frequency of the oscillation in this model now depends upon the value of δ.
Now we consider the chronotropic modulation of the HR by relating the parameter δ to the four states of sympathetic and parasympathetic activity, s1, s2, p1 and p2. For simplicity, we assume that the sympathetic and parasympathetic activities are sinusoidal and can be modeled by the equations below [7]:

s1 = c1 sin(ω1 t + θ1) + A1
s2 = c2 sin(ω2 t + θ2) + A2
p1 = c3 sin(ω3 t + θ3) + A3
p2 = c4 sin(ω4 t + θ4) + A4   (2)

Now we can relate the parameter δ to the states of sympathetic and parasympathetic activity by the following equations [7]:

q1 = s1 + s2
q2 = p1 + p2
Q = q1 − q2
δ = 1/h(Q)   (3)

The parameter δ determines the HR; the function h and the coupling factors (c1, c2, c3 and c4) determine how the sympathetic and parasympathetic activities alter the HR. Although the function h(Q) in (3) is nonlinear, it can be approximated either with piecewise-linear modules (as we do in this paper) or with a neural network (research currently in progress by the authors). Finally, the parameters ω1, ω2, ω3 and ω4 are the angular frequencies of the sinusoidal variations of the sympathetic and parasympathetic activities. Note that the PSG test was taken from one healthy person with no sleep or cardiac problems; from this subject we obtained stage 1, stage 2, stage 3 and REM recordings. Considering the model parameters in the different stages of sleep, we observe various changes in them; the stage-specific values are given in Table 1.

Table 1 Coupling parameters in different stages of sleep

Parameter  Stage 1  Stage 2  Stage 3  Stage REM
C1         0.001    0.001    0.1      1
C2         1        1        1        1
C3         0.4      0.4      0.4      0.01
C4         0.25     0.3      0.7      1.2
A1         0.3      0.3      0.4      0.5
A2         0.75     0.75     0.8      0.9
A3         0.1      0.1      0.1      0.01
A4         0.8      0.8      1        1.5
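A sketch of Eqs. (2)-(3) in Python, using the Stage 1 coupling values from Table 1. The angular frequencies, phases and the linear stand-in for the nonlinear function h are assumptions, since the excerpt does not give them:

```python
import numpy as np

# Stage-1 coupling values from Table 1; omega_i and theta_i are not given in the
# excerpt, so the frequencies and phases below are illustrative assumptions.
c = [0.001, 1, 0.4, 0.25]
A = [0.3, 0.75, 0.1, 0.8]
omega = [2 * np.pi * f for f in (0.10, 0.25, 0.30, 0.35)]
theta = [0.0, 0.0, 0.0, 0.0]

def delta_of_t(t, h=lambda Q: 1.0 + 0.5 * Q):
    """Eq. (3): delta = 1/h(Q); h here is an assumed linear stand-in."""
    s1, s2, p1, p2 = (c[i] * np.sin(omega[i] * t + theta[i]) + A[i]
                      for i in range(4))            # Eq. (2)
    Q = (s1 + s2) - (p1 + p2)                       # sympathetic minus parasympathetic
    return 1.0 / h(Q)
```

Sampling `delta_of_t` over time gives the instantaneous-rate parameter that drives the Zeeman oscillator, and hence the simulated RR-tachogram for that sleep stage.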
V. CONCLUSION
As regards heart operation, it is well known (see Braunwald et al., 2004) that HRV is used to evaluate vagal and sympathetic influences on the sinus node and to identify patients at risk of a cardiovascular event or death. The major contribution of this study is an improved model able to produce a more comprehensive simulation of a realistic ECG. Based upon the original Zeeman model of 1972a, we have proposed a new model to generate the heart-rate time series in different stages of sleep. The model accounts for the effects of sympathetic and parasympathetic activities on the VLF, LF and HF components of the HRV power spectrum, and the differences between the four stages can be shown together as PSDs in one figure (see Fig. 2). The model presented here has some important advantages over existing models. Compared to the original Zeeman model, it has an improved ability to generate signals that better resemble those recorded in practice and, importantly, it can show the variation across sleep stages. The model may also be of use in future pacemakers and in the control of artificial heart valves.
Fig. 2 HRV, HR and PSD of combined stage 1, 2, 3 & REM sleep
REFERENCES

1. de Boer, R.W., Karemaker, J.M., Strackee, J., 1987. Hemodynamic fluctuations and baroreflex sensitivity in humans: a beat-to-beat model. Am. J. Physiol. 253, 680–689.
2. Braunwald, E., Zipes, D.P., Libby, P., Bonow, R., 2004. Heart Disease: A Textbook of Cardiovascular Medicine. W.B. Saunders Company.
3. Brennan, M., Palaniswami, M., Kamen, P.W., 1998. A new cardiac nervous system model for heart rate variability analysis. Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society 20, 340–352.
4. Jafarnia-Dabanloo, N., McLernon, D.C., Ayatollahi, A., Johari-Majd, V., 2004. A nonlinear model using neural networks for generating electrocardiogram signals. Proceedings of the IEE MEDSIP Conference, Malta, 41–45.
5. Jones, D.S., Sleeman, B.D., 2003. Differential Equations and Mathematical Biology. Chapman & Hall, London.
6. Lavie, P., 1996. The Enchanted World of Sleep. Yale University Press, New Haven.
7. Jafarnia-Dabanloo, N., McLernon, D.C., Zhang, H., Ayatollahi, A., Johari-Majd, V., 2007. A modified Zeeman model for producing HRV signals and its application to ECG signal generation. Journal of Theoretical Biology 244, 180–189.
8. Parvaneh, S., Pashna, M., 2007. Electrocardiogram synthesis using a Gaussian Combination Model (GCM). Computers in Cardiology 34, 621–624.
9. Malik, M., Camm, A.J., 1995. Heart Rate Variability. Futura Publishing Company, New York.
10. McSharry, P.E., Clifford, G., Tarassenko, L., Smith, L.A., 2002. Method for generating an artificial RR tachogram of a typical healthy human over 24-hours. Proc. Comput. Cardiol. 29, 225–228.
11. McSharry, P.E., Clifford, G., Tarassenko, L., Smith, L.A., 2003. A dynamical model for generating synthetic electrocardiogram signals. IEEE Trans. Biomed. Eng. 50(3), 289–294.
12. Park, J., Sandberg, I.W., 1991. Universal approximation using radial-basis-function networks. Neural Comput. 3, 246–257.
13. Suckley, R., Biktashev, V.N., 2003. Comparison of asymptotics of heart and nerve excitability. Phys. Rev. E 68.
14. Tu, P.N.V., 1994. Dynamical Systems: An Introduction with Applications in Economics and Biology, 2nd ed. Springer, Berlin.
15. Zeeman, E.C., 1972a. Differential Equations for the Heartbeat and Nerve Impulse. Mathematics Institute, University of Warwick, Coventry, UK.
16. Zeeman, E.C., 1972b. Differential equations for the heartbeat and nerve impulse. In: Waddington, C.H. (Ed.), Towards a Theoretical Biology, Vol. 4. Edinburgh University Press.
17. Zeeman, E.C., 1977. Catastrophe Theory: Selected Papers 1972–1977. Addison-Wesley, Reading, MA.
18. Zhang, H., Holden, A.V., Boyett, M.R., 2001. Modelling the effect of beta-adrenergic stimulation on the rabbit sinoatrial node. J. Physiol. 533, 38–39.
19. Zhang, H., Holden, A.V., Noble, D., Boyett, M.R., 2002. Analysis of the chronotropic effect of acetylcholine on the sinoatrial node. J. Cardiovasc. Electrophysiol. 13, 465–474.
Two-Wavelength Hematocrit Monitoring by Light Transmittance Method

Phimon Phonphruksa1 and Supan Tungjitkusolmun2

1 Department of Electronics, King Mongkut's Institute of Technology Ladkrabang, Chumphon Campus, 17/1 Moo 6 Tambon Chumkho, Pathiu, Chumphon, Thailand 86160
Email: [email protected]
2 Department of Electronics, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Chalongkrung Road, Ladkrabang, Bangkok, Thailand 10520
Abstract — We present a method for measuring the hematocrit level of whole blood by measuring transmitted light at multiple wavelengths within the visible and infrared spectrum, calculating the light transmittance at each wavelength, comparing the change in light transmittance between wavelengths, and comparing the result with the total hematocrit value. A system for measuring the total hematocrit of whole blood may include discrete LEDs as light sources in the range of 430 nm to 950 nm, one photodetector, data-processing circuitry, and a display unit. We constructed a simplified system and probe, with the LEDs on one side of the finger and a photodiode on the other, and compared the results of the system with hematocrit levels measured by centrifuge using blood samples drawn from 120 patients. From our analysis, wavelengths between 700 nm and 950 nm are insensitive to hematocrit level, while wavelengths between 470 nm and 610 nm are sensitive to it. The potential optimal wavelengths for a light-transmittance hematocrit measuring system are therefore in the ranges 700–900 nm and 470–610 nm. We then used two optimal LED wavelengths, 585 nm and 875 nm, in a linear algorithm to design the noninvasive hematocrit measuring system; from the acquired information the system is able to predict the hematocrit value such that 90% of the 120 measurements have an error of less than 25%.

Keywords — light, transmittance, hematocrit
I. INTRODUCTION

Blood hematocrit refers to the packed red blood cell (RBC) volume of a whole blood sample. Blood is made up of red and white blood cells and plasma [1]. Hematocrit can be measured by various methods, but blood drawn from a finger stick is often used for hematocrit testing. The blood fills a small tube, which is then spun in a small centrifuge. As the tube spins, the red blood cells go to the bottom of the tube, the white blood cells cover the red cells in a thin layer, and the liquid plasma rises to the top. The spun tube is examined and the red-cell column is measured as a percentage of the total blood column: the higher the column of red cells, the higher the hematocrit level.

With regard to determining the hematocrit by optical means, it is well known that the transmission of light through red blood cells is complicated by scattering components from plasma. The scattering from plasma varies from person to person, thereby complicating the determination of hematocrit, but some wavelengths make optical hematocrit monitoring possible [2-5]. The optical method has advantages over the traditional one: it is faster, works in real time, and requires no finger stick, similar to a pulse oximeter [6-9]. Most previous methods collected a blood sample and used a spectrophotometer to obtain the optical transmittance spectrum of the blood constituents; the size and weight of the spectrophotometer, and the puncture needed to draw a blood sample, make such methods disadvantageous for direct use with patients. In the present study we constructed a simplified system to measure the transmittance spectra across the finger. LEDs were used as the light source and a photodiode was placed on the other side to detect the light intensity. The information from this study is the basis for determining the optimal wavelengths for real-time optical hematocrit monitoring.

II. DETAILED DESCRIPTION OF THE INVESTIGATION

The method provides optics and a probe for determining the transmittance spectra from the finger. Twenty-five LEDs of different wavelengths in the range of 430 nm to 950 nm were used as the light source, and a photodiode was placed at the other side of the finger. Figure 1 shows the experimental system and probe; the photodiode detects both visible and infrared light. Figure 2 shows the hematocrit spectra at 0% and 100% oxygen saturation. Figure 3 shows the light transmitted through a finger in proportion to the incident intensity. The transmittance (T) and absorbance (A) from Beer's law can be written as equations (1) and (2).
T = I/I0 = e^(−ε(λ)·c·d)   (1)

A = 2 − log(%T)   (2)

where I0 is the intensity of the incident light, I is the intensity of the transmitted light, d is the optical path length, c is the concentration of the substance, and ε(λ) is the extinction coefficient at the given wavelength.
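Equations (1) and (2) translate directly into code; note that with T expressed as a fraction, A = 2 − log10(%T) reduces to −log10(T):

```python
import numpy as np

def transmittance(eps, c, d):
    """Eq. (1): T = I/I0 = exp(-eps(lambda) * c * d)."""
    return np.exp(-eps * c * d)

def absorbance(T):
    """Eq. (2): A = 2 - log10(%T), with T given as a fraction of I0."""
    return 2.0 - np.log10(100.0 * T)
```

For example, a transmittance of 10% gives an absorbance of 1.0, and full transmittance gives 0.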
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 223–226, 2009 www.springerlink.com
light from the finger. But, thermal noise and ambient light still exist and may be measured during LED turn off. The result of signal when LED turn off may then be subtracted from the result of signal when LED turn on to reduce effect of thermal noise and ambient light and other common mode noise that may be interfere with the measurement. Fig. 1 The experiment system and probe
ILEDon
Ifinger Iambient Ithermal Iother
ILEDoff Ifinger
Iambient Ithermal Iother ILEDon ILEDoff
Fig. 4 Noise reduction of Light Transmittance passed the finger
From fig (4) after reduce thermal noise and ambient light and any noise may be interfere during measurement, to calculates the result of light transmittance from the corrected LEDs turn on signal passed the finger equation (1) can be normalize percentage of light as follows to equation (3). Fig. 2 Hematocrit at oxygen saturation 0%, 100%
T ( O )%
Ifinger Ireference
(3)
T ( O )% is the percentage of light transmittance passed the finger after normalize, Ifinger is the light transmittance from light source (LED) passed the finger to photo diode when the finger tip in the probe (I), Ireference is the light transmittance from LED to photo diode in the probe without the finger tip that mean 100% of light transmittance (I0). B. Hematocrit monitoring equations Fig. 3 Light Transmittance at hematocrit 27% and 30%
Fig. 4 Distance and Light travel passes the finger
A. Noise reduction The reduce method for thermal noise and ambient light or any other common mode noise. Fig (4) shown when LED turn on the measures of light transmittance of finger plus thermal noise plus ambient light. After that, when LED turns off there is no measure of transmittivity or reflected
_______________________________________________________________
From [10] wavelength in the range of 470 nm to 610 nm are sensitivity to hematocrit level and wavelength in the range of 700 nm to 900 nm are insensitivity to hematocrit level. From fig (2) we chose two wavelengths are insensitivity to Oxygen saturation at 585 nm and 875 nm that are potential to calculate the hematocrit value. Fig (3) graphs from 22 patients’ (Men 10, hematocrit 27% = 5, hematocrit 30 % = 5 and Women 12 hematocrit 27% = 6, hematocrit 30 % = 6) data collected of light transmittance at the finger tip from our system and compare to hematocrit value with blood drawn and centrifugation method. From fig (3) shown the wavelength 585 nm is sensitivity to hematocrit level and wavelength 875 nm is insensitivity to hematocrit level. From Fig (4) light transmittance at the finger from LED to photo diode can be writing to equation (4) and equation (5). We normalize the light transmittance to equation (6) and equation (7). Equation (8), Equation (9) showed the extinction coefficient of oxyhemoglobin and deoxyhemoglobin are equal at wavelengths 585 nm and 875
IFMBE Proceedings Vol. 23
_________________________________________________________________
Two wavelengths Hematocrit Monitoring by Light Transmittance Method
nm, and can therefore be replaced by a single new constant K. In Equation (10), HbO is the oxyhemoglobin, Hb is the deoxyhemoglobin, and the total hemoglobin (tHb) of whole blood, the summation of HbO and Hb, corresponds to the hematocrit (Hct). Applying Equation (9) and Equation (10), the summation of HbO and Hb with the new constant K gives Equation (11). Equation (12) eliminates the exponential term by applying the natural logarithm. Equation (13) takes the difference of the log transmittances at the hematocrit-sensitive and hematocrit-insensitive wavelengths. Equation (14) shows the algorithm for measuring the hematocrit value by the two-wavelength light transmittance method: the difference of the log transmittances at the two wavelengths divided by the constant K.

Equation (4) shows the light transmittance at 585 nm, which is sensitive to the hematocrit level:

I585 = I0 e^-(aHbO + bHb + RBC + Plasma + Nail + Tissue + Pigmentation)    (4)

Equation (5) shows the light transmittance at 875 nm, which is insensitive to the hematocrit level:

I875 = I0 e^-(RBC + Plasma + Nail + Tissue + Pigmentation)    (5)

Equation (6) normalizes the light transmittance of the device by the reference intensity I0 for the hematocrit-sensitive wavelength:

T585 = I / I0 = e^-(aHbO + bHb + RBC + Plasma + Nail + Tissue + Pigmentation)    (6)

Equation (7) normalizes the light transmittance of the device by the reference intensity I0 for the hematocrit-insensitive wavelength:

T875 = I / I0 = e^-(RBC + Plasma + Nail + Tissue + Pigmentation)    (7)

Equation (8) takes the ratio of the transmittances at 585 nm and 875 nm:

T585 / T875 = e^-(aHbO + bHb)    (8)

Equation (9): the extinction coefficients of oxyhemoglobin and deoxyhemoglobin differ very little at these wavelengths and are estimated by a single new constant K:

a ≈ b ≈ K    (9)

Equation (10): the summation of total hemoglobin (HbO and Hb) corresponds to the hematocrit:

HbO + Hb = Hct    (10)

Equation (11) applies Equation (9) and Equation (10):

T585 / T875 = e^-K(Hct)    (11)

Equation (12) takes the natural logarithm to eliminate the exponential term:

K·Hct = ln(T875) - ln(T585)    (12)

Equation (13) defines the difference of the log transmittances at the two wavelengths (585 nm and 875 nm):

ΔT = ln(T875) - ln(T585)    (13)

Equation (14): the hematocrit value is the difference of the log transmittances at the two wavelengths divided by the constant K:

Hct = ΔT / K    (14)

Equation (15) gives the relationship between Hct and tHb:

tHb (g/dL) = 0.33 × Hct (%)    (15)

Figure (5) shows the system process of the two-wavelength light transmittance method: after pressing start, the machine collects the important data and proceeds to

Fig. 5 The process flowchart of the system
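The chain of Equations (6)-(16) reduces to a few lines of arithmetic. The sketch below assumes ideal normalized intensities and uses the paper's reported average K of 0.0047; the function names and sample values are illustrative, not from the described instrument.

```python
import math

def hematocrit_from_transmittance(I585, I875, I0_585, I0_875, K=0.0047):
    """Estimate hematocrit (%) from intensities at 585 nm and 875 nm.

    Eqs. (6)-(14): normalize each intensity by its reference I0, take the
    difference of the log transmittances, and divide by the calibration
    constant K (the paper reports an average K of 0.0047).
    """
    T585 = I585 / I0_585                          # Eq. (6), Hct-sensitive
    T875 = I875 / I0_875                          # Eq. (7), Hct-insensitive
    dT = math.log(T875) - math.log(T585)          # Eqs. (12)-(13): dT = K*Hct
    return dT / K                                 # Eq. (14)

def calibrate_K(I585, I875, I0_585, I0_875, hct_reference):
    """Eq. (16): recover K from a reference hematocrit (centrifugation)."""
    dT = math.log(I875 / I0_875) - math.log(I585 / I0_585)
    return dT / hct_reference

def total_hemoglobin(hct_percent):
    """Eq. (15): tHb (g/dL) from hematocrit (%)."""
    return 0.33 * hct_percent
```

For example, normalized transmittances T585 = 0.80 and T875 = 0.92 give Hct = (ln 0.92 - ln 0.80) / 0.0047, roughly 29.7%.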
Phimon Phonphruksa and Supan Tungjitkusolmun
eliminate noise (Fig. 4), to apply Equation (3) for the corrected light intensity, and finally to predict the hematocrit value with Equation (14).

C. Trial 1

This experiment used hematocrit values measured by an IEC Micro-MB centrifuge (International Equipment Company), i.e. by blood drawing and centrifugation, for comparison with the two-wavelength light transmittance method (Equation (14)). We collected hematocrit values from 60 patients (men); from this experiment we found the constant K (0.0047) with Equation (16).
Both wavelengths are also insensitive to oxygen saturation in the linear Equation (14). Using the average K value (0.0047), we measured the hematocrit level of 120 patients and compared it with blood samples drawn from the finger and processed by the centrifugation method. Table 1 and Fig. (6) compare the reference hematocrit values from centrifugation with the two-wavelength light transmittance method: the predicted hematocrit value is obtained with an error of less than 25% for 90% of the data.
K = ΔT / Hct    (16)

After that we apply the constant K in Equation (14); Table 1 shows the results and errors from Trial 1.

Fig. 6 Measured vs. predicted hematocrit (Trials 1 and 2): centrifugation and light transmittance method
D. Trial 2

We collected hematocrit values by the blood drawing and centrifugation method and by our system (Equation (14)) again with 60 patients (women); the results of Trial 2 are shown in Table 1.

Table 1 Centrifugation hematocrit values and the transmittance method

No.  Measured (%)  Trial 1  Error (%)  Trial 2  Error (%)
1    22            28.032   12.1       24.688    5.8
2    23            28.807   11.2       33.152    0.3
3    26            30.908    8.6       26.662    1.3
4    27            27.719    1.3       29.476    4.4
5    28            32.596    7.6       30.270    3.9
6    29            32.721    6.0       30.247    2.1
7    30            30.083    0.1       34.40     6.8
8    31            30.42    -0.9       31.495    0.8
9    32            36.634    6.8       34.944    4.4
10   33            31.633   -2.1       32.221   -1.2
11   34            34.909    1.3       37.282    4.6
12   35            34.259   -1.1       34.491   -0.7
13   36            32.633   -4.9       35.852   -0.2
14   37            32.758   -6.1       45.518   10.3
15   38            32.508   -7.8       37.599   -0.5
16   39            34.584   -6.0       33.061   -8.2
17   40            34.734   -7.0       39.074   -1.2
18   43            42.636   -0.4       41.298   -2.0
III. CONCLUSIONS

The system and probe of this experiment, shown in Fig. (1), together with the process flowchart in Fig. (5), obtain the optical transmittance by the two-wavelength method, with the chosen wavelength 585 nm sensitive to the hematocrit level and 875 nm insensitive to it, both wavelengths being insensitive to oxygen saturation.
ACKNOWLEDGMENT

The authors would like to thank the Department of Orthopedics, Ramathibodi Hospital, for providing the hematocrit values measured by centrifuge, and the patients for the optical transmittance information.
REFERENCES

1. Wintrobe M.M., "Clinical Hematology", 5th edition, Lea & Febiger, Philadelphia, 1961.
2. Yitzhak Mendelson, "Pulse oximeter and method of operation", US Patent No. 2002/0042558, Apr. 11, 2002.
3. Eric Kinast, "Pulse Oximeter", US Patent No. 5995858, Nov. 30, 1999.
4. Teiji Ukawa, Kazumasa Ito, Tadashi Nakayama, "Pulse Oximeter", US Patent No. 5355882, Oct. 18, 1994.
5. Luis Oppenheimer, "Spectrophotometric Blood Analysis", US Patent No. 5331958, Jul. 26, 1994.
6. Kouhei Kabuki, Yoshisada Ebata, Tadashi Suzuki, Atsushi Hiyama, "Spectrophotometer", US Patent No. 2002/0050560, May 2, 2002.
7. Wylie I. Lee, Jason E. Alderete, William V. Fower, "Optical Measurement of Blood Hematocrit Incorporating a Self-Calibration Algorithm", US Patent No. 6064474, May 16, 2000.
8. Michael J. Higgins, Huntington Beach, "Continuous Spectroscopic Measurement of Total Hemoglobin", US Patent No. 7319894, Jan. 15, 2008.
9. Takuo Aoyagi, Masayoshi Fuse, Michio Kanemoto, Cheng Tai Xia, "Apparatus for Measuring Hemoglobin", US Patent No. 5720284, Feb. 24, 1998.
10. Phimon Phonphruksa and Supan Tungjitkusolmun, "A Photoplethysmographic Method for Real Time Hematocrit Monitoring", International Congress on Biological and Medical Engineering (ICBME), Singapore, 2002.
Rhythm of the Electromyogram of External Urethral Sphincter during Micturition in Rats

Yen-Ching Chang

Department of Applied Information Sciences, Chung Shan Medical University, Taichung, Taiwan, ROC
Abstract — Fractional Brownian motion (FBM) or fractional Gaussian noise (FGN), depending on the characteristics of the real signal, usually provides a useful model for biomedical signals. The discrete counterpart of FBM or FGN, referred to as discrete-time FBM (DFBM) or discrete-time FGN (DFGN), is used to analyze such signals in practical applications. This class of signals possesses long-term correlation and 1/f-type spectral behavior. In general, these signals appear irregular in a macroscopic view. However, physiological signals may contain a certain regularity or rhythm in a microscopic view, serving the purpose of synergia. To find such phenomena, the wavelet transform is invoked to decompose these signals and extract possible hidden characteristics. In this study, we first calculate the fractal dimension of the electromyogram (EMG) of the external urethral sphincter (EUS) to determine where the voiding phase is. We then sample a piece of signal during the voiding phase to further investigate regularity or rhythm. Results indicate that a certain regularity or rhythm indeed exists within the irregular appearance.

Keywords — Discrete-time fractional Brownian motion, discrete-time fractional Gaussian noise, wavelet transform, electromyogram, rhythm.
I. INTRODUCTION

Many physiological signals can be modeled as either DFBM or DFGN [1], [2]. These two models are described by one parameter, the Hurst parameter (H), which lies in the finite interval (0, 1). The Hurst parameter is related to the fractal dimension by D = 2 - H, a useful tool for systems that demonstrate long-term correlations and 1/f-type spectral behavior. Once the Hurst parameter is estimated, its corresponding fractal dimension can be obtained. Among estimators of the Hurst parameter, the maximum likelihood estimator (MLE) [1] is optimal. However, its computational complexity is high and it is difficult to execute, and the problem becomes more apparent as the dataset grows large. For this reason, we use an approximate MLE [3] to estimate the Hurst parameter. This method is relatively quick and has acceptable accuracy.
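The relation D = 2 - H and the role of a quick Hurst estimator can be illustrated with the classical aggregated-variance method, in which the variance of m-block means of a DFGN-like series scales as m^(2H-2). This is a simple stand-in for exposition, not the approximate MLE of [3]; the function names are ours.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Rough Hurst estimate for a DFGN-like series: fit the slope of
    log Var(block means) against log(block size); slope = 2H - 2."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n = len(x) // m
        means = x[: n * m].reshape(n, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

def fractal_dimension(h):
    """D = 2 - H relates the fractal dimension to the Hurst parameter."""
    return 2.0 - h
```

For white noise (H near 0.5) the estimate should land near 0.5, giving a fractal dimension near 1.5, the boundary between positive and negative correlation used later in the Results.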
Time-frequency analysis [4] is a frequently used tool in the biomedical field. For describing periodic signals, the Fourier transform is widely used, since it captures periodic characteristics. In order to grasp the irregularity of signals, the full time interval is divided into a number of small, equal intervals, which are then individually analyzed using the Fourier transform. This approach provides time information in addition to frequency and is well known as the short-time Fourier transform (STFT). However, there is still a problem with this method: the time interval cannot be adjusted, so if high frequencies exist, they cannot be discovered when the time interval is short. In this situation, wavelets [5], [6] are invoked to overcome the problem. They keep track of time and frequency information very well. Moreover, wavelets can detect hidden information and extract important features, which makes them a very useful tool for biomedical signals. Physiological signals generally come from a very complex system. Signals generated by such a system are usually irregular and fluctuate over time. However, some regular features may exist behind these phenomena. These regular components may help tissues or organs to implement their functions effectively; without their assistance, those functions may not complete and may even fail. In order to investigate whether rhythm exists in the physiological system, the EMG of EUS in female Wistar rats is invoked. The waveform of the EMG exhibits statistical self-similarity and can be modeled as DFGN; its accumulative signal can be viewed as DFBM. The fractal dimension of the signal will help us judge when voiding phases happen during micturition. The wavelet transform will help us further analyze these phenomena and detect hidden information. Rhythm generally happens in the low frequency band. In addition, the average power is also invoked to identify differences during micturition.
II. MATERIALS AND METHODS The experiments were carried out on female Wistar rats anesthetized with urethane [7]. The EMG of EUS and cystometrogram (CMG) of bladder with an infusion rate of
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 227–230, 2009 www.springerlink.com
Fig. 1 From top to bottom: Original signal, accumulative signal, fractal dimension (F. D.), and average power (A. P.) at room temperature.
and the next filling phase. On the other hand, the average power also shows that the voiding phase needs more power to facilitate emptying: the average power is at least 0.01 during voiding. It is reasonable that the power needed after voiding is larger than in the filling phase; the extra power can be viewed as recovery or transient power. Moreover, a certain rhythm exists in the accumulative signal during voiding, which will be discovered via the FFT as illustrated in this section. Figures 2 and 3 both come from cold temperature. In Figure 2, the voiding phase is not easy to discover from the time series, whereas the accumulative signal makes it easier than the original one. Along the same line, the time interval with fractal dimension below 1.5 can be suggested as the voiding phase. When the figure is zoomed in, the time series also roughly
0.123 ml/min using saline solutions were recorded simultaneously at two different temperatures: room temperature and cold temperature (6-8 °C). In this work, only the EMG of EUS is adopted for identifying muscular responses at the two temperatures. The sampling rate was 500 points/s (time resolution 0.002 s). The time series of the EMG were analyzed as follows. The processing window size was N = 1024 points (2.048 s, frequency resolution 0.4883 Hz). The total record length was 8192 points (16.384 s) for room temperature and 16384 points (32.768 s) for cold temperature. The first 1024 points were collected as the first window; shifting 64 points to the right gave the second window, and these steps were repeated until the last window was gathered. The total number of windows was 113 for room temperature and 241 for cold temperature. For each window, the fractal dimension, average power, and power spectral density (PSD) via the fast Fourier transform (FFT) were computed. Although the FFT is not suitable for estimating the PSD of DFGN, which possesses long-term correlation and 1/f-type spectral behavior, it is appropriate for identifying the frequencies of periodic signals. In general, rhythm happens at low frequencies. In order to avert the interference of high-frequency components and capture hidden information, the analyzed signals were decomposed into 3 levels via the wavelet transform using Daubechies-2 filter coefficients [6]. Afterwards, we reconstructed the signal from its corresponding approximation at each level. The original signal was labeled Level 0, and the reconstructed signals Level j (1 ≤ j ≤ 3) for the other levels. The PSD of each signal was calculated via the FFT, and the time-frequency diagrams were displayed to illustrate the signal's phenomena during micturition. For clarity, unimportant frequencies, where the PSDs were lower than 1 for room temperature and 0.35 for cold temperature, were suppressed.
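The windowing scheme just described (500 points/s, N = 1024, shift 64) can be sketched as below; the fractal-dimension estimator itself is out of scope here, so only window generation, average power, and the FFT-based PSD are shown, with illustrative names. Note that (8192 - 1024)/64 + 1 = 113 and (16384 - 1024)/64 + 1 = 241, matching the window counts above.

```python
import numpy as np

FS = 500     # sampling rate (points/s)
N = 1024     # window length (2.048 s)
SHIFT = 64   # shift between successive windows

def sliding_windows(signal):
    """Yield (start_time_s, window) pairs: the first 1024 points, then
    repeated shifts of 64 points until the record ends."""
    for start in range(0, len(signal) - N + 1, SHIFT):
        yield start / FS, signal[start:start + N]

def average_power(window):
    """Mean squared amplitude of one window."""
    w = np.asarray(window, dtype=float)
    return float(np.mean(w ** 2))

def window_psd(window):
    """PSD via the FFT (bin spacing FS/N = 0.4883 Hz)."""
    w = np.asarray(window, dtype=float)
    return (np.abs(np.fft.rfft(w)) ** 2) / len(w)
```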
III. RESULTS
The results at the two temperatures show that the collected signals can be roughly classified into three possible outcomes, as illustrated in Figures 1-3. Figure 1 shows that the time interval (about 6-12 s) with fractal dimension below 1.5 can be suggested as the voiding phase. This is reasonable because a fractal dimension between 1 and 1.5 indicates positive correlation in the signal, while one between 1.5 and 2 indicates negative correlation. When voiding, coordination is important for animals. The activity of the time series during voiding is obvious, but the time interval between the filling and voiding phases is not apparent. The same holds for the time interval between voiding
Fig. 2 From top to bottom: Original signal with Type I, accumulative signal, fractal dimension (F. D.), and average power (A. P.) at cold temperature.
displays a certain rhythm, but this cannot be decided without another indicator. Here the fractal dimension helps us to explain the importance of coordination among muscles. Likewise, the average power during voiding is larger than 0.01; average power is a good auxiliary tool here.
Fig. 4 From top to bottom and left to right: Original signal, reconstructed signal via Level 1, reconstructed signal via Level 2, and reconstructed signal via Level 3 at room temperature.
Fig. 3 From top to bottom: Original signal with Type II, accumulative signal, fractal dimension (F. D.), and average power (A. P.) at cold temperature.
In Figure 3, all fractal dimensions are larger than 1.5 and the average power is lower than 0.01. We cannot discover any voiding response from the EMG of EUS. This phenomenon can be explained as incontinence; that is to say, cold-water stimulation can destroy coordination among muscles. It is interesting that the signal with Type I still preserves intermittent voiding phases, but the signal with Type II does not. This can be explained if the rats of Type I possess a robust physiological mechanism of self-organization, which can resist larger external stimulation, whereas the rats of Type II lack this ability, having an ordinary rather than robust mechanism. The time series of Figures 1 and 2 imply that a certain rhythm exists. In order to detect it, we resort to the FFT. The results show that rhythm indeed exists during voiding as long as the physiological function of the EUS is not damaged completely. Figures 4-6 illustrate the phenomena as follows. Figure 4 shows that the main frequency is 6.8359 Hz and occurs between 8.192 and 10.752 s. The second is 6.3477 Hz and occurs at two time intervals: 7.936-8.704 s and 10.24-11.008 s. In general, the signals on both sides of the main frequency include other non-rhythmic components, which affect the estimate of the PSD; therefore, the main frequency can be viewed as that of the real rhythm. Rhythm assists the process during micturition.
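The FFT-based detection of the dominant rhythm can be sketched as below; with the 500 points/s sampling and 1024-point windows described in the Methods, the bin spacing is 0.4883 Hz, so bin 14 corresponds to the 6.8359 Hz main frequency reported here. The function name is illustrative.

```python
import numpy as np

def main_frequency_hz(window, fs=500):
    """Return the frequency (Hz) of the largest PSD peak in one window.
    With fs = 500 and len(window) = 1024, bins are 0.4883 Hz apart."""
    w = np.asarray(window, dtype=float)
    psd = np.abs(np.fft.rfft(w - w.mean())) ** 2
    psd[0] = 0.0                  # ignore any residual DC component
    return np.argmax(psd) * fs / len(w)
```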
It is observed from Figure 5 that there are five frequencies with PSD larger than 0.35: 5.8594 (third), 6.3477 (main frequency), 6.8359 (second), 7.3242 (fourth) and 7.8125 (fifth) Hz. The main frequency occurs at two time intervals: 8.576-10.88 s and 23.296-24.576 s. The second occurs at 8.32-8.96 s and 10.624-11.008 s. The third occurs at 9.728-10.368 s and 17.536-17.92 s. The fourth occurs between 11.776 and 12.672 s. The fifth occurs at 11.136 s. As mentioned before, the signals on both sides of the main frequency are affected by other non-rhythmic components. These rhythmic regions suggest where the voiding phases occur.
Fig. 5 From top to bottom and left to right: Original signal with Type I, reconstructed signal via Level 1, reconstructed signal via Level 2, and reconstructed signal via Level 3 at cold temperature.
In addition, these values also tell us that the main frequency at cold temperature is slightly lower, by one step of frequency resolution, than the one at room temperature. These five obvious frequencies suggest that rhythm still exists for robust rats even under cold-water stimulation. Figure 6 shows that no rhythm occurs in the signal with Type II at cold temperature, suggesting that the physiological function of this type of rat is worse than that of Type I. To reveal the rhythm in the signal, one time interval with rhythm is extracted from the full-time interval at room temperature. This interval of the signal is first processed via
the wavelet transform and reconstructed from its approximation signal. Then it is processed via the FFT. The result is shown in Figure 7. It is obvious from this figure that rhythm occurs during the voiding phase and its main frequency is 6.8359 Hz.
IV. CONCLUSION

Micturition is abnormal when muscle or nerve problems are present and interfere with the ability of the bladder to hold or release urine normally. In this study, it is suggested that nerves are numbed more or less by cold-water stimulation, according to the animal's physiological condition. Numbed nerves result in coordination problems among muscles. A healthy animal should have at least one voiding phase, and ideally exactly one. Under cold-water stimulation, an animal with a robust mechanism exhibits many voiding phases, but rhythm still exists to facilitate micturition. Nevertheless, an animal with bad function will suffer incontinence. The results show that rhythm plays an important role during micturition: it facilitates the bladder in emptying. In addition, hybrid methods usually provide some meaningful explanations.
Fig. 6 From top to bottom and left to right: Original signal with Type II, reconstructed signal via Level 1, reconstructed signal via Level 2, and reconstructed signal via Level 3 at cold temperature.
Fig. 7 From top to bottom and left to right: Original signal, its corresponding PSD, reconstructed signal via Level 1, its corresponding PSD, reconstructed signal via Level 2, its corresponding PSD, and reconstructed signal via Level 3, its corresponding PSD, at one time interval at room temperature.

ACKNOWLEDGMENT

This work was partially supported under Grant number NSC 97-2914-I-040-008-A1 and by Chung Shan Medical University.

REFERENCES

1. Lundahl T, Ohley J, Kay SM, Siffert R (1986) Fractional Brownian motion: A maximum likelihood estimator and its application to image texture. IEEE Transactions on Medical Imaging MI-5:152-161.
2. Chang S, Mao ST, Hu SJ, Lin WC, Cheng CL (2000) Studies of detrusor-sphincter synergia and dyssynergia during micturition in rats via fractional Brownian motion. IEEE Transactions on Biomedical Engineering 47:1066-1073.
3. Chang YC, Chang S (2002) A fast estimation algorithm on the Hurst parameter of discrete-time fractional Brownian motion. IEEE Transactions on Signal Processing 50:554-559.
4. Akay M (1996) Detection and estimation methods for biomedical signals. Academic Press, New York.
5. Daubechies I (1992) Ten lectures on wavelets. SIAM, Philadelphia, PA.
6. Boggess A, Narcowich FJ (2001) A First Course in Wavelets with Fourier Analysis. Prentice Hall, New Jersey.
7. Chang S, Mao ST, Kuo TP, Hu SJ, Lin WC, Cheng CL (1999) Fractal geometry in urodynamics of lower urinary tract. Chinese Journal of Physiology 42:25-31.
Higher Order Spectra based Support Vector Machine for Arrhythmia Classification

K.C. Chua (1), V. Chandran (2), U.R. Acharya (1) and C.M. Lim (1)

(1) Division of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
(2) Faculty of Built Environment and Engineering, Queensland University of Technology, Brisbane, Australia
Abstract — Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. HRV analysis is an important tool to observe the heart's ability to respond to normal regulatory impulses that affect its rhythm. Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. A computer-based arrhythmia detection system of cardiac states is very useful in diagnostics and disease management. In this work, we studied the identification of HRV signals using features derived from HOS. These features were fed to a support vector machine (SVM) for classification. Our proposed system can classify normal rhythm and four other classes of arrhythmia with an average accuracy of more than 85%.

Keywords — HOS, heart rate, bispectrum, SVM, classifier
I. INTRODUCTION

The electrocardiogram (ECG), a time-varying signal reflecting the electrical activity of the heart muscle, is an important tool in diagnosing the condition of the heart [1]. Heart rate variability, the change in the beat rate of the heart over time, reflects the autonomic control of the cardiovascular system [2]. It is a simple, noninvasive technique which provides an indicator of the dynamic interaction and balance between the sympathetic and parasympathetic nervous systems. These signals are not linear in nature and hence analysis using nonlinear methods can unveil hidden information in the signal. A detailed review of HRV analysis that includes both linear and non-linear approaches is given in [3]. An automated method for classification of cardiac abnormalities is proposed based on higher order spectral analysis of HRV. Higher order spectra (HOS) are spectral representations of moments and cumulants and can be defined for deterministic signals and random processes. They have been used to detect deviations from Gaussianity and to identify non-linear systems [4].

II. MATERIALS AND METHODOLOGY

ECG data for the analysis were obtained from the arrhythmia database of Kasturba Medical Hospital, Manipal, India. Prior to recording, the ECG signals were processed to remove noise due to power line interference, respiration, muscle tremors, spikes, etc. The R peaks of the ECG were detected using Tompkins's algorithm [5]. The ECG signal is used to classify cardiac arrhythmias into 5 classes, namely normal sinus rhythm (NSR), premature ventricular contraction (PVC), complete heart block (CHB), type III sick sinus syndrome (SSS-III) and complete heart failure (CHF). The number of datasets chosen for each of the five classes is given in Table 1. Each dataset consists of around 10,000 samples and the sampling frequency of the data is 320 Hz. The interval between two successive QRS complexes is defined as the RR interval (t(r-r) seconds) and the heart rate (beats per minute) is given as:
HR = 60 / t(r-r)    (1)
Table 1 Number of datasets in each class.

Cardiac Class   No. of Datasets
NSR             183
PVC             37
CHB             42
SSS-III         43
CHF             25
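Equation (1) maps each RR interval to an instantaneous heart rate; a minimal sketch, with an illustrative helper that converts a sequence of R-peak times into a heart-rate series:

```python
def heart_rate_bpm(rr_interval_s):
    """Eq. (1): heart rate in beats per minute from one RR interval (s)."""
    return 60.0 / rr_interval_s

def heart_rate_series(r_peak_times_s):
    """Instantaneous HR from successive R-peak times in seconds."""
    return [heart_rate_bpm(t2 - t1)
            for t1, t2 in zip(r_peak_times_s, r_peak_times_s[1:])]
```

For example, R peaks 0.75 s apart correspond to 80 bpm.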
III. HOS AND ITS FEATURES

The HRV signal is analyzed using different higher order spectra (also known as polyspectra), which are spectral representations of higher order moments or cumulants of a signal. In particular, this paper studies features related to the third order statistics of the signal, namely the bispectrum. The bispectrum is the Fourier transform of the third order correlation of the signal and is given by

B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]    (2)

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 231–234, 2009 www.springerlink.com
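As a concrete sketch of the estimator behind Eq. (2), and of the bispectral features defined in Eqs. (3)-(8) below, the following averages segment triple products and then derives the entropies, moment features and weighted centre from the magnitude array. Segment length, FFT size, the use of magnitudes in the WCOB, and all names are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def bispectrum(x, nfft=64, seg_len=64):
    """Direct estimate of Eq. (2): split the record into non-overlapping
    segments (each treated as one realization), FFT each segment, and
    average the triple product X(f1) X(f2) X*(f1+f2). Returns an
    (nfft//2, nfft//2) complex array over non-negative frequencies."""
    x = np.asarray(x, dtype=float)
    n_seg = len(x) // seg_len
    half = nfft // 2
    B = np.zeros((half, half), dtype=complex)
    for s in range(n_seg):
        seg = x[s * seg_len:(s + 1) * seg_len]
        X = np.fft.fft(seg - seg.mean(), nfft)
        for i in range(half):
            for j in range(half):
                B[i, j] += X[i] * X[j] * np.conj(X[i + j])
    return B / n_seg

def hos_features(B):
    """Features of Eqs. (3)-(8) from a bispectrum array over the region
    Omega; magnitudes are used throughout, including for the WCOB."""
    mag = np.abs(B)
    eps = 1e-30                                   # guards log(0)
    p = mag / mag.sum()
    P1 = -np.sum(p * np.log(p + eps))             # Eq. (3)
    q = mag ** 2 / (mag ** 2).sum()
    P2 = -np.sum(q * np.log(q + eps))             # Eq. (4)
    H1 = np.sum(np.log(mag + eps))                # Eq. (5)
    d = np.diag(mag)
    H2 = np.sum(np.log(d + eps))                  # Eq. (6)
    k = np.arange(1, len(d) + 1)
    H3 = np.sum(k * np.log(d + eps))              # Eq. (7)
    i = np.arange(mag.shape[0])[:, None]
    j = np.arange(mag.shape[1])[None, :]
    f1m = np.sum(i * mag) / mag.sum()             # Eq. (8)
    f2m = np.sum(j * mag) / mag.sum()
    return P1, P2, H1, H2, H3, f1m, f2m
```

A quadratically phase-coupled triple of cosines produces a sharp bispectral peak at the coupled frequency pair, which is the property these features summarize.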
where X(f) is the Fourier transform of the signal x(nT) and E[.] stands for the expectation operation. In practice, the expectation operation is replaced by an estimate that is an average over an ensemble of realizations of a random signal. For deterministic signals, the relationship holds without an expectation operation, with the third order correlation being a time average. For deterministic sampled signals, X(f) is the discrete-time Fourier transform and in practice is computed as the discrete Fourier transform (DFT) at frequency samples using the FFT algorithm. The frequency f may be normalized by the Nyquist frequency to lie between 0 and 1. In our earlier study, we proposed general patterns for different classes of arrhythmia. Bispectral entropies [6] were derived to characterize the regularity or irregularity of the HRV from bispectrum plots. The formulae for these bispectral entropies are as follows:
Normalized Bispectral Entropy (BE 1):

P1 = -Σn pn log pn    (3)

where pn = |B(f1, f2)| / ΣΩ |B(f1, f2)|, and Ω is the region shown in Figure 1.

Normalized Bispectral Squared Entropy (BE 2):

P2 = -Σn qn log qn    (4)

where qn = |B(f1, f2)|² / ΣΩ |B(f1, f2)|², and Ω is the region shown in Figure 1.

The normalization in the equations above ensures that entropy is calculated for a parameter that lies between 0 and 1 (as required of a probability) and hence the entropies (P1 and P2) computed are also between 0 and 1.

Figure 1 Non-redundant region Ω (0 ≤ f1, f2 ≤ 0.5) of computation of the bispectrum for real signals. Features are calculated from this region. Frequencies are shown normalized by the Nyquist frequency.

In this study we also make use of features related to moments [7] and the weighted centre of bispectrum (WCOB) [8] to characterize these plots. The features related to the moments of the plot are:

the sum of logarithmic amplitudes of the bispectrum,

H1 = ΣΩ log |B(f1, f2)|    (5)

the sum of logarithmic amplitudes of diagonal elements in the bispectrum,

H2 = ΣΩ log |B(fk, fk)|    (6)

and the first-order spectral moment of amplitudes of diagonal elements in the bispectrum,

H3 = Σ(k=1 to N) k log |B(fk, fk)|    (7)

These features (H1-H3) were used by Zhou et al [7] to classify mental tasks from EEG signals.

The definition of WCOB [8] is given by

f1m = ΣΩ i B(i, j) / ΣΩ B(i, j),    f2m = ΣΩ j B(i, j) / ΣΩ B(i, j)    (8)

where i, j are the frequency bin indices in the non-redundant region.

Blocks of 1024 samples, corresponding to 256 seconds at the re-sampled rate of 4 samples/sec, were used for computing the bispectrum. These blocks were taken from each HRV data record with an overlap of 512 points (i.e. 50%).

IV. SUPPORT VECTOR MACHINE (SVM) CLASSIFIER

In this study a kernel-based classifier is adopted for classification of the cardiac abnormalities. Herein, the attribute vector is mapped to some new space. Although classification is accomplished in a higher dimensional space, any dot product between vectors involved in the optimization process can be implicitly computed in the low dimensional space [9]. For a training set of instance-label pairs (xi, yi), i = 1, ..., l, where xi ∈ Rⁿ and yi ∈ {1, -1}, if φ(·) is a non-linear operator mapping the attribute vector x to a higher dimensional space, the optimization problem for the new points φ(x) becomes
min(w,b,ξ) (1/2) wᵀw + C Σ(i=1 to l) ξi    (9)

subject to the constraints

yi (wᵀφ(xi) + b) ≥ 1 - ξi,    ξi ≥ 0

where C > 0 is the penalty parameter for the error term and ξi are slack variables introduced when the training data are not completely separable by a hyperplane. The SVM finds a linear separating hyperplane with the maximal margin in this higher dimensional space. As in the linear case, the mapping appears only through the kernel function K(xi, xj) = φ(xi)ᵀφ(xj). Although several kernels exist, the typical choice is the radial basis function (RBF) kernel, which non-linearly maps samples into a higher dimensional space. There are several methods that can be used to extend a binary SVM to a multi-class SVM; in this work, we used the one-against-all method to classify the five classes of HRV data [10].
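The one-against-all extension can be illustrated independently of any particular SVM library: train one binary scorer per class (+1 for the class, -1 for the rest) and predict by the strongest response. Here a regularized least-squares scorer stands in for each binary RBF-SVM; the class and method names are ours, not from [10].

```python
import numpy as np

class OneVsAll:
    """One-against-all scheme: one binary scorer per class, predict the
    class whose scorer responds most strongly. A regularized
    least-squares fit stands in for each binary SVM."""

    def fit(self, X, y, ridge=1e-3):
        X1 = np.hstack([X, np.ones((len(X), 1))])    # append bias column
        self.classes_ = np.unique(y)
        W = []
        for c in self.classes_:
            t = np.where(y == c, 1.0, -1.0)          # +1 vs rest targets
            A = X1.T @ X1 + ridge * np.eye(X1.shape[1])
            W.append(np.linalg.solve(A, X1.T @ t))
        self.W = np.array(W)
        return self

    def predict(self, X):
        X1 = np.hstack([X, np.ones((len(X), 1))])
        scores = X1 @ self.W.T                       # one score per class
        return self.classes_[np.argmax(scores, axis=1)]
```

In the actual system, each per-class scorer would be a soft-margin RBF-kernel machine solving Eq. (9).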
V. TEST VECTOR GENERATION

In order to measure and validate the performance of a classifier, a sufficiently large test set is needed. When only a small database is available, different combinations of training and test sets can be used to generate more trials. In our experiment, we chose approximately two thirds of the data from each class of HRV signals for training and one third for testing. This experiment was repeated five times with different, randomly chosen combinations of training and test data. In each of these experiments a new SVM model was generated, and the test sets did not overlap with the training sets.

VI. RESULT

Table 2 shows the range of values of all seven features for the five classes. The ANOVA test on these HOS features yields a very low p-value (p < 0.001). For normal cases, the heart rate varied continuously between 60 bpm and 80 bpm. The bispectrum entropies (P1 and P2) appear high due to the higher variation in heart rate. The mean value of P1 is 0.719 while that of P2 is 0.43. The mean values of the moments H1, H2 and H3 are 2.81e5, 1.29e3 and 1.42e5, respectively. The WCOB mean values for f1m and f2m are 60 and 22.32, respectively. These values may be related to the rate of breathing and its harmonics, and the breathing pattern may have a modulating effect on the heart rate variability.

During PVC, there are ectopic beats in the normal ECG signal. The mean entropies (P1 and P2) in Table 2 are correspondingly higher than in the normal case due to the higher variation. The mean values of the moments H1, H2 and H3 are 3.64e5, 1.60e3 and 1.90e5, respectively. The WCOB mean values for f1m and f2m are 126.7 and 56.35, respectively.

Table 2 Results of ANOVA on various bispectral features. Entries are mean ± standard deviation (all features have p-value < 0.001).

Class    P1            P2            H1              H2              H3              f1m            f2m
Normal   0.719±0.086   0.430±0.146   2.81e5±5.82e4   1.29e3±2.31e2   1.42e5±3.04e4   60.00±61.90    22.32±31.40
PVC      0.824±0.063   0.542±0.181   3.64e5±4.55e4   1.60e3±1.74e2   1.90e5±2.39e4   126.70±43.10   56.35±36.80
CHB      0.710±0.022   0.428±0.150   1.79e5±4.23e4   8.94e2±1.66e2   8.94e4±2.15e4   41.95±10.90    12.91±4.28
SSS      0.780±0.091   0.420±0.255   4.64e5±3.40e4   1.98e3±1.22e2   2.38e5±1.87e4   62.85±36.80    31.05±23.2
CHF      0.605±0.129   0.187±0.140   2.02e5±5.43e4   9.74e2±2.18e2   1.02e5±2.89e4   33.71±25.5     10.50±9.89
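The repeated random train/test split procedure of Sec. V can be sketched as follows; the function name and the fixed seed are illustrative choices, not from the paper.

```python
import numpy as np

def random_splits(n_samples, n_repeats=5, train_frac=2/3, seed=0):
    # Generate repeated random train/test index splits. Each repeat
    # shuffles all indices, takes ~2/3 for training and the remaining
    # ~1/3 for testing, so train and test never overlap within a repeat.
    rng = np.random.default_rng(seed)
    n_train = int(round(n_samples * train_frac))
    splits = []
    for _ in range(n_repeats):
        idx = rng.permutation(n_samples)
        splits.append((idx[:n_train], idx[n_train:]))
    return splits
```

In the paper this is done per class, so that every repeat preserves the class proportions; the sketch above shows a single class's indices.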
Table 3 Percentage classification accuracy for five different classes of arrhythmia with the SVM classifier.

Class     Accuracy (%)
NSR       87.93
PVC       74.00
CHB       80.00
SSS       98.46
CHF       88.57
Average   85.79
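The sensitivity and specificity figures reported for the classifier (Table 4) follow from the standard confusion-matrix definitions applied to the aggregated counts:

```python
def sensitivity(tp, fn):
    # Sensitivity (recall): fraction of actual positives detected.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Specificity: fraction of actual negatives correctly rejected.
    return tn / (tn + fp)

# Counts as reported in Table 4.
TN, FN, TP, FP = 255, 21, 189, 35
print(round(100 * sensitivity(TP, FN), 2))   # 90.0
print(round(100 * specificity(TN, FP), 2))   # 87.93
```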
In the case of CHB, the atrioventricular node is unable to send electrical signals rhythmically to the ventricles, and as a result the heart rate remains low. The bispectrum entropies (P1 and P2) in Table 2 are lower than for normal subjects due to the reduced beat-to-beat variation. The mean values of the moments H1, H2 and H3 are 1.79e5, 8.94e2 and 8.94e4, respectively. The WCOB mean values for f1m and f2m are 41.95 and 12.91, respectively. In SSS – III, the heart rate varies continuously between bradycardia and tachycardia. The bispectrum entropies (P1 and P2) in Table 2 are comparable with the normal case due to the higher beat-to-beat variation. The mean values of the moments H1, H2 and H3 are 4.64e5, 1.98e3 and 2.38e5, respectively. The WCOB mean values for f1m and f2m are 62.85 and 31.05, respectively. During CHF, the heart is unable to pump enough blood (and hence oxygen) to the different parts of the body. The mean bispectrum entropies (P1 and P2) are lower than in the normal case due to the reduced beat-to-beat variation. The mean values of the moments H1, H2 and H3 are 2.02e5, 9.74e2,
IFMBE Proceedings Vol. 23
K.C. Chua, V. Chandran, U.R. Acharya and C.M. Lim
1.02e5, respectively. The WCOB mean values for f1m and f2m are 33.71 and 10.50, respectively.

The classification results are shown in Table 3. Our proposed method can classify the unknown cardiac class with an overall accuracy of about 85%, and with a sensitivity and specificity of 90% and 87.93%, respectively (Table 4).

Table 4 Sensitivity (SENS) and specificity (SPEC) of the SVM classifier, together with the numbers of true negatives (TN), false negatives (FN), true positives (TP) and false positives (FP).

TN    FN    TP    FP    SPEC      SENS
255   21    189   35    87.93%    90.00%

VII. DISCUSSION

Different non-linear methods have been used to classify cardiac classes from heart rate signals [11-14]. In these studies, different non-linear parameters, namely correlation dimension, Lyapunov exponent, approximate entropy, fractal dimension, Hurst exponent and detrended fluctuation analysis, have been used to identify the unknown disease class. In this work, we have applied HOS as a non-linear tool to analyze cardiac signals, using an SVM with bispectral features to diagnose the different cardiac arrhythmias. Tables 3 and 4 show promising results for the application of HOS features to cardiac signal classification. One of the major challenges in non-linear biosignal processing is the presence of intra-class variation; another is the overlap among the derived features for the various arrhythmias. Hence, in the present work, we have used two bispectrum entropies, three features related to the moments and the two weighted centre of bispectrum coordinates as descriptors to differentiate the arrhythmias. These features were then fed to the SVM classifier for automated classification. We achieved about 85% classification accuracy with the current set of features. The accuracy may be further increased by extracting better features and using more diverse training data.

VIII. CONCLUSION

The HRV signal can be used as a reliable indicator of cardiac disease. In this work, we extracted different HOS features from heart rate signals for automated classification and evaluated the effectiveness of different bispectrum entropies, moments and the weighted centre of bispectrum as features for classifying various cardiac abnormalities. Our proposed system combines these features with an SVM classifier and is able to identify the unknown cardiac class with a sensitivity and specificity of 90% and 87.9%, respectively. The accuracy of the proposed system can be further increased by increasing the size of the training set, more rigorous training, and better features.

REFERENCES
1. Sokolow M, McIlroy MB, Cheitlin MD, "Clinical Cardiology", Lange Medical Book, 1990.
2. Kamath MV, Ghista DN, Fallen EL, Fitchett D, Miller D, McKelvie R, "Heart rate variability power spectrogram as a potential noninvasive signature of cardiac regulatory system response, mechanisms, and disorders", Heart Vessels, 3, 1987, pp 33-41.
3. Acharya UR, Joseph KP, Kannathal N, Lim CM, Suri JS, "Heart rate variability: A review", Med Biol Eng Comput, 2006.
4. Nikias CL, Petropulu AP, "Higher-Order Spectra Analysis: A Nonlinear Signal Processing Framework", Englewood Cliffs, NJ: PTR Prentice Hall, 1993.
5. Pan J, Tompkins WJ, "A real-time QRS detection algorithm", IEEE Transactions on Biomedical Engineering, 32(3), March 1985, pp 230-236.
6. Chua KC, Chandran V, Acharya UR, Lim CM, "Cardiac state diagnosis using higher order spectra of heart rate variability", Journal of Medical Engineering & Technology, 32(2), 2008, pp 145-155.
7. Zhuo SM, Gan JQ, Sepulveda F, "Classifying mental tasks based on features of higher-order statistics from EEG signals in brain-computer interface", Information Sciences, 178(6), 2008, pp 1629-1640.
8. Zhang J, Zheng C, Jiang D, Xie A, "Bispectrum analysis of focal ischemic cerebral EEG signal", Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 20, 1998, pp 2023-2026.
9. Vapnik V, "Statistical Learning Theory", New York: Wiley, 1998.
10. Lingras P, Butz C, "Rough set based 1-v-1 and 1-v-r approaches to support vector machine multi-classification", Information Sciences, 177(18), 2007, pp 3782-3798.
11. Acharya UR, Suri JS, Spaan JAE, Krishnan SM, "Advances in Cardiac Signal Processing", Springer Verlag GmbH, Berlin Heidelberg, March 2007.
12. Radhakrishna RKA, Yergani VK, Dutt ND, Vedavathy TS, "Characterizing chaos in heart rate variability time series of panic disorder patients", Proceedings of ICBME, Biovision 2001, Bangalore, India, 2001, pp 163-167.
13. Mohamed IO, Abou-Zied AH, Youssef AM, Kadah YM, "Study of features based on nonlinear dynamical modeling in ECG arrhythmia detection and classification", IEEE Transactions on Biomedical Engineering, 49(7), 2002, pp 733-736.
14. Kannathal N, Lim CM, Acharya UR, Sadasivan PK, "Cardiac state diagnosis using adaptive neuro-fuzzy technique", Med Eng Phys, 28(8), 2006, pp 809-815.
The address of the corresponding author:

Author: Chua Kuang Chua
Institute: ECE Division, Ngee Ann Polytechnic
Street: 535 Clementi Road
Country: Singapore
Email: [email protected]
Transcutaneous Energy Transfer System for Powering Implantable Biomedical Devices T. Dissanayake1, D. Budgett1, 2, A.P. Hu3, S. Malpas2,4 and L. Bennet4 1
Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
2 Telemetry Research, Auckland, New Zealand
3 Department of Electrical and Computer Engineering, University of Auckland, Auckland, New Zealand
4 Department of Physiology, University of Auckland, Auckland, New Zealand

Abstract — Time varying magnetic fields can be used to transfer power across the skin to drive implantable biomedical devices without the use of percutaneous wires. However, the coupling between the external and internal coils varies with orientation and posture. Other potential sources of power delivery variation are changes in circuit parameters and loading conditions. To maintain correct device function, the delivered power must be regulated to deal with these variations. This paper presents a TET system with a closed-loop, frequency-based power regulation method that delivers the right amount of power to the load under variable coupling conditions. The system is capable of regulating power for axially aligned separations of up to 10 mm and lateral displacements of up to 20 mm when delivering 10 W of power. The TET system was implanted in a sheep, and the temperature of the implanted components remained below 38.4 °C over a 24 hour period.
Keywords — Magnetic field, coupling, Transcutaneous Energy Transfer (TET)

I. INTRODUCTION

High power implantable biomedical devices such as cardiac assist devices and artificial heart pumps require electrical energy for operation. Presently this energy is provided by percutaneous leads from the implant to an external power supply [1]. This method of power delivery carries a risk of infection associated with wires piercing the skin. Transcutaneous Energy Transfer (TET) enables power transfer across the skin without direct electrical connectivity. It is implemented through a transcutaneous transformer, in which the primary and the secondary coils are separated by the patient's skin, providing two electrically isolated systems. A TET system is illustrated in figure 1. The electromagnetic field produced by the primary coil penetrates the skin and induces a voltage in the secondary coil, which is then rectified to power the biomedical device. Compared to percutaneous wires, TET systems are more complex to operate under variable coupling conditions, as varying coupling results in varying power transfer [2]. One source of variation in coupling is posture changes of the patient causing variation in the alignment between the primary and the secondary coils. Typical separations between the internal and external coils are in the range of 10-20 mm. If insufficient power is delivered to the load, the implanted device will not operate properly. If excessive power is delivered, it must be dissipated as heat, with the potential for causing tissue damage. Therefore it is important to deliver the right amount of power to match the load demand.

Fig. 1 Block diagram of a TET system (DC supply, power converter and primary coil; magnetic coupling across the skin; secondary coil, pickup and load, with a power feedback controller)

Power can be regulated either in the external or in the implanted system. However, regulation in the implanted system results in dissipation of heat in the implanted circuitry [3]. Furthermore, it also increases the size and weight of the implanted circuitry; therefore power regulation in the external system is preferred. There are two main methods of regulating power in TET systems: magnitude control and frequency control. In magnitude control, the input voltage to the primary power converter is varied in order to vary the power delivered to the load. This method of control is very common in TET systems; however, it does not take into account the mismatch between the resonant frequency of the secondary resonant tank and the operating frequency of the external power converter. This mismatch reduces the power transferred to the load; consequently, a larger input voltage is required, which reduces the overall power efficiency of the system. Frequency control involves varying the operating frequency of the primary power converter to vary the
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 235–239, 2009 www.springerlink.com
power delivered to the load. Depending on the actual power requirement of the pickup load, the operating frequency of the primary power converter is varied so that the secondary pickup is either tuned or detuned; thus the effective power delivered to the implantable load is regulated [4]. The system discussed in this paper uses frequency control to regulate power delivery to the load, and a Radio Frequency (RF) link provides wireless feedback from the implanted circuit to the external frequency controller.

II. SYSTEM ARCHITECTURE

The TET system is designed to deliver power in the range of 5 W to 25 W. Figure 2 illustrates the architecture of the overall system. A DC voltage is supplied to the system by an external battery pack. A current-fed push-pull resonant converter is used to generate a high frequency sinusoidal current in the primary coil. The magnetic coupling between the primary and the secondary systems produces a sinusoidal voltage in the secondary coil, which is rectified by the power conditioning circuit in the pickup to provide a stable DC output to the implanted load. As shown in figure 2, a DC inductor is added to the secondary pickup after the rectifier bridge in order to maximize the power transfer to the load; the DC inductor helps sustain a continuous current flow in the pickup [5].

Fig. 2 System architecture (primary resonant tank Lp-Cp driven by the push-pull resonant converter and frequency controller; secondary resonant tank Ls-Cs feeding the biomedical load; internal and external transceivers linked over an RF communication channel, with a digital-to-analogue converter generating the reference voltage Vref)

Two nRF24E1 Nordic transceivers are used for data communication. The DC output voltage of the pickup is detected and transmitted to the external transceiver. The external transceiver processes the data and adjusts the duty cycle of the output PWM signal in order to vary the reference voltage (Vref) of the frequency control circuitry. The
PWM signal is passed through a Digital to Analogue Converter (DAC) in order to obtain a variable reference voltage. This variable reference voltage is then used to vary the frequency of the primary resonant converter, which in turn varies the power delivered to the implantable system. The response time of the system is approximately 360 ms.

A. Frequency controller

The frequency controller employs a switched capacitor control method described in [7]. The controller varies the overall resonant frequency of the primary resonant tank in order to tune or detune it relative to the secondary resonant frequency. The frequency of the primary circuit is adjusted by varying the effective capacitance of the primary resonant tank, as illustrated in figure 3.

Fig. 3 System based on primary frequency control [4] (primary resonant tank formed by LP, CP and the switched capacitors CV1, CV2 with switches S1, S2, SV1, SV2; secondary resonant tank LS, CS with the load)

Inductor LP, capacitor CP and the switching capacitors CV1 and CV2 form the resonant tank. The main switches S1 and S2 are switched on and off alternately for half of each resonant period, and changing the duty cycle of the detuning switches SV1 and SV2 varies the effective capacitances of CV1 and CV2 by changing the average charging or discharging period. This in turn varies the operating frequency of the primary converter. Each of CV1 and CV2 is involved in the resonance for half of each resonant period. The variation in the reference voltage (Vref) obtained from the DAC is used to vary the switching period of these capacitors. This method of frequency control maintains the zero voltage switching condition of the converter while varying the operating frequency, which helps to minimize high frequency harmonics and power losses in the system. As shown in figure 3, the pickup circuitry is tuned to a fixed frequency by the constant parameters LS and CS. The operating frequency of the overall system is determined by the primary resonant tank, which can be varied by changing the equivalent resonant capacitance [6]; therefore the tuning condition of the power pickup can be controlled.
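The control chain described above — duty cycle sets the effective tank capacitance, capacitance sets the primary resonant frequency, and the frequency offset from the secondary's fixed resonance sets the delivered power — can be sketched as a toy closed loop. All component values, the linear duty-to-capacitance assumption, and the Lorentzian power-vs-frequency curve are invented for illustration; the real switched-capacitor characteristic and power transfer function are nonlinear and coupling-dependent.

```python
import math

def effective_capacitance(C_p, C_v, duty):
    # Illustrative assumption: the switched capacitor C_v contributes a
    # fraction of its value proportional to the detuning-switch duty cycle.
    return C_p + duty * C_v

def resonant_frequency(L, C):
    # f = 1 / (2*pi*sqrt(L*C)) for the primary resonant tank.
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def delivered_power(f, f_sec=173e3, bw=5e3, p_max=25.0):
    # Toy resonance curve: transfer peaks when the primary operating
    # frequency matches the secondary's fixed resonant frequency.
    u = (f - f_sec) / bw
    return p_max / (1.0 + u * u)

def regulate(p_set=10.0, L=100e-6, C_p=8e-9, C_v=2e-9, gain=0.02, steps=300):
    # Proportional loop: adjust the detuning duty cycle until the
    # delivered power approaches the set point.
    duty = 1.0                              # start fully detuned
    f = resonant_frequency(L, effective_capacitance(C_p, C_v, duty))
    for _ in range(steps):
        f = resonant_frequency(L, effective_capacitance(C_p, C_v, duty))
        p = delivered_power(f)
        duty -= gain * (p_set - p)          # shed capacitance when power is short
        duty = min(max(duty, 0.0), 1.0)     # physical duty-cycle limits
    return f, delivered_power(f)
```

With these made-up values the loop settles between the fully detuned and fully tuned frequencies, mirroring the 163-173 kHz tuning range reported later in the paper.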
III. EXPERIMENTAL METHOD

A prototype TET system was built and tested in a sheep. The internal coil and the resonant capacitor were Parylene coated and encapsulated in medical grade silicone to provide a biocompatible cover. The total weight of the implanted equipment was less than 100 g. As illustrated by the cross-sectional view in figure 4, thermistors were attached to the primary and the secondary coils to measure the temperature rise caused by the system in the surrounding tissue:

Thermistor 1: placed on top of the primary coil
Thermistor 2: placed under the skin
Thermistor 3: placed on the muscle side
Thermistor 4: 1 cm from the secondary coil
Thermistor 5: 2 cm from the secondary coil
Thermistor 6: in the subcutaneous tissue near the exit of the wound

Prior to experimentation, the thermistors were calibrated against a high precision FLUKE 16 multimeter temperature sensor and a precision infrared thermometer.

Fig. 4 The placement of the temperature sensors

Prior to the surgery all implantable components were sterilized using methanol gas. The sheep was put under isoflurane anesthesia and the right dorsal chest of the sheep was shaved. Iodine and disinfectant were applied over the skin to sterilize the area of surgery. Using aseptic techniques, a 5 cm incision was made through the skin on the dorsal chest. A tunnel approximately 20 cm long was created under the skin and a terminal pocket created. The secondary coil and the thermistors were placed within this pocket. The thickness of the skin at this site was approximately 10 mm. The secondary coil was then sutured in place, and the power lead from the coil and the leads of the thermistors were tunneled back to the incision site and exteriorised through the wound. The wound was stitched and Marcain was injected into the area of the wound. Iodine powder was also put on the site of the wound to reduce infection. Following the surgery the sheep was transferred to a crate where it was kept over a three week period. The primary coil was placed directly above the secondary coil and held on the sheep using three loosely tied strings. A PowerLab ML820 data acquisition unit and Labchart software (ADInstruments, Sydney, Australia) were used for continuous monitoring of the temperature, the output power to the load and the variation in the input current of the system during power regulation. The data acquisition was carried out at 10 samples per second.

IV. EXPERIMENTAL RESULTS

Experimental results were obtained for delivering 10 W of power to the load with the system implanted in the sheep. Figure 5 illustrates the closed loop controlled power delivered to the load over a period of 24 hours. The input voltage to the system was 23.5 V. The controller is able to regulate the power to the load for axially aligned separations and lateral displacements of between 10 mm and 20 mm. Beyond this range the coupling is too low for the controller to provide sufficient compensation, and the delivered power drops below the 10 W set point. Evidence of inadequate coupling can be seen at intervals in Figure 5. Variation in the input current reflects the controller working to compensate for changes in coupling using frequency variation between 163 kHz (fully detuned) and 173 kHz (fully tuned). When the coupling between the coils is good, the primary resonant tank is fully detuned in order to reduce the power transferred to the secondary. When the coils are experiencing poor coupling, the primary resonant tank is fully tuned to increase the power transfer between the coils.

Fig. 5 Regulated power to the load and the input current to the system (closed loop control at the 10 W set point; output power in W and input current in A against time in minutes)

Figure 6 shows the temperature recorded from the six thermistors. It takes approximately 20 minutes for the temperature to reach a steady state after turn-on. The maximum temperature was observed in the thermistor placed under the secondary coil on the muscle side; the maximum temperature observed in this thermistor over the 24 hour period was 38.1 °C. The maximum temperature rise observed was 3.8 °C, in the thermistor placed under the skin. The large variation in the primary coil temperature is due to the changes in the current through the coil from the frequency control mechanism. When the system is in the fully tuned condition, the current in the primary coil is at a maximum to compensate for the poor coupling. The temperature rise in the thermistors 1 cm and 2 cm from the secondary coil is well below 2 °C.
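The ~20 minute settling behaviour is consistent with a simple first-order thermal model; the function below is purely illustrative (the baseline, steady-state and time-constant values are assumptions chosen to match the reported figures, not measured parameters).

```python
import math

def tissue_temperature(t_min, T0=36.0, T_ss=38.1, tau_min=5.0):
    # First-order (exponential) approach to a steady-state temperature.
    # A time constant of ~5 min gives ~98% settling in about 20 min,
    # in line with the settling time reported above.
    return T_ss - (T_ss - T0) * math.exp(-t_min / tau_min)
```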
Fig. 6 Temperature profile of the thermistors (temperature in °C against time in minutes when delivering 10 W; traces: under skin, muscle side, 1 cm from secondary, 2 cm from secondary, near wound exit, primary surface)

V. DISCUSSION

Although the system performs well at delivering 10 W over a 24 hour period, there are short intervals when this power level was not delivered. These intervals correspond to times when the coupling is too low for the controller to compensate. A variety of approaches can be taken to solve this problem. The first is to tighten the coupling limitations to prevent the coupling deteriorating beyond the equivalent limit of 20 mm of axial separation. The second is to allow occasional power drops on the basis that an internal battery could cover these intervals (patient alarms would activate if the problem persists). The third is to increase the controller's tolerance to low coupling. The ability of the frequency controlled system to tolerate misalignment is mainly determined by the system's quality factor (Q), which is defined by:

Q = R / (ωL)    (1)

where R is the load resistance, ω = 2πf is the angular operating frequency of the system, and L is the secondary coil inductance. A larger Q will enable the system to be more tolerant. This benefit is traded off against the need for a more sensitive and faster feedback response from the control system.

VI. CONCLUSIONS

We have successfully implemented a system that is capable of continuously delivering power to a load in a sheep. Results have been presented for delivering 10 W of power to the load with a closed loop frequency control technique over a period of 24 hours. The external coil was loosely secured over the region of the internal coil and was subjected to alignment variations from a non-compliant subject. The maximum temperature observed in this system was 38.1 °C, on the thermistor placed on the muscle side. The maximum temperature rise was 3.8 °C, on the thermistor placed under the skin.

REFERENCES

1. Carmelo A. Milano, Laura J. Blue, Peter K. Smith, Adrian F. Hernandez, Paul B. Rosenberg, Joseph G. Rogers, "Implantable Left Ventricular Assist Devices: New Hope for Patients with End-stage Heart Failure", North Carolina Medical Journal, 67(2), 2006, pp 110-115.
2. C. C. Tsai et al., "Design of Wireless Transcutaneous Energy Transmission System for Totally Artificial Hearts", IEEE APCCAS, Tianjin, China, 2000.
3. Guoxing Wang, Wentai Liu, Rizwan Bashirullah, Mohanasankar Sivaprakasam, Gurhan A. Kendir, Ying Ji, Mark S. Humayun, James D. Weiland, "A closed loop transcutaneous power transfer system for implantable devices with enhanced stability", IEEE Circuits and Systems, 2004.
4. Ping Si, A. P. Hu, J. W. Hsu, M. Chiang, Y. Wang, Simon Malpas, David Budgett, "Wireless power supply for implantable biomedical device based on primary input voltage regulation", 2nd IEEE Conference on Industrial Electronics and Applications, 2007.
5. Ping Si, A. P. Hu, "Designing the DC inductance for ICPT power pickups", 2005.
6. Ping Si, A. P. Hu, Simon Malpas, David Budgett, "A frequency control method for regulating wireless power to implantable devices", IEEE ICIEA Conference, Harbin, China, 2007.
Author: Thushari Dissanayake
Institute: Auckland Bioengineering Institute
Street: 70 Symonds Street
City: Auckland
Country: New Zealand
Email: [email protected]

Author: David Budgett
Institute: Auckland Bioengineering Institute
Street: 70 Symonds Street
City: Auckland
Country: New Zealand
Email: [email protected]

Author: Patrick Hu
Institute: University of Auckland
Street: 38 Princess Street
City: Auckland
Country: New Zealand
Email: [email protected]
A Complexity Measure Based on Modified Zero-Crossing Rate Function for Biomedical Signal Processing

M. Phothisonothai1 and M. Nakagawa2

1 Department of Electrical Engineering, Burapha University, 169 Bangsaen, Chonburi 20131, Thailand
2 Department of Electrical Engineering, Nagaoka University of Technology, 1603-1 Kamitomioka, Nagaoka-shi 940-2188, Japan
E-mail: 1 [email protected], 2 [email protected]
Tel.: +66-38-102-222 ext 3380, Fax: +66-38-745-806
Abstract — A complexity measure is a mathematical tool for analyzing time-series data in many research fields. Various measures of complexity have been developed to compare time series and to distinguish whether input time-series data exhibit regular, chaotic, or random behavior. This paper proposes a simple technique for measuring fractal dimension (FD) values based on a zero-crossing function combined with a detrending technique, called the modified zero-crossing rate (MZCR) function. A conventional method, namely Higuchi's method, was selected for comparing output accuracies. We used fractional Brownian motion (fBm) signals, whose FD can easily be set, to assess the performance of the proposed method, and we also applied the MZCR-based method to determine the FD values of EEG signals of motor movements. The obtained results show that the complexity of an fBm signal is measured as a negative slope of a log-log plot, and that the Hurst exponent and FD values can be measured effectively.

Keywords — Complexity, fractal dimension, biomedical signal, modified zero-crossing rate, MZCR, Hurst exponent
I. INTRODUCTION

Time-series data are formed by sampling a signal at a suitable rate, and their complexity can be measured in order to show whether the data exhibit regular, chaotic, or random behavior. In the field of biomedical signal analysis in particular, the complexity of heartbeat signals (electrocardiogram; ECG) and brain signals (electroencephalogram; EEG) makes it possible to distinguish emotion, imagination, and movement, and can be used for medical diagnosis [1]. To quantify complexity, the fractal dimension (FD) is one of the most widely used indicative parameters, and it has proved to be effective in characterizing such biomedical signals. Fractal geometry is a mathematical tool for dealing with complex systems; methods of estimating FD have been widely used to describe objects in space and have been found useful for the analysis of biological data [2][3].
In related work, classical methods such as moment statistics and regression analysis, properties such as the Kolmogorov-Sinai entropy [4] and approximate entropy [5], and other existing methods of estimating the FD value have been proposed to deal with the problem of pattern analysis of waveforms. The FD may convey information on spatial extent, self-similarity and self-affinity [6]. Unfortunately, although precise methods to determine the FD have been proposed, their usefulness is severely limited because they are computer intensive and time consuming to evaluate [7]. The FD is relatively insensitive to data scaling and shows a strong correlation with human movement in EEG data [8]. Time series with a fractal nature can be described by fractional Brownian motion (fBm), for which the FD can easily be set. Waveform FD values indicate the complexity of a pattern, or the quantity of information embodied in a waveform pattern, in terms of morphology, spectra, and variance. In this paper, we propose an algorithm in which the FD value is estimated on the basis of a zero-crossing function with a detrending technique, called the modified zero-crossing rate (MZCR) function. The proposed MZCR function is a simple technique for estimating FD with a fast computation time.

II. METHOD

A. Basic concept

We assume that high complexity of time-series data manifests itself as a high rate of zero crossings, so that the complexity can be computed directly from a zero-crossing rate function. Based on a general power-law relationship between the locally computed crossing rate of the input data x and the time period t, we can write

fz(x) ∝ t^λ    (1)

where fz(·) is the MZCR function and λ is a scaling parameter.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 240–243, 2009 www.springerlink.com
A Complexity Measure Based on Modified Zero-Crossing Rate Function for Biomedical Signal Processing
B. MZCR function n
The input time-series data are formed in the length of 2 point. To determine FD value by means of proposed method, there are three main steps of processing as follows: Step 1: Zero-mean data by subtracted the mean value of locally sampled period from each value:
241
Where w(t) is a white Gaussian function. The typical examples of two fBm signals of parameters H = 0.2 and H = 0.8 length 1,024 points generated by wavelet-based synthesis of Abry and Sellan’s algorithm [11] are shown as Fig. 1. The Hurst exponent in the process, H, is related to the fractal dimension D: H=E+1–D
(6)
(2) Step 2: Bride detrending by subtracted the regression line from each value. The zero-mean data, , is then locally detrended by subtracting the theoretical values, yd, given by the regression: (3) Step 3: Zero-crossing rate (ZCR) determination, after that data were detrended we will use the zero-crossing rate function to determine the ZCR value which is defined by:
where E is the Euclidean dimension (E = 0 for a point, 1 for a line, 2 for a surface). In this case, therefore, for onedimensional signals: H=2–D
(7)
B. Classical method Since many methods were developed to determine FD value on time and frequency domains [7], we will select the method; namely, Higuchi’s method [12] for comparison in this study because it has been proved to use in many research fields include in the biomedical engineering.
(4)
where
*
*
$ P
Finally, we can determine the scaling parameter by taking logarithmic function on both sides of Eq. (1). This computation is repeated over all possible interval lengths (in practice, we suggest minimum length 24-point and maximum length is 2n-1-point.)
P
(a)
*
*
$ P
III. ASSESSMENT CONDITIONS
A. Fractional Brownian motion (fBm)
Due to the fractional Brownian motions (fBms) take an important place among self-similar processes, fBms are selected to be input signal for assessing performance of the proposed method. The fBms are the only self-similar Gaussian processes with stationary increments, and they provide interesting models of self-similar processes [9]. General model of the fBm signal can be written as the following fractional differential equation [10] :
d^(H+1/2) x(t) / dt^(H+1/2) = w(t)    (5)
Fig. 1 Two fBm signals generated by different Hurst exponent values: (a) parameter H = 0.8; (b) parameter H = 0.2.
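The fBm paths in Fig. 1 were synthesized with the wavelet-based Abry-Sellan algorithm. As a simpler stand-in (not the paper's method), an fBm path can also be sampled exactly, though at O(n^3) cost, from its covariance cov(s, t) = (s^2H + t^2H - |s - t|^2H)/2 via Cholesky factorisation:

```python
import numpy as np

def fbm_cholesky(n, H, seed=0):
    """Sample one fBm path of length n with Hurst exponent H (0 < H < 1).

    Exact sampling from the fBm covariance via Cholesky factorisation;
    simple but O(n^3), so practical only for modest n.
    """
    t = np.arange(1, n + 1, dtype=float)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # tiny jitter for stability
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)
```

For H = 0.5 this reduces to ordinary Brownian motion with unit-variance increments; for H > 0.5 the increments are positively correlated, giving the smoother path seen in Fig. 1(a).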
IV. EXPERIMENTS
The fBm is also referred to as a 1/f-like profile, since it can be generated from Gaussian random variables and its fractal dimension can easily be set. Therefore, the fBm is used as the input signal for assessment in this experiment. Based on the
IFMBE Proceedings Vol. 23
M. Phothisonothai and M. Nakagawa
assumption of the concept given in Eq. (1), we found that the estimate of H for one-dimensional data is the negative of a scaling parameter, which can be expressed by the following definition:
H ≈ -λ    (8)

or,

λ = D - 2    (9)

where λ is the scaling parameter, i.e. the slope of the regression line in the log-log plot of the MZCR function, computed over all interval lengths 2^n, n = 4, 5, …, log2(L) - 1, where L is the total length of the fBm. Fig. 2 shows a log-log plot of the MZCR function; H is obtained from the slope of the regression line (dashed line). During the experiment, we varied H in steps of 0.01 over the range 0 to 1, with five random repetitions at each step. The mean-squared error (MSE) between the theoretical and estimated values can then be computed.
Table 1 Comparison results of average MSE and computation time.

fBm length   MSE [1×10^-4]           Computation time
             MZCR      Higuchi       MZCR      Higuchi
2^8          2.82      3.17          33.4      —
2^9          2.54      3.38          48.2      61.8
2^10         1.71      1.92          91.2      94.3
Table 2 Estimation results for the Hurst exponent (fBm length of 1,024 points).

Equations 5, 6 and 7 show the translation, rotation and scaling matrices applied to the template's coordinates on the x-ray image:

[x'; y'] = [x; y] + [dx; dy]    (5)

[x'; y'] = [cos θ   sin θ; -sin θ   cos θ] [x; y]    (6)

[x'; y'] = [s_x   0; 0   s_y] [x; y]    (7)

The fitted cephalometric curve is a piecewise cubic spline, with one cubic segment g_i(x) on each interval [x_i, x_{i+1}] (g_1(x) for x_1 ≤ x ≤ x_2, g_2(x) for x_2 ≤ x ≤ x_3, and so on), where y_i = g_i(x_i). With S_i the second derivative of the spline at knot i and h the knot spacing, the coefficients of each segment are

a_i = (S_{i+1} - S_i) / (6h)
b_i = S_i / 2
c_i = (y_{i+1} - y_i) / h - (2h S_i + h S_{i+1}) / 6
d_i = y_i    (12)
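The three template-registration transforms of Equations 5-7 can be sketched as follows; the function names are illustrative, and the rotation sign convention is an assumption reconstructed from the garbled source:

```python
import numpy as np

def translate(pts, dx, dy):
    """Eq. (5): shift template coordinates by (dx, dy)."""
    return pts + np.array([dx, dy])

def rotate(pts, theta):
    """Eq. (6): rotate template coordinates about the origin.

    Convention assumed: x' = x cos(theta) + y sin(theta),
                        y' = -x sin(theta) + y cos(theta).
    """
    R = np.array([[np.cos(theta),  np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return pts @ R.T

def scale(pts, sx, sy):
    """Eq. (7): scale template coordinates by (sx, sy)."""
    return pts * np.array([sx, sy])

# Registering a template typically composes all three transforms:
template = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
registered = translate(rotate(scale(template, 2.0, 2.0), np.pi / 2), 5.0, 5.0)
```

Composing scale, then rotation, then translation mirrors the usual order in which a deformable template is fitted to the reference points on the x-ray image.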
Substituting these weights into (9) and its 1st and 2nd derivatives, and including the natural-spline condition that the 2nd derivative is zero at the endpoints (S_1 = S_n = 0), the system can be written in matrix form as follows:
C. Sinthanayothin

| 4  1  0  …  0  0  0 |   | S_2     |        | y_1 - 2y_2 + y_3             |
| 1  4  1  …  0  0  0 |   | S_3     |        | y_2 - 2y_3 + y_4             |
| 0  1  4  …  0  0  0 |   | S_4     |   6    | y_3 - 2y_4 + y_5             |
| ⋮  ⋮  ⋮  ⋱  ⋮  ⋮  ⋮ | · | ⋮       | = ---  | ⋮                            |    (13)
| 0  0  0  …  4  1  0 |   | S_{n-3} |  h^2   | y_{n-4} - 2y_{n-3} + y_{n-2} |
| 0  0  0  …  1  4  1 |   | S_{n-2} |        | y_{n-3} - 2y_{n-2} + y_{n-1} |
| 0  0  0  …  0  1  4 |   | S_{n-1} |        | y_{n-2} - 2y_{n-1} + y_n     |

III. RESULT
Therefore, Eq. (13) [13] is used to find the interpolation values in all dimensions. In this paper, the number of data points is equal to the number of reference points. The result of fitting the cephalometric line with cubic-spline interpolation can be seen in figure 6.
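A minimal sketch of solving the tridiagonal system of Eq. (13) and evaluating the resulting cubic segments, assuming uniformly spaced knots of spacing h (function names are ours):

```python
import numpy as np

def natural_cubic_spline(y, h=1.0):
    """Solve Eq. (13) for the interior second derivatives S_2..S_{n-1}.

    Natural boundary conditions: S_1 = S_n = 0.  Returns the full S vector.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, 4.0)
    np.fill_diagonal(A[1:], 1.0)      # sub-diagonal
    np.fill_diagonal(A[:, 1:], 1.0)   # super-diagonal
    rhs = 6.0 / h**2 * (y[:-2] - 2*y[1:-1] + y[2:])
    S = np.zeros(n)
    S[1:-1] = np.linalg.solve(A, rhs)
    return S

def spline_eval(y, S, x, h=1.0):
    """Evaluate the cubic segment containing x (knots at 0, h, 2h, ...)."""
    i = min(int(x // h), len(y) - 2)
    a = (S[i+1] - S[i]) / (6*h)               # coefficients from Eq. (12)
    b = S[i] / 2
    c = (y[i+1] - y[i]) / h - (2*h*S[i] + h*S[i+1]) / 6
    d = y[i]
    u = x - i*h
    return a*u**3 + b*u**2 + c*u + d
```

By construction the spline passes exactly through every knot, so the traced line interpolates all the reference points while remaining smooth between them.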
The results of cephalometric line tracing on both PA and Lateral x-ray images are displayed in figures 8(A) and 8(B), respectively. The method has been tested on 10 PA and 10 Lateral x-ray images; the resulting cephalometric lines were consistently similar, depending on the coordinates of the reference points.
(A) PA View
(B) Lateral View
Fig 8: Cephalometric line tracing result.
IV. CONCLUSION
Fig 6: Tracing cephalometric line with cubic spline.
D. Cephalometric Line Smoothening
Cephalometric tracing lines are automatically generated using the deformable template registration and cubic-spline fitting techniques mentioned above. However, the lines are not smooth when zooming into a specific area, as can be seen in figure 7(A). Therefore, a cephalometric line smoothening technique has been proposed: the window/canvas coordinates of the tracing lines are transformed into bitmap coordinates, the program draws the lines on the bitmap instead of the canvas, and image interpolation is applied when the user zooms into a specific area of the x-ray image. The result of the smoothening technique when the user zooms into a specific area is shown in figure 7(B).
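The smoothening idea, rasterise the tracing line into a bitmap and then interpolate when zooming, can be illustrated with the following simplified stand-in (not the CephSmile implementation):

```python
import numpy as np

def draw_polyline(shape, pts):
    """Rasterise a polyline into a binary bitmap by dense point sampling."""
    img = np.zeros(shape)
    for (x0, y0), (x1, y1) in zip(pts[:-1], pts[1:]):
        for t in np.linspace(0.0, 1.0, 256):
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            img[int(round(y)), int(round(x))] = 1.0
    return img

def bilinear_zoom(img, factor):
    """Upscale with bilinear interpolation (the smoothening step on zoom)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, int(h * factor))
    xs = np.linspace(0, w - 1, int(w * factor))
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)] +
            (1 - wy) * wx       * img[np.ix_(y0, x1)] +
            wy       * (1 - wx) * img[np.ix_(y1, x0)] +
            wy       * wx       * img[np.ix_(y1, x1)])
```

Interpolating the bitmap rather than redrawing on the canvas is what removes the jagged, aliased appearance of figure 7(A) when the user zooms in.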
The results of the computerized cephalometric line tracing technique on x-ray images are reasonable and acceptable to the collaborating clinicians, since the computerized tracing lines are similar to hand drawings but more convenient and less time-consuming. This method can therefore perform automatic drawing of the significant traces of the face, skull, and teeth using the reference points. Cephalometric analysis with different types of analysis, such as Mahidol, Down, Steiner, Tweed, Jaraback, Harvold, Rickette, and McNamara analyses, can be performed in the near future in order to show structural problems of the skull, face, and teeth.
ACKNOWLEDGMENT
This project is a part of CephSmile V.2.0. Thanks to the National Electronics and Computer Technology Center (NECTEC) for grant support. Special thanks to the orthodontics team from Mahidol University for their advice.
(A) On Canvas
REFERENCES
(B) On bitmap plus interpolation
Fig 7: Cephalometric lines.
1. OrisCeph Rx3. http://www.orisline.com/en/orisceph/pricelist.aspx
2. OnyxCeph, OnyxCeph Inc. http://www.onyx-ceph.de/i_functionality.html
Computerized Cephalometric Line Tracing Technique on X-ray Images

3. Dolphin Imaging 10, Dolphin Imaging System Inc. http://www.dolphinimaging.com/new_site/imaging10.html
4. QuickCeph 2000, QuickCeph System Inc. http://www.quickceph.com/qc2000_index.html
5. Dr.Ceph (FYI). http://www.fyitek.com/software/comparison.shtml
6. Dental Software VWorks, CyberMed. http://www.cybermed.co.kr/e_pro_dental_vworks.html
7. Dental Software VCeph, CyberMed. http://www.cybermed.co.kr/e_pro_dental_vceph.html
8. Cephalometric AtoZ v8.0E. http://www.yasunaga.co.jp/CephaloM1.html
9. CephSmile. www.typo3hub.com/chanjira/CephSmileV2/cephsmileV2Eng.html
10. Leonardi R, Giordano D, Maiorana F, Spampinato C. Automatic Cephalometric Analysis. The Angle Orthodontist, Vol. 78, No. 1, pp. 145-151.
11. Sinthanayothin C, Boyce JF, Cook HL, Williamson TH. Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images. British Journal of Ophthalmology (BJO) 1999;83:902-910.
12. Press WH et al. "Cubic Spline Interpolation", Numerical Recipes in C: The Art of Scientific Computing. Cambridge, UK: Cambridge University Press, 1988.
13. McKinley S, Levine M. Cubic Spline Interpolation. http://online.redwoods.cc.ca.us/instruct/darnold/laproj/Fall98/SkyMeg/proj.pdf
Author: Chanjira Sinthanayothin
Institute: National Electronics and Computer Technology Center
Street: 112 Thailand Science Park, Phahonyothin Rd.
City: Pathumthani
Country: Thailand
Email: [email protected]
Brain Activation in Response to Disgustful Face Images with Different Backgrounds

Takamasa Shimada(1), Hideto Ono(1), Tadanori Fukami(2), Yoichi Saito(3)

(1) School of Information Environment, Tokyo Denki University, Japan
(2) Faculty of Engineering, Yamagata University, Japan
(3) Research Institute for EEG Analysis, Japan
Abstract — Previous studies have demonstrated that the stimuli of fearful and disgustful face images lead to activation of neural responses in the medial temporal lobe. In particular, it was reported that seeing a disgustful face image activated the insula area of the brain. In these studies, no background images were used with the facial stimuli. However, normal day-to-day images always have a background. Moreover, background images are considered important in art forms (painting, photography, movies, etc.) for eliciting effective expressions. We assessed the effect of background images on brain activation by using functional magnetic resonance imaging (fMRI). During fMRI scanning, face images with background images were presented repeatedly to 8 healthy right-handed males. The facial stimuli comprised 5 photographs of a disgustful face selected from Paul Ekman's database (Ekman and Friesen 1976). The background images comprised 2 photographs: one of worms and the other of a flower garden. It is thought that disgustful face images coincide with the worms background image in terms of impression. After scanning, the subjects rated the impression created by the images on the Plutchik scale. Significant effects of the image of the disgustful face against the worms background minus that against the flower garden were assessed using a t-test and displayed as statistical parametric maps (SPMs) using SPM2 software. The results demonstrated activation of the right insula, and the image of the disgustful face against the worms background created a more disgustful impression than that against the flower garden. Therefore, the image of the face and the background together create the overall impression. The difference in the activation of the insula is possibly induced by the creation of this overall impression. This demonstrates the importance of background images in forming an impression of face images.
Keywords — Face image, background image, functional magnetic resonance imaging.
I. INTRODUCTION
In recent years, many studies have been conducted in an attempt to clarify the neural systems involved in emotional perception. Previous studies have demonstrated that the stimuli of fearful and disgustful face images lead to activation of neural responses in the medial temporal lobe. Previous studies have suggested the involvement of the
limbic system in emotional perception. The relationship between emotional perception and the hippocampus was shown by Papez et al. [1]. In particular, it was reported that seeing a disgustful face image activated the insula area of the brain. One concept of emotion, called Plutchik's psychoevolutionary theory of basic emotions, was suggested by Plutchik et al. [2]. The postulated basic emotions are acceptance, anger, anticipation, disgust, joy, fear, sadness, and surprise. In many art forms (painting, photography, movies, etc.), background images are thought to be very important for enhancing the effect of the subject. In most recent studies on anthropomorphic user interfaces [3][4], only a face image is used, and background images are either not used or considered unimportant. However, in daily conversations with individuals, the absence of a background is unnatural. It is expected that adding a background to face images will be useful in producing emotional effects. However, the mechanism through which background images affect face images has not been clarified. In addition, there are few studies on brain functions in the field of computer interface technology. An objective evaluation method is important for evaluating the effect of background images on humans. In particular, clarifying the relation between background images and the activation of the brain is thought to be key for such an objective estimation. However, the mechanism by which information related to background images is processed in the brain has not been investigated. In this research, we attempted to elucidate this mechanism by using an fMRI scanner to detect brain activation induced by images that include not only a face image but also a background image. In our research, the subjects were shown face images with background images.
We focused on the brain activation and impression change induced by the different combinations of facial expressions and background images, which were further analyzed. Two background images were used for the experiment. One image induced the same type of emotional effect as the face images, whereas the other image induced a different type of emotional effect from the face images. The effect on brain acti-
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 270–273, 2009 www.springerlink.com
vation was analyzed by means of the fMRI scanner. This experiment analyzed the relation between emotions and brain activation induced by images. Moreover, the effect induced only by background images was also analyzed by means of a questionnaire and the fMRI scanner. II. METHOD A. Experimental paradigm The face images used in our experiment expressed disgust. The five face images used were selected from Paul Ekman’s database (Ekman & Friesen, 1976). These images comprise two male and three female images. The face image was superimposed on the background image. Two types of background images were used—one depicted worms; the other, a flower garden. We selected two background images to investigate whether the emotional effect of the face images with one background is consistent with that elicited by the face images with another background. Figs. 1 and 2 are samples of the face image with a background image of worms and a flower garden, respectively. It is believed that face images expressing disgust correspond to a background depicting worms; however, face images expressing disgust are in contradiction with a background depicting a flower garden. In this experiment, we focused only on the changes in brain activity resulting from disgust; subsequently, we focused on changes in the insula areas, which are associated with the emotion of disgust. During the fMRI scanning, the subjects, who were asked to wear earplugs, lay on the bed of the fMRI scanner. The total scanning time was 5 min per subject, and the scanning was carried out in blocks of 10 s wherein a face image along with the background image of either worms or a flower garden was repeatedly presented to the subject. The face presentation block was preceded and followed by a 10s block of a crosshair cursor, as seen in Fig. 3. 
During the 10s face block, the stimulus of one face image (with a background image) of either a male or female was presented 5 times (stimulus duration, 200 ms; interstimulus interval, 1800 ms) because repeated stimulation is assumed to enhance corresponding cortical activation [5][6]. The combination of the face and background images was randomly selected for every 10s face block; however, the number of images with the worms and flower garden backgrounds was counterbalanced. In addition, the number of face images of a particular person with both the backgrounds was counterbalanced.
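A randomised, background-counterbalanced block schedule of this kind can be sketched as follows; the face labels and data structure are illustrative, and the per-person counterbalancing described above is simplified here to a random choice:

```python
import random

def build_schedule(n_blocks=12, faces=("m1", "m2", "f1", "f2", "f3"),
                   backgrounds=("worms", "flowers"), seed=0):
    """Build a randomised, counterbalanced face-block schedule.

    Each 10 s face block pairs one face with one background; the two
    backgrounds appear equally often, and face blocks alternate with
    10 s crosshair rest blocks, as in Fig. 3.
    """
    rng = random.Random(seed)
    # equal number of blocks per background (counterbalancing)
    bgs = list(backgrounds) * (n_blocks // len(backgrounds))
    rng.shuffle(bgs)
    schedule = []
    for bg in bgs:
        schedule.append(("rest", 10))                      # crosshair block
        schedule.append((("face", rng.choice(faces), bg), 10))
    schedule.append(("rest", 10))                          # final crosshair
    return schedule
```

With 12 face blocks the schedule alternates rest and stimulus blocks and sums to 250 s, close to the 5 min scanning run described above.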
B. Questionnaire In order to assess the difference in the psychological effects caused by different background images, the subjects were required to rate their impressions by using a questionnaire based on Plutchik’s eight basic emotional categories. The subjects were required to rate the intensity of their impression of each category of emotion on a scale of 0 to 6 (seven levels). A rating of 0 implied that the subject did not feel anything for that particular category, and a rating of 6 implied that the subject experienced that emotion very strongly (maximum intensity). The subjects answered the questionnaire immediately after the fMRI scanning.
Fig. 1 Sample of a face image expressing disgust with the background image of worms
Fig. 2 Sample of a face image expressing disgust with the background image of a flower garden
C. Subjects
The subjects were 8 healthy right-handed male adults (mean age, 21.8 years; standard deviation, 0.97). All the subjects provided written informed consent for participation in the experiment. In all the paradigms, the subjects, who were monitored through a window, were forbidden to move except when the task required them to do so.

Fig. 3 Schematic diagram of the experimental paradigm

D. Image Acquisition and Analysis
In this experiment, gradient-echo echo-planar magnetic resonance (MR) images were acquired using a 1.5 Tesla Hitachi Stratis II system at the Applied Superconductivity Research Laboratory, Tokyo Denki University, Chiba, Japan. T2*-weighted time-series images depicting the blood oxygenation level-dependent (BOLD) contrast were acquired using a gradient-echo echo-planar imaging (EPI) sequence (TR, 4,600 ms; TE, 74.2 ms; inter-TR time, 400 ms; total scanning time, 5.00 min; flip angle, 90°; field of view (FOV), 22.5 cm × 22.5 cm; slice thickness, 4.0 mm; slice gap, 1.0 mm; voxel, 3.52 × 3.52 × 5 mm). In all, 28 axial contiguous slices covering the entire brain were collected. The data were analyzed using the statistical parametric mapping (SPM) technique (SPM2, Wellcome Department of Cognitive Neurology, London, UK) implemented in Matlab (Mathworks Inc., Sherborn, MA, USA). The analysis involved the following steps: correction for head movements between the scans, and realignment of the functional images acquired from each subject to the first image using a rigid body transformation. A mean image was created using the realigned volumes. The high-resolution, T1-weighted anatomical images were coregistered to this mean (T2*) image to ensure that the functional and anatomical images were spatially aligned. The anatomical images were then normalized into the standard space [7] by matching them to a standardized Montreal Neurological Institute (MNI) template (Montreal Neurological Institute, Quebec, Canada), using both linear and nonlinear 3D transformations [8][9]. The transformation parameters determined here were also applied to the functional images. Finally, these normalized images were smoothed with a 12 mm (full width at half maximum) isotropic Gaussian kernel to accommodate intersubject differences in anatomy and to permit the application of Gaussian random field theory to provide corrected statistical inference [8][9]. The SPMs {Z} for the contrasts were generated and thresholded at a voxelwise P value of 0.01.

III. RESULTS
The differences in activation between the conditions of viewing face images expressing disgust with the background image of either worms or a flower garden (disgustful face images with the worms background minus those with the flower garden background) were analyzed. The results are shown in Fig. 4. A difference in activation was detected in the insula of the right hemisphere of the brain. The results of the questionnaire showed that the intensity of the impression was stronger for the basic emotional category of disgust when the worms background was used than when the flower garden background was used.
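The contrast can be illustrated with a simplified voxelwise paired t-test in place of the full SPM machinery; the synthetic data, voxel count, and threshold below are illustrative only:

```python
import numpy as np

def paired_t_map(cond_a, cond_b):
    """Voxelwise paired t-statistic for two conditions.

    cond_a, cond_b: arrays of shape (n_subjects, n_voxels) holding, e.g.,
    per-subject mean responses to the worms and flower-garden conditions.
    """
    d = cond_a - cond_b                    # per-subject differences
    n = d.shape[0]
    mean = d.mean(axis=0)
    sd = d.std(axis=0, ddof=1)
    return mean / (sd / np.sqrt(n))

# Synthetic example: 8 subjects, 1000 voxels, 50 truly "activated" voxels
rng = np.random.default_rng(0)
worms = rng.standard_normal((8, 1000))
worms[:, :50] += 2.0                       # simulated worms > flowers effect
flowers = rng.standard_normal((8, 1000))
t_map = paired_t_map(worms, flowers)
active = t_map > 3.0                       # illustrative threshold, not SPM's
```

Thresholding the t-map yields a binary activation map analogous to the SPM {Z} maps, although SPM additionally applies spatial smoothing and random-field-corrected inference.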
Fig. 4 Results of the difference in activity between the case of the disgustful face images with the worms background and that of the disgustful face images with the flower garden background (worms minus flower garden)
IV. DISCUSSION
In our research, we attempted to reveal the effect of combining the background image with disgustful face images. As a result, stronger activity was detected in the insula area when the worms background image was used than when the flower garden background image was used. The area in which activation was detected coincides with the areas in which activation was observed in a previous study in which subjects were shown disgustful faces. The results of the questionnaire revealed that the subjects formed a stronger impression of disgust when they saw disgustful face images with the worms background than when they saw disgustful images with the background image
of a flower garden. This reveals that the impression of disgust was enhanced by using the image with a worms background because this image corresponds with the image of a disgustful face. In the abovementioned experiment, the background images comprised images of worms and a flower garden. Even if these images are shown to the subject without the face image, it is believed that the images will have their own emotional effects on humans. The average results of the ratings for the images with the backgrounds of worms and a flower garden indicate that the impression of disgust is stronger when the worms background is used than when the background with a flower garden is used. This may imply that the difference in the activation of the brain between the condition in which face images with the worms background are presented and that in which face images with the background of a flower garden are presented is caused merely by adding the activation in the case of the background image to that in the case of the disgustful face image. We investigated this point by performing an additional experiment. In this experiment, the paradigm was the same but the face image was removed. The results are shown in Fig. 5. In this experiment, the difference in activation was not detected in the insula area. This experiment showed that the background image had little effect on activity in the insula, although it considerably influenced the activation resulting from the stimuli of disgustful face images.
Fig. 5 The results of the difference in activation between the cases of the two background images (worms minus the flower garden)

V. CONCLUSIONS
We investigated the effect of combining disgustful face images and background images on the activation of the brain by using fMRI. As a result, stronger activation was detected in the insula area when the worms background image was used than when the background image of a flower garden was used. It is believed that this difference in brain activity relates to the degree of the disgust impression induced by the images. The results of the questionnaire revealed that the impression of disgust induced by the disgustful face image with the worms background was stronger than that induced by the disgustful face image with the flower garden background. In addition, the effect of the background image alone was investigated: the difference in activation in the insula area was compared, and no difference in activation was detected there. An image of worms creates an impression of disgust but may not involve biological processing. Further, these results reveal that although the background image has little effect in activating the insula area, it considerably influences the activation resulting from the stimuli of the disgustful face images. These results indicate the probability of their application to the objective estimation of the emotional effect that images have.

REFERENCES
1. Papez JW (1937) A proposed mechanism of emotion. Arch Neurol Psychiatry 79:217-224.
2. Plutchik R (1962) Emotion: A Psychoevolutionary Synthesis. Harper and Row.
3. Dohi H, Ishizuka MA (1996) Visual Software Agent: An Internet-Based Interface Agent with Rocking Realistic Face and Speech Dialog Function. AAAI technical report "Internet-Based Information Systems", No. WS-96-06, pp 35-40.
4. Murano P (2003) Anthropomorphic Vs Non-Anthropomorphic Software Interface Feedback for Online Factual Delivery. Seventh International Conference on Information Visualization (IV'03), p 138.
5. Miller EK, Li L, Desimone R (1991) A neural mechanism for working and recognition memory in inferior temporal cortex. Science 254:1377-1379.
6. Wiggs CL, Martin A (1998) Properties and mechanisms of perceptual priming. Curr Opin Neurobiol 8:227-233.
7. Talairach J, Tournoux P (1988) Co-Planar Stereotactic Atlas of the Human Brain. Thieme, Stuttgart.
8. Friston K, Ashburner J, Poline J, Frith C, Heather J, Frackowiak R (1995) Spatial registration and normalization of images. Hum Brain Mapp 2:165-189.
9. Friston K, Holmes A, Worsley K, Poline J, Frith C, Frackowiak R (1995) Statistical parametric maps in functional imaging: a general approach. Hum Brain Mapp 5:189-201.
Author: Takamasa Shimada
Institute: School of Information Environment, Tokyo Denki University
Street: 2-1200 Muzai Gakuendai
City: Inzai City, Chiba Prefecture
Country: Japan
Email: [email protected]
Automatic Segmentation of Blood Vessels in Colour Retinal Images using Spatial Gabor Filter and Multiscale Analysis

P.C. Siddalingaswamy(1), K. Gopalakrishna Prabhu(2)

(1) Department of Computer Science & Engineering, Manipal Institute of Technology, Manipal, India
(2) Department of Biomedical Engineering, Manipal Institute of Technology, Manipal, India
Abstract — Retinal blood vessels are significant anatomical structures in ophthalmic images. Automatic segmentation of blood vessels is one of the important steps in computer aided diagnosis system for the detection of diseases such as Diabetic Retinopathy that affect human retina. We propose a method for the segmentation of retinal blood vessels using Spatial Gabor filters as they can be tuned to the specific frequency and orientation. A new parameter is defined to facilitate filtering at different scales to detect the vessels of varying thicknesses. The method is tested on forty colour retinal images of DRIVE (Digital Retinal Images for Vessel Extraction) database with manual segmentations as ground truth. An overall accuracy of 84.22% is achieved for segmentation of retinal blood vessels. Keywords — Colour Retinal Image, Vessel Segmentation, Gabor filter, Diabetic Retinopathy.
I. INTRODUCTION
Diabetic retinopathy is a disorder of the retinal vasculature that eventually develops to some degree in nearly all patients with long-standing diabetes mellitus [1]. It is estimated that by the year 2010 the world diabetic population will have doubled, reaching an estimated 221 million [2]. Timely diagnosis and referral for management of diabetic retinopathy can prevent 98% of severe visual loss. Colour retinal images are widely used for the detection and diagnosis of diabetic retinopathy. In computer-assisted diagnosis, the automatic segmentation of the vasculature in retinal images helps in characterizing the detected lesions and in identifying false positives [3]. The performance of automatic detection of pathologies like microaneurysms and hemorrhages may be improved if regions containing vasculature can be excluded from the analysis. Another important application of automatic retinal vessel segmentation is in the registration of retinal images of the same patient taken at different times [4]. The registered images are useful in monitoring the progression of certain diseases. In the literature [5] it is reported that many retinal vascular segmentation techniques utilize information such as the contrast that exists between the retinal blood vessels and the surrounding background, and the fact that all vessels are connected and originate from the same point, the optic disc. Four
techniques used for vessel detection are classified as filter-based methods, tracking of vessels, classifier-based methods, and morphological methods. In filter-based methods, the cross-sectional gray-level profile of a typical retinal vessel matches the Gaussian shape, and the vasculature is piecewise linear and may be represented by a series of connected line segments [6]. These methods employ a two-dimensional linear structural element that has a Gaussian cross-profile section, rotated into different angles to identify the cross-profile of the blood vessel. Tracking methods [7][8] use a model to track the vessels starting at given points; individual segments are identified using a search procedure which keeps track of the center of the vessel and makes decisions about the future path of the vessel based on certain vessel properties. Classifier-based methods use a two-step approach [9]. They start with a segmentation step, often employing one of the mentioned matched-filter-based methods, and then the regions are classified according to many features: a neural network classifier is constructed using features selected by the sequential forward selection method with the training data to detect vessel pixels. Morphological image processing exploits features of the vasculature shape that are known a priori, such as it being piecewise linear and connected. The use of mathematical morphology for the segmentation of blood vessels is explained in [10][11]. These approaches work well on normal retinal images with uniform contrast but suffer in the presence of noise due to pathologies within the retina of the eye. In our work the vessel segmentation is performed using the Gabor filter. Gabor filters have been widely applied to image processing and computer vision problems such as face recognition and texture segmentation, strokes in character recognition, and roads in satellite image analysis [12][13].
A few papers have already reported work on the segmentation of vessels using Gabor filters [14][15]. However, there is still scope for improvement, as they fail to detect vessels of different widths. The detection process also becomes much more complicated when lesions and other pathological changes affect the retinal images. We aim to develop a more robust and fast method of retinal blood vessel segmentation using the Gabor filter and introduce a new parameter
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 274–276, 2009 www.springerlink.com
for designing filters at different scales that facilitates the detection of vessels of varying width.

II. SEGMENTATION OF BLOOD VESSELS
The detection of blood vessels is an important step in an automated retinal image processing system, as the colour of hemorrhages and microaneurysms is similar to the colour of blood vessels, and both appear darker than the background. The blood vessels appear most contrasted in the green channel of the RGB colour space; therefore, only the green component is retained for the segmentation of vessels using the Gabor filter. Figure 1 shows a digital colour fundus image and its green channel image.
Fig. 1 Digital colour retinal image and its green channel image

A. Spatial Gabor filter
The 2D Gabor filters are a set of orientation- and frequency-sensitive band-pass filters which have optimal localization in both the frequency and space domains. Thus they are suitable for extracting the orientation-dependent frequency contents of patterns [13]. The spatial Gabor filter kernels are sinusoids modulated by a Gaussian window, the real part of which is expressed by

g(x, y) = exp{ -(1/2) [ x_p^2 / σ_x^2 + y_p^2 / σ_y^2 ] } cos(2πf x_p)    (1)

where
x_p = x cos θ + y sin θ
y_p = -x sin θ + y cos θ
θ: orientation of the filter; an angle of zero gives a filter that responds to vertical features.
f: central frequency of the pass band.
σ_x: standard deviation of the Gaussian in the x direction along the filter, which determines the bandwidth of the filter.
σ_y: standard deviation of the Gaussian across the filter, which controls the orientation selectivity of the filter.

The parameters are derived by taking into account the size of the lines or curvilinear structures to be detected. To produce a single peak response on the centre of a line of width t, the Gabor filter kernel is rotated into different orientations with the parameters set as follows:

f = 1/t,  σ_x = k_x · t,  σ_y = 0.5 σ_x,

where k_x is a scale factor relative to σ_x, required so that the shapes of the filters are invariant to scale. The widths of the vessels are found to lie within a range of 2-14 pixels (40-200 μm). Thus, for the values k_x = 0.4 and t = 11, Figure 2 shows the Gabor filter spatial frequency responses at different orientations (60°, 120° and 180°) on the retinal image in Figure 1.
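Equation (1) and the parameter rules above can be sketched as follows; the kernel size, the zero-mean normalisation, and the FFT-based convolution are our choices, not taken from the paper:

```python
import numpy as np

def gabor_kernel(t=11, kx=0.4, theta=0.0, size=31):
    """Real Gabor kernel of Eq. (1), tuned to a line of width t pixels."""
    f = 1.0 / t
    sx = kx * t          # sigma_x = kx * t
    sy = 0.5 * sx        # sigma_y = 0.5 * sigma_x
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-0.5 * (xp**2 / sx**2 + yp**2 / sy**2)) * np.cos(2*np.pi*f*xp)
    return g - g.mean()  # zero-mean so flat regions give no response

def vessel_response(img, t=11, kx=0.4, n_orient=6):
    """Maximum response magnitude over a bank of rotated Gabor filters."""
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    F = np.fft.fft2(img)
    for i in range(n_orient):                 # orientations in 30-degree steps
        k = gabor_kernel(t, kx, theta=i * np.pi / n_orient)
        pad = np.zeros_like(img)
        h, w = k.shape
        pad[:h, :w] = k
        # centre the kernel at (0, 0) for frequency-domain convolution
        pad = np.roll(pad, (-(h // 2), -(w // 2)), axis=(0, 1))
        resp = np.real(np.fft.ifft2(F * np.fft.fft2(pad)))
        out = np.maximum(out, np.abs(resp))
    return out
```

Taking the maximum magnitude over the six orientations, as described in section B below the figure, gives a single vessel-likeness map that responds to vessels of any direction.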
Orientation of Gabor filter at 1800, 600 and 1200 (first row) applied to retinal image and corresponding response output (second row)
Fig. 2
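Equation (1) with the parameter settings above can be sketched in Python. This is an illustrative sketch: the kernel half-size and the array layout are assumptions, not values given in the paper.

```python
import numpy as np

def gabor_kernel(t=11, kx=0.4, theta=0.0, half=15):
    """Real part of the 2D Gabor kernel of Eq. (1).
    t: line width in pixels, kx: scale factor (paper: t = 11, kx = 0.4),
    theta: orientation in radians, half: assumed kernel half-size."""
    f = 1.0 / t                # central frequency of the pass band
    sigma_x = t * kx           # Gaussian spread along the filter
    sigma_y = 0.5 * sigma_x    # spread across the filter
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-0.5 * (xp**2 / sigma_x**2 + yp**2 / sigma_y**2)) \
        * np.cos(2 * np.pi * f * xp)
```

With theta = 0 the kernel responds to vertical features, matching the definition of θ above.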
B. Blood vessel segmentation

Only six Gabor filters with different orientations (0° to 180° at intervals of 30°) are convolved with the image for fast extraction of vessels. The magnitude of each response is retained, and the responses are combined to generate the result image. Figure 3 shows the result of vessel segmentation using the Gabor filter; the parameters are set to t = 11 in Figure 3(c) and t = 12 in Figure 3(d). It can be seen that at t = 11 the thin vessels are detected along with the thick vessels, with a true positive rate of 0.8347 and a false positive rate of 0.21. With t = 12 the true positive rate comes down to 0.7258, and it can be seen in Figure 3(d) that only the thick vessels are segmented.

IFMBE Proceedings Vol. 23
P.C. Siddalingaswamy, K. Gopalakrishna Prabhu

III. EXPERIMENT RESULTS

The image data required for the research work is obtained from the publicly available DRIVE (Digital Retinal Images for Vessel Extraction) database and also from the Department of Ophthalmology, KMC, Manipal, using a Sony FF450IR digital fundus camera; the images are stored in 24-bit colour compressed JPEG format at 768×576 pixel resolution.
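The filter-bank procedure of Section II.B can be sketched as follows, assuming a small square kernel built from Eq. (1). The kernel half-size and the use of scipy's convolution are illustrative assumptions; the orientations and parameter values follow the text.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_real(t, kx, theta, half=15):
    # Real Gabor kernel of Eq. (1): f = 1/t, sigma_x = t*kx, sigma_y = 0.5*sigma_x
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    sx = t * kx
    return np.exp(-0.5 * (xp**2 / sx**2 + yp**2 / (0.5 * sx)**2)) \
        * np.cos(2 * np.pi * xp / t)

def vessel_map(green, t=11, kx=0.4):
    """Convolve the green channel with six oriented kernels (30 degree steps)
    and keep the largest response magnitude at each pixel."""
    green = green.astype(float)
    stack = [np.abs(convolve(green, gabor_real(t, kx, np.deg2rad(a))))
             for a in range(0, 180, 30)]
    return np.max(stack, axis=0)    # pixelwise maximum over orientations
```

The resulting response map can then be thresholded to obtain the binary vessel segmentation shown in Figure 3.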
Fig. 3 Segmentation of blood vessels. (a) 19_test colour image from DRIVE database; (b) Manual segmentation of vessels; (c) Segmentation with t = 11; (d) Segmentation with t = 12.
It is reported in the literature that the matched filter method of extraction provides an accuracy of 91% on the DRIVE database. We implemented the matched filter for comparison with our method and found that its accuracy depends on the threshold selected and applied to the filtered image; this is not the case with the Gabor filter. It was also found that the matched filter method works well on normal retinal images but suffers when an image with pathologies is considered. The proposed method is capable of segmenting retinal blood vessels of varying thickness. We tested our method on the DRIVE database and obtained an accuracy of 84.22%.
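The true positive rate, false positive rate and accuracy quoted here can be computed from a binary vessel map and the manual segmentation as follows. This is a generic sketch; for brevity it ignores the DRIVE field-of-view mask.

```python
import numpy as np

def segmentation_rates(pred, truth):
    """Pixelwise TPR, FPR and accuracy of a binary vessel map
    against the manually segmented ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tpr = np.sum(pred & truth) / truth.sum()        # vessel pixels correctly found
    fpr = np.sum(pred & ~truth) / (~truth).sum()    # background marked as vessel
    acc = np.mean(pred == truth)                    # overall pixel agreement
    return tpr, fpr, acc
```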
IV. CONCLUSIONS

In this paper, we presented a method to detect vessels of varying thickness using Gabor filters of different scales and orientations, and found that it provides a better way of extracting vessels in retinal images. We also tested the method on retinal images containing lesions and varying local contrast, and it gives reasonably good results. It is hoped that the automated vessel segmentation technique can detect the signs of diabetic retinopathy at an early stage, monitor the progression of the disease, minimize the examination time and assist the ophthalmologist in forming a better treatment plan.
REFERENCES

1. Emily Y Chew, "Diabetic Retinopathy", American Academy of Ophthalmology – Retina Panel, Preferred Practice Patterns, 2003.
2. Lalit Verma, Gunjan Prakash and Hem K. Tewari, Bulletin of the World Health Organization, vol. 80, no. 5, Genebra, 2002.
3. Thomas Walter, Jean-Claude Klein, Pascale Massin, and Ali Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy—detection of exudates in color fundus images of the human retina", IEEE Trans. Medical Imaging, vol. 21, no. 10, October 2002.
4. F. Laliberté, L. Gagnon, and Y. Sheng, "Registration and fusion of retinal images: an evaluation study", IEEE Trans. Medical Imaging, vol. 22, pp. 661–673, May 2003.
5. A. Hoover, V. Kouznetsova, M. Goldbaum, "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response", IEEE Trans. Medical Imaging, vol. 19, pp. 203–210, 2000.
6. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, "Detection of blood vessels in retinal images using two-dimensional matched filters", IEEE Trans. Medical Imaging, vol. 8, no. 3, pp. 263–269, 1989.
7. Di Wu, Ming Zhang, Jyh-Charn Liu, and Wendall Bauman, "On the adaptive detection of blood vessels in retinal images", IEEE Trans. Biomedical Engineering, vol. 53, no. 2, February 2006.
8. A. Pinz, S. Bernogger, P. Datlinger, and A. Kruger, "Mapping the human retina", IEEE Trans. Medical Imaging, vol. 17, no. 4, August 1998.
9. C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, "Automated location of the optic disc, fovea, and retinal blood vessels from digital color fundus images", British Journal of Ophthalmology, vol. 83, no. 8, pp. 902–910, 1999.
10. F. Zana and J. Klein, "Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation", IEEE Trans. Image Processing, vol. 10, pp. 1010–1019, 2001.
11. P. C. Siddalingaswamy, G. K. Prabhu and Mithun Desai, "Feature extraction of retinal image", E-Proc. of the National Conference for PG and Research Scholars, NMAMIT, Nitte, April 2006.
12. T. S. Lee, "Image representation using 2D Gabor wavelets", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, pp. 959–971, October 1996.
13. J. Chen, Y. Sato, and S. Tamura, "Orientation space filtering for multiple orientation line segmentation", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, pp. 417–429, May 2000.
14. Ming Zhang, Di Wu, and Jyh-Charn Liu, "On the small vessel detection in high resolution retinal images", Proc. of the 2005 IEEE Engineering in Medicine and Biology Conference, Shanghai, China, September 2005.
15. Rangaraj M. Rangayyan, Faraz Oloumi, Foad Oloumi, Peyman Eshghzadeh-Zanjani, and Fábio J. Ayres, "Detection of blood vessels in the retina using Gabor filters", Canadian Conference on Electrical and Computer Engineering, April 2007.

Corresponding author

Author: P. C. Siddalingaswamy
Institute: Manipal Institute of Technology
City: Manipal
Country: India
Email: [email protected]
Automated Detection of Optic Disc and Exudates in Retinal Images

P.C. Siddalingaswamy1, K. Gopalakrishna Prabhu2
1 Department of Computer Science & Engineering, Manipal Institute of Technology, Manipal, India
2 Department of Biomedical Engineering, Manipal Institute of Technology, Manipal, India
Abstract — Digital colour retinal images are used by ophthalmologists for the detection of many eye-related diseases such as diabetic retinopathy. These images are generated in large numbers during mass screening for the disease, and their manual examination may result in biased observation due to fatigue. An automated retinal image processing system could reduce the workload of the ophthalmologists, assist them in extracting the normal and abnormal structures in retinal images, and help in grading the severity of the disease. In this paper we present a method for automatic detection of the optic disc followed by classification of hard exudate pixels in the retinal image. Optic disc localization is achieved by an iterative threshold method to identify an initial set of candidate regions, followed by connected component analysis to locate the actual optic disc. Exudates are detected using a k-means clustering algorithm. The algorithm is evaluated against a carefully selected database of 100 colour retinal images at different stages of diabetic retinopathy. The methods achieve a sensitivity of 92% for the optic disc and 86% for the detection of exudates.

Keywords — Diabetic retinopathy, Optic disc, Exudates, Clustering.
I. INTRODUCTION

Diabetic retinopathy causes changes in the retina, the most important tissue of the eye [1]. Diabetic retinopathy produces different kinds of abnormal lesions in a diabetic's eye, such as microaneurysms, hard exudates, soft exudates and hemorrhages, which affect normal vision. Timely diagnosis and referral for management of diabetic retinopathy can prevent 98% of severe visual loss. Mass screening for diabetic retinopathy is conducted for the early detection of the disease, wherein the retinal images are captured using a standard digital colour fundus camera, resulting in a large number of retinal images that need to be examined by ophthalmologists. Automating the preliminary detection of the disease during mass screening can reduce the workload of the ophthalmologists. This approach involves digital fundus image analysis by computer for an immediate classification of retinopathy without the need for specialist opinions. In computer-assisted diagnosis, automatic detection of normal features in the fundus images, such as blood vessels, optic disc and fovea, helps in characterizing the detected lesions and in identifying false positives.
The optic disc and hard exudates are the brightest features in the colour retinal image. The optic disc is the brightest part of the normal fundus image and appears as a pale, round or vertically slightly oval disc. It is the entrance region of the blood vessels and optic nerves into the retina, and its detection is essential since it often works as a landmark and reference for the other features in the fundus image and aids the correct classification of exudates [2]. The optic disc was located as the largest region consisting of pixels with the highest gray levels in [3]. The area with the highest intensity variation between adjacent pixels was identified as the optic disc in [4]; the geometrical relationship between the optic disc and the blood vessels was also utilized for its identification [4]. In [5] a region growing algorithm was used for detecting hard exudates, with a reported 80.21% sensitivity and 70.66% specificity for detecting overall retinopathy. In [6] red-free fundus images are divided into sub-blocks and artificial neural networks are used to classify each sub-block as containing exudates or not; they report 88.4% sensitivity and 83.5% specificity for detecting retinopathy as a whole, and 93.1% sensitivity and 93.1% specificity for hard exudates. In this paper we propose detection of the optic disc by an iterative threshold method followed by correct identification of the optic disc among many candidate regions. Exudates are detected using a k-means clustering algorithm that classifies exudate and non-exudate pixels.

II. METHODS

A. Optic Disc detection

In retinal images it has been observed that the green component of the RGB colour space contains the most information for efficient thresholding of the optic disc. The optic disc is observed to be brighter than the other features in the normal retina. Optimal thresholding is applied to segment the brightest areas in the image.
It is a method based on approximating the histogram of an image by a weighted sum of two or more probability densities with normal distributions. The threshold is set as the gray level closest to the minimum probability between the maxima of the normal distributions. It is observed that this results in maximization of the gray-level variance between object and background. The result of optimal thresholding is shown in Figure 1.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 277–279, 2009 www.springerlink.com

Fig. 1 Result of optimal thresholding. (a) Colour retinal image affected with diabetic retinopathy. (b) Distributed bright connected components in the threshold image.

A data structure is used to find the total number of connected components in the threshold image. The component having the maximum number of pixels is assumed to contain the optic cup part of the optic disc, and it is considered the primary region of interest. The mean of this region is computed in terms of x and y coordinates. Since the maximum extent of the optic disc can be 100 pixels, only the components whose mean coordinates lie within 50 to 60 pixels of the mean coordinates of the largest component are considered parts of the optic disc. The extent of the optic disc in the horizontal and vertical directions is computed by finding the overall minimum and maximum x and y coordinates among all the components considered part of the optic disc. If this region is greater than 80 pixels in width and 90 pixels in height, an ellipse is drawn using the x and y coordinates so obtained. Otherwise the threshold is decremented by one and applied to the initial image, but this time only in the vicinity of the mean x and y coordinates computed earlier, so as to avoid misclassifying large exudates as part of the optic disc. The process is repeated until an optimum size of the optic disc has been obtained. Figure 2 shows the result of optic disc segmentation.

Fig. 2 Detection of optic disc. (a) Colour retinal image; (b)-(c) optic disc detection phases; (d) detected optic disc marked by ellipse

B. Hard Exudates detection

Hard exudates appear as bright intensity regions in retinal images. It is found from the literature that the green layer contains the most information on the brightness and structure of exudates. Fundus images are also characterized by uneven illumination: the center region of a fundus image is usually highly illuminated, while the illumination decreases towards the edge of the fundus. In other words, objects of the fundus (lesions and blood vessels) are illuminated differently at different locations of the image due to the non-uniform illumination. A median filter is therefore applied to the retinal image, and the filtered image is subtracted from the original green channel image to eliminate the intensity variation. Figure 3 shows the resulting intensity image, in which lesion areas are properly highlighted; the darker background features and the brighter lesion features can be clearly seen.

Fig. 3 Digital colour retinal image and image after illumination correction
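The optic disc localization steps of Section II.A — thresholding of the green channel, then grouping of bright connected components around the largest one — can be sketched as below. The distance limit follows the text; the function names, the convergence rule of the threshold loop and the use of scipy's labelling are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label

def iterative_threshold(channel, eps=0.5):
    """Move the threshold to the midpoint of the bright/background class
    means until it stabilizes (a generic optimal-thresholding sketch)."""
    t = channel.mean()
    while True:
        t_new = 0.5 * (channel[channel > t].mean() + channel[channel <= t].mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def disc_extent(mask, radius=60):
    """Bounding box of the components whose centroids lie within `radius`
    pixels of the centroid of the largest component (assumed optic cup)."""
    lab, n = label(mask)
    sizes = np.bincount(lab.ravel())[1:]                 # pixels per component
    cy, cx = [c.mean() for c in np.nonzero(lab == 1 + int(np.argmax(sizes)))]
    keep = np.zeros_like(mask, dtype=bool)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(lab == i)
        if np.hypot(ys.mean() - cy, xs.mean() - cx) <= radius:
            keep |= lab == i                             # part of the optic disc
    ys, xs = np.nonzero(keep)
    return xs.min(), xs.max(), ys.min(), ys.max()        # horizontal/vertical extent
```

If the returned extent is too small, the caller would lower the threshold and repeat, as described in the text.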
The k-means clustering technique has been used to determine the clusters automatically, without prior thresholds. The data within a cluster are more similar to each other than to the data from other clusters with respect to the chosen features, and each cluster has its own cluster center in the feature space. An important measure of similarity is the distance between cluster centers and between points inside one cluster; here the distance measure is the difference in intensity values between two pixels in the intensity difference image. Since we are interested in exudate and non-exudate regions, the number of clusters is two: the exudate cluster lies in the higher intensity range and the background cluster in the lower intensity range. The minimum and maximum intensity levels in the intensity difference image, termed min and max respectively,
form the initial cluster centers. The intensity difference is taken as the distance measure and is used to find the distance between pixels and the initial cluster centers, resulting in two clusters, after which the cluster centers are updated. The process is repeated iteratively until there is little variation in the values of the cluster centers. Figure 4 shows the result of applying the k-means clustering method to the intensity image in Figure 3.
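The Section II.B pipeline — median-filter background subtraction followed by two-cluster k-means initialised at the extreme intensities — can be sketched as follows. The median kernel size and the convergence tolerance are assumptions; the paper does not state them.

```python
import numpy as np
from scipy.ndimage import median_filter

def exudate_candidates(green, size=31, tol=0.1):
    """Subtract a median-filtered background estimate, then run two-cluster
    k-means on the difference image; returns the brighter-cluster mask."""
    diff = green.astype(float) - median_filter(green.astype(float), size=size)
    data = diff.ravel()
    lo, hi = data.min(), data.max()                      # initial cluster centers
    while True:
        bright = np.abs(data - hi) < np.abs(data - lo)   # nearest-center rule
        lo_new, hi_new = data[~bright].mean(), data[bright].mean()
        if abs(lo_new - lo) < tol and abs(hi_new - hi) < tol:
            return bright.reshape(diff.shape)
        lo, hi = lo_new, hi_new
```

In practice the optic disc region would first be masked out, as noted in the Results section.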
Fig. 4 Detected exudates

III. RESULTS

The image data required for the research work was obtained from the "Lasers and Diagnostics" unit, Department of Ophthalmology, Kasturba Medical College, Manipal. The colour fundus images are of dimension 768×576 pixels, captured with a Sony FF450IR digital fundus camera. Each fundus image comprises a red, a green, and a blue grayscale image combined to form a three-dimensional colour image. The green plane was used in the algorithms due to the greater distribution of intensity through the image. The algorithm was evaluated against a carefully selected database of 100 colour retinal images at different stages of diabetic retinopathy. The optic disc was located with a sensitivity of 92%; the method failed in eight of the test images because of the presence of large areas of lesions around the optic disc. For the detection of hard exudates the optic disc is masked to avoid misclassification. The proposed clustering method detected the hard exudates with 86% sensitivity.

IV. CONCLUSION

Algorithms for the automatic and robust extraction of features in colour retinal images were developed in this paper. An optimal iterative threshold method followed by connected component analysis is proposed to identify the optic disc, and a clustering method is used for identifying hard exudates in a digital colour retinal image. The algorithms performed well on a variety of input retinal images, which shows the relative simplicity and robustness of the proposed approach. This is a first step toward the development of an automated retinal analysis system, and it is hoped that a fully automated system can detect the signs of diabetic retinopathy at an early stage, monitor the progression of the disease, minimize the examination time and assist the ophthalmologist in forming a better treatment plan.

ACKNOWLEDGMENT

The authors would like to express their gratitude to the Department of Ophthalmology, Kasturba Medical College, Manipal for providing the necessary images and clinical details needed for the research work.

REFERENCES

1. Emily Y Chew, "Diabetic Retinopathy", American Academy of Ophthalmology – Retina Panel, Preferred Practice Patterns, 2003.
2. Huiqi Li and Opas Chutatape, "Automated feature extraction in color retinal images by a model based approach", IEEE Trans. Biomedical Engineering, vol. 51, no. 2, February 2004.
3. Z. Liu, O. Chutatape, and S. M. Krishnan, "Automatic image analysis of fundus photograph", Proc. 19th Annu. Int. Conf. IEEE Engineering in Medicine and Biology Society, vol. 2, pp. 524–525, 1997.
4. C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, "Automated location of the optic disc, fovea, and retinal blood vessels from digital color fundus images", British Journal of Ophthalmology, vol. 83, no. 8, pp. 902–910, 1999.
5. M. Foracchia, E. Grisan, and A. Ruggeri, "Detection of optic disc in retinal images by means of a geometrical model of vessel structure", IEEE Trans. Medical Imaging, vol. 23, no. 10, October 2004.
6. G. Gardner, D. Keating, T. H. Williamson, and A. T. Elliot, "Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool", British Journal of Ophthalmology, vol. 80, pp. 940–944, 1996.

Corresponding author

Author: P. C. Siddalingaswamy
Institute: Manipal Institute of Technology
City: Manipal
Country: India
Email: [email protected]
Qualitative Studies on the Development of Ultraviolet Sterilization System for Biological Applications

Then Tze Kang1, S. Ravichandran2, Siti Faradina Bte Isa1, Nina Karmiza Bte Kamarozaman1, Senthil Kumar3
1 Student, Temasek Engineering School, Temasek Polytechnic, Singapore
2 Faculty, Temasek Engineering School, Temasek Polytechnic, Singapore
3 Student, Nanyang Technological University, Singapore
Abstract — Ultraviolet rays have been widely used for providing an antimicrobial environment in hospitals and also in certain sterilization procedures related to water treatment. The scope of this paper is to investigate the design of an ultraviolet sterilization unit developed to work in conjunction with a fluid dispenser that dispenses fluids in measured quantities periodically. Common problems associated with contamination of fluids in these dispensers have been carefully investigated to qualitatively document the requirements of the system. The most important part of this study has focused on the qualitative assessment of the antimicrobial effects at various parts of the dispenser and on the variation of the antimicrobial effects at various depths of the fluid contained in the dispenser. We have designed a protocol to study the efficiency of the system and to obtain a realistic picture of the antimicrobial effects of ultraviolet radiation at various depths. To implement this protocol, we have designed an implantable array capable of holding microorganisms in sealed Petri dishes immersed in the fluid contained in the dispenser. Studies on microbial growth conducted periodically under the influence of ultraviolet radiation of a known intensity provide a qualitative picture of the antimicrobial effects of ultraviolet rays at various depths, making it possible to qualitatively analyze each sample for documenting the antimicrobial effect. This study provides a good understanding of the intensity of ultraviolet radiation required to provide a perfect antimicrobial environment, and of other factors that are critical in the design of the system as a whole for dispensing fluids in biological applications.
Keywords — Ultraviolet rays, Antimicrobial effects, Fluid dispenser, Biological applications, Sterilization procedures

I. INTRODUCTION

Purification and sterilization of water is considered important for all biological applications. Some of the conventional methods used in practice are briefly discussed below [1]. Filtration of the water is considered very important before any sterilization procedure, and a wide variety of filters are available for the filtration of water. Filters remove sand, clay and other matter, as well as organisms, by means of a small pore size. Filtration and sterilization can also be achieved by passing water through iodine exchange resins. In this method, when negatively charged contaminants contact the iodine resin, iodine is instantly released and kills the microorganisms without large quantities of iodine entering the solution. Boiling water is considered the most reliable and often the cheapest method; ideally, boiling the water for 5 minutes is considered safe for killing the microorganisms [2]. Alternatively, sterilization using chlorine and silver-based tablets can destroy most bacteria when used correctly, but these are less effective against viruses and cysts. In recent years, ultraviolet (UV) sterilization has been gaining popularity as a reliable and environmentally friendly sterilization method [1]. UV disinfection technology is of growing interest in the water industry since it was demonstrated that UV radiation is very effective against certain pathogenic micro-organisms of importance for the safety of drinking water [3]. In most sterilization equipment, the light source is a low-pressure mercury lamp emitting UV at a wavelength of 253.7 nm; this source is referred to as UVC [4]. Ultraviolet light is a known mutagen at the cellular level and is used in a variety of applications, such as food, air and water purification.

II. ULTRAVIOLET RAYS IN STERILIZATION

A. Germicidal Mechanisms of Ultraviolet Rays

UVC light has the ability to alter the deoxyribonucleic acid (DNA) code, which in turn disrupts an organism's ability to replicate [5]. Absorption of UV energy by the nuclei in DNA is the primary lethal mechanism for the destruction of microorganisms. The antimicrobial effect is dependent on both the time of exposure and the intensity of the UVC rays. However, UVC sterilization will not be effective if the bacterial or mold spores are hidden or not in the direct path of the rays. Organisms that are large enough to be seen are usually too resistant to be killed by practical UVC exposure. The radiation dose of UVC energy is measured in microwatts per square centimeter
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 280–283, 2009 www.springerlink.com
(μW/cm²). At the known wavelength of 253.7 nm, intensities from 2,500 μW/cm² to 26,400 μW/cm² have been found effective against various bacterial organisms. For mold spores, intensities ranging from 11,000 μW/cm² to 220,000 μW/cm² are effective. Though most common viruses, such as influenza, poliovirus and the viruses of infectious hepatitis, are susceptible to intensities below 10,000 μW/cm², some viruses, such as the tobacco mosaic virus seen in plants, are susceptible to UVC rays only at very high intensity (440,000 μW/cm²) [6].

B. Sterilization in Hospitals and Clinics

UVC rays are very widely used in sterilization procedures in hospitals and clinics. UVC light (100–280 nm) has been reported to be very effective in the decontamination of hospital-related surfaces, such as unpainted/painted aluminum (bed railings), stainless steel (operating tables), and scrubs (laboratory coats) [7]. It has been reported that UVC lighting is an alternative to laminar airflow in the operating room and that it may be an effective way of lowering the number of environmental bacteria. It is also believed that this method can possibly lower infection rates by killing the bacteria in the environment rather than simply reducing their number at the operative site [8]. As infections represent a major problem in dialysis treatment, dialyzing rooms need to be kept as antibacterial as possible. It has been reported that 15-watt UVC lamps installed on the ceiling, one for every 13.5 m², and used for 16 hours nightly after working hours for room disinfection provide an antimicrobial environment even in areas not directly exposed to the UVC radiation [9].

C. Sterilization in Water Purification Systems

Transmission of pathogens through drinking water is a well-known problem, which affects highly industrialized countries as well as countries with low hygienic standards. Chlorination of drinking water was introduced to the water supply in the beginning of the 19th century in order to stop the spread of pathogens, and disinfection of drinking water with chlorine has undoubtedly contributed to the reduction of typhoid fever mortality in many countries. Despite the worldwide use of chlorine for the disinfection of drinking water, other safe methods of disinfection have gained popularity [10]. Ultraviolet disinfection technology is of growing interest in the water industry ever since it was found very effective against common pathogenic micro-organisms in water [3]. Ultraviolet disinfection systems are commonly incorporated into drinking water production facilities because of their broad-spectrum antimicrobial capabilities and the minimal disinfection by-product formation that generally accompanies their use [11].

D. Ultraviolet Sterilization for Clinical Applications

Filtered water free from microorganisms and chemical disinfectants is an absolute requirement for the preparation of solutions for certain biological applications in medicine. The process of reverse osmosis is an invaluable technology for providing filtered water free from pathogens. Since water filtered through reverse osmosis is free from chemical disinfectants, it serves as the ideal solvent for the preparation of the biochemical solutions used in biological applications. This paper briefly discusses the various parts of the ultraviolet sterilization system developed for providing the solvent required for the preparation of such solutions.

III. ARCHITECTURE OF THE SYSTEM

The architecture of the system essentially consists of an ultraviolet radiation chamber, a fluid dispensing chamber, an embedded adjustable profile, a fluid inlet and outlet system, and a microcontroller module. The block diagram of the architecture is shown in Fig. 1.

Fig. 1 Block Diagram of the Architecture
A. Ultraviolet Radiation Chamber

In order to provide an antimicrobial environment in the chamber, an ultraviolet radiation chamber was built. The chamber has two ultraviolet lamps installed to provide maximal sterilization throughout the whole fluid dispensing chamber. The ultraviolet radiation module used in our system is shown in Fig. 2.

Fig. 2 Ultraviolet Radiation Module

B. Fluid Dispensing Chamber

The material used for building the chamber has to be medically approved for biological applications. Medical-grade stainless steel and food-grade polyethylene often meet the requirements; in our preliminary studies, we used food-grade polyethylene to build the fluid dispensing chamber. This chamber has been designed to hold the measured volume of fluid stored for dispensing, which is eventually used to prepare the biochemical solution for the biological applications.

C. Embedded Adjustable Profile

Qualitative studies were made using the embedded adjustable profile, which is capable of holding the sealed Petri dishes at various positions in the fluid dispensing chamber. This adjustable profile was indigenously fabricated in such a way that it could be well accommodated within the chamber. The embedded adjustable profile along with the fluid dispensing chamber is shown in Fig. 3.

Fig. 3 Adjustable Profile in Chamber

D. Fluid Inlet and Outlet System

The system is fitted with two solenoid pinch valves that control the filling and dispensing of the fluid into and out of the fluid dispensing chamber. The volume of the fluid is measured precisely by a level detection mechanism. The microcontroller module interfaces with the level detection module to control the fluid inlet and outlet system.

E. Microcontroller Module

The architecture is built around a PIC18F4520 microcontroller containing five ports. The ports are configured to support modules such as the display interface, the keyboard interface, the activation of the pinch valves and the level detection mechanism. The microcontroller allows integration of the different modules of the system.

IV. QUALITATIVE STUDIES ON ULTRAVIOLET IRRADIATION

The object of this study is to qualitatively evaluate the antimicrobial effects of UV radiation. To assess the effects, we constructed the adjustable profile capable of holding the sealed Petri dishes at various positions in the fluid dispensing chamber. The sealed Petri dishes within the chamber contain the environment required for bacterial growth; in our studies, we used Lysogeny broth (LB) agar for the bacteria to multiply, and we conducted studies using this medium extensively. With this set-up it was possible to study the bacterial colonies after exposure to UV radiation at various positions inside the fluid dispensing chamber. Studies were conducted for various exposure durations to assess the antimicrobial effects of UV radiation on bacterial colonies positioned at various levels of the fluid dispensing chamber.

V. RESULTS AND DISCUSSION
Preliminary studies of ultraviolet radiation clearly demonstrated the effects of ultraviolet light on the bacterial colonies at various depths within the fluid dispensing chamber. These studies have provided a clear picture of the effects of ultraviolet rays of a known intensity and of their capability to provide an antimicrobial environment deep inside the fluid
dispensing chamber. Fig. 4 shows bacterial growth without ultraviolet radiation in an incubator, retained as a control for our studies. Fig. 5 shows the absence of bacterial growth in the agar medium after exposure; this Petri dish was maintained within the fluid dispensing chamber at level two, approximately the middle portion of the chamber. Fig. 6 shows the absence of bacterial growth in the agar medium after exposure; this Petri dish was maintained within the fluid dispensing chamber at level three, approximately the bottom portion of the chamber.
VI. CONCLUSION
Preliminary studies conducted on the newly developed Ultraviolet Sterilization System have demonstrated the antimicrobial effects of ultraviolet radiation, as seen in Fig. 5 and Fig. 6. These studies were conducted with E. coli bacterial strains to qualitatively document the antimicrobial effects of UVC at various depths. The system was built in-house to provide a measured quantity of solvent for the preparation of biochemical solutions for certain biological applications.
Fig. 4 Control Petri Dish

Fig. 5 Petri Dish at Level Two

REFERENCES
1. Yagi N, Mori M, Hamamoto A, Nakano M, Akutagawa M, Tachibana S, Takahashi A, Ikehara T, Kinouchi Y (2007) Sterilization using 365 nm UV-LED. Proc. 29th Annual International Conference of the IEEE EMBS, pp 5841-5844. DOI 10.1109/IEMBS.2007.4353676
2. Spinks AT, Dunstan RH, Harrison T, Coombes P, Kuczera G (2006) Thermal inactivation of water-borne pathogenic and indicator bacteria at sub-boiling temperatures. DOI 10.1016/j.watres.2006.01.032
3. Hijnen WAM, Beerendonk EF, Medema GJ (2005) Inactivation credit of UV radiation for viruses, bacteria and protozoan (oo)cysts in water: a review. DOI 10.1016/j.watres.2005.10.030
4. Mori M, Hamamoto A, Takahashi A, Nakano M, Wakikawa N, Tachibana S, Ikehara T, Nakaya Y, Akutagawa M, Kinouchi Y (2007) Development of a new water sterilization device with a 365 nm UV-LED. DOI 10.1007/s11517-007-0263-1
5. Janoschek R, Moulin GC (1994) Ultraviolet disinfection in biotechnology: myth vs. practice. BioPharm (Jan./Feb.), pp 24-31
6. Booth AF (1999) Sterilization of Medical Devices. Interpharm Press, Buffalo Grove, IL 60089, USA
7. Rastogi VK, Wallace L, Smith LS (2007) Disinfection of Acinetobacter baumannii-contaminated surfaces relevant to medical treatment facilities with ultraviolet C light. Mil Med 172(11):1166-9. PMID: 18062390
8. Ritter MA, Olberding EM, Malinzak RA (2007) Ultraviolet lighting during orthopaedic surgery and the rate of infection. J Bone Joint Surg Am 89:1935-1940. DOI 10.2106/JBJS.F.01037
9. Inamoto H, Ino Y, Jinnouchi M, Sata K, Wada T, Inamoto N, Osawa A (1979) Dialyzing room disinfection with ultra-violet irradiation. J Dial 3(2-3):191-205. PMID: 41859
10. Schoenen D (2002) Role of disinfection in suppressing the spread of pathogens with drinking water: possibilities and limitations. DOI 10.1016/S0043-1354(02)00076-3
11. Wait IW, Johnston CT, Blatchley ER III (2007) The influence of oxidation-reduction potential and water treatment processes on quartz lamp sleeve fouling in ultraviolet disinfection reactors. DOI 10.1016/j.watres.2007.02.057
Fig. 6 Petri Dish at Level Three
Authors: Then Tze Kang, S. Ravichandran
Institute: Temasek Polytechnic
Street: 21 Tampines Ave 1
City: Singapore
Country: Singapore
Emails: [email protected], [email protected], [email protected]
From e-health to Personalised Medicine
N. Pangher
ITALTBS SpA, Trieste, Italy

Abstract — The research agenda of the TBS group aims at offering a complete solution for the management of molecular medicine in the standard care environment. The different -omics (genomics, proteomics, metabolomics, ...) already represent a very important part of the research effort in medicine and are expected to modify dramatically the model of delivery of healthcare services. The TBS group is facing this challenge through an R&D effort to transform its Clinical Information System into a complete suite for the management of clinical research and care pathways, supporting a completely personalised approach. The IT suite allows researchers to integrate Electronic Clinical Records with data from technologies such as DNA and protein microarrays, data from diagnostic and molecular imaging, and workflow management solutions. In this paper we discuss the results of our participation in different European and national research projects sharing this development aim. We provided the IT integration suite for projects on the identification of therapy-relevant mutations of tumour suppressor genes in colon cancer (MATCH EU project), on the genetic basis of the impact of metabolic diseases on cardiovascular risk (MULTI-KNOWLEDGE EU project) and on the identification of biomarkers for Parkinson's disease (SYMPAR project in Italy).

Keywords — Bioinformatics, e-health, personalized medicine, Electronic Medical Records, health risk profiles.
I. INTRODUCTION
The mantra in healthcare services in recent years has been quality through the formalization and standardization of processes. Accreditation, Evidence Based Medicine, best practices, risk management: processes are described, guidelines established, outcomes defined as targets. The quality revolution has become the daily bread of most industrial sectors: we expect fully functional products wherever we obtain them, and services that are effective independent of the particular person involved. In the same way, we expect our healthcare providers to deliver the best possible service independent of the location and of the professionals treating us. The great effort towards standardization of the quality of healthcare is not yet complete, but we are already facing a changing scenario. Molecular medicine is going to impact dramatically on service delivery models: personalization of treatment will have a strong scientific background. Treatment will be based on
our molecular make-up: genomics, transcriptomics, proteomics and metabolomics will become the tools of the day. Advanced diagnostics will result not only in more tailored treatment, but will allow the identification of risk profiles that will be essential to design improved prevention protocols, aiming at keeping people healthy and not only curing or managing diseases. The personalization of the treatment process will also be based on the availability of new tissues and organs obtained through the most advanced applications of tissue engineering: regenerative medicine, based on extensive exploitation of stem cells, will allow the repair and substitution of defective organs. But again, the compatibility of the new tissues and organs will have to be tailored to the characteristics of the single human being. Engineered stem cells will be used to set up tissue banks: the patient himself will be one of the sources of these new tissues. While we have still not achieved the target of standardized healthcare services, a discontinuity in the complexity of healthcare is appearing: good quality healthcare will require a very personalized approach, starting from diagnostic systems based on "-omics" analysis and molecular imaging, through targeted drugs, new compatible tissues and organs, gene therapy and drug delivery solutions. Prevention will be based not only on general rules, but will also depend on our actual predisposition to diseases. This trend is impacting the development of IT solutions for the healthcare sector: workflow systems supporting healthcare processes are not widely available, and IT systems will suddenly have to abandon a "few sizes fit all" approach and present solutions that allow the setting up of prevention, treatment and disease management processes encompassing all kinds of environments, data sources and knowledge bases.
IT solutions will have to allow continuity of prevention and care, collecting data from sources ranging from home-based monitors to sequencing technologies to PET scanners. Treatment options should range from diet and exercise prescriptions to the design of viral vectors for gene therapies to the production of stem cells for organ replacement.
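As a toy sketch of the source-agnostic data collection this implies (the class and field names below are invented for illustration and are not part of any TBS product):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Observation:
    """One normalized record, whatever instrument produced it."""
    patient_id: str
    source: str   # e.g. "home_monitor", "sequencer", "pet_scanner"
    kind: str
    value: Any

# Heterogeneous inputs reduced to one longitudinal stream per patient.
record = [
    Observation("p-001", "home_monitor", "heart_rate_bpm", 72),
    Observation("p-001", "sequencer", "gene_variant", "BRCA1 c.68_69delAG"),
    Observation("p-001", "pet_scanner", "suv_max", 3.4),
]

# A care pathway can then query one uniform stream:
kinds = [o.kind for o in record if o.patient_id == "p-001"]
print(kinds)  # ['heart_rate_bpm', 'gene_variant', 'suv_max']
```

The point of the sketch is the shared shape: once every source emits the same record type, prevention and care processes can consume a single stream instead of one interface per device.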
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 284–286, 2009 www.springerlink.com
II. RESEARCH STRATEGY

A. The MATCH Project
The MATCH project entailed the development of an automatic diagnosis system to support the treatment of colon cancer by discovering mutations that occur in tumour suppressor genes (TSGs) and contribute to the development of cancerous tumours. The colon cancer automated diagnosis system is a computer-based platform addressed to doctors, biologists, cancer researchers and pharmaceutical companies. The project goal was to integrate medicine and molecular biology by developing a framework in which colon cancer can be handled efficiently. Through this integration, real-time conclusions can be drawn for early diagnosis and more effective colon cancer treatment. The system is based on a) colon cancer clinical data and b) biological information derived by data mining techniques from genomic and proteomic sources.

B. The MULTI-KNOWLEDGE Project
MULTI-KNOWLEDGE starts from the data processing needs of a network of medical research centres in Europe and the USA, partners in the project and co-operating in research on the link between metabolic diseases and cardiovascular risk. These needs are mostly related to the integration of three main sources of information: clinical data (EHR), patient-specific genomic and proteomic data (in particular data produced with microarray technology), and demographic data. MULTI-KNOWLEDGE has created an intelligent workflow environment for multi-national, multi-professional research consortia aiming at cooperatively mining, modelling and visualizing biomedical data under a single common perspective. This allows the retrieval and analysis of millions of data points through bioinformatics tools, with the intent of improving medical knowledge discovery and understanding through the integration of biomedical information.
Critical and difficult issues addressed are the management of data that are heterogeneous in nature (continuous and categorical, with different orders of magnitude, different degrees of precision, etc.), in origin (statistical programs, manual introduction by an operator, etc.), and coming from different data acquisition environments (from the clinical setting to the molecular biology lab). The MULTI-KNOWLEDGE architecture and set of tools have been tested in the development of a structured system to integrate data into a single informative system committed to cardiovascular risk assessment. Therefore this project will also
contribute to establishing guidelines and operating procedures to manage and combine data coming from protein arrays and make them easily available as input to study algorithms.

C. The SYMPAR Project
The target of the project is to develop an informatics system for research into biomarkers and treatments for Parkinson's Disease (PD), the second most common progressive neurodegenerative disorder in western countries. We are testing how an integrated IT solution can support researchers in identifying molecular biomarkers of the disease. These biomarkers will be used for early diagnosis in pre-symptomatic individuals and to follow, at the molecular level, the response to old and new therapies. Our IT solution is based on the following elements:
1) A data repository where all data relevant to patient health are stored. Data will consist of routine medical records, brain-imaging data, filmed records of behavioural assessments, and gene expression profiles from peripheral tissue.
2) A workflow engine: medical and research procedures should be standardized. The IT system should support users in collecting medical data and in following laboratory protocols, and should help the user both in identifying and classifying patients and in activating medical and research procedures.
3) Access to an external knowledge base: a unique interface to access literature, external scientific databases and specialised healthcare resources.

III. THE TOOLBOX
During the development of these different solutions, we identified the need for a single toolbox for building IT solutions that address the needs of biomedical research and healthcare services, tackling the integration of different knowledge sources, biomedical instruments and personalised treatment options. We have realised this toolbox, named PHI Technology. PHI Technology is divided into two separate parts, called PHI Designer and PHI Runtime Environment (PHI RE).
The PHI Designer is used to generate healthcare applications, named PHI Solutions, deployed and executed upon the PHI RE runtime environment. PHI Designer and PHI RE share a common Reference Information Model (PHI RIM), fully extensible and customizable, derived from the international standard HL7 RIM (www.hl7.org). PHI RIM stores the metadata catalog, describing objects' attributes, services, events, vocabularies and ontologies. Applications, named "solutions", are designed and executed upon the RIM. The physical database (PHI RIM DB) is invisible to designers and applications; its conceptual and physical model is derived from the RIM, which makes it an open database based on the most popular international healthcare standard. Its mixture of Entity-Relationship and Entity-Attribute-Value physical structures makes it extremely flexible and performant.
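The Entity-Attribute-Value side of that mixture can be illustrated with a short sketch (a deliberate simplification; the actual PHI RIM DB schema is not described here): each fact is stored as an (entity, attribute, value) row, so new clinical or genomic attributes require no schema change.

```python
# Toy Entity-Attribute-Value (EAV) store: one (entity, attribute, value) row
# per fact, so attributes can be added at runtime without schema migrations.
class EavStore:
    def __init__(self):
        self.rows = []  # (entity_id, attribute, value) triples

    def put(self, entity_id, attribute, value):
        self.rows.append((entity_id, attribute, value))

    def entity(self, entity_id):
        """Reassemble one entity's attributes into a plain dict."""
        return {a: v for e, a, v in self.rows if e == entity_id}

store = EavStore()
store.put("patient-1", "systolic_bp", 128)
store.put("patient-1", "apoe_genotype", "e3/e4")  # added later, no migration
print(store.entity("patient-1"))  # {'systolic_bp': 128, 'apoe_genotype': 'e3/e4'}
```

The trade-off is the usual EAV one: flexibility at the cost of more complex queries, which is why production systems typically combine EAV with conventional entity-relationship tables, as the PHI RIM DB is described as doing.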
The PHI Designer is intended to assist in developing intuitive, easy-to-use, reusable applications for daily work with information. Finally, PHI Technology provides a runtime environment, named PHI RE, independent of the underlying operating system and based on J2EE (Java 2 Enterprise Edition) and open standards, where applications can be deployed once they have been designed with the PHI Designer. The main purpose of the PHI RE is to enable the exchange and reuse of applications among partners and customers, making it as easy as a plug-and-play setup. PHI RE is mainly composed of Servers and Engines (the two terms can be seen as nearly synonymous), which are J2EE components developed in the JBoss SEAM framework and deployed in the JBoss Application Server. Both PHI Designer and PHI RE are reliable and scalable: the whole PHI Technology can be installed either on a personal computer or on a network of distributed servers, in a single-node configuration as well as in a cluster configuration for high availability.

IV. CONCLUSIONS
The need for IT solutions for the management of biomedical research and personalized medicine treatments will be a major issue: these tools will be necessary to navigate the sea of information and find the correct personalized route.
Author: Nicola Pangher
Institute: ITALTBS SpA
Street: Padriciano 99
City: Trieste
Country: Italy
Email: [email protected]
Quantitative Biological Models as Dynamic, User-Generated Online Content J.R. Lawson, C.M. Lloyd, T. Yu and P.F. Nielsen Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand Abstract — Quantitative models of biological systems are dynamic entities, with modelers continually tweaking parameters and equations as new data becomes available. This dynamic quality is not well represented in the current framework of peer-reviewed publications, as modelers are required to take a 'one-dimensional snapshot' of their model in a particular state of development. The CellML team is developing new software for the CellML Model Repository that treats models as dynamic code, by providing server-side storage of models within a version control system, rather than as static entities within a relational database. Version control systems are designed to allow a community to collaborate on the same code in parallel and to provide comprehensive revision histories for every file within the repository. Because CellML 1.1 espouses a modular architecture for model description, it is possible to create a library of modular components corresponding to particular biological processes or entities, which can then be combined to create more complex models. For this to be possible, each component needs to be annotated with a set of metadata that describes its provenance, a detailed revision history and semantic information from ontologies. A high level of collaboration is also essential to leverage domain-specific knowledge about the components of a large model from a number of researchers. 
By treating quantitative biological models as dynamic, user-generated content, and providing facilities for the expert modeling community to participate in the creation and curation of a free, open-access model repository, the next-generation CellML Model Repository will provide a powerful tool for collaboration and will revolutionise the way cellular models are developed, used, reused and integrated into larger systems.

Keywords — databases, e-science, CellML, quantitative modeling
I. INTRODUCTION Quantitative models of biological systems are dynamic entities, with modelers continually tweaking parameters and equations as new data becomes available. This dynamic quality is not well represented in the current framework of peer-reviewed publications, as modelers are required to take a 'one-dimensional snapshot' of their model in a particular state of development. The glut of data about biological systems can present a challenge to the ability of individual researchers or groups to manage and utilize their knowledge of these systems.
One solution to this challenge is to encode this information into structured systems of knowledge, and the resulting models are rapidly becoming integral to many fields of biology. The sheer volume and rate of production of new data is straining the traditional scientific publication process, because it cannot keep up [1] and because the print medium is simply not appropriate in many cases. The internet must be leveraged to disseminate data as it is collected and incorporated into models, and to facilitate the mass collaborative initiatives required to merge these models into ever larger, more complex systems.

II. LIMITATIONS OF THE CONVENTIONAL SCIENTIFIC PUBLISHING SYSTEM
The ability of researchers to describe a quantitative biological model within a conventional printed academic publication is limited. As these models become more complex and detailed, this limitation has begun to hamper the ability of other researchers to work with them. At best, this issue limits the critical second stage of peer review: reproduction of the experiments described in publications by the scientific community. If a model cannot be easily reproduced because it is inadequately described in the literature, not only is its validity questioned, but a barrier is created to the construction of complex models of biological systems, which are commonly built by combining smaller models. The greater levels of transparency required by this field of research will be difficult to provide by merely extending the print-based publication model. At present, publications that discuss quantitative models can take one of several forms:

- Model equations and parameters are interspersed throughout the paper, with salient discussion of each. This format can be frustrating to researchers attempting to reproduce a model, as the primary emphasis is often on the commentary describing why particular values were chosen or how equations were derived, rather than on providing a complete description. Furthermore, descriptions of these models are frequently missing important parameters and initial conditions.
- Supplemental datasheets published with the article define the model by listing equations and parameters. This provides a more concise, and often more complete, description of the model, although typographical errors still present a persistent barrier to model reproduction.
- Alternatively, the predictions and outputs of a model are simply discussed, without any real attention to how the model is constructed. These articles tend to be reviews or short-format publications in journals such as Science. This approach is sometimes extended by providing links to the author's website, which either elaborates in the form of a listing of equations and parameters, or provides a download of the original model code. Providing 'external supplemental data' can be quite effective, but again, typographical errors can be an issue, and the fact that the model is not officially part of the publication, and thus not subject to the same oversight and peer review, gives no guarantee that the available model is identical to the one used to produce the outputs shown and discussed in the publication.
- Making model code available in a widely accepted and utilised language such as MATLAB is helpful, but more often the model code is only available in obscure formats specific to small, in-house software packages developed by the authors or their colleagues.

Some publishing groups, such as PLoS, Nature and BMC, are actively moving towards requiring model authors to deposit a copy of their model in an open-access, recognised online repository, in the same manner that most journals now require authors to deposit novel DNA sequence information or protein structures in a database (such as GenBank [3] or PDB [2], respectively) as a prerequisite to publication. This is an ideal solution to removing the barriers to transparency and reproduction presented above, but it involves a significant shift in the scientific publishing paradigm that has existed to date.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 287–290, 2009 www.springerlink.com
III. STORING MODELS IN ONLINE VERSION CONTROL SYSTEMS
CellML [4] is an XML-based format designed to allow the description of quantitative models based on ordinary differential equations and systems of differential algebraic equations, and to facilitate wide dissemination and reuse of these models. The CellML Model Repository [5] is an online resource that provides free, open access to CellML models. The Physiome Model Repository (PMR) software currently underlying this repository allows only limited interaction with the models it stores; they can be associated with documentation and metadata, and organized within a primitive versioning system.
The CellML team is developing new software (PMR2) for the CellML Model Repository that dispenses with the concept of quantitative biological models as static snapshots of knowledge and treats them as dynamic, software-like entities by providing server-side storage of models within Mercurial, a distributed version control system (DVCS). DVCSs are designed to allow a community to collaborate on the same code in parallel and to provide comprehensive revision histories for every file within the repository. This concept of model development is drawn from open source software development. Programmers are able to build large systems by drawing on libraries of pre-existing code, and online communities such as SourceForge form around these efforts to discuss implementation. Comprehensive revision histories, which define the provenance and rationale of each and every addition to and modification of a piece of code, are fundamental to any open source initiative, because information about who did what, and when, provides the foundation for collaboration. PMR2 can support the signing of changesets with cryptographic signatures using the GPG plugin provided with Mercurial, ensuring the integrity of the information within and proper attribution to the author who signed the changeset: changing the data without breaking the checksum and signature is extremely difficult without the original signing key. Storing models within a DVCS allows collaborative software development methods to be used in model construction: rollbacks, branching, merging of branches and parallel revisions to the same code are all possible. Powerful access control systems can also be implemented to allow modelers to control who is able to see and interact with their work. The PMR2 software and associated systems for online publishing are described in detail by Yu et al. [6].

IV. MODULARITY AND MODEL REUSE
Complex systems in engineering are almost always constructed as hierarchical systems of modular components [7]. Such modularity can be seen not only in human-engineered systems but also in biology [8]. Models should be designed with reuse in mind and implemented in formats which facilitate their combination into the kinds of hierarchical systems familiar to engineers. Such a methodology leverages 'black-box' abstraction and allows for separation of concerns. For example, a disparate group of researchers may be collaborating on the construction of a system: one may be responsible for organizing the top-level hierarchy, while others may be responsible for lower-level subcomponents or hierarchies. If the components of the system are constructed in a sufficiently modular fashion, the researcher organizing the hierarchy needs to know very little about how they actually work – only their inputs and outputs. Terkildsen et al. [9] demonstrate this in a recent article discussing their use of the CellML standard to integrate multiple models [10,11,12], each describing discrete but interrelated systems within a rat cardiac myocyte; the result was a model of excitation-contraction coupling. Similarly, descriptions of systems involving multiple cell types can be created from pre-existing models. Sachse et al. [13] recently used a model of the interaction between cardiac myocytes and fibroblasts as a platform to develop a number of novel hypotheses about the role of fibroblasts in cellular electric coupling within the heart. The model of the cardiac myocyte used in this work was a curated (but not officially published) form of the influential Pandit 2001 model [10], downloaded from the CellML Model Repository. Because CellML 1.1 espouses a modular architecture for model description, it is possible to create a library of modular components corresponding to particular biological processes or entities, which can then be combined to create more complex models [14]. For this to be possible, each component needs to be annotated with a set of metadata that describes its provenance, a detailed revision history and semantic information from ontologies. A high level of collaboration is also essential to leverage domain-specific knowledge about the components of a large model from a number of researchers. The CellML project seeks to provide the tools and infrastructure both for creating these libraries of components and for disseminating them over the internet.
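The black-box composition described above can be sketched as follows; the two components and their wiring are invented for illustration and do not reproduce the equations of the cited myocyte models.

```python
# Two 'black-box' components: the integrator only needs their declared
# inputs and outputs, not their internal equations.
def calcium_release(voltage_mv):
    """Hypothetical component: membrane voltage -> intracellular calcium."""
    return {"ca_i": 0.001 if voltage_mv > -40.0 else 0.0001}

def contraction(ca_i):
    """Hypothetical component: intracellular calcium -> developed tension."""
    return {"tension": 100.0 * ca_i}

def couple(voltage_mv):
    """Wire components together by matching output names to input names."""
    state = calcium_release(voltage_mv)
    state.update(contraction(state["ca_i"]))
    return state

print(couple(-20.0))   # depolarized: calcium released, tension develops
print(couple(-80.0))   # at rest: little calcium, little tension
```

The researcher assembling `couple` needs only each component's interface; either internal implementation could be swapped for a far more detailed model without changing the wiring, which is the property CellML 1.1 imports are designed to provide.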
PMR2 will provide a framework for users to reuse, review and modify pre-existing components, to create new components, to collaborate securely and privately with any number of other users, and to rapidly make their work public. Methods for composing models from such libraries of components, and practical applications thereof, have been described in the literature [15], and best-practice approaches are beginning to be developed [16]. Further, the requirement within the synthetic biology community for a library of 'virtual parts' representing Standard Biological Parts [17], or 'BioBricks', which can be slotted together and simulated for rapid prototyping, has been voiced repeatedly [18]. Implementing models as dynamic, online content opens up possibilities for programmatic interaction and processing, which represents a considerable development in the field of computational biology. The BioCoDa [19] project is working to create libraries of CellML models of biochemical reactions which source parameter values from databases that hold kinetic parameters. This will allow these models to dynamically update themselves as new information becomes available in the database. Description Logics [20] can be used to define rules that impose constraints on models to assure their biological validity. Assertions may be made about how particular processes act or how entities are related in a biological system; if a model violates these assertions, it can be flagged as biologically invalid. In concert with comprehensive ontologies describing biological systems, these rules can also be used to automate the model building process. A modeler might define which parts of a system to represent and where to draw the information from; this model could then be simulated and the results checked against databases containing wet-lab experimental data about how the system should behave. Key to these possibilities is the digital representation and hyper-accessible nature of CellML models.

V. THE ROLE OF ONLINE COMMUNITIES
The possibilities presented by internet-based scientific collaboration do not end at simply building models: linking elements of models to online content through metadata promises to do for descriptions of biological systems what hyperlinking has done for basic text. For example, curation is essential to the integrity of any large repository of information. This is currently done primarily by small groups of highly skilled experts, but the job of curating every single entry in a repository or database is beginning to overwhelm the curators of large datasets. This suggests a new role for current curation experts as 'meta-curators', checking the validity of curation done by the community. A number of recent initiatives, such as the WikiPathways [21] and GeneWiki [22] projects, are taking such an approach to the challenge of organizing large datasets.
Academic analysis has revealed that the community-sourced, 'anyone-can-edit' Wikipedia online encyclopedia is in fact as trustworthy as Encyclopaedia Britannica; the Wikipedia system also involves a small group of 'Wikipedians' who take an active role in cleaning up and standardizing public contributions [23]. While the formal scientific review and publication process will likely remain in place for some time to come, many elements of peer review are amenable to online fora. Additionally, annotation with quality community-generated commentary and analysis can add significant value to a piece of research.

VI. CONCLUSIONS
The rise of 21st century communications technologies is profoundly affecting the way the science of quantitative computational biology is practiced through digitization,
decentralization and democratization. By treating quantitative biological models as dynamic, user-generated content, and providing facilities for the expert modeling community to participate in the creation and curation of a free, openaccess model repository, the next-generation CellML Model Repository will provide a powerful tool for collaboration and will revolutionize the way cellular models are developed, used, reused and integrated into larger systems.
ACKNOWLEDGMENT The authors would like to acknowledge the Wellcome Trust, the Maurice Wilkins Centre for Molecular Biodiscovery and the IUPS Physiome Project.
REFERENCES
1. Butler D (2005) Science in the web age: joint efforts. Nature 438(7068):548-9
2. Berman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, Shindyalov IN, Bourne PE (2000) The Protein Data Bank. Nucleic Acids Res 28:235-242
3. Benson DA, Karsch-Mizrachi I, Lipman DJ, Ostell J, Wheeler DL (2007) GenBank. Nucleic Acids Res 35(Database issue):D21-D25
4. Lloyd CM, Halstead MD, Nielsen PF (2004) CellML: its future, present and past. Prog Biophys Mol Biol 85(2-3):433-50
5. Lloyd CM, Lawson JR, Hunter PJ, Nielsen PF (2008) The CellML Model Repository. Bioinformatics 24(18):2122-3
6. Yu T, Lawson JR, Britten R (2008) A distributed revision control system for collaborative development of quantitative biological models. Proc. ICBME 2008 [in print]
7. Grau BC, Horrocks I, Kazakov Y, Sattler U (2007) A logical framework for modularity of ontologies. Proc. IJCAI 2007
8. Kitano H (2002) Systems biology: a brief overview. Science 295(5560):1662-4
9. Terkildsen JR, Niederer S, Crampin EJ, Hunter P, Smith NP (2008) Using Physiome standards to couple cellular functions for rat cardiac excitation-contraction. Experimental Physiology 93:919-929
_________________________________________
10. Pandit SV, Clark RB, Giles WR, Demir SS "A Mathematical Model of Action Potential Heterogeneity in Adult Rat Left Ventricular Myocytes" 2001, Biophysical Journal , 81, 3029-3051. 11. Hinch R, Greenstein JR, Tanskanen AJ, Xu L, Winslow RL "A Simplified Local Control Model of Calcium-Induced Calcium Release in Cardiac Ventricular Myocytes" 2004 Biophysical Journal Volume 87 pp.3723-3736 12. Niederer SA, Hunter PJ, Smith NP. "A quantitative analysis of cardiac myocyte relaxation: a simulation study." Biophys J. 2006 Mar 1;90(5):1697-722. 13. Sachse FB, Moreno AP, Abildskov JA. "Electrophysiological modeling of fibroblasts and their interaction with myocytes."Ann Biomed Eng. 2008 Jan;36(1):41-56. Epub 2007 Nov 13. 14. Wilmalaratne S, Auckland Bioengineering Institute, The University of Auckland - personal communication 15. Nickerson D, Buist M. "Practical application of CellML 1.1: The integration of new mechanisms into a human ventricular myocyte model." Prog Biophys Mol Biol. 2008 Sep;98(1):38-51. 16. Cooling MT, Hunter P, Crampin EJ. "Modelling biological modularity with CellML." IET Syst Biol. 2008 Mar;2(2):73-9. 17. Endy D "Foundations for engineering biology" Nature. 2005 Nov 24;438(7067):449-53 18. Cai Y, Hartnett B, Gustafsson C, Peccoud J. "A syntactic model to design and verify synthetic genetic constructs derived from standard biological parts." Bioinformatics. 2007 Oct 15;23(20):2760-7 19. Beard DA et al. "CellML Metadata: standards, tools and repositories" Phil. Trans. R. Soc. B [in print] 20. Baader F, Calvanese D, McGuinness DL, Nardi D, Patel-Scneider PF "The Description Logic Handbook - Theory, Implementation and Applications" 2007 - Cambridge University Press New York, NY, USA 21. Pico AR, Kelder T, van Iersel MP, Hanspers K, Conklin BR, Evelo C. "WikiPathways: pathway editing for the people." PLoS Biol. 2008 Jul 22;6(7):e184. 22. Hoffman R "A wiki for the life sciences where authorship matters" Nat Genet. 2008 Sep;40(9):1047-51. 23. 
Giles J "Internet encyclopaedias go head to head" Nature. 2005 Dec 15;438(7070):900-1.
Author: James R. Lawson
Institute: Auckland Bioengineering Institute
Street: 70 Symonds Street
City: Auckland
Country: New Zealand
Email: [email protected]
Development of Soft Tissue Stiffness Measuring Device for Minimally Invasive Surgery by using Sensing Cum Actuating Method

M.-S. Ju1, H.-M. Vong1, C.-C.K. Lin2 and S.-F. Ling3

1 Dept. of Mechanical Engineering, National Cheng Kung University, Taiwan
2 Dept. of Neurology, Medical Center, National Cheng Kung University, Taiwan
3 School of Mechanical & Aerospace Engineering, Nanyang Technological University, Singapore
Abstract — Surgeon’s perception of palpation is limited in minimally invasive surgery, so devices for in situ quantification of soft tissue stiffness are necessary. A PZT-driven miniaturized cantilever serving simultaneously as actuator and sensor was developed. By using the sensing cum actuating method, the mechanical impedance functions of soft tissues can be measured. Bioviscoelastic materials, namely silicone rubbers and dissected porcine livers tested at room temperature or in a frozen state, were used to evaluate the performance of the device; the frozen porcine liver was employed to simulate liver cirrhosis. The results showed that the dissipative modulus around the resonant frequency might be used to quantify the stiffness of cancerous and normal liver tissue. The application of the device to robot-assisted surgery is discussed.

Keywords — Soft tissue stiffness, sensing-cum-actuating, minimally invasive surgery, mechanical impedance, electrical impedance.
I. INTRODUCTION

Minimally invasive surgery (MIS), also known as endoscopic surgery, is a relatively new and popular surgical approach. Surgical instruments such as endoscopes, graspers and blades can be inserted through small incisions, rather than making a large incision to provide access to the operation site. Distinct advantages of this technique are reduction of trauma, milder inflammation, reduced postoperative pain and faster recovery. However, reduced dexterity, a restricted field of vision and lack of tactile feedback are the main drawbacks of MIS. Liver cirrhosis is one of the most deadly diseases in Taiwan. Surgeons usually rely on palpation to assess the boundary of abnormal liver tissue during surgery, but this information is no longer available in MIS. It is believed that in situ estimation of soft tissue mechanical properties may improve the quality of MIS procedures such as image-guided laparoscopic liver operations. Recently, new techniques and instruments have appeared for in vivo determination of tissue properties. Dynamic testing methods such as indentation probes [1], compression techniques [2], rotary shear applicators [3] or aspiration [4] have been developed. However, all these methods need at least an actuator and a sensor to measure the dynamic response, i.e., applying force or torque while simultaneously measuring displacement or velocity of the tissue. This requirement makes the miniaturization of these testing systems a challenging technical problem. Recently, Ling et al. [5] proposed a new dynamic testing method, sensing cum actuating (SCA), in which the mechanical impedance of soft viscoelastic materials can be measured by detecting the input electrical current and voltage of an activated electro-mechanical actuator, without employing any traditional force or displacement sensors. The goals of this study were threefold: first, to develop a PZT-based miniaturized stiffness measuring device based on the SCA method; second, to test the system on biomaterials and on porcine livers in vitro; third, to develop a method for quantifying liver stiffness from the electrical and mechanical impedances. The feasibility of the method for MIS of the liver is also discussed.
II. METHODS

Fig. 1 shows the PZT-based soft tissue measurement system designed in this work. The outer diameter of the pen holder was 10 mm, and the PZT-coated cantilever and the ring base were machined from a single brass cylinder to avoid fracture at the root of the cantilever. The dimensions were 6.5 mm × 2.0 mm × 0.1 mm for the brass cantilever and 5.0 mm × 2.0 mm × 0.2 mm for the PZT. The height of the stinger was 3 mm, to prevent direct contact between the cantilever surface and the specimen.
Fig.1 Schematic diagram of the PZT-coated cantilever (left) and the assembly of the pen-like soft tissue measurement device (right)
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 291–295, 2009 www.springerlink.com
Fig. 2 Experimental setup of the soft tissue stiffness measurement system

From the two-port model (Fig. 3) of the system, the relationship between the input-port (electrical) variables and the output-port (mechanical) variables was given by:

\begin{bmatrix} E(j\omega) \\ I(j\omega) \end{bmatrix} =
\begin{bmatrix} t_{11}(j\omega) & t_{12}(j\omega) \\ t_{21}(j\omega) & t_{22}(j\omega) \end{bmatrix}
\begin{bmatrix} F(j\omega) \\ V(j\omega) \end{bmatrix}    (1)

where t_{ij}, i, j = 1, 2, were the elements of the transduction matrix T(jω) of the system, and F(jω) and V(jω) were the Fourier transforms of the force and velocity at the output port. If T(jω) and Z_e(jω) were known, the mechanical impedance of the biological specimen at the contact point, defined as Z_m(jω) = F(jω)/V(jω), could be computed as

Z_m(j\omega) = \frac{F}{V} = \frac{t_{22} Z_e - t_{12}}{t_{11} - t_{21} Z_e}    (2)
To determine the elements of the matrix T, one could mechanically constrain the output port of the device to obtain the first column, or leave it free to obtain the second column. Another approach would be to add different masses to the stinger and solve a linear algebraic equation. In this study, due to the miniaturization, it was difficult to calibrate the transduction matrix experimentally, so an alternative method suggested in [6] was employed, in which finite element simulations were adopted. After the transduction matrix was calibrated, the mechanical impedance Zm could be estimated. If no internal energy is generated or dissipated in the system, the determinant of the matrix T should be unity.
_______________________________________________________________
The experimental setup shown in Fig. 2 consisted of a function generator providing swept sinusoidal signals to a power amplifier driving the PZT, and a current amplifier to measure the potential induced by the direct piezoelectric effect. The potential, E, and current, I, were acquired simultaneously into a PC, real-time fast Fourier transforms were performed, and the electrical impedance Ze(jω) = E(jω)/I(jω), where j = √−1, was computed.
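As an illustrative sketch (not the authors' code), the electrical impedance computation described above, an FFT of the acquired potential and current followed by a pointwise ratio, can be written as follows; the drive frequency, sampling rate and resistive load are placeholder values:

```python
import numpy as np

def electrical_impedance(e, i, fs):
    """Estimate Ze(jw) = E(jw)/I(jw) from sampled potential e(t) and
    current i(t) via the FFT; returns frequencies and complex Ze."""
    freqs = np.fft.rfftfreq(len(e), d=1.0 / fs)
    return freqs, np.fft.rfft(e) / np.fft.rfft(i)

# Synthetic check with a purely resistive load: |Ze| should be ~50 ohm
# at the 1 kHz drive frequency.
fs = 20_000.0
t = np.arange(0, 0.1, 1.0 / fs)
e = np.sin(2 * np.pi * 1_000.0 * t)   # drive potential
i = e / 50.0                          # current through a 50-ohm resistor
freqs, ze = electrical_impedance(e, i, fs)
k = np.argmin(np.abs(freqs - 1_000.0))
print(round(abs(ze[k]), 3))           # 50.0
```

In the actual device the swept sine covers 10 Hz to 3 kHz, so Ze is obtained over the whole band in one acquisition.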
Fig. 3 Two-port model of the sensing-cum-actuating measurement system

All the finite element analyses were performed using the software package ANSYS. First, a modal analysis of the PZT-coated cantilever was performed; next, harmonic analyses of the PZT-coated cantilever were conducted to obtain the frequency response of the system. For simplicity, the stinger was modeled as an equivalent rectangular mass. Element SOLID5 was used to model the PZT layer and element SOLID45 to model the brass cantilever and adhesive layer (Loctite 3880). The material properties of the probe can be found in [8]. Fig. 4 shows the finite element mesh of the composite cantilever beam. After a convergence test, the element size was set to 0.13 mm and the output port was located 6 mm from the clamped end. Two simulations were performed to obtain the transduction matrix. In the first case, a spring with a spring constant of 1,000 N/m was connected to the stinger and the frequency response functions E1, I1, F1, V1 were computed from the harmonic analysis. In the second case, the spring constant was changed to 5,000 N/m, yielding the corresponding frequency response functions E2, I2, F2, V2. The transduction matrix T can then be calculated by:

T = \begin{bmatrix}
\dfrac{E_1 V_2 - E_2 V_1}{F_1 V_2 - F_2 V_1} & \dfrac{E_1 F_2 - E_2 F_1}{F_2 V_1 - F_1 V_2} \\
\dfrac{I_1 V_2 - I_2 V_1}{F_1 V_2 - F_2 V_1} & \dfrac{I_1 F_2 - I_2 F_1}{F_2 V_1 - F_1 V_2}
\end{bmatrix}    (3)
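To make the calibration algebra concrete, the sketch below (illustrative values only, not the paper's FEM data) builds a made-up lossless transduction matrix, synthesizes the two spring-loaded response cases, recovers T with eq. (3), and then applies eq. (2):

```python
import numpy as np

def transduction_matrix(E1, I1, F1, V1, E2, I2, F2, V2):
    """Eq. (3): recover the 2x2 transduction matrix at one frequency
    from two load cases with distinct mechanical terminations."""
    d1 = F1 * V2 - F2 * V1
    d2 = F2 * V1 - F1 * V2
    return np.array([[(E1 * V2 - E2 * V1) / d1, (E1 * F2 - E2 * F1) / d2],
                     [(I1 * V2 - I2 * V1) / d1, (I1 * F2 - I2 * F1) / d2]])

def mechanical_impedance(T, Ze):
    """Eq. (2): Zm = (t22*Ze - t12) / (t11 - t21*Ze)."""
    return (T[1, 1] * Ze - T[0, 1]) / (T[0, 0] - T[1, 0] * Ze)

# Made-up lossless transducer (det T = 1), evaluated at one frequency.
T_true = np.array([[2.0, 3.0], [1.0, 2.0]], dtype=complex)
w = 2 * np.pi * 2250.0

def load_case(k):
    """Response with an ideal spring of stiffness k on the stinger;
    the spring's mechanical impedance is Zm = k / (j*w)."""
    V = 1.0 + 0.0j
    F = (k / (1j * w)) * V
    E = T_true[0, 0] * F + T_true[0, 1] * V
    I = T_true[1, 0] * F + T_true[1, 1] * V
    return E, I, F, V

T = transduction_matrix(*load_case(1000.0), *load_case(5000.0))
E1, I1, _, _ = load_case(1000.0)
print(np.allclose(T, T_true))                 # True
Zm = mechanical_impedance(T, E1 / I1)
print(np.isclose(Zm, 1000.0 / (1j * w)))      # True
```

A real calibration would evaluate this at every frequency of the FEM frequency response functions; a single frequency point suffices here to show the algebra, and the determinant of the recovered T can be checked against unity as the text suggests.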
In the experiments, four specimens were used to evaluate the applicability of the soft tissue stiffness measurement system: two silicone gels (SC-725 and PDMS) and two biological soft tissues (fresh porcine liver, and frozen porcine liver thawed for 20 minutes at room temperature). To simulate the application of the stiffness measurement system in robot-assisted or traditional surgery, the porcine liver was measured either by holding the device by hand or by clamping it on a fixture. The driving signal for the PZT was a swept sinusoid with frequency ranging from 10 Hz to 3.0 kHz and an amplitude of 40 V. The initial indentation depth was set at 1 mm. Each sample was tested for five trials and the mean electrical impedance was computed. The mechanical impedance was then computed using the transduction matrix obtained from the aforementioned FEM simulations. The frozen porcine liver was employed to simulate cirrhotic liver tissue.
Fig. 4 Finite element model of the PZT-coated cantilever
Fig. 6 Effects of holding the stiffness device on measured electrical impedance
III. RESULTS

The measured electrical impedances of the four specimens, from the fixture-clamped tests, are compared in Fig. 5. Within 10 Hz to 2 kHz (not shown), the difference between specimens was insignificant, but around 2,250 Hz there were resonant frequencies (valleys) for SC-725, normal liver and frozen liver; PDMS was so stiff that no resonant frequency could be found. A local maximum appeared at a frequency higher than the resonant frequency. The difference between this local maximum and the resonance valley was defined as the peak-to-peak value of the electrical impedance. The mean peak-to-peak values of SC-725, normal liver and frozen liver were 0.478×10^4, 1.391×10^4 and 0.132×10^4 Ω, respectively. The peak-to-peak value increased with the compliance of the specimens.
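The peak-to-peak metric just defined can be extracted automatically from an impedance magnitude curve. The sketch below is illustrative only; the search band and the synthetic curve are assumptions, not the paper's data:

```python
import numpy as np

def peak_to_peak(freqs, z_mag, band=(2000.0, 3000.0)):
    """Difference between the local maximum above resonance and the
    resonance valley, searched within a band around the resonance."""
    sel = (freqs >= band[0]) & (freqs <= band[1])
    z = z_mag[sel]
    i_valley = int(np.argmin(z))               # resonance shows as a valley
    i_peak = i_valley + int(np.argmax(z[i_valley:]))
    return z[i_peak] - z[i_valley]

# Synthetic |Ze| curve: a valley at 2,250 Hz and a local maximum above it.
freqs = np.linspace(1000.0, 4000.0, 301)
z = (1.0
     - 0.4 * np.exp(-((freqs - 2250.0) / 60.0) ** 2)
     + 0.3 * np.exp(-((freqs - 2600.0) / 60.0) ** 2))
print(round(float(peak_to_peak(freqs, z)), 3))   # 0.7
```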
Fig. 7 Comparison of measured and simulated electrical impedance when tip was fixed
Fig. 5 Comparison of specimen electrical impedances; the resonant frequency of each sample is indicated by an arrow and the peak-to-peak value of the normal liver is defined

In Fig. 6 the effects of holding the device on the measured electrical impedance were compared. The peak-to-peak values of the hand-held samples were less than those of the clamped samples; for the normal porcine liver, the reduction of the peak-to-peak value in the hand-held test was about 74.2%. In Fig. 7, the measured electrical impedance of the device was compared with the impedance computed from the finite element simulations. The peak-to-peak value of the measured electrical impedance was less than that of the simulated one, and the resonant frequency of the simulated electrical impedance was lower than that of the measured one. In Fig. 8 the mechanical impedances of the porcine livers under different testing conditions were compared. In general, the magnitude decreased with frequency from 1,400 Hz to 2,200 Hz and increased with frequency above 2,400 Hz. Similar to the electrical impedance, the peak-to-peak values around the resonant frequency for the liver at room temperature can be read from the figure as 0.07145 Ns/m for the clamped test and 0.037 Ns/m for the hand-held test. For the frozen liver, it was difficult to define the peak-to-peak values for both test conditions. Fig. 9 shows that the imaginary modulus of the liver samples at room temperature had a peak around the resonant frequency for both hand-held and clamped tests.

Fig. 8 Mechanical impedances of normal liver and frozen liver under different holding conditions

Fig. 9 The imaginary part of the complex modulus of stiffness of the porcine livers tested under different conditions
IV. DISCUSSION

From the electrical impedance results, one may observe that the samples can be qualitatively separated into two groups: the soft materials (porcine liver at room temperature and SC-725) and the stiff materials (frozen porcine liver and PDMS). In a previous work, we found that the apparent Young’s moduli of the specimens were ordered as: liver < SC-725 < frozen liver < PDMS. The peak-to-peak values of the electrical impedance of the liver and SC-725 appeared to be inversely proportional to the apparent Young’s modulus. For PDMS there was no peak-to-peak value. This may be because the stiffness of PDMS was higher than that of the mini cantilever, so around the resonant frequency the beam deflection was small and the current induced by the direct piezoelectric effect was correspondingly small. On the other hand, the fresh porcine liver (at room temperature) and the silicone rubber SC-725 were less stiff than the mini cantilever, so at the resonant frequency the beam deflection was large, as was the current, and thus the impedance decreased for the same input voltage. In this work, the finite element method was employed to compute the transduction matrix. However, there were errors between the computed electrical impedance and the measured one. The errors might come from uncertainties in modeling the adhesive layer and the damping of the mini cantilever; further improvement of the structural damping in the beam model might reduce the amplitude error. Unlike in the experimental approach, the determinant of the transduction matrix was very close to one.
Unlike the electrical impedances, the mechanical impedances of the porcine liver specimens had a minimum around 2,500 Hz. Physically this means that, at this frequency, the same amplitude of sinusoidal force results in a higher amplitude of the sinusoidal velocity signal. The peak-to-peak value of the liver in the clamped test was larger than that in the hand-held test. The difference might be that the initial indentation depth of the hand-held test was greater than that of the clamped test (1 mm). It is well known that soft tissue such as liver has a stress-strain curve consisting of three regions: toe, linear and nonlinear. The apparent Young’s modulus in the toe region is much lower than that in the linear region, and it is very easy for the hand-held test to enter the linear region and yield a higher stiffness. From the complex modulus of stiffness (F(jω)/X(jω)) of the porcine livers, one may observe that the real part, or storage modulus, decreases monotonically with frequency, although small variations can be observed around 2,250 Hz for the liver tested at room temperature. However, significant variations of the imaginary part, or dissipation modulus, can be found around the same frequency. This reveals that the porcine livers at room temperature had higher damping, i.e. behaved more like a fluid, than the frozen liver. The imaginary modulus at the resonant frequency might be used as a quantitative index for assessing the stiffness of cancerous liver tissue. In this work, the performance of the soft tissue stiffness measurement system has been tested through in vitro experiments. The next stage is to design a system suitable for in vivo experiments, considering the control of the initial indentation, the sterilization of the probe, and the integration with robot-assisted surgery.

V. CONCLUSIONS

In this work, the sensing-cum-actuating method was adopted to develop a PZT-based soft tissue stiffness measurement system.
In vitro tests on porcine livers revealed that the dissipative modulus computed from the mechanical impedance could quantify the stiffness of normal and pathological tissues.
ACKNOWLEDGEMENT

This research was supported partially by a grant from the ROC National Science Council under contract NSC 95-2221-E-006-009-MY3.
REFERENCES
1. Mark P, Salisbury J (2001) In vivo data acquisition instrument for solid organ mechanical property measurement. Proc 4th Intl Conf on Medical Image Computing and Computer-Assisted Intervention.
2. Narayanan N, Bonakdar A, et al. (2006) Design and analysis of a micromachined piezoelectric sensor for measuring the viscoelastic properties of tissues in minimally invasive surgery. Smart Mater & Struct 15:1684-1690.
3. Valtorta D, Mazza E (2005) Dynamic measurement of soft tissue viscoelastic properties with a torsional resonator device. Medical Image Analysis 9:481-490.
4. Kauer M, Vuskovic V, et al. (2002) Inverse finite element characterization of soft tissues. Medical Image Analysis 6:275-287.
5. Ling S, Xie Y (2001) Detecting mechanical impedance of structures using the sensing capability of a piezoceramic inertial actuator. Sensors and Actuators A Physical 93:243-249.
6. Ling S, Wang D, Lu B (2005) Dynamics of a PZT-coated cantilever utilized as a transducer for simultaneous sensing and actuating. Smart Materials and Structures 14:1127-1132.
7. Fung Y (1993) Biomechanics: Mechanical Properties of Living Tissues. 2nd ed, Springer-Verlag, New York.
8. Vong A (2008) Development of Soft Tissue Stiffness Measuring Device for Minimally Invasive Surgery by using Sensing Cum Actuating Method. MS Thesis, National Cheng Kung University, Tainan, Taiwan.
Corresponding author: Ming-Shaung Ju, Dept. of Mechanical Eng., National Cheng Kung Univ., 1 University Rd., Tainan, Taiwan 701. E-mail: [email protected]
A Novel Method to Describe and Share Complex Mathematical Models of Cellular Physiology

D.P. Nickerson and M.L. Buist

Division of Bioengineering, National University of Singapore, Singapore

Abstract — With improved experimental techniques and computational power we have seen an explosion in the complexity of mathematical models of cellular physiology. Modern lumped parameter models integrating cellular electrophysiology, mechanics, and mitochondrial energetics can easily consist of many tens of coupled differential equations requiring hundreds of parameters. The reward of this increase in complexity is improved biophysical realism and an increase in the predictive qualities of the models. One of the most significant challenges with such models is being able to describe the model completely, in such a manner that the authors can share it with collaborators and, potentially, the scientific community at large. In this work we have developed methods and tools for specifying the complete description of mathematical models of cellular physiology using established community standards combined with some emerging and proposed standards. We term the complete description of the cellular model a ‘reference description’. Such a description includes everything from the mathematical equations and parameter definitions to numerical simulations, graphical outputs, and post-processing of simulation outputs. All of these are grouped into a hierarchy of tasks which defines the overall reference description structure and provides the documentation glue enabling the presentation of a complete story of the model development and application. Our reference description framework is based primarily on the CellML project, using annotated CellML models as the basis of the model description.
Being based on CellML allows the underlying mathematical model to be shared with the community using standard CellML tools, and allows tools capable of interpreting the annotations to present the complete reference description in various manners. Keywords — CellML, mathematical model description, Physiome Project, electrophysiology, bioinformatics.
I. INTRODUCTION We have witnessed a dramatic increase in the complexity of mathematical models of cellular physiology in recent years. Due to an increased availability of experimental data and computational power, model authors are now able to create biophysically based models of cellular physiology incorporating finer and finer details. Recent models contain multiple compartments and the transportation kinetics between them, as well as combining several different aspects
of cellular function into single models. The Cortassa et al. [1] model, for example, includes cellular electrophysiology, calcium dynamics, mechanical behavior, and mitochondrial bioenergetics, defined using fifty differential equations and more than 100 supporting algebraic expressions (cf. the four differential equations in the classic Hodgkin and Huxley [2] model). When dealing with such complicated mathematical models, it becomes very difficult to share the model with collaborators or the scientific community at large. Traditional peer reviewed journal articles restrict how much detail can be presented and require the translation of the model implementation into a format suitable for the particular journal. Such translation is an error prone process leaving much room for typographical mistakes when translating complex models. To address this deficiency, standard model encoding formats have been applied to the sharing and archiving of mathematical models of cellular physiology, the most notable standards being SBML [3, 4, http://sbml.org/] and CellML [5, 6, http://www.cellml.org/]. Both SBML and CellML address similar requirements for a machine readable and software independent model encoding format, but they each approach the issue from quite different perspectives. SBML developments have traditionally focused on representing models of biochemical pathways, whereas CellML has placed emphasis on representing related mathematical equations. These different approaches have resulted in two quite different model encoding standards, although there is some degree of compatibility between them – the mathematics can be translated from one to the other, but the biological concepts represented by a model may not be so easily translated in an automated fashion. Using these model encoding standards, it is possible to define mathematical models of cellular physiology in a machine readable and software agnostic format.
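As a rough sketch of the shape of such an encoding (the element names follow CellML 1.1 conventions, but the fragment is illustrative and not schema-validated; the model name and variables are hypothetical), a minimal single-component model skeleton can be assembled programmatically:

```python
import xml.etree.ElementTree as ET

CELLML = "http://www.cellml.org/cellml/1.1#"

# A toy single-component model skeleton; a real CellML model would
# also carry MathML equations and unit definitions.
model = ET.Element(f"{{{CELLML}}}model", name="toy_decay")
comp = ET.SubElement(model, f"{{{CELLML}}}component", name="membrane")
for var, units in [("V", "millivolt"), ("t", "millisecond"), ("tau", "millisecond")]:
    ET.SubElement(comp, f"{{{CELLML}}}variable", name=var, units=units)

ET.register_namespace("", CELLML)          # serialize with a default namespace
xml_text = ET.tostring(model, encoding="unicode")
print(xml_text)
```

The point of such machine readable encodings is that any conforming tool can recover the same component/variable structure, independent of the software that produced it.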
Models encoded in these standard formats can then be exchanged between scientists with confidence. The only potential loss of information in such an exchange is the “correctness” with which the tools used by each scientist interpret the model encoding [for a discussion of this problem, see 7]. While these standards provide for the specification of the mathematical models, further information is required in order to completely define the application of mathematical models
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 296–298, 2009 www.springerlink.com
in various scenarios. It is this additional information that is the subject of this manuscript.

II. METHODS

In this work, we utilize CellML for encoding mathematical models; the discussion below applies equally well to SBML. Metadata is used to provide additional, context-dependent data about the mathematical model; in CellML, the data represented are the definitions of the mathematical equations. This work also falls under the IUPS Physiome Project (http://www.physiomeproject.org/), and as such it is desirable that it be extensible to types of mathematical models and spatial scales other than those discussed below.

A. CellML

CellML is an XML-based model encoding standard that utilizes MathML (http://www.w3.org/Math/) to define mathematical equations and custom XML elements to define the variables used in the equations. CellML includes capabilities to define abstract hierarchies of mathematical components which can be connected together to form larger models of complete systems. These components can be defined locally within a model or imported from external models. Given the generic nature of CellML, it is applicable to a wide range of mathematical models covering the full spectrum of computational physiology [8–12]. The CellML project provides a freely accessible model repository (http://www.cellml.org/models/) as a valuable community resource [13]. With over 350 models in the repository, this provides an excellent demonstration of the range of models capable of being described using CellML. There are also several tools now available capable of utilizing CellML [for a recent review, see 14].

B. CellML Metadata

The CellML community is currently developing several metadata standards that greatly enhance model descriptions. The CellML Metadata specification (http://www.cellml.org/specifications/metadata/) provides a standard framework for the annotation of mathematical models with common data.
This includes annotations such as model authorship, modification history, literature citations, biological constituents, human readable descriptions, etc. Such metadata provides the core description of the model, allowing biological significance to be inferred. The CellML Simulation Metadata specification (http://www.cellml.org/specifications/metadata/simulations/) aims
to provide a standard framework for the description of specific numerical simulations. This provides for the instantiation of mathematical models into specific computational simulations. CellML Graphing Metadata (http://www.cellml.org/specifications/metadata/graphs/) defines a framework for the specification of particular observations to be extracted from simulation results. Graphing metadata also provides a mechanism for the association of external data with specific simulation observations. External data may consist of the experimental data from which the model was derived or the data used to validate the model. For the case of validating simulation tools rather than the mathematical models, the external data could be curated simulation output against which simulation tools can be tested. This therefore provides a quantitative validation of new simulation tools as opposed to the largely qualitative validation currently used [7]. All three of these metadata specifications build on the Resource Description Framework (RDF, http://www.w3.org/RDF/), providing a powerful technology for the annotation of models with no specific requirements on the serialized format of the model encoding. This makes it straightforward to annotate models serialized in XML documents stored on the local computer, models stored remotely on the Internet as XML documents, or even models stored in remote databases.

C. Reference Descriptions of Cellular Physiology Models

We have previously described an approach to annotating mathematical models of cellular physiology using the metadata standards described above [15, 16]. This approach makes extensive use of graphing and simulation metadata in order to completely define simulation observations in terms of the mathematical models, their parameterization, and the numerical methods used in performing the computational simulations.
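The RDF framework underlying these metadata specifications reduces every annotation to a (subject, predicate, object) statement keyed on a model's URI. The toy in-memory store below illustrates that shape; the URIs and values are hypothetical, and a real implementation would use an RDF library and the serializations defined by the specifications:

```python
DC = "http://purl.org/dc/elements/1.1/"          # Dublin Core predicates
MODEL = "http://example.org/models/toy_decay"    # hypothetical model URI

# RDF annotations as a set of (subject, predicate, object) statements.
triples = {
    (MODEL, DC + "creator", "A. Modeller"),
    (MODEL, DC + "title", "Toy decay model"),
}

def objects(subject, predicate):
    """All objects asserted for a given subject/predicate pair."""
    return sorted(o for s, p, o in triples if s == subject and p == predicate)

print(objects(MODEL, DC + "creator"))   # ['A. Modeller']
```

Because the triples are addressed by URI, the same annotations apply whether the model itself lives in a local XML file, on a web server, or in a remote database.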
We envision these simulation observations forming the basis of journal articles submitted for peer review, allowing the articles to focus on the novel developments or observations being presented rather than devoting much of the article content to a description of the mathematical model. From the model reference description, web-friendly presentations can be generated, as illustrated in Nickerson et al. [15, http://www.bioeng.nus.edu.sg/compbiolab/p2/], which provide a complete description of the mathematical models and all associated annotations. Such presentations of the models allow for significantly more detail to be provided in regard to the development and implementation of the models. In addition, as they are generated directly from the model implementation, there is no longer the possibility of
translation errors when translating model implementations into journal articles.
III. DISCUSSION

Annotated mathematical model descriptions using the community standards being developed by the CellML project provide the technology for completely describing the development of mathematical models. Supporting data, such as literature citations or experimental observations, can easily be incorporated into such model descriptions to substantiate modeling assumptions and justify parameter choices. Model authors using this framework are able to provide human-friendly presentations of model reference descriptions as supplements to peer-reviewed journal articles. Providing such supplements as part of a curated web repository assures the scientific community that the reference description will remain available, and lends some degree of support to the validity of the model encoding. This leaves the journal article free to focus on the novel aspects of the model development and outcomes rather than needing to devote a large portion of each article to basic model development and validation.

Work is currently underway to develop interactive presentations of the model reference descriptions to ensure relevant information is readily available to the various user communities. Such presentation environments must be sufficiently flexible to accommodate the different types of "views" users may desire from the underlying reference description. For example, following the presentation mode developed in Nickerson et al. [15], we are developing a more mathematically oriented view as well as a biologically focused view.

ACKNOWLEDGMENT

A*STAR BMRC Grant #05/1/21/19/383.

REFERENCES
1. Cortassa S, Aon MA, O'Rourke B, Jacques R, Tseng H-J, Marban E, Winslow RL. A computational model integrating electrophysiology, contraction, and mitochondrial bioenergetics in the ventricular myocyte. Biophys J 91(4):1564–1589, Aug 2006. doi: 10.1529/biophysj.105.076174.
2. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117(4):500–544, Aug 1952.
3. Finney A, Hucka M. Systems biology markup language: Level 2 and beyond. Biochem Soc Trans 31(Pt 6):1472–1473, 2003. URL http://www.biochemsoctrans.org/bst/031/bst0311472.htm.
4. Hucka M, Finney A, Sauro HM, Bolouri H, Doyle JC, Kitano H, Arkin AP, Bornstein BJ, Bray D, Cornish-Bowden A, Cuellar AA, Dronov S, Gilles ED, Ginkel M, Gor V, Goryanin II, Hedley WJ, Hodgman TC, Hofmeyr J-H, Hunter PJ, Juty NS, Kasberger JL, Kremling A, Kummer U, Le Novère N, Loew LM, Lucio D, Mendes P, Minch E, Mjolsness ED, Nakayama Y, Nelson MR, Nielsen PF, Sakurada T, Schaff JC, Shapiro BE, Shimizu TS, Spence HD, Stelling J, Takahashi K, Tomita M, Wagner J, Wang J, SBML Forum. The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics 19(4):524–531, Mar 2003. doi: 10.1093/bioinformatics/btg015.
5. Cuellar AA, Lloyd CM, Nielsen PF, Halstead MDB, Bullivant DP, Nickerson DP, Hunter PJ. An overview of CellML 1.1, a biological model description language. Simulation 79(12):740–747, 2003. doi: 10.1177/0037549703040939.
6. Nickerson DP, Hunter PJ. The Noble cardiac ventricular electrophysiology models in CellML. Prog Biophys Mol Biol 90(1-3):346–359, 2006. doi: 10.1016/j.pbiomolbio.2005.05.007.
7. Bergmann FT, Sauro HM. Comparing simulation results of SBML capable simulators. Bioinformatics 24(17):1963–1965, Sep 2008. doi: 10.1093/bioinformatics/btn319.
8. Nickerson DP, Nash MP, Nielsen PF, Smith NP, Hunter PJ. Computational multiscale modeling in the IUPS Physiome Project: modeling cardiac electromechanics. IBM J Res & Dev 50(6):617–630, 2006. doi: 10.1147/rd.506.0617.
9. Corrias A, Buist ML. A quantitative model of gastric smooth muscle cellular activation. Ann Biomed Eng 35(9):1595–1607, Sep 2007. doi: 10.1007/s10439-007-9324-8.
10. Schmid H, Nash MP, Young AA, Röhrle O, Hunter PJ. A computationally efficient optimization kernel for material parameter estimation procedures. J Biomech Eng 129(2):279–283, Apr 2007. doi: 10.1115/1.2540860.
11. Cooling MT, Hunter P, Crampin EJ. Modelling biological modularity with CellML. IET Syst Biol 2(2):73–79, Mar 2008. doi: 10.1049/iet-syb:20070020.
12. Corrias A, Buist ML. Quantitative cellular description of gastric slow wave activity. Am J Physiol Gastrointest Liver Physiol 294(4):G989–G995, Apr 2008. doi: 10.1152/ajpgi.00528.2007.
13. Lloyd CM, Lawson JR, Hunter PJ, Nielsen PF. The CellML model repository. Bioinformatics 24(18):2122–2123, Jul 2008. doi: 10.1093/bioinformatics/btn390.
14. Garny A, Nickerson DP, Cooper J, Weber dos Santos R, Miller AK, McKeever S, Nielsen PMF, Hunter PJ. CellML and associated tools and techniques. Philos Transact A Math Phys Eng Sci 366(1878):3017–3043, Sep 2008. doi: 10.1098/rsta.2008.0094.
15. Nickerson DP, Corrias A, Buist ML. Reference descriptions of cellular electrophysiology models. Bioinformatics 24(8):1112–1114, Apr 2008. doi: 10.1093/bioinformatics/btn080.
16. Nickerson D, Buist M. Practical application of CellML 1.1: The integration of new mechanisms into a human ventricular myocyte model. Prog Biophys Mol Biol 98:38–51, Jun 2008. doi: 10.1016/j.pbiomolbio.2008.05.006.

David Nickerson
National University of Singapore
7 Engineering Drive 1, Block E3A #04-15
Singapore 117574
[email protected]
New Paradigm in Journal Reference Management
Casey K. Chan1,2, Yean C. Lee3 and Victor Lin4
1 Division of Bioengineering, National University of Singapore, Singapore
2 Department of Orthopaedic Surgery, National University of Singapore, Singapore
3 Department of Biological Sciences, National University of Singapore, Singapore
4 WizPatent Pte Ltd, Singapore
Abstract — The activity of generating bibliographic data and the storage of the article (PDF) are usually separate activities. Desktop journal reference management software has been developed to manage bibliographic data, but the PDF files are usually managed separately or added on later as a special feature. Based on a strategy used in the tagging of MP3 files, we have developed a web-based application in which the bibliographic data is embedded in the PDF. Such a paradigm shift allows highly efficient web applications to be developed for the management of citations, bibliographic data, and documents.
Keywords — reference management, bibliographic data, software

I. INTRODUCTION

Life science research has undergone rapid expansion in both scope and depth in recent years. The phenomenal growth is reflected by the exponential increase in the number of citations indexed by Medline in the last ten years. Shown in Fig. 1 is the citation collection in Medline on February 21, 2008, grouped according to the publishing years of the indexed citations [1]. As the graph shows, there has been exponential growth in the number of life science journal articles published from 2000 to 2006. Rapid growth in life science research is only possible if information on the field is readily accessible. Recognizing the importance of open-access information, the US government has required publications originating from research funded by the National Institutes of Health to be made publicly available [2]. Shortly after the introduction of the bill, the European Commission proposed a similar recommendation enabling public access to publications that arise from EC-funded research [3]. Traditionally, citations for peer-reviewed publications are managed as a list independent of the corresponding collection of electronic articles (mostly PDFs) [4]. In this new era of research where information can be freely accessed, the old way of managing references and full-text articles as separate objects needs to be reconsidered. In this article, we introduce a new paradigm of managing the bibliographic data together with the corresponding electronic article (PDF) as a single object. This new approach allows for more versatile management of academic papers and increases the efficiency of research collaboration.
Fig. 1 Citations available in Medline as of February 21, 2008, sorted according to the years in which the original articles were published. Data for citations published in 2007 is incomplete and hence omitted.

II. OLD PARADIGM

A. Embedded Metadata

In 2004, James Howison and Abby Goodrum from Syracuse University demonstrated the importance of metadata for the effective management of music files [4]. Metadata refers to machine-readable information that describes the content of a file, much as a label describes the contents of a can of food. Examples of music metadata include song title, artist, and genre. Metadata facilitates the management of music files by allowing users to sort and group the files according to various fields without having to play the files. Music metadata can be stored directly in music files. The advantage of embedding metadata in a file is that whenever the file is moved or transferred, its metadata goes along with it. According to Howison and Goodrum, it is this tight coupling of music metadata to music files that makes their management "the best personal information management experience available to individuals today" [4].

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 299–301, 2009 www.springerlink.com

B. Metadata in Academic Papers

Bibliographic data is essentially metadata for academic papers, used to identify specific publications. References cited in academic papers allow authors to substantiate their arguments and enable readers to access the sources of information. In contrast to music files, bibliographic data is not usually embedded in the PDF of an academic paper; it is obtained separately from a database and managed independently of the PDF. The consequence is evident: a PDF cannot be identified without first opening the file, which makes managing PDFs troublesome. One can use file names to identify PDFs, but file names alone do not usually provide sufficient information because of the limit on the number of characters a file name can hold. Furthermore, publishers usually name their PDFs in ways that mean nothing to readers, which makes identifying a file even more difficult.

The process of reference management involves collecting references from multiple sources, obtaining the PDFs from publishers, organizing the references, and citing relevant ones while writing papers. After obtaining a list of references from multiple sources (often from the references found in research papers), a researcher needs to download the PDFs from journal databases. Usually, researchers use the Digital Object Identifier (DOI) to retrieve a PDF; a DOI is a link that points a reference to its publisher's website. The PDF is saved on a local drive, but because the bibliographic data is not embedded in the PDF, the file cannot be easily tracked. For some, it might be easier to download the same article again using the DOI [4]. As convenient as that might be, the DOI is not a replacement for the actual PDF, for a variety of reasons. First, the publishers' websites that DOIs link to always require users to log in.
Second, collaborators need to repeat the same process of downloading the PDF, which makes the process inefficient, given that the effort of the other collaborators is lost.

Reference managers ease the problem to some extent. Although reference management can be done manually, for a more organized approach researchers use reference managers such as EndNote, RefWorks, and WizFolio. Reference managers contain tools for researchers to collect references, obtain PDFs, organize them, and insert citations into documents. Reference managers can be broadly classified into two major groups according to the way they are accessed: desktop and web-based reference managers. Most desktop reference managers link references to PDFs stored on the hard drive via shortcuts. Whenever a PDF is moved or transferred, the bibliographic data is not transferred with it. The recipient needs to repeat the process of searching for the bibliographic data; all the effort of the previous users is lost. On the other hand, most web-based reference managers do not allow users to upload PDFs and hence do not assist in the management of the documents.

III. NEW PARADIGM

Since bibliographic data is integral to PDF management, we propose that the two should be managed as one object using soft embedment. Soft embedment refers to the embedment of metadata in a file using a file wrapper. As opposed to hard embedment (as in music files), the wrapper is not physically tied to the PDF but is linked to it via software pointers. For soft embedment to work, there must be a new platform that redefines the linkage between PDFs and references. The new platform links the PDF and the bibliographic data together: whenever an item is moved or transferred, the PDF and the corresponding bibliographic data move together as one item. The power of the Internet, which connects users together, can be harnessed, and the independence of the web from users' operating systems can be leveraged, to bring the management of references and files to a whole new level. Applying the concepts of the new paradigm, we have developed a web-based application [5] that allows users to store the bibliographic data and the PDF together as a single object. Whenever an item is shared, both the metadata and the PDF are shared together. By allowing the sharing of bibliographic data and PDF as a single entity, we believe research efficiency is improved because the effort of previous collaborators is shared.

IV. CONCLUSION

Bibliographic data and PDFs for journal articles have long been managed as separate entities. Because of this practice, scattered PDFs are impossible to identify unless one opens the files; the management of PDFs is therefore convoluted, especially where sharing is concerned.
Although PDFs allow some bibliographic data to be embedded, the current file format severely limits the amount of bibliographic data that can be stored in the file. We propose that soft embedment be used to attach bibliographic data to PDFs. This allows for more efficient management and manipulation of journal articles and their references.
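The soft-embedment idea can be sketched in a few lines: the bibliographic record is a wrapper object linked to the PDF by a software pointer, so record and file can travel and be shared as one item without modifying the PDF itself. This is a hypothetical sketch, not the authors' implementation; here a content hash stands in for whatever pointer mechanism the platform actually uses, and the class and field names are invented.

```python
import hashlib
import json
import pathlib

class SoftWrappedReference:
    """Illustrative 'soft embedment' wrapper: metadata plus a pointer
    to the PDF, kept outside the PDF file itself."""

    def __init__(self, pdf_path, **bibliographic_fields):
        data = pathlib.Path(pdf_path).read_bytes()
        # The software pointer linking the wrapper to the PDF content.
        self.pointer = hashlib.sha256(data).hexdigest()
        self.metadata = bibliographic_fields  # e.g. title, authors, DOI

    def matches(self, pdf_path):
        """True if the given file is the PDF this wrapper points to."""
        data = pathlib.Path(pdf_path).read_bytes()
        return hashlib.sha256(data).hexdigest() == self.pointer

    def export(self):
        """Serialize pointer and metadata so they can be shared as one item."""
        return json.dumps({"pointer": self.pointer, **self.metadata})
```

When an item is shared, `export()` travels with the PDF, so a collaborator receives the bibliographic data and the document together rather than repeating the lookup.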
REFERENCES
1. MEDLINE® Citation Counts by Year of Publication at http://www.nlm.nih.gov/bsd/medline_cit_counts_yr_pub.html
2. Revised Policy on Enhancing Public Access to Archived Publications Resulting from NIH-Funded Research at http://grants.nih.gov/grants/guide/notice-files/NOT-OD-08-033.html
3. Study on the Economic and Technical Evolution of the Scientific Publication Markets in Europe, p. 69
4. Howison J and Goodrum A (2004) Why can't I manage academic papers like MP3s? The evolution and intent of metadata standards. Colleges, Code and Copyright, 2004, Adelphi, Association of College & Research Libraries
5. WizFolio Homepage at http://www.wizfolio.com/
Author: Casey K. Chan
Institute: National University of Singapore
Street: Lower Kent Ridge Road
City: Singapore
Country: Singapore
Email: [email protected]

Author: Yean Chert Lee
Institute: National University of Singapore
Street: Lower Kent Ridge Road
City: Singapore
Country: Singapore
Email: [email protected]

Author: Victor Lin
Institute: WizPatent Pte Ltd
Street: Pandan Loop
City: Singapore
Country: Singapore
Email: [email protected]
Incremental Learning Method for Biological Signal Identification
Tadahiro Oyama1, Stephen Karungaru2, Satoru Tsuge2, Yasue Mitsukura3 and Minoru Fukumi2
1 Systems Innovation Engineering, The University of Tokushima, Tokushima, Japan
2 Institute of Technology and Science, The University of Tokushima, Tokushima, Japan
3 Graduate School of Bio-Application & Systems Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
Abstract — The electromyogram (EMG) is one of the biological signals generated by motions of the human body. The EMG carries information about the strength, smoothness, and level of a motion, and is therefore useful biological information for analyzing a person's motions. Recently, research on EMG has been very active. For instance, EMG is used as a control signal for electrical prosthetic arms, because it can be gathered from the remaining muscle of an upper-extremity amputee. In addition, pointing devices that use EMG have been developed. In general, EMG is measured from parts of the body with comparatively large muscle fibers, such as the arms and shoulders, but placing and removing electrodes on the arm and shoulder is inconvenient. Therefore, if wrist motions can be recognized using EMG measured at the wrist, the range of applications will extend further. We have previously constructed a wrist motion recognition system that recognizes seven types of wrist motion using a feature extraction technique named Simple-FLDA. However, some motions showed low recognition accuracy, and the accuracy differed significantly between motions and subjects, because EMG is highly individual and its repeatability is low. It is therefore necessary to address these problems. In this paper, we construct a system that can learn from incremental data to achieve online tuning. For this purpose, the Simple-FLDA algorithm is improved so that incremental learning becomes possible. Recognition experiments confirm that incremental learning raises the recognition accuracy.

Keywords — EMG, incremental learning, Simple-FLDA, Incremental Simple-FLDA
I. INTRODUCTION

The electromyogram (EMG) is one of the biological signals generated by motions of the human body. The EMG carries information about the strength, smoothness, and level of a motion, and is therefore useful biological information for analyzing a person's motions. However, the individual variation of EMG is large and its repeatability is low [1].

Recently, research on EMG has been very active. For instance, EMG is used as a control signal for electrical prosthetic arms, because it can be gathered from the remaining muscle of an upper-extremity amputee [2,3]. In addition, a pointing device that uses EMG has been developed [4]. In general, EMG is measured from parts of the body with comparatively large muscle fibers, such as the arms and shoulders; however, placing and removing electrodes on the arm and shoulder is inconvenient. Therefore, if wrist motions can be recognized using EMG measured at the wrist, the range of applications will extend further. In this research, we aim to develop a wristwatch-type device that consolidates the operation interfaces of various pieces of equipment. As an early stage, we propose a wrist motion recognition system that recognizes seven types of wrist motion using a feature extraction technique named Simple-FLDA [5,6,7]. Simple-FLDA is an approximation algorithm that calculates the eigenvectors of linear discriminant analysis sequentially by a simple iterative calculation, without matrix computations. Verification experiments with this system showed that some motions had low recognition accuracy, and that the accuracy differed significantly between motions and subjects; we believe the differences between individual EMGs are responsible. To construct a highly versatile system, the system must adapt to its users. In this paper, we construct a system that can learn from incremental data to achieve online tuning. For this purpose, the Simple-FLDA algorithm is improved; the improved algorithm, called Incremental Simple-FLDA, can perform incremental learning by updating the eigenvectors.

The rest of this paper is organized as follows. Section 2 describes the system configuration and the techniques used in the system. Section 3 explains the experimental details, results, and discussion. Finally, Section 4 concludes the paper with remarks on future work.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 302–305, 2009 www.springerlink.com
Fig. 1: Configuration of the EMG recognition system (input, signal processing, feature extraction & dimension reduction, and learning discrimination parts; initial data is processed by Simple-FLDA and incremental data by Incremental Simple-FLDA before neural network discrimination)

II. INCREMENTAL LEARNING SYSTEM

A. System configuration

The configuration of the EMG pattern recognition system proposed in this paper is shown in Fig. 1. The system consists of an input part, a signal processing part, a feature extraction & dimension reduction part, and a learning discrimination part.

In the input part, EMG is measured from the wrist with a four-pole electrode. We use surface electrodes in consideration of practical application; although surface electrodes are divided into the wet type (with electrolysis cream) and the dry type (without), a dry-type electrode is adopted because it is easy to handle. For convenience, the electrode is attached at the wrist, although placing it over the flexor carpi radialis and flexor carpi ulnaris muscles would allow the EMG to be measured more strongly and accurately. The wrist motions are set to seven states (neutral, up, down, right, left, inside, and outside), as shown in Fig. 2, and EMG is measured in each state.

Fig. 2: Wrist motion patterns

It has been reported that EMG frequencies from a few Hz up to 2 kHz are important [9]. In the signal processing part, we therefore extract the signal between 70 Hz and 2 kHz from the measured EMG, which also avoids the influence of commercial frequency noise. The extracted signal is then converted to a digital signal by A/D conversion. Next, in the feature extraction & dimension reduction part, we apply the fast Fourier transform (FFT) to the signal, and the eigenvectors are obtained by applying Simple-FLDA to the spectra produced by the FFT; this processing simultaneously reduces the dimensionality. Finally, in the learning discrimination part, wrist motions are discriminated using the eigenvectors by a classifier such as a neural network (NN).

When incremental data is input, the eigenvectors are updated using Incremental Simple-FLDA, and the NN recognizes motions using the updated eigenvectors. In this way, the system is expected to adapt to an individual under the influence of the incremental data: feature extraction and dimension reduction are performed with Simple-FLDA when the input EMG is initial data, whereas when the input EMG is incremental data, the eigenvectors are updated with Incremental Simple-FLDA to reflect its influence, and the weights of the NN are updated at the same time using the updated eigenvectors.

Next, we explain the Simple-FLDA and Incremental Simple-FLDA techniques.

B. Simple-FLDA

Fisher linear discriminant analysis is one technique of discriminant analysis. It finds eigenvectors that simultaneously maximize the variance between classes and minimize the variance within each class. However, its matrix calculation cost becomes huge and the calculation time very long. Simple-FLDA (Simple Fisher Linear Discriminant Analysis) is an approximation algorithm that finds these eigenvectors without matrix calculations, using a simple iterative procedure.

First, we describe the maximization of the between-class variance. The set of input vectors is defined as

$$V = \{v_1, v_2, \ldots, v_m\} \qquad (1)$$

where the mean value of all data is assumed to be zero. The mean vector $h_j$ of the data of each class is calculated, and the following calculations are carried out using $h_j$.
$$y_n = (\alpha_n^k)^T h_j \qquad (2)$$

$$f(y_n, h_j) = \begin{cases} h_j & \text{if } y_n \ge 0 \\ -h_j & \text{otherwise} \end{cases} \qquad (3)$$

where $\alpha_n^k$ is the approximation of the $n$-th eigenvector and $k$ is the index of the iteration. The threshold function (3) is summed over every class mean vector; by these formulas the vector converges to the eigenvector that maximizes the between-class variance. Eq. (2) can also be replaced by another form.

Next, we describe the minimization of the within-class variance. The vectors $x_j$ have zero mean within their class. Consider the positional relation between a data vector $x_j$ and an arbitrary vector $\alpha_n^k$: the direction in which the projection of $x_j$ has minimum length is the direction orthogonal to $x_j$. Therefore the direction $b_j$, obtained by removing the $x_j$ component from $\alpha_n^k$, can be expressed as

$$b_j = \alpha_n^k - (\hat{x}_j^T \alpha_n^k)\,\hat{x}_j \qquad (4)$$

$$\hat{x}_j = \frac{x_j}{\|x_j\|} \qquad (5)$$

This is the same as the Gram-Schmidt orthogonalization procedure. The actual quantity is obtained by normalizing the length of $b_j$:

$$f_i(b_j, x_j) = \|x_j\| \, \frac{b_j}{\|b_j\|} \qquad (6)$$

The averaging is carried out using all input vectors in each class during the iterative calculation. In eq. (6), components with large vector norms have greater influence, so the iteration is expected to converge to the direction that minimizes the within-class variance. The iterative calculation is given by

$$f_n^k = \sum_{i=1}^{c} N_i\, f(y_n, h_i) + \sum_{i=1}^{c} \sum_{j=1}^{N_i} f_i(b_j, x_j) \qquad (7)$$

$$\alpha_n^{k+1} = \frac{f_n^k}{\|f_n^k\|} \qquad (8)$$

where $c$ is the number of classes and $N_i$ is the number of data in class $i$; the factor $N_i$ in the first term equalizes the number of data in both terms. Thus, an arbitrary vector converges to the eigenvector by simultaneously maximizing the between-class variance and minimizing the within-class variance.

C. Incremental Simple-FLDA

Incremental Simple-FLDA makes it possible to update the eigenvectors obtained by Simple-FLDA according to incremental
data. First of all, the incremental datum is defined as $v_{m+1}$ and the previous overall mean as $\bar{v}$; $h_j$ denotes the mean vector of the data of each class, and $m$ is the number of all data up to the previous step. The new overall mean $\bar{v}'$ is obtained as

$$\bar{v}' = \frac{1}{m+1}\left(m\bar{v} + v_{m+1}\right) \qquad (9)$$

Next, the new class mean vector $h'_j$ is calculated as

$$h'_j = \frac{1}{M+1}\left(M h_j + (v_{m+1} - \bar{v}')\right) \qquad (10)$$

where $M$ is the number of data in the class; only the mean vector $h'_j$ of the class to which $v_{m+1}$ belongs is updated. Next, we introduce the following threshold functions, as in Simple-FLDA:

$$y_n = \alpha_n^T h'_j \qquad (11)$$

$$f(y_n, h'_j) = \begin{cases} h'_j & \text{if } y_n \ge 0 \\ -h'_j & \text{otherwise} \end{cases} \qquad (12)$$

where $\alpha_n$ is the $n$-th eigenvector calculated previously. Moreover, we carry out the orthogonalization as in Simple-FLDA and obtain the new orthogonal vector $b'_j$:

$$b'_j = \alpha_n - (\hat{x}'^T_j \alpha_n)\,\hat{x}'_j \qquad (13)$$

$$\hat{x}'_j = \frac{x'_j}{\|x'_j\|} \qquad (14)$$

where $x'_j$ has zero mean within the class to which $v_{m+1}$ belongs. The normalization of $b'_j$ is done as

$$f_i(b'_j, x'_j) = \|x'_j\| \, \frac{b'_j}{\|b'_j\|} \qquad (15)$$

The new eigenvector $\alpha'_n$ is obtained from $f(y_n, h'_j)$ and $f_i(b'_j, x'_j)$ as

$$\alpha'_n = \frac{m}{m+1}\,\alpha_n + \frac{1}{m+1}\,\frac{f(y_n, h'_j) + f_i(b'_j, x'_j)}{\left\| f(y_n, h'_j) + f_i(b'_j, x'_j) \right\|} \qquad (16)$$

Finally, the component of the previous eigenvectors is removed from the new data and the class mean vectors by Gram-Schmidt orthogonalization. The eigenvectors are updated sequentially by repeating these calculations.

III. INCREMENTAL LEARNING EXPERIMENT

A. Experimental details

In this section, incremental learning experiments are conducted with the EMG recognition system to verify the effectiveness of incremental learning. The system obtains the eigenvectors using Simple-FLDA in the initial state and
using Incremental Simple-FLDA during incremental learning. In this experiment, EMG measured from different people is used as the initial data and the incremental data, because the system must adapt to a user through that user's incremental data. Thus, EMG of subject A (male, 30 years old) is used as the initial data and EMG of subject B (male, 22 years old) as the incremental data. Moreover, we use EMG of subject B as the evaluation data. The recognition accuracy in the initial state is therefore expected to be extremely low; the system should then increase the recognition accuracy using the incremental data. As a comparison, we also ran the incremental learning experiment using eigenvectors that are not updated with the incremental data, i.e., the eigenvectors of the initial state. The numbers of initial, incremental, and evaluation data are 70 each (7 classes x 10 trials).

B. Experimental results

The incremental learning results are shown in Fig. 3; the horizontal axis is the number of incremental data and the vertical axis is the recognition accuracy. The lines labeled "Inc. SFLDA" and "Non Inc. SFLDA" are obtained with Incremental Simple-FLDA and with the method in which the eigenvectors are not updated, respectively. The recognition accuracy in the initial state is about 41%. In a previous study, when the initial learning and the evaluation were performed using the same person's data, a recognition accuracy of about 90% was obtained; the present initial result is extremely low because the initial learning data and the evaluation data come from different people. During incremental learning, the recognition accuracy increases gradually with the number of incremental data in both cases. However, the accuracy obtained with Incremental Simple-FLDA is higher than that obtained when the eigenvectors are not updated. Incremental Simple-FLDA is therefore effective for incremental learning in EMG recognition. Nevertheless, higher recognition accuracy is required for use in a real environment, and further improvement of the system is necessary.

IV. CONCLUSIONS

In this paper, we constructed an online tuning system by building an incremental learning function into the wrist motion recognition system based on wrist EMG. As the method, we proposed Incremental Simple-FLDA, which adds an incremental learning capability to the Simple-FLDA algorithm. The online tuning system was constructed by performing incremental learning with this algorithm, and a recognition experiment was carried out with the system. The results confirm the effectiveness of the proposed system. However, further improvement is necessary before various devices can be developed using this system. In the future, we aim at the completion of the system.
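As a concrete illustration of the incremental bookkeeping in Section II.C, the following plain-Python sketch implements the running-mean updates of eqs. (9) and (10) as reconstructed above: when one new sample arrives, the global mean and the mean of its class are updated without revisiting the stored data. The eigenvector refinement of eq. (16) is omitted, and all function and variable names are illustrative, not the authors'.

```python
def update_global_mean(v_bar, m, v_new):
    """Eq. (9): new global mean after the (m+1)-th sample v_new arrives,
    given the previous mean v_bar over m samples."""
    return [(m * a + b) / (m + 1) for a, b in zip(v_bar, v_new)]

def update_class_mean(h_j, M, v_new, v_bar_new):
    """Eq. (10): new mean of the class that v_new belongs to, with the
    sample re-centered by the updated global mean v_bar_new; M is the
    previous number of samples in that class."""
    return [(M * a + (b - c)) / (M + 1)
            for a, b, c in zip(h_j, v_new, v_bar_new)]
```

Only the class that receives the new sample has its mean updated, which is what keeps the per-sample cost constant and makes online tuning feasible.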
REFERENCES 1. 2.
3.
4.
5.
6. 0.8 80.0
7. Recognition accuracy [%]
0.7 70.0
8.
0.6 60.0
9. 50.0 0.5
0.4 40.0
Japanese Automatic Recognition System Society, ”Biometrics that Understands from this”, Ohm Company, 2001, in Japanese D.Nishikawa, W.Yu, H.Yokoi, Y.Kakazu, ”On-Line Learning Method for EMG Prosthetic Hand Controlling”, IEICE Trans. D-II, Vol.J82, No.9, pp.1510-1519, 1999, in Japanese O.Fukuda, N.Bu, T.Tsuji, ”Control of an Externally Powered Prosthetic Forearm Using Raw-EMG Signals”, T.SICE, Vol.40, No.11, pp. 1124-1131, Nov. 2004, in Japanese O.Fukuda, J.Arita, T.Tsuji, ”An EMG-Controlled Omnidirectional Pointing Device”, IEICE Trans. J87- D-II, No.10, pp.1996-2003, Oct.2004, in Japanese T.Oyama, Y.Matsumura, S.Karungaru, M.Fukumi, “Feature Generation Method by Geometrical Interpretation of Fisher Linear Discriminant Analysis”, Trans. of IEEJ, Vol.127-C, No.6 T.Oyama, Y.Matsumura, S.Karungaru, Y.Mitsukura, M.Fukumi, ”Construction of Wrist Motion recognition System”, Proc. of 2006 RISP InternationalWorkshop on Nonlinear Circuits and Signal Processing, pp.385-388, Hawaii, March 2006 T.Oyama, Y.Matsumura, S.Karungaru, Y.Mitsukura, M.Fukumi, ”Recognition of Wrist Motion Pattern by EMG”, Proc. of SICEICCAS’2006, pp.599-603, Busan, Korea, Oct. 2006 Y. Ishioka, ”Standard and Application of Stomatognathic Function Analysis”, dental Diamond Company, pp.260-273, 1991, in Japanese Shingo Kuroiwa, Satoru Tsuge, Hironori Tani, Xiaoying Tai, Masami Shishibori and Kenji Kita, "Dimensionality reduction of vector space model based on Simple PCA", Proc. Knowledge-Based Intelligent Information Engineering Systems & Allied Technologies (KES), Vol.2, pp.362-366, Osaka, Sep. 2001
Fig.3: Recognition accuracy in the incremental learning experiment (recognition accuracy [%] vs. number of incremental data, comparing Inc. SFLDA with Non Inc. SFLDA)
IFMBE Proceedings Vol. 23
Metal Artifact Removal on Dental CT Scanned Images by Using Multi-Layer Entropic Thresholding and Label Filtering Techniques for 3-D Visualization of CT Images

K. Koonsanit, T. Chanwimaluang, D. Gansawat, S. Sotthivirat, W. Narkbuakaew, W. Areeprayolkij, P. Yampri and W. Sinthupinyo

National Electronics and Computer Technology Center, Pathumthani, Thailand

Abstract — Metal artifact is a significant problem in computed tomography (CT): degradation of image quality is a direct consequence of metal artifacts in the image data. A number of papers and articles have been published on the subject. However, to the best of our knowledge, none of these approaches has been incorporated into commercial CT scanners, and all of them operate during the reconstruction process, which means the CT images are created at the same time as the metal artifacts are removed. In this research, we assume that CT images with metal artifacts are given and that we have no control over the reconstruction approach. Hence, we propose a new method that automatically removes metal artifacts from dental CT images as a post-processing step. The proposed technique consists of two main steps. First, a local entropy thresholding scheme is employed to automatically segment out the dental region in a CT image. Then, a label filtering technique removes isolated pixels, which are the metal artifacts, using the concept of connected pixel labeling. The algorithm has been tested on thirty sets of dental CT scanned images. The experimental results are compared with hand-labeled dental images and evaluated in terms of accuracy, sensitivity and specificity. The resulting sensitivity, specificity, and accuracy are 87.89%, 99.54%, and 99.21% respectively. The experiments demonstrate the robustness and effectiveness of the proposed algorithm.
The algorithm provides promising performance in detecting and removing metal artifacts from dental CT images. Therefore, automatic artifact removal can greatly help with the 3-D Visualization of CT images. Keywords — Artifact Removal, CT scanned image, Dentistry, Entropic Thresholding, Label Filtering
I. INTRODUCTION At present, computed tomography streak artifacts caused by metallic objects remain a challenge for the automatic processing of image data. A three-dimensional assessment of the bone architecture needs to be available for the planning of surgical placement of dental implants. Therefore, automatic artifact removal can greatly help with the 3-D object reconstruction process as shown in Fig.1.
Fig.1. 3-D Visualization of CT Images

II. THE PROPOSED ALGORITHM

Dental CT is a three-dimensional (3-D) scan built from a large set of 2-D X-ray images and can be used for dental and maxillofacial applications. A 3-D object is generated from a series of 2-D images by selecting regions of interest; only the regions of interest are used to generate the 3-D shape. In our case, the dental bones, which possess brighter gray-scale intensities than the other tissues, are our regions of interest. A user or dentist has to select an appropriate threshold to segment out the dental bones, so automatic threshold selection can greatly help the 3-D object reconstruction process. In this paper, we propose a new method for automatic segmentation based on an entropic thresholding scheme. While a traditional co-occurrence matrix records only the transitions within an image in the horizontal and vertical directions, in this work we also include the gray-scale transitions between the current layer and its prior layer, and between the current layer and its next layer, in our co-occurrence matrix. The proposed method can be used to automatically select an appropriate threshold range in dental CT images as shown in Fig.2. The proposed technique consists of two main steps. First, the local entropy thresholding scheme is employed to automatically segment out the dental region in a CT image. Then, the label filtering technique is used to remove isolated pixels, which are the metal artifacts, by using the concept of connected pixel labeling.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 306–309, 2009 www.springerlink.com

Fig.2. Metal Artifact Removal Algorithm on Dental CT Scanned Image

A. Entropy Thresholding

Our test data are dental CT (computed tomography) scan images, which consist of different cross-section images as shown in Fig.3 and Fig.4.

Fig.3. Position of CT slices (Layer 1, Layer 2, Layer 3) from top to bottom

Fig.4. Examples from the 1st, 2nd and 3rd layers

Because image pixel intensities are not independent of each other, an entropy-based thresholding technique is employed. Specifically, we implement a multi-layer entropy method which can preserve the structural details of an image. Our definition of a co-occurrence matrix is based on the idea that the neighboring layers should affect the threshold value. Hence, we define a new co-occurrence matrix that includes the gray-scale transitions between the current layer and its prior layer, as well as between the current layer and its next layer, as illustrated in Fig.6.

Fig.5. Right and bottom neighbors of a pixel (Layer k) in a co-occurrence matrix

Fig.6. The prior and the next layers (Layer k−1, Layer k, Layer k+1) in a co-occurrence matrix

Let F be the set of images, each of dimension P×Q, and let t_{ij} be an element of the co-occurrence matrix counting the ways in which gray level i is followed by gray level j:

    t_{ij} = \sum_{x=1}^{P} \sum_{y=1}^{Q} \delta,
    where \delta = 1 if
        ( F_k(x, y) = i and F_k(x, y+1) = j ) or
        ( F_k(x, y) = i and F_k(x+1, y) = j ) or
        ( F_k(x, y) = i and F_{k-1}(x, y) = j ) or
        ( F_k(x, y) = i and F_{k+1}(x, y) = j ),
    and \delta = 0 otherwise.                                        (1)

where F_k denotes the kth slice in the image set F. The number of occurrences t_{ij}, divided by the total number of transitions, defines a joint probability p_{ij}, which can be written as

    p_{ij} = t_{ij} / \sum_i \sum_j t_{ij}                           (2)

If s, 0 ≤ s ≤ L−1, is a threshold, then s partitions the co-occurrence matrix into four quadrants, namely A, B, C, and D, shown in Fig.7.

Fig.7. An example of the quadrant partition of the co-occurrence matrix

Let us define the following quantities:

    P_A = \sum_{i=0}^{s} \sum_{j=0}^{s} p_{ij},
    P_{ij}^{A} = t_{ij} / \sum_{i=0}^{s} \sum_{j=0}^{s} t_{ij},
    for 0 ≤ i ≤ s, 0 ≤ j ≤ s                                         (3)
    P_C = \sum_{i=s+1}^{L-1} \sum_{j=s+1}^{L-1} p_{ij},
    P_{ij}^{C} = t_{ij} / \sum_{i=s+1}^{L-1} \sum_{j=s+1}^{L-1} t_{ij},
    for s+1 ≤ i ≤ L−1, s+1 ≤ j ≤ L−1                                 (4)

The local entropies of the two quadrants are

    H_A^{(2)}(s) = -\frac{1}{2} \sum_{i=0}^{s} \sum_{j=0}^{s} P_{ij}^{A} \log_2 P_{ij}^{A}                 (5)

    H_C^{(2)}(s) = -\frac{1}{2} \sum_{i=s+1}^{L-1} \sum_{j=s+1}^{L-1} P_{ij}^{C} \log_2 P_{ij}^{C}        (6)

where H_A^{(2)}(s) and H_C^{(2)}(s) represent the local entropy of the background and the foreground respectively. Since two of the quadrants shown in Fig.7, B and D, contain information about edges and noise alone, they are ignored in the calculation. Because the quadrants that contain the object and the background, A and C, are considered to be independent distributions, the probability values in each case must be normalized so that each quadrant has a total probability equal to 1. The gray level corresponding to the maximum of H_A^{(2)}(s) + H_C^{(2)}(s) gives the optimal threshold for object-background classification:

    S_opt = \arg\max_s [ H_A^{(2)}(s) + H_C^{(2)}(s) ]               (7)

For the given original image shown in Fig.8, the result after multi-layer entropy thresholding is shown in Fig.10. For comparison, the result of local entropic thresholding alone is illustrated in Fig.9, where a greater amount of metal artifact remains than in Fig.10.

B. Connected Component Labeling

Connected component labeling, or label filtering, is used to remove misclassified pixels from the image shown in Fig.10. Label filtering removes isolated pixels using the concept of connected pixel labeling: it isolates the individual objects using the eight-connected neighborhood. The result after label filtering is shown in Fig.11.
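Taken together, the two post-processing steps (the multi-layer entropic thresholding of Eqs. (1)-(7) and the 8-connected label filtering) can be sketched in NumPy. This is an illustrative reimplementation, not the authors' MATLAB code; the function names and the `min_size` cutoff are our assumptions.

```python
import numpy as np
from collections import deque

def multilayer_entropic_threshold(volume, levels=256):
    """Sketch of Eqs. (1)-(7): build the multi-layer co-occurrence matrix
    from a (K, P, Q) integer volume and return the threshold s that
    maximizes H_A(s) + H_C(s)."""
    t = np.zeros((levels, levels), dtype=np.float64)
    K = volume.shape[0]
    for k in range(K):
        f = volume[k]
        np.add.at(t, (f[:, :-1], f[:, 1:]), 1)    # right neighbor (Eq. 1)
        np.add.at(t, (f[:-1, :], f[1:, :]), 1)    # bottom neighbor
        if k > 0:
            np.add.at(t, (f, volume[k - 1]), 1)   # prior-layer transition
        if k < K - 1:
            np.add.at(t, (f, volume[k + 1]), 1)   # next-layer transition
    best_s, best_h = 0, -np.inf
    for s in range(levels - 1):
        h = 0.0
        for quad in (t[: s + 1, : s + 1], t[s + 1:, s + 1:]):  # A and C
            total = quad.sum()
            if total > 0:
                p = quad[quad > 0] / total        # normalized quadrant (Eqs. 3-4)
                h -= 0.5 * np.sum(p * np.log2(p))  # local entropy (Eqs. 5-6)
        if h > best_h:
            best_s, best_h = s, h
    return best_s                                  # Eq. (7)

def label_filter(mask, min_size=50):
    """8-connected label filtering: drop components smaller than min_size
    (an assumed cutoff) from a binary mask via flood fill."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask)
    out = np.zeros_like(mask)
    rows, cols = mask.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0, c0] and not visited[r0, c0]:
                comp, q = [], deque([(r0, c0)])
                visited[r0, c0] = True
                while q:
                    r, c = q.popleft()
                    comp.append((r, c))
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < rows and 0 <= cc < cols
                                    and mask[rr, cc] and not visited[rr, cc]):
                                visited[rr, cc] = True
                                q.append((rr, cc))
                if len(comp) >= min_size:
                    for r, c in comp:
                        out[r, c] = True
    return out
```

Applying `label_filter(volume[k] > multilayer_entropic_threshold(volume))` per slice mirrors the two-step pipeline of Fig.2.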
Fig.11. Resulting image after connected component labeling

III. EXPERIMENT AND RESULTS

On a Pentium 4 (2.0 GHz CPU), using MATLAB R2006b, the whole algorithm takes approximately a few minutes for each set of dental CT scanned images. We used thirty sets of dental CT scanned images. The dental images and the hand-labeled ground truth data were obtained from the National Metal and Materials Technology Center (MTEC) and the Advanced Dental Technology Center (ADTEC). The performance of the metal artifact removal algorithm is conventionally measured using accuracy, sensitivity and specificity. The definitions of the three indices are as follows:

Table 1 Calculations of Accuracy

                          Reference Test Results
                             +         −
    New Test Results  +     TP        FP
                      −     FN        TN

    Sensitivity = TP / (TP + FN)                                     (8)

    Specificity = TN / (TN + FP)                                     (9)

    Accuracy = (TP + TN) / (TP + FP + TN + FN)                       (10)

Fig.8. Original image

Fig.9. Resulting image after local entropic thresholding

Fig.10. Resulting image after the integration of the prior-layer and the next-layer relationship
where TP = number of true positive specimens, FP = number of false positive specimens, FN = number of false negative specimens, and TN = number of true negative specimens.
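Eqs. (8)-(10) translate directly into code; the function name is ours:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from the confusion-matrix
    counts of Table 1, following Eqs. (8)-(10)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```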
The result of the final step, the 3-D object reconstruction process, is shown in Fig.12.
The algorithm has been tested on thirty sets of dental CT scanned images. The experimental results are compared with hand-labeled dental images and evaluated in terms of accuracy, sensitivity and specificity. The resulting sensitivity, specificity, and accuracy are 87.89%, 99.54%, and 99.21% respectively, as shown in Table 2. The experiments demonstrate the robustness and effectiveness of the proposed algorithm, which provides promising performance in detecting and removing metal artifacts from dental CT images.
Table 2 Comparison of the proposed Metal Artifact Removal Algorithm with hand-labeled data

Dataset       Sensitivity (%)   Specificity (%)   Accuracy (%)
Dataset 1         96.1512          94.4695          94.5731
Dataset 2         74.7131          99.9919          99.7589
Dataset 3         85.4737          99.9958          99.5002
Dataset 4         92.3677          99.9097          99.7638
Dataset 5         89.7465          99.9962          99.8632
Dataset 6         90.0708          99.8815          99.6517
Dataset 7         91.9688          99.7759          99.4221
Dataset 8         93.9094          99.9029          99.6731
Dataset 9         95.1024          99.1150          98.8994
Dataset 10        88.4291          99.9244          99.6448
Dataset 11        85.6647          99.9912          99.7158
Dataset 12        96.8132          99.9382          99.7784
Dataset 13        97.2216          99.3522          99.1856
Dataset 14        93.6635          99.9935          99.7231
Dataset 15        94.084           99.9987          99.8632
Dataset 16        27.1539          99.994           98.9462
Dataset 17        94.0643          99.9676          99.7727
Dataset 18        87.8229          99.9762          99.8913
Dataset 19        93.916           95.3802          95.3385
Dataset 20        94.877           99.8086          99.6062
Dataset 21        94.4785          99.9705          99.8431
Dataset 22        91.2738          99.9967          99.9515
Dataset 23        93.7396          99.7321          99.4825
Dataset 24        94.4697          99.9915          99.7633
Dataset 25        77.4510          99.9964          99.9880
Dataset 26        83.4507          99.9971          99.9799
Dataset 27        75.4415          99.9345          99.7870
Dataset 28        93.0513          99.9672          99.8325
Dataset 29        77.2059          99.9983          99.9906
Dataset 30        92.8948          99.4268          99.2183
MINIMUM           27.1539          94.4695          94.5731
MAXIMUM           97.2216          99.9987          99.9906
AVERAGE           87.88902         99.54581         99.21792
Fig.12. 3D rendering after Metal Artifact Removal
IV. CONCLUSIONS

Multi-layer entropic thresholding and label filtering methods for metal artifact removal on dental CT scanned images are presented in this paper. We introduce a new definition of the co-occurrence matrix which both preserves the structure within an image and captures the connection between adjacent images. The approach can be applied to automatically indicate an appropriate thresholding range in dental CT images. The algorithm provides promising performance in detecting and removing metal artifacts from dental CT images; therefore, automatic artifact removal can greatly help with the 3-D visualization of CT images.
ACKNOWLEDGMENT The authors would like to thank MTEC and ADTEC for providing CT data.
A Vocoder for a Novel Cochlear Implant Stimulating Strategy Based on Virtual Channel Technology Charles T.M. Choi1*, C.H. Hsu1, W.Y. Tsai1 and Yi Hsuan Lee2 1
Department of Computer Science and Institute of Biomedical Engineering, National Chiao-Tung University, Taiwan R.O.C. 2 Department of Computer and Information Science, National Taichung University, Taiwan R.O.C. *
[email protected]

Abstract — A cochlear implant provides profoundly hearing-impaired patients a chance to hear sound again. However, the limited number of electrodes is insufficient to provide enough hearing resolution, especially for tonal language and music. Virtual channel technology opens up the possibility of increasing the hearing resolution with the limited electrodes available and improving hearing quality for tonal language and music. In this paper, a vocoder implementation of a new speech strategy based on virtual channel technology is used to study the improvement. The test results show improved understanding and quality with virtual channels over traditional strategies such as CIS, especially for Mandarin and music.

Keywords — Cochlear Implant, Current Steering, Virtual Channel, Vocoder, Speech Strategy.
I. INTRODUCTION

A cochlear implant (CI) helps people with profound hearing impairment recover partial hearing by electrically stimulating the auditory nerves. A microphone picks up sound, a speech processor converts the input sound to frequency representations fitted to the tonotopic organization of the human cochlea, and electrical currents are delivered to the implanted electrode array to stimulate the corresponding auditory nerves [1], [2]. Currently available commercial CI devices provide 12~24 electrodes, but this limited number of electrodes cannot fully cover the auditory nerve fibers and is not sufficient for satisfactory stimulation quality. Current steering is designed to improve the stimulation resolution without increasing the implanted electrode count. By simultaneously controlling the input currents of adjacent electrodes in a suitable manner, intermediate channels (virtual channels) between the electrodes can be generated [3], [4]. This can increase the number of perceptual channels over the limited electrodes and improve listening quality, especially for tonal language and music. To benefit from virtual channels, new speech strategies are required to exploit this effect. In this paper, a new speech strategy based on virtual channel technology is designed and a vocoder is developed to
realize the new strategy and study the performance improvement.

II. METHODS

A. Spectrum representations in CI

To give an impression of how a CI user may perceive sound, the spectrum comparison in Fig. 1 shows the difference between the original signal and the stimulation channels in a CI. Because of the tonotopic organization of the human cochlea, locations along the basilar membrane represent the corresponding frequencies. In Fig. 1, the upper panel is an example spectrum of the original signal and the lower two are the stimulation channels in a CI. The bottom squares represent the electrodes in an electrode array; eight electrodes are present in this example. It can be observed that in most commercial CI devices the limited number of electrodes does not allow satisfactory tracing of the spectrum variation, resulting in inconsistent perception and thus compromised understanding. Virtual channels allow intermediate channels between adjacent electrodes to be generated, yielding stimulation channels similar to the original signal, which can provide better stimulation results for CI users and better hearing quality. This can be compared by the peak positions in the figure (arrows).

B. New speech strategy and vocoder

In this study, we propose a new speech strategy based on virtual channel technology for better efficacy. To assess the performance, an adapted vocoder is developed using the National Instruments LabVIEW programming environment [7]. Fig. 2 provides the block diagram of the vocoder, including the new speech strategy and the sound synthesis. In the simulation, digitized sound data stored on disk is used directly as input. Two main paths in the new strategy process the input sound data. Spectrum processing analyzes the spectrum information using the FFT and then selects the peak signal between adjacent electrodes; this is used to locate the virtual channels. Magnitude
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 310–313, 2009 www.springerlink.com
processing analyzes the magnitude of the spectral band between adjacent electrodes and then nonlinearly maps it to the CI user's dynamic range; this is used to set the magnitude of the virtual channel stimulation.
Fig. 2 Block diagram of the vocoder, including new speech strategy and sound synthesis.
Table 1 Mapping between frequencies and corresponding electrodes
Fig.1 Spectrum comparison between original signal and the CI channels. The arrows represent the stimulation channels by the electrodes.
After processing by the new speech strategy, noise and pure tonal carriers are used to synthesize the sound to emulate the hearing of CI users [8]. A band-pass filter (BPF) control module is used to control the bandwidth of the synthesis BPF. By adjusting the bandwidth of the synthesis BPF, the current spread from the electrodes or virtual channels can be represented. This makes the outputs from the vocoder closer to the hearing of CI users.
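The synthesis path described above can be sketched as follows. This is a simplified stand-in for the LabVIEW vocoder: the ideal FFT-domain band-pass filter, the fixed noise seed, the normalization, and the function name are our assumptions, with the bandwidth `bw` playing the role of the synthesis BPF that models current spread.

```python
import numpy as np

def synthesize_channel(channel_freq, magnitude, n=1024, fs=16000,
                       carrier="tone", bw=100.0):
    """Emulate one stimulation channel as either a pure tone at the
    (virtual-)channel frequency or band-limited noise around it, scaled
    by the channel magnitude. A narrower bw models less current spread."""
    t = np.arange(n) / fs
    if carrier == "tone":
        sig = np.sin(2 * np.pi * channel_freq * t)
    else:
        # band-limit white noise with an ideal BPF applied in the FFT domain
        noise = np.random.default_rng(0).standard_normal(n)
        spec = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        spec[np.abs(freqs - channel_freq) > bw / 2] = 0.0
        sig = np.fft.irfft(spec, n)
    return magnitude * sig / np.max(np.abs(sig))
```

Summing such channels, one per electrode or virtual channel, yields the acoustic simulation presented to normal-hearing listeners.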
Electrode ID        1      2      3      4      5      6      7      8
Frequency (Hz)    333    455    540    642    762    906   1076   1278
Electrode ID        9     10     11     12     13     14     15     16
Frequency (Hz)   1518   1803   2142   2544   3022   3590   4264   6665
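Using the center frequencies of Table 1, the spectrum-processing path (FFT of a frame, then peak picking between adjacent electrodes to locate a virtual channel) can be sketched as below. The framing, window, and return format are illustrative assumptions; the nonlinear mapping of the magnitude to the user's dynamic range is not modeled.

```python
import numpy as np

# Electrode center frequencies (Hz) from Table 1 (HiFocus array)
ELECTRODE_FREQS = [333, 455, 540, 642, 762, 906, 1076, 1278,
                   1518, 1803, 2142, 2544, 3022, 3590, 4264, 6665]

def virtual_channel_frame(frame, fs=16000):
    """For each of the 15 adjacent electrode pairs, pick the spectral peak
    between their center frequencies. The peak frequency locates the virtual
    channel; the peak magnitude would then set the stimulation level.
    Returns a list of 15 (peak_freq_hz, peak_magnitude) tuples."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    channels = []
    for lo, hi in zip(ELECTRODE_FREQS[:-1], ELECTRODE_FREQS[1:]):
        band = (freqs >= lo) & (freqs < hi)
        if not band.any():
            channels.append((float(lo), 0.0))
            continue
        idx = np.argmax(spectrum[band])
        channels.append((float(freqs[band][idx]), float(spectrum[band][idx])))
    return channels
```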
C. Test materials

To assess the performance of the new speech strategy, Mandarin and music materials were selected. For the Mandarin source, a phrase including four words with different tones is used: “ton(/) shi(\) ten(-) ya(/)”. For the music source, a clip of music played by a violin is used. All test materials are sampled at 16 kHz, so the signal bandwidth is 8 kHz according to the Nyquist-Shannon sampling theorem, which covers the typical hearing frequency range of CI users. The configuration of the HiFocus electrode array from Advanced Bionics Corporation [9] was used in this experiment. There are 16 electrodes, representing 16 fixed channels generated by the traditional strategy and 15 virtual channels between the 15 electrode pairs. The mapping between frequencies and the corresponding electrodes is listed in Table 1 [10].

Fig. 3 The original test Mandarin phrase sample: ton(/) shi(\) ten(-) ya(/). (a) Speech waveform. (b) Corresponding spectrum.

III. RESULTS

Fig. 3 shows the waveform and spectrum of the test Mandarin phrase “ton(/) shi(\) ten(-) ya(/)”. A typical Chinese word is usually composed of consonant, vowel and
tonal information. In the spectrum representation (Fig. 3b), it can be clearly observed that the four words comprise consonant and vowel parts, with the second word the most obvious. The tones differ between words: the rising tone of the first word is associated with a rising frequency, the falling tone of the second word with a falling frequency, and so on. This tonal information is important for the understanding of Mandarin and any other tonal language, such as Cantonese, so maintaining the tonal information for CI users of tonal languages is crucial. Fig. 4 shows the spectral comparison of the sounds synthesized using the traditional fixed channel strategy and the new strategy based on virtual channels. As Fig. 1 shows, for the fixed channel strategy (Fig. 4a) the spectrum is composed of fixed frequency components at the electrode locations. The tonal information is lost and all words sound like flat tones, which makes understanding more difficult. In the spectrum of the tonal synthesized sound for the new strategy (Fig. 4b), the stimulation sites can be adjusted to follow the frequency variation between adjacent electrodes, and the perceived frequency follows accordingly. In the perceptual test, the signal from the virtual channel strategy sounded much clearer and more natural than that from the traditional approach. Fig. 4c shows the spectrum using noise synthesis. By controlling the bandwidth of the synthesis BPF, the synthesized sound can be made easier to understand. Since a tonal carrier is a very narrow-band signal, the spectral spread of tonal synthesis is narrower and more concentrated than that
of noise synthesis and can be observed by comparing Fig. 4b and 4c, especially in the high frequency region. IV. DISCUSSION Traditional fixed channel strategy performs well for nontonal language now, however the fixed approach of stimulation cannot trace the frequency variation for tonal language. The proposed new speech strategy preserves the main spectral information since the main peaks of the spectrum between each electrode pairs are selected for stimulation and adopts the virtual channel technology to trace the frequency variation. So the simulation result reveals that the new speech strategy is superior to the traditional strategy. In addition to the preservation of tonal information, the consonant also sounded more natural than traditional approach. Since the consonant is a noise like signal, the peaks are also distributed randomly. Using the virtual channel technology, the signal spectrum can be represented easily and therefore sounds more natural. The simulation result shows that the new strategy also performs better than the traditional approach. V. CONCLUSIONS This paper presented a new speech strategy based on virtual channel technology to improve the understanding of tonal language. The acoustic simulation showed that it is superior to the traditional strategy. For the music listening, the new strategy also performed better but there is still room for improvement. The vocoder implementation can help to develop a new strategy without involving CI users which can simplify the development effort and reduce the development time. The new CI strategy can therefore be improved based on the result of the vocoder simulation. Systematic perceptual experiment will be performed to validate the new speech strategy in the future.
ACKNOWLEDGMENT

This research was supported in part by the National Health Research Institute (NHRI-EX97-9735EI) and the National Science Council, R.O.C. (NSC95-2221-E-009-366-MY3).

Fig. 4 Spectral comparison of the synthesized sounds. (a) Spectrum of the tonal synthesized sound using the traditional fixed channel speech strategy. (b) Spectrum of the tonal synthesized sound using the new speech strategy. (c) Spectrum of the noisy synthesized sound using the new speech strategy.
_______________________________________________________________
IFMBE Proceedings Vol. 23
_________________________________________________________________
REFERENCES

1. Spelman F (1999) The past, present, and future of cochlear prostheses. IEEE Eng Med Biol 18(3):27-33
2. Loizou P (1998) Mimicking the human ear. IEEE Signal Process Mag 15(5):101-130
3. Advanced Bionics Corporation (2005) Increasing spectral channels through current steering in HiResolution bionics ear users
4. Firszt, Koch JB, Downing DB et al (2007) Current steering creates additional pitch percepts in adult cochlear implant recipients. Otol and Neurotol 28:629-636
5. Peddigari V, Kehtarnavaz N, Loizou P et al (2007) Real-time LABVIEW implementation of cochlear implant signal processing on PDA platforms. Proc. IEEE Intern. Conf. Signal, Acoust. Speech Proc. II, Honolulu, USA, 2007, pp 357-360
6. Advanced Bionics Corporation (2005) HiRes90K: surgeon's manual for the HiFocus Helix and HiFocus 1j electrodes
7. Advanced Bionics Corporation (2006) SoundWave professional suite: device fitting manual, software version 1.4
8. Choi CTM, Hsu CH (2007) Models of virtual channels based on various electrode shape. Proc. 6th APSCI, Sydney, Australia, 20
9. Sit JJ, Simonson AM, Oxenham AJ et al (2007) A low-power asynchronous interleaved sampling algorithm for cochlear implants that encodes envelope and phase information. IEEE Trans Biomed Eng 54(1):138-149

Author: Charles T. M. Choi
Institute: Department of Computer Science and Institute of Biomedical Engineering, National Chiao-Tung University
Street: 1001, Ta Hsueh Road
City: Hsinchu 300
Country: Taiwan, R.O.C.
Email: [email protected]
Towards a 3D Real Time Renal Calculi Tracking for Extracorporeal Shock Wave Lithotripsy I. Manousakas, J.J. Li Department of Biomedical Engineering, I-Shou University, Kaohsiung County, Taiwan Abstract — Extracorporeal shock wave lithotripsy can fragment renal stones so that they can pass out during urination. Breathing can induce motion of the kidneys which results in reduced efficiency of the treatment. This study attempts to evaluate if a three dimensional method of stone tracking could be possible with existing hardware and software components. A gelatin phantom was imaged with ultrasound to create a three dimensional volume image. Subsequently, two dimensional images acquired in parallel and perpendicular directions to the scans that produced the volume image were registered within the image volume. It is shown that data reduction methods and optimized software can achieve processing times suitable for real time processing. Keywords — ESWL, renal stone, image analysis.
I. INTRODUCTION

Extracorporeal shock wave lithotripsy (ESWL) is the preferred method of treatment for renal calculi. It is a non-invasive treatment and has already been in routine clinical practice for many years. During treatment, the stone formation is comminuted into small fragments by exposing the stone to focused shock waves. The fragments, if they are of small size, can pass out spontaneously during urination. The most common practice is to localize the stone using fluoroscopy. The focal area of the shock wave generator is positioned in such a way that it coincides with the stone location. It is usually a painful procedure and has complications such as hematuria and injuries to healthy tissues near the treated area. Renal function may be reduced, although most of it is recovered after a period of time. Internal injuries are caused by factors such as a focal area larger than the stone, breathing-induced motion of the abdominal organs including the kidneys, involuntary movements of the patient, inaccurate localization, overtreatment and so on. Moreover, fluoroscopy, with its use of ionizing radiation, is not the ideal choice for localization purposes, and ultrasound is now available for the majority of commercial systems. Nevertheless, it is seldom used because it requires more training for the technicians and doctors and also more time during treatment for stone localization.
In recent years, methods for automatic renal stone tracking based on ultrasound imaging have been proposed [1-3]. These methods use stone position information from the images to reposition the shock wave focal area in real time. The benefits of such tracking methods include smaller stone fragments, reduced treatment time, reduced injuries to healthy tissues, reduced pain and less radiation exposure for the patient. The efficiency that could be achieved has been commented on by other independent researchers [4]. The main reasons why these treatment methods are not yet accepted and widely used today are that they are difficult to use and that other structures which look like stones to the system's image analysis software may be tracked erroneously. These problems mostly arise from the nature of the two dimensional ultrasound imaging used with these systems. For perfect stone tracking the stone should be visible in the image at all times, which can only be achieved if the imaging plane is perfectly aligned with the stone motion. Repositioning the patient on the system may improve the alignment, but this takes time, and the cumbersome procedure may have to be repeated throughout a treatment. As one can imagine, such treatments are not welcomed by doctors or patients. Moreover, if the alignment is poor and the stone gets out of the imaging plane, the software has difficulty deciding whether to track something else or to stop and wait for the stone to reappear at a close position. While three dimensional ultrasound imaging systems could certainly solve the above-mentioned problems, they are of high cost and not supported by any software system for stone tracking. In the research presented here, a method that uses pseudo three dimensional imaging is used in conjunction with classical two dimensional imaging. This method, although in an initial state, shows the potential of a three dimensional approach.
A study with a phantom shows that real time processing with current computer technology is possible. II. MATERIALS AND METHODS The method presented here uses a three dimensional image composed from individually acquired slices. This image volume acts as a reference for registration of subsequently
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 314–317, 2009 www.springerlink.com
Towards a 3D Real Time Renal Calculi Tracking for Extracorporeal Shock Wave Lithotripsy
acquired images. The study was performed using a simplified gelatin phantom.
A. Phantom Model

A simplified kidney stone phantom was used in this study for the collection of ultrasound images. The phantom was structured as a kidney with a clearly visible stone formation in the calyx and was constructed from widely available materials for single use. The kidney tissue was mimicked using material for gelatin-based desserts. Various other materials, such as agar, could have been used without affecting the results. Since these materials usually appear transparent in ultrasound images, scattering materials such as graphite powder are usually added. An easier-to-prepare method that we have been using for some time is the addition of a small portion of common wheat flour, first diluted in cold water and then added to the hot gelatin solution as the last preparation step before pouring into a mold. As could be expected, this produces a more irregular grain in the image, which could be claimed to be more realistic for a tissue-mimicking phantom. The exact proportions of the added materials and water are not critical to this study and could be altered to produce phantoms of various acoustic impedances, if necessary. The stone was mimicked with a small irregular natural stone positioned within a small balloon filled with water. This balloon was first fixed at a position within the mold, and the gelatin was then poured around it. The phantom was used after one day of refrigeration, which reduces the air bubbles that may be present after manufacturing.

B. Ultrasound Imaging

The constructed phantom was placed inside a water tank lined with materials to reduce reflections. A commercial ultrasound imaging system (Sonos 500, Hewlett Packard) was used for this study. Ultrasound imaging of the phantom was performed at 2.5 MHz and at a maximum depth of 12 cm. The ultrasound probe was mounted on an XYZ positioning system, and a small front part of the probe was immersed in the water. A drawing of the experimental configuration is shown in Fig. 1. The whole phantom volume was imaged using two different probe directions, with 1 mm spacing between successive slices. In a clinical case, a free-hand three-dimensional ultrasound image of the kidney could be acquired within a short breath-hold. During a treatment, the morphology of the kidney in the image is not expected to be strongly altered, so this image volume can be used as a reference. Within this volume, the position of the stone under treatment should be identified by a system user. A subsequently acquired two-dimensional image of the same area can then be registered to this volume.

Fig. 1 Drawing of the configuration for the phantom experiments.

C. Image Analysis

Software was written in both Matlab® (ver. 2007a, The MathWorks, Inc., USA) and Visual C++® (Visual Studio 2005, Microsoft, USA) for performance comparison. The software ran on a Windows XP® based PC with an Intel Pentium D® 930 CPU at 3 GHz and 4 GB of memory. Intel's Integrated Performance Primitives library (ver. 5.3, Intel, USA) was used to increase the performance of the C++ program. In a basic three-dimensional stone tracking method, the task would be to follow an already identified target through time and space. The processing time available to a realistic tracking system is at most the frame period of the ultrasound system. This is often about 15 frames per second, allowing about 66 ms for image analysis and the necessary motion of the system before the next image becomes available.

Fig. 2 Ultrasound image of the phantom. The region marked A depicts the 3D ROI for the registration and region B depicts the 2D template size.
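The inter-frame budget described above is easy to check with a quick calculation. The 15 fps rate and 42-slice volume are from this paper; the 0.28 ms per-slice registration time is the fastest figure reported later in the Results, and the function names are illustrative:

```python
# Check whether registering a 2D image against a whole volume fits in the
# time available between two ultrasound frames.

def frame_budget_ms(frame_rate_hz):
    """Time available between two consecutive frames, in ms."""
    return 1000.0 / frame_rate_hz

def volume_registration_ms(per_slice_ms, n_slices):
    """Total time to register one 2D image against every slice of a volume."""
    return per_slice_ms * n_slices

budget = frame_budget_ms(15)              # about 66 ms at 15 frames per second
total = volume_registration_ms(0.28, 42)  # about 12 ms with 1:5-resized data
fits_realtime = total < budget            # the whole-volume search fits
```

At the original image size (9.7 ms per slice), the same calculation gives over 400 ms per volume, which is why the paper recommends reducing the dataset.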
IFMBE Proceedings Vol. 23
I. Manousakas, J.J. Li
The proposed approach relates the tracking performance to the software's ability to register the most recently acquired two-dimensional image to the three-dimensional image that was initially acquired. The difference between the stone displacement found from the 3D and 2D images and the current position of the system represents the required motion for the system's shock wave focal area positioning motors.

D. Algorithm

Image registration was performed between each of the 2D images and each of the individual slices inside the acquired three-dimensional volumes. Normalized correlation coefficients were computed for the image registration [5]. The images were acquired as 640x480 pixel images, see Fig. 2. Each time, a template region of size 100x100 pixels from the two-dimensional image (2D template) was related to regions of interest (3D ROIs) of size 290x200 pixels on each slice in the three-dimensional volume image. Tests were also performed with the same images after being resized at 1:3 and 1:5 ratios. When the images had different resolutions, the data were interpolated to produce more slices and yield a volume with the same resolution along all axes. The relative positions of the 3D ROIs and the 2D templates are shown in Fig. 3. In the first case the 2D templates were extracted from the volume image, so they have the same direction. In the second case, the 3D ROIs and the 2D templates were acquired in a direction perpendicular to the volume scans. The numbers of 3D ROI images and 2D templates used were 42 and 20 respectively in the parallel scans case, and 45 and 20 respectively in the perpendicular scans case. The sizes of the 3D ROIs and the 2D templates were selected so as to contain the whole balloon area.

III. RESULTS

The results from the registration are shown in Fig. 4. Each line in the graphs shows the values achieved when a specific 2D template is compared with all the 3D ROIs. The normalized correlation coefficient lies in the range between zero and one. The ideal experiment in the parallel scans case, Fig. 4(a), shows that there is always a distinct peak of maximum height. When resizing is applied (1:5), distinct peaks still exist but with lower values. In the perpendicular scans case, where the acquisition directions differ, the peaks are lower, and with resizing as well, the peaks are lower still.
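The slice-by-slice normalized correlation search described in the Algorithm section can be sketched as below. This is a minimal NumPy illustration, not the authors' Matlab/Intel IPP implementation; the function names and the exhaustive double loop are choices made here for clarity:

```python
import numpy as np

def ncc_map(template, roi):
    """Normalized correlation coefficient of `template` at every position
    in `roi`; a perfect match scores 1.0 (Cauchy-Schwarz bounds |r| by 1)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((roi.shape[0] - th + 1, roi.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = roi[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = t_norm * np.sqrt((p ** 2).sum())
            out[y, x] = (t * p).sum() / denom if denom > 0 else 0.0
    return out

def best_slice(template, volume):
    """Register the 2D template against each slice of the 3D volume and
    return (index of best slice, peak coefficient)."""
    scores = [ncc_map(template, s).max() for s in volume]
    return int(np.argmax(scores)), max(scores)
```

In practice the exhaustive loop would be replaced by optimized correlation routines, which is what makes the 0.28 ms-per-slice timings reported below achievable.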
Fig. 3 The relative directions of the 3D ROIs (top row) and the 2D templates (bottom row) for the parallel scans case (left column) and the perpendicular scans case (right column). The dashed lines depict the direction in which the scans cut the balloon.
The processing times for the experiments (in ms) are shown in Table 1. These times represent the time needed for a single 2D template to 3D ROI registration.

IV. DISCUSSION

The results in Fig. 4 show that registration of the phantom images is ideal when the directions of the 3D ROI and the 2D template are the same and the images are practically identical. Furthermore, stronger values are achieved when the images are at the original size. The lines in the graphs also show that there are two distinct peaks within an area of higher values. The higher-value areas depict the balloon region, where higher similarity exists compared to the areas further away. As the balloon with the stone was constructed as a very symmetrical structure, two peaks appear in the graphs, representing positions with a similar appearance to the left and to the right of the stone. In a real case, tracking would be sufficient when the shock waves' focal area is positioned so that it intersects the calyx; therefore, small errors could be of little importance. The relative directions of the 2D templates and the 3D ROIs may reduce the correlation coefficient but do not drastically affect the registration results. In a clinical case, it is more likely that all the 2D and 3D scans would be parallel. Even so, factors such as patient movement or fan-shaped 3D image acquisition are expected to reduce the correlation coefficient.
The measured times show that processing is time-consuming with general-purpose software such as Matlab®. This software is often used for prototyping but is not recommended for a real-time application. Some difference in processing time is noticed between the parallel and perpendicular scans cases due to the different data sizes. Nevertheless, the shortest time achieved for a single slice-to-slice registration using the original-sized data was 9.7 ms, which is still too long for a real-time application. Reduction of the dataset by resizing, by skipping individual slices, or by any other appropriate method is highly recommended. As an example, with a 1:5 resizing of the data, registration against a whole 42-slice volume would require 42 times 0.28 ms, which is about 12 ms. Such a processing time is within the time available for real-time stone tracking. Further in-vitro and in-vivo studies are necessary to verify the applicability of the method in clinical conditions.

V. CONCLUSION

Using a simplified kidney stone phantom, we have shown that three-dimensional stone tracking can be achieved using a normalized correlation coefficient method. The time required by the proposed method is within the range of real-time processing.

Table 1 Single registration processing time (ms)

Resize ratio   Parallel scans (Matlab®)   Perpendicular scans (Matlab®)   Parallel scans (Intel® library)
1:1            230                        120                             9.7
1:3            15.9                       12.6                            0.65
1:5            5.9                        5.2                             0.28

Fig. 4 Cross-correlation figures for (a) the parallel scans case at original size, (b) the parallel scans case with 1:5 resizing, (c) the perpendicular scans case at original size, and (d) the perpendicular scans case with 1:5 resizing.

ACKNOWLEDGMENT

This research was partially supported by I-Shou University research grant ISU97-01-17.

REFERENCES

1. Orkisz M, Farchtchian T, Saighi D et al. (1998) Image based renal stone tracking to improve efficacy in extracorporeal lithotripsy. J Urol 160:1237-1240
2. Chang C C, Manousakas I, Pu Y R et al. (2002) In vitro study of ultrasound based real-time tracking for renal stones in shock wave lithotripsy: Part II - a simulated animal experiment. J Urol 167:2594-2597
3. Chang C C, Liang S M, Pu Y R (2001) In vitro study of ultrasound based real-time tracking of renal stones for shock wave lithotripsy: part 1. J Urol 166:28-32
4. Cleveland R O, Anglade R, Babayan R K (2004) Effect of stone motion on in vitro comminution efficiency of Storz Modulith SLX. J Endourol 18:629-633
5. Gonzalez R C, Woods R E (1992) Digital Image Processing (third edition). Addison-Wesley, Reading, Massachusetts

Author: Ioannis Manousakas
Institute: Dept. of Biomedical Engineering
Street: No.1, Sec. 1, Syuecheng Rd., Dashu Township
City: Kaohsiung County 840
Country: Taiwan
Email: [email protected]
A Novel Multivariate Analysis Method for Bio-Signal Processing

H.H. Lin1, S.H. Change1, Y.J. Chiou1, J.H. Lin2, T.C. Hsiao1

1 Institute of Computer Science/Institute of Biomedical Engineering, National Chiao Tung University, Taiwan
2 Department of Electronic Engineering, Kun Shan University, Taiwan
Abstract — Multivariate analysis (MVA) is widely used to process signal information, including spectrum analysis, bio-signal processing, etc. In general, Least Squares (LS) and Partial Least Squares (PLS) fall into an overfitting problem under ill-posed conditions, which means that the feature selection makes the model fit the training data well while the quality of prediction on the testing data is poor. However, the goal of these models is consistent prediction between testing and training data. Therefore, in this study we present a novel MVA model, Partial Regularized Least Squares (PRLS), which applies a regularization algorithm (entropy regularization) to the Partial Least Squares (PLS) method to cope with the problem mentioned above. In this paper, we briefly introduce the conventional methods and clearly define the PRLS model. The new approach is then applied to several real-world cases, and the outcomes demonstrate that, when calibrating data with noise, PRLS shows better noise-reduction performance and lower time complexity than the cross-validation (CV) technique and the original PLS method, which indicates that PRLS is capable of processing bio-signals. Finally, in future work we expect to evaluate two other regularization techniques in place of the one used in this paper to identify the performance differences.

Keywords — Multivariate Analysis, Partial Regularized Least Squares, Noise reduction
I. INTRODUCTION

Multivariate analysis (MVA) techniques are of great importance in the signal processing field, with applications in spectrum analysis [1], bio-signal and image processing [2, 3] and pattern recognition [4]. Typically, MVA can be classified into two categories: regression analysis and iterative methods. In regression analysis, Least Squares (LS) and Partial Least Squares (PLS) are the most commonly used methods. The iterative methods, best known through artificial neural networks (ANN), use the multilayer perceptron [5] as their main model. Although both regression and iterative methods have their unique properties and are suitable for different applications, past studies have demonstrated that integrating the two techniques has a significant impact on data analysis and have verified this with real-case data [7, 8]. Chen (1991) used a regression strategy, orthogonal least squares (OLS), to construct an ANN, the radial basis function network (RBFN) [10]. Moreover, Hsiao (1998) suggested that PLS can be implemented in a multilayer architecture as a back-propagation (BP) network [9]. However, the LS criterion is prone to overfitting under certain conditions: if the data are highly noisy, the results may still fit the noise. Regularization is one technique for overcoming the overfitting problem. Chen (1996) applied regularization to his algorithm, the regularized orthogonal least squares learning algorithm for radial basis function networks (ROLS based on RBFN) [11], and the prediction results demonstrated better generalization performance than the original non-regularized method. Therefore, following the concept of ROLS based on RBFN, we develop a novel calibration model called Partial Regularized Least Squares (PRLS) to overcome the overfitting problem that Hsiao's algorithm encountered.

II. METHODS AND MATERIALS

Before specifying the PRLS algorithm, we briefly describe the basic principle of PLS and its implementation in an ANN architecture. PLS regression is a widely used multivariate analysis method and is particularly useful when we need to predict a set of dependent variables from a large set of independent variables. The independent variable matrix $X_{n\times m}$ and the dependent variable matrix $Y_{n\times l}$ can both be decomposed into score and loading matrices:

$X_{n\times m} = X_1 + X_2 + \cdots + X_a + E = u_1 p_1^T + u_2 p_2^T + \cdots + u_a p_a^T + E = U_{n\times a} P_{a\times m}^T + E$   (1)

$Y_{n\times l} = Y_1 + Y_2 + \cdots + Y_a + F = v_1 q_1^T + v_2 q_2^T + \cdots + v_a q_a^T + F = V_{n\times a} Q_{a\times l}^T + F$   (2)
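The decomposition in Eqs. (1)-(2) can be realized with the standard NIPALS iteration. The sketch below is for a single dependent variable (PLS1) and is an illustration under the usual NIPALS formulation, not the authors' code; variable names loosely follow the equations:

```python
import numpy as np

def pls1(X, y, a):
    """PLS1 via NIPALS: extract `a` latent components, deflating X and y
    as in Eqs. (1)-(2), and return regression coefficients b so that
    predictions are X @ b."""
    X = X.astype(float).copy()
    y = y.astype(float).copy()
    m = X.shape[1]
    W = np.zeros((m, a))   # X weight vectors
    P = np.zeros((m, a))   # X loadings p_k
    q = np.zeros(a)        # y loadings q_k
    for k in range(a):
        w = X.T @ y
        w /= np.linalg.norm(w)       # unit weight vector
        u = X @ w                    # score vector u_k
        tt = u @ u
        p = X.T @ u / tt             # loading p_k
        qk = (y @ u) / tt            # y loading q_k
        X -= np.outer(u, p)          # residual E = X - u_k p_k^T
        y -= qk * u                  # residual F = y - q_k u_k
        W[:, k], P[:, k], q[k] = w, p, qk
    return W @ np.linalg.solve(P.T @ W, q)   # b = W (P^T W)^{-1} q
```

With the number of components equal to the rank of X this reproduces the least-squares fit; stopping earlier truncates the decomposition, which is the regularizing behaviour that the rest of this section builds on.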
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 318–322, 2009 www.springerlink.com

• Partial Least Squares (PLS)

Fig. 1 shows the schema of the PLS algorithm. By performing the algorithm in Fig. 1, ||E_{n×m}|| and ||F_{n×l}|| are minimized iteratively; when the number of iterations reaches a certain value a, or the examined residual is less than or equal to a certain threshold, the process terminates and the best generalized result is found. Fig. 2 illustrates the implementation of PLS in a multi-layer architecture as a BP network. Hsiao's contribution is to provide possible findings for the constituents within the convergent weight matrix and a more effective way to determine the necessary number of hidden nodes in the BP network.

Fig. 1 PLS algorithm

Fig. 2 Three layer PLS architecture

• Regularization

Generally, regularization is used to prevent overfitting and to minimize the error function of the residual. Essentially it involves adding some multiple of a positive definite matrix to an ill-conditioned matrix so that the sum is no longer ill-conditioned; this is equivalent to simple weight decay in gradient descent methods. The symbols A[u] > 0 and B[u] > 0 denote two positive functions of u, so that u can be determined by minimizing either A[u] or B[u]. To summarize, regularization uses Lagrange multipliers combined with a quadratic constraint to minimize the weighted sum A[u] + B[u], leading to an adequate solution for u.

• Partial Regularized Least Squares (PRLS)

In general, PLS calibration only minimizes the residuals ||E_{n×m}|| and ||F_{n×l}|| while decomposing the independent and dependent matrices. In the ideal situation, the calibration will approximate the desired output minimum. In most cases, however, undesired information, which we call "noise", hides within real-world data and interferes with the prediction, so PLS calibration may suffer from overfitting caused by the undesired noise within the data. As mentioned earlier, to prevent overfitting we introduce a regularization concept (entropy regularization) into PLS and rewrite the error criterion of PLS as: (3) (where q is the weighting vector which influences the output directly).

Fig. 3 demonstrates the schema of the PRLS algorithm.

Fig. 3 PRLS algorithm flow chart

In Fig. 3, we can see that PLS is used in two different phases; however, only the latter phase, which calculates the weight q, influences the output directly. Hence, we apply regularization only to the second PLS phase. Apart from integrating regularization into the original PLS, we also adjust the three-layer PLS architecture with the regularization method (Fig. 4).
Fig. 4 PRLS three layer calibration system

III. RESULTS

In previous work, the PRLS algorithm was shown to perform better than PLS on simulated data (sigmoid and polynomial functions and imitative spectrum data) under SCSP (self-calibration and self-prediction) and CV (cross-validation). In this study, we apply the PRLS method to real data, sound and blood glucose, to illustrate the performance of PRLS.

• Sound file: power-station ambience

In this experiment, 99 data points are used to predict the 100th data point in the power-station-ambience sound data. The results of the experiment follow.

Fig. 5-1 Power station ambience source data

Fig. 5-2 Correlation coefficient as a function of index of hidden node under SCSP

Fig. 5-3 Correlation coefficient as a function of index of hidden node under CV

Table 1 Optimal CV results for power station ambience prediction data

• Blood glucose data

Diabetes mellitus is one of the most common diseases today; blood glucose data can be analyzed to support control when the concentration is irregular. In this experiment, we select 37 data sets to demonstrate our purpose. Fig. 6-1 shows the blood glucose data with noise, and Figs. 6-2 to 6-7 show the results of calibration under SCSP and CV.

Fig. 6-1 Blood glucose data with noise
Fig. 6-2 Correlation coefficient as a function of executable iteration under SCSP

Fig. 6-3 RMSE as a function of executable iteration under SCSP
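The two figures of merit plotted throughout these results, the correlation coefficient and the RMSE between predicted and measured values, are standard and can be computed as follows (function names are ours):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted values."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def correlation(y_true, y_pred):
    """Pearson correlation coefficient between measured and predicted values."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])
```

A good calibration keeps the correlation high and the RMSE low and flat across iterations, which is the behaviour the plots compare between PRLS and PLS.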
Fig. 6-4 Network mapping constructed by the PRLS and PLS algorithms under SCSP

Fig. 6-5 Correlation coefficient as a function of executable iteration under CV

Fig. 6-6 RMSE as a function of executable iteration under CV

Fig. 6-7 Network mapping constructed by the PRLS and PLS algorithms under CV

Table 2 Optimal CV results for blood glucose data

IV. DISCUSSION

Table 3 is drawn from the results of our experiments. It is hoped that the prediction results have a high correlation coefficient, a small RMSE and little computation, which means that the correlation coefficient always stays high, the slope of the RMSE is not abrupt, and the time complexity is as low as possible. The table clearly shows that PRLS has better performance than the other methods.

Table 3 Compilation of real experimental results

V. CONCLUSIONS

In this study, we have integrated a regularization method (entropy regularization) into the PLS algorithm to form PRLS, a new MVA technique. Besides evaluating PRLS on simulated data, we have successfully applied the PRLS method to real-world problems, sound and blood glucose data. PRLS gives better calibration results, in both performance and time complexity, than the original method when calibrating data with large amounts of undesired noise. The results demonstrate that PRLS is suitable for analyzing bio-signals, whose raw data typically contain a lot of noise. Furthermore, we would like to extend this work by using two other regularization techniques (gradient-based and squared-error-based regularization) to enhance the multivariate analysis technique.

ACKNOWLEDGMENT

The authors would like to thank the National Science Council, ROC Taiwan, for financially supporting this study (NSC 94-2213-E-214-042 and NSC 97-222-E-009-121).

REFERENCES

1. Bhandare P, Mendelson Y, Peura RA, Janatsch G, Kruse-Jarres JD, Marbach R, Heise HM (1993) Multivariate determination of glucose in whole blood using partial least-squares and artificial neural networks based on mid-infrared spectroscopy. Appl Spectrosc 47:1214-1221
2. Möcks J, Verleger R (1991) Multivariate methods in biosignal analysis: application of principal component analysis to event-related potentials. Techniques in the Behavioral and Neural Sciences 5:399-458
3. Castellanos G, Delgado E, Daza G, Sanchez LG, Suarez JF (2006) Feature selection in pathology detection using hybrid multidimensional analysis. Annual International Conference of the IEEE EMBS, New York, USA, 2006
4. Huang KY (2003) Neural networks and pattern recognition. Wei-Keg Book Co. Ltd., ROC, Taiwan
5. Oja E (1982) A simplified neuron model as a principal component analyzer. J Math Biol 15:267-273
6. Martens H, Naes T (1996) Multivariate calibration. John Wiley & Sons, Great Britain
7. Wang CY, Tsai T, Chen HM, Chen CT, Chiang CP (2003) PLS-ANN based classification model for oral submucous fibrosis and oral carcinogenesis. Laser Surg Med 32:318-326
8. Chu CC, Hsiao TC, Wang CY, Lin JK, Chiang HH (2006) Comparison of the performances of linear multivariate analysis methods for normal and dysplasia tissues differentiation using autofluorescence spectroscopy. IEEE T Bio-Med Eng 53:2265-2273
9. Hsiao TC, Lin CW, Tseng MT, Chiang HH (1998) The implementation of partial least squares with artificial neural network architecture. IEEE-EMBS'98, Hong Kong, China, 1998
10. Chen S, Cowan CFN, Grant PM (1991) Orthogonal least squares learning algorithm for radial basis function networks. IEEE T Neural Network 2:302-309
11. Chen S, Chng ES, Alkadhimi K (1996) Regularized orthogonal least squares algorithm for constructing radial basis function networks. Int J Control 64:829-837
12. Hsiao TC, Lin CW, Chiang HH (2003) Partial least squares algorithm for weights initialization of the back-propagation network. Neurocomputing 50:237-247
Multi-Wavelength Diffuse Reflectance Plots for Mapping Various Chromophores in Human Skin for Non-Invasive Diagnosis

Shanthi Prince and S. Malarvizhi

Department of Electronics and Communication Engineering, SRM University, SRM Nagar - 603203, Tamil Nadu, Chennai, India
Abstract — Optical techniques have the potential to perform in-vivo diagnosis on tissue. Spectral characteristics provide useful information to identify tissue components, because different chromophores have different spectroscopic responses to electromagnetic waves of a certain energy band. The basis for this mapping method arises from the differences between the spectra obtained from normal and diseased tissue, owing to the multiple physiological changes associated with increased vasculature, cellularity, oxygen consumption and edema in tumour. Different skin and sub-surface tissues have distinct or unique reflectance patterns which help differentiate normal and cancerous tissues. An optical fibre spectrometer is set up for this purpose, which is safe, portable and very affordable relative to other techniques. The method involves exposing the skin surface to white light produced by an incandescent source. The back-scattered photons emerging from various layers of tissue are detected by the spectrometer, resulting in a tissue surface emission profile. For the present study, three different skin diseases, warts, moles and vitiligo, are chosen. The spectral data from the scan are presented as a multi-wavelength plot. Further, ratio analysis is carried out, in which the relative spectral intensity changes are quantified and the spectral shape changes are enhanced and more easily visualized on the spectral curves, thus assisting in visually differentiating the part affected by disease. The unique information obtained from the multi-wavelength reflectance plots makes the method suitable for a variety of clinical applications, such as therapeutic monitoring, lesion characterization and risk assessment.

Keywords — Multi-wavelength, diffuse reflectance, chromophores, ratio-analysis, non-invasive.
I. INTRODUCTION

Advances in the understanding of light transport through turbid media during the 1990s led to the development of technologies based on diffuse optical spectroscopy and diffuse optical imaging [1], [2]. There has recently been significant interest in developing optical spectroscopy as a tool to augment the current protocols for cancer diagnosis [3], as it has the capability to probe changes in the biochemical composition of tissue that accompany disease progression. A non-invasive tool for skin disease diagnosis would be a useful clinical adjunct. The purpose of this study is to determine whether visible/near-infrared spectroscopy can be used to non-invasively characterize skin diseases. Many benign skin diseases resemble malignancies upon visual examination. As a consequence, histopathological analysis of skin biopsies remains the standard for confirmation of a diagnosis. Visible/near-infrared (IR) spectroscopy may be a tool that could be utilized for the characterization of skin diseases prior to biopsy. A variety of materials in skin absorb mid-IR light (>2500 nm), thus providing insight into skin biochemistry; however, if the sample thickness is greater than 10-15 μm, mid-IR light is completely absorbed. Therefore, the diagnostic potential of mid-IR spectroscopy in vivo is limited. In contrast, near-IR light is scattered to a much greater extent than it is absorbed, making tissues relatively transparent to near-IR light, thus allowing the examination of much larger volumes of tissue [4] and offering the potential for in-vivo studies. The near-IR region is often sub-divided into the short (680-1100 nm) and long (1100-2500 nm) near-IR wavelengths, based upon the technology required to analyze light in these wavelength regions. At shorter near-IR wavelengths, oxy- and deoxyhemoglobin, myoglobin and cytochromes dominate the spectra, and their absorptions are indicative of regional blood flow and oxygen consumption. The purpose of this study is to determine whether the information obtained from visible/near-IR spectroscopy for a variety of skin diseases will enable us to characterize the tissues based on chromophore mapping and to use the method as a diagnostic tool.
II. MATERIALS AND METHODS

Optical spectra from tissue yield diagnostic information based on the biochemical composition and structure of the tissue. Different skin and sub-surface tissues have distinct or unique reflectance patterns which help differentiate normal and cancerous tissues [5].
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 323–326, 2009 www.springerlink.com
Light reflected from a surface consists of specularly reflected and diffusely reflected components. The intensity of the specular component is largely determined by the surface properties of the sample and is generally not of analytical utility. The intensity of the diffuse component, which includes the contributions from the absorbance of light by the specimen and the scattering of light by the specimen and the substrate, can be used to determine the concentration of the indicator species [6]. The schematic diagram of the visible/near-infrared (IR) spectroscopy system is shown in Fig. 1. It consists of a tungsten halogen light source (LS-1), a versatile white-light source optimized for the VIS-NIR (360-2500 nm), and the spectrometer (USB4000) [7]. The spectrometer consists of a 3648-element detector with shutter, high-speed electronics and interface capabilities. The USB4000 is responsive from 360-1100 nm. Acquisition of visible/near-IR data is straightforward. White light from the tungsten halogen lamp is brought to the skin via a reflectance probe (R400). The reflectance probe consists of a bundle of seven optical fibers: six illumination fibers and one read fiber, each 400 μm in diameter. The fiber ends are coupled in such a manner that the six-fiber leg (the illumination leg) is connected to the light source and the single-fiber leg is connected to the spectrometer. The light penetrates the skin, and water, hemoglobin species, cytochromes, lipids and proteins absorb this light at specific frequencies. The remaining light is scattered by the skin, with some light being scattered back to the fiber optic probe. The light is collected by the probe and transmitted back to the spectrometer for analysis.
Fig. 1 Schematic diagram of the visible/near-infrared (IR) spectroscopy system based on diffuse reflectance
III. ACQUISITION OF SPECTRA AND ANALYSIS

For the present study, three different skin diseases, warts, moles and vitiligo, are chosen. Warts are small skin growths caused by viral infections; there are over 100 types of human papilloma virus (HPV). Some warts share characteristics with other skin disorders such as molluscum contagiosum (a different type of viral skin infection), seborrhoeic keratosis (a benign skin tumor) and squamous cell carcinoma (a skin cancer), so it is important to distinguish and diagnose them. A mole (nevus) is a pigmented spot on the outer layer of the skin, the epidermis. Most moles are benign, but atypical moles (dysplastic nevi) may develop into malignant melanoma, a potentially fatal form of skin cancer, and congenital nevi are more likely to become cancerous. Vitiligo is a zero-melanin skin condition, characterized by the acquired, sudden loss of the inherited skin color. The loss of skin color yields white patches of various sizes, which can be localized anywhere on the body. However, not all white skin patches are vitiligo; there are other conditions and diseases associated with white skin, called leucoderma. Clearly, it seems mandatory to make the correct diagnosis. Malignant melanoma (MM) is another skin cancer which can be very dangerous if not recognized early. These tumors can develop in existing moles, but they can also arise entirely anew as pigmented as well as non-pigmented tumors. Early recognition and excision are important for the outcome. The observation that melanoma is more frequent in patients with vitiligo originates from a study which included 623 Caucasian patients with melanoma at the Oncology Clinic of the Department of Dermatology at the University of Hamburg, Germany [8]. Some individuals with melanoma develop patches of white skin in the vicinity of their melanoma or after their tumor has been excised. In this context it seems important that these white patches are not vitiligo.
This skin shows a very different molecular biology and biochemistry compared to true vitiligo [9]. Each reflectance spectrum is acquired by an Ocean Optics USB4000 spectrometer with a spectral resolution of 3.7nm. Spectra are acquired and recorded in the360 -1100 nm ranges using the system described in the previous section. Firstly, white light is directed into a portion of the skin afflicted with the skin disease, the diffusely reflected light is collected, thereby producing a condition spectrum. Next, the same light is directed into a control skin portion of the patient which is not afflicted with the skin disease. A spectrum is taken of an unaffected skin portion as a control from each patient. Prior to obtaining the readings, the subject's skin and the end of the probe are cleansed with 70% alcohol. The fiber optic probe is then positioned 1.0 mm from the measure-
IFMBE Proceedings Vol. 23
_________________________________________________________________
Multi-Wavelength Diffuse Reflectance Plots for Mapping Various Chromophores in Human Skin for Non-Invasive Diagnosis
cannot be discerned by only looking at the original spectrum. The values of the ratio plot are higher in the lower and higher wavelengths, except for the visible region. Fig. 3 shows the spectra of mole and the control skin along with the reflectance ratio plot. Except for a slight decrease in the reflected intensity for the mole region nothing specific can be obtained from the original spectra. 4
2
x 10
1.5
Normal Skin
Reflectance Ratio
1
1
0 300
IV. RESULTS AND DISCUSSION
400
500
600
700
800
900
1000
Reflectance ratio
Mole
Reflected Intensity (a.u.)
ment site and data acquired. A plot of the amount of light backscattered at each wavelength (the spectrum) is computed. Measurements are rapid, non-destructive and noninvasive The ratio technique is used to aid spectra interpretation. In the ratio analysis technique, the lesional spectra are divided by the corresponding spectra of the normal neighboring skin. In this way, the relative spectral intensity changes are quantified and the spectral shape changes are enhanced and more easily visualized on the spectral curves. As a consequence, these differences can be used to identify or diagnose a skin disease by comparing the visible/near-IR spectrum of a control region to a spectrum taken of the region of interest. For the present study, spectrum is obtained from wart, mole and vitiligo skin. Also, for each case a control spectrum is obtained. The control spectrum and the disease spectrum is compared at wavelengths corresponding to visible/ near-IR absorption by oxyhemoglobin, deoxyhemoglobin, water, proteins, lipids or combinations thereof.
325
0.5 1100
Wavelength (nm)
As mentioned above, three different skin abnormalities are studied viz. wart, mole and vitiligo regions. Fig. 2 shows the plot of the original reflectance spectra of wart and the control skin along with the reflectance ratio between the wart region and the normal skin. The wart region shows very low reflectance but the shapes of the two curves are visually same. By obtaining the ratio spectrum we observe that there is a valley around 610nm which is unique to wart and which
Fig.3 Reflected Intensity and Reflectance ratio spectra for the mole and the normal skin
By observing the reflectance ratio plot we find a valley at around 580nm which is specific to that particular mole type. The ratio value is more or less equa1 to 1, indicating that there is a close resemblance of mole to the control spectra. Fig. 4 shows the reflectance spectra of vitiligo and the control skin. The absolute value of the ratio spectrum is larger than 1, indicating that the reflected intensity for a
4
x 10
1.2 1.8
18000
Normal Skin
Normal Skin
Reflectance ratio
1
0.6
0.5
0.4
400
500
600
700
800
900
1000
0.2 1100
Reflected Intensity (a.u.)
0.8
1.5
0 300
Vitiligo skin
1.7
1
Reflectance ratio
Reflected Intensity (a.u.)
16000
Wart
2
Reflectance Ratio
14000
1.6
12000
1.5
10000
1.4
8000
1.3
6000
1.2
4000
1.1
2000
1
0 300
Wavelength (nm)
500
600
700
800
900
1000
0.9 1100
Wavelength (nm)
Fig.2 Reflected Intensity and Reflectance ratio spectra for the wart and the normal skin
_______________________________________________________________
400
Reflectance ratio
2.5
Fig.4 Reflected Intensity and Reflectance ratio spectra for the vitiligo and the normal skin
IFMBE Proceedings Vol. 23
_________________________________________________________________
326
Shanthi Prince and S. Malarvizhi
vitiligo skin is higher than that for the normal skin. Since vitiligo corresponds to region of zero-melanin skin it has little absorption and hence maximum reflectance. The ratios with values less than 1 indicate that the lesional reflectance is lower than the surrounding normal skin. The numerical ratio values quantify this difference as a function of wavelength. Fig.5 shows the three dimensional multi-wavelength reflectance plots along the region of wart. As seen in the Fig. 2 the intensity is much lower when compared to the neighboring control skin. This plot along with the reflectance ratio plots will aid in the visible-NIR spectroscopy to be used as a diagnostic tool for detecting various skin pathologies.
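The ratio analysis described above (dividing the lesional spectrum by the spectrum of the neighbouring normal skin, then searching the ratio for characteristic valleys such as the one near 610 nm for the wart) can be sketched in a few lines. This is an illustrative NumPy sketch with synthetic spectra, not the authors' code; the function names and the Gaussian-shaped dip are our own assumptions.

```python
import numpy as np

def ratio_spectrum(lesion, control, eps=1e-9):
    """Divide the lesional spectrum by the control (normal-skin) spectrum
    to quantify relative intensity changes and enhance shape changes."""
    return np.asarray(lesion, float) / (np.asarray(control, float) + eps)

def valley_wavelength(wavelengths, ratio):
    """Wavelength at which the ratio spectrum dips to its minimum."""
    return wavelengths[np.argmin(ratio)]

# Synthetic spectra over the 360-1100 nm acquisition range: a lesion
# spectrum equal to the control except for a 30% dip centred at 610 nm
# (mimicking the wart valley discussed in the text).
wl = np.arange(360.0, 1101.0)                  # nm, 1 nm steps
control = np.full_like(wl, 1000.0)             # flat control spectrum (a.u.)
lesion = control * (1.0 - 0.3 * np.exp(-((wl - 610.0) / 15.0) ** 2))

r = ratio_spectrum(lesion, control)
print(valley_wavelength(wl, r))                # valley located at 610 nm
```

On real spectra the control is not flat, so the valley emerges only after the division, which is exactly why the ratio technique is useful.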
Fig. 5 Multi-wavelength reflectance plot along the wart region (photon counts (a.u.) versus wavelength (nm) and distance (mm), for the normal and wart regions)

The visible/near-IR spectra of the different skins presented here exhibit strong absorption bands from water and a number of weak but consistent absorption bands arising from oxy- and deoxy-hemoglobin, lipids and proteins. However, visual examination of the spectra did not show distinct differences in these spectral features that could be used to distinguish between spectra of diseased and healthy skin.

V. CONCLUSIONS

The spectrum depends on the depth and the type of chromophore contained in the inclusion. An increase in the concentration of a given molecule may produce different contrast, independently of the depth, depending on the characteristics of the skin layer where this change occurs. Each peak in the spectrum can be assigned to a specific compound found in the skin. Visually, strong absorption bands arising from the OH groups of water dominate the spectrum. However, much information is present in the weaker spectral features. For instance, the relatively strong absorption feature at 550 nm arises from hemoglobin species and provides information relating to the oxygenation status of tissues. Further information on tissue oxygenation can be obtained from analysis of a weak absorption feature at 760 nm, arising from deoxyhemoglobin. Information on tissue architecture and optical properties can also be obtained from the spectra. Changes in tissue architecture or optical properties may affect the basic nature of the interaction of light with the tissue. For example, changes in the character of the epidermis (e.g. dehydration) may result in more scattering of light from the surface, reducing penetration of light into the skin in a wavelength-dependent manner. Also, different tumour densities may result in more scattering of light from the surface. Such phenomena would be manifest in the spectra as changes in the slope of the spectral curves, especially in the 400-780 nm region.

ACKNOWLEDGMENT

This research work is funded and supported by the All India Council for Technical Education (AICTE), Government of India, New Delhi, under the scheme of Career Award for Young Teachers (CAYT).

REFERENCES

1. Shah N, Cerussi AE, Jakubowski D, Hsiang D, Butler J, Tromberg BJ (2003-04) The role of diffuse optical spectroscopy in the clinical management of breast cancer. Disease Markers 19:95-105
2. Scott C. Bruce et al (2006) Functional near infrared spectroscopy. IEEE Eng in Med and Biol Magazine 54-62
3. Manoharan R, Shafer K, Perelman L, Wu J, Chem K, Deinum G, Fitzmaurice M, Myles J, Crowe J, Dasari RR, Feld MS (1998) Raman spectroscopy and fluorescence photon migration for breast cancer diagnosis and imaging. Photochem Photobiol 67:15-22
4. Shanthi Prince, Malarvizhi S (2007) Monte Carlo simulation of NIR diffuse reflectance in the normal and diseased human breast tissues. Biofactors 30(4):255-263
5. Welch A, van Gemert M (1995) Optical-thermal response of laser-irradiated tissue. Lasers, Photonics and Electro-Optics, Plenum Press, New York, USA, 19-20
6. Tuan Vo-Dinh (2003) Biomedical Photonics Handbook, CRC Press
7. Ocean Optics at http://www.oceanoptics.com/products
8. Schallreuter KU, Levenig C, Berger J (1991) Vitiligo and cutaneous melanoma. Dermatologica 183:239-245
9. Hasse S, Kothari S, Rokos H, Kauser S, Schurer NY, Schallreuter K (2005) In vivo and in vitro evidence for autocrine DCoH/HNF-1[alpha] transcription of albumin in the human epidermis. Experimental Dermatology 14(3):182-187
Diagnosis of Diabetic Retinopathy through Slit Lamp Images
J. David¹, A. Sukesh Kumar² and V.V. Vineeth¹
¹ College of Engineering, Trivandrum, India
² College of Engineering, Trivandrum, India
Abstract — A new system is developed for the diagnosis of diabetic retinopathy using slit lamp biomicroscopic retinal images. The results are compared with digital fundus images using image processing techniques. The slit lamp offers users both space savings and cost advantages: by using slit lamp biomicroscopic equipment with an ordinary camera, the cost can be reduced by a factor of more than 30 compared to a digital fundus camera. The fundus examination was performed on human volunteers with a hand-held contact or non-contact lens (90D), which provides an extremely wide field of view and good image resolution. The slit lamp equipment is used to examine, treat (with a laser) and photograph (with a camera) the retina. A digital camera attached to the slit lamp captures and stores the images; each photograph covers only a small portion of the entire retina. The individual slit lamp biomicroscopic fundus images are aligned and blended with a block matching algorithm to build an entire retinal image equivalent to the fundus camera image, from which the optic disk and the blood vessel ratio are detected. This image can be used for the diagnosis of diabetic retinopathy.
Keywords — Block matching algorithm, Sum of squared difference, Diabetic retinopathy, Optic disk, Blood vessel ratio.

I. INTRODUCTION

The retina is a light-sensitive tissue at the back of the eye. When light enters the eye, the retina changes the light into nerve signals and sends these signals along the optic nerve to the brain. Without the retina, the eye cannot communicate with the brain, making vision impossible. Fundus cameras provide wide-field, high-quality images of posterior segment structures including the optic disc, macula and blood vessels [1]. The slit lamp biomicroscope, the workhorse of ophthalmic diagnosis and treatment, is easy to use and is now often equipped with camera attachments to permit image capture for documentation, storage and transmission. In many cases, image quality may be low, in part attributable to a narrow field of view and to specular reflections arising from the cornea, sclera and hand-held lens. The cost of a fundus camera is very high compared to a slit lamp camera, and this is the motivation for this work. Digital retinal cameras are now being used in a variety of settings to obtain images of the retina to detect diseases such as glaucoma, diabetic retinopathy, hypertensive retinopathy and age-related macular degeneration. The setup for acquiring slit lamp images consists of the slit lamp equipment, a digital camera and a 90D lens; without the 90D lens only the front portion of the eye can be captured [2]. The slit lamp images from the slit lamp camera are combined using a block matching algorithm, with the sum of squared differences (SSD), expressed in terms of cross-correlation operations, as the typical matching criterion [3]. Diabetic retinopathy is the leading cause of blindness in the Western working-age population. Screening for retinopathy in known diabetics can save eyesight, but involves manual procedures that cost time and money [4]. In this work, the artery-to-vein ratio of the main blood vessels is detected from the fundus image and from the combined slit lamp image, and the values from the two images are compared. The diabetic level for both sets of images is determined and compared with clinical data.

II. PROPOSED SYSTEM

The fundus photographs were taken with a fundus camera during mass screening. These photographs were then scanned by a flat-bed scanner and saved as 24-bit true colour JPG files of 512x512 pixels. The slit lamp images of the same person are taken with a slit lamp camera and saved as JPG files of size 512x512 pixels. The basic methodology is shown in Fig (1): the slit lamp images are pre-processed, combined by the block matching algorithm and subjected to feature extraction; the result is then compared with the fundus image.

Fig (1) Block diagram of methodology

A. Slit Lamp Image

The slit lamp biomicroscopic images are small portions of the entire retinal fundus image; this is shown in Fig (2).
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 327–330, 2009 www.springerlink.com
Fig (2) Slit Lamp Biomicroscopic image

B. Pre-processing

Pre-processing of the slit lamp image is done to improve the visual quality and to reduce the amount of noise in the image. Since the RGB colour coordinates are not needed directly, the original image is separated into its RGB components; histogram equalization [5] is applied to the B component only, the three components are recombined, and the result is converted to a grayscale image. The resultant image is shown in Fig (3).

Fig (3) Pre-processed image

C. Block Matching Algorithm

The block-based method determines the position of a given block g relative to the current block f. Let g(k,l) denote the luminance value of block g of size (a x b) at the point (k,l), and f(k,l) the luminance value of the current block f of size (c x d) at the point (k,l). The commonly used optimal block-based algorithm is the Full Search block matching algorithm (FS), which compares the current block f to all of the candidate blocks and selects the motion vector corresponding to the candidate block that yields the best criterion-function value [6]. In this paper, we compare only the boundaries of the current block f to the boundaries of the candidate blocks to optimize the algorithm. A typical criterion is the sum of squared differences (SSD),

SSD(x, y) = sum_{l=0..n-1} sum_{k=0..m-1} [ f(k + x, l + y) - g(k, l) ]^2    (1)

The motion vector (mv_x, mv_y) is given by the position of the minimum of (1). The original fundus image from the digital fundus camera and the resultant combined slit lamp fundus image are shown in Fig (4) and Fig (5).

Fig (4) Original Fundus image

Fig (5) Combined slit lamp image
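A minimal reference implementation of full-search block matching with the SSD criterion of Eq. (1) can look as follows. This is a hedged NumPy sketch (an exhaustive search over all candidate positions, without the boundary-only optimization used in the paper); the function names and the toy frame are our own.

```python
import numpy as np

def ssd(candidate, block):
    """Sum of squared differences between two equal-sized blocks, Eq. (1)."""
    d = candidate.astype(float) - block.astype(float)
    return float(np.sum(d * d))

def full_search(frame, block):
    """Full Search (FS) block matching: evaluate the SSD at every candidate
    position in `frame` and return the displacement (mv_x, mv_y) of the
    minimum, together with the minimal cost."""
    m, n = block.shape
    best_cost, best_mv = float("inf"), (0, 0)
    for y in range(frame.shape[0] - m + 1):
        for x in range(frame.shape[1] - n + 1):
            cost = ssd(frame[y:y + m, x:x + n], block)
            if cost < best_cost:
                best_cost, best_mv = cost, (x, y)
    return best_mv, best_cost

# Toy example: every pixel value is unique, so the 8x8 block cut out at
# (x, y) = (5, 10) is recovered exactly with zero SSD.
frame = np.arange(32 * 32).reshape(32, 32)
block = frame[10:18, 5:13]
mv, cost = full_search(frame, block)
print(mv, cost)   # (5, 10) 0.0
```

In the stitching application, one slit lamp image plays the role of the frame and a block from the overlapping neighbour is matched against it; the recovered motion vector gives the alignment offset for blending.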
D. Optic Disk Detection

The optic disk is the brightest part of the normal fundus image and appears as a pale, round or vertically oval disk. It is the entrance region of the blood vessels and optic nerves of the retina. The optic disk is identified as the area with the highest variation in intensity between adjacent pixels. First the colour image is converted into a gray-level image and enhanced using histogram equalization; then morphological closing followed by opening is applied to suppress most of the vasculature information [7]. The structuring element is a disc larger than the largest vessel cross-section. This yields a bright area as a plausible optic disk candidate, shown in Fig (6).
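The vessel-suppression step (grayscale closing followed by opening, then taking the brightest remaining area) can be sketched as below. This is an illustrative pure-NumPy sketch: it uses a square structuring element instead of the disc-shaped one described in the paper, and the synthetic test image and function names are our own assumptions.

```python
import numpy as np

def grey_dilate(img, k):
    """Grayscale dilation: local maximum over a (2k+1)x(2k+1) square window."""
    h, w = img.shape
    p = np.pad(img, k, mode="edge")
    out = img.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out = np.maximum(out, p[k + dy:k + dy + h, k + dx:k + dx + w])
    return out

def grey_erode(img, k):
    """Grayscale erosion: local minimum over the same window."""
    h, w = img.shape
    p = np.pad(img, k, mode="edge")
    out = img.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out = np.minimum(out, p[k + dy:k + dy + h, k + dx:k + dx + w])
    return out

def optic_disk_candidate(gray, k=1):
    """Closing (fills thin dark vessels) followed by opening, then the
    brightest remaining pixel as the plausible optic-disk location."""
    closed = grey_erode(grey_dilate(gray, k), k)
    opened = grey_dilate(grey_erode(closed, k), k)
    return np.unravel_index(np.argmax(opened), opened.shape)

# Synthetic fundus-like image: mid-gray background, a thin dark "vessel"
# column, and a bright 5x5 blob standing in for the optic disk.
img = np.full((20, 20), 50.0)
img[:, 10] = 0.0          # vessel
img[4:9, 3:8] = 200.0     # optic disk
row, col = optic_disk_candidate(img, k=1)
print(row, col)           # lands inside the bright blob
```

The closing removes dark structures (vessels) narrower than the structuring element while leaving the bright disk intact, which is why the subsequent brightest-area search becomes reliable.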
Fig (6) Optic Disk

E. Ratio of blood vessel width

Blood vessels appear as networks of deep- or orange-red filaments that originate within the optic disk and progressively diminish in width. Information about the blood vessels in retinal images can be used in grading disease severity. The normal ratio of the diameter of the main venule to the main arteriole in the retinal vascular system is about 3:2; this value is standardized by ophthalmologists all over the world [8]. In a retina affected by diabetes, the ratio of the diameter of the main venule to the main arteriole increases above 1.5 and can go up to 2. Due to blockages in the vessels and changes in the blood sugar levels caused by diabetes, the diameters of the blood vessels change: the diameter of the arteries decreases and that of the veins increases. By detecting this increase in the blood vessel diameter ratio, diabetes can be diagnosed as early as possible [9]. The ratio of blood vessel widths is estimated using the concentric circle method. The centre of the optic disk is found by binary matrix calculations [10]. The next step is to find the ratio of the widths of the blood vessels. The centre of the optic disk is used to draw concentric circles of uniformly increasing radius. Travelling along the circles, sudden intensity changes are noted, which are used to identify the widths of the veins and arteries.

F. Comparison with fundus image

The image database consists of 11 image pairs, from both the digital fundus camera and the slit lamp camera, of diabetic retinopathy cases with clinical details. All images are classified into four categories: normal, mild, moderate and severe. The comparison between the fundus images and the slit lamp images is shown in Table 1.

Error Calculation

The error between the fundus image and the slit lamp image is calculated using equation (2):

Error in % = |A - B| / A x 100    (2)

where A is the blood vessel ratio of the fundus image and B is the blood vessel ratio of the slit lamp image.

Table 1. Comparison between fundus image and slit lamp image

No. | Blood vessel ratio, fundus (A) | Diabetic stage, fundus | Blood vessel ratio, slit lamp (B) | Diabetic stage, slit lamp | Error (%)
1   | 1.6667 | moderate | 1.7333 | moderate | 4.00
2   | 1.4981 | normal   | 1.5618 | mild     | 4.25
3   | 1.4688 | normal   | 1.5321 | mild     | 4.30
4   | 1.9704 | severe   | 2.0461 | severe   | 3.84
5   | 1.6144 | moderate | 1.6819 | moderate | 4.18
6   | 1.7934 | moderate | 1.8648 | moderate | 3.98
7   | 2.0680 | severe   | 2.1441 | severe   | 3.68
8   | 1.5685 | mild     | 1.6345 | moderate | 4.21
9   | 1.5695 | mild     | 1.6357 | moderate | 4.22
10  | 1.9264 | severe   | 2.0008 | severe   | 3.86
11  | 1.4684 | normal   | 1.5316 | mild     | 4.30
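Equation (2) is simple enough to check directly against Table 1; the following sketch (our own helper, not the authors' code) reproduces the tabulated error percentages.

```python
def error_percent(a, b):
    """Eq. (2): relative difference |A - B| / A, in percent, between the
    fundus-image blood vessel ratio A and the slit-lamp ratio B."""
    return abs(a - b) / a * 100.0

# Spot-checking Table 1, rows 1 and 4:
print(round(error_percent(1.6667, 1.7333), 2))  # matches the tabulated 4.00
print(round(error_percent(1.9704, 2.0461), 2))  # matches the tabulated 3.84
```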
III. CONCLUSIONS

On the basis of the slit lamp image information, the individual slit lamp images can be combined and diabetic retinopathy detected. The error between the slit lamp image and the fundus image in terms of the blood vessel ratio is small; hence, in the detection of diabetes, the results match the clinical data. Although the
present work is focused on diabetic retinopathy in terms of the blood vessel ratio, it is extensible to the detection of other diseases based on retinal condition.
ACKNOWLEDGMENT

The authors would like to thank Dr. K. Mahadevan, Department of Ophthalmology, Regional Institute of Ophthalmology, Thiruvananthapuram, for valuable suggestions. We are also grateful to Chakrabarti Eye Hospital, Thiruvananthapuram, for providing images and clinical details.
REFERENCES

1. David J, Deepa AK (2006) Diagnosis of Diabetic Retinopathy and Diabetes using Retinal Image Analysis. Proceedings of National Conference on Emerging Trends in Computer Science and Engineering, K. S. Rangasamy College of Technology, Tiruchangode, India, pp 306-311
2. Madjarov BD, Berger JW (2000) Automated, real-time extraction of fundus images from slit lamp fundus biomicroscope video image sequences. Br J Ophthalmol 84:645-647
3. Lin YC, Tai SC (1997) Fast full-search block matching algorithm for motion-compensated video compression. IEEE Transactions on Communications 45(5):527-531
4. Klein D, Klein BE, Mos SE et al (1986) The Wisconsin epidemiologic study of diabetic retinopathy VII. Diabetic nonproliferative retinal lesions. Br J Ophthalmology, vol 94
5. Rapantzikos K, Zervakis M (2003) Detection and segmentation of drusen deposits on human retina: Potential in the diagnosis of age-related macular degeneration. Medical Image Analysis, Elsevier Science, pp 95-108
6. Essannouni F, Oulad Haj Thami R, Salam A (2006) An efficient fast full search block matching algorithm using FFT algorithms. IJCSNS International Journal of Computer Science and Network Security 6(3B)
7. David J, Sukesh Kumar A, Rekha Krishnan (2008) Neural Network Based Retinal Image Analysis Diagnosis. Proceedings of 2008 Congress on Image and Signal Processing, IEEE Computer Society, Sanya, China, pp 49-53
8. Sinthanayothin C, Boyce J, Cook H, Williamson T (1999) Automated localization of optic disc, fovea and retinal blood vessels from digital color fundus images. Br J Ophthalmology, vol 83
9. Wang X, Cao H, Zhang J (2005) Analysis of Retinal Images Associated with Hypertension and Diabetes. IEEE 27th Annual International Conference of the Engineering in Medicine and Biology Society, pp 6407-6410
10. Kavitha D, Shenbaga Devi S (2005) Automatic Detection of optic disc and exudates in retinal Images. Proceedings of International Conference on Intelligent Sensing and Information Processing, pp 501-506

Author: J. David
Institute: College of Engineering, Trivandrum
Street: Sreekariyam
City: Trivandrum
Country: India
Email: [email protected]
Tracing of Central Serous Retinopathy from Retinal Fundus Images
J. David¹, A. Sukesh Kumar² and V. Viji¹
¹ College of Engineering, Trivandrum, India
² College of Engineering, Trivandrum, India
Abstract — Fundus images of the retina of the human eye can provide valuable information about human health. One can systematically assess digital retinal photographs to predict various diseases, eliminating the need for manual assessment of ophthalmic images in diagnostic practice. This work studies how the changes in the retina caused by Central Serous Retinopathy (CSR) can be detected from colour fundus images using image-processing techniques. Localization of the leakage area is usually accomplished by fluorescein angiography. The proposed work is motivated by the severe discomfort the injection of fluorescein dye causes to certain patients and by the increasing occurrence of CSR nowadays. This paper presents a novel segmentation algorithm for automatic detection of the leakage site in colour fundus images of the retina. A wavelet transform method is used for denoising in the pre-processing stage. A contrast enhancement method is employed for non-uniform illumination compensation and enhancement. The work determines CSR by localizing the leakage pinpoint in terms of pixel co-ordinates and calculates the area of the leakage site from normal colour fundus images themselves.

Keywords — Central Serous Retinopathy, Fundus Imaging, Wavelet Analysis, Watershed Algorithm, Neural Network Classification.
I. INTRODUCTION

Central serous retinopathy (CSR) [1] is a serous macular detachment that usually affects young people and has a good visual prognosis in most patients. It may, however, also develop as a chronic or progressive disease with widespread decompensation of the Retinal Pigment Epithelium (RPE) and severe vision loss. Localization of the leakage site is crucial in the treatment of CSR. It is usually accomplished by fluorescein angiography (FA). Despite the widespread acceptance of FA, its application has been restricted because of the possibility of severe complications and the discomfort to patients, as well as the time needed to perform the test. A normal fundoscopy of the eye also reveals the features of CSR. However, the images obtained through normal fundoscopy are not as specific as an angiogram and cannot on their own serve as an efficient method for the analysis of CSR. A computer-assisted diagnosis system can effectively employ image processing techniques to detect specifically the leakage area of CSR without taking an angiogram of the patient. The major structures of the retina, such as the optic disk and the macula, are detected during the work using image processing techniques. This work describes image analysis methods for the automatic recognition of the leakage area from colour fundus images. A comparative study of the angiogram and the colour fundus images is done, and the error between the co-ordinates of the leakage point detected from the colour fundus image and the angiogram leakage point is calculated. From the set of parameters, the images are distributed into two different groups, mild and severe. A neural network is used effectively for the data classification.

II. PROPOSED SYSTEM

The fundus photographs were taken with a fundus camera. The angiogram images of the patients are also taken after injection of the fluorescein dye. These photographs were then scanned by a flatbed scanner and saved as 24-bit true colour JPG files of 576x768 pixels. Both images of the same patient are taken and collected in a database for comparison. The block diagram of the proposed system is shown in Fig (1): retinal fundus images are pre-processed and features are extracted; after parameter acquisition, the error is calculated and a neural network classifier predicts the severity. The image files are analyzed using the algorithms described in the following sections:

1. Detection of the optic disc
2. Detection of the fovea
3. Detection of the leakage site and the corresponding pixel co-ordinates
4. Calculation of the area of the leakage site
5. Classification of images using a neural network

Fig (1) Block diagram of the proposed system
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 331–334, 2009 www.springerlink.com
III. PRE-PROCESSING

A. Method

Pre-processing is used to improve the visual quality of the fundus image and helps in colour normalization [2]. It highlights the features that are used in image segmentation. Two typical techniques are filtering and contrast enhancement. Filtering enhances the quality of images and helps in sharpening blurry images. Low contrast can result from inadequate illumination or a wrong lens aperture. Contrast enhancement increases the dynamic range of an image. For these images, local contrast enhancement [3] gave a superior result compared to the contrast stretching transformation and histogram equalization. The results are shown in Fig (2) a. and b.

Fig (2) a. Original image. b. Contrast enhanced image

B. De-Noising

De-noising is done by wavelet analysis using the wavelet toolbox, as shown in Fig (3). The advanced stationary wavelet analysis [4] is employed; the basic idea is to average many slightly different discrete wavelet analyses.

Fig (3) Denoising using wavelet analysis

IV. DETECTION OF OPTIC DISC

The optic disk is the entrance region of blood vessels and optic nerves to the retina, and it often works as a landmark and reference for other features in the retinal fundus image [5]. Its detection is the first step in understanding fundus images. It determines approximately the localization of the macula, which is of great importance, as the leakage area is more prominent there. An area threshold is used to localize the optic disc. A best-fitting circle is drawn to determine the boundary. The centre point, and hence the diameter of the optic disk (ODD), is thus defined. Optic disk detection is significant for this work because the centre point of the optic disk is taken as the reference point for locating the pixel co-ordinates of the leakage point. The result is shown in Fig (4).

Fig (4) Optic Disc located

V. DETECTION OF FOVEA

The macula is localized by finding the darkest pixel in the coarse-resolution image following a geometric criterion. The gradient is calculated and certain morphological operations are done to recognize the fovea, as shown in Fig (5). The candidate region of the fovea is defined as a circular area of 2 ODD [6], shown in Fig (6). The location of the fovea is significant because, for the classification scheme, the distance of the leakage point co-ordinates from the fovea must be measured, as it determines whether laser photocoagulation is necessary for the patient.

Fig (5) Gradient calculated and fovea recognized

Fig (6) Fovea located
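The first step of the macula/fovea localization (darkest pixel of a coarse-resolution image) can be illustrated as follows. This is an assumption-laden sketch: block-averaging as the coarsening method and the mapping back to full-resolution coordinates are our own choices, and the geometric criterion and morphological refinement of [6] are omitted.

```python
import numpy as np

def coarse(img, factor=4):
    """Coarse-resolution image by block-averaging (one possible coarsening)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def fovea_candidate(img, factor=4):
    """Darkest pixel of the coarse image, mapped back to the centre of the
    corresponding full-resolution block."""
    c = coarse(img, factor)
    r, col = np.unravel_index(np.argmin(c), c.shape)
    return r * factor + factor // 2, col * factor + factor // 2

# Synthetic gray image with a dark 4x4 patch standing in for the macula.
img = np.full((64, 64), 120.0)
img[40:44, 24:28] = 10.0
print(fovea_candidate(img))   # (42, 26): centre of the dark patch's block
```

Searching the coarse image first makes the darkest-pixel criterion robust to isolated dark noise pixels, since a single noisy pixel barely moves a block average.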
VI. DETECTION OF LEAKAGE AREA

It is in the green channel that the leakage area appears most contrasted. We first identify the candidate regions, i.e. the regions that mostly contain the leakage area. Then morphological techniques [7] are applied in order to find the exact contours.

A. The Segmentation Paradigm

The information on the objects of interest is extracted from the images. Image segmentation [8] plays a crucial role in medical imaging by facilitating the delineation of regions of interest. The watershed transform [9] is the technique employed for this purpose. The principle of the watershed technique is to transform the gradient of a gray-level image into a topographic surface. A watershed is obtained as the boundary between the influence zones of two minima. Real images are often noisy, which leads to over-segmentation. To avoid this, the advanced gradient- and marker-controlled algorithm is employed. The watershed transform gives good segmentation results if the topographical relief and the markers are suitably chosen for the different types of images [10]. The minima and maxima of a function are calculated by the SKIZ algorithm [11].

Fig (7) a. Original image. b. Watershed of the image. c. Ridgelines

VII. PARAMETER ACQUISITION

A. Area of the Leakage site

As the disease progresses, the area of the leakage site increases. It indicates the severity of the disease, as an increase in the leakage area means the RPE leak is more prominent. The leakage area is estimated by taking the ratio of the number of pixels in the leakage site to the total number of pixels in the image.

B. Measurement of the distance of the leakage site from the fovea

The leakage point is defined by the particular pixel co-ordinates of the region. The co-ordinate distance is measured with respect to the centre of the OD and the fovea [12]. If the RPE leak region is less than 1/4th of the ODD distance from the fovea, the disease is mild and the patient can wait for 4 months without any treatment. If it is greater than 1/4th of the ODD, then laser photocoagulation is needed.

Fig (8) a. Color fundus image. b. Angiogram image

Fig (9) Leakage point detected from color fundus image

The image database consists of 20 images. For each image, the area of the leakage site and the distance from the fovea are measured. Table 1 shows the pixel co-ordinates of the leakage pinpoint with respect to the optic disk, from both the colour fundus images and the angiogram images. The distance of the leakage co-ordinate from the reference point, i.e. from the centre of the optic disk, is also calculated. The error percentage of the co-ordinates is measured by the least squares method.

Table 1. Calculation of error between leakage point pixel co-ordinates of color fundus image and angiogram images

No. | Coords, angio | Coords, fundus | Error % | Dist from ref, angio (X) | Dist from ref, fundus (Y) | Error %
1   | 178,145 | 174,151 | 2.81 | 132 | 128 | 3.03
2   | 123,136 | 126,132 | 2.23 | 128 | 131 | 2.34
3   | 145,128 | 140,136 | 3.20 | 146 | 140 | 4.05
4   | 192,162 | 187,166 | 1.68 | 198 | 202 | 2.02
5   | 135,122 | 141,124 | 2.90 | 127 | 131 | 3.14
6   | 232,179 | 239,177 | 2.58 | 211 | 206 | 2.36
7   | 188,203 | 184,211 | 3.02 | 192 | 187 | 2.67
8   | 211,192 | 215,199 | 1.19 | 213 | 217 | 1.84
9   | 122,149 | 117,155 | 2.90 | 142 | 137 | 2.54
10  | 176,141 | 168,143 | 1.53 | 159 | 163 | 2.01
11  | 201,176 | 207,179 | 2.11 | 213 | 219 | 2.36
12  | 122,177 | 118,169 | 1.16 | 129 | 131 | 1.53
13  | 190,211 | 185,201 | 3.03 | 208 | 213 | 3.21
14  | 151,193 | 162,189 | 2.61 | 165 | 159 | 2.41
15  | 212,221 | 207,229 | 1.22 | 213 | 209 | 1.74
16  | 144,187 | 133,185 | 1.17 | 152 | 147 | 1.19
17  | 253,232 | 246,220 | 1.90 | 261 | 258 | 2.04
18  | 136,198 | 127,186 | 3.01 | 159 | 165 | 3.77
19  | 165,211 | 152,216 | 2.26 | 206 | 199 | 2.92
20  | 137,181 | 129,192 | 2.06 | 187 | 183 | 2.13
J. David, A. Sukesh Kumar and V. Viji
Error percentage of distance = (|X − Y| / X) × 100 %   (1)

where X and Y are the distances from the reference point to the leakage site for the angiogram and fundus images, respectively.

ACKNOWLEDGMENT
The authors would like to thank Dr. K. Mahadevan, Regional Institute of Ophthalmology, Thiruvananthapuram, and Chaithanya Eye Research Centre, Thiruvananthapuram, for providing the database of retina images and the clinical details.
VIII. NEURAL NETWORK CLASSIFIER
In this work, a Back Propagation Network [13] is used for the classification scheme. The network classifies the images according to the various disease conditions. All the images are divided into two categories, severe and mild, according to the parameter values. Table 2 shows the distribution of the parameters with the CSR conditions.

Table 2. Classification of images based on disease conditions

No.  Area of leakage site (A)  OD diameter (D)  Distance from fovea (d)  Stage of disease
1    0.0361                    84               76.94                    mild
2    0.0362                    89               51.00                    mild
3    0.0678                    82               29.21                    severe
4    0.0523                    91               46.52                    severe
5    0.0852                    79               23.01                    severe
6    0.0961                    89               56.12                    severe
7    0.0361                    78               21.82                    mild
8    0.0511                    93               98.31                    mild
9    0.1321                    86               32.63                    mild
10   0.0932                    84               19.82                    severe
11   0.0756                    82               78.52                    severe
12   0.0462                    83               63.52                    mild
13   0.0382                    87               21.63                    mild
14   0.0621                    83               19.86                    severe
15   0.0500                    77               21.36                    mild
16   0.0112                    93               82.37                    mild
17   0.0718                    92               63.00                    severe
18   0.0819                    84               49.58                    severe
19   0.0911                    82               23.12                    severe
20   0.1667                    85               73.06                    mild
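As a sketch of this classification step, a minimal one-hidden-layer back-propagation network can be trained on the three parameters (A, D, d). This is our own toy re-implementation with illustrative data, not the authors' trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))  # logistic activation

def train_bpn(X, y, hidden=4, lr=0.5, epochs=5000):
    """Train a one-hidden-layer back-propagation network; returns a predictor."""
    mu, sd = X.mean(0), X.std(0)
    Xn = (X - mu) / sd                                   # normalise the features
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        h = sig(Xn @ W1 + b1)                            # forward pass
        p = sig(h @ W2 + b2)
        d2 = p - y                                       # output error
        d1 = np.outer(d2, W2) * h * (1 - h)              # back-propagated error
        W2 -= lr * (h.T @ d2) / len(y); b2 -= lr * d2.mean()
        W1 -= lr * (Xn.T @ d1) / len(y); b1 -= lr * d1.mean(0)
    return lambda Xq: (sig(sig(((Xq - mu) / sd) @ W1 + b1) @ W2 + b2) > 0.5).astype(int)

# Illustrative (A, D, d) rows: label severe (1) when the leak is near the fovea
X = np.array([[0.07, 84, 20], [0.09, 82, 30], [0.08, 88, 25], [0.10, 85, 35],
              [0.06, 90, 40], [0.07, 80, 45], [0.03, 86, 70], [0.04, 83, 80],
              [0.02, 89, 90], [0.05, 84, 65], [0.03, 87, 75], [0.04, 81, 85]], float)
y = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0], float)
predict = train_bpn(X, y)
```

With such well-separated toy data the network reaches high training accuracy after a few thousand epochs of plain gradient descent.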
REFERENCES

1. Bennett G (1955) Central serous retinopathy. Br J Ophthalmol 39:605-618
2. Rekha Krishnan, David J, Sukesh Kumar A (2008) Neural network based retinal image analysis. Proceedings of the 2008 Congress on Image and Signal Processing, IEEE Computer Society, China, pp 49-53
3. Sinthanayothin C (1999) Image analysis for automated diagnosis of diabetic retinopathy. PhD thesis, University of London
4. Donoho DL (1995) De-noising by soft-thresholding. IEEE Trans Inf Theory 41(3):613-627
5. Trucco E, Kamath PJ (2004) Locating the optic disc in retinal images via plausible detection and constraint satisfaction. IEEE International Conference on Image Processing (ICIP)
6. Gagnon L, Lalonde M (2001) Procedure to detect anatomical structures in optical fundus images. Proceedings of the SPIE Conference on Medical Imaging (SPIE #4322), pp 1218-1225
7. Walter T, Klein JC (2002) A contribution of image processing to the diagnosis of diabetic retinopathy: detection of exudates in color fundus images of the human retina. IEEE Trans Med Imaging 21(10):1236-1244
8. Gonzalez RC, Woods RE, Eddins SL. Digital Image Processing Using MATLAB. Pearson Education
9. Luo G, Chutatape O (2001) Abnormality detection in automated screening of diabetic retinopathy. IEEE, pp 132-137
10. Roerdink JBTM, Meijster A (2001) The watershed transform: definitions, algorithms and parallelization strategies. Fundamenta Informaticae 41:187-228, IOS Press
11. Sequeira RE, Preteux FJ (1997) Discrete Voronoi diagrams and the SKIZ operator: a dynamic algorithm. IEEE Trans Pattern Anal Mach Intell 19:1165-1170
12. Comparison of results of electroretinogram, fluorescein angiogram and color vision tests in acute central serous chorioretinopathy (2005). J Korean Ophthalmol Soc 46(1):71-77
13. Haykin S (2001) Neural Networks: A Comprehensive Foundation. Pearson Education Asia
IX. CONCLUSION In this paper, an image-processing technique is proposed that can play a major role in the diagnosis of CSR. A comparative analysis of the leakage area detected from color fundus images and from angiogram images is performed. The accuracy is determined by the error percentage calculated between the two, and the error is found to be small.
Author: J. David
Institute: College of Engineering, Trivandrum
Street: Sreekaryam
City: Trivandrum
Country: India
Email: [email protected]
A Confidence Measure for Real-time Eye Movement Detection in Video-oculography

S.M.H. Jansen1, H. Kingma2 and R.L.M. Peeters1

1 Department of Mathematics, MICC, Maastricht University, The Netherlands
2 Division of Balance Disorders, Department of ENT, University Hospital Maastricht, The Netherlands
Abstract — Video-oculography (VOG) is a frequently used clinical technique to detect eye movements. In this research, head-mounted small video cameras and IR illumination are employed to image the eye. There is a strong need for algorithms that extract the eye movements automatically from the video recordings. Many algorithms have been developed; however, to our knowledge, none of the current algorithms is accurate in all cases and provides an indication when detection fails. Since many doctors also draw their conclusions based on occasional erroneous measurement outcomes, this can result in a wrong diagnosis. This research presents the design and implementation of a robust, real-time and high-precision eye movement detection algorithm for VOG. Most importantly, a confidence measure is introduced for this algorithm to express the quality of a measurement and to indicate when detection fails. This confidence measure allows doctors in a clinical setting to see whether the outcome of a measurement is reliable.

Keywords — Eye movement detection, video-oculography, confidence measure, pupil detection, reliability
I. INTRODUCTION

Medical studies of eye movement are important in the area of balance disorders and dizziness. For example, studying nystagmus (rapid, involuntary motion of the eyeball) frequently allows localization of pathology in the CNS. Nystagmus can occur with and without visual fixation; therefore, detection of nystagmus, or of eye movements in general, is required both in the dark and in the light.

Various methods have been developed to examine human gaze direction or to detect eye movement. However, the remaining problems of these methods concern simplicity of measurement, detection accuracy and the patient's comfort. For example, electro-oculography (EOG) [1] is affected by environmental electrical noise and drift. In the corneal reflection method [2], the accuracy is strongly affected by head movement. The main drawback of the scleral search coil technique [3] is the need for the patient to wear a relatively large contact lens, causing irritation to the eye and limiting the examination time to a maximum of about 30 minutes.

In the nineties, clinical Video Eye Trackers (c-VET) were developed. The c-VET can be described as a goggle with infrared illumination, to which small cameras have been attached. With this construction it is possible to record a series of images (a movie) while the relative position of the head and the camera remains constant.

There is a strong need for algorithms that extract the eye movements automatically from the video recordings. However, this is not an easy task, as IR illumination results in relatively poor image contrast. Furthermore, part of the pupil can be covered by the eyelid, by eyelashes or by reflections. The pupil also continuously changes in size and shape. Another design issue is that the algorithm has to perform in real time, because direct feedback to the patient is desirable. Many algorithms have been developed, some with analytical methods and others with neural networks. However, to our knowledge, none of the current algorithms is accurate in all cases and provides an indication when detection fails. Since many doctors also draw their conclusions based on occasional erroneous measurement outcomes, this can result in a wrong diagnosis. This research presents the design and implementation of a real-time, high-precision eye movement detection algorithm for VOG, in combination with a confidence measure to express the reliability of the detection. By this, clinical application will gain substantial reliability.

II. ALGORITHM DESIGN

Since the pupil is much darker than the rest of the eye, it can be detected 'easily' and is therefore a perfect marker to determine the eye movement. Because the shape and the size of the pupil vary continuously, it is necessary to determine one point in the pupil that remains at the same position on the eye ball: the center of the pupil. Before starting with complex, time-consuming calculations to find the exact center of the pupil, it is sensible to use a quick algorithm to approximate the location of the pupil center and to determine a region of interest (ROI). Applying the complex calculations only within the ROI is then less time consuming than applying the same calculations to the original image.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 335–339, 2009 www.springerlink.com
A. Rough localization of the pupil center

By taking horizontal lines through an image recorded by the c-VET (see Figure 1), a corresponding gray-value pattern can be obtained. All points with a gray value lower than a certain threshold are considered to be part of the pupil.

Fig. 1 Horizontal line through the pupil with corresponding RGB pattern

This threshold value differs for each recording, mainly because of different positioning of the c-VET. Setting this threshold by hand before every recording is time-consuming and error-prone. Therefore, a method was developed to determine the threshold automatically. An RGB histogram of the first image (Figure 2) is made. The small peak on the left side represents the RGB values of the pupil. By performing a local minimum search after this peak, the threshold is determined.

Fig. 2 Using a histogram of the RGB-values to find the threshold value

To approximate the center of the pupil, the method of Teiwes [4] is used. Teiwes' method determines circles through points on the edge of the pupil. The left and right borders of the pupil are detected by comparing the gray value of points on horizontal lines in the image with the threshold. The center of the pupil is calculated using the points on the left and right borders of two horizontal lines (see Figure 3). By taking the median of the outcomes for 50 randomly chosen combinations of two horizontal lines through the pupil, the center of the pupil is determined.

Fig. 3 Approximation of the pupil center by Teiwes' method

B. Determining a region of interest

With the approximation of the pupil center point, a window can be placed around this point (see Figure 4).

Fig. 4 A window is placed around the approximated pupil center

The window size is chosen such that, in all practical situations, the pupil fits in the window. Taking the resolution of the images into account, this led to a window size of 210x210 pixels (the ROI), which is much smaller than the original image (680x460).

C. Edge detection

Within this ROI, an edge detection algorithm is applied. After several analyses on a dataset with artificially created pupil images, the Canny edge detector [5] was found to be the most suitable edge detector for this situation. To avoid the selection of noise, detection with a high threshold is applied. Also, the 8-directional connection method is used to further remove noise, see Figure 5.

Fig. 5 At the region of interest, displayed in (a), edge detection is applied with a low threshold in (b) and with a high threshold in (c)

Unfortunately, a high threshold means that part of the pupil edge remains undetected. In Section II.D it is discussed how to reconstruct the boundary of the pupil. Even when edge detection with a high threshold is applied, the white reflections caused by the IR light source are
still selected as edges. However, they can be filtered out with a simple analytical method. As can be seen in Figure 2, the small peak on the right represents the RGB-values of the reflections. If one or more adjacent pixels have such a high RGB-value, the edge point is deleted. During the experiments, patients will lose their concentration and eyelids and lashes will cover a part of the pupil. This disturbs the edge detection process, see Figure 6.
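The automatic threshold search and the rough center approximation of Sections II.A and II.B can be sketched in a few lines. This is a simplified re-implementation (midpoints of the line borders instead of Teiwes' full circle construction), with function names of our own choosing:

```python
import numpy as np

def pupil_threshold(gray):
    """Find the dark histogram peak (pupil) and return the first local
    minimum after it, as in Section II.A (simplified sketch)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    peak = int(np.argmax(hist[:64]))          # small dark peak on the left
    for v in range(peak + 1, 255):
        if hist[v] <= hist[v - 1] and hist[v] <= hist[v + 1]:
            return v                          # first local minimum after the peak
    return peak + 1

def rough_pupil_center(gray, thresh, n_pairs=50, seed=0):
    """Median center over random pairs of horizontal lines through the pupil,
    using the left/right borders of each line (midpoint variant of Teiwes)."""
    rng = np.random.default_rng(seed)
    rows = np.where((gray < thresh).any(axis=1))[0]
    xs, ys = [], []
    for _ in range(n_pairs):
        r1, r2 = rng.choice(rows, size=2)
        c1 = np.where(gray[r1] < thresh)[0]   # pupil pixels on line r1
        c2 = np.where(gray[r2] < thresh)[0]
        xs.append((c1[0] + c1[-1] + c2[0] + c2[-1]) / 4.0)
        ys.append((r1 + r2) / 2.0)
    return float(np.median(xs)), float(np.median(ys))

# Synthetic eye image: bright background with a dark pupil disk at (x=50, y=40)
img = np.full((100, 100), 200, dtype=np.uint8)
yy, xx = np.ogrid[:100, :100]
img[(xx - 50) ** 2 + (yy - 40) ** 2 <= 15 ** 2] = 10

t = pupil_threshold(img)
cx, cy = rough_pupil_center(img, t)
```

On this synthetic image the estimated center lands on the disk center, which is all the rough stage needs before placing the ROI.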
Fig. 6 Eyelashes disturb the process of edge detection

Kong and Zhang proposed an accurate method for eyelash detection [6]. In this method, as implemented, edge detection is applied to detect separable eyelashes, and intensity variances are used to recognize multiple eyelashes.

D. Reconstruction of the pupil shape

With the set of edge points found by the algorithm, it is possible to reconstruct the complete edge of the pupil by applying a stable direct least squares ellipse fit [7]. In Figure 7, an example of ellipse fitting is given.

Fig. 7 Reconstruction of the pupil shape with least squares ellipse fitting

E. Feasibility check

When only a few edge points are detected and those edge points lie on one side of the pupil, the algorithm may return an inadequate ellipse fit, see Figure 8.

Fig. 8 Ellipse fit can fail when there are not enough edge points or the distribution of the edge points on the pupil border is not suitable

A method was developed to detect those wrong ellipses. In an experimental set-up, the relation between the shape and angle of the pupil and the ellipse fits of two successive frames was studied. From the experiments it followed that an ellipse is generally computed correctly if:

- The rotation angle of the ellipse differs by no more than 6.88 degrees from that of the previous ellipse.
- The ratio of the two axis lengths of the ellipse differs by no more than 0.1 from the ratio for the previous ellipse.

If one or both conditions are not met, a minimization procedure is started to compute an ellipse that satisfies the conditions and minimizes the average distance from the ellipse to the edge points. Note that this process only works if the data of the previous ellipse is correct. Therefore it is necessary that, at the beginning of the measurement, the eyes of the patient are wide open. Once the algorithm has detected at least 10 stable pupils in a row, this procedure is started.

III. CONFIDENCE MEASURE

Many authors, e.g. [8, 9], have described an algorithm to find the pupil center in images. However, to our knowledge, none of these authors paid attention to the reliability of the outcome of such a measurement. In many cases the average error of the algorithm was given, but none of the authors gave information about what happens in situations when the measurement is wrong. In some situations it is impossible, even for humans, to detect the exact pupil center. In other situations the detection process is hampered by many eyelashes and it is not unlikely that the algorithm makes an error in the exact location of the pupil center. In those situations, doctors would like to know that the measurement has failed. This will help to prevent them from drawing conclusions based on an erroneous measurement. The proposed algorithm, described in the previous section, is designed in such a way that a confidence measure can be constructed which makes use of data from the pupil detection algorithm. Some aspects of the algorithm contain information on the likelihood that pupil detection will fail. First, the quality of the image is assessed by determining the number of pixels that contain noise, as found by the edge detection algorithm. Second, the number of pixels with eyelashes that have been filtered out is used: more eyelashes decrease the reliability of the measurement. A third aspect concerns the number of detected edge points.
More such points increase the confidence of the measurement. Another feature is the distribution of the edge points on the pupil boundary. It is better that a few edge points are
located on all sides of the pupil border than many edge points all on one side of the pupil. Therefore, the size of the largest gap between the edge points is used for the confidence measure. The last feature derived from the algorithm concerns consistency: if the ellipse fit does not differ much from the previous ellipse, it is considered more reliable. Summarizing, the design of the confidence measure takes the following features into account:

- Number of pixels in the ROI containing noise
- Number of pixels in the ROI containing eyelashes
- Number of edge points found
- Largest gap between edge points
- Difference in angle with the previous ellipse fit
- Difference in axis ratio with the previous ellipse fit
To express the relation of these features with the outcome and accuracy of a measurement, certain weights have to be assigned to each feature. In this research it was chosen to train a neural network to assess these weights.
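The two consistency conditions of Section II.E, together with the six features listed above, can be sketched as follows. This is a hypothetical re-implementation; the names and the dataclass are ours:

```python
from dataclasses import dataclass

@dataclass
class EllipseFit:
    angle_deg: float    # rotation angle of the fitted ellipse (degrees)
    axis_ratio: float   # ratio of the two axis lengths

def ellipse_is_plausible(prev: EllipseFit, cur: EllipseFit,
                         max_d_angle: float = 6.88,
                         max_d_ratio: float = 0.1) -> bool:
    """Section II.E consistency check against the previous frame's ellipse."""
    return (abs(cur.angle_deg - prev.angle_deg) <= max_d_angle
            and abs(cur.axis_ratio - prev.axis_ratio) <= max_d_ratio)

def confidence_features(n_noise_px, n_lash_px, n_edge_pts,
                        largest_gap, d_angle, d_ratio):
    """The six inputs fed to the neural network of the confidence measure."""
    return [n_noise_px, n_lash_px, n_edge_pts, largest_gap, d_angle, d_ratio]

# A small angle/ratio change passes the check; a large angle jump fails it
prev = EllipseFit(angle_deg=10.0, axis_ratio=0.80)
print(ellipse_is_plausible(prev, EllipseFit(12.0, 0.85)))   # True
print(ellipse_is_plausible(prev, EllipseFit(20.0, 0.85)))   # False
```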
IV. EXPERIMENTAL RESULTS

A. Test environment

To design the confidence measure and to validate the algorithm, a dataset of 50,000 images was created. This dataset consists of 25,000 images in which the pupil center can easily be detected by humans (part A) and 25,000 images in which it is harder or (almost) impossible to locate the exact pupil center visually (part B). In all images, the location of the (expected) pupil center was annotated by human experts. The Helmholtz coordinates of the pupil center in the images were also determined using the scleral search coil technique simultaneously with the c-VET recordings. Patients with conjugate eye movements of the left and right eye wore a contact lens in one eye during the experiments; the recording of the eye without the contact lens was used for the dataset. Patients were tested on saccadic and pursuit eye movements, and the head impulse test was performed.

B. Performance of the algorithm

To express the quality of the algorithm, the pupil centers found by the algorithm are compared with the expert-annotated pupil centers and with the outcome of the scleral search coil technique. Figure 9 shows the performance of the algorithm on the images of part A and part B. The average error in the detected pupil location in the images of part A is 1.7 pixels, with a maximum error of 14 pixels. The average error in the images of part B is 3.3 pixels, with a maximum of 22 pixels.

Fig. 9 Performance of the algorithm on part A and part B of the dataset

C. Performance of the confidence measure

For the confidence measure, a neural network was trained with 6 input parameters, as described in Section III, and 1 output parameter, the error of the pupil center. The idea is that the neural network predicts the accuracy of the pupil center algorithm. The result of the neural network was validated with a test set, see Table 1.

Table 1 Result of the confidence measure in relation to the real error (in %); columns: confidence measure, rows: real error

Real    Confidence measure
error   0     1     2     3     4     5     6     7     8     9     10    10+
0       58.1  24.1  13.8  7.1   6.8   6.8   6.4   4.4   2.6   1.4   2.7   4.1
1       24.5  48.7  23.2  9.2   12.7  13.5  2.4   4.7   0.1   0.7   3.0   0.6
2       16.0  15.8  24.9  29.4  11.9  12.4  4.5   5.9   0.0   0.2   2.1   0.1
3       1.3   10.8  24.3  33.4  20.0  8.9   3.0   6.3   3.1   0.5   2.9   0.4
4       0.0   0.4   11.6  19.0  40.5  11.2  2.6   8.6   14.5  0.6   3.4   0.7
5       0.1   0.2   2.1   1.6   5.7   22.7  3.7   17.6  20.0  0.3   2.7   2.4
6       0.0   0.0   0.1   0.3   2.2   18.6  22.5  14.5  16.2  0.2   2.8   3.2
7       0.0   0.0   0.0   0.0   0.1   4.5   15.1  21.8  12.5  2.4   11.0  10.4
8       0.0   0.0   0.0   0.0   0.1   0.9   19.3  11.4  23.8  29.3  24.6  11.5
9       0.0   0.0   0.0   0.0   0.0   0.5   18.4  4.6   0.8   35.6  19.3  17.4
10      0.0   0.0   0.0   0.0   0.0   0.0   1.7   0.2   0.1   24.5  18.8  20.6
10+     0.0   0.0   0.0   0.0   0.0   0.0   0.4   0.0   0.3   4.3   6.7   28.6
V. CONCLUSIONS Tests have shown that the algorithm of this paper can find the pupil center with high accuracy in images that are not strongly hampered by eyelashes and eyelids. In images where the pupil is covered by many eyelashes, the pupil center is found with lower accuracy. When the accuracy cannot be guaranteed, this is well expressed by the confidence measure. If the confidence measure expresses a high accuracy, a doctor can rely on the measurement; otherwise, he needs to be on his guard.
REFERENCES

1. Barber H, Stockwell C (1980) Manual of Electronystagmography. The C.V. Mosby Company, 2nd edition
2. Carpenter R (1988) Movements of the Eyes. Pion Limited, 2nd edition
3. Robinson D (1963) A method of measuring eye movement using a scleral search coil in a magnetic field. IEEE Trans Biomed Electron 10:137-145
4. Teiwes W (1991) Video-Okulographie. PhD thesis, Berlin, 74-85
5. Canny J (1986) A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell 8:679-714
6. Kong W, Zhang D (2003) Detecting eyelash and reflections for accurate iris segmentation. Int J Pattern Recognit Artif Intell 17(6):1025-1034
7. Fitzgibbon A, Pilu M, Fisher R (1996) Direct least-square fitting of ellipses. International Conference on Pattern Recognition, Vienna
8. Kim S et al. (2005) A fast center of pupil detection algorithm for VOG-based eye movement tracking. Conf Proc IEEE Eng Med Biol Soc 3:3188-3191
9. Cho J et al. (2004) A pupil center detection algorithm for partially-covered eye images. Conf Proc IEEE Tencon 1:183-186
Development of Active Guide-wire for Cardiac Catheterization by Using Ionic Polymer-Metal Composites

B.K. Fang1, M.S. Ju2 and C.C.K. Lin3

1, 2 Dept. of Mech. Engineering, National Cheng Kung University, Tainan 701, Taiwan
3 Dept. of Neurology, National Cheng Kung University Hospital, Tainan 701, Taiwan
Abstract — Due to the inconvenience of passing bifurcated blood vessels by changing the curvature of guide-wires during surgery, active catheter and guide-wire systems have been developed recently. Because of their light weight and large bending deformation, ionic polymer metal composites (IPMCs) have been employed in many biomedical applications. Toward controlling an IPMC-based active cardiac guide-wire system, the goal of this research is to develop methods that can actuate an IPMC and detect its deformation without extra sensors. The method is to place a reference IPMC in parallel with an actuated IPMC. A mixed driving signal consisting of a high and a low frequency is then applied to drive the IPMCs. The low frequency signal makes the IPMC deform and change its surface electrical resistance, while the high frequency signal carries the deformation information. By utilizing a lock-in amplifier to demodulate the high frequency signal, the deformation can be measured. When the low frequency actuation signal is absent, the sensing signal follows the deformation well. However, when sinusoidal or square wave actuation signals of frequency 0.1 Hz were applied, a transient error appeared. The error may be due to the mismatch of electrical resistances and capacitances between the actuated and reference IPMCs. When the frequency of the actuating signal was reduced to 0.01 Hz, the transient error disappeared. For practical applications such as catheter guide wires, a low frequency actuation signal induces a large deformation, so the method might be feasible for simultaneously sensing and actuating an IPMC.

Keywords — Ionic polymer-metal composites (IPMC), actuator, active catheter, control, guide wire.
I. INTRODUCTION

Cardiac catheterization is a common procedure for diagnosing or treating coronary heart disease (Fig. 1). To overcome the inconvenience of changing guide-wires during surgery to pass bifurcated blood vessels, there have been many studies on active catheter or guide-wire systems, which can tune the tip curvature of the guide-wire or catheter in real time [1-4]. Due to their light weight, large bending deformation, biocompatibility and low power consumption, ionic polymer metal composites (IPMCs) have high potential for many biomedical applications. An IPMC is a proton exchange membrane (PEM) plated with platinum or gold on both surfaces, typically working in a hydrated condition. When an electric potential is applied to the electrode pair, the hydrophilic cations within the IPMC migrate to the cathode and cause an asymmetric swelling of the PEM, so the IPMC bends toward the anode. Conversely, bending the IPMC also induces a transient electric potential which can be utilized as a sensing signal. The characteristics of an IPMC are therefore similar to those of piezoelectric materials, which can serve as both an actuator and a sensor [5-7].

In general, an actuated IPMC has nonlinear and time-variant behaviors, e.g. hysteresis and back relaxation, which deteriorate the precision of the control system. To solve these problems, feedback control schemes are a common strategy [8, 9]. For position or force control, the feedback signals are mostly measured with bulky sensors, e.g. a laser displacement sensor, a CCD camera or a load cell. For this reason, the applications of feedback control are restricted by the bulky size of the system. Integrating a sensory function into the actuated IPMC without using extra sensors is therefore an important subject in this area.

In this research, the ultimate goal is to develop an IPMC-based active cardiac guide wire system (Fig. 1). In previous work, a position feedback control scheme was applied to actuate an IPMC [10]. For implementing the control scheme in our application, the next objective is to combine a position sensing method with the actuation of the IPMC. The goal of this study is to develop a sensor-free system to measure the tip position and to actuate the IPMC simultaneously.

Fig. 1 Sketch of cardiac catheterization with active guide-wire. A: traditional guide-wire, B: IPMC-based active guide wire
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 340–343, 2009 www.springerlink.com
II. METHODS

Depending on its size, the bandwidth of an IPMC is generally lower than several hundred Hz, so a driving voltage of several thousand Hz does not actuate an IPMC [10, 11]. In this study, a driving signal Vdm, consisting of a low frequency (0.01~10 Hz) actuating signal Vact and a high frequency (5 kHz) carrier signal Vcar, was used to actuate the IPMC and to measure the deformation simultaneously (Eq. 1). In Eq. (1), vact and vcar are amplitudes, and ωact and ωcar are frequencies. With Vdm, the deformation of the IPMC follows the waveform of Vact, and Vcar carries the deformation signal due to the changes of the surface resistances.

Vdm = Vact + Vcar = vact·sin(ωact·t) + vcar·sin(ωcar·t)   (1)

To eliminate noise from the environment, a fixed reference IPMC connected in parallel with the actuated IPMC was employed (Fig. 2). The equivalent circuit shown in Fig. 3 is similar to a Wheatstone bridge circuit, in which Va1 and Va2 are voltage signals, Ra, Rb, Ram and Rbm are electrical resistances, and Cp and Cpm are capacitances.

Fig. 2 Parallel connection between the reference and the actuated IPMC

Fig. 3 Equivalent circuit of the actuated IPMC paralleled with the reference IPMC

The sensing signal Vaa(t) from the connected IPMCs can be written as

Vaa(t) = Va1 − Va2 = [ (Rb + 1/(Cp·s)) / (Ra + Rb + 1/(Cp·s)) − (Rbm + 1/(Cpm·s)) / (Ram + Rbm + 1/(Cpm·s)) ] · Vdm   (2)

Because Vact in Vdm can be treated as a low frequency noise in Vaa(t), the high frequency band signals in Va1 and Va2 are first separated using a high-pass filter. Eq. (2) can then be further simplified to:

V′aa(t) = [ Rb/(Ra + Rb) − Rbm/(Ram + Rbm) ] · vcar·sin(ωcar·t)   (3)

Furthermore, multiplying Eq. (3) by vcar·sin(ωcar·t) and expanding yields:

V″aa(t) = vcar·sin(ωcar·t) · V′aa(t) = (vcar²/2) · [ Rb/(Ra + Rb) − Rbm/(Ram + Rbm) ] · (1 − cos(2ωcar·t))   (4)

The high frequency band cos(2ωcar·t) in V″aa(t) can be eliminated using a low-pass filter to yield the demodulated sensing signal Vsen(t). Vsen(t) is related to the amplitude of the carrier signal and the electrode resistances by:

Vsen(t) = (vcar²/2) · [ Rb/(Ra + Rb) − Rbm/(Ram + Rbm) ]   (5)

The surface resistances of the actuated IPMC, after deformation, can be written as:

Ra = Ra0 + ΔRa,  Rb = Rb0 + ΔRb   (6)

where Ra0 and Rb0 are the initial resistances of the IPMC, and ΔRa and ΔRb are the changes of resistance due to deformation. Ra0, Rb0, Ram and Rbm are all equal to a constant Ri. Substituting Eq. (6) into Eq. (5) and simplifying yields:

Vsen(t) ≈ (vcar²/8) · (ΔRb − ΔRa)/Ri   (7)

where ΔRa and ΔRb are assumed much smaller than Ri. Eq. (7) shows that Vsen(t) is proportional to the difference between the resistance variations at the compression and tension surfaces of the actuated IPMC.
Two encapsulated IPMCs and an experimental setup were implemented to verify the feasibility of the proposed method. The dimensions of the two IPMCs are 30 mm in length, 5 mm in width and 0.2 mm in thickness (Fig. 4). The values of the electrical resistances are close to 10, and the values of the capacitances are 1.8 mF. The sensing method was realized using an analog circuit (Fig. 5). In the experimental setup, a laser displacement sensor is utilized to detect the deformation at the end of the actuated IPMC, and an electromagnetic shaker controlled by displacement feedback is used to deform the IPMC for actuation-free tests (Fig. 6).
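The demodulation chain of Eqs. (1)-(7) can be simulated numerically. The sketch below replaces the analog high- and low-pass stages with moving-average filters, and all signal values (gain, imbalance amplitude) are illustrative assumptions, not measured IPMC data:

```python
import numpy as np

fs, f_act, f_car = 100_000, 0.1, 5_000        # sample rate, actuation, carrier (Hz)
v_act, v_car = 1.0, 0.5                        # signal amplitudes (V)
t = np.arange(0, 2.0, 1.0 / fs)

# Eq. (1): driving signal = low-frequency actuation + high-frequency carrier
v_dm = v_act * np.sin(2 * np.pi * f_act * t) + v_car * np.sin(2 * np.pi * f_car * t)

# Toy bridge: the resistance imbalance (dRb - dRa)/Ri follows the deformation
imbalance = 0.05 * np.sin(2 * np.pi * f_act * t)      # assumed, dimensionless
v_aa = 0.25 * imbalance * v_dm                        # bridge output ~ imbalance x drive

win = 10 * int(fs / f_car)                            # 10 carrier periods
box = np.ones(win) / win
v_hp = v_aa - np.convolve(v_aa, box, mode="same")     # "high-pass": remove slow part

# Eqs. (4)-(5): multiply by the carrier, then low-pass to obtain Vsen(t)
v_sen = np.convolve(v_car * np.sin(2 * np.pi * f_car * t) * v_hp, box, mode="same")
```

In this toy model v_sen tracks the imbalance with a gain of roughly 0.25·vcar²/2, mirroring the proportionality of Eq. (7).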
Fig. 4 Sample of IPMC manufactured in this study
III. RESULTS AND DISCUSSIONS Typical testing results are depicted in Fig. 7. A tip displacement of amplitude 2 mm produced by the shaker induced a sensing signal of amplitude 5 mV, even though the waveforms differ slightly. A 4th-order polynomial was fitted to the relationship between the sensing signals and the deformations (Fig. 8). From these results, sensing the deformation of the IPMC with the method proposed in this study was achieved successfully. However, when a 0.1 Hz low frequency actuating signal was added to the driving signal while restricting the displacement of the actuated IPMC, the sensing signal was coupled with noise that increased with the amplitude of the actuating signal (Fig. 9). This may be because the electrical resistances and capacitances of the reference and actuated IPMCs are not exactly equal, so a transient error appeared in the sensing signal. Comparing the results for actuating signals with frequencies of 0.1 Hz and 0.01 Hz, the noise for the 0.01 Hz actuation is much smaller than that for 0.1 Hz (Fig. 10). This indicates that the deformation of the actuated IPMC can be measured by the current method if the actuating frequency is 0.01 Hz or lower. For an actuated IPMC, a longer DC actuating time induces a larger deformation [12]. So the sensing approach proposed in this study might be feasible for an IPMC despite the limitation of low actuating frequency.
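The 4th-order polynomial calibration between sensing signal and deformation (cf. Fig. 8) can be sketched with illustrative numbers; the response model below is an assumption of ours, not the measured IPMC data:

```python
import numpy as np

# Assumed tip deformation (mm) and a mildly nonlinear sensing response (mV)
deform = np.linspace(-2.0, 2.0, 41)
sensing = 2.4 * deform + 0.05 * deform ** 3 + 0.05    # illustrative model only

# Fit a 4th-order polynomial mapping the sensing signal back to deformation
coeffs = np.polyfit(sensing, deform, deg=4)
recovered = np.polyval(coeffs, sensing)
```

For a monotonic, mildly nonlinear response like this, the quartic fit inverts the calibration curve with sub-pixel-scale residuals.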
Fig. 5 Diagram of the sensing circuit (reference IPMC and IPMC actuator driven by Vaa(t))
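The demodulation path in Fig. 5 (multiplication by the carrier followed by low-pass filtering) can be sketched digitally in Python; the carrier frequency, envelope and filter length below are illustrative assumptions, not values from the paper.

```python
import numpy as np

fs, f_c = 10_000.0, 1_000.0                          # sample rate and assumed carrier (Hz)
t = np.arange(0, 1.0, 1 / fs)

envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)   # slow "deformation" signal
carrier = np.sin(2 * np.pi * f_c * t)
v_sen = envelope * carrier                           # amplitude-modulated bridge output

# Synchronous demodulation: multiply by the carrier, then low-pass filter.
mixed = v_sen * carrier                              # = envelope/2 plus a 2*f_c component
kernel = np.ones(200) / 200                          # simple moving-average low-pass (20 ms)
recovered = 2.0 * np.convolve(mixed, kernel, mode="same")

# Away from the window edges the recovered signal tracks the envelope.
err = float(np.max(np.abs(recovered[500:-500] - envelope[500:-500])))
```

The moving-average window spans an integer number of carrier periods, so the high-frequency product term averages out while the slow envelope passes through almost unchanged.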
Fig. 6 Experimental setup. A-actuated IPMC, B-reference IPMC, C-Laser displacement sensor, D-electromagnetic shaker
Fig. 7 Sensing signals in response to deformation induced by the shaker, sinusoidal (a) and square wave (b)
IFMBE Proceedings Vol. 23
Development of Active Guide-wire for Cardiac Catheterization by Using Ionic Polymer-Metal Composites
IV. CONCLUSIONS
A method for sensing the deformation of an IPMC while simultaneously actuating it was developed and tested. The results revealed that the method can be applied to an IPMC at actuating frequencies of 0.01 Hz or below. For further development, the achievable deformation of the IPMC matters more than the frequency of manipulation; the developed method might therefore be feasible for the large deformations induced by a DC actuating signal.
Fig. 8 Relationship between deformation and sensing signal
ACKNOWLEDGMENT
This work was supported by a grant from the ROC National Science Council under contract NSC-95-2221-E006-009-MY3.
REFERENCES
Fig. 9 Sensing signals in response to 0.1Hz square-wave actuating signals of different amplitudes
Fig. 10 Comparison between low (0.01Hz) and high (0.1Hz) frequency actuating results
1. Mineta T, Mitsui T, Watanabe Y et al. (2002) An active guide wire with shape memory alloy bending actuator fabricated by room temperature process. Sens Actuator A-Phys 97-98:632-637
2. Ishiyama K, Sendoh M, Arai KI (2002) Smart actuator with magnetoelastic strain sensor. J Magn Magn Mater 242:41-46 Part 1
3. Haga Y, Muyari Y, Mineta T et al. (2005) Small diameter hydraulic active bending catheter using laser processed super elastic alloy and silicone rubber tube. 3rd IEEE/EMBS Special Topic Conference on Microtechnology in Medicine and Biology, Oahu, HI, pp 245-248
4. Guo SX, Nakamura T, Fukuda T (1996) Micro active guide wire using ICPF actuator: characteristic evaluation, electrical model and operability evaluation. IEEE IECON 22nd International Conference, pp 1312-1317
5. Shahinpoor M, Kim KJ (2001) Ionic polymer-metal composites: I. Fundamentals. Smart Mater Struct 10:819-833
6. Bonomo C, Fortuna L, Giannone P et al. (2005) A method to characterize the deformation of an IPMC sensing membrane. Sens Actuators A: Phys 123-124:146-154
7. Biddiss E, Chau T (2006) Electroactive polymeric sensors in hand prostheses: Bending response of an ionic polymer metal composite. Med Eng Phys 28(6):568-578
8. Lavu BC, Schoen MP, Mahajan A (2005) Adaptive intelligent control of ionic polymer-metal composites. Smart Mater Struct 14(4):466-474
9. Arena P, Bonomo C, Fortuna L et al. (2006) Design and control of an IPMC wormlike robot. IEEE Trans Syst Man Cybern Part B-Cybern 36(5):1044-1052
10. Fang BK, Ju MS, Lin CCK (2007) A new approach to develop ionic polymer-metal composites (IPMC) actuator: Fabrication and control for active catheter systems. Sens Actuators A: Phys 137(2):321-329
11. Paquette JW, Kim KJ (2004) Ionomeric electroactive polymer artificial muscle for naval applications. IEEE J Ocean Eng 29(3):729-737
12. Pandita SD, Lim HT, Yoo YT et al. (2006) The actuation performance of ionic polymer metal composites with mixtures of ethylene glycol and hydrophobic ionic liquids as an inner solvent. J Korean Phys Soc 49(3):1046-1051
Design and Development of an Interactive Proteomic Website
K. Xin Hui1, C. Zheng Wei2, Sze Siu Kwan3 and R. Raja4
1,2,4 Biomedical Informatics Engineering, Temasek Polytechnic, Singapore
3 Asst. Professor, SCBS, Nanyang Technological University, Singapore
Abstract — The interactive proteomic core facility website was built with Apache, PHP and MySQL. The website allows researchers to submit information about their samples via a web page and, at the same time, provides a clear and accurate cost for the experiment. This minimizes information mix-ups and gives researchers a platform to manage their expenses better. The server processes the submitted information and returns a generated code for the sample. The end results are stored in our database, and our servers email users to download the results. Users must register before they can use our services, and a tutorial (FAQ) is provided to assist them. The website increases working efficiency on both sides: users can submit an entry anytime, anywhere, and the staff can manage submissions better from the lab, gaining a far more manageable system for tracing submissions and performing calculations. Errors such as missing forms and unreadable handwriting are reduced to a minimum. Keywords — Proteomics, information processing, database.
I. INTRODUCTION
Proteomics is the study of protein structure and function. Proteins are essential for all living organisms because they are the main components of the physiological metabolic pathways of cells [1]. A protein is a complex structure formed by many peptides or amino acids, so identifying proteins is not an easy task. Nanyang Technological University (NTU)'s proteomics facility is equipped with a high-end mass spectrometry lab which can carry out tests such as protein identification, mass weight determination and liquid chromatography-mass spectrometry (LC-MS). With these services, proteins can easily be analyzed in the lab with specialist machines. Most proteomics researchers do not have this equipment to carry out tests on their own, so they have to send all samples to the proteomics facility. A better platform is needed for interaction between researcher and service provider; hence an interactive proteomic website was created to handle the sending and receiving of data and information.
II. FLOW CHART
Modules [2]-[18] and their principles:
- PHP: Open source. Used to write our web pages and to create site functions such as login and user-profile update. Highly flexible: usable with JavaScript, HTML and any database system. Widely supported by many online communities and reference materials.
- MySQL: Open source. The database for our system, used to store information behind the website.
- Apache Server: Open source. Turns the system into a web server. Highly customizable to suit the website's demands. Widely used; internet companies such as Google and Yahoo! use Apache.
- MySQL Navicat Lite 8.0: Free to use. Used to edit, create and maintain the MySQL database. Can connect remotely via the server's IP address. Fast and easy to use.
- Adobe Dreamweaver CS3: Used to create the websites. Development is easier with Dreamweaver's many built-in functions; pages can be previewed while being created, and it also connects to the database.
- Macromedia Fireworks 8: Used hand in hand with Flash 8 to create images for the website and the flash animations, and to make image backgrounds transparent so that a white background does not overlap other text or images.
- Macromedia Flash 8: Used to create flash web links and to make the website more colorful with nicer-looking buttons.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 344–347, 2009 www.springerlink.com
Work Flow
The work flow for the development of this website project can be grouped into seven stages.
First Stage: Defining the Concept
At this stage the project is defined. The project scope is discussed, and the site is designed with the needs and demands of the end users in mind.
III. RESULTS
The diagram on the right shows the old system as practiced by the proteomics facility lab. This system causes many inconveniences: the three parties involved (the lab, the SBS account management and the user/researcher) need to meet in person to get the work done.
Second Stage: Initialization
The initialization stage begins by deciding on the technology to be used. First, the available options were researched, including the software used to code the website and the types of programming language available. PHP, MySQL and Apache were chosen because they were cheaper and widely supported. The necessary software was then installed.
Third Stage: Design
At this stage, the layout of the website was decided, along with its essential functions. It is important to choose functions that are relevant, suitable and practical, to suit user demands and ease of use.
Fourth Stage: Development & Debugging
At this stage the coding of the website began. Developing under a local host minimized the number of errors and bugs persisting in the system before posting to the server. Common bugs and errors were incomplete database queries and stray white space. As the project continued, changes were made to the design; certain planned functions were either removed or improved.
With the old system, errors and problems usually occur for all three parties, as stated in the diagram. With the new interactive proteomic website (server) set up as shown in the diagram on the right, we can solve these problems and simplify the whole process. The website (server) acts as the brain of the system: all information is sent directly to the server, which operates 24/7. The system is automated, which cuts down the time and cost needed compared to the old system.
Fifth Stage: Finalizing the Concept
The prototype was shown to third parties. Their point of view is essential at this stage, because they are the target users; their comments and ideas were incorporated to make the website better.
Sixth Stage: Development & Debugging (live)
This repeats the fourth stage, except that code is tested first on the local host and then on the live server.
Final Stage: Launch
The completed programme is launched on the server.
Features
Admin
The admin home page includes:
- Search for a sample using the search bar
- Sign up a new admin
- View incomplete submissions / filter completed submissions
- View mass weight / protein ID / user details with more information
- Update payment
Admin can:
- Upload results to the web for the user to download
- Send an email to notify the user
User
User can:
- Download the result after logging into their account
Submission and calculation
User can:
- Submit their sample information form (mass weight / protein ID)
- Have the cost of the service calculated automatically
- Review and edit the details shown before confirmation
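The submission-and-cost workflow described above is implemented in PHP on the actual site; as an illustration only, the server-side logic can be sketched in Python, with hypothetical service names, prices and code format (none of these appear in the paper).

```python
import hashlib

# Hypothetical price list per sample (S$); the paper does not give actual prices.
PRICES = {"protein_id": 100.0, "mass_weight": 60.0}

def submission_cost(service: str, n_samples: int) -> float:
    """Compute the cost shown to the user before confirmation."""
    return PRICES[service] * n_samples

def reference_code(user: str, service: str, n_samples: int) -> str:
    """Generate a short code returned to the user for tracking a submission."""
    digest = hashlib.sha1(f"{user}:{service}:{n_samples}".encode()).hexdigest()
    return digest[:8].upper()

cost = submission_cost("protein_id", 3)
code = reference_code("alice", "protein_id", 3)
```

Showing the cost before confirmation and returning a deterministic reference code are the two behaviours the paper attributes to the server program.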
Records
User can view:
- Sample reference no.
- Status (Complete/Incomplete)
- Cost
- Payment (Paid/Pending)
- Full detail by clicking "View"
REFERENCES
User can also update a file (image or text) by clicking "Update File"
IV. DISCUSSION & CONCLUSION
Minimizes human-related errors
- Computerized forms remove unreadable handwriting
- An inbuilt program prevents important fields from being left unfilled
- Makes data analysis faster and more efficient, as there is little need to call back to double-confirm
Saves cost
- Manpower in the lab can be redeployed to other areas, reducing unused manpower
Convenient
- Reduces unnecessary trips to the lab. For example, a researcher would otherwise have to make multiple trips if he could not submit his form because the staff were absent, or make an advance appointment to come down to the lab
- Reduces the need for emails or phone calls to arrange appointments or sample pick-ups
- A digitalized database of received forms makes it easy to back up records or change business address
ACKNOWLEDGMENT
We acknowledge with thanks the management of NTU and Temasek Polytechnic for the opportunity provided to create, design and develop an interactive proteomic website for mass spectrometric analysis.
1. Naramore E (2004) Beginning PHP5, Apache, and MySQL Web Development. Apress
2. Valade J (2005) PHP & MySQL Everyday Apps for Dummies. Wiley Publishing
3. Babin L (2005) PHP 5 Recipes: A Problem-Solution Approach. Apress
4. Williams HE, Lane D (2004) Web Database Applications with PHP and MySQL, 2nd Edition. O'Reilly Media, Inc.
5. Finkelstein E, Leete G (2005) Macromedia Flash 8 for Dummies. Wiley Publishing
6. Bride M. Teach Yourself Flash 8. Teach Yourself
7. Darie C, Bucica M (2004) Beginning PHP 5 and MySQL E-Commerce: From Novice to Professional. Apress
8. Kent A, Powers D, Andrew R (2004) PHP Web Development with Macromedia Dreamweaver MX 2004. Apress
9. Gilmore WJ (2008) Beginning PHP and MySQL 5: From Novice to Professional, 3rd Edition. Apress
10. Bardzell J (2005) Macromedia Dreamweaver 8 with ASP, ColdFusion and PHP. Macromedia Press
11. Cogswell J (2003) Apache, MySQL and PHP Web Development All-in-One Desk Reference for Dummies. Wiley Publishing
12. Davis ME, Phillips JA (2006) Learning PHP and MySQL. O'Reilly Media, Inc.
13. Harris A (2004) PHP5/MySQL Programming for the Absolute Beginner, 1st Edition. Course Technology PTR
14. Hughes S, Zmievski A (2000) PHP Developer's Cookbook, 1st Edition. SAMS
15. Sklar D, Trachtenberg A (2002) PHP Cookbook. O'Reilly Media, Inc.
16. Converse T, Park J, Morgan C (2004) PHP5 and MySQL Bible. Wiley Publishing
17. "PHP Tutorial" at http://w3schools.com/php/default.asp (accessed 25 April 2008 - 1 September 2008)
18. Proteomics at http://en.wikipedia.org/wiki/Proteomics (accessed 9 August 2008)
Author: Dr. R. Raja
Institute: Temasek Polytechnic
Street: Tampines Avenue 1, Tampines
Country: Singapore 529757
Email: [email protected]
A New Interaction Modality for the Visualization of 3D Models of Human Organ
L.T. De Paolis1,3, M. Pulimeno2 and G. Aloisio1,3
1 Department of Innovation Engineering, Salento University, Lecce, Italy
2 ISUFI, Salento University, Lecce, Italy
3 SPACI Consortium, Italy
Abstract — The developed system is the first prototype of a virtual interface designed to avoid contact with the computer, so that the surgeon is able to visualize models of the patient's organs more effectively during the surgical procedure. In particular, the surgeon will be able to rotate, translate and zoom in on 3D models of the patient's organs simply by moving his finger in free space; in addition, it is possible to visualize all of the organs or only some of them. All of the interactions with the models happen in real time using the virtual interface, which appears as a touch-screen suspended in free space in a position chosen by the user when the application is started up. Finger movements are detected by means of an optical tracking system and are used to simulate touch with the interface and to interact by pressing the buttons present on the virtual screen. Keywords — User Interface, Image Processing, Tracking System.
I. INTRODUCTION
The visualization of 3D models of the patient's body emerges as a priority in surgery, both in pre-operative planning and during surgical procedures. Current input devices tether the user to the system by restrictive cabling or gloves. The use of a computer in the operating room requires the introduction of new modalities of interaction, designed to replace the standard ones and to enable non-contact doctor-computer interaction. Gesture tracking systems provide a natural and intuitive means of interacting with the environment in an equipment-free and non-intrusive manner. Greater flexibility of action is provided since no wired components or markers need to be introduced into the system. In this work we present a new interface, based on the use of an optical tracking system, which interprets the user's gestures in real time for the navigation and manipulation of 3D models of the human body. The tracked movements of the finger provide a more natural and less restrictive way of manipulating 3D models created from the patient's medical images.
Various gesture-based interfaces have been developed; some of these are used in medical applications. Grätzel et al. [1] presented a non-contact mouse for surgeon-computer interaction, replacing standard computer mouse functions with hand gestures. Wachs et al. [2] presented "Gestix", a vision-based hand gesture capture and recognition system for navigation and manipulation of images in an electronic medical record database. GoMonkey [3] is an interactive, real-time gesture-based control system for projected output that combines conventional PC hardware with a pair of stereo tracking cameras, gesture recognition software and a customized content management system. O'Hagan and Zelinsky [4] presented a prototype interface based on a tracking system in which a finger is used as a pointing and selection device; the focus of their discussion is how the system can be made to perform robustly in real time. O'Hagan et al. [5] implemented a gesture interface for navigation and object manipulation in a virtual environment.
II. TECHNOLOGIES USED
In the developed system we have utilized OpenSceneGraph for the construction of the graphic environment and 3D Slicer for building the 3D models starting from the real patient's medical images. OpenSceneGraph [6] is an open-source, high-performance 3D graphics toolkit used by application developers in fields such as visual simulation, computer games, virtual reality, scientific visualization and modeling. The toolkit is a C++ library and is available on multiple platforms including Windows, Linux, IRIX and Solaris. 3D Slicer [7] is a multi-platform open-source software package for visualization and image analysis, aimed at computer scientists and clinical researchers. The platform provides functionality for segmentation, registration and three-dimensional visualization of multimodal image data, as well as advanced image analysis algorithms for diffusion tensor imaging, functional magnetic resonance imaging and image-guided therapy. Standard image file formats are supported, and the application integrates interface capabilities with biomedical research software and image informatics frameworks.
The optical tracking system used in this application is the Polaris Vicra from NDI. The Polaris Vicra [8] is an optical system that tracks both active and passive markers and provides precise, real-time spatial measurements of the location and orientation of an object or tool within a defined coordinate system. The system tracks wired active tools with infrared light-emitting diodes and wireless passive tools with passive reflective spheres. With passive and active markers, the position sensor receives light from marker reflections and marker emissions, respectively. The Polaris Vicra uses a position sensor to detect infrared-emitting or retro-reflective markers affixed to a tool or object; based on the information received from the markers, the position sensor determines the position and orientation of tools within a specific measurement volume. The system can track up to 6 tools (at most 1 active wireless) with a maximum of 32 passive markers in view, and the maximum update rate is 20 Hz. The system can be used in a variety of surgical applications, delivering accurate, flexible, and reliable measurement solutions that are easily customized for specific applications.
III. THE DEVELOPED APPLICATION
The developed system is the first prototype of a virtual interface designed to avoid contact with the computer, so that the surgeon can visualize models of the patient's organs more effectively during the surgical procedure. A 3D model of the abdominal area, reconstructed from CT images, is shown in Figure 1 using the user interface of the 3D Slicer software. In order to build the 3D model from the CT images, some segmentation and classification algorithms were utilized.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 348–350, 2009 www.springerlink.com
The Fast Marching algorithm was used for the image segmentation; some fiducial points were chosen in the area of interest and used in the growing phase. After a first semi-automatic segmentation, a manual segmentation was carried out. All of the interactions with the models happen in real time using the virtual interface, which appears as a touch-screen suspended in free space in a position chosen by the user when the application starts up.
Fig. 1 User interface of 3D Slicer with the reconstructed model
Finger movements are detected by means of an optical tracking system (the Polaris Vicra) and are used to simulate touch with the interface, where some buttons are located. The interaction with the virtual screen happens by pressing these buttons, which make it possible to visualize the different organs present in the built 3D model (buttons on the right) and to choose the operations allowed on the selected model (buttons on the left). Using this graphical interface, the surgeon can rotate, translate and zoom in on the 3D models of the patient's organs simply by moving his finger in free space; in addition, he can select the visualization of all of the organs or only some of them. In Figure 2 the interaction with the user interface by means of the tracking system is shown.
Fig. 2 The interaction with the virtual user interface
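The button-press interaction can be sketched as a plane-projection hit test: the tracked fingertip is projected onto the virtual screen and compared against button regions. The screen geometry, touch threshold and button layout below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Virtual screen defined at startup by a corner and two edge vectors
# (tracker coordinates, mm); these numbers are illustrative.
origin = np.array([0.0, 0.0, 0.0])
u_edge = np.array([300.0, 0.0, 0.0])      # bottom edge
v_edge = np.array([0.0, 200.0, 0.0])      # left edge
normal = np.cross(u_edge, v_edge)
normal = normal / np.linalg.norm(normal)

def finger_on_screen(p, touch_mm=10.0):
    """Project a tracked fingertip onto the virtual screen; return (u, v) in [0, 1]^2
    if the finger is within touch_mm of the plane and inside the screen, else None."""
    d = p - origin
    if abs(d @ normal) > touch_mm:        # finger not "touching" the virtual plane
        return None
    u = (d @ u_edge) / (u_edge @ u_edge)  # normalized screen coordinates
    v = (d @ v_edge) / (v_edge @ v_edge)
    return (u, v) if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 else None

def hits_rotate_button(p):
    """Hypothetical 'rotate' button occupying the lower-left 20% of the screen."""
    uv = finger_on_screen(p)
    return uv is not None and uv[0] < 0.2 and uv[1] < 0.2
```

Defining the screen by its corners matches the paper's description of the user fixing the interface's vertices in free space at startup.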
350
L.T. De Paolis, M. Pulimeno and G. Aloisio
IV. CONCLUSIONS AND FUTURE WORK
The described application is the first prototype of a virtual interface which provides a very simple form of interaction for the navigation and manipulation of 3D virtual models of the human body. The virtual interface provides an interaction modality with models of the human body similar to the traditional one using a touch screen, but here there is no contact with a screen and the user's finger moves through open space. By means of an optical tracking system, the position of the fingertip, where an IR reflector is located, is detected and used first to define the four vertices of the virtual interface and then to manage the interaction with it. The optical tracker is already in use in computer-aided systems and, for this reason, the developed interface can easily be integrated into the operating room. With a view to possible use of the optical tracker in the operating room during surgical procedures, the problem of undesired interference due to the detection of false markers (phantom markers) will be evaluated.
The introduction of other functionalities of interaction with the models is in progress, after further investigation and consideration of surgeons’ requirements.
REFERENCES
1. Grätzel C, Fong T, Grange S, Baur C (2004) A non-contact mouse for surgeon-computer interaction. Technology and Health Care Journal, IOS Press, Vol. 12, No. 3
2. Wachs JP, Stern HI, Edan Y, Gillam M, Handler J, Feied C, Smith M (2008) A gesture-based tool for sterile browsing of radiology images. The Journal of the American Medical Informatics Association, Vol. 15, Issue 3
3. GoMonkey at http://www.gomonkey.at
4. O'Hagan R, Zelinsky A (1997) Finger Track: a robust and real-time gesture interface. Lecture Notes in Computer Science, vol. 1342, pp 475-484
5. O'Hagan R, Zelinsky A, Rougeaux S (2002) Visual gesture interfaces for virtual environments. Interacting with Computers, vol. 14, pp 231-250
6. OpenSceneGraph at http://www.openscenegraph.org
7. 3D Slicer at http://www.slicer.org
8. NDI Polaris Vicra at http://www.ndigital.com
Performance Analysis of Support Vector Machine (SVM) for Optimization of Fuzzy Based Epilepsy Risk Level Classifications from EEG Signal Parameters
R. Harikumar1, A. Keerthi Vasan2, M. Logesh Kumar3
1 Professor, Bannari Amman Institute of Technology, Sathyamangalam, India
2,3 U.G. students, ECE, Bannari Amman Institute of Technology, Sathyamangalam, India
Abstract — In this paper, we investigate the optimization of fuzzy outputs in the classification of epilepsy risk levels from EEG (electroencephalogram) signals. Fuzzy techniques are applied as a first-level classifier to classify the risk levels of epilepsy based on parameters extracted from the patient's EEG signals, including energy, variance, peaks, sharp and spike waves, duration, events and covariance. A Support Vector Machine (SVM) is used as a post-classifier on the classified data to obtain the optimized risk level that characterizes the patient's epilepsy. Epileptic seizures result from a sudden electrical disturbance of the brain. Approximately one in every 100 persons will experience a seizure at some time in their life. Seizures may sometimes go unnoticed, depending on their presentation, and may be confused with other events, such as a stroke, which can also cause falls, or migraines. Unfortunately, the occurrence of an epileptic seizure seems unpredictable and its process is very little understood. The Performance Index (PI) and Quality Value (QV) are calculated for the above methods. A group of twenty patients with known epilepsy findings is used in this study. A high PI of 98.5% was obtained at a QV of 22.94 for SVM optimization, compared with 40% and 6.25 respectively for the fuzzy techniques alone. We find that the SVM method outperforms fuzzy techniques in optimizing epilepsy risk levels. In India the number of persons suffering from epilepsy increases every year, and the diagnosis and therapy involved need to be cost-effective. This paper is intended to synthesize a cost-effective SVM mechanism to classify the epilepsy risk level of patients. Keywords — Epilepsy, EEG signals, fuzzy techniques, Performance Index, Quality Value.
I. INTRODUCTION
Support Vector Machine (SVM) is an important machine learning technique which involves creating a function from a set of labeled training data. Seizures in people with epilepsy [2] often go unnoticed and can be confused with other events, such as a stroke, which also causes falls, or migraines. In India the number of persons suffering from epilepsy is increasing every year, and the diagnosis and therapy involved need to be cost-effective. Airports, amusement parks, and shopping malls are just a few of the places where computers could be used to assess a person's epilepsy
risk levels if a life-threatening condition occurs, since a trained doctor or neuroscientist is not always on hand. This work is intended to synthesize a cost-effective SVM mechanism to classify the epilepsy risk level of patients and to mimic the diagnosis of a doctor or neuroscientist. The EEG (electroencephalogram) signals of 20 patients were collected from Sri Ramakrishna Hospital at Coimbatore, and their risk level of epilepsy was identified after converting the EEG signals to code patterns by fuzzy systems. This type of classification helps doctors and neurosurgeons to give appropriate therapeutic measures to patients; it aims to help save a patient's life when a life-threatening condition occurs, and to create public awareness of the risks of epilepsy.
II. METHODOLOGY
Support Vector Machine (SVM) is used for pattern classification and nonlinear regression, like multilayer perceptrons and Radial Basis Function networks. SVM is now regarded as an important example of kernel methods. The main idea of SVM is to construct a hyperplane as the decision surface in such a way that the margin of separation between positive and negative examples is maximized. The SVM is an approximate implementation of the method of structural risk minimization. We investigate the optimization of fuzzy outputs in the classification of epilepsy risk levels from EEG signals. The fuzzy techniques are applied as a first-level classifier to classify the risk levels of epilepsy based on parameters extracted from the patient's EEG signals: energy, variance, peaks, sharp and spike waves, duration, events and covariance. The block diagram of the epilepsy classifier is shown in Figure 1. This is accomplished as:
1. Fuzzy classification of the epilepsy risk level at each channel from the EEG signals and their parameters.
2.
Each channel's results are optimized, since they are at different risk levels.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 351–354, 2009 www.springerlink.com
3. The performance of the fuzzy classification before and after SVM optimization is analyzed.
Fig 1 SVM-Fuzzy Classification System
The following tasks are carried out to classify the risk levels by SVM:
1. First, the simplest case is analyzed, with a hyperplane as the decision function for known linear data.
2. A nonlinear classification is done for the codes obtained from a particular patient by using quadratic discrimination.
3. k-means [8][5] clustering is performed on the large data set, with different numbers of clusters and a centroid for each.
4. The centroids obtained are mapped by the kernel function to obtain a proper shape.
5. A linear separation is obtained by using SVM with the kernel and k-means clustering.
In fuzzy techniques [3], many suboptimal solutions arise. These solutions must be optimized to arrive at a better identification of the patient's epilepsy risk level. Due to the low performance index (40%) and quality value (6.25), it is necessary to optimize the output of the fuzzy systems. Hence we move to SVM classification, which gives a performance index of 98% and a quality value of 22.94. The Support Vector Machine (SVM) method is used to optimize the fuzzy outputs, following these steps:
Step 1: The linearization and convergence are done using quadratic optimization [7][4]. The primal minimization problem is transformed into its dual problem of maximizing the dual Lagrangian L_D with respect to the α_i:

max L_D = Σ_{i=1..l} α_i − (1/2) Σ_{i=1..l} Σ_{j=1..l} α_i α_j y_i y_j (x_i · x_j)   (1)

subject to

Σ_{i=1..l} α_i y_i = 0   (2)

α_i ≥ 0, i = 1, ..., l   (3)
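To illustrate equations (1)-(3), the dual Lagrangian can be evaluated numerically for candidate multipliers; the toy data and α values below are illustrative, not from the paper.

```python
import numpy as np

def dual_objective(alpha, x, y):
    """Dual Lagrangian L_D of Eq. (1):
    sum(alpha) - 1/2 * sum_ij alpha_i alpha_j y_i y_j <x_i, x_j>."""
    G = (x @ x.T) * np.outer(y, y)      # Gram matrix weighted by the labels
    return alpha.sum() - 0.5 * alpha @ G @ alpha

# Toy linearly separable data: two points per class.
x = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Candidate multipliers satisfying the constraints (2) and (3).
alpha = np.array([0.25, 0.0, 0.25, 0.0])
assert abs((alpha * y).sum()) < 1e-12   # Eq. (2)
assert (alpha >= 0).all()               # Eq. (3)

L_D = dual_objective(alpha, x, y)
```

A quadratic programming solver would search over all feasible α for the maximum of this objective; here we only evaluate it at one feasible point.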
Step 2: The optimal separating hyperplane is constructed by solving the quadratic programming problem defined by (1)-(3). In this solution, the points with nonzero Lagrange multipliers (α_i > 0) are termed support vectors.
Step 3: Support vectors lie closest to the decision boundary. Consequently, the optimal hyperplane is determined only by the support vectors in the training data.
Step 4: The k-means [8][5] clustering is done for the given set of data. The k-means function forms a group of clusters according to the conditions given in Steps 2 and 3. For a group of 3 clusters, the k-means function randomly chooses 3 centre points from the given set, and each centre point acquires the values present around it.
Step 5: There are now six centre points, three from each epoch, and the SVM training is done by kernel methods. Thus only the kernel function is used in the training algorithm, and one does not need to know the explicit form of the feature mapping. Some of the commonly used kernel [10] functions are:

Polynomial function: K(X, Y) = (X^T Y + 1)^d
Radial basis function: K(x_i, x_j) = exp(−|x_i − x_j|² / (2σ²))
Sigmoid function: K(X, Y) = tanh(k X^T Y − θ)
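The three kernels can be written directly; a minimal NumPy sketch, with parameter names d, σ, k and θ following the formulas above (default values are illustrative assumptions):

```python
import numpy as np

def polynomial_kernel(x, y, d=2):
    # K(X, Y) = (X^T Y + 1)^d
    return (np.dot(x, y) + 1.0) ** d

def rbf_kernel(x, y, sigma=1.0):
    # K(x_i, x_j) = exp(-|x_i - x_j|^2 / (2 sigma^2))
    diff = np.asarray(x, float) - np.asarray(y, float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, y, k=1.0, theta=0.0):
    # K(X, Y) = tanh(k X^T Y - theta)
    return np.tanh(k * np.dot(x, y) - theta)

print(polynomial_kernel([1, 0], [1, 0]), rbf_kernel([1, 2], [1, 2]))  # → 4.0 1.0
```

Note that the RBF kernel of any point with itself is exactly 1, which is why it behaves as a similarity measure on the hidden space.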
III. RADIAL BASIS FUNCTION KERNEL

The hyperplane and support vectors are used to separate linearly separable and non-linearly separable data. In this work we used the Radial Basis Function (RBF) kernel [4] for the non-linear classification. RBF is a curve-fitting approximation in a higher-dimensional space: learning is equivalent to finding a surface in multidimensional space that provides a best fit to the training data, and generalization is equivalent to using this multidimensional surface to interpolate the test data. It draws upon traditional strict interpolation in multidimensional space; the RBF thus provides a set of testing data that acts as a "basis" for the input patterns when expanded into the hidden space. From the set of RBF testing values, the Mean Square Error (MSE) and average MSE are computed. The tool used in this study is MATLAB v7.2. An important factor in the choice of a classification method for a given problem is the available a-priori knowledge. During the last few years support vector machines
IFMBE Proceedings Vol. 23
Fig 2 MSE of Training and Testing of SVM Models

Fig 3 Average MSE under Testing of SVM Models
IV. TEST RESULTS

In SVM the classification performance is about 97.39%, which is very high when compared with fuzzy logic at only 50%. The sensitivity and specificity of the SVM are also higher than those of the latter. The missed classification
Table 1: Comparison Results of Classifiers Taken as Average of All Ten Patients

Parameters                   Fuzzy Techniques        Optimization With
                             Without Optimization    SVM Technique
Perfect Classification (%)   50                      97.39
Missed Classification (%)    20                      1.458
Performance Index (%)        40                      97.07
Sensitivity                  83.33                   98.59
Specificity                  71.42                   98.52
Quality Value                6.25                    22.94
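The sensitivity and specificity figures in Table 1 follow the standard definitions from classification counts; a generic illustration (the counts below are hypothetical, not the paper's data):

```python
def sensitivity(tp, fn):
    # true-positive rate, in percent
    return 100.0 * tp / (tp + fn)

def specificity(tn, fp):
    # true-negative rate, in percent
    return 100.0 * tn / (tn + fp)

# hypothetical counts for illustration only
print(sensitivity(tp=9, fn=1), specificity(tn=7, fp=3))  # → 90.0 70.0
```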
(SVM) have been shown to be widely applicable and successful, particularly in cases where the a-priori knowledge consists of labeled learning data. If more knowledge is available, it is reasonable to incorporate and model this knowledge within the classification results, or to require less training data. Therefore, much active research deals with adapting the general SVM methodology to cases where additional a-priori knowledge is available. We have focused on the common case where the variability of the data can be modeled by transformations which leave the class membership unchanged. If these transformations can be modeled by mathematical groups of transformations, one can incorporate this knowledge independently of the classifier during the feature extraction stage by group integration, normalization, etc. This leads to invariant features, on which any classification algorithm can be applied. It is noted that one of the main assumptions of SVM is that all samples in the training set are independent and identically distributed (i.i.d.); however, in many practical engineering applications the obtained training data are often contaminated by noise, and some samples in the training set are misplaced on the wrong side by accident. These are known as outliers. In this case the standard SVM training algorithm makes the decision boundary deviate severely from the optimal hyperplane, so that the SVM is very sensitive to noise, especially to those outliers that are close to the decision boundary. This also makes the standard SVM no longer sparse; that is, the number of support vectors increases significantly due to outliers. In this work we present a general method that follows the main idea of SVM, using an adaptive margin for each data point to formulate the minimization problem, with the RBF kernel trick. It is noted that classification functions obtained by minimizing the MSE are not sensitive to outliers in the training set.
The reason that the classical MSE is immune to outliers is that it is an averaging criterion: a particular sample in the training set contributes only a little to the final result, so the effect of outliers is largely eliminated by averaging over samples. That is why the averaging technique is a simple yet effective tool for tackling outliers. In order to avoid outliers we utilized the RBF kernel functions, and also decision functions for determining the margin of each class. We analyze twenty epilepsy patients through the leave-one-out method and ten-fold cross-validation. Based on the MSE values and average MSE values of the SVM models, the classifications of epilepsy risk levels are validated. Fig 2 depicts the training and testing MSE of the SVM models; the outliers problem is addressed through the average MSE method, which is shown in Fig 3.
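A leave-one-out estimate of a per-patient MSE can be sketched as follows. This is a hedged illustration with synthetic regression data (the real feature vectors and risk-level targets are not reproduced here):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))   # stand-in features, one row per patient
y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + rng.normal(0.0, 0.05, size=20)

# One MSE value per held-out patient, as in the leave-one-out scheme
scores = cross_val_score(SVR(kernel="rbf"), X, y,
                         cv=LeaveOneOut(), scoring="neg_mean_squared_error")
mse_per_patient = -scores
print(mse_per_patient.shape)  # → (20,)
```

Averaging `mse_per_patient` gives the "Average MSE" quantity plotted in Fig 3.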
of SVM is 1.458%, but it is about 20% in the fuzzy network; likewise, the PI of SVM is 97.07 against 40 for fuzzy.

V. CONCLUSION

This work investigates the performance of SVM in optimizing the epilepsy risk level of epileptic patients from EEG signals. The parameters derived from the EEG signal are stored as data sets. The fuzzy technique is then used to obtain the risk level from each epoch at every EEG channel. The objective was to classify perfect risk levels with a high rate of classification, a short delay from onset, and a low false-alarm rate. Though it is impossible to obtain a perfect performance in all these conditions, some compromises have been made. As a high false-alarm rate ruins the effectiveness of the system, a low false-alarm rate is most important. SVM optimization techniques are used to optimize the risk level by incorporating the above goals. A classification rate of epilepsy risk level above 98% is possible with our method. The missed classification is almost 1.458% for a short delay of 2.031 seconds. The number of cases beyond the present twenty patients has to be increased for better testing of the system. From this method we can infer the occurrence frequency of the high-risk level and the possible medication for the patients. Optimizing each region's data separately can also address the focal epilepsy problem. Future research is in the direction of a comparison of SVM with heuristic MLP and Elman neural network optimization models.

ACKNOWLEDGEMENT

The authors wish to express their sincere appreciation to the Management and Principal of Bannari Amman Institute of Technology, Sathyamangalam, for their support. We also wish to express our sincere thanks to Dr. Asokan, Neurologist, Sri Ramakrishna Hospitals, Coimbatore, for providing us the EEG signals of the patients.

REFERENCES
1. Pamela McCauley-Bell and Adedeji B. Badiru, Fuzzy Modeling and Analytic Hierarchy Processing to Quantify Risk Levels Associated with Occupational Injuries - Part I: The Development of Fuzzy Linguistic Risk Levels, IEEE Transactions on Fuzzy Systems, 1996, 4(2): 124-131.
2. R. Harikumar and B. Sabarish Narayanan, Fuzzy Techniques for Classification of Epilepsy Risk Level from EEG Signals, Proceedings of IEEE Tencon 2003, 14-17 October 2003, Bangalore, India, 209-213.
3. R. Harikumar and B. Sabarish Narayanan, Fuzzy Techniques for Classification of Epilepsy risk level from
4. S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice-Hall Inc., 2nd ed., 1999.
5. Mu-Chun Su and Chien-Hsing Chou, A modified version of the k-means clustering algorithm with a distance based on cluster symmetry, IEEE Transactions on Pattern Analysis and Machine Intelligence, June 2001, 23(6): 674-680.
6. Qing Song, Wenjie Hu, and Wenfang Xie, Robust Support Vector Machine with Bullet Hole Image Classification, IEEE Transactions on SMC Part C, 2002, 32(4): 440-448.
7. Sathish Kumar, Neural Networks: A Classroom Approach, McGraw-Hill, New York, 2004.
8. Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification, 2nd ed., Wiley-Interscience, John Wiley and Sons, Inc., 2003.
9. Jehan Zeb Shah and Naomie bt Salim, Neural Networks and Support Vector Machines Based Bio-Activity Classification, Proceedings of the 1st Conference on Natural Resources Engineering & Technology 2006, 24-25 July 2006, Putra Jaya, Malaysia, 484-491.
10. V. Vapnik, Statistical Learning Theory, Wiley, Chichester, GB, 1998.
11. Joel, J. et al., Detection of seizure precursors from depth EEG using a sign periodogram transform, IEEE Transactions on Biomedical Engineering, April 2004, 51(4): 449-458.
12. Clement, C. et al., A Comparison of Algorithms for Detection of Spikes in the Electroencephalogram, IEEE Transactions on Biomedical Engineering, April 2003, 50(4): 521-526.
A Feasibility Study for the Cancer Therapy Using Cold Plasma

D. Kim1, B. Gweon2, D.B. Kim2, W. Choe2 and J.H. Shin1

1 Division of Mechanical Engineering, Aerospace and Systems Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
2 Department of Physics, KAIST, Daejeon, Republic of Korea
Abstract — Cold plasma generated at atmospheric pressure has been applied in biomedical research to disinfect microorganisms such as bacteria and yeast cells. In particular, owing to its low-temperature operation, heat-sensitive medical devices can be easily sterilized by cold plasma treatment. In recent years, the effects of plasma on mammalian cells have arisen as a new issue; generally, plasma is known to induce intensity-dependent necrotic cell death. In this research, we investigate the feasibility of cold plasma treatment for cancer therapy by conducting a comparative study of plasma effects on normal and cancer cells. We select THLE-2 (human liver normal cells) and SK-Hep1 (human liver metastatic cancer cells) as our target cells. The two cell types have different onset plasma conditions for necrosis, which may be explained by a difference in the electrical properties of the two cell types. Based on this work, the feasibility of a novel selective cancer therapy is tested. Keywords — Cold plasma, Cancer therapy, Biomedical engineering
cells. This study evaluates the feasibility of cancer therapy using cold plasma.

II. MATERIALS AND METHODS

A. Target cells and sample preparation

We select the THLE-2 (human liver normal cell) and SK-Hep1 (human liver metastatic cancer cell) cell lines as the target cells. THLE-2 cells are cultured in the complete medium (BEGM + 1% antibiotics + 10% Fetal Bovine Serum (FBS)). SK-Hep1 cells are likewise cultured in the recommended complete medium (DMEM + 1% antibiotics + 10% FBS). Cells are seeded on slide glasses to prepare the experiment. After culturing the two cell types to the same coverage, the cells are rinsed twice with Dulbecco's Phosphate-Buffered Saline (DPBS) solution for sample preparation.

B. Experimental setup
I. INTRODUCTION

Plasma is generated by ionizing neutral gas molecules, resulting in a mixture of energetic particles, including electrons and ions. Low-pressure plasma has been well characterized over many years and is applied particularly in the semiconductor industry. In recent years, new techniques have been developed to generate plasma at atmospheric pressure. The temperature of these non-thermal atmospheric plasmas, the so-called cold plasmas, can be as low as around room temperature. When a substrate is treated by cold plasma, chemical reactions are induced by the active radicals even at low temperature. Moreover, no vacuum system is required, making cold plasma usable in many applications, for example plasma waterproofing of textiles [1]. In biomedical engineering, cold plasma is used to sterilize medical equipment, especially heat-sensitive devices [2, 3]: membranes of bacteria and yeast cells are broken by radicals both mechanically and chemically. In addition, in the last few years, plasma effects on mammalian cells have been studied [4, 5]. In this research, we investigate the difference in plasma effects on human liver normal and cancer
We use a jet-type plasma device consisting of a single pin electrode of 360 μm radius, as shown in Fig. 1. Helium gas (99.99% purity) flows at 2 lpm through a Pyrex tube of 3 mm diameter. When a 50 kHz AC voltage (950 ~ 1200 V) is applied to the pin electrode, the plasma is generated by the electric discharge. The sample is placed on the substrate about 15 mm below the device and treated by the plasma at a given applied voltage and exposure time.

C. Experimental procedure and imaging

The intensity of the generated plasma depends on the distance (d) between the pin electrode and the slide glass, the gas flow rate (r), the applied voltage (V) and frequency (f), and the liquid thickness on the sample (l). To prevent the cells from dehydrating, we add DPBS solution on the slide glass to a thickness of l = 0.15 mm. We keep the parameters fixed during the plasma experiments except for the applied voltage: f = 50 kHz, r = 2 lpm, d = 15 mm, l = 0.15 mm, and V = 950 ~ 1200 V. After loading a sample on the center of the substrate, we treat the cells with plasma for exposure times ranging from 30 to 120 sec.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 355–357, 2009 www.springerlink.com
Fig. 2 A microscopic image of plasma-treated SK-Hep1 cells at V = 950 V,

Fig. 1 Schematic drawing of the pin-type plasma jet device

The plasma-treated sample is stained with the dye (ethidium homodimer 5 μl + calcein 2.5 μl + DPBS 5 ml) for the live/dead assay. In the imaging process, live and dead cells are stained in green and red, respectively.

III. RESULTS

A. Characteristics of the plasma-treated cells
samples and N0 is the number of total samples. THLE-2 has a sample viability over 60% under all plasma conditions. In contrast, the sample viability of SK-Hep1 is much lower than that of THLE-2: it is below 50% above V = 950 V, and at the highest voltage of 1200 V the SK-Hep1 samples undergo necrosis completely. Therefore the necrosis-onset voltage of SK-Hep1 is lower than that of THLE-2.

C. Differences in electrical properties of both types of cells
At short exposure times and low applied voltages, both cell types, THLE-2 and SK-Hep1, stay alive and their morphologies are unchanged compared to the control cells. Above the onset plasma condition of necrosis, however, we observe a plasma-treated area. The typical plasma-treated area shows two different circular zones, a void zone and a dead zone, as shown in Fig. 2. In the dead zone necrotic cell death occurs, while there are no cells in the donut-like void zone; the cells there seem to be detached or lysed. We find that the higher the applied voltage, the larger the plasma-treated zone. Considering that the diameter of the plasma jet increases as the applied voltage increases, this result follows from the change of the plasma beam diameter. Also, the higher plasma intensity at large voltages has more influence on the cells.

B. Differences in plasma effects on both types of cells

After performing experiments on THLE-2 and SK-Hep1 cells under the same plasma conditions described in section II, but fixing t = 120 sec, we compare the sample viabilities of both cell types. The sample viability is defined as N/N0 × 100%, where N is the number of surviving
t = 120 sec and r = 2 lpm. Green and red dots are live and dead cells, respectively
We measure the total capacitance of the system, including the air, DPBS solution, cells and slide glass. We find that the capacitance of the THLE-2 system is higher than that of the SK-Hep1 system, suggesting that SK-Hep1 cells could be more electrically conductive.

IV. DISCUSSION

In the necrotic samples we observe the separation into void and dead zones. We propose that this feature results from the spatial characteristics of the pin-type plasma device. Generally, pin-type plasma has a Gaussian distribution of plasma density, which is maximal at the center [6]. Thus, at the center of the plasma-treated region, necrosis could occur due to the strong plasma intensity, while cell detachment could occur around the boundary region, resulting in a void. On the other hand, we see a sharp boundary line between non-plasma-treated and plasma-treated regions. This implies that plasma treatment of cells can be applied with high precision. This high-precision removal of cancer cells could be applied to direct cancer therapy
without excessive damage to the surrounding normal cells in biomedical applications. Even under the same plasma condition, the samples show a distribution of both survival and necrosis. This could result from fluctuations of the cell coverage: because cells are dielectric materials, a difference in cell coverage could give rise to a difference in the effective plasma condition. We maintain the same coverage of THLE-2 and SK-Hep1 cells; however, there could still be a fluctuation in the cell coverage. To estimate this effect, one can study the dependence of plasma effects on cell coverage. From the capacitance measurements, SK-Hep1 cells could be more electrically conductive than THLE-2 cells. The higher the conductivity of a sample, the stronger the intensity of the plasma; thus, even under the same plasma condition, the plasma generated in the SK-Hep1 system is stronger than that in the THLE-2 system. This could give rise to the difference in the necrosis-onset voltages between normal and cancer cells. Using this difference in plasma effects on THLE-2 and SK-Hep1 cells, we may apply this device to a novel cancer therapy which kills cancer cells selectively without damaging normal cells.

V. CONCLUSIONS

We perform a comparative study of plasma effects on human liver normal and cancer cells to assess the feasibility of cancer therapy using cold plasma. Under necrotic plasma conditions, a local area of cells can be treated by plasma with high precision. On the other hand, the total capacitance of
the normal cell system is larger than that of the cancer cell system. This difference in the electrical properties of the cells could result in the difference in necrosis-onset voltages between normal and cancer cells. Based on the results of this research, we may provide a novel method of cancer treatment to the biomedical field.
REFERENCES
1. Radetic M, Jocic D, Jovancic P, Trajkovic R, Petrovic Z Lj (2000) The Effect of Low-Temperature Plasma Pretreatment on Wool Printing. Textile Chem. Col. 32: 55-60
2. Kieft I E, Laan E P v d, Stoffels E (2004) Electrical and optical characterization of the plasma needle. New J. Phys. 6: 149
3. Moisan M, Barbeau J, Moreau S, Pelletier J, Tabrizian M, Yahia L H (2001) Low-temperature sterilization using gas plasma: A review of the experiments and an analysis of the inactivation mechanisms. Int. J. Pharm. 226: 1-21
4. Kieft I E, Darios D, Roks A J M, Stoffels E (2005) Plasma treatment of mammalian vascular cells: A quantitative description. IEEE Trans. Plasma Sci. 33: 771-775
5. Stoffels E, Kieft I E, Sladek R E J (2003) Superficial treatment of mammalian cells using plasma needle. J. Phys. D: Appl. Phys. 36: 2908-2913
6. Radu I, Bartnikas R, Wertheimer M R (2003) Dielectric barrier discharges in helium at atmospheric pressure: experiments and model in the needle-plane geometry. J. Phys. D: Appl. Phys. 36: 1284-1291

Author: Jennifer H. Shin
Institute: Korea Advanced Institute of Science and Technology
Street: Gusung-dong, Yusung-gu 373-1
City: Daejeon
Country: Republic of Korea
Email: [email protected]
Space State Approach to Study the Effect of Sodium over Cytosolic Calcium Profile

Shivendra Tewari and K.R. Pardasani
Department of Mathematics, M.A.N.I.T., Bhopal, INDIA

Abstract — Calcium is known to play an important role in signal transduction, synaptic plasticity, gene expression, muscle contraction, etc. A number of researchers have studied cytosolic calcium diffusion, but none have studied the effect of sodium over the cytosolic calcium profile. In this paper we develop a mathematical model which incorporates the important parameters such as the permeability coefficient, calcium flux, sodium flux, external sodium and external calcium. Thus we can study dynamically changing calcium with respect to dynamically changing sodium. Further, we use the Space State approach for the simulation of the proposed model, a technique developed in the later part of the twentieth century. Keywords — Cytosolic calcium, permeability coefficient, sodium flux, calcium flux.
I. INTRODUCTION

Intracellular calcium is known to regulate a number of processes [1]. One of the most important is signal transduction: calcium acts as a switch while an electrical signal is converted into a chemical signal, and it helps in the mechanism of exocytosis by combining with synaptotagmin to release neurotransmitters [2]. A number of parameters affect its mobility and concentration, such as channels, pumps and leaks. Further, Reuter and Seitz found that calcium extrusion in heart muscle is driven by the electrochemical sodium gradient across the plasma membrane. Blaustein also observed that the sodium gradient across the plasma membrane influences the intracellular calcium concentration in a large variety of cells via a counter-transport of Na+ for Ca2+. The dependence on the Na+ – Ca2+ electrochemical gradient has been studied by Sheu and Fozzard for sheep ventricular muscle and Purkinje strands. Thus there is enough evidence that Na+ is an important parameter to be considered when modeling the cytosolic Ca2+ concentration. Matsuoka et al. also found that Na+ – Ca2+ exchange is the major mechanism by which cytoplasmic Ca2+ is extruded from cardiac myocytes [3, 4, 5, 6]. The mathematical models proposed so far have not incorporated the effect of Na+ [1, 7, 8]. Thus in this model we have incorporated the effect of dynamically changing Na+ over dynamically changing Ca2+. In this mathematical model we have incorporated the Ca2+ influx, the Na+ influx and the Na+ / Ca2+ exchange
pump. Since we have used the Space State approach [9] to simulate the given model, we needed to linearise it. The results show the effect of the Na+ / Ca2+ exchange over intracellular Ca2+ and intracellular Na+.

II. THE MATHEMATICAL MODEL

The mathematical model consists of a Ca2+ flux, a Na+ flux and a Na+ / Ca2+ exchange pump. The influx of Ca2+ and Na+ currents is modeled using the famous Goldman-Hodgkin-Katz (GHK) current equation [10], while the third parameter, the Na+ / Ca2+ exchange pump, is modeled using the free energy principle [11]. We have assumed a cytosol of radius 5 μm and a membrane thickness of 7 nm. The proposed mathematical model can be framed as the following system of ordinary differential equations:

d[Ca2+]/dt = σ_Ca − σ_NCX
d[Na+]/dt = σ_Na + σ_NCX    (1)

along with the initial conditions

[Ca2+] = 0.1 μM,  [Na+] = 12 mM

A. Ca2+ and Na+ currents
The influx of Ca2+ and Na+ is modeled using the GHK current equation:

I_S = P_S z_S² (V_m F²/RT) · ([S]_i − [S]_o exp(−z_S V_m F/RT)) / (1 − exp(−z_S V_m F/RT))    (2)

Here 'S' is any ion, in this case Ca2+ or Na+. All the parameters have their usual meanings and the values stated in Table 1. The permeability constants of Ca2+ and Na+ are determined from the fact that the conductance, or permeability, equals D/L, where D is the diffusion coefficient and L is the thickness of the membrane [10]. The diffusion coefficients were taken from Stryer et al. [12] and the membrane thickness is taken to be 7 nm [10]. Further, the inward current was taken to be negative and is converted into molar/second using the Faraday constant and the fact that 1 L = 10^15 μm³ before being substituted into equation (1):

σ_Ca = −I_Ca / (z_Ca F V)

where all the parameters have their usual meanings and V is the volume of the cytosol. Similarly, we can calculate the net flux of Na+ ions from the Na+ channel.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 358–361, 2009 www.springerlink.com
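The GHK current of equation (2) is straightforward to evaluate numerically. A hedged sketch follows, using the constants from Table 1 (the unit mix of mol/L concentrations with SI constants mirrors the paper and is illustrative only):

```python
import numpy as np

F = 96487.0   # C/mol, Faraday constant (Table 1)
R = 8.314     # J/(K mol)
T = 293.0     # K
Vm = -70e-3   # V, membrane potential

def ghk_current(P, z, c_in, c_out):
    # Equation (2): GHK current for ion S with valence z and permeability P
    xi = z * Vm * F / (R * T)
    return P * z**2 * (Vm * F**2 / (R * T)) \
        * (c_in - c_out * np.exp(-xi)) / (1.0 - np.exp(-xi))

# Ca2+ at the paper's values (concentrations in mol/L)
I_Ca = ghk_current(P=3.3e-2, z=2, c_in=0.1e-6, c_out=2e-3)
print(I_Ca < 0)  # → True: inward Ca2+ current at -70 mV is negative
```

The negative sign of the inward current is exactly what the conversion σ_Ca = −I_Ca/(z_Ca F V) turns into a positive influx term in equation (1).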
B. Na+ / Ca2+ exchange

The Na+ / Ca2+ exchange pump is known as the most important mechanism of Ca2+ extrusion [6]. The exchange is modeled by equating the electrochemical gradients of the two ions:

Δ_Ca = RT log(Ca_i/Ca_o) + z_Ca F V_m    (3)

Similarly, we can frame the electrochemical gradient of Na+ (Δ_Na). The pump is assumed to be electrogenic in nature, as one Ca2+ leaves the cytosol for the intake of three Na+ ions. Thus,

Δ_Ca = 3 Δ_Na    (4)

Using equation (4) and solving, we obtain the required relation for the Na+ / Ca2+ exchange (σ_NCX):

Ca_i = Ca_o (Na_i/Na_o)³ exp(F V_m/RT),
Na_i = Na_o (Ca_i/Ca_o)^(1/3) exp(−F V_m/(3RT))    (5)

Before solving equation (1) by the Space State technique, the equations were linearised. For convenience we write u in lieu of Ca2+ and v in lieu of Na+. After a number of transformations, the equations take the linear form

du/dt = a u + a′ v + b,
dv/dt = c′ u + c v + d    (6)

where the coefficients collect the constant GHK and exchange factors, e.g.

a = (2 P_Ca F V_m/RT) exp(2F V_m/RT) / (1 − exp(2F V_m/RT)),
c = (P_Na F V_m/RT) exp(F V_m/RT) / (1 − exp(F V_m/RT)),

with a′, b, c′ and d built analogously from u_out, v_out and powers of exp(F V_m/RT). Using the further transformation

û = (u/u_out) exp(2F V_m/RT),  v̂ = (v/v_out) exp(F V_m/RT)    (7)

in equations (6), the system is reduced to

d/dt [û, v̂]ᵀ = A [û, v̂]ᵀ + C    (8)

where ε = F V_m/RT is a dimensionless quantity,

A = [ −2P_Ca e^(2ε)/(1 − e^(2ε))     (u_out/v_out) e^(2ε)
      (v_out/u_out) e^(−2ε)          −P_Na ε e^(ε)/(1 − e^(ε)) ]

and C = [u_out, v_out e^(−2ε)]ᵀ. If we use matrix notation and one further transformation, with X = [u, v]ᵀ and Y = A X + C, equation (8) is reduced to a form readily solvable by the Space State method:

dY/dt = A Y    (9)
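The homogeneous system dY/dt = AY is solved by the matrix exponential, Y(t) = expm(At) Y(0). The sketch below illustrates this with a stand-in 2×2 matrix whose eigenvalues are +1 and −1 (mimicking the paper's k₁ ≈ 1 and k₂ ≈ −1); it is not the paper's exact A:

```python
import numpy as np
from scipy.linalg import expm

# Stand-in matrix with eigenvalues +1 and -1 (assumption for illustration)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
Y0 = np.array([0.002, 74.361])   # initial condition, equation (10)

def Y(t):
    # Space State solution of dY/dt = A Y
    return expm(A * t) @ Y0

# For this A, expm(A t) = [[cosh t, sinh t], [sinh t, cosh t]], which
# reproduces the Cosh(kt)/Sinh(kt) structure of the paper's u(t), v(t)
t = 0.5
by_hand = np.array([0.002 * np.cosh(t) + 74.361 * np.sinh(t),
                    0.002 * np.sinh(t) + 74.361 * np.cosh(t)])
print(np.allclose(Y(t), by_hand))  # → True
```

This is why the closed-form solutions below are linear combinations of hyperbolic cosines and sines of kt.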
with initial conditions

Y(0) = [0.002, 74.361]ᵀ    (10)

Solving equations (9)–(10) with the help of the Space State technique and using the inverse transformations, we obtain u(t) and v(t) as closed-form linear combinations of Cosh(kt) and Sinh(kt):

u(t) = 256.41(0.002 − 0.00005(−37.18 + 37.31 Sinh(kt) + 74.36(Cosh(kt) + 0.012 Sinh(kt)))) − 0.012(−0.002(0.002 + 0.004 Sinh(kt) + 0.002(Cosh(kt) + 0.0002 Sinh(kt))))

v(t) = 16(0.145 + 0.0002(−37.18 + 37.31 Sinh(kt) + 74.36(Cosh(kt) + 0.0118 Sinh(kt)))) − 18589.7(−0.002 + 0.004 Sinh(kt) + 0.002(Cosh(kt) + 0.0002 Sinh(kt)))

Here k is the eigenvalue of matrix A, which has two values:

k₁ = 1.00579,  k₂ = −0.99253

Fig. 1 Plot of Ca²⁺ against time
In Fig. 1, the impact of the Na+ / Ca2+ exchange is shown on the temporal scale; Ca2+ is on the mM scale and time on the second scale. It is evident from the figure that when the Ca2+ concentration rises above a certain level it triggers the Na+ / Ca2+ exchange protein and initiates the extrusion of Ca2+ for the intake of Na+ ions. As soon as the Na+ / Ca2+ exchange is triggered, the rise of Ca2+ stops and it starts decaying.
That is, |k₁| ≈ |k₂| ≈ k ≈ 1.
III. RESULTS AND DISCUSSION

This section comprises the results and conclusions obtained from our methodology and hypothesis. The parameters used for the simulation are as stated in Table 1.

Table 1 Values of the parameters used

Parameter                        Symbol   Value
Faraday's constant               F        96487 Coulombs/mole
Membrane potential               Vm       -70 mV
Real gas constant                R        8.314 J per Kelvin mole
Temperature                      T        293 K
External calcium concentration   uout     2 mM
External sodium concentration    vout     145 mM
Ca2+ diffusion coefficient       DCa      250 μm²/second
Na+ diffusion coefficient        DNa      480 μm²/second
Membrane thickness               L        7 nm
Ca2+ permeability                PCa      3.3 x 10⁻² metre/second
Na+ permeability                 PNa      6.4 x 10⁻² metre/second

Fig. 2 Plot of Na⁺ against time
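The permeability-as-conductance rule P = D/L quoted in section II.A can be checked against Table 1; the computed values come out on the same order as the tabulated permeabilities (a quick sanity sketch, not the authors' computation):

```python
# Permeability as conductance = D / L (values from Table 1, SI units)
D_Ca = 250e-12   # m^2/s  (250 μm²/second)
D_Na = 480e-12   # m^2/s  (480 μm²/second)
L = 7e-9         # m      (7 nm membrane thickness)

P_Ca = D_Ca / L
P_Na = D_Na / L
print(round(P_Ca, 4), round(P_Na, 4))  # → 0.0357 0.0686
```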
In Fig. 2, the increasing Na+ is plotted against time; intracellular Na+ is in units of mM and time in seconds. Since there is no term in equation (1) to regulate the intracellular concentration, the Na+ concentration goes on increasing. That is not the case in reality, as the Na+ / K+ ATPase extrudes excess Na+ from the cytosol [13]. Further, the Na+ / Ca2+ exchange functions both ways: when Ca2+ is high it exchanges intracellular Ca2+ for extracellular Na+, and when Na+ is high it exchanges intracellular Na+ for extracellular Ca2+ [14]. The numerical results and graphs are obtained using Mathematica 6.0. Since the proposed model needed to be linear, we had to drop the non-linear terms, and hence the Na+ concentration does not decrease. On the other hand, this paper helps us observe the apparent effect of the Na+ / Ca2+ exchange over the intracellular Ca2+ concentration while keeping the model simple. Further, the use of the Space State technique simplifies the solution and gives an analytic solution. In a similar manner, we can incorporate more parameters to obtain a more realistic model which can be used either for the simulation of cytosolic diffusion or for the excitation-contraction coupling problem.
ACKNOWLEDGMENT The authors are highly grateful to Department of Biotechnology, New Delhi, India for providing support in the form of Bioinformatics Infrastructure Facility for carrying out this work.
13.
14.
15.
REFERENCES
1. Rüdiger, S., Shuai, J. W., Huisinga, W., Nagaiah, C., Warnecke, G., Parker, I. & Falcke, M. (2007) Hybrid Stochastic and Deterministic Simulations of Calcium Blips. Biophysical J., 93, 1847-1857
2. Brose, N., Petrenko, A. G., Sudhof, T. C. & Jahn, R. (1992) Synaptotagmin: a calcium sensor on the synaptic vesicle surface. Science, 256, 1021-1025
3. Reuter, H. & Seitz, N. (1968) The dependence of calcium efflux from cardiac muscle on temperature and external ion composition. J. Physiol., 195, 451-470
Blaustein, M. P. & Hodgkin, A. L., (1969) The effect of cyanide on the efflux of calcium from squid axons. J. Physiol., 200, 497-527 Sheu, S. S., & Fozzard, H. A. (1982) Transmembrane Na+ and Ca2+ Electrochemical Gradients in Cardiac Muscle and Their Relationship to Force Development, J. Physiol., 80, 325 – 351 Fujioka, Y., Hiroe, K. & Matsuoka, S. (2000) Regulation kinetics of Na+-Ca2+ exchange current in guinea-pig ventricular myocytes, J. Physiol., 529, 611-623 Smith, G.D. (1996) Analytical Steady-State Solution to the rapid buffering approximation near an open Ca2+ channel. Biophys. J., 71, 3064-3072 Smith, G.D., Dai, L., Miura, Robert M. & Sherman, A. (2000) Asymptotic Analysis of buffered Ca2+ diffusion near a point source. SIAM J. of Applied of Math, 61, 1816-1838 Ogatta K. (1967) State Space Analysis of Control Systems. PreniceHall, INC., Englewood Cliffs, N.J. Neher, E. (1986) Concentration profiles of intracellular Ca2+ in the presence of diffusible chelator. Exp. Brain Res. Ser., 14, 80-96 Nelson D.L., Cox M.M. (2001) Lehninger Principles of Biochemistry Keener J., Sneyd J. (1998) Mathematical Physiology Springer. New york Allbritton, N.L., Meyer, T., & Stryer, L. (1992) Range of messenger action of calcium ion and inositol 1,4,5-trisphosphate. Science, 258, 1812–1815 Clarke R. J., Kane D. J., Apell H.J., Roudna M., Bamberg E. (1998) Kinetics of Na+ -Dependent Conformational Changes of Rabbit Kidney Na+, K+ ATPase. Biophys. J., 75:1340-1353 Barry W.H., Bridge J.H. (1993) Intracellular calcium homeostasis in cardiac myocytes. J. of American Heart Association 87:1806-1815 Author: Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 23
Author: Dr. K.R. Pardasani
Institute: Department of Mathematics, Maulana Azad National Institute of Technology
City: Bhopal
Country: India
Email: [email protected]
Preliminary Study of Mapping Brain ATP and Brain pH Using Multivoxel 31P MR Spectroscopy
Ren-Hua Wu1,3, Wei-Wen Liu1, Yao-Wen Chen1, Hui Wang2,3, Zhi-Wei Shen1, Karel terBrugge3, David J. Mikulis3
1 Department of Medical Imaging, Shantou University Medical College, Shantou, China
2 School of Biomedical Science and Medical Engineering, Southeast University, Nanjing, China
3 Department of Medical Imaging, University of Toronto, Toronto, Canada
Abstract — Magnetic resonance (MR) spectroscopy is a valuable method for the noninvasive investigation of metabolic processes. Although brain ATP studies using multivoxel 31P MR spectroscopy can be found, previous studies of intracellular brain pH were conducted with single-voxel 31P MR spectroscopy. The purpose of this study was to explore the feasibility of mapping brain ATP and brain pH using multivoxel 31P MR spectroscopy. Phantom studies were first carried out on a GE 3T scanner. Several available sequences were tested on the phantom, and the 2D PRESSCSI sequence was selected because of its better signal-to-noise ratio. TR was 1000 ms and TE 144 ms, with 128 scan averages. The acquisition matrix was 16 x 16 phase encodings over a 24-cm FOV; slice thickness was 10 mm. A healthy volunteer from the MR research team was then studied. Data were processed offline using the SAGE/IDL software; baseline and phase corrections were performed. Multivoxel spectra and a brain ATP map were analyzed. Brain pH values were calculated from the difference in chemical shifts between the inorganic phosphate (Pi) and phosphocreatine (PCr) resonances. A color-scaled map was generated using MATLAB software. Multivoxel 31P spectra were obtained for the phantom and the healthy volunteer, and a PCr map was obtained for the phantom. At this stage, the PCr peaks were not homogeneous in the phantom studies, and the multivoxel 31P spectra in the volunteer study were noisy. The phosphomonoester (PME), Pi, phosphodiester (PDE), PCr, γ-ATP, α-ATP, and β-ATP peaks could be identified. Preliminary brain ATP and brain pH maps were generated for the volunteer. It is feasible to map brain ATP and brain pH using multivoxel 31P MR spectroscopy; however, the quality of multivoxel 31P MR spectroscopy still needs to be improved. Keywords — MR spectroscopy, brain pH mapping, brain ATP mapping
I. INTRODUCTION Magnetic resonance (MR) spectroscopy is a valuable method for the noninvasive investigation of metabolic processes. Many fundamental physiological, biochemical and metabolic events in the human body can be evaluated using MR spectroscopy. Single-voxel MR spectroscopy has been an important tool for investigating metabolites in the regions of
the brain, prostate, muscle, etc. However, the drawbacks of the single-voxel technique in terms of anatomical coverage and small structures are obvious compared with multivoxel MR spectroscopy. Multivoxel MR spectroscopy is a welcome advance over the earlier single-voxel method [1-5]. Brain energy metabolism can be assessed by using 31P MRS to measure changes in the intracellular pH and the relative concentrations of adenosine triphosphate (ATP), phosphocreatine (PCr), and inorganic phosphate (Pi) [6]. Intracellular pH values can be calculated from the difference in chemical shifts between the Pi and PCr resonances [7-11]. Mitochondrial activity can be assessed by measuring the ATP peaks. Brain pH studies will benefit the diagnosis and treatment of many diseases, such as brain tumor, brain infarction and neurodegenerative diseases. Although brain ATP studies using multivoxel 31P MR spectroscopy can be found, previous studies of intracellular brain pH were conducted with single-voxel 31P MR spectroscopy. To our knowledge, mapping brain pH using multivoxel 31P MR spectroscopy has not been published. The hypothesis of this study was that if multivoxel 31P MR spectroscopy can measure brain metabolites directly, it should be possible to generate a brain pH map indirectly. To aid visual observation, the brain pH map can be color scaled. The purpose of this study was therefore to explore the feasibility of mapping brain ATP and brain pH using multivoxel 31P MR spectroscopy. We report our preliminary results to encourage future work. II. MATERIALS AND METHODS Phantom studies were carried out first. The phantom was a sphere filled with physiological metabolites of the brain, including phosphocreatine (GE Braino). Then a healthy volunteer from the MR research team was studied. All procedures were approved by the research committee at the Toronto Western Hospital.
The studies were performed on a 3-T GE scanner (General Electric Medical Systems, Milwaukee, WI). The scout images were obtained with a gradient echo sequence. Many available sequences were tested using
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 362–365, 2009 www.springerlink.com
the phantom. The 2D PRESSCSI sequence gave the best signal-to-noise ratio for multivoxel 31P MR spectroscopy and was therefore selected. The 2D PRESSCSI sequence was used for both the 1H scans, with a standard head coil, and the 31P scans, with a GE service coil. TR was 1000 ms and TE 144 ms, with 128 scan averages. A single PRESS volume of interest was prescribed graphically. The acquisition matrix was 16 x 16 phase encodings over a 24-cm FOV; slice thickness was 10 mm. Before the 31P scan, a 1H MRS pre-scan was performed with first-order automatic shimming to obtain shimming values. The shimming values of the 1H MRS scan in the x, y and z directions were copied to the 31P MRS scans. The standard head coil was unplugged while the 31P scan was performed. Data were processed offline using the SAGE/IDL software; baseline and phase corrections were performed. The multivoxel spectra, the phantom PCr map and the brain ATP map were generated with SAGE/IDL. Because the spectra in the volunteer were noisy, brain pH values were calculated approximately from the difference in chemical shifts between the Pi and PCr resonances using the following standard formula [7-10]: pH = 6.77 + log{(A - 3.29)/(5.68 - A)}
(1)
where A is the chemical-shift difference in parts per million between Pi and PCr. A color-scaled map was generated using MATLAB software.

III. RESULTS

Multivoxel 31P spectra were obtained for the phantom and the healthy volunteer. However, the PCr peaks were heterogeneous in the phantom study (Figure 1). From the data of this scan, a corresponding PCr map could be generated with the SAGE/IDL software (Figure 2).

Fig. 1. Multivoxel 31P MR spectroscopy of the phantom obtained with the 2D PRESSCSI sequence. Heterogeneous PCr peaks were observed.

Fig. 2. PCr map generated from the data of Figure 1.

At this stage, the multivoxel 31P spectra in the volunteer study were noisy. A cumulated spectrum from the multivoxel 31P MR spectroscopy is shown in Figure 3. The phosphomonoester (PME), inorganic phosphate (Pi), phosphodiester (PDE), phosphocreatine (PCr), γ-ATP, α-ATP, and β-ATP peaks could roughly be identified. The individual spectra were of similar quality. From the data of the same scan, corresponding metabolite maps could be generated with the SAGE/IDL software; Figure 4 shows a brain ATP map as an example.
Fig. 3. Cumulated spectrum of multivoxel 31P MR spectroscopy in vivo. Although the noise level was high, brain metabolites could be identified.
Fig. 6. Color scaling map based on the values of Figure 5.
IV. DISCUSSION
Fig. 4. ATP map generated by the SAGE/IDL software from the same data as Figure 3.
Figure 5 shows the results of the rough brain pH calculation based on the multivoxel 31P data of the volunteer scan. The lowest value was 7.01 and the highest 7.24. From the values in Figure 5, a color-scaled map was generated using MATLAB software (Figure 6). Red represents higher pH values and blue lower pH values. Although preliminary, a brain pH map was indeed obtained.
Fig. 5. Brain pH values calculated using formula (1).
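The pH values in Figure 5 follow from formula (1). A minimal sketch of the computation (the 4.8 ppm shift difference used here is an arbitrary example, not a measured value):

```python
import math

def brain_ph(delta_ppm):
    """pH from the Pi-PCr chemical-shift difference in ppm, formula (1)."""
    if not 3.29 < delta_ppm < 5.68:
        raise ValueError("shift difference outside the valid range of formula (1)")
    return 6.77 + math.log10((delta_ppm - 3.29) / (5.68 - delta_ppm))

# Example: a shift difference of 4.8 ppm gives a pH of about 7.0
print(round(brain_ph(4.8), 2))  # prints 7.0
```

A color map like the one in Figure 6 is then only a matter of normalizing such per-voxel pH values onto a red-to-blue color scale.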
What multivoxel 31P MR spectroscopy measures at this time is limited in quality, but it does provide a window for noninvasively measuring small structures of brain tissue. The advantage of multivoxel 31P MR spectroscopy over the single-voxel technique is its ability to provide metabolite information with spatial distribution and to obtain continuous spectra from tissue in real time. By generating color maps from multivoxel 31P MR spectroscopy, evaluation of physiological or pathological information becomes much easier. Mapping brain metabolites has received increasing interest and could be extended to other organs and tissues of the human body [12-14]. Although our results of generating brain ATP and brain pH maps are preliminary, improvement can be expected as the technique matures. At this moment, MR hardware and software are not perfect, so our brain ATP and brain pH maps may be at their earliest stage; nevertheless, we did obtain preliminary maps based on previous outcomes. Future work should aim at better hardware and software to improve the homogeneity of the magnetic field and the shimming for multivoxel 31P MR spectroscopy. More sensitive detection techniques and better post-processing should also be considered. As long as the source signals of multivoxel 31P MR spectroscopy are accurate, the corresponding maps are simply a visual enhancement. We are confident that brain ATP and brain pH mapping will be feasible in the near future. Accurate metabolite information for small regions of organ tissue will then be available for studies of gene expression, identification of progenitor cells, early detection of various diseases, differential diagnosis, etc. [15-17].
V. CONCLUSIONS This study explored the feasibility of mapping brain ATP and brain pH using multivoxel 31P MR spectroscopy. We report our preliminary results to encourage future work.
ACKNOWLEDGMENT This work was supported in part by the National Natural Science Foundation of China (30570480) and the Guangdong Natural Science Foundation (8151503102000032). The work was mainly done at the University of Toronto, Canada. We thank the MR staff at the Toronto Western Hospital, University of Toronto, Canada.
REFERENCES
[1] Wu RH, Mikulis DJ, Ducreux D, et al. (2003) Evaluation of chemical shift imaging using a two-dimensional PRESSCSI sequence: work in progress. 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Proceedings, Cancun, Mexico, pp. 505-508
[2] Keshavan MS, Stanley JA, Montrose DM, et al. (2003) Prefrontal membrane phospholipid metabolism of child and adolescent offspring at risk for schizophrenia or schizoaffective disorder: an in vivo 31P MRS study. Mol Psychiatry 8:316-323
[3] Stanley JA, Kipp H, Greisenegger E, et al. (2006) Regionally specific alterations in membrane phospholipids in children with ADHD: an in vivo 31P spectroscopy study. Psychiatry Res 148:217-221
[4] Kemp GJ, Meyerspeer M, Moser E (2007) Absolute quantification of phosphorus metabolite concentrations in human muscle in vivo by 31P MRS: a quantitative review. NMR Biomed 20:555-565
[5] Mairiang E, Hanpanich P, Sriboonlue P (2004) In vivo 31P-MRS assessment of muscle pH, cytosolic [Mg2+] and phosphorylation potential after supplementing hypokaliuric renal stone patients with potassium and magnesium salts. Magn Reson Imaging 22:715-719
[6] Wu RH, Poublanc J, Mandell D, et al. (2007) Evidence of brain mitochondrial activities after oxygen inhalation by 31P magnetic resonance spectroscopy at 3T. Conf Proc IEEE Eng Med Biol Soc, pp. 2899-2902
[7] Patel N, Forton DM, Coutts GA, et al. (2000) Intracellular pH measurements of the whole head and the basal ganglia in chronic liver disease: a phosphorus-31 MR spectroscopy study. Metab Brain Dis 15:223-240
[8] Petroff OAC, Prichard JW (1983) Cerebral pH by NMR. Lancet ii(8341):105-106
[9] Taylor-Robinson SD, Marcus CD (1996) Tissue behaviour measurements using phosphorus-31 NMR. In: Grant DM, Harris RK (eds) Encyclopedia of Nuclear Magnetic Resonance. Wiley, Chichester, UK, pp. 4765-4771
[10] Hamilton G, Mathur R, Allsop JM, et al. (2003) Changes in brain intracellular pH and membrane phospholipids on oxygen therapy in hypoxic patients with chronic obstructive pulmonary disease. Metab Brain Dis 18:95-109
[11] Brindle KM, Rajagopalan B, Williams DS, et al. (1988) 31P NMR measurements of myocardial pH in vivo. Biochem Biophys Res Commun 151:70-77
[12] Mannix ET, Boska MD, Galassetti P, et al. (1995) Modulation of ATP production by oxygen in obstructive lung disease as assessed by 31P-MRS. J Appl Physiol 78:2218-2227
[13] Kutsuzawa T, Shioya S, Kurita D, et al. (2001) Effects of age on muscle energy metabolism and oxygenation in the forearm muscles. Med Sci Sports Exerc 33:901-906
[14] Haseler LJ, Lin AP, Richardson RS (2004) Skeletal muscle oxidative metabolism in sedentary humans: 31P-MRS assessment of O2 supply and demand limitations. J Appl Physiol 97:1077-1081
[15] Brindle K (2008) New approaches for imaging tumour responses to treatment. Nat Rev Cancer 8:94-107
[16] Cunningham CH, Chen AP, Albers MJ, et al. (2007) Double spin-echo sequence for rapid spectroscopic imaging of hyperpolarized 13C. J Magn Reson 187:357-362
[17] Drummond A, Macdonald J, Dumas J, et al. (2004) Development of a system for simultaneous 31P NMR and optical transmembrane potential measurement in rabbit hearts. Conf Proc IEEE Eng Med Biol Soc 3:2102-2104

Corresponding authors:

Author: Mikulis DJ
Institute: University of Toronto
Street: 399 Bathurst Street
City: Toronto
Country: Canada
Email: [email protected]

Author: Wu RH
Institute: Shantou University
Street: 22 Xinling Road
City: Shantou
Country: China
Email: [email protected]
Brain-Computer Interfaces for Virtual Environment Control
G. Edlinger1, G. Krausz1, C. Groenegress2, C. Holzner1, C. Guger1, M. Slater2
1 g.tec medical engineering, Guger Technologies OEG, Herbersteinstrasse 60, 8020 Graz, Austria, [email protected]
2 Centre de Realitat Virtual (CRV), Universitat Politècnica de Catalunya, Barcelona, Spain
Abstract — A brain-computer interface (BCI) is a new communication channel between the human brain and a digital computer. A BCI enables a subject to communicate without using any muscle activity. The ultimate goal of a BCI is the restoration of movement, communication and environmental control for handicapped people. More recently, BCI control in combination with Virtual Environments (VE) has gained increasing research interest. In this study we present experiments that combine BCI systems with VE control for navigation and control purposes by thought alone. A comparison of the applicability and reliability of different BCI types based on event-related potentials (the P300 approach) is presented. BCI experiments for navigation in VR have so far been conducted with (i) synchronous and (ii) asynchronous BCI systems. A synchronous BCI analyzes the EEG patterns in a predefined time window and has 2-3 degrees of freedom. An asynchronous BCI analyzes the EEG signal continuously and generates a control signal whenever a specific event is detected. This study focuses on a BCI system for Virtual Reality (VR) control with a high degree of freedom and a high information transfer rate. Therefore a P300-based human-computer interface was developed in a VR implementation of a smart home for controlling the environment (television, music, telephone calls) and for navigation in the house. Results show that the new P300-based BCI system allows very reliable control of the VR system. Of special importance is the ability to select a specific command very rapidly out of many different choices; this eliminates the decision trees previously used with BCI systems. Keywords — Brain-Computer Interface, P300, evoked potential, Virtual Environment
I. INTRODUCTION An EEG-based Brain-Computer Interface (BCI) measures and analyzes the electrical brain activity (EEG) in order to control external devices. A BCI can be seen as a novel, additional communication channel for humans. In contrast to other communication channels, a BCI does not require any muscle activity from the subject. BCIs are based on slow cortical potentials [1], EEG oscillations in the alpha and beta bands [2, 3, 4], the P300 response [5] or steady-state visual evoked potentials (SSVEP) [6]. BCI systems are used
mainly for moving a cursor on a computer screen, controlling external devices, or for spelling [2, 3, 5]. BCI systems based on slow cortical potentials or oscillatory EEG components have so far been realized with 1-5 degrees of freedom. However, high information transfer rates were reached only with 2 degrees of freedom, as otherwise the accuracy of the BCI systems dropped. SSVEP-based systems allow selecting up to 12 different targets and are limited by the number of distinct frequency responses that can be analyzed in the EEG. P300-based BCIs typically use a matrix of 36 characters for spelling applications [5]. The underlying phenomenon of a P300 speller is the P300 component of the EEG, which is elicited when an unlikely event occurs. The subject's task is to concentrate on the specific letter he or she wants to select [5, 7, 8]. When that character flashes, a P300 is induced, and the EEG amplitude typically reaches its maximum 300 ms after flash onset. Several repetitions are needed for EEG data averaging, which increases the signal-to-noise ratio and the accuracy of the system. The P300 response is more pronounced in the single-character speller than in the row/column speller and is therefore easier to detect [7, 8]. II. METHODS Three subjects participated in the experiments. EEG was measured from 8 electrode positions located over the parietal and occipital areas. The sampling frequency was 256 Hz, and the EEG was digitally bandpass filtered between 0.1 and 30 Hz. A hardware-interrupt-driven device driver was used to read the biosignal data with a buffer size of 1 (time interval ~4 ms) into Simulink, which runs under the MATLAB environment [4]. Within Simulink, the signal processing, feature extraction (see [5] for details) and paradigm presentation are performed. The paradigm module controls the flashing sequence of the symbols. In this work, a smart home VR realization was to be controlled with the BCI.
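The epoch-averaging step mentioned above can be illustrated with synthetic data. Only the 256 Hz sampling rate, the roughly 0-700 ms epoch window and the 15 repetitions come from the text; the idealized P300 shape and the noise amplitude are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256                        # sampling rate used in the study (Hz)
t = np.arange(0, 0.7, 1 / fs)   # one epoch: 0-700 ms after flash onset

# Idealized P300: a small Gaussian bump peaking 300 ms after the flash (assumed shape)
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# 15 flash repetitions, each buried in noise much larger than the response
epochs = p300 + 20e-6 * rng.standard_normal((15, t.size))
average = epochs.mean(axis=0)

# Averaging N epochs reduces uncorrelated noise by roughly sqrt(N)
ratio = np.std(epochs[0] - p300) / np.std(average - p300)
print(ratio)  # close to sqrt(15) ~ 3.9
```

This is why a response invisible in a single trial becomes detectable after 15 repetitions, and why fewer required flashes directly translate into a faster speller.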
The subjects were first trained to spell characters and numbers based on their P300 EEG response. For this, the characters of the English alphabet (A, B, …, Z) and the Arabic numerals (1, 2, …, 9) were arranged in
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 366–369, 2009 www.springerlink.com
a 6 x 6 matrix on a computer screen. The characters were then highlighted in random order, and the subject had the task of concentrating on the specific character he/she wanted to spell. All experiments were run in 2 modes: (i) the row/column speller, in which all items of one row or column are highlighted at the same time, and (ii) the single-character speller, in which only one character is highlighted. For the single-character speller each character was highlighted 15 times; for the row/column speller each row and each column was also highlighted 15 times. This results in a speed-up of 3 for the row/column speller. Other important parameters in the P300 experiment are the flash time (character highlighted) and the dark time (time between 2 highlights). Both should be as short as possible to reach a high communication speed, but long enough that the subject can react to the flash and that the individual P300 responses do not overlap. At the beginning of the experiment the BCI system was trained on the P300 responses of 42 characters from each subject, with 15 flashes per character (about 40 minutes of training time). All 3 subjects needed between 3 and 10 flashes (mean 5.2) per character to reach an accuracy of 95 % with the single-character speller, and between 4 and 11 flashes (mean 5.4) with the row/column speller. This resulted in a maximum information transfer rate of 84 bits/s for the single-character speller and 65 bits/s for the row/column speller. Figure 1 shows a typical P300 response to the target letters. The P300-based BCI system was then connected to a Virtual Reality (VR) system. A virtual 3D representation of a
Figure 2: Virtual representation of a smart home
smart home with different control elements was developed, as shown in Figure 2. In the experiment the subject should be able to switch the light on and off, open and close the doors and windows, control the TV set, use the phone, play music, operate a video camera at the entrance, walk around in the house, and move to a specific location in the smart home. Special control masks containing all the necessary commands were therefore developed for the BCI system. In total 7 control masks were created: a light mask, a music mask, a phone mask, a temperature mask, a TV mask (see Figure 3), a move mask and a go-to mask (see Figure 4).
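The factor-of-3 speed-up quoted above for the row/column speller follows directly from counting flashes; a sketch of the arithmetic with the values from the experiment:

```python
# 6 x 6 matrix of characters, 15 highlights each (values from the experiment)
rows, cols, repetitions = 6, 6, 15

single_char_flashes = rows * cols * repetitions   # every character flashes on its own
row_col_flashes = (rows + cols) * repetitions     # only rows and columns flash

print(single_char_flashes, row_col_flashes, single_char_flashes / row_col_flashes)
# 540 180 3.0
```

The row/column mode buys speed at the price of a weaker P300 per flash, which is the trade-off discussed in the text.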
Figure 1: Typical averaged P300 responses for a single character flasher. The graphs represent the P300 responses from electrode positions Fz, Cz, P3, Pz, P4, Fp1, Oz and Fp2 (from upper left to lower right). The red vertical bar represents the occurrence of the target letter. Amplitudes on the y-axis are given in [μV] and the x-axis represents the time from 100 ms before target occurrence to 700ms after target occurrence.
Figure 3: Control mask with the main menu in the first 2 rows, the icons for the camera, door control and questions in the 3rd and 4th row and the TV control in the last 2 rows.
Figure 4: Control mask for going to a specific position in the smart home (bird's-eye view).

III. RESULTS

Table 1 shows the results of the 3 subjects for the 3 parts of the experiment and for the 7 control masks. Interestingly, the light, phone and temperature masks were controlled with 100 % accuracy. The go-to mask was controlled with 94.4 % accuracy. The worst result was obtained for the TV mask, with only 83.3 % accuracy. Table 2 shows the number of symbols for each mask and the resulting probability that a specific symbol flashes up. If more symbols are displayed on one mask, the probability of occurrence per symbol is smaller; this results in a larger P300 response, which should be easier to detect. The flashes column shows the total number of flashes per mask until a decision is made. The conversion time per character is longer if more symbols are on the mask.

Table 1. Accuracy of the BCI system for each part and control mask of the experiment for all subjects.
Mask: Light, Music, Phone, Temperature, TV, Move, Go to
Part 1: 100%, 100%, 83.3%, 88.87%, 100%
Part 2: 100%, 89.63%, 100%, -
Part 3: 100%, 93.3%, 88.87%
Total: 100%, 89.63%, 100%, 100%, 83.3%, 91.1%, 94.43%

Table 2. Number of symbols, occurrence probability per symbol, number of flashes per mask (e.g. 25 x 15 = 375) and conversion time per character for each mask.

IV. DISCUSSION & CONCLUSION
The P300-based BCI system was successfully used to control a smart home environment with accuracies between 83 and 100 %, depending on the mask type. The differences in accuracy can be explained by the arrangement of the icons. The experiment yielded 2 important new facts: (i) different icons can be displayed to the subject instead of characters and numbers, and (ii) the BCI system need not be trained on each individual icon. The BCI system was trained with EEG data from the spelling experiment, and this subject-specific information was reused for the smart home control. This allows icons to be used for many different tasks without prior time-consuming and tedious training of the subject on each individual icon, in contrast to other BCI implementations where hours or even weeks of training are needed [1, 2, 3]. This reduction in training time may be important for locked-in and ALS patients, who have problems concentrating over longer periods. The P300 concept also works better when more items are presented in the control mask, as the P300 response is more pronounced when the likelihood of the target character being highlighted drops [4]. This of course lowers the information transfer rate, but makes it possible to control almost any device with such a BCI system; applications that require reliable decisions benefit especially. The P300-based BCI system therefore offers an optimal way to control the smart home. The virtual smart home serves in such experiments as a test installation for real smart homes. Wheelchair control, which many authors identify as their target application, can also be realized with this type of BCI system in a goal-oriented way. With a goal-oriented BCI approach it is not necessary, for example, to move a robotic hand by thinking about hand or foot movements and issuing right, left, up and down commands.
Humans just think "I want to grasp the glass", and the real command is initiated by this type of BCI implementation. A P300-based BCI system is optimally suited to control smart home applications with high accuracy and high reliability. Such a system can serve as an easily reconfigurable, and therefore inexpensive, test environment for real smart homes for handicapped people.
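The per-symbol occurrence probability and total flash count discussed with Table 2 can be reproduced from the mask size alone; the 25-symbol example from the table caption serves as a check:

```python
def mask_stats(n_symbols, repetitions=15):
    """Occurrence probability per symbol and total flashes for one control mask."""
    return 1.0 / n_symbols, n_symbols * repetitions

# Example from the Table 2 caption: a mask with 25 symbols
p, flashes = mask_stats(25)
print(p, flashes)  # 0.04 375
```

Larger masks lower the per-symbol probability (strengthening the P300) while raising the flash count per decision, which is exactly the accuracy-versus-speed trade-off described above.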
ACKNOWLEDGEMENT The work was funded by the EU project PRESENCCIA.
REFERENCES
1. N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor (1999) A spelling device for the paralysed, Nature, vol. 398, pp. 297-298
2. C. Guger, A. Schlögl, C. Neuper, D. Walterspacher, T. Strein, and G. Pfurtscheller (2001) Rapid prototyping of an EEG-based brain-computer interface (BCI), IEEE Trans. Rehab. Engng., vol. 9(1), pp. 49-58
3. T.M. Vaughan, J.R. Wolpaw, and E. Donchin (1996) EEG-based communication: prospects and problems, IEEE Trans. Rehab. Engng., vol. 4, pp. 425-430
4. G. Edlinger and C. Guger (2006) Laboratory PC and Mobile Pocket PC Brain-Computer Interface Architectures, 27th Annual International Conference of the IEEE-EMBS 2005, pp. 5347-5350
5. D. Krusienski, E. Sellers, F. Cabestaing, S. Bayoudh, D. McFarland, T. Vaughan, and J. Wolpaw (2006) A comparison of classification techniques for the P300 Speller, Journal of Neural Engineering, vol. 6, pp. 299-305
6. G.R. McMillan, G.L. Calhoun, et al. (1995) Direct brain interface utilizing self-regulation of steady-state visual evoked response, Proceedings of RESNA, June 9-14, pp. 693-695
7. G. Cuntai, M. Thulasidas, and W. Jiankang (2004) High performance P300 speller for brain-computer interface, IEEE International Workshop on Biomedical Circuits and Systems, S3/5/INV - S3/13-16
8. M. Thulasidas, G. Cuntai, and W. Jiankang (2006) Robust classification of EEG signal for brain-computer interface, IEEE Trans Neural Syst Rehabil Eng, 14(1):24-29
9. Zhang H, Guan C, Wang C (2006) A statistical model of brain signals with application to brain-computer interface, Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, pp. 5388-5391
FPGA Implementation of Fuzzy (PD&PID) Controller for Insulin Pumps in Diabetes
V.K. Sudhaman1, R. HariKumar2
1 U.G. Student, ECE, Bannari Amman Institute of Technology, Sathyamangalam, India
2 Professor, ECE, Bannari Amman Institute of Technology, Sathyamangalam, India
Abstract — This paper presents an FPGA implementation of a fuzzy PD & PID controller for a biomedical application. A novel approach to identifying and designing a simple, robust fuzzy PD & PID controller with a minimum number of fuzzy rules for diabetic patients, operating as a single-injection process driven by the blood glucose level obtained from a photoplethysmogram, is discussed. The VLSI system is simulated and analyzed, and the FPGA system is then synthesized and implemented using VHDL. A simulation of the VLSI design of this automatic controller is analyzed. In this process insulin is administered through an infusion pump as a single injection. The pump is driven by the automatic fuzzy PD controller, which is more efficient than the conventional PD controller. Keywords — blood glucose level, photoplethysmogram, FPGA, fuzzy PD & PID controller
I. INTRODUCTION

The aim of this paper is to identify a proper methodology for the infusion of insulin to diabetic patients using an automated fuzzy logic controller. The FPGA implementation of this automatic controller is analyzed using VHDL. In this process insulin is administered through an infusion pump as a single injection. The pump is driven by the automatic fuzzy PD & PID controller, which is more efficient than the conventional PD & PID controller. For nonlinear inputs, the fuzzy PD & PID controller performs better than the conventional controller. The tasks involved in obtaining the FPGA implementation of the fuzzy PD & PID controller are as follows:
1. Measurement of the blood glucose level using the photoplethysmography method explained in refs. [6], [8].
2. Design of a fuzzy PD controller (2x1) with the error and error rate as inputs and an output signal that controls the movement of the infusion pump.
3. Performance study of the conventional PD & PID controller against the fuzzy PD & PID controller.
4. VLSI design and simulation of the fuzzy PD & PID controller.
5. FPGA implementation of the fuzzy PD & PID controller.
The FPGA architecture is designed to be very simple and to occupy little memory, so that the system can also be adapted to low-end FPGAs.

II. MATERIALS AND METHODOLOGY
A logical system that is much closer to the spirit of human thinking and natural language than the traditional logic system is called fuzzy logic. Here the fuzzy controller is considered a new rapprochement between conventional mathematical control and human-like decision making. The fuzzy controller has to be provided with a predefined range of control parameters, or sense fields. These sets are normally determined by initially providing the system with data fed manually by the operator. The fuzzy system can be further enhanced by adaptive control, in which the time constant and gain are varied to self-tune the controller at various operating points.
Fig.1 Fuzzy PD controller system
Figure 1 depicts the fuzzy PD controller. Y(nT) is the output of the photo-glucometer (photoplethysmography), which is compared with the set point (sp); the error and the rate of error are calculated and given as inputs to the fuzzy inference system, which consists of a fuzzifier, a rule base and a defuzzifier, and which produces the control input u(nT). The input u(nT) of the plant acts as the position control of the insulin pump. The photo-glucometer is an instrument that uses IR radiation (850 nm) from a source, a sensor, an amplifier and an output display unit to give the blood sugar of a diabetic patient. When the IR radiation is incident on the skin, part of it is transmitted, reflected and absorbed by the skin. The transmitted ray is sensed by the sensor, and the sensor output is amplified and displayed.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 370–373, 2009 www.springerlink.com
FPGA Implementation of Fuzzy (PD&PID) Controller for Insulin Pumps in Diabetes
III. ANALYSIS OF THE FUZZY PD CONTROLLER

The fuzzy logic control technique has found many successful industrial applications and has demonstrated significant performance improvements. In the standard procedure the design consists of three main parts: fuzzification, the fuzzy logic rule base, and defuzzification [1].

A. Mathematical analysis

Ackerman et al. (1965) used a two-compartment model to represent the dynamics of the glucose and insulin concentrations in the blood system. The blood glucose dynamics of an individual can be written as
dx1/dt = -m1*x1 - m2*x2 + p(t)    (1)

dx2/dt = -m3*x2 + m4*x1 + u(t)    (2)

where x1 represents the blood glucose level deviation, x2 denotes the net blood glycemic hormonal level, p(t) represents the input rate of infusion of glucose, u(t) denotes the input rate of infusion of insulin, and m1, m2, m3 and m4 are parameters. The photo-glucometer output levels are shown in Table 1.
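As a reading aid, the two-compartment dynamics of Eqs. (1)-(2) can be integrated numerically. The sketch below uses simple Euler stepping; all parameter values passed in are illustrative, not patient-derived.

```python
import numpy as np

def simulate_glucose(m1, m2, m3, m4, p, u, x0=(0.0, 0.0), dt=0.01, t_end=30.0):
    """Euler integration of the Ackerman two-compartment model,
    Eqs. (1)-(2): dx1/dt = -m1*x1 - m2*x2 + p(t), dx2/dt = -m3*x2 + m4*x1 + u(t).
    x1: blood glucose deviation, x2: net hormonal (insulin) level."""
    n = int(t_end / dt)
    x1, x2 = x0
    traj = np.empty((n, 2))
    for k in range(n):
        t = k * dt
        dx1 = -m1 * x1 - m2 * x2 + p(t)   # Eq. (1)
        dx2 = -m3 * x2 + m4 * x1 + u(t)   # Eq. (2)
        x1 += dt * dx1
        x2 += dt * dx2
        traj[k] = (x1, x2)
    return traj
```

With p = u = 0 the deviations decay back toward the basal level, which is the qualitative behaviour the controller relies on.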
e(t) = sp(t) - y(t)    (4)

where sp(t) is the set point and y(t) is the system output. In the fuzzification step we employ two inputs, the error signal e(nT) and the rate of change of the error signal r(nT), with only one control output u(nT). Both the error and the rate have two membership values, positive and negative, while the output has three: positive, negative and zero. Based on the membership functions, the fuzzy control rules used are the following:

Fr1: IF error = ep AND rate = rp THEN output = oz
Fr2: IF error = ep AND rate = rn THEN output = op
Fr3: IF error = en AND rate = rp THEN output = on
Fr4: IF error = en AND rate = rn THEN output = oz

Here the output is the fuzzy control action Δu(nT); ep means error positive, oz means output zero, and so on. In the defuzzification step the center-of-mass formula is employed:
Δu(nT) = (μrp·oz + μrn·op + μen·on + μen·oz) / (μrp + μrn + μen + μen)    (13)
We use op = 2, on = -3 and oz = 0.
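A minimal sketch of one control update under rules Fr1-Fr4: ramp memberships stand in for the shapes of Fig. 2, the standard min operator is used for AND, and a centre-of-mass defuzzification is taken over the four rule strengths. The membership width L and the use of min are assumptions, not details taken from the paper.

```python
def mu_pos(v, L=1.0):
    # ramp membership: 0 at -L, 1 at +L (assumed stand-in for Fig. 2)
    return min(1.0, max(0.0, (v + L) / (2.0 * L)))

def mu_neg(v, L=1.0):
    return 1.0 - mu_pos(v, L)

def fuzzy_pd_step(error, rate, op=2.0, on=-3.0, oz=0.0):
    """One fuzzy PD update: rules Fr1-Fr4 with min as AND, then a
    centre-of-mass defuzzification over the rule strengths."""
    ep, en = mu_pos(error), mu_neg(error)
    rp, rn = mu_pos(rate), mu_neg(rate)
    # Fr1: ep&rp->oz, Fr2: ep&rn->op, Fr3: en&rp->on, Fr4: en&rn->oz
    w = [min(ep, rp), min(ep, rn), min(en, rp), min(en, rn)]
    outs = [oz, op, on, oz]
    num = sum(wi * oi for wi, oi in zip(w, outs))
    den = sum(w)
    return num / den if den else 0.0
```

For a large positive error with a falling rate the output saturates at op, pushing the pump forward, and symmetrically at on for the opposite case.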
Table 1 Photo Glucometer Output Levels

Sl. No.   Blood Glucose Level (mg/dl)   Photo Glucometer Output (V)   Control level
1         50                            8                             Lower point
2         100                           9                             Set point
3         200                           10                            Upper point
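The three calibration points of Table 1 happen to fit a "doubling per volt" curve exactly. The helper below is a hypothetical mapping built on that observation, not a calibration given in the paper; piecewise-linear interpolation between the table rows would be an equally defensible reading.

```python
def glucose_from_voltage(v):
    """Map photo-glucometer output voltage to blood glucose (mg/dl).
    The three calibration points of Table 1 -- (8 V, 50), (9 V, 100),
    (10 V, 200) -- are fitted exactly by a doubling per volt; this
    closed form is an assumption, not stated in the paper."""
    return 50.0 * 2.0 ** (v - 8.0)
```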
For severe diabetes we take m4 to be zero. We consider a group of diabetics with no insulin secretion whose fasting blood glucose level is 200 mg/dl. Based on this observation we derived the insulin injection needed for the group in a single-injection process over a duration of 30 min: this case needs 2950 micro-units/ml of insulin in a single stroke of injection.
Fig.2 The membership function of e (nT), r (nT) and u (nT)
B. Design of the Fuzzy PD controller

The conventional continuous-time PD control law is described by

u(t) = kpc·e(t) + kdc·ė(t)    (3)

where kpc and kdc are the proportional and derivative gains of the controller, and e(t) is the error signal defined by Eq. (4).

C. Design of the Fuzzy PID controller

The fuzzy PID controller is designed similarly to the PD controller, and it has seven fuzzy control rules.
Fig.3 Step responses of optimal PID and optimal fuzzy PID controller
The simulation results of the design show that the fuzzy PID controller performs better than the conventional PID controller.
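To make such a comparison concrete, here is a minimal discrete-time closed loop: a conventional PID law driving the second-order pump model H(s) = 1/(s^2 + s + 1) quoted in Sec. VI, Euler-discretised. The gains kp, kd, ki are illustrative choices, not values from the paper.

```python
def step_response(controller, dt=0.001, t_end=20.0, sp=1.0):
    """Closed-loop unit-step response of the second-order pump model
    H(s) = 1/(s^2 + s + 1) under Euler discretisation.
    controller(e, de, ie) returns the control input u."""
    y = dy = 0.0
    e_prev = sp            # avoids a derivative kick on the first step
    integ = 0.0
    out = []
    for _ in range(int(t_end / dt)):
        e = sp - y
        integ += e * dt
        u = controller(e, (e - e_prev) / dt, integ)
        e_prev = e
        ddy = u - dy - y   # plant: y'' = u - y' - y
        dy += ddy * dt
        y += dy * dt
        out.append(y)
    return out

def pid(e, de, ie, kp=4.0, kd=2.0, ki=1.0):
    # illustrative conventional PID gains (assumed, not the paper's)
    return kp * e + kd * de + ki * ie
```

With these gains the loop is stable and the integral action removes the steady-state error; swapping `pid` for a fuzzy update lets the two controllers be compared on the same plant.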
IFMBE Proceedings Vol. 23
V.K. Sudhaman, R. HariKumar

Table 2 Device Utilization Summary for Fuzzy PD controller output
IV. VLSI DESIGN OF THE FUZZY PROCESSOR

A VLSI architecture is designed to implement the fuzzy-based controller for insulin pumps in diabetic neuropathy patients. This implementation aims at improving the speed and efficiency of the system, which is done by incorporating parallel computation into the architecture. The fuzzy PD & PID controller design has two inputs, the error signal and the error rate, and its output is the control signal for the insulin pump. The design we decided to develop was that of a low-cost, high-performance fuzzy processor with 5 inputs and a single control output.
Device Utilization Summary

Logic Utilization                                  Used      Available
Number of Slice Latches                            2         3,840
Number of 4 input LUTs                             125       3,840

Logic Distribution
Number of occupied Slices                          66        1,920
Number of Slices containing only related logic     66        66
Number of Slices containing unrelated logic        0         66
Total Number of 4 input LUTs                       125       3,840
Number of bonded IOBs                              22        173
IOB Latches                                        2
Number of MULT18X18s                               4         12
Total equivalent gate count for design             17,016
Additional JTAG gate count for IOBs                1,056
Fig.4: VLSI architecture of fuzzy controller
As shown in the block diagram, the controller has an error generator, a fuzzy controller and an output control signal, with the infusion carried out by the insulin pump in a single-injection process. In this work the fuzzy architecture is implemented and simulated using VHDL, the IEEE standard language for both simulation and synthesis. The VLSI system is built around the VHDL design unit called PROCESS. A typical test bench is created in VHDL, with which simulation of online testing is carried out.

V. FPGA IMPLEMENTATION OF THE FUZZY CONTROLLERS

Field Programmable Gate Arrays (FPGAs) represent reconfigurable computing technology: they are processors which can be programmed with a design and then reprogrammed (or reconfigured) with virtually limitless designs as the designer's needs change. The FPGA generic design flow has three steps: design entry, implementation, and design verification. Design entry creates the design files using a schematic editor or a hardware description language. Design implementation on the FPGA performs partitioning, placement and routing to create the bit-stream file. Design verification uses a simulator to check function, and other software determines the maximum clock frequency. In this paper the Xilinx Spartan-3 family of FPGAs is used; its features make it a low-cost, high-performance logic solution for high-volume, consumer-oriented applications. Tables 2 and 3 show the device utilization summaries for the output membership functions of the fuzzy PD and PID controller synthesis processes, respectively. They show
Table 3 Device Utilization Summary for Fuzzy PID controller output

Device Utilization Summary

Logic Utilization                                  Used      Available
Number of Slice Latches                            202       7,168
Number of 4 input LUTs                             593       7,168

Logic Distribution
Number of occupied Slices                          376       3,584
Number of Slices containing only related logic     376       185
Number of Slices containing unrelated logic        0         185
Total Number of 4 input LUTs                       593       7,168
Number of bonded IOBs                              43        97
IOB Latches                                        26
Number of MULT18X18s                               8         16
Number of GCLKs                                    2         8
Total equivalent gate count for design             38,165
Additional JTAG gate count for IOBs                2,064
that only a small part of the resources is utilized in the FPGA synthesis process. The Spartan-3 family offers densities up to 74,880 logic cells; SelectIO(TM) signaling with up to 784 I/O pins, a 622 Mb/s data transfer rate per I/O, 18 single-ended signal standards and eight differential I/O standards including LVDS and RSDS; abundant logic cells with shift-register capability; wide, fast multiplexers; fast look-ahead carry logic; dedicated 18 x 18 multipliers; and JTAG logic compatible with IEEE 1149.1/1532.
The devices also provide SelectRAM(TM) hierarchical memory with up to 1,872 Kbits of total block RAM and up to 520 Kbits of total distributed RAM; Digital Clock Managers (up to four DCMs) for clock skew elimination, frequency synthesis and high-resolution phase shifting; and eight global clock lines with abundant routing. Internal components can also be analyzed using the RTL schematic.

VI. RESULTS AND DISCUSSION

The implementation results show that the fuzzy PD controller has a minimum input arrival time before clock of 2.951 ns and a maximum output required time after clock of 7.078 ns. The fuzzy PID controller has a minimum period of 5.512 ns (maximum frequency: 181.429 MHz), a minimum input arrival time before clock of 6.657 ns and a maximum output required time after clock of 6.141 ns. The FPGA system is very simple in architecture and high in performance with low power dissipation, and the FPGA fuzzy control system is capable of operating under different input conditions. The architecture is simulated with various values of the error and error rate. The initial conditions of the overall control system have the following natural values: for the fuzzy control action ΔU(0) = 0; for the system output y(0) = 0; and for the original error and rate signals e(0) = r (the set point) and r(0) = r, respectively. For our diabetic pump, which is a second-order system [4], the transfer function is

H(s) = 1 / (s^2 + s + 1)

We remark that the fuzzy PD controller designed above has a self-tuning control capability: when the tracking error e(nT) keeps increasing, the term d(nT) = [e(nT) - e(nT-T)]/T becomes larger. In this case the fuzzy control action Δu(nT) also keeps increasing accordingly, which continuously reduces the tracking error. Under the steady-state condition d(nT) → 0, we found that the control performance of the fuzzy PD controller in VLSI simulation is as good as, if not better than, that of the conventional controller.

VII. CONCLUSION

This paper discusses the treatment of diabetic neuropathy patients using an automated fuzzy PD & PID controller system on an FPGA. The system is designed as a VLSI system and implemented using FPGA technology. The control system consists of a photo-glucometer and an insulin pump. The simulation results have shown that this PD controller surpasses the conventional controller in handling non-linear inputs. In this paper it is assumed that the patient is continuously attached to the infusion pump and that the blood glucose level is monitored at regular intervals. The VHDL code is implemented on the FPGA and tested for its performance.

ACKNOWLEDGEMENT

The authors wish to express their sincere thanks to the Management, the Principal and the IEEE Student Branch Counselor of Bannari Amman Institute of Technology, Sathy, for providing the necessary facilities for the completion of this paper. This paper is supported by the grants of the IEEE Student Enterprise Award, 2007-2008.

REFERENCES

1. Harikumar R and Selvan S. Fuzzy controller for insulin pumps in diabetes. Proceedings of the International Conference on Biomedical Engineering, Anna University, Chennai, India, pp 73-76, Jan. 24-26, 2001.
2. Paramasivam K, Harikumar R and Sundararajan R. Simulation of VLSI design using parallel architecture for epilepsy risk level diagnosis in diabetic neuropathy. IETE Journal of Research, Vol 50, No. 4, pp 297-304, August 2004.
3. Ascia G, Catania V and Russo M. VLSI hardware architecture for complex fuzzy systems. IEEE Transactions on Fuzzy Systems, Vol 7, No. 5, pp 521-539, October 1999.
4. Hu B, Mann G K I and Gosine R G. New methodology for analytical and optimal design of fuzzy PID controllers. IEEE Transactions on Fuzzy Systems, Vol 7, No. 5, October 1999.
5. Kienitz K H and Yoneyama T. A robust controller for insulin pumps based on H-infinity theory. IEEE Transactions on Biomedical Engineering, Vol 40, No. 11, pp 1133-1137, Nov 1993.
6. Lim C C and Teo K L. Optimal insulin infusion control via a mathematical blood glucoregulator model with fuzzy parameters. Cybernetics and Systems, Vol 22, pp 1-16, 1991.
7. Hu B, Mann G K I and Gosine R G. New methodology for analytical and optimal design of fuzzy PID controllers. IEEE Transactions on Fuzzy Systems, Vol 7, No. 5, pp 512-539, Oct 1999.
8. Chen C-L and Kuo F-C. Design and analysis of a fuzzy logic controller. Int. J. Syst. Sci., Vol 26, pp 1223-1248, 1995.
9. Lee C C. Fuzzy logic in control systems: fuzzy logic controller, parts 1 and 2. IEEE Trans. Syst., Man, Cybern., Vol 20, pp 404-435, 1990.
10. Rose J, Francis R J, Lewis D and Chow P. Architecture of field-programmable gate arrays: the effects of logic block functionality on area efficiency. IEEE J. of Solid-State Circuits, Vol 25, No. 5, pp 1217-1225, Oct 1990.
11. Di Giacomo J. Design methodology. In: VLSI Handbook, J. Di Giacomo, Ed., New York: McGraw-Hill, pp 1.3-1.9, 1989.
12. El Gamal A, et al. An architecture for electrically configurable gate arrays. IEEE J. of Solid-State Circuits, Vol 24, No. 2, pp 394-398, Apr 1989.
13. Xilinx FPGA Reference Manual, 2006.
Position Reconstruction of Awake Rodents by Evaluating Neural Spike Information from Place Cells in the Hippocampus

G. Edlinger (1), G. Krausz (1), S. Schaffelhofer (1,2), C. Guger (1), J. Brotons-Mas (3), M. Sanchez-Vives (3,4)

(1) g.tec medical engineering GmbH, Graz, Austria
(2) University of Applied Sciences Upper Austria, Linz, Austria
(3) Instituto de Neurociencias de Alicante, Universidad Miguel Hernandez-CSIC, Alicante, Spain
(4) ICREA - Institut d'Investigacions Biomediques August Pi i Sunyer, Barcelona, Spain
Abstract — Place cells are located in the hippocampus and play an important role in spatial navigation. In this study the neural spike activity of freely moving rats was acquired along with the position of the rats. The study investigated whether position reconstruction is possible, based on neural recordings from the CA1 and subiculum regions, while the rat moves freely in open arenas of different sizes. The neural spike activity was measured with tetrodes chronically implanted in the CA1 and subiculum regions of 6 rats. The position of the rats was monitored using a video tracking system. In the encoding step, spike activity features of the place cells and position information from the rats were used to train a computer system. In the decoding step, the position was estimated from the neural spiking activity. Two different reconstruction methods were implemented: (i) Bayesian 1-step and (ii) Bayesian 2-step. The results show that the reconstruction methods are able to determine the position of the rat in 2-dimensional open space from cell activity measured in the CA1 and subiculum regions. Better accuracies were achieved for CA1 regions because their firing fields show more localized spots. Higher accuracies are achieved with a higher number of place cells and higher firing rates.

Keywords — place cells, hippocampus, online position reconstruction, Bayesian 2-step method
I. INTRODUCTION

Cells with spatially modulated firing rates have been found in almost all areas of the hippocampus and in some surrounding areas. Such place cells (PC) encode an internal representation of an animal's position in its environment. The background firing rate of a PC is very low (0.1 Hz), but when an animal enters the receptive field of the neuron, its firing rate rapidly increases (up to ~5-15 Hz) [1,2]. The location inducing the increased cell activity is called the firing field (FF) or place field (PF) of the particular neuron. Other sensory cues can also influence place cell activity, but visual and motion-related cues are the most relevant [3]. Recently place cells were also used to reconstruct the foraging path of rats by investigating the firing field patterns [1,5]. In an encoding stage the place cell spiking activity together with video tracking information is used to
train a computer system on the data. In a decoding stage only the spike activity is used to reconstruct the path of the animal. In this study it was of interest to test whether position reconstruction is also possible in open environments, in contrast to the linear arenas used in previous studies [5]. The reconstruction was tested in square arenas with different side lengths (0.5 m, 0.7 m, 0.8 m, 1 m) and in a square arena with a smaller square barrier inside (outer square: 1 m x 1 m, inner square: 0.6 m x 0.6 m). Two different algorithms based on Bayesian methods were implemented, after a template matching approach was ruled out for reconstruction.

II. MATERIAL AND METHODS

A. Neural spike and video data acquisition

Action potential data were measured from 6 rats from CA1 or subiculum by the Instituto de Neurociencias de Alicante (Universidad Miguel Hernández-CSIC, UMH), the Institute of Cognitive Neuroscience (University College London, UCL) and the Center for Neural Systems, Memory and Aging (University of Arizona, UA). The rats were connected to the recording system via a head-stage pre-amplifier. From 1 to 8 tetrodes were used for the recordings. Each data channel was amplified up to 40,000 times and a sampling frequency of 48,000 Hz was used. For tracking purposes small infrared light-emitting diodes were attached to the rat's head and a video camera system was mounted above the experimental arena. The number of recorded cells, the recording region and the arena sizes and shapes are shown in Table 1. The sampling frequency for the position-tracking signal was 50 Hz (UMH, UA) or 46.875 Hz (UCL).

Table 1: Recording information of the 6 rats.

Rat #                        1       2       3       4       5          6
No. of cells                 5       26      9       6       4          11
Hippocampal region           CA1     CA1     CA1     CA1     Subiculum  Subiculum
Test field side length [m]   0.7     0.8     0.5     1       1          1
Test field shape             square  square  square  square  square     square
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 374–377, 2009 www.springerlink.com
B. Dwell Time and Firing Rate

In the first step the recorded spike activity was used as input for cluster cutting, which was performed manually to separate the neuronal activity picked up by the electrodes into single-cell activity. In a second step firing rate maps were created. For this, the arena was divided into small subsets of space and class labels were assigned to these subsets (pixels). The firing rate for each pixel was then calculated by counting the number of spikes in each class. The test fields were divided into 64x64 classes, leading to an edge length for each class of 1.09 cm (rat 1), 1.25 cm (rat 2), 0.78 cm (rat 3) and 1.56 cm (rats 4, 5, 6). Figure 1A shows the movement trajectory obtained from the video tracking system for one full experiment with one rat.
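The dwell-time-normalised rate map described above (spike counts per pixel divided by the time spent there) can be sketched as follows; the 64-bin grid and 50 Hz tracking rate follow the text, while the array layout is an assumption.

```python
import numpy as np

def firing_rate_map(spike_xy, track_xy, arena=1.0, nbins=64, dt=0.02):
    """Occupancy-normalised firing map, Eq. (1): f(x) = S(x) / (V(x)*dt).
    spike_xy: (n_spikes, 2) positions at spike times; track_xy: (n_samples, 2)
    tracked positions sampled every dt seconds (50 Hz -> dt = 0.02 s)."""
    bins = np.linspace(0.0, arena, nbins + 1)
    S, _, _ = np.histogram2d(spike_xy[:, 0], spike_xy[:, 1], bins=[bins, bins])
    V, _, _ = np.histogram2d(track_xy[:, 0], track_xy[:, 1], bins=[bins, bins])
    with np.errstate(invalid="ignore", divide="ignore"):
        f = S / (V * dt)
    return np.nan_to_num(f)   # unvisited bins (0/0) -> 0 Hz
```

Smoothing with a 5x5 or 10x10 kernel, as in Fig. 1C/D, would be applied on top of this raw map.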
The average firing rate of a cell i for each position x is described by the firing rate distribution, i.e. a vector of firing rates per cell i: f(x) = (f1(x), f2(x), ..., fN(x)). In the training step the firing rate maps f(x) are calculated for the whole population of N recorded cells and for every single position x. The average firing rates are calculated from the total number of spikes S(x) collected for a cell while the rat was at position x, divided by the total amount of time the rat spent there, V(x)·Δt:

f(x) = S(x) / (V(x) · Δt)    (1)

Δt is the time interval of the position tracking system. The firing rate distribution is independent of how often the rat populates a certain position; rather, it describes the tendency of a cell to fire at a given position x, as shown in Figure 1B for one neuron.

Figure 1: A: arena divided into 64 fields; the cells fire with a specific rate at each position. B: movement trajectories of the rat recorded with the video tracking system, seen from the top. C, D: firing maps of one neuron with different smoothing factors (5x5 and 10x10 kernel).

C. Position reconstruction algorithms

First, the firing rates of all cells within a sliding time window are compared with the firing rate vectors of each class that were set up in the training phase. Depending on the algorithm, a matching class number is returned which identifies the reconstructed position. To calculate the reconstruction error, the reconstructed position can be compared to the position known from the position tracking data. Then the time window is moved to the next reconstruction position. For position reconstruction, two algorithms that had already been tested in linear arenas were implemented: (i) Bayesian 1-step and (ii) Bayesian 2-step with continuity constraint. For the analysis all datasets were divided into three equally long parts. Training was performed on two thirds of the recordings; the remaining third was used for testing and reconstruction.

Bayesian method (1-step)

When the rat moves to specific positions, certain spike trains are produced in response to the sensory stimuli. The probability of observing a number of spikes n = (n1, n2, ..., nN) given the stimulus x is P(n|x). The probability of a stimulus (rat at position x) to occur is defined as P(x). The joint distribution

P(n, x) = P(n|x) · P(x)    (2)

measures the likelihood of observing both a stimulus x and a spike train n. For the reconstruction, the observation of a number of spikes n found in the measured data occurs with a probability of P(n). The likelihood of observing both stimulus and spikes is finally given by [5]

P(x|n) = C(τ, n) · P(x) · (Π_{i=1..N} fi(x)^{ni}) · exp(−τ · Σ_{i=1..N} fi(x))    (3)

where C(τ, n) is a normalization factor and τ is the length of the time window. In this study the normalization factor C(τ, n) was set to 1. The most probable position is regarded as the animal's position:

x̂_{Bayes,1-step} = arg max_x P(x|n)    (4)

Bayesian method with continuity constraint (2-step)

A continuity constraint can improve the accuracy of the reconstruction. Sudden jumps in the reconstruction are caused by low instantaneous firing rates of the recorded place cells: if not enough cells are firing there is a lack of information, yet the firing information is needed for the position reconstruction. The continuity constraint incorporates the reconstructed position from the preceding
time step as well as the current speed information. Based on the 1-step Bayesian method, the reconstructed position at the preceding time step t−1 is also used to calculate the reconstructed position at time t:

P(xt | nt, xt−1) = k · P(xt | nt) · P(xt | xt−1)    (5)
For more on this approach and details see [5].

III. RESULTS

Figure 2 shows the reconstruction results computed with the Bayesian 2-step algorithm for rat 3. Data from seconds 160 to 480 were used to train the algorithm and the interval 0 to 160 was used to test the method. The figure shows that the reconstructed path follows the real path very well at many data points of the recording. The mean reconstruction error for rat 3 was 9.5 cm. Figure 2 also clearly shows erratic jumps of the reconstructed path in both the x- and y-coordinates. If the reconstruction is only performed for time intervals where a minimum number of spikes (4 spikes in the reconstruction window) are present, the accuracy increases, as shown by the thin grey line in Figure 2B; in this case the mean error is 8 cm. Interesting is the erratic jump around second 51, where the running speed (Figure 2C) was almost 0.
Figure 2: Reconstruction results for rat 3. A: Real (red) and reconstructed x- and y-positions. B: Reconstruction error (thick line), reconstruction error weighted by the instant place cell firing rate (thin line) and median error of the whole recording (horizontal line). C: Running speed. D: Firing rate of all neurons.
The two reconstruction algorithms were tested on all 6 rats using a 3 x 3 fold cross-validation technique. Additionally, the Bayesian 2-step algorithm was trained and tested on 100% of the data to test the theoretically achievable accuracy. In Figure 3 one example of CA1 cells and one example of subiculum cells are shown. Rat 3 reached a minimum error of 9.4 cm for a reconstruction window of 4 seconds using CA1 neurons. Rat 6 reached 26 cm with subicular units. For all rats the Bayesian 2-step (100% training) version clearly performed best. The horizontal dotted line at 5 cm in each graph displays the intrinsic tracking error, which is defined as the average uncertainty in position tracking due to the size of the diode (LED) arrays, the distance of the diodes above the rat's head and variations in posture [6].

IV. DISCUSSION

The reconstruction algorithms for neural spike trains were implemented and applied to hippocampal place cell activity. The goal was to reconstruct the position of the rats from the spike activity alone as accurately as possible and therefore to minimize the reconstruction error. The best performance was found for the Bayesian 2-step algorithm. The reason is that this algorithm also considers the previous position of the rat and does not allow large jumps from one reconstruction point to the next. The Bayesian 1-step algorithm performed less accurately, but its results are still of interest because it is based only on the current reconstruction window. Although subicular units tend to be more stable than CA1 cells, the results show that position reconstruction is more accurate with CA1 units. However, it is also interesting that subicular units contain enough information for the reconstruction. This can also be seen from the place field density plots, where the place fields of subicular units are much more blurred than the place fields of CA1 units. Distinct place fields in combination with a high number of cells guarantee good reconstruction results. It must be noted, however, that only 2 datasets from the subiculum and 4 datasets from CA1 regions were investigated; the investigation of more datasets is necessary to confirm this.
Theoretically the reconstruction error is inversely proportional to the square root of the number of cells. This was confirmed by the data analysis of all 6 rats as well as in other publications [5,7]. It is interesting that for 3 rats (1, 2, 3) the reconstruction error was below 20 cm with fewer than 10 place cells. Wilson [7] reported an error of 33 cm with 10 cells, and Zhang [5] errors of 25 or 11 cm for 10 cells. This shows that the reconstruction can already be performed with even a few cells, though the different arena sizes must be kept in mind. Erratic jumps also occur when the animal stops running. This has two reasons: (i) the firing rate is modulated by speed and is therefore lower, and (ii) the animal received food
rewards to move around in the arena, and the eating can therefore produce artifacts in the recorded neural data. Zhang also suggests biological reasons for the erratic jumps, such as the rat looking around or planning its next move. As already mentioned in the results section, restricting the reconstruction to points where the firing rate is above a certain threshold yields better reconstruction results only punctually. It does not reduce the overall error rate, because the positions that have not been reconstructed have to be interpolated; this estimation leads to reconstruction errors for the interpolated positions and increases the error rate again. Next steps in this research will include the investigation of grid cells as well as the realization of a real-time reconstruction hardware and software setup.
ACKNOWLEDGEMENT

This work was supported by the EU-IST project Presenccia.

REFERENCES

1. O'Mara S. M. (1995). Spatially selective firing properties of hippocampal formation neurons in rodents and primates. Progress in Neurobiology, Vol. 45, 253-274.
2. Anderson M. I. and O'Mara S. M. (2003). Analysis of recordings of single-unit firing and population activity in the dorsal subiculum of unrestrained, freely moving rats. The Journal of Neurophysiology, Vol. 90, No. 2, 655-665.
3. Poucet B., Lenck-Santini P. P., Paz-Villagrán V. and Save E. (2003). Place cells, neocortex and spatial navigation: a short review. J Physiol Paris, Vol. 97, Issues 4-6, 537-546.
4. Brown E. N., Frank L. M., Tang D., Quirk M. C. and Wilson M. A. (1998). A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. The Journal of Neuroscience, Vol. 18(18), 7411-7425.
5. Zhang K., Ginzburg I., McNaughton B. L. and Sejnowski T. J. (1998). Interpreting neuronal population activity by reconstruction: unified framework with application to hippocampal place cells. The Journal of Neurophysiology, Vol. 79, No. 2, 1017-1044.
6. Skaggs W. E., McNaughton B. L., Wilson M. A. and Barnes C. A. (1996). Theta phase precession in hippocampal neuronal populations and the compression of temporal sequences. Hippocampus, Vol. 6(2), 149-172.
7. Wilson M. A. and McNaughton B. L. (1993). Dynamics of the hippocampal ensemble code for space. Science, Vol. 261(5124), 1055-1058.
Heart Rate Variability Response to a Stressful Event in Healthy Subjects

Chih-Yuan Chuang, Wei-Ru Han and Shuenn-Tsong Young
Institute of Biomedical Engineering, National Yang-Ming University, Taipei, Taiwan

Abstract — The purpose of this study was to investigate autonomic nervous system function in healthy subjects under a stressful event by analyzing heart rate variability (HRV). The participants were eight graduate students exposed to a cognitive stress task involving preparation for an oral presentation. Measurements of subjective tension and muscle-bound level, together with 5-minute electrocardiograms, were obtained 30 minutes before the oral presentation as a pretest and 30 minutes after the oral presentation as a posttest. The R-R intervals of the electrocardiogram were calculated, and the R-R interval series were analyzed using power spectral analysis to quantify the frequency-domain properties of HRV. The results showed that subjective tension, muscle-bound level and heart rate were significantly higher in the pretest, and that the normalized high-frequency power of HRV was significantly lower in the pretest, compared with the posttest. These findings suggest that a stressful event reduces cardiovascular parasympathetic responsiveness and increases sympathetic responsiveness and subjective tension. The normalized high-frequency power of HRV can thus reflect the affective state under a stressful event, and this psychophysiological measurement can be used for detecting human affective state and for stress management.
A reduction of HRV has been associated with several cardiologic diseases, especially lethal arrhythmic complications after an acute myocardial infarction [2]. Therefore, HRV is often used as a predictor of disease risk [3]. A reduced HRV is also associated with psychosocial factors such as stress, anxiety, and panic disorder [4]. It is suggested as a good tool to estimate the strength of psychological effects [5]. Previous studies reported that the HF component of HRV significantly increased during relaxation training, and the cardiac parasympathetic tone also increased associating with relaxation response [6]. The purpose of this study was to investigate the autonomic nervous system function and subjective tension in healthy subject under stress event. We hypothesized those participants who are exposed to a cognitive stress task will demonstrate significant higher tension, higher musclebound level and lower arousal of parasympathetic nervous system compared to finish stress task.
Keywords — Autonomic nervous system, Heart rate variability, Stress, Tension
I. INTRODUCTION

Mental stress can cause psychosomatic and other types of disease. Quantitative evaluation of stress is therefore very important for disease prevention and treatment. Heart rate variability (HRV) is the beat-to-beat variation in the time interval between heartbeats. It is controlled by the autonomic nervous system, comprising the sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS). Generally, SNS activity increases heart rate and PNS activity decreases heart rate [1]. Frequency-domain analysis of HRV yields the HRV spectrum, a noninvasive tool for the evaluation of autonomic regulation of the heart. The HRV spectrum can be categorized into high-frequency (HF, 0.15-0.40 Hz) and low-frequency (LF, 0.04-0.15 Hz) components. The HF component is equivalent to the respiratory sinus arrhythmia (RSA) and is generally assumed to represent parasympathetic control of the heart. The LF component is jointly contributed by both sympathetic and parasympathetic nerves. The ratio LF/HF is considered to reflect sympathetic modulation.

II. METHODS

A. Experimental Protocol
The participants were recruited from the Institute of Biomedical Engineering, National Yang-Ming University. A total of 8 healthy graduate students, aged 22-32, without symptoms or histories of cardiovascular or other diseases, were recruited in this study. The participants were exposed to a cognitive stress task involving preparation for a one-hour oral presentation. The presentations were related to the participants' research topics. Measurements of subjective tension, muscle-bound level and a 5-minute electrocardiogram (ECG) were obtained 30 minutes before the oral presentation (pretest) and 30 minutes after the oral presentation (posttest) (Fig. 1). ECG recordings were taken for 5 min while each subject sat quietly and breathed normally. The ECG was obtained with a Biopac ECG-100C, digitized using a 14-bit analog-to-digital converter (NI USB-6009, National Instruments) at a sampling rate of 1024 Hz, and stored on a personal computer for off-line analysis.
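As a rough illustration of how beat times might be extracted from such a digitized ECG, the sketch below picks R peaks with a simple amplitude threshold and refractory period and returns the R-R intervals. The function name, threshold ratio and synthetic test signal are assumptions for illustration only; the paper defines the beat time as the R point of each valid QRS complex but does not describe this detector.

```python
import numpy as np

def rr_intervals(ecg, fs, thresh_ratio=0.6):
    """Naive R-peak picker: threshold at a fraction of the signal maximum,
    then keep local maxima separated by a 250 ms refractory period.
    Returns R-R intervals in milliseconds."""
    thresh = thresh_ratio * ecg.max()
    refractory = int(0.25 * fs)          # minimum samples between beats
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] >= thresh and ecg[i] >= ecg[i - 1]
                and ecg[i] > ecg[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return np.diff(peaks) / fs * 1000.0
```

A real recording would first need QRS-enhancing filtering; this sketch only conveys the interval computation that feeds the HRV analysis.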
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 378–380, 2009 www.springerlink.com
Table 1 Heart rate and heart rate variability indices

Index      Pretest (M±SE)     Posttest (M±SE)    p value
HR         96.38±6.49         80.96±5.04         0.011
HF (ms²)   267.62±154.73      215.04±69.69       0.483
LF (ms²)   874.15±296.56      649.18±239.30      0.262
%HF        20.51±4.45         31.34±6.66         0.035
%LF        84.40±4.10         73.15±7.0          0.068
LF/HF      6.21±1.70          4.37±1.51          0.262
TP         2589.96±1534.29    1307.83±350.46     0.888
Fig. 1 Experiment procedure

Participants also rated their subjective tension and muscle-bound level on five-point scales ranging from "very little" (1) to "very much" (5).

B. Data Analysis

The HRV analysis of the ECG signals was performed with a standard procedure [3]. The R point of each valid QRS complex was defined as the time point of each heart beat, and the interval between two R points (R-R interval) was calculated. Frequency-domain analysis of HRV was performed using the nonparametric method of the Fourier transform. For each 5-minute ECG, the R-R intervals were linearly interpolated to produce a continuous track at a 3 Hz resampling rate, and the track was analyzed by Fourier transform with a Hamming window to obtain the power spectrum of HRV. The HRV power spectrum was subsequently quantified into frequency-domain indices: total power, LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) power, LF and HF in normalized units (LF%, HF%), and the ratio of LF to HF (LF/HF). Statistical analysis was performed using SPSS 15.0. Data are expressed as mean and standard error (SE). Comparisons of the HRV indices and subjective tension between pretest and posttest were performed with the Mann-Whitney U-test. Significance levels were assumed at p
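A minimal numpy sketch of the frequency-domain procedure just described (linear interpolation of the R-R track onto a 3 Hz grid, a Hamming window, the Fourier transform, and summed band powers); the helper name and any synthetic data are illustrative, not from the paper.

```python
import numpy as np

def hrv_spectrum(rr_ms, fs_resample=3.0):
    """Frequency-domain HRV indices from a sequence of R-R intervals (ms).

    The R-R track is linearly interpolated onto a uniform 3 Hz grid,
    de-meaned, Hamming-windowed and Fourier-transformed; band powers are
    then summed over the LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands.
    """
    rr_ms = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr_ms) / 1000.0                    # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)
    track = np.interp(grid, t, rr_ms)                # continuous R-R track
    track = track - track.mean()                     # drop the DC component
    win = np.hamming(len(track))
    power = np.abs(np.fft.rfft(track * win)) ** 2 / len(track)
    freqs = np.fft.rfftfreq(len(track), d=1.0 / fs_resample)

    def band(lo, hi):
        return power[(freqs >= lo) & (freqs < hi)].sum()

    lf, hf = band(0.04, 0.15), band(0.15, 0.40)
    return {"TP": band(0.0, 0.40), "LF": lf, "HF": hf,
            "LF%": 100.0 * lf / (lf + hf), "HF%": 100.0 * hf / (lf + hf),
            "LF/HF": lf / hf}
```

For example, an R-R series modulated at a typical respiratory rate of 0.25 Hz concentrates its power in the HF band, giving LF/HF below 1.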
${}^{(3)}\mathbf{O} = [\,{}^{(3)}O_1,\, {}^{(3)}O_2,\, \ldots,\, {}^{(3)}O_K\,]^T = [\,p(1 \mid x_c),\, p(2 \mid x_c),\, \ldots,\, p(K \mid x_c)\,]^T.$    (3)

The entropy combinator receives the output of each LLGMN weighted by a coefficient $\alpha_c$, and outputs the a posteriori probabilities of all classes. Each element of the entropy combinator's input vector $y_c$ is given by

$y_k(x_c) = \alpha_c\, p(k \mid x_c),$    (4)

where the coefficient $\alpha_c$ ($0 \le \alpha_c \le 1$), which denotes the degree of effect of the cth LLGMN's output, is defined as

$\alpha_c = 1 - H(x_c), \qquad H(x_c) = -\frac{1}{\log K} \sum_{k=1}^{K} p(k \mid x_c) \log p(k \mid x_c).$    (5)

Here, $H(x_c)$ signifies the entropy of the output of the LLGMN, and denotes the ambiguity of the a posteriori probabilities. When these probabilities are ambiguous, the entropy $H(x_c)$ becomes large and $\alpha_c$ approaches 0. In the entropy combinator, the a posteriori probabilities of all classes are calculated by

$Y_k = p(k \mid x_1, x_2, \ldots, x_C) = \frac{\sum_{c=1}^{C} y_k(x_c)}{\sum_{k'=1}^{K} \sum_{c'=1}^{C} y_{k'}(x_{c'})}.$    (6)

If $E$ is smaller than the discrimination determination threshold value $E_d$, the class with the highest a posteriori probability becomes the result of discrimination. Otherwise, if $E$ exceeds $E_d$, discrimination is suspended as an obscure class.

Evaluation of finger tapping movements: First, the input vector $x_c$ is created from the measured finger tapping movements for their evaluation. The feature vectors $x(t_{all}) \in \mathbb{R}^{11}$ and $x(t_d) \in \mathbb{R}^{11}$ are computed for the overall measurement time $t_{all}$ and the time interval $[t_{dst}, t_{ded}]$ respectively. Then, the jth elements $x_j(t_d)$ of $x(t_d)$ ($d = 1, 2, \ldots, D$) are used to make a new vector, defined as $x'_j = [x_j(t_1), x_j(t_2), \ldots, x_j(t_D)]^T \in \mathbb{R}^D$ ($j = 1, 2, \ldots, 11$).

The system next measures the finger tapping movements of the patient and those of normal subjects. The feature vectors $x'_j$ and $x(t_{all})$ calculated from these movements are then input to each LLGMN as teacher vectors, and the LLGMNs are trained to estimate the a posteriori probabilities of each movement. Thus, the number of LLGMNs is $C = 11 + 1 = 12$. After training, the system can calculate similarities between patterns in the subject's movements and trained movements as a posteriori probabilities by inputting the newly measured vectors to the LLGMNs.

III. EXPERIMENTS

A. Method
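Assuming the class posteriors of each network are available as the rows of a C x K array, Eqs. (3)-(6) can be sketched in Python as follows. The function names are illustrative, and the degenerate case in which every weight vanishes is not handled.

```python
import numpy as np

def entropy_weight(p):
    """Eq. (5): alpha_c = 1 - H(x_c), with H normalised by log K so that
    a uniform posterior gives alpha_c = 0 and a one-hot posterior gives 1."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                                   # treat 0*log 0 as 0
    H = -(nz * np.log(nz)).sum() / np.log(len(p))
    return 1.0 - H

def entropy_combinator(posteriors):
    """Eqs. (3), (4) and (6): combine C networks' posteriors p(k | x_c).

    posteriors: C x K array, row c holding the cth network's output (Eq. 3).
    Returns the combined a posteriori probabilities Y_k.
    """
    P = np.asarray(posteriors, dtype=float)
    alpha = np.array([entropy_weight(row) for row in P])   # Eq. (5)
    y = alpha[:, None] * P                                 # Eq. (4)
    return y.sum(axis=0) / y.sum()                         # Eq. (6)
```

An ambiguous network (uniform posterior) receives zero weight, so a confident network dominates: combining [0.9, 0.1] with [0.5, 0.5] returns [0.9, 0.1].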
The subjects were 33 patients with PD (average age: 69.4 ± 8.1; male: 16, female: 17) and 32 normal elderly subjects (average age: 68.2 ± 5.0; male: 16, female: 16). Coils were attached to the distal parts of the thumb and index finger, and the magnetic sensor was calibrated using three calibration values of 20, 30 and 90 mm. The movement of each hand was measured for 60 s in compliance with instructions to move the fingers as far apart and as quickly as possible. The severities of PD in the patients were evaluated by a neurophysician based on the finger tapping test of the Unified Parkinson's Disease Rating Scale (UPDRS). The calculated indices were standardized on the
IFMBE Proceedings Vol. 23
A Diagnosis Support System for Finger Tapping Movements Using Magnetic Sensors and Probabilistic Neural Networks
Fig. 5 A posteriori probabilities of Parkinson's disease in each index

Fig. 3 Examples of radar chart representation of the results of the evaluated indices [4]
basis of the values obtained from the normal elderly subjects. The sampling frequency was 100 Hz. Each index was computed for the overall measurement time t_all = 60 s and for four pre-specified time intervals, t1 = [0,30], t2 = [10,40], t3 = [20,50] and t4 = [30,60], and input to the LLGMNs. The measured finger tapping movements were then put into two classes in terms of whether they were normal or not (k = 1: normal elderly; k = 2: PD; K = 2). In addition, fifteen samples of each class were used as teacher vectors for learning.
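The windowing and control-group standardization just described can be sketched as follows. The index function and synthetic signal in the test are assumptions for illustration; the paper's 11 movement indices are defined elsewhere and not reproduced here.

```python
import numpy as np

WINDOWS = [(0, 30), (10, 40), (20, 50), (30, 60)]  # seconds, as in the paper

def windowed_index(signal, fs, index_fn, windows=WINDOWS):
    """Evaluate one movement index over the pre-specified time windows.

    signal: 1-D recording sampled at fs Hz; index_fn maps a window of
    samples to a scalar index value.
    """
    return np.array([index_fn(signal[int(a * fs):int(b * fs)])
                     for a, b in windows])

def standardize(values, control_mean, control_std):
    """Standardise index values on the basis of the normal elderly group."""
    return (values - control_mean) / control_std
```

Each standardized window value would then feed one of the per-window LLGMNs, alongside the whole-record value for the overall-time network.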
Fig. 4 Discrimination rates of finger tapping movements

B. Results

Radar chart representation of the results of the indices is shown in Fig. 3; (a) to (c) illustrate the charts for normal elderly subjects, PD patients with UPDRS-FT 1 and those with UPDRS-FT 2, respectively. The solid lines describe the average value of each index in the group of normal elderly subjects, and the dotted lines show double and quintuple the standard deviation (2SD, 5SD). The classification results of the finger tapping movements for all subjects are outlined in Fig. 4. This shows the mean values and standard deviations of the discrimination rates for 50 kinds of training set and for the test set, where the initial weight coefficients were changed randomly 10 times in each trial. The average discrimination rates of the normal elderly subjects using a single LLGMN and the proposed method were 86.2 ± 9.24% and 91.6 ± 4.51%, and those of the PD patients were 87.5 ± 7.25% and 93.1 ± 3.69%, respectively. Further, each LLGMN's output y2(xc) (c = 1,2,…,12), which represents the a posteriori probability for PD patients, is illustrated for all subjects in Fig. 5. The subjects shown in this figure are the same as those in Fig. 3.

C. Discussion

From the experimental results, plotting radar charts of the movement indices, computed and standardized using the basic values obtained from normal elderly subjects, revealed that data from normal elderly subjects lie near the average, while those in PD patients' charts become larger according to the severity of their condition. Further, the discrimination results demonstrated that the patients could be classified correctly in terms of their impairment status using 12 LLGMNs with a degree of accuracy about 5% higher than that obtained using a single LLGMN. Moreover, representing the a posteriori probabilities as radar charts confirmed that the values for PD patients become large, and such charts enable quantitative evaluation and description of subjects' motility function. These results indicate that the proposed method is capable of detecting the disease and supporting PD diagnosis.
IV. CONCLUSIONS

This paper proposes a diagnosis support system that can quantitatively evaluate motility function for finger tapping movements. From the experiments performed, the finger tapping movements of PD patients were discriminated at a rate of 93.1 ± 3.69%, demonstrating that the proposed system is effective in the support of diagnosis using finger movements. In future research, we plan to improve the proposed method to enable diagnosis of the severity of the disease, as well as investigating the effects of aging with an increased number of subjects.

K. Shima, T. Tsuji, A. Kandori, M. Yokoe and S. Sakoda
ACKNOWLEDGMENT

This study was supported in part by a Grant-in-Aid for Scientific Research (19 9510) from the Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists.

REFERENCES

1. Holmes G (1917) The symptoms of acute cerebellar injuries due to gunshot injuries. Brain, vol 40, no 4, pp 461-535
2. Okuno R, Yokoe M, Akazawa K et al. (2006) Finger taps acceleration measurement system for quantitative diagnosis of Parkinson's disease. Proc. of the 2006 IEEE Int. Conf. of the Engineering in Medicine and Biology Society, pp 6623-6626
3. Kandori A, Yokoe M, Sakoda S et al. (2004) Quantitative magnetic detection of finger movements in patients with Parkinson's disease. Neuroscience Research, vol 49, no 2, pp 253-260
4. Shima K, Tsuji T, Kan E et al. (2008) Measurement and Evaluation of Finger Tapping Movements Using Magnetic Sensors. Proc. of the 30th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, pp 5628-5631
5. Tsuji T, Fukuda O, Ichinobe H and Kaneko M (1999) A Log-Linearized Gaussian Mixture Network and Its Application to EEG Pattern Classification. IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol 29, no 1, pp 60-72
6. Breiman L (1996) Bagging predictors. Machine Learning, vol 24, pp 123-140

Author: Keisuke Shima
Institute: Hiroshima University
Street: Kagamiyama 1-4-1
City: Higashi-hiroshima
Country: Japan
Email: [email protected]
Increasing User Functionality of an Auditory P3 Brain-Computer Interface for Functional Electrical Stimulation Application

A.S.J. Bentley, C.M. Andrew and L.R. John

MRC/UCT Medical Imaging Research Unit, University of Cape Town, Faculty of Health Sciences, South Africa
Abstract — A brain-computer interface (BCI) provides a hands-free means of controlling electrical devices by using signals derived directly from brain activity. Its aim is to provide an additional voluntary channel of output. A P3 BCI utilizes the fact that various stimuli may elicit a detectable response in the brain's electrical activity that can act as a control signal. The aim of this research is to explore different stimulus paradigms in an attempt to develop an accurate, efficient and readily applicable P3 BCI for real task applications. The increased amplitude of a target P3 determines the extent to which it may be detected, and thus its efficiency as a signal controller in a P3 BCI. Six different experimental paradigms were explored for feasibility and sustainable applicability. Principal component analysis (PCA) and independent component analysis (ICA) were used to pre-process the data and increase computational efficiency before a linear support vector machine (SVM) was used for categorization. The experimental procedures for single-trial detection produced excellent results for visual and auditory stimuli. Visual stimuli proved slightly superior overall, but the auditory paradigms were sufficient for real applications. Increasing user functionality decreased the accuracy of the results, although accuracies of over 90% were obtained in some instances. Salient results suggest that increasing the number of varying stimuli causes minimal differences in categorization speed. The added benefit of a three-stimulus paradigm as opposed to the traditional paradigm is highlighted by its increased user functionality for applications such as functional electrical stimulation (FES). Additionally, auditory BCIs do not require the user to avert their visual attention from the task at hand and are thus more practical in a real environment.
Coupled with the proposed three-stimulus procedure, the P3 BCI's capability is vastly improved for people suffering from neurological disorders.

Keywords — BCI, FES, P3, auditory, visual
I. INTRODUCTION Functional electrical stimulation (FES) presents a neuroprosthetic technique that uses changes in potential to activate nerves innervating muscles associated with motor movement [1]. “Neuroprostheses operate through a command interface that measures some modality over which voluntary control is maintained, and translates this to a specific operation of the prosthesis.” [2] Devices are developed to restore or improve functionality of an impaired nervous system affected by
paralysis [1]. A brain-computer interface (BCI) provides a hands-free means of controlling FES devices. Many patients require the use of their extremities for the intended application and thus it isn’t possible to use these in operating a command system. Therefore measured electrical activity in the brain presents a possible control system for FES devices. A BCI is a direct communication pathway between a brain and a reactive device and can operate via mutual interaction between both interfaces [3]. A major aim and incentive of BCIs has been to help users suffering from conditions which inhibit physical control, but which leave intellectual capabilities unhindered. Simplified, BCIs “determine the intent of the user” from various electrophysiological signals in the brain and convert these into responses that enable a device to be controlled [4]. Most BCIs are operated via a visual stimulus or motor imagery [2]. Unfortunately this often distracts the user’s attention from the task at hand. Additionally many neurological disorders may lead to loss of vision, especially in the case of patients suffering from a complete locked-in state (CLIS) – voluntary muscular control is lost [5]. Therefore an auditory BCI is adopted as the preferred interface between FES and the user. Although visual stimulus BCIs have proven more accurate than auditory BCIs, their applicability in a clinical setting is limited. A study conducted by Nijboer et al. concluded “that with sufficient training time an auditory BCI may be as efficient as a visual BCI” [6]. Hill et al. suggested that auditory event-related potentials (ERPs) could be used as part of a single trial BCI [7]. Various physiological signals may be used as the control signal for an auditory BCI. However, the associated recorded EEG integrity is affected by passive or active movement of extremities. The P3 presents a robust solution to combating the effects of signal distortion. 
P3 BCIs provide a means of obtaining cognitive information and communication without relying on peripheral nerves and muscles. For this reason they have widespread use in people with disabilities.

A. P3 (or P300)

The P3 is an ERP component of the EEG and indicates a subject's recognition of task-relevant information. It is one of the most common features of an ERP [8].
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 687–690, 2009 www.springerlink.com
Fig. 1 Schematic illustration of P3 (or P300) waveform [9]

Figure 1 illustrates the three traditional paradigms associated with P3 generation. The target elicits a large positive potential that increases in amplitude and propagates from the frontal to the parietal electrodes of an EEG, "and has a peak latency of 300 ms for auditory and 400 ms for visual stimuli in young adults" [9]. Extensive P3 theory is discussed in [9]. A traditional P3 experimental paradigm only allows the user to distinguish between two stimuli (binary selectivity) via a process of weighted probability, whereby the stimulus with the least probability is seen as the target, producing a comparable increase in the amplitude of the P3. In everyday activity we are confronted with multiple decisions, so increasing the number of stimuli for P3 production indirectly improves the decision-making capabilities of an auditory P3 BCI. An attempt must also be made to assess the efficiency with which the proposed system is able to distinguish the target stimulus amongst two other stimuli with the same probability.

B. Principal and Independent Component Analysis

BCI systems employing measurement of cortical areas associated with motor movement are limited by the reliability of the signal for FES applications [5]. The corollary indicates that "using signals outside of motor areas make them less susceptible to disturbance from active or passive movement of limbs" [2]. Most current FES-BCI systems use visual input as a control signal; in real-life situations this is often not feasible. The pre-processing technique presented by Bentley et al. creates a means of increasing the accuracy and response time of FES-BCI systems for economically viable solutions [5]. P3 waveforms are most often extracted from EEG signals by principal component analysis (PCA) or independent component analysis (ICA). PCA presents an attractive method of data reduction, and ICA a means of source extraction [10]. Importantly, PCA is able to compress high-resolution data into a format from which ICA can extract the required information, increasing computational efficiency. Xu et al. used algorithms based on ICA (with PCA pre-processing) for P3 detection on a 64-channel system by means of anatomical and psychological knowledge of the P3's spatio-temporal progression [11].

II. METHOD

Traditional P3 paradigms generally consist of a two-stimulus experimental technique, whereby decreasing the probability of one stimulus correspondingly increases the probability of it being distinguished from the other stimuli. These changes in the P3 waveform enable BCIs to differentiate between them. EEG recordings of electrical activity in the brain act as the control method for BCIs. These signals can be generated most effectively by mental imagery and by external stimuli produced by vision or hearing. Most of these waveforms (apart from the P3) carry with them complications associated with artifact production and signal distortion created by motor movement generated in the motor area of the cortex. Advantages of auditory stimuli over visual and mental imagery are discussed in Hill et al. [7]. This method of signal control also does not require the extensive training that mental imagery techniques require, and the resultant device is not as user-specific. It allows the user to focus their attention on the task at hand. A method using a combination of PCA and ICA techniques discussed in [5] is utilized to pre-process and extract the waveform; its computational efficiency is of the utmost importance if it is to be used for FES. The data is spatio-temporally manipulated in the single-trial scenario to highlight and enhance the P3 waveforms for classification.

A. Experimental Paradigms

The aim of this research was to investigate increasing the user functionality of alternative paradigms associated with the P3.
Materials: Regular earphones were used for the auditory stimulus so as to replicate a real environment. The electrical noise generated was taken into consideration due to the proximity of the earphones to the high-resolution 128-channel Geodesic Sensor Net (GSN). Five experiments of 180 trials each were conducted on 15 subjects (right-handed males between the ages of 21 and 30) using the high-resolution system. The paradigms included visual and auditory stimuli, a requirement for a button to be pushed in certain instances, and a traditional P3 approach. The use of three stimuli of equal probability was assessed for classification accuracy and efficacy so as to simulate multiple decision processes in actual scenarios. Three different stimuli were used in an attempt to add extra selective functionality for the user (as opposed to the traditional paradigm, which utilizes only two differing stimuli). The five experiments were:

1. presenting the subject with an auditory target stimulus amongst two other stimuli;
2. the same procedure as experiment 1, except that a button is pushed to indicate a target and a separate button is pushed to indicate a non-target;
3. presenting the subject with a visual stimulus amongst two other stimuli;
4. the same procedure as experiment 3, except that a button is pushed to indicate a target and a separate button is pushed to indicate a non-target; and
5. a traditional P3 experiment consisting separately of auditory (indicated by a * in Table 1) and visual stimuli in the single-stimulus paradigm illustrated in Figure 1.

In each case the subject was presented with the target prior to testing to become familiar with the stimulus. All experiments required the subject to focus on a cross-hair, and subjects were positioned 0.5 m from the screen. Auditory stimuli varied only by frequency (volume and duration were kept constant throughout). The predetermined sinusoidal tones had frequencies of 500, 1500, and 3500 Hz, which presented the least discomfort; they were chosen after subjects indicated which three frequencies from a selective range provided the best discernible differences. Visual stimuli consisted of three arrows of equal size and positioning which varied only by orientation: left, right, and up. The target in each case was randomly selected for cross-validation averaging. In paradigms 1 and 3 the subjects were required to count the number of targets, and in instances 2, 4 and 5 they were required to push a button.
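The three sinusoidal stimuli can be generated in a few lines; the 500, 1500 and 3500 Hz frequencies are from the paper, while the duration, sample rate and amplitude below are illustrative placeholders (the paper fixed volume and duration but their values are not stated here).

```python
import numpy as np

def tone(freq_hz, duration_s=0.2, fs=44100, amplitude=0.5):
    """Sinusoidal auditory stimulus. Duration, sample rate and amplitude
    are assumptions for illustration, not values from the paper."""
    t = np.arange(int(duration_s * fs)) / fs
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# The three stimulus frequencies used in the auditory paradigms (Hz).
stimuli = {f: tone(f) for f in (500, 1500, 3500)}
```

Keeping amplitude and duration constant across the three tones, as the paper does, ensures the target differs from the non-targets only in pitch.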
The paradigms were established according to the ERP technique discussed in [12] and sampled at 200 Hz.

B. Processing and Classification

Eye artifacts and bad channels were removed from the recordings, and the raw data was band-pass filtered from 0.1 to 8 Hz (spectrum analysis reveals that the principal energy of the P3 lies in the 1 to 8 Hz band [11], but high-pass filtering above 0.1 Hz tends to attenuate the P3 waveform). A combination of PCA and ICA was used to formulate an independent component (IC) matrix from the training data. This matrix acted as an "unmixing" matrix (W) for single-trial data. The ICs of each single-trial epoch are then spatio-temporally manipulated to highlight the qualities associated with P3 detection. This method is discussed in [5] and presents an effective and relatively fast classification technique (Figure 2).
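The pre-processing chain can be sketched with numpy alone. This is a structural sketch under stated assumptions, not the authors' implementation: the band-pass is a crude FFT mask, and only the PCA stage of the unmixing matrix W is shown (the paper refines W with ICA, which is omitted here).

```python
import numpy as np

def bandpass(x, fs, lo=0.1, hi=8.0):
    """Crude band-pass: zero all FFT bins outside [lo, hi] Hz."""
    X = np.fft.rfft(x, axis=-1)
    f = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    X[..., (f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=x.shape[-1], axis=-1)

def pca_unmixing(train_epochs, n_comp):
    """PCA stage of an 'unmixing' matrix W learned from training data.

    train_epochs: trials x channels x samples. The principal spatial
    directions are taken from the channel covariance via the SVD.
    """
    T, C, S = train_epochs.shape
    X = train_epochs.transpose(1, 0, 2).reshape(C, T * S)
    X = X - X.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_comp].T                 # n_comp x channels

def components(epoch, W):
    """Project one channels x samples epoch onto the learned components."""
    return W @ epoch
```

The projected single-trial components would then be spatio-temporally manipulated and passed to the SVM classifier described next.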
Fig. 2 Algorithm for P3 detection: (a) training and (b) testing phase [5]

A linear support vector machine (SVM) is then used to classify a moving average of the data from 0 to 650 ms for a single-trial epoch. Thornton's separability index was used to determine the optimal features for classification. A generic subset of features was chosen: the most prominent feature combination was used on all the data, in contrast with calculating the index for each test (increased accuracies can be obtained by calculating individual indices).

III. RESULTS

An average of 10 separate cross-validation sets was used to determine the prediction accuracies. It should be noted that certain subjects, presented with the 1500 Hz auditory stimulus, reported that identifying the target proved more difficult than in the cases where the higher or lower frequencies were the target. Additionally, confusion developed for certain subjects in the visual stimulus paradigm when asked to push a button located on the left when a "right" arrow was displayed, and vice versa.
Table 1 Prediction accuracy percentages of cross-validation tests

Subject   Test 1   Test 2   Test 3   Test 4   Test 5
1         68.1     67.2     71.4     74.5     85.0*
2         72.1     71.7     77.3     77.6     83.2
3         72.8     73.1     76.4     78.4     81.5*
4         78.1     75.1     79.1     78.0     81.0
5         65.3     69.6     70.2     68.8     86.2*
6         69.7     68.4     67.5     70.1     90.1
7         72.5     60.2     71.9     77.2     82.3*
8         62.4     68.2     61.3     69.3     77.3
9         73.8     75.4     77.8     76.9     87.7*
10        69.7     70.2     72.3     75.1     87.0
11        58.3     61.4     61.5     62.0     81.6*
12        67.4     66.8     69.9     70.7     78.5
13        73.1     69.9     73.4     71.3     82.2*
14        73.6     71.0     74.5     79.6     91.0
15        70.0     75.2     74.7     76.1     88.6
Average   69.8     69.6     71.9     73.7     84.2

* auditory stimulus
Classification accuracies of between 82 and 88% were obtained for auditory P3 single-trial traditional paradigms (visual paradigms resulted in accuracies between 77 and 91%). For the four new experimental paradigms (i.e., two stimuli and one target), average accuracies of 70% and 73% were obtained for the auditory and visual cases respectively. The results were complemented by acceptable sensitivity (82%) and specificity (76%) scores for FES compatibility [13].

IV. CONCLUSIONS

By using the nominated three-stimulus paradigm, classification accuracies are obtained that may prove acceptable in a clinical environment for FES applications. Greater pre-processing capabilities and classification techniques need to be developed so as to enhance P3 waveform characterization, to combat the proportional decrease in P3 differentiation. The traditional P3 paradigm proved superior due to the nature of P3 generation, i.e., its increase in amplitude with decreased probability. Visual paradigms proved slightly superior in performance to auditory classification paradigms [4]. However, accuracies obtained from some auditory experiments allow for potential FES implementation [13] and indicated that the traditional button response to stimuli did not prove superior. The speed with which the algorithm may detect single-trial P3s is sufficient for the intended application, although the accuracies obtained using the proposed paradigms need to be improved. The lower accuracies result from increased target probability and hence increased task difficulty compared to the traditional paradigm [14].

ACKNOWLEDGMENT

This work was supported in part by the National Research Foundation (NRF) and the Medical Research Council (MRC) of South Africa.

REFERENCES

1. Crago P, Lan N, Veltink P et al. (1996) New control strategies for neuroprosthetic systems. J Rehabil Res Dev 33:158-172
2. Boord P, Barriskill A, Craig A et al. (2004) Brain-Computer Interface - FES Integration: Towards a Hands-free Neuroprosthesis Command System. INS 7:267-276
3. Levine S, Huggins J, BeMent S et al. (2000) A direct brain interface based on event-related potentials. IEEE T Rehabil Eng 8:180-185
4. Wolpaw J, Birbaumer N, McFarland D et al. (2002) Brain-computer interfaces for communication and control. Clin Neurophysiol 113:767-791
5. Bentley A, Andrew C, John L (2008) An Offline Auditory P300 Brain-Computer Interface Using Principal and Independent Component Analysis Techniques for Functional Electrical Stimulation Application. IEEE EMBS Conf. Proc., in press
6. Nijboer F, Furdea A, Gunst I et al. (2007) An auditory brain-computer interface (BCI). J Neurosci Methods 167:43-50
7. Hill N, Lal T, Bierig K et al. (2005) An Auditory Paradigm for Brain-Computer Interfaces. Advances in Neural Information Processing Systems 17:569-576
8. Sutton S, Braren M, Zubin J et al. (1965) Evoked-potential correlates of stimulus uncertainty. Science 150:1187-1188
9. Polich J, Criado J (2006) Neuropsychology and neuropharmacology of P3a and P3b. Int J Psychophysiol 60:172-185
10. Stone J (2004) Independent Component Analysis: A Tutorial Introduction. The MIT Press, Cambridge, Massachusetts
11. Xu N, Gao X, Hong B (2004) BCI Competition 2003 - Data set IIb: enhancing P300 wave detection using ICA-based subspace projections for BCI applications. IEEE T Bio-med Eng 1067-1072, DOI 10.1109/TBME.2004.826699
12. Luck S (2005) An Introduction to the Event-Related Potential Technique. The MIT Press, Cambridge, Massachusetts
13. Bentley A (2005) Design of a PC-based controller for Functional Electrical Stimulation (FES) of Voluntary Muscles. BSc. (Elec.) Eng. thesis, University of Cape Town, South Africa
14. Kok A (2001) On the utility of P3 amplitude as a measure of processing capacity. Psychophysiology 38:557-577

The corresponding author's address details are listed below:

Author: Alexander Bentley
Institute: University of Cape Town
Street: Anzio Road, Observatory, 7925
City: Cape Town
Country: South Africa
Email: [email protected]
An Electroencephalogram Signal based Triggering Circuit for controlling Hand Grasp in Neuroprosthetics

G. Karthikeyan 1, Debdoot Sheet 2 and M. Manjunatha 2

1 Department of Biomedical Engineering, SSN College of Engineering, Chennai, India
2 School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, India
Abstract — Quadriplegia is a serious problem for patients with neurological disorders. Functional Electrical Stimulation (FES) has been a very good rehabilitation technique to treat this condition and to help the patient lead a near-normal life by aiding him/her to move the limbs of the upper and lower extremities with less difficulty. Various techniques have been proposed to trigger the FES system. In this paper, we describe the design of a novel circuit used to trigger an FES device by a person's EEG signals, i.e., by thought. The project was divided into three modules. The first module was to design a proper interface between the electrodes placed on the scalp and the electronic system to be used as a trigger. The second module was to amplify the signal to a level high enough to drive the third module, which served as the classifier. The classifier part of the circuit was built out of commercially available ICs and external discrete components. Though some tolerance error was induced by the external components, it was minimal compared with the actual signal considered. The circuit was powered by a 9 V battery, and its only input was the thought-related EEG signals from the subject/patient. The circuit is power-efficient, with a wide operational range of ±3 V to ±18 V.

Keywords — Circuit synthesis, Electroencephalography, Electronic equipment, Filters, Instrument amplifiers, Integrated circuits, Logic design.
I. INTRODUCTION Quadriplegia is a medical condition in which all four limbs are paralyzed, greatly reducing the mobility of the affected person. In the case of upper extremity paralysis (the condition dealt with in this study), the affected person is unable to move his/her hands, resulting in a severe inability to grasp objects and perform normal tasks. Quadriplegia is either congenital or the result of a stroke or illness. Though there has been little progress so far in treating the congenital type of quadriplegia, recent developments in the field of Functional Electrical Stimulation (FES) have eased the treatment of the second kind of patients. Treatment of stroke-induced quadriplegia with an FES system is possible due to the fact that
although the Upper Motor Neurons (connecting the brain with the spinal cord) are damaged, the Lower Motor Neurons (connecting the spinal cord with the limbs) remain healthy. As a result, any stimulation applied to the nerves connecting the affected limb with the spinal cord stimulates the muscles of the limb and facilitates motion. Recent developments in the acquisition and analysis of Electroencephalogram (EEG) signals have shown that EEG signals are unique to each person and to each thought activity. This property of the EEG has been found suitable for use in Brain-Computer Interface (BCI) applications [1]. Most BCI applications use variations in EEG patterns to classify a person's thoughts and make an external device act according to the patterns recognized by the machine [2]. The EEG-based trigger circuit developed in this project acts on the same principle as a Brain-Computer Interface, detecting patterns in EEG signals using a circuit tailor-made for stimulation of the FES device. II. FUNCTIONAL ELECTRICAL STIMULATION: AN OVERVIEW A Functional Electrical Stimulator is a neuroprosthetic device whose main function is to activate the inactive and weak nerves of the upper/lower extremities of the body. Use of FES is fruitful in that the affected nerves are rejuvenated by continual use of the device, which helps restore movement to the patient. The affected portion of the patient is stimulated whenever movement is required. Most early FES devices were based on purely analog designs, which made the accuracy of the FES device unpredictable, but recent developments in digital technologies and the use of microcontrollers have made FES systems a reliable method of rehabilitation. The schematic of the device used here is shown in Fig. 1; it is controlled by a microcontroller, and the output waveform of the circuit is shaped so that the patient feels negligible muscle fatigue. Bio-potentials as a command for FES. Numerous studies have examined the feasibility of using EEG signals for control of FES. A significant result in this regard
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 691–693, 2009 www.springerlink.com
was given by Juul et al. (2000) [3], stating that there is a remarkable change in EEG patterns during the preparation of hand or leg movements. That experiment used different kinds of movement training to record the Movement Related Potentials (MRPs) for different kinds of movement attempts. Seven recording sites were used according to the 10-20 electrode placement system. The promising results of this study prompted us to use the C3 and C4 recording sites for tapping the beta waves of the EEG, which are the control signals used to trigger the FES.

Fig. 1 Block setup of FES

Fig. 3 Classification portion of the circuit

III. IDEA OF THE CIRCUIT

The circuit is divided into two parts: the first part amplifies the signal to the required level, and the second part classifies the amplified signals for triggering of the FES device. The block setup of the total circuit is given in Fig. 2 and Fig. 3.

Table 1 Gain of the various stages of the amplification portion of the circuit

Stage                            Gain
Preamplifier I                   2.08
Preamplifier II                  74.46
Non-inverting amplifier I        24.33
Non-inverting amplifier II       7.76
Net gain of circuit in Fig. 2    29358
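Since the stages are cascaded, the net gain quoted in Table 1 is simply the product of the individual stage gains. A quick check, using the rounded values as printed in the table:

```python
# Stage gains from Table 1 (rounded values as printed)
stage_gains = [2.08, 74.46, 24.33, 7.76]

net_gain = 1.0
for g in stage_gains:
    net_gain *= g  # cascaded amplifier stages multiply

# net_gain comes out near 2.9e4; the small gap to the quoted 29358
# is rounding in the individual stage gains
```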
Voltage detection and FES triggering using comparators and AND gates: A critical component of the circuit is the Frequency-to-Voltage (F-V) converter, which converts the various frequencies present in the EEG signal into proportional voltages. Because the output of the F-V converter is a voltage signal, this portion of the circuit acts as a classification precursor stage. The output of the F-V converter is fed into two comparators, one sensitive to voltages corresponding to frequencies greater than F1 and the other sensitive to frequencies less than F2. The outputs of these comparators are treated as logic levels (1s and 0s) and are given as inputs to an AND gate, whose output is the trigger signal for the FES device; the input/output logic of the comparators and AND gate is shown in Table 2. The output of the AND gate, a logic level of 1 or 0, is suitable as an input to the trigger switch of the FES device. Table 2 Logic levels of the comparators and AND gate for triggering the FES
Fig. 2 Amplifier and preprocessing setup of the circuit

The second portion of the circuit chooses and classifies the frequencies of interest, which facilitates triggering of the FES device. The output/gain calculation of the amplification portion of the circuit is shown in Table 1.
Input to AND gate from Comparator 1    Input to AND gate from Comparator 2    Output of the AND gate
0                                      0                                      0
0                                      1                                      0
1                                      0                                      0
1                                      1                                      1
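The comparator-and-AND-gate scheme described above amounts to band detection: the trigger fires only when the F-V converter output lies between the voltages corresponding to F1 and F2. A minimal sketch; the function and threshold names are illustrative, not taken from the paper:

```python
def fes_trigger(v_fv, v_f1, v_f2):
    """Band-detection trigger for the FES device (illustrative sketch).

    v_fv: output voltage of the F-V converter
    v_f1, v_f2: comparator thresholds for frequencies F1 and F2 (F1 < F2)
    """
    c1 = 1 if v_fv > v_f1 else 0  # comparator 1: frequency above F1
    c2 = 1 if v_fv < v_f2 else 0  # comparator 2: frequency below F2
    return c1 & c2                # AND gate output drives the trigger switch
```

The trigger is 1 only when both comparators report 1, matching the last row of the truth table.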
By recording a calibration curve for defined oxygen concentrations, sensor intensity can be translated into corresponding oxygen concentration values. The inset in Figure 3 shows the measured relationship between the relative intensity and five different DO concentrations, as used for calibration. The plot indicates linear Stern–Volmer behavior, which was constant for flow rates from 0.5 to 2 mL/min. Figure 4 shows the application of the oxygen measurement method by plotting the transverse oxygen concentration profile as a function of the width of the bioreactor. The profiles plotted in Figure 4(a) correspond to the inlet and outlet of the example shown in Figure 2. Dashed lines in the
I0/I = 1 + KSV[O2]    (1)

where KSV is the Stern-Volmer constant and [O2] is the oxygen concentration in solution. During fabrication it was observed that the intensity signal ratio I0/I100 of the sensor layer could be optimized to 6.1 by reducing the sensor film thickness to 0.6 μm during spin-coating. As shown in Figure 3, this ratio decreased by only 8% over a period of two weeks while covered with a layer of cell adhesion-promoting collagen, demonstrating its suitability for cell culture.
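In use, Eq. (1) is inverted: the measured intensity is converted back to a DO concentration via the calibrated Stern-Volmer constant. A minimal sketch; the KSV value below is illustrative, not the calibrated constant from the paper:

```python
def o2_concentration(i0, i, k_sv):
    """Invert the Stern-Volmer relation I0/I = 1 + K_SV*[O2] for [O2]."""
    return (i0 / i - 1.0) / k_sv

# Round-trip check with an illustrative K_SV (ppm^-1)
k_sv = 0.6
c_true = 8.6                      # ppm dissolved oxygen
i = 1.0 / (1.0 + k_sv * c_true)   # quenched intensity for I0 = 1
```

With a calibration curve such as the one in the inset of Figure 3, the same inversion maps each measured pixel intensity to a local DO value.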
Fig. 3 Plot of the sensor intensity ratio I0/I100 as a function of time in days for a PtOEPK/PS film covered by type I collagen. The inset shows the Stern-Volmer calibration curve for measurement of DO concentration using the same sensor film.
Fig. 4 Plot of the transverse oxygen concentration versus bioreactor width at the inlet (solid) and outlet (dash): a) For two parallel laminar flow streams of 8.6 and 0 ppm O2, and b) for three streams of 34, 8.6 and 0 ppm O2. In both cases DI water was used as medium.
IFMBE Proceedings Vol. 23
In-situ Optical Oxygen Sensing for Bio-artificial Liver Bioreactors
graph indicate the 0 and 8.6 ppm DO levels. In Figure 4(b) a third, central flow stream was added to the two existing ones via a third inlet connected to an additional gas exchanger. The DO level of this stream was produced by flowing industrial-grade oxygen through the exchanger and represents the upper limit of the attainable range (34 ppm). Values in between can be realized with an appropriate mixture of nitrogen and oxygen in the gas exchanger. A further characteristic of parallel laminar flow streams is the development of a diffusive boundary layer between streams of different oxygen concentration. Depending on the flow rate and residence time, oxygen from the oxygen-rich stream diffuses into those with a lower initial DO level. In the transverse concentration plot of Figure 4(a), for example, this is seen as a flattening of the initially stepwise transition from inlet to outlet. If undesirable, this effect can be reduced by increasing the flow rate; otherwise it provides a convenient means to determine the diffusion constant of oxygen in the perfusion medium [13]. The combination of multiple laminar streams with varying DO levels, as demonstrated here, has the potential for oxygen-dependent treatment of cells without the need for physically separated bioreactors. A similar setup has previously been used to selectively treat cellular microdomains with chemicals [14,15]. However, oxygen concentration, a further parameter with significant effect on cell behavior, was not controlled or monitored for the individual streams used. In contrast, our device enables similar experiments to be performed while simultaneously controlling the DO level of the individual flow streams, thereby increasing the validity of in-vitro experiments. Concerning BAL devices, the concept of transverse oxygen gradients has the potential to further our understanding of the effect of autocrine signaling on cell metabolic functions and zonation.
One such example is the inverse relationship between hormone gradients and those for oxygen, as observed in the liver [2]. Using parallel laminar flow streams, dynamic and simultaneous observation of the effect of different oxygen concentrations on cells located downstream will be possible in a single bioreactor, while at the same time keeping stream-to-stream cross-talk to a minimum.
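The flattening of the stepwise transition described above follows the classical one-dimensional solution for diffusion across an initially sharp concentration step. A sketch, assuming a textbook value of roughly 2e-9 m2/s for the diffusion constant of oxygen in water (an assumption, not a value from the paper):

```python
import math

def step_profile(y, t, d=2e-9, c_high=8.6, c_low=0.0):
    """DO concentration (ppm) at transverse position y (m, interface at y=0)
    a residence time t (s) after two streams meet: erf solution of the
    1-D diffusion equation for a step initial condition."""
    mid = 0.5 * (c_high + c_low)
    half = 0.5 * (c_high - c_low)
    return mid - half * math.erf(y / (2.0 * math.sqrt(d * t)))
```

At the interface the value is always the mean of the two streams; far from it the streams keep their inlet values, and a higher flow rate (shorter residence time) keeps the step sharper, as noted above.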
IV. CONCLUSIONS

We have shown the generation and detection of transverse gradients of DO inside a PDMS-based microfluidic device. The oxygen concentration of individual flow streams was controlled, visualized and measured using a polymer-based oxygen sensor layer. Multiple streams were combined in a single bioreactor chamber, demonstrating stable inter-stream interfaces. The presented chip provides a novel tool for the parallelization of DO concentration-dependent assays, BAL and tissue engineering applications.

ACKNOWLEDGMENT

The authors would like to thank Helen Devereux and Gary Turner for technical assistance.

REFERENCES

1. Semenza G L (2001) HIF-1, O2, and the 3 PHDs: How Animal Cells Signal Hypoxia to the Nucleus. Cell 107:1-3 DOI 10.1016/S0092-8674(01)00518-9
2. Jungermann K, Kietzmann T (2000) Oxygen: Modulator of metabolic zonation and disease of the liver. Hepatology 31:255-260 DOI 10.1002/hep.510310201
3. Tilles A W, Baskaran H, Roy P et al. (2001) Effects of oxygenation and flow on the viability and function of rat hepatocytes cocultured in a microchannel flat-plate bioreactor. Biotechnol Bioeng 73:379-389 DOI 10.1002/bit.1071
4. Roy P, Baskaran H, Tilles A W et al. (2001) Analysis of oxygen transport to hepatocytes in a flat-plate microchannel bioreactor. Ann Biomed Eng 29:947-955
5. Allen J W, Bhatia S N (2003) Formation of steady-state oxygen gradients in vitro - Application to liver zonation. Biotechnol Bioeng 82:253-262 DOI 10.1002/bit.10569
6. Allen J W, Khetani S R, Bhatia S N (2005) In vitro zonation and toxicity in a hepatocyte bioreactor. Toxicol Sci 84:110-119
7. Kane B J, Zinner M J, Yarmush M L et al. (2006) Liver-Specific Functional Studies in a Microfluidic Array of Primary Mammalian Hepatocytes. Anal Chem 78:4291-4298 DOI 10.1021/ac051856v
8. Park J, Bansal T, Pinelis M et al. (2006) A microsystem for sensing and patterning oxidative microgradients during cell culture. Lab Chip 6:611-622 DOI 10.1039/b516483d
9. Nock V, Blaikie R J, David T (2007) Microfluidics for Bioartificial Livers. N Z Med J 120:2-3
10. Nock V, Blaikie R J, David T (2007) Micro-patterning of polymer-based optical oxygen sensors for lab-on-chip applications. Proc SPIE 6799:67990Y-10 DOI 10.1117/12.759023
11. Nock V, Blaikie R J, David T (2008) Patterning, integration and characterisation of polymer optical oxygen sensors for microfluidic devices. Lab Chip 8:1300-1307 DOI 10.1039/b801879k
12. Atencia J, Beebe D J (2005) Controlled microfluidic interfaces. Nature 437:648-655 DOI 10.1038/nature04163
13. Nock V, Blaikie R J, David T (2008) Generation and Detection of Laminar Flow with Laterally-Varying Oxygen Concentration Levels. Proc uTAS, in press
14. Takayama S, Ostuni E, LeDuc P et al. (2001) Laminar flows: Subcellular positioning of small molecules. Nature 411:1016 DOI 10.1038/35082637
15. Takayama S, Ostuni E, LeDuc P et al. (2003) Selective Chemical Treatment of Cellular Microdomains Using Multiple Laminar Streams. Chem Biol 10:123-130

Author: Volker Nock
Institute: University of Canterbury
Street: Private Bag 4800
City: Christchurch
Country: New Zealand
Email: [email protected]
Quantitative and Indirect Qualitative Analysis Approach for Nanodiamond Using SEM Images and Raman Response

Niranjana S., B.S. Satyanarayana, U.C. Niranjan and Shounak De

Manipal Institute of Technology, Manipal, Karnataka, INDIA, 576 104

Abstract — In the era of nanotechnology, one of the materials at the forefront of medical applications is nanocarbon. Various forms of nanocarbon, including carbon nanotubes, nanostructured graphite, nanodiamond, nanowalls, nanocluster carbon, diamond-like carbon (DLC) and tetrahedral amorphous carbon (ta-C), are being studied for different biomedical applications, including vacuum-based X-ray sources, tribological coatings for joints or surgical tools, Micro/Nano Electro-Mechanical Systems (MEMS/NEMS) and biosensors. Nanodiamond grown by the Hot Filament Chemical Vapor Deposition (HFCVD) process under various CH4/H2 conditions exhibits varying morphological and compositional properties. Surface morphology was studied with SEM images, which provide visual information, and compositional variation was studied using the nondestructive and instantaneous Raman response. We report the mechanism of deposition and the influence of the deposition parameters on morphology and composition, and present an in-situ-applicable software approach for SEM image analysis. The images are first segmented to enhance the cluster regions on the substrate, and each individual cluster is labeled. A histogram is generated from the estimated area of each cluster and analyzed quantitatively. The combined Raman- and SEM-based analysis evaluates cluster dimensions and distribution, providing quantitative and indirect qualitative information for evaluating nanodiamond. Keywords — Nanodiamond, Quantitative analysis, SEM images, Raman response
I. INTRODUCTION

Miniaturized products have recently become increasingly dominant in every aspect of life. As per the ITRS (International Technology Roadmap for Semiconductors), the novel, self-aligned nanomaterials now being developed are the building blocks of future nanoelectronic devices [1-8]. Among the many nanomaterials, nanocarbon in its various manifestations is one of the most studied materials for nanotechnology applications. The forms of nanocarbon being studied include nanodiamond, single-walled/multi-walled carbon nanotubes (SWNT/MWNT), fullerenes (C60), nanohorns, nanowalls, nanowires, nanofibers, and nanocluster or nanostructured carbon. Growing areas of technology where these nanocarbons find application include nanoelectronics, vacuum nanoelectronics, sensors, biomedical applications, novel energy sources, interconnects in ICs, novel light, strong and even conducting composite materials, and flexible electronics [1-8].

Presented here is nanodiamond grown using the Hot Filament Chemical Vapor Deposition (HFCVD) process under various CH4/H2 conditions, together with its SEM- and Raman-response-based analysis. The morphological information derived from SEM includes nanostructure distribution and size, furnishing details for quantitative analysis. The study also provides an understanding of film quality: the uniformity of the size or distribution of the structures indirectly qualifies the thin films, and uniformly distributed films are preferred for applications such as sensors. Raman spectroscopy is a nondestructive and instantaneous technique for characterization of nanocarbon in nanotechnology. The Raman response, a unique signature of each nanostructure, is of interest for understanding bonding and dimensional details [9-12].

II. EXPERIMENT

Hot Filament CVD

The heavily n-doped silicon substrate surface was pretreated by polishing to increase diamond nucleation. Nanodiamond was deposited using Hot Filament Chemical Vapour Deposition (HFCVD) in a 1% methane-to-hydrogen (CH4/H2) atmosphere for two hours. The filament temperature during deposition ranged between 2200 and 2300 °C, and the substrate temperature between 725 °C and 925 °C. Nanocrystalline diamond thin films were grown with the CH4/H2 ratio varied from 0.5% to 3%. The films were further studied using Scanning Electron Microscope (SEM) images and their Raman response. The typical thickness of the grown films was 500 nm. A MATLAB-based automated morphology analysis tool was also developed for nanostructure size analysis.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 782–785, 2009 www.springerlink.com
III. RESULTS AND DISCUSSION

A. SEM-Based Analysis

The morphological and dimensional features of various nanocarbons, including nanodiamond, carbon nanotubes, carbon-based nanoclusters and nanowalls, were studied using Scanning Electron Microscope (SEM) images. Shown in Figure 1 is a typical SEM image of the nanodiamond considered for this study. The surface morphology of the various-sized nanostructures can be studied by visual inspection; however, an automated approach to morphological study is essential for in-situ applications. In this approach the nanostructure dimensions are estimated from the SEM image, and indirect quality analysis of the nanodiamond films is achieved with a histogram plot.

Fig. 1 The original SEM image of nanodiamond

Fig. 2 The segmented SEM image after enhancement

The image-based analysis approach comprises the following steps. First, the grey-scale image is preprocessed to eliminate noise pixels and enhance quality. The enhanced binary image is then segmented to obtain the regions of interest; Figure 2 shows the image after watershed segmentation. The area of each nanodiamond structure is estimated in software, and finally the area-based histogram is plotted for further analysis of the film. Figure 3 shows the area histogram of a nanodiamond film, with structure area along the x-axis and count of nanodiamond structures along the y-axis. The plot indicates the size distribution of nanodiamond structures across the film surface and provides the median size or size range.

Fig. 3 Histogram plot for the nanodiamond film: area of structures (x-axis, in μm2) versus count (y-axis)

B. Raman Response Analysis

The Raman responses of a wide variety of nanostructures, including carbon nanotubes, nanodiamond, nanowalls and nanoclusters, are shown in Figure 4. Each material clearly shows a different signature, demonstrating the promise of the Raman response as an instantaneous and nondestructive means of understanding nanomaterials.

Fig. 4 Raman response of various nanocarbon thin films (intensity in A.U. versus wave number in cm-1)
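The preprocessing, segmentation, labeling and area-estimation steps described for the SEM images can be sketched as follows. The paper's MATLAB tool uses watershed segmentation; this Python sketch substitutes a simple global threshold plus connected-component labeling, so it is an approximation of the pipeline, not the original implementation:

```python
import numpy as np
from scipy import ndimage as ndi

def cluster_areas(gray, thresh=128):
    """Return the pixel area of each bright cluster in a grayscale SEM image."""
    smoothed = ndi.median_filter(gray, size=3)   # noise-pixel elimination
    binary = smoothed > thresh                   # segment cluster regions
    labels, n = ndi.label(binary)                # label individual clusters
    return ndi.sum(binary, labels, index=np.arange(1, n + 1))

# The returned areas feed directly into the histogram step,
# e.g. np.histogram(cluster_areas(img))
```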
Fig. 5 Curve-fitted Raman response for a nanodiamond film (response in A.U. versus wave number in cm-1)
The visible Raman spectrum of nanocarbon shows two prominent features and some minor modulations. The prominent features are the G peak (graphite peak, around 1580 cm-1) and the D peak (disorder peak, around 1330 cm-1). In the case of nanostructured carbon nanowalls we see the narrowest G peak around 1580 cm-1, indicating well-formed nanowalls. The carbon nanotubes show both the G and D peaks like the nanowalls, but broader, indicating more amorphous-phase material and clusters of varying dimensions besides the carbon nanotubes. In the Raman response of room-temperature-grown nanocluster carbon, the G and D peaks are even broader than for nanotubes, indicating clusters of vastly varying dimensions together with amorphous-phase material. In the case of nanodiamond in particular, the visible Raman spectrum shows two prominent features: a narrow diamond peak around 1332 cm-1 and a broad peak around 1580 cm-1 (G peak), showing the presence of graphite-like and amorphous-phase material around the grain boundaries. Raman can be used to identify a wide range of material parameters or properties, including whether the material is amorphous, nanocrystalline or crystalline, the size of the clusters, the nature of bonding, graphite-like (sp2) or diamond-like (sp3), and the ratio (sp2/sp3) between the two types of bonding. The nanodiamond was analyzed using a curve-fitting approach; Figure 5 shows the curve-fitted nanodiamond Raman response. Features extracted from the curve fitting include the D peak, G peak, Id/Ig ratio, D-peak/G-peak ratio, G area, D area and some statistical values. A further effort was made to examine a possible correlation between the nanocarbon dimensions estimated from SEM images and the field-assisted electron emission properties of these films, by relating the FN slope from the field emission response to the dimension/size of the nanocarbon. The influence of temperature variation was also studied between 725 °C and 975 °C for a constant CH4/H2 ratio. With increasing temperature, the G peak position was observed to shift towards smaller wave numbers, indicating a more sp3-phase nanodiamond. The experimental data suggest that the temperature range 800-850 °C gives uniform cluster size and distribution. Figures 6 and 7 show relationships involving the Raman-derived parameter Id/Ig: Figure 6 indicates a possible relation between dimension and Id/Ig ratio, and Figure 7 plots the dependence of the Id/Ig ratio on the process parameter (CH4/H2 ratio). The results clearly indicate the potential of the Raman response for nanodiamond analysis.
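The curve-fitting step used to extract the D peak, G peak and Id/Ig ratio can be sketched as a two-peak fit. The Lorentzian line shape, widths and amplitudes below are illustrative assumptions, not the paper's actual fitting model, and the spectrum is synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, x0, w):
    return a * w**2 / ((x - x0)**2 + w**2)

def d_plus_g(x, ad, xd, wd, ag, xg, wg):
    # D band near 1332 cm^-1 plus G band near 1580 cm^-1
    return lorentzian(x, ad, xd, wd) + lorentzian(x, ag, xg, wg)

x = np.linspace(800, 1800, 500)                 # wave-number axis, cm^-1
y = d_plus_g(x, 1200, 1332, 15, 800, 1580, 60)  # synthetic nanodiamond-like spectrum
p0 = [1000, 1330, 20, 1000, 1580, 50]           # initial guesses at nominal positions
popt, _ = curve_fit(d_plus_g, x, y, p0=p0)

id_ig = popt[0] / popt[3]                       # Id/Ig intensity ratio
```

The fitted peak positions and the Id/Ig ratio are exactly the parameters plotted against size and CH4/H2 ratio in Figures 6 and 7.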
Fig. 6 Variation of nanodiamond size with the Id/Ig ratio calculated from the Raman response (x-axis: dimension in nm; y-axis: Id/Ig)
Fig. 7 CH4/H2 ratio versus Id/Ig ratio for nanodiamond films grown under various conditions (x-axis: CH4/H2; y-axis: Id/Ig)
IV. CONCLUSION

The study demonstrates a possible correlation between the dimensions (size) derived from SEM images of nanodiamond and the parameters derived from the Raman response. An automated SEM image analysis approach is also suggested for quantitative analysis of nanodiamond SEM images. Further, the parameters derived from the Raman response are used to evaluate or classify the nanocarbons indirectly. However, the Raman signature of nanodiamond material needs further detailed study to explain conductive or other electronic properties. The work underlines the importance of Raman study for the qualitative analysis of nanomaterials.
ACKNOWLEDGMENT

The authors would like to acknowledge the Innovation Center, Manipal University, Manipal, Karnataka, India, for providing continuous encouragement and support. Thanks are also extended to Dr. Ramesh Galigekere, HOD, Biomedical Dept., Dr. V H S Moorthy, E&C Dept., and the Biomedical Engineering Department for their support.
REFERENCES

1. M. Terones, A. Jorio, M. Endo, A.M. Rao, Y.A. Kim, T. Hayashi, H. Terrones, J.C. Charlier, G. Dresselhaus and M.S. Dresselhaus, "New Directions in Nanotube Science," Materials Today, Oct, pp 30-45, 2004.
2. Walt A. De Heer, "Nanotubes and Pursuit of Applications," MRS Bulletin, April, 2004.
3. Niraj Sinha, John T. W. Yeow, "Carbon Nanotubes for Biomedical Applications," IEEE Transactions on Nanobioscience, Vol. 4, No. 2, pp 180-195, June 2005.
4. N.S. Xu and S. Ejaz Huq, "Novel cold cathode materials and applications," Materials Science & Engineering R 48, 47-189, 2005.
5. Keat Ghee Ong, Kefeng Zeng, and Craig A. Grimes, "A Wireless, Passive Carbon Nanotube-Based Gas Sensor," IEEE Sensors Journal, Vol. 2, No. 2, pp 82-88, April 2002.
6. Seongjeen Kim, "CNT Sensors for Detecting Gases with Low Adsorption Energy by Ionization," Sensors 2006, 6, 503-513.
7. Niraj Sinha, Jiazhi Ma, and John T. W. Yeow, "Carbon Nanotube-Based Sensors," Journal of Nanoscience and Nanotechnology, Vol. 6, 573-590, 2006.
8. B.S. Satyanarayana, "Room Temperature Grown Nanocarbon based multilayered Field Emitter Cathodes for Vacuum Microelectronics," 11th IWPSD Conference Proceedings, ed. V. Kumar & P.K. Basu, p 278, 2001.
9. B.S. Satyanarayana, X.L. Peng, G. Adamapolous, J. Robertson, W.I. Milne, and T.W. Clyne, "Very Low Threshold Field Emission from Microcrystalline Diamond films grown using Hot Filament CVD Process," MRS Symp. Proc., Vol. 621, Q5.3.1-5.3.7, 2000.
10. B.S. Satyanarayana, "The influence of sp3 bonded carbon on field emission from nanostructured sp2 bonded carbon films," IEEE 18th International Vacuum Nanoelectronics Conference Technical Digest, Oxford, p 219, 2005.
11. M.A. Al-Khedher, C. Pezeshki, J.L. McHale and F.J. Knorr, "Quality classification via Raman identification and SEM analysis of carbon nanotube bundles using artificial neural networks," Nanotechnology, 18, 355703 (11pp), 2007.
12. S. Zhang, X.T. Zeng, H. Xie, P. Hing, "A phenomenological approach for the Id/Ig ratio and sp3 fraction of magnetron sputtered a-C films," Surface and Coatings Technology 123, 256-260, 2000.
Author: Niranjana S.
Institute: Manipal Institute of Technology
City: Manipal
Country: India
Email: [email protected]; [email protected]
Non-invasive Acquisition of Blood Pulse Using Magnetic Disturbance Technique

Chee Teck Phua1, Gaëlle Lissorgues2, Bruno Mercier2

1 School of Engineering (Electronics), Nanyang Polytechnic, Singapore
2 ESIEE – ESYCOM, University Paris Est, France
Abstract — The blood pulse is an important human physiological signal commonly used to understand an individual's physical health. Current methods of non-invasive blood pulse sensing require direct contact with or access to the human skin. As such, the performance of these devices tends to vary with time and is susceptible to human body fluids (e.g. blood, perspiration and skin oil) and environmental contaminants (e.g. mud, water, etc.). This paper proposes a novel method of non-invasive acquisition of the blood pulse using the disturbance created by blood flowing through a localized magnetic field. The proposed system employs a magnetic sensor and a small permanent magnet placed on an artery (major blood vessel) of the limbs. The magnetic field generated by the permanent magnet acts both as the biasing field for the sensor and as the uniform magnetic flux for blood disturbance. The system thus operates at room temperature, is reliable for continuous long-term acquisition, and is compact and convenient for daily use. The heart rate obtained from the proposed system, measured through non-conductive opaque fabric, is found to be highly correlated with commercially available cardiac monitoring systems such as ECG and pulse oximetry. Keywords — blood pulse, magnetic biasing, non-invasive, magnetic disturbance
I. INTRODUCTION

A. Context of the study

With the advancement of bioelectronics, portable health monitoring devices are becoming popular because they can provide continuous monitoring of an individual's health condition with ease of use and comfort. Portable health monitoring devices are increasingly required in places such as the home, the ambulance and the hospital, and in situations including military training and sports. Pulse rate is a measurement of the number of times the heart beats per minute. The heart pushes blood through the arteries, which expand and contract, allowing blood to flow. Heart or pulse rate is an important parameter for continuous monitoring because it is representative in assessing the physical health condition of an individual. Healthcare institutions such as hospitals and elderly care centers can use this information to monitor the health conditions of their patients. This is particularly important for
patients with cardiac arrhythmia, whose heart rate variability needs to be monitored closely for early detection of cardiac complications. Furthermore, pulse rate information from individuals subjected to mentally or physically stressful conditions may be used to trigger alerts for immediate attention when large changes in heart rate variability indicate potentially fatal events such as heat stroke, cardiac disorder and mental breakdown. Finally, it can also be important to monitor the pulse rate of personnel working in dangerous environments such as deep-sea conditions (divers), high temperatures (firefighters), and deep underground (coal miners).

B. State of the art

Current methods of heart or pulse rate acquisition can be classified into electrical [1-2], optical [3,7], microwave [4], acoustic [5,8,11], mechanical [6,9] or magnetic [10,12,13] means. The use of electrical probes to measure heart rate was discovered in 1872. Such a method normally requires good electrical contact with the human skin, and its performance is susceptible to human body fluids (e.g. blood, perspiration, skin oil) and environmental contaminants (e.g. mud, water, etc.). In most deployments, reference electrodes are usually required to make such a system less susceptible to electrical noise [2]. Recent research [1] has also reported a new method of ECG acquisition using contact-free capacitive sensing; however, this method is highly susceptible to noise and motion artefacts. Examples of commercially available heart rate acquisition systems include ECG monitoring devices, pulse rate monitoring watches and chest straps. All of these devices require electrical contact between the probes and the human skin. The use of optical sensors in pulse oximetry [3,7] is becoming popular due to its compact nature and the ability to concurrently acquire the SpO2 concentration in blood.
The basic operating principle of such a system requires a light source and a sensor, where the reflected light content is measured to determine the blood pulse and the SpO2 concentration in blood. Typically, such a system is worn on the tip of the fingers or toes, where the optical transmittance of the human skin is important to ensure signal quality. Microwave radar [4] has also been reported for non-invasive detection of heart rate. However, such methods are
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 786–789, 2009 www.springerlink.com
Non-invasive Acquisition of Blood Pulse Using Magnetic Disturbance Technique
highly susceptible to motion artifacts created by movement of the subject. Acoustic acquisition of heart sounds on the human chest has been possible since the invention of the stethoscope in 1816. Over the years this method has progressed with electronics and innovative signal acquisition systems, and is widely reported [5,8,11]. The biomagnetic signal of the heart was first detected in 1963 by Baule and McFee. The basic principle involves mapping the magnetic field around the thorax to acquire the heart's magnetic vector, and the result is commonly known as a magnetocardiogram (MCG). Such a method requires highly sensitive magnetic sensors such as SQUIDs (Superconducting QUantum Interference Devices) and is currently not readily deployed for clinical use, as ECG proves more reliable, convenient and less expensive. Another way to measure the blood pulse is based on the Hall effect [10,13]. Such systems apply a uni- or bi-directional magnetic field to the human body to create polarization of blood molecules. Electrodes placed on the human skin near the applied magnetic field pick up the potential difference created by the induced magnetic signal. Mechanical methods to acquire heart or pulse rate vary from the use of pressure cuffs to piezo-electric materials worn over the limbs or body [6,9]. Such mechanical methods require the application of localized pressure on the human subject and are not well suited for continuous signal acquisition. The limitations of each of the above methods motivated the research on a simple yet reliable magnetic means to acquire the blood pulse. Such a method supports the acquisition of blood pulse without the need for good electrical or optical contact and can be used over a prolonged period of time on the limbs. 
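All of the acquisition methods surveyed above ultimately reduce to detecting a periodic pulse in a sensor waveform. As an illustrative sketch only (not the authors' processing chain; the sampling rate, threshold and refractory period are assumptions chosen for demonstration), pulse rate can be estimated from upward threshold crossings:

```python
import math

def pulse_rate_bpm(samples, fs_hz, threshold=0.5, refractory_s=0.3):
    """Estimate pulse rate (beats per minute) from a sampled waveform.

    A beat is counted when the signal crosses `threshold` upward; a
    refractory period suppresses double-counting. Illustrative only:
    real pulse signals need filtering and adaptive thresholds.
    """
    beat_times = []
    refractory = int(refractory_s * fs_hz)
    last_beat = -refractory
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i] and i - last_beat >= refractory:
            beat_times.append(i / fs_hz)
            last_beat = i
    if len(beat_times) < 2:
        return None
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic half-wave-rectified 1.2 Hz (72 bpm) pulse train at 100 Hz sampling.
fs = 100
sig = [max(0.0, math.sin(2 * math.pi * 1.2 * t / fs)) for t in range(10 * fs)]
print(round(pulse_rate_bpm(sig, fs)))  # -> 72
```

The same beat-interval list would also be the starting point for the heart rate variability measures mentioned above.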
One of the objectives in developing a non-invasive magnetic-based blood pulse acquisition system is to use commercially available magnetic sensors in place of SQUIDs or electrodes. This allows the system to operate at room temperature and to be reliable, compact, inexpensive, and convenient for daily use. II. EXPERIMENTAL SET-UP The experiment on blood pulse acquisition is based on placing a magnetic field in the vicinity of a major artery; the blood flowing through the field disturbs it, creating a magnetic disturbance. This disturbance is acquired using a magnetic sensor operating at room temperature, as shown in Figure 1. In this experiment, the variation of the magnetic field is termed the Modulated Magnetic Signature of Blood (MMSB).
The relative position between the magnet (1), sensor (2) and the artery (major blood vessel) was varied, and the final measurement set-up is shown in Figure 2, where the sensor output was sufficiently amplified and then connected to an oscilloscope. The configurations of the sensor on the limbs are illustrated in Figure 3. To achieve the objectives of small size for portability and operation at room temperature, the uniform magnetic field is created by a button-sized permanent magnet (3 mm diameter) with a magnetic strength of 1000 Gauss. The magnet is placed over a major artery on the limbs with a magnetic sensor in close proximity. The measurements depend on several aspects: the distance between the sensor and the magnet (approximately 15 mm) providing a uniform magnetic field that ensures proper biasing, a magnet strength that does not saturate the sensor, and appropriate penetration of the magnetic field into the skin tissues. The magnetic sensor has the typical performance characteristics shown in Figure 4. Magnetic biasing is illustrated in Figure 4, where the sensitivity of the sensor is improved by operating it in the linear region. As such, the sensor is able to amplify any minute changes to the magnetic field due to the flow of pulsatile blood in the uniform magnetic field.
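The trade-off between proper biasing and sensor saturation can be illustrated with a point-dipole estimate of how quickly the field of a small magnet falls off with distance. This is a hedged back-of-envelope sketch, not the authors' field model; the effective magnet radius and surface field used below are illustrative assumptions:

```python
def dipole_field_mT(b_surface_mT, magnet_radius_mm, distance_mm):
    """Axial field of a small magnet treated as a point dipole.

    Far-field cube-law approximation only (valid for distance much
    larger than the magnet radius). The surface field and effective
    radius are illustrative assumptions, not measured values from
    the experiment.
    """
    return b_surface_mT * (magnet_radius_mm / distance_mm) ** 3

# 1000 Gauss = 100 mT assumed surface field, 3 mm diameter magnet,
# sensor placed 15 mm away as in the described set-up.
print(dipole_field_mT(100.0, 1.5, 15.0))  # -> 0.1 (mT, i.e. 1 Gauss)
```

The cube-law falloff is why both the magnet strength and the sensor-magnet distance must be tuned together: close enough for a usable bias point, far enough not to saturate the sensor.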
Figure 1. Cross-sectional view of the experimental setup to acquire MMSB
(Figure 2 blocks: sensor and magnet on the limbs of the human subject → amplification electronics → oscilloscope showing the MMSB waveform and computer for data acquisition and post-processing; a commercially available pulse oximeter on the subject's finger provides readings for correlation with the MMSB-derived pulse rate.)
Figure 2. Block diagram of experimental setup to acquire MMSB with correlation of results using a commercially available pulse oximeter.
IFMBE Proceedings Vol. 23
Chee Teck Phua, Gaëlle Lissorgues, Bruno Mercier
Figure 3. Illustrations of the relative position of the magnet (1), sensor (2) and a major blood vessel
measured are found to be within ±5% of the readings obtained from a pulse oximeter. A typical waveform acquired from the sensor in this setup is shown in Figure 5, where it can be observed that it is highly periodic with a period matching the subject's heart rate. The waveform is also observed to correlate strongly with the MCG reported in [10], as illustrated in Figure 6. In addition, on the basis of the magnetic disturbance of pulsatile blood flow in a uniform magnetic field, MMSB can also be used to describe the activities of the heart, as illustrated in Figure 7. For example, an increase in the measured magnetic disturbance can only be created by an increase in blood flow due to the compression of the heart ventricles. As the ventricles compress to their maximum, the peak of the waveform is reached. Without further force from the heart on the blood in the artery, the blood flow rate reduces, as shown by the decrease in amplitude of the waveform. With the relaxation of the ventricles, a back-flow of blood is present in the artery. Finally, the atria compress, resulting in a small forward flow of blood, shown as the second peak detected in MMSB.
Figure 4. Typical sensor response with illustration of magnetic biasing to increase its sensitivity
Figure 5. Waveform captured from the sensor output using oscilloscope
III. RESULTS AND DISCUSSIONS The experiments to acquire MMSB on the wrist were done under laboratory conditions with subjects seated and their arms resting on a table. The magnet is secured on the wrist using non-magnetic adhesive tape while the sensor is positioned on the artery, near the magnet. A similar experimental setup to acquire MMSB on the heel was used with subjects seated and their heels resting on the floor. The signal acquired from the sensor on each subject was taken in at least two separate measurements and found to be repeatable. In addition, these experiments have been repeated on fifty different subjects, and the pulse rates
Figure 6. Simultaneous plots of the magnitude curves of the EHV (dashed curve) and the MHV (solid curve) during the QRS complex [10]
Figure 7. Illustration of MMSB and the related activities of the heart (ventricular compression, atrial compression and relaxation, ventricular relaxation) IV. CONCLUSIONS This experiment demonstrated the concept of magnetic disturbance (MMSB) created by pulsatile blood flowing in a uniform magnetic field. Such magnetic disturbance can be acquired using a button-sized permanent magnet and a commercially available magnetic sensor. The waveform obtained is highly correlated with the activities of the heart. It is conclusive that MMSB exists and has the advantage of being independent of the contact surface, which existing heart or pulse rate acquisition systems lack. Further work will focus on modelling the magnetic modulation by blood to optimise the system sensitivity. This should lead to a reduction in the overall size of the sensing module (i.e. sensor and magnet).
ACKNOWLEDGMENT The authors would like to thank Nanyang Polytechnic of Singapore for the opportunity to work on this development. In particular, the authors would like to express their gratitude to the School of Engineering (Electronics), Nanyang Polytechnic (Singapore) for the usage of facilities that supported this work.
REFERENCES
[1] Akinori Ueno, Yasunao Akabane, Tsuyoshi Kato, Hiroshi Hoshino, Sachiyo Kataoka, Yoji Ishiyama (2007) Capacitive Sensing of Electrocardiographic Potential Through Cloth from the Dorsal Surface of the Body in a Supine Position: A Preliminary Study, IEEE Transactions on Biomedical Engineering, Vol. 54, No. 4, April 2007 [2] S. Bowbrick, A. N. Borg (2006) ECG Complete, Edinburgh/New York, Churchill Livingstone [3] S. M. Burns (2006) AACN Protocols for Practice: Noninvasive Monitoring, Jones and Bartlett Publishers [4] Shuhei Yamada, Mingqi Chen, Victor Lubecke (2006) Sub-uW Signal Power Doppler Radar Heart Rate Detection, Proceedings of Asia-Pacific Microwave Conference [5] G. Amit, N. Gavriely, J. Lessick, N. Intrator (2005) Automatic extraction of physiological features from vibro-acoustic heart signals: correlation with echo-doppler, Computers in Cardiology, September 25-28, pp 299-302 [6] J.L. Jacobs, P. Embree, M. Glei, S. Christensen, P.K. Sullivan (2004) Characterization of a Novel Heart and Respiratory Rate Sensor, Proceedings of the 26th Annual International Conference of the IEEE EMBS [7] M.N. Ericson, E.L. Ibey, G.L. Cote, J.S. Baba, J.B. Dixon (2002) In vivo application of a minimally invasive oximetry based perfusion sensor, Proceedings of the Second Joint EMBS/BMES Conference [8] Luis Torres-Pereira, Cala Torres-Pereira, Carlos Couto (1997) A Noninvasive Telemetric Heart Rate Monitoring System Based on Phonocardiography, ISIE'97 [9] J. Kerola, V. Kontra, R. Sepponen (1996) Non-invasive blood pressure data acquisition employing pulse transit time detection, Engineering in Medicine and Biology Society, Volume 3, 31 Oct-3 Nov, pp 1308-1309 [10] J. Malmivuo, R. Plonsey (1995) Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields, New York, Oxford University Press [11] Yasuaki Noguchi, Hideyuki Mamune, Suguru Sugimoto, Jun Yoshida, Hidenori Sasa, Hisaaki Kobayashi, Mitsunao Kobayashi (1994) Measurement characteristics of the ultrasound heart rate monitor, Proceedings of the 16th Annual International Conference of the IEEE EMBS, Vol. 1, 3-6 Nov 1994, pp 670-671 [12] J.R. Singer (1980) Blood Flow Measurements by NMR of the Intact Body, IEEE Transactions on Nuclear Science, Vol. NS-27, No. 3, June 1980 [13] Hiroshi Kanai, Eiki Yamano, Kiyoshi Nakayama, Naoshige Kawamura, Hiroshi Furuhata (1974) Transcutaneous Blood Flow Measurement by Electromagnetic Induction, IEEE Transactions on Biomedical Engineering, Vol. BME-21, No. 2, Mar 1974
Microfabrication of high-density microelectrode arrays for in vitro applications Lionel Rousseau1,2, Gaëlle Lissorgues1,2, Fabrice Verjus3, Blaise Yvert4 1
ESIEE, 2 boulevard Blaise Pascal – 93162 Noisy le Grand cedex, France, 2 ESYCOM, Université Paris-EST, Marne-La-Vallée, France, 3 NXP Caen, France, 4 CNIC CNRS, Av. des Facultés, F-33402 Talence, France Abstract — Micro Electrode Arrays (MEAs) offer an elegant way to probe the neuronal activity distributed over large populations of neurons either in vitro or in vivo. They also give the possibility to deliver specific electrical stimulations to neuronal networks. This paper presents the fabrication of different kinds of 3D electrode arrays based on silicon micromachining: dense 3D arrays with high aspect ratios (up to 1024 probes with a pitch of 50 μm and a height of 80 μm) and 3D probes enabling recording within the depth of the tissues. Stimulation is also possible with these systems, and we have developed a specific MEA to deliver a focal stimulation in the tissue. The 3D-shaped microelectrodes are necessary to achieve better contacts with the neural tissue, the tip of each needle being the recording site. First, we will detail a new technology based on the Deep Reactive Ion Etching technique (DRIE), which offers the possibility to manufacture various shapes of electrodes on silicon. Second, alternative geometries with several contacts on each needle will be described, leading to truly 3D probes scanning within the depth of the tissues. Such geometry is based on the combination of DRIE and standard etching techniques. Experiments currently performed have shown that both kinds of MEAs present a noise level around 10 μV, in the same range as available commercial MEAs. In vitro applications will be presented concerning the study of the whole spinal cord of embryonic mouse, and we conceived a new configuration to reach a focal stimulation for in vitro and in vivo applications. Keywords — Micro Electrode Arrays, microfabrication, DRIE
electrodes for in vitro systems are usually shaped as 3D needles, at the tip of which is the recording site. II. MEA FABRICATION A. Standard MEAs Classically, isotropic etching [1,2], a very simple process, is used to etch silicon or glass substrates to fabricate 3D electrodes (Figure 1). Either wet etching (hydrofluoric acid, KOH) or dry etching (plasma) is possible. The major problem of isotropic etching is the large under-etch beneath the protection layer. In isotropic etching, the substrate is etched at the same speed in all directions, and the under-etch is problematic when fabricating narrow structures. To obtain a micro needle after isotropic etching, we protect the substrate with circular protective layers; these circles correspond to the requested diameter of the micro needle. However, the drawback of such isotropic etching is a dimensioning constraint between the electrode height and the electrode pitch: the pitch cannot be smaller than twice the height. For example, electrodes 80 μm high impose a pitch greater than 160 μm. The fabrication of very dense 3D electrode arrays with high aspect ratios is therefore impossible with the standard technique.
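The dimensioning constraint above is simple enough to state as a one-line check; the helper name below is ours, not from the paper:

```python
def min_pitch_isotropic_um(electrode_height_um):
    """Minimum electrode pitch imposed by isotropic etching.

    The under-etch advances laterally as fast as it advances
    vertically, so neighbouring needles merge unless the pitch is
    at least twice the electrode height.
    """
    return 2 * electrode_height_um

# The paper's example: 80 um tall electrodes force a pitch of at least
# 160 um, far coarser than the 50 um pitch later achieved with DRIE.
print(min_pitch_isotropic_um(80))  # -> 160
```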
I. INTRODUCTION Although neuroscience has already progressed in the knowledge of the neuronal system with the development of medical imaging (such as CT scanners, MRI and nuclear medicine), these imaging systems give only a global view of neuronal operation. Micro Electrode Arrays (MEAs), by contrast, offer an elegant way to record from large neuronal networks and follow cellular information, providing a means to record the activity of many cells simultaneously. Such techniques use arrays of microelectrodes placed in contact with the neural tissue to probe neuronal electrical activity at several sites simultaneously. In order to be as close to the neurons as possible, micro-
DRIE Isotropic etching
Fig.1. Classical 3D MEAs Process
B. Dense MEAs DRIE (Deep Reactive Ion Etching) is an anisotropic etching process; the classical process used to obtain vertical sidewalls is the Bosch process [3].
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 790–793, 2009 www.springerlink.com
To achieve high aspect ratios, etching and passivation steps are alternated repeatedly (see Figure 2). This process is composed of two steps performed alternately: deposition and etching. The sidewall passivation is obtained by depositing a Teflon-like film on the silicon surface, which prevents lateral etching. In plasma etching, the ions are oriented vertically towards the substrate and preferentially etch the bottom of the structures. Consequently, the ions first etch the Teflon-like layer at the bottom before they etch the silicon, leaving the Teflon-like protection on the sidewalls.
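The alternation described above can be sketched as a loop over deposition/etch cycles. The per-cycle etch depth below is an illustrative assumption, not a figure reported in the paper:

```python
def bosch_depth_um(cycles, etch_per_cycle_um=0.8):
    """Total trench depth after a given number of Bosch cycles.

    Each cycle first deposits a conformal Teflon-like passivation film,
    then a directional plasma etch clears the film at the trench bottom
    and removes a fixed increment of silicon while the sidewalls stay
    protected. The 0.8 um/cycle default is an assumed, illustrative
    value.
    """
    depth = 0.0
    for _ in range(cycles):
        # deposition step: conformal film, no change in depth
        # etch step: ions clear the bottom film, then etch the silicon
        depth += etch_per_cycle_um
    return depth

# e.g. 100 cycles at 0.8 um/cycle would reach the 80 um electrode height
print(round(bosch_depth_um(100), 6))  # -> 80.0
```

The sidewall scalloping characteristic of the Bosch process comes from this same per-cycle granularity: each etch increment leaves a small lateral bite before the next passivation step.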
We obtained dense MEAs with 64, 256 or 1024 electrodes [5], having a height of 80 μm and a pitch of 50 μm (figure 3). This process also offers the possibility to manufacture various shapes of electrodes on silicon, as shown in figure 4.
DRIE Anisotropic etching
DRIE Isotropic and Anisotropic etching alternated
Fig.4. Various shapes of 3D electrodes processed by DRIE
C. 3D needles used in MEAs
Fig.2. DRIE process adapted to dense MEA fabrication
To overcome the limitation of isotropic etching in achieving micro needles, we developed a new technology based on the DRIE technique [4]. Combining anisotropic and isotropic etching gives us the opportunity to reduce the diameter of the micro needle base.
The neuronal activity is not distributed on one level but in 3D within the volume of the tissues, i.e. at different depths, as illustrated in figure 5. To achieve 3D recording, we have fabricated a comb with 4 shafts, each carrying 4 electrodes.
Fig.5. Principle of 3D recording

Fig.3. An example of a 256 electrode array a) and its position on the whole spinal cord of embryonic mouse b)
The fabrication process is again based on DRIE, used to release long silicon needles with several recording sites on each, as shown in Figure 6 [6]. To manufacture this comb shaft, we started with an oxidation step on a silicon wafer (500 nm). A metallic gold or platinum layer is sputtered to fabricate the microelectrodes; such a noble metal is necessary to prevent degradation of the metal in contact with tissues and physiological liquids. A PECVD (Plasma Enhanced Chemical Vapor Deposition) silicon nitride layer is deposited to insulate the leads on the comb shaft from the liquid environment. A top-side DRIE is done to define the comb shafts, and a second, bottom-side DRIE releases the structures.
obtain a more efficient, i.e. focal, stimulation [9]. The basic idea was to add a ground surface surrounding the electrodes (either a plane or a grid), allowing the whole current delivered by the stimulation electrode to return directly through this ground surface (figure 8). The induced change in the fabrication technology of the MEA was the addition of one masking level, on which a second metallic (Pt) layer is deposited above the passivation layer. The Pt layer can be patterned with a lift-off process. The ground surface was designed as a grid instead of a plane in order to ensure enough transparency of the MEA.
Fig.6. Fabrication process of 3D needles: a) process steps; b) SEM and top photographs of fabricated samples
The prototypes are mounted on a PCB support and connected to a specific acquisition system. The first tests have shown that the noise level is around 10 μV.
Fig.8. Part of a 4×15 MEA with a grid-like ground surface surrounding all the electrodes
III. IN VITRO MEASUREMENTS All the presented electrode arrays have been tested by the CNIC laboratory on the whole spinal cord of embryonic mouse (figure 9) and showed performance corresponding to commercially available MEAs: they present a noise level around 10 μV. An example of the improvement obtained during stimulation with the ground surface configuration is illustrated in figure 10.
Fig.7. Comb Shaft Mounted on PCB for testing
D. Optimized design for stimulation Different studies showed that the activation of single neurons may impact the activity of large populations at the network level [7, 8]. Therefore the precise activation of small groups of neurons is required during stimulation with MEAs. Indeed, each electrode of such an array should act as an independent “stimulation pixel”, only influencing cells in its close vicinity. Thus an optimized design was elaborated to
Fig.9. Map of Mouse spinal cord activities (LFP)
ACKNOWLEDGMENT This work was supported by the French Ministry of Technology (Neurocom RMNT project, and MEA3D ANR Blanc). In particular, the authors would like to express their gratitude to SMM-ESIEE for the usage of their clean room facilities.
Fig.10. Comparison of field focality between a classical monopolar configuration (A) and the proposed ground surface configuration (B). The potential field is recorded in response to a biphasic stimulation on 3 electrodes located at 3 different distances from the stimulation electrode (150, 1050, and 8250 μm).

IV. CONCLUSION In conclusion, we presented the fabrication of different kinds of MEA: very dense 3D microelectrode arrays with high aspect ratios using a specific DRIE process, and truly 3D needles to access the volume of the tissues and scan within their depth. We also proposed a new electrode configuration that appears to ensure a good compromise between potential field focality and isotropy, and which moreover requires few additional steps in the fabrication process. Such a prototype was micro-fabricated, showing that the field focality could be improved.

REFERENCES
1. Heuschkel MO, Fejtl M, Raggenbass M, Bertrand D, Renaud P, A three-dimensional multi-electrode array for multi-site stimulation and recording in acute brain slices. J Neurosci Methods 114, pp.135-148, 2002.
2. N. Wilke et al., Fabrication and characterisation of microneedle electrode arrays using wet etch technologies, Proc. EMN 2004.
3. F. Laermer et al., Bosch Deep Silicon Etching: Improving uniformity and etch rate for advanced MEMS applications, IEEE 1999.
4. F. Marty et al., Advanced silicon etching techniques based on deep reactive ion etching for silicon HARMs and 3D micro- and nanostructures, Proc. EMN 2004.
5. L. Rousseau et al., BioMEA: A 256-channel MEA system with integrated electronics, Neurosciences 2006.
6. S. Kisban et al., Microprobe Array with Low Impedance Electrodes and Highly Flexible Polyimide Cables for Acute Neural Recording, IEEE EMBS 2007.
7. Huber, D., L. Petreanu, N. Ghitani, S. Ranade, T. Hromadka, Z. Mainen, and K. Svoboda, Sparse optical microstimulation in barrel cortex drives learned behaviour in freely moving mice. Nature 451, pp.61-64, 2008.
8. Houweling, A. R. and M. Brecht, Behavioural report of single neuron stimulation in somatosensory cortex. Nature 451, pp.65-68, 2008.
9. Joucla, Rousseau, Yvert, "Matrices microelectrodes", French patent application No. 07 07369, 22 October 2007.
Author: Lissorgues Gaëlle
Institute: ESIEE - ESYCOM
Street: BP99 Cité Descartes
City: Noisy-le-Grand 93162
Country: France
Email: [email protected]
A MEMS-based Impedance Pump Based on a Magnetic Diaphragm C.Y. Lee1, Z.H. Chen2, C.Y. Wen3, L.M. Fu1, H.T. Chang3, R.H. Ma4 1
Department of Materials Engineering, National Pingtung University of Science and Technology, Taiwan 2 Department of Mechanical and Automation Engineering, Da-Yeh University, Taiwan 3 Department of Aeronautics and Astronautics, National Cheng-Kung University, Taiwan 4 Department of Mechanical Engineering, R.O.C. Military Academy, Taiwan
Abstract — In realizing Lab-on-a-Chip systems, micro pumps play an essential role in manipulating small, precise volumes of solution and driving them through the various components of the micro chip. The current study proposes a micro pump comprising four major components, namely a lower glass substrate containing a copper micro coil, a microchannel, an upper glass cover plate, and a PDMS-based magnetic diaphragm. A Co-Ni magnet is electroplated on the PDMS diaphragm with sufficient thickness to produce a magnetic force strong enough to achieve the required diaphragm deflection. When a current is passed through the micro coil, an electromagnetic force is established between the coil and the magnet on the diaphragm. The resulting deflection of the PDMS diaphragm creates an acoustic impedance mismatch within the microchannel, which results in a net flow. The performance of the micro pump is characterized experimentally. A deflection of 30 μm is obtained by supplying the micro coil with an input current of 0.6 A, and results in a flow rate of 1.5 μl/sec when the PDMS membrane is driven at an actuating frequency of 240 Hz. Keywords — electroplating, impedance pump, magnetic diaphragm, micro coil, micro pump
I. INTRODUCTION The rapid advances achieved in micro-electromechanical systems (MEMS) techniques over the past decade have enabled the development of a wide variety of microfluidic devices for use in the industrial, chemical, biological and medical fields. Typically, these devices are designed to perform specific functions such as sample injection, cell sorting and counting, polymerase chain reaction (PCR), species mixing, and so forth. As the characteristic size of such devices has decreased, researchers have demonstrated the feasibility of integrating two or more micro devices on a single chip to construct so-called micro-total-analysis systems (μ-TAS) capable of performing a complete biochemical assay of solutions. In realizing such systems, micro pumps play an essential role in manipulating small, precise volumes of solution and driving them through the various components of the micro chips. Broadly speaking, micro pumps can be classified as either mechanical or non-mechanical, depending upon their
mode of actuation. Non-mechanical micro pumps are typically actuated using electrohydrodynamic, magnetohydrodynamic or electroosmotic techniques. However, such devices are not fully compatible with many biological systems and suffer from a number of practical problems such as electrolytic bubble formation (in the magnetohydrodynamic type) [1] or a solution-dependent flow response (electroosmotic type) [2]. Mechanical micro pumps can be classified as either reciprocating (i.e. diaphragm-based) or peristaltic. Most commercially available mechanical micro pumps tend to be of the former type since they are generally more easily controlled, more reliable and more effective than their peristaltic counterparts. Furthermore, reciprocating micro pumps resolve many of the problems associated with non-mechanical pumps and are easily integrated with other micro devices to construct large-scale microfluidic networks. Gong et al. [3] developed reciprocating micro pumps incorporating a pressure chamber and a system of mechanically operated inlet and outlet passive check valves. While the results showed that the pump successfully drove fluids, the use of mechanical valves rendered the pump prone to clogging and to leakage at high driving pressures. In an attempt to resolve these problems, researchers have proposed a variety of valveless micro pump schemes based on peristaltic effects [4,5] or the use of reciprocating diaphragms and integrated nozzles/diffusers [6,7]. Some researchers have also presented impedance-based micro pumps comprising an elastic section connected at either end to a rigid body. In such pumps, the elastic section is compressed asymmetrically to create an acoustic impedance mismatch within the microchannel, which prompts a net flow through the pump. Rinderknecht et al. [8] proposed a novel valveless, substrate-free impedance-based micro pump driven by an electromagnetic actuating force. 
The performance of the pump was found to be highly sensitive to the waveform, offset, amplitude and duty cycle of the excitation force. Wen et al. [9] presented a planar valveless micro impedance pump in which a PZT actuator was used to drive a thin Ni diaphragm at its resonance frequency of 34.7 kHz. The resulting large-scale displacements of the diaphragm created a significant driving pressure and yielded a substantial net flow. In their study, the pump was composed of an elastic section con-
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 794–798, 2009 www.springerlink.com
nected at the ends to rigid sections, and a mismatch in acoustic impedance was used to drive fluid flow. By periodically compressing the elastic section with the PZT actuator at one asymmetric position between the ends, traveling waves emitted from the compression combined with waves reflected at the impedance-mismatched positions and generated a typically pulsatile flow. It was also found that a flow reversal occurred in some ranges of actuating frequency. However, the device required a high actuating voltage and therefore consumed excessive power. Reviewing the literature, it is found that many alternative actuation mechanisms have been proposed for micro pumps, including piezoelectric [9], thermopneumatic [10], electrostatic [11], electromagnetic [12] and others [13]. Of these various methods, electromagnetic actuation has a number of major advantages, including an extended working range, a rapid response time and a low actuating voltage. These characteristics render electromagnetic actuation an appealing choice for applications in which large diaphragm deflections and straightforward structural integration are required. Among them, Lee et al. [12] developed a micro pump comprising a micro coil and a permanent magnet mounted on a PDMS diaphragm. When a current was passed through the micro coil, an electromagnetic force was established between the coil and the magnet. The resulting deflection of the PDMS diaphragm created an acoustic impedance mismatch within the microchannel, which resulted in a net flow. Accordingly, the current study develops an impedance-based micro pump utilizing an electromagnetic actuation technique. The major components of the device include a lower glass substrate containing an electroplated planar copper micro coil, a glass microchannel, a glass cover plate and a PDMS diaphragm with a magnet electroplated on its upper surface. 
In the pumping operation, fluid is driven through the pump by applying a current to the micro coil such that an electromagnetic force is established between the coil and the magnet, deflecting the PDMS diaphragm. The performance of the pump is evaluated experimentally. It is shown that the maximum diaphragm deflection is approximately 30 μm and is attained using an actuating current of 0.6 A. Under these operating conditions, a flow rate of 1.5 μl/sec can be achieved by driving the diaphragm at a frequency of 240 Hz.
15 mm × 2.5 mm (length × width × height), while the microchannel measures 11.35 mm × 4 mm × 50 μm (length × width × height). The flow rate of the impedance pump is determined by the volume change induced in the microchannel each time the diaphragm deflects and by the frequency at which the diaphragm is actuated [9]. Therefore, in theory, the flow rate can be enhanced by increasing the stroke volume of the diaphragm (i.e. increasing its displacement) or by increasing the excitation frequency.
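The stated relation (flow rate = stroke volume × actuation frequency) can be checked against the reported figures. This first-order sketch is ours, not the authors' model, and the implied stroke volume of about 6.25 nl per cycle is a back-of-envelope inference from the reported flow rate, not a measured value:

```python
def flow_rate_ul_per_s(stroke_volume_nl, frequency_hz):
    """Net flow rate from per-stroke displaced volume and drive frequency.

    First-order model only: it assumes every deflection cycle transfers
    the full stroke volume, ignoring the frequency-dependent impedance
    effects that the paper's measurements capture.
    """
    return stroke_volume_nl * frequency_hz / 1000.0  # nl/s -> ul/s

# Back-of-envelope: the reported 1.5 ul/s at 240 Hz implies a stroke
# volume of roughly 1.5 ul / 240 = 6.25 nl per deflection cycle.
print(flow_rate_ul_per_s(6.25, 240))  # -> 1.5
```

This also makes the two enhancement routes named above explicit: the output scales linearly with either argument.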
Fig. 1 Schematic illustrations of valveless micro impedance pump III. FABRICATION
(a) 1.5 μm copper seed layer
(b) 5 μm polyimide pattern
(c) Electroplating Cu 5 μm
(e) Electroplating Cu and lift-off PR
(f) Polyimide coating
Fig. 2 Major steps in micro coil fabrication process. II. DESIGN As shown in Fig. 1, the micro pump comprises four basic layers, namely a lower glass substrate containing a planar micro coil, a microfluidic channel, a glass cover plate, and a PDMS diaphragm with a magnet electroplated on its upper surface. The pump body has overall dimensions of 26 mm ×
_______________________________________________________________
The micro pump developed in this study was fabricated using photolithography, vacuum evaporation, electroplating and wet etching microfabrication techniques. The fabrication process involved the following basic procedures: (1) electroplating the micro coil on the lower glass substrate (Fig. 2), (2) etching the microchannel configuration into a second glass substrate, (3) bonding the lower glass substrate
IFMBE Proceedings Vol. 23
C.Y. Lee, Z.H. Chen, C.Y. Wen, L.M. Fu, H.T. Chang, R.H. Ma
and the etched substrate, (4) fabricating the PDMS diaphragm, (5) electroplating the magnet of Co-Ni alloy on the PDMS diaphragm, (6) attaching the PDMS diaphragm to an upper glass substrate, and (7) bonding the upper and lower substrate layers to form a sealed microchannel (Fig. 3).
Fig. 3 Major steps in the upper plate fabrication process: glass substrate cleaning; PR coating; magnet electroplating and PDMS lift-off; Ø 4 mm hole drilling in the upper glass substrate; PDMS molding; magnetic PDMS bonding.

IV. RESULTS AND DISCUSSION

Fig. 4 Photograph of the completed micro pump.

The flux density characteristics of the micro coil were measured using a Tesla meter (TM-401, KANETEC, Japan). In the characterization tests, the coil was supplied with input currents of 0.2, 0.4 and 0.6 A, respectively, and the variation of the flux density was measured in the vertical direction along the central axis. The experimental results are presented in Fig. 5. Figure 6 presents the experimental results for the rate of change of the magnetic flux density along the vertical centerline of the micro coil. As discussed previously, to optimize the performance of the electromagnetic actuator, the magnet should be positioned at the height corresponding to the maximum rate of change of the magnetic flux. From an inspection of Fig. 6, it is found that the magnet should therefore be positioned 500 µm above the micro coil.

Fig. 5 Experimental results for the flux density (mT) produced by the micro coil at input currents of 0.2, 0.4 and 0.6 A, as a function of vertical distance (µm).

Fig. 6 Experimental results for the rate of change of the flux density (mT/µm) with vertical distance (µm) from the micro coil, at input currents of 0.2, 0.4 and 0.6 A.

The displacement characteristics of the diaphragm were evaluated experimentally using a laser displacement meter (LC-2400A + 2430, Keyence, Japan) powered by a PR8323 power supply (ABM, Taiwan). The displacement was measured at the central area of the diaphragm, i.e. the position of maximum deflection, at micro coil currents in the range 0.1-0.5 A. The experimental results for the variation of the diaphragm displacement with the input current are presented in Fig. 7.

Fig. 7 Experimental results for the maximum diaphragm deflection for input currents ranging from 0.1 A to 0.5 A at (a) different magnet diaphragm thicknesses (60, 110 and 170 µm) and (b) different PDMS diaphragm thicknesses (30, 80 and 200 µm).
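The rationale for placing the magnet at the height of maximum field gradient can be made concrete with the standard gradient-force relation F ≈ M·V·(dB/dz): the axial force on a magnetized film scales with its magnetic moment and the local field gradient. The sketch below is illustrative only; the magnet radius, film magnetization and gradient value are assumed placeholders, not values measured in this work.

```python
# Illustrative estimate of the axial force on the electroplated magnet,
# assuming the standard gradient-force expression F = m * dB/dz with
# magnetic moment m = M * V. All numbers marked "assumed" are hypothetical
# placeholders, not measurements from the paper.

import math

magnet_thickness_m = 110e-6        # one of the tested thicknesses (110 µm)
magnet_radius_m = 1.0e-3           # assumed magnet radius
magnetization_a_per_m = 3.0e5      # assumed Co-Ni film magnetization
field_gradient_t_per_m = 10.0      # assumed dB/dz near the optimum height

# Magnet volume, magnetic moment, and resulting axial force:
volume_m3 = math.pi * magnet_radius_m**2 * magnet_thickness_m
moment_a_m2 = magnetization_a_per_m * volume_m3
force_n = moment_a_m2 * field_gradient_t_per_m
print(f"axial force ≈ {force_n * 1e3:.2f} mN")
```

Because the force is proportional to dB/dz, mounting the magnet at the height where Fig. 6 shows the steepest field variation maximizes the actuation force for a given coil current.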
A MEMS-based Impedance Pump Based on a Magnetic Diaphragm
The flow rate characteristics of the micro pump were evaluated by monitoring the difference in the levels of two water columns contained within capillary tubes connected to the inlet and outlet sides of the pump, respectively (see Fig. 4). Figure 8(a) illustrates the variation in the flow rate with the input power. The results indicate that the flow rate increases linearly with increasing input power to the coil. From inspection, the maximum operational flow rate (i.e. the flow rate at a coil power of 1.8 W) is found to be around 0.9 µl/s when the PDMS diaphragm thickness is 30 µm and the actuation frequency is 30 Hz. Maintaining a constant coil current of 0.6 A, the flow rate was evaluated for excitation frequencies ranging from 30 Hz to 300 Hz at different magnet diaphragm and PDMS diaphragm thicknesses. The corresponding results are presented in Figs. 8(b) and (c) and indicate that the maximum flow rate is obtained at a frequency of 240 Hz.

Fig. 8 Experimental results for the flow rate given (a) input power ranging from 0 W to 1.8 W at different PDMS thicknesses, (b) actuation frequency ranging from 30 Hz to 300 Hz at different magnet diaphragm thicknesses and (c) the same frequency range at different PDMS diaphragm thicknesses.

V. CONCLUSIONS

This study has designed and fabricated a novel valveless micro impedance pump. The experimental results have shown that the actuator mechanism, comprising a micro coil, a PDMS diaphragm and an electroplated magnet, provides a large diaphragm deflection and a low power consumption. The micro pump is fabricated using MEMS techniques and has a planar structure, and can therefore be readily integrated with other microfluidic devices to create a Lab-on-a-Chip device. It has been shown that the maximum diaphragm deflection is approximately 30 µm and is attained at an actuating current of 0.6 A. The corresponding flow rate is found to be 1.5 µl/s at an actuating frequency of 240 Hz. Overall, the experimental results indicate that the micro pump presented in this study represents an ideal solution for microfluidic systems in which miniaturized pumps are required.

ACKNOWLEDGMENT

The authors would like to thank the National Science Council in Taiwan for the financial support provided under grants NSC 97-2221-E-020-03, NSC 97-2218-E-006-012 and NSC 96-2218-E-006-004.

REFERENCES

1. Jang J, Lee S S (2000) Theoretical and experimental study of MHD (magnetohydrodynamic) micropump, Sensors and Actuators A 80, pp. 84-89.
2. Brechtel R et al. (1995) Control of the electroosmotic flow by metal-salt-containing buffers, J. Chromatogr. A 716, pp. 97-105.
3. Gong Q L et al. (2000) Design, optimization and simulation on microelectromagnetic pump, Sensors and Actuators A 83, pp. 200-207.
4. Berg J M et al. (2003) A two-stage discrete peristaltic micropump, Sensors and Actuators A 104, pp. 6-10.
5. Husband B et al. (2004) Investigation for the operation of an integrated peristaltic micropump, Journal of Micromechanics and Microengineering 14, pp. S64-S69.
6. Olsson A et al. (1996) A valve-less planar pump isotropically etched in silicon, Journal of Micromechanics and Microengineering 6, pp. 87-91.
7. Andersson H et al. (2001) A valve-less diffuser micropump for microfluidic analytical systems, Sensors and Actuators B 72, pp. 259-265.
8. Rinderknecht D et al. (2005) A valveless micro impedance pump driven by electromagnetic actuation, Journal of Micromechanics and Microengineering 15, pp. 861-866.
9. Wen C Y et al. (2006) A valveless micro impedance pump driven by PZT actuation, Materials Science Forum 505-507, pp. 127-132.
10. Cooney C G, Towe B C (2004) A thermopneumatic dispensing micropump, Sensors and Actuators A 116, pp. 519-524.
11. Teymoori M M, Abbaspour-Sani E (2005) Design and simulation of a novel electrostatic peristaltic micromachined pump for drug delivery applications, Sensors and Actuators A 117, pp. 222-229.
12. Lee C Y et al. (2008) A planar valveless micro impedance pump for micro-fluidic systems, Journal of Micromechanics and Microengineering 18, pp. 1-9.
13. Makino E et al. (2001) Fabrication of TiNi shape memory micropump, Sens. Actuators A 88, pp. 256-262.
Corresponding author: Prof. Chia-Yen Lee
Institute: National Pingtung University of Science and Technology
Street: No. 1, Shuehfu Rd.
City: Neipu, Pingtung
Country: Taiwan
Email: [email protected]
Sample Concentration and Auto-location With Radiate Microstructure Chip for Peptide Analysis by MALDI-MS Shun-Yuan Chen, Chih-Sheng Yu, Jun-Sheng Wang, Chih-Cheng Huang, Yi-Chiuen Hu 1
Instrument Technology Research Center, National Applied Research Laboratories, Taiwan
Abstract — A new micro-scale radiate chip was demonstrated for matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) sample concentration and auto-location. While mass spectrometry is nowadays an important tool for analyzing and characterizing large biomolecules of varying complexity, sample preparation has become a key point within the whole experiment, and sample concentration is a most often used procedure to improve MALDI sensitivity. Here we demonstrate a simple method for concentrating samples for MALDI mass spectrometry analysis by using a micro-scale structured chip. Samples applied around the chip auto-locate to the center and deposit on the central zone after air drying. All the samples applied on the chip, even volumes up to 30 microliters, were concentrated and confined precisely to the central zone. The sample spots formed on our chip were much smaller than those on an unmodified plate with the same total volume. With the standard MALDI/MS sample preparation procedure, we found that matrix samples were precisely concentrated on our chip and significantly enhanced MALDI/MS results were obtained.
Keywords — MALDI, mass spectrometry, concentration, chip, auto-location.

I. INTRODUCTION

While mass spectrometry is nowadays an important tool for analyzing and characterizing large biomolecules of varying complexity, sample preparation has become a key point within the whole experiment. Sample concentration is the most often used procedure to improve the sensitivity of matrix-assisted laser desorption/ionization mass spectrometry (MALDI/MS). Various methods have been reported and used, such as added nanomaterials, modified plate surfaces, hydrophobic surface deposition, and so on. Different kinds of functional plates have been made using nanotechnology concepts with undisclosed materials that can precisely control the deposition of the sample on the surface. The dried spots are precisely restricted within a central zone of less than 300 µm diameter after vacuum sublimation. With appropriate surface modification, such plates can even isolate and purify specific proteins. Another continuing challenge in MALDI MS is the spread of the matrix and sample cocrystalline structures required for effective ionization. Typically, in MALDI MS, the analyte is distributed unevenly throughout the sample spot, concentrating in so-called "sweet spots". Searching for these sweet spots is time-consuming for high-throughput or automated processes. Hence, the ability to concentrate and localize samples is advantageous, especially for low analyte concentrations. Different methods, such as chemical treatment and nanotechnology, have been developed to change the hydrophobic character of the surface. Although these methods may perform well, their fabrication processes are intricate and complicated. Here we demonstrate a new method for peptide sample preparation. A micro-scale radiate chip was manufactured by a photolithography process and can be applied directly to MALDI/MS. Applied samples auto-locate to the center of the chip and deposit on the central zone after drying. Analysis by mass spectrometry showed that our chip indeed performs better in sample concentration and deposition than a conventional MALDI/MS plate.

II. MATERIALS AND METHODS

A. Reagents and Materials

All test materials were reagent grade or better. α-Cyano-4-hydroxycinnamic acid (CHCA) matrix was purchased from Waters Corporation. Acetonitrile (ACN), ethanol, 0.1% trifluoroacetic acid (TFA), ammonium citrate, and human albumin were purchased from Sigma-Aldrich. Adrenocorticotropic hormone (ACTH) peptide fragments were used as reference standards and purchased from Bruker Corporation: ACTH(18-39) [M+H]+ (monoisotopic) with a molecular weight of 2465.199 Da and ACTH(7-38) (monoisotopic) with 3657.929 Da.

B. Methods

Sample preparation
The CHCA matrix working solution was prepared as follows: 5 mg/ml CHCA was dissolved in a 90:10 solution of acetonitrile : 0.1% TFA. Peptides were dissolved in distilled water to a final concentration of 1 mg/ml. Human albumin was dissolved to a 5 mg/ml solution before use.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 799–801, 2009 www.springerlink.com
Micro-scale chip fabrication
The radiate microstructure chip was manufactured by a photolithography process and was deposited with C4F8 in a Plasma-thermo parallel plate plasma etcher. Finally, the chips were adhered to the stage of the ABI Voyager mass spectrometer.

The MALDI/MS experimental process
First, we applied 1 microliter of peptide sample on the plate or chip, and immediately added another 1 microliter of matrix solution without mixing. The mixture was dried under ambient conditions for a few minutes. Finally, the analyte/matrix mixture was analyzed by the mass spectrometer. The flight time is a function of an ion's mass-to-charge ratio (m/z), enabling an ion's mass to be derived from its time of flight (TOF).
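The mass-from-TOF relation mentioned above can be sketched with the ideal linear-TOF expression t = L·√(m/(2zeU)). The instrument parameters below (flight length, accelerating voltage) are assumptions for illustration only, not the Voyager's actual settings; only the ACTH reference masses come from the text.

```python
# Ideal linear time-of-flight relation: t = L * sqrt(m / (2 * z * e * U)).
# Flight length and accelerating voltage are assumed values; the peptide
# masses are the ACTH reference standards quoted in the text.

import math

AMU_KG = 1.66053906660e-27   # atomic mass constant (kg)
E_CHARGE = 1.602176634e-19   # elementary charge (C)

def tof_us(mass_da: float, charge: int = 1,
           flight_length_m: float = 1.3, accel_voltage_v: float = 20e3) -> float:
    """Ideal flight time in microseconds for an ion in a linear TOF tube."""
    m = mass_da * AMU_KG
    v = math.sqrt(2 * charge * E_CHARGE * accel_voltage_v / m)  # exit velocity
    return flight_length_m / v * 1e6

# ACTH fragments used as reference standards:
for name, mass in [("ACTH(18-39)", 2465.199), ("ACTH(7-38)", 3657.929)]:
    print(f"{name}: {tof_us(mass):.1f} µs")
```

Because t scales as √m at fixed charge, the heavier ACTH(7-38) fragment arrives later than ACTH(18-39) by the square root of their mass ratio, which is how the spectrometer separates the two peaks.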
III. RESULTS

Radiate microstructure chip
The chip manufactured by photolithography is illustrated in Fig. 1. All lines radiate from a central circular zone, which was fabricated with different diameters. After an appropriate volume of sample was applied around the chip, the sample droplet ran into the center without any external force and deposited on the central zone. Here we used 5 mg/ml albumin as the sample and separately added 10, 20 and 30 microliters to each chip. The sample droplets were confined to the central zone by the radiating lines. After about four hours, all samples had dried and concentrated into the central zone.

Cocrystallization on the central zone
For analysis by the mass spectrometer, the sample mixed with organic matrix must be applied to a stainless steel plate for electrical conduction. For this reason, we used a silicon chip as the substrate. After 2 microliters of peptide/matrix mixture was applied on the chip, cocrystals formed within a short time and concentrated on the central zone. Figure 2a shows the peptide/matrix droplet applied on the silicon chip and the cocrystals observed by microscopy. After cocrystallization, the chips were analyzed by the ABI Voyager MALDI/MS. Figure 2b shows the peptide mass maps from 1 pmol of ACTH with different central zone sizes; each zone size gave a significant MALDI/MS signal.
Fig. 1 The appearance of the radiate microstructure chip with (a) 10 µl H2O applied on a silicon substrate and (b) 10 µl, 20 µl and 30 µl human albumin (from left to right) applied on a PDMS substrate.

Fig. 2 (a) The appearance of the radiate microstructure chip used for MALDI and the peptide/matrix cocrystals observed by optical microscopy. (b) The peptide mass map of ACTH(7-38)_mono (3657 Da) obtained on the radiate microstructure chip.
Matrix optimization
To find the optimum conditions for our chip, different concentrations of CHCA matrix were used. We applied 1 µl of 1 mg/ml ACTH on the chip and then immediately added another 1 µl of matrix. Figure 3 shows the ACTH peptide mass maps obtained with 10 mg/ml, 2 mg/ml, 1 mg/ml, 0.5 mg/ml and 0.1 mg/ml CHCA as the matrix. Even at low matrix concentrations, the MALDI/MS signal remained significant.
Fig. 3 The effects of different CHCA concentrations on ACTH peptide mass analysis.

Sample concentration on the microstructure chip
To verify the concentration effect of our chip, the original ABI stainless steel sample plate was used as the control. Different concentrations of ACTH were applied separately to the ABI plate and to our chip; an equal volume of matrix was then added immediately and the spots were left to dry. All experimental conditions, such as the laser pulse intensity, were the same. The data collected from the mass spectrometer are shown in Fig. 4. The sensitivity and signal intensity were markedly better on our chip than on the original stainless steel plate.

Fig. 4 Comparison of ACTH peptide mass maps (10, 2, 0.5 and 0.2 µg/ml) between the conventional mass stage (upper) and the radiate microstructure chip (lower).

IV. CONCLUSIONS

Silicon-based substrates for MALDI/MS have been reported by A. Kraj et al., who developed porous silicon to improve the sensitivity for low-molecular-weight samples. Here we have demonstrated on-chip concentration with our radiate silicon chip, which concentrates and auto-deposits samples; its use for MALDI/MS is just one of our applications. The results of our experiments showed that using the microstructured silicon chip as a substrate for the mass spectrometer is feasible. With an appropriate design of the radiate lines, the sample/matrix mixture centralizes and cocrystallizes in the central zone of the chip, and the sensitivity and intensity achieved on our chip are better than those of the original stainless steel plate. The sensitivity and speed of MALDI-MS analysis have led to its use for the majority of protein and oligonucleotide analyses in high-throughput proteomics and genomics projects. Our chip provides a novel approach to concentration and deposition in MALDI/MS sample preparation and makes automated processing more feasible.

ACKNOWLEDGMENT

The authors thank M.C. Tseng (Institute of Chemistry, Academia Sinica, TW) for her technical support.

REFERENCES

1. Rebekah LG, Edward R, Kole TP et al (2005) Disposable hydrophobic surface on MALDI targets for enhancing MS and MS/MS data of peptides. Anal Chem 77:6609-6617.
2. Lee J, Mysuimi HK, Soper SA et al (2008) Development of an automated digestion and droplet deposition microfluidic chip for MALDI-TOF MS. J Am Soc Mass Spectrom.
3. Juri R, Marc M, Michael LN et al (2003) Experiences and perspectives of MALDI MS and MS/MS in proteomic research. Int J Mass Spectrom 226:223-237.
4. Benito C, Daniel LF, Antonio RF et al (2006) Mass spectrometry technologies for proteomics. Brief Funct Genomic Proteomic 4(4):295-320.

Author: Shun-Yuan Chen
Institute: Instrument Technology Research Center, National Applied Research Laboratories
Street: R&D Rd. VI
City: Hsinchu
Country: Taiwan
Email: [email protected]
The Synthesis of Iron Oxide Nanoparticles via Seed-Mediated Process and its Cytotoxicity Studies J.-H. Huang1, H.J. Parab1,4, R.S. Liu1,*, T.-C. Lai2, M. Hsiao2, C.H. Chen2, D.-P. Tsai3 and Y.-K. Hwu4 1
Department of Chemistry, National Taiwan University, Taipei 106, Taiwan 2 The Genomics Research Center, Academia Sinica, Taipei 115, Taiwan 3 Department of Physics, National Taiwan University, Taipei 106, Taiwan 4 Institute of Physics, Academia Sinica, Taipei 115, Taiwan
Abstract — The development of a seed-mediated growth method for the synthesis of iron oxide nanoparticles with tunable size distribution and magnetic properties is reported. A detailed investigation of the size distribution of the seeds as well as of the iron oxide nanoparticles during the growth process has been carried out using transmission electron microscopy (TEM). It was observed that the size distribution gradually narrows with time via intra-particle ripening and Ostwald ripening. Monodispersed iron oxide nanoparticles with sizes between 5 and 10 nm were fabricated using this method by varying the experimental parameters. The magnetic nanoparticles showed well-defined superparamagnetic behavior and blocking temperatures due to size effects, consistent with the TEM images. The thermogravimetric analyses exhibited a size-dependent weight loss of the magnetic nanoparticles. In vitro cytotoxicity tests were also performed to determine the cell viability as a function of the size and concentration of the magnetic nanoparticles. The as-prepared iron oxide nanoparticles showed biocompatibility and nontoxicity against normal as well as cancerous cell lines.
Herein, we report the development of a seed-mediated growth method for the synthesis of iron oxide nanoparticles with tunable size distribution and magnetic properties. A simple method is illustrated to fabricate monodispersed iron oxide nanoparticles from 5 to 10 nm by varying the experimental parameters. It was observed that the seed-mediated growth method for iron oxide nanoparticles is advantageous since it is a non-injection, heating-up method with an easily controllable growth process. The iron oxide nanoparticles were characterized by X-ray diffraction (XRD) and thermogravimetric analysis (TGA). The synthesis of the iron oxide nanoparticles was found to be reproducible, with tunable properties. In vitro cell viability analyses were also performed on these iron oxide nanoparticles to determine their cytotoxic effects.

II. MATERIALS AND METHODS

A. Experimental section
Keywords — magnetite, Fe3O4, nanoparticles, cytotoxicity and biomaterial
I. INTRODUCTION

Magnetic nanoparticles have been researched extensively for magnetic resonance imaging, drug targeting, magnetic separation and catalytic applications [1-3]. Particle size influences their electrical, magnetic and chemical properties. Hence, it is important to obtain a monodisperse size distribution during the fabrication of the nanoparticles to increase the sensitivity of these particles for various applications such as cell imaging. Generally, the synthesis of magnetic nanoparticles using methods such as chemical coprecipitation [4] and hydrothermal treatment [5-7] produces a broad particle size distribution. Hence, the development of novel methods to fabricate uniform and highly monodispersed nanocrystals by thermal decomposition has been the focus of recent research.
Synthesis of iron oxide nanoparticle seeds. The seeds were prepared following a procedure reported in the literature [8]. Briefly, a mixture of Fe(acac)3 (2 mmol), oleic acid (6 mmol), oleylamine (6 mmol), 1,2-tetradecanediol (10 mmol) and 20 mL of phenyl ether was magnetically stirred in a three-neck round-bottom flask under a nitrogen atmosphere. The mixture was slowly heated to 265 °C and refluxed for 30 min, followed by cooling to room temperature. The iron oxide nanoparticles were washed using ethanol and collected in solid form after centrifugation at 9000 rpm. The product was further washed several times with hexane to remove the residual solvent. Finally, 5.4 nm iron oxide nanoparticles were obtained after drying in an oven and were redispersed in hexane.

Iron oxide nanoparticles of different sizes were then synthesized in growth solution. The synthesis of iron oxide nanoparticles of different sizes involved the initial formation of iron oxide nanoparticle seeds and the subsequent growth of the particles in the presence of the seeds at high temperature. The concentrations of the growth solutions, prepared by adjusting the concentration of
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 802–805, 2009 www.springerlink.com
Fe(acac)3 in phenyl ether/benzyl ether, were 0.028, 0.056, 0.084 and 0.112 M. A mixture of 0.08 g of seeds and 20 mL of growth solution in the presence of 1,2-tetradecanediol, oleic acid and oleylamine was heated to 265 °C at a rate of 4 °C/min under a nitrogen atmosphere and then kept at this temperature for 30 to 90 min for complete growth of the nanoparticles. After the growth of the nanoparticles, the reaction mixture was cooled to room temperature, followed by the addition of ethanol. The brown precipitate thus formed was redispersed in hexane.

B. Cytotoxicity assay

Considering the increasing applications of magnetic nanoparticles in the biomedical field, in vitro cell viability studies were performed in the presence of the magnetite nanoparticles synthesized by the above method. The normal breast epithelial cell line (H184B5F5/M10) and three types of breast cancer cells (SKBR3, MB157 and T47D) were used for the cytotoxicity tests. These cells (2000 cells per 90 µL) with 10 µL of different concentrations of nanoparticles (0.1, 1, 10 and 100 µg/mL) were seeded into separate wells and incubated for 72 h, followed by the addition of 20 µL of MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt) to each well. The optical density (OD) of the resultant solutions was determined (λ = 490 nm) using a microplate absorbance reader (SpectraMAX 340pc, Molecular Devices, CA).

III. RESULTS AND DISCUSSION
Figure 1 shows the standard deviation (σ) of the nanoparticle size distribution after refluxing for different time scales. In the beginning, the σ value increases with increasing refluxing time owing to the size difference between the small nuclei and the growing particles. The maximum value is observed between 30 and 60 min, after which σ begins to drop and deviates from the original trend. As a result, the number of small particles decreases and the particle size distribution starts to focus. The rapidly decreasing σ value in the high-concentration solution shows that small particles dissolve easily due to the relatively large critical radius. By controlling the reaction time and the concentration of the growth solution, it was possible to produce monodispersed nanoparticles with particle sizes of 6.8, 7.6, 8.2 and 8.3 nm. The TEM images of the particles along with the seeds are shown in Figure 2; Figure 2(F) tabulates the average particle sizes calculated from the corresponding TEM images.

Fig. 1 Standard deviation (σ, nm) of the size distribution versus refluxing time (min) in (A) 0.028 M, (B) 0.056 M, (C) 0.084 M and (D) 0.112 M growth solution.

Fig. 2 TEM images of the seed solution (A) and of iron oxide nanoparticles after growth in different concentrations of growth solution: (B) 0.028 M, (C) 0.056 M, (D) 0.084 M and (E) 0.112 M; (F) final particle sizes of the iron oxide nanoparticles in the different growth solutions.

Figure 3 shows the concentration-dependent seed growth after reacting for 90 min in different solvents, namely benzyl ether and phenyl ether. The boiling temperature of benzyl ether is higher than that of phenyl ether, so the benzyl ether system provides a higher-temperature condition. It was observed that the size of the as-prepared seeds in benzyl ether is larger than in phenyl ether, because the amount of monomers induced by thermal energy increases with temperature. The growth curve finally reaches a saturation value owing to the high concentration of the solvents. The maximum value was observed at around 8.2 and 10.2 nm for phenyl ether and benzyl ether, respectively. Although a higher concentration causes faster seed growth, some of the monomers also form small particles that grow beyond the critical radius and compete with the as-prepared seeds. Hence, the final particle concentration is contributed partially by primary nucleation of the particles, so that the particle size is limited. Thus, by tuning parameters such as temperature and concentration, a series of monodispersed nanoparticles in the range of 5.4 to 10.2 nm was synthesized successfully.

Fig. 3 The concentration-dependent seed growth (particle size, nm, versus growth solution concentration, M) in phenyl ether and benzyl ether.

Figure 4 shows the XRD patterns of iron oxide nanocrystals of 5.4, 6.8, 7.6 and 8.2 nm together with the standard Fe3O4 and α-Fe2O3 patterns. The peak positions and relative intensities of the nanoparticles agree well with the XRD pattern of standard Fe3O4, which confirms the well-known inverse spinel structure of the magnetite materials. The average sizes of the iron oxide nanoparticles deduced from Scherrer's equation are shown in Table 1 and are consistent with the results obtained from the transmission electron microscopy (TEM) analysis.

The weight losses in the TG analysis for the 5.4, 6.8 and 7.6 nm iron oxide nanoparticles are shown in Figure 5. The TGA curves display a weight loss from room temperature to 250 °C and sudden drops between 250 and 350 °C and between 550 and 650 °C, respectively. The weight loss in the low-temperature region results from the evaporation of free oleic acid and the solvent, phenyl ether. However, the weight loss at relatively high temperatures can be attributed to the decomposition of oleic acid bound tightly to the surface of the nanoparticles. These results suggest two dissociation regions of oleic acid, owing to the two kinds of interactions between the iron oxide nanoparticles and oleic acid. The decomposition at higher temperatures can be attributed to the chemical bonding between Fe2+/Fe3+ and the carbonyl group of the ligand. Since smaller nanoparticles provide more surface binding sites for the surfactant than larger ones, the amount of weight loss is related to the particle size through the higher surface-area-to-volume ratio. Thus, TG analysis offers an important quantitative analysis for the different particle sizes of magnetic nanoparticles.

Fig. 5 TG analysis for 5.4, 6.8 and 7.6 nm iron oxide nanoparticles.

For biomedical applications of iron oxide nanoparticles such as magnetic resonance imaging, it is necessary to attach hydrophilic ligands to the nanoparticle surface to promote the suspension of the nanoparticles in aqueous media. In the present study, we modified the nanoparticle surface by replacing the oleate species with the tetramethylammonium 11-aminoundecanoate ligand to promote hydrophilicity [8].
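The size-distribution focusing tracked in Fig. 1 reduces to computing the standard deviation of measured particle diameters. A minimal sketch with hypothetical TEM measurements (the real data are those plotted in Fig. 1):

```python
# Standard deviation of a particle-size sample, as plotted in Fig. 1.
# The diameters below are hypothetical illustration values, not the
# paper's TEM measurements.

import statistics

diameters_nm = [6.5, 6.8, 7.0, 6.7, 6.9, 6.6, 7.1, 6.8]  # hypothetical
mean_nm = statistics.mean(diameters_nm)
sigma_nm = statistics.stdev(diameters_nm)  # sample standard deviation
print(f"mean = {mean_nm:.2f} nm, sigma = {sigma_nm:.2f} nm")
```

A falling σ at fixed mean, as in the later stages of Fig. 1, is the signature of the focusing regime in which small particles redissolve and feed the larger ones.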
Fig. 4 XRD patterns (intensity versus 2θ, degrees) of iron oxide nanocrystals of 5.4, 6.8, 7.6 and 8.2 nm sizes and the Fe3O4 and α-Fe2O3 standards.

Table 1 Comparison of particle sizes of iron oxide nanoparticles using TEM and XRD analysis

Concentration of growth solution (M)   TEM analysis (nm)   XRD analysis (nm)
Seed                                   5.4                 5.1
0.028                                  6.8                 6.5
0.058                                  7.6                 7.2
0.084                                  8.2                 7.9

The carboxylate group of the
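The XRD sizes in Table 1 follow from Scherrer's equation, D = Kλ/(β cos θ). A minimal sketch assuming Cu Kα radiation and hypothetical peak parameters (the paper does not list its fitted peak widths):

```python
# Scherrer crystallite-size estimate: D = K * lambda / (beta * cos(theta)).
# Wavelength is Cu K-alpha; the peak position and FWHM below are hypothetical
# illustration values, not the paper's fitted parameters.

import math

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size from XRD peak broadening (instrumental broadening ignored)."""
    beta = math.radians(fwhm_deg)              # integral breadth (FWHM) in radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# e.g. a hypothetical magnetite (311)-type peak near 2-theta = 35.5 deg,
# with 1.2 deg FWHM:
print(f"{scherrer_size_nm(1.2, 35.5):.1f} nm")
```

With these illustrative inputs the estimate lands near 7 nm, the same size range as the XRD column of Table 1; broader peaks map to smaller crystallites.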
ligand binds to the surface iron and exposes the hydrophilic amino group to the aqueous medium. To analyze biocompatibility, the hydrophilic nanoparticles of 5.4 nm and 7.6 nm size, at concentrations of 0.1, 1, 10 and 100 μg/mL, were incubated with the normal breast epithelial cell line (H184B5F5/M10) and three breast cancer cell lines (SKBR3, MB157 and T47D) for 72 h. Figure 6 shows the cell viability results for the normal as well as the breast cancer cells using the 5.4 nm and 7.6 nm nanoparticles. The cytotoxicity studies revealed no obvious change in cell viability over the studied concentration range of magnetic nanoparticles, even though the 100 μg/mL concentration is far higher than in normal use. Thus, the as-prepared nanoparticles showed biocompatibility and nontoxicity toward the normal as well as the cancerous cell lines.

Fig. 6 Cytotoxicity tests for the breast epithelial cell line (H184B5F5/M10) and the three types of breast cancer cells using (A) 5.4 nm and (B) 7.6 nm iron oxide nanoparticles

IV. CONCLUSIONS

In summary, we have synthesized monodisperse iron oxide nanoparticles using the seed-mediated method combined with thermal decomposition. The size evolution has been demonstrated by understanding the growth process that forms the monodispersed nanoparticles. It is also possible to control the size and size distribution of the particles by tuning the temperature and the concentration of the growth solution. Under high-concentration conditions, the nucleation and growth processes compete with each other to consume the monomers, whereas the temperature affects the degree of dissociation of the precursor for further growth of the nanoparticles. Hence, we tuned these factors to obtain monodispersed iron oxide nanoparticles with a narrow standard deviation (σ < 1). The TG analyses showed that the surface of the nanoparticles is occupied by different amounts of ligand via physical absorption and chemical binding, which change with particle size owing to the surface-area-to-volume ratio. The as-prepared iron oxide nanoparticles showed biocompatibility and nontoxicity toward the normal as well as the cancerous cell lines.

ACKNOWLEDGMENT

The authors would like to thank the National Science Council of Taiwan for financially supporting this research under Contract Nos. NSC 97-2113-M-002-012-MY3, NSC 97-2120-M-002-013 and NSC 97-2120-M-001-006.

REFERENCES

1. Song H T, Choi J S, Huh Y M et al. (2005) Surface Modulation of Magnetic Nanocrystals in the Development of Highly Efficient Magnetic Resonance Probes for Intracellular Labeling. J Am Chem Soc 127:9992-9993
2. Jun Y W, Huh Y M, Choi J S et al. (2005) Nanoscale Size Effect of Magnetic Nanocrystals and Their Utilization for Cancer Diagnosis via Magnetic Resonance Imaging. J Am Chem Soc 127:5732-5733
3. Weizmann Y, Patolsky F, Katz E et al. (2003) Amplified DNA Sensing and Immunosensing by the Rotation of Functional Magnetic Particles. J Am Chem Soc 125:3452-3454
4. Gass J, Poddar P, Almand J et al. (2006) Superparamagnetic Polymer Nanocomposites with Uniform Fe3O4 Nanoparticle Dispersions. Adv Funct Mater 16:71-75
5. Daou T J, Pourroy G, Bégin-Colin S (2006) Hydrothermal Synthesis of Monodisperse Magnetite Nanoparticles. Chem Mater 18:4399-4404
6. Park J, Lee E, Hwang N-M et al. (2005) One-Nanometer-Scale Size-Controlled Synthesis of Monodisperse Magnetic Iron Oxide Nanoparticles. Angew Chem Int Ed 44:2872-2877
7. Park J, An K, Hwang Y et al. (2004) Ultra-large-scale syntheses of monodisperse nanocrystals. Nat Mater 3:891-895
8. Sun S, Zeng H, Robinson D B et al. (2004) Monodisperse MFe2O4 (M = Fe, Co, Mn) Nanoparticles. J Am Chem Soc 126:273-279

The address of the corresponding author:

Author: Ru-Shi Liu
Institute: Department of Chemistry, National Taiwan University
Street: Sec. 4, Roosevelt Road
City: Taipei 106
Country: Taiwan
Email: [email protected]
Characterization of Functional Nanomaterials in Cosmetics and its Cytotoxic Effects

J.-H. Huang1, H.J. Parab1,3, R.S. Liu1,*, T.-C. Lai2, M. Hsiao2, C.H. Chen2 and Y.K. Hwu3

1 Department of Chemistry, National Taiwan University, Taipei 106, Taiwan
2 Genomics Research Center, Academia Sinica, Taipei 115, Taiwan
3 Institute of Physics, Academia Sinica, Taipei 115, Taiwan
Abstract — The ultraviolet (UV) rays from the sun are generally divided into three types: UVA (315-400 nm), UVB (280-315 nm) and UVC (100-280 nm). Among these, the UVC rays, which carry the highest energy, are absorbed completely as they pass through the atmosphere. The UVA and UVB rays that penetrate the atmosphere pose a threat to humans because of their carcinogenic effect on skin. Hence it is necessary to develop cosmetics that absorb UV rays to prevent skin damage. Generally, Tinosorb M (2,2'-methylenebis[6-(2H-benzotriazol-2-yl)-4-(1,1,3,3-tetramethylbutyl)phenol]) is mixed into cosmetics to absorb UVB rays. However, almost 97% of the UV rays reaching the skin are UVA rays, which are mainly responsible for skin damage. Recently, sun-screening cosmetics containing nanoparticles such as ZnO and TiO2 nanoparticles or nanorods have attracted great attention for protecting skin from UVA by reducing, through reflection and scattering, the amount of UV penetrating the skin. It is therefore necessary to investigate the risk factors involved in the use of these nanoparticles in cosmetics. The present studies involve the characterization of cosmetics containing TiO2 and ZnO nanoparticles and their comparison with commercial samples. Absorbance measurements in the UVA range were performed to determine the absorption efficiency of nanoparticles extracted from the cosmetic samples. Transmission electron microscopy revealed the morphology of the nanoparticles. In vitro cell viability tests were performed as a function of particle size to determine the cytotoxicity of the nanoparticles.

Keywords — sun-screening cosmetics, zinc oxide, titanium oxide, nanoparticles, cytotoxicity analysis.
I. INTRODUCTION

Nanoscience and nanotechnology have already offered new industrial and health-related applications of nanoparticles in commercial products such as cosmetics, food additives and medical reagents, owing to the unique properties of nanoparticles in comparison with their bulk counterparts [1-4]. Sun-screening cosmetics containing nanoparticle additives are a useful way to protect human skin from UVA exposure through the physical processes of scattering and reflection. Nanoparticles such as zinc oxide and titanium oxide, being smaller than the wavelength of light, provide excellent scattering and absorption in the UVA range, but many uncertain risk factors arise, such as the penetration of nanoparticles into the skin and their exposure to the lung [5-7]. Hence it is necessary for scientists to investigate the properties of the manufactured commercial products. In the present studies, the properties of commercial sun-screening cosmetics containing nanoparticles have been investigated, including the determination of their percentage content and the identification of the nanomaterials, using an extraction method to separate the nanoparticle fraction from the cosmetics. We confirm the exact nanoparticle components for further quantitative/qualitative analysis and bio-analysis such as cytotoxicity assays. Cytotoxicity analyses of different commercially available nanoparticles were performed with different cell lines, including S-G (transformed oral keratinocyte), OMF (oral mucosa fibroblast) and WI-38 (lung fibroblast), to investigate the effect of particle size on cell viability.

II. MATERIALS AND METHODS

A. Experimental section

The sun-screening cosmetics were purchased from five companies (SH, ES, KA, LA and CL). Nano-scaled commercially available TiO2 nanoparticles (AFDC, 161 nm irregular spherical particles, and M212, 20 nm width / 50 nm length nanorods) were purchased from Top Rhyme International Company. Acetone (>99.5%, Sigma-Aldrich), ethanol (99%, Riedel-de Haën) and n-hexane (99%, Riedel-de Haën) were used as purchased without further purification. A 1.5 g sample of sun-screening cosmetics was dispersed into 50 mL of solvent (ethanol, hexane or acetone), followed by sonication for 20 min. The supernatant was removed after centrifugation at 9,000 rpm, and the white powder was collected and dried in an oven at 70 °C for 10 min.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 806–809, 2009 www.springerlink.com

The above steps were repeated several times to remove the organic species from the sample completely. Finally, pure white powder was obtained, which was characterized by XRD (X-ray powder diffraction; PANalytical X'Pert PRO), UV/visible spectroscopy, SEM (scanning electron microscope; Tecnai-G2-F20) and TEM (transmission electron microscope; JEOL JSM-1200EX II).

B. Cytotoxicity assay (MTS assay)

Three normal cell lines were employed in this study: S-G (transformed oral keratinocyte), OMF (oral mucosa fibroblast) and WI-38 (lung fibroblast). S-G and OMF were cultured in Dulbecco's modified Eagle's medium (DMEM, Gibco, Grand Island, NY, USA). WI-38 was cultured in minimum essential medium (MEM, Gibco, Grand Island, NY, USA). All media were supplemented with 10% (v/v) FBS (Hyclone, CA, USA), 2 mM glutamine (Gibco), 100 units/mL penicillin (Gibco) and 100 μg/mL streptomycin (Gibco). All cell lines were incubated at 37 °C in a humidified atmosphere (95% air and 5% CO2). On the first day, 2000 cells were seeded into each well of a 96-well plate and incubated overnight at 37 °C in 5% CO2. On the second day, the medium was replaced with various amounts of titanium dioxide suspended in 200 μL of growth medium. The doses of titanium dioxide (AFDC and M212) were 13000, 300, 80.3, 20.8, 5.2 and 1.3 μg/mL. After 72 h of incubation, 20 μL of MTS was added to each well and incubated for one to three hours. Cell viability was determined from the absorbance at 490 nm (OD490) measured with an ELISA reader.
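The MTS readout is conventionally converted to a percent viability by normalizing background-corrected treated wells against untreated controls. The paper only states that OD490 was read on an ELISA reader, so the normalization below is an assumed, standard-practice sketch:

```python
def viability_percent(od_treated, od_control, od_blank=0.0):
    """Percent cell viability from MTS absorbance readings at 490 nm.

    Subtracts the blank (medium-only) absorbance from both wells, then
    expresses the treated signal as a percentage of the untreated control.
    Blank subtraction and control normalization are assumptions, not
    details given in the paper.
    """
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical plate readings
print(viability_percent(od_treated=0.82, od_control=0.95, od_blank=0.08))  # ≈ 85.1
```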
Fig. 1 The TEM images of nanoparticles extracted from the commercial products of (a) SH, (b) ES, (c) KA, (d) LA and (e) CL companies.
III. RESULTS AND DISCUSSION

The morphology of the nanoparticles extracted from the sun-screening cosmetics was analyzed using TEM; the images are shown in Figures 1(a)-1(e). The nanoparticles exhibit irregular spherical shapes for the SH and ES cosmetics, a mixture of irregular spheres, long rods and short rods for the KA cosmetics, and only rod shapes for the LA cosmetics. The CL nanoparticles show uniformly distributed core-shell structures, quite different from the others. The TEM images show that all the nanoparticles are nano-scale in size except those in Figure 1(e), and that their morphologies differ owing to the intrinsic properties of the materials, such as the preferred orientation for crystal growth. The nano-scale particle size satisfies the scattering requirement for UV wavelengths (100-400 nm), with the absorption peak found in the UVA range, because the particle size is smaller than the wavelength of the incident light and the absorption position shifts into the UV range due to the size effect. Hence, the TEM images also support that the size of the nanoparticles is responsible for the physical sun-screening.

Fig. 2 XRD patterns of the nanoparticles together with the standards of zinc oxide and of titanium oxide in the rutile and anatase forms

Figure 2 shows the characteristic XRD peaks of the nanoparticles. The XRD peaks of the SH nanoparticles agree well with the standard zinc oxide pattern, and the same is found for the ES nanoparticles, so both contain zinc oxide nanoparticles as the physical sun-screen component. The XRD pattern of the AL nanoparticles combines the standard patterns of zinc oxide and of titanium oxide in both the rutile and anatase forms; being composed of three types of nanoparticles, the AL sample provides more absorption positions in the UVA and UVB ranges than a single material. The XRD peaks of the LA nanoparticles show only the characteristics of titanium oxide in the rutile form. Finally, the CL nanoparticles are amorphous, since no peak appears in their XRD pattern. In Figure 3, the SEM image reveals the surface morphology of the CL nanoparticles, which take the form of smooth spheres, and the SEM-EDS analysis shows that their composition is silica. The low-refractive-index, non-crystalline component of the CL sun-screen makes it transparent, with high reflection arising from the bulk material.

Fig. 3 SEM-EDS and SEM image of nanoparticles extracted from the commercial product of the CL company

Table 1 displays the nanoparticle sizes for the five products calculated using the Scherrer equation. The zinc oxide nanoparticle sizes, around 20 nm for the SH, ES and AL samples, are consistent with the TEM images. However, the anatase titanium oxide in the AL cosmetics shows a larger size than the others because of the length of the nanorods. The rutile titanium oxide appears at about the same size as the zinc oxide nanoparticles, and by comparison with the TEM images it also forms irregular spheres. No specific peak is found in the XRD pattern of the CL core-shell nanoparticles, so their size can only be estimated from the TEM or SEM images.

Table 1 Nanomaterial sizes deduced using the Scherrer equation

                  SH        ES        AL        LA        CL
ZnO               16.0 nm   16.6 nm   22.8 nm   -         -
TiO2 (rutile)     -         -         12.2 nm   16.9 nm   -
TiO2 (anatase)    -         -         41.3 nm   -         -

Table 2 shows the percentage composition of the cosmetics, divided into powder and other (organic) species. Powder contents of 20 wt%, 16.6 wt%, 16 wt%, 5.2 wt% and 5.5 wt% were found for the SH, ES, AL, LA and CL cosmetics, respectively, after the extraction procedure. This shows that a large amount of powder is used in the cosmetics during manufacture. Previously, no investigation had included bio-analysis, such as cytotoxicity tests, for these cosmetics. Exposure of such fine particles to human skin or lung is potentially dangerous: although bulk titanium oxide and zinc oxide show no toxicity to humans, unexpected effects can occur as the particle size decreases to the nano-scale. Based on the TEM images and XRD patterns, the major nanoparticle materials used as sun-screen agents in these cosmetics are titanium oxide and zinc oxide, with sizes around 20 nm.

Table 2 Weight percentage of the solid component of the cosmetics

               SH     ES      AL     LA      CL
Powder (wt%)   20%    16.6%   16%    5.2%    5.5%
Others (wt%)   80%    83.4%   84%    94.8%   94.5%

The UV/visible spectra measured for the five cosmetics are shown in Figure 4. The UVA range lies between 315 and 400 nm. The absorption peaks of the titanium oxide and zinc oxide nanoparticles are found at 280 nm and 375 nm, respectively, so all samples show characteristic peaks in the UV range [7]. The characteristic peak of the titanium oxide nanoparticles lies in the UVB range with high intensity, whereas the absorption peak of the zinc oxide nanoparticles lies in the UVA range with low intensity. For sun-screening in the UVA range, the zinc

Fig. 4 UV/vis spectra of (a) SH, (b) ES, (c) AL, (d) LA and (e) CL cosmetics, analyzed in the range of 200-700 nm
oxide nanoparticles are more useful for protecting human skin than the titanium oxide nanoparticles, although their characteristic absorption intensity is comparatively low. The background for the nanoparticles in Figures 4(a)-4(d) is higher than that of the larger particles in Figure 4(e) because of the stronger scattering from closely packed small nanoparticles.

Figure 5 shows the results of the cytotoxicity assays of the OMF, S-G and WI-38 cell lines incubated with the two types of nanomaterials, AFDC (161.1 nm nanoparticles) and M212 (20 nm / 50 nm nanorods). The results reveal that the oral keratinocyte and fibroblast cells have a higher tolerance to the various concentrations of titanium oxide nanoparticles than the lung fibroblasts, whose cell viability decreases up to 13000 μg/mL. For cell lines exposed to the M212 nanomaterial, the cell viability of WI-38 starts to decrease at 50.78 μg/mL. In comparison, the AFDC nanomaterial appears less toxic than M212 in these cell lines, owing to the size effect. The mechanism by which TiO2 inhibits cell viability remains unknown.

Fig. 5 Cytotoxicity analysis of commercially available titanium oxide with diameters of (a) 161.1 nm and (b) ~20 nm width / 50 nm length nanorods. (The insets are TEM images of the nanoparticles.)

IV. CONCLUSIONS

From the analysis of the TEM images and XRD patterns, the nanoparticles extracted from the sun-screening cosmetics are composed of zinc oxide, titanium oxide or a mixture of both for the SH, ES, AL and LA products, whereas SEM-EDS shows that silica is the component of the CL product. The nanoparticle contents are 20 wt% for the SH, 16.6 wt% for the ES, 16 wt% for the AL, 5.2 wt% for the LA and 5.5 wt% for the CL product. These data show that quite large amounts of nanoparticles are used in sun-screening cosmetics, so skin and lung cells risk exposure, for example through contact with airborne nanoparticles. All the nanoparticles show characteristic absorption in the UV range, but only the absorption peak of the zinc oxide nanoparticles lies in the UVA range, while the peak of the titanium oxide nanoparticles lies in the UVB range. Hence, zinc oxide nanoparticles are excellent for sun-screening cosmetics to protect human skin from the major part of the UV rays, namely UVA. The cytotoxicity analysis of the three cell lines indicates that high concentrations of nanoparticles do not benefit the growth environment of the cells and that nano-scale materials suppress cell growth owing to the size effect.

ACKNOWLEDGMENT

The authors would like to thank the Department of Health, Executive Yuan, under Contract No. DOH97-TD-D-11397002, and the National Science Council of Taiwan under Contract Nos. NSC 97-2113-M-002-012-MY3 and NSC 97-2120-M-001-006 for financially supporting this research.

REFERENCES

1. Paul J E, Tatjana P, Stefan V et al. (2007) DNA-TiO2 nanoconjugates labeled with magnetic resonance contrast agents. J Am Chem Soc 129:15760-15761
2. Bappaditya S, Haoheng Y, Nicholas O F et al. (2008) Protein-passivated Fe3O4 nanoparticles: low toxicity and rapid heating for thermal therapy. J Mater Chem 18:1204-1208
3. Aleksandra B D, Yu H L (2006) Optical properties of ZnO nanostructures. Small 2:944-961
4. Kazunari O, Shigeyuki N, Yuanzhi L et al. (2007) The surface of TiO2 gate of 2DEG-FET in contact with electrolytes for bio sensing use. Appl Surf Sci 254:36-39
5. Sang H L, Hyun J L, Hiroki G et al. (2007) Fabrication of porous ZnO nanostructures and morphology control. Phys Stat Sol (c) 4:1747-1750
6. Qamar R, Mohtashim L, Elke D et al. (2002) Evidence that ultrafine titanium dioxide induces micronuclei and apoptosis in Syrian hamster embryo fibroblasts. Environ Health Perspect 110:797-800
7. Raymond S H Yang, Louis W C, Wu J-P et al. (2007) Persistent tissue kinetics and redistribution of nanoparticles, quantum dot 705, in mice: ICP-MS quantitative assessment. Environ Health Perspect 115:1330-1343

The address of the corresponding author:

Author: Ru-Shi Liu
Institute: Department of Chemistry, National Taiwan University
Street: Sec. 4, Roosevelt Road
City: Taipei 106
Country: Taiwan
Email: [email protected]
Design and Analysis of MEMS based Cantilever Sensor for the Detection of Cardiac Markers in Acute Myocardial Infarction

Sree Vidhya1 & Lazar Mathew2

1 Healthcare Practice, Frost & Sullivan (P) Ltd, Chennai, India
2 School of Biotechnology, Chemical & Biomedical Engineering, VIT University, Vellore, India
Abstract — Piezo-resistive actuation of a microcantilever induced by biomolecular binding, such as DNA hybridization or antibody-antigen binding, is an important principle for biosensing applications. As the magnitude of the forces exerted is small, increasing the sensitivity of the microcantilever becomes critical. In this paper we seek to achieve this by geometric variation of the cantilever. The sensitivity of the cantilever was improved so that the device can sense the presence of antigen even when the magnitude of the surface stresses on the microcantilever is very small. We consider a 'T-shaped' cantilever that eliminates the disadvantages while simultaneously improving the sensitivity. Simulations for validation were performed using IntelliSuite (a MEMS design and simulation package). The simulations reveal that the T-shaped microcantilever is almost as sensitive as a thin cantilever and has a relatively low buckling effect. They also reveal that the sensitivity decreases as the thickness of the cantilever increases.

Keywords — Microcantilever, Acute Myocardial Infarction, Cardiac Troponins, Piezo-resistive, Thermo-Electro-Mechanical Analysis
I. INTRODUCTION

This paper presents an analytical model of a piezo-resistive cantilever used as a MEMS-based biosensor for the detection of cardiac markers in Acute Myocardial Infarction (AMI). Diagnosis of myocardial infarction is achieved through the nanomechanical deflection of the microcantilever due to adsorption of the Troponin I complex. The deflection of the microcantilever is measured in terms of piezoresistive changes by implanting boron at the anchor point, where the strain due to adsorption of the analyte molecules is maximal. Biochemical interactions between the cardiac Troponin I (cTnI) complex and the immobilized antibodies cause a change in the resistance of the piezoresistor integrated at the anchor point. A 'T'-shaped microcantilever design is proposed for the study. The distal end of the device is coated with gold. The sensitivity of the cantilever was improved so that the device can sense the presence of antigen even when the magnitude of the surface stresses on the microcantilever is very small. To obtain application-specific optimum design parameters and predict the cantilever performance, Thermo-Electro-Mechanical (TEM) analysis using IntelliSuite (a MEMS design and simulation package) was performed. Miniaturization of the cantilever-based biosensor leads to significant advantages in absolute device sensitivity. Precise measurement of the deflection at the end of the cantilever is achieved through this arrangement. Antigens can be measured down to picogram levels with this technique.

II. MATERIALS & METHODS

A. Theoretical Considerations

There is no standard procedure to determine the electromechanical parameters of piezo-resistive structures. In this paper, two different approaches are used to study the characteristics of the T-shaped microcantilever and to predict its performance during the biomolecular binding process. The first approach uses the theoretical relationship between differential surface stress and tip displacement during binding at the upper surface of the cantilever. In the second approach, FEM simulations with the IntelliSuite software were performed on the T-shaped microcantilever. Piezoelectric materials strain when exposed to a voltage and, conversely, accumulate electrical charge on opposing surfaces, producing a voltage when strained by an external force; this is due to the permanent dipole nature of these materials. When biomolecular interaction occurs on the top of the cantilever, the change in the radius of curvature R and the cantilever deflection zmax can be related to the differential surface stress σs by

1/R = 6(1 − ν)σs/Et²  and  zmax = 3l²(1 − ν)σs/Et²

where ν is Poisson's ratio, E is Young's modulus of the substrate, t is the thickness of the microcantilever, and l is the cantilever length.
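The two relations above can be evaluated directly. A minimal sketch in SI units; the numeric values below are illustrative assumptions, not data from the paper:

```python
def stoney_deflection(surface_stress, length, thickness, youngs_modulus, poisson):
    """Tip deflection z_max = 3 l^2 (1 - nu) sigma_s / (E t^2), SI units.

    surface_stress is the differential surface stress sigma_s in N/m.
    """
    return 3 * length**2 * (1 - poisson) * surface_stress / (youngs_modulus * thickness**2)

def stoney_curvature(surface_stress, thickness, youngs_modulus, poisson):
    """Curvature 1/R = 6 (1 - nu) sigma_s / (E t^2), SI units."""
    return 6 * (1 - poisson) * surface_stress / (youngs_modulus * thickness**2)

# Illustrative (assumed) values: a 200 um long, 1 um thick silicon cantilever
# under a 5 mN/m differential surface stress.
z = stoney_deflection(surface_stress=5e-3, length=200e-6,
                      thickness=1e-6, youngs_modulus=169e9, poisson=0.22)
print(f"tip deflection = {z * 1e9:.2f} nm")  # ≈ 2.77 nm
```

Note that the two formulas are mutually consistent through zmax = l²/(2R), which is a quick sanity check on any implementation.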
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 810–812, 2009 www.springerlink.com
B. Microcantilever Design

Silicon is used as the substrate material. To serve as a substrate, it must be pure silicon in single-crystal form; the Czochralski method is the most popular of the several methods developed for producing pure silicon crystal. Planes and orientations are commonly designated by Miller indices, which are effectively used to designate the planes of materials in the cubic crystal families. For simulation purposes, we considered the design parameters given in Table 1.

Table 1 Design Parameters

Length of the cantilever   250 μm
Length of the T-arm        100 μm
Width                      50 μm
Cantilever thickness       1 μm

Keeping these parameters in consideration, a T-shaped microcantilever was designed using the IntelliFAB designer. Sequential steps led to the microcantilever model shown in Figure 1.

Fig. 1 Cross section of the T-shaped microcantilever design

C. Thermo-Electro-Mechanical Analysis

Thermo-Electro-Mechanical (TEM) analysis using IntelliSuite was chosen as the simulation tool in this study because of its unique capabilities in MEMS design, simulation and modeling, and its fully integrated MEMS design environment. The first step (pre-processing) in using the TEM analysis module is constructing a model of the structure to be analyzed. TEM requires the input of a topological description of the structure and its geometric features, represented in 3D using the IntelliFAB designer, which gives a virtual prototype of the model. The primary objective of the model is to realistically replicate the important parameters and features of the real device. Once the geometric model has been created, a meshing procedure is used to break the model up into small elements. In general, a TEM model is defined by a mesh network made up of the geometric arrangement of elements and nodes. Nodes represent points at which features such as displacements are calculated. Elements are bounded by sets of nodes and define the localized mass and stiffness properties of the model. Elements are also identified by mesh numbers, which allow reference to the corresponding deflections or stresses at specific model locations. A finely meshed model of the T-shaped microcantilever used in the analysis is shown in Figure 2.

Fig. 2 IntelliSuite auto mesh with 50 × 50 μm elements

III. SIMULATIONS AND RESULTS
Simulations were performed to validate the claim that the T-shaped microcantilever leads to significant advantages in absolute device sensitivity. A microcantilever of 250 μm length × 100 μm T-arm length × 50 μm width × 1 μm thickness was used for the IntelliSuite simulations: material, silicon; Young's modulus, 200 GPa; Poisson's ratio, 0.22; mass density, 2.3 g/cm³. The effect of the surface stress created by the troponin-complex interactions on the gold-coated surface of the cantilever can be effectively simulated by substituting it with a line force (amounting to 15.2 × 10⁻¹⁷ MPa) on the edges of the T-arm. Sensitivity has been measured in terms of the maximum displacement and surface stress undergone by any point on the cantilever, as shown in Figures 3 and 4.
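The relation quoted in Section II.A implies that the tip deflection falls off as 1/t², consistent with the abstract's observation that sensitivity decreases as thickness increases. A quick check using the design parameters above; the surface-stress magnitude is an assumed illustrative value, not a figure from the paper:

```python
def tip_deflection(length, thickness, youngs_modulus, poisson, surface_stress):
    """z_max = 3 l^2 (1 - nu) sigma_s / (E t^2), SI units
    (the Stoney-type relation quoted in Section II.A)."""
    return 3 * length**2 * (1 - poisson) * surface_stress / (youngs_modulus * thickness**2)

L, E, NU = 250e-6, 200e9, 0.22   # design parameters from Table 1 and Section III
SIGMA = 5e-3                     # assumed surface stress in N/m (not from the paper)

for t in (1e-6, 2e-6, 4e-6):     # doubling thickness cuts the deflection by 4x
    z = tip_deflection(L, t, E, NU, SIGMA)
    print(f"t = {t * 1e6:.0f} um -> z_max = {z * 1e9:.2f} nm")
```

For the 1 μm design thickness this gives a deflection of a few nanometres, well within the resolution of piezoresistive readout.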
IV. CONCLUSION

Thus a T-shaped microcantilever was designed, analyzed and simulated successfully. Stresses were generated at the anchoring region of the cantilever, where boron was implanted, and were detected by the piezoresistive detection method. However, analyses based on the concentration of the sample solution may be needed for further research.
Fig. 3 Displacement along the Z-axis

Fig. 4 Stress at the anchoring region along the Z-axis

REFERENCES

1. Wu A H B, Feng Y J, Contois J H, Pervaiz J (1996) Comparison of myoglobin, creatine kinase MB, and cardiac troponin I for diagnosis of acute myocardial infarction. Ann Clin Lab Sci 26:291-300
2. Wu A, editor (1998) A review on cardiac markers. Washington, DC: American Association of Clinical Chemistry (AACC) Press
3. Pijanowska D G, Sprenkels A J, Olthuis W, Bergveld P (2003) A flow-through amperometric sensor for micro-analytical systems. Sensors and Actuators B: Chemical 91(1-3):98-102
4. Alansari S E, Croal B L (2004) Diagnostic value of heart fatty acid-binding protein and myoglobin in patients admitted with chest pain. Annals of Clinical Biochemistry 41:391-396
5. Zhou W, Khaliq A, Tang Y, Ji H, Selmic R R. Simulation and design of piezoelectric microcantilever chemical sensors.
Integrating Micro Array Probes with Amplifier on Flexible Substrate

J.M. Lin1, P.W. Lin2 and L.C. Pan3

1 Chung-Hua University/Department of Mechanical Engineering, Hsin-Chu, Taiwan, R.O.C.
2 Taipei Medical University/New Business Center, Taipei, Taiwan, R.O.C.
3 Taipei Medical University/Department of General Education, Taipei, Taiwan, R.O.C.
Abstract — In this paper, a bio-sensing module that integrates a micro-array probe device and a semiconductor amplifier on two flexible substrates is proposed, fabricated using semiconductor processes. The amplifier is formed of bottom-gate thin film transistors (TFTs). As such, the signal obtained by the bio-sensing probes can be amplified nearby to improve the signal-to-noise ratio and impedance matching. The module is formed on a flexible substrate so that the bio-probe device can be disposed to conform to the profile of a portion of a living body, improving the electrical contact.

Keywords — Bio-sensing probe, thin film transistor amplifier, flexible substrate, signal-to-noise ratio
I. INTRODUCTION

Conventional micro array biological probes are produced on a hard silicon wafer substrate [1-7]. This kind of product is not only heavy and fragile but also requires high temperature processes. Moreover, conventional micro array biological probes cannot be designed and disposed to follow the profile of a living body's portion, which adversely affects contact between the biological probes and the living body. Besides, a signal detected by conventional micro array biological probes must be carried off-chip for signal-to-noise and impedance-matching processing, so additional signal-processing devices are required. This research provides a micro array bio-probe device integrated with an amplifier formed of bottom-gate thin film transistors [8-9]; it uses a micro-electro-mechanical process and a semiconductor process to integrate the micro array bio-probes and the bottom-gate TFT amplifier on a flexible substrate. As such, the signal obtained by the probes can be amplified nearby to improve the signal-to-noise ratio and impedance matching. The micro array bio-probes are formed on the flexible substrate so that the bio-probe device can be disposed to conform to the profile of a living body's portion, improving the electrical contact. The organization of this paper is as follows: the first section is the introduction. The second is the fabrication steps of the semiconductor amplifier and the bio-sensing probe device. The next is the microprobe fabrication test and discussion. The last part is the conclusion.

II. DEVICE FABRICATION STEPS

As in Fig. 1, the bio-sensing probe has a tip end to facilitate thrusting into the living body to decrease the contact impedance. This research can vary the density, occupied area and sharpness of the probe tip ends to change the contact impedance so as to meet different needs. In addition, the product can be designed for roll-to-roll processing to facilitate mass production.
Fig. 1 The side view of the proposed module for the micro array bio-sensing probe device integrated with a semiconductor amplifier.
A. Semiconductor Amplifier Design

Step 1: Using mask #1 and Photolithography And Etching Processes (PAEP), through holes are formed in the flexible substrate for signal conduction between the two surfaces. Then remove the Photo Resist (PR). Next, an E-gun evaporator (a low temperature process) deposits a layer of TiN (0.1 μm) on each side of the substrate as a seed for electroplating copper (100 μm). The result is in Fig. 2.
Fig. 2 The result of design step 1.

Step 2: With the E-gun evaporator, deposit an insulating layer of SiO2 or Si3N4 (2 μm) on the lower surface of the substrate. Using mask #1 and PAEP, make vias through the deposited insulating layer over the holes made in Step 1. Then evaporate an amorphous silicon layer (2 μm, to become the four active regions of the thin-film transistors) and, using
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 813–816, 2009 www.springerlink.com
mask #2 with PAEP, leave four island regions for making the MOS transistors. These regions will form two pairs of amplifiers: the two left-hand regions are for two N-MOS transistors, and the other two regions are for a CMOS transistor amplifier. Finally remove the PR, and anneal the amorphous silicon for re-crystallization by Nd-YAG laser. The result is in Fig. 3.

Fig. 3 The result of design step 2.

Step 3: Evaporate layers of SiO2 (2 μm) and amorphous Si (2 μm) respectively, and with mask #3 and PAEP make the gate electrodes of the transistors and the wirings connecting to the vias. Finally remove the PR, and the result is in Fig. 4.

Fig. 4 The result of design step 3.

Step 4: Using mask #4 and the PAEP, etch the SiO2 away at the sources, drains and wirings of the three left-hand N-MOS transistors for phosphorus (N+ donor type) ion implantation. Finally remove the PR, and the result is in Fig. 5.

Fig. 5 The result of design step 4.

Step 5: Using mask #5 with the PAEP, etch some regions of SiO2 away at the sources, drains and wirings of the right-hand P-MOS transistors for boron (P+ acceptor type) ion implantation. Finally remove the PR, and the result is in Fig. 6.

Fig. 6 The result of design step 5.

Step 6: Evaporate a layer of Si3N4 or SiO2 (2 μm), and with mask #6 and PAEP make the contact holes for all the electrodes of the MOS transistors and wirings. Finally remove the PR, and the result is in Fig. 7.

Fig. 7 The result of design step 6.

Step 7: Evaporate a layer of aluminium (2 μm) and with mask #7 and PAEP make the contact metallization for all the electrodes of the MOS transistors and wirings. Finally remove the PR, and the result is in Fig. 8.

Fig. 8 The result of design step 7.
Step 8: With the E-gun evaporator, deposit a layer of SiO2 or Si3N4 (2 μm) for insulation and passivation, and use mask #8 with PAEP to make the pad holes for wire bonds. Then electroless-plate two layers, nickel and gold. Finally remove the PR; the result is in Fig. 9, ready for making bumps to connect to the outer circuit by the solder screening and reflow processes. The four transistors are connected as two sets of amplifiers, as in Fig. 10; they can be used for impedance matching and increasing the signal-to-noise ratio.
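The paper gives no transistor parameters, so the following is purely an illustrative sketch of the small-signal gain of a resistively loaded common-source stage, one plausible configuration for the on-substrate amplifier pairs; every numeric value is an assumption.

```python
# Illustrative only: the paper does not give transistor parameters, so
# this sketches the small-signal voltage gain of a resistively loaded
# common-source stage, one plausible configuration for the on-substrate
# amplifier pairs. All numbers below are assumptions.
def gm_saturation(mu_cox, w_over_l, v_ov):
    """Transconductance of a MOSFET in saturation: gm = mu*Cox*(W/L)*Vov."""
    return mu_cox * w_over_l * v_ov

mu_cox = 50e-6      # A/V^2, assumed for re-crystallized poly-Si TFT
w_over_l = 20.0     # assumed device geometry
v_ov = 1.0          # V, assumed gate overdrive
r_load = 100e3      # ohm, assumed load resistor

gm = gm_saturation(mu_cox, w_over_l, v_ov)   # ~1 mA/V
gain = -gm * r_load
print(round(gain, 6))                        # -> -100.0, i.e. |Av| = 100
```

A gain of this order before the signal leaves the substrate is what makes the nearby amplification improve the signal-to-noise ratio.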
Fig. 9 The result of design step 8.

Fig. 10 Four MOS transistors are connected as two pairs of amplifiers.

B. Micro Array Bio-Probe Design

Step 1: The conducting vias of the micro array bio-sensing probe are formed by Nd:YAG laser ablation. Make SU-8 thick photo-resist (500 μm) on each side using mask #9. The result is in Fig. 11.

Fig. 11 The result of design step 1.

Step 2: Evaporate copper and TiN on each side to a thickness of 100 μm. Strip the SU-8 photo-resist away. The result is in Fig. 12.

Fig. 12 The result of design step 2.

Step 3: Form a layer of Lift-Off-Resist (LOR) with a thickness of 500 μm on the back side with mask #10. Make SU-8 PR (500 μm) on the back side with mask #3, as in Fig. 13.

Fig. 13 The result of design step 3.

Step 4: With the E-gun evaporator, deposit a layer of TiN (2 μm). The result is in Fig. 14.

Fig. 14 The result of design step 4.

Step 5: Strip the LOR PR away; the micro array bio-sensing probe is then formed. The result is in Fig. 15.

Fig. 15 The result of design step 5.

Step 6: Wire-bond for electrical signal connection. The result is in Fig. 1.

III. TEST AND DISCUSSIONS
The top view and the fabrication result after packaging are shown in Figs. 16 and 17. The next step is the impedance test [10]. The center line is soldered to the probes while the outer shielding lines are connected to the probe ground. Then the pig skin in Fig. 18 is tested using the equivalent circuit model in Fig. 19. The results of the pig skin impedance measurements for two cases are shown in Table 1. It can be seen that the resistance obtained in case A is about 330 times larger than that in case B for the same kind of pig skin. The reason is that the area of case A is larger, so the insertion force may not have been large enough to penetrate the skin fully, making the resistance much larger. The result of case B is more reasonable.
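The resistance ratio quoted above can be checked directly from the Table 1 values; the impedance evaluation below uses an assumed series R-L-C topology for illustration only, since the exact circuit of Fig. 19 is not reproduced in this text.

```python
import math

# Checking the ~330x resistance ratio quoted above from the Table 1 values,
# then evaluating |Z| under an ASSUMED series R-L-C topology (the actual
# equivalent circuit of Fig. 19 is not reproduced in this text).
R_A, R_B = 11.5261e6, 34.9063e3          # ohm, cases A and B (Table 1)
print(round(R_A / R_B))                  # -> 330, as stated in the text

def z_mag_series_rlc(r, l, c, f):
    """|Z| of a series R-L-C branch at frequency f (illustrative model only)."""
    w = 2.0 * math.pi * f
    return math.hypot(r, w * l - 1.0 / (w * c))

# Case B values from Table 1, evaluated at 1 kHz:
print(z_mag_series_rlc(R_B, 1.37497, 12.1078e-12, 1.0e3))
```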
Fig. 16 The top view of the design result.

Fig. 17 The result of fabrication.

Fig. 18 The pig skin under test.

Fig. 19 The equivalent circuit model of pig skin.

Table 1 The results of pig skin impedance measurements for two cases.

Case / Sample                    A              B
Length, Width, Thickness (cm)    14, 14, 0.5    12, 12, 0.3
Area (cm²)                       196            144
Resistance R                     11.5261 MΩ     34.9063 kΩ
Inductance L                     255.366 nH     1.37497 H
Capacitance C                    8.39747 pF     12.1078 pF

IV. CONCLUSIONS

This research employs the MEMS process to integrate TFT amplifiers and micro array biological probes on a flexible substrate. It becomes possible to dispose the bio-sensing probe in conformity with the profile of the living body's portion by forming the bio-probe on the flexible substrate. As such, the contact between the biological probes and the living body becomes better. On the other hand, the TFT amplifier is also produced on the flexible substrate, so that a signal detected by the biological probes can be amplified through a short path. The signal-to-noise ratio and impedance matching are improved.

ACKNOWLEDGMENT

This research was supported by the National Science Council, Taiwan, R.O.C., under contracts NSC-94-2622-E-216-011-CC3, NSC-95-2622-E-216-CC3 and NSC-96-2622-E-216-005-CC3.

REFERENCES

1. Lin L, Pisano A (1999) Silicon-processed microneedles. J. of Micro-Electro-Mechanical Systems, 18(1):78–84.
2. Nomura K, Ohta H, Ueda K et al. (2003) Thin-film transistor fabricated in single-crystalline transparent oxide semiconductor. Science, 300(5623):1269–1272.
3. Wu Y, Qiu Y, Zhang S et al. (2008) Microneedle-based drug delivery: studies on delivery parameters and biocompatibility. Biomedical Microdevices.
4. Chen B, Wei J, Tay E et al. (2008) Silicon microneedle array with biodegradable tips for transdermal drug delivery. Microsystem Technologies, 14(7):1015–1019.
5. Zahn J, Talbot N, Liepmann D et al. (2008) Microfabricated polysilicon microneedles for minimally invasive biomedical devices. Biomedical Microdevices, pp 295–303.
6. Cormier M, Johnson B, Ameri M et al. (2008) Fabrication and characterization of laser micromachined hollow microneedles. Journal of Controlled Release, pp 503–511.
7. McAllister D, Cros F, Davis S et al. (2004) Three-dimensional hollow microneedle and microtube arrays. J. of Micromechanics and Microengineering, 14:597–602.
8. Meng Z, Wang M, Wong M (2000) High performance low temperature metal-induced unilaterally crystallized polycrystalline silicon thin film transistors for system-on-panel applications. IEEE Trans. on Electron Devices, 47(2):404–409.
9. Wong M, Jin Z, Bhat G, Wong P et al. (2000) Characterization of the MIC/MILC interface and its effects on the performance of MILC thin-film transistors. IEEE Trans. on Electron Devices, 47(5):1061–1067.
10. Rosell J, Colominas J, Pallas-Areny R et al. (1988) Skin impedance from 1 Hz to 1 MHz. IEEE Trans. Biomed. Eng., BME-35:649–651.
Author: J.M. Lin
Institute: Chung-Hua University, Dept. of Mechanical Engineering
Street: 707, Sec. 2 Wu-Fu Rd.
City: Hsin-Chu
Country: Taiwan, R.O.C.
Email: [email protected]
Investigating Combinatorial Drug Effects on Adhesion and Suspension Cell Types Using a Microfluidic-Based Sensor System

S. Arora1,2, C.S. Lim1,2,3*, M. Kakran3, J.Y.A. Foo1,4, M.K. Sakharkar3, P. Dixit3,5, and J. Miao3

1 Biomedical Engineering Research Centre, Biomedical and Pharmaceutical Engineering Cluster, 50 Nanyang Drive, Research Techno Plaza, 6th Storey, XFrontiers Block, Singapore 637553
2 School of Chemical & Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, Singapore 637459
3 School of Mechanical & Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
4 Division of Research, Singapore General Hospital, Bowyer Block A Level 3, Outram Road, Singapore 169608
5 School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta GA 30332

Abstract — Medical practitioners today apply combination drug therapy for faster and more effective treatment of diseases. Every individual's reaction to certain drugs or drug combinations is different. Mixing and matching drugs is common, but the combinations can cause adverse complications in the body or, conversely, accelerate treatment. The uncertainty is high because drug interactions are non-linear, so optimization is required for the therapy to be useful. This paper discusses a microfluidic-based sensor system that can carry out simultaneous investigations of cell outcomes under exposure to a combination of drugs. It consists of a PDMS-based microfluidic chip that can generate serial combinations of two compounds in a single step, and a bioreactor to house the chip, designed to provide favourable conditions for cell growth. The results confirm that the chip can be used for long and short term monitoring of various cell types.

Keywords — Drug mix, drug interaction, microfluidics, PDMS
I. INTRODUCTION

The fields of biotechnology and microfluidics have witnessed tremendous developments in the past decade. The concept of the "miniaturized total chemical analysis system", or μTAS, was first introduced by Manz et al. in the early nineties [1]. Since then these systems have been explored for their innumerable advantages, such as low volume consumption of reagents and samples, high resolution and sensitivity, economy, and short analysis time. Many microfluidic devices that use combinatorial chemistry to synthesize compounds and/or to culture cells have been reported in the literature [2, 3]. In today's age, where infectious diseases are spreading rapidly, there is an urgent requirement for effective and long term solutions. Combination drug therapy is often useful, but sometimes one drug may reduce the therapeutic effect of another or even have adverse effects in the body due to drug interactions. One of the reasons may be the non-linearity of a drug's interactions with other drugs or
herbs, which is often difficult to predict for an individual. Hence it is necessary to understand the effects of combinations of drugs and/or herbs on various cell types. In this paper the authors describe a system to monitor and track combinatorial drug effects, providing a more effective means of assessing drug-drug effects on cell outcomes for various cell types (adhesion and suspension). It consists of a polydimethylsiloxane (PDMS) based microchip for drug mixing and cell culturing, and a bioreactor to provide favourable environmental conditions for cell growth in the chip. The results obtained demonstrate that the system has strong potential for high-throughput drug-drug analysis on cells.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 817–820, 2009 www.springerlink.com

II. DESIGN PRINCIPLE

The microchip was aimed at mixing several fluids together so that several mixing ratios can be produced rapidly. As a proof of concept, this work focused on using only two fluids in serial concentration steps of 20% for simplicity. Since microscale flow is laminar (Reynolds number of the order of unity), flow can be controlled readily, with

Re = ρVD/μ    (1)

where ρ is the fluid density, V is the fluid velocity, D is the characteristic length, and μ is the absolute dynamic fluid viscosity [4]. Typical velocity profiles are parabolic in pressure-driven flows. The flow can be described by the Stokes equation for incompressible fluids with no-slip boundary conditions. For rectangular channels of height h and width w, the pressure drop Δp over a length L is related to the volumetric flow rate Q by

Q = (w h³ Δp)/(12 μ L) · [1 − λ(h/w)]    (2)

where the approximate form

λ(h/w) ≈ 6(2⁵)/π⁵ · (h/w) ≈ 0.63 (h/w)

gives less than
10% error for h/w ≤ 0.7 [5]. These parameters were used as guidelines to ensure stable laminar flow in the designed microchannels. The macrowells of the chip were given a larger dimension (~4 mm, see figure 1) outside the micro range to ensure that mixing occurs, because the fluid flow is no longer laminar after it enters the wells from the microchannels. The rationale of microfluidic chip Design I (Figure 1a) is that the daughter channels have equal cross-sections and are positioned symmetrically with respect to the parent channel upstream. In this way, each of the channels receives the same amount of fluid and the flow rate is effectively halved. The principle of chip Design II (Figure 1b) is based on the distance traveled by a fixed amount of fluid from the parent channel to the daughter channel over a given period of time. Each well was filled by each of the two opposite channels from inlets A and B respectively.
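Equations (1) and (2) can be exercised with a short sketch; only the 200 μm channel width and 150 μm depth come from the text, while the fluid properties, velocity, pressure drop and channel length are assumed, water-like illustration values.

```python
import math

# Illustrative check of Eqs. (1) and (2). Only the channel width (200 um)
# and depth (150 um) come from the paper; fluid properties, velocity,
# pressure drop and channel length are assumed values.

def reynolds(rho, v, d, mu):
    """Eq. (1): Re = rho * V * D / mu."""
    return rho * v * d / mu

def flow_rate(w, h, dp, mu, length):
    """Eq. (2) with lambda(h/w) ~ 192/pi^5 * (h/w), ~10% accurate for
    h/w <= 0.7 (here h/w = 0.75, just past that bound, so Q is a rough
    estimate only)."""
    lam = 192.0 / math.pi ** 5 * (h / w)
    return (w * h ** 3 * dp) / (12.0 * mu * length) * (1.0 - lam)

rho, mu = 1000.0, 1.0e-3          # kg/m^3, Pa*s (water-like, assumed)
w, h = 200e-6, 150e-6             # m, from the text
v = 1.0e-3                        # m/s, assumed mean velocity
d_h = 2 * w * h / (w + h)         # hydraulic diameter as characteristic length
re = reynolds(rho, v, d_h, mu)
q = flow_rate(w, h, 1000.0, mu, 0.03)   # assumed 1 kPa drop over 30 mm
print(re)   # well below 1 -> laminar, as the text argues
print(q)    # volumetric flow rate, m^3/s
```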
III. METHODS AND MATERIALS

The microchip was fabricated by standard photolithography on a four-inch silicon wafer (JA Associates, Singapore) and soft lithography on PDMS (Sylgard 184 kit, Dow Corning) using SU-8 photoresist (Microchem, USA). This process is well documented in many books and journals and has been used for over a decade to develop PDMS-based microchips for biomedical applications [4, 6]. The material was chosen primarily for its biocompatibility and ease of use, and it makes the microchip suitable for all types of optical detection methods [7]. The final device is shown in figure 2.
Fig. 2 Final microchips compared to a standard 1 ml syringe. From left to right – Design II and I. The inlets and the wells were punched using 18 gauge blunt needle and 4 mm cork borer respectively. Dimensions: (width × height × thickness) 10 × 6 × 0.6 cm and 6 × 7.5 × 0.6 cm for design I and II respectively.
Fig. 1 (a) The layout of design I with the ratio of the fluid mixes denoted as xX + yY (x y, X = Y). Volumes of 16X and 16Y enter from inlets A and B respectively. The rectangular microchannels have a width of 200 μm except at the inlets. (b) The layout for design II with annotations. The percentage mixtures are indicated on the macro-wells as A-B% of fluid volume from inlets A and B respectively. The macrowells measure 4 mm in diameter. Vertical channels are 1 mm by 30 mm (excluding wells), horizontal channels are 2 mm by 31.5 mm and inlets are 5 mm by 2 mm.
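As a quick arithmetic check of the vertical-channel volume implied by the dimensions in the caption (at the 150 μm channel depth stated in the text):

```python
# Quick arithmetic check of the vertical-channel volume implied by the
# caption above: 1 mm wide x 30 mm long at the 150 um channel depth.
width_mm, length_mm, depth_mm = 1.0, 30.0, 0.150
volume_mm3 = width_mm * length_mm * depth_mm
volume_ul = volume_mm3            # 1 mm^3 is exactly 1 microlitre
print(volume_ul)                  # -> 4.5
```

This matches the 4.5 μl per vertical channel quoted in the text.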
The depth of the microchannels in both designs was 150 μm and the volume of each well was approximately 50 μl. For design II, the total volume of each vertical channel is 4.5 mm³, matched by 4.5 μl of fluid in each well. For fluid injection, tubes from both inlets were connected to standard 1 ml syringes. The microchips were tested for their performance using a water soluble dye, trypan blue, which was injected simultaneously with DI water from both inlets of the microchip using syringes and syringe pumps. The concentration was inspected by spectrophotometric studies at 590 nm wavelength using a DT 1000 CE (Analytical Instrument Systems Inc.) as the source and a USB2000 (Ocean Optics Inc.) as the detector. The experimental set-up is shown in figure 3. A control test was done by manually preparing the desired concentrations using the dye and water. A bioreactor system was designed to house the chip and provide favorable environmental conditions for cell growth and monitoring. The block diagram of the bioreactor is shown in figure 4. It consists of a box incubator to house the microchip, integrated with plate heaters and temperature sensors, programmed to maintain the internal temperature at
37 °C using microcontroller circuitry (an ATmega16 chip and an AVR® STK500).
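The authors do not publish their control firmware; as an illustrative sketch only, an on/off rule with hysteresis is the kind of loop an ATmega-class part could run to hold the incubator near 37 °C.

```python
# Illustrative only (not the authors' firmware): an on/off control rule
# with hysteresis, the kind of loop an ATmega-class microcontroller could
# run to hold the incubator near 37 C.
SETPOINT, BAND = 37.0, 0.5     # target temperature and half-width, deg C

def control_step(temp_c, heater_is_on):
    """Return the desired heater state given the current temperature."""
    if temp_c < SETPOINT - BAND:
        return True            # too cold: turn the heater on
    if temp_c > SETPOINT + BAND:
        return False           # too hot: turn the heater off
    return heater_is_on        # inside the band: keep the current state

print(control_step(35.0, False))   # -> True
print(control_step(38.0, True))    # -> False
```

The hysteresis band prevents the heater from chattering on and off around the setpoint.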
B. Case studies using adhesion cells

This experiment was performed to check the viability of the microchip and its material for the growth and adhesion of mammalian cells. Mouse neuroblastoma (Neuro-2A, N2A) cells were sub-cultured and planted in the wells of the chip and in standard flasks as controls to compare growth. Adhesion was compared by taking images from the test and control after 24 hours of incubation at 37 °C.

IV. RESULTS AND DISCUSSION

The result of the dye experiment performed to test the integrity of the microchip is shown in figure 5. The graph was obtained by plotting the averaged absorbance values at 590 nm wavelength versus the concentration of dye in each well of the microchips.
Fig. 3 Experimental setup to perform spectrometric studies on microchip
Fig. 5 The graph showing the comparison between the microchip results and those of the control (solid line). All test data points exhibit an average linear deviation of ± 5%.
Fig. 4 Block diagram illustrating the components of the bioreactor system. The microchip can be inserted in the box incubator behind the cover-head.
A. Case studies using suspension cells

As one prospective application of the versatile microchip, it was used to investigate the long term growth of cultures of Escherichia coli (E. coli strain MC 1061) in the chip wells using Luria-Bertani (LB) broth (Sigma-Aldrich, Singapore) [8]. The chip was incubated in the bioreactor for 18 hours and monitored by optical density (OD) measurement at 600 nm wavelength using the same setup shown in figure 3. The growth curve for the bacteria was obtained by plotting absorbance (OD value) versus time.
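The kind of batch growth curve described above can be mimicked with a logistic model; the parameters below are illustrative placeholders, not values fitted to the authors' data.

```python
import math

# Sketch: a logistic model of the kind of batch growth curve described
# above (OD600 vs time). Parameters are illustrative, not fitted values.
def od_logistic(t_h, od0=0.05, od_max=1.2, rate=0.6):
    """OD600 at time t (hours) under logistic growth (assumed parameters)."""
    return od_max / (1.0 + (od_max / od0 - 1.0) * math.exp(-rate * t_h))

# Sample at 3 h intervals over an 18 h run, matching the protocol above:
samples = [od_logistic(t) for t in range(0, 19, 3)]
print([round(v, 3) for v in samples])   # monotonic rise toward the plateau
```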
The results obtained were compared with the control for accuracy using the R² value (coefficient of determination) and the area under the curve (AUC). The R² value was used to determine the proportion of variability in a data set [9], whilst the AUC was used to establish coherence with the control. The results obtained from the microchip with design I exhibit a nearly linear graph compared to design II, as also inferred from its high R² value of 0.9773. Design I also presents a high coherence with the control set, with a small AUC difference of 25 square units compared to design II. Hence design I was the preferred choice for further investigations. The bacterial growth curve obtained by measuring the OD at 600 nm wavelength at intervals of 3 hours is shown
in figure 6. The growth curve closely matches standard bacterial growth curves [10] and suggests that the system can be used to monitor long term batch cultures of bacteria. Mammalian N2A cells were allowed to incubate for 24 hours before images were taken directly from the chip wells under the microscope and compared with images taken from the control flasks. The comparison is presented in figure 7; it clearly suggests good adhesion of the cells onto the bottom of the chip wells, as also seen in the flasks, and that the chip wells can sustain mammalian cells for long term monitoring.
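The R² and trapezoidal-AUC comparison described above can be sketched as follows; the data points are made-up placeholders, not the measured absorbances.

```python
# Sketch of the comparison metrics described above: coefficient of
# determination (R^2) against the control line, and trapezoidal area
# under the curve (AUC). The data points are made-up placeholders,
# not the measured absorbances.
def r_squared(y_obs, y_fit):
    mean = sum(y_obs) / len(y_obs)
    ss_res = sum((o - f) ** 2 for o, f in zip(y_obs, y_fit))
    ss_tot = sum((o - mean) ** 2 for o in y_obs)
    return 1.0 - ss_res / ss_tot

def auc_trapezoid(x, y):
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for x1, x2, y1, y2 in zip(x, x[1:], y, y[1:]))

conc = [0, 20, 40, 60, 80, 100]                  # dye concentration, %
control = [0.00, 0.20, 0.40, 0.60, 0.80, 1.00]   # ideal linear response
chip    = [0.01, 0.19, 0.42, 0.58, 0.81, 0.99]   # hypothetical chip readings

print(round(r_squared(chip, control), 4))        # -> 0.9982
print(auc_trapezoid(conc, control), auc_trapezoid(conc, chip))
```

A high R² indicates near-linearity against the control, while the AUC difference quantifies overall coherence with it, the same two criteria used to prefer design I.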
and cell screening to perform simultaneous drug-drug investigations on cells in vitro. This in turn reduces manpower, time and reagents while giving higher accuracy in a single step, compared to conventional mixing techniques. Microscale mixing problems are not encountered because the mixing occurs at the macroscale. Incorporating drugs from the inlets of the microchip would expose the cells directly to various drug-drug combinations that can be monitored over time under the microscope or by spectrometric studies.
ACKNOWLEDGEMENT The authors of this paper acknowledge the financial support provided by Academic Research Fund (RG 35/06), Ministry of Manpower, Singapore, Biomedical and Pharmaceutical Engineering Cluster and Micromachining Center at Nanyang Technological University, Singapore, for providing facilities to carry out fabrication and experiments.
REFERENCES

1. Manz, A., N. Graber, and H.M. Widmer, Miniaturized total chemical analysis systems: a novel concept for chemical sensing. Sensors and Actuators B: Chemical, 1990. B1(1-6): p. 244-248.
2. Bang, H., et al., Serial dilution microchip for cytotoxicity test. Journal of Micromechanics and Microengineering, 2004. 14(8): p. 1165-1170.
3. Chang, W.J., et al., Poly(dimethylsiloxane) (PDMS) and silicon hybrid biochip for bacterial culture. Biomedical Microdevices, 2003. 5(4): p. 281-290.
4. Saliterman, S.S., Fundamentals of BioMEMS and Medical Microdevices. 2006: SPIE. 610.
5. Stone, H.A., A.D. Stroock, and A. Ajdari, Engineering flows in small devices: microfluidics toward a lab-on-a-chip. Annu. Rev. Fluid Mech., 2004. 36: p. 381-441.
6. McDonald, J.C. and G.M. Whitesides, Poly(dimethylsiloxane) as a material for fabricating microfluidic devices. Accounts of Chemical Research, 2002. 35(7): p. 491-499.
7. Duffy, D.C., et al., Rapid prototyping of microfluidic systems in poly(dimethylsiloxane). Analytical Chemistry, 1998. 70(23): p. 4974-4984.
8. Ausubel, F.M., Current Protocols in Molecular Biology, ed. F.M. Ausubel, et al. 2007, John Wiley and Sons, Inc.
9. Pagano, M. and K. Gauvreau, Principles of Biostatistics. 2nd ed. 2000, Pacific Grove, CA: Duxbury.
10. Singleton, P., Bacteria in Biology, Biotechnology and Medicine. 6th ed. 2004: John Wiley and Sons, Ltd. 559.
Fig. 6 Growth curve obtained by plotting OD against time with ± 10% error. The error represents the average linear deviation of the OD.
Fig. 7 Images show N2A cells at 400× magnification. The left image was taken from the chip well (test) after 24 hrs of incubation and the right image was taken at the same time from the flask (control).
V. CONCLUSION

A microfluidic-based sensor system was designed and tested. Case studies were performed by sub-culturing different cell lines into the system and monitoring them over time. Of the two microchip designs tested with dye solution for accuracy, design I, based on symmetry, was superior in terms of reproducibility and accuracy. This microfluidic platform helped integrate the processes of mixing, pipetting
*Corresponding Author: Associate Professor Lim Chu Sing, Daniel
Division of Manufacturing Engineering, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798.
Email: [email protected]
Organic Phase Coating of Polymers onto Agarose Microcapsules for Encapsulation of Biomolecules with High Efficiency

J. Bai1, W.C. Mak3, X.Y. Chang1 and D. Trau1,2,*

1 Division of Bioengineering and 2 Department of Chemical & Biomolecular Engineering, National University of Singapore, Singapore
3 Department of Chemistry, Hong Kong University of Science and Technology, Hong Kong, China
* Corresponding Author at: Division of Bioengineering, National University of Singapore, Singapore 117576, Singapore. Tel: +65 65168052. Fax: +65 65163069. Email Address: [email protected] (D. Trau)
Abstract — The “Matrix-assisted Layer-by-Layer (LbL)” encapsulation technique is one approach to encapsulate biomolecules within hydrogel microcapsules. However, performing “matrix-assisted LbL” encapsulation usually results in low encapsulation efficiency of water soluble biomolecules as these biomolecules leach out into the aqueous phase during the LbL process. To achieve high encapsulation efficiency, the idea of LbL in organic solvents, termed as Reverse-Phase (RP)-LbL, is extended into the “matrix-assisted LbL” encapsulation technique. In this work, agarose microbeads are conferred stability in an organic solvent by fabrication of agarose-based colloidosomes. Next, non-ionized polyelectrolytes (non-ionized poly(allylamine) and non-ionized poly(acrylic acid)) are used to coat the surface of these colloidosomes in organic phase to prevent leakage of preloaded biomolecules from the interior of the colloidosomes. This can allow encapsulation of water soluble biomolecules in agarose microcapsules with high efficiency. Keywords — Layer-by-Layer, colloidosome, hydrogel, polymer capsules, polyelectrolytes
I. INTRODUCTION

Microcapsules, with their micrometer dimensions, possess a large surface area to volume ratio and therefore allow efficient diffusion of material into and out of the microcapsules. This allows microcapsules to serve as excellent carriers of biomolecules in many fields, for example therapeutics [1,2], micro-bioreactors [3], and bioanalytical applications [4,5]. Hence, different methods to fabricate microcapsules that can encapsulate biomolecules have been developed, including colloidosomes [6] and the templated Layer-by-Layer (LbL) polyelectrolyte self-assembly technique [7]. Colloidosomes are capsules formed from microparticles self-assembled at an interface, for example a liquid/liquid interface such as oil/water [8]. However, the harsh conditions required for fixation of the microparticles, such as a chemical cross-linking reagent or sintering at a temperature of 100 °C [9,10], limit their application for encapsulation of biomolecules. Another approach to encapsulate biomolecules is the "matrix-assisted LbL" polyelectrolyte self-assembly technique. Using this technique, water soluble biomolecules
have been encapsulated within hydrogel matrix spheres [11]. The hydrogel spheres contain almost 98% water and can provide both a physiological environment for the encapsulated biomolecules and mechanical support for maintaining the shape of the microcapsules. However, low encapsulation efficiency of the pre-loaded biomolecules is usually obtained, as the water soluble biomolecules leach out from the spheres during the aqueous LbL process. As existing hydrogel microcapsule encapsulation methods do not allow high encapsulation efficiency, we extend the idea of LbL in organic phase, termed Reverse-Phase LbL (RP-LbL) [12], into the "matrix-assisted LbL" encapsulation technique to achieve high encapsulation efficiency of water soluble biomolecules.

II. EXPERIMENTAL SECTION

A. Materials

1-Butanol anhydrous 99.8%, mineral oil and N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC) were purchased from Sigma. Poly(allylamine) (PA) MW 65,000 Da and poly(acrylic acid) (PAA) MW 450,000 Da were purchased from Aldrich. Span® 80, Rhodamine 123 and N-hydroxysuccinimide (NHS) were purchased from Fluka. Carboxylate-polystyrene (carboxylate-PS) microparticles with a diameter of 1.0 μm and fluorescently labeled carboxylate-PS microparticles with a diameter of 1.0 μm were purchased from Polysciences Inc. PS microparticles with a diameter of 20 μm were purchased from microParticles GmbH. Ethanol was purchased from Fisher Scientific. Low melting agarose was purchased from Promega. PBS was purchased from 1st BASE. All materials were used as received. Double distilled water (d.d. H2O) was distilled using a Fistreem™ Cyclone™ machine.

B. Preparation of Colloidosomes

500 μl of carboxylate-PS microparticle suspension as purchased was aliquoted and centrifuged (4,000 rpm, 6 minutes) and the supernatant was subsequently removed. Ethanol was
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 821–824, 2009 www.springerlink.com
added and the microparticles were sonicated for 30 minutes to remove any existing surfactants. After sonication, the microparticles were centrifuged (2,000 rpm, 3 minutes) and the supernatant subsequently removed. This sonication procedure was repeated a total of 3 times. The microparticles were dried, weighed, and d.d. H2O was added to make a suspension of 20% w/v microparticles. The carboxylate-PS microparticles were subsequently used for fabrication of the colloidosomes. All necessary reagents were pre-warmed and kept at a temperature of 45 °C. A molten solution of 4% w/v low-melting agarose was prepared and then mixed with the carboxylate-PS microparticles (20% w/v) to prepare a mixture with a final concentration of 2% w/v carboxylate-PS microparticles in 2% w/v agarose. The agarose/carboxylate-PS mixture was added to pre-warmed mineral oil containing 0.1% Span 80 and stirred vigorously for 15 minutes to form water-in-oil (w/o) emulsion droplets. The emulsion droplets were then cooled to 25 °C while stirring for another 10 minutes to allow solidification of the molten agarose core and formation of the colloidosomes. The colloidosomes were further stabilized by placing them at -20 °C for 5 minutes.

C. Preparation of Agarose Microbeads

Agarose microbeads were prepared by procedures similar to those for the colloidosomes, except that no carboxylate-PS microparticles were used in the fabrication step.

D. Self Assembly of Non-ionized Polyelectrolytes (niPolyelectrolytes) onto Colloidosomes by Reverse-Phase LbL (RP-LbL)

The RP-LbL coating process was performed entirely in 1-butanol with non-ionized poly(allylamine) (niPA) and non-ionized PAA (niPAA) as the niPolyelectrolytes. niPA was prepared by drying the PA solution and dissolving it in an appropriate amount of 1-butanol to obtain a concentration of 1 mg/mL. niPAA was prepared by dissolving PAA in 1-butanol to a concentration of 5 mg/mL.
To transfer the colloidosomes from mineral oil to 1-butanol, an equal amount of ethanol was shaken with the colloidosomes in mineral oil and centrifuged (1,000 rpm, 2 minutes). The mineral oil and ethanol were discarded, and the colloidosomes were further washed two times with 1-butanol by centrifugation (1,000 rpm, 2 minutes) and redispersion. 1.5 mL of niPA in 1-butanol was used as the initial layer, followed by 1.5 mL of niPAA in 1-butanol as the subsequent layer. Each niPolyelectrolyte was coated for 15 minutes with gentle vortexing, and excess niPolyelectrolyte was washed away by centrifugation (1,000 rpm, 2 minutes) and redispersion three times with 1.5 mL of 1-butanol before coating of the next layer.
E. Self-assembly of Non-ionized Polyelectrolytes (niPolyelectrolytes) onto PS Microparticles by Reverse-Phase LbL (RP-LbL)

The RP-LbL coating of niPolyelectrolytes onto PS microparticles followed the same process as the coating of colloidosomes, except that PS microparticles were used in place of colloidosomes.

F. Preparation of Fluorescent niPolyelectrolyte

PAA-Rhodamine 123 conjugate was synthesized with a conjugation ratio of 1:10 (Rhodamine 123 molecules : PAA monomers) using EDC/NHS, purified by dialysis (MWCO 8,000 Da) and dried at 50 °C. The dry niPAA-Rhodamine 123 was dissolved in 1-butanol to a concentration of 0.5 mg/mL.

III. RESULTS & DISCUSSION

A. Coating of Non-ionized Polyelectrolytes (niPolyelectrolytes) onto PS Microparticles by Reverse-Phase LbL (RP-LbL)

To study the RP-LbL coating of niPolyelectrolytes, a fluorescent-labeled niPolyelectrolyte (niPAA-Rhodamine 123) was used in combination with niPAA to coat 20 µm PS microparticles. niPA was coated as the odd layers while niPAA/niPAA-Rhodamine 123 was coated as the even layers. Figure 1 shows the fluorescence intensity of PS microparticles as a function of the number of coated layers. The fluorescence intensity of the PS microparticles after each layer coating was quantified by capturing the fluorescence image
Fig. 1 Fluorescence intensity (pixel value) of PS microparticles against the number of layers coated onto the microparticles via the RP-LbL technique. PAA-Rhodamine 123 was used during coating of even layers as proof of multilayer deposition.
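The per-layer quantification described in the text (mean pixel value of each fluorescence image, measured with ImageJ) can be sketched in code. The snippet below is a minimal stand-in that uses synthetic pixel data rather than real images; the 40-unit intensity gain per fluorescent layer is a hypothetical illustration, not a measured value.

```python
import random

def mean_fluorescence(pixels, threshold=10.0):
    """Mean pixel value of the particle region, excluding background pixels
    below `threshold` (mimics an ImageJ-style mean-intensity measurement)."""
    fg = [p for p in pixels if p > threshold]
    return sum(fg) / len(fg) if fg else 0.0

random.seed(0)
intensities = []
signal = 0.0
for layer in range(1, 7):
    if layer % 2 == 0:          # even layer: fluorescent niPAA-Rhodamine 123
        signal += 40.0          # hypothetical intensity gain per bi-layer
    # synthetic stand-in for a captured 64x64 fluorescence image
    pixels = [min(255.0, max(0.0, random.gauss(signal, 5.0)))
              for _ in range(4096)]
    intensities.append(mean_fluorescence(pixels))

print([round(i) for i in intensities])
```

As in Figure 1, the mean intensity steps up after each even (fluorescent) layer and stays roughly flat after the odd (niPA) layers.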
IFMBE Proceedings Vol. 23
Organic Phase Coating of Polymers onto Agarose Microcapsules for Encapsulation of Biomolecules with High Efficiency
of the microparticles and measuring the pixel values of each image using ImageJ. The fluorescence intensity of the microparticles increases after each bi-layer coating; this increase results from the accumulation of niPAA-Rhodamine 123 on the microparticles after each even-numbered layer. This result demonstrates that niPA and niPAA can be stepwise-coated onto PS microparticles by RP-LbL. As further evidence for RP-LbL coating of niPolyelectrolytes onto the PS microparticles, the zeta potential of the microparticles was measured after each layer coating. All measurements were performed after transferring the PS microparticles from 1-butanol to 0.01x PBS. Figure 2 shows the zeta potential of the PS microparticles after coating of each niPolyelectrolyte layer. The alternating signs of the zeta potential, from -30 mV for the "bare" PS microparticles to +7, +11 and +12 mV after deposition of the niPA layers and -50, -44 and -62 mV after the niPAA layers, provide evidence for the coating of niPolyelectrolytes onto PS microparticles in 1-butanol.

Fig. 2 Zeta potential as a function of layer number for coating of niPolyelectrolytes onto PS microparticles in 1-butanol. Multilayer deposition is demonstrated by the alternating signs of the zeta potential.

Fig. 3 Optical images of A) agarose microbeads aggregated in 1-butanol and B) fabricated colloidosomes in 1-butanol. Colloidosomes are stably dispersed and show no aggregation.

B. Stability & Morphology of Colloidosomes

To improve the mechanical stability of the microcapsules and provide a good matrix material to entrap biomolecules, the microcapsules were fabricated on agarose microbeads, a hydrogel template. Being hydrophilic, however, the beads are not stable in anhydrous 1-butanol and begin to aggregate (Figure 3A). Hence, microparticles were included in the emulsification phase of agarose microbead fabrication to produce hydrogel-based colloidosomes [8]. The microparticles confer dispersion stability on the agarose microbeads in 1-butanol and allow reverse-phase (RP) LbL deposition of polymers. Colloidosomes in 1-butanol were found to retain their spherical shape (Figure 3B), and colloidosomes fabricated with fluorescent-labeled carboxylate-PS beads exhibited a dense population of microparticles lining the surface of the colloidosomes (Figure 4).
Fig. 4 Confocal Image of colloidosomes fabricated with fluorescent labeled carboxylate PS microparticles. A fluorescence ring can be observed around each colloidosome.
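The charge-reversal argument from the zeta-potential data can be checked mechanically: each successive layer should flip the sign of the surface potential. A small sketch using the values quoted in the text (bare particles, then alternating niPA and niPAA layers):

```python
# Zeta potentials (mV) in deposition order, from the values quoted in the
# text: bare PS, then niPA (+7, +11, +12) interleaved with niPAA (-50, -44, -62).
zeta_mv = [-30, 7, -50, 11, -44, 12, -62]

def signs_alternate(values):
    """True if every consecutive pair flips sign - the signature of
    layer-by-layer charge reversal."""
    return all(a * b < 0 for a, b in zip(values, values[1:]))

print(signs_alternate(zeta_mv))  # True: each layer reverses the surface charge
```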
In summary, the inclusion of microparticles in the formation of colloidosomes allows stable dispersion of agarose microbeads in 1-butanol. This stability enables RP-LbL coating of niPolyelectrolytes onto the agarose microbeads for high-efficiency encapsulation of water-soluble biomolecules.

C. Coating of Non-ionized Polyelectrolytes (niPolyelectrolytes) onto Colloidosomes by Reverse-Phase LbL (RP-LbL)

High-efficiency encapsulation of preloaded water-soluble biomolecules within hydrogel microcapsules requires little or no loss of biomolecules during the LbL
coating process. To minimize the loss of biomolecules, a suitable coating system for the colloidosomes is needed, and RP-LbL coating of niPolyelectrolytes onto colloidosomes is one suitable approach. The coating of niPolyelectrolytes onto colloidosomes via RP-LbL was studied using niPAA-Rhodamine 123, as described above for the coating of niPolyelectrolytes onto PS microparticles. Similarly, niPA was coated as the odd layers and niPAA/niPAA-Rhodamine 123 as the even layers, and the fluorescence intensity of the colloidosomes was quantified in the same fashion. Figure 5 shows the fluorescence intensity of the colloidosomes increasing after each bi-layer coating. This increase in fluorescence intensity is due to an accumulation of niPAA-Rhodamine 123 after each bi-layer coating and shows that niPA and niPAA can be coated onto agarose-based colloidosomes by RP-LbL.

Fig. 5 Fluorescence intensity (pixel value) of colloidosomes against the number of layers coated onto the colloidosomes via the RP-LbL technique. PAA-Rhodamine 123 was used during coating of even layers as proof of multilayer deposition.

IV. CONCLUSIONS

niPA and niPAA have been identified as non-ionized polymers suitable for RP-LbL coating onto colloidosomes. By coupling this organic phase coating system with hydrogel-based colloidosomes, high-efficiency encapsulation of water-soluble biomolecules within agarose hydrogel templates has been made possible. We believe that our technology can contribute significantly to the development of microcapsule-based bioreactors and bioanalytical systems.

ACKNOWLEDGMENT

This work was supported by Research Grant R-397-000026-112 from the National University of Singapore.

REFERENCES
1. Dai Z, Heilig A, Zastrow H, Donath E, Möhwald H (2004) Novel formulations of vitamins and insulin by nanoengineering of polyelectrolyte multilayers around microcrystals. Chem Eur J 10(24):6369-6374
2. De Geest B G, Vandenbroucke R E, Guenther A M, Sukhorukov G B, Hennink W E, Sanders N N, Demeester J, De Smedt S C (2006) Intracellularly degradable polyelectrolyte microcapsules. Adv Mater 18(8):1005-1009
3. Mak W C, Cheung K Y, Trau D (2008) Diffusion controlled and temperature stable microcapsule reaction compartments for high-throughput microcapsule-PCR. Adv Funct Mater, in press
4. Chinnayelka S, McShane M J (2005) Microcapsule biosensors using competitive binding resonance energy transfer assays based on apoenzymes. Anal Chem 77(17):5501-5511
5. Brown J Q, Srivastava R, McShane M J (2005) Encapsulation of glucose oxidase and an oxygen-quenched fluorophore in polyelectrolyte-coated calcium alginate microspheres as optical glucose sensor systems. Biosens Bioelectron 21(1):212-216
6. Laib S, Routh A F (2008) Fabrication of colloidosomes at low temperature for the encapsulation of thermally sensitive compounds. J Colloid Interface Sci 317(1):121-129
7. Mak W C, Cheung K Y, Trau D (2008) The influence of different polyelectrolytes on Layer-by-Layer microcapsule properties - encapsulation efficiency, colloidal and temperature stability. Chem Mater 20(17):5475-5484
8. Cayre O J, Noble P F, Paunov V N (2004) Fabrication of novel colloidosome microcapsules with gelled aqueous cores. J Mater Chem 14(22):3351-3355
9. Velev O D, Furusawa K, Nagayama K (1996) Assembly of latex particles by using emulsion droplets as template. 1. Microstructured hollow spheres. Langmuir 12(10):2374-2384
10. Hsu M F, Nikolaides M G, Dinsmore A D, Bausch A R, Gordon V D, Chen X, Hutchinson J W, Weitz D A, Marquez M (2005) Self-assembled shells composed of colloidal particles: fabrication and characterization. Langmuir 21(7):2963-2970
11. Srivastava R, Brown J Q, Zhu H, McShane M J (2005) Stable encapsulation of active enzyme by application of multilayer nanofilm coatings to alginate microspheres. Macromol Biosci 5(8):717-727
12. Beyer S, Mak W C, Trau D (2007) Reverse-phase LbL-encapsulation of highly water soluble materials by Layer-by-Layer polyelectrolyte self-assembly. Langmuir 23(17):8827-8832
LED Based Sensor System for Non-Invasive Measurement of the Hemoglobin Concentration in Human Blood U. Timm1, E. Lewis1, D. McGrath2, J. Kraitl3 and H. Ewald3 1
1 University of Limerick, Optical Fibre Sensors Research Center, Limerick, Ireland 2 University of Limerick, Graduate Medical School, Limerick, Ireland 3 University of Rostock, Institute of General Electrical Engineering, Rostock, Germany

Abstract — In the perioperative area, the period before and after surgery, it is essential to measure diagnostic parameters such as oxygen saturation, hemoglobin (Hb) concentration and pulse. The Hb concentration in human blood is an important parameter for evaluating the physiological condition. By determining the Hb concentration it is possible to detect imminent postoperative bleeding and to monitor autologous retransfusions. Currently, invasive methods are used to measure the Hb concentration: blood is taken and analyzed. The disadvantage of this method is the delay between the blood collection and its analysis, which does not permit real-time patient monitoring in critical situations. A non-invasive method allows pain-free online patient monitoring with minimal risk of infection and facilitates real-time data monitoring, allowing immediate clinical reaction to the measured data.

Keywords — hemoglobin, non-invasive, photoplethysmography
I. INTRODUCTION

In the perioperative area, the period before and after surgery, it is essential to measure diagnostic parameters such as oxygen saturation, hemoglobin (Hb) concentration and pulse [1]. The Hb concentration in human blood is an important parameter for evaluating the physiological condition. By determining the Hb concentration it is possible to detect imminent postoperative bleeding and to monitor autologous retransfusions. Currently, invasive methods are used to measure the Hb concentration: blood is taken and analyzed. The disadvantage of this method is the delay between the blood collection and its analysis, which does not permit real-time patient monitoring in critical situations. A non-invasive method allows pain-free online patient monitoring with minimal risk of infection and facilitates real-time data monitoring, allowing immediate clinical reaction to the measured data. The absorption of whole blood in the visible and near-infrared range is dominated by the different hemoglobin derivatives and the blood plasma, which consists mainly of water [2]. It is well known that pulsatile changes of blood volume in tissue can be observed by measuring the transmission or reflection of light through it. This diagnostic method is called photoplethysmography (PPG). The newly
developed optical sensor system uses three wavelengths for the measurement of the Hb concentration, oxygenation and pulse. This non-invasive multi-spectral measurement method is based on near-monochromatic light, emitted by light emitting diodes (LEDs) in the range of 600 nm to 1400 nm, radiated through an area of skin on the finger. The sensor assembled in this investigation is fully integrated into a wearable finger clip and allows full wireless operation through an on-board miniature wireless-enabled microcontroller.

II. MEASUREMENT METHOD

The newly developed sensor device allows a non-invasive continuous measurement of hemoglobin concentration, oxygen saturation and pulse based on a multi-spectral measurement method. The area of skin on the finger is transilluminated by monochromatic light emitted by LEDs in the range from 600 nm to 1400 nm. The arteries contain more blood during the systolic phase of the heart than during the diastolic phase, due to an increased diameter of the arteries during systole; this effect occurs only in arteries, not in veins [3]. For this reason the absorbance of light in tissue with arteries increases during systole, because the amount of hemoglobin (the absorber) is higher and the light passes through a longer optical path length d in the arteries. These intensity changes are the so-called PPG waves [4]. The time-varying part allows the distinction between the absorbance due to venous blood (DC part) and that due to the pulsatile component of the total absorbance (AC part). Figure 1 shows the absorption model for light penetrating tissue to sufficient depth to encounter arterial blood. Upon interaction with the tissue, the transmitted light is detected non-invasively by photodiodes. Suitable wavelengths were selected for the analysis of the relative hemoglobin concentration change and for the SpO2 measurement. During the measurement of hemoglobin, the absorption should not depend on the oxygen saturation.
This means that the measurement is only practicable at so-called isosbestic points, where the extinction coefficients of HHb and HbO2 are identical. One such point is
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 825–828, 2009 www.springerlink.com
III. SENSOR DEVICE

The sensor system being developed consists of hardware modules including appropriate light sources and receivers, a microcontroller and a wireless interface. A key component of the sensor system is the low-power microcontroller MSP430F1611 [8].
Figure 1: Absorption Spectra of Hemoglobin and Water
known to exist around a wavelength of 810nm [5]. Since red blood cells are mainly filled with water, the absorption coefficient of blood is similar to that of a solution of HHb, HbO2 and water (H2O), and the absorption of HHb and HbO2 is indistinguishable from the absorption of H2O above 1200nm. It is therefore necessary to select a second wavelength in this region above the diagnostic window [6] (Figure 2).
Figure 2: Model of Tissue Layer
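The separation into static (DC) and pulsatile (AC) components described above can be sketched numerically. The snippet below uses a synthetic transmission signal (arbitrary units, hypothetical 1.2 Hz pulse) rather than real sensor data:

```python
import math

fs = 100.0   # sampling rate in Hz (hypothetical)
n = 1000
# static tissue/venous component (10.0) plus a small arterial pulsation (0.3)
signal = [10.0 + 0.3 * math.sin(2 * math.pi * 1.2 * i / fs) for i in range(n)]

dc = sum(signal) / n               # DC part: mean transmission level
ac = max(signal) - min(signal)     # AC part: peak-to-peak pulsatile swing
ratio = ac / dc                    # AC/DC ratio of this PPG channel

print(round(dc, 2), round(ac, 2), round(ratio, 3))
```

The AC/DC ratio of one wavelength channel is the building block of the quotient H used in the hemoglobin determination of equation (1).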
Finally, the determination of the hemoglobin concentration is performed at the wavelengths λ1 = 810 nm and λ2 = 1300 nm. The AC/DC values at both wavelengths lead to the quotient H (equation 1):

H = (I_AC/I_DC)_810nm / (I_AC/I_DC)_1300nm = (ln 10 · ε_Hb(810 nm) · c_Hb) / (µ_H2O(1300 nm) · 64500 g/mol)   (1)

This theoretical equation yields incorrect absolute values in practice, but the functional relationship c_Hb = f(H) holds [7]. Thus, the hemoglobin sensor needs calibration using an external blood stream model with real blood.

The microcontroller enables software-controlled and time-multiplexed operation of the light sources and receiver channels. The mean value is calculated and the dark current subtracted in software, and the data are transmitted via a serial RS232 or wireless interface. With application software programmed in LabVIEW it is possible to handle the data on a laptop or PC. The light sources, three LEDs with centre wavelengths of λ1 = 670 nm, λ2 = 810 nm and λ3 = 1300 nm, are installed in the upper part of the clip. The pulsed LED currents are controlled by the microcontroller, which allows a change of the light source intensity. To detect the transmission signals of the LEDs, a silicon photodiode with a spectral range of 400 nm-1100 nm was used. For measuring at 1300 nm, an additional indium gallium arsenide (InGaAs) photodiode with a spectral range of 1000 nm-1700 nm is selected. Figure 4 shows a photo of the sensor device.
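A sketch of how the quotient H of equation (1) can be evaluated and then mapped to a hemoglobin concentration through calibration, as the text requires. All numeric values, including the calibration constants, are hypothetical placeholders:

```python
def quotient_h(acdc_810, acdc_1300):
    """Quotient H of equation (1): AC/DC value at 810 nm divided by the
    AC/DC value at 1300 nm."""
    return acdc_810 / acdc_1300

# Equation (1) does not yield correct absolute values in practice, so H is
# mapped to a concentration via an empirical calibration (constants below
# are invented placeholders, not calibration results from the paper).
CAL_SLOPE = 60.0     # g/l per unit H (hypothetical)
CAL_OFFSET = -10.0   # g/l (hypothetical)

def hemoglobin_g_per_l(acdc_810, acdc_1300):
    return CAL_SLOPE * quotient_h(acdc_810, acdc_1300) + CAL_OFFSET

h = quotient_h(0.060, 0.024)            # example AC/DC ratios
chb = hemoglobin_g_per_l(0.060, 0.024)
print(round(h, 2), round(chb, 1))
```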
Figure 3: Functional Diagram Sensor Device
Figure 4: Sensor Head with Photodiodes and Light Sources
IV. BLOOD STREAM MODEL

The human circulatory system is a fast regulatory transport system. This closed circuit consists of parallel and serially connected blood vessels, for which the heart acts as a circulation pump. The necessary blood circulation is maintained by the contraction of the muscular cardiac wall. The heart is made up of a left and a right half, each of them a muscular hollow organ. In functional and morphological terms the circulatory system consists of two serial sections, and the heart can be considered as two serial pumps. The right side of the heart receives the deoxygenated blood from the body and transports it to the lungs (pulmonary circuit), where it is re-oxygenated. The oxygenated blood arrives at the left side of the heart, from where it is distributed to the various organs (systemic circuit). The oxygenated blood in the systemic circuit is pumped from the left ventricle into the aorta and the main arteries, whose branches lead to the tissues and organs. After further branching into arterioles and capillaries, oxygen and nutrients are delivered to the tissues and organs of the body, and carbon dioxide and intermediate catabolic products are taken up. Finally the capillaries terminate in venules and veins, which transport the deoxygenated blood back to the right atrium. Figure 5 shows the measuring point for the developed hemoglobin sensor, which is located on the fingertip.
Figure 5: Measuring Point on Circulatory System

Based on the human circulatory system, a blood stream model has been designed, which is necessary for validation of the measurement method and of the newly developed optical sensor system. With the help of the model, a controlled variation of the blood parameters hemoglobin concentration and oxygenation is feasible; both parameters can be changed separately or simultaneously on circulating blood. Measurements on non-circulating blood are not representative of the typical physiological state, so a setup for optical measurement on circulating blood is necessary. Using a circulation system based on a roller pump, the blood is circulated through the closed circuit and a pulsatile blood volume is generated. The schematic diagram of the system is shown in Fig. 6.

Figure 6: Functional Diagram of the Blood Stream Model

With this system, measurements allowing the accurate assessment of the pulsatile blood portion (AC/DC signal ratio) are possible. The input point is at the same time the blood sample extraction point, which is needed for the reference measurement. The blood gassing takes place in the oxygenator, which allows the precisely controlled introduction of oxygen or carbon monoxide, and a separate heated water circuit allows elevation of the blood temperature to 36 °C to 37 °C.

V. RESULTS
Initial measurements of the transmission signals at the fingertip have shown variations in light absorption due to the arterial pulse at all three wavelengths. The signal at 1300nm is especially weak and requires further effort in signal processing. However, the signal quality is sufficient to allow analysis of the signals and calculation of the relative attenuation coefficient of the arterial blood. From the signal components at 1300nm, an evaluation of the relative portions of hemoglobin and water in blood is feasible [9]. This measurement technique requires a pulsatile signal for the calculation of the relative attenuation coefficient. Decreases of the signal amplitude, and therefore of the signal-to-noise ratio, caused by vasoconstriction at the extremities are a potential problem: small signal amplitudes can give rise to inaccurate results [10]. Figure 7 shows the signals at 1300nm and 810nm when light is transmitted through the finger.
Figure 7: PPG Wave at 810nm and 1300nm

The developed sensor device is suitable for non-invasive continuous online monitoring of one or more biological parameters. The advantage of this measuring technique is its independence from blood samples, which minimizes the risk of infection and makes it possible to react immediately to the measured data.

VI. CONCLUSIONS

In this paper a non-invasive device for measuring the hemoglobin concentration based on multi-wavelength light absorption has been reported. The newly developed sensor device is able to measure PPG signals at three independent wavelengths continuously. Future work involves validation of the sensor measurement based on further measurements using an experimental blood stream model and optimisation of the sensor device.

REFERENCES
1. Ahrens T, Rutherford K (1993) Essentials of Oxygenation: Implication for Clinical Practice. Jones & Bartlett
2. Matcher S J, Cope M, Delpy D T (1993) Use of the water absorption spectrum to quantify tissue chromophore concentration changes in near-infrared spectroscopy. Phys Med Biol 38:177-196
3. Woods A M, Queen J S, Lawson D (1991) Valsalva maneuver in obstetrics: The influence of peripheral circulatory changes on function of the pulse oximeter. Anesth Analg 73:765-771
4. Lock I, Jerov M, Scovith S (2003) Future of modeling and simulation. IFMBE Proc vol. 4, World Congress on Med Phys & Biomed Eng, Sydney, Australia, pp 789-792
5. Kamal A A R, Harness J B, Irving G, Mearns A J (1989) Skin photoplethysmography - a review. Comp Meth Progr Biomed 28:257-269
6. Kollias N (1999) Tabulated molar extinction coefficient for hemoglobin in water. Wellman Laboratories, Harvard Medical School, Boston
7. Kraitl J, Ewald H (2008) Results of hemoglobin concentration measurements in whole blood with an optical non-invasive method. Photon08, Optics and Photonics, IOP Conference, Edinburgh, UK, p 77
8. Timm U, Lewis E, Ewald H (2008) Non-invasive wireless sensor system for measurement of hemoglobin concentration in human blood. 6th International Forum Life Science Automation (LSA), Rostock, Germany, ISBN: 978-3-938042-17-54
9. Kraitl J, Ewald H: Non-invasive measurement of hemoglobin concentration with a pulse photometric method. Proceedings, 5th International Forum Life Science Automation, Washington DC, USA
10. Yoshida I, Shimada Y, Oka N, Hamaguri K (1984) Effects of multiple scattering and peripheral circulation on arterial oxygen saturation measured with pulse-type oximeter. Med Biol Eng Comput 22:475-478
Amperometric Hydrogen Peroxide Sensors with Multivalent Metal Oxide-Modified Electrodes for Biomedical Analysis

Tesfaye Waryo1,5,*, Petr Kotzian2, Sabina Begić3, Petra Bradizlova2, Negussie Beyene4, Priscilla Baker5, Boitumelo Kgarebe5, Emir Turkušić3, Emmanuel Iwuoha5, Karel Vytřas2 and Kurt Kalcher1 1
Institute of Chemistry-Analytical Chemistry, Karl-Franzens University of Graz, Graz, Austria 2 Department of Analytical Chemistry, University of Pardubice, Pardubice, Czech Republic 3 Department of Chemistry, University of Sarajevo, Sarajevo, Bosnia and Herzegovina 4 Department of Chemistry, Addis Ababa University, Addis Ababa, Ethiopia 5 SensorLab, Department of Chemistry, University of the Western Cape, P/Bag X17, Bellville 7535, South Africa Abstract — An overview of publications (1991 – 2007) on amperometric sensors for hydrogen peroxide (H2O2) is presented, with emphasis on carbon electrodes modified with multivalent-metal oxides as electro-catalysts and applications in biosensors. Keywords — Hydrogen peroxide; amperometric sensors; amperometric biosensors; metal oxides; electrocatalyst; carbon electrodes.
I. INTRODUCTION

The development of methods for the detection and quantification of hydrogen peroxide (H2O2) in environmental and biological samples is invaluable, particularly as it also finds applications in the indirect determination of several substances of clinical, dietary, and environmental importance [1-4]. H2O2 is a by-product of several biochemical reactions and, as a result, it is ubiquitous in natural biological systems. It has been described as quantitatively the most dominant peroxide in brain cells [4]. Formed via various routes, such as the superoxide dismutase-catalyzed disproportionation of the superoxide ion [5] and the oxidase-catalyzed oxidation of biomolecules with oxygen [4,6-8], it is eliminated from the body by the action of catalase and peroxidase enzymes; peroxidases rely on molecular reducing agents like ascorbates and glutathione [9,10]. Thus, while H2O2 is by itself an interesting analyte, the enzymatic reactions listed above in general, and the oxidase-catalyzed systems in particular, serve as selective biochemical recognition reactions linking H2O2 with a number of biomolecules of clinical and dietary interest such as amines, uric acid, glucose, glutamate, lactate, cholesterol, and alcohols [11-15]. Therefore, the development of H2O2-detection methods is a catch-all strategy in bioanalytical chemistry. H2O2 can be assayed using various chemical and instrumental methods such as volumetric redox titration [16], spectrophotometry [17], chemiluminometry [18], fluorimetry [19], amperometry/voltammetry and potentiometry
[20,21]. In vivo and in vitro H2O2 determination methods based on indirect spectrophotometric and fluorimetric detection were reviewed by Tarpey and co-workers [22]. Brief reviews on amperometric/voltammetric sensors for H2O2 are widely available [23-27]. Amperometric sensors are interesting since they are suitable for designing portable and multiplexed chemical sensor devices with background- and interference-correction capability. Such H2O2 sensors transduce the concentration of H2O2 into an electrical current signal via a direct or an indirect electrochemical reduction (cathodic) or oxidation (anodic) of H2O2. It is also easy to tailor amperometric sensors to different real samples by using materials with selective recognition and signal amplification properties towards the analyte of interest [28,29]. Multivalent metal oxides are interesting materials because of the wide spectrum of structures and properties they exhibit. We give an overview of amperometric H2O2 sensors in which these oxides function as electrochemical recognition and signal amplification components. Biosensors developed on such sensors for some clinical analytes are also highlighted.

II. CHEMICALLY MODIFIED ELECTRODES IN CONTEXT

In biomedical analysis, of particular interest is the development of selective amperometric biosensors for in-vivo and in-vitro applications. Such sensors must be able to function at physiological pH in the highly complex matrix of undiluted body fluids. For instance, successful blood serum diagnostics can be anticipated only if the biosensor is built on an H2O2 sensor that responds insignificantly to the other common electroactive components at their respective serum concentrations, i.e., less than 1 mM uric acid, ascorbic acid, and acetaminophen; a few mM xanthine; and 20-40 µM hypoxanthine, Fe, and Cu species [30,31].
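The figures of merit discussed throughout this review (sensitivity, detection limit, linear range) are typically extracted from an amperometric calibration series. A sketch with invented calibration data, using the common 3·σ(blank)/slope criterion for the detection limit:

```python
# Hypothetical calibration of an amperometric H2O2 sensor:
# steady-state current (nA) versus H2O2 concentration (µM).
conc = [0.0, 10.0, 20.0, 50.0, 100.0]      # µM, invented standards
current = [0.2, 5.1, 10.3, 25.2, 50.4]     # nA, invented readings

n = len(conc)
mx, my = sum(conc) / n, sum(current) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, current))
         / sum((x - mx) ** 2 for x in conc))   # sensitivity, nA/µM
intercept = my - slope * mx

sigma_blank = 0.05                  # nA, assumed blank noise
dl = 3 * sigma_blank / slope        # detection limit, µM

print(round(slope, 3), round(dl, 3))
```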
Amperometric methods and biosensors based on the detection of H2O2 at classic metal electrodes (Pt, Hg, Au) or carbon (C) require very high operating potentials (> 600 or
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 829–833, 2009 www.springerlink.com
< -600 mV, and even outside the range of ±1000 mV in the case of C). Consequently, they are prone to interference [23,24,26] besides surface-fouling complications [26,32]. These limitations could be addressed by different strategies [23,26,33], among which is to lower the operating potential itself by using electrocatalyst-modified electrodes [24]. Chemically modified electrodes have been excellently discussed elsewhere [28,29], and carbon electrodes are the most attractive for modification studies [25,34,35]. Carbon paste (CPE) or thick film (screen printed) carbon electrodes (SPCE) modified with MnO2 [36], Prussian blue [37], metallophthalocyanines [38], Rh and Ru [24], or hexacyanoferrates [39] exhibited significantly reduced operating potentials and improved selectivity. Different enzymes (purified, or as-is in tissues) have also been utilized in H2O2 biosensors and bi-enzymatic oxidase biosensors [40,41]. As readily available and robust catalytic materials are preferred, the authors are particularly interested in multivalent metal oxides, which are thermodynamically stable and sparingly soluble under ordinary conditions [42] and could yield stable and inexpensive sensors.
(300 mV). Those which were operated cathodically may be ordered according to decreasing operating potential: VZrO2 (-400 mV) = CuO/ Cu2O t FeO (-200 mV) | Fe3O4 | SnO2 (-170 mV) t PtO2 (-100 mV). Table 1. Amperometric H2O2 sensors with multivalent metals oxide§modified carbon electrodes demonstrated at physiologic conditions
Sensor MnO2__SPCE
_______________________________________________________________
Ref.
a
+480 FIA
0.07
0.1 – 300
MnO2_NaMM__GCE
+650a Stir
0.15
0.5 – 7500
[52]
MnO2_DHP__GCE
+650a Stir
0.080 0.10-2000
[53]
Fe3O4_C-ink__GCE
-200c RDE
25/3
25 – 1000
[51]
Fe3O4_Chitosan_C__GCE
-200c RDE
7.4
25 – 5000
[51]
Fe3O4_SPCE
-100c FIA
9
300 – 30000
[48]
FeO_SPCE
-200c FIA
1.2
150 – 60000
[63]
nano
nano
Fe2O3_CPE
[27]
-200c Stir
20
Up to 8500
[64]
SnO2_SPCE
-175c FIA
2.9
150 – 9000
[63]
RuO2_SPCE RhO2_SPCE
+480a FIA
0.1
29 – 29000
[49]
+500a FIA
0.7
3 – 3000
[65]
PdO_SPCE
+500a FIA
24
29 – > 2900 [50]
IrO2_SPCE
+400a FIA
7
29 – > 2900
[50]
PtO2_SPCE
100c FIA
1
29 – > 2900
[50]
nano
III. MULTI-VALENT METAL OXIDE-MODIFIED ELECTRODES The earliest (1990s) H2O2 sensors with electrodesmodified with electrocatalytically active oxides of multivalent metals were based on Mn and Ir oxide films electrochemically deposited on GCE surfaces [43,44], and a Co(II,III) oxide-bulk modified CPE [45]. Since then, amperometric sensors for H2O2 based on commercially available powders of MnO2, Fe3O4, CuO, and oxides of platinum metals incorporated into bulk-modified carbon paste and screen printed carbon electrode have been reported [27,46-54]. GCEs modified with electrodeposited films of Ni and Co hydroxides were also reported recently [55,56]. Some Pervoskyte-type oxides, namely La0.6Ca0.4MnO3 and La0.6Ca0.4CoO3 were also tested [57,58]. Zirconia doped with vanadium was also shown to be promising [59]. Polyoxometallates (POMs) [60], especially heteropolytungstates and heteropolymolybdates or their doped and composite versions, are electroctalytic towards H2O2 but sensors based on POM-modified electrodes worked only at very low pH conditions [61,62]. Table 1 compiles the multivalent-metal oxide-modified carbon electrodes studied at physiological conditions. Operating potentials, detection limits, and linearity ranges varied markedly with the type of the oxide. The operating potentials of the sensors operated anodically may be compared as follows: nano Co3O4 = La0.6Ca0.4MnO3 (750 mV) ! nano MnO2 ! RhO2 = PdO | RuO2 (480 mV) | IrO2 (+400 mV) t MnO2 > nano cryptomelane type Mn oxide
Electrode§ | E** (mV) / Mode | DL (μM) | Linearity (μM) | Ref.
CuO_Cu2O_CPE | -400 c / Batch | ? | ? | [54]
nano Co3O4_Nafion__GCE | +750 a / CV | 0.050 | 1 – 100 | [66]
nano Co3O4_Nafion__GCE | +750 a / RDE | 4x10-4 | 0.004 – 300 | [66]
m-VZrO2__GCE | ? / DPV | 1.7 | 5 – 400 | [59]
m-VZrO2__Polyester_C | -400 c / DPV | 2.1 | 5 – 400 | [59]
nano KMnIV4MnIIO16_CPE | +300 a / Stir | 2 | 100 – 690 | [67]
La0.6Ca0.4MnO3__Teflon_C | +750 a / Stir | ? | up to 500 | [57]
§ Assume microparticles if not specified; "nano": nanoparticles; "_": bulk-modified; "__": surface-modified; GCE, CPE, and SPCE: glassy carbon, carbon paste, and screen printed carbon electrodes; NaMM: sodium montmorillonite; DHP: dihexadecyl hydrogen phosphate; m- prefix: monoclinic; ** operating potentials vs. Ag_AgCl or Hg_Hg2Cl2 reference; superscripts a: anodic, c: cathodic; FIA: flow injection analysis; RDE: rotating disc electrode; CV and DPV: cyclic and differential pulse voltammetry; DL: detection limit.
Operated either under hydrodynamic (rotation, stirring, or flow injection) or voltammetric modes, the response times of the sensors were generally described as fast (seconds), although flow rates and voltammetric scan rates obviously determined sample throughput. The lowest detection limit (0.4 nM) was reported for a nano-Co3O4 film-modified GCE in a rotating-disc set-up, although at a high operating potential (+750 mV). The widest linearity range (5 orders of magnitude) was reported for electrodeposited films of Co3O4 and MnO2. With respect to operational and storage stability, no practical limitations were reported. SPCEs modified with oxides of Fe and Sn were reported as selective against ascorbic acid, uric acid, acetaminophen,
IFMBE Proceedings Vol. 23
Amperometric Hydrogen Peroxide Sensors with Multivalent Metal Oxide-Modified Electrodes for Biomedical Analysis
and xanthine species, possibly because of their low operating potentials. Ascorbic acid, nucleic acid species, and dopamine interfered with MnO2-, RuO2-, and RhO2-modified electrodes, probably because these were operated at relatively high anodic potentials. Sugars and amino acid species were generally reported as non-interfering; such data were unavailable for the other oxides. MnO2, Fe3O4, FeO, RuO2, RhO2, IrO2, PdO, and PtO2 bulk-modified SPCEs have been tested successfully in model biosensors using glucose oxidase [48-50,68-70]. A similar demonstration with CuO was also reported [69], using CPEs double bulk-modified with the oxide and glucose oxidase. MnO2 has seen further applications: biosensors for sarcosine [71], glutamate [72], and β-ODAP [73] were developed and tested for the determination of these analytes in food samples. An in-tissue implantation study with MnO2-based glucose biosensors was also reported [74], and RuO2 was tested in a fish quality biosensor [75].
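The detection limits compared above are conventionally estimated from calibration data as DL = 3·s(blank)/slope, where s(blank) is the standard deviation of replicate blank readings and the slope (sensitivity) comes from the linear calibration range. A minimal sketch of this convention, with invented numbers not taken from any of the cited studies:

```python
import statistics

def detection_limit(blank_signals_nA, slope_nA_per_uM):
    """3-sigma detection limit in uM from replicate blank readings."""
    s_blank = statistics.stdev(blank_signals_nA)  # sample std. dev. of blanks
    return 3 * s_blank / slope_nA_per_uM

# Illustrative values: blank current readings (nA) and a sensitivity of
# 1.2 nA per uM of H2O2 taken from a hypothetical calibration line.
blanks = [0.12, 0.18, 0.09, 0.15, 0.11, 0.16]
print(detection_limit(blanks, 1.2))  # detection limit in uM
```

A higher sensitivity or a quieter baseline both lower the detection limit, which is why film morphology and operating potential shift the DL values in Table 1 so strongly.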
IV. CONCLUDING REMARKS A brief account of H2O2 sensors with multivalent metal oxide-modified electrodes was presented. Oxides of twelve multivalent metals (Mn, Fe, Co, Ni, Pt, Rh, Pd, Ru, Ir, V/Zr, Cu, and Sn) have been identified. Overall, the analytical characteristics of the sensors largely appeared promising.
ACKNOWLEDGEMENT We gratefully acknowledge support from the National Research Fund (NRF, South Africa), the Academic Exchange Service (ÖAD, Austria), and the Ministry of Education (Czech Republic).
REFERENCES
1. ECETOC (1996) Hydrogen peroxide OEL criteria document, Special Report No. 10, European Centre for Ecotoxicology and Toxicology of Chemicals
2. Guwy, A.J., Hawkes, F.R., Martin, S.R., Hawkes, D.L., Cunnah, P. (2000) A technique for monitoring hydrogen peroxide concentration off-line and on-line. Water Research 34: 2191-2198
3. Anglada, J.M., Aplincourt, P., Bofill, J.M., Cremer, D. (2002) Atmospheric formation of OH radicals and H2O2 from alkene ozonolysis under humid conditions. ChemPhysChem 3: 215-221
4. Dringen, R., Pawlowski, P.G., Hirrlinger, J. (2005) Peroxide detoxification by brain cells. Journal of Neuroscience Research 79: 157-165
5. Cowan, J.A. (1997) Cell Toxicity and Chemotherapeutics. In Inorganic Biochemistry: An Introduction, pp. 319-356, Wiley-VCH
6. Walters, D.R. (2003) Polyamines and plant disease. Phytochemistry 64: 97-107
7. Ohshima, H., Tatemichi, M., Sawa, T. (2003) Chemical basis of inflammation-induced carcinogenesis. Archives of Biochemistry and Biophysics 417: 3-11
8. Toninello, A., Salvi, M., Pietrangeli, P., Mondovi, B. (2004) Biogenic amines and apoptosis: minireview article. Amino Acids 26: 339-343
9. Blokhina, O., Virolainen, E., Fagerstedt, K.V. (2003) Antioxidants, oxidative damage and oxygen deprivation stress: a review. Annals of Botany 91: 179-194
10. Veal, E.A., Day, A.M., Morgan, B.A. (2007) Hydrogen peroxide sensing and signaling. Molecular Cell 26: 1-14
11. Shih, J.C. (2007) Monoamine oxidases: from tissue homogenates to transgenic mice. Neurochemical Research 32: 1757-1761
12. Tipton, P.A. (1999) Kinetic studies of urate oxidase. Biomedical and Health Research 27 (Enzymatic Mechanisms): 278-287
13. Wong, C.M., Wong, K.H., Chen, X.D. (2008) Glucose oxidase: natural occurrence, function, properties and industrial applications. Applied Microbiology and Biotechnology 78: 927-938
14. Schwelberger, H.G. (2004) Diamine oxidase (DAO) enzyme and gene. In Histamine: Biology and Medical Aspects (Falus, A., ed.), pp. 43-52
15. De Tullio, M.C., Liso, R., Arrigoni, O. (2004) Ascorbic acid oxidase: an enzyme in search of a role. Biologia Plantarum 48: 161-166
16. Bassett, J., Denney, R.C., Jeffery, G.H., Mendham, J. (1981) In Vogel's Textbook of Quantitative Inorganic Analysis, Including Elementary Instrumental Analysis, pp. 355, 381, Longman
17. Bailey, R., Boltz, D.F. (1959) Differential spectrophotometric determination of hydrogen peroxide with 1,10-phenanthroline and bathophenanthroline. Analytical Chemistry 31: 117-119
18. Omanovic, E., Kalcher, K. (2005) A new chemiluminescence sensor for hydrogen peroxide determination. International Journal of Environmental Analytical Chemistry 85: 853-860
19. Black, M.J., Brandt, R.B. (1974) Spectrofluorometric analysis of hydrogen peroxide. Analytical Biochemistry 58: 246-254
20. Ho, M.H. (1988) Potentiometric biosensor based on immobilized enzyme membrane and fluoride detection. Sensors and Actuators 15: 445-450
21. Zheng, X., Guo, Z. (2000) Potentiometric determination of hydrogen peroxide at MnO2-doped carbon paste electrode. Talanta 50: 1157-1162
22. Tarpey, M.M., Wink, D.A., Grisham, M.B. (2004) Methods for detection of reactive metabolites of oxygen and nitrogen: in vitro and in vivo considerations. American Journal of Physiology 286: R431-R444
23. Prodromidis, M.I., Karayannis, M.I. (2002) Enzyme based amperometric biosensors for food analysis. Electroanalysis 14: 241-261
24. Wang, J., Lu, F., Angnes, L., Liu, J., Sakslund, H., Chen, Q., Pedrero, M., Chen, L., Hammerich, O. (1995) Remarkably selective metalized-carbon amperometric biosensors. Analytica Chimica Acta 305: 3-7
25. Svancara, I., Vytras, K., Barek, J., Zima, J. (2001) Carbon paste electrodes in modern electroanalysis. Critical Reviews in Analytical Chemistry 31: 311-345
26. O'Neill, R.D., Chang, S.-C., Lowry, J.P., McNeil, C.J. (2004) Comparisons of platinum, gold, palladium and glassy carbon as electrode materials in the design of biosensors for glutamate. Biosensors & Bioelectronics 19: 1521-1528
27. Schachl, K., Alemu, H., Kalcher, K., Moderegger, H., Svancara, I., Vytras, K. (1998) Amperometric determination of hydrogen peroxide with a manganese dioxide film-modified screen printed carbon electrode. Fresenius' Journal of Analytical Chemistry 362: 194-200
28. Martin, C.R., Foss, C.A. (1996) Chemically modified electrodes. In Laboratory Techniques in Electroanalytical Chemistry (Kissinger, P.T. and Heineman, W.R., eds.), pp. 403-442, Marcel Dekker
T. Waryo, P. Kotzian, S. Begić, P. Bradizlova, N. Beyene, P. Baker, B. Kgarebe, E. Turkušić, E. Iwuoha, K. Vytras and K. Kalcher
29. Zen, J.-M., Kumar, A.S., Tsai, D.-M. (2003) Recent updates of chemically modified electrodes in analytical chemistry. Electroanalysis 15: 1073-1087
30. Kock, R., Delvoux, B., Greiling, H. (1993) A high-performance liquid chromatographic method for the determination of hypoxanthine, xanthine, uric acid and allantoin in serum. European Journal of Clinical Chemistry and Clinical Biochemistry 31: 303-310
31. Repetto, M.R., Repetto, M. (1999) Concentrations in human fluids: 101 drugs affecting the digestive system and metabolism. Journal of Toxicology, Clinical Toxicology 37: 1-8
32. Maidan, R., Heller, A. (1992) Elimination of electrooxidizable interferant-produced currents in amperometric biosensors. Analytical Chemistry 64: 2889-2896
33. Wang, J., Angnes, L., Sakslund, H., Fang, L., Pedrero, M., Chen, Q., Liu, J. (1995) Metallized carbons for improved amperometric biosensors. Book of Abstracts, 210th ACS National Meeting, Chicago, IL, August 20-24: INOR-349
34. McCreery, R.L., Cline, K.K. (1996) Carbon electrodes. In Laboratory Techniques in Electroanalytical Chemistry (Kissinger, P.T. and Heineman, W.R., eds.), pp. 293-332, Marcel Dekker, New York
35. Svancara, I., Schachl, K. (1999) Testing of unmodified carbon paste electrodes. Chemicke Listy 93: 490-499
36. Schachl, K., Alemu, H., Kalcher, K., Jezkova, J., Svancara, I., Vytras, K. (1998) Determination of hydrogen peroxide with sensors based on heterogeneous carbon materials modified with manganese dioxide. Scientific Papers of the University of Pardubice, Series A: Faculty of Chemical Technology 3: 41-55
37. Moscone, D., D'Ottavi, D., Compagnone, D., Palleschi, G., Amine, A. (2001) Construction and analytical characterization of Prussian blue-based carbon paste electrodes and their assembly as oxidase enzyme sensors. Analytical Chemistry 73: 2529-2535
38. Linders, C.R., Vincke, B.J., Patriarche, G.J. (1986) Catalase like activity of iron phthalocyanine incorporated in a carbon paste electrode. Analytical Letters 19: 1831-1837
39. Garjonyte, R., Malinauskas, A. (1998) Electrocatalytic reactions of hydrogen peroxide at carbon paste electrodes modified by some metal hexacyanoferrates. Sensors and Actuators, B: Chemical B46: 236-241
40. Kwon, H., Park, I.K., Kim, Y.S. (2004) Amperometric biosensor for hydrogen peroxide determination based on black goat liver-tissue and ferrocene mediation. Journal of the Korean Chemical Society 48: 491-498
41. Wollenberger, U., Wang, J., Ozsoz, M., Gonzalez-Romero, E., Scheller, F. (1991) Bulk modified enzyme electrodes for reagentless detection of peroxides. Bioelectrochemistry and Bioenergetics 26: 287-296
42. Schmuki, P. (2002) From Bacon to barriers: a review on the passivity of metals and alloys. Journal of Solid State Electrochemistry 6: 145-164
43. Taha, Z., Wang, J. (1991) Electrocatalysis and flow detection at a glassy carbon electrode modified with a thin film of oxymanganese species. Electroanalysis 3: 215-219
44. Cox, J.A., Lewinski, K. (1993) Flow injection amperometric determination of hydrogen peroxide by oxidation at an iridium oxide electrode. Talanta 40: 1911-1915
45. Mannino, S., Cosio, M.S., Ratti, S. (1993) Cobalt(II,III) oxide chemically modified electrode as amperometric detector in flow-injection systems. Electroanalysis 5: 145-148
46. Schachl, K., Alemu, H., Kalcher, K., Jezkova, J., Svancara, I., Vytras, K. (1997) Amperometric determination of hydrogen peroxide with a manganese dioxide-modified carbon paste electrode using flow injection analysis. Analyst 122: 985-989
47. Schachl, K., Alemu, H., Kalcher, K., Jezkova, J., Svancara, I., Vytras, K. (1997) Flow injection determination of hydrogen peroxide using a carbon paste electrode modified with a manganese dioxide film. Analytical Letters 30: 2655-2673
48. Waryo, T.T., Begic, S., Turkusic, E., Vytras, K., Kalcher, K. (2005) Metal oxide-based carbon amperometric H2O2-transducers and oxidase biosensors. In Sensing in Electroanalysis (Vytras, K. and Kalcher, K., eds.), pp. 145-191, University of Pardubice
49. Kotzian, P., Brazdilova, P., Kalcher, K., Vytras, K. (2005) Determination of hydrogen peroxide, glucose and hypoxanthine using (bio)sensors based on ruthenium dioxide-modified screen-printed electrodes. Analytical Letters 38: 1099-1113
50. Kotzian, P., Brazdilova, P., Kalcher, K., Handlir, K., Vytras, K. (2007) Oxides of platinum metal group as potential catalysts in carbonaceous amperometric biosensors based on oxidases. Sensors and Actuators, B: Chemical B124: 297-302
51. Lin, M.S., Leu, H.J. (2005) A Fe3O4-based chemical sensor for cathodic determination of hydrogen peroxide. Electroanalysis 17: 2068-2073
52. Yao, S., Yuan, S., Xu, J., Wang, Y., Luo, J., Hu, S. (2006) A hydrogen peroxide sensor based on colloidal MnO2/Na-montmorillonite. Applied Clay Science 33: 35-42
53. Yao, S., Xu, J., Wang, Y., Chen, X., Xu, Y., Hu, S. (2006) Analytica Chimica Acta 557: 78-84
54. Garjonyte, R., Malinauskas, A. (1998) Amperometric sensor for hydrogen peroxide, based on Cu2O or CuO-modified carbon paste electrodes. Fresenius' Journal of Analytical Chemistry 360: 122-123
55. Liu, Y.-q., Shen, H.-x. (2005) Preparation of nickel hydroxide modified glassy carbon electrode and its electrochemical behavior. Fenxi Kexue Xuebao 21: 378-380
56. Liu, Y., Liu, L., Shen, H. (2004) Preparation of a cobalt hydroxide modified glassy carbon electrode and its electrochemical behavior. Fenxi Ceshi Xuebao 23: 9-13
57. Shimizu, Y., Komatsu, H., Michishita, S., Miura, N., Yamazoe, N. (1996) Sensing characteristics of hydrogen peroxide sensor using carbon-based electrode loaded with perovskite-type oxide. Sensors and Actuators, B: Chemical B34: 493-498
58. Hermann, V., Muller, S., Comninellis, C. (1998) Oxygen and hydrogen peroxide reduction on La0.6Ca0.4CoO3 perovskite electrodes. Proceedings - Electrochemical Society 97-28: 159-169
59. Domenech, A., Alarcon, J. (2002) Determination of hydrogen peroxide using glassy carbon and graphite/polyester composite electrodes modified by vanadium-doped zirconias. Analytica Chimica Acta 452: 11-22
60. Sadakane, M., Steckhan, E. (1998) Electrochemical properties of polyoxometalates as electrocatalysts. Chemical Reviews 98: 219-237
61. Wang, X., Zhang, H., Wang, E., Han, Z., Hu, C. (2004) Phosphomolybdate-polypyrrole composite bulk-modified carbon paste electrode for a hydrogen peroxide amperometric sensor. Materials Letters 58: 1661-1664
62. Gaspar, S., Muresan, L., Patrut, A., Popescu, I.C. (1999) PFeW11-doped polymer film modified electrodes and their electrocatalytic activity for H2O2 reduction. Analytica Chimica Acta 385: 111-117
63. Kalcher, K. and co-workers. Unpublished data, Institute of Chemistry, K.F. University of Graz
64. Hrbac, J., Halouzka, V., Zboril, R., Papadopoulos, K., Triantis, T. (2007) Carbon electrodes modified by nanoscopic iron(III) oxides to assemble chemical sensors for the hydrogen peroxide amperometric detection. Electroanalysis 19: 1850-1854
65. Kotzian, P., Brazdilova, P., Rezkova, S., Kalcher, K., Vytras, K. (2006) Amperometric glucose biosensor based on rhodium dioxide-modified carbon ink. Electroanalysis 18: 1499-1504
66. Salimi, A., Hallaj, R., Soltanian, S., Mamkhezri, H. (2007) Nanomolar detection of hydrogen peroxide on glassy carbon electrode modified with electrodeposited cobalt oxide nanoparticles. Analytica Chimica Acta 594: 24-31
67. Lin, Y., Cui, X., Li, L. (2005) Low-potential amperometric determination of hydrogen peroxide with a carbon paste electrode modified with nanostructured cryptomelane-type manganese oxides. Electrochemistry Communications 7: 166-172
68. Turkusic, E., Kalcher, K., Schachl, K., Komersova, A., Bartos, M., Moderegger, H., Svancara, I., Vytras, K. (2001) Amperometric determination of glucose with an MnO2 and glucose oxidase bulk-modified screen-printed carbon ink biosensor. Analytical Letters 34: 2633-2647
69. Luque, G.L., Rodriguez, M.C., Rivas, G.A. (2005) Glucose biosensors based on the immobilization of copper oxide and glucose oxidase within a carbon paste matrix. Talanta 66: 467-471
70. Waryo, T.T., Begic, S., Turkusic, E., Vytras, K., Kalcher, K. (2005) Fe3O4-modified thick film carbon-based amperometric oxidase biosensor. Scientific Papers of the University of Pardubice, Series A: Faculty of Chemical Technology 11: 265-279
71. Kotzian, P., Beyene, N.W., Llano, L.F., Moderegger, H., Tunon-Blanco, P., Kalcher, K., Vytras, K. (2002) Amperometric determination of sarcosine with sarcosine oxidase entrapped with Nafion on manganese dioxide-modified screen-printed electrodes. Scientific Papers of the University of Pardubice, Series A: Faculty of Chemical Technology 8: 93-101
72. Beyene, N.W., Moderegger, H., Kalcher, K. (2003) A stable glutamate biosensor based on MnO2 bulk-modified screen-printed carbon electrode and Nafion film-immobilized glutamate oxidase. South African Journal of Chemistry 56: 54-59
73. Beyene, N.W., Moderegger, H., Kalcher, K. (2003) A new amperometric β-ODAP biosensor. Lathyrus Lathyrism Newsletter 3: 47-49
74. Mang, A., Pill, J., Gretz, N., Kränzlin, B., Buck, H., Schoemaker, M., Petrich, W. (2005) Diabetes Technology and Therapeutics 7: 163-173
75. Brazdilova, P., Kotzian, P., Vytras, K. (2005) Biosensor for control of fish meat freshness. Bulletin Potravinarskeho Vyskumu 44: 75-82
Corresponding author: Tesfaye Waryo
Email: [email protected]
Tel: +27-219593079
Fax: +27-219591316
Patch-Clamping in Droplet Arrays: Single Cell Positioning via Dielectrophoresis J. Reboud1, M.Q. Luong1, 2, C. Rosales3 and L. Yobas1 1
Institute of Microelectronics (IME), Agency for Science Technology & Research, SINGAPORE 2 National University of Singapore (NUS), SINGAPORE 3 Institute of High Performance Computing (IHPC), Agency for Science Technology & Research, SINGAPORE Abstract — This paper reports a new approach to enable patch-clamp recording from living cells in arrays of droplets placed on a microstructured silicon surface. Single cells were positioned on the lateral patch sites using the dielectrophoresis (DEP) electrical force. This approach simplifies the complex packaging process involved when pressure-driven fluidics are used to guide cells through microchannels towards the patching sites. It also enables direct access to the droplets for fluid exchange, opening the way for the gold-standard patch-clamp technique to become compatible with the industry standards of high-throughput liquid handling platforms. The microchips contained two 20 μm deep chambers etched in silicon and passivated by silicon oxide, holding the reagent droplets and separated by a 250 μm long buried microchannel with an aperture of around 1 μm in diameter. Simulations of a four-electrode configuration showed that negative DEP could advantageously eliminate the need for any bulk fluid movement and enable single cell positioning in droplets. Working DEP parameters (voltage, frequency) were identified using gold microprobes as external electrodes, dipped into the cell droplets. Single cell positioning at the keyhole was achieved at a speed of about 4 μm/s for an optimum cell concentration. Keywords — patch-clamp, microchip, droplet, microfluidics, dielectrophoresis
I. INTRODUCTION Patch-clamp is the gold-standard technique for studying ion channels in living cells. Contrary to optical assays relying on fluorescence, it enables a direct measurement of ion transport through the membrane pores of the cells. These ion channels play a fundamental role in cell signaling and are involved in many diseases, including epilepsy, cardiac arrhythmia, high blood pressure, and diabetes. Moreover, ion channels can be affected by drugs targeted at other mechanisms, which led the U.S. Food and Drug Administration (FDA) to propose new regulations to study such potential side effects [1]. Conventional patch-clamping uses glass micropipettes terminated by micron-sized apertures to electrically isolate a patch of cell membrane (electrical sealing) and record the ionic current flowing through the ion channels it contains. To position the pipettes on the cell and obtain a good seal in the order of
gigaohms (Gigaseal) through gentle suction, a skilled scientist uses micromanipulators under a microscope. This technique achieves high precision, but is laborious and hence low-throughput. In an attempt to transform patch-clamp into a high-throughput assay, research groups have been developing microfabricated chips containing arrays of patch apertures to replace the glass micropipettes, in either a planar [2] or lateral [3,4] configuration. In contrast to the conventional technique, in which the micropipette is brought to the cell, the chip-based systems rely on pressure-driven flow of the bulk fluid to position the cell on the patch aperture; the same means are used to gently suck the cell further to obtain the gigaseal. Even though means to achieve pressure-driven actuation on chip have been proposed, a truly integrated system is far from reaching the capabilities of high-throughput liquid handling platforms. In addition, fluidic channels open to air can greatly ease fluid exchange [4]. Following a trend put forward by DNA chips and liquid handling machines, droplets have been proposed as bioreactors for cell-based assays [5]. They constitute a completely open-access architecture without any capping, greatly simplifying fluidic integration (Figure 1). Furthermore, they use very small volumes of precious samples, such as primary cells or drug candidates. However, surface droplets cannot sustain the pressure gradients needed to position the cells at the patch apertures and obtain a good seal.
Fig. 1: Silicon microchip for lateral patch-clamping in an array of droplets, without any fluidic channels. Droplets of 0.25μl of cell suspension were manually dispensed onto the microchip
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 834–837, 2009 www.springerlink.com
Dielectrophoresis (DEP), the electric force arising from the interaction of a particle's polarization with the spatial gradient of the electric field, has been widely used to handle living cells, be it in microfluidic flow or by trapping them in electric field potential wells, for example [6,7]. DEP has the advantage of eliminating the bulk movement of fluid associated with pressure-driven flow. By carefully designing the geometry of the system, single cells can be captured or directed to specific areas of a microsystem, while no measurable effects of field exposure on the cells have been reported [8]. In this paper, we report a new approach to enable patch-clamp recording from living cells in arrays of droplets placed on a microstructured silicon surface. The patch microapertures (or "keyholes") were fabricated following a technique developed by our group, creating integrated round glass microcapillaries on silicon chips, which has been demonstrated to produce a high success rate of Gigaseals [3]. Using simulations, we studied how negative DEP obtained via a four-electrode configuration can control cells in two dimensions and move single cells to the keyhole without liquid flow. This concept is demonstrated by positioning single cells at the keyholes at a speed of about 4 μm/s.
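For a homogeneous spherical particle, the time-averaged DEP force is F = 2π εm r³ Re[K(ω)] ∇|E|², where K(ω) is the Clausius-Mossotti factor computed from the complex permittivities of particle and medium; Re[K] < 0 means the particle is repelled from field maxima (negative DEP). A minimal sketch of this textbook relation (the permittivity and conductivity values below are illustrative assumptions, not measurements from this work):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(f_hz, eps_p, sig_p, eps_m, sig_m):
    """Clausius-Mossotti factor K(w) for a homogeneous sphere.

    eps_*: relative permittivities; sig_*: conductivities in S/m
    (p: particle, m: suspending medium)."""
    w = 2 * math.pi * f_hz
    ep = eps_p * EPS0 - 1j * sig_p / w  # complex permittivity, particle
    em = eps_m * EPS0 - 1j * sig_m / w  # complex permittivity, medium
    return (ep - em) / (ep + 2 * em)

def dep_force(f_hz, radius_m, grad_e2, eps_p, sig_p, eps_m, sig_m):
    """Time-averaged DEP force (N): 2*pi*eps_m*r^3*Re[K]*grad|E_rms|^2."""
    k_re = cm_factor(f_hz, eps_p, sig_p, eps_m, sig_m).real
    return 2 * math.pi * eps_m * EPS0 * radius_m**3 * k_re * grad_e2

# A cell modelled as a poorly conducting sphere in a highly conductive
# medium such as PBS (sigma ~ 1.5 S/m): Re[K] is negative at 1 MHz, so
# the cell is pushed towards the field minimum between the electrodes.
print(cm_factor(1e6, 60.0, 0.01, 78.0, 1.5).real)
```

Because the conductivity of PBS exceeds that of the cell interior, Re[K] stays negative over the frequency range used here, consistent with the repulsion from the electrode edges exploited below.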
II. MATERIAL AND METHODS
A. Microchip design and fabrication: The patch micro-apertures were fabricated using a previously reported technique [3] (Figure 2). Briefly, lithographically-defined narrow trenches (2x250x3.5 μm cross-section) were etched into a p-type silicon wafer using deep reactive ion etching (DRIE). Phosphosilicate glass (PSG) was deposited via plasma-enhanced chemical vapour deposition (PECVD), forming a trapped void ("keyhole") inside the trench, which was rounded by a thermal reflow at 1150°C for 30 min. A second lithography, followed by a combination of directional dry etching steps, was carried out to create 1 mm wide round chambers at both ends of the keyholes and open up the buried capillary. In a final DRIE step the chambers were deepened to 20 μm to enable easy access of the cells to the keyhole. The whole surface was finally passivated by silicon oxide.
Fig. 2: Scanning Electron Microscope images of a chip, with 2 chambers of 1 mm in diameter separated by a 250 μm long buried microchannel (or "keyhole").
B. Dielectrophoresis design and simulations: A 3D model of the microchip was designed in which 4 microelectrodes were placed 2 by 2 on both sides of the keyhole (Figure 3a). The square electrodes shown in Figure 3 represent planar or 3D electrodes that can be microfabricated on the surface of the chip. For ease of computing, dimensions were reduced compared to the actual chip. Simulations were performed using depSolver, an Open Source code [9].
Fig. 3: 3-dimensional simulation of DEP, applied through 4 electrodes (2 right electrodes at the potential +V, left electrodes at –V). a. Model design used for the simulation. b. Results showing isosurfaces of the DEP force, from dark (intense force) to white (weak). The sphere represents a particle trapped at the keyhole.
C. DEP application on cells, using external electrodes: Jurkat T-lymphocytes (ATCC) were maintained following the supplier's protocol. They were harvested and freshly resuspended in Phosphate Buffered Saline (PBS) for every experiment. Chips were washed with isopropanol (IPA) and water and dried under nitrogen. 0.25 μl of the cell suspension at the desired concentration was manually dispensed as droplets into the chambers on the chips. The solution formed delimited droplets, as seen in Figure 1 above. To prevent evaporation, the droplets were covered with a few microlitres of mineral oil (Sigma-Aldrich, cat#M5904), depending on the number of droplets used.
Fig. 4: Experimental set-up showing the microchip supporting cell suspension droplets covered in oil, under a microscope objective. The 4 external gold electrodes are immersed inside 1 droplet on two sides of the keyhole, in a similar configuration as shown in Figure 3.
The chip supporting the droplets in oil was positioned in a probestation (MicrochamberTM alessis REL-6100), which contained 4 micromanipulators connecting 4 fine-tipped gold microprobes to the electrical circuit. The probes were elongated to allow for liquid contact. Using the microscope incorporated in the probestation set-up, the tips of the electrical probes were carefully positioned inside the cell suspension droplet in the desired configuration (Figure 4). The use of external electrodes enabled the study of a wide range of configurations and geometries. A voltage (0.2-5 MHz, 0.5-20 Vpp) was applied to the external microprobes to obtain DEP. Movement of the cells was recorded using a digital camera connected to the microscope under a 10X objective.
III. RESULTS AND DISCUSSION
A. Dielectrophoresis simulation: Figure 3 above shows the results of the simulation of the negative DEP force arising from a non-uniform electric field applied from 4 electrodes, placed 2 by 2 on both sides of the keyhole. The isosurfaces presented confirm that the electric field is most intense at the edges of the electrodes; the minimum of the electric field was therefore located in the middle of the 4 electrodes. Similar configurations had been studied previously, for example for trapping cells moved inside a microchannel by bulk fluid flow [10], with similar results. Under negative DEP, the cells are effectively repelled from the electrodes towards the minimum of the electric field. By tuning the distances between the electrodes, specific configurations could be found that placed the minimum of the electric field nearest to the entrance of the keyhole (Fig. 3b).
B. Moving cells in droplets via dielectrophoresis: In order to confirm the results of the simulations and further refine the configuration of the electrodes on the microchip, external microelectrodes were used to apply the DEP voltage. The microprobe set-up was flexible, allowing various geometries and configurations to be studied without the time-consuming design and microfabrication of integrated electrodes on the chip surface. Jurkat cells at relatively high concentration (around 2 million cells/ml) were manually dispensed on the microchip in small droplets of 0.25 μl and subjected to the DEP voltage. Results from one such experiment are shown in Figure 5 below.
Fig 5: Movement of a high concentration of Jurkat cells under negative DEP towards the keyhole (1MHz, 6VRMS, in PBS)
The 4 electrodes at the 4 corners of the pictures were placed around the keyhole. The 2 electrodes on top were placed inside the chamber, while the 2 at the bottom were positioned outside the chamber but still in the cell suspension droplet, which overflowed to keep electrical contact with
the solution. At the beginning of the experiment, cells were distributed throughout the surface. When the DEP voltage was applied, the cells were repelled from the electrodes and moved towards the minimum of the electric field in the center of the 4 electrodes, as was predicted by simulations. Cells accumulated at the border of the chamber because the DEP force was not strong enough to overcome the 20μm deep edge.
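The sensitivity to cell concentration can be rationalized with a simple Poisson estimate: the number of cells in the small working volume swept by the electrodes is approximately Poisson-distributed, and the probability of exactly one cell is maximized when the expected count equals one. A back-of-envelope sketch (the ~2 nl working volume is an assumption for illustration, not a value measured in this work):

```python
import math

def expected_cells(conc_cells_per_ml, volume_ul):
    """Expected number of cells in a given volume (ul), concentration in cells/ml."""
    return conc_cells_per_ml * volume_ul * 1e-3

def p_exactly_one(conc_cells_per_ml, volume_ul):
    """Poisson probability of exactly one cell in the volume."""
    lam = expected_cells(conc_cells_per_ml, volume_ul)
    return lam * math.exp(-lam)

# At 2e6 cells/ml, an assumed 2 nl inter-electrode volume holds ~4 cells
# on average, so clusters rather than single cells reach the keyhole.
print(expected_cells(2e6, 2e-3))
# At ~5e5 cells/ml (about 250 cells in a 0.5 ul droplet) the expected
# count in the same assumed volume is ~1, maximizing the single-cell
# probability at 1/e.
print(p_exactly_one(5e5, 2e-3))
```

This is only an order-of-magnitude argument; in practice the optimum concentration depends on the electrode geometry, as noted in the following section.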
clamp experiment. Droplets are completely open-access and will greatly ease fluidic interfaces. Single cells were positioned on the lateral patch sites by using dielectrophoresis, without the requirement of any bulk fluidic movement, which could eliminate the need for pressure driven fluidics limiting the throughput of these assays. The integration of the electrodes onto the microstructured surface will enable greater control on the cell movement.
C. Single cell positioning:
ACKNOWLEDGMENT
Patch-clamp recordings are performed on a patch of the cell membrane of a single cell, positioned at the micropipette aperture. Therefore high concentrations of cells between the electrodes such as those used in Figure 5 above, where clusters of cells are connected together at the keyhole, were not suitable for enabling a patch-clamp configuration of interest. A usable concentration would be lower. On the other hand, a concentration where single cells are seldom present in between the electrodes is also damaging. For each configuration of the electrodes, the cell concentration can be optimized so that a single cell will be directed to the keyhole by the DEP force. Figure 5 presents a series of pictures taken during the journey of one cell to the keyhole at a speed of about 4μm/s.
0s
25s
300μm
50s
The help of Vijany and her supervisor Martin Buist (NUS/Bioengineering) in conducting DEP experiments is greatly acknowledged.
REFERENCES 1.
Neubert H.-J. (2004) Patch-clamping moves to chips Anal. Chem. Sept 1, 327A-330A. 2. Li, X. H., Klemic K.G, Reed M A., Sigworth F. J (2006) Microfluidic System for Planar Patch Clamp Electrode Arrays, Nano Lett. 6:815819 3. Ong W.-L., Tang K.-C., Agarwal A, Nagarajan R., Luo L.-W., Yobas L. (2007) Microfluidic integration of substantially round glass capillaries for lateral patch clamping on chip, Lab Chip, 7:1357-1366 4. Lau A. Y., Hung P J., Wu A. R., Lee L. P. (2006) Open-access microfluidic patch-clamp array with raised lateral celltrapping sites, Lab Chip 6:1510–1515 5. Lemaire F. et al. (2006) Toxicity assays in nanodrops combining bioassay and morphometric endpoints. PLoS ONE.2 :e163 17235363 6. Voldman J., (2006) Electrical Forces for Microscale cell Manipulation, Annu. Rev. Biomed. Eng., 8:425-454 7. Frénéa M., Faure S.P., Le pioufle B., Coquet P., Fujita H. (2003) Positioning living cells on a high-density electrode array by negative dielectrophoresis, Mat. Sci. Eng. C, 23 :597 8. Fuhr G., Glasser H., Muller T., Schnelle T. (1994). Cell manipulation and cultivationunder AC electric-field influence in highly conductive culture media. Biochim. Biophys. Acta Gen. Subj. 1201:353–60 9. http://software.ihpc.a-star.edu.sg/projects/depSolver.php: depSolver is an Open Source code developed for the study of dielectrophoretic forces in complex geometries. 10. Voldman J., Toner M., Gray M.L., Schmidt M.A. (2003) Design and Analysis of Extruded Quadrupolar Dielectrophoretic Traps, J. Electrostatics, 57:69
Fig 6: Movement and positioning of a single cell (arrow) to the keyhole (8 VRMS at 1 MHz). Droplets of 0.5 μl containing around 250 cells were manually dispensed.
IV. CONCLUSIONS

This paper presents a new approach to enable single-cell positioning in arrays of droplets placed on a microstructured silicon surface, in order to achieve chip-based lateral patch-clamping.
Author: Julien REBOUD
Institute: Institute of Microelectronics
Street: 11 Science Park Road, Science Park II
City: Singapore 117685
Country: Singapore
Email: [email protected]

IFMBE Proceedings Vol. 23
Label-free Detection of Proteins with Surface-functionalized Silicon Nanowires

R.E. Chee, J.H. Chua, A. Agarwal, S.M. Wong, G.J. Zhang

Institute of Microelectronics, Agency for Science, Technology and Research, 11 Science Park Road, Singapore 117685

Abstract — Semiconducting silicon nanowires (SiNWs) have shown great potential for the electrical detection of human disease biomarkers. Label-free detection using SiNWs allows fast, efficient and inexpensive detection of biomarkers, unlike time-consuming methodologies such as fluorescence ELISA or western blotting. We demonstrate proof-of-principle protein detection using biotinylated SiNWs for ultralow concentrations of streptavidin, with a sensitivity down to 10 fM. Real-time detection of human cardiac troponin T using antibody-functionalized SiNWs is also presented. The possibility of large-scale integration paves the way towards the simultaneous detection of a large number of protein biomarkers on a single chip and is of importance to the development of diagnostic tools as well as point-of-care applications.

Keywords — silicon nanowire sensor, protein detection.
I. INTRODUCTION

Silicon nanowires (SiNWs) have demonstrated significant potential for the ultrasensitive detection of biomolecular species such as DNA [1-5], proteins [6, 7] and viruses [8]. The biologically gated device owes its sensitivity to the high surface-area-to-volume ratio conferred by the one-dimensional structure, which gives unprecedented sensitivity to biological as well as chemical species. A top-down approach, as opposed to the nanowire growth techniques demonstrated elsewhere, was used to fabricate individually addressable silicon nanowires [2-5, 7], since reproducible nanowire structures can be mass-manufactured and later integrated with external CMOS circuitry for direct electrical readout.
The operation of the nanowire device depends on the charge characteristics of the biomolecule in question. In this work, proteins with various pIs were used; the surface charges inherent in the protein molecule cause the SiNW to operate in either accumulation or depletion mode, depending on the polarity of the charges attached to the target biomolecule. We investigate the variation in the conductance of the nanowire device when various concentrations of streptavidin and avidin are bound, since the pIs of streptavidin and avidin differ substantially from the pH of the buffer solution, facilitating the identification of the change between the baseline conductance and the conductance after the wires have been exposed
to a particular protein. These biotinylated wires offer a significant advantage over traditional protein identification techniques such as western blotting or fluorescence ELISA, which are both time-consuming and labor-intensive. Furthermore, the bio-FET approach eliminates the need for a fluorescent label, demonstrating high specificity and sensitivity together with rapid response, paving the way towards low-cost, sensitive and rapid diagnostic and point-of-care applications, which may require sensing biomolecules at concentrations down to the 10 fM level. In addition, the individually addressable array allows the simultaneous detection of different species in solution, pointing to the device's capability for multiplexed detection schemes on a single platform. We also demonstrate the real-time detection of human cardiac troponin T (cTnT), a significant biomarker of myocardial tissue necrosis [9-11], which demonstrates the practicable development of biosensors for the identification of patients suffering from acute myocardial infarction.

II. MATERIALS AND METHODS

A. Materials.
EZ-Link™ Sulfo-NHS-LC-Biotin, avidin, and streptavidin were purchased from Pierce Biotechnology, Inc. (Rockford, IL). Mouse anti-human cardiac troponin T and human troponin T were purchased from HyTest Ltd. (Turku, Finland). All other chemicals were purchased from Sigma-Aldrich, Inc. Proteins were used without further purification and diluted to the required concentrations with assay buffer.

B. SiNW device design and fabrication.
SiNW devices were fabricated on a silicon-on-insulator (SOI) substrate with a 145-nm buried oxide layer, n-doped on purchase. Deep-ultraviolet lithography was used to pattern and etch fins that were later doped and thermally oxidized to form the individual nanowires.
Contact metal definition and Si3N4/SiO2 passivation of the metal lines were performed, and the nanowires were released by dry etching of the passivation layers followed by wet etching of the remaining SiO2 on the device array surfaces. Scanning electron microscopy (SEM) and transmission electron
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 838–841, 2009 www.springerlink.com
microscopy (TEM) were performed to monitor the uniformity and quality of the resultant devices. Figures 1a and 1b show an SEM micrograph and a TEM image of the SiNW array structure, from which the high uniformity of the wire dimensions and spacing can be observed.

C. Surface functionalization.
Surface functionalization of the SiNW array chips was carried out using conventional silane chemistry that is well documented in the literature [12]. The chips were immersed in a solution of 2% 3-aminopropyltriethoxysilane (APTES) in a 95%/5% mixture of ethanol and deionised water for 2 hrs, resulting in the formation of an amine-terminated surface. A 20 μl aliquot of 10 μg/ml biotin in 1x PBS was applied to the SiNW device clusters and left to incubate in a humidity-controlled environment at room temperature overnight, allowing the N-terminus of the biotin molecules to bind to the amine-terminated SiNW surface. Unbound biotin molecules remaining in the solution aliquot were removed by washing with 1x PBS for 10 minutes, followed by washing in deionised water for 5 minutes. The freshly prepared chips were used immediately for electrical detection experiments, although storage at 4 °C was observed to be possible for short periods (< 12 hours) without degradation of the SiNW devices.

D. Electrical characterization and biomolecular detection.
Electrical characterization of the SiNW devices was carried out with an Alessi REL-6100 probe station (Cascade Microtech, Beaverton, OR). Data were measured and recorded with an HP-4156A parameter analyzer. A controlled voltage of 0.5 V was applied across the drain and source electrodes of each individual SiNW device via a two-probe system that referenced one end of the device to ground. The current through the SiNW device was then measured and used to compute the resistance of the SiNW device.

The detection of protein biomolecular species was achieved by measuring the resistance of the SiNW devices at three stages: before chemical functionalization (denoted R1), after immobilization of biotin probe molecules on the SiNW device surface (denoted R2), and after the binding of streptavidin or avidin target molecules to the probe-immobilized surface (denoted R3). The percentage change between R2 and R3 was computed and used as the indicator for the binding event.

Figure 1. (a) SEM micrograph of SiNW array. (b) TEM image showing the cross-sectional profile of an individual nanowire.

III. RESULTS AND DISCUSSION
A. Theoretical considerations.
Proteins, which consist of long-chain amino acid sequences joined and folded in a specific manner, are amphoteric in solution owing to zwitterion formation between the amine and carboxyl functional groups on the protein molecule. In a solution with a pH below its isoelectric point (pI), a protein molecule carries a net positive charge due to the net protonation of its functional groups. Conversely, in a solution with a pH above its pI, a protein molecule carries a net negative charge due to the overall deprotonation of its functional groups.
As the SiNW devices used in our experiments are n-doped, the presence of a negative surface charge on the device causes depletion in the SiNW bulk, reducing the effective cross-sectional area for conduction and thus increasing the resistance of the device. Conversely, a positive surface charge on the SiNW device causes carrier accumulation, increasing its conductance and decreasing its resistance. The positive or negative charge layer on the surface of the devices arises from the charge on the target proteins bound to the functionalized SiNW surface; thus, by measuring the magnitude and direction of the resistance change in the SiNW devices, it is possible to determine with high specificity the type and concentration of biomolecular species present in the analyte solution.
As the strength and specificity of the binding between biotin and the avidin family of proteins is well documented, these biomolecular species were selected to demonstrate the sensing capabilities of the SiNW devices.
The highly specific binding between the probe and the target molecule ensures that the charge carried by the target is brought sufficiently close to the SiNW surface to effect the resistance changes described above; the presence of a different target protein that does not bind to the biotinylated surface should result in little or no change in SiNW resistance.
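The qualitative read-out rule above can be sketched in a few lines of Python. The assay-buffer pH of 7.4 is our assumption (typical for PBS), not a value given in the paper:

```python
# Sign of a protein's net charge from pH vs pI, and the expected
# resistance change of an n-doped SiNW on binding (per the text).

def net_charge_sign(pi: float, ph: float) -> int:
    """+1 if the protein is net positive at this pH, -1 if net negative."""
    return 1 if ph < pi else -1

def resistance_direction(pi: float, ph: float) -> str:
    """Negative surface charge depletes an n-doped wire (resistance up);
    positive charge accumulates carriers (resistance down)."""
    return "increase" if net_charge_sign(pi, ph) < 0 else "decrease"

PH_BUFFER = 7.4  # assumed PBS pH; not stated in the paper

print(resistance_direction(5.5, PH_BUFFER))   # streptavidin (pI 5.5) -> increase
print(resistance_direction(10.5, PH_BUFFER))  # avidin (pI 10.5) -> decrease
```

With these inputs the sketch predicts a resistance increase for streptavidin and a decrease for avidin, matching the observations in the streptavidin and avidin detection sections below.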
_________________________________________ IFMBE Proceedings Vol. 23 ___________________________________________
B. Streptavidin detection.
Streptavidin (pI = 5.5) has an isoelectric point below the pH of the assay buffer used, and thus carries a net negative charge in solution. After the immobilization of biotin probe molecules onto the SiNW surface, a 20 μl aliquot of 0.01x PBS solution was applied to the SiNWs and the resistances (R2) of 15 wires on each chip were measured. The buffer solution was then washed away and the dried SiNW chips incubated for ~1 hr with a 20 μl aliquot of streptavidin solution in 0.01x PBS applied to the SiNWs. Excess streptavidin solution was washed away with 0.01x PBS (3 x 5 min) and deionised water (1 x 5 min) and the chip dried in a stream of dry N2 gas. Streptavidin solutions with concentrations of 1 nM, 10 pM, 10 fM and 1 fM were used on different devices to investigate the concentration dependence of the SiNW device response. The SiNW devices were immersed in a second 20 μl aliquot of 0.01x PBS solution and the resistance (R3) measured. A box diagram of the responses is given in Figure 2.
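The R2-to-R3 indicator used throughout this section reduces to a one-line computation. The current values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Resistance from the measured current at the fixed 0.5 V bias, and the
# percentage change between the probe-immobilized (R2) and
# target-bound (R3) states used as the binding indicator.
V_DS = 0.5  # applied drain-source voltage, V

def resistance(current_a: float) -> float:
    return V_DS / current_a

def percent_change(r2: float, r3: float) -> float:
    return (r3 - r2) / r2 * 100.0

# Hypothetical wire: 50 nA before binding, 40 nA after (the current
# drops when a negatively charged target depletes the n-doped wire).
r2 = resistance(50e-9)
r3 = resistance(40e-9)
print(round(percent_change(r2, r3), 1))  # 25.0 (% resistance increase)
```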
Figure 3. Box diagram comparing the percentage resistance changes of SiNW devices when used to detect 1 nM streptavidin and 1 nM avidin.
D. Detection of human cardiac troponin T.
In order to demonstrate the applicability of the SiNW array biosensor for medical point-of-care applications, a detection experiment with 1 ng/ml human cardiac troponin T (cTnT) was performed. Human cTnT is regarded as one of the most cardiac-specific biomarkers and is highly indicative of acute myocardial infarction in heart disease patients. Rapid, real-time and label-free detection of cTnT with the SiNW array biosensor would pave the way for the development of an electrical biosensor for medical point-of-care applications. A SiNW array device was functionalized with anti-cTnT capture probes in the manner described in section II.C. Glutaraldehyde, a bifunctional linker, was used to bind the anti-cTnT probe molecules to the amine-terminated SiNW surface, as cTnT is unable to bind directly to the amino
Figure 2. Box diagram of percentage change in resistance of SiNW device with different concentrations of streptavidin solution.
C. Avidin detection.
Avidin (pI = 10.5) has an isoelectric point above the pH of the assay buffer used, and thus carries a net positive charge in solution. A 1 nM avidin solution in 0.01x PBS was used to bind avidin target molecules to the SiNW devices in the same manner as described above for streptavidin detection. In accordance with theory, a decrease in SiNW resistance from R2 to R3 was observed upon binding of avidin. A box diagram of the avidin response, plotted with the 1 nM streptavidin response for comparison, is given in Figure 3.

Figure 4. Graph of conductance against time for the real-time detection of 1 ng/ml cTnT in 0.01x PBS.
groups on the silicon surface [13]. Detection was performed by monitoring the baseline current through a SiNW device immersed in 0.01x PBS solution, followed by the addition of an aliquot of a 1 ng/ml solution of cTnT in 0.01x PBS. A graph of conductance against time for the SiNW device is given in Figure 4; a large and rapid response is clearly obtained upon addition of the cTnT solution. The SiNW conductance decreases and quickly stabilizes at a new, lower baseline level, demonstrating that the binding and detection event occurred in accordance with theory.

IV. CONCLUSIONS

We have demonstrated the sensing capabilities of the silicon nanowires for various proteins, as well as the real-time sensing of human cardiac troponin T. The ability to identify ultra-low concentrations of streptavidin in the femtomolar regime attests to the sensitivity of the SiNWs, a feature inherently due to the miniature dimensions of the sensing platform itself. Further work is being carried out to determine the detection limit for troponin T, which is expected to be comparable to that for streptavidin. The individually addressable nature of the SiNWs points to the multiplexing capability of the device: each SiNW could be selectively functionalized using a robotic spotter to bind a specific biomarker of interest, enabling multiple confirmatory diagnostic investigations on a single platform.
REFERENCES
[1] Hahm JI, Lieber CM (2004) Direct Ultrasensitive Electrical Detection of DNA and DNA Sequence Variations Using Nanowire Nanosensors. Nano Lett 4(1):51-54.
[2] Li Z, Chen Y, Li X, Kamins TI, Nauka K, Williams RS (2004) Sequence-specific label-free DNA sensors based on silicon nanowires. Nano Lett 4:245-247.
[3] Zhang GJ, Zhang G, Chua JH, Chee RE, Wong EH, Agarwal A, Buddharaju KD, Singh N, Gao ZQ, Balasubramanian N (2008) DNA Sensing by Silicon Nanowire: Charge Layer Distance Dependence. Nano Lett 8(4):1066-1070.
[4] Zhang GJ, Chua JH, Chee RE, Agarwal A, Wong SM, Buddharaju KD, Balasubramanian N (2008) Highly sensitive measurements of PNA-DNA hybridization using oxide-etched silicon nanowire biosensors. Biosens Bioelectron 23:1701-1707.
[5] Bunimovich YL, Shin YS, Yeo WS, Amori M, Kwong G, Heath JR (2006) Quantitative real-time measurements of DNA hybridization with alkylated non-oxidized silicon nanowires in electrolyte solution. JACS 128(50):16323-31.
[6] Cui Y, Wei QQ, Park H, Lieber CM (2001) Nanowire nanosensors for highly sensitive and selective detection of biological and chemical species. Science 293(5533):1289-92.
[7] Stern E, Klemic JF, Routenberg DA, Wyrembak PN, Turner-Evans DB, Hamilton AD, LaVan DA, Fahmy TM, Reed MA (2007) Label-free immunodetection with CMOS-compatible semiconducting nanowires. Nature 445(7127):519-22.
[8] Patolsky F, Zheng G, Hayden O, Lakadamyali M, Zhuang X, Lieber CM (2004) Electrical detection of single viruses. Proc Natl Acad Sci USA 101:14017-22.
[9] Hamm CW, Ravkilde J, Gerhardt W et al. (1992) Prognostic value of serum troponin-T in unstable angina. N Engl J Med 327:146-50.
[10] Adams JE, Bodor GS, Davila-Roman VG et al. (1993) Cardiac troponin I: a marker with high specificity for cardiac injury. Circulation 88:101-6.
[11] Ohman EM, Armstrong PW, Christenson RH et al. (1996) Cardiac troponin T levels for risk stratification in acute myocardial ischemia. N Engl J Med 335:1333-41.
[12] Zheng G, Patolsky F, Cui Y, Wang WU, Lieber CM (2005) Multiplexed electrical detection of cancer markers with nanowire sensor arrays. Nat Biotechnol 23:1294-1301.
[13] Zhang GJ, Tanii T, Kanari Y, Ohdomari I (2007) Production of nanopatterns by a combination of electron beam lithography and a self-assembled monolayer for an antibody nanoarray. J Nanosci Nanotechnol 7:410-7.

Corresponding author: G.J. Zhang; Tel: +65-6770-5390; Fax: +65-6774-5754; E-mail:
[email protected]
Bead-based DNA Microarray Fabricated on Porous Polymer Films J.T. Cheng1, J. Li2, N.G. Chen1, P. Gopalakrishnakone3 and Y. Zhang1, 2,* 1
Division of Bioengineering, Faculty of Engineering, National University of Singapore, Singapore
2 Nanoscience and Nanotechnology Initiative, National University of Singapore, Singapore
3 Department of Anatomy, Yong Loo Lin School of Medicine, National University of Singapore, Singapore

Abstract — By attaching different biomolecule probes to surface-modified microspheres, a large number of complementary targets labeled with fluorophores can be interrogated simultaneously using the bead-based microarray format. In this work, a new bead-based DNA microarray was fabricated on a porous polymer film with a well-ordered array of pores. The well-ordered porous polymer film was prepared using the non-lithographic breath figure method. Different biotinylated DNA probes were conjugated to avidin-modified polystyrene (PS) microspheres. The DNA microarray was fabricated by randomly dispersing the DNA probe-conjugated PS microspheres into the pores on the polymer film. Multiplexed detection of DNA was performed using target DNAs labeled with ATTO dyes and a fluorescence microscope imaging system. The bead-based DNA microarray was constructed for the detection of the four serotypes of dengue virus. This technique was demonstrated to be a simple and cost-effective method for rapid detection of DNA targets.

Keywords — Pattern, Microspheres, DNA, Microarray
I. INTRODUCTION

DNA microarray technology is well known for its tremendous power to conduct parallel analyses of nucleic acids in a single experiment. It has become an extremely important tool in biomedical research and also in diagnostic, treatment and monitoring applications [1]. In a traditional microarray, various DNA probes are immobilized in an array of microscopic spots on a planar substrate [2], such as a nylon membrane, glass slide or compact disc. Each type of probe can be spatially addressed, enabling high throughput. However, planar microarrays are restricted by diffusion-limited kinetics; the amount of probe that can be immobilized on the planar substrate and the amount of target that can be hybridized on the planar microarray are therefore limited, resulting in a low signal-to-noise (S/N) ratio.
Instead of a planar surface, bead-based microarrays use the non-planar bead surface as the substrate for immobilizing probes. The immobilization reaction can take place in solution with shaking or vortexing, resulting in high reaction efficiency. Also, the high surface-to-volume ratio of beads allows a larger amount of probe to be immobilized. Moreover, beads can be fabricated with magnetic,
electrical or spectral properties, with various functional groups and in different sizes. The bead-based microarray format is therefore becoming increasingly popular in nucleic acid analysis.
Various bead-based DNA microarrays have been developed. The best known are the bead suspension array read by flow cytometry [3] and the fiber-optic bead-based microarrays [4] from Illumina (San Diego, CA, USA). These techniques require facilities such as a flow cytometer or an imaging system compatible with optical fibers. Other work has reported the construction of bead-based DNA microarrays on platforms patterned using lithographic methods [5]; however, the need for clean-room facilities increases the cost.
Herein, we present a novel technique for the fabrication of bead-based DNA microarrays. This technique uses well-ordered porous polymer films with micrometer-sized pores as the microarray platform. Beads were deposited and confined in the surface pores of the porous films. The porous polymer films were fabricated by the breath figure method, a low-cost non-lithographic technique. A fluorescence microscope equipped with a charge-coupled device (CCD) camera was used for signal detection. Throughout the construction of the bead-based DNA microarrays on porous film, only common laboratory facilities were used, making this a very simple and cost-effective method. The bead-based DNA microarray was demonstrated for rapid detection of dengue virus.

II. EXPERIMENTAL SECTION

A. Materials
Polystyrene-block-poly(ethylene-ran-butylene)-block-polystyrene (PSEBS, 31 wt% styrene) and all solvents were obtained from Sigma-Aldrich. Polystyrene-block-poly(4-vinylpyridine) (PS-b-P4VP, MnPS = 20.0 kg/mol, MnP4VP = 19.0 kg/mol, Mw/Mn = 1.09) was purchased from Polymer Source Inc.
SuperAvidin™-coated polystyrene (PS) microspheres, mean diameter 9.9 μm (1% w/v aqueous suspension; binding capacity: 0.036 μg biotin-FITC/mg microspheres), were purchased from Bangs Laboratories, Inc.
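As a rough cross-check, the quoted binding capacity can be converted to moles of biotin sites per mg of beads. The biotin-FITC molecular weight used below (~644 g/mol) is our assumption, not a figure from the paper:

```python
# Moles of biotin-FITC bound per mg of microspheres, from the quoted
# capacity of 0.036 µg biotin-FITC per mg. The MW is assumed.
CAPACITY_UG_PER_MG = 0.036   # µg biotin-FITC per mg of beads (from the text)
MW_BIOTIN_FITC = 644.0       # g/mol, our assumption

# µg -> g -> mol -> nmol
nmol_per_mg = CAPACITY_UG_PER_MG * 1e-6 / MW_BIOTIN_FITC * 1e9
print(round(nmol_per_mg, 3))  # 0.056 (nmol of biotin sites per mg of beads)
```

This ~0.06 nmol/mg capacity is consistent in order of magnitude with the 0.04 nmol of biotinylated probe added per 100 μl (about 1 mg) of beads in the probe-bead preparation step described later.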
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 842–845, 2009 www.springerlink.com
The 25-mer DNA probes were designed according to the sequences of the four serotypes of dengue virus. Dengue probes (DP) DP1, DP2, DP3 & DP4 were the capture probes for Type 1, Type 2, Type 3 & Type 4 dengue virus, respectively. Their genome positions (GenBank accession No.) are as follows: DP1: 10,550-10,574 (M87512); DP2: 10,557-10,581 (M20558); DP3: 10,530-10,554 (M93130); and DP4: 10,483-10,507 (M14931). All four DNA probes were purchased with their 5' end functionalized with biotin. Dengue targets (DT) DT1 & DT2 were the perfect-match targets for DP1 & DP2, respectively. All the targets were biotinylated at their 5' end. All the DNA probes and targets were purchased from Proligo (Singapore). The sequences of the DNA probes and targets used in the bead-based DNA microarray are listed in Table 1.

Table 1 Sequences of the probes and targets used in the bead-based DNA microarray

biotin-DP1: biotin-5'-GGGAAGCTGTATCCTGGTGGTAAGG-3'
biotin-DP2: biotin-5'-ATGAAGCTGTAGTCTCACTGGAAGG-3'
biotin-DP3: biotin-5'-AGGGAAGCTGTACCTCCTTGCAAAG-3'
biotin-DP4: biotin-5'-GAGGAAGCTGTACTCCTGGTGGAAG-3'
biotin-DT1: biotin-5'-CCTTACCACCAGGATACAGCTTCCC-3'
biotin-DT2: biotin-5'-CCTTCCAGTGAGACTACAGCTTCAT-3'
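The probe/target pairings in Table 1 can be verified programmatically: each target should be the reverse complement of its capture probe. The sequences below are copied from Table 1:

```python
# Check that DT1/DT2 are perfect-match (reverse-complement) targets for
# DP1/DP2, as stated in the text.
COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMP)[::-1]

DP1 = "GGGAAGCTGTATCCTGGTGGTAAGG"
DP2 = "ATGAAGCTGTAGTCTCACTGGAAGG"
DT1 = "CCTTACCACCAGGATACAGCTTCCC"
DT2 = "CCTTCCAGTGAGACTACAGCTTCAT"

assert reverse_complement(DP1) == DT1  # DT1 perfectly matches DP1
assert reverse_complement(DP2) == DT2  # DT2 perfectly matches DP2
print("both probe/target pairs are perfect matches")
```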
Streptavidin-labeled ATTO dyes, streptavidin–ATTO 488 and streptavidin–ATTO 550 (1 mg/ml in 1x PBS buffer; molecular weight = 60.0 kDa), were purchased from Jena Bioscience. Buffers were prepared as TTL buffer (100 mM Tris-HCl, pH 8.0; 0.1% Tween 20; and 1 M NaCl), TT buffer (250 mM Tris-HCl, pH 8.0; and 0.1% Tween 20) and TTE buffer (250 mM Tris-HCl, pH 8.0; 0.1% Tween 20; and 20 mM EDTA). The hybridization mixture contained 300 mM NaCl, 0.5% sodium dodecyl sulfate (SDS) and 30% formamide.

B. Preparation of the porous polymer films
The porous polymer films were prepared by the breath figure method. A mixture of PS-b-P4VP and PSEBS (1/1, w/w) was dissolved in a mixture of toluene and benzene (7/3, v/v) to form a 5~30 mg/ml polymer solution. Equal amounts of the solution were cast onto clean glass substrates (15×15 mm) in a sealed 300 L chamber at room temperature, and the chamber was then immediately evacuated to a vacuum level of -50 kPa within about 30 seconds. At the same time, the humidity in the chamber was maintained at 80-85 r.h.%. After drying, porous polymer films were obtained.
C. Patterning of microbeads on the porous polymer films
A glass slide covered with porous polymer film was inserted into a suspension of 9.9 μm polystyrene microbeads in a vial. The vial was then shaken on a shaker to allow the beads to settle into the pores by gravity. The settling time was about two hours. After the microbeads had settled into the arrayed pores, the surface of the porous film was washed with water to remove excess beads. The film was then dried at room temperature.

D. Preparation of probe beads and dengue detection
A hundred microliters of avidin-PS beads was washed three times with TTL buffer and resuspended in 20 μl TTL buffer. According to the binding capacity, 0.04 nmol of biotinylated probes was added. Probes were immobilized onto the bead surface via biotin-avidin coupling by gentle mixing at room temperature for 2 hrs. After probe immobilization, the beads were rinsed twice with TT buffer and resuspended in TTE buffer at 80 °C for 10 min to remove any unstable biotin-avidin coupling. The beads were washed again and resuspended in 100 μl hybridization mixture. Twenty microliters of beads conjugated with each type of probe (PS-DP1, PS-DP2, PS-DP3 & PS-DP4) were mixed and diluted to 0.1 wt% with hybridization mixture. The porous film was immersed in the bead suspension in a vial and shaken for 2 hrs. Excess beads were rinsed away.
One microliter of 100 μM biotin-DT1 or biotin-DT2 was mixed with 9 μl of 1 mg/ml avidin-ATTO 488 or avidin-ATTO 550, respectively. Each mixture was diluted to 100 μl with TTL buffer and incubated at room temperature for 2 hrs. Twenty microliters of ATTO-labeled targets from each mixture were mixed and diluted to 800 μl in hybridization mixture. The bead-based DNA microarray on porous film was immersed in the target solution and incubated at 37 °C for 1 hr. The array was then rinsed with hybridization buffer to remove excess targets.

III. RESULTS AND DISCUSSION

The construction of the bead-based DNA microarray is shown schematically in Figure 1. Firstly, porous polymer films with a well-ordered honeycomb structure could be formed when a polymer solution was drop-cast onto a glass slide in a vacuum chamber with a moist atmosphere. The vacuum caused rapid evaporation of the volatile solvents (toluene and benzene) and consequently rapid cooling of the solution; sub-micrometer or micrometer-sized water droplets then formed on the surface of the cold solution [6]. Convection currents induced by temperature gradients and
Fig. 3 (a) Optical image of bead-based patterns on the porous polymer film (bead occupancy 54%). (b) Bead occupancy as a function of bead concentration.
Fig. 1 Schematic design of the bead-based DNA microarray on porous film

capillary forces between the water droplets favored a regular stacking of the water droplets with a narrow size distribution [7]. After the solvent evaporated completely, a polymer film was formed with the water droplets embedded in it. After the water droplets evaporated, a porous film with a well-ordered pore structure remained.
Polystyrene (PS) beads with avidin on their surface were mixed with the four types of dengue probes in separate tubes. An amount of each type of bead (PS-DP1, PS-DP2, PS-DP3 & PS-DP4) was pooled and mixed. The porous film was immersed in the diluted bead mixture to allow the beads to settle into the surface pores and form a one-bead-in-one-pore pattern. To detect dengue virus, the ATTO-labeled targets were applied to the array to hybridize with the DNA probes immobilized on the bead-based microarray on the porous film.
In order to prepare the porous polymer films for the bead-based DNA microarrays, a polymer blend of PS-b-P4VP and PSEBS was dissolved in a mixture of toluene and benzene. PSEBS is an elastomer and
Fig. 2 (a) Optical and (b) SEM images of PS-b-P4VP/PSEBS porous films prepared by the breath figure method. The polymer concentration used was 15 mg/ml, the relative humidity 82 r.h.% and the vacuum level -50 kPa.
the PS-b-P4VP/PSEBS blend polymer has good mechanical properties. The porous films with long-range honeycomb structure were then prepared in a vacuum chamber with a moist atmosphere. Figure 2 shows optical and SEM images of porous polymer films made of a blend of PS-b-P4VP and PSEBS. The porous films show two-dimensional, periodic structures with a hexagonal array of pores in long-range order. The pore sizes are on the microscale and were altered by controlling the air humidity and vacuum level in the vacuum chamber. Herein, the pore diameter is 10~12 μm when the vacuum level is about -50 kPa, the polymer concentration 15 mg/ml and the relative humidity about 82 r.h.%.
The porous polymer films can be used to pattern beads. Figure 3a shows a photograph of a bead-based pattern on the porous polymer film. The beads were embedded within the holes on the porous film in a one-bead-in-one-pore format. Figure 3b shows that the bead occupancy on the porous films increased as the bead concentration increased.
DNA probes were immobilized on the bead surface via biotin-avidin coupling. Equal amounts of the four types of probe-immobilized beads (PS-DP1, PS-DP2, PS-DP3 & PS-DP4) were pooled and diluted to a 0.1 wt% bead suspension in hybridization mixture. A porous film with a pore density of 8.38×10^9 pores/m^2 was immersed in the bead suspension in a vial with gentle shaking. The probe beads settled into the surface pores to form the bead-based DNA microarray on the porous film. Figure 4 shows (a) the
Fig. 4 (a) Bright-field and (b) fluorescence microscope images of the bead-based DNA microarray on the porous polymer film (bead occupancy 50%).
bright-field and (b) the fluorescence image of the bead-based DNA microarray after hybridization with ATTO488–DT1 & ATTO550–DT2. In the 1.79×10^4 μm^2 area of the porous film shown in Figure 4, there were 19 replicates each for the detection of DT1 (green beads) and DT2 (red beads). The remaining 35 beads (circled in white in Figure 4a) were beads carrying DP3 and DP4. The replicates help prevent false-positive or false-negative signals. The detection conditions could be optimized for even faster analysis, and the bead-based DNA microarray could be constructed for the detection of other diseases.
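The bead counts reported for Figure 4 can be cross-checked against the stated pore density and imaged area; this is our own consistency check, not a calculation from the paper:

```python
# Expected number of pores in the imaged area vs. beads counted in Fig. 4.
pore_density = 8.38e9           # pores per m^2 (from the text)
area_m2 = 1.79e4 * 1e-12        # 1.79 x 10^4 µm^2 converted to m^2
pores = pore_density * area_m2  # ~150 pores in the field of view
beads = 19 + 19 + 35            # DT1 + DT2 replicates + DP3/DP4 beads
occupancy = beads / pores
print(round(occupancy, 2))  # 0.49, consistent with the ~50% occupancy of Fig. 4
```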
IV. CONCLUSIONS

A novel bead-based DNA microarray was constructed on a well-ordered porous polymer film. The porous polymer films were fabricated by the one-step breath figure templating method, a low-cost non-lithographic method. Beads immobilized with DNA probes were patterned on the porous film. The pore size was complementary to the bead size, so the beads were secured in the pores. These bead-based DNA microarrays were used for the detection of dengue virus. This technique was demonstrated to be a simple and cost-effective method for rapid, high-throughput detection of DNA targets. Other molecules, such as proteins, peptides or other small molecules, can also be immobilized on the bead surface to form a bead-based microarray on porous film. Future work will apply the well-ordered porous films to construct various bead-based microarrays or even cell microarrays.

ACKNOWLEDGMENT

The authors would like to acknowledge the financial support from the Agency for Science, Technology and Research, Science & Engineering Research Council, Singapore (A*STAR SERC) (R-398-000-030-305) and the National University of Singapore.

REFERENCES

1. Venkatasubbarao S (2004) Microarrays – status and prospects. Trends Biotechnol 22:630-637
2. Schena M, Shalon D, et al. (1995) Quantitative Monitoring of Gene Expression Patterns with a Complementary DNA Microarray. Science 270:467-470
3. Xu H, Sha MY, et al. (2003) Multiplexed SNP genotyping using the Qbead™ system: a quantum dot-encoded microsphere-based assay. Nucleic Acids Res 31:e43
4. Ferguson JA, Boles TC, et al. (1996) A fiber-optic DNA biosensor microarray for the analysis of gene expression. Nat Biotech 14:1681-1684
5. Barbee KD, Huang X (2008) Magnetic Assembly of High-Density DNA Arrays for Genomic Analyses. Anal Chem 80:2149-2154
6. Limaye AV, Narhe RD, Dhote AM, et al. (1996) Evidence for Convective Effects in Breath Figure Formation on Volatile Fluid Surfaces. Phys Rev Lett 76:3762
7. Pitois O, Francois B (1999) Crystallization of condensation droplets on a liquid surface. Colloid Polym Sci 277:574
Address of the corresponding author:
Author: Associate Professor ZHANG Yong
Institute: Division of Bioengineering, National University of Singapore
Street: Block E3A-04-15, 7 Engineering Drive 1
City: Singapore
Country: Singapore
Email: [email protected]
Monolithic CMOS Current-Mode Instrumentation Amplifiers for ECG Signals
S.P. Almazan, L.I. Alunan, F.R. Gomez, J.M. Jarillas, M.T. Gusad and M. Rosales
Microelectronics and Microprocessors Laboratory, Department of Electrical and Electronics Engineering, University of the Philippines, Diliman, Quezon City, Philippines

Abstract — In this paper, four monolithic current-mode instrumentation amplifier (in-amp) topologies are implemented in a 0.25um CMOS process, with positive second-generation current conveyors (CCII+) as building blocks. The in-amp topologies are designed to handle biomedical signals, specifically the electrocardiogram (ECG). Four types of CCII+ are characterized and realized using a rail-to-rail op-amp and different types of current mirrors. The Op-Amp with Simple Current Mirror exhibits the highest current swing and the lowest power consumption, and is thus chosen as the optimum CCII+ block to be used in all four in-amps. All current-mode in-amps are implemented in a standard 0.25um CMOS process and yield an excellent common-mode rejection ratio (CMRR) greater than 150dB for a differential gain of 100. All four in-amps consume less than 2.5mW of power for a single voltage supply of 2.5V. However, the 2-Current Conveyor with Op-Amp at the Output (2-CC with Op-Amp) has an adjustable output reference voltage and provides the lowest output impedance among the four in-amps.

Keywords — Instrumentation Amplifiers, Current-Mode, Rail-to-Rail Op-Amps, Current Conveyors, ECG
I. INTRODUCTION

The measurement of low-energy bio-potential signals such as the ECG makes the in-amp a significant signal-conditioning block for biomedical systems. With in-amps, it is possible to accurately amplify these weak electric body signals even in the presence of high-amplitude common-mode noise that may corrupt the desired signal. Given this critical application, in-amps are particularly designed to achieve high CMRR to correctly extract and amplify low-amplitude differential signals, and to block unwanted noise potentials that are usually common to the in-amp inputs [1], [2]. Recent works have explored alternative ways to implement in-amps using the current-mode approach to overcome the requirement for matched resistors [3]. One approach is through the use of current conveyors. Vital parameters for the characterization and analysis of in-amps include CMRR, input and output impedance, input and output swing, and power consumption. A simulated ECG signal with common-mode noise is used in simulating these different in-amp circuits to determine which in-amp configuration can best extract these biomedical signals.

II. CURRENT CONVEYORS

A positive second-generation current conveyor (CCII+) is a 3-terminal device with the representation shown in Fig. 1.

Fig. 1 CCII+ black-box representation

Current conveyor performance relies on its ability to act as a voltage buffer between its inputs, and to convey current between two ports that have extremely different impedance levels. To realize this, a CMOS CCII+ can be implemented by a high-gain op-amp with a class AB output buffer stage, connected in a negative-feedback loop and followed by a current mirror [4], as shown in Fig. 2.

Fig. 2 CCII+ block with series-RC compensation
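The ideal CCII+ port relations implied above (the X terminal follows the Y voltage, the Z terminal conveys the X current, and no current flows into Y) can be captured in a minimal behavioral sketch. The class and the signal values below are illustrative, not part of the paper's design:

```python
# Idealized behavioral model of a positive second-generation
# current conveyor (CCII+): vX = vY, iZ = iX, iY = 0.
class IdealCCIIPlus:
    def __init__(self):
        self.i_y = 0.0  # ideal CCII+: the Y terminal draws no current

    def solve(self, v_y, i_x):
        """Return (v_x, i_z) for a given Y voltage and X current."""
        v_x = v_y   # voltage follower action between Y and X
        i_z = i_x   # current conveyed from X to Z (positive type)
        return v_x, i_z

# Example: 1 V applied at Y, 2 mA drawn at X
cc = IdealCCIIPlus()
v_x, i_z = cc.solve(1.0, 2e-3)
```

Real conveyors deviate from this model through the current error between X and Z, which the paper later identifies as the source of gain irregularity.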
A. Op-amp block To implement a CCII+, the Complementary Differential Pair with NMOS Cascode Load (ComdiffCasc-PN) with Push-Pull Inverter Output Stage [5] shown in Fig. 3, is the topology used as the op-amp block. The op-amp has rail-to-rail swing at both the input and output and has fewer stages than the other rail-to-rail op-amp configurations in [4], which makes it more stable. Also, its push-pull output stage further improves the differential gain and provides higher output voltage swing. Setting all the transistor lengths (L) to 1.2um, the initial op-amp transistor widths (W) may be solved using the current equation, given in Eq. 1, for a MOSFET in saturation.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 846–850, 2009 www.springerlink.com
Fig. 3 ComdiffCasc-PN with push-pull inverter output stage

The reason for choosing the 1.2um transistor length, instead of the minimum length of 0.25um, is that short-channel effects are more significant at channel lengths less than 1um. Furthermore, submicron lengths for differential input pairs tend to introduce large offset voltages [6]. The tail current (Ibias) is set to 20uA to minimize the total current and to meet the power consumption limits.

ID = (k'/2)(W/L)(VGS − VT)²    (1)

A width of 5um/20um is used for NMOS/PMOS transistors carrying Ibias/2. Plugging these values into the schematic produced an op-amp differential voltage gain of less than 10,000, which is below the target gain of at least 1x10⁶. The transistor widths were then resized to increase the gain. A series of simulations aimed at characterizing the op-amp was carried out. The effects of varying the width of each transistor on op-amp parameters such as differential gain, common-mode gain, 3-dB bandwidth, unity-gain bandwidth and phase margin were studied and plotted. Finally, transistor sizes were chosen to be multiples of 6, to aid the layout implementation. The compensation resistor (RC) and capacitor (CC) used were 10 kΩ and 500 fF, respectively. Due to parasitics introduced by the metal layers, the total output resistance in the layout simulations increased compared to the schematic simulations, thus increasing the differential gain. However, slight mismatches in the layout and the lack of symmetry in routing also increased the common-mode gain, hence decreasing the CMRR. Furthermore, these parasitics introduce more poles, thus decreasing the bandwidth of the op-amp. It was also observed that the phase margin of the layout implementation is 20º lower than that of the schematic simulations. As such, it is suggested that the phase margin be set at a relatively high value early in the schematic design stage to ensure stability in the layout stage.

B. Current conveyor block

The designed rail-to-rail op-amp is then connected to an output buffer stage before being paired with four types of current mirrors: Simple Current Mirror, Cascode Current Mirror, High-Swing Cascode Current Mirror and Wilson Current Mirror. The requirements for the current mirrors to be used in the CCII+ are high output impedance, wide output voltage swing, small input bias voltage, and good high-frequency response [7]. Even though the Op-Amp with Simple Current Mirror displayed the lowest output impedance among the four CCII+ implementations, it provided the highest current swing and had the least layout area and power consumption (having the fewest transistors) among all designs. The positive input signal swing is determined by the state of transistor M1 while the negative input signal swing is determined by M2, as shown in Fig. 2. For a functional output buffer stage, transistors M1 and M2 in Fig. 2 must remain saturated [7]. Therefore, the positive and negative input signal swings are restricted by Eqs. 2-3.

VIN ≤ VDD − VT − VDSAT − VDSAT1    (2)

VIN ≥ VDSAT2 + VT + VDSAT    (3)
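The square-law relation in Eq. 1 can be inverted to estimate an initial transistor width for a given drain current. A numerical sketch follows; the transconductance parameter k' and the overdrive voltage are assumed values for illustration, not taken from the paper:

```python
def mos_width(i_d, length, k_prime, v_ov):
    """Invert the square law I_D = (k'/2)(W/L)(V_GS - V_T)^2 for W."""
    return 2.0 * i_d * length / (k_prime * v_ov ** 2)

# Assumed values: I_D = Ibias/2 = 10 uA, L = 1.2 um,
# k' = 100 uA/V^2 (hypothetical NMOS), overdrive V_GS - V_T = 0.2 V
w = mos_width(10e-6, 1.2e-6, 100e-6, 0.2)
print(w)  # about 6e-06 m, i.e. a 6 um starting width
```

With these illustrative numbers the starting width comes out in the same few-micrometre range as the 5um/20um seeds quoted in the text; final widths, as the paper notes, are then refined by simulation.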
To minimize the voltage drop across the mirrors, the width of the transistors comprising the current mirrors is increased. This decreases the VGS required for saturation. Hence, decreasing the input bias voltage requirements of the current mirrors increases the input swing at node X in Fig. 2. For uniformity and ease of layout, the widths of all the NMOS and PMOS transistors in the simple current mirrors are set to 48um and 192um, respectively.

III. CURRENT-MODE INSTRUMENTATION AMPLIFIERS

Initial simulations on the Improved 2-Current Conveyor (Improved 2-CC) in-amp, shown in Fig. 4, verified the functionality of the Op-Amp with Simple Current Mirror. The in-amp produced a very high CMRR. However, it was observed from the in-amp's gain plot that the differential gain varies with the differential input voltage by ±10%. This gain irregularity is explained by the current error of the CCII+ (the difference in current at the Z and X terminals). Ideally, the current at the output node Z of the conveyor follows the current at the input X, but based on simulations, the current error increases as the differential input is increased. Stability was compromised by the multiple stages employed in the CCII+ implementation. A series-RC compensation was placed between the buffer stage output and the current conveyor output node Z for all the CCII+
blocks used in all the in-amp circuits. The compensating resistor was set to 1 kΩ while the compensating capacitor was limited to 2 pF due to layout area considerations.
Fig. 4 Improved 2-CC schematic design [8]
The 2-Current Conveyor (2-CC) is the most basic current-mode implementation of the in-amp. Any voltage difference applied between the in-amp inputs V+ and V- will also be reflected across RIN, forcing a current through it.
Fig. 5 2-CC schematic design [3]
By the operation of the CCII+, this same current will be conveyed and will then flow through RL. It can be derived that the differential voltage gain of this topology is

AVD = VOUT / ((V+) − (V−)) = RL / RIN    (4)
The resistors chosen for this circuit were 50 kΩ for RL (as in the other topologies) and 500 Ω for RIN to achieve a gain of 100. A large value of RL was needed to make the output voltage swing approach the supply rail despite the low current being passed at the output.

The configuration in Fig. 6 is an improvement on the 2-CC in-amp. As seen, an op-amp is added as an output stage to decrease the output impedance of the in-amp circuit. This op-amp, A3, is tied at the output of the conveyor A2 to produce a positive gain. As in the previous topology, the differential gain is given by Eq. 4. The cascaded op-amp A3 is the same op-amp used in the CCII+ implementation, with the same sizing and biasing. However, it was necessary to increase the compensation of the op-amp to make the 2-Current Conveyor with Op-Amp at the Output (2-CC with Op-Amp) stable. The op-amp's CC was increased from 500 fF to 2 pF, while RC was increased from 10 kΩ to 20 kΩ.
Fig. 6 2-CC with op-amp at the output schematic design [4]
The positive terminal of A3 must be biased to at least VDSAT, instead of simply being grounded. This ensures that the output current mirrors of the conveyor A2 have enough drain-source bias to maintain saturation, and thus enables the conveyor A2 to properly mirror the current iR1. In this design, a 1.25 V bias serves two purposes: aside from ensuring that conveyor A2 effectively mirrors the current iR1, it also raises the in-amp's output reference voltage to 1.25 V, placing the output reference exactly between the rails. The 1.25 V could be derived from the 2.5 V VDD through voltage dividers.

The Improved 2-CC, shown in Fig. 4, is typified by a feedback loop from the unloaded output of A2 to the X terminal of A1. Effectively, the voltages V− and V+ are imposed on the opposite ends of RIN and determine the value of the current i. This same current is forced to flow into the Z terminal of A2. Therefore, the total current leaving the X and Z terminals of A1 is 2i, and AVD can be calculated to be

AVD = VOUT / ((V+) − (V−)) = 2 RL / RIN    (5)
The 3-Current Conveyor (3-CC) topology consists of three CCII+'s and two resistors, with current conveyors B and C cascaded. As seen in Fig. 7, the current entering RL is twice the current entering RIN, so AVD is again given by Eq. 5.
Fig. 7 3-CC schematic design [9]
The CCII+ labeled C has VBIAS at its Y terminal instead of having it directly grounded. Again, this ensures that there is enough VDS at the output current mirrors of conveyor B. As before, VBIAS was set to 1.25 V.
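The gain expressions in Eqs. 4 and 5 differ only by the factor of 2 arising from the doubled conveyor current. A quick numerical check, using the 50 kΩ and 500 Ω resistors quoted for the 2-CC; the 1 kΩ RIN in the second call is an assumed value, shown only to illustrate how the factor-of-2 topologies reach the same gain:

```python
def in_amp_gain(r_l, r_in, conveyed_factor=1):
    """Differential gain A_VD = conveyed_factor * R_L / R_IN.

    conveyed_factor is 1 for the basic 2-CC (Eq. 4) and 2 for the
    Improved 2-CC and 3-CC topologies (Eq. 5)."""
    return conveyed_factor * r_l / r_in

a_2cc = in_amp_gain(50e3, 500)        # Eq. 4: 50k / 500 = 100
a_imp = in_amp_gain(50e3, 1e3, 2)     # Eq. 5 with an assumed 1 kOhm R_IN
```

Doubling the conveyed current halves the RIN needed for a given gain, which trades input-resistor noise and loading against conveyor current error.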
IV. REPRESENTATION OF THE ECG SIGNAL

A representation of an ECG waveform using a piecewise-linear voltage source file is employed to verify the capability of the in-amps to accurately amplify ECG signals amidst common-mode noise. The model used in plotting the signal is adapted from related literature on biomedical signals. In addition, the simulated ECG signal was made to last for only 3 periods, so that simulations would not take much time. Common-mode noise was added to the ECG signal as a combination of 100 mV sinusoids at 60, 120, 180 and 240 Hz. A 1.25 V DC voltage source was also used as a common-mode input voltage to maintain correct biasing of the in-amp.
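A piecewise-linear ECG surrogate with the described common-mode components can be sketched as follows. Only the noise frequencies, the 100 mV amplitudes, the 1.25 V bias and the 3-period duration come from the text; the sample rate, beat period and waveform breakpoints are illustrative assumptions:

```python
import numpy as np

fs = 10_000              # sample rate in Hz (assumed)
beat = 0.8               # one beat period in s (assumed)
t = np.arange(0, 3 * beat, 1 / fs)   # three periods, as in the paper

# Hypothetical piecewise-linear breakpoints (time within a beat, value in mV)
pts_t = [0.0, 0.10, 0.15, 0.20, 0.22, 0.24, 0.26, 0.40, 0.50, 0.8]
pts_v = [0.0, 0.0, 0.1, 0.0, -0.1, 1.0, -0.2, 0.0, 0.15, 0.0]
ecg = np.interp(t % beat, pts_t, pts_v) * 1e-3   # mV-level differential ECG, in V

# Common-mode disturbance: 100 mV sinusoids at 60/120/180/240 Hz
# plus the 1.25 V DC bias that keeps the in-amp inputs in range
common_mode = 1.25 + sum(0.1 * np.sin(2 * np.pi * f * t)
                         for f in (60, 120, 180, 240))

# The two in-amp input nodes see the common-mode voltage plus/minus
# half the differential ECG signal
v_plus = common_mode + ecg / 2
v_minus = common_mode - ecg / 2
```

An ideal in-amp recovers `v_plus - v_minus` (the ECG) while rejecting the much larger common-mode term; the rejected fraction is what the CMRR quantifies.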
V. RESULTS AND ANALYSIS

All the current-mode in-amp topologies achieved the fixed differential gain of 100, a superior CMRR of at least 150 dB, and very high input impedance in the GΩ range. However, the in-amps failed to provide a rail-to-rail input voltage signal swing because of the input signal constraints of the current conveyors. Hence, to allow the μV-range signal level of the ECG as input to the in-amps, the 1.25 V DC input common-mode voltage was necessary.

No significant difference is observed between the 2-CC and the Improved 2-CC in-amps. Both topologies consume less power because they have fewer components compared to the other two topologies. The only difference between them is that the 2-CC has a greater tendency to suffer crosstalk due to the unused output of one of its conveyor blocks. The 3-CC in-amp, on the other hand, also produces the same performance despite the additional conveyor. Simulations have shown that aside from failing to produce an evident improvement in results, the third conveyor also degrades the bandwidth of the circuit and consumes more power.

The 2-CC with Op-Amp in-amp provides the smallest output impedance because of the addition of an op-amp as an output stage. Furthermore, this topology has an easily adjustable output reference voltage. Shown in Table 1 is the comparison between the achieved parameters of the 2-CC with Op-Amp and the specifications of the INA326, which is also a current-mode in-amp. Meanwhile, shown in Fig. 8 is the output plot obtained as the 2-CC with Op-Amp is applied with the simulated ECG signal and common-mode noise. Clearly, the in-amp has indeed rejected the common-mode signals incorporated in the simulated ECG signal.

Fig. 8 Output of 2-CC with Op-Amp with input ECG signal and noise

Table 1 Parameters of 2-CC with Op-Amp vs. INA326

Parameters            | 2-CC with Op-Amp  | INA326
Differential gain     | 102.92            | 0.1 to 10000
Common-mode gain      | 4.16E-7           | 1.995E-04
CMRR                  | 167.87 dB         | 114 dB
Output voltage swing  | 6.14 mV to 2.46 V | Vss+10mV to
Input offset voltage  | -3.24 uV          | 100 uV
Supply current        | 798.51 uA         | 2.4 mA
Power consumption     | 1.99 mW           | Current*(±2.7 to ±5.5)
RIN+, RIN-            | 794 GΩ, 792 GΩ    | 100 GΩ, 100 GΩ
ROUT                  | 58.49 Ω           | -
3-dB bandwidth        | 1.26 MHz          | 1 kHz
PSRR @ 60 Hz          | 85.68 dB          | 110 dB
Settling time         | 639.28 ns         | 0.95 ms
Slew rate             | 1.9 V/us          | Filter limited

VI. CONCLUSION

In-amp designs should always take into account the trade-off between CMRR and stability. Simulations showed that a decrease in the common-mode gain increases the CMRR, thus compromising stability. Generally, cascading more stages to improve certain parameters, such as CMRR, worsens the stability of a system. All the current-mode in-amp topologies achieve impressively high CMRR. Hence, any of the implemented current-mode circuits is an outstanding differential amplifier and thus a potential ECG system block. However, the 2-CC with Op-Amp provides the lowest output impedance, which is essential for connecting the in-amp into a larger system. It also has the advantage of being able to adjust its output reference voltage, allowing it to handle both negative and positive signals with respect to the reference voltage.
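The CMRR entry in Table 1 follows directly from the tabulated differential and common-mode gains, since CMRR in dB is 20·log10(Ad/Acm). A quick check:

```python
import math

a_diff = 102.92   # differential gain, from Table 1
a_cm = 4.16e-7    # common-mode gain, from Table 1

# CMRR in dB = 20 * log10(Ad / Acm)
cmrr_db = 20 * math.log10(a_diff / a_cm)
print(round(cmrr_db, 2))  # 167.87, matching the Table 1 entry
```

This is also why the paper's observation holds: any reduction of the common-mode gain raises the ratio inside the logarithm and hence the CMRR.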
REFERENCES
1. J. G. Webster, Medical Instrumentation – Application and Design, 3rd ed., John Wiley and Sons, Inc., 1998
2. C. Kitchin, L. Counts, A Designer's Guide to Instrumentation Amplifiers, 3rd ed., Analog Devices, Inc., 2006
3. C. Toumazou, F. J. Lidgey, and C. A. Makris, "Current-mode instrumentation amplifier," Short Run Press Ltd., Exeter, 1993
4. K. Koli and K. Halonen, "CMRR enhancement techniques for current-mode instrumentation amplifiers," IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, vol. 47, no. 5, May 2000
5. M. Lorenzo and A. Manzano, "Design and implementation of CMOS rail-to-rail operational amplifiers," Department of Electrical and Electronics Engineering, University of the Philippines, October 2006
6. P. R. Gray, et al., Analysis and Design of Analog Integrated Circuits, 4th ed., John Wiley and Sons, Inc., 2001
7. A. Sedra and G. Roberts, "Current conveyor theory and practice," Analogue IC Design: The Current-mode Approach, Short Run Press Ltd., Exeter, 1993
8. S. J. Azhari and H. Fazlalipoor, "A novel current mode instrumentation amplifier (CMIA) topology," IEEE Transactions on Instrumentation and Measurement, vol. 49, no. 6, December 2000
9. A. A. Khan, M. A. Al-Turiagi and M. A. El-Ela, "An improved current-mode instrumentation amplifier with bandwidth independent of gain," IEEE Transactions on Instrumentation and Measurement, vol. 44, no. 4, August 1995
Cells Separation by Traveling Wave Dielectrophoretic Microfluidic Devices
T. Maturos1, K. Jaruwongrangsee1, A. Sappat1, T. Lomas1, A. Wisitsora-at1, P. Wanichapichart2 and A. Tuantranont1
1 Nanoelectronics and MEMS Laboratory, National Electronics and Computer Technology Center, Pathumthani, Thailand
2 Biotechnology Unit, Prince of Songkla University, Songkhla, Thailand
Abstract— In this work, we present a microfluidic device with a 16-parallel-electrode array and a microchamber for cell separation using traveling wave dielectrophoretic force. The dielectrophoretic PDMS chamber was fabricated using standard microfabrication techniques. The Cr/Au parallel electrode array, 100 μm wide and 300 nm thick, was patterned on a glass slide by sputtering through a microshadow mask. To test the twDEP devices, two different sizes of polystyrene microspheres suspended in deionized water were used as the test cells. Each type of polystyrene was tested in both the separated and the mixed solution. Cells respond to the electric field through various mechanisms depending on the applied voltage and frequency of the AC signals. For the 4.5 μm polystyrene, cells were forced to the center between the electrode arrays and moved along the channel, and traveling wave dielectrophoresis occurred, when the applied voltage was 10 V and the frequency of the applied signals was in the range of 50 kHz-700 kHz. For the 10 μm polystyrene, twDEP occurred when the applied voltage was 7 V and the frequency was in the range 30 kHz-1 MHz. For the mixed solution containing equal amounts of 4.5 and 10 μm microspheres, the big microspheres moved under the twDEP force when the applied voltage was 7 V and the frequency was in the range 25 kHz-1 MHz, while the small microspheres were attached to the electrodes. Therefore, the twDEP device can separate microspheres of different sizes, and it can be further applied to cell separation and manipulation.

Keywords— Traveling wave dielectrophoresis, cell separation, lab-on-a-chip.
I. INTRODUCTION

Recently, there has been growing interest in using MEMS technology for biological and medical applications. The use of electrokinetic effects for manipulating microparticles has been shown to be efficient for biological applications. Examples of electrokinetic phenomena include dielectrophoresis, electrorotation and traveling wave dielectrophoresis [1-5]. Dielectrophoresis (DEP) is the electrokinetic movement of neutral particles induced by polarization in a non-uniform electric field [6]. The time-averaged dielectrophoretic force on a dielectric sphere in a non-uniform electric field is represented by
F = 2πε0εm r³ [Re(fCM)∇E² − (2π/λ) Im(fCM) E²]

where ε0 is the vacuum dielectric constant, εm the relative permittivity of the medium, r the particle radius, fCM the Clausius-Mossotti factor, and εp* and εm* the relative complex permittivities of the particle and the medium, respectively. Whether the DEP force is positive or negative is determined by the real part of the Clausius-Mossotti factor. When the dielectric constant of the particle is larger than that of the medium, the first term is positive; the dielectrophoresis is positive and the particle moves towards the locations with the greatest electric field. In contrast, if the dielectric constant of the particle is less than that of the medium, the first term is negative; the dielectrophoresis is negative and the particle is repelled from the locations with the greatest electric field [7, 8].

Traveling wave dielectrophoresis (twDEP) arises in a traveling electric field, which can be produced when a 90º-phase-shifted signal sequence is applied to a parallel electrode array. twDEP occurs when the real part of the force equation is much less than the imaginary part [9]. Since the strength of the force depends strongly on the medium, the particle's electrical properties, the particle's shape and size, and the frequency of the electric field, varying any parameter allows the manipulation of particles with great selectivity, which can be applied in many applications.

In this work we present the design and fabrication of a twDEP device for the separation of cells of different sizes. The chip was characterized by observing the movement of microparticles under different applied twDEP conditions.
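The sign of Re(fCM) discussed above can be evaluated numerically using the standard Clausius-Mossotti expression fCM = (εp* − εm*)/(εp* + 2εm*) with complex permittivities ε* = ε·ε0 − jσ/ω. The permittivity and conductivity values below are typical textbook figures for polystyrene beads in deionized water, assumed here purely for illustration:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def f_cm(freq, eps_p, sigma_p, eps_m, sigma_m):
    """Clausius-Mossotti factor from complex permittivities
    eps* = eps_r*EPS0 - j*sigma/omega."""
    w = 2 * math.pi * freq
    ep = eps_p * EPS0 - 1j * sigma_p / w   # particle
    em = eps_m * EPS0 - 1j * sigma_m / w   # medium
    return (ep - em) / (ep + 2 * em)

# Assumed values: polystyrene eps_r ~ 2.55, sigma ~ 1e-3 S/m
# (dominated by surface conduction); DI water eps_r ~ 78, sigma ~ 2e-4 S/m
f = f_cm(1e6, 2.55, 1e-3, 78, 2e-4)
# Re(f) < 0 at 1 MHz: negative DEP, the regime in which twDEP acts
```

With these assumed values the real part is negative in the hundreds-of-kHz range, consistent with the observation later in the paper that twDEP occurs where DEP is negative and particles are repelled from the electrodes.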
II. EXPERIMENTAL STUDY

A. Dielectrophoretic devices

The dielectrophoretic chamber was fabricated using standard microfabrication techniques. A silicon wafer was cleaned in piranha solution (a 1:4 mixture of 50% H2O2 and
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 851–854, 2009 www.springerlink.com
Fig. 1 Electrode geometry and phase applied on the electrodes (a) and a photo of the twDEP devices (b)
97% H2SO4) at 120 °C for 10 minutes, then carefully rinsed several times in deionized water and dried with a gentle stream of air. After that the silicon wafer was dehydrated by heating at 150-200 °C for 10 minutes. SU-8 photoresist was spin-coated on the silicon wafer using a spin coater (Laurell Technologies Corp. model WS-400A-6NPP), then soft-baked to remove all the solvent in the layer. The photoresist-coated wafers were exposed using an MJB4 mask aligner (SUSS MicroTec), then post-baked to selectively cross-link the exposed portions of the film. The sample was left in a desiccator to cool down slowly at room temperature for more than 13 hours. Finally, the samples were developed, cleaned with deionized water and isopropyl alcohol, and then gently dried with air. Spin speed, exposure time, baking time, and developing time were optimized to achieve a smooth mold surface. The chamber was made from poly(dimethylsiloxane) (PDMS) by solvent casting and drilling. PDMS was prepared by mixing the Sylgard precursor with a curing agent at a ratio of 10:1 by volume. The prepolymer mixture was degassed at 20-50 mTorr at room temperature in a desiccator pumped with a mechanical vacuum pump for 10 minutes to remove any air bubbles. The PDMS mixture was gradually poured onto the SU-8 master mold to a height above the depth of the designed chamber. After the PDMS was cured at 100 °C for 30 minutes on the mold, the molded polymer samples were peeled off.
Fig. 2 Photograph of the twDEP chip (a) and the experimental setup (b)
The mask for fabricating the electrodes was designed with 8 parallel bars using L-Edit software, as shown in Fig. 1. The Cr/Au electrode array, 100 μm wide and 300 nm thick, was patterned on a glass slide by DC sputtering through a microshadow mask. The chromium and gold were sputtered under argon plasma. The sputtering pressure, sputtering current, and time for chromium were 3x10-3 mbar, 0.2 A, and 2 minutes, respectively. Next, gold was sputtered under an argon pressure of 3x10-3 mbar and a sputtering current of 0.2 A for 10 minutes. The sputtering was conducted at room temperature. The chamber and electrodes were treated under oxygen plasma (Harrick Scientific Corp. model PDC-32G) before being attached to each other.
Fig. 3 The 4.5 μm microspheres were collected at the center between the electrode arrays when the voltage and frequency applied to the electrode array were 10 V and 500 kHz
B. Separation test

To test the twDEP devices, polystyrene microspheres suspended in water were used as the test cells. Polystyrene microspheres of 4.5 μm mean diameter and a density of 4.99 × 10⁸ particles per milliliter were purchased from Polysciences, Inc., Warrington, PA, USA, while the 10 μm polystyrene microspheres were purchased from Sigma-Aldrich, USA. The polystyrene solution was diluted 1000 times in deionized water. A 10 milliliter polystyrene solution was dropped into the chamber. The chamber was then closed on top with a glass slide. The electrodes were energized with four square-wave signals. The cell movements under the electric fields were observed using a CX41 microscope (Olympus). Cells respond to the electric field through various mechanisms depending on the frequency of the applied AC signals. The experimental setup is shown in Fig. 2.
Fig. 4 The 10 μm microspheres were collected at the center between the electrode arrays when the voltage and frequency applied to the electrode array were 7 V and 250 kHz
Fig. 5 The mixture of the two types of microspheres at an applied voltage and frequency of 7 V and 25 kHz. The big microspheres were collected at the center between the electrode arrays (solid line) while the small microspheres were attached to the electrodes (dotted line)
III. RESULTS AND DISCUSSION

Two types of cells were tested separately to find the optimum conditions under which traveling wave dielectrophoresis occurs. For the 4.5 μm microspheres, when the voltage and frequency applied to the electrode array were 10 V and 50 Hz, cells moved over the electrode edges in a vertical circular motion and moved faster when the frequency or voltage was increased. When the frequency of the applied signals was more than 350 Hz, cells started moving slowly and tended to move to the center of the electrode array. As the frequency of the applied signals was in the range of 50 kHz-700 kHz, cells were collected at the center between the electrode arrays and moved slowly along the channel, as shown in Fig. 3. When the frequency of the applied signals was more than 700 kHz, cells started moving out of the center between electrodes. This indicates that the 4.5 μm cells are subjected to a traveling wave dielectrophoretic force when the applied voltage is 10 V and the frequency is in the range 50 kHz-700 kHz. These results are consistent with the theory that twDEP occurs when the frequency of the applied AC fields is in the range where dielectrophoresis (DEP) is negative, so that cells experiencing the twDEP force are repelled from the electrodes rather than being trapped by positive DEP.

When the proper voltage and frequency were applied to the 10 μm microspheres, the big cells showed similar behavior to the small cells. In this case, twDEP occurs when the applied voltage is 7 V and the frequency is in the range of 30 kHz-1 MHz, as shown in Fig. 4.

For the mixed solution containing equal amounts of 4.5 and 10 μm microspheres, the big cells start moving to the center between the electrode arrays when the frequency reaches 5 kHz. When the frequency was 25 kHz-1 MHz, the big cells were collected at the center between electrodes and moved under the twDEP force while the small microspheres were attached to the electrodes, as shown in Fig. 5. The threshold frequency at which the big and small cells start separating was recorded when the big cells started moving to the center between the electrode arrays and the small cells were attached to the electrodes. The threshold frequency at each applied voltage is shown in Fig. 6. It can be seen that at a higher applied voltage the cells separate at a higher applied frequency. However, the best separating condition for the 4.5 and 10 μm microspheres was at an applied voltage and frequency of 7 V and 25 kHz-1 MHz, respectively. This result can be further applied in biological and medical applications such as motion control and cell selectivity.
Fig. 6 The relationship between the applied voltage (V) and the threshold frequency (kHz) for cell separation
IV. CONCLUSIONS
REFERENCES
The traveling wave dielectrophoretic microfluidic devices were successfully fabricated. When energized the electrodes with four square wave signals, cells response to the electric field in various mechanisms depending on the amplitude and frequency of applied AC signals. Cells with different size were subjected to traveling wave dielectrophoresis in the different voltage and frequency range. And the results for the mixed of two type of cells show that at the proper applied voltage and frequency, two cells can be separated. Furthermore, this twDEP separation approach could be applied to the flow system and integrated with other devices for using in biological and medical application.
1. 2.
3.
4. 5. 6. 7.
8.
9.
_______________________________________________________________
Huang, Y., and Pethig, R. (1991) Electrode design for negative dielectrophoresis. J Meas Sci Technol 2: 1142-1146. Wang, X.-B. Huang, Y., Gascoyne, P. R. C Becker, F. F., Hölzel, R. and Pethig, R. (1994) Changes in Friend murine erythroleukaemia cell membranes during induced differentiation determined by electrorotation. Biochimica et Biophysica Acta 1193: 330-344. Schnelle, T., Hagedorn, R., Fuhr, G., Fiedler, S., and Müller, T. (1993) Three-dimensional electric field traps for manipulation of cells-calculation and experimental verification. Biochimica et Biophysica Act 131: 127-140 Wang, X.-B. Huang, Y.,Burt, J.P.H., Markx, G.H., and Pethig, R. (1993) J. Phys. D: Appl. Phys. 26: 1278-1285 Huang, Y., Wang, X.-B., Tame, J.A., and Pethig, R. (1993) J. Phys. D: Appl. Phys. 26: 1528-1535 Pohl, H. A. (1978) Dielectrophoretic. Cambridge, UK. Fu, L. M., Lee, G. B. (2004) Manipulation of microparticles using new modes of traveling-wave-dielectrophoretic force : numerical simulation and experiment, IEEE/ASME Transactions on mechatronics, vol 9, 2004, pp. 377-383. Wang, X.-B., Hughes, M.P., Huang, Y., Becker, F.F., Gascoyne, P. R. C. (1995) Non-uniform spatial distributions of both the magnitude and phase of AC electric fields determine dielectrophoretic force. Biochimica et Biophysica Acta. 1243: 185-194 Batchelder, J. S. (1983) Dielectrophoretic manipulator. Rev. sci. Instrum 54: 300-302
IFMBE Proceedings Vol. 23
A Novel pH Sensor Based on the Swelling of a Hydrogel Membrane

K.F. Chou, Y.C. Lin, H.Y. Chen, S.Y. Huang and Z.Y. Lin

Department of Biomedical Engineering, Yuanpei University, Hsin Chu, Taiwan

Abstract — A novel pH sensor consisting of a pH-sensitive hydrogel and a conductive polymer layer was developed. Monomers of hydroxyethyl methacrylate (HEMA) were polymerized and modified by UV radiation to become the pH-sensitive hydrogel; the pH-sensitive functional groups of poly(HEMA) were created by the UV treatment. The sensing principle of the device is based on the piezoresistive effect induced in the conductive polymer by the swelling of the hydrogel, which varies with the pH value of the testing solution. The correlation between the sensitivity of the device and the UV radiation dosage was investigated. The stress-strain relationship of the polymer membranes and the output characteristic of the pH sensor were measured, and a computer-aided analysis based on a hyperelastic-viscous model of the hydrogel was performed using the finite element method. An optimized design of the pH sensor was also found from the simulation process.

Keywords — hydroxyethyl methacrylate, pH sensitive, sensor.
II. MATERIAL AND METHOD

A. Sensing principle

The sensing device is composed of a pH-sensitive hydrogel layer (poly(HEMA)), a resistive Ag gel layer, and a SiO2 substrate. The operating principle of the sensor is shown in Fig. 1. Swelling of the polymer film is induced by absorption of the testing solution, and the degree of swelling changes with the pH value of the solution. The resistance layer under the hydrogel deforms with the swelling of the sensitive layer, so its resistance changes. The relationship between the relative resistance variation and the strain is linear, as in formula (1):

ΔR/R ∝ ε    (1)

The resistance layer is connected to a Wheatstone bridge, whose output voltage can be measured as the resistance of the Ag gel changes.
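The bridge readout just described can be sketched numerically. The following is an illustrative quarter-bridge calculation, not the authors' implementation; the excitation voltage, nominal resistance, and resistance change are assumed values (the nominal resistance is only of the order shown later in Fig. 7).

```python
def bridge_output(v_exc, r_nominal, delta_r):
    """Output voltage of a quarter Wheatstone bridge: three fixed arms of
    r_nominal and one sensing arm of r_nominal + delta_r (the Ag gel)."""
    r_sense = r_nominal + delta_r
    v_sense = v_exc * r_sense / (r_sense + r_nominal)  # sensing half-bridge
    v_ref = v_exc / 2.0                                # reference half-bridge
    return v_sense - v_ref

# Assumed example: 5 V excitation, 160 ohm Ag gel, +8 ohm swelling-induced change.
v_out = bridge_output(5.0, 160.0, 8.0)  # about 61 mV
```

With no strain (delta_r = 0) the bridge is balanced and the output is zero, which is why the bridge topology is preferred over a plain voltage divider for small resistance changes.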
I. INTRODUCTION

In past years, many papers on pH sensors based on hydrogels have been published [1-9]. The operating principles of pH sensors with a hydrogel substrate are all based on the bending of a flexible matrix induced by the swelling of the gel. The geometric deformation can be transferred into a change of capacitance or resistance, and similar principles have been applied in the design of various sensors. Herber et al. [2] developed a hydrogel biosensor for monitoring the partial pressure of CO2 in the stomach. Han et al. [1] combined a pressure transducer and a pH-sensitive gel to develop a biosensor for measuring the concentration of glucose in blood. Trinh et al. [9] investigated the effect of polymerization conditions on sensitivity and response time, and predicted the characteristics of the sensor using the Mooney-Rivlin model and the finite element method. Most researchers chose PVA-PAA copolymer as the pH-sensitive material. In our previous studies [10,11], the swelling degree of poly(HEMA) exposed to gamma rays in buffer solutions changed with the pH value of the buffer, and the pH-sensitive behavior could be modified by the irradiation dosage. In this study, however, we modified the poly(HEMA) with ultraviolet irradiation rather than gamma rays in order to preserve the mechanical properties of the hydrogel thin film.
Fig. 1 Operating principle of pH sensor

B. Materials

All chemical reagents were purchased from Tokyo Kasei Kogyo Co. Ltd. and were used as received unless otherwise stated. Water was distilled and deionized to 18 MΩ resistivity. Aqueous solutions (10 wt%) of ammonium persulfate (APS) and sodium metabisulfite (SMBS) were used together as initiators and were prepared fresh before every use. Ethylene dimethacrylate (EDMA) was used as the crosslinking agent.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 855–858, 2009 www.springerlink.com

C. Polymerization of poly(HEMA)

The monomer mixture comprised the monomer (HEMA), crosslinking agent (EDMA), and solvent (D.I. water). The HEMA monomer was diluted with water (HEMA : water = 60 : 40). Next, EDMA, ammonium persulfate and sodium metabisulfite were added to the solution and mixed ultrasonically for 5 min. Polymerization of the monomer mixture was initiated by APS and SMBS at concentrations of 0.5 wt% and 0.2 wt% of the total monomer, respectively. Prepolymerization occurred at 65 °C for half an hour. The prepolymers were then spin-coated on the resistance layer to form a polymer thin film 300 μm thick. After prepolymerization, the specimens were exposed to 360 nm UV radiation for 2 h (a dosage of 72 J/cm²).

D. Characteristic analysis of the sensitive layer

The chemical structure of poly(HEMA) was determined by FTIR. The hydrogel samples were immersed in various pH buffer solutions, and their weight changes were measured with an electronic balance. The swelling degree (SR) of the hydrogel was calculated from these results; it is defined as in equation (2), where Wt is the weight of the thin film at time t and W0 is the initial weight:

SR = (Wt − W0) / W0    (2)

E. Fabrication of Sensing Chip

The fabrication process of the sensing chip is shown in Fig. 2 and was divided into three steps: (a) two square SiO2 plates were made using a PMMA mold, and an Ag gel layer was spin-coated on the SiO2 substrate; (b) an Au conducting electrode was fabricated by sputter coating, the bump was removed using acetone, and the poly(HEMA) sensitive layer was filled in by the siphon effect; (c) a second Ag gel layer was coated on the hydrogel layer.

Fig. 2 Fabrication of pH sensor

F. pH measurement system

The infrastructure of the pH measurement system is shown in Fig. 3. The pH sensor contains a sensing chip and a Wheatstone bridge. In the signal conditioning part, the output signal is amplified and filtered by an AD620 IC. The voltage output is then converted to a digital signal with an NI 6024E DAQ. Finally, the curve of pH value versus time is displayed on the virtual instrument panel.
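The swelling degree of equation (2) is straightforward to compute; here is a small sketch, reported as a percentage to match the SR (%) axis of Fig. 5. The weights are invented example values, not measurements from the paper.

```python
def swelling_degree(w_t, w_0):
    """Equation (2): fractional weight gain of the hydrogel film at time t,
    SR = (Wt - W0) / W0."""
    return (w_t - w_0) / w_0

# Example: a 20.0 mg dry film weighing 22.8 mg after immersion in buffer.
sr_percent = 100.0 * swelling_degree(22.8, 20.0)  # 14.0 %
```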
Fig. 3 pH measurement system
III. RESULTS AND DISCUSSION

A. Characteristic analysis of the sensitive layer

The effect of UV radiation on the chemical structure of poly(HEMA) was analyzed by FTIR; the spectrum is shown in Fig. 4. Poly(HEMA) thin film exposed to 360 nm UV radiation at 72 J/cm² presents two significant shifts and absorptions, at 1244 cm⁻¹ and 3500 cm⁻¹ respectively. The absorption band at 1244 cm⁻¹ corresponds to the acrylate group and the crosslink structure of EDMA. This implies that the UV radiation energy destroys the crosslink structure and creates weakly basic groups. On the other hand, the decrease of the restricting force was due to the breaking of the crosslink structure; therefore, the absorption peak of the hydrophilic groups was enhanced. This inference was verified by the swelling experiment, whose results are presented in Fig. 5: the swelling degree of poly(HEMA) immersed in acidic solution was higher than in basic solution, and the swelling ratio decays exponentially with the pH value of the buffers. The Young's modulus of poly(HEMA), obtained by tensile testing, is 0.73 MPa. The swelling stress distribution was simulated with Comsol Multiphysics software, as shown in Fig. 6; the deformation was constrained along the y and z axes.

Fig. 6 Stress distribution in the P(HEMA) thin film

B. Output Characteristic of the resistance Ag gel layer

The plot of resistance of the Ag gel layer versus pH value of the buffer solutions is shown in Fig. 7. The trend of the data can be divided into two stages: the resistance of the Ag gel increases with increasing pH in the range pH 1 to pH 4; the trend then reverses, and the resistance decreases with increasing pH in the range pH 4 to pH 7. The characteristic of the Ag gel matched the swelling behavior of the sensitive
Fig. 4 FTIR spectrum of P(HEMA)

Fig. 5 Plot of swelling degree of P(HEMA) versus pH value of buffers

Fig. 7 Plot of resistance of Ag gel versus pH value of buffers
layer for pH > 4. Nevertheless, it might be that moisture raises the conductivity of the Ag gel, such that the opposite trend occurred for pH < 4.

Over 50% of biotinylated 5-ASA attaches to actin filaments through streptavidin cross-linking. Absorbance spectra of biotinylated 5-ASA and of 5-ASA attached to actin filaments are shown in figure 3 A & B.

B. In Vitro Motility Assay

Sliding velocity measurement is another means of confirming the non-covalent attachment of actin to 5-ASA. The maximum velocity of a single bundle of actin filaments on myosin heads alone has been reported as 4.0-6.0 μm/s (4). The average velocity calculated here was 3.59 μm/s. The direction of motile actin
Figure 2: TEM images of actin filaments. 2A represents 5-ASA molecules on actin filaments, while 2B shows actin filaments without 5-ASA molecules.
Quantitation of the attachment of 5-ASA to biotinylated actin filaments was determined by UV-VIS spectroscopy. The concentration of biotinylated drug was calculated by using
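The text breaks off at this point; absorbance-based quantitation of this kind is commonly done with the Beer-Lambert law A = ε·l·c. The sketch below assumes that approach; the molar absorptivity and absorbance readings are placeholder values, not data from the paper.

```python
def concentration(absorbance, epsilon, path_cm=1.0):
    """Concentration (mol/L) from absorbance A, molar absorptivity epsilon
    (L mol^-1 cm^-1) and cuvette path length l (cm): c = A / (epsilon * l)."""
    return absorbance / (epsilon * path_cm)

# Hypothetical readings: total biotinylated 5-ASA vs. the actin-bound fraction.
c_total = concentration(0.80, 20000.0)
c_bound = concentration(0.40, 20000.0)
fraction_attached = c_bound / c_total  # 0.5, consistent with the ~50% reported
```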
Figure 3: UV-VIS spectra of 5-ASA and biotinylated 5-ASA (A), and of biotinylated 5-ASA and actin-attached 5-ASA (B).

filaments attached to the drug was tracked by determining the centroid of the actin-drug complex in each frame. After superimposing consecutive frames, the path followed by the actin filaments was obtained, as shown in figure 4.
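The frame-by-frame tracking described above can be sketched as follows: find the centroid of the actin-drug complex in each binarized frame, then take the speed as centroid displacement over the frame interval. The pixel size and frame interval below are assumed values, not taken from the paper.

```python
import math

def centroid(frame):
    """Centroid (row, col) of the nonzero pixels of a 2D list."""
    pts = [(r, c) for r, row in enumerate(frame)
           for c, v in enumerate(row) if v]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def speeds(frames, um_per_px=0.1, dt_s=0.5):
    """Speed (um/s) of the tracked object between consecutive frames."""
    cs = [centroid(f) for f in frames]
    return [math.hypot(b[0] - a[0], b[1] - a[1]) * um_per_px / dt_s
            for a, b in zip(cs, cs[1:])]

# Example: one bright pixel moving 3 px to the right between two frames.
f1 = [[1, 0, 0, 0], [0, 0, 0, 0]]
f2 = [[0, 0, 0, 1], [0, 0, 0, 0]]
v = speeds([f1, f2])  # one speed value per consecutive frame pair
```

Superimposing the centroids from all frames gives the filament path of figure 4; averaging the per-frame speeds gives the reported mean velocity.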
In-Vitro Transportation of Drug Molecule by Actin Myosin Motor System
Figure 4: Histogram of the velocity of drug-attached actin filaments, calculated by processing each frame in Matlab (A). The track of motile actin filaments attached to 5-ASA (B).
IV. DISCUSSION & CONCLUSION

The development of nano-robotic devices based on molecular motors (actin-myosin) is long anticipated, and the attachment of cargo to actin is a prerequisite for realizing such devices. Attachment of actin filaments to drugs by streptavidin-biotin cross-linking has not been reported before. In the present study, we report a new methodology to attach 5-ASA directly to actin filaments, which is beneficial because the load on the actin filaments is reduced. If the drug were instead coated on a cargo particle that was then attached to actin filaments, the velocity of the actin filament would be reduced because the load on the filament would increase. We calculated the velocity of drug-attached actin filaments as 3.59 μm/s, which is sufficient to carry the drug in the desired direction. It has been reported that when a polystyrene bead is attached to actin filaments, the velocity falls to 0.99 μm/s (7); if the drug were coated on such polystyrene beads before attachment, the actin filament would not be able to carry it. Attachment of the drug to the actin filament was confirmed by fluorescence microscopy, where the smooth surface of the actin filaments becomes distorted when drug molecules attach to it, as shown in figure 1A & B. Similar structural distortion in actin
filaments was also observed in the TEM images (figure 2A & B). Quantitation of the drug by UV-VIS spectroscopy showed that approximately 50% of the drug in solution attached to actin filaments via streptavidin-biotin cross-linking; the attachment of 5-ASA by this method is therefore both stable and efficient. In the present work, a simple, inexpensive and rapid method for attaching a drug to a single bundle of actin filaments has been reported, using biotin and streptavidin as cross-linkers. The attached drug can be exploited for transportation, toward realizing nano-robotic devices in the future.
REFERENCES

[1] Mansson A, Sundberg M, Bunk R, Balaz M, Nicholls IA, Omling P, Tegenfeldt JO, Tagerud S, Montelius L (2005) Actin-based molecular motors for cargo transportation in nanotechnology—potentials and challenges. IEEE Transactions on Advanced Packaging 28:547-554
[2] Schliwa M, Woehlke G (2003) Molecular motors. Nature 422:759-765
[3] Månsson A, Sundberg M, Bunk R, Balaz M, Rosengren JP, Lindahl J, Nicholls I, Omling P, Tågerud S, Montelius L (2004) Nanotechnology and actomyosin motility in vitro on different surface chemistries. Biophysical Journal 86:58a
[4] Katsuhisa T, Ken S (1991) A physical model of ATP-induced actin-myosin movement in vitro. Biophys J 59:343-356
[5] Neish CS, Henderson RM, Edwardson JM. Visualisation of the streptavidin-biotin interaction using AFM. Biophys J (Annual Meeting Abstracts) 82(1):337
[6] Bagga E, Kumari S, Kumar R, Bajpai RP, Bharadwaj LM (2005) Covalent immobilization of myosin for in-vitro motility of actin. Pramana 65:967-972
[7] Kaur H, Das T, Kumar R, Ajore R, Bharadwaj LM (2008) Covalent attachment of actin filaments to Tween 80 coated polystyrene beads for cargo transportation. Biosystems 92:69-75
Tumour Knee Replacement Planning in a 3D Graphics System

K. Subburaj1, B. Ravi1 and M.G. Agarwal2

1 OrthoCAD Network Research Centre, Indian Institute of Technology Bombay, Mumbai, India
2 Department of Surgical Oncology, Tata Memorial Hospital, Mumbai, India
Abstract — Limb salvage surgery has replaced amputation as the treatment of choice for sarcomas of the extremities. However, complications such as prosthesis loosening and fracture of bone or prosthesis continue to occur due to poorly aligned prostheses or unconsidered bone deformities. These can be minimized by detailed implantation planning: intervention, resection, selection, and alignment decisions considering anatomical variations. Previous works employed interactive identification of anatomical landmarks and prosthesis position planning by superimposing a prosthesis drawing on a radiographic image, which is cumbersome and error-prone. We present an automated methodology for mega endoprosthesis implantation planning in a 3D computer graphics environment. First, a virtual anatomical model is reconstructed by stacking and segmenting CT scan images. A neighborhood-configuration-based 3D visualization algorithm has been developed for fast rendering of the volumetric data, enabling a quick understanding of anatomical structures. Key skeletal landmarks used for implantation are automatically localized using curvature analysis of the 3D model and knowledge-based rules. Anatomical details (mainly dimensions and reference axes) are extracted based on the landmarks and used in resection planning. A decision support method has been developed for segregating prosthesis components into three sets: 'most suitable', 'probably suitable', and 'not suitable' for a particular patient. The geometrical landmarks of the prosthesis components are mapped with respect to the anatomical landmarks of the patient's model to derive alignment relationships. 3D curved medial axes of both the prosthesis and anatomical models are used for reference and alignment. A set of selection and positional accuracy measures has been developed to evaluate the anatomical conformity of the prosthesis. The computer-aided methodology is illustrated for tumour knee endoprosthetic replacement.
It is shown to reduce the time required for implantation planning and improve the quality of the outcome. The 3D environment is also more intuitive and easier to use than the traditional approach relying on 2D images.
Keywords — tumour knee replacement, virtual 3D reconstruction, prosthesis selection, prosthesis alignment, anatomical understanding

I. INTRODUCTION

Virtual surgery planning is an interdisciplinary contribution of medical and computational knowledge to the health sector. The goals of preoperative planning tools are threefold: (i) enable a surgeon to understand the patient's anatomy and the surgical challenges it poses; (ii) help in the development of customized prostheses, alignment of prosthesis components, and evaluation of post-surgical outcomes; and (iii) present the data in a way that enables fast and accurate decisions. Limb salvage surgery (prosthetic replacement after excision of the tumour-affected bone region) has replaced amputation (excision of the lower part of the limb) as the treatment of choice for sarcomas of the extremities (e.g. knee) [1]. However, complications such as prosthesis loosening and fracture of bone or prosthesis continue to occur [2,3]. These can be reduced by detailed implantation planning (intervention, resection, selection, and alignment), carried out bearing in mind that the anatomy of each person is unique [4,5]. A typical prosthesis set, used in massive replacement of the distal femur portion of the knee due to primary bone tumour, is composed of femoral condyle (FC), space fillers (SF), femoral fork (FK), tibial tray (TT), tibial poly (TP), femoral and tibial stems (FS and TS), and articulation and locking elements (Fig. 1).

Fig. 1 Endoprosthesis used for reconstructing distal femur/knee after excision of tumour (a) prosthesis design verification using radiographic image (b) typical components of the prosthesis
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 906–910, 2009 www.springerlink.com
Previous work employed interactive identification of anatomical landmarks and prosthesis positioning by superimposing a 2D prosthesis drawing on a radiographic image (Fig. 1a). This approach is cumbersome and error-prone, especially for complex anatomical situations such as total reconstruction of the knee joint [5,6,7]. It can be eliminated by using patient-specific 3D computerized anatomical models [4,8]. Even though significant research has been carried out on arthritic knee surgery, 3D planning solutions are rare, especially for tumour knee replacement (TKR). To our knowledge, there is no reported literature on virtual surgery planning of TKR, despite increasing demand from fields such as computer- and robotic-assisted surgery. In this work, we focus on using 3D computer graphics and geometric reasoning algorithms to aid surgeons in planning tumour knee replacement.
II. MATERIALS AND METHODS

A. Reconstruction of 3D Anatomical Models
The reconstruction of 3D anatomical models starts with imported axial CT images in DICOM (Digital Imaging and Communications in Medicine) format [8]. These images are first processed with filters to reduce noise while preserving edge information in the form of intensity gradients. Global thresholding of bony tissue, based on a range of HU values, is carried out together with local adaptive thresholding, which segments by analyzing the 26 neighbors (N26). Next, a 3D region-growing algorithm groups the bone data from the thresholded regions. Due to the similarities (overlap) between bone and surrounding tissues, classification based only on thresholding is not feasible, so 3D morphological operators (both manual and automatic) are used to close discontinuities in the outer (periosteal) and inner (endosteal) contours. The segmented data are rendered in two modes: direct or volume rendering using transfer functions, and surface rendering by fitting triangles using surface tiling.

B. Identification of Anatomical Landmarks

Anatomical landmarks are distinct regions or points on bones with unique shape characteristics in their vicinity. Manual location of landmarks and related measurements consumes time, requires a high level of expertise, and may lack accuracy. Geometrically invariant measures such as curvature, concavity, and convexity are useful in identifying landmarks on a 3D anatomical model.

We have developed a methodology for automatic localization and identification of anatomical landmarks on a 3D bone model [9]. The model surface is segmented into regions based on Gaussian and mean curvature gradients, which are then classified as peak, ridge, pit and ravine. These anatomical landmarks are constrained in a spatial relationship that is the same for all knee models. To encode these constraints in a network diagram, we characterize the edge pairs by the positional adjacency of a landmark relative to its neighboring landmarks. Skeletal landmarks are automatically identified by an iterative process using a spatial adjacency relationship matrix formed from these relations between the features. Figure 2 shows the anatomical landmarks on the distal femur and proximal tibia: the most distal points of the medial and lateral condyles (MCF and LCF), medial and lateral epicondyles (MEF and LEF), medial and lateral peaks of the anterior patella tracking edges (MPF and LPF), medial and lateral peaks on the tibial condylar plateau (MPT and LPT), tibial tuberosity (TTT), medial and lateral intercondylar tubercles (MIT and LIT), and apex process of the fibula (HF).
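The peak/ridge/pit/ravine labeling can be illustrated by the signs of the Gaussian curvature K and mean curvature H at a surface point (Besl-Jain style surface types). This is a minimal sketch, not the authors' code; the tolerance eps is an assumed numerical parameter.

```python
def surface_type(K, H, eps=1e-6):
    """Classify a surface point from its Gaussian (K) and mean (H) curvature."""
    if K > eps:                       # elliptic point
        return "peak" if H < -eps else "pit"
    if abs(K) <= eps:                 # parabolic or planar point
        if H < -eps:
            return "ridge"
        if H > eps:
            return "ravine"
        return "flat"
    return "saddle"                   # hyperbolic point (K < 0)

# Example: a convex condyle-like dome has K > 0 and H < 0 (outward normals).
label = surface_type(0.02, -0.10)  # "peak"
```

Landmark candidates would then be the connected regions of "peak", "ridge", etc., to be filtered by the spatial adjacency rules described above.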
Fig. 2 Anatomical landmarks (a) on distal femur (b) on proximal tibia

C. Assessing Anatomical Parameters

Primary anatomical parameters are extracted using the associated landmarks. These include the medio-lateral length
(ML), anterio-posterior length (AP), and inner and outer diameters (d and D) of the femur (MLF, APF, dF, and DF) and tibia (MLT, APT, DT, and dT), respectively. Others include the femur valgus angle (V), femur bone curvature (RCF), and resection length (RL), decided by the surgeon according to the spread of the disease.

D. Prosthesis Components Selection

Prosthesis component selection is driven by the extracted anatomical details and a decision-based methodology. The standard modular prosthesis components are classified into three major size categories: small (S), medium (M), and large (L) (the ranges were decided by one of our internal studies on knee morphometry). These components are indexed in terms of their geometric characteristics and design-driven parameters to create a database. Critical components (tibial tray, femoral condyle, and tibial poly) are selected independently based on anatomical parameters. Other components (FK, SF, FS, and TS) are selected based on both their dimensions and the influence of previously selected components. Under this strategy, the best selection flow was found to be TT-FC-TP-FK-SF-TS-FS; remaining components are chosen based on the sizes of the mating components. The selection is performed in two steps: first, components that are undersized or oversized are eliminated; then, the dimensions of the components are mapped to the corresponding measured anatomical parameters to form a fuzzy decision tree based on pre-defined rules compiled from surgeons' experience. The TT and FC are first selected based on APT and MLF respectively; the TP is selected based on MLFC and APTT; the remaining components are selected based on the dimensions of these three. FK is chosen based on DF and influenced by DFC. FE is chosen based on RL and RCL. FS and TS are selected based on dF and dT. Each branch of the decision tree is assigned a weight to evaluate its suitability. The choice of prosthesis components is classified with a qualitative tag: 'most suitable', 'probably suitable', or 'not suitable'.
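The two-step selection above can be sketched as an eliminate-then-rank routine. This is a hedged illustration only: the component names, widths, and tolerance below are invented, not the study's catalog, and the real system uses a weighted fuzzy decision tree rather than a single dimensional error.

```python
CATALOG = {  # tibial tray name -> medio-lateral width (mm), hypothetical values
    "TT-S": 58.0, "TT-M": 64.0, "TT-L": 72.0,
}

def select_tray(ml_t, tolerance=6.0):
    """Return (name, tag) pairs: components outside the tolerance are
    'not suitable'; the rest are ranked by absolute dimensional error."""
    fits = {name: abs(width - ml_t) for name, width in CATALOG.items()}
    ranked = sorted(fits, key=fits.get)
    out = []
    for i, name in enumerate(ranked):
        if fits[name] > tolerance:
            out.append((name, "not suitable"))
        elif i == 0:
            out.append((name, "most suitable"))
        else:
            out.append((name, "probably suitable"))
    return out

# Example: a tibial ML of 70 mm (the Section III patient) favors the large tray.
choice = select_tray(70.0)
```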
E. Prosthesis Positioning

Positioning of prosthesis components with the 3D anatomical model is carried out with reference to anatomical landmarks and reference axes. A set of reference axes of both the anatomical (AA) and prosthesis (AP) models is generated by computing the medial axis. Major reference axes include the longitudinal axes of the femur (AF) and tibia (AT), and the axes of the FS (AFS) and TS (ATS). Registering components with the corresponding bone is carried out in two steps. First, matching reference axes such as AF with AFS and AT with ATS; this is repeated for the entire sets AA and AP. Next, point-to-point registration (iterative closest point) of anatomical landmarks on the bone model with geometric landmarks on the prosthesis components. This includes surface matching of the prepared tibial and femoral bone surfaces with the TT and FK respectively. Space fillers are chosen from the 'most suitable' prosthesis components set after aligning FC, FK, and FS on the femoral side, and TT, TP, and TS on the tibial side. The orientation of the bone is not changed during positioning.

F. Anatomical Suitability Evaluation

Modular prosthetic components may require a small compromise in conforming to anatomical shape and size, which affects the functional outcome of the joint. A set of anatomical suitability metrics was evolved to evaluate the selected prosthesis components considering their suitability with respect to the patient's anatomy (Fig. 3). These measures are: geometric error (EG), the dimensional difference between the resected bone and the selected prosthesis components; curvature error (EC), the difference between the measured bone curvature and the curvature obtained after implanting a prosthesis with a curved stem, with and without osteotomy; reconstruction length error (ERL), the difference between the actual RCL and the effective length of the prosthesis (EPL), given by the distance between the tibial component's bottom and the femoral component's top surface; and knee centre shift error (ECS), the difference between the natural knee's articulation centre and the artificial implant's articulation centre after implantation. These metrics are important in deciding whether a standard modular prosthesis can be used or a customized version is required for the patient.

Fig. 3 Anatomical suitability metrics based on (a) geometry (b) bone curvature (c) reconstruction length (d) knee shift
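Each suitability metric reduces to a simple difference between a bone-derived and a prosthesis-derived quantity. The sketch below illustrates this; the raw inputs are invented values back-computed to reproduce the metric magnitudes reported in Section III, not measured data.

```python
def suitability_metrics(bone, prosthesis):
    """Each argument is a dict with keys: width (mm), curvature (mm),
    length (mm), knee_centre (mm along the mechanical axis)."""
    return {
        "EG":  abs(bone["width"] - prosthesis["width"]),          # geometric
        "EC":  abs(bone["curvature"] - prosthesis["curvature"]),  # curvature
        "ERL": abs(bone["length"] - prosthesis["length"]),        # recon length
        "ECS": abs(bone["knee_centre"] - prosthesis["knee_centre"]),  # shift
    }

# Illustrative inputs chosen to yield EG 6, EC 4, ERL 3, ECS 4.5 (mm).
bone = {"width": 70.0, "curvature": 900.0, "length": 120.0, "knee_centre": 0.0}
pros = {"width": 64.0, "curvature": 896.0, "length": 117.0, "knee_centre": 4.5}
m = suitability_metrics(bone, pros)
```

Comparing each metric against a surgeon-defined limit then decides between a standard modular prosthesis and a customized one.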
III. RESULTS

In this study, distal femur knee replacement for a patient (35/M) has been taken as an example to illustrate the methodology. The 3D model of the knee joint is reconstructed from 468 CT axial slices in DICOM format (Fig. 4).
Fig. 4 Reconstructed 3D anatomical model of the knee

Anatomical landmarks and parameters are extracted and the prosthesis is selected based on these data: femur (ML: 64 mm, AP: 60 mm, D: 28-25 mm, d: 16-13 mm, VA: 4.6°, RL: 120 mm), tibia (ML: 70 mm, AP: 58 mm, d: 14 mm). The 'most suitable' set is the large TT, FC, and TP. Generated reference axes along with landmarks are used in positioning the prosthesis and aligning the knee into pre-operative posture (Fig. 5). The final suitability metrics are EG: 6 mm, EC: 4 mm, ERL: 3 mm, and ECS: 4.5 mm. The measured metrics are within limits and do not affect the gait of the limb.

Fig. 5 Aligned distal femur prosthesis in 3D anatomical model

IV. CONCLUSIONS

The implantation planning of knee prosthesis after excision of primary tumour is discussed. The aligned endoprosthesis is rendered with the 3D bone model for visual verification. The suitability metrics give surgeons a numerical measure, directing attention to error-prone steps during the surgical procedure. Integrating 3D geometric reasoning algorithms for anatomical understanding with decision methods makes the system intelligent and independent.

ACKNOWLEDGMENT

This work is a part of the project supported by the Office of the Principal Scientific Adviser to the Government of India, New Delhi.

REFERENCES

1. Agarwal M (2007) Low cost limb reconstruction for musculoskeletal tumors. Cur Opin Orthop 18:561-571
2. Sim IW, Tse LF, Ek ET et al (2007) Salvaging the limb salvage: Management of complications. Eur J Surg Oncol 33:796-802
3. Malawer MM, Sugarbaker PH (2001) Musculoskeletal cancer surgery: treatment of sarcomas and allied diseases. Kluwer, Netherlands
4. Sutherland CJ, Bresina (1994) Use of general purpose mechanical CAD in surgery planning. Comput Med Img Graph 18:435-442
5. Kendall SJH, Singer GC, Briggs TWR et al (2000) A functional analysis of massive knee replacement after extra-articular resections of primary bone tumors. J Arthroplasty 15:754-760
6. Viceconti M, Testi D et al (2003) An automated method to position prosthetic components within multiple anatomical spaces. Comput Methods Programs Biomed 70:127-127
7. Hewitt B, Shakespeare D (2001) A straightforward method of assessing the accuracy of implantation of knee prostheses. Knee 8:139-44
8. Subburaj K, Ravi B (2007) High resolution medical models and geometric reasoning starting from CT/MR images. Proc IEEE Int Conf CAD Comput Graph, Beijing, China, 2007, 141-144
9. Subburaj K, Ravi B, Agarwal MG (2008) 3D shape reasoning for identifying anatomical landmarks. Comput Aided Des Appl 5:153-60
Corresponding author:
Author: Dr. Bhallamudi Ravi
Institute: Indian Institute of Technology Bombay
Street: Powai
City: Mumbai
Country: India
Email: [email protected]
Color Medical Image Vector Quantization Coding Using K-Means: Retinal Image

Agung W. Setiawan1, Andriyan B. Suksmono2 and Tati R. Mengko1

1 Biomedical Engineering, School of Electrical Engineering & Informatics ITB, Bandung, Indonesia
2 Telecommunication Engineering, School of Electrical Engineering & Informatics ITB, Bandung, Indonesia
Abstract — Retinal images play an important role in supporting medical diagnosis. Digital retinal images are usually represented in such large data volumes that they take a considerable amount of time to access and display. Digital medical image compression therefore becomes crucial for medical image transfer and storage in an electronic database server. This research concerns the development of a color medical image coding scheme using vector quantization (VQ). K-means is the clustering technique applied to create the codebook in VQ coding. This research investigates the performance of this clustering technique in four color models: RGB, 4:4:4 YUV, 4:2:0 YUV and HSV. The VQ coding scheme is applied separately to the image components in each channel of the color models, and the reconstructed color image is obtained by combining the VQ decoding results of the image components. VQ coding performance relies on the quality of the codebook. The VQ codebook used in this research was developed from a retinal image, processed as a set of four quarter-images. This treatment is required to cope with the large computational load of VQ codebook generation while ensuring that the color and texture diversity of the training set is incorporated in the resulting codebook. The RGB 444 color mode (coding of the red, green, and blue channels with a block size of 4×4) produces the best subjective and objective image coding quality. However, the optimum color models for tele-ophthalmology and electronic medical records are YUV 4:2:0 and RGB 848 for retinal images.

Keywords — Color Medical Image, Retinal Image, Vector Quantization, K-Means.
I. INTRODUCTION
Images are among the medical data needed for diagnosis. Medical images may be grayscale, such as X-ray images, or color, such as retinal images. These images are stored in a medical record database server, and image compression/coding is needed to reduce the required storage capacity: without compression, a single 1600 × 1216 pixel retinal image occupies about 5.701 MB. Clients therefore need a long time to access such files in tele-diagnosis, and the access time also depends on the available network bandwidth. Most rural areas in Indonesia still rely on Public Switched Telephone Networks, which have narrow bandwidth. Tele-diagnosis with asymmetric coding has the advantage that decoding on the client side is computationally simpler than encoding on the server side. Vector quantization coding is used in this research because of its lossy-to-lossless capability when sending a Region of Interest (RoI). The aims of this research are to develop a color medical image coding scheme based on vector quantization, to search for an optimal codeword size for retinal images, and to determine a suitable color model for vector quantization coding.

II. VECTOR QUANTIZATION
One of the most important steps in vector quantization coding is to create a coding book. In this research, the coding
Fig. 1 Vector quantization coding scheme.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 911–914, 2009 www.springerlink.com
book is generated using the K-means algorithm. K-means is an unsupervised learning algorithm that is widely used for data clustering; its procedure is simple and easy to apply. Vector quantization is commonly used for digital data compression, such as digital image and sound compression. For image compression, vector quantization operates on adjacent, correlated image pixels: the neighbors of a pixel P are likely to have values close to that of P. A vector quantizer consists of two parts, an encoder and a decoder. The encoder takes an input vector and outputs the index of the codeword that gives minimum distortion, where distortion is measured by the Euclidean distance between the input vector and every codeword in the codebook. This codeword index is then saved in digital storage or sent over the communication channel. The decoder maps a received codeword index back to the corresponding codeword from the codebook. In vector quantization, image coding is done by dividing an image into small blocks of pixels that fit the codeword dimension, e.g. 2×2 or 4×4. The input image on the encoder side is divided into these blocks, and for each block the codeword with the smallest distance to the input vector is chosen. After the codeword has been found, the encoder outputs its index number, and this process is repeated until every block has its own index number. The decoder reconstructs the image from the index numbers obtained; a look-up table is needed to produce the reconstruction vectors from the index numbers. Image encoding by vector quantization is lossy, since the reconstructed image differs from the input image. These differences occur because each small block of pixels is replaced by the codeword with the smallest distance to it.
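As an illustration of the encoder/decoder pair described above, the following sketch (not the authors' code; the block size and codebook below are toy values) encodes one image channel against a given codebook:

```python
# Illustrative VQ block encoding/decoding. A channel is split into
# non-overlapping blocks; each block is replaced by the index of the
# nearest codeword under squared Euclidean distance.

def split_blocks(channel, bs):
    """Split a 2-D list (H x W) into flattened bs x bs blocks, row-major."""
    h, w = len(channel), len(channel[0])
    blocks = []
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            blocks.append([channel[r + i][c + j]
                           for i in range(bs) for j in range(bs)])
    return blocks

def vq_encode(blocks, codebook):
    """Return, for each block, the index of the closest codeword."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: d2(b, codebook[k]))
            for b in blocks]

def vq_decode(indices, codebook):
    """Look up each index in the codebook (the decoder's look-up table)."""
    return [codebook[k] for k in indices]
```

Only the index stream and the codebook need to be stored or transmitted, which is the source of the compression.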
Fig. 2 Retinal image training set.

III. SYSTEM DESIGN
The codebook is built from four retinal images of 1600 × 1216 pixels, each cropped to a quarter of its size. The four cropped quarters are then combined into a single image of the original size, 1600 × 1216 pixels. This step effectively decreases the computational load of codebook generation. The four images are chosen to vary in pattern, which is important for obtaining a codebook with diverse values that can be used for various types of retinal images. The training images are shown in Figure 2. Four color models are coded: RGB, YUV 4:4:4, YUV 4:2:0 and HSV. All of these color models have three channels. A codebook is made for each image channel before classification by the K-means algorithm, which requires the channels to be extracted first. The block diagram is shown in Figure 3.

Fig. 3 Codebook generation block diagram.
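The per-channel codebook generation can be sketched with a basic K-means (Lloyd iteration); this is an illustrative implementation, not the authors' code, and the training vectors would be flattened pixel blocks from one channel:

```python
# Hedged sketch: K-means codebook generation for one channel.
import random

def kmeans_codebook(vectors, k, iters=20, seed=0):
    """Cluster training vectors into k codewords (the VQ codebook)."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)          # initial codewords
    for _ in range(iters):
        # Assignment step: nearest center for every vector.
        groups = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(v, centers[c])))
            groups[j].append(v)
        # Update step: each codeword becomes the mean of its group.
        for j, g in enumerate(groups):
            if g:
                centers[j] = [sum(col) / len(g) for col in zip(*g)]
    return centers
```

With 4×4 blocks each training vector has 48 components per pixel block in a three-channel model coded channel by channel (16 per channel), which is why the quarter-image trick above matters for the computational load.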
The coding scheme for RGB, YUV 4:4:4, and YUV 4:2:0 is shown in Figure 4. Channel 1 is the R or Y channel, channel 2 is the G or U channel, and channel 3 is the B or V channel. Two input types are used in this coding:
1. The input image (RGB) is split directly into the R, G and B channels.
2. The input image (RGB) is converted to the YUV 4:4:4 or YUV 4:2:0 color model; the converted image is then split into the Y, U and V channel images.
Each of the image channels is coded using the codebook, which yields an index stream for every channel. Decoding uses the same codebook to obtain the reconstructed image.
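The channel preparation for the second input type can be sketched as below. The paper does not state which RGB→YUV matrix is used; the common BT.601 full-range form is assumed here, and 4:2:0 is approximated by keeping every second U/V sample in each direction:

```python
# Hedged sketch of channel preparation (assumed BT.601 full-range matrix).

def rgb_to_yuv(r, g, b):
    """RGB -> YUV for one pixel (values in 0..255), BT.601 full range."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128.0
    return y, u, v

def subsample_420(plane):
    """Keep every second sample in both directions (simple 4:2:0)."""
    return [row[::2] for row in plane[::2]]
```

After conversion each plane is VQ-coded independently; 4:2:0 quarters the number of U and V blocks, which matches the smaller U/V index files reported in Table 1.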
IFMBE Proceedings Vol. 23
easier to perform. Otherwise, if the interval is not [0,255], the computational load becomes excessive, memory runs out, and the codebook cannot be obtained. Figure 5 shows the coding scheme of the HSV model. Before the decoded image is converted back to RGB, it is divided by 255 to restore the original interval [0,1].

IV. EXPERIMENTAL RESULTS
We use 10 RGB retinal images of 1600 × 1216 pixels and test the system to assess the effectiveness of the developed algorithms. The parameter used here is the PSNR (Peak Signal-to-Noise Ratio) of the RGB coded images, for various codeword sizes in each color model. The PSNR formula for a color image with three channels is:
PSNR = 10 log10( 255² / [ (MSE(1) + MSE(2) + MSE(3)) / 3 ] )    (1)
Fig. 4 RGB & YUV image coding scheme.
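Eq. (1) can be computed directly; the helper below is an illustrative sketch for three 8-bit channel planes, not the authors' code:

```python
# Sketch of the three-channel PSNR of Eq. (1). Images are lists of
# equal-size 2-D channel planes with 8-bit values.
import math

def mse(a, b):
    """Mean squared error between two equally sized 2-D planes."""
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2 for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

def psnr_color(orig, recon):
    """orig/recon: [ch1, ch2, ch3] planes; returns PSNR in dB per Eq. (1)."""
    avg_mse = sum(mse(o, r) for o, r in zip(orig, recon)) / 3.0
    return 10.0 * math.log10(255.0 ** 2 / avg_mse)
```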
The time to produce a codebook of retinal images with 4×4 codeword dimensions is also used as a parameter in this research. The codebook is built by a classification method, namely K-means. The average PSNR value for retinal images with K-means lies in the interval 36.75–42.26 dB, as shown in Figure 6.
Fig. 6 The average PSNR value (dB) for each color model (RGB, YUV 4:4:4, YUV 4:2:0, HSV) and codeword combination (444, 448, 484, 488, 844, 848, 884, 888).

Fig. 5 HSV image coding scheme.

For the HSV model, the image converted from RGB has the interval [0,1], so it must be multiplied by 255 to obtain an image with the interval [0,255]. Therefore, the codebook compilation, encoding process and decoding process will be
Table 1 gives the sizes of the codebook files and retinal image index files for codeword dimensions 4×4 and 8×8. It shows that all channels of RGB, YUV 4:4:4 and HSV have the same codebook and index file sizes. The codebook file size of YUV 4:2:0 is the same as for the other channels, but its index file size differs from the other color models. In retinal images, the index file size in
channel Y is the same as in the other color models, but the index file size in channels U and V is 31 KB for the 4×4 codeword dimension and 9 KB for the 8×8 codeword dimension. An objective (PSNR) and subjective assessment shows that the image quality is almost the same as in the other color models. Since the index files of channels U and V are small, retinal images are easier to transfer in a tele-ophthalmology system; this advantage also reduces the capacity required to save the files on an electronic medical record server. Based on the experimental results, an RGB retinal image with the 848 combination (red channel coded in 8×8, green channel in 4×4, and blue channel in 8×8) has, objectively and subjectively, good quality, and the coded image remains readable. This combination reduces the size of the codebook and index that must be sent or saved.

Table 1 Size of codebook file and retinal image index files

Color Model   Channel   Codeword 4×4 (KB)     Codeword 8×8 (KB)
                        Codebook   Index      Codebook   Index
RGB           R         5          120        17         31
              G         5          120        17         31
              B         5          120        17         31
YUV 4:4:4     Y         5          120        17         31
              U         5          120        17         31
              V         5          120        17         31
YUV 4:2:0     Y         5          120        17         31
              U         5          31         17         9
              V         5          31         17         9
HSV           H         5          120        17         31
              S         5          120        17         31
              V         5          120        17         31

V. CONCLUSIONS
According to the research results, it can be concluded that:
1. The RGB color model with the 444 combination (red, green and blue channels all coded in 4×4), subjectively and objectively (PSNR value), yields the best quality in retinal and cataract coded images.
2. For tele-ophthalmology applications and electronic medical records, YUV 4:2:0 is the most optimal color model for retinal and cataract images, since it reduces the size of the codebook and index that must be sent or saved.
3. A retinal image with the RGB 848 combination (red channel coded in 8×8, green channel in 4×4 and blue channel in 8×8) has, subjectively and objectively, relatively good quality, and its coded image remains readable. This combination also reduces the size of the codebook and index that must be sent or saved.

REFERENCES
1. A.B. Suksmono, U. Sastrokusumo and K. Kondo, "Adaptive image coding based on vector quantization using SOFM-NN algorithm", Proc. of IEEE-APCCAS '98.
2. A.B. Suksmono and U. Sastrokusumo, "A Client-Server Architecture of a Lossy-to-Lossless VQ-Based Medical Image Coding System for Mobile Telediagnosis: A preliminary design and result", Proc. of APT Workshop-MCMT 2002, Jakarta, Indonesia.
3. A.W. Setiawan, Design and Implementation of Java™ Distributed Vector Quantization and Huffman Image Coding for X-Ray Medical Image Transfer, Final Project, Department of Electrical Engineering ITB, 2005.
4. A.D. Setiawan, A.B. Suksmono, and B. Dabarsyah, Scalable Radiology Image Transfer and Compression Using Fuzzy Vector Quantization, J. of eHealth Tech. & Applications, 2007.
5. A.B. Suksmono, T.L.R. Mengko, U. Sastrokusumo, A.D. Setiawan, A.W. Setiawan, R.N. Rohmah, N.S. Surbakti, P. Rahmiati, D. Danudirdjo, A. Handayani, J.T. Pramudito, Development of Asymmetric- and Distributed-Image Coding for Telemedicine Applications, J. of eHealth Tech. & Applications, 2007.
6. A.B. Suksmono, T.L.R. Mengko, R.N. Rohmah, D. Secapawati, J.T. Pramudito, U. Sastrokusumo, Lossy-to-Lossless Client-Server Medical Image Coding System: Web-based Implementation by Using Java RMI, 2nd APT Telemedicine 2004, New Delhi, India.
7. Hazem Al-Otum, Walid Shahab and Mamoon Smadi, Color Image Compression Using a Modified Angular Vector Quantization Algorithm, Journal of Electrical Engineering, Vol. 57, No. 4, 2006.
8. Maher A. Sid-Ahmed, Image Processing: Theory, Algorithms, & Architectures, McGraw-Hill, Singapore, 1995.
Development of the ECG Detector by Easy Contact for Helping Efficient Rescue Operation
Takahiro Asaoka and Kazushige Magatani
Department of Electrical and Electronic Engineering, Tokai University, Japan

Abstract — When a large-scale disaster occurs, much damage is expected. In recent years, various rescue robots have been developed, mainly as secondary anti-disaster measures, and these robots are expected to realize quick rescue operations. To save the lives of more victims, it is important that rescuers sort victims by triage, which makes effective rescue operations possible. However, only a person well trained in triage can make a correct judgment; it is difficult for others. If rescuers gather the bio-signals of a victim when they find or save the victim, they obtain a clear criterion, so many rescuers can evaluate the condition of the victim correctly using those bio-signals. In this study, we therefore developed a device that detects an electrocardiogram by easy contact. The electrocardiogram is one of the important vital signs of the human body; if we can detect an electrocardiogram, we can diagnose changes in a person's condition. At the time of a disaster, the diagnosis of an irregular pulse is especially effective, and it is possible to diagnose an irregular pulse from only one detected electrocardiogram lead. However, measuring an electrocardiogram at the time of a disaster is difficult, because the 12-lead electrocardiogram, which is the most common method, needs ten electrodes. We therefore avoided methods that need many electrodes and aimed to detect the electrocardiogram with as few electrodes as possible, so that it can be measured at the stricken area.
In addition, for the realization of a more effective measurement, we aim at the development of an induction method that can detect an electrocardiogram with electrodes that do not touch the skin directly. Keywords — ECG (Electrocardiogram), Capacitive coupling, Easy contact
I. INTRODUCTION
As the scale of a disaster grows, the resulting widespread damage becomes severe, and the number of rescuers increases together with the number of victims. As a result, rescuers may also be seriously injured or lose their lives in a secondary disaster. To avoid these situations, robots that engage in rescue operations are being researched and developed. With the introduction of various rescue robots, a quick rescue operation can be carried out safely by a small number of people. In addition, rescuers perform triage after rescuing a victim. Triage means choosing the priority of treatment according to the state of the victim, but training is necessary to judge accurately, which makes an appropriate judgment difficult for other people. Therefore, we developed a device that can detect a bio-signal by easy contact. We think that the detection result of this device helps triage. In addition, if rescue robots have this function, rescuers can also decide the priority of helping victims during the rescue operation. It is especially useful for helping victims who have been buried alive: the survival rate of such a victim decreases remarkably after 72 hours. It is therefore desirable to grasp the condition of the victim easily by using this function. Our goal is the development of a system which can be installed on a rescue robot, measure the victim's vital signs and send these data to medical doctors. In this paper, we describe the measuring methods developed for the ECG (electrocardiogram) using capacitive coupling. With our method, a correct ECG of the victim is obtained simply by touching the victim's skin with electrodes.

II. METHODOLOGY
The ECG is a bio-signal, a voltage difference led from two different positions relative to the heart, that indicates the electrical activity of the heart. The ECG serves for the diagnosis and treatment of heart diseases. The heart rate and the presence of arrhythmia, which are helpful in a disaster, are obtained from the ECG; they can be diagnosed from the time relations of each wave of the ECG. Therefore, it is possible to estimate them from only one of the lead configurations of the ECG. Broadly classified, there are two leading methods for the ECG: one is bipolar limb leading, the other is unipolar precordial leading. Each leading method has its merits, but at a disaster area we have to select the leading method according to the situation. Therefore, we developed a system which can choose either leading method depending on the situation.
In general, many electrodes are necessary for measuring a typical ECG, and to avoid artifacts, the resistance between the electrodes and the body surface is kept at a low value. To maintain this condition, conductive paste is used between the electrodes and the body surface. However, it is difficult to measure the ECG using these methods at a disaster area. Therefore, a new ECG measuring method
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 915–918, 2009 www.springerlink.com
which needs only two or three electrodes without conductive paste was developed. Using this method, we can easily detect arrhythmia from the measured ECG. In our method, the ECG is led by capacitive coupling between the body and the electrode. The principle is explained as follows. First, an action potential occurs with the heartbeat. This electric potential forms an electric field, which affects the body surface. This electric field and the electrode form a capacitor, and the capacitive coupling between the body surface and the electrode derives the ECG signal from the body to the electrode. So, we can obtain the ECG without the electrodes directly touching the body surface; for example, even if the electrodes touch the victim's clothes, the victim's ECG can be measured. Fig. 1 shows an equivalent circuit of this principle.
III. DETECTING DEVICE
Fig. 2 The ECG electrodes and amplifier.
Fig. 1 The equivalent circuit of capacitive coupling. Here, Vp is the action potential of the heart, Vout is the output voltage, Cb is the capacitance between the human body and the ECG detector, Cg is the capacitance between the human body and ground, Cc is the capacitance between the ECG detector and ground, and C and R are the capacitance and resistance of the first op-amp stage, respectively. From this figure, the equation for Vout/Vp is as follows.
Fig. 2 shows a picture of the developed ECG measurement system. As shown in this figure, the system uses three electrodes: one for the common and the others for leading the ECG. In our system, a normally detected ECG is measured using three electrodes, and a capacitively coupled ECG is measured using only two electrodes. The material of each electrode is a copper sheet. Because of the high impedance between a capacitive coupling electrode and the body, a voltage follower amplifier is installed on the electrode, and the low-impedance output of the voltage follower is sent to the ECG amplifier. The block diagram of our ECG measurement system is shown in Fig. 3. As shown in this figure, signals from the electrodes are differentially amplified by an instrumentation amplifier, and noise in the amplified ECG is reduced using filters; a band elimination filter (B.E.F.) removes hum noise.
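The hum-removing band-elimination step can be sketched in software. The authors use an analog filter; the digital biquad notch below (standard RBJ audio-EQ cookbook form, 50 Hz mains frequency and sampling rate assumed) is only an illustrative equivalent:

```python
# Hedged sketch: digital notch (band-elimination) filter for mains hum.
import math

def notch_coeffs(f0, fs, q):
    """Biquad notch coefficients (RBJ cookbook form), normalized by a0."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = [1.0 / a0, -2.0 * math.cos(w0) / a0, 1.0 / a0]
    a = [-2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a

def filter_signal(x, b, a):
    """Direct-form I: y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]."""
    y = []
    for n in range(len(x)):
        xn1 = x[n - 1] if n >= 1 else 0.0
        xn2 = x[n - 2] if n >= 2 else 0.0
        yn1 = y[n - 1] if n >= 1 else 0.0
        yn2 = y[n - 2] if n >= 2 else 0.0
        y.append(b[0] * x[n] + b[1] * xn1 + b[2] * xn2
                 - a[0] * yn1 - a[1] * yn2)
    return y
```

Feeding a pure 50 Hz sine through this filter leaves only a decaying transient, since the filter's zeros sit exactly on the unit circle at the notch frequency.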
Vout/Vp = 1 / [ 1 + 1/(jωCbR) + C/Cb + 1/(jωCgR) + C/Cg + 1/(jωCcR) + C/Cc ]
Fig. 3 Block diagram of ECG amplifier.
IV. EXPERIMENT AND RESULT
(1) Detection by each lead method
The ECG was experimentally measured on three normal subjects using each lead method; the detection method of the ECG is as mentioned earlier. With all leading methods, the ECG could be detected. As an example, detection results for one subject are shown in the following. The ECG waveform for limb leading is shown in Fig. 4, and Fig. 5 shows the ECG for precordial leading. These results were measured using three electrodes: two were used for bipolar leading of the ECG and one was used for the common. A result of the ECG for unipolar leading is also shown in Fig. 6; this result was acquired by capacitive coupling. As shown in these figures, the amplitude for unipolar leading (Fig. 6) is smaller than for bipolar leading (Fig. 5). In addition, as shown in Fig. 5, there are many distortions in the ECG waveform. However, measuring the ECG using two electrodes is easier than using three electrodes for our objective.
Fig. 4 The ECG wave of the limb lead.
(2) ECG detection through the clothes
Fig. 5 The ECG wave of the precordial lead with three electrodes.
Fig. 7 The ECG from the surface of a T-shirt.
Fig. 6 The ECG wave of the precordial lead with two electrodes.
Fig. 8 The ECG from the surface of a T-shirt by capacitive coupling.
The ECG through the clothes was experimentally measured on one subject using precordial leading. Fig. 7 shows the ECG waveform acquired from the surface of a T-shirt using three electrodes. Fig. 8 shows the ECG waveform acquired from the surface of a T-shirt using two electrodes; this result was acquired by capacitive coupling. These results were obtained with a higher amplitude than the previous experimental results. From these results (especially Fig. 8), a lot of noise is generated in ECG detection through the clothes, so we think that it is hard to stabilize the ECG pattern using the developed amplifier. We are therefore considering the optimal amplification and filtering of the ECG amplifier, and we think that a more accurate filter is necessary to suppress the noise.

V. CONCLUSION
In this paper, we described the device developed for measuring the vital signs of persons suffering in a disaster. An ECG measuring system using capacitive-coupling
type electrodes was developed. This system can detect the ECG by easy contact through textiles. However, the output of this detecting method is unstable and includes a lot of noise. We have therefore concluded that, if these problems are improved, our developed system will be useful for rescuing sufferers.
REFERENCES
1. Takahiro Asaoka, Yoshiaki Kanaeda and Kazushige Magatani, "Development of the device to detect human's bio-signals by easy sensing", 30th Annual International Conference of the IEEE EMBS (2008)
Author: Takahiro Asaoka
Institute: Tokai University
Street: 1117 Kitakaname
City: Hiratsuka, Kanagawa
Country: Japan
Email: [email protected]
A Navigation System for the Visually Impaired Using Colored Guide Line and RFID Tags
Tatsuya Seto1, Yuriko Shiidu1, Kenji Yanashima2 and Kazushige Magatani1
1 Department of Electrical and Electronic Engineering, Tokai University, Hiratsuka, Japan
2 National Rehabilitation Center for the Disabled, Hiratsuka, Japan
Abstract — There are approximately 300,000 visually impaired persons in Japan. Navigation systems using GPS and point blocks (tactile paving) are famous examples of systems that support independent activities of the visually impaired. However, these support systems are usually useless in indoor spaces (e.g. underground shopping malls, hospitals, etc.) and cost a lot of money to deploy widely. Most of the visually impaired in Japan are elderly and probably cannot use complex support systems. Therefore, our objective is the development of a simple and inexpensive navigation system for the visually impaired which can be used in indoor spaces. Our developed instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system can follow a colored guideline that is set on the floor: a color sensor installed on the tip of the white cane senses the colored guideline, and the system informs the visually impaired user by vibration that he/she is walking along the guideline. The color recognition system is controlled by a one-chip microprocessor and can discriminate 6 colored guidelines. RFID tags and a receiver for these tags are used in the map information system. The RFID tags are set on the colored guideline; an antenna for the RFID tags and the RFID tag receiver are also installed on the white cane. The receiver reads the tag information and announces map information to the user by mp3-formatted pre-recorded voice. Three normal subjects who were blindfolded with an eye mask tested this system. All of them were able to walk along the guideline, and the performance of the map information system was good. Therefore, our system will be extremely valuable in supporting the activities of the visually impaired. Keywords — white cane, RFID tags, colored guideline, color sensor, visually impaired.
I. INTRODUCTION
A white cane is a typical supporting device for the visually impaired, who use it while walking to detect obstacles around them. In a known area, they can walk independently using a white cane. However, they cannot walk without the help of others in an unknown area, even with a white cane, because a white cane is a device for detecting obstacles, not a navigation device that gives them a route to the destination. Therefore, a navi-
Fig. 1 Example of colored navigation lines
gation system that supports independent activities of the visually impaired is required. Many navigation systems for the visually impaired are being developed; for example, a navigation system using GPS that supports the independent walking of the visually impaired. However, most of them are for outdoor spaces, not indoor. The objective of this study is the development of a navigation system which can be used in indoor spaces and supports independent activities of the visually impaired without the help of others. In Japan, a navigation line system is used for sighted persons. This system is composed of colored tapes set along the walking route; these colored lines are called colored navigation lines, and each color is assigned to a destination. If we walk along one of these navigation lines, we can easily arrive at the destination that corresponds to the color of the line. Fig. 1 shows an example of the navigation line system.

II. METHODOLOGY
A. Conception
Fig. 2 shows the conception of our system. The system is composed of colored navigation lines, RFID tags and an intelligent white cane. A navigation line is set on the floor along the walking route to the destination. If there are many
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 919–922, 2009 www.springerlink.com
Fig. 2 A conception of the navigation system.

destinations, a different color is assigned to each route. At each landmark point of the walking route, an RFID tag that indicates an area code is set on the navigation line. The intelligent white cane includes an RGB color sensor, a transceiver for RFID tags, a vibrator and a voice processor; these devices are controlled by a one-chip microprocessor. The color sensor installed on the tip of the white cane senses the color of the navigation line. A visually impaired user swings the white cane from left to right or right to left in order to find the target navigation line. If the sensor catches the target color, the white cane informs the user by vibration that he/she is walking along the correct navigation line. The white cane also communicates with an RFID tag at each landmark point of the walking route. If the white cane finds an RFID tag, the voice processor announces the area information corresponding to the received area code by pre-recorded voice. Therefore, a user of this system can obtain area information and reach the destination just by walking along the selected navigation line with the intelligent white cane.

B. Color sensing system
A block diagram of the colored line sensing system is shown in Fig. 3. In this system, an RGB color sensor installed on the tip of the white cane senses the floor color. The RGB outputs of this sensor are amplified and noise-reduced by a low-pass filter. These signals are then analog-to-digital converted with 8-bit resolution and analyzed by a CPU (one-chip microprocessor). The intensities of the RGB signals change with the circumferential brightness even if the sensed color is the same; however, the ratio of the RGB signals does not change when sensing the same color. In our system, the digitized RGB signals are transformed to the Yxy notation. A point of one color in the x-y coordinates of the Yxy notation is located at the same point under any condition, so the system evaluates a line color correctly under any condition by using the Yxy notation. The x-y coordinates of the Yxy notation are shown in Fig. 4. If a sensed color is the target color, the CPU turns on a vibrator to notify the user that he/she is on the right route. This system can discriminate 6 or more colors of navigation line.

C. RFID tag system
Fig. 4 xy-Chromaticity diagram
Fig. 3 A block diagram of the color detecting system
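Returning to the color-sensing step of Section B, the RGB→xy chromaticity comparison can be sketched as below. The cane's exact transform is not given; the standard sRGB→XYZ matrix (without gamma handling) and a simple nearest-reference rule are assumed here:

```python
# Hedged sketch of chromaticity-based line-color classification.
# (x, y) chromaticity is insensitive to overall brightness, which is the
# property the paper relies on.

def rgb_to_xy(r, g, b):
    """8-bit RGB -> CIE xy chromaticity (linear-RGB assumption)."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    x_ = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y_ = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z_ = 0.0193 * r + 0.1192 * g + 0.9505 * b
    s = x_ + y_ + z_
    return (x_ / s, y_ / s) if s else (0.0, 0.0)

def classify(rgb, references, tol=0.05):
    """Return the reference line color nearest in xy, or None if too far."""
    x, y = rgb_to_xy(*rgb)
    name, (rx, ry) = min(references.items(),
                         key=lambda kv: (kv[1][0] - x) ** 2 + (kv[1][1] - y) ** 2)
    d = ((rx - x) ** 2 + (ry - y) ** 2) ** 0.5
    return name if d <= tol else None
```

A dimmed red such as (128, 0, 0) still classifies as "red", since uniformly scaling the RGB values leaves the xy coordinates unchanged in this linear model.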
A block diagram of the RFID tag information system is shown in Fig. 5. On the navigation route there are landmark points where the system has to notify the user of area information; for example, a corner to turn left or right, the entrance to an elevator, and stairs are typical landmark points for the visually impaired. In our previous navigation
system, optical beacons set on the ceiling and a receiver for the beacons were used for this purpose. Experiments confirmed that the performance of the optical beacon system was good. However, an optical beacon continuously consumes electric power to emit the area code as infrared signals, and the user has to carry a receiver for the beacons in addition to the white cane. Therefore, RFID tags are used in our new system. A typical RFID tag does not need its own power source: its power is supplied by the transceiver, which communicates with the tag by radio-frequency waves, and the RFID tag system can be installed in a white cane. These are the benefits of using RFID tags. The dimensions of the RFID tag are 8.5 cm × 5 cm, and its operating frequency is 135 kHz.
distance between an RFID tag and the white cane is about 50 cm. Fig. 6 shows an RFID tag used in our system. A voice processing unit is also necessary for the RFID tag system: at a landmark point, it has to announce area information by voice. In our system, the CPU selects and outputs the pre-recorded phrase that corresponds to the received area code. The pre-recorded phrases are encoded in mp3 format and saved in the memory of the one-chip microprocessor.

III. EXPERIMENT
Three normal subjects who were blindfolded with an eye mask were tested with our developed system. Two experiment routes were set on the floor, and RFID tags were set at the landmark points of these routes. One navigation route was red and the other was blue; the start point of both navigation lines was the same. The destination of the red navigation line was the door to the laboratory, and that of the blue line was the toilet. Fig. 7 shows the schema of the experiment routes. As shown in this figure, a subject had to turn left at the crossing to follow the red route, and turn right to follow the blue route. RFID tags were set on the navigation lines at the crossing and at the destinations. In this experiment, the distance from the start point to the destination was about 15 m on both routes.
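The area-code → phrase selection described above amounts to a table lookup; the sketch below is illustrative only, and the tag codes and file names are made-up placeholders, not values from the actual system:

```python
# Hedged sketch: map a received RFID area code to a pre-recorded phrase.
# Codes and file names are hypothetical examples.

PHRASES = {
    0x01: "crossing_turn_left.mp3",
    0x02: "crossing_turn_right.mp3",
    0x10: "laboratory_door.mp3",
    0x11: "toilet.mp3",
}

def phrase_for(area_code):
    """Select the phrase for a tag's area code; None if the code is unknown."""
    return PHRASES.get(area_code)
```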
Fig. 5 A block diagram of a RFID tag transceiver and a voice processor Both receiving sensitivity and output power of ordinary RFID Micro Reader (transceiver) are too small for our system. So, a preamplifier for the receiver and a power booster for the transmitter are developed and equipped in our system. An antenna for the transceiver is also installed on the tip of a white cane. By using this system, communicable
Fig. 7 The schema of experiment route
Fig. 6 An example of used RFID tag
All subjects could walk along the navigation line correctly, and every colored line was detected continuously and stably. In all cases of this experiment, the intelligent white cane found the RFID tags and notified the subject of the turning information, and all subjects turned right or left correctly and reached the destination. Therefore, we think that our navigation system for the visually impaired worked well. Fig.8 shows one subject during the experiment.
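The microprocessor's phrase selection described earlier (receive an area code from a tag, then play the matching pre-recorded mp3 phrase) can be sketched as follows; the area codes and file names are hypothetical, not the ones used in the actual cane:

```python
# Hypothetical mapping from RFID area codes to pre-recorded mp3 phrases,
# mimicking the one-chip microprocessor's selection logic described above.
PHRASES = {
    0x01: "crossing.mp3",     # turn information at the crossing
    0x02: "laboratory.mp3",   # destination of the red route
    0x03: "toilet.mp3",       # destination of the blue route
}

def phrase_for(area_code):
    """Return the voice phrase for a received area code, or None if the
    code is unknown (in which case no phrase is played)."""
    return PHRASES.get(area_code)
```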
IFMBE Proceedings Vol. 23
Tatsuya Seto, Yuriko Shiidu, Kenji Yanashima and Kazushige Magatani
obtain the area information and reach the destination simply by walking along the selected navigation line with the intelligent white cane. We think that our method of navigating a visually impaired person using navigation lines worked well. However, the previous system was too heavy and bulky, so we developed a new system in which these problems were resolved. Therefore, we conclude that our navigation system will be a valuable one for the visually impaired.
REFERENCES
Fig. 8 A subject during the experiment

IV. CONCLUSION

We have described our newly developed navigation system for the visually impaired. A user of this system can
1. Y. Shiizu, K. Magatani et al., "Development of an intelligent white cane which navigates the visually impaired," Proceedings of the 29th Annual International Conference of the IEEE EMBS (2007)
Author: Tatsuya Seto
Institute: Tokai University
Street: 1117 Kitakaname
City: Hiratsuka
Country: Japan
Email: [email protected]
A Development of the Equipment Control System Using SEMG

Noboru Takizawa1, Yusuke Wakita1, Kentaro Nagata2, Kazushige Magatani1

1 Department of Electrical and Electronic Engineering, Tokai University, Japan
2 Kanagawa Rehabilitation Center, Japan
Abstract — SEMG (surface electromyogram) is one of the bio-electric signals generated by muscles. There are many kinds of muscles in the human body, and different muscles are used for each body movement. Therefore, body motions can be detected by analyzing the generated SEMG patterns. The objectives of this study are to develop a method that enables reliable detection of hand motions using SEMG patterns, and to apply this method to a man-machine interface. In this paper, four channels of SEMG measured from the subject's right forearm were used to detect right hand motions, and a radio controlled vehicle was controlled according to the detected hand motions. In our system, suitable electrode positions are selected for each subject by analyzing 48-channel SEMG patterns. The measured SEMG signals are amplified, filtered and analog-to-digital converted, and the digitized SEMG data are analyzed in a personal computer. Hand motion detection is performed by canonical discriminant analysis. In this study, five basic hand movements (wrist flexion, wrist extension, grasp, pronation, supination) were detected and used to control the vehicle. Three subjects were studied with our system. They were male and had not used our system before the experiment. After a few minutes of training, they tried to control the radio controlled vehicle. Despite it being their first operation, they could control the vehicle almost perfectly, and the average recognition rate of hand movements was more than 90% for all subjects. From these experimental results, we conclude that our new man-machine interface worked well and will be a valuable interface in the future.

Keywords — EMG, Movement identification, Machinery control.
I. INTRODUCTION

SEMG (surface EMG) is one of the bio-electric signals generated by muscles. There are many kinds of muscles in the human body, and different muscles are used for each body movement. Therefore, body motions can be detected by analyzing the generated SEMG patterns. The objectives of this study are to develop a method that enables reliable detection of hand motions using SEMG patterns, and to apply the developed method to a man-machine interface. In this study, our target machine controlled by SEMG is a commercially available radio controlled toy vehicle. This vehicle and a remodeled radio controller are shown in Fig.1. A user of this vehicle can control five actions (turn left, turn right, go forward, go back, firing) using the radio controller. In this study, these five actions are controlled by the subject's hand movements.

Fig. 1 The vehicle and a remodeled radio controller

II. METHODOLOGY

In our system, we use four channels of SEMG measured from the right forearm to control the radio controlled vehicle. For this purpose, it is very important to select suitable electrode positions. So, we developed an SEMG amplifier and a method that makes reliable hand motion detection possible. Fig.2 shows a block diagram of
Fig. 2 A block diagram of one channel SEMG amplifier
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 923–926, 2009 www.springerlink.com
one channel of the SEMG amplifier. As shown in this figure, the SEMG measured from the right forearm using an Ag electrode is amplified, noise-reduced by a low pass filter and analog-to-digital converted. The digitized data are then input to a personal computer and analyzed. In order to select suitable electrode positions, 48 channels of forearm SEMG are measured for each hand motion. A multi-channel electrode that includes 48 Ag electrodes was developed for this purpose. Fig.3 shows the multi-channel electrode. As shown in Fig.3, the 48 Ag electrodes are set on a silicone rubber sheet. This electrode is wrapped around the forearm, and 48 channels of SEMG are measured from the Ag electrodes.

Fig. 3 A multi-channel electrode

Fig.4 shows a 96-channel SEMG amplifier that includes low pass filters. We can measure the SEMG of two forearms at the same time using this amplifier. In our system, suitable electrode positions are decided by the Monte Carlo method. As mentioned earlier, 48 channels of forearm SEMG are first measured and digitized for each hand motion. Then, 1000 sets of four-channel electrode positions are generated randomly, and the recognition rate of hand motions is calculated and evaluated by canonical discriminant analysis for each electrode set. Finally, the electrode set that achieves the highest average recognition rate is selected as the suitable electrode set [1]. Fig.5 shows an example of the distribution map of each hand motion in canonical space. As shown in this figure, the clusters of the hand motions are separated from each other in canonical space. After deciding the electrode set, hand motion analysis is done using the selected four electrode positions. Fig.6 shows the four-channel Ag electrode set that was developed for this purpose.

Fig. 5 An example of distribution map
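The Monte Carlo electrode selection described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: `features` (trials x 48 channels) and `labels` are hypothetical names, and a leave-one-out nearest-centroid classifier stands in for the canonical discriminant analysis.

```python
import numpy as np

def _score_subset(features, labels, subset):
    """Leave-one-out nearest-class-centroid recognition rate for one
    candidate electrode subset (a stand-in classifier)."""
    X = features[:, subset]
    classes = np.unique(labels)
    n = len(X)
    correct = 0
    for i in range(n):
        keep = np.arange(n) != i            # leave trial i out
        cents = np.array([X[keep & (labels == c)].mean(axis=0) for c in classes])
        pred = classes[np.argmin(((cents - X[i]) ** 2).sum(axis=1))]
        correct += int(pred == labels[i])
    return correct / n

def select_electrodes(features, labels, n_pick=4, n_candidates=1000, seed=0):
    """Randomly sample candidate electrode subsets (Monte Carlo) and keep
    the one with the highest recognition rate, as described in the text."""
    rng = np.random.default_rng(seed)
    best_score, best_subset = -1.0, None
    for _ in range(n_candidates):
        subset = np.sort(rng.choice(features.shape[1], n_pick, replace=False))
        score = _score_subset(features, labels, subset)
        if score > best_score:
            best_score, best_subset = score, tuple(int(c) for c in subset)
    return best_subset, best_score
```

With 1000 candidate subsets, the subset whose recognition rate is highest is kept, mirroring the selection rule in the text.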
Fig. 4 A SEMG amplifier that includes 96 channels
Fig. 6 Four channels Ag electrode
III. EXPERIMENT AND RESULTS

At first, three normal male adult subjects were studied with our SEMG control system. Each subject put a multi-channel electrode on the right forearm, and 48 channels of SEMG were measured for each hand motion. Suitable four-channel electrode positions were decided by the Monte Carlo method as mentioned earlier. After selecting the electrode positions, the four-channel electrode set (shown in Fig.6) was placed on the correct positions of the right forearm, and a real-time hand motion recognition test was done. Table 1 shows the correspondence between the hand motions and the actions of the radio controlled vehicle. Each subject performed each hand motion 40 times, and the recognition rate of each hand motion was calculated. The results of this experiment are shown in Table 2. As shown in this table, the recognition rates for each hand motion were more than 95% for all subjects. After this experiment, all subjects tried to control the vehicle by SEMG. In this experiment, all subjects could control the vehicle almost perfectly.

Table 1 The correspondence between a hand motion and a vehicle action (hand motions: wrist flexion, wrist extension, grasp, pronation, supination; vehicle actions: go forward, go back, firing, turn left, turn right)

Table 2 Recognition rate for each subject

Subject (a)
Movement          Recognition [%]
Wrist flexion     100
Wrist extension   100
Grasp             96.67
Pronation         96.67
Supination        100
Average           98.67

Subject (b)
Movement          Recognition [%]
Wrist flexion     100
Wrist extension   100
Grasp             100
Pronation         100
Supination        95
Average           99

Subject (c)
Movement          Recognition [%]
Wrist flexion     100
Wrist extension   100
Grasp             100
Pronation         96.33
Supination        100
Average           99.33
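The recognition rates in Table 2 follow from comparing the recognized motion with the intended one over 40 trials per motion; a minimal sketch (the trial data below are illustrative, not measured):

```python
# Compute a recognition rate as in Table 2: the fraction of trials in
# which the recognized motion matched the intended motion, in percent.
def recognition_rate(predicted, intended):
    """Percentage of trials in which the recognized motion matched."""
    correct = sum(p == t for p, t in zip(predicted, intended))
    return 100.0 * correct / len(intended)
```

For example, 38 correct recognitions out of 40 trials gives a rate of 95.0%.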
IV. CONCLUSION

In this paper, we described our developed man-machine interface using SEMG. In this system, a radio controlled toy vehicle is controlled using four channels of SEMG. The SEMG generated by hand motions is measured and evaluated, and the performed hand motion is estimated from the measured SEMG. The vehicle's actions correspond to the estimated hand motions. Therefore, in our system a radio controlled vehicle can be controlled by the SEMG of each hand motion. The electrode positions for hand motion recognition are selected using the Monte Carlo method. With this method, we can obtain a suitable set of electrode positions. In our experiments, the recognition rates of hand motions were more than 95% using this method. These results show that our developed method works well. In this paper, we described experimental results for three adult subjects. After those experiments, many children were also tested with our system. Although it was their first experience with our system, most of the children could control the vehicle well. However, our system is still too large to serve as a compact interface, and in real-time control using SEMG there are
some problems with control timing. Therefore, we conclude that if these problems are solved, our developed system will be a valuable man-machine interface using SEMG.
Author: Noboru Takizawa
Institute: Tokai University
Street: 1117 Kitakaname
City: Hiratsuka
Country: Japan
Email: [email protected]

REFERENCES
1. K. Ando, K. Magatani et al., "Development of the input equipment for a computer using surface EMG," Proceedings of the 28th IEEE EMBS Annual International Conference (2006)
The Analysis of a Simultaneous Measured Forearm's EMG and f-MRI

Tsubasa Sasaki1, Kentaro Nagata2, Masato Maeno3 and Kazushige Magatani1

1 Department of Electrical and Electronic Engineering, Tokai University, Japan
2 National Rehabilitation Center, Japan
Abstract — Generally, voluntary muscles can be moved freely if we consciously want to move them, and EMG is generated in a muscle according to its contraction. If we want to control some equipment, such as an artificial hand, by using EMG, a beginner cannot control it without concentrating on the motion of the equipment. However, a person who has become practiced in operating the equipment through training can control it without such concentration. In other words, a skilled person has acquired abilities through training, and these abilities will change brain functions. The objective of this study is to clarify, using f-MRI, how brain function changes as a result of training. In our study, 6 channels of EMG generated from the right forearm are measured, and basic hand motions are recognized from them. In our experiment, a subject in the MRI gantry moves his/her right hand, and the generated 6 channels of EMG are amplified, filtered and digitized. The digitized EMG data are analyzed, and the recognition result is presented to the subject as an animation of the recognized hand motion by an LCD projector. The subject trains himself/herself in order to improve the recognition rate, while f-MRI is measured continuously during training. In this paper, we describe our measurement system and the results of the experiment.

Keywords — fMRI, EMG, artificial hand, training.
I. INTRODUCTION

Generally, voluntary muscles can be moved freely if we consciously want to move them, and EMG is generated in a muscle according to its contraction. Therefore, body motions can be estimated by analyzing the generated EMG patterns, and the estimation results can be used as a control source for equipment. If we want to control some equipment, such as an artificial hand, by using EMG, a beginner cannot control it without concentrating on the motion of the equipment. However, a person who has become practiced in operating the equipment through training can control it well without such concentration. In other words, a skilled person has acquired abilities through training, and these abilities will change brain functions. The objective of this study is to clarify, using f-MRI, how brain function changes as a result of training. In our study, 6 channels of EMG generated from the right forearm are measured, and basic hand motions (wrist flexion, wrist extension, grasp and release) are recognized from them. In our experiment, a subject in the MRI gantry moves his/her right hand, and the generated 6 channels of EMG are analyzed in a personal computer. The recognition result is presented to the subject as an animation of the recognized hand motion by an LCD projector, so the subject can check the recognition result immediately. All subjects of this experiment are beginners at EMG recognition and train themselves in order to improve the recognition rate. f-MRI is measured continuously during training, and the changes of brain function resulting from training are then analyzed using these f-MRI data.
II. EXPERIMENT METHOD

Fig.1 shows a simplified block diagram of our measurement system. As shown in Fig.1, a Magnetom Vision 1.5 T (Siemens) is used for the f-MRI. 6 channels of EMG are measured from the right forearm of a subject in the MRI gantry. The electrode wires are shielded and extended to the EMG amplifiers, which are set outside the MRI room. An LCD projector in the MRI room projects the result of hand motion recognition onto a permeable screen set in front of the MRI gantry. The subject can see the recognition result as an animation of the hand motion through a mirror set in front of the subject's eyes.
Fig. 1 The Measurement System
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 927–930, 2009 www.springerlink.com
A block diagram of a one-channel EMG amplifier is shown in Fig.2. As shown in this figure, the SEMG is amplified about 4400 times and the bandwidth is limited from 10 Hz to 1 kHz. In addition, the EMG amplifier includes a hum filter which eliminates 50 Hz interference. In the digitization of the EMG, the sampling frequency is 2 kHz and the bit resolution is 16 bits. The digitized EMG data are analyzed in a personal computer. Hand motion detection is done using canonical discriminant analysis. The integrated absolute value of the EMG over 300 ms is used as the EMG feature. In our system, suitable electrode positions are estimated by the Monte Carlo method [1], [2]. At first, 48-channel EMG patterns of a subject are recorded for each hand motion. Next, sets of 6 electrode positions are generated randomly 1000 times. From the results of these trials, the most suitable electrode positions are decided. Using this Monte Carlo method, even if we use only 4 channels of EMG, the recognition rate of 8 basic hand motions (wrist flexion, wrist extension, grasp, release, radial deviation, ulnar deviation, pronation, and supination) is more than 96%. A 48-channel multi-electrode is shown in Fig.3(a), and a 48-channel EMG amplifier used for detecting suitable electrode positions is shown in Fig.3(b).
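The feature extraction described above (the integrated absolute value of the EMG over 300 ms windows at a 2 kHz sampling rate) can be sketched as follows; the function name is ours:

```python
import numpy as np

def iemg_features(emg, fs=2000, window_s=0.3):
    """Integrated absolute value of the EMG over consecutive 300 ms
    windows (rectangle-rule approximation of the time integral)."""
    n = int(fs * window_s)                        # 600 samples per window
    n_win = len(emg) // n                         # drop any partial window
    windows = np.abs(np.asarray(emg[:n_win * n], dtype=float)).reshape(n_win, n)
    return windows.sum(axis=1) / fs
```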
Fig. 2 Machine constitution

(a) A 48 channels multi-electrode
(b) A 48 channels EMG amplifier

Fig. 3 A suitable position of electrodes detecting system

The structure of the surface 6-channel multi-electrode is shown in Fig.4(a). This multi-channel electrode is composed of 6 silver electrodes, each 1 mm in diameter. To fit the bodyline of the forearm, flexible silicone rubber is used as the base of the 6 silver electrodes; the thickness of this silicone rubber is 2 mm. The positions of the silver electrodes are set according to the results of the Monte Carlo method for each subject, as mentioned earlier. The 6-channel EMG amplifier is shown in Fig.4(b). A 6-channel active low pass filter and a regulated power supply are shown in Fig.4(c) and (d). In our system, ordinary Ag-AgCl electrodes are sometimes used; in this case, the electrode positions are decided from the anatomical positions of the muscles. Fig.4(e) shows the shielded Ag-AgCl electrodes. Fig.5 shows the multi-electrode attached to the subject's forearm.

It is well known that MRI generates much electrical noise. At first, the electromagnetic noise from the MRI was measured and analyzed. In this experiment, the electrodes were
not attached to the subject's forearm; each electrode was terminated with 10 kΩ, and an MRI scan was run. The noise generated during the MRI measurement was recorded and analyzed. An example of the power spectrum of the recorded noise is shown in Fig.6. As shown in this figure, 3 power peaks are observed, and the frequencies of all the peaks are above 1 kHz. Therefore, noise-reduced EMG can be obtained by using a low pass filter whose cutoff frequency is set below 1 kHz. Because of this result, the maximum frequency of the amplified EMG signal in our system is limited to below 1 kHz, as mentioned earlier. Next, two beginners at EMG recognition were tested with our system. The multi-channel EMG patterns of both subjects were measured, and suitable electrode positions were decided before the experiment. In this experiment, a subject in the MRI gantry moved their hand. Because the motion of the electrodes and their wires in accordance with
(a) 6 channels silver electrodes
(b) 6channels EMG amplifier
Fig. 6 Power spectrum of noise from MRI (horizontal axis: frequency [Hz])

(c) 6 channels active filters
(d) A power supply for the system
hand motions would generate electrical noise, 4 basic hand motions (wrist flexion, wrist extension, grasp and release) which do not cause motion artifacts were selected as the recognized motions. An example of the distribution of the classified basic hand motions in canonical space is shown in Fig.7 (dots of the same color indicate the same motion). As shown in this figure, all clusters are separated from each other, and we can recognize each hand motion. Fig.8 shows an example of an f-MRI result during the EMG recognition test. The subject was a beginner at EMG rec-
(e) Ordinary Ag-AgCl Electrodes
Fig. 4 An EMG measuring system
Fig. 5 The attached 6-channel electrodes

Fig. 7 Distribution result of classified basic motions of hand (canonical space axes x, y, z)
ognition, and it was hard for the subject to control the EMG pattern. Many activated areas are observed in Fig.8. We think that inexperience in EMG recognition is the main cause of this multiple activation of the brain.
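The noise characterization described earlier (recording the terminated-electrode signal during an MRI scan and locating the strongest spectral peaks in order to choose the low-pass cutoff) can be sketched as follows. The sampling rate in the example is a hypothetical value, chosen high enough to expose peaks above 1 kHz:

```python
import numpy as np

def strongest_peak_frequencies(noise, fs, top=3):
    """Frequencies (Hz) of the `top` strongest components in the power
    spectrum of a recorded noise signal, ignoring the DC component."""
    power = np.abs(np.fft.rfft(noise)) ** 2
    power[0] = 0.0                       # ignore DC
    freqs = np.fft.rfftfreq(len(noise), d=1.0 / fs)
    return freqs[np.argsort(power)[::-1][:top]]
```

If, as in Fig.6, all returned peaks lie above 1 kHz, a low-pass cutoff below 1 kHz removes them while preserving the EMG band.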
Fig. 8 An example of an f-MRI result during the EMG recognition test

Fig.9 shows the signal strengths measured during the simultaneous acquisition of f-MRI and EMG. As shown in this figure, we can observe activations in various areas of the brain, including the motor area.
III. CONCLUSIONS

EMG can be used as a control source for equipment; for example, an artificial hand is controllable using EMG. However, a beginner cannot control it without concentration, while a skilled person can. A skilled person has acquired abilities, and these abilities will change brain functions. In order to observe these changes of brain function, we developed a measurement system which can record 6 channels of EMG and f-MRI simultaneously. Using canonical discriminant analysis, 4 basic hand motions were recognized from the forearm EMG while f-MRI was measured simultaneously. From the results of this experiment, we can see that many areas of the brain were activated in a beginner at EMG recognition. We think that inexperience in EMG recognition is the main cause of this multiple activation. In future work, we would like to trace the change of brain activation through the subject's training stages.
Fig. 9 Signal strength during simultaneous measurement of f-MRI and EMG (signal strength vs. time [sec]; traces: four ROIs and their average)

REFERENCES
1. Kentaro Nagata, Masafumi Yamada and Kazushige Magatani, "Recognition method for forearm movement based on multi-channel EMG using Monte Carlo method for channel selection," IFMBE Proceedings Vol. 12 (2005)
2. Kentaro Nagata, Keiichi Ando, Shinji Nakano, Hideaki Nakajima, Masafumi Yamada and Kazushige Magatani, "Development of the human interface equipment based on surface EMG employing channel selection method," Proceedings of the 28th IEEE EMBS Annual International Conference, New York City, USA, Aug 30-Sept 3, 2006
Author: Tsubasa Sasaki
Institute: Tokai University
Street: 1117 Kitakaname
City: Hiratsuka
Country: Japan
Email: [email protected]
Development of A Device to Detect SPO2 which is Installed on a Rescue Robot

Yoshiaki Kanaeda1, Takahiro Asaoka2 and Kazushige Magatani3

1 Yoshiaki Kanaeda, Tokai Univ., Japan
2 Takahiro Asaoka, Tokai Univ., Japan
3 Kazushige Magatani, Tokai Univ., Japan

Abstract — When a disaster occurs, many people may be injured, and rescuers themselves may be caught in a secondary disaster while trying to rescue them. Therefore, there is much research on rescue robots for disasters. These robots can find sufferers and rescue them; however, they cannot evaluate a sufferer's condition. If a sufferer's vital signs could be measured at the disaster area, the measured data would be useful for triage. So, we are investigating a measurement system which is installed on a rescue robot and can easily measure vital signs at the disaster area. SPO2 (arterial blood oxygen saturation) is one of the important vital signs of a human, and a sufferer's state can be evaluated using it. For example, the normal level of SPO2 is more than 96%, and an SPO2 under 90% means that the sufferer is in a critical condition. SPO2 can easily be measured by pulse oximetry. Therefore, we are developing an easy method of measuring a sufferer's SPO2 at the disaster area based on pulse oximetry. Oxy- and de-oxyhemoglobin change the color of red blood cells: infrared rays are absorbed by de-oxyhemoglobin and red rays are absorbed by oxyhemoglobin. Therefore, SPO2 can be measured if the optical characteristics of the blood can be measured. In our method, the pulse signal is obtained from the reflection of light applied to the skin; red rays and infrared rays are used to obtain two kinds of pulse waves, and SPO2 is then calculated from the amplitudes of these two pulses. We developed and assessed an SPO2 measurement system which can measure SPO2 with a simple touch to the skin. Four subjects were studied with our system. In all cases, our developed system was able to measure SPO2 correctly.
Keywords — SPO2, Easy touching, Vital sign, Hemoglobin, Pulse oximetry.

I. INTRODUCTION
A disaster like a big earthquake usually destroys many buildings, and many sufferers are generated in the disaster area. There is also the possibility that a rescuer receives a serious injury or loses his/her life in a secondary disaster. In order to avoid these situations, robots that engage in rescue operations are being researched and developed. By employing various types of rescue robots, rescuers can search for a sufferer under debris, remove obstacles, make a route to him/her, and rescue him/her. However, the robots cannot measure the sufferer's condition. We think that if they could measure the sufferer's vital signs (for example, body temperature, pulse, and so on) at the disaster area, it would be possible to evaluate the sufferer's condition and to realize an effective rescue capability. So, we developed a device which detects bio-signals. This device helps triage by detecting a bio-signal after a victim is rescued. In addition, if this device is installed on a rescue robot, the rescuer can decide the priority of helping each victim from the bio-signals sent by this device during the rescue operation. Therefore, our goal is the development of a system which can be installed on a rescue robot, measure a sufferer's vital signs, and send these data to medical doctors. In this paper, we describe the developed measuring method for SPO2 using pulse oximetry. In our method, a correct SPO2 is obtained by simply touching the sufferer's skin with a sensor.

II. METHODOLOGY

A. SPO2 Detection

SPO2 is an index that shows the ratio of oxygen in the body; it expresses the ratio of oxygenated hemoglobin to all hemoglobin in percent. SPO2 can be calculated by using two types of pulse waves, measured with infrared rays and red rays respectively. Most general-purpose pulse oximeters are of the transmission type, but it is difficult to measure SPO2 by the transmission method at disaster areas. So, we developed an SPO2 measurement system using reflection-type pulse detection.

B. Pulse Detection

The pulse is one of the most important vital signs of a human, and it is generated continuously throughout life. The pulse is generated by the blood flow caused by the beats of the heart. In other words, we can observe the pulse as a change of artery diameter resulting from each heartbeat. Hemoglobin in the blood absorbs infrared rays. Therefore, if we irradiate a blood vessel with infrared rays, the absorption rate of the infrared rays will change with the increase and decrease of the blood flow.
The absorption and reflection of infrared rays inside the blood can be used to detect the pulse. It is possible
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 931–934, 2009 www.springerlink.com
to detect the pulse by measuring, with infrared rays, the blood flow of the capillaries distributed in the skin of the human body. In other words, the change of the reflection rate of infrared rays irradiating the skin expresses the pulse. Infrared rays have two useful characteristics here: permeability through materials such as clothing, and absorption by blood. In our method, the pulse is measured using these characteristics. Infrared light from an infrared LED is irradiated onto the skin. Even if there is a layer of clothing between the LED and the skin, the infrared light can permeate the clothing and reach the skin surface. Part of this infrared light is absorbed by or penetrates the vessels, and the rest is reflected. We can observe the pulse as the intensity change of this reflection. Red light is also absorbed by blood; however, infrared light is mainly absorbed by de-oxyhemoglobin and red light is mainly absorbed by oxyhemoglobin. These characteristics are used to calculate SPO2 in our system. So, we developed a unified sensor which has two types of pulse sensor: one uses infrared light (900 nm) and the other uses red light (660 nm). Both sensors consist of an LED and a photo sensor. The unified sensor is touched to the human body; it emits infrared and red light at the same time from each LED, and each photodiode detects the reflected light as an electric current.
Fig.1 shows a view of our system and sensor. The electric current from a photodiode is converted to a voltage and then amplified and filtered for noise reduction. Direct contact between the sensor and the skin is the best condition for detecting a pulse. However, in many cases at a disaster area it is difficult to touch the sensor to the skin directly, and there are the sufferer's clothes between the sensor and the skin. For such cases, in our measurement system, the brightness of the LEDs and the gain of the photo detector can be controlled in order to set their levels optimally. These controls are conducted by a one-chip microprocessor (PIC16F877). In our system, the intensity of the LEDs is changed by using an 8-bit digital-to-analog converter (AD558). This D/A converter changes the output voltage of the LED power supply according to the input of the converter: the voltage varies in the range from 0 to 10 V as the input code changes from 0 to 255. A block diagram of the system is shown in Fig.2.
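The code-to-voltage relation of the 8-bit D/A converter described above (input 0 to 255 mapped onto 0 to 10 V for the LED supply) can be sketched as:

```python
def dac_output_voltage(code, v_full_scale=10.0, bits=8):
    """LED supply voltage for an 8-bit DAC input code (0..255 -> 0..10 V),
    following the AD558-style relation described in the text."""
    if not 0 <= code < 2 ** bits:
        raise ValueError("code out of range for a %d-bit DAC" % bits)
    return v_full_scale * code / (2 ** bits - 1)
```

Raising the code brightens the LED proportionally, which is how the system compensates for clothing between the sensor and the skin.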
Fig. 1 SPO2 detecting system device and the unified sensor

Fig. 3 Two types of measured pulse (pulse wave using infrared ray, and pulse wave using red ray)
Fig.3 shows the two types of measured pulse wave: the pulse obtained using infrared light is shown as the yellow line, and that obtained using red light as the pink line.

C. Calculation of SPO2

It is possible to calculate SPO2 by using these pulse waves. SPO2 expresses the percentage of oxyhemoglobin in the blood. This value is one of the most important vital signs of a human, and it reflects the body condition strongly. The body condition for each SPO2 range is shown in Table 1. It is said that an SPO2 of about 95% or more is the normal level in humans. Some medical treatment will be needed in the condition from 90% to 94% SPO2, and an SPO2 of 89% or less means a critical condition for which correct medical treatment is necessary as soon as possible.
Fig. 2 Block diagram of SPO2 measuring system
Table 1 Symptom relation of SPO2

Oxygen saturation of blood (%)   Condition        Symptom of body
99-95                            No damage        Good health
94-90                            Minor damage     Minor symptom
89-75                            Middle damage    Cyanosis
74-                              Serious damage   Damage to tissue, cellular death
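A small helper implementing the bands of Table 1 might look like this; the category strings are ours:

```python
# Hypothetical triage helper implementing the SPO2 bands of Table 1.
def spo2_condition(spo2):
    """Damage category of Table 1 for an SPO2 reading (%)."""
    if spo2 >= 95:
        return "no damage"
    if spo2 >= 90:
        return "minor damage"
    if spo2 >= 75:
        return "middle damage"
    return "serious damage"
```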
of finger. In addition, a sensor is not always set on a skin directly. So, we tried to measure pulse wave from some body places expect a tip of finger, and tried to measure through clothes. In all cases, we could detect pulse wave using our developed system. An example of measured pulse wave (through clothes) is show in Fig.4.
Hemoglobin is in the red blood cell. And oxy or de-oxy hemoglobin makes the color of red blood cell change. Using this light characteristic of red blood cell, SPO2 can be obtained. In actuality, SPO2 is obtained as a ratio of a magnitude of the pulse for infrared ray and a magnitude of the pulse for red ray. To calculate this value, following formulas were used.
A = log(I_max / (I_max − I_min))
B = log(L_max / (L_max − L_min))
α = A / B
SPO2 = I_a / (I_a + α·L_a) × 100    (1)
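In code, the reconstructed formula (1) reads as follows. This is a minimal sketch: the subscripted variable names and the placement of the constant α are reconstructions from the garbled source, not the authors' implementation.

```python
import math

def spo2(i_max, i_min, i_avg, l_max, l_min, l_avg):
    """Estimate SPO2 (%) from infrared (I) and red (L) pulse amplitudes.

    A and B follow formula (1); alpha = A / B is the light-absorption
    constant. Where alpha enters the final ratio is an assumption, since
    that symbol was lost in the original text.
    """
    a = math.log(i_max / (i_max - i_min))   # infrared term
    b = math.log(l_max / (l_max - l_min))   # red term
    alpha = a / b
    return i_avg / (i_avg + alpha * l_avg) * 100.0

# Example with made-up amplitudes (arbitrary ADC units); when the two
# signals have equal modulation, alpha = 1 and the ratio is 50%.
value = spo2(2.0, 1.0, 1.5, 2.0, 1.0, 1.5)
```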
where I_max is the maximum amplitude of the infrared pulse signal and L_max that of the red pulse signal; I_min and L_min are the corresponding minimum amplitudes. A and B are computed from the maximum and minimum amplitudes of each signal, α is the light absorption constant, and I_a and L_a are the averages of each pulse wave. SPO2 is calculated by formula (1).

D. Experiment of pulse detection
Fig. 5 Pulse wave measured through two pieces of clothing

An example of a pulse wave measured through two pieces of clothing is shown in Fig. 5; it was measured in the same way as in Fig. 4. As these figures show, it is difficult to measure the pulse wave through clothing. However, if the gain of the signal amplifier and the brightness of the LEDs are set optimally, the pulse wave can be obtained through clothing.

E. Experiment of SPO2 detection
Fig. 4 Pulse wave measured through one piece of clothing

Usually, SPO2 is measured from the tip of a finger. However, at a disaster area an SPO2 sensor is not always set on a tip
The SPO2 was experimentally measured using our developed system. Fig. 6 shows the setup of the comparison experiment, in which a commercial pulse oximeter (OLV-3100, Nihon Kohden Ltd.) was used as a reference. The reference device's sensor is a transmission type, whereas our sensor is a reflection type. In this figure, the number indicated by the 7-segment LED is the calculated SPO2 value. In Fig. 6 (Experiment 1), a subject attached the sensor of the pulse oximeter to the forefinger of the right hand and the sensor of our device to the forefinger of the left hand. As shown in Fig. 6, if the brightness of the LEDs is set to a suitable level, the obtained SPO2 value is correct. However, errors sometimes occurred in our system: the error range of the SPO2 measured by our developed system was about -4% to +4%. We think the main cause of this error is crosstalk between the two kinds of light (infrared and red). A new type of unified sensor that can cancel this crosstalk needs to be developed.
Fig. 7 shows the result of acquisition through clothes; it confirms that the pulse wave can be acquired through clothing.

III. CONCLUSIONS

In this paper, an SPO2 measuring system using infrared and red light was developed. The system can detect a pulse through the subject's clothes and on many parts of the body, and the SPO2 value was calculated from the measured pulse. This value contained some error, whose main cause is probably crosstalk between the photo sensors. We therefore conclude that, once these problems are solved, our developed system will be useful for rescuing sufferers. To install it in the rescue robot, the circuit will be miniaturized.
Fig. 6 Experiment 1 and its result

REFERENCES
[1] T. Asaoka, Y. Kanaeda and K. Magatani, "Development of the device to detect human's bio-signals by easy sensing", IEEE EMBS 2008
[2] Y. Saeki, K. Takamura and K. Magatani, "The measurement technique of human's bio-signals", IEEE EMBS 2006
[3] K. Yasuda, K. Takamura, T. Masuda and K. Magatani, "A remote measurement method of the human bio-signals", IEEE EMBS Asian-Pacific Conference on Biomedical Engineering 2003
Author: Yoshiaki Kanaeda
Institute: Tokai University
Street: 1117 Kitakaname
City: Hiratsuka, Kanagawa
Country: Japan
Email: [email protected]

Fig. 7 Experiment 2 and its result
An Estimation Method for Muscular Strength During Recognition of Hand Motion

Takemi Nakano1, Kentaro Nagata2, Masahumi Yamada2 and Kazusige Magatani1

1 Tokai University, Japan
2 Kanagawa Rehabilitation Institute, Japan
Abstract — In this study, we describe an estimation method for muscular strength during recognition of hand motion based on the surface electromyogram (SEMG). Although muscular strength admits various evaluation methods, grasp force is applied here as the index. Today, the SEMG, which is measured from the skin surface, is widely used as a control signal for many devices, because it is one of the most important biological signals in which human motion intention is directly reflected, and various devices using SEMG have been reported by many researchers. We call devices that use SEMG as a control signal SEMG systems. In an SEMG system, achieving highly accurate recognition is an important requirement, so conventional SEMG systems have mainly focused on this objective. Although it is also important to estimate the muscular strength of motions, most of these systems cannot detect muscle power. The ability to estimate muscular strength is a very important factor in controlling SEMG systems. Thus, our objective in this study is to develop an estimation method for muscular strength and to reflect the measured power in the controlled object. Since SEMG is known to be formed by physiological variations in the state of muscle fiber membranes, it can be related to grasp force. We applied the least-squares method to construct a relationship between SEMG and grasp force. In order to construct an effective evaluation model, four SEMG measurement locations, chosen in consideration of individual differences, were decided by the Monte Carlo method. Experimental results with two normal subjects show that the recognition rate of four motions was perfect and that the error rate of the grasp force estimation was less than 5%.

Keywords — SEMG, Grip estimation, Motion recognition
I. INTRODUCTION

Electromyography (EMG) is obtained by measuring the electrical signal associated with the activation of muscle. EMG can be used in many kinds of studies (e.g., clinical, biomedical, basic physiological, and biomechanical). Recently, in order to describe the neuromuscular activation of muscles within functional movements, kinesiological electromyography has attracted attention and become established as an evaluation tool for various applied research. For simple application, the surface electromyogram (SEMG), which
is measured from the skin surface, is widely used as a control source for human interfaces such as myoelectric prosthetic hands. SEMG-related systems have many practical applications in various fields. Such human interfaces are reported by many researchers, and we call them "SEMG interfaces". Our study also aims to develop SEMG interfaces like myoelectric prosthetic hands. An SEMG interface should ideally be operated with the same feeling as real body movements. To achieve this, accurate recognition of motions, which is an essential requirement, and estimation of muscular strength are both important factors. However, conventional SEMG interfaces have mainly focused on achieving recognition accuracy using sophisticated signal-processing techniques, represented by neural network models and nonlinear ones. We think the ability to estimate muscular strength is also an important factor in controlling an interface. Furthermore, considering simplicity and ease of use of an SEMG interface, construction of a simple discriminant model with a small number of measurement electrodes has been one of its main requirements. In this study, our objective is to develop an estimation method for muscular strength while maintaining the accuracy of hand motion recognition, and to reflect the measured power in the controlled object. To achieve this purpose, we directed our attention to measuring a sufficient amount of SEMG information. This is because the disadvantages of SEMGs are their large detection area and therefore greater potential for crosstalk from adjacent muscles. Each subject's SEMG differs greatly depending on the individual's tissue characteristics, physiological crosstalk and so on. Therefore, applying one fixed model to all subjects is a problem.
We think that there is a suitable measurement location for every subject, and that selecting it is more effective for handling individual differences than employing strong discriminant techniques; we use simple linear models together with a method for selecting the optimal electrode configuration so that they can be used effectively. This selection method has been one of our main objectives. As for the number of measurement electrodes, our current work shows that "four electrodes" are satisfactory for our objective. In order to select an optimal measurement electrode configuration, a 96-ch multi-electrode that can measure each individual's SEMG differences is required.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 935–937, 2009 www.springerlink.com
mum grasp force, whereas our equipment directly indicates the change of grasp force. Fig. 2 shows the grasp force measuring equipment and the resistance-to-voltage converter.
II. MATERIALS AND METHODS

A. 96-ch multi-electrode and SEMG measurement system

The multi-electrode is one of the features and the key of our system. It is used to detect individual differences in the measured SEMG while the subject moves his or her hand. The multi-electrode was attached to the forearm, and an optimal electrode configuration was selected from it. Its structure is shown in Fig. 1. To fit the forearm, we use flexible silicone rubber as the base of the 96 silver electrodes. We also designed the SEMG amplifier, which amplifies the SEMG signal about 3,000 times with a frequency band limited to 10 Hz–1,000 Hz. The amplified SEMG signals are sampled by a 16-bit A/D converter at a rate of 2,000 Hz.
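As an illustration of processing the sampled data described above, a windowed SEMG feature computation might look like the following sketch. The 300 ms window matches the recognition interval used in the experiments; the mean-absolute-value (MAV) feature is our assumption, standing in for the paper's "mean value of the selected four SEMG".

```python
import numpy as np

FS = 2000                 # sampling rate [Hz] of the described A/D stage
WIN = int(0.3 * FS)       # 300 ms recognition window -> 600 samples

def mav_features(semg, win=WIN):
    """Mean absolute value (MAV) per channel for each 300 ms window.

    `semg`: (n_samples, n_channels) array of amplified SEMG.
    """
    n = (semg.shape[0] // win) * win          # drop the incomplete tail
    x = np.abs(semg[:n]).reshape(-1, win, semg.shape[1])
    return x.mean(axis=1)                     # (n_windows, n_channels)

sig = np.ones((4000, 4))                      # 2 s of dummy data, 4 channels
feats = mav_features(sig)                     # 6 full windows of 4 channels
```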
Fig. 2 Grasp force measuring equipment for PC input and the resistance-to-voltage converter
C. Grasp force estimation method

It is well known that there is a close relationship between muscular force and the EMG signal. We used subject-specific models to reduce problems due to intra-subject variation; thus, our system applies a personal EMG/force relationship depending on the individual's muscle characteristics. The relationship is written as
y_m = α + β·X_m + ε_m    (1)
Fig. 1 Structure of the 96-channel surface multi-electrode. Beside each electrode, its placement number (1 to 96) is shown
B. Grasp force measurement system
where y_m is the grasp force data, which serves as the response variable, and X_m is the SEMG feature, which serves as the predictor variable; X_m is the mean value of the four selected SEMG channels. The term ε_m is the residual, written as
In order to estimate grasp force from an SEMG signal, it is necessary to determine, for every individual, the relationship between the SEMG characteristic of grasping and the real grasp force. With most measurement equipment for grasp force, the measured value is indicated by a needle moving over a dial gauge, so the acquired grasp data cannot be stored on a personal computer (PC). To analyze the relationship using a PC, we developed a grasp force measurement system that provides real-time measurement and stores the output data on a PC simultaneously with the SEMG. A potentiometer attached to the axis of rotation of the indicator translates the value indicated on the dynamometer into a resistance. The resistance is converted to a voltage using a resistance-to-voltage converter that we made, and the analog output voltage is digitized for PC input. Most dynamometers indicate only the maxi-
ε_m = y_m − ŷ_m    (2)
The SSE (sum of squared residuals) is denoted by

SSE = Σ_{l=1}^{m} ε_l²    (3)
The usual method of estimation for the regression model is ordinary least squares: the coefficients α and β are determined by the condition that the sum of the squared residuals is as small as possible. Applying partial differentiation to equation (3) gives
∂(SSE)/∂α = −2 Σ_{l=1}^{m} (y_l − α − β·X_l) = 0
∂(SSE)/∂β = −2 Σ_{l=1}^{m} X_l·(y_l − α − β·X_l) = 0    (4)
A Estimation Method for Muscular Strength During Recognition of Hand Motion
Finally, the regression coefficients are given by
β̂ = Σ_{l=1}^{m} (X_l − X̄)(y_l − ȳ) / Σ_{l=1}^{m} (X_l − X̄)²
α̂ = ȳ − β̂·X̄    (5)
1. Registration of the SEMG characteristics of the four motions for every subject; this data was used as "predefined data" for both models.
2. Selection of the best configuration of four measurement electrodes.
3. Construction of the estimation model and comparison with real measured grasp data.
Following this procedure, the estimation experiment was performed. Fig. 4 compares the estimated grasp force with the real measured one; the estimated value is a good approximation of the real one.
Fig. 3 Relationship between the SEMG signals and the measured grasp force (SEMG value vs. grasp force [kgf])
operate the system; instead, they performed the motions in their own way according to their individual characteristics. The experiment was set up as follows:
where X̄ is the mean of the X values and ȳ is the mean of the y values. The resulting relationship between the SEMG and the grasp force is shown in Fig. 3.
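The fit of formulas (4) and (5) can be sketched directly; this is a minimal implementation of ordinary least squares for the one-predictor model, with a function name of our own choosing.

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y = alpha + beta*x, per formula (5):
    beta = sum((X_l - Xbar)(y_l - ybar)) / sum((X_l - Xbar)^2),
    alpha = ybar - beta * Xbar.
    """
    m = len(xs)
    x_bar = sum(xs) / m
    y_bar = sum(ys) / m
    beta = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))
    alpha = y_bar - beta * x_bar
    return alpha, beta

# Noise-free check: data generated from y = 2 + 3x recovers the coefficients.
alpha, beta = ols_fit([0, 1, 2, 3], [2, 5, 8, 11])
```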
III. EXPERIMENTAL RESULTS AND DISCUSSION

We tested two normal subjects to evaluate our system. The requested motions were of four types, composed of three motions and one state (Grasp, Release, Pronation, and Rest). The rest state is relaxed with no motion; it is the initial state, and recognition of the three motions starts after it. Recognition was performed every 300 ms, and once a grasp motion is recognized, our system moves to the grasp force estimation mode. While a grasp motion is recognized, the system keeps estimating the grasp force. As for the motions, each subject received training to get used to the control of our system. However, we did not instruct the subjects how to
Fig. 4 Grasp force [kgf] over time [sec]: estimated value and real measured value

IV. CONCLUSION

We have presented an estimation method for muscular strength while maintaining the accuracy of hand motion recognition. Experimental results show that recognition of the four motions was perfect and that the estimated grasp force fits the real measurements well.
The Navigation System for the Visually Impaired Using GPS

Tomoyuki Kanno1, Kenji Yanashima2 and Kazushige Magatani1

1 Department of Electrical Engineering, Tokai University, Japan
2 National Rehabilitation Center for the Disabled, Japan
Abstract — The visually impaired can walk independently in areas they know by using a white cane. In unknown areas, however, they cannot walk without the help of others, even if they use a white cane very well. In such cases, not only the visually impaired person but also the person who assists them experiences much stress in daily activities. Therefore, many types of independent-walking assist systems for the visually impaired are being developed. The objective of this study is the development of an auto navigation system that assists independent walking of the visually impaired. The system measures the position of a visually impaired person (the user) by GPS (Global Positioning System) and navigates him or her to the destination like a car navigation system. Location data are stored in a map database, the position of the user is analyzed, and the optimal route to the destination is calculated. Our system guides the user to the destination along this route and notifies the user of the route and of information for safe walking by artificial voice. Three normal subjects tested our navigation system; all were blindfolded with an eye mask and equipped with the system. As a result, our system worked well, and all subjects were able to walk to the destination following the guidance voice. We are now developing a new navigation system installed on a cellular phone. In Japan, some cellular phones include GPS, and most application programs for cellular phones are written in Java; therefore, the GPS included in a cellular phone will be used and every function of the navigation system will be written in Java. In this paper, we also report on this new navigation system.
is like that of a car navigation system. However, there are many differences from a car navigation system. For example, in a car navigation system the suitable route is the shortest route; in our system the suitable route is not the shortest but the safest route for the visually impaired. Therefore, a safety level for each road is stored in the map database, and this information is used to calculate a suitable route. It is also important to notify the user of turns and landmarks; in our system, this information is announced by pre-recorded voice.
Keywords — Visually impaired, GPS, Navigation
Fig. 1 Navigation system using GPS
I. INTRODUCTION

The objective of this study is the development of an auto navigation system that supports independent walking of the visually impaired. The system calculates a suitable route to the destination and notifies the user of this route and of information for safe walking. Fig. 1 shows the concept of our navigation system. First, the destination is input. The system obtains the user's position by GPS and calculates a suitable route to the destination using the map information in the system database. The system then guides the user along this route; voice guidance is used to notify the user of the route and other information. The action of our system
II. METHODOLOGY

A. GPS

GPS (Global Positioning System) is a well-known positioning system using satellites. The latitude, longitude and altitude of the measured position can easily be obtained every second, and higher-accuracy positioning can be obtained by using DGPS. In our previous system, DGPS was used in order to measure the correct position. Now, however, GPS is available without SA (Selective Availability), and the typical error range of
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 938–941, 2009 www.springerlink.com
GPS is less than 5 m. Therefore, normal GPS is used in our system.

B. Map database

As mentioned earlier, our system has a map database consisting of three layers. The latitude and longitude of each crossing and landmark are stored in the first layer. Link information between crossings and landmarks is stored in the second layer. Safety levels of each road, a road being defined as the way between one crossing and another, are stored in the third layer. In our system, the most likely user position is determined from GPS and this database information using a map matching method [1]. The third layer of the database is used to obtain a suitable route, as described below. An example of visualized map information based on the map database is shown in Fig. 2.
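A minimal stand-in for the first step of map matching is snapping a GPS fix to the nearest stored crossing or landmark; the real method [1] also uses the link layer. The node IDs and coordinates below are invented for illustration.

```python
def snap_to_map(fix, nodes):
    """Return the id of the stored node nearest to the GPS fix.

    `nodes` maps id -> (lat, lon). Comparing squared differences in
    degrees is acceptable over the small area a pedestrian covers.
    """
    def sq_dist(node_id):
        lat, lon = nodes[node_id]
        return (lat - fix[0]) ** 2 + (lon - fix[1]) ** 2
    return min(nodes, key=sq_dist)

# Invented crossing/landmark coordinates:
nodes = {30: (35.3610, 139.2740), 33: (35.3620, 139.2760), 57: (35.3650, 139.2700)}
nearest = snap_to_map((35.3615, 139.2755), nodes)   # node 33 is closest
```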
uct of the physical distance and the safety level. The logical distance of a road in bad condition is therefore longer than its physical distance, and these logical distances are used for the suitable-route calculation. Fig. 3 shows a result of the route calculation: the system calculated the route from point 1 to point 9, and the shortest route is shown in Fig. 3. In this database, however, the road from 34 to 33 was set as dangerous for the visually impaired, so the calculation result is as shown in Fig. 4. This result is not the shortest route, but road 34 to 33 is avoided.
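The safety-weighted route search described above can be sketched as Dijkstra's method over logical distances. This is a minimal sketch with an invented graph; the paper's actual safety-level scale is not specified, so a level of 1 is taken as a normal road and larger values as penalties.

```python
import heapq

def suitable_route(edges, start, goal):
    """Dijkstra over logical distances = physical distance x safety level.

    `edges` maps node -> list of (neighbor, physical_dist, safety_level).
    """
    graph = {n: [(v, d * s) for v, d, s in nbrs] for n, nbrs in edges.items()}
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:                   # walk predecessors back to start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Road A-B is short but dangerous (safety level 10), so the longer
# but safe detour A-C-B is chosen instead.
edges = {"A": [("B", 100, 10), ("C", 80, 1)],
         "C": [("B", 90, 1)],
         "B": []}
route = suitable_route(edges, "A", "B")
```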
Fig. 3 Shortest route
Fig. 2 Digital map

C. Suitable route

Our map database has safety information for each road, as mentioned earlier. Generally, a road has many kinds of conditions affecting safety. For example, even a straight road is not good for the visually impaired if its surface is rough; nor is a road with stairs or heavy traffic. Therefore, safety levels of each road for the visually impaired are defined and stored in the map database. The calculation of a suitable route is based on Dijkstra's method. Ordinarily, in Dijkstra's method, road distances are used and the suitable route is determined as the shortest distance to the destination. In our system, the logical distance of a road is determined as the prod-
Fig. 4 Suitable route
D. Voice notification

As mentioned earlier, a voice processing system is used to notify the user of information about crossings and landmarks. In our system, the processing of voice guidance is separated into two stages. When a user enters a landmark point, the system has to provide the landmark information by pre-recorded voice. To avoid mishearing of this information, the processing is split into two stages: in the first stage, when the user enters the area 7.5 m from the landmark, the system signals entry to the important area with a beep; in the second stage, when the user reaches the area 5 m from the landmark, the voice notification starts. The first beep means "a voice guidance point is coming soon", and after this warning, the voice guidance starts. The concept of this method is shown in Fig. 5.
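The two-stage rule above amounts to a simple threshold function on the user-to-landmark distance; a minimal sketch, with the 7.5 m and 5 m radii from the text:

```python
def guidance_stage(distance_m):
    """Two-stage notification: beep on entering the 7.5 m area,
    pre-recorded voice on entering the 5 m area."""
    if distance_m <= 5.0:
        return "voice"
    if distance_m <= 7.5:
        return "beep"
    return "silent"

# A user approaching a landmark hears: silent -> beep -> voice.
stages = [guidance_stage(d) for d in (10.0, 7.0, 4.0)]
```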
Fig. 6 A result of the experiment

IV. CONCLUSION

In this paper, we described our developed navigation system for the visually impaired. In this system, a suitable route from the start point to the destination is calculated using the user's position obtained by GPS and a digital map database. Area information is announced by artificial voice at each landmark. The developed navigation system was tested on normal subjects: each subject, equipped with our system, walked along the generated suitable route. From these experiments, some problems of our system became clear; however, we think these problems will be solved in the near future, and our navigation system will be very valuable for supporting the activities of the
Fig. 5 Voice guidance area

III. EXPERIMENTS AND RESULTS

Five normal subjects were tested with our new navigation system. All subjects were blindfolded with an eye mask and equipped with the system. The navigation route was set from point 30 to point 57, and all subjects walked following the guidance voice. The green line in Fig. 6 shows the path walked along the calculated suitable route. From this experiment, the following became clear: (1) all subjects could walk to the destination following the guidance voice; (2) there are some areas where GPS did not work, so a system that estimates the position of the user is needed; (3) at some points, the voice processing system did not work because of the accuracy of GPS.
Fig. 7 Location data acquired by a cellular phone
visually impaired. In this study, we also propose a navigation system on a cellular phone. A cellular phone has a high-performance processor, high-density memory and a GPS receiver, which makes it possible to build a smaller navigation system for the visually impaired. We made an application program written in the Java language, tested it, and obtained location data. In the future, we will build the navigation program on the cellular phone.
REFERENCES

1. J. Tanaka, K. Yanashima, K. Magatani, "Development of the guidance system for the visually impaired using GPS", Department of Electrical Engineering, Tokai University, Japan
Author: Tomoyuki Kanno
Institute: Tokai University
Street: 1117 Kitakaname
City: Hiratsuka
Country: Japan
Email: [email protected]
Investigation of Similarities among Human Joints through the Coding System of Human Joint Properties—Part 1

S.C. Chen1, S.T. Hsu2, C.L. Liu2 and C.H. Yu3

1 Department of Physical Medicine and Rehabilitation, Taipei Medical University and Hospital, Taipei, Taiwan
2 Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan
3 Department of Physical Therapy and Assistive Technology, National Yang-Ming University, Taipei, Taiwan

Abstract — Properties of human joints are essential for orthopedic practice, physical therapy, the design of motion assistive devices, etc. Few studies, however, have tried to collect and organize this knowledge, forcing researchers to look up references laboriously. In view of this, Hsu et al. [1] proposed a coding system of human joint properties to make reference consulting more convenient and efficient. This coding system covers eight properties of 41 kinds of human joints: locations of joints, numbers of bones involved in joints, functions of joints, shapes of joint surfaces, arthrokinematics of joints, directions of rotation of joints, directions of translation of joints, and numbers of axes of joints. Our work goes one step further and adds one property, the weight bearing of joints, to the original coding system, so as to make the coding system more suitable for medical use. For the newly added property, simple digits are assigned as codes to symbolize the behavior of the 41 kinds of joints. This new property is then joined to the original coding system, and a new coding system with nine properties is established as a result. In conclusion, this new coding system not only serves as a concise approach to investigating human joint properties but also better meets the needs of medical science. We expect that researchers can make further comprehensive studies with this new coding system. Certainly, to upgrade and improve this new classification system, we need specialists' and experts' feedback.

Keywords — Coding system, weight bearing, property of human joint, human joint.
I. INTRODUCTION

The concern with properties of human joints, which are the essence of orthopedic practice, physical therapy, and the design of motion assistive devices, has been growing recently. Pertinent studies have focused on specific joints such as the shoulder joint [2], elbow joint [3], wrist joint [4], and hip joint [5], among others [6, 7], making the related knowledge well developed and human joint properties well understood. Germane information is plentiful, while works organizing it are rare, so reference consulting has been an arduous task. To solve this problem, Hsu et al. [1] col-
lected and arranged this copious knowledge, and proposed a coding system containing eight properties of the 41 kinds of human joints: locations of joints, numbers of bones involved in joints, functions of joints, shapes of joint surfaces, arthrokinematics of joints, directions of rotation of joints, directions of translation of joints, and numbers of axes of joints. In our work, we add one more property, the weight bearing of joints, an indispensable factor in the field of rehabilitation, to the original coding system, expecting to make the system more favorable for medical use.
II. CODING PROCEDURE FOR WEIGHT BEARING

A. Weight bearing of human joints

Weight bearing of joints can be defined as the capability of tissues to bear, without being injured or damaged, the compressive force imposed by the weight of the body located above them [8]. Joints situated in the lower extremity or in the axial body generally belong to the weight-bearing joints [8]. Given this point, we categorize the 41 kinds of joints considered in [1] into two groups, weight-bearing joints and non-weight-bearing joints: those in the lower extremity or axial body are weight-bearing, while the others are not. This classification is given in Table 1.

Table 1 Classification of joints according to the property of weight bearing

Weight-bearing joints: Cervical spine, lumbar spine, lumbosacral j't, thoracic spine, atlantoaxial j't, atlanto-occipital j't, symphysis pubis j't, intermetatarsal j't, patellofemoral j't, sacroiliac j't, tarsometatarsal j't, proximal tibiofibular j't, distal tibiofibular j't, subtalar j't, interphalangeal j't (foot), tibiofemoral j't, metatarsophalangeal j't, hip j't, transverse tarsal j't, talocrural j't
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 942–945, 2009 www.springerlink.com
Non-weight-bearing joints: Temporomandibular j't, suture j't of the skull, sternocostal j't, intrasternal j't, costotransverse j't, costovertebral j't, 2nd, 3rd carpometacarpal j't, intermetacarpal j't, acromioclavicular j't, humeroulnar j't, interphalangeal j't (hand), proximal radioulnar j't, distal radioulnar j't, metacarpophalangeal j't, 1st, 4th, 5th carpometacarpal j't, sternoclavicular j't, humeroradial j't, glenohumeral j't, scapulocostal j't, radiocarpal j't, midcarpal j't
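The classification in Table 1 turns directly into the paper's 0/1 weight-bearing code. A minimal sketch: the joint set here is only a small excerpt of Table 1, and the function name is ours.

```python
# Excerpt of the weight-bearing column of Table 1 (the full table lists
# 41 joints); membership in this set is what the code symbolizes.
WEIGHT_BEARING = {"cervical spine", "lumbar spine", "hip j't", "talocrural j't"}

def weight_bearing_code(joint):
    """Coding rule: a weight-bearing joint is coded 1, any other joint 0."""
    return 1 if joint.lower() in WEIGHT_BEARING else 0

codes = [weight_bearing_code(j) for j in ("Hip j't", "Temporomandibular j't")]
```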
B. Coding rule of the property of weight bearing

Since we are to code the joints and combine the coding result with those in [1], a coding rule for the property of weight bearing is required. We define the coding rule as follows: if a joint is a weight-bearing joint, it is coded 1; if not, it is coded 0. For instance, the cervical spine in the left column of Table 1 is weight bearing, so its code for this property is 1; on the other hand, the temporomandibular joint in the right column possesses no weight-bearing ability, so its code is 0.

C. Codes of weight bearing of joints

Based on the definition given above, all 41 kinds of joints are then coded. The coding result is given in Table 2, in which the order of the 41 kinds of joints is arranged by their locations: from superior to inferior and from proximal to distal.

Table 2 Codes of weight bearing of joints

Suture j't of the skull 0; Temporomandibular j't 0; Atlanto-occipital j't 1; Atlantoaxial j't 1; Cervical spine 1; Thoracic spine 1; Lumbar spine 1; Lumbosacral j't 1; Costovertebral j't 0; Costotransverse j't 0; Sternocostal j't 0; Intrasternal j't 0; Sternoclavicular j't 0; Acromioclavicular j't 0; Glenohumeral j't 0; Scapulocostal j't 0; Humeroulnar j't 0; Humeroradial j't 0; Proximal radioulnar j't 0; Distal radioulnar j't 0; Radiocarpal j't 0; Midcarpal j't 0; 1st, 4th, 5th carpometacarpal j't 0; 2nd, 3rd carpometacarpal j't 0; Intermetacarpal j't 0; Metacarpophalangeal j't 0; Interphalangeal j't (hand) 0; Sacroiliac j't 1; Hip j't 1; Symphysis pubis j't 1; Tibiofemoral j't 1; Patellofemoral j't 1; Proximal tibiofibular j't 1; Distal tibiofibular j't 1; Talocrural j't 1; Subtalar j't 1; Transverse tarsal j't 1; Tarsometatarsal j't 1; Intermetatarsal j't 1; Metatarsophalangeal j't 1; Interphalangeal j't (foot) 1

III. RESULTS

We now combine the codes in Table 2 with the coding system proposed in [1]. The combination, serving as the coding system of nine properties of human joints, is presented in Table 3.

Table 3 Coding system of nine properties of human joints
(codes per joint, in the order: location; number of bones involved; function; shape of surfaces; arthrokinematics; direction of rotation; direction of translation; number of axes; weight bearing)

Suture j't of the skull: 1 2 1 0 0 0 0 s 0
Temporomandibular j't: 1 1 3 2 7 1 4 1 0
Atlanto-occipital j't: 2 1 3 4 3 4 0 2 1
Atlantoaxial j't: 2 1 3 3 5 5 0 2 1
Cervical spine: 2 1 2 0 5 7 0 3 1
Thoracic spine: 2 1 2 0 5 7 0 3 1
Lumbar spine: 2 1 2 0 5 7 0 3 1
Lumbosacral j't: 2 1 2 0 5 7 0 3 1
Costovertebral j't: 2 2 2 0 5 1 6 1 0
Costotransverse j't: 2 1 2 0 1 2 5 1 0
Sternocostal j't: 2 1 1 0 0 0 0 s 0
Intrasternal j't: 2 1 1 0 0 0 0 s 0
Sternoclavicular j't: 3 1 3 5 5 7 0 3 0
Acromioclavicular j't: 3 1 3 1 3 7 0 3 0
Glenohumeral j't: 3 1 3 6 6 7 0 3 0
Scapulocostal j't: 3 2 3 1 3 2 6 1 0
Humeroulnar j't: 3 1 3 2 3 1 0 1 0
Humeroradial j't: 3 1 3 6 5 5 0 2 0
Proximal radioulnar j't: 3 1 3 3 1 3 0 1 0
Distal radioulnar j't: 3 1 3 3 3 3 0 1 0
Radiocarpal j't: 3 2 3 4 3 4 0 2 0
Midcarpal j't: 3 2 3 5 3 4 0 2 0
1st, 4th, 5th carpometacarpal j't: 3 1 3 5 3 4 0 2 0
2nd, 3rd carpometacarpal j't: 3 1 1 0 0 0 0 s 0
Intermetacarpal j't: 3 1 2 0 3 0 2 0 0
Metacarpophalangeal j't: 3 1 3 4 3 4 0 2 0
Interphalangeal j't (hand): 3 1 3 2 3 1 0 1 0
Sacroiliac j't: 4 1 2 0 3 1 6 1 1
Hip j't: 4 1 3 6 3 7 0 3 1
Symphysis pubis j't: 4 1 1 0 0 0 0 s 1
Tibiofemoral j't: 4 1 3 2 7 5 0 2 1
Patellofemoral j't: 4 1 2 0 3 0 3 0 1
Proximal tibiofibular j't: 4 1 2 0 3 c c 1 1
Distal tibiofibular j't: 4 1 2 0 3 c c 1 1
Talocrural j't: 4 2 3 2 3 1 0 1 1
Subtalar j't: 4 1 3 1 3 7 0 3 1
Transverse tarsal j't: 4 2 2 0 3 7 0 3 1
Tarsometatarsal j't: 4 1 2 0 3 4 0 2 1
Intermetatarsal j't: 4 1 2 0 3 0 3 0 1
Metatarsophalangeal j't: 4 1 3 4 3 5 0 2 1
Interphalangeal j't (foot): 4 1 3 2 3 1 0 1 1
_________________________________________________________________
944
S.C. Chen, S.T. Hsu, C.L. Liu and C.H. Yu
The definitions of the codes of the original eight properties follow those in [1] and are tabulated below.

Table 4 Coding rules of the eight properties of human joints defined in [1]

Location: 1 Skull; 2 Trunk; 3 Upper extremity; 4 Lower extremity
Number of bones involved: 1 Simple joint; 2 Compound joint
Function: 1 Synarthrotic joint; 2 Amphiarthrotic joint; 3 Diarthrotic joint
Shape of surfaces: 0 (Unable to be classified); 1 Plane joint; 2 Hinge joint; 3 Pivot joint; 4 Condyloid joint; 5 Saddle joint; 6 Ball and socket joint
Arthrokinematics: 0 (Unable to be classified); 1 Spin; 2 Roll; 3 Slide; 4 Spin and roll; 5 Spin and slide; 6 Roll and slide; 7 Spin, roll and slide
Direction of rotation: 0 (Unable to be classified); 1 Rotation around a medial-lateral axis; 2 Rotation around an anterior-posterior axis; 3 Rotation around a vertical axis; 4 Combination of 1 and 2; 5 Combination of 1 and 3; 6 Combination of 2 and 3; 7 Combination of 1, 2 and 3; a Dependent rotation around a medial-lateral axis; b Dependent rotation around an anterior-posterior axis; c Dependent rotation around a vertical axis
Direction of translation: 0 (Unable to be classified); 1 Translation along a medial-lateral axis; 2 Translation along an anterior-posterior axis; 3 Translation along a vertical axis; 4 Combination of 1 and 2; 5 Combination of 1 and 3; 6 Combination of 2 and 3; 7 Combination of 1, 2 and 3; a Dependent translation along a medial-lateral axis; b Dependent translation along an anterior-posterior axis; c Dependent translation along a vertical axis
Number of axes: 0 Nonaxial joint; 1 Uniaxial joint; 2 Biaxial joint; 3 Triaxial joint; s (Unable to be classified)
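The role of Table 4 as a "decoder" can be sketched in code. The following is a minimal illustration (our own, not part of the original work) that transcribes the Table 4 rules, plus the weight-bearing rule of Table 2, into dictionaries and decodes a nine-character code; the property order follows Table 3.

```python
# Decode a nine-character joint code using the rules of Table 4 (plus the
# weight-bearing rule of Table 2). Property order follows the Table 3 columns.
PROPERTIES = [
    ("location", {"1": "skull", "2": "trunk",
                  "3": "upper extremity", "4": "lower extremity"}),
    ("number of bones involved", {"1": "simple joint", "2": "compound joint"}),
    ("function", {"1": "synarthrotic joint", "2": "amphiarthrotic joint",
                  "3": "diarthrotic joint"}),
    ("shape of surfaces", {"0": "(unable to be classified)", "1": "plane joint",
                           "2": "hinge joint", "3": "pivot joint",
                           "4": "condyloid joint", "5": "saddle joint",
                           "6": "ball and socket joint"}),
    ("arthrokinematics", {"0": "(unable to be classified)", "1": "spin",
                          "2": "roll", "3": "slide", "4": "spin and roll",
                          "5": "spin and slide", "6": "roll and slide",
                          "7": "spin, roll and slide"}),
    ("direction of rotation", {"0": "(unable to be classified)",
                               "1": "medial-lateral", "2": "anterior-posterior",
                               "3": "vertical",
                               "4": "medial-lateral and anterior-posterior",
                               "5": "medial-lateral and vertical",
                               "6": "anterior-posterior and vertical",
                               "7": "all three axes",
                               "a": "dependent, medial-lateral",
                               "b": "dependent, anterior-posterior",
                               "c": "dependent, vertical"}),
    ("direction of translation", {"0": "(unable to be classified)",
                                  "1": "medial-lateral", "2": "anterior-posterior",
                                  "3": "vertical",
                                  "4": "medial-lateral and anterior-posterior",
                                  "5": "medial-lateral and vertical",
                                  "6": "anterior-posterior and vertical",
                                  "7": "all three axes",
                                  "a": "dependent, medial-lateral",
                                  "b": "dependent, anterior-posterior",
                                  "c": "dependent, vertical"}),
    ("number of axes", {"0": "nonaxial joint", "1": "uniaxial joint",
                        "2": "biaxial joint", "3": "triaxial joint",
                        "s": "(unable to be classified)"}),
    ("weight bearing", {"0": "non weight bearing", "1": "weight bearing"}),
]

def decode(code):
    """Map a nine-character joint code to its nine decoded properties."""
    assert len(code) == len(PROPERTIES)
    return {name: table[ch] for (name, table), ch in zip(PROPERTIES, code)}

# The glenohumeral joint's row in Table 3 reads 3 1 3 6 6 7 0 3 0:
glenohumeral = decode("313667030")
```

For instance, `decode("313667030")["shape of surfaces"]` returns "ball and socket joint", matching the glenohumeral row of Table 3.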
IV. DISCUSSION

A coding system containing nine joint properties is proposed in this work. For each property, all 41 kinds of joints are assigned appropriate codes; that is, each joint is expressed by a series of nine digits indicating its nine properties, allowing researchers to obtain the information they need by simply consulting the table. Once the code of a certain property is read, Table 2 and Table 4 serve as "decoders" to decipher it, yielding knowledge of the required property. The coding system thus presents an efficient approach to acquiring and gathering pertinent material.

A. Applications of the coding system

This advantage further enables researchers to examine several properties simultaneously, since they no longer have to consult different references for each individual need. Instead, all nine properties of these 41 joints can be obtained at one time by referring to this system. A systematic classification of joints, or even an investigation of similarities among joints, is thereby made possible.

Take the property of the number of axes for example. We can classify the joints according to this property simply by sorting their codes with suitable software. The result classifies all the joints into five groups, corresponding to the five codes listed under Number of Axes in Table 4; joints in the same group are similar with respect to this property.

In the same way, similarities among joints for two or more properties can be disclosed. For instance, to find all the diarthrotic joints that are also saddle joints, we focus on the codes of function and shape of surfaces. Searching for joints whose code of function is 3 (diarthrotic joint) and whose code of shape of surfaces is 5 (saddle joint) yields three kinds of joints: the sternoclavicular joint, the midcarpal joint, and the 1st, 4th and 5th carpometacarpal joints. Other similarities among joints for various combinations of properties can be determined through the same method.
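A search of this kind is easy to mechanize. The sketch below is our own illustration; the dictionary is only an excerpt of Table 3, with each joint reduced to its codes of function and shape of surfaces.

```python
# Hypothetical excerpt of Table 3: joint -> (code of function, code of shape).
JOINTS = {
    "sternoclavicular j't": (3, 5),
    "acromioclavicular j't": (3, 1),
    "glenohumeral j't": (3, 6),
    "radiocarpal j't": (3, 4),
    "midcarpal j't": (3, 5),
    "1st, 4th, 5th carpometacarpal j't": (3, 5),
    "2nd, 3rd carpometacarpal j't": (1, 0),
    "hip j't": (3, 6),
}

# Function 3 = diarthrotic joint, shape 5 = saddle joint (Table 4).
saddle_diarthrotic = sorted(
    joint for joint, (function, shape) in JOINTS.items()
    if function == 3 and shape == 5
)
print(saddle_diarthrotic)
```

Run on the full table, the same query returns exactly the three joints named above.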
Properties of human joints are a crucial foundation of pertinent medical fields such as orthopedic practice, physical therapy, and the design of related motion assistive devices. In addition, human joint properties, especially those concerning movement, are relevant to mechanism design for humanoid robots in the mechanical field [9]. It could be beneficial to apply this coding system there as well. For example, researchers can code kinematic pairs, which are considered the joints of a kinematic chain [10], according to their pertinent properties. With the two sets of coding systems, the relationship between human joints and kinematic pairs can be revealed. Moreover, the equivalent kinematic pairs of human joints, which allow movements corresponding to those of human joints, can also be determined.
Investigation of Similarities among Human Joints through the Coding System of Human Joint Properties—Part 1
B. Interdependence among the properties

The coding system proposed in this study considers nine different properties, six of which are interdependent with at least one other; three interdependent cases exist.

The first interdependence occurs among the function, the direction of rotation, the direction of translation, and the number of axes. If a joint rotates around one, two, or three axes, its number of axes is one, two, or three, respectively. If a joint rotates around no axis but allows translation, its number of axes is 0. If a joint allows neither rotation nor translation, its function is synarthrotic and its number of axes is coded "s" in Table 3. This interdependence is denoted by ID1 in Table 5.

The second interdependence occurs between the function and the shape of surfaces. If the code of the function is 1 or 2, the code of the shape of surfaces must be 0; only when the code of the function is 3 can the code of the shape of surfaces be 1 to 6. This agrees with the fact that only diarthrotic joints can be classified according to their shapes [11]. This interdependence is denoted by ID2 in Table 5.

The third interdependence occurs between the function and the arthrokinematics. If the code of the function is 1, the code of the arthrokinematics must be 0; only when the code of the function is 2 or 3 can the code of the arthrokinematics be 1 to 7. This indicates that a synarthrotic joint allows no movement between bones. This interdependence is denoted by ID3 in Table 5.

Apart from these three sorts of interdependence, the properties are independent of each other.

Table 5 Interdependence among the properties

                          Function  Shape  Arthrokin.  Rotation  Translation  Axes
Function                     -       ID2      ID3        ID1        ID1       ID1
Shape of surfaces           ID2       -        -          -          -         -
Arthrokinematics            ID3       -        -          -          -         -
Direction of rotation      ID1        -        -          -         ID1       ID1
Direction of translation   ID1        -        -         ID1         -        ID1
Number of axes             ID1        -        -         ID1        ID1        -

(The location, the number of bones involved, and the weight bearing are independent of all other properties.)
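The three interdependences can double as automatic consistency checks on the table. The sketch below is our own illustration; the field names, and the simplification of skipping the dependent rotation codes a-c, are assumptions.

```python
# Consistency checks for ID1-ID3. Codes are single characters as in Table 3.
# Rotation codes 1-3 imply one axis, 4-6 two axes, 7 three axes; the
# dependent codes a-c are skipped here for simplicity.
AXES_FROM_ROTATION = {"1": "1", "2": "1", "3": "1",
                      "4": "2", "5": "2", "6": "2", "7": "3"}

def violations(func, shape, arthro, rot, trans, axes):
    found = []
    # ID2: a classified shape (1-6) requires a diarthrotic joint (function 3).
    if func in "12" and shape != "0":
        found.append("ID2")
    # ID3: a synarthrotic joint (function 1) allows no movement between bones.
    if func == "1" and arthro != "0":
        found.append("ID3")
    # ID1: the number of axes follows the rotation (and translation) codes.
    if rot in AXES_FROM_ROTATION:
        if axes != AXES_FROM_ROTATION[rot]:
            found.append("ID1")
    elif rot == "0":
        # No rotation: translation alone gives 0 axes, none at all gives "s".
        expected = "0" if trans != "0" else "s"
        if axes != expected:
            found.append("ID1")
    return found

# Rows taken from Table 3: the glenohumeral and suture joints are consistent.
assert violations("3", "6", "6", "7", "0", "3") == []
assert violations("1", "0", "0", "0", "0", "s") == []
```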
V. CONCLUSIONS

This work proposed a coding system containing nine properties of human joints. The system serves as a concise approach to investigating joint properties and meets the needs of medical science. Moreover, with this system, comprehensive inspection of information about joints and other interdisciplinary applications become attainable. In the future, feedback from specialists and experts is needed to enhance this new classification system.

ACKNOWLEDGMENT

The National Science Council of Taiwan (R.O.C.) is acknowledged for the support of this paper (NSC95-2221-E038-010-MY3).

REFERENCES

1. S. T. Hsu, S. C. Chen, C. H. Yu and C. L. Liu (2007) Establishment of the Coding System of Human Joint Properties. Proc. 24th National Conference on Mechanical Engineering of the Chinese Society of Mechanical Engineers, Chungli, Taiwan, 2007, pp. 5593-5598
2. G. R. Johnson and J. M. Anderson (1990) Measurement of three-dimensional shoulder movement by an electromagnetic sensor. Clinical Biomechanics, vol. 5, issue 3, pp. 131-136
3. A. P. Vasen, S. H. Lacey, M. W. Keith and J. W. Shaffer (1995) Functional range of motion of the elbow. Journal of Hand Surgery, vol. 20, issue 2, pp. 288-292
4. L. Leonard, D. Sirkett, G. Mullineux, G. E. B. Giddins and A. W. Miles (2005) Development of an in-vivo method of wrist joint motion analysis. Clinical Biomechanics, vol. 20, pp. 166-171
5. F. H. Dujardin, X. Roussignol, O. Mejjad, J. Weber and J. M. Thomine (1997) Interindividual variations of the hip joint motion in normal gait. Gait & Posture, vol. 5, pp. 246-250
6. R. A. Malinzak, S. M. Colby, D. T. Kirkendall, B. Yu and W. E. Garrett (2001) A comparison of knee joint motion patterns between men and women in selected athletic tasks. Clinical Biomechanics, vol. 16, pp. 438-445
7. R. S. Sodhi (2000) Evaluation of head and neck motion with the hemispherical shell method. International Journal of Industrial Ergonomics, vol. 25, pp. 683-691
8. J. E. Muscolino (2006) Kinesiology: The Skeletal System and Muscle Function. Mosby Elsevier, p. 64
9. H. Yussof, M. Yamano, Y. Nash and M. Ohka (2006) Design of a 21-DOF humanoid robot to attain flexibility in human-like motion. 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN06), 2006, pp. 202-207
10. R. L. Norton (2001) Design of Machinery: An Introduction to the Synthesis and Analysis of Mechanisms and Machines. McGraw Hill Publishing Company, New York, p. 27
11. N. Palastanga, D. Field and R. Soames (2006) Anatomy and Human Movement: Structure and Function. Butterworth Heinemann Elsevier, New York, pp. 23-24
Author: Shan-Ting HSU
Institute: Department of Mechanical Engineering, National Taiwan University
Street: No. 1, Sec. 4, Roosevelt Road
City: Taipei, 10617
Country: Taiwan
Email: [email protected]
Investigation of Similarities among Human Joints through the Coding System of Human Joint Properties—Part 2

S.C. Chen1, S.T. Hsu2, C.L. Liu2 and C.H. Yu3
1 Department of Physical Medicine and Rehabilitation, Taipei Medical University and Hospital, Taipei, Taiwan
2 Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan
3 Department of Physical Therapy and Assistive Technology, National Yang-Ming University, Taipei, Taiwan

Abstract — In rehabilitation engineering and assistive technology, previous research has focused on a single characteristic or property of a single joint, while little attention has been paid to similarities among joints. This work investigates similarities among human joints by referring to the coding system of human joint properties proposed in Part 1. To investigate the similarities, eight of the nine properties in this coding system (all except the locations of joints) are divided into five cases. In Case 1, we consider the numbers of bones involved in joints and the shapes of joint surfaces. In Case 2, we consider the directions of rotation, directions of translation, and numbers of axes of joints. In Cases 3, 4 and 5, the functions of joints, the arthrokinematics of joints, and the weight bearing of joints are considered, respectively. Similarities among the 41 kinds of joints in this coding system are investigated by sorting the joints' codes in each case. As a result, the 41 kinds of joints are classified into 12, 13, 3, 6 and 2 groups in Cases 1 to 5, respectively. Regarding the properties discussed in each case, joints in the same group possess similar properties. However, four groups in Case 1, four groups in Case 2, and one group in Case 4 each contain only one kind of joint; such a joint is unique, having no similarity with other joints in the properties considered. In conclusion, similarities among human joints are discovered systematically by referring to the coding system proposed in Part 1. These similarities offer an innovative and useful reference for assistive technology device designs and other pertinent research. Similarities beyond these five cases can be further investigated with the same method.

Keywords — Similarity among human joints, coding system, property of human joint, sorting, human joint.
I. INTRODUCTION

In the field of rehabilitation engineering and assistive technology, numerous attempts have been successfully made to discover properties of human joints, and knowledge of joint properties is now widely available. These properties are indispensable to pertinent research in medical fields such as physical medicine, rehabilitation, and assistive technology devices. Take continuous passive motion (CPM) for example. CPM has been shown to have a great effect on recovering the range of motion of joints [1, 2], and a considerable number of designs for CPM devices have been proposed for various joints based on their various properties [3, 4]. In addition, many types of artificial joints have been proposed for different human joints [5, 6]. Another research field related to human joint properties is mechanism design for humanoid robots. In the design process, the degrees of freedom (DOFs) of human joints have to be considered so that the DOFs of humanoid robot joints can approximate those of human joints [7].

Though properties of human joints are extensively applied in these fields, most studies have focused on a single property of a single joint; properties of joints have been investigated individually before a design process begins. Similarity among joints, though it deserves consideration, has hitherto been neglected [8]. If similarities among joints are taken into consideration, designers can, for instance, simply modify an existing design for a similar joint to complete a new design fitting their needs. Developing a design in this way takes much less time than starting a completely new one from scratch.

On these grounds, the purpose of this paper is to investigate the similarities among human joints. To achieve this purpose, we apply the coding system proposed in the first part of our work. One reason for using coding systems, according to Groover [9], is design retrieval: a mechanism for determining whether a similar part exists; if so, an uncomplicated change to the existing part can be made. This is our expectation. Therefore, the coding system is utilized in this part of our work as an approach to investigating similarities among joints.

II. DETERMINATION OF CASES AND RESEARCH METHOD

The coding system proposed in Part 1 considers nine properties, eight of which are of interest here (all except the locations). They are categorized into five cases. In Case 1, the number of bones involved and the shape of surfaces are considered; both properties pertain to the anatomical structure of joints. In Case 2, the direction of rotation, the direction of translation, and the number of axes are considered; these three properties are related to the directions of motion allowed at joints. In Case 3, the function of joints is considered, which has to do with the range of motion at joints. In Case 4, we consider the arthrokinematics of joints, which refers to the relative movements occurring between joint surfaces. In Case 5, the weight bearing of joints is considered, an essential factor widely considered in physical therapy.

The similarities of joints are then investigated based on these five cases. In each case, the codes of the considered properties are sorted incrementally with Microsoft Excel®. Similarities among joints can then be determined from the sorting results.

III. SORTING PROCEDURE AND RESULTS

For Case 1, the codes of the number of bones involved and the shape of surfaces are sorted. We select the codes of the number of bones as the primary sorting keys and the codes of the shape of surfaces as the secondary keys. The sorting result is given in Table 1; it remains the same if the primary and secondary keys are exchanged. From the top of Table 1, a group of 16 kinds of joints share similarities concerning the two properties; another group contains two kinds of joints; still another contains five, and so on. Altogether there are 12 groups of joints, among which four groups possess only one kind of joint. Joints in these four groups are unique, sharing no similarities with other joints in the properties considered.

For Case 2, as given in Table 2, the codes of the direction of rotation, the direction of translation, and the number of axes are sorted. The codes of the direction of rotation are chosen as the primary keys, the codes of the direction of translation as the secondary, and the codes of the number of axes as the tertiary. Again, the order of sorting has no influence on the result. As a result, the joints are classified into 13 groups; those in the same group possess the same codes of the properties considered in this case. Among the 13 groups, four possess only one kind of joint.

Table 1 Similarity among joints and the sorting result regarding the number of bones and shape of surfaces
(the original table also lists the joints' remaining codes, which repeat those of Table 3 in Part 1; N is the number of joints sharing the similarity)

Joints                                                      Bones Shape  N
Cervical spine, thoracic spine, lumbar spine,
lumbosacral j't, costotransverse j't, sternocostal j't,
intrasternal j't, 2nd, 3rd carpometacarpal j't,
intermetacarpal j't, sacroiliac j't, symphysis pubis j't,
patellofemoral j't, proximal tibiofibular j't,
distal tibiofibular j't, tarsometatarsal j't,
intermetatarsal j't                                           1     0   16
Acromioclavicular j't, subtalar j't                           1     1    2
Temporomandibular j't, humeroulnar j't,
interphalangeal j't (hand), tibiofemoral j't,
interphalangeal j't (foot)                                    1     2    5
Atlantoaxial j't, proximal radioulnar j't,
distal radioulnar j't                                         1     3    3
Atlanto-occipital j't, metacarpophalangeal j't,
metatarsophalangeal j't                                       1     4    3
Sternoclavicular j't, 1st, 4th, 5th carpometacarpal j't       1     5    2
Glenohumeral j't, humeroradial j't, hip j't                   1     6    3
Suture j't of the skull, costovertebral j't,
transverse tarsal j't                                         2     0    3
Scapulocostal j't                                             2     1    1
Talocrural j't                                                2     2    1
Radiocarpal j't                                               2     4    1
Midcarpal j't                                                 2     5    1

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 946–949, 2009
www.springerlink.com
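The Case 1 sorting procedure amounts to an ordinary multi-key sort followed by grouping. The sketch below is our own illustration; the list is a small excerpt of the table, with each joint reduced to the two codes considered in Case 1.

```python
from itertools import groupby

# Excerpt: (joint, code of number of bones, code of shape of surfaces).
joints = [
    ("scapulocostal j't", 2, 1),
    ("cervical spine", 1, 0),
    ("acromioclavicular j't", 1, 1),
    ("subtalar j't", 1, 1),
    ("talocrural j't", 2, 2),
    ("thoracic spine", 1, 0),
]

# Case 1: primary key = number of bones, secondary key = shape of surfaces.
case1_key = lambda j: (j[1], j[2])
joints.sort(key=case1_key)

# Joints sharing a key pair form one similarity group.
groups = [(codes, [name for name, _, _ in members])
          for codes, members in groupby(joints, key=case1_key)]
```

On this excerpt, `groups` contains four groups; exchanging the primary and secondary keys only reorders the rows, leaving the groups themselves unchanged, as noted above.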
Table 2 Similarity among joints and the sorting result regarding the direction of rotation, direction of translation, and the number of axes
(the original table also lists the joints' remaining codes, which repeat those of Table 3 in Part 1; N is the number of joints sharing the similarity)

Joints                                                Rot Trans Axes  N
Suture j't of the skull, sternocostal j't,
intrasternal j't, 2nd, 3rd carpometacarpal j't,
symphysis pubis j't                                    0    0    s    5
Intermetacarpal j't                                    0    2    0    1
Patellofemoral j't, intermetatarsal j't                0    3    0    2
Humeroulnar j't, interphalangeal j't (hand),
talocrural j't, interphalangeal j't (foot)             1    0    1    4
Temporomandibular j't                                  1    4    1    1
Costovertebral j't, sacroiliac j't                     1    6    1    2
Costotransverse j't                                    2    5    1    1
Scapulocostal j't                                      2    6    1    1
Proximal radioulnar j't, distal radioulnar j't         3    0    1    2
Atlanto-occipital j't, radiocarpal j't,
midcarpal j't, 1st, 4th, 5th carpometacarpal j't,
metacarpophalangeal j't, tarsometatarsal j't           4    0    2    6
Atlantoaxial j't, humeroradial j't,
tibiofemoral j't, metatarsophalangeal j't              5    0    2    4
Cervical spine, thoracic spine, lumbar spine,
lumbosacral j't, sternoclavicular j't,
acromioclavicular j't, glenohumeral j't, hip j't,
subtalar j't, transverse tarsal j't                    7    0    3   10
Proximal tibiofibular j't, distal tibiofibular j't     c    c    1    2
With the identical sorting procedure, the joints can be sorted for Cases 3, 4 and 5, and the similarities among joints in these cases are then obtained. The results are presented in Table 3 to Table 5. Each table gives the code shared by the joints in a group as well as the number of joints sharing the similarity.

Table 3 Similarity among joints regarding the function

Code 1 (5 joints): suture j't of the skull, sternocostal j't, intrasternal j't, 2nd, 3rd carpometacarpal j't, symphysis pubis j't
Code 2 (14 joints): cervical spine, thoracic spine, lumbar spine, lumbosacral j't, costovertebral j't, costotransverse j't, intermetacarpal j't, sacroiliac j't, patellofemoral j't, proximal tibiofibular j't, distal tibiofibular j't, transverse tarsal j't, tarsometatarsal j't, intermetatarsal j't
Code 3 (22 joints): temporomandibular j't, atlanto-occipital j't, atlantoaxial j't, sternoclavicular j't, acromioclavicular j't, glenohumeral j't, scapulocostal j't, humeroulnar j't, humeroradial j't, proximal radioulnar j't, distal radioulnar j't, radiocarpal j't, midcarpal j't, 1st, 4th, 5th carpometacarpal j't, metacarpophalangeal j't, interphalangeal j't (hand), hip j't, tibiofemoral j't, talocrural j't, subtalar j't, metatarsophalangeal j't, interphalangeal j't (foot)
Table 4 Similarity among joints regarding the arthrokinematics

Code 0 (5 joints): suture j't of the skull, sternocostal j't, intrasternal j't, 2nd, 3rd carpometacarpal j't, symphysis pubis j't
Code 1 (2 joints): costotransverse j't, proximal radioulnar j't
Code 3 (23 joints): atlanto-occipital j't, acromioclavicular j't, scapulocostal j't, humeroulnar j't, distal radioulnar j't, radiocarpal j't, midcarpal j't, 1st, 4th, 5th carpometacarpal j't, intermetacarpal j't, metacarpophalangeal j't, interphalangeal j't (hand), sacroiliac j't, hip j't, patellofemoral j't, proximal tibiofibular j't, distal tibiofibular j't, talocrural j't, subtalar j't, transverse tarsal j't, tarsometatarsal j't, intermetatarsal j't, metatarsophalangeal j't, interphalangeal j't (foot)
Code 5 (8 joints): atlantoaxial j't, cervical spine, thoracic spine, lumbar spine, lumbosacral j't, costovertebral j't, sternoclavicular j't, humeroradial j't
Code 6 (1 joint): glenohumeral j't
Code 7 (2 joints): temporomandibular j't, tibiofemoral j't
Table 5 Similarity among joints regarding the weight bearing

Code 0 (21 joints): suture j't of the skull, temporomandibular j't, costovertebral j't, costotransverse j't, sternocostal j't, intrasternal j't, sternoclavicular j't, acromioclavicular j't, glenohumeral j't, scapulocostal j't, humeroulnar j't, humeroradial j't, proximal radioulnar j't, distal radioulnar j't, radiocarpal j't, midcarpal j't, 1st, 4th, 5th carpometacarpal j't, 2nd, 3rd carpometacarpal j't, intermetacarpal j't, metacarpophalangeal j't, interphalangeal j't (hand)
Code 1 (20 joints): atlanto-occipital j't, atlantoaxial j't, cervical spine, thoracic spine, lumbar spine, lumbosacral j't, sacroiliac j't, hip j't, symphysis pubis j't, tibiofemoral j't, patellofemoral j't, proximal tibiofibular j't, distal tibiofibular j't, talocrural j't, subtalar j't, transverse tarsal j't, tarsometatarsal j't, intermetatarsal j't, metatarsophalangeal j't, interphalangeal j't (foot)
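Tables 3 to 5 each group the joints by a single property, but the same computation works for any non-empty subset of the nine properties. The sketch below is our own illustration; the records are a hypothetical excerpt keyed by the rotation, translation, and axes codes.

```python
from collections import Counter

PROPS = ("rotation", "translation", "axes")

# Hypothetical excerpt: joint -> codes of (rotation, translation, axes).
JOINTS = {
    "atlantoaxial j't": ("5", "0", "2"),
    "humeroradial j't": ("5", "0", "2"),
    "tibiofemoral j't": ("5", "0", "2"),
    "metatarsophalangeal j't": ("5", "0", "2"),
    "talocrural j't": ("1", "0", "1"),
}

def group_sizes(selected):
    """Size of each group of joints sharing identical codes on `selected`."""
    idx = [PROPS.index(p) for p in selected]
    return Counter(tuple(codes[i] for i in idx) for codes in JOINTS.values())

# Grouping by rotation alone: one group of four joints and one singleton.
sizes = group_sizes(("rotation",))
```

With all nine properties available, any of the non-empty property subsets can be passed to `group_sizes` in the same way.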
IV. DISCUSSION

Five cases of similarities are discovered in this work through the coding system proposed in Part 1. Researchers can determine any other case of similarity based on their needs through the same method. Since the coding system provides nine properties of joints, up to 2^9 - 1 = 511 combinations of properties can be investigated.

The similarities determined in this work may be put to practical use. Take the atlantoaxial joint, the humeroradial joint, and the tibiofemoral joint in Table 2 for example. As indicated in the table, all three joints possess the same direction of rotation, the same direction of translation, and therefore the same number of axes. Even though almost all their other codes differ, it could be possible to make a pertinent adjustment to an existing device related to the movement of, for instance, the atlantoaxial joint to produce a fitting device applicable to the humeroradial joint or the tibiofemoral joint.

In this work, joints are sorted by their codes with the help of packaged software. Programs that search the codes, or expert systems that help users find the appointed codes, are worth developing. With them, joints possessing the required codes of the appointed properties can be discovered more efficiently, thus reducing the time for discovering the similarities among joints.

V. CONCLUSIONS

In this part of our work, similarities among joints are discovered systematically by sorting the codes proposed in Part 1. The similarities can serve as an innovative and useful reference for assistive technology device designs and other pertinent studies. Similarities beyond these five cases can also be determined with the same method.

ACKNOWLEDGMENT

The National Science Council of Taiwan (R.O.C.) is acknowledged for the support of this paper (NSC95-2221-E038-010-MY3).

REFERENCES

1. J. Morris (1995) The value of continuous passive motion in rehabilitation following total knee replacement. Physiotherapy, vol. 81, issue 9, pp. 557-562
2. D. Ring, B. P. Simmons and M. Hayes (1998) Continuous passive motion following metacarpophalangeal joint arthroplasty. The Journal of Hand Surgery, vol. 23, issue 3, pp. 505-511
3. J. H. Saringer and J. J. Culhane (1999) Continuous Passive Motion Device for Upper Extremity Forearm Therapy. United States Patent 5951499
4. A. H. Brook, P. J. Carian, L. Katzin, E. E. Landsinger, J. D. Moore, L. D. Rotter and S. Schreiber (1989) Continuous Passive Motion Devices and Methods. United States Patent 4875469
5. H. C. Amstutz and I. C. Clarke (1981) Natural Shoulder Joint Prosthesis. United States Patent 4261062
6. H. Grundei, J. Henssge and G. Schutt (1980) Endoprosthetic Elbow Joints. United States Patent 4224695
7. H. Yussof, M. Yamano, Y. Nash and M. Ohka (2006) Design of a 21-DOF humanoid robot to attain flexibility in human-like motion. 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN06), 2006, pp. 202-207
8. S. T. Hsu, S. C. Chen, C. H. Yu and C. L. Liu (2007) Establishment of the Coding System of Human Joint Properties. Proc. 24th National Conference on Mechanical Engineering of the Chinese Society of Mechanical Engineers, Chungli, Taiwan, 2007, pp. 5593-5598
9. M. P. Groover (2001) Automation, Production Systems, and Computer-Integrated Manufacturing. Prentice Hall, New Jersey, pp. 420-431

Author: Shan-Ting HSU
Institute: Department of Mechanical Engineering, National Taiwan University
Street: No. 1, Sec. 4, Roosevelt Road
City: Taipei, 10617
Country: Taiwan
Email: [email protected]
Circadian Rhythm Monitoring in HomeCare Systems

M. Cerny, M. Penhaker
VSB – Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Biomedical Engineering Laboratory, Ostrava, Czech Republic

Abstract — A HomeCare system has been designed, developed and realized in our laboratory and in a special test flat. It is primarily a compact system for the remote monitoring of basic life functions. However, standard biosignals such as the ECG are not the only values that can inform us about changes in the state of health of a monitored person: another possible monitored value is the circadian rhythm. Monitoring the circadian rhythm of people in HomeCare systems is very useful. Each person has their own rhythm, and for elderly people this rhythm is periodic. If the life cycle changes, we can infer that a health problem may be occurring. This article presents our designed and tested technical solutions for monitoring the circadian rhythm of a single elderly person in his flat, together with methods of interpretation.

Keywords — ZigBee, HomeCare, Day Activity
I. INTRODUCTION

Elderly people living alone in their own flats have similar day activities: their one-day cycle is similar on most days, with few abnormalities. A personal characteristic circadian rhythm can therefore be calculated from several measured circadian cycles. If any deviation from the characteristic circadian rhythm is detected, it could signify a health problem.

A typical example of a day-life-cycle abnormality is early fall detection during the night hours. An old man living alone regularly visits the toilet between the second and third hour of the morning. One day he wakes up and goes to the toilet, but he does not come back to his bed. The movement monitoring system finds that he reached the toilet and went away from it; his last detected position was in the corridor, and no further movement is detected in the other rooms of his flat for a set-up time. This could be a sign that he has fallen down, so the movement monitoring system goes into the alarm state. Thanks to early fall detection, his broken leg could be restored to health sooner.

There are two categories of circadian rhythm deviations. The first type is the "short-term" deviation, which was described in the previous example: the detection of urgent and accidental changes of the circadian rhythm. This detection helps to recognize urgent health problems and accidents.
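One way to make the characteristic circadian rhythm and its deviations concrete is sketched below. This is our illustration of the idea, not the authors' implementation: the characteristic rhythm is taken as the per-hour mean of several measured 24-hour activity cycles, and a day is flagged when its total deviation from that profile exceeds a threshold.

```python
def characteristic_rhythm(days):
    """Per-hour mean of several measured 24-value daily activity cycles."""
    return [sum(day[hour] for day in days) / len(days) for hour in range(24)]

def deviation(day, profile):
    """Total absolute deviation of one day from the characteristic rhythm."""
    return sum(abs(measured - usual) for measured, usual in zip(day, profile))

# Two measured cycles with nightly toilet visits around 2-3 a.m.
history = [
    [0, 0, 1, 1] + [0] * 20,
    [0, 0, 1, 0] + [0] * 20,
]
profile = characteristic_rhythm(history)

# A night without the usual visit deviates from the profile.
quiet_night = [0] * 24
alarm = deviation(quiet_night, profile) > 1.0
```

The threshold and the hourly binning here are assumptions; a real system would tune both to the monitored person.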
The second category is the "long-term" deviation, in which a gradual evolution of the circadian rhythm is recognized. Such an evolution is not necessarily pathologic; the degree of pathology has to be classified by a doctor. This knowledge can later be applied in an adaptive, self-learning system that classifies these rhythm changes automatically. A typical example of a long-term deviation is a change in the frequency of toilet visits. For these reasons, monitoring of personal activity circadian rhythms provides valuable information in HomeCare projects. Technical solutions for movement monitoring are discussed first, followed by the designed autonomous movement monitoring system.

II. MOVEMENT MONITORING
The first designed and realized method of movement monitoring uses standard PIR sensors, which were modified so that communication is realized over ZigBee. The measured data are presented on a PC by visualization software developed in LabVIEW. This method has several advantages: the hardware is powered by a single CR battery, no wires are required for data transmission, and the sensor can work for a long time. The disadvantages are that a larger pet can be detected instead of the person, and misdetections also occur when more people are in the flat; such misdetections influence the circadian cycle monitoring. Moreover, the position information is not exact — only the area of presence is determined.

The second designed solution uses the Location Engine from Texas Instruments, which is included in their single-chip ZigBee device CC2431. The location algorithm used in the CC2431 Location Engine is based on Received Signal Strength Indicator (RSSI) values: the RSSI value decreases as the distance increases. Four reference nodes were placed in the flat, and the flat user was equipped with a special wristwatch carrying the ZigBee chip with the location engine.
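As a rough sketch of RSSI-based positioning in the spirit of the CC2431 Location Engine, one can invert a log-distance path-loss model to get per-node distance estimates and combine them. The constants A and n and the weighted-centroid combination below are illustrative assumptions, not TI's actual algorithm or parameters.

```python
import math

A = 45.0   # assumed |RSSI| at 1 m, in dB (illustrative)
N = 2.5    # assumed path-loss exponent (illustrative)

def rssi_to_distance(rssi):
    """Invert the log-distance model: rssi = -(A + 10*N*log10(d))."""
    return 10 ** ((-rssi - A) / (10 * N))

def locate(nodes):
    """Estimate (x, y) from reference nodes: [((x, y), rssi), ...].
    Uses a weighted centroid, weighting nearby nodes more heavily."""
    wsum = xs = ys = 0.0
    for (x, y), rssi in nodes:
        w = 1.0 / max(rssi_to_distance(rssi), 1e-6)
        wsum += w
        xs += w * x
        ys += w * y
    return xs / wsum, ys / wsum

# Four reference nodes in the corners of a 10 x 10 m room, equal RSSI:
print(locate([((0, 0), -60), ((10, 0), -60), ((0, 10), -60), ((10, 10), -60)]))
```

With equal signal strengths at all four corners the estimate falls in the centre of the room; in practice the RSSI values differ and pull the estimate toward the strongest nodes.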
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 950–953, 2009 www.springerlink.com

The advantage of this solution is that the information about the position of the monitored person is as exact as possible: another person or a pet in the flat cannot be detected by mistake. The solution has one important disadvantage: the user has to wear an active ZigBee device, which may be unacceptable for some elderly people. However, the ZigBee chip can be part of the continuously measuring ECG device used in the HomeCare system, which makes the movement monitoring more tolerable for some users.
This is illustrated in the following picture; the displayed flat layout corresponds to our HomeCare Testing Flat.
A. Technical solution discussion
The definitive determination of the person's position in the flat cannot be achieved with only one of the introduced movement monitoring technologies; a combination of the two proposed technologies has to be used. The additional use of optoelectronic bars in the doorways is optimal. The person in the flat can then be located successfully, and the circadian rhythm can be constructed from the determined position data.

III. THE CIRCADIAN RHYTHM INTERPRETATION
The interpretation of the measured data is an important part of this project. It was necessary to define conditions for the data interpretation, i.e. for the circadian rhythm construction. These conditions relate to the representation of long-term and short-term activities: the person's movement between the monitored rooms of the flat must be recognizable from the circadian rhythm, and short-term activities must be recognizable as well. Because of the planned autonomy of the movement monitoring system, the circadian rhythm must be processable in microprocessors.
Fig. 1. The movement monitoring during day hours

Information about the person's position in the flat is less exact during night hours due to the monitoring method used: the PIR-based system provides only the area of the person's supposed presence. The coordinates of the person's position in the vector P are then replaced by the coordinates of the centre of the PIR sensor's detection area (see Fig. 2). Because the position information given by PIR sensors can be affected by many errors (misdetections caused, for example, by pets), an additional system of optoelectronic bars checks the validity of the information.
A. The mathematical description
The information about the person's position in the flat differs between day and night hours. In the day hours the location engine is active, so the measured data are more precise than in the night hours, when only the PIR-based measurement is active. The actual position of the person in the flat can be defined as a vector P (eq. 1):

P = f(x, y, t)    (1)

where x, y are the coordinates of the person's position and t is time.

Fig. 2. The movement monitoring during night hours

B. The chronological succession of person's presence in the flat
The simplest way to represent the measured data is the chronological succession of the person's presence in the monitored areas of the flat. The circadian rhythm corresponding to the person's movement activity in the flat can then be represented by:
IFMBE Proceedings Vol. 23
C = Σ_{t=0}^{24} K[t],  K[t] ∈ C, t ∈ R⁺    (2)

where K is a code for a monitored area in the flat.
The actual position of the person in the flat is represented by one non-zero element in the position matrix. The circadian rhythm C can then be represented by the position matrix (eq. 4). This matrix has to be three-dimensional, where each two-dimensional matrix M(t) corresponds to one moment in time (see Fig. 4).
The code K for a monitored area of the flat can be represented by an RGB code. This coding is very useful for the subsequent representation of the measured data. A distinct colour, described by its RGB code, is defined for each room of the flat. The RGB codes must be chosen carefully: the connections between coded areas should be easily recognizable from the coding. Each room's code has some RGB components at the maximal value and the others at zero, and the distance from the origin of the coordinate system is taken into consideration. Figure 3 shows a suitable RGB coding for our testing flat. Thanks to the number of possible combinations, RGB coding is suitable for larger flats as well.
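A minimal sketch of the RGB room coding and the resulting chronological succession of codes follows. The room names and colour values are illustrative assumptions; the codes actually used for the testing flat are those of Fig. 3.

```python
# Illustrative RGB codes: each room gets maximal values in some channels
# and zeros in the others, so related rooms get related colours.
ROOM_RGB = {
    "bedroom":  (255, 0, 0),
    "corridor": (255, 255, 0),
    "toilet":   (0, 255, 0),
    "kitchen":  (0, 0, 255),
}

def chronological_succession(events):
    """Map (timestamp, room) detections to the chronological succession
    of RGB codes described in the text."""
    return [(t, ROOM_RGB[room]) for t, room in events]

print(chronological_succession([(0, "bedroom"), (30, "corridor")]))
# → [(0, (255, 0, 0)), (30, (255, 255, 0))]
```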
C = Σ_{t=0}^{24} M[t],  t ∈ R⁺

M[t] = Σ_{i=0}^{x_max} Σ_{j=0}^{y_max} a_{i,j}[t]    (4)

where a_{i,j}[t] = 0 for i ≠ x_t, j ≠ y_t
      a_{i,j}[t] = 1 for i = x_t, j = y_t
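The position matrix of eq. (4) can be sketched as a 3-D array holding a single one per time slice. The flat dimensions here are assumptions; the sketch also makes concrete why a centimetre-resolution matrix is too large for a microcontroller.

```python
import numpy as np

X_MAX, Y_MAX = 500, 400   # assumed flat size in cm (illustrative)

def position_matrix(positions):
    """positions: list of (x, y) in cm, one per time step. Returns the
    3-D array C[t, x, y] of eq. (4) with one non-zero cell per slice."""
    C = np.zeros((len(positions), X_MAX, Y_MAX), dtype=np.uint8)
    for t, (x, y) in enumerate(positions):
        C[t, x, y] = 1
    return C

M = position_matrix([(120, 80), (130, 85)])
print(int(M.sum()), int(M[0, 120, 80]))  # → 2 1
```

Even for two time steps this array has 2 × 500 × 400 = 400 000 cells, which illustrates the memory cost noted in the text.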
Fig. 3. RGB codes for each room of the flat and the position vector p

Fig. 4. The position matrix

When RGB coding is used, the circadian rhythm can be displayed as a "chronological succession of RGB codes".
The position matrix is not suitable for further data processing; in particular, it is not usable for data processing in microcontrollers, which violates one of the conditions defined at the beginning of chapter III.
C. The position matrix
The actual position of the monitored person in the flat is defined by the vector P (eq. 1). If the monitored flat is represented by a matrix with the same dimensions as the flat (in cm), the matrix elements can represent the person's presence in the flat:

a_{i,j} = 0 for i ≠ x, j ≠ y
a_{i,j} = 1 for i = x, j = y    (3)

where a_{i,j} is a matrix element and x, y are the coordinates of the person's position.

D. The position vector
The actual position of the monitored person can also be defined as a position vector: the vector from the origin of the coordinate system to the measured actual position. The modulus of this vector can be computed for each measured position (eq. 5):

|p| = √(p_x² + p_y²)    (5)

E. The colored vector of position
Given the defined conditions and the proposed possibilities of circadian rhythm interpretation, it seems to
be most effective to combine the "chronological succession of RGB codes" with the "colored vector of position". The colored vector of position can then be constructed (Fig. 5): the x axis is the time axis, and the y axis carries the values of |p|. An RGB code is added to each recorded position value so that the room in which the user was detected is visible; this is necessary because the exact position cannot be recognized from the position vector modulus alone.
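The colored vector of position can be sketched as follows: each sample keeps the modulus of the position vector from eq. (5) together with the RGB code of the detected room. The sample format and the colour value are illustrative assumptions.

```python
import math

def colored_vector(samples):
    """samples: list of (t, (px, py), rgb). Returns (t, |p|, rgb) rows,
    ready to plot with time on the x axis and |p| on the y axis."""
    return [(t, math.hypot(px, py), rgb) for t, (px, py), rgb in samples]

print(colored_vector([(0, (3.0, 4.0), (255, 0, 0))]))
# → [(0, 5.0, (255, 0, 0))]
```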
Fig. 5. Colored vector of position (simulation)

This type of circadian rhythm interpretation satisfies all the conditions defined above. The colour of the vector is very useful for processing long-term deviations: a personal characteristic long-term circadian rhythm can be constructed from the colored vector of position. The values of the position vector modulus are usable for the detection and classification of short-term deviations.

IV. CONCLUSION
The proposed HomeCare system has been partially realized and tested in our special HomeCare Testing Flat. The realized measuring devices have been based on Bluetooth technology; new measuring devices using ZigBee are in progress and will be tested in the near future. The circadian rhythm monitoring system is an important part of our HomeCare system and is also tested in our Testing Flat. The autonomous movement monitoring system has been designed and is now being realized. The method of circadian rhythm construction has been specified; the analysis system remains to be designed. The results of this work can be used in HomeCare systems and in other circadian rhythm monitoring projects.

ACKNOWLEDGMENT
This work was supported in part by grant GACR 102/05/H525, The Postgraduate Study Rationalization at Faculty of Electrical Engineering and Computer Science VSB-TU Ostrava, and in part by grant GACR 102/08/1429, Safety and security of networked embedded system applications.
Author: Martin Cerny
Institute: VSB – Technical University of Ostrava
Street: 17.listopadu 15
City: Ostrava Poruba
Country: Czech Republic
Email: [email protected]
Effects of Muscle Vibration on Independent Finger Movements
B.-S. Yang1,2 and S.-J. Chen1
1 Department of Mechanical Engineering, National Chiao Tung University, Hsinchu, Taiwan
2 Brain Research Center, National Chiao Tung University, Hsinchu, Taiwan
Abstract — Previous studies have demonstrated that small-amplitude muscle vibration (MV) can increase the motor pathway excitability of the vibrated hand muscle and inhibit the excitability of the motor pathways of neighboring muscles in healthy individuals. The purpose of this study is to examine whether the vibration-induced neurophysiological changes are reflected in the voluntary control of finger movements. We tested the hypothesis that MV applied to an individual hand muscle (abductor pollicis brevis, APB; first dorsal interosseus, FDI; or abductor digiti minimi, ADM) affects the control of finger abduction/adduction. Each fingertip position was captured using a motion capture system during four experimental conditions: no MV, MV to APB, MV to FDI, and MV to ADM. Two consecutive 10-s trials in each condition were tested for each finger. We calculated the individuation index (Iind) and the index of selective activation (ISA) to compare finger independency with and without MV. The results indicated that muscle vibration can enhance the independency of finger movements.

Keywords — Muscle Vibration, Finger independency, Muscle selectivity
I. INTRODUCTION Lack of finger independency and muscle selectivity is a major contributor to hand dysfunction post stroke [1-4]. Rosenkranz and Rothwell [5] reported that for healthy adults, small amplitude ( 0.05)
Effects of Phototherapy to Shangyingxiang Xue on Patients with Allergic Rhinitis
IV. DISCUSSION AND CONCLUSIONS
The findings of our study demonstrate that phototherapy applied to an acupuncture point (Shangyingxiang Xue) may improve the clinical symptoms of AR. Most of the symptoms were quickly and significantly improved; the smell impairment, however, did not improve until after the third therapy. This was probably because the pre-treatment score was lower than the others, leaving little room for further decrease, or because phototherapy to the acupuncture point is less effective in improving olfactory disorders. Although phototherapy combining 660 nm and 850 nm applied to Shangyingxiang Xue may therefore be viewed as a useful additional approach in the treatment of AR, the definite mechanism remains unclear. Low-energy illumination therapy has proved effective in some clinical situations such as pain relief and wound healing. Illumination in both the visible and the infrared range has been shown to be of therapeutic benefit, but these two types of illumination differ markedly in both photochemical and photophysical properties: visible light may initiate the cascade of metabolic events at the level of the respiratory chain of the mitochondria, whereas infrared illumination does so by activating enzymes. Accordingly, on the basis of findings in previous studies, the illumination selected for our study was red light at 660 nm and infrared light at 850 nm. No adverse side effects of the phototherapeutic treatment were observed in this study, and nobody dropped out of the group during the experimental period. We believe that many patients can obtain relief of symptoms with this new therapeutic protocol. Although there was little difference in clinical effect between the phototherapy and drug groups, safety concerns about medical treatment usually pose an important restriction on its use. In conclusion, AR may be treated effectively and safely by illumination on Shangyingxiang Xue at 660 nm and 850
nm. Further studies are necessary to establish which wavelengths and/or acupuncture points are therapeutically most effective and safe for the treatment of inflammatory disorders of the nasal mucosa.
The Study of Neural Correlates on Body Ownership Modulated By the Sense of Agency Using Virtual Reality
W.H. Lee1, J.H. Ku1, H.R. Lee1, K.W. Han1, J.S. Park1, J.J. Kim2, I.Y. Kim1 and S.I. Kim1
1 Department of Biomedical Engineering, Hanyang University, Korea
2 Institute of Behavioral Science in Medicine, Yonsei University Severance Mental Health Hospital, Korea
Abstract — The sense of one's own body as part of the self is a fundamental aspect of self-awareness. The recent distinction between the sense of agency and the sense of body ownership has attracted considerable empirical and theoretical interest. In this study, we compared the strength of the virtual hand illusion induced by agency-controlled movement, to investigate the contributions of visuo-motor stimulation and ownership to the body, using fMRI and a behavioral study. In the synchronous conditions the virtual hand angle was the real hand angle scaled by a fixed factor, while in the asynchronous condition the virtual hand angle did not correspond to the real hand angle. The left precentral gyrus, left SMA, left anterior cingulate cortex and right parahippocampal gyrus showed significant differences between the synchronous and asynchronous conditions with visual feedback, and the left SMA correlated positively with the ownership score. The SMA integrates various sensory inputs, selects optimized actions, compares the body forward model with proprioception, and thereby estimates the body image.

Keywords — body ownership, sense of agency, left SMA
I. INTRODUCTION
The development of BMI (Brain Machine Interface) technology enables the human brain to communicate with a machine, so people can move a prosthesis by intention [1, 2]. To make the prosthesis feel like the subject's own limb, the functional development of the prosthesis is important, but the psychological element (bodily self-awareness) grows in importance as well [3]. The sense of agency and the sense of body ownership jointly constitute the core of our bodily self-awareness [4]. The sense of agency is the sense of intending and executing actions, including the feeling of controlling one's own body movements and, through them, events in the external environment [5, 6]. The sense of agency involves a strong efferent component, because centrally generated motor commands precede voluntary movement [4, 6]. Body ownership refers to the sense that one's own body is the source of sensations [4, 6]. The sense of body ownership involves a strong afferent component, through the various peripheral signals that indicate the state of the body [7]. The sense of body ownership is present not only during voluntary actions but also during passive experience; in contrast, only voluntary actions produce a sense of agency [8]. The recent distinction between sense of agency and sense of body ownership has attracted considerable empirical and theoretical interest [9]. The respective contributions of central motor signals and peripheral afferent signals to these two varieties of body experience remain unknown [10, 11]. In this study, we were interested in the effects of agency-controlled (synchronous and asynchronous) movement on brain areas known to be involved in bodily self-awareness.

II. METHODS

A. Subjects
Sixteen healthy volunteers (average age: 24.8, range: 21–31, SD: 2.51), 8 male and 8 female, were recruited for this study. All were free of neurological or psychiatric illness.

B. Experiment Task
Subjects moved a virtual hand, coupled to the real hand angle, toward a randomized target angle with and without visual feedback. We built an MR-compatible device that allowed us to modify the subject's degree of control over the movements of the virtual hand in the MR room. The experiment consisted of four agency-controlled conditions (Synch 1, Synch 0.5, Synch 2, Asynch) modulating the participants' visuo-motor sense (Fig. 1). In Synch 1 the virtual hand angle corresponded to the real hand angle (real hand angle × 1); in Synch 0.5 and Synch 2 the virtual hand angle was the real hand angle scaled by a factor (real hand angle × 0.5 or × 2). In Asynch, the virtual hand angle did not correspond to the real hand angle.

C. Procedure
At the beginning of the task, subjects moved the real hand toward the target angle without visual feedback to measure the baseline hand movement sensation. Then they performed four blocks, one per condition (Fig. 2). One block
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 996–999, 2009 www.springerlink.com
consisted of 4 sessions (training session with visual feedback, ownership and agency questionnaire session, test session without visual feedback). Three kinds of measurements were obtained in this study: a self-report questionnaire on virtual hand ownership ("it seemed like the virtual hand was my hand") and sense of agency ("it seemed like I could have moved the virtual hand if I had wanted"); the proprioceptive shift, i.e. the change in hand movement sensation between baseline and test session (baseline – test session); and the brain activity in each condition.

D. Data Acquisition
Fig. 1 The experiment task: (1) Synch 1 – virtual hand angle corresponded to the real hand angle (real hand angle × 1); (2, 3) Synch 0.5, Synch 2 – virtual hand angle was the real hand angle scaled by a factor (real hand angle × 0.5 or × 2); (4) Asynch – virtual hand angle did not correspond to the real hand angle
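The four condition mappings of Fig. 1 can be sketched as follows. The condition names and the way the asynchronous angle is generated are assumptions; the paper does not state how the uncorrelated angle was produced.

```python
import random

# Scale factors for the synchronous conditions (names are assumptions).
SCALE = {"synch_1": 1.0, "synch_0.5": 0.5, "synch_2": 2.0}

def virtual_hand_angle(condition, real_angle, rng=random.Random(0)):
    """Angle displayed for the virtual hand in each condition."""
    if condition in SCALE:
        return real_angle * SCALE[condition]   # visuo-motor synchrony
    return rng.uniform(0.0, 90.0)              # Asynch: unrelated angle

print(virtual_hand_angle("synch_0.5", 40.0))   # → 20.0
```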
fMRI was performed on a 1.5-T MRI system (Sigma Eclipse, GE Medical Systems). BOLD (blood oxygenation level dependent) signals were obtained using an EPI sequence (gradient echo, 64×64×30 matrix with 3.75×3.75×5 mm spatial resolution, TE: 14.3, TR: 2 s, FOV: 240 mm, slice thickness: 5 mm, FA: 90, number of slices: 30). A series of high-resolution anatomical images was also acquired with a fast spoiled gradient echo sequence (256×256×116 matrix with 0.94×0.94×1.50 mm spatial resolution, FOV: 240 mm, thickness: 1.5 mm, TR: 8.5 s, TE: 1.8 s, FA: 12, number of slices: 116).

E. Data analysis
Fig. 2 Experiment procedure. One block consisted of 4 sessions (training session with visual feedback, ownership and agency questionnaire session, test session without visual feedback)
Data analysis was conducted with AFNI (Analysis of Functional NeuroImages, version 2007_05_29_1644), freeware developed by R.W. Cox [12]. The first 15 time points in all time series were discarded to eliminate the fMRI signal decay associated with magnetization reaching equilibrium. All remaining fMRI data were coregistered to the first remaining time sample to correct for the confounding effects of small head motions during task performance. 'Spike' values in the 3D+time input dataset were then corrected with the despike routine provided in AFNI, and mean-based intensity normalization was performed. Further processing included temporal smoothing (a three-point low-pass filter, 0.15×a[t-1] + 0.7×a[t] + 0.15×a[t+1]) as well as detrending to remove constant, linear and quadratic trends from the time series. Spatial normalization into Talairach space was performed using the Montreal Neurological Institute (MNI) N27 template provided in AFNI (bilinear interpolation, spatial resolution: 2×2×2 mm³), followed by spatial smoothing (Gaussian filter with 9 mm full width at half maximum, FWHM). After preprocessing, condition-specific effects were estimated according to the general linear model (GLM). Because the three synchronous conditions (Synch 1, Synch 0.5 and Synch 2) did not differ according to the behavioral data, their data (brain activity and behavioral data) were averaged into one synchronous condition. We performed a repeated-measures ANOVA to test differences in neural activity between the synchronous and asynchronous conditions, and then a correlation analysis between the behavioral data and the brain activations.
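The three-point temporal low-pass filter used in the preprocessing, 0.15·a[t−1] + 0.7·a[t] + 0.15·a[t+1], can be sketched with NumPy; leaving the edge samples unfiltered is an assumption about the boundary handling, which the paper does not specify.

```python
import numpy as np

def lowpass3(a):
    """Three-point low-pass: 0.15*a[t-1] + 0.7*a[t] + 0.15*a[t+1].
    Edge samples are passed through unchanged (assumed behaviour)."""
    a = np.asarray(a, dtype=float)
    out = a.copy()
    out[1:-1] = 0.15 * a[:-2] + 0.7 * a[1:-1] + 0.15 * a[2:]
    return out

print(lowpass3([0.0, 1.0, 0.0]).tolist())  # → [0.0, 0.7, 0.0]
```

A unit spike is attenuated to 0.7 of its height, showing how the filter damps high-frequency noise while preserving slow BOLD trends.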
III. RESULTS

A. Behavioral data
According to the behavioral data, both the body ownership and the agency scores were higher in the synchronous condition than in the asynchronous condition, and the proprioceptive shift (baseline – test session) was larger in the synchronous condition (Fig. 3).

B. fMRI data
The left precentral gyrus, left SMA, left anterior cingulate cortex and right parahippocampal gyrus showed significant differences between the synchronous and asynchronous conditions with visual feedback. The left superior orbital gyrus and left hippocampus showed significant differences between the synchronous and asynchronous conditions without visual feedback (Tables 1 and 2).

Fig. 3 Behavioral data

Table 1 Main-effect brain regions differing between the synchronous and asynchronous conditions with visual feedback

Volume   t        x    y    z    Region
192      4.4959   -15  -23  68   left precentral gyrus
160      4.5644   -3   -3   60   left SMA
128      4.4399   -3   39   4    left anterior cingulate cortex
104      4.6158   37   -33  -14  right parahippocampal gyrus

Table 2 Main-effect brain regions differing between the synchronous and asynchronous conditions without visual feedback

Volume   t        x    y    z    Region
176      -4.5188  -11  57   -2   left superior orbital gyrus
96       4.2105   -27  -21  -2   left hippocampus

Fig. 4 Correlation between the left SMA % signal change difference (sync – async) and the ownership score difference (sync – async)

C. Correlation analysis
We performed a correlation analysis contrasting the % signal change (sync – async) with the contrasted behavioral data (sync – async); the left SMA correlated positively with the ownership score (Fig. 4).
IV. DISCUSSION
We investigated the effects of agency-controlled (synchronous and asynchronous) movement on brain areas, focusing particularly on the correlation between the left
SMA and the ownership score. According to several neuroimaging studies, the SMA is implicated in the planning of motor actions and is associated with bimanual control; one could say that the SMA sends a "plan" of the motor action to the primary motor cortex, which executes it. The SMA is implicated in actions that are under internal control, such as the performance of a sequence of movements from memory [13, 14]. The elicitation of ownership depends on the integration of visual and motor information and on the differences between the visual and position-sense representations. The period before ownership develops is critical in this respect, and it probably involves a recalibration of position sense for the hand [15, 16]. Thus, the recalibration of limb position might be a key mechanism for the elicitation of ownership, and experiencing ownership indeed has behavioral consequences for arm movements [16]. The SMA is one of the regions with this feature.
REFERENCES
1. Buch E, Weber C, Cohen LG, Braun C, Dimyan MA, Ard T et al. (2008) Think to move: a neuromagnetic brain-computer interface (BCI) system for chronic stroke. Stroke 39:910-917
2. Lebedev MA, Nicolelis MA (2006) Brain-machine interfaces: past, present and future. Trends Neurosci 29:536-546
3. Gallagher II (2000) Philosophical conceptions of the self: implications for cognitive science. Trends Cogn Sci 4:14-21
4. Tsakiris M, Haggard P, Franck N, Mainy N, Sirigu A (2005) A specific role for efferent information in self-recognition. Cognition 96:215-231
5. Haggard P (2005) Conscious intention and motor cognition. Trends Cogn Sci 9:290-295
6. Tsakiris M, Prabhu G, Haggard P (2006) Having a body versus moving your body: How agency structures body-ownership. Conscious Cogn 15:423-432
7. Sato A, Yasuda A (2005) Illusion of sense of self-agency: discrepancy between the predicted and actual sensory consequences of actions modulates the sense of self-agency, but not the sense of self-ownership. Cognition 94:241-255
8. Schwabe L, Blanke O (2007) Cognitive neuroscience of ownership and agency. Conscious Cogn 16:661-666
9. Costantini M, Haggard P (2007) The rubber hand illusion: sensitivity and reference frame for body ownership. Conscious Cogn 16:229-240
10. Tsakiris M, Schutz-Bosbach S, Gallagher S (2007) On agency and body-ownership: phenomenological and neurocognitive reflections. Conscious Cogn 16:645-660
11. Platek SM, Keenan JP, Gallup GG Jr., Mohamed FB (2004) Where am I? The neurological correlates of self and other. Brain Res Cogn Brain Res 19:114-122
12. Cox RW (1996) AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res 29:162-173
13. Shima K, Tanji J (1998) Both supplementary and presupplementary motor areas are crucial for the temporal organization of multiple movements. J Neurophysiol 80:3247-3260
14. Supplementary motor area at http://en.wikipedia.org/wiki/Supplementary_motor_area
15. Naito E, Roland PE, Ehrsson HH (2002) I feel my hand moving: a new role of the primary motor cortex in somatic perception of limb movement. Neuron 36:979-988
16. Ehrsson HH, Spence C, Passingham RE (2004) That's my hand! Activity in premotor cortex reflects feeling of ownership of a limb. Science 305:875-877

Author: Jeonghun Ku, Ph.D.
Institute: Department of Biomedical Engineering, Hanyang University
City: Seoul
Country: Korea
Email: [email protected]
Diagnosis and Management of Diabetes Mellitus through a Knowledge-Based System
Morium Akter1, Mohammad Shorif Uddin2 and Aminul Haque3
1 Dept. of Computer Science and Engineering, Jahangirnagar University, Savar, Dhaka, Bangladesh
2 Imaging Informatics Division, Bioinformatics Institute, A*STAR, Singapore
3 Bangladesh Institute of Health Sciences (BIHS) Hospital, Savar Center, Dhaka, Bangladesh
Abstract — The chronic disease diabetes mellitus is increasing worldwide. It is caused by an absolute or relative deficiency of insulin and is associated with a range of severe complications, including renal and cardiovascular disease as well as blindness. Preventive care helps in controlling the severity of this disease; however, preventive measures require correct educational awareness and routine health checks. Medical doctors provide effective diagnosis as well as treatment of diabetes, although at high cost. With this in view, the purpose of the present research is to develop a low-cost automated knowledge-based system that helps patients as well as doctors in the self-diagnosis and management of this chronic disease. Our knowledge-based system has an easy computer interface and performs the diagnostic tasks using rules acquired from medical doctors on the basis of patient data. At present the system consists of 26 rules and has been implemented in Prolog. Real-life experiments were performed, which confirmed the effectiveness of the developed system.

Keywords — Diabetes mellitus, insulin deficiency, disease diagnosis, health care management, knowledge-based (expert) system.
I. INTRODUCTION

In the year 2000, a study [1] among 191 World Health Organization (WHO) member states found that 2.8% of people across all age groups had diabetes, and this is expected to reach 4.4% by the year 2030. This implies that the total number of people with diabetes is projected to rise from 171 million in 2000 to 366 million in 2030. In Bangladesh, diabetes is reaching epidemic proportions; in some sectors of our society more than 10% of people have diabetes [2]. Diabetes causes severe, life-threatening complications [3], such as hypoglycemic coma, blurred vision, loss of memory, severe impairment of renal function, insulin allergy, acute neuropathy, etc. Diabetes management requires dietary control, physical exercise and insulin administration. Medical doctors help in the effective diagnosis as well as treatment of diabetes; however, this is associated with high costs. Despite remarkable medical advances, patient self-management remains the cornerstone of diabetic treatment. Knowledge-based intelligent systems have proven effective in solving many real-world problems requiring expert skills. Hence, to reduce costs and to improve early detection as well as self-awareness of diabetes mellitus, an automated expert system may be a promising solution. Knowledge-based systems for diagnosis are used in a variety of domains: plant disease diagnosis, credit evaluation and authorization, financial evaluation, identification of software and hardware problems and integrated circuit failures, etc. [4]-[6]. The main objective of the present research is to develop a low-cost automated knowledge-based system incorporating the skills of medical experts to help patients in the self-diagnosis and management of this chronic disease. Recently, expert systems have been developed at the laboratory stage for diabetes awareness and management, and insulin administration [7]-[11]. Compared to these systems, our approach is simpler and more pragmatic. The paper is organized as follows. Section II briefly describes the medical knowledge about diabetes, Section III presents the system architecture, Section IV describes the rule-based decision-making process as well as some experimental results, and finally Section V draws the conclusions.

II. MEDICAL KNOWLEDGE OF DIABETES

Diabetes mellitus is a clinical syndrome characterized by hyperglycemia due to an absolute or relative deficiency of insulin [12].

A. Classification of diabetes
1. Type-I (Insulin-Dependent Diabetes Mellitus, IDDM) diabetes tends to occur in the young. Type-II (Non-Insulin-Dependent Diabetes Mellitus, NIDDM) diabetes occurs more often in older people who are obese and have sedentary lifestyles.
2. Gestational diabetes mellitus (GDM) is glucose intolerance detected in a pregnant woman who was not known to have these abnormalities prior to conception [3].
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1000–1003, 2009 www.springerlink.com
B. Diagnosis

Diagnosis [13]-[14] is a process by which a doctor searches for the cause (disease) that best explains the symptoms of a patient. Our knowledge-based system is mainly used for performing diagnosis based on patient data. Patient data can be demographic or clinical. Demographic data comprises information such as the patient's age, sex, location, income, etc. Clinical data is divided into physical signs and laboratory results. Physical signs are those detected by a physical examination of the patient, like BMI (body-mass index), pulse rate and blood pressure. Laboratory results are those obtained via laboratory tests, like blood tests, urine tests, etc. The diagnosis system is based on the following patient data [12]:
1. Test urine for glucose and ketones.
2. Measure random or fasting blood glucose:
   - Fasting plasma glucose >= 7.0 mmol/l
   - Random plasma glucose >= 11.0 mmol/l.
3. Oral glucose tolerance test:
   - Fasting plasma glucose 6.1-6.9 mmol/l
   - Random plasma glucose 7.0-11.0 mmol/l.

C. Method of Treatment
1. Diet alone— 50% can be controlled adequately.
2. Diet + oral hypoglycemic agent— 20-30% can be controlled.
3. Diet and insulin— 30% can be controlled.

III. SYSTEM ARCHITECTURE

The computer-based system consists of a user interface, an inference engine, a knowledge base, a working memory and a case history file. The case history file contains the demographic and clinical data of the patient. The knowledge base contains the heuristic knowledge encoded in some form (e.g., rules) for diagnosing diabetes. The working memory contains the (initial) patient data, partial conclusions, data given by the user and other information related to the case under consideration. It provides facts and serves as a store for inferences or conclusions. The inference engine carries out the expert system's reasoning process. It works on the facts in the working memory and uses the domain knowledge contained in the knowledge base to derive (or infer) new facts as well as conclusions [5]. It achieves this by searching through the knowledge base to find rules whose premises match the facts contained in the working memory. If such a match is found, it adds the conclusion of the matching rule to the working memory. This process continues until the inference mechanism is unable to match any rules with the facts in the working memory. Finally, the user interface allows the user to interact with the inference engine to diagnose diabetes. The structure of our expert system is shown in Fig. 1.

Fig. 1: Architecture of the proposed system (user interface, working memory, patient database, inference engine, knowledge base).

IV. DIAGNOSTIC APPROACH AND EXPERIMENTAL RESULTS
Diagnostic approaches can be grouped into four classes: model-based, rule-based, neural-network-based and case-based. The knowledge representation schemes of model-based and rule-based diagnosis can be treated as symbol-oriented, while neural-network and case-based diagnosis are instance-oriented [4]. In our system we use the rule-based approach. A rule-based expert system is an expert system based on a set of rules that are used to make decisions [6]. To design an expert system, a knowledge engineering process is needed in which the rules used by human experts are accumulated and translated into a form appropriate for computer processing. In our expert system we divide the treatment of diabetes into three categories:
1. Type-I diabetes— only insulin is used.
2. Gestational diabetes— only insulin is used, and
3. Type-II, where an oral hypoglycemic agent, diet, and sometimes insulin are used. In this category, there are special treatments:
   - different drugs for obese and for lean patients;
   - drugs that should not be used for patients who have renal disease, lactic acidosis, liver disease, etc.
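The matching loop described in Section III (rules fire against a working memory of facts until no premise matches) can be sketched in a few lines of Java. This is an illustrative sketch, not the authors' 26-rule Prolog system: the three rules shown are simplified from the diagnostic threshold in Section II.B and the treatment categories above, and the fact strings are hypothetical encodings.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Minimal forward-chaining sketch of the rule-based approach: facts live in
// a working memory; a rule adds its conclusion when its premise matches;
// inference stops when no rule can fire any more.
public class DiabetesRules {
    record Rule(Predicate<Set<String>> premise, String conclusion) {}

    static final List<Rule> RULES = List.of(
        // Simplified from Section II.B: fasting plasma glucose >= 7.0 mmol/l
        new Rule(wm -> wm.contains("fpg>=7.0"), "diabetes"),
        // Simplified from Section IV: gestational diabetes -> insulin only
        new Rule(wm -> wm.contains("diabetes") && wm.contains("pregnant"),
                 "treatment:insulin"),
        new Rule(wm -> wm.contains("diabetes") && wm.contains("type-2"),
                 "treatment:diet+oral-agent"));

    static Set<String> infer(Set<String> facts) {
        Set<String> wm = new HashSet<>(facts);
        boolean fired = true;
        while (fired) {                 // repeat until no rule matches
            fired = false;
            for (Rule r : RULES)
                if (r.premise().test(wm) && wm.add(r.conclusion()))
                    fired = true;
        }
        return wm;
    }

    public static void main(String[] args) {
        // Two chained firings: threshold rule asserts "diabetes",
        // which then enables the gestational treatment rule.
        Set<String> out = infer(Set.of("fpg>=7.0", "pregnant"));
        System.out.println(out.contains("treatment:insulin")); // true
    }
}
```

The real system would of course acquire its facts from the patient database and carry many more rules; the loop above is only the inference-engine skeleton.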
Fig. 2: Sample test result (case 1).

Fig. 3: Sample test result (case 2).

Fig. 4: Sample test result (case 3).

Fig. 5: Sample test result (case 4).

At present the system uses 26 rules and is implemented in the Prolog programming language. We performed experiments using data from 100 patients. Some self-explanatory sample test results are shown in Figs. 2-5.

V. CONCLUSIONS

A rule-based expert system for the diagnosis of diabetes has been introduced in this paper. The aim of our expert system is to help diabetes patients with low-cost treatment and management. Real-life experiments were performed, which confirmed the effectiveness of the proposed system. The system can be used in both home and hospital environments. There are some limitations in the developed knowledge-based system. For example, the number of rules is not sufficient for a generally robust expert system. Moreover, wide real-life experimentation has not yet been performed. There is also room for improving the expert system on the basis of patients' feedback. Our future goal is to overcome the above limitations and make the system robust.

REFERENCES
1. S. Wild, G. Roglic, A. Green et al., "Global prevalence of diabetes: estimates for the year 2000 and projections for 2030", Diabetes Care, vol. 27, pp. 1047-1053, 2004.
2. Diabetic Association of Bangladesh, "Diabetes Mellitus", 2005 (ISBN: 984-32-2552-X).
3. Hajera Mahtab, Zafar A. Latif, Md. Faruque Pathan, "Diabetes Mellitus - A Handbook for Professionals", BIRDEM, 3rd Ed., 2004 (ISBN: 984-31-0100-6).
4. Jie Yang, Chenzhou Ye, Xiaoli Zhang, "An expert system for fault diagnosis", Robotics, vol. 19, pp. 669-674, 2001.
5. Karthik Balakrishnan and Vasant Honavar, "Intelligent diagnosis systems", available online at http://www.cs.iastate.edu/~honavar/aigroup.html (accessed December 17, 2007).
6. Costas Papaloukas, Dimitrios I. Fotiadis, Aristidis Likas, Christos S. Stroumbis, Lampros K. Michalis, "Use of a novel rule-based expert system in the detection of changes in the ST segment and the T wave in long-duration ECGs", J. Electrocardiology, vol. 35, no. 1, 2002.
7. V. Ambrosiadou and A. Boulton, "A knowledge-based system for education on management of diabetes", Proc. IMACS World Congress on Scientific Computation, 1988, pp. 186-189.
8. Rudi Rudi and Branko G. Celler, "Design and implementation of an expert-telemedicine system for diabetes management at home", Intl. Conf. on Biomedical and Pharmaceutical Engineering (ICBPE 2006), pp. 595-599.
9. Diabetes be aware, available online at http://www.hpb.gov.sg/diabetes/ (accessed July 11, 2008).
10. Kyung-Soon Park, Nam-Jin Kim, Ju-Hyun Hong, Mi-Sook Park, Eun-Jong Cha, Tae-soo Lee, "PDA-based point-of-care personal diabetes management system", Proc. IEEE 27th Annual Conference on Engineering in Medicine and Biology (Shanghai, China, September 2005), pp. 3749-3752.
11. V. Ambrosiadou, "DIABETES: an expert system for education in diabetes management", in Expert Systems Applications, Sigma Press, UK, pp. 227-238, 1989.
12. Davidson's Principles and Practice of Medicine, Churchill Livingstone, 19th Ed., p. 644, 2002.
13. G. Gogou, N. Maglaveras, V. Ambrosiadou, D. Goulis, C. Pappas, "A neural network approach in diabetes management by insulin administration", J. Med. Syst., vol. 25, no. 2, pp. 119-131, 2001.
14. Bert Kappen, Wim Wiegerinck, Ender Akay, "Promedas: a clinical diagnostic decision support system", available online at http://download.intel.com/research/share/UAI03_workshop/kappen/kappen_uai2003.pdf (accessed June 8, 2007).
Corresponding author: Mohammad Shorif Uddin
Bioinformatics Institute
30, Biopolis Street, #07-01 Matrix
Singapore 138671
E-mail: [email protected]
Modeling and Mechanical Design of a MRI-Guided Robot for Neurosurgery

Z.D. Hong1, C. Yun1 and L. Zhao2,3
1 Robotics Institute, Beihang University, Beijing, China
2 Harvard Medical School and Brigham and Women's Hospital, Boston, USA
3 XinAoMDT Technology Co., Ltd., Hebei, China
Abstract — Magnetic resonance imaging (MRI) has established itself as a standard tool in clinical diagnostics and advanced brain research, and is one of the real-time modalities currently used for intra-operative imaging in neurosurgery. Robots are accurate manipulation devices that can share the digital workspace of MR imagers. It is therefore conceivable that an MRI-guided robot could improve the accuracy of neurosurgery and provide great assistance to surgeons in executing the planned neurosurgical maneuver in a precise manner. This paper deals with a particular hybrid robot developed for keyhole neurosurgery. The kinematics model, dynamic simulation, control system, and functional design of the robot are treated, with a focus on the necessary actuator features characterized on the basis of the dynamic simulations. Keywords — MRI, robot, neurosurgery, kinematics and dynamics
I. INTRODUCTION

Magnetic resonance imaging (MRI) is superior to X-ray CT in terms of good soft-tissue contrast, lack of ionizing radiation, and functional imaging. MRI scanners with a relatively wide opening, called open MRI, have been developed and have spread recently. But the opening is still too narrow for a sufficient number of surgeons and assistants to stand by the patient. In addition, manual positioning of tools is not as precise as the coordinate information provided by tomography. MRI-compatible robots assist the surgeon in several attractive ways. The robots eliminate hand tremor and enable accurate and dexterous operation. In robotic surgery with MRI guidance, the recurrence of tumors can be reduced because surgeons can confirm the removal of tumor cells and improve their reduction rate, since MRI can distinguish tumors or other lesions from normal tissue. Furthermore, MRI eliminates the radiation exposure of computed tomography (CT) or positron emission tomography (PET). A variety of MRI-compatible robots have been developed and reported recently. Chinzei et al. developed a Cartesian-type surgical robot using an X-Y-Z table for use in an intra-operative MRI as a biopsy-needle holder or pointing device according to pre-operative planning [1]. T. Mashimo et al. developed a manipulator using a spherical ultrasonic motor, which is constructed of three ring-shaped stators and a spherical rotor and has two or three degrees of freedom [2]. S.P. DiMaio et al. designed a comprehensive robotic assistant system with steady needle placement and high-fidelity MRI imaging [3]. Krieger et al. presented a 2-DOF passive, unencoded and manually manipulated mechanical linkage to aim a needle guide for transrectal prostate biopsy with MRI guidance [4]. The goal of our project is to design, fabricate, and test an ultrasonic-motor-actuated MR-compatible robotic system for automatically and interactively aligning and orienting the surgical needle throughout intra-operative MRI-guided neurosurgical procedures, such as biopsy and brachytherapy. This paper describes a particular hybrid robot developed for keyhole neurosurgery. The robot consists of a DELTA parallel mechanism with 3 DOFs and a serial manipulator with 2 DOFs, which provide the alignment and the orientation of the needle, respectively. The kinematics model, dynamic simulation, singularity analysis and control system design of the robot are presented. The necessary actuator features were characterized on the basis of the multi-body package MSC.ADAMS.

II. MATERIAL AND METHOD

A. System infrastructure

An MRI-guided robotic system requires surgical planning, MR-image acquisition, a human-machine interface, navigation, and sensing. To address these components required for MRI-guided intervention, a schematic diagram of the proposed infrastructure is illustrated in Fig. 1. The entire system consists of three main subsystems: the MRI scanner and its image-processing console, the navigation computer and monitor, and the robot. MR images are transferred across a local area network (LAN) in DICOM format from the MRI console (in the MRI console room) to the navigation computer (inside the operation room).
The navigation computer provides pre-operative surgical planning, intra-operative image processing, MRI scanning control, motion planning, remote actuation, and control of the robotic components. The surgeon is able to control and command the entire system through an interactive interface on the navigation console.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1004–1008, 2009 www.springerlink.com
Fig. 1: Entire robot system.

B. Mechanical modeling of the robot for neurosurgery

A parametric assembly of the robot, shown in Fig. 2, has been developed with the solid modeler NX5.0. The robot comprises a DELTA mechanism, developed by L.W. Tsai [5-7], and a serial manipulator, which provide the alignment and the orientation of the needle, respectively. The DELTA mechanism is characterized by three identical kinematic chains, symmetrically placed at 120° from each other, which drive a moving platform with respect to a fixed base. The topological structure of each of the three closed kinematic chains consists of a motor, an intermediate four-bar mechanism whose bars are parallel two by two, and finally a passive revolute joint connected to the moving platform.

C. Kinematics analysis
The DELTA mechanism only functions as an X-Y-Z Cartesian-type mechanism [5-7], without any rotation of the moving platform. The equivalent degrees of freedom of the complete robot, labeled 1-6, are described in Fig. 3, where $z_i$ $(i = 1, 2, 3)$ are the motion axes of freedoms 4, 5 and 6.

Fig. 3: Equivalent kinematics diagram of the robot (axes z1, z2, z3; joint angles θ1, θ2; link length l).

Let OXYZ (Fig. 4) be a fixed Cartesian frame with its origin at the center point of the fixed platform, and let $\phi_0 = 0$. The point $p$ is set at the center of the moving platform, $l_1$ is the length of the link connecting to the moving platform, and $d$ is the offset between the serial manipulator and the needle.
Fig. 2: 3D robot model (labels: serial manipulator, DELTA).
Fig. 4: a) Schematic of the 3-DOF Delta parallel mechanism and b) description of the joint angles and link lengths for leg i.

The direct and inverse kinematic equations of the DELTA were analyzed in [6-7]. The analytical solutions proposed by [7] for the direct and inverse kinematic analyses are given in Eq. 1 and Eq. 2, respectively:

$${}^{O}T_{p} = \begin{bmatrix} 1 & 0 & 0 & x_p \\ 0 & 1 & 0 & y_p \\ 0 & 0 & 1 & z_p \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (1)$$

$$\begin{cases} \theta_{i3} = \pm \arccos\dfrac{p_y \cos\phi_i - p_x \sin\phi_i}{b} \\ \theta_{i1} = 2\arctan t_i, \quad i = 1, 2, 3 \\ \theta_{i2} = \arcsin\dfrac{p_z - a\sin\theta_{i1}}{b\sin\theta_{i3}} \end{cases} \qquad (2)$$

where

$$t_i = \frac{-l_{1i} \pm \sqrt{l_{1i}^2 - 4\,l_{0i}\,l_{2i}}}{2\,l_{0i}}$$

and $l_{0i}$, $l_{1i}$, $l_{2i}$ are functions of $\theta_{i3}$, $a$, $b$, $p_x$, $p_y$, $p_z$. More details can be found in [7].

When $k_0\cos\theta_{13} + k_1\sin\theta_{13} + k_2\cos\theta_{23} + k_1\sin\theta_{23} + k_3 = 0$, singularities of the Delta mechanism appear, where $k_i$ $(i = 0, 1, 2, 3)$, $\phi_2$ and $\theta_{i3}$ $(i = 1, 2)$ are functions of the geometric parameters and position of the DELTA mechanism. The direct singularities are solved analytically, but not all singularities are determined easily. Some typical singularities are $\theta_{i3} \in \{0, \pi\}$, $\theta_{i2} \in \{0, \pi\}$ $(i = 1, 2, 3)$, or $\theta_{i2} - \theta_{i1} \in \{0, \pi\}$.

The direct kinematics of the 2-DOF serial manipulator (Fig. 3) is given in Eq. 3, based on the D-H parameters shown in Table 1, where $c_1$, $c_2$, $s_1$ and $s_2$ represent $\cos\theta_1$, $\cos\theta_2$, $\sin\theta_1$ and $\sin\theta_2$, respectively:

$${}^{p}T_{E} = \begin{bmatrix} c_1 c_2 & -s_1 & c_1 s_2 & l\,c_1 c_2 \\ s_1 c_2 & c_1 & s_1 s_2 & l\,s_1 c_2 + d \\ -s_2 & 0 & c_2 & l\,s_2 + l_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3)$$

Table 1: Denavit-Hartenberg parameters of the serial manipulator

Link   alpha_i   a_i   d_i   theta_i
1      0         0     0     θ1
2      -90°      0     0     θ2
3      0         0     l     0

The inverse kinematic model of the 2-DOF serial manipulator is given in Eq. 4:

$$\begin{cases} \theta_1 = \pm\arctan\dfrac{y_E - d}{x_E} \\ \theta_2 = \arctan\left(\dfrac{z_E - l_1}{y_E - d}\,\sin\theta_1\right) \\ l = \dfrac{z_E - l_1}{\sin\theta_2} \end{cases} \qquad (4)$$

So the overall direct kinematics ${}^{B}T_{N} = {}^{B}T_{P} \cdot {}^{P}T_{N}$ is then determined. The overall inverse kinematics is obtained from Eq. 2 and Eq. 4.

D. Dynamic simulation
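As a numerical sanity check of the serial-manipulator equations reconstructed above, the position column of Eq. 3 should be inverted exactly by Eq. 4 on the principal branch (θ1, θ2 in (0, π/2)). The following is an illustrative sketch based on those reconstructed forms, not the authors' code:

```java
// Round-trip check of the reconstructed serial-manipulator kinematics:
// forward position from Eq. 3, inverted by Eq. 4 (principal branch only).
public class SerialKinematics {
    // End-effector position for joint angles t1, t2, needle length l,
    // offset d and link length l1 (position column of Eq. 3).
    static double[] forward(double t1, double t2, double l, double d, double l1) {
        return new double[] {
            l * Math.cos(t1) * Math.cos(t2),
            l * Math.sin(t1) * Math.cos(t2) + d,
            l * Math.sin(t2) + l1 };
    }

    // Inverse kinematics of Eq. 4: recover t1, t2 and l from position p.
    static double[] inverse(double[] p, double d, double l1) {
        double t1 = Math.atan((p[1] - d) / p[0]);
        double t2 = Math.atan((p[2] - l1) / (p[1] - d) * Math.sin(t1));
        double l  = (p[2] - l1) / Math.sin(t2);
        return new double[] { t1, t2, l };
    }

    public static void main(String[] args) {
        // Forward map, then inverse: the joint values should be recovered.
        double[] p = forward(0.3, 0.5, 0.10, 0.02, 0.05);
        double[] q = inverse(p, 0.02, 0.05);
        System.out.printf("%.6f %.6f %.6f%n", q[0], q[1], q[2]); // 0.300000 0.500000 0.100000
    }
}
```

The exact round trip confirms that the $(z_E - l_1)$ terms of Eq. 4 are consistent with the third position component $l\,s_2 + l_1$ of Eq. 3.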
The model was exported to the multi-body package MSC.ADAMS in order to obtain both kinematic and dynamic simulations of the neurosurgery procedures. Moreover, using the FEM analyzer MSC.MARC, the elements' stresses under dynamic conditions were assessed to optimize the detailed design of each component. Finally, the necessary actuator features were characterized on the basis of the dynamic simulations. Table 2 lists the calculated maximal torques of all actuators. The largest torque is 3.5 N·m, and the ultrasonic motor meets the demand (the ultrasonic motor's rated torque is 0.6 N·m; the reducer ratio is 20:1).
Table 2: Simulation results

DOF                              θ11     θ21     θ31     θ1      θ2
Maximum actuator torque (N·m)    3.56    1.22    1.22    0.06    0.001

In Fig. 5 the torque exerted by the actuators at θ11 and θ1 is presented for the maximal-torque case analysis. The maximal absolute value (dashed line) is about 3.56 N·m.

Fig. 5: Actuators' torque of the first and fourth motors.

If a serial mechanical architecture, for example a Cartesian one, were used to move the same masses along the same trajectories, the required power would be much greater due to the robot's own moving masses, which are dominant compared to the equipment mass.

E. Control system

For the control of the robot the commercial motion controller PMAC was chosen (8 axes, DSP56303, 40 MHz, endowed with a teach pendant). It offers the user the possibility to implement her/his own model-based control laws and the coordinate transformation between the world Cartesian coordinate system and the robot's internal coordinate system, minimizing the effort of writing interpolation algorithms. The programming environment runs on Windows NT based personal computers and, besides allowing the application software to be written, compiled and debugged, it permits evaluating the behavior of the controlled machine and then choosing the best solution. The control diagram is shown in Fig. 6: the PMAC communicates with the PC over an RS232 interface; surgical planning and joint interpolation are conducted on the PC and the PMAC, respectively.

Fig. 6: Control system (PC running the navigation model and PComm32 PRO/PtalkDT; PMAC running a special program; RS232 link between PC and PMAC).

III. CONCLUSIONS
We are designing an MR-compatible robotic system that can be used for automatic or interactive orientation and alignment of a needle in neurosurgical procedures, such as biopsy and brachytherapy, with intra-operative MRI guidance. The robot has been designed to perform the desired tasks inside an open MRI scanner. The direct and inverse kinematic equations that describe the robot's behavior can be implemented on a commercial motion controller for robotic applications, such as the PMAC, and singularities were considered during the design of the control system. Ultrasonic motors were selected based on the dynamic simulation. Current and future work includes the integration of the robot subsystems and the performance of a series of experimental tests inside the MR scanner using the first physical prototype.
ACKNOWLEDGMENT

The tests were performed by XinAoMDT Technology Co., Ltd. (Langfang, Hebei, China). The authors would like to thank XinAoMDT Technology Co., Ltd. for providing help and equipment.
REFERENCES
1. K. Chinzei, R. Kikinis, F. Jolesz (1999) MR compatibility of mechatronic devices: design criteria. MICCAI Proc., vol. 1679, Lecture Notes in Computer Science, Cambridge, UK, 1999, pp 1020-1031
2. T. Mashimo, S. Toyama (2007) MRI-compatibility of a manipulator using a spherical ultrasonic motor. IFToMM Proc. 12th World Congress, Besancon, France, 2007, pp 1-6
3. S.P. DiMaio, G.S. Fischer, S.J. Haker et al. (2006) A system for MRI-guided prostate interventions. BioRob Proc. 1st International Conference on Biomedical Robotics and Biomechatronics, Pisa, Italy, 2006, pp 68-73
4. A. Krieger, R.C. Susil, C. Menard et al. (2005) Design of a novel MRI compatible manipulator for image guided prostate interventions. IEEE Trans. on Biomed. Eng. 52: 306-313
5. Tsai L.W. Multi-degrees-of-freedom mechanism for machine tools and the like. US Patent Pending, 1995, No. 08/415
6. Tsai L.W., Walsh G.C., Stamper R.E. (1996) Kinematics of a novel three-DOF translational platform. ICRA Proc. 1996 IEEE Int. Conf. on Robotics and Automation, Minnesota, 1996, pp 3446-3451
7. Bi Shu-sheng, Zong Guang-hua (2003) Kinematics of Delta parallel mechanism with offsets. ACTA Aeronautica et Astronautica Sinica 24(1): 84-89
Author: Z.D. Hong
Institute: Robotics Institute of Beihang University
Street: Xueyuan Road No. 37, Haidian District
City: Beijing
Country: China
Email: [email protected]
The Study for Multiple Security Mechanism in Healthcare Information System for Elders

C.Y. Huang1 and J.L. Su1
1 Department of Biomedical Engineering, Chung Yuan Christian University, 200, Chung Pei Rd., Chung Li, Taiwan 32023, R.O.C.
Abstract — Many information systems have been applied to healthcare in recent decades. They consist of data, databases, management tools, etc., and the most important task is to maintain the security and integrity of the system and data. In this study we used DICOM 3.0 as the data structure for vital signs and images, in order to exchange data more easily among heterogeneous systems, and a Web service as the port for database connectivity. We used LDAP via JNDI on a homepage with username and password to authenticate whether a user has access authority. Key generation in JCE and X.509 in JCA for digital signatures achieve encryption and decryption of the personal profile and vital signs/images inside the DICOM file, with the private key at the client site and the public key at the server site, to ensure the privacy of personal data and the integrity of vital signs/images. JNDI, JCE, and JCA are all security libraries provided by Java JDK 1.6. We tested system security with authorized and unauthorized users, and with different keys, to verify the correctness of encryption/decryption for the personal profile and vital signs/images. The preliminary results show that the healthcare information system for elders we developed can check who has the authority to access the data stored in the database; the personal profile inside the DICOM files remains private except to persons who hold the correct key to decrypt it; and the integrity of the displayed vital signs and images is ensured. Finally, using the Java JDK made it easier to achieve the goals of maintaining authority, privacy, and integrity for the data and system. Keywords — Digital Imaging and Communications in Medicine (DICOM), Java Development Kit (JDK), security, Healthcare Information System (HIS).
I. INTRODUCTION

Many medical information systems have been developed in recent decades for specific clinical usage, such as the CoMed system, a real-time collaborative medical system handling teleconference images and audio and related information on laryngeal diseases using a multimedia medical database [1]. It achieved better security by distinguishing login levels, but this measure alone is not enough to ensure the integrity and privacy of data if someone obtains the data in an abnormal manner. Besides, NEMA of the USA has announced DICOM 3.0 PS 15 as the standard for the security of medical data and medical information systems [2], and the law on medical information security and privacy protection in Taiwan follows the Health Insurance Portability and Accountability Act (HIPAA) [3]. Therefore, data encryption before delivery or communication is necessary to improve data security and privacy. For this reason, many studies have focused on privacy or security, e.g. data security for remote vital signs [4], security agents to implement the interconnection of systems [5], and a safety-oriented DICOM network-attached server to improve the safety of the original server in a PACS [6]. A previous study implemented the healthcare information system for elders with a focus on a DICOM file generation middleware for vital signs, storage in an SQL database, and ActiveX-based display on a homepage [7]; it did not yet address the security and integrity of data and personal privacy. As mentioned above, the personal profile and facial information encapsulated in a DICOM file, from which a personal face can be reconstructed, must be encrypted for security and privacy. For this purpose, in this study we used the security components supplied by Java JDK version 1.6 [8] to implement the essential authentication control and to secure the contents of the files, for the purposes of system safety and data privacy. The following sections describe the infrastructure of the system, the materials and methods used in this study, and the results, and finally give some discussion and conclusions.

II. SYSTEM DESCRIPTION
The concept architecture of the healthcare information system for elders was implemented as figure 1. At client site, there are two capture devices including vital signs of electrocardiogram and image for visual light real-time image. These data accumulated by a data accumulation PC located beside elders and operated by users who have essential computer knowledge. Before the work of upload and download files, the data should be encoded and encrypted to transfer into an encrypted DICOM file. Then users need to login the website by username and password that are stored in lightweight directory access protocol (LDAP) server to get authentication. In the end, the files were sent through web server would be decrypted and checked integrity and
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1009–1013, 2009 www.springerlink.com
1010
C.Y. Huang and J.L. Su
readout the value of data element correctly. Thus, encryption these object groups were concerned about data security. In this study, we focused on patient profile for privacy and vital/image which were part of analysis data/facial information that were with regard to personal security or integrity. B. Java server pages (JSP) with Java applications as user interface For JSP is a server- and platform-independent technology to create dynamic web content. It can change the overall page layout without altering the underlying dynamic content. For the contents of DICOM files are dynamic, JSP would be the adaptive method for implement the Web-based application that is announced by DICOM. Java applications used Java JDK can perform decode/encode, encryption and decryption DICOM files and it will achieve easy implement. Also, the applications can be called and executed by JSP.
Fig. 1 System architecture then stored in database/file server. For those who have authorization would review the data about personal profile and other secured profiles by browsing PC. In this study, we have used the non-encrypted DICOM files included vital signs transferred from MIT-BIH arrhythmia database [9] and visual-light images captured from environment by a real-time USB webcam. The equipments of system were consisted of a Intel® Pentium 1.7 GHz Notebook installed Windows® XP service pack 2 acted as a data accumulation PC and a Intel® Core™ 2 2.4 GHz desktop PC installed Windows® server 2003 acted as web server, database server, and file server. The JSP homepages and Java application were coded both by NetBeans IDE 6.1.
C. Security of system and data x
LDAP server as multi-system entrance has reduced complication for user management by single account/password. Beside, it can distinguish user level and encode data to protect user privacy. Therefore, we used Apache Directory Studio as the role of LDAP server [10]. It provided with LDAP protocol and could run on multi-platform, e.g. Linux and Windows. Using Java naming and directory interface (JNDI) in JDK, JSP and Java application could connect to LDAP server, manage account, and achieve the authentication and authorization service for security of system. x
III. METHODS

Several methods were used in this study: JSP, an LDAP server, key-pair generation, and DICOM encryption/decryption using the generated keys.

A. DICOM as file format

A DICOM file consists of a file preamble, the four characters 'DICM', and many object groups, e.g. patient, study, series, equipment, image/waveform, etc.; a set of data elements forms an object. Each data element consists of a tag, an optional value representation, a value length, and a value field. The group and the element are each assigned a 2-byte number, which together form the specific tag that identifies a data element. The value length and value field then allow the element's value to be located and decoded.
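As an illustration of the element layout just described, the following is a hypothetical Python sketch (the paper's implementation is in Java): tag (2-byte group, 2-byte element), explicit VR, short-form value length, then the value field. The sample tag, VR, and value bytes are assumptions; this is not a full DICOM parser (no preamble, implicit-VR, or long-VR handling).

```python
import struct

# Sketch of one explicit-VR, short-form DICOM data element:
# group(2) + element(2) + VR(2) + length(2) + value(length), little-endian.

def parse_element(buf: bytes, offset: int = 0):
    group, elem = struct.unpack_from("<HH", buf, offset)     # 2-byte pair -> tag
    vr = buf[offset + 4:offset + 6].decode("ascii")          # e.g. "PN", "LO"
    (length,) = struct.unpack_from("<H", buf, offset + 6)    # short-form length
    value = buf[offset + 8:offset + 8 + length]              # value field
    return (group, elem), vr, value, offset + 8 + length

# Illustrative element: Patient's Name (0010,0010), VR "PN", value "DOE^JOHN"
raw = struct.pack("<HH", 0x0010, 0x0010) + b"PN" + struct.pack("<H", 8) + b"DOE^JOHN"
tag, vr, value, _ = parse_element(raw)
```

The returned offset lets a caller walk a sequence of elements, which is essentially what the decode step of the paper's Java application must do before deciding which objects to encrypt.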
B. Apache Directory Studio as LDAP server
C. Key pair generation and signature certificate
We used the Java Cryptographic Extension (JCE) and the Java Cryptography Architecture (JCA); JCE has been part of the JCA since JDK 1.4 [11]. The server uses KeyPairGenerator for key-pair generation: it first sets the key length, generates the pair with generateKeyPair(), and obtains the keys with getPrivate()/getPublic(). Before transmission, the private key is encoded according to the X.509 standard using the X509EncodedKeySpec class in JCA, whose getEncoded() method returns the key bytes. The private key is then sent to the client and stored there for DICOM file encryption, while the public key is kept at the server site for decryption. On the client, a digital signature (DS) for each file is generated with an instance of the Signature class; the file and its DS are both sent to the server site to certify the integrity of the data during the transmission period.
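The generate/sign/verify sequence described above can be illustrated with a toy sketch. This is not the JCA/X.509 machinery the paper uses: it is textbook RSA in Python with tiny hard-coded primes, purely to show the workflow; real systems use large random primes and proper padding.

```python
import hashlib

# Toy illustration of the key-pair / sign / verify sequence implemented in
# the paper with Java's KeyPairGenerator and Signature classes.
# Primes are tiny and hard-coded (an assumption for clarity).

def make_keypair():
    p, q = 61, 53                  # toy primes
    n = p * q                      # modulus n = 3233
    phi = (p - 1) * (q - 1)        # Euler's totient
    e = 17                         # public exponent
    d = pow(e, -1, phi)            # private exponent (Python 3.8+)
    return (e, n), (d, n)          # (public key, private key)

def sign(message: bytes, priv) -> int:
    d, n = priv
    digest = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(digest, d, n)       # "encrypt" the digest with the private key

def verify(message: bytes, signature: int, pub) -> bool:
    e, n = pub
    digest = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(signature, e, n) == digest

public, private = make_keypair()
ds = sign(b"DICOM file bytes", private)       # client site
ok = verify(b"DICOM file bytes", ds, public)  # server site
```

Verification with a non-matching public key or over altered bytes would fail, which is how the server-side integrity check of the paper works.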
IFMBE Proceedings Vol. 23
The Study for Multiple Security Mechanism in Healthcare Information System for Elders
D. Encryption and decryption
DICOM PS3.15 suggests an RSA key pair as the public/private key pair, to be transmitted in an X.509 signature certificate. When the private RSA key encrypts the digital signature, four tag values shall be embedded into the DICOM file: the MAC Algorithm (0400,0015) value can be one of "RIPEMD160", "MD5", or "SHA1"; the Certificate Type (0400,0110) value is "X509_1993SIG"; the Certified Timestamp Type (0400,0305) value is "CMS_TSP"; and the Certified Timestamp (0400,0310) follows "Internet X.509 Public Key Infrastructure; Time Stamp Protocols; March 2000".

Fig. 3 Apache DS setting for levels of authorization

Once the accumulation PC obtains the original DICOM files, encryption and transmission proceed. The flowchart for DICOM file encryption and decryption is shown in Figure 2. At the client site, encryption first decodes the original DICOM file and then decides which objects need to be encrypted. The selected objects are encrypted with the private key; finally, all objects are merged with the related public-key ID and the DS to generate the encrypted DICOM file, which is transmitted to the server through the network. When the server receives an encrypted DICOM file, it uses the public-key ID to retrieve the public key stored at the server and decodes the non-encrypted and encrypted objects simultaneously. If the DS certifies correctly, the encrypted objects are decrypted and merged with the non-encrypted objects to reconstruct the decrypted DICOM file.
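The split/encrypt/merge/decrypt flow of Figure 2 can be sketched as follows. This is an assumption-laden illustration, not the paper's Java code: a keyed XOR stream stands in for RSA, a SHA-1 digest stands in for the digital signature, and the object-group names are invented.

```python
from hashlib import sha1

# Sketch of selective object-group encryption: split a decoded file into
# sensitive and non-sensitive groups, "encrypt" only the sensitive ones,
# attach a key ID and an integrity digest, then reverse the steps.

SENSITIVE = {"patient"}          # object groups chosen for encryption (assumed)

def xor_stream(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_file(objects: dict, key: bytes, key_id: str) -> dict:
    enc = {name: (xor_stream(v, key) if name in SENSITIVE else v)
           for name, v in objects.items()}
    ds = sha1(b"".join(objects[n] for n in sorted(objects))).hexdigest()
    return {"objects": enc, "key_id": key_id, "ds": ds}

def decrypt_file(packet: dict, keystore: dict) -> dict:
    key = keystore[packet["key_id"]]             # look up key by its ID
    dec = {name: (xor_stream(v, key) if name in SENSITIVE else v)
           for name, v in packet["objects"].items()}
    ds = sha1(b"".join(dec[n] for n in sorted(dec))).hexdigest()
    assert ds == packet["ds"], "signature mismatch: data altered in transit"
    return dec

objects = {"patient": b"DOE^JOHN", "image": b"\x00\x01\x02"}
packet = encrypt_file(objects, key=b"secret", key_id="k1")
restored = decrypt_file(packet, {"k1": b"secret"})
```

In the paper's design the client encrypts with the private key and the server decrypts with the public key retrieved by key ID; the keystore lookup above mirrors that server-side step.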
IV. RESULTS

A. Authentication and Authorization
Fig. 2 Flowchart of encryption and decryption of DICOM files
In this study, we defined three user-role levels by configuring the ApacheDS LDAP server as in Figure 3, to test user authentication and level-based authorization of data processing. Figure 4 shows the implemented result: the username and password entered on the main homepage are checked by the web server against the entries configured in the LDAP server, and the user is then redirected to the level-specific homepage that integrates the functions for Care-taker, Care-giver, or Management status. A properly configured LDAP server thus supplies a means to authenticate users and to grant them the authorization needed to carry out the correct procedures.
C.Y. Huang and J.L. Su
The decrypted results were compared with the original data: there was no difference when the corresponding key of the pair was used, but the data did not match when incongruous keys were used.
V. DISCUSSION
Fig. 4 Homepage for user login, with different levels for authorization

B. Encryption and Decryption

Three kinds of data extracted from DICOM files were tested for encryption and decryption: text/numeric data from the personal profile, one-dimensional vital signs, and two-dimensional images. We used a key length of 512 bits, which was appropriate for the CPU time consumed. Figure 5 shows the results obtained with the encryption/decryption Java application: the left side of the figure is the original data, and the right side is the result decrypted with the corresponding public key, for text data from the personal profile, graphic vital signs, and the environment image.
According to the results, the Java language and JDK 1.6 supplied by Sun Microsystems®, combined with an LDAP server, achieved the purposes of authentication and authorization: user levels could be set and distinguished efficiently. The same LDAP information can also be used to enter other information systems simply by connecting to the LDAP server. Moreover, the key pairs generated with the JDK security classes maintained the privacy and integrity of the data; encryption and decryption worked well when a proper key length was used. DICOM files with embedded encrypted data can thus spread through heterogeneous healthcare information systems without falsification, remaining readable only to users with the correct key.
VI. CONCLUSIONS

In this paper, we used JSP and the JDK to achieve authentication by connecting to an LDAP server, and key-pair generation for encryption and decryption to reach the goals of data privacy and integrity. In further development of the HIS for elders, we will next focus on software agents for automatic event detection and encrypted event upload. This will reduce the manpower that caregivers must spend on the routine work of reviewing vital signs and images and uploading recorded files.
ACKNOWLEDGMENTS

This work was supported in part by the National Science Council, Taiwan, R.O.C., under Grants NSC 95-2221-E-033-033-MY3 and NSC 95-2221-E-033-071-MY3.
Fig. 5 Test results of encryption and decryption by Java application

REFERENCES
1. Sung MY, Kim MS, Kim EJ et al. (2000) CoMed: a real-time collaborative medicine system. Int J Med Inform 57:117-126
2. Digital Imaging and Communications in Medicine (DICOM) at http://medical.nema.org/
3. Health Insurance Portability and Accountability Act (HIPAA) at http://www.hhs.gov/ocr/hipaa/
4. Gritzalis D, Lambrinoudakis C (2000) A data protection scheme for a remote vital signs monitoring healthcare service. Med Inform 25:207-224
5. Gritzalis D, Lambrinoudakis C (2004) A security architecture for interconnecting health information systems. Int J Med Inform 73:305-309
6. Tachibana H, Omatsu M, Higuchi K, Umeda T (2006) Design and development of a secure DICOM-Network Attached Server. Comput Meth Prog Bio 81:197-202
7. Huang CY, Su JL (2007) A middleware of DICOM and Web service for home-based elder healthcare information system. Proc. of the IEEE/EMBS Region 8 Int. Conf. on Inform. Tech. App. Biomed. (ITAB), Tokyo, Japan, 2007, pp 182-185
8. JDK at http://java.sun.com/
9. PhysioNet at http://www.physionet.org/
10. Apache Directory Studio at http://directory.apache.org/studio/
11. Java SE security at http://java.sun.com/javase/technologies/security/

Author: Chang-Yi Huang
Institute: Chung Yuan Christian University, Department of Biomedical Engineering
Street: 200, Chung Pei Rd.
City: Chung Li
Country: Taiwan, R.O.C.
Email:
[email protected]
Coauthor: Jenn-Lung Su
Email:
[email protected]
Individual Movement Trajectories in Smart Homes

M. Chan1,2, S. Bonhomme1,2, D. Estève1,2, E. Campo1,3
1 LAAS-CNRS; Université de Toulouse; 7, avenue du Colonel Roche, F-31077 Toulouse, France
2 Université de Toulouse; UPS
3 Université de Toulouse; UTM
Abstract — A project in innovative technology for the advancement of smart homes to help the elderly is being conducted. The aim is to anticipate dangerous situations that may happen at home (fall, restlessness, fainting, running away, etc.) through individual data collection and analysis of movement trajectories. This paper describes a conceptual space/time model of movement trajectories (beginning, end, stop, and move). These models can be used for danger prevention in smart homes. Results are presented for apartment scenarios and individual movement trajectories.

Keywords — individual movement trajectory, elderly, multisensor home system.
I. INTRODUCTION

The problem of caring for the elderly and for people with physical disabilities will become more serious as a significant part of the growing global population enters the 65-or-over age group, and the existing welfare model cannot meet the increased needs. According to Eurostat, 12.1 % of Europeans will be aged over 80 and 30 % will be over 60 in 2060 [1]. The wish to improve the quality of life of the disabled and the rapid growth in the number of elderly people have prompted an uncommon effort from both industry and academia to develop smart home technology, and a significant number of dedicated smart home projects are currently being carried out throughout the industrialized world [2]. It has been shown that the ability to correctly identify the mobility of occupants may have significant implications and applications in smart homes; for example, it may support independent living at low cost as people age [3]. Thanks to progress in electronics and information technology, one of the key components in the development of the smart home is the detection and recognition of activities or mobility of daily life. In an attempt to develop a smart home environment, we introduce the concept of the individual movement trajectory to monitor the mobility of the occupants. An individual movement trajectory is the path followed by an individual moving in space and time; each point on this path represents one location in space at one instant in time. Typically, trajectory data are obtained from devices that capture the location of an individual at specific time intervals. The background geographical information over which the individual is moving, indoors or outdoors, is of fundamental importance for the analysis of trajectory data. In this paper, we define the concept of an individual movement trajectory as a set of moves and stops with a beginning, an end, and a geographical location [4]. Some examples of movement trajectories of a subject living in an institution, and the system used to gather the raw data, are presented. The discovered travel patterns are stored in the movement trajectory database together with the events linked to the data, in order to help the user (physician, caregiver, family members) comprehend, visualize, and query movement trajectory pattern relationships of the elderly and disabled living alone, for safety purposes.

II. MATERIAL AND METHOD

A. Case Study

Multisensor system: The system is composed of infrared sensors (S1 to S10) with binary outputs, a personal computer (PC), and a communication network connecting the sensors to the PC. Software collects the raw data and processes them to assess automatically, among several variables (getting up, getting out, going to bed, going to the washroom, and restlessness in bed or in the apartment), the travel patterns of selected residents. The multisensor system is installed in a room of a setting for the elderly; ten areas are defined according to Figure 1. The PC is in the staff room. When a participant travels through the areas of the room, the sensors are activated, and in real time the PC, through the acquisition card, acquires the
Figure 1. Multisensor system (infrared sensors S1–S10 with detection areas, acquisition interface, bed, washroom, armchair; total detection area 5.24 m × 3.82 m; direction of moving indicated)
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1014–1018, 2009 www.springerlink.com
data, which are a collection of 10 binary bits (1 or 0) corresponding to the activity of each sensor (on or off). The date of the day is stored with the data [3].

B. Trajectories: Definitions

Definition 1 (Trajectory): A trajectory is a set of ordered stops and moves in a precise geographical location, indoors or outdoors, with a beginning and an end.

Definition 2 (Stop): A stop occurs when the subject is motionless; a non-empty time interval represents a stop.

Definition 3 (Move): A move in a trajectory has a time interval; it is preceded by a stop and followed by another stop.

Definition 4 (Spatial characteristic type): A spatial characteristic type is located on the geographical earth surface, indoors or outdoors.

Definition 5 (Sub-trajectory): Each trajectory is a set of sub-trajectories.

C. Sub-trajectories: Definitions

In a movement trajectory, a participant's activity is represented as a sub-trajectory. The participant must be alone: the multisensor system cannot distinguish the movements of two participants in the same area, and the participant does not carry any electronic device.

Definition 6 (Going to bed): The participant activates a series of sensors as shown in Figure 1; from stop 1, an area adjacent to the bed (S6, S5, S4, S3, or S8), he gets to stop 2, which is the bed area.

Definition 7 (Getting up): The reverse procedure of going to bed confirms getting up.

Definition 8 (Getting in): The participant from stop 1 (area 1) gets to stop 2 (area 2).

Definition 9 (Getting out): The participant wanders in the room and then gets out; from stop 1 (area 2), he gets to stop 2 (area 1).

Definition 10 (Going to the washroom): The participant from stop 1 (area 2) gets to stop 2 (area 9).

Definition 11 (Going out of the washroom): The participant from stop 1 (area 9) gets to stop 2 (area 2).

Definition 12 (Immobility or stop): Area 0 is artificial or virtual; if the area is 0 in stop 1, all 10 sensors are in the inactive state.
In Table 1, from stop 1 (area 0) to stop 2 (area 3), the duration of immobility is 2.5 s: the multisensor system detects the participant in area 3 at 21:19:12.5, but does not know where the motionless participant is at 21:19:10.0.

Definition 13 (Restlessness): The multisensor system observes a series of sensor activations as shown in Table 2. In order to optimize data storage, raw data are stored as
shown in Table 4. In this case, several stops and moves occur in the same area, as shown in Table 3, and a threshold ts is defined: the participant in a specific area triggers the sensor at ti and at ti+1. If (ti+1 − ti) > ts, the participant is quiet; if (ti+1 − ti) < ts, the participant is restless [5].

Definition 14 (Mobility or move): The participant travels from stop 1 (area 7) to stop 2 (area 3); the time interval is 50.5 s. From stop 1 (area 3) to stop 2 (area 6), the time interval is 4 s, as shown in Tables 4 and 5.

Table 1. Immobility sub-trajectory (duration in seconds)
Move | Stop 1 | Stop 2 | Time Interval (h min s) | Duration
1 | 0 | 3 | 21:19:10.0-21:19:12.5 | 2.5

Table 2. Restlessness sub-trajectory (duration in seconds)
Move | Stop 1 | Stop 2 | Time Interval (h min s) | Duration
1 | 7 | 7 | 21:19:10.0-21:19:10.5 | 0.5
2 | 7 | 7 | 21:19:10.5-21:19:11.0 | 0.5
3 | 7 | 7 | 21:19:11.0-21:19:11.5 | 0.5
4 | 7 | 7 | 21:19:11.5-21:19:12.0 | 0.5
5 | 7 | 0 | 21:19:12.0-21:19:12.5 | 0.5

Table 3. Restlessness sub-trajectory (duration in seconds)
Move | Stop 1 | Stop 2 | Time Interval (h min s) | Duration
1 | 0 | 7 | 21:19:10.0-21:19:12.5 | 2.5

Table 4. Mobility sub-trajectory (duration in seconds)
Move | Stop 1 | Stop 2 | Time Interval (h min s) | Duration
1 | 7 | 0 | 04:19:10.0-04:19:10.5 | 0.5
2 | 0 | 3 | 04:19:10.5-04:20:00.5 | 50
3 | 3 | 0 | 04:20:00.5-04:20:02.5 | 2
4 | 0 | 6 | 04:20:02.5-04:20:04.5 | 2

Table 5. Mobility sub-trajectory (duration in seconds)
Move | Stop 1 | Stop 2 | Time Interval (h min s) | Duration
1 | 7 | 3 | 04:19:10.0-04:20:00.5 | 50.5
2 | 3 | 6 | 04:20:00.5-04:20:04.5 | 4
D. Results

An 81-year-old subject took part in the trial. The participant or his family signed an informed consent form administered by the nursing staff before the trial. During the eight
months trial, he occupied his apartment at night and was observed from 9 pm to 7 am. During the day, he was out of his apartment, taking part in group activities in the institution. The participant got up on 36 nights; Table 6 gives the results obtained for these 36 nights. The nursing staff noted their entries into and exits from the participant's room, and the participant's night activities, by direct observation. These staff ratings allowed the concordance between staff ratings and automatic data processing to be verified and validated for assessing the resident's travel behaviors.

Table 6. 36 night activities
Nights | Activities
28 | getting up / wandering in the bedroom / visiting washroom
3 | getting up / wandering in the bedroom / visiting washroom / getting out
5 | getting up / wandering in the bedroom
The movement trajectory shown in Table 7 includes several sub-trajectories:
- Motionless, or stop, in area 7 for 61 s.
- Restlessness in areas 5, 10, 9, and 8 for 50 s, 95 s, 102 s, and 16 s respectively.
- Going out into area 1 for 19 s.
- Going from one area to another, with durations ranging between 1 s and 6 s. For area 5, which is a location adjacent to the bed, the participant may have remained restless for 50 s or been in his bed; the multisensor system has difficulty distinguishing a restless period in bed from one in area 5, as shown in Figure 1. A pressure sensor placed in the bed could positively confirm that the participant is in bed.
- Areas 6, 5, 4, 3, and 8 are locations for traveling from one stop to another.
Table 8 shows the durations of moves or stops in the different areas that the participant traveled through.
Table 7. An example of movement trajectory (duration in seconds)
Move | Stop 1 | Stop 2 | Time Interval (h min s) | Duration
1 | 0 | 7 | 21:19:10-21:20:11 | 61
2 | 7 | 0 | 21:20:11-21:20:12 | 1
3 | 5 | 0 | 21:20:12-21:21:02 | 50
4 | 5 | 6 | 21:21:02-21:21:04 | 2
5 | 6 | 0 | 21:21:04-21:21:06 | 2
6 | 6 | 7 | 21:21:06-21:21:12 | 6
7 | 7 | 5 | 21:21:12-21:21:17 | 5
8 | 5 | 0 | 21:21:17-21:21:23 | 6
9 | 5 | 4 | 21:21:23-21:21:24 | 1
10 | 4 | 3 | 21:21:24-21:21:26 | 2
11 | 3 | 2 | 21:21:26-21:21:30 | 4
12 | 2 | 9 | 21:21:30-21:21:34 | 4
13 | 9 | 10 | 21:21:34-21:21:39 | 5
14 | 10 | 0 | 21:21:39-21:23:14 | 95
15 | 10 | 9 | 21:23:14-21:23:18 | 4
16 | 9 | 0 | 21:23:18-21:25:00 | 102
17 | 9 | 2 | 21:25:00-21:25:15 | 15
18 | 2 | 3 | 21:25:15-21:25:16 | 1
19 | 3 | 8 | 21:25:16-21:25:18 | 2
20 | 8 | 0 | 21:25:18-21:25:23 | 5
21 | 8 | 3 | 21:25:23-21:25:24 | 1
22 | 3 | 2 | 21:25:24-21:25:26 | 2
23 | 2 | 0 | 21:25:26-21:25:31 | 5
24 | 2 | 1 | 21:25:31-21:25:32 | 1
25 | 1 | 0 | 21:25:32-21:25:37 | 5
26 | 1 | 2 | 21:25:37-21:25:39 | 2
27 | 2 | 1 | 21:25:39-21:25:40 | 1
28 | 1 | 0 | 21:25:40-21:43:59 | 19
29 | 1 | 2 | 21:43:59-21:44:00 | 1
30 | 2 | 1 | 21:44:00-21:44:01 | 1
31 | 1 | 0 | 21:44:01-21:44:05 | 4
32 | 1 | 2 | 21:44:05-21:44:07 | 2
33 | 2 | 0 | 21:44:07-21:44:09 | 2
34 | 2 | 3 | 21:44:09-21:44:14 | 5
35 | 3 | 8 | 21:44:14-21:44:20 | 6
36 | 8 | 7 | 21:44:20-21:44:36 | 16

Table 8. Mobility sub-trajectories
Area | Duration (s) | Mobility/Immobility
9 | 102 | in washroom and restless
10 | 95 | in washroom and restless
7 | 61 | in bed and motionless
5 | 50 | restless
1 | 19 | outside
8 | 16 | restless
9 | 15 | restless
6, 5, 3 | 6 | restless
7, 9, 8, 2, 1 | 5 | restless
3, 2, 1 | 4 | restless and outside (area 1)
5, 6, 4, 3, 2, 1 | 2 | restless and outside (area 1)
7, 5, 2, 8, 1 | 1 | restless and outside (area 1)

Table 9. Duration of stop or move in each area
Area | Duration (s)
1 | 19, 5, 8, 2 × 2, 1
2 | 5 × 2, 4, 2, 4 × 1
3 | 6, 4, 2 × 2
4 | 2
5 | 50, 6, 2, 1
6 | 6, 2
7 | 62, 5, 1
8 | 16, 5, 1
9 | 102, 15, 5
10 | 95, 4
Tables 8 and 9 summarize the results given in Table 7 and show that the participant got up once, from 21:20:11 to 21:44:36. He was in bed at the beginning and immobile (area 0). He was restless in 10 areas, remained restless in areas 9 and 10, and was probably preparing to get up in area 5.
Figure 2. Participant movement sub-trajectories

He remained restless for 50 s, probably in the bed area and not really in area 5. At 21:21:12 he was in area 7, but probably not in bed. Figures 2 and 3 show the participant's movement trajectories from one location to another. The location of the beginning is area 7 (Figs 2 and 3), and the location of the end is area 3 (Fig. 2) or area 7 (Fig. 3). Figure 3 shows three activities according to Table 7 (getting up, wandering in the bedroom, going to the washroom and going out three times), and most sub-trajectories are direct (from bed to washroom). If it is supposed that from one area to another the subject covers approximately 1 m, the speed of the move from area 5 to area 2 is 0.23 m/s, or 830 m/h (duration 13 s and covered distance 3 m according to Table 6). Covered distances and travel speeds could be calculated for sub-trajectories, and distance and speed thresholds could be defined for an elderly-monitoring procedure for safety purposes.

III. CONCLUSIONS

Using a multisensor system makes it possible to collect individual movement trajectories and to estimate covered distances and travel speeds. Due to adjacent areas in the room, specifically around the bed (areas 6, 5, 4, 3, and 8), the individual trajectory pattern relationships are hard to identify visually, which can lead to some errors; a precise location of the infrared sensors is therefore crucial. An attempt to classify different patterns of individual movement trajectories was carried out; only sub-trajectories were classified [6]. The results show the participant occupying the bed and the washroom (61 s, 95 s, and 102 s respectively) and some direct movement sub-trajectories [7]. Choosing a beginning area and an end area for an individual movement trajectory, and ordering the durations of moves and stops as in Tables 8 and 9, allows a better understanding of the relationship between stop or move durations and areas in movement trajectories, and even of covered distances and speeds. Individual movement trajectories could be used to better identify abnormal trajectories, in order to ensure the security and safety of older individuals living alone, instead of condemning them to leave their own homes for an institution.
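The covered-distance and speed estimate discussed in this paper (roughly 1 m between adjacent areas, 3 m covered in 13 s from area 5 to area 2) can be sketched as follows; the step distance is the text's stated assumption.

```python
# Sketch of the covered-distance and speed estimate: assuming roughly 1 m
# between adjacent areas, speed is the area-to-area distance over the
# sub-trajectory duration. The 3 m / 13 s figures reproduce the
# area-5-to-area-2 move cited in the text.

STEP_M = 1.0   # assumed distance between adjacent areas, in metres

def speed_m_per_s(areas_crossed: int, duration_s: float) -> float:
    return areas_crossed * STEP_M / duration_s

v = speed_m_per_s(3, 13.0)   # area 5 -> area 2: 3 m in 13 s, about 0.23 m/s
```

A monitoring procedure could compare such per-sub-trajectory speeds against predefined thresholds to flag abnormal trajectories.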
ACKNOWLEDGMENT The authors would like to acknowledge the French Ministry Research New Technology (RNTS) and EDF Research and Development for their support.
Figure 3. Participant movement trajectory

REFERENCES
1. Rodier A. (2008) Union européenne: le défi du vieillissement. Le Monde, September 1st, 2008.
2. Chan M., Estève D., Escriba C., and Campo E. (2008) A review of smart homes—Present state and future challenges. Computer Methods and Programs in Biomedicine 91:55-81.
3. Chan M., Bocquet H., Campo E., Val T., and Pous J. (1999) Alarm communication network to help carers of elderly for safety purposes: a survey of a project. Journal of the Rehabilitation Research 22(2):131-136.
4. Spaccapietra S., Parent C., Damiani M. L., Antonio de Macedo J., Porto F., and Vangenot C. (2008) A conceptual view on trajectories. Data & Knowledge Engineering 65(1):126-146.
5. Chan M., Campo E., Laval E., and Estève D. (2002) Validation of a remote monitoring system for the elderly: Application to mobility measurements. Technology and Health Care 10:391-399.
6. Chan M., Campo E., and Estève D. (2004) Classification of elderly repetitive trajectories for an automatic behaviour monitoring system. In Proceedings of the Mediterranean Conference on Medical and Biological Engineering "Health in the Information Society" (MEDICON 2004), Island of Ischia, Naples, Italy, July 31 – August 5, 2004, 4 p.
7. Martino-Saltzman D., Blash B. B., Morris R. D., and Wynn McNeal L. (1991) Travel behavior of nursing home residents perceived as wanderers and nonwanderers. The Gerontologist 31(5):666-672.
Corresponding author:
Author: Chan Marie
Institute: LAAS-CNRS
Street: 7 avenue du Colonel Roche
City: 31077 Toulouse cedex 4
Country: France
Email:
[email protected]
MR Image Reconstruction for Positioning Verification with a Virtual Simulation System for Radiation Therapy

C.F. Jiang, C.H. Huang, T.S. Su

Department of Biomedical Engineering, I-Shou University, Kaohsiung, Taiwan, R.O.C.

Abstract — This paper proposes a systematic method for 3D reconstruction of MR images of a head, creating a virtual radiation target for positioning verification in the treatment planning of radiotherapy. According to the different features of the scalp and the brain, two different methods were applied to segment these entities: the scalp was detected by auto-thresholding, and the brain was segmented by region growing. The detected contours were then reconstructed as 3D polygon meshes and transformed into VRML format for presentation in the virtual simulation environment. This scheme was further implemented as a module embedded in a virtual simulation system for 3D conformal radiotherapy (3D-CRT). System verification was achieved by using a full-scale AP model of a human head as the target volume for positioning verification. We compared the positioning of the AP model in the real and virtual systems and found that the measurements were very close (difference < 0.3 cm). This work integrates the techniques of image processing, computer graphics, and virtual reality. With the assistance of the system, the position-verification process may become more efficient and accurate than a conventional positioning toolkit based only on 2D image registration.

Keywords — MR image, 3D reconstruction, positioning verification, radiotherapy.
I. INTRODUCTION

Radiotherapy is nowadays one of the most important methods of treating cancer. Since radiation can also damage normal cells, treatment planning, which includes beam-field design and dose calculation prior to radiotherapy, is essential work for reducing the radiation received by normal cells. Patient positioning verification is routine work that confirms that the radiation target matches the planning target, so as to reduce the harm of the radiation to normal tissue while increasing the radiation effect on the malignant tissue. Patient positioning verification is thus the first key to successful radiotherapy. Three-dimensional conformal radiotherapy (3D-CRT) has been introduced to overcome the limitation of the regular field shape in conventional 2D treatment planning systems by shaping a target volume in a 3D imaging study,
namely, 3D treatment planning [1]. During 3D planning, the target shape in the MRI images has to be delineated manually on a slice-by-slice basis. After that, the target volume is reconstructed and projected onto 2D image planes from different angles within a cycle of gantry rotation using proprietary software, usually bundled with the radiation system. The optimal radiation field is determined by inspecting the target position in the projected image planes from the radiation beam's-eye view. Accordingly, the 3D irregular field shape, called the prescription volume, which conforms to the target volume, can be achieved by figuring the beam portals at each angle and adjusting the length of each leaf in the multi-leaf collimator (MLC). However, the planning process of 3D-CRT is labor-intensive and sometimes needs recurring verifications with the patient on site; the overall process can be a physical as well as a mental ordeal for the patient. Thus, 3D-CRT needs a component that can save labor, accelerate the process, and, most importantly, decrease the number of patient visits. Recent works have proposed a collaborative virtual environment as a platform for radiation treatment planning, allowing easy communication in telemedicine [2-3]. In our previous work, we first developed a virtual radiation simulator for training purposes [4]; a virtual linear accelerator associated with a virtual MLC was later integrated with the simulator as a double-platform simulation system [5]. The setting parameters, such as the gantry rotation angle and the altitude and position of the patient table in the simulation room, can be transferred exactly into the treatment room in order to verify the adequacy of both environment settings.
However, the target volume used in this prototype was a virtual head, obtained by laser scanning of a full-scale 3D solid anthropometric model (AP model) that replaces the real patient for positioning and verification in treatment planning. The high expense and non-deformability of the AP model make it ill-adapted to case variety; as a result, the application of the virtual system was restricted to training purposes. In this study, we propose an MRI image reconstruction scheme based on image-segmentation techniques to create a target volume. This scheme is then integrated into the virtual system as the rendering module of the target volume for positioning verification directly in 3D space. With the
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1019–1023, 2009 www.springerlink.com
assistance of this module, the target volume can be generated in an efficient way, and the virtual radiation simulation system becomes an economical solution for clinical application.

II. METHODS
A. MR Image Reconstruction

In order to create a 3D polygon mesh of the target, we first need to detect the target contours. Two methods are applied to segment a tissue according to whether it is distinguishable from its surroundings by intensity or by texture. The MR images used in this pilot study were obtained from the database of the Visible Human Project [6]. An example of an original MR image is given in Fig. 3a: the brain is texture-like, while the head contour is the outermost layer, brighter than the background. The segmentation is described in detail as follows.

1. Image Segmentation by Thresholding

For tissue with a large intensity gradient in the MR images, such as the scalp, we applied the auto-thresholding method developed by Otsu [7] to binarize the image. Owing to the double-mode nature of the intensity histogram of MR images, we modified the one-threshold algorithm into a double-threshold one for detection of the head contour. The algorithm derives the optimal thresholds k1* and k2* that separate the objects by maximizing the between-class variance σB²:

σB²(k1, k2) = w0(μ0 − μT)² + w1(μ1 − μT)² + w2(μ2 − μT)²    (1)

σB²(k1*, k2*) = max σB²(k1, k2),  1 ≤ k1 < k2 < L    (2)

where w0, w1 and w2 are the probabilities of the three non-overlapping classes in the image, μ0, μ1 and μ2 are their mean gray values, and μT is the mean gray value of the entire image. An optimal set of thresholds k1* and k2* is thereby selected by maximizing σB². Following that, we used morphological image processing techniques [8] to refine the segmented entity: a "fill" operation first seals the holes inside the head, and an "opening" then removes the binarization debris outside the head.

2. Image Segmentation by Region Growing

For tissue whose texture differs from that of its surroundings in the MR images, such as the brain, we applied a region growing algorithm for segmentation [9]. It starts by selecting a seed point, s, inside the brain area and grows the region by appending to it the four orthogonal neighboring pixels that meet the growing criteria. Two major criteria control the growing process: a similarity criterion that selects the pixels to merge into the region, and a stopping criterion that terminates the growth. The region growing proceeds with the similarity criterion, defined as a limited intensity difference, δ, between a candidate pixel, p, and the seed, s:

Ri+1 = { p | |I(p) − I(s)| < δ }    (3)

The seed value, I(s), is updated as the mean intensity of the grown region at each iteration. The stopping criterion is predefined as the region exceeding the union of the brain areas manually outlined in the three orthogonal planes.

3. Contour Detection

Once the region of interest (ROI) is segmented by the methods described above, the contours of the region can be detected simply by searching for the first gradient along rays radiating outward from the centroid of the ROI.

4. Polygon Mesh Creation

Based on the detected contours, the polygon mesh interconnecting the contours can be created with the minimum distance method [10], which looks for the nearest points on the contours in two adjacent slices. The baseline of the first triangle is drawn by joining the first point of each adjacent contour. The next point in either adjacent contour with the minimum distance to the baseline is then selected; these three points form a triangle. The following points are connected in the same way to form the polygon mesh.

B. Integration into the VSS
The created polygon mesh was transformed into VRML format and its size adjusted according to a scale ratio predefined by measuring the sizes of objects in the real and virtual environments. All objects in the system had been scaled down by this ratio. A virtual head holder was created at a fixed position on the table to locate the virtual target when the VRML file is read into the virtual simulation system.

C. System Verification

System verification was achieved by using the full-scale AP model of a human head created in our previous work [4-5] as a target volume for positioning verification (Fig. 1 and Fig. 2). We assumed that if the calibration of the virtual simulation were correct, then after transferring the whole setting in the real environment into the virtual system, the position of the isocenter (the center of the beam) in the virtual system should be the same as in the real one. There was a minute difference (< 0.3 cm) between these two measurements. This error may be due to the projection bending of the scalar on a curved head contour in the real case (Fig. 1) rather than on a plane in the virtual case (Fig. 2).
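The double-threshold Otsu selection of Eqs. (1)-(2) can be illustrated with a small exhaustive search. This is a NumPy sketch for illustration only, not the authors' implementation; the function and variable names are ours.

```python
import numpy as np

def otsu_two_thresholds(image, levels=256):
    """Exhaustively find (k1, k2) maximizing the between-class
    variance of three classes, as in Eqs. (1)-(2)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()          # probability of each gray level
    g = np.arange(levels)          # gray levels 0 .. L-1
    mu_T = (p * g).sum()           # mean gray value of whole image
    best, k_best = -1.0, (0, 0)
    for k1 in range(1, levels - 1):
        for k2 in range(k1 + 1, levels):
            var_B = 0.0
            # three non-overlapping classes: [0,k1), [k1,k2), [k2,L)
            for lo, hi in ((0, k1), (k1, k2), (k2, levels)):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (p[lo:hi] * g[lo:hi]).sum() / w
                    var_B += w * (mu - mu_T) ** 2
            if var_B > best:
                best, k_best = var_B, (k1, k2)
    return k_best
```

The O(L²) search is adequate for 8-bit images; in practice a library routine such as scikit-image's multi-level Otsu would be used instead.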
A. Image Processing and Reconstruction

Figure 3b shows the contour of Fig. 3a detected via the double-thresholding technique followed by two-step morphological image processing and maximum gradient detection. Fig. 4a demonstrates the polygon mesh achieved by interconnecting the adjacent contours with the minimum distance criterion, and Fig. 4b shows its rendered surface. The same procedure is applied to generate the brain mesh, except that the segmentation is based on region growing. The two meshes are created separately as two data sets and can be combined when loaded using the target loading module described later.
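The minimum distance stitching of adjacent contours used to build these meshes (attributed to [10] in the Methods) can be sketched as follows. This is a simplified illustration that advances along two open point lists and omits the final band that closes a closed contour; the 'a'/'b' labels mark which contour each vertex comes from.

```python
import numpy as np

def stitch_contours(c1, c2):
    """Triangulate the band between two contours on adjacent slices:
    starting from a baseline joining the first point of each contour,
    repeatedly advance on the contour whose next point is closest to
    the current point on the opposite contour."""
    i, j, tris = 0, 0, []
    n1, n2 = len(c1), len(c2)
    while i < n1 - 1 or j < n2 - 1:
        # candidate advances: next point on contour 1 or contour 2
        d1 = np.linalg.norm(c1[i + 1] - c2[j]) if i < n1 - 1 else np.inf
        d2 = np.linalg.norm(c2[j + 1] - c1[i]) if j < n2 - 1 else np.inf
        if d1 <= d2:
            tris.append((('a', i), ('a', i + 1), ('b', j)))
            i += 1
        else:
            tris.append((('a', i), ('b', j), ('b', j + 1)))
            j += 1
    return tris
```

For two contours of n1 and n2 points this always produces (n1 - 1) + (n2 - 1) triangles, one per advance.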
Fig. 1 The position of AP model within the head frame is set to align the tumor (facing inside) with the isocenter.
(a)
(b)
Fig. 3 (a) MR image of a head and (b) the corresponding detected head contour (see text for details)
(a)
Fig. 2 The calibration of the virtual system is verified with the same position setting as in Fig. 1, with the virtual scalar on top of the virtual AP model. The vertical plane at the center is an extension of the laser beam for easy identification of the isocenter location.
III. RESULTS

The results are presented in two parts: first, the MR image segmentation and reconstruction; second, the integration of the target loading module with the virtual simulation system.
(b)
Fig. 4 (a) polygon mesh of the head and (b) surface rendering of the head

B. Integration of the Module into the VS System

In this module, an interface was designed to select and load the polygon mesh files into the VR scene on the virtual head holder. The VRML-format mesh files can be multi-selected and rendered together as one entity, such as the head with a brain inside (Fig. 5a). The virtual system also provides an MLC configuration function and immediately reveals the 3D beam, with color, as
the result of the configuration. The visibility of the beam provides an easy way to ensure that the 3D radiation beam covers the target volume optimally, as shown in Fig. 5b. For position verification, the virtual scalar can be turned on as the reference for delineating the radiation field in comparison with that in the portal film.
(a)
(b)
Fig. 5 (a) The reconstructed head with the brain inside has been loaded into the virtual system; (b) preview of the beam shape and irradiation fields in the virtual system with different rotation angles of the gantry.
IV. DISCUSSION AND CONCLUSIONS

We have proposed an MR image processing method to reconstruct the target volume for positioning verification in 3D conformal radiotherapy. A module with this MR image processing function is integrated into the virtual simulation system previously developed by us, in order to load the reconstructed 3D target into the virtual scene so that the conformity between the radiation beam field and the target can be investigated with a 3D global view. The system verification shows a precision better than 0.3 cm. To achieve this precision, the VR system needs to be customized according to the real objects in the radiation treatment room; nevertheless, this is easy to accomplish with commercial computer graphics packages. The key to making this system applicable in clinics is the capability to embed the various target volumes of real patients, in other words, to make the virtual system able to contain real 3D data from medical images, which is the aim of this study. We therefore believe that the upgraded system with the newly developed module has the potential for clinical use. At this stage we only consider targets within the same image data set, so registration is not necessary; however, the image processing module will be further extended in this regard.
ACKNOWLEDGMENT

This work was supported in part by a grant from the National Science Council (NSC 95-2221-E-214-005).
Fig. 6 An example demonstrating that the VS system also provides an on-line MLC design function. The result can be viewed immediately with the loaded virtual target.

The intersection of the virtual beam and the target volume can be inspected from different angles, as shown in Fig. 6, to verify the adequacy of the gantry angle and the MLC configuration.

REFERENCES

1. Webb S (1993) The physics of three-dimensional radiation therapy. Institute of Physics Publishing, London, U.K.
2. Ntasis E, Maniatis TA, Nikita KS (2002) Real-time collaborative environment for radiation treatment planning virtual simulation. IEEE Trans. Biomed. Eng. 49(12):1444-1451
3. Ntasis E, et al. (2005) Telematics enabled virtual simulation system for radiation treatment planning. Comput. Biol. Med. 35:765-781
4. Su TS, Chen DK, Sung WH, Jiang CF, Sun SP, Wu CJ (2005) Development of a virtual simulation environment for radiation treatment planning. J. Med. Bio. Eng. 25(2):61-66
5. Su TS, Sung WH, Jiang CF, Sun SP, Wu CJ (2005) The development of a VR-based treatment planning system for oncology. Proc. 27th IEEE Conference of the Engineering in Medicine and Biology Society (EMBC05), 2005
6. National Library of Medicine. The Visible Human Project. National Institutes of Health, U.S., http://www.nlm.nih.gov/research/visible/visible_human.html
7. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans. SMC 9:62-66
8. Gonzalez RC, Woods RE (2002) Digital image processing. Prentice-Hall, Upper Saddle River, NJ
9. Dhawan AP (2003) Medical image analysis. John Wiley & Sons, Hoboken, NJ
10. Ekoule AB, Peyrin FC, Odet CL (1991) A triangulation algorithm from arbitrary shaped multiple planar contours. ACM Trans. Graphics 10(2):182-199
Author: Ching-Fen Jiang
Institute: I-Shou University
Street: No. 1, Sec. 1, Hsueh-Cheng Rd, Ta-Hsu Hsiang
City: Kaohsiung
Country: Taiwan, R.O.C.
Email: [email protected]
Experimental setup of hemilarynx model for microlaryngeal surgery applications

J.Q. Choo1, D.P.C. Lau2, C.K. Chui1, T. Yang1, S.H. Teoh1

1 Department of Mechanical Engineering, National University of Singapore, Singapore
2 Department of Otolaryngology, Singapore General Hospital, Singapore
Abstract — Mechanical models of the human larynx have been used to validate mathematical vocal cord models, but most lack applicability for ex-vivo surgical experiments. To the best of our knowledge, no technical evaluation using mechanical models to study vocal cord wound closure techniques has been reported. We design and develop a customizable and versatile mechanical hemilarynx with direct simulation of vocal cord vibration and airflow that can facilitate ex-vivo surgical experiments on the vocal cord. This setup enables experimental validation of the mechanical stability of novel devices currently being developed for epithelial wound closure in microlaryngeal surgery.

Keywords — hemilarynx, microlaryngeal surgery, wound closure.
I. INTRODUCTION

Current practices in wound closure following endoscopic surgery to the vocal fold include sutures and tissue adhesives. These methods have respectively proven technically difficult in such minimal access surgery or yielded less than optimal healing results [1, 2, 3]. Our team is designing and developing novel epithelial wound closure devices that can be easily handled and inserted onto the vocal fold cover with microlaryngeal surgical instruments. These devices must be able to withstand the loads induced by vocal fold vibration and glottal airflow. Experimental validation of these novel wound closure devices with a mechanical larynx model is a crucial and integral step to address the limitations of, and complement the results of, other analyses such as finite element simulation of these devices. Many mechanical models of the human larynx have been designed to validate mathematical models of the vocal fold [4, 5]. However, to the best of our knowledge, none of these models has been applied to the study of vocal fold wound closure techniques. These mechanical setups of the larynx lack the applicability and customizability needed for ex-vivo surgical experiments. In this paper, we design and develop a simple and customizable mechanical larynx setup for ex-vivo surgical experiments with direct simulation of vocal fold vibration and glottal airflow. This setup serves as a medium to study vocal fold wound closure methods in ex-vivo surgical experiments, with a focus on the prototype wound closure devices that our team has designed. The efficacy of anchoring and the mechanical stability of these devices under flow and vibration conditions can be assessed and evaluated using the model. A mechanical hemilarynx setup, with one vocal fold replica opposite a flat wall, was designed [4, 6]. The use of a hemilarynx setup eliminates the geometrical incongruence that occurs due to inconsistent membrane construction and adhesion on opposing airway walls [6]. This setup correlates most closely with the clinical situation in which a patient undergoes removal of a diseased vocal fold (cordectomy).

II. MATERIALS AND METHODS
The mechanical hemilarynx consists of a detachable vocal cord piece within a Perspex flow channel. This fits into a wooden casing connected to an external airflow supply. The Perspex flow channel consists of 2 Perspex boards of thickness, interconnected with a Perspex board of thickness, with a hole that serves as an airflow outlet. Four M5 guiding screws connect a detachable vocal fold replica piece, made of Perspex, to the medial surface of one of the Perspex boards. These screws have partial threads removed so that the vocal cord piece can slide freely along the screw track. The assembly is shown in Figure 1. The dimensions of the designed Perspex channel closely follow those of the glottal airway: the depth of the
Figure 1: Assembly of Perspex Channel with Perspex rod (piston) and vocal cord piece

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1024–1027, 2009 www.springerlink.com
airway channel, enclosed by the sides of the encasing box, is 25 mm, the width of the channel is 27 mm, and the length of the channel (from the airway entry at the bottom to the top of the vocal fold) is 90 mm [4]. The geometrical contour of the detachable vocal fold piece shown in Figure 1 follows the outer geometrical contour of the vocal cord described by Alipour et al. [4]. This piece is kept to a uniform 2 mm thickness and has a rectangular slot of 15 mm by 5 mm removed from its middle section, to allow greater clearance for wound closure devices to be inserted without being pushed off by the Perspex rod indicated in Figure 1. The thin structure of the vocal fold piece, coupled with the rectangular slot, drastically reduces the inertia of the vocal fold piece so that it can slide with ease along the guiding screw tracks. A single layer of polyurethane film with a tensile strength of approximately 3,000 psi and an elongation at break of 500% is slightly stretched over the detachable vocal fold piece [7]. The top and bottom ends of the sheet are secured to the distal ends of the detachable vocal fold piece using laboratory Parafilm®. This film serves as the primary medium for ex-vivo microlaryngeal surgery experiments. It simulates the compressive stresses acting on the novel wound closure devices at the contact interface areas. The wooden casing is designed with a high degree of versatility and customizability for a wide variety of laryngeal ex-vivo surgical experiments on the vocal folds. It is equipped with an I-shaped recess to allow ease of insertion and removal of the Perspex channel. It is also equipped with multiple observation holes for an endoscopic camera system with a stroboscope feature, to track the vibration of the vocal cord replica under different flow and vibratory parameters. A portable stroboscope is used in place of the endoscopic camera system for this phase of experimentation. A picture
of the assembled wooden casing, with base supports and a flange that connects to the external airflow via a compressor, is shown in Figure 2. A selected prototype of one of the novel wound closure devices is secured onto the film using laboratory forceps. The Perspex channel is then assembled and fitted into the wooden casing as shown in Figure 3. The flat end of the Perspex rod is pushed through corresponding slots in the walls of the wooden casing, the Perspex channel and the detachable cord piece until it comes into direct contact with the polyurethane sheet. The circular end of the piston is then connected to the core of the linear vibrator, which is placed on a Perspex stand. A loudspeaker that serves as the external linear vibrator is then moved in towards the wooden casing until a premade mark 4 mm from the distal flat end of the piston is level with the slot in the wall of the wooden casing. The piston-vibrator system serves as an external simulation of the vocal cord vibration. A signal function generator with frequency and amplitude control is calibrated to generate sinusoidal waveforms and is connected to the linear vibrator (loudspeaker). The core of the vibrator then acts as an actuator that pushes the Perspex rod further inwards and draws it outwards in a cyclic motion, so that the polyurethane sheet undergoes a forced series of cyclic tension and compression. The air supply is regulated by an air compressor and can be varied from a few bars to 15 bars. The air is directed through a 3 meter long soft rubber tube fitted into the steel flange at the bottom of the encasing box, as shown in Figure 4. Finally, a portable stroboscope is placed approximately 10 cm away from the observation hole as indicated in Figure 4. The stroboscope generates a series of rapidly flashing lights at frequencies prescribed by a dial control, to match the frequency of the vocal fold membrane vibration.
Figure 2: Wooden Casing with Perspex Top Stand
Figure 3: Exploded View of Hemilarynx Model
Figure 4: Complete experimental setup with in-experiment views of the mock vocal cord with a prototype of the novel wound closure device under forced vibration and airflow conditions.
III. RESULTS
Experiments are conducted with the air supply at a constant gauge pressure of 1.5 bars and low phonation frequencies of 40 Hz, 60 Hz and 80 Hz, each for a duration of 30 minutes [5]. The vibration frequencies of the vocal fold piece are tallied and validated against the stroboscope flash emission readings. Observation through the circular slot is conducted at the 10 and 30 minute time points for all three frequencies, to check retention of the prototype wound closure device and tearing of the film at large displacements and high frequencies. The results of the experiment are given in Table 1.

Table 1 Ex-vivo experimental trials on the stability of the novel wound closure device in a hemilarynx model

Actuator         Stroboscope      Retention of prototype on film    Tearing of film observed
Frequency (Hz)   Frequency (Hz)   10 (mins)    30 (mins)            30 (mins)
40               40.1             Yes          Yes                  No
60               60.3             Yes          Yes                  No
80               80.2             Yes          Yes                  No

Neither tearing of the polyurethane film nor dislodgment of the selected prototype is observed within the 30 minute timeframe for frequencies of 40 Hz, 60 Hz and 80 Hz.

IV. DISCUSSION

Through the ex-vivo experimental setup of the hemilarynx, we are able to validate the dynamic anchoring capabilities of the selected prototype wound closure device under low phonation frequencies of 40 Hz to 80 Hz, which are characterized by large displacements of the vocal cords. Similarly, the hemilarynx setup is easily customized to cater for ex-vivo experimentation on other prospective prototype designs of the novel wound closure devices, as well as experiments involving microsutures and surgical glue. The results of such experimentation with the hemilarynx model will give a good indication of the mechanical stability of these materials and devices under vocal cord vibration and subglottal airflow conditions. It will serve to validate and complement other forms of mechanical analysis for epithelial wound closure in microlaryngeal surgery. The mechanical hemilarynx has several drawbacks compared with an actual human larynx. A single layer of polyurethane film is only sufficient to generate compressive stresses along a small area of contact interface with the selected prototype. In an actual larynx, by contrast, compressive stresses will act throughout the embedded ends of the selected prototype due to the presence of the deeper layers of the lamina propria and the vocal ligament in a human vocal fold. Viscous and adhesive effects of vocal fold tissue will also exert shear stresses on the novel wound closure device when it is implanted in a human larynx.
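As a quick consistency check, the agreement between the set actuator frequencies and the stroboscope readings in Table 1 can be quantified in a few lines (illustrative script; data copied from Table 1):

```python
# Actuator vs. stroboscope frequency pairs from Table 1 (Hz)
pairs = [(40, 40.1), (60, 60.3), (80, 80.2)]

# largest relative deviation between the set and measured frequency
max_dev = max(abs(measured - set_f) / set_f for set_f, measured in pairs)
print(f"max relative deviation: {max_dev * 100:.2f}%")  # 0.50%
```

The worst case (60 Hz vs. 60.3 Hz) deviates by only 0.5%, supporting the use of the stroboscope readings as validation of the drive frequency.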
In addition, it is difficult to achieve a good simulation of subglottal airflow around the vocal fold piece and the wound closure device, due to the presence of the guiding screws in the Perspex flow channel. Nevertheless, this is not deemed a major limitation, as the pressures exerted by the vocal cord muscles during phonation far exceed the subglottal airflow pressures [8].
V. CONCLUSION

The experimental hemilarynx setup is highly versatile and customizable. It offers direct simulation of vocal cord vibration and has been designed to facilitate mechanical surgical experimentation on the vocal cords. Most importantly, it has demonstrated good applicability in the evaluation of novel epithelial wound closure devices.

REFERENCES

1. Woo P (1995) Endoscopic microsuture repair of vocal fold defects. J Voice 9:332-339
2. Flock S, Marchitto KS (2005) Progress toward seamless tissue fusion for wound closure. Otolaryngol Clin N Am 38(2):295-305
3. Fleming D, et al. (2001) Comparison of microflap healing outcomes with traditional and microsuturing techniques: initial results in a canine model. Ann Otol Rhinol Laryngol 110:707-711
4. Alipour F, Scherer R (2001) Effects of oscillation of a mechanical hemilarynx model on mean transglottal pressures and flows. J. Acoust. Soc. Am. 110(3):1562-1569. DOI: 10.1121/1.1396334
5. Ruty N, Pelorson X, Hirtum A (2007) An in vitro setup to test the relevance and the accuracy of low-order vocal fold models. J. Acoust. Soc. Am. 121(1):479-490. DOI: 10.1121/1.2384846
6. Titze I, Schmidt S, Titze M (1995) Phonation threshold pressure in a physical model of the vocal fold mucosa. J. Acoust. Soc. Am. 97(5):3080-3084. DOI: 10.1121/1.411870
7. Taller R, McGary J, Charles W (1987) United States of America Patent No. 468449
8. Zhang K, Siegmund T (2007) A two-layer composite model of the vocal fold lamina propria for fundamental frequency regulation. J. Acoust. Soc. Am. 122(2):1090-1101. DOI: 10.1121/1.2749460
ACKNOWLEDGMENT

This research is funded by an NMRC grant (Development of a vocal cord clip and applicator for laryngeal micro-surgery).
Virtual Total Knee Replacement System Based on VTK

Hui Ding, Tianzhu Liang, Guangzhi Wang, Wenbo Liu

Department of Biomedical Engineering, Tsinghua University, Beijing, China

Abstract — In this paper we present surgery planning software for total knee replacement (TKR), built on the open source Visualization Toolkit (VTK). The main innovative feature of this surgery planning system is that it helps the surgeon choose the correct size, orientation and position of the prosthetic components, in order to restore the correct alignment of the mechanical axis of the lower limb. To simulate the multiple degrees of freedom of the embedded prosthesis in a total knee replacement procedure, such as introversion-extroversion, exterior-interior rotation, flexion-extension, and translations along the anatomical axes, appropriate human-machine interaction and alignment restrictions were designed to help the operator perform the virtual cutting and virtual assembly manipulations. Moreover, commonly used clinical quantitative indices are provided to help the doctor determine the spatial relation of the implantable prosthetic components in 3D space. The functional design of the software system was evaluated by virtual motion analysis of knee flexion, with good results. A process diagram and the detailed steps of the virtual operation are provided in this paper. The surgery planning software plays an important role in the preoperative selection of the prosthesis and in the alignment procedure for bone resection and assembly.

Keywords — Total Knee Replacement, Visualization Toolkit (VTK), Surgery Planning.
I. INTRODUCTION

Total knee replacement (TKR) is a surgical operation in which a surgeon removes a patient's damaged knee joint surface and replaces it with artificial components to restore the disabled knee functions. The purpose of TKR surgery is to correct the axial alignment of the lower extremity, maintain joint stability, relieve pain in the joint and thus restore the whole function of the knee joint. To ensure good postoperative function of the knee joint, the bone resection position and the placement of the prosthetic components have to be controlled precisely in three-dimensional space. Preoperative planning software for total knee replacement has therefore been receiving more and more attention. At present, many institutions have successfully developed total knee replacement surgery planning systems [1-3]. Using such planning software, surgeons can virtually simulate the surgical procedures with a
computer. The system allows them to determine important surgical parameters, such as the amount and angle of resection, and to choose the best artificial knee joint for the patient before the real operation [1]. The key to successfully restoring knee function in TKR surgery is the correct selection of the prosthesis and its placement in the correct location relative to the femur and tibia. The most important feature of the simulation system is therefore an appropriate human-machine interface that exposes all the adjustable characteristics of the spatial position of the prosthesis in 3D space. The quantitative indices exported by the planning system provide a significant basis for establishing the operation plan. This paper introduces TKR planning software developed on the Visualization Toolkit (VTK). Preoperative computed tomography (CT) images of the patient are used to reconstruct 3D bone models, and the 3D models of the prosthetic components are obtained from 3D laser scans. Appropriate human-machine interaction is used to manipulate the 3D models of the bones and prosthetic components. Through simulation of knee flexion of the lower limb models after virtual total knee replacement, the surgeon can observe and evaluate the assembly result from different viewpoints, which helps the surgeon decide the appropriate position of the bone resection and place the embedded prosthesis in the desired position. This simulation system is dedicated to assisting the surgeon in the preoperative determination of the accurate location of the osteotomy, the selection of prosthetic implants, and the 3D positioning of the implants.

II. MATERIALS AND METHODS

A. Description of the TKR Planning System

Computer-assisted preoperative planning consists of seven steps: (1) Construct the three-dimensional models of the limb (femur and tibia) and the prosthetic components.
The 3D model of the lower limbs was constructed from CT images and the 3D models of the prosthetic components from three-dimensional laser scans; (2) Identify the limb mechanical axis; (3) Determine the spatial position of the prosthetic models in 3D space with

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1028–1031, 2009 www.springerlink.com

respect to the bones; (4) Pre-assemble the bones and prosthetic components; (5) Quantitatively determine the appropriate bone resection positions and directions; (6) Perform the virtual cutting; (7) Output the quantitative indicators and save the key indices and the assembled model. A block diagram of the planning procedure is illustrated in Fig. 1: loading the limb and prosthesis models, identifying the limb mechanical axis, determining the spatial position of the models, pre-assembly, determining the quantity of resection, virtual cutting, and output of the result. The TKR planning system was developed on the open source Visualization Toolkit (VTK) platform with Visual Studio 2005.

Fig. 1 A block diagram of the TKR planning system

B. Materials

1. Three-dimensional model of the lower limb: The 3D bone models were constructed from computed tomography (CT) scans. The lower limb CT scan was performed with a 1 mm slice spacing and an image resolution of 512*512; the corresponding pixel size is 0.782*0.782 mm. The slices cover the range from the proximal end of the femur to the distal end of the tibia.
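With the acquisition parameters quoted above (1 mm slice spacing, 0.782 mm in-plane pixel size, 512*512 slices), voxel indices map to physical coordinates as in the following sketch; the origin placement and (x, y, z) = (col, row, slice) ordering are our assumptions, not part of the paper.

```python
# CT geometry from the Materials section
PIXEL_MM = 0.782   # in-plane pixel size (mm)
SLICE_MM = 1.0     # spacing between slices (mm)

def voxel_to_mm(row, col, slc):
    """Map a voxel index triple to physical coordinates in mm
    (assumed origin at voxel (0, 0, 0))."""
    return (col * PIXEL_MM, row * PIXEL_MM, slc * SLICE_MM)

# in-plane field of view covered by one 512*512 slice
fov_mm = 512 * PIXEL_MM   # about 400.4 mm
```

Such a mapping is what allows distances measured on the reconstructed 3D bone model to be reported in millimeters.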
Fig. 2 Three-dimensional model of the prosthetic components
2. Three-dimensional model of the prosthetic component: The 3D prosthetic component models were constructed from three-dimensional laser surface scans, which reverse-engineered the articular surfaces into point clouds with associated point normals. The sampling resolution of the laser scanner used was about 0.5 mm. A 3D surface model of a prosthetic component is shown in Fig. 2.

C. Realization of the Virtual TKR System

1. Mechanical axis selection: Osteotomy and prosthesis assembly are based on the mechanical axes of the lower limbs, so determination of these axes is essential in the virtual TKR system. We select the femoral mechanical axis, FMA, from the center of the femoral head to the center of the intercondylar notch of the femur, and the tibial mechanical axis, TMA, from the center of the tibial plateau to the center of the ankle joint.

2. Determining the spatial position of the model: Total knee replacement is an extremely "position sensitive" surgical operation [1]. Restoring the correct alignment of the lower limb mechanical axes demands precise positioning of the femoral and tibial prosthetic components relative to the bones in 3D space. The assembly of the tibial and femoral prosthetic components may thus involve misalignments in multiple degrees of freedom, such as introversion-extroversion, external-internal rotation, flexion-extension, and translations in all directions. When we construct the 3D models, each component has its own coordinate system (three orthogonal axes). If the mechanical axes and the flexion axis of the femur and tibia are defined precisely, the models can be transferred to any desired 3D position with rigid-body transformations. In distal femoral replacement, the bone cuts of the thighbone basically comprise the section of the distal femoral condyle, the anterior and posterior femoral condyles, and the anterior and posterior bevels of the femoral condyle.
Three steps were designed to determine the position of the thighbone prosthesis. The first step is to choose the anatomical landmarks that determine the mechanical axis of the femur and the axis of joint flexion-extension. We palpate a set of points on the surface of the femoral head to calculate the center of the femoral head, and pick the intercondylar notch of the femur on the 3D model. Thus the femoral mechanical axis, FMA, from the center of the femoral head to the center of the intercondylar notch is determined. We then palpate the anterior and posterior femoral condyles to determine the flexion-extension axis of the femur. The second step is to determine the section location of the distal femoral condyle. The initial section plane is always perpendicular to the femoral mechanical axis and can be adjusted in the proximal-distal direction. A reference
IFMBE Proceedings Vol. 23
___________________________________________
1030
Hui Ding, Tianzhu Liang, Guangzhi Wang, Wenbo Liu
plane was generated to help the visual inspection of the resection position. After the initial section plane of distal femoral condyle was determined, the third step could be performed. The flexion-extension axis of femur and the reference plane were used to restrict the flexion-extension of the femoral prosthesis. The cutting planes of anterior and posterior femoral condyle were set parallel to the flexion-extension axis and perpendicular to the reference plane. Moreover, he the cutting planes of anterior and posterior bevel were set parallel to the flexion-extension axis and keep a specific angle to the reference plane. According to the known sizes and angles of the prosthesis, those cutting planes of anterior and posterior femoral condyle can be determined. The selected landmarks and generated reference plane are show in Fig. 3 and Fig. 4.
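The first step, estimating the femoral head center from palpated surface points, is commonly done with a linear least-squares sphere fit; a sketch under that assumption (the function name is illustrative, not from the paper's implementation):

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to an (N, 3) point array, N >= 4.

    Uses the algebraic form |p|^2 = 2 c.p + (r^2 - |c|^2), which is linear
    in the unknowns (c, r^2 - |c|^2). Returns (center, radius).
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, radius
```

Feeding the palpated femoral-head points to such a fit yields the head center through which the FMA passes.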
Fig. 3 Palpation of the anterior, interior, and exterior posterior femoral condyles

Fig. 4 Reference planes for determining the sections of the anterior and posterior femoral condyles

For the tibia, we selected the tibial mechanical axis, TMA, from the center of the tibial plateau to the center of the ankle joint. The sagittal plane of the tibia was determined by the tibial plateau center, the talus center, and the landmark at the interior third of the tibial tubercle. With these selections, the restrictions on the tibial prosthesis were applied.

3. Pre-assembly: The pre-assembly of the femur and tibia prostheses was processed by a sequence of alignments. In the first step, the direction of the prosthetic axis was aligned with the mechanical axis of the thighbone model, and the prosthesis could be moved proximally and distally. Then the inner and outer diameters of the prosthesis were aligned with the thighbone model along the flexion-extension axis. In the third step, through the projection of the center of the femoral condyle notch, the projection of the sagittal plane was drawn on the reference plane, and the prosthesis was aligned so that its center lay on this vertical line, with the posterior cutting surface of the thighbone aligned with the posterior cutting line on the reference plane. Similar alignments were performed for the tibial bone and prosthesis.

Fig. 5 Pre-assembly of femur and tibia prostheses
Virtual Total Knee Replacement System Based on VTK

4. Determining the bone-cutting quantity: In the operation planning system, the bone-cutting quantity of the distal femoral condyle was calculated from the intersection of the bone-cutting plane of the distal femoral condyle with the thighbone axis. Because the thighbone prosthesis was restricted to move along the mechanical axis of the femur, the relative position of the center of the prosthesis did not change during the movement. Therefore, the cutting quantity of the distal femur could be determined easily. In the usual case, if the tibial plateau is perpendicular to the mechanical axis of the tibia, the thighbone prosthesis is in external rotation at an angle of 3°. The bone-cutting quantity of the interior posterior femoral condyle is then 2 to 3 mm greater than that of the exterior posterior femoral condyle. The cutting quantity of the tibial plateau refers to the distance from the center of the tibial plateau to the cutting plane of the femur. As the mechanical axis of the tibia always passes through the centers of the tibial plateau and the bone-cutting surface, the distance between the two centers could be calculated directly as the bone-cutting quantity of the tibial plateau.

5. Virtual resection: The cutting surface of the virtual cutting was realized with implicit functions in VTK. In single-plane cutting, the implicit function of the virtual cutting was set to the corresponding cutting-surface function. However, multi-plane virtual cutting was needed in the femur resection, which made it necessary to combine the regions of all the cutting surfaces with the "And" operation; the implicit function of the resulting region was set as the implicit function of the virtual cutting to obtain the effect of simultaneous multi-plane cutting.

6. Quantitative indicator output and saving the model after assembly: Before the pre-assembly, the prosthesis models were not in the same coordinate system as the bone models. Therefore, normal alignment, rotational alignment, and translational alignment were used to complete the alignment between the prostheses, the femur and the tibia. The alignments result in accurate assembly of all prosthetic components and bones in the same coordinate system.

D. Virtual motion analysis

The simulation result of the prosthesis replacement was analyzed initially through visual inspection by the surgeon. The models can be manipulated to perform rotations about the flexion-extension axis. Fig. 6 illustrates virtual knee flexion at 0°, 45° and 75°, respectively.
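The multi-plane "And" operation can be emulated outside VTK by taking the maximum of the individual plane functions, mirroring how an implicit-boolean intersection of half-spaces evaluates; a NumPy sketch with hypothetical planes:

```python
import numpy as np

def plane_value(points, origin, normal):
    """Signed distance of (N, 3) points to a plane; negative on the kept side."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return (points - origin) @ n

def intersection_value(points, planes):
    """'And' of several implicit plane functions: the pointwise max of the
    individual values, i.e. an implicit intersection of half-spaces."""
    return np.max([plane_value(points, o, n) for o, n in planes], axis=0)

# Keep only material where every plane function is negative (inside all half-spaces).
pts = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 1.0], [2.0, 0.0, -1.0]])
planes = [(np.zeros(3), np.array([0.0, 0.0, 1.0])),                  # cut above z = 0
          (np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))]    # cut right of x = 1
kept = pts[intersection_value(pts, planes) < 0.0]
```

Clipping a mesh against this combined function removes all material on the positive side of any cutting plane in a single pass.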
Fig. 6 Virtual knee flexion: (a) 0°, (b) 45°, (c) 75°

III. RESULT AND DISCUSSION

This paper combined computer simulation techniques with the conventional surgical operation process to establish a knee operation planning system. It made full use of available data acquisition technologies, including CT scanning and 3D laser scanning, to realize the 3D representation of the virtual objects, and of VTK interaction technology to realize the man-machine interaction. In the present study, we used 3D CT images with 1 mm slice distance, which generated a good 3D model of the whole femur and tibia. In actual use, the CT scan could be performed on only the hip, knee and ankle joints to build the axes of the lower limb, which reduces the number of images. The article provides a full set of virtual surgery procedures; it supports pre-operative prosthesis selection, osteotomy planning, and other assembly simulations to improve the accuracy of the surgery. Further tests will focus on improving the simulation procedure and the human-machine interaction. The accuracy of the system will be evaluated and improved through calibration.

ACKNOWLEDGMENT

The work described in this paper has been supported by the National High Technology Research and Development Program of China under Grant 2006AA02Z4E7 and the National Natural Science Foundation of China under Grant 30772195. The assistance of Dr. Zhou Yixing and Dr. Tang Jing in the definition of the orthopaedic procedure is gratefully acknowledged.

REFERENCES

[1] S. H. Park, Y. S. Yoon, L. H. Kim, S. H. Lee, M. Han. Virtual Knee Joint Replacement Surgery System. Geometric Modeling and Imaging, GMAI 2007, Proceedings, 2007: 79-84.
[2] M. Fadda, D. Bertelli, S. Martelli, M. Marcacci, P. Dario, C. Paggetti, D. Caramella, D. Trippi. Computer Assisted Planning for Total Knee Arthroplasty. Computers in Biology and Medicine, 2007, 37(12): 1771-1779.
[3] L. Nofrini, F. La Palombara, M. Marcacci, S. Martelli, F. Iacono. Planning of Total Knee Replacement: Analysis of the Critical Parameters Influencing the Implant. Annual International Conference of the IEEE Engineering in Medicine and Biology, Proceedings, 2000, 3: 1861-1863.
Author: Guangzhi Wang
Institute: Department of Biomedical Engineering, Tsinghua University
Street: Medical Science Building, Tsinghua University
City: Beijing
Country: China
Email: [email protected]
Motor Learning of Normal Subjects Exercised with a Shoulder-Elbow Rehabilitation Robot

H.H. Lin1, M.S. Ju2, C.C.K. Lin3, Y.N. Sun1 and S.M. Chen4

1 Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
2 Department of Mechanical Engineering, National Cheng Kung University, Tainan, Taiwan
3 Department of Neurology, National Cheng Kung University Hospital, Tainan, Taiwan
4 Department of Physical Medicine and Rehabilitation, National Cheng Kung University Hospital, Tainan, Taiwan
Abstract — A shoulder-elbow rehabilitation robot has been developed as a clinical treatment to facilitate motor learning and accelerate the recovery of motor function in stroke patients. However, the connection between motor learning and muscle activation patterns in stroke patients remains unknown. This study tried to fill that gap by examining the muscle coordination and motor learning strategies of normal subjects while they interacted with the rehabilitation robot. A Hill-type biomechanical model based on twelve shoulder and elbow muscles was constructed for the upper limb to simulate the interaction. Two normal subjects were recruited to perform upper limb circular tracking movements, clockwise and counterclockwise, on the transverse plane at shoulder level in a designed force field generated by the rehabilitation robot. From the inverse dynamics analysis, the interaction was analyzed and the patterns of muscle activation were calculated. EMG signals of eight upper limb muscles were also measured for model validation and observation of muscle coordination. Principal component analysis (PCA) was performed to distinguish different groups of muscle co-activation. The results showed that the constructed biomechanical model may be used as a tool for evaluating the effects of treatment and as a blueprint for the design of training protocols for stroke patients.

Keywords — stroke, rehabilitation, biomechanical model, motor control, optimization.
I. INTRODUCTION

Rehabilitation programs are the mainstay of treatment for patients suffering from trauma, stroke or spinal cord injury. Conventionally, these programs rely heavily on the experience and manual manipulation of therapists on the patients' affected limbs. Since the number of patients is large and the treatment is time-consuming, it would be a great advance if robots could assist in performing treatments. Recently there has been much research on how to use robots to assist patients in rehabilitation [1-4]. Although many positive outcomes have been reported for robot-aided treatment, the clinical assessments currently used for stroke patients are based on subjective evaluation by physicians using assessment indices such as the modified Ashworth scale, Brunnstrom's stages and the Fugl-Meyer score. It is hard to describe the individual characteristics of stroke patients with those assessment indices. Hence, there is a need for studies of the connection between motor learning and muscle activation patterns in stroke patients. To fill this gap, the goal of this research was to construct a biomechanical model of the upper limb such that muscle activations can be estimated for both normal subjects and stroke patients. A better understanding of the motor control and recovery mechanisms of stroke patients during treatment can then be provided to physicians. Furthermore, customized treatments for stroke patients may be designed using these analyses to facilitate rehabilitation training.

II. MATERIALS AND METHODS

In our previous studies, a robot was developed for neurorehabilitation of the upper extremity [5]. The robot was designed to perform two-dimensional motion on the transverse plane and was able to provide a desired resistive or assistive force to the subjects. Upper limb circular tracking movements, clockwise and counterclockwise, on the transverse plane at shoulder level were hence performed in a designed force field. EMG signals were measured by eight surface electrodes (Delsys, Inc.) from the following muscles: pectoralis major (PEC), deltoid (DEL), biceps (BIC), triceps (TRIC), pronator teres (PTm), supinator (SUP), flexor digitorum superficialis (FDS), and extensor digitorum (ED). The signals were amplified with a gain of 1000 V/V, band-pass filtered from 20 Hz to 450 Hz, and sampled at 2.0 kHz per channel. Signals were processed by full-wave rectification and linear enveloping. Smoothed EMG signals were derived by normalizing the signals to the EMG of an isometric contraction and passing them through an adaptive filter (cutoff frequency 0.25-2.5 Hz) [6].
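The rectification-and-smoothing pipeline described above can be sketched as follows; a fixed-cutoff Butterworth low-pass filter stands in for the adaptive filter of [6], and the parameter values are illustrative:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def emg_envelope(emg, fs=2000.0, cutoff=2.5, order=4):
    """Full-wave rectification followed by a zero-phase low-pass filter
    (a fixed-cutoff stand-in for an adaptive 0.25-2.5 Hz filter)."""
    rectified = np.abs(emg)
    sos = butter(order, cutoff, fs=fs, btype="low", output="sos")
    return sosfiltfilt(sos, rectified)

# Synthetic 100 Hz burst modulated by a slow envelope, sampled at 2 kHz.
t = np.arange(0.0, 2.0, 1.0 / 2000.0)
raw = np.sin(2 * np.pi * 1.0 * t) ** 2 * np.sin(2 * np.pi * 100.0 * t)
env = emg_envelope(raw)
```

Normalization to an isometric-contraction reference would then be a simple division by that reference envelope's amplitude.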
A Hill-type biomechanical model including twelve muscles around the shoulder and elbow joints was constructed for the upper limb to simulate the interaction between the subjects and the rehabilitation robot. These muscles include the deltoid anterior (DA), deltoid middle (DM), deltoid posterior (DP), teres major (TM), pectoralis major (PM), triceps brachii (TB), biceps brachii (BB), anconeus (ANC), brachialis (BRA), brachioradialis (BRD), pronator teres (PT), and extensor carpi radialis (ECRL). The free body diagram of this model is shown in Fig. 1. According to Fig. 1(c), the dynamic equations of the upper arm are

\vec{F}_1 + \vec{F}_2 = m_1 \vec{a}_1 ,  (1)
\vec{L}_1 \times \vec{F}_2 + \vec{\tau}_1 + \vec{\tau}_2 = \vec{L}_{g1} \times m_1 \vec{a}_1 + I_1 \vec{\alpha}_1 .  (2)

Likewise, the dynamic equations of the forearm, as illustrated in Fig. 1(b), are

\vec{R} + \vec{F}_2 = m_2 \vec{a}_2 ,  (3)
\vec{L}_2 \times \vec{R} + \vec{\tau}_2 = \vec{L}_{g2} \times m_2 \vec{a}_2 + I_2 \vec{\alpha}_2 ,  (4)

where the subscripts 1 and 2 represent the shoulder and elbow, respectively; R is the resistive force imposed at the subject's wrist by the robot; F is the force; L is the segment length of the arm; Lg is the length from the center of mass to the proximal end; τ is the torque; m is the mass; a and α are the acceleration and angular acceleration, respectively; and I is the moment of inertia.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1032–1036, 2009 www.springerlink.com
Fig. 1 Free body diagram of the upper-limb biomechanical model

With the measurement data and the musculoskeletal parameters from anthropometry, the required forces and torques at the shoulder and elbow for the arm's movements can be solved by inverse analysis (Fig. 2).

Fig. 2 Inverse analysis of the human-robot interaction

In order to solve the load-sharing problem among the individual muscle forces, a nonlinear optimization problem was formulated as in [7, 8]:

\min J = \sum_{j=1}^{12} \left( \frac{f_j}{\mathrm{PCSA}_j} \right)^2  (5)

subject to

\sum_{j=1}^{n} \vec{d}_j \times f_j - \vec{\tau}_1 = 0 ,  (6)
\sum_{j=1}^{m} \vec{d}_j \times f_j - \vec{\tau}_2 = 0 ,  (7)
f_j \ge 0 ,  (8)
f_j - F_{oj} \le 0 ,  (9)

where f_j, d_j, and PCSA_j are the muscle force, moment arm, and physiological cross-sectional area of muscle j, respectively; n is the number of muscles around the shoulder and m is the number of muscles around the elbow; and F_{oj} is the optimal muscle force of muscle j. Incorporating the curves of the muscle force-length and force-velocity properties, the muscle activations were computed by

A_j = \frac{f_j}{f_{oj} f_{LT} f_{FV}} ,  (10)

where A_j is the activation of muscle j, f_{oj} is the isometric force of muscle j, and f_{LT} and f_{FV} are the force-length and force-velocity factors, respectively.
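A toy instance of the load-sharing formulation can be solved with SciPy's SLSQP solver; the numbers below are hypothetical, and the paper does not specify which solver the authors used:

```python
import numpy as np
from scipy.optimize import minimize

# Toy load-sharing: two elbow muscles must jointly produce a 1.2 N·m flexion torque.
d = np.array([0.03, -0.02])       # moment arms (m); the second muscle is an antagonist
pcsa = np.array([10.0, 5.0])      # physiological cross-sectional areas (cm^2)
f_max = np.array([800.0, 400.0])  # optimal (maximal) muscle forces (N)
tau = 1.2                         # required joint torque (N·m)

res = minimize(
    lambda f: np.sum((f / pcsa) ** 2),                           # objective, cf. (5)
    x0=np.array([10.0, 10.0]),
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda f: d @ f - tau}],  # torque balance, cf. (6)/(7)
    bounds=[(0.0, fm) for fm in f_max],                          # 0 <= f_j <= F_oj, cf. (8)/(9)
)
```

Here the optimum silences the antagonist, so the agonist alone carries the torque (f1 = tau / d1 = 40 N).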
In order to gain more insight into the co-activation between muscles during a tracking movement, principal component analysis (PCA) was applied [9]. The purpose of the PCA is to reduce the dimensionality of the dependent variables to several orthogonal components. Hence, the covariance matrix of the calculated muscle activations was formed and then decomposed by computing the corresponding eigenvectors and eigenvalues. According to the eigenvalues, the number of principal components (PCs) was chosen to retain most of the major information hidden in the data, such that different groups of muscle co-activation could be distinguished.

III. RESULTS AND DISCUSSIONS

An example of the calculated muscle activations is shown in Fig. 3 for a normal subject tracking in the CW direction. Fig. 4 shows the muscle activations of a tracking cycle in both the CW and CCW directions for a normal subject. In this figure, the left and right plots indicate results in the CW and CCW tracking directions, respectively. From top to bottom, the first plot shows the wrist's trajectories along the X axis (solid line) and Y axis (dashed line); the second plot shows the angles of the shoulder (solid line) and elbow (dashed line). All muscle activations are presented in the third to seventh plots. The muscle activations of TB, BRA, ECRL, TM, and DP are drawn in solid lines; dashed lines show those of BB, BRD, PT, PM and DM; dotted lines are used for ANC and DA. From this figure, one can see that in the CW direction TB was activated first, from 0% to 45% of the cycle, followed by BB from 35% to 70% and then TB again from 70% to 100%. On the other hand, in the CCW direction, BB was activated first, from 0% to 45%, followed by TB from 41% to 95% and then BB from 95% to 100% to complete the cycle. PM and DA were activated at the same time since they are both prime movers for horizontal flexion at the shoulder joint. In the constructed model, all muscles were assumed to be mono-articular except the biceps and the triceps, which were modeled as bi-articular, so that agonist and antagonist muscles cannot be activated simultaneously. In contrast, the biceps and triceps may show temporary co-activation due to the continuity and correlation of the movements, as seen in Fig. 4.
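The PCA step described in the Methods reduces to an eigendecomposition of the activation covariance matrix; a sketch (the function name is illustrative):

```python
import numpy as np

def pca_components(activations):
    """PCA of a (samples, muscles) activation matrix via its covariance matrix.

    Returns the eigenvalues in descending order and the matching
    eigenvectors as columns (the principal-component coefficients)."""
    centered = activations - activations.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)   # eigh: the covariance matrix is symmetric
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

# The share of variance explained by each PC is vals / vals.sum(),
# which is how statements like "PC1 explains 70%" are obtained.
```

Retaining the leading components whose eigenvalues dominate the total variance gives the low-dimensional co-activation description used in the analysis.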
Comparisons between the measured EMGs and the computed muscle activations are shown in Fig. 5 for the CW direction as an example. The measured EMGs are presented as dotted lines, and the computed muscle activations as solid lines for TB, BB, PM, and DM. The dashed and dash-dotted lines illustrate the muscle activations of DA and DP, respectively. The muscle activation patterns obtained from the biomechanical model were consistent with the EMG signals. However, only one EMG signal could be acquired for the deltoid muscle, whereas three heads of the deltoid were constructed in the model. As a result, the estimated muscle activations may provide more detailed information than the EMG can offer. The corresponding correlation analysis between the computed muscle activations and the measured EMGs is shown in Table 1. The poor correlation of the PM muscle may result from noise contamination of the EMG measurements, or the surface electrode may not have been placed correctly. For the CCW direction, the calculated muscle activation of DM showed only a short period of activation, which may explain the poor correlation with the measured EMG signals.
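The coefficients in Table 1 are, presumably, Pearson correlations between each computed activation trace and the corresponding measured EMG envelope; a sketch with synthetic traces (the data below are illustrative, not the paper's):

```python
import numpy as np

def activation_emg_correlation(activation, emg):
    """Pearson correlation between a computed activation trace and an EMG envelope."""
    return np.corrcoef(activation, emg)[0, 1]

# A well-matched pair: the "measured" envelope is a scaled, offset copy of the model output.
t = np.linspace(0.0, 1.0, 200)
activation = np.clip(np.sin(2 * np.pi * t), 0.0, None)
emg = 0.8 * activation + 0.05
r = activation_emg_correlation(activation, emg)
```

A coefficient near 1 indicates the model reproduces the measured activation timing, as for TB and DM in the CW direction.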
Fig. 4 Muscle activations of a tracking cycle: CW (left), CCW (right)
Fig. 3 Example of the calculated muscle activations

Fig. 5 Comparisons between EMG measurements and calculated muscle activations for a normal subject in the CW direction
Fig. 6 Values of PC1 (solid) and PC2 (dashed) in the CW direction

Fig. 6 shows the results obtained from the PCA for a tracking cycle in the CW direction as an example. Since the first and second principal components (PC1, PC2) explain 70% and 10% of the data variance, respectively, the analysis therefore focuses on PC1 (solid line) and PC2 (dashed line). The coefficients of PC1 and PC2 are listed in Table 2. From the plot and the table, one can see that the values of PC1 were positive from around 50% to 82% of the cycle, contributed mainly by PM and DA. From 35% to 60% of the cycle, the values of PC2 were positive due to the contribution of BB. For the rest of the tracking cycle, both PC1 and PC2 were negative, which may be explained by TB and DP.

Fig. 7 shows the calculated muscle activation patterns of one cycle for a stroke patient in the CW (left plots) and CCW (right plots) directions while performing circular tracking movements. Observe that the stroke patient showed uncoordinated TB and BB activation in the CW tracking cycle, and that secondary movers such as BRA, BRD, ECRL and PT were activated at the early stage of the tracking cycle. The same situation can also be found in the CCW direction. Furthermore, the stroke patient's TB was activated first in the CCW direction, and DP and DM were activated in the first 0-30% of the cycle, whereas they remained silent for the normal subjects.

IV. CONCLUSIONS
With the help of the biomechanical model, the pattern of muscle activation of the upper limb of stroke patients can be estimated. The information may be utilized as a blueprint for the design of training protocols. In the future, the time-course variation of these muscle activations may provide an assessment tool for stroke patients during robot-aided rehabilitation.

Table 1 Correlation coefficients between computed muscle activations and the measured EMGs

Muscle   CW      CCW
TB       0.727   0.663
BB       0.798   0.401
PM       0.294   0.676
DM       0.851   -0.025

Table 2 Coefficients of the PC values

Muscle   PC1       PC2
TB       -0.3125   -0.6796
BB        0.0646    0.3083
ANC       0.0767   -0.0729
BRA      -0.1171   -0.0159
BRD      -0.0674    0.0015
ECRL     -0.0325    0.0138
TM       -0.1438   -0.2125
PM        0.5645   -0.2820
DP       -0.2836   -0.4287
DM       -0.0737   -0.1088
DA        0.6689   -0.3422
PT       -0.0171    0.0049

Fig. 7 Muscle activations of a stroke patient in circular tracking movements (left CW, right CCW)

ACKNOWLEDGMENT

The research is supported by the Ministry of Economic Affairs, Taiwan, under contract 97-EC-17-A-19-S1-053.

REFERENCES

1. Krebs HI, Hogan N (2006) Therapeutic robotics: A technology push. Proc IEEE 94:1727-1738.
2. Hesse S, Werner C, Pohl M, et al. (2005) Computerized arm training improves the motor control of the severely affected arm after stroke: a single-blinded randomized trial in two centers. Stroke 36:1960-1966.
3. Colombo R, Pisano F, Micera S, et al. (2005) Robotic techniques for upper limb evaluation and rehabilitation of stroke patients. IEEE Trans Neural Syst Rehabil Eng 13:311-324.
4. Dipietro L, Krebs HI, Fasoli SE, et al. (2007) Changing motor synergies in chronic stroke. J Neurophysiol 98:757-768.
5. Ju MS, Lin CCK, Lin DH, Hwang IS, Chen SM (2005) A rehabilitation robot with force-position hybrid fuzzy controller: hybrid fuzzy control of rehabilitation robot. IEEE Trans Neural Syst Rehabil Eng 13:349-358.
6. Cheng HS, Ju MS, Lin CCK (2003) Improving elbow torque output of stroke patients with assistive torque controlled by EMG signals. Trans ASME J Biomech Eng 125:881-886.
7. Happee R (1994) Inverse dynamic optimization including muscular dynamics, a new simulation method applied to goal directed movements. J Biomech 27:953-960.
8. Challis JH (1997) Producing physiologically realistic individual muscle force estimations by imposing constraints when using optimization techniques. Med Eng Phys 19:253-261.
9. Johnson DE (1998) Applied Multivariate Methods for Data Analysts. Duxbury Press, Pacific Grove, CA.
Using Virtual Markers to Explore Kinematics of Articular Bearing Surfaces of Knee Joints

Guangzhi Wang1, Zhonglin Zhu1, Hui Ding1, Xiao Dang1, Jing Tang2 and Yixin Zhou2

1 Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
2 Beijing Ji Shui Tan Hospital, the 4th Medical College of Peking University, Beijing, China
Abstract — This paper proposes a method of using virtual markers to track the movement of the articular bearing surfaces of the femur, tibia and patella simultaneously. With this method, the femur, tibia and patella are treated as three rigid bodies connected by soft tissue, so that each articular surface moves with its corresponding bone in three-dimensional (3D) space. The contact kinematics of the articular bearing surfaces can be worked out by tracking the position and orientation of the bones during knee flexion. To perform this method, three sets of tracking markers were attached tightly to the femur, tibia and patella to track the 3D motion of the lower limb. The articular bearing surfaces of the knee joint were digitized to create three groups of registered virtual markers associated with the tracking markers of each bone, respectively. During knee flexion, the 3D movements of the femur, tibia and patella were tracked by capturing the motion of the tracking markers. Then the 3D coordinates of each virtual marker were calculated using the registration information of the virtual markers. Thus, the trajectories of the virtual markers could be tracked during knee flexion, indicating the real kinematics of the bearing surfaces of the knee joint. Three fresh-frozen lower limbs were tested before and after total knee replacement using this technique. The test results show that this method can be used to explore the contact condition of articular bearing surfaces accurately.

Keywords — total knee arthroplasty, kinematics, articular surfaces, motion tracking, virtual marker.
I. INTRODUCTION

It is known that the geometry of the articular surfaces can affect the location of the contact point during knee motion and ultimately the trajectory of the leg. In total knee arthroplasty (TKA), the position and orientation of the prosthesis components can significantly affect the pattern of knee motion. Surgical implantation strategies, such as the placement and sloping of the components, can have dramatic effects on gait. Moreover, the geometry, position and orientation of the prosthesis components affect the mechanics of the knee joint, which is correlated with the reliability of the TKA. Therefore, accurate knowledge of the in vivo kinematics of the human knee is important in order to improve the treatment of knee pathologies. Knee kinematics has been measured extensively in cadavers and in living subjects [1, 2]. However, direct measurement of the contact kinematics of articular bearing surfaces remains a technical challenge in biomedical engineering. Previous studies have used various technologies to measure kinematics in human subjects, including video, computerized tomography (CT) and magnetic resonance imaging (MRI) techniques. In recent years, image-based 3D models and 2D fluoroscopic image measurements have been widely used [3, 4]. Some recent studies have implemented 3D models generated from CT and MRI scans to estimate the motion of the contact points of the femur on the tibial plateau using the bony geometry of the femur and tibia [2, 5]. However, it is hard to obtain the geometry and acquire the motion of the bones simultaneously with image techniques during knee flexion in order to quantify tibiofemoral contact at consecutive joint angles. The objective of this study was to quantify the contact kinematics between the tibial and femoral cartilage and/or implanted components during consecutive knee flexion. We used a new methodology to track the movement of virtual markers located on the articular bearing surfaces of the femur, tibia and patella simultaneously. Thus the tibiofemoral articular contact of different legs can be compared using models obtained by digitizing the bony surface geometries of the knee joints or using computer models of the artificial knee joint components. The kinematics of the articular bearing surfaces before and after total knee replacement can be compared using this technology.

II. MATERIALS AND METHODS

A. Materials

Three fresh-frozen knee joint specimens (total femur to mid-tibia) were used to perform the test. All specimens were anatomically normal, with no varus or valgus deformities. The institutional review board approved the study. Before the test, three metal screws were inserted into the patella, tibia, and femur.
A small X-shaped traceable frame, with a marker on each of its four arms, was fixed tightly to each metal screw to capture the movement of the bones in six degrees of freedom (DOF).
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1037–1041, 2009 www.springerlink.com
The specimens were tested before and after implantation with prostheses of different designs, such as the Smith & Nephew GENESIS II patella-preserving and high-flexion insert replacement, by the same surgeon. Ligament balancing was performed with the femoral and tibial components in their neutral positions.
B. Experiment Setup

1) Weightbearing apparatus: A custom-built Oxford-type weightbearing rig (Fig. 1) actively loaded the knee by moving the femoral head to produce continuous flexion and extension, simulating deep knee bends before and after the TKA operation. The quadriceps muscles were loaded by dead weights via thin wires pulling the rope attached to each individual muscle head of the quadriceps. The load was intended to mimic the passive resistance of the quadriceps tendon and to ensure that the patella remained in contact with the femur. The simulated ankle joint allowed rotations about three orthogonal axes and no translations. The simulated hip joint allowed vertical translation, rotation in the coronal plane and flexion, resulting in two rotational and one translational degrees of freedom. Therefore, the knee retained its full six degrees of freedom.

Fig. 1 Weightbearing flexion rig and motion tracking markers. (1) Hip joint, (2) traceable frame fixed to femur, (3) traceable frame fixed to patella, (4) traceable frame fixed to tibia, (5) dead weights.

2) Measurement Devices: To determine the tibiofemoral and patellofemoral tracking characteristics throughout knee flexion and extension, we firmly attached X-shaped traceable frames of marker arrays, each with four infrared-emitting diodes (IREDs), one on each of the four arms, to the femur, tibia and patella using metal screws. The three-dimensional positions and orientations of the bones were tracked with an Optotrak Certus™ optoelectronic camera system (Northern Digital, Waterloo, Canada) via the traceable frames. The root-mean-square (RMS) accuracy for each IRED was 0.1 mm in the plane of flexion and 0.15 mm perpendicular to the plane of flexion. The weight of the patellar tracking frame and associated cable was less than 15 g; its effect on the kinematics would therefore be easily offset by the quadriceps load.

Fig. 2 Traceable frame and digitizer probe for selection of virtual markers. (1) femur tracking frame, (2) tibia tracking frame, (3) digitizer probe, (4) virtual marker.

C. Traceable Markers and Virtual Markers
_______________________________________________________________
With the small traceable frames tightly fixed to the bones (Fig. 2), the 6-DOF movement of each bone can be captured by the Optotrak Certus™ motion capture system. Each bone was regarded as a rigid body moving in 3D space. Thus, any specific point located in the bone moves with the rigid body (bone), and the trajectory of this point in 3D space can be calculated by resolving the rotation and translation of the rigid body. This specific point can therefore be tracked as if it were a virtual traceable marker. If we define a group of virtual markers on each of the articular bearing surfaces, the 3D movements of the articular surface of each bone can be tracked to explore the contact kinematics of the bones during consecutive flexion and extension of the knee. A digitizer probe with 3 IRED markers (Fig. 2) was used to digitize the articular bearing surfaces of the femur, tibia and patella. The tip of the digitizer probe was calibrated to an accuracy of 0.1 mm (RMS) before use. The virtual markers on the articular bearing surface of each bone were obtained by moving the probe tip over the cartilage while recording the 3D locations of the probe tip and of the traceable frame mounted on each bone simultaneously with the Optotrak Certus™ system. The recorded 3D locations of the virtual markers with respect to the traceable frame of the corresponding bone can then be transferred to the world coordinate system to analyze the relative motion of the bones.
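The bookkeeping behind virtual markers can be sketched in a few lines. This is an illustrative sketch, not the authors' code: `R` and `T` stand for the rotation matrix and translation vector that the tracking system reports for a bone's traceable frame, and the function names are hypothetical.

```python
import numpy as np

def world_to_frame(p_world, R, T):
    """Express a digitized probe-tip point in the bone's traceable-frame
    coordinates: p_frame = R^T (p_world - T)."""
    return R.T @ (np.asarray(p_world) - np.asarray(T))

def frame_to_world(p_frame, R, T):
    """Inverse mapping: p_world = R p_frame + T."""
    return R @ np.asarray(p_frame) + np.asarray(T)

# Example: frame pose is a 90-degree rotation about z plus a translation
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T = np.array([10.0, 0.0, 0.0])

tip = np.array([10.0, 5.0, 2.0])   # probe tip in world coordinates
vm = world_to_frame(tip, Rz, T)    # stored once as a virtual marker

# In later frames, the stored marker is re-projected with the frame's new pose
assert np.allclose(frame_to_world(vm, Rz, T), tip)
```

Because the marker is stored in the bone's frame, it rides along with the bone through every subsequent pose the camera system reports.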
IFMBE Proceedings Vol. 23
_________________________________________________________________
Using Virtual Markers to Explore Kinematics of Articular Bearing Surfaces of Knee Joints
D. 3D model of articular surfaces of knee joints

For accurate modeling of the bones, a set of point palpations was performed to sample the coordinates of the articular bearing surfaces of the femur, tibia and patella. The articular geometries can thus be represented as point clouds, and the 3D surface models were generated by orderly connecting the points into a mesh grid. To model the implanted components, we obtained the articular geometries using a three-dimensional laser scanner, which reverse-engineered the articular surfaces into point clouds. The sampling resolution of the laser scanner was about 0.5 mm.

E. Experiment procedure

Before the TKA procedure, the specimens were set up in the weightbearing rig. Three traceable frames were tightly fixed to the femur, tibia and patella to track the motion of the bones. The knee was then opened to perform the virtual marker palpation of the articular surfaces using the digitizer probe. More than 300 points were obtained for each articular surface. After suture of the knee, flexion and extension tests were performed with the weightbearing rig. The 3D movements of the femur, tibia and patella were measured via the traceable frames and recorded at 25 Hz by the Optotrak Certus system during consecutive knee flexion and extension. Six trials were repeated for each specimen. After the normal knee flexion test, the TKA procedure was performed by the same surgeon. During the TKA procedure, registration points were selected to sample the location of the prosthesis components relative to the traceable frames; these were used to register the 3D prosthesis models as the virtual markers. The specimens were then tested again with the weightbearing rig and the Optotrak Certus™ 3D motion tracking system.

F. Data Processing

1) Coordinate Transformation: During the knee tests, the Optotrak Certus system records the translation vector and rotation matrix of each traceable frame individually.
To identify the kinematics of the articular surfaces, the relative 3D movements of the virtual markers need to be resolved. The virtual markers were defined by digitizing the articular surface within the traceable-frame coordinate system of each bone, so a rigid-body transformation is needed to transfer the locations of the virtual markers from the frame coordinate system to the world coordinate system. With the rotation matrix R and translation vector T of each traceable frame recorded by the Optotrak Certus system, the coordinate of each virtual marker P_fi in the traceable-frame coordinate system can be transferred to P_wi in the world coordinate system by the equation:

    P_wi = R · P_fi + T        (1)
where R is an orthogonal rotation matrix and T is the displacement vector indicating the location of the traceable frame in the world coordinate system.

2) Re-orientation Process: With the weightbearing rig, all the bones moved simultaneously throughout knee flexion and extension. To represent the kinematics of the articular bearing surfaces of the knee joint, the relative movements of the tibiofemoral and patellofemoral joints are desired. In this study, relative movements were calculated using a custom-written Matlab program. The kinematics of the joints were determined from the traceable markers and represented as Cardan angles as described by Tupling and Pierrynowski [6]. Translations and rotations of the patella and tibia relative to the femur were calculated from the measured 6-DOF data of the traceable frames.

3) Registration of 3D model and virtual markers: Because the number of virtual markers sampled with the digitizer probe is limited, the generated mesh grid is sparse. We therefore obtained the articular geometries from three-dimensional laser scans, which reverse-engineered the articular surfaces into higher-resolution point clouds and produced fine 3D surface models. To improve the accuracy of the kinematic analysis of the articular surfaces, we registered the 3D models of the joint components to the virtual markers and transferred each 3D model to the position determined by the virtual markers at each joint flexion angle.

4) Kinematics of the 3D Model of Articular Surfaces: To compare the kinematics of the articular bearing surfaces at different knee positions and across different subjects, the registered 3D models of the bones or implanted components were used to characterize the roll and glide motion of the components in three-dimensional space. By putting the motions of the 3D models together, the roll-and-glide character and the contact area of the articular bearing surfaces during knee flexion can be observed.
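The relative-pose and Cardan-angle steps above can be sketched as follows. This is a hedged illustration: the helper names are hypothetical, and the x-y-z sequence shown is one common Cardan convention; the clinical axis assignments of Tupling and Pierrynowski may differ.

```python
import numpy as np

def relative_pose(R_fem, T_fem, R_seg, T_seg):
    """Pose of a segment (tibia or patella) expressed in the femur frame."""
    return R_fem.T @ R_seg, R_fem.T @ (T_seg - T_fem)

def cardan_xyz(R):
    """Cardan angles (a, b, c) such that R = Rx(a) @ Ry(b) @ Rz(c)."""
    b = np.arcsin(np.clip(R[0, 2], -1.0, 1.0))
    a = np.arctan2(-R[1, 2], R[2, 2])
    c = np.arctan2(-R[0, 1], R[0, 0])
    return a, b, c

# Elementary rotation builders for a round-trip check
def Rx(t): c, s = np.cos(t), np.sin(t); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def Ry(t): c, s = np.cos(t), np.sin(t); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def Rz(t): c, s = np.cos(t), np.sin(t); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

R = Rx(0.3) @ Ry(0.2) @ Rz(0.1)
a, b, c = cardan_xyz(R)   # recovers 0.3, 0.2, 0.1
```

The same `relative_pose` output feeds Eq. (1): applying the relative rotation and translation to the femur-frame virtual markers yields their positions relative to the moving tibia or patella.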
Guangzhi Wang, Zhonglin Zhu, Hui Ding, Xiao Dang, Jing Tang and Yixin Zhou

III. RESULTS

A. Virtual markers and 3D model of articular surface: Fig. 3 (left column) depicts samples of virtual markers (point clouds) of the surface of a femoral component obtained by the digitizer and the 3D laser scanner, respectively. The corresponding 3D model of the femoral component is reconstructed by orderly connecting the points into a mesh grid and calculating the associated surface normals. The bottom right model was obtained by multiple passes of the 3D laser scanner.

B. Kinematics of articular bearing surface: After obtaining the 3D models of the articular bearing surfaces and recording the motion of the femur, tibia and patella, the kinematics of the articular bearing surfaces can be analyzed. The translations and rotations of the patella and tibia relative to the femur were calculated from the measured movement of the traceable frames in three-dimensional space. Fig. 4 illustrates the articular bearing surfaces of the femoral condyles and tibial plateau at two joint flexion angles. The left figure shows both femoral condyles in balanced contact with the tibial plateau; the right figure shows that with more joint flexion the lateral femoral condyle loses contact with the tibial plateau.
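"Orderly connecting the points" can be made concrete: if the surface points are digitized row by row on an approximately regular grid, each grid cell yields two triangles, and surface normals follow from edge cross products. A minimal sketch (assuming a row-major point order; not the authors' actual meshing code):

```python
import numpy as np

def grid_to_triangles(n_rows, n_cols):
    """Triangle index list for points stored row by row:
    each grid cell is split into two triangles."""
    tris = []
    for i in range(n_rows - 1):
        for j in range(n_cols - 1):
            k = i * n_cols + j            # top-left corner of the cell
            tris.append((k, k + 1, k + n_cols))
            tris.append((k + 1, k + n_cols + 1, k + n_cols))
    return np.array(tris)

def triangle_normals(points, tris):
    """Unit normal of each triangle from the cross product of its edges."""
    p = points[tris]
    n = np.cross(p[:, 1] - p[:, 0], p[:, 2] - p[:, 0])
    return n / np.linalg.norm(n, axis=1, keepdims=True)

# A flat 2x2 patch digitized row by row
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
tris = grid_to_triangles(2, 2)            # two triangles
normals = triangle_normals(pts, tris)     # both point along +z
```

A denser laser-scanned cloud would be meshed the same way once resampled onto a grid; irregular clouds need a surface-reconstruction step instead.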
Fig. 4. Contact analysis of the articular bearing surfaces of the femoral condyles and tibial plateau at two joint flexion angles.
Fig. 5. Contact analysis of the articular surfaces of the femoral condyles, tibial plateau and patella at three joint flexion angles.
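The contact judgments illustrated in Figs. 4 and 5 amount to a proximity test between two tracked surfaces. A brute-force sketch (the 0.5 mm threshold is an illustrative value, not taken from the paper):

```python
import numpy as np

def contact_indices(surf_a, surf_b, threshold=0.5):
    """Indices of points on surface A whose nearest neighbour on surface B
    is closer than `threshold` (same units as the coordinates, e.g. mm).
    Brute force O(N*M); a k-d tree scales better for dense scans."""
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=2)
    return np.where(d.min(axis=1) < threshold)[0]

# Toy example: only the first femoral point lies within 0.5 mm of the tibia
femur = np.array([[0.0, 0.0, 0.2], [5.0, 0.0, 3.0]])
tibia = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
touching = contact_indices(femur, tibia)
```

Running the test at every recorded flexion angle yields the moving contact patch; an empty index set on one condyle corresponds to the lift-off seen in the right panel of Fig. 4.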
Fig. 3. Point cloud and corresponding 3D surface model. The left column shows the virtual markers digitized using the 3D probe and the point cloud obtained via the 3D laser scanner; the right column illustrates the corresponding 3D surface models.

Fig. 5 gives another example of 3D kinematic analysis of the articular bearing surfaces. Three surface models were obtained from the virtual markers of the femur, tibia and patella joint surfaces using the 3D digitizer. The left figure illustrates that at the beginning of knee flexion the patella contacts the femoral condyles on both sides. In the middle figure, the patella contacts the femoral condyles only on the lateral side. The right subplot indicates that at high knee flexion the patella sinks into the deep trochlear groove of the femur and no longer contacts the femoral condyles.

IV. DISCUSSION AND CONCLUSIONS

Using the virtual marker tracking technique, we tracked the contact kinematics of the articular bearing surfaces of the femur, tibia and patella simultaneously. Three-dimensional joint kinematics and contact locations were determined at consecutive joint angles. This method is potentially useful for the accurate measurement of joint kinematics. The contact kinematics of the tibiofemoral articulation with different prosthesis component designs can be compared by registering the models to the moving virtual markers at consecutive joint angles. The kinematics of the articular bearing surfaces before and after total knee replacement can be compared using this technology. The accuracy of our method depends highly on the representation of the articular geometry. The virtual marker tracking technique can also be used with CT-based bone model surfaces and MR-based articular cartilage model surfaces.
ACKNOWLEDGMENT

This work was supported in part by the National High Technology Research and Development Program of China under grant 2006AA02Z4E7, and by the National Natural Science Foundation of China under grant 30772195.
REFERENCES

1. Anglin C, Brimacombe JM, Wilson DR et al. (2007) Intraoperative vs. weightbearing patellar kinematics in total knee arthroplasty: a cadaveric study. Clin Biomech 23(1):60–70
2. Moro-oka T, Hamai S, Miura H et al. (2008) Dynamic activity dependence of in vivo normal knee kinematics. J Orthop Res 26(4):428–434
3. Li G, DeFrate LE, Park SE et al. (2005) In vivo articular cartilage contact kinematics of the knee: an investigation using dual-orthogonal fluoroscopy and magnetic resonance image-based computer models. Am J Sports Med 33:102–107
4. Patel V, Hall K, Ries M et al. (2004) A three-dimensional MRI analysis of knee kinematics. J Orthop Res 22:283–292
5. Fregly BJ, Rahman HA, Banks SA (2005) Theoretical accuracy of model-based shape matching for measuring natural knee kinematics with single-plane fluoroscopy. J Biomech Eng 127:692–699
6. Tupling SJ, Pierrynowski MR (1987) Use of Cardan angles to locate rigid bodies in three-dimensional space. Med Biol Eng Comput 25:527–532
Author: Guangzhi Wang
Institute: Department of Biomedical Engineering, Tsinghua University
Street: Medical Science Building, Tsinghua University
City: Beijing
Country: China
Email: [email protected]
Simultaneous Recording of Physiological Parameters in Video-EEG Laboratory in Clinical and Research Settings

R. Bridzik, V. Novák, M. Penhaker

Video-EEG/Sleep laboratory, Clinic of Child Neurology, University Hospital Ostrava, Czech Republic
VSB – Technical University of Ostrava, FEECS, Ostrava, Czech Republic

Abstract — The possibility of flexibly connecting various diagnostic devices and searching for correlations between recordings of different physiological functions is a powerful diagnostic tool. There are many well-established combinations, such as video-EEG or video-PSG. In some cases, however, we need a combination of parameters tailored to the individual patient. Finding such solutions for the patient is an interesting competence of the biomedical engineer/EEG technician.

Keywords — Video-EEG, pH-metry, device connection, biosignal processing
I. INTRODUCTION

Simultaneous recording of several physiological parameters is often used in the diagnosis of many diseases in the video-EEG laboratory. Some combinations are fixed; for example, polysomnography comprises recordings of EEG, EOG, EMG, respiratory effort, airflow and SpO2. Many patients, however, need an individualized combination of physiological parameters according to the diagnostic situation. For example, sometimes the correlation between the sleep stage and the moment of urination (wet time) is needed; at other times the question is the correlation between respiratory parameters and reflux of gastric content into the esophagus (detected by esophageal pH-metry). For these individualized combinations of physiological parameters, there are several possibilities for a quick and flexible approach:
II. FLEXIBLE APPROACH

A. Functional addition through the video

This is a very simple but flexible and fast way. When combining an external measurement with the video-EEG equipment, the analogue output of the external device can be recorded on the video. This simple approach is useful especially when the moment of some event should be recorded; for example, the LED on a urination detector blinks when the patient wets. The accuracy is limited by the resolution and frame rate of the video.

B. Software association of recordings off-line

This means a subsequent combination of the digital recordings after the end of the measurement. This solution is applicable when the accuracy requirement is about 1 second. It is necessary to compare the internal clocks of both devices before and after the measurement; any difference should be compensated by linear interpolation. A biological check of synchrony is useful, too. The data format for the measurement interchange depends on the software of the auxiliary device: it may be binary, simple ASCII, the European Data Format (used for EEG/PSG data transfer), or XML, which in our opinion is the best choice.

1) One of the simplest data formats appropriate for data interchange is ASCII. The data in a simple ASCII file are not structured, so it is rather complicated to read and analyze them (lack of standardization).
2) XML is a very flexible data format[3]. It has the advantage of a much better possibility to structure the data. The fact that this format is supported by MS-Excel, web browsers etc. is another advantage. The XML DOM interface is an ActiveX object supporting the browsing of XML data by any application using the XML modules[2].
3) The European Data Format is a widely used data standard[1,4]. Many laboratories use it for EEG data sharing. Unfortunately, it was developed predominantly for EEG and is not as flexible as XML.
4) The binary format of a physiological recording is simple and structured, but highly variable, and there is often a problem with lack of documentation.

C. Hardware combination of devices

Hardware interconnection is possible when the secondary device has an analogue output (which can be connected to an auxiliary data channel of the EEG/PSG machine).

Case report: We evaluated the case of a 3-month-old boy with paroxysmal apnea and convulsions. The EEG and sleep EEG were without epileptic spikes and sharp waves.
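The off-line association in B can be sketched as follows. The function and variable names are hypothetical; the clock readings would come from comparing the two devices' internal clocks before and after the measurement.

```python
def map_to_eeg_time(t_aux, aux_start, aux_end, eeg_start, eeg_end):
    """Linearly interpolate an auxiliary-device timestamp onto the EEG
    clock, compensating both a constant offset and a linear drift between
    the two internal clocks (read before and after the measurement)."""
    frac = (t_aux - aux_start) / (aux_end - aux_start)
    return eeg_start + frac * (eeg_end - eeg_start)

# Example: the auxiliary clock starts 2 s late and runs 0.1% fast
# over a one-hour recording
t = map_to_eeg_time(1802.0, aux_start=2.0, aux_end=3602.0,
                    eeg_start=0.0, eeg_end=3596.4)
# t == 1798.2 s on the EEG clock
```

Every auxiliary sample timestamp is mapped this way before merging, which keeps the residual error well under the 1-second accuracy requirement stated above.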
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1042–1043, 2009 www.springerlink.com
The anti-epileptic drugs were ineffective. Because of symptoms of possible gastro-esophageal reflux, we decided to perform a complex recording of several physiological parameters: video, EEG, ECG, SpO2 and esophageal pH-metry. We had no SpO2 measurement device with a digital output at that time, so we used the analogue output in the video channel. The data from the pH meter were associated after the end of the recording. The correlation revealed episodes of gastro-esophageal reflux coinciding with drops in peripheral hemoglobin saturation. The diagnosis of gastro-esophageal reflux disease, not epilepsy, was established. The child was treated for the reflux and the administration of antiepileptic drugs was stopped. The patient is currently doing well.

Fig. 1 Combination of 3 devices: EEG, SpO2 (in the video channel) and esophageal pH associated off-line

III. CONCLUSIONS

There are several physiological parameters useful for correlation in the video-EEG or polysomnographic laboratory. We recommend combining, e.g., pH-metry with video-EEG. In some cases it is useful to interconnect a bicycle ergometer with the detectors of EEG and cardiologic parameters (ECG, SpO2). In children with nocturnal wetting, the correlation of interest is between the EEG (sleep stage) and the urination probe. The physician makes the clinical decision, but the engineer provides the technical solution. In our opinion, the best choice for data interchange between different equipment in one laboratory, and even for data sharing between laboratories, is the XML format[3]. We have developed an XML parser for biomedical data in the C++ language.
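In Python, a parser in the spirit of the C++ one mentioned above takes only a few lines with the standard library. The element layout below is hypothetical, since the paper does not specify its XML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical layout; the actual schema used in the laboratory may differ.
doc = """<recording>
  <channel name="pH" unit="pH" rate="1">
    <sample t="0">4.1</sample>
    <sample t="1">3.8</sample>
  </channel>
</recording>"""

root = ET.fromstring(doc)
channels = {}
for ch in root.iter("channel"):
    # Collect (time, value) pairs per channel, keyed by channel name
    samples = [(float(s.get("t")), float(s.text)) for s in ch.iter("sample")]
    channels[ch.get("name")] = {"unit": ch.get("unit"), "samples": samples}
```

Because the structure is explicit, any application (spreadsheet, browser, analysis script) can pick out the channels it needs, which is exactly the interchange advantage argued for XML above.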
REFERENCES

1. Kemp B, Värri A, Rosa AC, Nielsen KD, Gade J (1992) A simple format for exchange of digitized polygraphic recordings. Electroenceph Clin Neurophysiol 82:391–393
2. Stenback J, Hégaret P, Hors A (2003) XML Document Object Model HTML, W3C
3. XML at http://www.xmlfiles.com
4. EDF at http://www.edfplus.info

Author: Radim Bridzik
Institute: University Hospital Ostrava
Street: 17. listopadu 1790
City: Ostrava, 708 52
Country: Czech Republic
Email: [email protected]
Preliminary Modeling for Intra-Body Communication*

Y.M. Gao1,3, S.H. Pun1,2, P.U. Mak1,2, M. Du1,3 and M.I. Vai1,2

1 Key Laboratory of Medical Instrumentation & Pharmaceutical Technology, Fujian Province, CHINA
2 Department of Electrical and Electronics Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, CHINA
3 Institute of Precision Instrument, Fuzhou University, Fuzhou, CHINA

Abstract — Intra-Body Communication (IBC) is an interesting communication methodology that has emerged in recent years. Benefiting from the conductive property of the human body, IBC treats the human body as the transmission medium for sending and receiving electrical signals. As a result, interconnecting cables and electromagnetic interference can be greatly reduced for devices communicating on the human body. These advantages are significant for home health care systems, in which an abundance of interconnecting cables is otherwise needed. Furthermore, IBC technology also provides an alternative solution for communicating with implanted devices. Pioneering researchers have proposed two general methods for the realization of IBC, namely the "capacitive coupling technique" and the "galvanic coupling/waveguide type technique". Of the two, the galvanic technique requires neither a return path nor a common reference. This feature makes the technology attractive for networking biomedical devices on the human body, and it has drawn much attention in recent studies. Thus, in this work, a preliminary 3D electromagnetic (EM) model is developed to provide insight into the electric signal distribution within the human body. The development of a mathematical model also facilitates future research on the communication theory of IBC, such as optimizing the carrier frequency, identifying channel characteristics, and developing suitable modulation/demodulation techniques. In this paper, a mathematical model, which employs a homogeneous cylinder in analogy to a human limb, is obtained by solving the Laplace equation analytically. The proposed model is simple and preliminary, yet it encourages the development of better models in the next phase and lays a foundation for future research and development of Intra-Body Communication.
Keywords — Intra-Body Communication, Body Area Network, Galvanic coupling type/Waveguide type technique.
I. INTRODUCTION

Home health care systems and long-term physiological parameter monitoring systems are important for chronic disease patients and the elderly. They can provide the patient and paramedic with instant health information and generate alerts in emergencies. As a result, the research and development of these systems are attracting much attention from researchers. Currently, one of the main research directions is to develop versatile, intelligent, accurate, and convenient systems, as this elevates performance for the users.

Intra-Body Communication (IBC) is a new communication technique that employs the human body as the transmission medium for electrical signals. This special type of communication methodology treats human tissue as a "cable" for electrical signal transmission. The merits of this technique are the removal of connection cables, the reduction of possible electromagnetic radiation, and a lower susceptibility to interference from external noise. These features are attracting much interest in the field of Body Area Networks (BAN), especially since IBC works on and relies on the human body. It can also improve home health care and long-term monitoring systems, as abundant cables create problems for this kind of system. These features and advantages motivate the development and advance the research of IBC.

In this article, the authors present a mathematical model for IBC. In this model, a human limb is generalized as a homogeneous cylinder of 5-cm radius and 30-cm length. After applying the quasi-static conditions, the governing equation reduces to the Laplace equation and a mathematical model for IBC can be obtained. The proposed model is simple and preliminary, yet it encourages the development of better models in the next phase and lays a foundation for future research and development of IBC.

II. BACKGROUND

IBC was first introduced by T.G. Zimmerman in 1995[1]. He employed the electrostatic coupling technique to demonstrate the feasibility of the idea and built the first prototype system. The design consisted of a transmitter and a receiver, which had electrodes attached to the human body as depicted in Figure 1. The electric field of the transmitter is induced by the electrodes, coupled to the human body, and flows towards the ground; the electrodes of the receiver then detect the electric field flowing through the human body with respect to the ground. Normally, only a small portion of the electric field flowed to other parts of the body
* The work presented in this paper is supported by The Science and Technology Development Fund of Macau under grant 014/2007/A1 and by the Research Committee of the University of Macau under Grants RG051/05-06S/VMI/FST, RG061/06-07S/VMI/FST, and RG075/07-08S/VMI/FST.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1044–1048, 2009 www.springerlink.com
and was eventually detected by the receiver. Thus, the sensitivity of the receiver should be high enough to achieve good and reliable performance.

Figure 1 Electrostatic coupling technique

Since the development of the prototype IBC system by Zimmerman, several interesting applications, such as heart rate and oxygen saturation sensors[2], sensors[3, 4], modeling[5, 6], and various communication techniques (On/Off Keying (OOK)[5], Differential Binary Phase Shift Keying (BPSK), Frequency Shift Keying (FSK)[2, 7, 8], etc.) have been employed to improve the performance and stability of IBC.

The Galvanic coupling/Waveguide type technique[9-13] is an alternative configuration for the implementation of IBC. As shown in Figure 2, electrodes of the transmitter are attached to one end of the human body while electrodes of the receiver are attached to the other end. When electrical signals are applied to the electrodes of the transmitter, the signal propagates along the human body in analogy to an electromagnetic wave propagating along a waveguide. The electrodes of the receiver, attached to the other end of the body, detect and recover the information from the transmitter. Unlike the electrostatic coupling technique, the electrical signal propagates via ionic fluid and is thus less dependent on the surrounding environment. Since the technique is relatively new, research on galvanic coupling technology has mainly focused on applications, especially biomedical applications. Although the achieved data rate of the Galvanic coupling type technique is low, its independence from the earth ground and current propagation within human tissue make it more attractive than the electrostatic coupling technique. For these reasons, the authors' research is focused on the Galvanic coupling type IBC.

Figure 2 Waveguide type technique

III. MATHEMATICAL MODEL

Since the first report of IBC appeared in 1995, the Galvanic coupling technique has received much attention from researchers and engineers. During this period, various prototypes and experiments have been built and conducted. Besides demonstrating the feasibility of the method and developing applications for IBC, one of the main research goals is to investigate the electrical properties of IBC. Currently, different researchers employ different carrier frequencies, coupling amplitudes, encoding schemes, electrode locations, etc. The reasons behind this may be quite complicated; however, a lack of good understanding of the IBC propagation mechanism may be one explanation. Based on this observation, the authors attempt to develop a model that gives insight into the electrical and communication properties of Galvanic coupling type IBC. As the initial attempt at modeling IBC, a human limb is selected and ring-type electrodes are attached to either band (at z=z1 and z=z2) as shown in Figure 2. The irregular geometry of a human limb is difficult to describe mathematically, so a homogeneous cylindrical muscle of length h=30 cm and radius a=5 cm is studied instead (as depicted in Figure 3). The Maxwell Equations are then applied to this simplified problem to study the mechanism of IBC and derive the mathematical model.

Figure 3 Simplified model with ring electrodes
Additionally, as the material in the IBC problem is biological tissue operating at low frequency[9-11], the quasi-static approximation may be used. The quasi-static approximation states that if the dimension of the studied problem is less than 1 meter and Eq. (1) is satisfied, the capacitive effect can be neglected [14, 15]:

    ω ε_r ε_0 / σ ≪ 1        (1)

In Eq. (1), ε_r and σ represent the relative permittivity and conductivity of the tissue, and ω denotes the operating angular frequency. Table 1 lists the quasi-static condition evaluated with muscle's electrical characteristics[16].

Table 1 Quasi-static condition

                                10 kHz    100 kHz
  Conductivity σ (S/m)          3.4e-1    3.9e-1
  Relative permittivity ε_r     3.0e4     8.0e3
  ω ε_r ε_0 / σ                 ≈0.049    ≈0.114

From Table 1, for operating frequencies below 100 kHz the quasi-static approximation can be applied; according to prior research, the operating frequency of the waveguide type technique is typically less than 100 kHz[9, 11]. Thus, the governing equation becomes the Laplace equation,

    ∇²Φ = 0        (2)

with the general side-excited boundary condition of the cylinder given by Eq. (3), where the ring-type injected current signal is defined by Eq. (4).

By solving the above formulation, an analytical solution, Eq. (5), for the voltage distribution within the cylinder can be obtained, with coefficients given by Eq. (6), in which I_0 represents the modified Bessel function of the first kind of order 0. With Eqs. (5) and (6), the potential on the surface and within the human limb can be found spatially.

Figure 4 shows the plot of the electrical signal inside the human limb (X-Z sectional view at y=0 of the voltage distribution). Due to the electrode configuration employed in this case, the voltage induced by the electrodes is symmetric with respect to the z-axis, and the voltage at the same z level on the surface of the human limb is equal. This suggests that the electrodes of the receiver should not be placed at the same z level for this parallel ring type electrode case.

Figure 4 X-Z sectional view (at y=0) of the voltage distribution of the cylinder

Figure 5 Voltage distribution on the surface of the human limb along the z-axis, from Electrode 1 (J1) to Electrode 2 (J2)
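The entries of Table 1 can be reproduced directly from Eq. (1). A quick check, using the muscle values from [16]:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def quasistatic_ratio(f_hz, sigma, eps_r):
    """omega * eps_r * eps0 / sigma; must be << 1 for Eq. (1) to hold."""
    return 2 * math.pi * f_hz * eps_r * EPS0 / sigma

r10k = quasistatic_ratio(10e3, sigma=0.34, eps_r=3.0e4)    # ~0.049
r100k = quasistatic_ratio(100e3, sigma=0.39, eps_r=8.0e3)  # ~0.114
```

Both ratios stay well below 1 up to 100 kHz, which is what justifies dropping the capacitive term and reducing the governing equation to Laplace's equation.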
On the other hand, from Figure 5, if the electrodes are placed along the z-axis direction, the voltage difference between the receiver electrodes will be high (e.g., if the receiver electrodes are located at z=0.20 m and z=0.25 m, the voltage difference would be around 1 mV). Thus, according to the mathematical model derived in this paper, the receiver electrodes of IBC are better placed at different positions along the z-axis.
IV. SUMMARY

Intra-Body Communication (IBC) is an interesting communication methodology that has emerged in recent years. Benefiting from the conductive property of the human body, IBC treats the human body as the transmission medium for sending electrical signals. In this paper, a mathematical model, which employs a homogeneous cylinder in analogy to a human limb, is obtained by solving the IBC problem analytically. By applying the quasi-static approximation, the Maxwell equations can be simplified to the Laplace equation, and an analytical solution for the voltage distribution can be obtained in certain symmetrical cases. The obtained model reveals the propagation mechanism of the ring-type electrodes in the human limb and offers insight into the placement of the receiver electrodes. The proposed model is simple and preliminary, yet it provides a foundation for analysis, encourages the development of better models in the next phase, and lays the groundwork for future research and development of IBC.
ACKNOWLEDGEMENT

The authors would like to express their gratitude to The Science and Technology Development Fund of Macau and the Research Committee of the University of Macau for their kind support. The authors also appreciate the continuous support of their colleagues in the Institute of Precision Instrument, Fuzhou University, and in the Biomedical Engineering Laboratory, Microprocessor Laboratory, and Control and Automation Laboratory of the University of Macau.
REFERENCES

[1] T. G. Zimmerman, (1995), Personal Area Networks (PAN): Near-Field Intra-Body Communication, Master Thesis, Media Arts and Sciences, Massachusetts Institute of Technology.
[2] K. Hachisuka, A. Nakata, T. Takeda, Y. Terauchi, K. Shiba, K. Sasaki, H. Hosaka, and K. Itao, (2003), Development and performance analysis of an Intra-Body Communication Device, in The 12th International Conference on Transducers, Solid-State Sensors, Actuators and Microsystems 2003, pp. 1722-1725.
[3] M. Shinagawa, M. Fukumoto, K. Ochiai, and H. Kyuragi, (2004), A Near-Field-Sensing Transceiver for Intrabody Communication Based on the Electrooptic Effect, IEEE Transactions on Instrumentation and Measurement, vol. 53, pp. 1533-1538.
[4] A.-i. Sasaki, M. Shinagawa, and K. Ochiai, (2004), Sensitive and stable electro-optic sensor for intrabody communication, in The 17th Annual Meeting of the IEEE Lasers and Electro-Optics Society, 2004 (LEOS 2004), pp. 122-123.
[5] K. Fujii, K. Ito, and S. Tajima, (2003), A study on the receiving signal level in relation with the location of electrodes for wearable devices using human body as a transmission channel, in IEEE Antennas and Propagation Society International Symposium 2003, pp. 1071-1074.
[6] K. Fujii and K. Ito, (2004), Evaluation of the Received Signal Level in Relation to the Size and Carrier Frequencies of the Wearable Device Using Human Body as a Transmission Channel, in IEEE Antennas and Propagation Society International Symposium 2004, pp. 105-108.
[7] E. R. Post, M. Reynolds, M. Gray, J. Paradiso, and N. Gershenfeld, (1997), Intrabody buses for data and power, in First International Symposium on Wearable Computers, 1997, pp. 52-55.
[8] K. Partridge, B. Dahlquist, A. Veiseh, A. Cain, A. Foreman, J. Goldberg, and G. Borriello, (2001), Empirical Measurements of Intrabody Communication Performance under Varied Physical Configurations, in Symposium on User Interface Software and Technology, Orlando, Florida, pp. 183-190.
[9] D. P. Lindsey, E. L. McKee, M. L. Hull, and S. M. Howell, (1998), A new technique for transmission of signals from implantable transducers, IEEE Transactions on Biomedical Engineering, vol. 45, pp. 614-619.
[10] M. Oberle, (2002), Low power system-on-chip for biomedical applications, PhD Thesis, ETH No. 14509, IIS/ETH Zurich.
[11] T. Handa, S. Shoji, S. Ike, S. Takeda, and T. Sekiguchi, (1997), A very low-power consumption wireless ECG monitoring system using body as a signal transmission medium, in 1997 International Conference on Solid State Sensors and Actuators (TRANSDUCERS '97), pp. 1003-1006.
[12] M. Wegmueller, A. Lehner, J. Froehlich, R. Reutemann, M. Oberle, N. Felber, N. Kuster, O. Hess, and W. Fichtner, (2005), Measurement System for the Characterization of the Human Body as a Communication Channel at Low Frequency, in 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE-EMBS 2005), pp. 3502-3505.
[13] M. S. Wegmueller, (2007), Intra-body communication for biomedical sensor networks, PhD Thesis, Swiss Federal Institute of Technology Zurich (ETH).
[14] R. Plonsey, (1995), Volume conductor theory, in The Biomedical Engineering Handbook, J. D. Bronzino, Ed., CRC Press LLC, pp. 119-125.
IFMBE Proceedings Vol. 23
[15] R. M. Gulrajani (1998) Bioelectricity and Bioelectromagnetism. John Wiley & Sons, Inc.
[16] C. Gabriel and S. Gabriel (1996) Compilation of the dielectric properties of body tissues at RF and Microwave Frequencies. Occupational and Environmental Health Directorate, Radiofrequency Radiation Division, Brooks Air Force Base, Texas (USA).
Author: Gao Yue Ming
Institute: Key Laboratory of Medical Instrumentation & Pharmaceutical Technology, Fujian; Institute of Precision Instrument, Fuzhou University
Street: No. 523, Gongye Road
City: Fuzhou, Fujian
Country: China
Email: [email protected]
Application of the Home Telecare System in the Treatment of Diabetic Foot Syndrome

P. Ladyzynski1, J.M. Wojcicki1, P. Foltynski1, G. Rosinski2, J. Krzymien2, B. Mrozikiewicz-Rakowska2, K. Migalska-Musial1 and W. Karnafel2

1 Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, Warsaw, Poland
2 Department and Clinic of Gastroenterology and Metabolic Diseases, Medical University of Warsaw, Warsaw, Poland
Abstract — Diabetes is a group of metabolic diseases characterized by an elevated blood glucose level, affecting more than 200 million people worldwide. Diabetes causes a number of late complications, among which diabetic foot syndrome (DFS) is one of the most dramatic as a major cause of lower limb amputations. At IBBE PAS the TeleDiaFoS system, aimed at monitoring of DFS treatment, was developed. In the system, the Central Clinical Server is accessed by the Patient's Module over a wireless internet connection to send wound pictures, blood glucose (BG) readings and blood pressure (BP) values. Clinical verification of the TeleDiaFoS system has been organized as a randomized 90-day trial with a study group and a control group, each consisting of 10 type 2 diabetic patients. Currently, the evaluation of the first patient, treated with multi-injection insulin delivery and antibiotic (dalacin) therapy, has been completed. Home telecare therapy led to a 12-fold reduction of the wound surface (from 356 mm2 to 29 mm2). During the whole 90-day period BP was controlled efficiently; however, an acceptable BG level was not maintained. In conclusion, application of the system leads to more effective realization of the DFS therapy and has a positive impact on the patient's comfort.

Keywords — Diabetes mellitus, diabetic foot syndrome, telemedicine, wound healing, foot scanner
I. INTRODUCTION

Diabetes mellitus is listed among the most serious and dangerous chronic conditions, such as heart disease and stroke (cardiovascular diseases), cancer, asthma and chronic obstructive pulmonary disease, which together cause 30 million deaths worldwide each year. The number of diabetic patients has already exceeded 200 million, and according to predictions it will reach 330 million by the year 2030. The International Diabetes Federation and the WHO describe the dramatic worldwide increase in the number of cases of diabetes as "the most challenging health problem of the 21st century" [1]. Diabetes is characterized by a high blood glucose level (hyperglycemia) that impairs the function of the body's proteins, leading to a number of late complications related to microangiopathy, macroangiopathy and neuropathy.
Among these complications, diabetic foot syndrome (DFS), which is caused by neuropathy, angiopathy or both, is one of the most dramatic. DFS is manifested in the form of ulcers, which are unhealing open sores or wounds most often located on the sole of the foot. If left untreated, these foot ulcers can become infected and may lead to amputation. It is estimated that DFS is a direct cause of more than 50% of all lower limb amputations in the world.

Diabetes has so far been an incurable disease. Therefore, patients with diabetes require life-long continuous monitoring and treatment aimed not only at controlling the blood glucose level but also at screening for, diagnosing and taking care of the late complications of diabetes. Telemonitoring and telecare have been used to serve these purposes since the early 1990s. The first home telecare systems were applied during insulin therapy of diabetic patients for remote monitoring of the patient's metabolic state, life-style and course of treatment. These systems contributed to improvements in the reliability of the data collected and reported by the patients, the frequency of data check-ups performed by the physician, and various measures of the outcome of the treatment [2-4]. Later, the use of telemedicine spread to screening, monitoring and treating the late complications of diabetes, such as diabetic retinopathy, cardiovascular disorders and DFS. In the last few years, several video consultation services for DFS patients, based on modern mobile phones with integrated digital cameras, have been reported [5-8].

In the first part of the manuscript the design and implementation of the TeleDiaFoS home telecare system developed by the authors is presented. In the second part, a clinical trial validating the feasibility of the system is described and the results of the first clinical application of the TeleDiaFoS system are shown.
II. TELEDIAFOS SYSTEM

The TeleDiaFoS system consists of the Central Clinical Server (CCS), the Diabetologist's Workstation, the Podiatrist's
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1049–1052, 2009 www.springerlink.com
Workstation and a set of the Patient’s Modules (PM). The structure of the system is presented in Figure 1.
The data can be entered manually or uploaded automatically from the PM, which is a homecare device (see Figure 3).
Fig. 1 Structure of the TeleDiaFoS system

The main database of the system is located in the CCS. This database [9] contains 5 modules:

- Patient's Registration, which groups the standard patient data such as name, address, and telephone number.
- Examinations, which collects the data related to the patient's general state, diabetes treatment and diabetic foot examination (vibration sensing test, pressure and temperature sensing test, ankle-brachial index, pedobarographic examination, images from scintillation camera, angio-CT, USG, RTG). This module also holds the results of the tissue evaporative water loss (TEWL) measurements, pH and temperature of the skin, and the images of the foot ulcers taken with an optical scanner during routine check-ups in the foot care clinic.
- Telemedical Data, which gathers the data sent by the patient using the homecare PM, i.e. blood glucose concentration, arterial blood pressure, heart rate and images of the foot wounds.
- Graphical Presentation, which makes it possible to annotate schematic pictures of the patient's feet by marking the location and size of the ulcers and other changes on the foot such as tinea pedis, gangrene, pseudomonas aeruginosa and keratosis (see Figure 2). It is also possible to erase the parts of the foot image that correspond to amputated parts of the patient's foot. Additionally, the monitored physiological parameters, i.e. the blood glucose concentration, the arterial blood pressure and the heart rate, can be displayed on time graphs.
- Options, which enables synchronizing the database, changing the optical scanner, and displaying short information about the TeleDiaFoS database.
Fig. 2 Exemplary drawing of feet state. Wounds and tinea pedis are present on both feet.

The PM is operated by the patient using a simple two-button remote controller. Four LEDs located on the front panel of the PM signal the current state of the device. The following operations can be performed:

- taking pictures of the sole of the patient's foot with the built-in foot scanner,
- downloading the blood glucose data, the blood pressure recordings and the heart rate values from the memory of the external meters connected to the PM,
- switching the PM on and off.
The Accu-Chek Active glucometer (Roche, Germany) and the BP 3BU1-4 blood pressure meter (Taiwan) were selected to gather the patients' data. The PM is able to communicate with the glucometer and to download the data stored in its internal memory using an infrared interface (IrDA). Communication with the blood pressure meter takes place via the RS232 port.
and sent the data from the measuring devices 6 times. No technical problems were identified during the homecare test. The patient did not report any problems related to the operation of the PM.

III. CLINICAL VERIFICATION OF THE TELEDIAFOS
Fig. 3 Patient’s module with a foot prepared for scanning (upper picture); glucometer and blood pressure meter connected to PM (lower picture)
The pictures of the scanned foot, together with the blood glucose readings and the blood pressure values downloaded from the above-mentioned meters, can be sent to the CCS. The PM is connected to the Internet using the built-in wireless modem, which operates in HSDPA/UMTS/EDGE/GPRS modes. The modem software automatically selects the highest data transfer speed available in the network of the internet provider. The foot image files and the files containing the data downloaded from the external meters are transferred to the CCS automatically using the FTP protocol. On the server side, a specialized software service decodes the data and inserts them into the TeleDiaFoS database. Physicians can access the patients' data using workstations with locally replicated copies of the main database. The data on the local workstations are synchronized with the data stored in the main database through a virtual private network (VPN).

Proper operation of all building blocks of the TeleDiaFoS and the technical feasibility of the automatic teletransmission of the data from the PM were confirmed during extensive laboratory tests. Afterwards, initial tests in real homecare conditions were performed with one patient over a period of 32 days. The patient sent 6 pictures of the foot
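The automatic transfer step described above can be illustrated with a short sketch. This is not the actual TeleDiaFoS implementation: the host name, credentials, directory name and the `pending_files` helper are hypothetical placeholders; only the use of FTP for pushing image and meter-data files to the server follows the text.

```python
# Sketch of a PM-side upload step, assuming a plain FTP service on the CCS.
# Host, credentials and directory names below are illustrative only.
import ftplib
import os

def pending_files(directory, extensions=(".png", ".csv")):
    """Collect image and meter-data files waiting to be sent."""
    return sorted(
        os.path.join(directory, name)
        for name in os.listdir(directory)
        if name.lower().endswith(extensions)
    )

def upload_to_ccs(paths, host="ccs.example.org", user="pm01", password="secret"):
    """Push each file to the server's incoming directory via FTP."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd("incoming")
        for path in paths:
            with open(path, "rb") as handle:
                # The server-side service later decodes these files and
                # inserts the data into the TeleDiaFoS database.
                ftp.storbinary("STOR " + os.path.basename(path), handle)
```

In a setup like this, the homecare device only needs to call `upload_to_ccs(pending_files(data_dir))` after each scan or meter download; all decoding stays on the server side, as the text describes.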
Clinical verification of the TeleDiaFoS system has been organized as a randomized 90-day trial. The study protocol has been approved by the Local Ethical Committee. A target of 10 type 2 diabetic patients in the study group and 10 in the control group has been set. Each patient has to sign an informed consent form before participation. The inclusion criteria comprise the ability and willingness of the patient to perform everyday self-monitoring (i.e. blood glucose and blood pressure testing) and control (i.e. insulin injections), age between 45 and 65 years, neuropathic DFS with an ulcer located on the sole of one foot, and an ankle-brachial index greater than 0.7. Patients are excluded from the study in the case of proliferative retinopathy and/or maculopathy requiring active treatment, kidney dysfunction (creatinine > 2 mg/dL), cardiac insufficiency, cardiac infarction within the last 3 months before the study, or mental impairment. The study protocol includes 3 clinical visits: at the beginning of the study, after one month and at the end of the study. During the first visit each patient is trained to use the glucometer and the blood pressure meter and to operate the PM. Currently, the treatment of the first patient from the study group (a 66-year-old male with a 9-year history of diabetes, treated with multi-injection insulin delivery and antibiotic (dalacin) therapy) has been completed.
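The quantitative part of the enrolment rules above can be expressed as a simple check. The sketch below encodes only the numeric thresholds quoted in the text (age range, ankle-brachial index, creatinine); clinical-judgement items such as neuropathic DFS, retinopathy status, cardiac history and willingness to self-monitor are deliberately omitted, and the function name is ours, not the study protocol's.

```python
def meets_quantitative_criteria(age_years, ankle_brachial_index, creatinine_mg_dl):
    """Quantitative subset of the trial's inclusion/exclusion rules."""
    if not 45 <= age_years <= 65:      # inclusion: age between 45 and 65 years
        return False
    if ankle_brachial_index <= 0.7:    # inclusion: ankle-brachial index > 0.7
        return False
    if creatinine_mg_dl > 2.0:         # exclusion: kidney dysfunction
        return False
    return True

print(meets_quantitative_criteria(55, 0.9, 1.1))  # True
print(meets_quantitative_criteria(55, 0.6, 1.1))  # False: ABI too low
```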
Fig. 4 Blood pressure of the diabetic patient with DFS treated using the TeleDiaFoS
During the monitoring period 13 transmissions of the data were performed (on average once per week). Most of the time blood pressure was controlled satisfactorily (see Figure 4). However, acceptable glycemic control was not maintained (see Figure 5). Despite this, the treatment that
ACKNOWLEDGMENT

This work was financed from funds for scientific research in the years 2005–2008, within the framework of a research grant from the Polish Ministry of Science and Higher Education (Grant No. 3 T11E049 29).
REFERENCES
Fig. 5 Blood glucose concentration of the diabetic patient with DFS treated using TeleDiaFoS
Fig. 6 Exemplary pictures of the foot wound sent by the patient at the beginning of the study period (left), 23 days later (middle) and at the end of the study (right)

was supported by an application of the TeleDiaFoS system led to a more than 12-fold reduction of the wound surface, i.e. from an open wound with a surface of 356 mm2 to a minimal skin change measuring 29 mm2 (see Figure 6).

IV. CONCLUSIONS

The TeleDiaFoS system takes advantage of wireless telematic technology, which enables the physician to monitor and analyze patients' data remotely. Application of the system leads to more effective realization of the DFS therapy and has a positive impact on the patient's comfort, eliminating the inconvenience related to regular visits to the outpatient clinic.
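The fold reduction quoted for the wound follows directly from the two area measurements reported in the text:

```python
initial_area_mm2 = 356.0  # open wound at the start of the study
final_area_mm2 = 29.0     # residual skin change at the end of the study

fold_reduction = initial_area_mm2 / final_area_mm2
print(round(fold_reduction, 1))  # 12.3, i.e. "more than 12-fold"
```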
1. International Diabetes Federation at http://www.idf.org
2. Montani S, Bellazzi R, Quaglini S, d'Annunzio G (2001) Meta-analysis of the effect of the use of computer-based systems on the metabolic control of patients with diabetes mellitus. Diabetes Technol Therap 3:347–356
3. Farmer A, Gibson OJ, Tarassenko L, Neil A (2005) A systematic review of telemedicine interventions to support blood glucose self-monitoring in diabetes. Diabetes Med 22:1372–1378
4. Ladyzynski P, Wojcicki JM, Krzymien J et al. (2006) Mobile telecare system for intensive insulin treatment and patient education. First applications for newly diagnosed type 1 diabetic patients. Int J Artif Organs 29:1074–1081
5. Clemensen J, Larsen SB, Kirkevold M, Ejskjaer N (2008) Treatment of diabetic foot ulcers in the home: video consultations as an alternative to outpatient hospital care. Int J Telemed Appl 2008:132890 DOI 10.1155/2008/132890
6. Hsieh CH, Tsai HH, Yin JW et al. (2004) Teleconsultation with the mobile camera-phone in digital soft-tissue injury: a feasibility study. Plast Reconstr Surg 114:1776–1782
7. Wilbright WA, Birke JA, Patout CA et al. (2004) The use of telemedicine in the management of diabetes-related foot ulceration: a pilot study. Adv Skin Wound Care 17:232–238
8. Finkelstein SM, Speedie SM, Demiris G et al. (2004) Telehomecare: quality, perception, satisfaction. Telemed J e-Health 10:122–128
9. Foltynski P, Wojcicki JM, Migalska-Musial K et al. (2006) TeleDiaFoS database for monitoring diabetic foot patients. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., Seoul, Korea, 2006, p 4926
10. Foltynski P, Ladyzynski P, Migalska-Musial K et al. (2007) A new device for monitoring of foot wounds healing. Int J Artif Organs, vol. 30, Congress of Europ. Soc. Artif. Organs, Krems, Austria, 2007, p 746

Corresponding author: Piotr Ladyzynski, Ph.D.
Institute: Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences
Street: 4 Trojdena Street
City: 02-109 Warsaw
Country: Poland
Email: [email protected]
In-vitro Evaluation Method to Measure the Radial Force of Various Stents

Y. Okamoto1, T. Tanaka1, H. Kobashi1, K. Iwasaki2, M. Umezu1

1 Integrative Bioscience and Biomedical Engineering, TWIns, Waseda Univ., Japan
2 Waseda Institute for Advanced Study, Waseda Univ., Japan
Abstract — Endovascular treatments of stenosis in the carotid artery often utilize a self-expanding stent, and a variety of stents are commercially available. The radial force of a stent is a key parameter characterizing its mechanical properties, but there is no reliable evaluation method to standardize the force measurement. To measure the radial force accurately, the stent deformation has to be uniform over the whole structure so as to ensure the force balance. Here we aim to develop such a system for measuring stent radial force, and compare the results with those obtained by conventional techniques. The developed device consisted of a tubular stent holder and a load measuring system. The Precise, Protege and Easy Wall stents with a diameter (D) of 10 mm were placed by a delivery system. Radial forces were measured at D = 8, 6, 5, and 4 mm at a temperature of 37 °C. Maintaining the circular shape of the stents was found to substantially influence the magnitude of the radial force: the measured value was up to about 80% higher than that obtained by conventional techniques. These data show that the radial force has to be applied uniformly in the circumferential direction. Also, the Easy Wall stent showed a distinct tendency in contrast to the Precise and Protege stents, as it could be placed while eliminating the axial variation of mesh geometries. These results indicate that radial force measurements of stents have to ensure uniform mesh geometry in both the axial and circumferential directions. It was concluded that our device ensures reliable measurement of stent radial forces.
Measurements of stent radial force reported in the literature [1-4] often utilize a film-type jig to hold the stent under test [1, 2] (Fig. 1). In our laboratory, we previously used a comb-type jig (Fig. 2). However, these conventional systems have some disadvantages. First, stents tended to show varying cross-sectional patterns in the axial direction depending on the applied load. Second, the uniformity of the stent mesh was no longer maintained, resulting in locally varying mesh intervals. In particular, large deformation with an irregular mesh was often detected at the edge. In order to solve these problems, we aimed to develop a new system for measuring stent radial force. Here we compare the data with those obtained by our previous measurement system in order to establish a reliable methodology for stent radial force measurement.
Keywords — Stent, Radial force, Self-expanding stent, Mechanical property

Fig. 1 Schema and photograph of film-type jig
I. INTRODUCTION

Endovascular treatment of stenosis using a stent is a common procedure, and a variety of stents are available on the market. In clinical practice, however, the selection criteria for stents are often obscure and are determined empirically by surgeons. In order to establish a selection guideline for stents, it is mandatory to understand their mechanical properties, such as radial force, and to characterize them among a variety of stents under standardized conditions. In doing so, a reliable in vitro evaluation methodology has to be established.
Fig.2 Photograph of comb-type jig
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1053–1056, 2009 www.springerlink.com
II. MATERIAL AND METHOD

Fig. 3 shows a schema of the newly developed device for measuring stent radial force. It consists of a U-shaped stent holder and a load measuring system. A pair of stent holders forms a tubular geometry, in which a stent is placed by a delivery system. Fig. 4 shows a photograph of a delivered stent. As shown in the figure, the cross-section of the stent remains circular, and there is no marked variation of mesh-to-mesh pitch in the axial direction. Conventionally, the cross-sectional shape of the stent in response to the applied load depends on the geometry of the jig (Fig. 2), and the edge of the stent tends to be locally deformed. These drawbacks were eliminated in our current measuring system. Three commercially available stents were tested, namely the Precise, Protege and Easy Wall stents with a diameter (D) of 10 mm, as shown in Table 1. These stents differ in mesh geometry: the Precise and Protege stents have a similar zigzag shape fabricated by laser cutting of a Nitinol tube, while the Easy Wall stent is wire-braided. Experiments were conducted at a temperature of 37 °C. Radial forces were measured at D = 8, 6, 5, and 4 mm for each of the three stents.
Fig.3 Schema of a developed device for measuring a stent radial force
Fig.4 Photograph of a stent delivered
Table 1 Stents (Left: Precise, Center: Protege, Right: Easy Wall)

  Stent          Precise        Protege    Easy Wall
  Dia.-Length    10-40 mm       10-40 mm   10-32 mm
  Material       Ni-Ti          Ni-Ti      Elgiloy
  Manufacturer   Cordis / J&J   ev3        Boston Scientific
Fig. 5 Comparison of radial force between comb-type and tube-type jigs at a diameter of 8 mm (Left: Precise, Center: Protege, Right: Easy Wall)
Fig.7 Characteristics of a radial force
Fig.6 Photograph of irregularities on mesh of the Protege stent (D=8mm)
III. RESULTS AND DISCUSSION

Fig. 5 compares the results obtained with the comb-type and tube-type jigs for the Precise, Protege and Easy Wall stents at a diameter of 8 mm. For the Precise and Protege stents, the radial force measured with the tube-type jig was 45% and 79% higher, respectively, than that measured with the comb-type jig. The Easy Wall stent, on the other hand, showed the opposite tendency, with the tube-type jig giving a 29% lower value. In the case of the Precise and Protege stents, eliminating the clearance within the jig by using the tube-type one elevated the magnitude of the radial force. The comb-type jig has clearance at the corners, where the stents were deformed locally. As a result, the zigzag stent segments did not shrink in the circumferential direction, resulting in a lower radial force value. The Easy Wall stent, in contrast, has a diamond-shaped segment, which releases the radial force in the axial direction; this stent structure substantially reduces the radial force. In the comb-type jig, the Easy Wall stent received a counter force due to mechanical contact at the edge, whereas the tube-type jig maintained the same cross-sectional shape along the axis. These results indicate that the present tube-type jig system enables reliable measurement of stent radial force.

We also used a delivery system to insert the stents inside the jig, and found an interesting phenomenon for the Protege stent. On release from the delivery system, the Precise and Easy Wall stents did not show any mesh irregularities, whereas the Protege stent showed a local mesh deviation during release, as shown in Fig. 6. This phenomenon is attributed to the mesh design, and the use of a delivery system in testing stent radial force was thus found to be important.

Fig. 8 Extensional rate of the stent longitudinal length during radial force measurements

Fig. 7 characterizes the radial force measurements at diameters of 8, 6, 5 and 4 mm for the Precise, Protege and Easy Wall stents, while Fig. 8 shows the degree of extension of the longitudinal length of the stents. Comparing these two figures reveals an interesting trend in stent radial force. In Fig. 7, it can be observed that the Easy Wall stent did not show a marked elevation of radial force even under maximum compression. This was caused by the axial extension shown in Fig. 8. Indeed, the axial extension of the Precise and Protege stents did not vary substantially, which in turn led to a linear increase of the radial force with the applied load.
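The percentage differences quoted for the two jigs are plain relative changes between paired readings. In the sketch below the force values are illustrative placeholders chosen only to reproduce the quoted percentages; they are not the measured data.

```python
def percent_change(tube_reading, comb_reading):
    """Relative change of the tube-type jig reading versus the comb-type one."""
    return (tube_reading - comb_reading) / comb_reading * 100.0

# Illustrative readings (arbitrary units) reproducing the reported trends:
print(round(percent_change(1.45, 1.00)))  # 45  (Precise: tube-type higher)
print(round(percent_change(1.79, 1.00)))  # 79  (Protege: tube-type higher)
print(round(percent_change(0.71, 1.00)))  # -29 (Easy Wall: tube-type lower)
```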
IV. CONCLUSIONS

We demonstrated that the present system enables stent radial force to be measured more accurately than previous ones. It was also found that the radial force is substantially affected by the mesh structure. These data may provide useful knowledge for assessing the radial force in clinical practice and for establishing a selection guideline for stents in endovascular treatments.
ACKNOWLEDGMENT

This research was partly supported by a Health Science Research Grant (H20-IKOU-IPAN-001) from the Ministry of Health, Labour and Welfare, Japan; the Biomedical Engineering Research Project on Advanced Medical Treatment from Waseda University (A8201); and "Establishment of Consolidated Research Institute for Advanced Science and Medical Care", Encouraging Development Strategic Research Centers Program, the Special Coordination Funds for Promoting Science and Technology, Ministry of Education, Culture, Sports, Science and Technology, Japan.

REFERENCES

1. Stephan H, Jakub Wiskirchen, Gunnar Tepe, Michael Bitzer, Theodor W, Dieter Stoeckel, Claus D (2000) Physical Properties of Endovascular Stents: An Experimental Comparison. JVIR 11:645–654
2. Dieter Stoeckel, Alan Pelton, Tom Duerig (2003) Self-expanding nitinol stents: material and design considerations. Eur Radiol 14:293–301
3. John F. Dyet, William G. Watts, Duncan F. Ettles, Anthony A. Nicholson (2000) Mechanical Properties of Metallic Stents: How Do These Properties Influence the Choice of Stent for Specific Lesions? Cardiovasc Intervent Radiol 23:47–54
4. Regis Rieu, Paul Barragan, Catherine Masson, Jean Fuseri, Vincent Garitey, Marc Silvestri, Pierre Roquebert, Joël Sainsous (1999) Radial Force of Coronary Stents: A Comparative Analysis. Catheterization and Cardiovascular Interventions 46:380–391

Author: Y. Okamoto
Institute: Graduate School of Integrative Bioscience and Biomedical Engineering, TWIns, Waseda University
Street: 2-2 Wakamatsu-cho
City: Shinjuku, Tokyo
Country: Japan
Email: [email protected]
Motivating Children with Attention Deficiency Disorder Using Certain Behavior Modification Strategies

Huang Qunfang Jacklyn1, S. Ravichandran2

1 Student, Singapore Management University, Singapore
2 Faculty, Temasek Engineering School, Temasek Polytechnic, Singapore
Abstract — Attention-Deficit Hyperactivity Disorder (ADHD) is a neurobehavioral developmental disorder whose manifestations during childhood are characterized by a persistent pattern of inattention and/or hyperactivity. ADHD is currently considered a persistent and chronic condition for which no medical cure is available, although medication and therapy can treat the symptoms. Children with such neurobehavioral disorders exhibit behavior problems more often than children without disabilities. Teaching these students is more challenging and is considered one of the most important functions of special education. Since a common behavior modification strategy is not always useful in dealing with neurobehavioral disorders, it may be more appropriate to design strategies based on the cognitive ability of the subject undergoing treatment. This paper summarizes the knowledge gained by the authors in designing a specific cognitive strategy for children in the 8–12 year age group. The strategy discussed in this paper is focused more on educating children with attention deficit disorder than on treating hyperactivity, as hyperactivity is mostly managed by the use of medication. The suggested behavior modification strategy is based on the fact that children within this age group have a strong interest in specific activities involving audiovisual stimuli. The strategy aims at motivating the children to learn a specific task, such as solving problems in subjects like mathematics. The protocol basically motivates the children to focus and concentrate on solving a problem in order to be rewarded with audiovisual stimuli that are of interest to them. Though it is beyond the scope of this paper to provide a comprehensive picture of the neuro-cognitive modifications associated with this behavior modification strategy, it provides a realistic picture of the changes reflecting the improvements in the cognitive aspects of the subject.
Keywords — ADHD, Behavior modification, Cognitive strategy
I. INTRODUCTION Attention-Deficit/Hyperactivity Disorder (ADHD) is typically first diagnosed in childhood, with symptoms persisting into adolescence and adulthood [1]. ADHD is characterized by inattention, impulsivity, and hyperactivity. It
has recently been estimated to affect 3.5% of school-aged children worldwide and is said to be one of the most common psychiatric disorders of youth [2, 3]. Children with these problems are often unpopular and lack reciprocal friendships, but are not always aware of their own unpopularity [4]. Though these symptoms tend to decline with age, at least 50% of children with ADHD still experience impairing symptoms in adulthood [5]. Despite the vast literature supporting the efficacy of stimulant medication in the treatment of attention-deficit/hyperactivity disorder (ADHD), several limitations of pharmacological treatments highlight the clear need for effective alternative psychosocial treatments. There is also evidence that interventions involving both school and parent training have come to be classified as "empirically validated treatments" [6].
II. DEFINITION OF ADHD

Most studies on ADHD offer explanations of how it affects the cognitive processes of subjects [7, 8, 9]. However, the field has not been able to develop an evidence-based intervention grounded in cognitive-behavioural principles [10]. Indeed, part of the challenge has been the changing conceptualization of the etiology and behavioural profile of ADHD [11]. Based on the pattern of symptoms present, the Diagnostic and Statistical Manual (DSM-IV) distinguishes three subtypes, namely the inattentive, the hyperactive/impulsive, and the combined subtype [12]. The latter is by far the most common. Children with the inattentive ADHD subtype suffer from rejection by peers, and their social behaviour is of a more subdued nature [4]. Children with ADHD display chronic and pervasive difficulties with inattention, hyperactivity, and impulsivity that result in profound impairments in academic and social functioning across multiple settings (typically at home, in school, and with peers). Some children with ADHD, despite having difficulties in performing tasks at school, have a healthy social life, while others appear to be unable to connect with peers and other people in a normal way [4].
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1057–1060, 2009 www.springerlink.com
Huang Qunfang Jacklyn, S. Ravichandran
III. NEUROPSYCHOLOGICAL ASPECTS OF ADHD

Explanations of the neuropsychological aspects of ADHD in the available literature imply that the disorder has a pathological basis in descending cortical control systems such as the prefrontal cortex and frontostriatal networks [13]. The literature also states that the prefrontal cortex is believed to bias subsidiary processing implemented by posterior cortical and subcortical regions in accordance with current goals. One of the mechanisms by which the prefrontal cortex is believed to exert its coordinating effects is via the suppression or gating of neural signalling irrelevant to the current behaviour or goal [14, 15].
Treatment of ADHD is currently managed through various methods such as pharmacological intervention, behaviour interventions, cognitive-behaviour treatment, and neural-based intervention. Each treatment has its own merits and demerits, and quite often a combination of two interventions is sought by clinicians in treating children with ADHD. We do not intend to elaborate on the merits and demerits of each intervention in this paper, but provide a brief insight into each intervention and focus more on the behaviour intervention and strategies beneficial in motivating children to develop attention for solving problems in education.
IV. ASSESSMENT OF ADHD

The use of the "Disruptive Behaviour Rating Scale" to obtain parent and teacher ratings of the 18 symptoms of DSM-IV ADHD has been reported as a qualitative yardstick to categorize subjects under various types depending on the manifested symptoms [16]. Children were categorized as having ADHD only if symptoms were present prior to age seven and if these symptoms caused significant functional impairment. Individuals with six or more symptoms of inattention but fewer than six symptoms of hyperactivity-impulsivity were identified as Predominantly Inattentive Type, participants with six or more symptoms of hyperactivity-impulsivity but fewer than six symptoms of inattention were categorized as Predominantly Hyperactive/Impulsive Type, and individuals with six or more symptoms on both dimensions were coded as Combined Type [17].

V. CHANGING CONCEPTUALIZATION OF ADHD

The changing conceptualization of ADHD requires the field to constantly evaluate the adequacy of treatment approaches, and the working theory of ADHD has an important impact on those approaches. It has been reported that sensory integration therapy for children with ADHD involves compensatory strategies, such as altering or avoiding certain stimulus characteristics of the physical environment. This therapy is based on the assumption that ADHD is an "input" problem, and that sensory and motor input is processed and interpreted in faulty ways which result in inappropriate responses to sensory stimuli [18]. In ADHD cases, we are dealing more with cognitive deficiencies as opposed to distortions [19].
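The subtype assignment rules described in section IV can be sketched as a simple classifier. This is an illustrative sketch only, not the instrument itself: the function name and inputs are hypothetical, the six-symptom thresholds follow the DSM-IV criteria cited above, and the onset-before-age-seven and functional-impairment requirements are assumed to have been verified separately.

```python
def dsm_iv_subtype(inattention_count, hyperactivity_impulsivity_count):
    """Assign a DSM-IV ADHD subtype from symptom counts (0-9 per dimension).

    Counts would come from parent/teacher ratings on the 18-item
    Disruptive Behaviour Rating Scale; onset before age seven and
    significant functional impairment are checked separately.
    """
    inattentive = inattention_count >= 6
    hyperactive = hyperactivity_impulsivity_count >= 6
    if inattentive and hyperactive:
        return "Combined Type"
    if inattentive:
        return "Predominantly Inattentive Type"
    if hyperactive:
        return "Predominantly Hyperactive/Impulsive Type"
    return "Criteria not met"
```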
VI. TREATMENTS CURRENTLY BEING PRACTICED

A. Pharmacology
Though it is beyond the scope of this paper to discuss the various medications available for children with ADHD, developments in the pharmacological management of ADHD mean that there is now a much greater choice of approved medications. However, there are limitations to an exclusively pharmacological approach to the treatment of ADHD. These include the limited effects of stimulant medication on problems such as academic achievement and peer relationships, and the fact that up to 30% of children do not show a clear beneficial response to stimulants. The long-term side effects of taking stimulants have also been reported as a limiting factor in the use of these medications, as children suffer side effects such as insomnia and appetite suppression [6].

B. Behaviour Interventions

The inattentive, hyperactive, and impulsive behaviours that characterize ADHD often contribute to impairment in the parent-child relationship and increased stress among parents of children with the disorder [20, 21]. It follows that one evidence-based component of comprehensive treatment for ADHD involves working directly with parents to modify their parenting behaviours in order to increase positive outcomes with their children [22]. Effectively modifying poor parenting practices is of utmost importance, as poor parenting is one of the more robust predictors of negative long-term outcomes in children with behaviour problems [23]. Parents are taught to identify and manipulate the antecedents and consequences of child behaviour, target and monitor problematic behaviours, reward pro-social behaviour through praise, positive attention, and tangible rewards, and decrease unwanted behaviour through planned ignoring, time out, and other non-physical discipline techniques.
IFMBE Proceedings Vol. 23
Motivating Children with Attention Deficiency Disorder Using Certain Behavior Modification Strategies
In classroom behaviour management, teachers are instructed in the use of specific behavioural techniques, including giving praise, planned ignoring, and so on. Behavioural goals are set at a level that is challenging yet attainable, and are made increasingly more difficult until the child's behaviour is within developmentally normative levels, based on the principle of shaping. Behaviourally based classroom interventions appear to be a very effective means of behaviour change in children with ADHD in the school setting. However, as the cooperation of school professionals is necessary for these interventions, some of the same challenges exist as with home behavioural programmes [6].

C. Cognitive-Behavioural Treatments

Researchers have reported that cognitive-behavioural approaches, which include training in self-instruction, problem-solving, self-reinforcement, and self-redirection, have helped children to cope with errors. Children were taught a five-step process of problem-solving, including defining the problem, setting a goal, generating problem-solving strategies, choosing a solution, and evaluating the outcome with self-reinforcement. These concepts were reinforced through the use of modelling and role-playing exercises, instructional training, homework, and behavioural techniques such as social reinforcement and a token system. As observed from the studies, there was a significantly greater decrease in child activity level following CBT than in controls [24].
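As an illustration only, the token-system component mentioned above can be modelled as a minimal bookkeeping structure. The class, method names, and exchange rate below are hypothetical and not taken from the cited studies; the sketch simply records token awards for target behaviours and exchanges them for backup rewards at a threshold.

```python
class TokenEconomy:
    """Minimal sketch of token-reinforcement bookkeeping (illustrative names)."""

    def __init__(self, exchange_rate=5):
        self.tokens = 0
        self.exchange_rate = exchange_rate  # tokens required per tangible reward

    def reinforce(self, n=1):
        """Award n tokens when a target pro-social behaviour is observed."""
        self.tokens += n

    def redeem(self):
        """Exchange accumulated tokens for backup rewards; return reward count."""
        rewards = self.tokens // self.exchange_rate
        self.tokens -= rewards * self.exchange_rate
        return rewards
```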
D. Neural-based Interventions

Neurofeedback, based on electroencephalogram biofeedback, is sometimes used as one of the interventions in the management of children with ADHD. The theoretical basis of neurofeedback describes ADHD as a disorder of neural regulation and under-arousal, and it is assumed under this approach that these neural deficiencies are amenable to change using behavioural methods [25]. Some refer to this intervention as a relatively unresearched treatment, and the research that has been conducted has reportedly been inconsistent and problematic. It is believed that this is due to methodological problems such as confounded treatments, inconsistent use of dependent measures, and a lack of clinically meaningful dependent measures [18, 26].

E. Combined Behavioural-Pharmacological Interventions

It has been reported that medication management was as effective as combined treatment in reducing ADHD symptoms, with no clear incremental benefit of behaviour therapy noted [6]. It is also believed that combined treatment may allow for lower doses of medication to be used in conjunction with behaviour management in the home and school settings, resulting in increased satisfaction with treatment [27, 28].

VII. STRATEGIES TO IMPROVE ATTENTION SPAN

The current behaviour modification strategies are based on identifying and manipulating the antecedents and consequences of child behaviour, targeting and monitoring problematic behaviours, and rewarding through praise, positive attention, and tangible rewards. Our focus was more on those subjects who have an attention deficit yet are willing to cooperate with learning assistance. This group has been identified as having a particular interest in audio-visual stimuli and is hence willing to participate in solving problems, leading to positive attention.

VIII. TREATMENT PROTOCOL

Our treatment protocol relies on audio-visual animations which can be played using an available audio-visual system. As it is possible to play a variety of animations which the children find fascinating, playback serves as a reward for every positive step taken by those who cooperate in solving problems, thereby prompting better attention.

IX. CONCLUSION

Treatment of ADHD is currently managed through various methods such as pharmacological intervention, behaviour interventions, cognitive-behaviour treatment, and neural-based intervention. Each method has its own merits and demerits, and quite often a combination of two interventions is sought by clinicians in treating children with ADHD. No single strategy in the management of children with ADHD has been advocated by clinicians in the past, and the success of each strategy depends on the cooperation extended by the subjects in a given setup. The current strategy based on behaviour modification is attractive because it does not rely on pharmacological intervention. However, any strategy has its limitations; in this behaviour modification strategy, success depends purely on the subjects being motivated to learn for the rewards given for every positive step in improving their attention span. It is also worth noting that if the reward falls short of the subject's expectation, it may result in a poor attention span in subjects under this treatment protocol.
REFERENCES

1. American Psychiatric Association (2000). Diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: Author.
2. Polanczyk, G., de Lima, M. S., Horta, B. L., Biederman, J., & Rohde, L. A. (2007). The worldwide prevalence of ADHD: A systematic review and metaregression analysis. American Journal of Psychiatry, 164, 942-948.
3. American Psychiatric Association (2000). Diagnostic and statistical manual of mental disorders (4th ed., rev.). Washington, DC: Author.
4. Nijmeijer, J. S., et al. (2008). Attention-deficit/hyperactivity disorder and social dysfunctioning. Clinical Psychology Review, 28, 692-708.
5. Faraone, S. V., Biederman, J., & Mick, E. (2006). The age-dependent decline of attention deficit hyperactivity disorder: A meta-analysis of follow-up studies. Psychological Medicine, 36, 159-165.
6. Chronis, A. M., Jones, H. A., & Raggi, V. L. (2006). Evidence-based psychosocial treatments for children and adolescents with attention-deficit/hyperactivity disorder. Clinical Psychology Review, 26, 486-502.
7. Barkley, R. A. (2006). Attention-deficit hyperactivity disorder: A handbook for diagnosis and treatment (3rd ed.). New York: Guilford Press.
8. Sonuga-Barke, E. J. S. (2002). Psychological heterogeneity in AD/HD: A dual pathway model of behaviour and cognition. Behavioural Brain Research, 130, 29-36.
9. Sonuga-Barke, E. J. S. (2003). The dual pathway model of AD/HD: An elaboration of neurodevelopmental characteristics. Neuroscience and Biobehavioral Reviews, 27, 593-604.
10. Hinshaw, S. P. (2006). Treatment for children and adolescents with Attention-Deficit/Hyperactivity Disorder. In P. C. Kendall (Ed.), Child and adolescent therapy: Cognitive-behavioral procedures (3rd ed.). New York: Guilford Press.
11. Toplak, M. E., et al. (2008). Review of cognitive, cognitive-behavioral, and neural-based interventions for Attention-Deficit/Hyperactivity Disorder (ADHD). Clinical Psychology Review, 28, 801-823.
12. American Psychiatric Association (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.
13. Halperin, J. M., & Schulz, K. P. (2006). Revisiting the role of the prefrontal cortex in the pathophysiology of attention-deficit/hyperactivity disorder. Psychological Bulletin, 132, 560-581.
14. Barkley, R. A. (1997). Behavioral inhibition, sustained attention, and executive functions: Constructing a unifying theory of ADHD. Psychological Bulletin, 121, 65-94.
15. Pennington, B. F., Grossier, D., & Welsh, M. C. (1993). Contrasting deficits in attention deficit disorder versus reading disability. Developmental Psychology, 29, 511-523.
16. Barkley, R., & Murphy, K. (1998). Attention-deficit hyperactivity disorder: A clinical workbook (2nd ed.). New York: Guilford Press.
17. Shanahan, M. A., et al. (2006). Processing speed deficits in attention deficit/hyperactivity disorder and reading disability. Journal of Abnormal Child Psychology, 34, 585-602. doi:10.1007/s10802-006-9037-8
18. Waschbusch, D. A., & Hill, G. P. (2003). Empirically supported, promising, and unsupported treatments for children with Attention-Deficit/Hyperactivity Disorder. In S. O. Lilienfeld, S. Jay Lynn, & J. M. Lohr (Eds.), Science and pseudoscience in clinical psychology. New York: Guilford Press.
19. Hinshaw, S. P. (2006). Treatment for children and adolescents with Attention-Deficit/Hyperactivity Disorder. In P. C. Kendall (Ed.), Child and adolescent therapy: Cognitive-behavioral procedures (3rd ed.). New York: Guilford Press.
20. Fischer, M. (1990). Parenting stress and the child with attention deficit hyperactivity disorder. Journal of Clinical Child Psychology, 19, 337-346.
21. Johnston, C., & Mash, E. J. (2001). Families of children with attention-deficit/hyperactivity disorder: Review and recommendations for future research. Clinical Child and Family Psychology Review, 4, 183-207.
22. Pelham, W. E., Wheeler, T., & Chronis, A. (1998). Empirically supported psychosocial treatments for attention deficit hyperactivity disorder. Journal of Clinical Child Psychology, 27, 190-205.
23. Chamberlain, P., & Patterson, G. R. (1995). Discipline and child compliance in parenting. In M. H. Bornstein (Ed.), Handbook of parenting: Applied and practical parenting, Vol. 4 (pp. 205-225). Mahwah, NJ: Lawrence Erlbaum Associates.
24. Fehlings, D. L., Roberts, W., Humphries, T., & Dawe, G. (1991). Attention deficit hyperactivity disorder: Does cognitive behavioral therapy improve home behavior? Developmental and Behavioral Pediatrics, 12(4), 223-228.
25. Butnik, S. M. (2005). Neurofeedback in adolescents and adults with attention deficit hyperactivity disorder [Special issue: ADHD in adolescents and adults]. Journal of Clinical Psychology, 61, 621-625.
26. Kline, J. P., Brann, C. N., & Loney, B. R. (2002). A cacophony in the brainwaves: A critical appraisal of neurotherapy for Attention Deficit Disorders. The Scientific Review of Mental Health Practice, 1, 46-56.
27. MTA Cooperative Group (1999a). A 14-month randomized clinical trial of treatment strategies for Attention Deficit Hyperactivity Disorder (ADHD). Archives of General Psychiatry, 56, 1073-1086.
28. Pelham, W. E., Erhardt, D., Gnagy, E. M., Greiner, A. R., Arnold, L. E., Abikoff, H. B., et al. (submitted for publication). Parent and teacher evaluation of treatment in the MTA: Consumer satisfaction and perceived effectiveness.

Authors: Huang Qunfang Jacklyn¹, S. Ravichandran²
Institutes: Singapore Management University¹, Temasek Polytechnic²
Addresses: 90 Stamford Road #04-71¹; 21 Tampines Ave 1²
City: Singapore
Country: Singapore
Email: [email protected], [email protected]
Regeneration of Speech in Voice-Loss Patients

H.R. Sharifzadeh¹, I.V. McLoughlin¹ and F. Ahmadi¹

¹ School of Computer Engineering, Nanyang Technological University, Singapore
Abstract — This paper considers regeneration of natural-sounding speech from whisper-speech produced by patients with vocal tract lesions affecting the glottis. Such reconstruction is important for both total and partial laryngectomy patients, to improve on the monotonous robotized sound typical of electrolarynx devices. Reconstruction of speech from whispers has been demonstrated previously; however, the resulting speech does not exhibit particularly high intelligibility and, more importantly, sounds unnatural. It is the conjecture of the authors that limited pitch variation in the reconstructed speech contributes most to that lack of naturalness. In this paper, a method for pitch contour variation in reconstructed speech is presented. This method extracts voice factors which are important to 'naturalness' from the whispered signal and applies these to the reconstructed speech. The method is based upon our previously published work, which implemented an analysis-by-synthesis approach to voice reconstruction using a modified CELP codec.

Keywords — Rehabilitation, Laryngectomy, Speech, Whispers, CELP codec
I. INTRODUCTION

The speech production process starts with lung exhalation passing a taut glottis to create a varying pitch signal which resonates through the vocal tract and nasal cavity and out through the mouth. Within the vocal, oral and nasal cavities, the velum, tongue, and lip positions play crucial roles in shaping speech sounds; these are referred to collectively as vocal tract modulators [1]. Total laryngectomy patients will have lost their glottis and, in many cases, also the control to pass lung exhalation through the vocal tract. Partial laryngectomy patients, by contrast, may still retain the power of controlled lung exhalation through the vocal tract. Despite the loss of the glottis, including the vocal folds, both classes of patients retain the power of vocal tract modulation, and therefore, by controlling lung exhalation, they have the ability to whisper [2]; in other words, they maintain most of the speech production apparatus. In normally phonated speech, pitch itself is known to be the major source of important tonal information, alongside the lower formants [3]. Since there is no fundamental pitch present in whispers, other parameters contribute more heavily to perceived tonal variation. These include duration, amplitude, and formant location/bandwidth (whisper-speech formants are both wider and shifted upward compared to normal speech, as described in section 2). Although the contribution of amplitude to pitch variation has been studied by others such as [4], neither duration nor formant location has been used for tone reconstruction. The new technique presented in section 4 describes an algorithm which combines the use of formant information, along with amplitude, to estimate pitch contour variations in vowels. The approach discussed in this paper utilizes a code excited linear prediction (CELP) codec and is based upon our previously published work in [5]. In a standard CELP codec, speech is generated by filtering an excitation signal, in which the excitation sequence is selected from a codebook of zero-mean Gaussian sequences and then shaped by an LTP filter to convey the pitch information of the speech [6]. For the purpose of speech reconstruction from whispers, we need to modify the standard CELP codec; the details are described in section 3. Section 2 outlines whispered speech features regarding formant information in vowels and also in terms of their acoustic and spectral characteristics, while section 3 explains the modified CELP codec customized for our purpose of natural speech regeneration. Section 4 discusses our proposed method for pitch contour variation in reconstructed speech, and finally section 5 concludes the paper.

II. ACOUSTIC AND SPECTRAL FEATURES OF WHISPERED SPEECH
Whispered speech can be categorized into two different classes: soft whispers and stage whispers [7]. Soft whispers, also referred to as quiet whispers, are produced in certain circumstances to reduce perceptibility, such as whispering into someone's ear in a library, and are usually produced in a relaxed, comfortable, low-effort manner [8]. Stage whispers, on the other hand, are a combined kind of whisper one would use if the listener is some distance away from the speaker [7]. In other words, a stage whisper is actually a whispery voice in phonetic terms, which is a type of phonation that involves vocal fold vibration [9]. Within this paper, we will concentrate on soft whispers, which are produced without vocal fold vibration and not
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1065–1068, 2009 www.springerlink.com
only are more commonly used in daily life but also are the same type of whispers produced by laryngectomy patients. As mentioned, the essential physical features of whispered speech include the absence of vocal cord vibrations, which leads to the absence of a fundamental frequency and its consequent harmonic relationships [10]. This is the most significant acoustic characteristic of whispers. Considering a source-filter model [11], exhalation can be identified as the source of excitation in whispered speech, with the shape of the pharynx adjusted so that the vocal cords do not vibrate [12]. Thus, turbulent aperiodic airflow is the only source of sound for whispers, and it is a strong, rich, and hushing sound [13]. Regarding spectral characteristics, whispered speech sounds do have peaks in their spectra at roughly the same frequencies as those for normally phonated speech sounds [14]. These formants occur within a flatter power frequency distribution, and there are no harmonics in the spectra corresponding to the fundamental frequency [10]. Whispered vowels differ from normally voiced vowels. All formant frequencies (including the important first three formants) tend to be higher [15], particularly the first formant, which shows the greatest difference between the two kinds of speech. Lehiste in [15] reported that F1 is approximately 200-250 Hz higher, whereas F2 and F3 are approximately 100-150 Hz higher in whispered vowels. Furthermore, unlike phonated vowels, where the amplitude of each higher formant is less than that of the lower formants, whispered vowels usually have second formants that are as intense as their first formants. These differences, mainly in first formant frequency and amplitude, are thought to be due to the alteration in the shape of the more posterior areas of the vocal tract, including the vocal cords, which are held rigid.
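Lehiste's reported offsets can be expressed as an approximate fixed correction of the kind used later in the paper, where whispered formants are shifted down toward normally phonated positions. The sketch below is illustrative only: the function name is hypothetical, and the per-formant shift values (225 Hz for F1, 125 Hz for F2/F3) are assumptions taken from the midpoints of the ranges quoted above.

```python
# Approximate downward shifts (Hz) when mapping whispered-vowel formants
# toward phonated targets; midpoints of Lehiste's reported ranges
# (F1: 200-250 Hz, F2/F3: 100-150 Hz) are assumed here.
WHISPER_OFFSETS_HZ = {"F1": 225.0, "F2": 125.0, "F3": 125.0}

def shift_whispered_formants(formants_hz):
    """Shift measured whispered formants down toward phonated positions."""
    return {name: f - WHISPER_OFFSETS_HZ.get(name, 0.0)
            for name, f in formants_hz.items()}
```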
Although these changes in acoustics are significant, there is reported to be only a small reduction, of 10 percent or less, in the accuracy of vowel identification for whispered speech [16]. As mentioned, since the excitation in whisper-mode speech is the turbulent flow created by the exhaled air passing through the open glottis, the resulting signal is completely noise-excited [12, 16]. One of the consequences of a glottal opening is an acoustic coupling to the subglottal airways. The subglottal system has a series of resonances, which can be defined as the natural frequencies with a closed glottis. The average values of the first three of these natural frequencies have been estimated to be about 700, 1650, and 2350 Hz for an adult female and 600, 1550, and 2200 Hz for an adult male, but there are of course substantial individual differences [17]. Analysis shows that the effect of these subglottal resonances is to introduce additional pole-zero pairs into the vocal tract transfer function from the glottal source to the mouth output. The most obvious acoustic
manifestation of these pole-zero pairs is the appearance of additional peaks or prominences in the output spectrum. The influence of zeros can also sometimes be seen as minima in the spectrum [18].
Fig. 1 A block diagram of a modified CELP codec used for speech reconstruction
III. MODIFIED CELP CODEC

As described in section 1, the approach used in this project utilizes a code excited linear prediction (CELP) codec in which the excitation sequence is selected from a codebook of zero-mean Gaussian sequences and then shaped by an LTP filter to convey the pitch information of the speech. Figure 1 shows a block diagram of the CELP codec as implemented in this paper, with the modifications for whisper-speech reconstruction identified. In comparison with the standard CELP codec, we have added a "pitch template" corresponding to the "pitch estimate" unit, while "adjustment parameters" in this model are used to generate pitch factors as well as to apply the necessary LSP modifications. The pitch estimate method is described in section 4, while the LSP modifications required to prepare the whispered speech signal for pitch insertion are described in the following. For this research, a 12th order linear prediction analysis is performed on the waveform, which is sampled at 8 kHz. A
frame duration of 20 ms is used for the vocal tract analysis (160 samples) and a 5 ms sub-frame duration (40 samples) for determining the pitch excitation. In the CELP codec, as in many other low bit-rate coders, the linear prediction coefficients are transformed into line spectral pairs (LSPs) [19]. LSPs describe two resonance states of an interconnected tube model of the human vocal tract. These conditions are those that describe the modelled vocal tract being either fully open or fully closed at the glottis, respectively. In reality, the human glottis is opened and closed rapidly during speech, and thus the actual resonances occur somewhere between the two extreme conditions. The LSP representation hence has a significant physical basis [20]. However, this rationale is not necessarily true for whispered speech (since the glottis does not vibrate), and thus we need to make some adjustments to the model. In figure 2, the linear prediction spectrum obtained from analyzing a typical segment of a vowel in whispered speech (/a/) is plotted, with dashed lines overlaid at the LSP frequencies derived from the linear prediction parameters. As discussed in [21], spectral peaks are generally bracketed by LSP line pairs, with the degree of closeness dependent upon the sharpness of the spectral peak and its amplitude. However, as previously discussed in section 2, whispered speech has few significant peaks in the spectrum, which implies wider distances between LSP lines. Hence, to emphasize formants, it is necessary to narrow the LSP lines corresponding to likely formants, i.e. the two or three narrowest of them. Since altering the frequency of lines may lead to the formation of unintentional peaks by narrowing the gap between two irrelevant pairs, it is important to choose the pairs of lines corresponding to likely formants.
As mentioned, this might be done by choosing the three narrowest LSP pairs, which works well when the signal has fine peaks (see our previous paper [5]). However, where formant bandwidths expand (common in whispered speech), increasing the distance between the corresponding LSPs, the choice of the three narrowest LSPs may not identify the three correct formant locations, particularly for a vowel. The narrowing procedure is hence strengthened with a formant position indication based upon the classic approach of finding the phases of the poles of the vocal tract transfer function. The improved algorithm thus narrows the LSP pairs corresponding to each of the three formants (whether or not they are the narrowest pairs). Figure 3 shows the result of the LSP narrowing algorithm described above for the same frame as in figure 2.
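The narrowing step can be sketched as follows. This is an illustrative reconstruction, not the authors' exact procedure: the function names, the pull-toward-midpoint rule, and the narrowing factor are assumptions. `narrowest_pairs` corresponds to the earlier heuristic of picking the narrowest pairs; the improved algorithm would instead select the pairs bracketing formants located from the pole phases.

```python
def narrowest_pairs(lsps, k=3):
    """Baseline heuristic: indices of the k narrowest adjacent LSP pairs.

    lsps is a sorted list of LSP frequencies (Hz); index i denotes the
    pair (lsps[i], lsps[i+1]). Note adjacent picks may share a line.
    """
    gaps = sorted(range(len(lsps) - 1), key=lambda i: lsps[i + 1] - lsps[i])
    return sorted(gaps[:k])

def narrow_lsp_pairs(lsps, formant_pairs, factor=0.5):
    """Pull each bracketing LSP pair toward its midpoint to sharpen the peak.

    factor < 1 scales down the half-width of each chosen pair, which
    emphasizes the corresponding formant in the reconstructed spectrum.
    """
    out = list(lsps)
    for i in formant_pairs:                 # i indexes the lower line of pair (i, i+1)
        lo, hi = out[i], out[i + 1]
        mid = 0.5 * (lo + hi)
        half = 0.5 * (hi - lo) * factor     # reduced half-width of the pair
        out[i], out[i + 1] = mid - half, mid + half
    return out
```

With `factor=0.5`, a pair at (300, 500) Hz, for example, would be pulled in to (350, 450) Hz around the same 400 Hz centre.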
Fig. 2 Plot of LPC spectrum with LSPs overlaid for a whispered vowel /a/
Fig. 3 Reconstructed LPC spectrum after applying improved LSP narrowing procedure for the whispered frame illustrated in figure 2
IV. PITCH CONTOUR VARIATION METHOD

The pitch estimation algorithm discussed in this paper is based upon our recent work in [5], in which parameters are extracted from normally phonated speech and the excitation is regenerated from these parameters using the CELP excitation synthesis method. To improve the method and obtain more natural synthesized speech, a novel approach to pitch estimation in terms of formant locations and amplitudes is proposed in this section. (Further details regarding the basics of the approach and the corresponding LTP filter parameters used in the CELP codec can be found in [5].) Through several alteration steps, a whispered vowel segment is converted to a normally phonated segment in a CELP-based regeneration method. These steps are mostly accomplished between the encoding and decoding modules of the CELP codec. The procedure commences by smoothing the LSP parameters to stabilize the formant locations of each segment. The formant bandwidths are also decreased to intensify formant amplitudes. The resulting formants are then shifted down to match the formant locations of normally phonated vowels. As described in [5], the long term predictor delay, P, has the main role in pitch determination in the CELP codec. Pitch contour variation, hence, can be achieved by variation of this parameter, which, in our proposed method, is based on formant frequencies and amplitude according to (1) as follows:
P ( n)
° Pn 1 P D (T n T ) Oh S ® ° Pn 1 P D (T n T ) ¯
h ! 0 (1) h0
in which n represents the number of current speech frame, h shows the instant amount of F1n F2n which calculates the covariance of the first and second formants, S is the average value of formants in L previous frames which is calculated based on (2), and are the factors to be set for generating the most natural voice and is the gain value in the CELP codec. n 1
S
i
¦
been proposed. By extraction of voice factors including formant location and amplitude, this method tried to regenerate natural pitch contours particularly in vowels. By consideration of formant locations as well as amplitudes, a formula for generating pitch parameters in CELP codec was proposed. The results of the current work could highly contribute to natural speech regeneration for voiceloss patients.
REFERENCES 1.
F 1i F 2 i
(2)
nL
2
Figure 4 illustrates the regenerated pitch contours of two samples of whispered vowel /a/ based on (1). Figure 5 demonstrates F1, F2, and pitch contour of a whispered and corresponding reconstructed vowel.
2.
3. 4. 5.
6. 7.
8.
9.
10.
Fig. 4 Pitch estimation based on the first 2 formants of whispered vowel /a/ 11. 12. 13. 14.
15. 16.
17.
Fig. 5 Formants and pitch values for whispered vowel /a/ before 18. 19.
and after enhancement
V. CONCLUSION 20.
A method for real time synthesis of normal speech from whispers through an analysis-by-synthesis CELP codec with concentration on natural pitch generation in vowels has
REFERENCES

1. Vary P, Martin R (2006) Digital speech transmission. John Wiley & Sons Ltd, West Sussex
2. Pietruch R, Michalska M, Konopka W, Grzanka A (2006) Methods for formant extraction in speech of patients after total laryngectomy. Biomedical Signal Processing and Control, vol. 1, pp. 107-112
3. Plack C J, Oxenham A J (2005) Pitch: neural coding and perception. Springer Handbook of Auditory Research, New York
4. Morris R W, Clements M A (2002) Reconstruction of speech from whispers. Medical Engineering and Physics, vol. 24, pp. 515-520
5. Ahmadi F, McLoughlin I V, Sharifzadeh H R (2008) Analysis-by-synthesis method for whisper-speech reconstruction. IEEE Asia Pacific Conference on Circuits and Systems (APCCAS 2008), China
6. Atal B S (1982) Predictive coding of speech at low bit rates. IEEE Transactions on Communications, pp. 600-614
7. Weitzman R S, Sawashima M, Hirose H (1976) Devoiced and whispered vowels in Japanese. Annual Bulletin, Research Institute of Logopedics and Phoniatrics, vol. 10, pp. 61-79
8. Solomon N P, McCall G N, Trosset M W et al. (1989) Laryngeal configuration and constriction during two types of whispering. Journal of Speech and Hearing Research, vol. 32, pp. 161-174
9. Esling J H (1984) Laryngographic study of phonation type and laryngeal configuration. Journal of the International Phonetic Association, vol. 14, pp. 56-73
10. Tartter V C (1989) What's in a whisper? Journal of the Acoustical Society of America, vol. 86, pp. 1678-1683
11. Fant G (1960) Acoustic theory of speech production. Mouton & Co, The Hague
12. Thomas I B (1969) Perceived pitch of whispered vowels. Journal of the Acoustical Society of America, vol. 46, pp. 468-470
13. Catford J C (1977) Fundamental problems in phonetics. Edinburgh University Press, Edinburgh
14. Stevens H E (2003) The representation of normally-voiced and whispered speech sounds in the temporal aspects of auditory nerve responses. PhD Thesis, University of Illinois
15. Lehiste I (1970) Suprasegmentals. MIT Press, Cambridge
16. Kallail K J, Emanuel F W (1985) The identifiability of isolated whispered and phonated vowel samples. Journal of Phonetics, vol. 13, pp. 11-17
17. Klatt D H, Klatt L C (1990) Analysis, synthesis, and perception of voice quality variations among male and female talkers. Journal of the Acoustical Society of America, vol. 87, pp. 820-857
18. Stevens K N (1998) Acoustic phonetics. The MIT Press, Cambridge
19. Goalic A, Saoudi S (1995) An intrinsically reliable and fast algorithm to compute the line spectrum pairs in low bit-rate CELP coding. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 728-731
20. McLoughlin I V (2007) Line spectral pairs. Signal Processing Journal, pp. 448-467
21. McLoughlin I V, Chance R J (1997) LSP-based speech modification for intelligibility enhancement. Proceedings of the 13th International Conference on DSP, vol. 2, pp. 591-594
Human Gait Analysis using Wearable Sensors of Acceleration and Angular Velocity
R. Takeda1, S. Tadano1, M. Todoh1 and S. Yoshinari2
1 Division of Human Mechanical Systems and Design, Graduate School of Engineering, Hokkaido University, Sapporo, Japan
2 Human Engineering Section, Product Technology Department, Hokkaido Industrial Research Institute, Sapporo, Japan
Abstract — This work proposes a method for measuring human gait posture using wearable sensors. The sensor units used consist of a tri-axial acceleration sensor and three gyro sensors aligned on three axes. These are worn on the abdomen and the lower limb segments (both thighs, both shanks and both feet) to measure acceleration and angular velocity during walking. The three-dimensional position of each lower limb joint is calculated from segment lengths and joint angles. Segment lengths are obtained by physical measurement, and joint angles can be estimated mechanically from the gravitational acceleration along the anterior axis of the segments. However, the acceleration data during walking include three major components: translational acceleration, gravitational acceleration and external noise. Therefore, an optimization analysis was developed to separate only the gravitational acceleration from the acceleration data. Because cyclic patterns can be found in the acceleration data during constant walking, an FFT analysis was applied to obtain characteristic frequencies. A pattern of gravitational acceleration was assumed using some of these characteristic frequencies. Every joint position was calculated from the pattern under the condition of the physiological motion range of each joint. An optimized pattern of the gravitational acceleration was selected as the solution of an inverse problem. Three healthy volunteers walking straight for 20 seconds on a flat floor were measured. For comparison, reflective markers were also placed on the volunteers for camera recordings. As a result, the characteristic three-dimensional walking of each volunteer could be expressed using a stick figure model. The trajectories of the knee joint in the horizontal plane were checked against the camera images on a PC. This method therefore provides important quantitative information for gait diagnosis.
Keywords — gait analysis, wearable sensor system, gravitational acceleration.

I. INTRODUCTION

Gait analysis is an important clinical tool for diagnosing patients with walking disabilities. Currently, the main method of gait analysis is tracking a patient's movement with camera-based analysis systems, such as the Vicon motion analysis system (Vicon Motion Systems, Inc.). Camera-based systems can provide the three-dimensional position of body segments, but their use is generally restricted to indoor laboratories.

An alternative method for measuring human motion is to wear small acceleration sensors on the body [1]. Wearable sensor systems are useful because they allow measurements outside the laboratory [2][3]. However, wearable acceleration sensors cannot provide position data directly. Therefore, many past works using wearable sensor systems have been limited to monitoring gait events [4] or comparing raw acceleration data [5]. A popular method for obtaining three-dimensional position from acceleration is double integration of the acceleration data. However, integrating acceleration data accumulates error, causing drift. Instead of integrating acceleration data, this work uses the gravitational acceleration measured by a wearable sensor system to calculate the tilt angle of body segments. An optimization analysis was developed to estimate the gravitational acceleration included in the acceleration data. The analysis uses the cyclic patterns of acceleration data during constant walking and the physiological motion range of each joint to estimate only the gravitational acceleration. The gaits of three healthy volunteers were measured, with the acceleration data of every lower limb segment recorded simultaneously. As a result, the three-dimensional walking established by this method could be visualized using a stick figure model.

II. METHOD

A. Sensor System

A sensor system consisting of a tri-axial acceleration sensor (H34C, Hitachi Metals, Ltd.) and three single-axis gyro sensors (ENC-03M, muRata Manufacturing Co., Ltd.) was used for this investigation. The axes of the acceleration and gyro sensors were orthogonally aligned. The sensor system also contained a data logger that simultaneously recorded the 3-axis acceleration and 3-axis angular velocity data for a maximum of 150 seconds at a sampling rate of 100 Hz. Sensors were placed on seven segments (abdomen (AB), left and right thigh (LT, RT), left and right shank (LS, RS), left and right foot (LF, RF)) as shown in Fig. 1.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1069–1072, 2009 www.springerlink.com
Fig. 1 Sensor attachment locations. Sensors are attached to the abdomen (AB), both thighs (LT, RT), both shanks (LS, RS) and both feet (LF, RF).

B. Acceleration and Gyro Sensor Measurement

The three-dimensional position of the lower body joints can be calculated from the orientation and length of each body segment. We presume that the orientation of a segment is equal to the orientation of the attached acceleration sensor.

θ = sin⁻¹(a_x / g)        (1)

Here, a_x is the acceleration measured along the x-axis (anterior) direction of each segment (LT, RT, LS, RS, LF, RF). In the static state the gravitational acceleration is the only acceleration contributing to a_x, so the orientation angle θ is the orientation against the direction of gravity. Gyro sensors were used to measure the horizontal rotation of the abdomen. The rotation θ_AB can be calculated by integrating the angular velocity ω measured by the gyro sensor placed on the abdomen:

θ_AB = ∫ ω dt        (2)

C. Gravitational Acceleration Estimation

Data collected by the acceleration sensors during gait have three major components: translational acceleration, gravitational acceleration, and external noise [6]. Therefore, a method was developed to separate out the gravitational acceleration and obtain the segment orientation using Eq. (1). The thin line in Fig. 2 shows the raw data taken from the anterior axis of the acceleration sensor on the RS. Noise is removed from these data by using the cyclic acceleration patterns during gait. Cyclic patterns have been reported in the movement of body segments during constant walking [7][8]. A fast Fourier transform was applied to the acceleration data to investigate the frequencies of these cyclic patterns. The frequency analysis results are shown in Fig. 3. Peaks at certain frequencies were found. The first peak has the same frequency as the gait and is defined as the primary GF (gait frequency). This work focuses on the first three frequencies in Fig. 3: the primary GF, secondary GF, and tertiary GF. The acceleration data of these GFs were extracted from the original acceleration data using low-pass and band-pass filters. As shown in Fig. 2, the composite data coincide with the original acceleration data with the noise removed. Since the heavy line in Fig. 2 still contains both gravitational acceleration and translational acceleration, wave decomposition was performed.

Fig. 2 Anterior axial acceleration data of the right shank. The thin line represents the raw acceleration data and the heavy line the composite data of the primary, secondary and tertiary GFs.

Fig. 3 Frequency analysis of the anterior axial acceleration during constant gait (spectrum versus frequency in Hz). The primary, secondary and tertiary frequencies are circled in red.
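As a hedged sketch of Eqs. (1) and (2) and of the FFT step used to find the gait frequencies, consider the following Python fragment. The sampling rate matches the data logger (100 Hz), but the function names, the filterless spectrum peak search and the synthetic signal are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

FS = 100.0  # sampling rate of the data logger (Hz)

def tilt_angle(a_x, g=9.81):
    """Eq. (1): segment tilt from the gravity component on the anterior axis."""
    return np.arcsin(np.clip(a_x / g, -1.0, 1.0))

def abdomen_rotation(omega):
    """Eq. (2): horizontal abdomen rotation by integrating gyro output (rad)."""
    return np.cumsum(omega) / FS

def gait_frequency(a, fs=FS):
    """Primary GF: the dominant peak of the acceleration spectrum."""
    spec = np.abs(np.fft.rfft(a - a.mean()))
    freqs = np.fft.rfftfreq(a.size, d=1.0 / fs)
    return freqs[np.argmax(spec)]

# synthetic gravity-only signal: a slow 1 Hz tilt oscillation of one segment
t = np.arange(0, 10, 1.0 / FS)
a = 9.81 * np.sin(0.2 * np.sin(2 * np.pi * 1.0 * t))
print(gait_frequency(a))
```

In the paper the primary, secondary and tertiary GF components are then isolated with low-pass and band-pass filters; the peak search above only locates the primary GF.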
Fig. 4 The extracted primary, secondary and tertiary GF acceleration data. One primary GF cycle is equal to two secondary GF cycles and three tertiary GF cycles.

Wave decomposition divided the secondary and tertiary acceleration data into peaks and valleys. Since one primary GF cycle is equal to two secondary GF cycles and three tertiary GF cycles, the secondary GF was divided into 4 peaks and valleys and the tertiary GF into 6 peaks and valleys, with 0 as the threshold. The gravitational acceleration was assumed to be a combination of these peaks and valleys. There are 2^4 = 16 combinations for the secondary GF and 2^6 = 64 combinations for the tertiary GF, giving a total of 16 × 64 = 1024 combinations for each sensor. In this work, normal walking was assumed to be bilaterally symmetric, meaning that the right and left leg acceleration combinations are the same. Therefore, the total number of combinations was limited to 16,777,216 (1024 for the foot × 1024 for the shank × 16 for the thigh). An optimization algorithm was developed to establish the gravitational acceleration within these 16,777,216 combinations. First, one combination is considered and the orientation of each body segment is calculated using Eq. (1); the ankle, knee and hip joint positions can then be calculated from the tilt angles and segment lengths. Next, the range of motion of flexion-extension for the hip and knee and of plantar flexion-dorsal extension for the ankle is checked (hip: -20° to 90°, knee: 0° to 120°, ankle: -30° to 50°). Afterwards, the ankle and hip positions during heel contact are checked to remove any combinations that show inadequate posture. Among the combinations that fulfil the conditions above, the one with the least vertical position error between the left and right ankle during heel contact is taken as the gravitational acceleration. Using the selected gravitational acceleration, a stick figure model was created to visually confirm the lower limb posture.

Fig. 5 Gravitational acceleration estimation algorithm (flow: calculate tilt angle of segment → calculate joint position → calculate joint angle → check joint ROM → check ankle and hip position → optimal gait pattern search (lowest ankle position error) → results (gravitational acceleration)). Combinations created from the primary, secondary and tertiary GFs are used to estimate the optimal gravitational acceleration.

D. Experiment Process

Three volunteers, 2 male and 1 female, participated in the experiment. None of the volunteers had any history of disabilities or injuries. Along with the sensors, reflective markers were placed on the volunteers for analysis with a video camera. In addition, lower body measurements of each volunteer were taken. Volunteers were asked to walk along a flat straight corridor; the gait velocity was at the discretion of the volunteers. To avoid errors caused by attachment, measurements of each sensor were taken before and after the trials.
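The combination search described in Section II.C can be sketched, in heavily simplified form, as follows. The real method enumerates 2^4 peak/valley choices for the secondary GF and 2^6 for the tertiary GF per sensor and couples foot, shank and thigh; the toy cost, the toy constraint and all names below are our assumptions, kept only to show the enumerate-filter-minimize skeleton.

```python
import itertools

# Hedged skeleton of the exhaustive combination search: enumerate binary
# peak/valley choices, discard those violating a constraint (standing in for
# the joint range-of-motion check), and keep the choice with the lowest cost
# (standing in for the vertical ankle-position error at heel contact).

def candidate_cost(choice):
    """Toy stand-in for the vertical ankle-position error at heel contact."""
    weights = [0.5, -0.2, 0.3, -0.1, 0.2, 0.4]
    return abs(sum((1 if keep else -1) * w for keep, w in zip(choice, weights)))

def within_rom(choice):
    """Toy stand-in for the joint range-of-motion check."""
    return sum(choice) >= 2  # e.g. require at least two kept components

def search():
    # here the 2^4 secondary x 2^6 tertiary choices are collapsed to 6 slots
    return min((c for c in itertools.product([0, 1], repeat=6) if within_rom(c)),
               key=candidate_cost)

best = search()
print(best, round(candidate_cost(best), 3))
```

In the paper the same skeleton is run over all 16,777,216 coupled foot/shank/thigh combinations, with the left and right legs sharing one choice by the bilateral-symmetry assumption.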
III. RESULTS AND DISCUSSIONS

Figures 6 and 7 compare the hip and knee joint angles calculated with this method against the angles determined from camera images. The horizontal axis represents percent of the gait cycle and the vertical axis the joint angle in degrees. There are differences in the swing phase for the hip joint, but the maximum and minimum peak values and the joint angles during heel contact were similar. In the knee joint angle results shown in Fig. 7, the peak flexion angle differed by 10 degrees. The differences in the results may have several causes. Since the acceleration data are larger at the end segments of the limb, some frequencies containing high-amplitude acceleration data at the shank or foot could have been excluded by the low-pass and band-pass filters. These high frequencies could be the reason why there were differences in the peak flexion angle of the knee joint but not so much in the hip joint.
Fig. 6 Hip joint flexion angle comparison. The thin line represents joint angles measured by camera and the heavy line those of this method.

Fig. 7 Knee joint flexion angle comparison. The thin line represents joint angles measured by camera and the heavy line those of this method.

Another reason is that the gravitational acceleration estimated by this method was optimized for heel contact, and the pattern may not have been optimal for the swing phase, where the maximum knee flexion occurs. A stick figure representation of a volunteer using this method is shown in Fig. 8.

Fig. 8 Stick figure lower limb representation of a subject during gait.

IV. CONCLUSIONS

This work showed that the acceleration patterns during gait can be used to estimate lower body posture. We developed an optimization analysis that estimates the gravitational acceleration. The estimated gravitational acceleration was used to calculate the orientation of each body segment, and the gait of the subjects could be visually confirmed with a stick figure model. However, there are some limitations to this work. Because the method uses the cyclic acceleration pattern, it can only target constant movements such as gait, running and stair climbing.

Future work will improve the estimation of the gravitational acceleration with consideration for the swing phase, and conduct experiments on patients with lower limb disabilities to verify the method's effectiveness for gait diagnosis.

ACKNOWLEDGMENT

The authors would like to express their thanks to M. Morikawa and M. Nakayasu, formerly master course students of the Laboratory of Biomechanical Design (Division of Human Mechanical Systems and Design, Hokkaido University), for their support and cooperation in the experiments and computer data analysis of this study.

REFERENCES

1. Morris J.R.W. (1973) Accelerometry - A technique for the measurement of human body movements. J Biomechanics 6: 729-736
2. Veltink P.H., Bussmann H.B.J., de Vries W. et al. (1996) Detection of static and dynamic activities using uniaxial accelerometers. IEEE Trans Rehab Eng 4: 375-385
3. Bouten C.V.C., Koekkoek K.T.M., Verduin M. et al. (1998) A triaxial accelerometer and portable data processing unit for the assessment of daily physical activity. IEEE Trans Biomed Eng 44: 136-147
4. Jasiewicz J.M., Allum J.H.J., Middleton J.W. et al. (2006) Gait event detection using linear accelerometers or angular velocity transducers in able-bodied and spinal-cord injured individuals. Gait Pos 24: 502-509
5. Kavanagh J.J., Barrett R.S., Morrison S. (2004) Upper body accelerations during walking in healthy young and elderly men. Gait Pos 20: 291-298
6. Luinge H.J. (2002) Inertial sensing of human movement. PhD thesis, Twente University Press
7. Grossman G.E., Leigh R.J., Abel L.A. et al. (1988) Frequency and velocity of rotational head perturbations during locomotion. Exp Brain Res 70: 470-476
8. Lamoth C.J.C., Beek P.J., Meijer O.G. (2001) Pelvis-thorax coordination in the transverse plane during gait. Gait Pos 16: 101-114

Author: Ryo Takeda
Institute: Hokkaido University
Street: Kita 13 Nishi 8, Kita-ku
City: Sapporo
Country: Japan
Email: [email protected]
Deformable Model for Serial Ultrasound Images Segmentation: Application to Computer Assisted Hip Arthroplasty
A. Alfiansyah1,2, K.H. Ng2 and R. Lamsudin3
1 Faculty of Industrial Engineering, Islamic Indonesian University, Indonesia
2 Department of Radiology, Faculty of Medicine, University of Malaya, Malaysia
3 Faculty of Medicine, Islamic Indonesian University, Indonesia
Abstract — In this paper, we present a segmentation method for ultrasound images acquired serially for intra-operative data acquisition in computer assisted total hip arthroplasty. To extract the bone surface from the ultrasound images, we propose a method based on a deformable model (snake) integrating the local intensity variation around the evolved contour. We also propose an adaptive additional force calculated from the image gradient vector flow in a narrow band around the contour. In this serial image segmentation, we use the segmentation result of the previous image as the input of the next image's segmentation. Finally, we perform a post-treatment of the final contour to select exclusively the real points on the bone surface. This point selection is based on an intensity criterion, so that we keep only the points having strong intensity amongst the points on the final contour. Validated on a fresh cadaver and a healthy subject, we found that the precision of this method makes it suitable as input for the registration method in computer assisted orthopedic surgery. Although it could also be applied in an offline manner, the required computation time is acceptable for intra-operative application.

Keywords — ultrasound images, segmentation, deformable model, gradient vector flow, computer assisted surgery.
I. INTRODUCTION

Image guided surgery provides intra-operative navigation to surgeons performing minimally invasive surgery. Using this navigation, some medical interventions that were previously considered too dangerous have become daily routine in the operating room. Computer assisted surgery also promises better surgical outcomes, shortened post-operative recovery times and lower overall cost, which motivated its emergence. Using this approach, the surgeon can design a preoperative plan based on images acquired preoperatively. This plan is then brought into the operating room by performing a registration between the intra-operative image data and the preoperative data. This registration process determines the spatial relationship between the preoperative and intra-operative datasets. The needed intra-operative data are usually obtained by sliding a three-dimensional pointer on the anatomical part
of interest. This method is fast, easy and accurate, but it has important drawbacks: it is invasive, the acquisition area is limited to the exposed anatomical part, and it causes additional pain. On the other hand, the lack of data in some areas may affect the registration accuracy. For orthopedic applications, CT scans are commonly used as pre-operative data, with implanted fiducial markers, optically tracked sensor data, or localized X-ray data as intra-operative data. In the past few years, ultrasound images have also been used as an intra-operative modality because of their non-invasive, inexpensive and real-time character. But due to the poor image quality (speckle, delayed echoes, reverberation…), ultrasound images are very difficult to analyze in order to obtain an accurate segmentation result. Besides the segmentation quality, sufficient amounts of intra-operative data are also required to achieve an accurate registration: the more data are used for registration, the more accurate the result. These additional data can be gathered from a set of serial ultrasound images of the area of interest. However, the complexity of ultrasound image analysis significantly increases the needed intra-operative time, and this additional time is critical for reducing blood loss and infection risk. Thus, we need a fast serial ultrasound image segmentation algorithm that is applicable during surgical intervention and requires minimal user interaction. This paper focuses on an automatic bone surface detection method applied to serial ultrasound images acquired intra-operatively. The proposed method is based on a deformable model (snake) with an additional force derived from the gradient vector flow. The serial segmentation results are then used as input for the registration process in orthopedic surgery, in our case total hip arthroplasty.

II. METHOD

To extract the bone surface from ultrasound images, we follow the expert reasoning described in [1, 2]. As shown in Figure 1, bony structures can be characterized by a strong intensity change from a bright pixel group to a global dark
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1073–1076, 2009 www.springerlink.com
area below, because the bone's high absorption rate generates an acoustic shadow behind it. The segmentation method should produce a smooth contour, because discontinuities are normally not present on the bone. Some artifacts might be generated below the bone structure when the ultrasound probe position is close to the bone surface during acquisition.

Figure 1: Ultrasound image of the iliac.

Considering all that, we applied a method based on an open active contour represented as a set of discrete points, initialized as a simple contour placed near the bone surface. This contour is then evolved until it meets the bone surface.

A. Model deformable

We previously proposed in [1, 2] an automatic bone surface detection method for single ultrasound images based on a deformable model. The method also includes a region-based energy for detecting the local image contrast, and an intensity-based a posteriori point selection to keep only the real bone points.

Model deformable: active contours are generally modeled as the sum of an internal energy, which imposes the regularity of the curve, and an external energy, which attracts the contour toward significant image features. The segmentation is achieved by minimizing the following total energy function [3]:

E(s) = ∫_Ω (α|v'(s)|² + β|v''(s)|²) ds + ∫_Ω P(v) ds        (1)

where the first integral is the internal energy and the second the external energy; v'(s) and v''(s) represent the derivatives of the model, and P(v) is an image potential associated with the image force. We proposed an open active contour model with two free extremities to make it move easily. From the given initial contour, the model is evolved to find the minimum energy. The minimization is estimated via a finite element method. For each step of the time discretization, the evolved snake contour is updated using this equation:

V_t = (τA + I)⁻¹ (V_{t−1} + P(V_{t−1}))        (2)

where V_t is the evolved contour at discretized time step t, and A and I are a pentadiagonal matrix expressing the contour rigidity and the identity matrix, respectively. Implementing the snake with this numerical scheme leads to a pentadiagonal banded symmetric positive linear system, whose solution can be computed using an LU decomposition of (τA + I) that needs to be computed only once.

Regional energy integration: in the classical snake, P(v) is defined as the negative of the image gradient, to stop the evolution at the image border. In our case of bone detection from ultrasound images, we need to integrate the local intensity variation, so that the contour detects only bright-to-dark intensity transitions, and not the opposite contrast. The final contour should be placed approximately in the middle of the bright intensity stripe. To handle this local intensity change automatically, we propose to integrate a regional energy around the evolving contour, defined as the difference of mean intensities between the regions above (V_up), below (V_down) and at the considered location (V_middle), as illustrated in Figure 2:

Dif(v_i) = 2·V_middle − V_down − V_up        (3)

If the difference is negative, a penalization is applied:

E_regional = k          if Dif(v_i) < 0
E_regional = Dif(v_i)   otherwise        (4)

Figure 2: Regional energy definition.
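A minimal sketch of the semi-implicit update of Eq. (2) follows, assuming a 1-D open snake with membrane weight α and thin-plate weight β. The placeholder force and all names are ours; the paper factors (τA + I) once with an LU decomposition, while for brevity we precompute its inverse.

```python
import numpy as np

# Hedged sketch of the semi-implicit snake update of Eq. (2):
#   V_t = (tau*A + I)^(-1) (V_{t-1} + P(V_{t-1}))
# A is the pentadiagonal stiffness matrix of a 1-D open snake.

def stiffness_matrix(n, alpha=0.1, beta=0.05):
    """Pentadiagonal matrix from membrane (alpha) and thin-plate (beta) terms."""
    D2 = np.zeros((n, n))  # second-difference operator
    for i in range(n):
        D2[i, i] = -2.0
        if i > 0:
            D2[i, i - 1] = 1.0
        if i < n - 1:
            D2[i, i + 1] = 1.0
    return -alpha * D2 + beta * (D2 @ D2)

def make_stepper(n, tau=1.0):
    """Build (tau*A + I) once and reuse its inverse for every iteration."""
    A = stiffness_matrix(n)
    M_inv = np.linalg.inv(tau * A + np.eye(n))  # stands in for the LU factor
    def step(V, force):
        return M_inv @ (V + force(V))
    return step

# toy run: a flat contour pulled by a constant downward force
n = 20
step = make_stepper(n)
V = np.zeros(n)
for _ in range(50):
    V = step(V, lambda v: np.full(n, -0.1))
print(V.mean())
```

Because the system matrix never changes between iterations, factoring it once (as the paper does with LU) amortizes the cost over all time steps.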
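Anticipating the gradient vector flow force of Section II.B, the generalized GVF iteration with the proposed weights g = e^(−|∇f|/K) and h = 1 − g can be sketched as follows. The periodic-border Laplacian, the time step and the value of K are our illustrative assumptions, and f stands for an edge map of the image.

```python
import numpy as np

# Hedged sketch of the generalized GVF iteration:
#   d(gvf)/dt = g(|grad f|) * laplacian(gvf) - h(|grad f|) * (gvf - grad f)
# with g = exp(-|grad f|/K) and h = 1 - g.

def gvf_field(f, K=0.1, dt=0.2, iters=100):
    fy, fx = np.gradient(f.astype(float))   # edge-map gradient
    mag = np.hypot(fx, fy)
    g = np.exp(-mag / K)                    # smoothing weight
    h = 1.0 - g                             # data weight
    u, v = fx.copy(), fy.copy()             # initialize with the gradient
    for _ in range(iters):
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u = u + dt * (g * lap_u - h * (u - fx))
        v = v + dt * (g * lap_v - h * (v - fy))
    return u, v

# toy edge map: a single bright horizontal stripe
f = np.zeros((32, 32))
f[16, :] = 1.0
u, v = gvf_field(f)
```

Away from the stripe the gradient magnitude is zero, so g is close to 1 and the field diffuses smoothly, extending the capture range; near the stripe h dominates and pins the field to the gradient, which is exactly the behavior used to pull the contour toward the bone edge from either side.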
Deformable Model for Serial Ultrasound Images Segmentation: Application to Computer Assisted Hip Athroplasty
The final external energy is then incorporated to the contour actives as the multiplication result between this regionbased measurement and the original gradient term.
E 'regional
Eregional u Eexternal
g ( f )
e
h ( f ) 1 e
(5)
1075
f
K
f K
B. Gradient Vector Flow (GFV) In order to simplify the snake initialization and avoid local minimum solution, in previous implementation we followed Cohen [4] who proposed an additional force in the contour normal direction. This force pushes the contour evolved upward since the initial contour is placed below the edge to be detected from ultrasound image. In the context of serial images segmentation, we propose to use the segmentation result of the previous image as the initial contour of next image segmentation process. It is possible because the bone positions in two consecutive ultrasound images are normally not so distance. This approach will accelerate the overall segmentation method, since contour active requires only some amount of iterations to find the bone surface. Unfortunately, this initialization technique might give false segmentation result when the initial contour from previous image placed above the desired bone surface. In such position, the contour active will never find the correct result situated below, because the additional force push always upward. To overcome this drawback, we proposed an adaptive additional force that capable of pulling the evolved contour toward the edge, even its position is above. This additional force is derived from image Gradient Vector Flow (GVF) [5, 6] as an energy that regularizes the potential gradient in order to improve active contours facing the weak image intensity and boundary concavities. GFV improves active contour to converge to long, thin boundary indentations and extend the capture range. In our context, this force is integrated to pull the contours toward the edge to be detected where ever its position is. This method replaces the external force term P (v) with a gradient vector field, gvf (I ) , derived from the equilibrium state of this partial differential equation:
gvf t
g ( f ) 2 gvf h( f )( gvf f )
(5)
The first term in that equation is referred to as smoothing term since it produces a smoothly varying vector field. The second term ( gvf f ) is referred to as the data term, since it encourages the vector field gvf (I ) to be close to f . The weight functions g(.) and h(.) are applied to the both of smoothing and data terms. We propose the following weight functions:
_______________________________________________________________
Figure 3: Gradient Vector Flow of the image in figure 1. Using these weight functions, GVF field will conform to the distance map gradient near the relevant features, but vary smoothly away from them. The constant K determines the extent of the field smoothness and the conformity gradient. This additional force gives the desired edge a capability to pull the contour active downward when it is situated above, without loss its capability to push the contour upward in the other situation. GVF calculation is a time and memory consuming process. To reduce the needed computation time and memory for computing of this force, we proposed to calculate only in the narrow band area around the evolved contour. We expect that the initial contour (comes from previous segmentation) is situated in this defined narrow band. This is the often the case in the images acquired serially, due to the similarity of two consecutives images of this type. C. Posteriori selection. Since the active contour initialization goes from one side to the other of the image, and considering that the real bone contour does not always do so, we need to perform a posterior point selection procedure. Amongst all points on the final contour, only those statistically having high enough intensity are retained. D. Implementation issue To integrate both of two methods effectively, but at the same time respect the operating room time constraints, we perform this segmentation in offline manner instantaneously after the images acquisition. For the first images, the addi-
IFMBE Proceedings Vol. 23
_________________________________________________________________
1076
A. Alfiansyah, K.H. Ng and R. Lamsudin
tional force is fixed like a balloon force in the vertical direction. When convergence is reached, we start the segmentation process using the previous contour as initialization, with the image energy and GVF calculated in a range of 15 pixels around the contour. For each image, we evolve the snake for a fixed number of iterations without convergence detection; in our experience, 70 iterations are sufficient to bring the contour to the desired edge. Once all of the images have been segmented, we perform the a posteriori selection to detect the real points on the bone surface.

III. VALIDATION

To validate the segmentation method quantitatively, we use the approach presented by Chalana [7], which compares the difference between manual and automatic segmentations. Measured as the Mean Sum of Distances (MSD), the method calculates the mean distance between closest points on the two compared curves as the error measurement. The method was validated on three fresh cadavers and two healthy subjects (25 images each) by comparing the automatic segmentation results with the manual ones. The images were acquired in the hip bone area, as we are particularly interested in applying the method to hip arthroplasty. The results of this validation are presented in Table 1.

Table 1 MSD between automatic and manual segmentation (in mm)

Subject                         MSD          Max
Cadaver iliac wing (3 sets)     0.46±0.39    1.34
Cadaver symphysis (3 sets)      0.57±0.43    0.82
Cadaver ischium (3 sets)        0.85±0.54    1.40
Cadaver knee (3 sets)           0.53±0.30    0.81
Healthy iliac wing (2 sets)     0.56±0.89    1.34
Healthy symphysis (2 sets)      0.63±0.22    0.90

From this table, we can see that almost all of the segmentation results in the acquisition areas relevant to total hip surgery are acceptable as input for computer assisted surgery. On the ischium, the error is rather large due to the poor image quality in this anatomical area. Although the method is not intended to perform real-time bone contour extraction, on average it needs less than 1 second to produce the final result for each image. Despite its capability to describe the segmentation error precisely, MSD is also sensitive to errors in the manual segmentation used as the gold standard. Furthermore, serial acquisitions contain a large number of images, so manual segmentation becomes a hard, tedious and labor intensive task for the operators. Another error source is the inter- and intra-operator variation during segmentation; in further work, we plan to investigate this variability to validate the segmentation method.

IV. CONCLUSIONS

We presented in this paper a segmentation method applied to serially acquired ultrasound images for computer assisted orthopedic surgery. The segmentation result is then used as enhanced input for geometry-based registration between CT scan and ultrasound images for total hip arthroplasty. The method extracts the bone surface from ultrasound images based on a deformable model (snake) integrating local intensity variation information around the evolved contour. We also proposed an adaptive additional force calculated from the image gradient vector flow in a narrow band around the contour. In this serial image segmentation, we consecutively use the segmentation result of the previous image as input for the next image segmentation. Finally, we perform a post-treatment on the final contour to select exclusively the real points situated on the bone surface; this point selection is based on an intensity criterion, so that we only keep the points with strong intensity among the points on the final contour. Validated on several different objects, we found that the precision of this method is acceptable and that it is fast enough for real clinical application in computer assisted surgery.

REFERENCES

1. Alfiansyah A., Streinchenberger R., Kilian P., Bellemare M.E., Coulon O., "Automatic segmentation of hip bone surface in ultrasound images using an active contour," in: CARS 2006 Computer Assisted Radiology and Surgery, June 2006.
2. Alfiansyah A., "Integration of intra-operative ultrasound images in Computer Assisted Orthopedic Surgery", Ph.D. Dissertation, Universite de la Mediterranee, 2007.
3. M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models", International Journal of Computer Vision, 1:321-331, 1988.
4. Laurent D. Cohen, "On active contour models and balloons", Computer Vision, Graphics, and Image Processing: Image Understanding (CVGIP:IU), 53(2):211-218, March 1991.
5. C. Xu and J. L. Prince, "Snakes, shapes, and gradient vector flow", IEEE Transactions on Image Processing, 7(3):359-369, 1998.
6. C. Xu and J. L. Prince, "Generalized gradient vector flow external forces for active contours", Signal Processing, 71(2):131-139, December 1998.
7. V. Chalana and Y. Kim, "A methodology for evaluation of boundary detection algorithms on medical images", IEEE Trans. Med. Imaging, 16(5):642-652, 1997.

Author: Agung ALFIANSYAH
Institute: Islamic Indonesian University / University of Malaya
Email: [email protected]
Bone Segmentation Based On Local Structure Descriptor Driven Active Contour

A. Alfiansyah1,2, K.H. Ng1 and R. Lamsudin3

1 Faculty of Industrial Engineering, Islamic Indonesian University, Indonesia
2 Department of Radiology, University of Malaya, Malaysia
3 Faculty of Medicine, Islamic Indonesian University, Indonesia
Abstract — In this paper we present a method to segment bony structures from CT scan data using an active contour guided by an image local descriptor derived from the 3D Hessian matrix, which encodes important local shape information. The eigenvalue decomposition of the 3D Hessian matrix measures the maximum changes in the normal vector (gradient vector) of the underlying intensity iso-surface in a small neighborhood. Thus, the eigenvalues of the Hessian matrix can be exploited to describe locally whether the underlying iso-surface behaves like a sheet, a tube, or a blob. Inspired by this fact, we propose a novel filtering method capable of estimating a sheet-ness measure that enhances the bony structures in the image while removing noise. We choose a segmentation method based on a geometric active contour for its robustness against the noise often present in images, and for its capability to handle topological changes automatically. To incorporate this Hessian based local structure descriptor, we use the sheet-ness measure to drive the surface evolution in the geodesic active contour toward sheet-like structures: the measure is maximal for sheet-like regions, lower for tube-like regions, and zero for the others. We do not define a tube-like elimination term, as the curved ends of bone structures have a behavior that is both sheet-like and tube-like. The active contour is implemented by following the surface evolution using a level set implementation in a narrow-band area around the evolved surface, to make the computation faster and more efficient.

Keywords — CT scan image, segmentation, deformable model, Hessian filter, local descriptor.
I. INTRODUCTION

Segmenting bony structures from Computed Tomography (CT) images plays an important role in computer assisted surgery, not only for reconstructing three dimensional models for intra-operative navigation, but also for pre-operative surgical planning. Methodologically, this segmentation is also important for feature based registration, where the CT scan is commonly used as pre-operative data. In most conventional techniques, the segmentation of such data is done by applying a threshold value for bony anatomy in the CT scan. To enhance the result, this method is then followed by large connected component detection or sometimes a manual post-processing. Segmenting bone by thresholding is a fairly successful procedure, since the CT values for bone are higher than those of the surrounding soft tissues. Although the CT value for bone is well known (as a Hounsfield value) and can be considered constant across acquisitions thanks to the pre-calibration of the imaging modality, a discrimination based on these values does not always work well in practice for all areas of bony anatomy under all patient conditions. In orthopedic surgery, CT images are often polluted by noise coming from metallic objects previously implanted in the patient. Furthermore, different anatomical bones are often situated near each other, which makes the segmentation give false results.

In this paper, we present a segmentation algorithm for bony anatomy extraction using a geodesic active contour. To enhance this deformable model, we apply a local descriptor based on a second-order local shape operator. Such a local descriptor is useful for identifying anatomical bone structures, as they have a specific geometrical form close to a sheet or a tube. Our proposed filter is also capable of reducing the noise present in the image by detecting local structures different from those of anatomical bone.

The remainder of this paper is organized as follows: Section 2 introduces the proposed method, including the local structure descriptor based on the image's Hessian and a brief theory of the geodesic active contour. The strategies for integrating the filter with the deformable model, as well as our numerical implementation scheme, are also detailed in this section. Section 3 presents experimental results of the application of the method. Finally, Section 4 concludes the paper.

II. METHOD

A. Local structure descriptor

One common approach to analyzing the local behavior of an image I is to consider its Taylor expansion in the neighborhood of a point x0:
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1077–1081, 2009 www.springerlink.com
I(x_0 + \Delta x) \approx I(x_0) + \Delta x^T \nabla I(x_0) + \frac{1}{2} \Delta x^T H(x_0) \Delta x + \ldots

where \nabla I is the gradient vector and H denotes the Hessian matrix. The gradient and the Hessian matrix are usually used separately: the gradient vector is widely used as the normal to an implicitly defined iso-surface, and its magnitude provides edge detection information, while the Hessian matrix encodes the shape information describing how the normal to the iso-surface changes. We want to profit from this local shape information by defining a Hessian matrix based image filter. This is a voxel based filter, meaning that a filter response is calculated for each voxel. In a 3D image I, the second order derivatives of the intensity values form a Hessian matrix at each voxel, calculated as:
H = \nabla^2 I = \begin{bmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{yx} & I_{yy} & I_{yz} \\ I_{zx} & I_{zy} & I_{zz} \end{bmatrix}   (1)
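As a concrete illustration (not part of the original paper), the per-voxel Hessian of equation (1) can be approximated with finite differences; the function name `hessian_3d` and its output layout are our own choices:

```python
import numpy as np

def hessian_3d(volume):
    """Per-voxel 3x3 Hessian of a 3D image, returned with shape (Z, Y, X, 3, 3).

    Second derivatives are approximated by applying np.gradient twice
    (central differences in the interior of the volume).
    """
    volume = np.asarray(volume, dtype=float)
    grads = np.gradient(volume)               # first derivatives [Iz, Iy, Ix]
    H = np.empty(volume.shape + (3, 3))
    for i, gi in enumerate(grads):
        for j, gij in enumerate(np.gradient(gi)):
            H[..., i, j] = gij                # d2I / dxi dxj
    return H
```

In practice the volume is usually smoothed with a Gaussian at a chosen scale before differentiation, so that the derivatives are well posed.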
Then, from this Hessian matrix we can compute three eigenvalues (|λ1| ≤ |λ2| ≤ |λ3|), each of which encodes local shape information. This eigenvalue decomposition measures the maximum changes in the normal vector (gradient vector) of the underlying intensity iso-surface in a small neighborhood. Several previous works have exploited the eigenvalues and eigenvectors of Hessian based analysis: Sato [1] and Frangi [2] independently employed these eigenvalues to design filters for vessel enhancement in 3D medical images, and Sato et al. [3] later generalized their previously introduced concept to enhance tubular, blob- and sheet-like anatomical structures. The relationship between different combinations of the eigenvalues of the Hessian matrix and the corresponding possible 3D local structures can be observed in Table 1.

Table 1 Eigenvalue decomposition of the Hessian matrix for local structure description (|λ1| ≤ |λ2| ≤ |λ3|)

Eigenvalues                          Local structure likeness
|λ1| ≈ |λ2| ≈ 0;  |λ3| >> 0          Sheet-like
|λ1| ≈ 0;  |λ2| ≈ |λ3| >> 0          Tube-like
|λ1| ≈ |λ2| ≈ |λ3| >> 0              Blob-like
|λ1| ≈ |λ2| ≈ |λ3| ≈ 0               Noise-like

For our application, we are particularly interested in applying this local descriptor to segment bony structures: we propose a sheet-ness measure that enhances bone structures and then use it to guide a deformable model that stops at bone boundaries in the image. Thus, at every voxel we determine whether the underlying iso-intensity surface behaves like a sheet. In that case, the eigenvectors corresponding to the near-zero eigenvalues span the plane of the plate structure, and the remaining eigenvector is perpendicular to it. To detect such structures, we first define three ratios, Rsheet, Rblob and Rnoise, that describe how closely an iso-surface resembles each geometric form, so as to differentiate sheet-like structures from the others. They are defined as:

R_{sheet} = \frac{|\lambda_2|}{|\lambda_3|}, \qquad R_{blob} = \frac{\big| 2|\lambda_3| - |\lambda_2| - |\lambda_1| \big|}{|\lambda_3|}, \qquad R_{noise} = \sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}   (2)

We use these three ratios to compute a score measuring the structure sheet-ness, which decides whether a structure should be kept during segmentation, because it has a sheet-like form, or removed, because it is considered a blob-like or noise structure. The structure sheet-ness S is calculated as follows:

S = \begin{cases} 0 & \text{if } \lambda_3 > 0 \\ \exp\!\left(-\frac{R_{sheet}^2}{2\alpha^2}\right) \left(1 - \exp\!\left(-\frac{R_{blob}^2}{2\beta^2}\right)\right) \left(1 - \exp\!\left(-\frac{R_{noise}^2}{2\gamma^2}\right)\right) & \text{otherwise} \end{cases}   (3)

Each ratio in equation (2) plays a different role, depending on the characteristics summarized in Table 2.

Table 2 Properties of the ratios defined from the Hessian eigenvalues for different local structure forms in the image

Ratio      Sheet     Tube         Blob         Noise
R_sheet    0         1            1            Undefined
R_blob     2         1            0            Undefined
R_noise    |λ3|      √2·|λ3|      √3·|λ3|      0

Each term in equation (3) corresponds to a filtering process that enhances a specific geometric structure while removing the others:
- exp(−R_sheet² / 2α²) is the sheet structure enhancement term, which is maximal for sheet-like structures and minimal for the others.
- (1 − exp(−R_blob² / 2β²)) is a term that removes blob and noise structures, as it has zero value for both but a high value for sheets.
- (1 − exp(−R_noise² / 2γ²)) is the background and noise reduction term, which is high only in the presence of structure.
With these terms, we do not define a tube elimination term, so tube-like structures are kept during segmentation. We designed the filter this way because the curved ends of bony structures sometimes behave both tube-like and sheet-like; to accommodate this form, the sheet-ness measure is designed to be maximal for sheet-like voxels, lower for tube-like regions, and zero for other structures. The advantage of this approach is that after the sheet-ness computation we have, at each voxel, a confidence score denoting sheet-likeness; in addition, for high score locations, we can estimate the thickness of the sheet as well as the normal vector to its plane.
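To make the measure concrete, here is a small numpy sketch (our own, not the authors' code) of equations (2)-(3) applied to per-voxel Hessian matrices; the α, β, γ defaults are illustrative only:

```python
import numpy as np

def sheetness(H, alpha=0.5, beta=0.5, gamma=0.5):
    """Sheet-ness score S of Eq. (3) from Hessian matrices H of shape (..., 3, 3)."""
    lam = np.linalg.eigvalsh(H)                        # eigenvalues, ascending by value
    order = np.argsort(np.abs(lam), axis=-1)           # reorder so |l1| <= |l2| <= |l3|
    lam = np.take_along_axis(lam, order, axis=-1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    a1, a2, a3 = np.abs(l1), np.abs(l2), np.abs(l3)
    eps = 1e-12                                        # guards against division by zero
    r_sheet = a2 / (a3 + eps)                          # ratios of Eq. (2)
    r_blob = np.abs(2 * a3 - a2 - a1) / (a3 + eps)
    r_noise = np.sqrt(l1**2 + l2**2 + l3**2)
    s = (np.exp(-r_sheet**2 / (2 * alpha**2))          # Eq. (3)
         * (1 - np.exp(-r_blob**2 / (2 * beta**2)))
         * (1 - np.exp(-r_noise**2 / (2 * gamma**2))))
    return np.where(l3 > 0, 0.0, s)                    # S = 0 where lambda3 > 0
```

An ideal bright sheet (eigenvalues close to 0, 0, −t with t large) scores near 1, while an ideal blob (all eigenvalues equal) scores 0, matching Table 2.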
B. Geodesic Active Contour

Motivated originally in large part by the classical parametric snakes introduced by Kass et al. [4], deformable models are nowadays widely used for segmentation in many computer vision applications. The approach has been elaborated extensively by a large body of research in medical image analysis and computer vision, and the corresponding literature is very rich. We are specifically interested in a variant known as the geodesic active contour, which extends the original model to naturally handle changes in topology due to the splitting and merging of contours during evolution. We begin by describing the active contour itself; the local descriptor is integrated later on.

Let C(p, t) be a family of closed curves, where t parameterizes the family and p a given curve, with 0 ≤ p ≤ 1. We assume that C(0, t) = C(1, t), and similarly for the first derivatives, since the curves are closed. The curve length functional can be defined as:

L(t) = \int_0^1 \left| \frac{\partial C}{\partial p} \right| dp   (4)

By taking the first variation of the length functional, we obtain a curve shortening flow, in the sense that the Euclidean curve length shrinks as quickly as possible as the curve evolves:

\frac{\partial C}{\partial t} = \kappa \vec{N}   (5)

where κ is the local curvature of the contour and \vec{N} is the inward unit normal. The details of this derivation can be found in [5]. Because of the intrinsic property of being closed, a curve evolving under the curve shortening flow (5) will continue to shrink until it vanishes. By adding a constant ν, which we will refer to as the "inflation term", the curve tends to grow, counteracting the effect of the curvature term, when ν is negative [6]:

\frac{\partial C}{\partial t} = (\kappa + \nu) \vec{N}   (6)

The image influence can be introduced into the above framework by changing the ordinary Euclidean arc length along the curve C, given by:

ds = \left| \frac{\partial C}{\partial p} \right| dp   (7)

to a geodesic arc length:

ds_\phi = \phi \left| \frac{\partial C}{\partial p} \right| dp   (8)

by multiplying by a conformal factor φ, where φ = φ(x, y) is a positive differentiable function defined from the given image I(x, y). We now take the first variation of the geodesic curve length functional:

L_\phi(t) = \int_0^1 \phi \left| \frac{\partial C}{\partial p} \right| dp   (9)

and reach a new evolution equation combining both the internal property of the contour and the external image force:

\frac{\partial C}{\partial t} = \phi \kappa \vec{N} - (\nabla \phi \cdot \vec{N}) \vec{N}   (10)
As in equation (6), we also add an inflation term to this curve evolution model:

\frac{\partial C}{\partial t} = \phi (\kappa + \nu) \vec{N} - (\nabla \phi \cdot \vec{N}) \vec{N}   (11)
More details and explication of geodesic active contours can be found in the work of Caselles et al. [7, 8], Kichenassamy et al. [6], and Yezzi et al. [9].

C. Integration scheme

The generic numerical implementation of the geodesic active contour of equation (11) using the level set formulation [10, 11], which has now become standard, is:

\frac{\partial \psi}{\partial t} = \phi \left( \operatorname{div}\!\left( \frac{\nabla \psi}{|\nabla \psi|} \right) + \nu \right) |\nabla \psi| + \nabla \phi \cdot \nabla \psi   (12)

where ψ is the level set function, usually initialized as a signed distance function and updated at each time step, and div(∇ψ/|∇ψ|) is essentially the curvature κ of the level sets of ψ. Classically, φ is associated with a decreasing function of the gradient intensity, having a value near 0 in regions with high gradient and 1 in regions with constant intensity; this value stops the implicit active contour from evolving at image boundaries or salient features. To incorporate the Hessian filter into the deformable model, we modify φ so that it depends not only on the image gradient, but also on the sheet-ness measure defined previously. Thus, for each voxel in equation (12), the function φ is redefined as:

\phi = S \, e^{-\left( |\nabla I| / K \right)^2}   (13)

where S denotes the sheet-ness measure calculated from equation (3) and ∇I the image gradient.

D. Implementation issue

To make the computation efficient, the proposed active contour is implemented using the zero level set of a signed distance function in a narrow band around the evolved contour. The derivative term ∇ψ is calculated using central differences, as ψ is relatively smooth; for ∇φ, we apply a second-order essentially non-oscillatory (ENO) scheme to capture the image gradient, which is sometimes sharp. The method is initialized with a surface obtained by binarising the image with the threshold value of bony anatomy in CT; this result is then enhanced by large connected component detection to remove the remaining speckle in the binary volume.

III. EXPERIENCES

At this moment the validation has been performed qualitatively. In evaluating the segmentation results, we have particularly focused on CT hip bone images for our application in computer assisted total hip arthroplasty. We used two CT images with different conditions: a clean one and a noise polluted one. For our application context, the method segments the CT image much better than conventional binary segmentation of bone, which sometimes misses parts of the bone and is incapable of removing the residual noise caused by metallic objects in the CT data of patients with prostheses implanted during previous surgery.

IV. CONCLUSIONS

We proposed a segmentation method which combines the capability of a Hessian filter to detect specific local structures in images with a geodesic active contour, in order to segment structures having a sheet-like form. We introduced a generalization of the sheet-ness measurement derived from the Hessian shape operator that is robust enough against the noise present in the image. This sheet-ness operator is directly integrated into the image energy of the geodesic active contour. In current work, we are not only validating the method on different CT image data sets, but also integrating this Hessian filter into the deformable model more tightly, by taking into account the direction of the sheet-ness measure and its normal vectors.

REFERENCES
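A minimal numpy sketch (ours, not the authors' implementation) of one explicit update of equation (12) in 2D; it uses central differences throughout and omits the ENO upwinding and narrow-band bookkeeping described above:

```python
import numpy as np

def level_set_step(psi, phi, nu=0.0, dt=0.2):
    """One explicit Euler step of
    psi_t = phi * (div(grad psi / |grad psi|) + nu) * |grad psi| + grad phi . grad psi
    """
    eps = 1e-8
    gy, gx = np.gradient(psi)                      # grad psi
    norm = np.sqrt(gx**2 + gy**2) + eps            # |grad psi|
    ky, _ = np.gradient(gy / norm)                 # divergence of the unit normal,
    _, kx = np.gradient(gx / norm)                 # i.e. the curvature of the level sets
    py, px = np.gradient(phi)                      # grad phi (advection term)
    return psi + dt * (phi * ((kx + ky) + nu) * norm + px * gx + py * gy)
```

With phi equal to 1 everywhere and nu = 0 this reduces to curve shortening flow, so a circular zero level set shrinks, as expected from equation (5).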
1. Sato Y. et al., "Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images", Medical Image Analysis, vol. 2, no. 2, pp. 143-168, 1998.
2. A. F. Frangi, W. J. Niessen, K. L. Vincken, and M. A. Viergever, "Multiscale vessel enhancement filtering", Lecture Notes in Computer Science, vol. 1496, pp. 130-138, 1998.
3. Sato Y. et al., "Tissue classification based on 3D local intensity structures for volume rendering", IEEE Transactions on Visualization and Computer Graphics, vol. 6, no. 2, pp. 160-180, 2000.
4. Kass M., Witkin A., and Terzopoulos D., "Snakes: Active Contour Models", International Journal of Computer Vision, 1:321-331, 1988.
5. Tannenbaum A., "Three Snippets of Curve Evolution Theory in Computer Vision", Mathematical and Computer Modelling, 24, pp. 103-119, 1996.
6. Kichenassamy S., Kumar A., Olver P., Tannenbaum A., and Yezzi A., "Conformal Curvature Flows: From Phase Transitions to Active Vision", Archive for Rational Mechanics and Analysis, 134, pp. 275-301, 1996.
7. V. Caselles, F. Catte, T. Coll, and F. Dibos, "A geometric model for active contours", Numerische Mathematik, 66:1-31, 1993.
8. V. Caselles, R. Kimmel, and G. Sapiro, "On geodesic active contours", International Journal of Computer Vision, 22(1):61-79, 1997.
9. Yezzi A., Kichenassamy S., Kumar A., Olver P., and Tannenbaum A., "A Geometric Snake Model for Segmentation of Medical Imagery", IEEE Trans. on Medical Imaging, 16, pp. 199-209, 1997.
10. Osher S. and Fedkiw R., "Level Set Methods and Dynamic Implicit Surfaces", Springer-Verlag, 2003.
11. Sethian J., "Level Set Methods and Fast Marching Methods", Cambridge University Press, 1999.

Author: Agung ALFIANSYAH
Institute: Islamic Indonesian University / University of Malaya
Email: [email protected]
An Acoustically-Analytic Approach to Behavioral Patterns for Monitoring Living Activities

Kuang-Che Liu1, Gwo-Lang Yan2, Yu-Hsien Chiu1, Ming-Shih Tsai3, Kao-Chi Chung4

1 ICT-Enabled Healthcare Program, Industrial Technology Research Institute, Tainan, Taiwan
2 Department of Computer Science and Information Engineering, Southern Taiwan University, Tainan, Taiwan
3 Potz General Hospital, Chiayi, Taiwan
4 Institute of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan
Abstract — Risk prevention and alarms are crucial for the home care of aged people who live alone. In this paper, an acoustically-analytic approach is proposed to extract behavioral patterns for modeling and monitoring living activities. An experimental environment was established for collecting sound tracking data of behaviors in daily activities. Each sound tracking recording was transcribed into the corresponding sound event sequence by caregivers and psychologists. A living activities mining algorithm is applied to extract meaningful sequences of sound events as behavioral patterns, and the K-means algorithm is then adopted to classify events into several fuzzy clusters related to living activities for modeling behaviors in daily life. An experimental database consisting of 150 sound tracking recordings was established by collecting three guided behaviors (getting out of bed to drink water, going to the toilet, and watching the fish jar) from 5 subjects over two weeks. 476 meaningful sequences of sound events were explored and clustered into 32 quasi-activities, which were further condensed into 5 kinds of behavioral patterns for modeling living activities. The preliminary results show the potential for modeling and proactively detecting abnormal behaviors or changes in the aged.
Keywords — Aged people, living activities, behaviorism

I. INTRODUCTION

Advances in medical technology and living quality have extended human life globally. According to an official report from the United Nations, the percentage of aged people in the population reached 7% in 2004 and is officially estimated to reach 21% by 2050 [1]. With an aging society, aged care and its related services are becoming common, and concerns about the safety and quality of life of aged people are increasingly important. To improve aged care and aged people's quality of life, applying well-developed information and communication technology (ICT) to aged care services is the trend. Monitoring the living activities of aged people can prevent accidents and increase safety in their daily lives [2, 3, 4]. Currently, the most common approaches to monitoring the safety of aged persons are via wireless networks [5, 6]: patients wear portable wireless sensors that monitor their daily safety, and if an abnormal condition such as dizziness, heart attack or a fall occurs, the information is reported to healthcare staff or nursing stations to provide first aid or further assistance. However, this approach does not work if the patients are unwilling to wear portable wireless sensors. In this research, an acoustically-analytic approach is proposed to extract behavioral patterns for modeling and monitoring living activities. A living activities mining algorithm, which utilizes an event matching algorithm and the K-means algorithm [7], is proposed to process the sequences of sound events and explore living activity models. The discovered living activity models provide references about the living patterns and behavioral prediction of aged people. These behavior models can also assist caregivers at home or in care centers to prevent accidents effectively and to increase care quality and satisfaction.

II. MATERIALS AND METHODS

Humans' behaviors express their inner thoughts and desires. By monitoring and observing humans' daily actions, a personal behavior model can be derived that is effective for predicting future behaviors. According to the behaviorism of Edwin Ray Guthrie [8], humans have a tendency to duplicate successful movements. This process is called canalization [9], a key factor in habit formation: when a person encounters the same issue daily, he or she falls back on the customary movement to respond and carry out the same task. Behaviors in daily life produce sounds, and these sounds are appropriate sources for behavior mining because of the privacy issues of video: for example, people feel humiliated and that their privacy is violated if they are video-monitored while taking a shower or toileting. In this research, an approach to mining behaviors from sound data is proposed. Figure 1 shows the framework of
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1082–1085, 2009 www.springerlink.com
this research. It consists of three phases: development of a pilot environment for collecting sound tracking data, tagging of the sound tracking data into sequences of sound events, and the living activities mining and behavior model exploring algorithms.
Fig. 1 Research framework of the proposed system for mining and modeling the aged people's behaviors (behavioral sound tracking collection → sound tracking database → tagging system operated by experts → sequences of sound event database → living activities mining algorithms → behavioral patterns)

Fig. 2 Sound tracking data collection environment (M: placement of microphone; bedroom with bed, lavatory with toilet, kitchen, and fish jar)
A. Collection of behavioral sound tracking

A sound tracking data collection environment was established to simulate the real bedroom, lavatory and kitchen of aged people's daily living environment; Figure 2 illustrates it. Five omni-directional capacitance microphones, with a sensitivity of -65 dB ± 3 dB at 1000 Hz and a frequency response range of 50-18000 Hz, are set up in proper positions to clearly record the sound data of aged people. The sound data is recorded in wave format with an 8 kHz sample rate and 8 bit resolution. Real-time sound tracking data is sent to the data center over a network built into the ceiling; Figure 3 illustrates the network deployment. The network protocol in this environment is Controller Area Network (CAN) 2.0 with a sample rate of 1 Mbps [10]. Sound data captured by the omni-directional capacitance microphones is transferred to the data server via Ethernet for database establishment. In this research, the sound tracking data collected from three habits of daily life was recorded in this environment for further processing in the second phase.
Fig. 3 CAN network deployment for sound tracking data collection

B. Sound tracking data tagging process

In this phase, the sound tracking data collected in phase A was analyzed according to two different habits of daily life and tagged into the corresponding sequences of sound events. Experts such as caregivers and behavior psychologists were invited to tag the sequences of sound events with possible activities according
to the behavior conditions of the subjects manually. Figure 4 shows the tagging system interface. Those tagged sound event sequences provide information about subjects’ daily behaviors.
Each category discovered from the above algorithm represents a behavioral model. An example of event matching mining string exploring algorithm can be described as: Assumed there are three data blocks: Data block 1: ABABC Data block 2: ABAAC Data block 3: CCBAC The probability of each behavioral pattern discovered from above data blocks is shown as following.
prob( ABA) 0.333 prob( BA) 0.667 With proposed algorithm and set the length constrain is 2, data segment ABA and BA will be explored to represent a meaningful behavioral patterns.
Fig. 4 Sound data tagging system interface

C. Mining and Modeling Process

Owing to canalization, individual living activities are repeated in daily life. In this phase, a living-activities mining algorithm, which utilizes an event matching algorithm and the K-means algorithm [11], is proposed to explore the sequences of sound events that are representative of daily activities and to classify those sound event sequences into several categories according to their similarities. The steps of the living-activities mining algorithm are:

Step 1: Apply the event matching algorithm to explore meaningful behavioral patterns B1~BM from the sound event sequences S1~SN, which are trivial sound segments tagged in phase B.

Step 2: Set a length constraint for filtering the behavioral patterns from Step 1 to form B1~BO.

Step 3: Calculate the probability of each behavioral pattern by

  prob(B_i) = N(B_i) / C(N, 2),

then set a probability constraint of over 0.05 for filtering the behavioral patterns from Step 2 to B1~BP.

Step 4: Apply the K-means algorithm to classify the behavioral patterns from Step 3 into M categories according to the similarities of the behavioral patterns.

III. RESULTS AND DISCUSSION

In the experiment, five subjects over sixty years old were brought into the sound data collecting environment to collect the testing database. They were guided by the working staff through the two instructed behaviors of drinking water and toileting, in an instruct-to-action manner. 150 sound tracks were recorded and tagged into sequences of sound events by the caregivers and behavioral psychologists for the different behaviors. 11,175 sequences of sound events were tagged in total. After processing with the living-activities mining algorithm, 476 meaningful sequences of sound events were explored. Those sequences of sound events were processed by the living-activities extraction algorithm to conclude behavioral models. The experimental results are shown in Table 1.

Table 1 Expected and experimental classified results

Expected Result    | Experimental Result
Drinking water     | Drinking water on bed; Drinking water in kitchen
Watching fish jar  | Watching fish jar in kitchen
Toileting          | Washed hands after finished; Finished with lavatory door closed; Finished with lavatory door open
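Steps 3 and 4 above can be sketched in Python. This is an illustrative reading, not the authors' implementation: the pattern names and counts are invented, and a toy k-means stands in for the clustering of behavioral-pattern features.

```python
import math
import random

def pattern_probability(count: int, n_sequences: int) -> float:
    """Step 3: prob(B_i) = N(B_i) / C(N, 2)."""
    return count / math.comb(n_sequences, 2)

def filter_patterns(counts: dict, n_sequences: int, threshold: float = 0.05) -> dict:
    """Keep only patterns whose probability exceeds the 0.05 constraint."""
    return {p: c for p, c in counts.items()
            if pattern_probability(c, n_sequences) > threshold}

def kmeans(points, k, iters=50, seed=0):
    """Step 4: minimal k-means over small feature vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[idx]
                   for idx, cl in enumerate(clusters)]
    return centers, clusters

# With N = 150 recorded tracks, C(150, 2) = 11175 candidate pairings,
# matching the sequence count reported in the experiment.
counts = {"door+flush": 700, "cup+water": 650, "rare-noise": 30}
kept = filter_patterns(counts, n_sequences=150)  # drops "rare-noise"
```

Under these invented counts, only the two frequent patterns survive the 0.05 filter; the survivors would then be embedded as feature vectors and passed to `kmeans`.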
Figure 5 shows a state diagram example for one of the selected behavioral models of daily activities, toileting, which includes the states of door-opening, flushing, and door-closing. These states would be executed in sequence if no special condition occurred. However, different outcomes of action and sound would result from different conditions. A state transfer diagram can therefore be used for modeling human actions and for exploring action changes and accidents.

An Acoustically-Analytic Approach to Behavioral Patterns for Monitoring Living Activities
IFMBE Proceedings Vol. 23

Fig. 5 A state diagram example for daily activities - toileting

In this research, three expected behavioral models of living activity should be explored, but six different sound traces were found in the experiment. This is because the subjects have action variations in their daily lives. Although the subjects were guided by the same instructions, they would perform different actions, which produced different sound event sequences. These six different experimental results were further analyzed, and it was found that they could actually be grouped into three kinds of behaviors, as shown in Table 1. Those results gave us great encouragement that the proposed algorithm is feasible for exploring living activities correctly.

IV. CONCLUSIONS

In this research, an acoustically-analytic approach is proposed to extract behavioral patterns for modeling and monitoring the living activities of aged people. A sound tracking data collecting environment was established to simulate the real bedroom, lavatory and kitchen of aged people's daily living environment. The sound data was analyzed for two different habits of daily life and tagged into the corresponding sequences of sound events by experts. Then the living-activities extraction algorithm was adopted to analyze the sequences of the sound events to explore living activities. These living activities were further induced into individual behavior models, which can be applied to raise alarms for abnormal and dangerous conditions in aged care. The experimental results show that the proposed living-activities mining algorithm could explore living activity models correctly and effectively. This result also shows that detecting aged people's behavior via sound traces is feasible. In the future, the proposed algorithm needs more unconstrained sound data to provide more effective and reliable activity modeling.

ACKNOWLEDGMENT

The authors would like to thank the National Science Council, Republic of China, for financial support of this work under Contract No. NSC 96-2221-E-218-051-MY3.
REFERENCES

1. UN Population Division at: http://www.un.org/esa/population
2. Y. T. Gross, et al., "Why do they fall? Monitoring risk factors in nursing homes", J Gerontol Nurs, 1990, 16(6): p. 20-25.
3. A. H. Myers, et al., "Risk factors associated with falls and injuries among elderly institutionalized persons", Am J Epidemiol, 1991, 133(11): p. 1179-1190.
4. L. Z. Rubenstein, K. R. Josephson, A. S. Robbins, "Falls in the nursing home", Ann Intern Med, 1994, 121(6): p. 442-451.
5. J. Yao, "A Wearable Point-of-Care System for Home Use That Incorporates Plug-and-Play and Wireless Standards", IEEE Transactions on Information Technology in Biomedicine, v 9, n 3, September 2005, p. 363-371.
6. H. Harry Asada, Phillip Shaltis, Andrew Reisner, Sokwoo Rhee, Reginald C. Hutchinson, "Mobile Monitoring with Wearable Photoplethysmographic Biosensors", IEEE Engineering in Medicine and Biology Magazine, 2003, p. 28-40.
7. A. M. Fahim, A. M. Salem, F. A. Torkey, "An Efficient Enhanced k-means Clustering Algorithm", Journal of Zhejiang University Science, Vol. 10, pp. 1626-1633, 2006.
8. S. Smith, E. R. Guthrie, "General Psychology in Terms of Behavior 1921", Nov. 2007, ISBN: 0-548-74745-8.
9. G. Gottlieb, "Experiential Canalization of Behavioral Development: Theory", The American Psychological Association, Inc., Vol. 27, No. 1, pp. 4-13, 1991.
10. "CAN Specification Version 2.0", Robert Bosch GmbH, 1991.
11. L. Rabiner and B. H. Juang (1993), "Fundamentals of Speech Recognition", Prentice-Hall, New Jersey, pp. 125-128.
Implementation of Smart Medical Home Gateway System for Chronic Patients Chun Yu1, Jhih-Jyun Yang2, Tzu-Chien Hsiao3, Pei-Ling Liu4, Kai-Ping Yao2, Chii-Wann Lin1,5 1
1 Institute of Biomedical Engineering, National Taiwan University, Taiwan, ROC. 2 Department of Psychology, National Taiwan University, Taiwan, ROC. 3 Department of Computer Science/Institute of Biomedical Engineering, National Chiao Tung University, Taiwan, ROC. 4 Institute of Applied Mechanics, National Taiwan University, Taiwan, ROC. 5 Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taiwan, ROC.

Abstract — Because of the rapidly aging population in Taiwan and the trend toward fewer children, people are looking to technical solutions for continuous/intermittent monitoring of vital signs in the home environment and of the interactions between family members. In this study, we have designed and implemented a home medical gateway system to connect the home-care side and the health informatics side. The home-care part provides monitoring of five vital signs and on-line feedback messages. Users are allowed to browse their records and read the received health information (e.g. physical checkup, health education, preventive inoculation, etc.) on the Flash-based interface. This study also evaluated the practicability of the home gateway system with twenty interviewees. The analysis results show positive user feedback on the system, which has high potential to improve patients' quality of life. An example case of an obstructive sleep apnea (OSA) patient has been studied with this system. The result shows that the gateway system can help OSA patients to monitor and improve their sleep quality.

Keywords — Smart medical home, home gateway, telehomecare, telemedicine
I. INTRODUCTION

Being one of the fastest-aging nations in the world, Taiwan is now facing the challenges of a changing society structure, household style, and national expenses on health care. According to the United Nations' definition of population structure, the Aging Population Society is one in which over 7% of the total population is over 65 years old, and the Extra-aging Population Society is one in which over 20% of the total population is over 65 years old. In terms of the population statistics of Taiwan in 2008, the populace over 65 years old is 2.34 million, which is 10.21% of the total population. This already exceeds the threshold of the Aging Population Society [1]. It is therefore necessary to develop the Smart Medical Home to decrease the expenditure on medical services. For those who live in cities and urban areas, the health care facilities can normally be accessed quite efficiently. More emphasis is thus required on the quality of long-term health care, especially for the home care sector of the whole social care system. In view of the fact that the possible causes of deteriorating life quality can be simply age related or as complicated as disease related, the development of the "Smart Medical Home" will have to cover deep thoughts on both humanity and medical needs [2-4]. These include possible disabilities caused by losses of sensory and motor functions, pains and discomforts caused by physical and metabolic changes, and psychological stresses due to self-awareness and lack of knowledge about one's health status and its possible progression. Since 1990, more and more research and development has focused on new technologies which could be applied in home care services [5-7]. There is also much research focused on the cost and social impact of home care services, but real evaluations are still lacking [8, 9]. This study focuses on the design and planning of a smart medical home gateway system which provides convenient home health care services. It integrates several medical care services in the gateway system, including biosignal recording and monitoring, physiological check records, health and hygiene education, and social welfare information. The health care services are designed for chronic patients, the elderly, and parents. Users can manage their families' and their own health, and obtain health information and welfare from public resources more conveniently. In order to evaluate the practicability of the home gateway system, this study investigated and analyzed the user feedback by paper survey.

II. METHOD

A. Architecture of the home gateway system

The assignment of the home gateway system is to transmit the biosignals to the server, to exchange information with the hospital and the social welfare center, and to provide a health manager interface for the user. The architecture of the smart medical home gateway system is shown in figure 1. The biosignals are transmitted to the server through the home gateway.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1086–1089, 2009 www.springerlink.com

The home gateway also provides a security monitoring function, which will send an emergency message for the user. Moreover, the home gateway system transmits health information from public resources and displays it on the user interface developed in this study. The server in this smart medical home gateway system was developed with a MySQL database, which can be edited and modified conveniently through Microsoft Access. The interface was designed and implemented in Flash. The user can access all functions in the user interface through a touch screen.
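The gateway's data path described above (biosignal upload, record browsing) can be sketched as follows; an in-memory SQLite database stands in for the paper's MySQL server, and all table and function names are hypothetical.

```python
import sqlite3
from datetime import datetime

# In-memory SQLite stands in for the paper's MySQL server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vitals (user TEXT, signal TEXT, value REAL, taken_at TEXT)")

def upload_reading(user: str, signal: str, value: float) -> None:
    """Gateway task: forward one biosignal measurement to the server."""
    conn.execute("INSERT INTO vitals VALUES (?, ?, ?, ?)",
                 (user, signal, value, datetime.now().isoformat()))
    conn.commit()

def browse_records(user: str):
    """User-interface task: let a user browse their own records."""
    return conn.execute("SELECT signal, value FROM vitals WHERE user = ?",
                        (user,)).fetchall()

upload_reading("alice", "blood_pressure_systolic", 128.0)
upload_reading("alice", "blood_glucose", 95.0)
```

The same store could back both the Flash user interface and the information exchange with the hospital side.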
In this system, the user can choose the biosignal device following the physician's indication. The safe range of each biosignal should be set to suit the personal health condition.

Table 1 The characteristics of medical devices

Device                 | Biosignal      | Safe range                     | Transmission rate
Blood pressure monitor | Blood pressure | S: 120~140 mmHg, D: 85~95 mmHg | 1 dataset after measurement
Blood glucose monitor  | Blood glucose  | …                              | …
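The per-signal safe-range check that triggers the emergency message can be sketched as below; the threshold numbers are placeholders, not clinical values from the paper.

```python
# Illustrative safe ranges keyed by biosignal; the numbers are
# placeholders, not clinical values from the paper.
SAFE_RANGE = {
    "blood_pressure_systolic": (85.0, 140.0),
    "blood_pressure_diastolic": (60.0, 95.0),
}

def check_reading(signal: str, value: float) -> str:
    """Return 'ok', or an emergency message when a reading leaves its range."""
    lo, hi = SAFE_RANGE.get(signal, (float("-inf"), float("inf")))
    if value < lo or value > hi:
        return f"EMERGENCY: {signal} = {value} outside [{lo}, {hi}]"
    return "ok"
```

Signals without a configured range pass through unchecked, mirroring the idea that the range is set per user under a physician's indication.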
Fig. 1 Diagram of ultrasonic system for data acquisition (block labels: ultrasonic system, transducer, heart/left ventricle, US beam, A-D converter, PC)

…cardiogram. The RF signal in each frame consists of L scan-lines. As in the diagnosis equipment, these scan-lines are aligned according to their own directions in the frame buffer and interpolated to derive the echocardiogram. Creating images from the acquired RF signal, instead of using the echocardiogram generated by the diagnosis equipment, allows us to control the spatial resolution of the resultant images; that is, we are able to obtain echocardiograms with the intended resolution. Fig. 1 illustrates the block diagram of the data acquisition process.

B. Image representation

We handle the echocardiogram derived above as a high-resolution gray-scale raster image. We give here some definitions and representations related to the images used in what follows. Let p = (p_x, p_y) and f(p, i) = f(p_x, p_y, i) denote the pixel at (p_x, p_y) and the pixel value corresponding to the pixel p. An echocardiogram of the i-th frame is represented as a grayscale image, a vector whose elements are the pixel values,

  F_i = ( f(p, i) | p ∈ R_ALL ),   (1)

where R_ALL = ([1, W], [1, H]) denotes the region containing the entire image surface. We denote by R(p, w, h) the rectangular subregion of which p is the center and w and h are the width and height:

  R(p, w, h) = ( [p_x − w/2, p_x + w/2], [p_y − h/2, p_y + h/2] ).   (2)

A sub-image cropped by the subregion R(p, w, h) from the entire image F_i is represented by

  f_i(R(p, w, h)) = ( F_i | R(p, w, h) ) = ( f(p, i) | p ∈ R(p, w, h) ).   (3)

The parallel translation of a pixel is defined as an add operation on vectors,

  p′ = p + q = (p_x, p_y) + (q_x, q_y) = (p_x + q_x, p_y + q_y),   (4)

and the distance between two pixels p and q is obtained by

  d(p, q) = |p − q| = sqrt( (p_x − q_x)² + (p_y − q_y)² ).   (5)

The angle of the line segment connecting p and q is defined as

  θ(p, q) = tan⁻¹( (p_y − q_y) / (p_x − q_x) ).   (6)

C. Connected Multiple ROIs

In the motion tracking method, we should first define an objective function to optimize. A well-defined objective function is required to give a good evaluation of the displacement between two images and to be robust against image noise. Our former research [5] shows that an objective function based on the residual instead of the cross-correlation gives good performance for myocardial motion tracking in echocardiograms. The basic idea of the method in [5] is that elastic links which connect adjacent ROIs can reduce tracking errors due to velocity estimation error. Fig. 2 shows the model of this idea. Connecting multiple ROIs increases the complexity of the objective function.
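The pixel-geometry definitions in eqs. (2) and (4)–(6) translate directly into code; a small sketch (not from the paper; `atan2` is used for eq. (6) to avoid division by zero on vertical segments):

```python
import math

def subregion(p, w, h):
    """Eq. (2): rectangle of width w and height h centred at p = (px, py)."""
    px, py = p
    return ((px - w / 2, px + w / 2), (py - h / 2, py + h / 2))

def translate(p, q):
    """Eq. (4): parallel translation p' = p + q."""
    return (p[0] + q[0], p[1] + q[1])

def distance(p, q):
    """Eq. (5): Euclidean distance between pixels p and q."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle(p, q):
    """Eq. (6): angle of the line segment connecting p and q."""
    return math.atan2(p[1] - q[1], p[0] - q[0])
```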
Y. Maeda, W. Ohyama, H. Kawanaka, S. Tsuruoka, T. Shinogi, T. Wakabayashi and K. Sekioka
Quantitative Assessment of Left Ventricular Myocardial Motion Using Shape–Constraint Elastic Link Model

Let P = {p_1, p_2, …, p_n} and Q = {q_1, q_2, …, q_n} represent the sets of tracking points on the i-th frame and the (i+1)-th frame, respectively. The objective function for P and Q is defined as

  h(P, Q) = Σ_{j=0}^{n} { J(p_j, q_j) + (k/2) (L(p_j) − L(q_j))² },   (7)

  L(p_j) = d(p_{j+1}, p_j) for 1 ≤ j ≤ n−1,  and  L(p_n) = d(p_0, p_n),   (8)

where L(p_j) denotes the distance between the j-th and (j+1)-th tracking points. The parameter k reflects the elastic property of the myocardium and controls the change of distance between tracking points. A large k constrains the tracking points not to change the distance between each other.

Fig. 2 Model of the objective function in the proposed method

The objective function in (7) is based on the assumption that the elastic property of the myocardium is homogeneous. This assumption sometimes resulted in tracking errors in [5]. In order to adapt this assumption to the real property of the myocardium, we modify (7) by introducing heterogeneous elastic parameters:

  h′(P, Q) = Σ_{j=0}^{n} { J(p_j, q_j) + (k_j/2) ( (L(p_j) − L(q_j)) / L(p_j) )² }.   (9)

Each parameter k_j in K = {k_1, k_2, …, k_n} reflects the elastic property of the link corresponding to the tracking point. The proposed method minimizes (9) by using a dynamic programming approach.

D. Shape–Constraint Elastic Link Model

The motion property of the myocardium in the long-axis B-mode scan is different from that in the short-axis scan. For instance, in the long-axis view the shape of the left ventricular endocardium does not change during the systole and diastole cycle, while the cavity of the left ventricle does. Motivated by this property, we introduce the shape–constraint elastic link (SCEL) model into the objective function of motion tracking. The basic concept of the SCEL model is illustrated in Fig. 3. The model considers not only the length of each elastic link but also its angle, and constrains the angle change between two consecutive frames. The objective function incorporating the SCEL model is expressed as

  h″(P, Q) = Σ_{j=0}^{n} { J(p_j, q_j) + (k_j/2) ( (L(p_j) − L(q_j)) / L(p_j) )² + v_j ( θ(q_{j+1}, q_j) − θ(p_{j+1}, p_j) )² },   (10)

Fig. 3 Shape–constraint elastic link model

where the angle change between frames, θ(q_{j+1}, q_j) − θ(p_{j+1}, p_j), is calculated by equation (6), and the v_j are weighting factors which control the importance of the shape constraint in the objective function. At this point, we determined the weighting factors experimentally. For minimization of the objective function (10), we can also employ the dynamic programming approach.

E. Determination of weighting factors k_j and v_j

To track myocardial motion accurately using SCEL, we have to determine a suitable combination of the weighting factors k_j and v_j. These parameters control the level of importance of the image correlation, the distance change between tracking ROIs, and the angle consistency in the objective function (10).
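A minimal sketch of evaluating an SCEL-style objective like eq. (10) is given below. This is an assumption-laden illustration, not the authors' code: it is simplified to an open chain (the closing link of eq. (8), j = n back to p_0, is omitted), and `J` is any caller-supplied residual term.

```python
import math

def objective(P, Q, J, k, v):
    """Sketch of the SCEL objective, eq. (10), over an open chain of links.

    P, Q: lists of (x, y) tracking points on frames i and i+1.
    J:    caller-supplied image-residual term for a point pair.
    k, v: per-link elastic and angle weighting factors."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    def ang(a, b):
        return math.atan2(a[1] - b[1], a[0] - b[0])
    total = 0.0
    for j in range(len(P) - 1):            # link j joins point j and point j+1
        Lp = dist(P[j + 1], P[j])          # link length on frame i
        Lq = dist(Q[j + 1], Q[j])          # link length on frame i+1
        total += (J(P[j], Q[j])
                  + 0.5 * k[j] * ((Lp - Lq) / Lp) ** 2              # elastic term
                  + v[j] * (ang(Q[j + 1], Q[j]) - ang(P[j + 1], P[j])) ** 2)
    return total
```

In the paper the minimization over Q is done by dynamic programming; this sketch only evaluates the objective for a given candidate Q.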
The determination of the weighting factors is quite important; however, it is almost impossible to determine a suitable combination of factors for a particular echocardiogram. For this reason, we determined the values of the factors by the following exploratory experiment. First, three sequences of echocardiograms were selected randomly. The myocardial motion contained in the three sequences was tracked manually. Next, a range of k_j and v_j was defined within which the values do not corrupt the motion tracking.

III. EXPERIMENTS AND RESULTS

We used ultrasonic diagnosis equipment (Hitachi Medical EUB-6500) to acquire ultrasonic RF signals. The signals capture the motion of the left ventricular myocardium in the short- and long-axis views. Ultrasonic data from ten normal subjects were used for verification of the proposed method. High-spatial-resolution echocardiograms were created from the RF signals. Initial tracking points were defined manually on the first frame.

Fig. 4 Examples of tracking results by the proposed and the conventional methods. (a),(c) and (b),(d) are results by the proposed and conventional methods, respectively.

Fig. 4 illustrates examples of the tracking results. In the figure, (a),(c) and (b),(d) show the results obtained by the proposed method (10) and the conventional CMR one (9), respectively. Since we use signals of one heart-beat length, the tracking points are expected to return to the positions of the initial ones. From these results, it is obvious that the proposed method is able to track the myocardial motion more reliably than the conventional one.

Fig. 5 An example of myocardial wall thickening assessment using the motion tracking result

Fig. 5 shows an example of an application of the proposed result. This figure illustrates the change of myocardial thickening during one heart cycle. The myocardial thickening of four myocardial regions, anterior, lateral, posterior and septum, is calculated using the motion tracking result obtained by the proposed method.

CONCLUSIONS

In this paper, we propose a new modality to track myocardial motion from 2-D echocardiograms. The experimental results with clinical subjects show that the performance of the proposed method is empirically superior to that of the conventional methods. Further study topics include (1) further improvement of the tracking accuracy, (2) defining an effective procedure to determine the elastic parameters k_j and v_j in (10), and (3) a detailed performance evaluation with a sufficient number of clinical data.
REFERENCES

[1] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. Proc. DARPA IU Workshop, pages 121–130, 1981.
[2] Y. Chunke, K. Terada, and S. Oe. Motion analysis of echocardiograph using optical flow method. Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, 1:672–677, 1996.
[3] P. Baraldi, A. Sarti, C. Lamberti, A. Prandini, and F. Sgallari. Evaluation of differential optical flow techniques on synthesized echo images. IEEE Trans. Biomed. Eng., 43(3):259–272, 1996.
[4] Michael Sühling, Muthuvel Arigovindan, Christian Jansen, Patrick Hunziker, and Michael Unser. Myocardial motion analysis from B-mode echocardiograms. IEEE Transactions on Image Processing, 14(4):525–536, 2005.
[5] Wataru Ohyama, Masaki Inami, Tetsushi Wakabayashi, Fumitaka Kimura, Shinji Tsuruoka, and Kiyotsugu Sekioka. Automatic tracking for regional myocardial motion by correlation method with connecting multiple ROIs. IEEJ Trans. EIS, 124(10):2079–2086, 2004.
Assessment of Foot Drop Surgery in Leprosy Subjects Using Frequency Domain Analysis of Foot Pressure Distribution Images

Bhavesh Parmar
Ramrao Adik Institute of Technology, Navi Mumbai, India

Abstract — Foot problems in leprosy subjects are of major concern. Quantitative assessment of corrective foot drop surgery, to detect the effectiveness of recovery after surgery over time, is of prime importance. This paper quantifies and distinguishes between the foot pressure images of leprotic subjects preoperatively, at different postoperative durations, and of normal feet. Detailed analysis is performed on walking foot pressure distribution images in the frequency domain. The power ratio (PR) (ratio of the power in the higher spatial frequency components to the total power in the power spectrum) is used to distinguish between the foot pressure image patterns preoperatively and postoperatively over different time durations. Statistical analysis, involving calculation of 'p' using the Welch ANOVA test followed by a post-hoc Dunnett's test and the Kruskal-Wallis test, has been carried out on the mean PR to understand the progress of recovery after surgery from the preoperative state, and its comparison with the normal state, in all the foot sole areas. It is observed that the postoperative stages show increasingly sensitive feet and decreasing PR. These results could help in early detection of recovery and/or defects of surgery, and thereby help orthopedic surgeons take early corrective action for preserving or restoring normal foot function.

Keywords — Foot drop surgery, Foot pressure images, Frequency domain image analysis, Leprosy, Power ratio.
I. INTRODUCTION

Leprosy, a disease as old as mankind, has been a public health problem in many developing countries. Leprosy has always and everywhere been regarded as a special disease. It has stimulated studies in various fields, such as pathology, histology, immunology and rehabilitation. Of these, rehabilitation of the victims has assumed prime importance, as its focus is on the prevention of further advancement of deformities. This paper aims to understand the pressure distribution patterns under the foot soles of leprotic subjects at different stages of cure of the leprotic disorder, at different time durations after surgery, with the new foot pressure parameter PR (ratio of the power in the higher spatial frequency components to the total power in the power spectrum of the foot pressure image) developed by Prabhu et al., 2001 [1]. This work could help orthopedic surgeons in detecting the effectiveness of surgery over time after correction of leprotic feet, and thereby in taking early corrective action for curing the foot disorders of the disease, as well as in the timely functional rehabilitation of leprosy feet. Section II describes the foot pressure measurement system used for acquiring the data and the image processing methodology. Section III describes the frequency domain analysis of walking foot pressure images, together with the results and their analysis. Finally, conclusions and the scope for future extensions are discussed.

II. FOOT PRESSURE MEASUREMENT SYSTEM

The foot pressure measurement system involves the use of an optical pedobarograph developed earlier [2] in the Biomedical Engineering laboratory, IIT Madras. Standing foot pressure measurements give only a fraction of the potential information regarding the functionality of the foot. More valuable information results from the dynamic distribution of pressures and loads, particularly since many abnormalities of the foot only reveal themselves in dynamic (walking) measurements while remaining hidden in a fairly detailed static (standing) analysis [3]. The size of the individual footprint varies, and these footprint images are divided into ten standard regions as per the method indicated in Patil et al. [4]. Each plantar area is obtained manually from the corresponding frames of the distinct phases of the walking cycle, i.e., areas 1 and 2 from heel-strike, area 4 from mid-stance, and areas 5 to 10 from the push-off phases of the walking cycle.
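The phase-to-area assignment described above can be captured in a small lookup (area 3 is not assigned to a phase in the text, so it is omitted here):

```python
# Phase-to-area assignment as described in the text; area 3 is not
# assigned to a phase there, so it is omitted from this lookup.
PHASE_AREAS = {
    "heel_strike": [1, 2],
    "mid_stance": [4],
    "push_off": [5, 6, 7, 8, 9, 10],
}

def areas_for_phase(phase: str):
    """Return the plantar areas taken from the frames of a walking phase."""
    return PHASE_AREAS[phase]
```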
Figure 1 The Fourier spectrum, F(u,v) of an image showing the higher and lower spatial frequency regions.
  LFP = Σ_{D(u,v) ≤ D_0} |F(u,v)|² − |F(0,0)|²   (1)

  HFP = TP − LFP   (2)

  PR = (HFP / TP) × 100   (3)

where TP is the total power in the power spectrum, D(u,v) is the distance of the frequency component (u,v) from the origin, and D_0 is the cutoff separating the lower and higher spatial frequency regions shown in Figure 1.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1107–1111, 2009 www.springerlink.com
Frequency domain analyses of these images corresponding to all the plantar areas are performed, and the PR (ratio of high frequency power to the total power in an image) is calculated using equations (1-3).
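Equations (1)–(3) can be sketched with a direct (slow) 2-D DFT in Python. This is an illustrative reading, not the author's code: the treatment of the DC term in TP is left implicit in the text, so this sketch excludes F(0,0) from both TP and LFP, and it measures D(u,v) from the (0,0) corner without spectrum centering.

```python
import cmath

def power_spectrum(img):
    """Direct 2-D DFT power |F(u,v)|^2 of a small grayscale image (rows of floats)."""
    M, N = len(img), len(img[0])
    P = [[0.0] * N for _ in range(M)]
    for u in range(M):
        for v in range(N):
            s = sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                    for x in range(M) for y in range(N))
            P[u][v] = abs(s) ** 2
    return P

def power_ratio(img, d0):
    """PR = 100 * HFP / TP per eqs. (1)-(3); the DC term F(0,0) is excluded."""
    P = power_spectrum(img)
    M, N = len(P), len(P[0])
    tp = sum(P[u][v] for u in range(M) for v in range(N)) - P[0][0]
    lfp = sum(P[u][v] for u in range(M) for v in range(N)
              if (u * u + v * v) ** 0.5 <= d0) - P[0][0]
    return 100.0 * (tp - lfp) / tp if tp else 0.0
```

A checkerboard-like image, whose non-DC power sits entirely at the highest frequency, yields a PR near 100; a nearly uniform pressure image yields a low PR.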
III. FREQUENCY DOMAIN ANALYSIS

The frequency domain analysis of walking foot pressure light-intensity images is done for 12 leprotic subjects. Comparisons of PR values are made between normal and leprotic subjects before and after corrective foot drop surgery.

A. Results

Figure 2 (a) The walking foot pressure image intensity (spatial) distributions; the directions of the x and y axes are from medial to lateral and from posterior to anterior side, respectively, and (b) the power spectrum after deleting the DC component, in area 2 for a normal subject.

Figure 3 (a) The walking foot pressure image intensity (spatial) distributions; the directions of the x and y axes are from medial to lateral and from posterior to anterior side, respectively, and (b) the power spectrum (after deleting the DC component), in area 2 of the left foot for a leprotic subject before corrective foot drop surgery.

Figures 2(a) and 3(a) show the spatial variation of the intensity distribution in area 2 (lateral heel) for a normal subject and for a leprotic subject (with foot drop and claw toes) before the corrective foot drop surgery. It is observed that the spatial intensity distribution for a leprotic foot is not uniform (Figure 3a) compared to that of a normal foot (Figure 2a). The power spectra of these intensity distributions are shown in Figures 2(b) and 3(b), respectively. The power spectrum of the foot pressure image intensity for a leprotic foot before corrective surgery (Figure 3b) shows a higher relative value of high spatial frequency power in the total power spectrum distribution compared to that of a normal foot (Figure 2b), giving rise to a higher value of PR (37.7) in that area of the foot for the leprotic subject before the surgery.

Figure 4 (a) The walking foot pressure image intensity (spatial) distributions; the directions of the x and y axes are from medial to lateral and from posterior to anterior side, respectively, and (b) the power spectrum after deleting the DC component, in area 2 of the left foot for the same subject (as in Fig. 3) after corrective foot drop surgery (41-60 days after correction).

Figure 5 (a) The walking foot pressure image intensity (spatial) distributions; the directions of the x and y axes are from medial to lateral and from posterior to anterior side, respectively, and (b) the power spectrum after deleting the DC component, in area 2 of the left foot for the same subject (as in Figs. 3 and 4) after corrective foot drop surgery (above 210 days after correction).
Similarly, the spatial variations of the intensity distribution in area 2 (lateral heel) for the same leprosy subject after corrective surgery (41-60 days and above 210 days after surgery) are shown in Figures 4(a) and 5(a), respectively. The corresponding power spectra are shown in Figures 4(b) and 5(b), respectively. It is observed, from Figures 4(a) and 5(a), that the foot pressure spatial intensity distribution becomes nearly uniform, giving rise to a lower relative value of high spatial frequency power in the total power spectrum compared to the corresponding value before corrective foot drop surgery, thus decreasing the value of PR (to 28.5 and 20.5, respectively) towards normalization.

B. Analysis of Results

Figures 6 and 7 show a summary of the variations of the values of PR in different foot sole areas for two typical leprotic
subjects before and after different durations of the foot drop corrective surgery, in comparison with normal subjects. It is observed from the figures that the PR values are very high prior to surgery in the heel, the first and second metatarsal heads, and the second to fifth toes. The high values of PR observed in the heel area could be due to the foot drop, and in the mid-foot areas due to the lateral shift of the walking pattern observed in foot drop subjects. The high values of PR in the second to fifth toes could be due to claw toes exerting high pressure distributions. There is general recovery (after foot drop surgery) in the heel to forefoot regions, as shown by the decrease of the average PR values in all foot areas (Fig. 6) from 28-30 (before surgery) to 18-22 as the recovery process progresses from 40-60 days to 210 days. The values in the second to fifth toes did not reduce, owing to the non-correction of claw toes in all the leprotic subjects. From a comparison of Figures 6 and 7, it is observed that the recovery pattern differs between the two subjects. In Fig. 6 the recovery is faster than in Fig. 7; from this we can say that the recovery of the foot sole parameter may also depend on the type of corrective surgery.
Figure 8 shows a summary of the variations of the mean values of PR in different foot sole areas for all the leprosy subjects before and after different durations of the foot drop corrective surgery, in comparison with normal subjects. It is observed that the PR values are very high prior to surgery in the heel, the second metatarsal head, and the second to fifth toes. The high values of PR in the heel area in the mid-stance phase could be due to the foot drop, and in the lateral mid-foot area due to the lateral shift of the walking pattern observed in foot drop subjects. The high values of PR in the second to fifth toes could be due to claw toes exerting high pressure distributions. There is gradual recovery (after foot drop surgery) in the heel to forefoot regions, as shown by the decrease of the average PR values in all foot areas from 30-35 to 20-22 as the recovery process progresses from 40-60 days to 210 days, the values of PR after 210 days being very nearly equal to the normal subject values. The values in the second to fifth toes did not reduce, owing to the non-correction of claw toes in all the leprotic subjects.

C. Statistical Analysis

A statistical study was carried out on the PR values obtained from the foot image data of normal and leprosy subjects
Table 1 Values of mean difference and significance level (p) for different groups, taking normal as a control group, for all foot sole areas

Group (rows): Preoperative; Postoperative (40-60 days); Postoperative (90-110 days); Postoperative (111-150 days); Postoperative (>210 days)
Areas of foot (columns): 1, 2, 4, 5, 6, 7, 8, 9, 10

Table values (order as extracted from the rotated layout):
-28.595** -22.250+ -19.150+ -25.645** -21.750+ -20.520+ -31.136** -6.284§ -7.903* -6.437§ 24.100+ 25.833+ -4.939§ -6.244* -7.976§ -5.357§ 1.465§ 1.220§ -3.928§ -25.071** -14.25** -9.763§ -9.864** -10.175** -8.124§ -10.158** -23.899** -6.319§ -16.022** -14.885** -6.912§ -11.403** -9.119** 2.189§ -16.939** -9.785** -16.955** 0.647§ 1.718§ 3.179* 2.184§ -10.471* -2.885§ -10.747§ -6.876§

The mean PR values, for all leprosy subjects in the preoperative and postoperative stages of corrective surgery with different time durations of recovery, compared to the corresponding normal feet, are shown in the first group for all the specified foot sole areas.

Figure 6 Variations of mean values of PR in the different areas of the foot sole for leprosy subjects before and after the foot drop correction with varying recovery time.

Figure 7 Variations of mean values of PR in the different areas of the foot sole for leprosy subjects before and after the foot drop correction with varying recovery time.
** Very significant (0.001)

[2] The recovery of contractile speed caused the tetanus's failure to fuse at lower stimulation frequencies, whereas the tetanic torque became less depressed at higher stimulation frequencies [1]. This phenomenon is termed low frequency fatigue (LFF). In addition, potentiation is an important mechanical characteristic and is also considered a
competing process, in which repetitive stimulation of a fatigued muscle yields augmentation of peak force [3]. Our aim was to observe whether these two phenomena occur in the recovery process and to determine the main feature of potentiation in post-fatigue paralyzed muscles. From a physiologic perspective, many studies have investigated the evoked EMG of the stimulated muscle during the fatiguing process, and it is tempting to use EMG as an indicator of muscle state. The close correlation between muscle force and the amplitude of the stimulus-evoked EMG indicates that the EMG signal can directly provide information for monitoring electrically elicited muscle contraction, and can serve as a control signal for compensating for the decrease of muscle force due to fatigue [4]. Temporal features have also been used to quantify the muscle fatigue process, including latency, rise time to peak (RTP), and PTP duration (PTPD), as well as frequency characteristics. Both temporal and frequency features are presumably related to the propagation velocity of the motor unit action potential in muscle fibers. Most results showed a decrease in muscle fiber conduction velocity during the muscle fatigue process [5]. For the recovery process, Shields et al. reported results from their study of paralyzed soleus muscles [6]. They found that 5 min after the cessation of electrical stimulation, the evoked EMG had almost returned to its original features, but the recovery of muscle force was much less complete. According to their study, the evoked EMG can serve as a fatigue indicator only before the recovery process starts; otherwise, other factors may contribute to the varying relationship between EMG amplitude and muscle contraction force during recovery. The aim of this research is to characterize the muscle recovery processes by observing the twitch and fused isometric contractions.
In addition to contractile activity, a description of recovery based on myoelectric activity is also necessary to reflect the condition of the stimulated muscles. The relationship between the myoelectric parameters and the recovered force measurements was analyzed to examine the condition of excitation-contraction coupling.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1667–1671, 2009 www.springerlink.com
1668
N.Y. Yu and S.H. Chang
II. METHODS

A. Subjects
Thirteen spinal cord injured subjects (11 male and 2 female; 26-43 years; 63-85 kg; 1.62-1.84 m) with a mean ± SD age of 35.0 ± 7.3 years were recruited for this experiment. The average time post-injury was 4.3 ± 2.68 years. The neurological levels of the subjects were between T4 and T11, with no or little spasticity in the lower limb muscles. The local Ethics Committee approved the experimental procedures, and all subjects signed an informed consent that was approved by the Human Subjects Review Board of I-Shou University.

B. Experimental Setup
During the experiment, each subject lay on a long bench with his or her hip and knee joints fixed in 0° of flexion. The tibialis anterior was stimulated by a Grass electrical stimulator (S8800) with a constant-current unit (CCU1 A, Grass Instrument, Quincy, MA) using 300 μs, 20-Hz square-wave pulses. To induce muscle fatigue, current of supramaximal amplitude was applied to the tibialis anterior with the subject lying on the bench and the lower leg fixed on a testing device, as shown in Figure 1. To detect muscle force output, the generated ankle dorsiflexion torque, representing the force output of the tibialis anterior, was collected from the analogue output of the torque sensor (TP-10KMAB/CB, Kyowa). To detect the evoked EMG, bipolar Ag-AgCl surface electrodes, 7 mm in diameter with a fixed 20-mm inter-electrode distance (Norotrode 20, Myotronics-Noromed, Inc.), were applied over the stimulated muscle. The surface EMG was amplified (Model 12A14 DC/AC Amplifier, Grass Medical Instruments) with a gain of 1000 and band-passed through a frequency window of 3-3k Hz (Neurodata Acquisition System, Model 12, Grass Medical Instruments). The EMG signal was sampled at a rate of 5k Hz and stored on computer disk for later analysis using Matlab (MathWorks, Natick, Mass.) signal processing software. Hardware blanking with a sample-and-hold design was used for stimulus artifact suppression; this hardware-based blanking circuit is triggered by a synchronous pulse derived from the stimulus pulse itself.

C. Experimental Protocols
The recovery tests were performed on the right lower legs of all of the subjects. Supramaximal electrical stimulation (determined by the maximal peak-to-peak amplitude of the evoked EMG) was delivered to induce muscle fatigue. A modified Burke fatigue protocol was delivered for 4 min, activating the tibialis anterior for 0.33 s every second at a 20-Hz stimulation frequency. Before the fatigue protocol, 3 pulses at 1 Hz and 7 pulses at 20 Hz were delivered to elicit 3 twitches and 1 fused contraction for the collection of the initial baseline data. After the fatigue protocol, the same testing protocol was delivered to the muscle at 1, 3 and 5 minutes, and then every 5 minutes for 60 minutes. At the same time, the twitch and tetanic force, as well as the stimulus-evoked EMG, were collected for off-line analyses.

D. Data Analyses
In every test of the experiment, every 3 consecutive twitches were averaged together to represent a single twitch. The peak torque (PT) of the twitch and fused contractions was obtained by finding the maximal value in the array of torque data. To measure the total activation produced by a twitch or tetanus, the torque-time integral (TTI) was also computed. To observe the myoelectric activities, the stimulus-evoked EMG of the stimulated muscle was studied during the recovery process.
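The twitch averaging, peak torque and torque-time integral computations described above can be sketched as follows. This is a minimal Python illustration, not the authors' Matlab code; the function names and the default sampling rate are assumptions for the sketch:

```python
import numpy as np

def average_twitches(twitches):
    """Average 3 consecutive twitches into a single representative twitch."""
    return np.mean(np.stack([np.asarray(t, dtype=float) for t in twitches]), axis=0)

def peak_torque(torque):
    """Peak torque (PT): the maximal value in the sampled torque array."""
    return float(np.max(torque))

def torque_time_integral(torque, fs=5000.0):
    """Torque-time integral (TTI): area under the torque-time curve,
    approximated here by the trapezoidal rule (fs = sampling rate in Hz)."""
    torque = np.asarray(torque, dtype=float)
    dt = 1.0 / fs
    return float(np.sum(0.5 * (torque[1:] + torque[:-1]) * dt))
```

The same two measures apply unchanged to twitch and to fused (tetanic) torque records; only the input array differs.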
Peak-to-peak amplitude (PTP) and other characteristics, such as root mean square (RMS) and temporal parameters, including rise time to peak (RTP) and PTP duration (PTPD), were used to quantify the recovery of the myoelectric activities. The measured parameters of the twitch or fused contractions were plotted against the elapsed recovery time, and the plotted curves were fitted by a function characterizing the recovery process. The recovery process was characterized empirically by an exponentially asymptotic curve:
Y = A(1 − e^(−t/τ0)) + M0

Figure 1: Schematic presentation of the testing device.
where Y is the predicted value of the torque output, t is the elapsed time, τ0 is the time constant, M0 is the initial value of the torque output in the recovery process, and A is a scaling
IFMBE Proceedings Vol. 23
Mechanical and Electromyographic Response to Stimulated Contractions in Paralyzed Tibialis Anterior Post Fatiguing Stimulations 1669
parameter for rescaling the normalized data to its original value. A nonlinear curve-fitting technique, the Nelder-Mead simplex algorithm, was used to find the optimal parameters that minimized the fitting error. After normalization, the recovery processes from muscle fatigue could be analyzed by comparing the time constants of the evoked-EMG measurements and the torque output.

E. Statistical Analyses
Repeated-measures ANOVA was used to compare the differences in time constants between the EMG and mechanical parameters. If significant main effects were observed, Bonferroni post-hoc tests were performed. To examine the relationship between the EMG and the recovered force, a series of correlation analyses was conducted.

III. RESULTS

A. General manifestation of the recovery processes
As depicted in Figure 2, the torque output of ankle dorsiflexion diminished after the modified Burke fatigue protocol. About 5 minutes post stimulation, the torque output of the stimulated muscle had increased to about one half of its initial amplitude. After about 10 minutes, the output approached 60% of the initial amplitude. In the observation of the force development of a single twitch, the rise time to peak torque remained delayed until about 10 minutes had elapsed. The twitch contraction time was prolonged at around 1 to 5 minutes after the fatigue stimulation; after that, it returned to its initial value.
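The exponential recovery fit and time-constant extraction described above can be sketched as follows. This is a minimal illustration using SciPy's Nelder-Mead implementation rather than the authors' Matlab code; the function names and starting simplex are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def recovery_model(t, A, tau0, M0):
    """Exponentially asymptotic recovery curve: Y = A(1 - exp(-t/tau0)) + M0."""
    return A * (1.0 - np.exp(-t / tau0)) + M0

def fit_recovery(t, y, x0=(1.0, 5.0, 0.0)):
    """Fit (A, tau0, M0) by minimizing the sum of squared errors with the
    Nelder-Mead simplex algorithm; returns the fitted parameter vector."""
    def sse(params):
        A, tau0, M0 = params
        if tau0 <= 0.0:  # keep the time constant physical
            return np.inf
        return float(np.sum((recovery_model(t, A, tau0, M0) - y) ** 2))
    res = minimize(sse, x0, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-9, "fatol": 1e-12})
    return res.x
```

With the fitted τ0 in hand for each measure, the recovery speeds of the EMG and torque parameters can be compared directly, as in the statistical analysis above.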
Figure 2. (a) PT and (b) TTI of twitch and tetanic contractions. Data, expressed as percentages of the initial values, are means ± SE.
B. The Recovery Processes of Muscle Fatigue
The data on the recovery process were fitted empirically by an exponentially asymptotic curve. After all of the measured parameters had been fitted, the time constants were extracted. The respective means and standard errors of the recovery time constants are: twitch PT (.112 ± .034), twitch TTI (.185 ± .060), tetanic PT (.287 ± .033), and tetanic TTI (.443 ± .056). Figure 2 shows the changes of the biomechanical measurements during the recovery processes. Repeated-measures ANOVA showed significantly different changing speeds among the biomechanical parameters (F = 15.23, p

3.5 kg). The porous hydroxyapatite-coated implant was inserted press-fit into the femur. Each animal received one implant carrying about 12 μg BMP-2 (group I). A hydroxyapatite-coated porous implant was placed as a control in the contralateral femur in the same manner (group II). Wounds were closed in layers. The animals were kept under the same conditions.
1681
perpendicular to the implant long axis. The specimens were stored in physiological saline solution until mechanical evaluation, which was performed within 2 h after retrieval. The pull-out tests were conducted on a material testing machine (Instron 1185) linked to a computer, at a speed of 1 mm/min, to measure the shear stress. The interface shear strength was calculated by the following formula:
σ = F / (π · d · t)

where σ = shear strength; F = maximum load at failure; d = diameter of the specimen; t = height of the specimen.
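As a quick numeric illustration of this formula (π·d·t is the lateral surface area of the cylindrical interface), the sketch below uses a made-up specimen size and failure load, not values from this study:

```python
import math

def interface_shear_strength(F, d, t):
    """Interface shear strength: sigma = F / (pi * d * t), i.e. the maximum
    load at failure divided by the lateral surface area of the cylinder."""
    return F / (math.pi * d * t)

# Hypothetical example: 6-mm-diameter, 10-mm-high specimen failing at 500 N.
sigma_Pa = interface_shear_strength(500.0, 6e-3, 10e-3)  # roughly 2.65 MPa
```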
C. Implant retrieval and evaluation
Euthanasia was performed using carbon dioxide after 6, 12, or 24 weeks. Following euthanasia, the implants with their surrounding tissue were retrieved and prepared for histological (n = 6 for each material and time period) and mechanical analysis (n = 6 for each material and time period).

D. Histology
The histological samples were fixed in 4% phosphate-buffered formaldehyde solution (pH = 7.4), dehydrated in a graded series of ethanol, and embedded in methylmethacrylate. Following polymerization, 10-μm thick sections were prepared per implant using a modified sawing microtome technique. Before sections were made, the specimens were etched with hydrochloric ethanol for 15 s, stained with methylene blue for 1 min, and stained with basic fuchsin for 30 s. The prepared sections were examined under a light microscope. Five days before sacrifice, three animals were injected subcutaneously with fluorescent tetracycline (25 mg/kg body weight, Merck, Darmstadt, Germany) in order to label the process of bone formation. In addition to the thin sections described above, two additional 30-μm thick sections were prepared from the samples of the three animals that received fluorochromes. These sections were not stained but were evaluated with a fluorescence microscope equipped with an excitation filter of 470-490 nm. The light and fluorescence microscopic assessment consisted of a complete morphological description of the tissue response to the different implants. In addition, quantitative information was obtained on the amount of bone formation into the various mesh implants.

E. Mechanical testing
Soft tissues covering the bones were removed with a scalpel blade from the freshly excised specimens. The specimens containing implants were ground to a flat surface,
III. RESULTS

A. Scanning electron microscopy
The particles of hyaluronic acid with BMP-2 were uniformly mixed and distributed over the irregularly arranged hydroxyapatite-coated porous titanium. Some of the particles, uniformly distributed among the porous titanium as globules or needles, could combine with the newly formed mixture at multiple points or over multiple extents. The surfaces of the particles were exposed except at the contact areas, and there were many irregular fissures of 100-200 μm interconnected with each other among them.

B. Histology
After 6 weeks of implantation, there were no signs of inflammation, and a small amount of newly formed bone filled the interface gap of the purely hydroxyapatite-coated implants; the soft-tissue interface was obvious. For the BMP-2-coated implants, more newly formed bone could be observed in the interface gap than in the control group. Bone grew into the cylinder pores and contacted the titanium spheres to some extent. Bone had also formed along the outer surface of the cylinder. After 12 weeks of implantation, more newly formed bone and obvious remodeling could be observed in the purely hydroxyapatite-coated implants, and more newly formed, completely calcified bone was obtained in the BMP-2-coated group; bone filled almost all of the gap and even the junctions of the titanium spheres. After 24 weeks of implantation, newly formed bone filled all of the gap in both groups, and there was no difference between the two groups. The growth of new bone could be clearly differentiated and recognized by fluorescence microscopy. The newly formed bone was deposited initially at the surface and then grew into the pores. Organized bone formation with clear ossification fronts was not observed in any of the implants. In all of the specimens, diffusely stained deposits
1682
P. Lei, M. Zhao, L.F. Hui, W.M. Xi
of tetracycline (and sometimes calcein) were mainly localized in contact with the Ti fibers.

C. Mechanical testing
The results of the pull-out tests of the purely hydroxyapatite-coated and BMP-2 groups are listed below. Thirty-six inserted implants were prepared for mechanical testing, but 5 implants were excluded because of unexpected death or technical error during preparation or pull-out (n = 5). There was a significant increase in shear strength over time for all coatings. Shear strength values for the BMP-2-coated implants were higher than those for the HA-coated implants at 6 and 12 weeks (p < 0.05), but at 24 weeks there were no statistically significant differences between the two groups.

IV. DISCUSSION

At present, much new research on bone tissue engineering focuses on the precursor cells of bone formation, implants and bone growth factors. Studies show that marrow stromal cells have the ability of multi-directional differentiation and bone formation, but the results of implantation were not always as expected. Some studies show that combining MSC with HA or BMP-2 can greatly improve the ability of bone formation. It has been shown that a good implant combined with growth factor and MSC can achieve the best bone formation [2-11], because it yields a better impaction interface and faster healing. The pattern of binding between bone and implant is the major criterion for judging biocompatibility and bioactivity. A fibrous capsule of varying thickness may form around an implant with poor biocompatibility, thickening over time; eventually, fluid accumulation, inflammation or necrosis between the implant and the capsule can cause loosening or displacement, leading to implant failure. Lamellar fibrous tissue forms at the bone-implant interface soon after the insertion of titanium; it thins over time and does not lead to adverse effects such as inflammation.
Ideal synostosis is direct osseo-integration at the bone-implant interface. After implantation, newly formed osteoblasts in the surrounding environment lead to obvious bone formation, so compact, direct osseo-integration is achieved sooner, with better immediate union [14,15]. All of this requires implants to have good bone-tissue biocompatibility and bioactivity, including osteoconductivity and osteoinductivity [16]. Six weeks after cylinders of alkali-heat-treated porous titanium were inserted, newly formed bone was observed at the interface and direct osseo-integration was being
obtained, which sped up the healing of the porous titanium; this means that the hydroxyapatite coating can lead to satisfactory bone formation because of its good biocompatibility. The growth of bone after implantation of porous titanium was due to its surface structure, its porous framework, the survival ability of the host bony bed, the bioactive coating and the remodeling of the surrounding bone. The rough surface of porous titanium provides a good interface for bone growth [17,18]. Studies have shown that the minimum pore diameter required for bone ingrowth and a good blood supply is 100 μm. In vitro cell culture results showed that a hydroxyapatite coating helps osteoblasts adhere to the implant surface, proliferate and spread. Meanwhile, Liang Huifang et al. showed in vivo that bone formation on hydroxyapatite-coated implants was faster than on uncoated ones, both on the surface and in the pores. Abundant active osteoblasts around the coating form new woven bone, which grows more easily into the pores, matures and remodels. The hydroxyapatite coating can also eliminate the stress shielding caused by the porous structure, allowing new bone to form along the surface of the porous structure into the inner part, even to the junctions of the titanium spheres, so a more directly bonded interface is obtained. All of the above can speed up the healing between porous titanium and bone tissue and benefit early stability, which is of great value for the repair of bone defects, vertebral fusion and joint replacement. There are three essential factors for mesenchymal cells to induce bone formation: an inducing factor, target cells and a suitable environment. BMP-2, as an inducing factor, stimulates the expression of osteogenic genes, which causes the differentiation of mesenchymal cells.
The target cells of BMP-2 include undifferentiated mesenchymal cells existing in muscles and in the connective tissue around blood vessels, marrow stromal cells, including determined osteogenic precursor cells (DOPC) and inducible osteogenic precursor cells (IOPC), and connective tissue cells in the periosteum. Marrow stromal cells are more sensitive to BMP-2 than the others [17]. Studies have shown that embedding marrow stromal cells derived from autologous bone marrow in a bone defect can greatly enhance the inducing ability of BMP-2 [19,20]. Induction does not occur everywhere BMP-2 is embedded; rather, it requires a favorable environment: it is strongest in bone marrow, muscle and brain tissue, and weakest in the spleen, liver, kidney and similar organs. This research showed that the hydroxyapatite coating could enhance the bonding of bone and implant, but the effect was not as satisfactory as that at 6 weeks when BMP-2 was embedded: more bone formation and osseo-integration were observed than for the purely hydroxyapatite-coated implants, especially in the inner part of the porous titanium, speeding up healing. All of this means that BMP-2 has a good effect of
Bone Morphogenetic Protein-2 and Hyaluronic Acid on Hydroxyapatite-coated Porous Titanium to Repair the Defect …
induction and can further enhance the effect of the hydroxyapatite coating. There are four fracture patterns between bone and implants: between implant and tissue, between woven bone and mature bone, within the mature bone, and within the coating (a fracture face through the coating rather than at the bone surface). The rough surface of porous titanium provides more interface for appositional growth. The porous structure complicates the bone-titanium interface and distributes the surface stresses, including shear, compressive and tensile forces. The internal interlocking this produces can yield greater bonding strength than some coatings on a smooth surface [21], and if a hydroxyapatite coating is added to the porous titanium, an even greater bonding strength can be obtained [22]. In this research, direct osseo-integration formed between bone and implants. Mechanical interlocking was produced by newly formed bone in the pores, and the gradient structure between coating and implant enhanced the bonding strength. According to the results of the pull-out tests, the porous structure and hydroxyapatite coating promoted bonding strength in the intermediate and long term, but the induction by BMP-2 in the early period after implantation should not be ignored: newly formed bone could grow into the pores early and form mechanical interlocking. The results might have been influenced by the position of the pedestal (the distance between the inner edge and the bone-implant interface) in this research. Besides this, the size of the titanium powder, the surface roughness, the coating method and the coating thickness might also influence the results.
V. CONCLUSIONS

Bone morphogenetic protein-2 and hyaluronic acid on hydroxyapatite-coated porous titanium have a good effect on repairing defects of the distal femur in rabbits. This is a promising biotechnology for future clinical application.

REFERENCES

1. Urist MR. Bone: formation by autoinduction. Science, 1965,150:893-899.
2. Kandziora F, Scholz M, Pflugmacher R, et al. Experimental fusion of the sheep cervical spine. Part II: Effect of growth factors and carrier systems on interbody fusion. Chirurg, 2002,73(10):1025-38.
3. Hiller T, Pflugmacher R, Mittlmeier T, et al. Bone morphogenetic protein-2 application by a poly(D,L-lactide)-coated interbody cage: in vivo results of a new carrier for growth factors. J Neurosurg, 2002,97(1 Suppl):40-8.
4. Lind M, Overgaard S, Jensen TB, et al. Effect of osteogenic protein 1/collagen composite combined with impacted allograft around hydroxyapatite-coated titanium alloy implants is moderate. J Biomed Mater Res, 2001,55:89-95.
5. Esenwein SA, Esenwein S, Herr G, et al. Osteogenetic activity of BMP-3-coated titanium specimens of different surface texture at the orthotopic implant bed of giant rabbits. Chirurg, 2001,72:1360-1368.
6. Yan MN, Tang TT, Zhu ZA, et al. Effects of bone morphogenetic protein-2 gene therapy on the bone-implant interface: an experimental study with dogs. Zhonghua Yi Xue Za Zhi, 2005,85:1521-1525.
7. Sachse A, Wagner A, Keller M, et al. Osteointegration of hydroxyapatite-titanium implants coated with nonglycosylated recombinant human bone morphogenetic protein-2 (BMP-2) in aged sheep. Bone, 2005,37:699-710.
8. Kusakabe H, Sakamaki T, Nihei K, et al. Osseointegration of a hydroxyapatite-coated multilayered mesh stem. Biomaterials, 2004,25:2957-2969.
9. Kim HW, Lee EJ, Jun IK, et al. On the feasibility of phosphate glass and hydroxyapatite engineered coating on titanium. J Biomed Mater Res A, 2005,75:656-67.
10. Li LH, Kim HW, Lee SH, et al. Biocompatibility of titanium implants modified by microarc oxidation and hydroxyapatite coating. J Biomed Mater Res A, 2005,73:48-54.
11. Coathup MJ, Blackburn J, Goodship AE, et al. Role of hydroxyapatite coating in resisting wear particle migration and osteolysis around acetabular components. Biomaterials, 2005,26:4161-9.
12. Takemoto M, Fujibayashi S, Neo M, et al. Mechanical properties and osteoconductivity of porous bioactive titanium. Biomaterials, 2005,26:6014-23.
13. Aebli N, Stich H, Schawalder P, et al. Effects of bone morphogenetic protein-2 and hyaluronic acid on the osseointegration of hydroxyapatite-coated implants: an experimental study in sheep. J Biomed Mater Res A, 2005,73:295-302.
14. Dorr LD, Wan Z, Song M, et al. Bilateral total hip arthroplasty comparing hydroxyapatite coating to porous-coated fixation. J Arthroplasty, 1998,13:729-36.
15. Tanzer M, Kantor S, Rosenthall L, et al. Femoral remodeling after porous-coated total hip arthroplasty with and without hydroxyapatite-tricalcium phosphate coating: a prospective randomized trial. J Arthroplasty, 2001,16(5):552-8.
16. Hench LL. Bioactive ceramics: Theory and clinical applications. In: Bioceramics (Eds. Andersson ÖH, Happonen RP, Yli-Urpo A), Pergamon, Oxford, 1994, 3-14.
17. Popa C, Simon V, Vida-Simiti I, et al. Titanium-hydroxyapatite porous structures for endosseous applications. J Mater Sci Mater Med, 2005,16(12):1165-71.
18. Simon M, Lagneau C, Moreno J, et al. Corrosion resistance and biocompatibility of a new porous surface for titanium implants. Eur J Oral Sci, 2005,113(6):537-45.
19. Knabe C, Howlett CR, Klar F, et al. The effect of different titanium and hydroxyapatite-coated dental implant surfaces on phenotypic expression of human bone-derived cells. J Biomed Mater Res A, 2004,71(1):98-107.
20. Minamide A, Yoshida M, Kawakami M, et al. The use of cultured bone marrow cells in type I collagen gel and porous hydroxyapatite for posterolateral lumbar spine fusion. Spine, 2005,30(10):1134-8.
21. Fujisawa A. Investigation for bone fixation effect of thin HA coated layer on Ti implants. Kokubyo Gakkai Zasshi, 2005,72(4):247-53.
22. Kold S, Rahbek O, Zippor B, et al. No adverse effects of bone compaction on implant fixation after resorption of compacted bone in dogs. Acta Orthop, 2005,76(6):912-9.
Landing Impact Loads Predispose Osteocartilage To Degeneration

C.H. Yeow1, S.T. Lau1, Peter V.S. Lee1,3,4, James C.H. Goh1,2

1 Division of Bioengineering, National University of Singapore, Singapore
2 Department of Orthopaedic Surgery, National University of Singapore, Singapore
3 Biomechanics Lab, Defence Medical and Environmental Research Institute, Singapore
4 Department of Mechanical Engineering, University of Melbourne, Australia
Abstract — Knee osteoarthritis is a prevalent disease worldwide and is characterized by progressive degeneration in the structure and functionality of the articular cartilage. While it is widely suggested that activities involving large landing impact loads may lead to post-traumatic osteoarthritis, there is little understanding of whether these loads can inflict cartilage lesions and trigger degeneration. This study sought to investigate whether landing impact loads applied to the osteocartilage will drive it towards degeneration. Menisci-covered and exposed osteochondral explants were extracted from the tibial cartilage of fresh porcine hind legs and placed in culture for up to 14 days. A single 10-Hz haversine impact compression was performed at Day 1. Control (non-impact) and impacted explants were randomly selected for cell viability, glycosaminoglycan and collagen content assessment, histology, immunohistochemistry and micro-computed tomography. When a 2-mm displacement compression was applied, exposed explants attained a considerably greater peak impact stress than menisci-covered explants. There was no observable difference in cell viability, glycosaminoglycan and collagen content, or Mankin scores between the menisci-covered and exposed explant groups. Both groups showed diminished proteoglycan and type II collagen staining at Day 14; the exposed group showed increased cartilage volume at Days 7-14. Large landing impact loads can introduce structural damage to the osteocartilage, which leads to osteoarthritis-like degenerative changes. The inferior resilience of menisci-covered regions against impact-induced damage and degeneration may be a key factor involved in the meniscectomy model of osteoarthritis.

Keywords — Osteoarthritis, compressive impact, degeneration, damage, menisci
I. INTRODUCTION

Knee osteoarthritis (OA) is a prevalent joint disease worldwide, with almost 1 in 3 adults displaying OA symptoms in the United States.1 Post-traumatic OA can occur following sports injuries incurred during high-impact activities such as gymnastics and basketball. Therefore, impact trauma is a potential risk factor for joint degeneration and early-onset OA.2 Huser and Davies3 and Mrosek et al.4 found that impact trauma on cartilage led to release of glycosaminoglycans (GAG), chondrocytic death, and diminished type II collagen and aggrecan expressions, which are
characteristic of OA. However, the loading conditions adopted in these studies were inadequate for simulating landing impact. Different cartilage regions have varying resistance to loading. Thambyah et al.5 demonstrated that menisci-covered cartilage was thinner than exposed cartilage by 40%, and possessed substantially lower subchondral bone quantity and calcified layer thickness. Furthermore, Yeow et al.6 noted that exposed regions did not incur more damage than neighboring menisci-covered regions during simulated landing impact of the knee joint. Altogether, these studies suggested that exposed cartilage regions are more capable of load-bearing than menisci-covered regions. However, it is not well understood whether these regions may be different in their susceptibility to degeneration. This study sought to investigate whether landing impact loads applied to the osteocartilage will render it towards degeneration. We hypothesized that impacted menisci-covered regions are more vulnerable to structural damage and more inclined towards degeneration relative to exposed regions. II. METHODS A. Specimen preparation and impact procedures Fresh porcine hind legs (pig age: ~2 months old; weight: ~40 kg) were obtained from a local abattoir (Primary Industries, Singapore). Osteochondral explants (4-mm diameter; 7-mm height) were extracted from menisci-covered and exposed tibial cartilage regions at Day 0, and incubated at 37°C and 5% CO2 for up to 14 days in Dulbecco's Modified Eagle's Medium (DMEM) (Gibco, Switzerland) supplemented with 1 g/L L-glutamine, 10 mg/mL streptomycin sulfate, 10,000 units/mL penicillin G sodium and 10% (v/v) fetal bovine serum (Sigma-Aldrich, US). 
A 30-mm-diameter compression plate was attached to the material testing system (810-MTS, MTS Systems Corporation, USA) to apply impact compression, which was displacement-controlled at 2 mm based on a single 10-Hz haversine loading curve to simulate landing impact.6 Dental cement (Base liquid and powder, Dentsply, China) was used to secure the explant in the potting cup. Each explant was immersed in medium and a 10-N pre-loading was applied by adjusting
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1684–1687, 2009 www.springerlink.com
1685
the MTS actuator to allow contact between compression plate and explant. Impact compression was conducted on Day 1; peak impact stress was obtained from the peak compressive load, measured by the attached load-cell (9347B, Kistler, Switzerland), divided by explant cross-sectional area. Control (non-impact) and impacted explants were randomly selected for cell viability, GAG and collagen content assessment, histology, immunohistochemistry and micro-computed-tomography (MicroCT).
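The single-haversine displacement command and the peak-stress computation described in this section can be sketched as follows. This is a Python sketch under stated assumptions: the 400 N peak load and the 1 kHz sampling rate are illustrative, not values from the study; the 4-mm explant diameter follows the text above:

```python
import math
import numpy as np

def haversine_displacement(freq_hz=10.0, amp_mm=2.0, fs=1000.0):
    """Single haversine displacement pulse, amp/2 * (1 - cos(2*pi*f*t)),
    rising from zero to the peak displacement and back over one period."""
    period = 1.0 / freq_hz
    t = np.arange(0.0, period, 1.0 / fs)
    return t, 0.5 * amp_mm * (1.0 - np.cos(2.0 * math.pi * freq_hz * t))

def peak_impact_stress(peak_load_N, diameter_m):
    """Peak impact stress: peak compressive load divided by the explant's
    circular cross-sectional area (pi * r^2)."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return peak_load_N / area

# 4-mm-diameter explant as in this study; the 400 N peak load is hypothetical.
stress_MPa = peak_impact_stress(400.0, 4e-3) / 1e6
```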
B. Cell viability, GAG and collagen content assessments

To quantify cell viability, explants were incubated for 30 min in 2 ml of medium containing 0.1% fluorescein diacetate and 0.05% propidium iodide (Sigma-Aldrich, US). Confocal fluorescence imaging (LSM510 META, Carl Zeiss, Germany) was performed using excitation wavelengths of 488 nm (green) and 543 nm (red) to visualize live and dead surface cells. Pixel area fractions of viable (green) and non-viable (red) cells were measured using ImageJ (Version 1.4, National Institutes of Health, US). To assess GAG and collagen content, cartilage was detached from the explant using a microtome blade and measured for wet weight before immersion in 1 ml of 0.025% pepsin in acetic acid for 1 week to permit degradation. The resultant solution was assayed using the Blyscan sulfated glycosaminoglycan assay and the Sircol soluble collagen assay (BioColor Ltd, UK), respectively. These colorimetric assays quantify content from the absorbance of GAG- or collagen-bound dyes measured with a spectrophotometer, at wavelengths of 656 nm and 540 nm respectively.

C. Histology and immunohistochemistry (IHC)

Explants were fixed in 10% buffered formalin for 1 week and decalcified in 30% formic acid for 2 weeks. They were then dehydrated and progressively cleared in ethanol and toluene before embedding in paraffin. 10-μm slices were obtained using a microtome (RM2255, Leica Microsystems, Germany), deparaffinized, and stained using Hematoxylin & Eosin and Safranin-O/Fast Green protocols to visualize osteochondral structure, cell distribution and proteoglycan distribution. Three independent observers were trained to grade the photomicrographs using the Mankin scoring system; they were blinded to control and impact groups to eliminate bias. Additional histological slices were obtained for IHC analysis (Ultravision Detection System, LabVision Corp, USA). The slices were labeled overnight at 4°C with primary monoclonal anti-collagen type II antibodies (Chemicon International, US) prepared at a dilution of 1:200. Biotinylated goat anti-mouse secondary antibodies were then applied at 1:200 for 30 min, followed by addition of streptavidin peroxidase for 45 min and chromogen-substrate solution for 3 min at 37°C.
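The pixel-area-fraction measurement of viability can be sketched with NumPy; the intensity threshold and the synthetic channel arrays are illustrative assumptions, not the ImageJ procedure itself:

```python
import numpy as np

def viability_fraction(green: np.ndarray, red: np.ndarray, thresh: int = 128) -> float:
    """Fraction of stained pixel area that is viable (green) rather than non-viable (red).

    green, red: 8-bit intensity channels of the same shape.
    thresh: illustrative intensity cutoff separating stained pixels from background.
    """
    live = green > thresh
    dead = red > thresh
    stained = live | dead
    if not stained.any():
        return float("nan")
    return live.sum() / stained.sum()

# Synthetic 4x4 example: 3 live pixels and 1 dead pixel -> viability 0.75
g = np.zeros((4, 4), dtype=np.uint8); g[0, :3] = 200
r = np.zeros((4, 4), dtype=np.uint8); r[1, 0] = 200
print(viability_fraction(g, r))  # 0.75
```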
D. MicroCT

Explants were scanned using a MicroCT scanner (SMX100CT, Shimadzu, Japan) with the following settings: X-ray voltage 32 kV, current 115 μA, detector size 5", scaling coefficient 100 and pixel spacing 14.3 μm/pixel. The scans were reconstructed to obtain 3D explant geometry using VGStudioMax (Version 1.2, Volume Graphics, Germany). The cartilage was segmented based on a consistent set of threshold gray-values (20086–24334), which permitted a fair demarcation of the cartilage region from the underlying bone. The cartilage volume of each explant was then measured to examine volume change over time.

E. Statistical analysis

Student's t-test (SigmaStat 3.1, SysTat Software Inc, USA) was used to compare control and impacted explants for cell viability, GAG and collagen content, Mankin scores and cartilage volume. Normalization was against control menisci-covered explants at Day 0/1. All significance levels were set at p = 0.05.

III. RESULTS

There was a notable difference (p…

G(t) = [1 − r(t)]·C(t) − r(t)·I(t)    (1)
The cognitive process r(t) is a combination of its previous value r(t-1) and the input to the cognitive process f(t), as described in equation (2). Both functions r(t) and f(t) are computed at the CogS.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1688–1691, 2009 www.springerlink.com
Drug Addiction as a Non-monotonic Process: a Multiscale Computational Model
r(t) = 1/2 + 1/2 · tanh[α·r(t−1) + β·f(t) − γ]    (2)
The input to the cognitive process f(t) is a weighted sum of the internal and external processes. While the exact realistic structure of f is hard to guess, we propose a reasonable model:

f(t) = [ωS·S(t) + ωP·P(t) + ωD·D(t)] + {ωA·[AS(t) + AP(t) + AD(t)] + ωQ·Q(t)}    (3)
The internal processes are P(t), representing the level of pain or negative consequences in areas such as health or social relations; S(t), representing the level of stress or negative emotional state of the virtual patient; D(t), representing the craving level, based on dopamine transmission in the Nucleus Accumbens (NAcc); and q(t), representing the saliency of drug-associated cues. Intakes of drugs can increase P(t) [15]. The level of S(t) increases during withdrawal periods [16, 17, 18] and may trigger craving [19]. Drug intakes were shown to affect the level of dopamine in the NAcc and the addicted behavior in rats [20, 21] and in humans [22, 23]. The value q(t) increases with repeated drug consumption [24, 25] and weights the effect of drug-related cues Q(t) when they are encountered. The external processes in this model are AP(t), representing a painful trauma that may cause a virtual patient to stop taking drugs [26, 27]; AS(t), representing a stressful episode that may lead to instantaneous drug use [28, 29]; AD(t), representing drug priming that may cause drug use [30, 31]; and Q(t), representing a drug-associated cue [32]. Relevant mathematical details are presented in [8].
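A minimal simulation sketch of equations (1)–(3) in Python: the sign conventions, the weight values, and the toy process trajectories are illustrative assumptions (only α, β, γ are the Table 1 values), not the authors' full multiscale implementation:

```python
import math
import random

# alpha, beta, gamma as reported in Table 1; the omega weights are illustrative.
ALPHA, BETA, GAMMA = 0.15, 0.25, 0.2449
W_S, W_P, W_D, W_A, W_Q = 0.3, 1.1, 0.4, 0.4, 1.0

def f_input(S, P, D, A_S, A_P, A_D, Q):
    """Eq. (3): weighted sum of internal (S, P, D) and external (A_*, Q) processes."""
    return (W_S * S + W_P * P + W_D * D) + (W_A * (A_S + A_P + A_D) + W_Q * Q)

def r_next(r_prev, f):
    """Eq. (2): cognitive process, squashed into [0, 1] by the tanh nonlinearity."""
    return 0.5 + 0.5 * math.tanh(ALPHA * r_prev + BETA * f - GAMMA)

def g_value(r, C, I):
    """Eq. (1): drug-seeking behavior arbitrated between compulsion C and inhibition I."""
    return (1.0 - r) * C - r * I

# Toy trajectory: constant compulsion/inhibition, noisy internal processes.
random.seed(0)
r, G = 0.5, []
for t in range(500):
    f = f_input(S=random.random(), P=random.random(), D=random.random(),
                A_S=0.0, A_P=0.0, A_D=0.0, Q=0.0)
    r = r_next(r, f)
    G.append(g_value(r, C=1.0, I=1.0))
```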
III. RESULTS

In this section two different case studies of drug-seeking behavior are considered. The profile of GD outlines a continuous relapse pattern of a person who started using drugs at age 17 and has since been unsuccessfully trying to get away from drug-seeking behavior, whereas the profile of VD exhibits a person who first used drugs close to age 36 and whose drug-seeking behavior changes from healthy to maladaptive and back to healthy. The G value of both cases, depicted in Figure 2, is averaged over 10 simulations, with noise injected into the process. The constants used for both simulations are presented in Table 1. For the presented simulations, the inhibition I(t) of VD was set 33% higher than that of GD, and the compulsion C(t) of VD was set 60% higher than that of GD. The initial values of all the signals were chosen randomly within their defined intervals.

Fig. 2: G(t) means, SEMs and G(t+1) versus G(t) for 10 simulations. At the top, GD's evolution between ages 19 and 21; in the middle, VD's evolution between ages 36 and 38. At the bottom, G(t+1) against G(t); the diagonal line corresponds to G(t+1) = G(t).

The age intervals considered in Figure 2 are 19 to 21 years old for GD, and 36 to 38 years old for VD. The bottom part of Figure 2 plots the ratio G(t+1)/G(t). This ratio describes convergence to attractors when the trajectory falls on points with values < 1, and divergence when the points are > 1 [33]. We asked whether the dynamic of G is more converging (to addictive or healthy states) or diverging, and found that the trajectories neither converge nor diverge. For GD's profile, 229 points of the trajectory had values < 1 whereas 271 points were > 1. Similarly for VD's profile, 275 points had values < 1 and 225 points were > 1, out of 500 iteration points. The translational meaning is that addiction is a process of intervening parameters in a complex manner: addiction can be affected by external parameters provided that they are applied during times of divergence.

Table 1 The values of the constants used in the GD and VD simulations.

alpha: 0.15
beta: 0.25
gamma: 0.2449
S decay for G > 0: 0.02
S decay for G < 0: 0.02
P decay for G > 0: 0.0002
P decay for G < 0: 0.1
D decay for G > 0: 0.00002
D decay for G < 0: 0.02
D increase steps: 20
AS constant steps: 20
AS decay: 0.9
AS probability: 0.02
AS decrease steps: 20
AP constant steps: 10
AP decay: 0.4
AP probability: 0.04
AP decrease steps: 60
AD constant steps: 3
AD decay: 0.2
AD probability: 0.03
AD decrease steps: 5
q saliency constant steps: 10
q decay for G > 0: 0.002
q decay for G < 0: 0.005
Q constant steps: 20
Q decay: 0.9
Q probability: 0.01
Q decrease steps: 40
weight S: 0.3
weight P: 1.1
weight D: 0.4
weight A: 0.4
weight Q: 1
IFMBE Proceedings Vol. 23
Y.Z. Levy, D. Levy, J.S. Meyer and H.T. Siegelmann
Fig. 3: The internal processes S(t), P(t), D(t), and q(t) plotted against G(t) for GD (red dots) and VD (blue crosses).

Next, we asked whether the dynamic behaviors of the individuals differ in fluctuations based on the severity of their addiction. For both simulations, we measured the average and standard deviation, and then calculated the integral of the trajectory's deviation from its average to measure the consistency of the fluctuations. The values of the averages, standard deviations and integrals were (−0.0975, 0.0623, 24.5661) for the more severe case, GD, and (0.4117, 0.1083, 41.4371) for the lighter case, VD. The addicted person has fewer fluctuations and less flexibility in his drug-seeking behavior than the healthier one, seemingly due to the strong saliency of cues [9]. In Figures 3 and 4 the values of G(t) for GD and VD are plotted against the internal processes and the external processes, respectively. The internal processes affect G, and G affects these processes. The values of S(t) are higher during withdrawal than during use; the values of P(t), D(t) and q(t), on the other hand, are higher during drug use. The effect of both internal and external events on G is consistent with equations (1)–(3).

IV. CONCLUSIONS

The dynamical analysis in this paper provides analytical measures of behavioral facts pertaining to addiction. The underlying hypothesis, that addiction is a non-monotonic dynamical disease, differs from the current state of the art. The effort here is to understand the individual fluctuations in drug-seeking behaviors, with the future goal of predicting stages of possible treatment and stages requiring extra caution. The typical more compulsive, less inhibited individual will have an early onset of addiction. His behavior does not fully converge to addictive behavior, due to internal and external events that cause him to try a way out; we stress that in our model none of the cases is monotonic.
This virtual individual demonstrates numerous efforts to rehabilitate, mainly due to pain, and may have short periods of withdrawal, but relapses are fast and overwhelming. The overall flexibility of this subject's behavior is far smaller than the flexibility in the behavioral variables of the less severe case, probably due to the growing saliency of drug cues. The healthier person, with the stronger inhibition, falls into addiction later in life. With internal and external events similar to those of the other virtual case, this individual rehabilitates. The total flexibility and change in the behavioral parameters is overall much greater than that of the severely addicted individual, and the actual behavior does not present a relapse pattern. Both cases demonstrated behavior that is affected by internal and external events, providing hope of rehabilitation for people in either situation. A translational application of the model could be the identification of a treatment that, given with the right timing, will change the effect that the internal and external processes have on the drug-seeking behavior of the individual, and hence ease the process of recovery. The analyses presented in this work are promising, and we will continue with deeper investigations into the individual dynamics of addiction.
ACKNOWLEDGMENT We would like to thank Megan Olsen, Gal Niv, Pascal Steiner, Rocco Crivelli, and Fabio Santaniello for their valuable assistance. Scientific suggestions by Rita Goldstein and Nora Volkow were incorporated in this paper, and we are thankful for their advice.
REFERENCES

1. Interlandi J. (2008). What addicts need. Newsweek, March 3:36–42
2. Redish A. D. (2004). Addiction as a computational process gone awry. Science, 306:1944–7
3. Gutkin B. S., Dehaene S., Changeux J.-P. (2006). A neurocomputational hypothesis for nicotine addiction. Proc Natl Acad Sci U S A, 103(4):1106–1111
4. Winick C. (1962). Maturing out of narcotic addiction. The United Nations Office on Drugs and Crime (UNODC) Bulletin on Narcotics, 1962(1):1–7
5. Sobell L. C., Ellingstad T. P., Sobell M. B. (2000). Natural recovery from alcohol and drug problems: methodological review of the research with suggestions for future directions. Addiction, 95(5):749–764
6. Misch D. A. (2007). "Natural recovery" from alcohol abuse among college students. Journal of American College Health, 55(4):215–218
7. White W. (2002). An addiction recovery glossary: The languages of American communities of recovery. Amplification of a glossary created for the Behavioral Health Recovery Management project
8. Levy Y. Z., Levy D., Meyer J. S., Siegelmann H. T. (2008). Drug addiction: a computational multiscale model combining neuropsychology, cognition, and behavior. Technical Report UM-CS-2008-34, Department of Computer Science, University of Massachusetts Amherst, September 2008
9. Goldstein R. Z., Volkow N. D. (2002). Drug addiction and its underlying neurobiological basis: neuroimaging evidence for the involvement of the frontal cortex. Am J Psychiatry, 159(10):1642–1652
10. Durston S., Thomas K. M., Yang Y., Ulug A. M., Zimmerman R. D., Casey B. J. (2002). A neural basis for the development of inhibitory control. Dev Sci, 5(4):F9–F16
11. Leon-Carrion J., Garcia-Orza J., Perez-Santamaria F. J. (2004). Development of the inhibitory component of the executive functions in children and adolescents. Int J Neurosci, 114(10):1291–311
12. Blakemore S.-J., Choudhury S. (2006). Development of the adolescent brain: implications for executive function and social cognition. J Child Psychol Psychiatry, 47(17):296–312
13. Robinson T. E., Berridge K. C. (1993). The neural basis of drug craving: an incentive-sensitization theory of addiction. Brain Res Rev, 18(3):247–91
14. Robinson T. E., Berridge K. C. (2001). Incentive-sensitization and addiction. Addiction, 96(1):103–14
15. De Alba I., Samet J. H., Saitz R. (2004). Burden of medical illness in drug- and alcohol-dependent persons without primary care. Am J Addict, 13(1):33–45
16. Koob G. F., Le Moal M. (2001). Drug addiction, dysregulation of reward, and allostasis. Neuropsychopharmacology, 24(2):97–129
17. Hodgins D. C., el Guebaly N., Armstrong S. (1995). Prospective and retrospective reports of mood states before relapse to substance use. J Consult Clin Psychol, 63(3):400–7
18. Aston-Jones G., Harris G. C. (2004). Brain substrates for increased drug seeking during protracted withdrawal. Neuropharmacology, 47 Suppl 1:167–79
19. Stewart J. (2000). Pathways to relapse: the neurobiology of drug- and stress-induced relapse to drug-taking. J Psychiatry Neurosci, 25(2):125–36
20. Bonci A., Bernardi G., Grillner P., Mercuri N. B. (2003). The dopamine-containing neuron: maestro or simple musician in the orchestra of addiction? Trends Pharmacol Sci, 24(4):172–7
21. Di Chiara G. (2002). Nucleus accumbens shell and core dopamine: differential role in behavior and addiction. Behav Brain Res, 137(1–2):75–114
22. Volkow N. D., Fowler J. S., Wang G. J. (2004). The addicted human brain viewed in the light of imaging studies: brain circuits and treatment strategies. Neuropharmacology, 47 Suppl 1:3–13
23. Volkow N. D., Fowler J. S., Wang G. J., Swanson J. M. (2004). Dopamine in drug abuse and addiction: results from imaging studies and treatment implications. Mol Psychiatry, 9(6):557–69
24. Robinson T. E., Berridge K. C. (2003). Addiction. Annu Rev Psychol, 54:25–53
25. Hyman S. E. (2005). Addiction: a disease of learning and memory. Am J Psychiatry, 162(8):1414–22
26. Bradby H., Williams R. (2006). Is religion or culture the key feature in changes in substance use after leaving school? Young Punjabis and a comparison group in Glasgow. Ethn Health, 11(3):307–24
27. Barth J., Critchley J., Bengel J. (2006). Efficacy of psychosocial interventions for smoking cessation in patients with coronary heart disease: a systematic review and meta-analysis. Ann Behav Med, 32(1):10–20
28. Sinha R., Fuse T., Aubin L. R., O'Malley S. S. (2000). Psychological stress, drug-related cues and cocaine craving. Psychopharmacology (Berl), 152(2):140–8
29. Erb S., Shaham Y., Stewart J. (1996). Stress reinstates cocaine-seeking behavior after prolonged extinction and a drug-free period. Psychopharmacology (Berl), 128(4):408–12
30. Spealman R. D., Barrett-Larimore R. L., Rowlett J. K., Platt D. M., Khroyan T. V. (1999). Pharmacological and environmental determinants of relapse to cocaine-seeking behavior. Pharmacol Biochem Behav, 64(2):327–36
31. de Wit H., Stewart J. (1983). Drug reinstatement of heroin-reinforced responding in the rat. Psychopharmacology (Berl), 79(1):29–31
32. See R. E. (2002). Neural substrates of conditioned-cued relapse to drug-seeking behavior. Pharmacol Biochem Behav, 71(3):517–29
33. Kaplan D., Glass L. (1995). Understanding Nonlinear Dynamics. Springer, New York

Author: Yariv Z. Levy
Institute: Department of Computer Science, University of Massachusetts Amherst
Street: 140 Governors Drive
City: Amherst, MA
Country: USA
Email: [email protected]
Adroit Limbs

Pradeep Manohar1 and S. Keerthi Vasan2

1 Sri Sairam Engineering College/EEE, Anna University, Chennai, India
2 Sri Sairam Engineering College/ECE, Anna University, Chennai, India
Abstract — Necessity is the mother of invention. The brain, the master of our body, generates signals in accord with our thoughts and directs every part to perform the desired actions. This work aims to ease the burden of amputees. It targets capturing brain signals using brain wave sensors (sensors attached to the scalp to monitor brain wave activity in different parts of the brain) and feeding those signals to a purpose-designed artificial hand. The Adroit limb differs from existing prostheses: it can perform activities such as peeling and can feel objects like a normal human hand. Existing models provide only support, whereas the proposed prototype can respond to external stimuli. Brain waves are obtained from a special analysis of the EEG (electroencephalogram). These brain waves show the brain's response to an external stimulus or event: brain activity before, during, and after a stimulus presentation is recorded, allowing us to observe where, when, and how the brain responds. An artificial hand plays a very important role in the lives of amputees and the physically challenged. Such people are usually given an external attachment that looks similar to a normal hand but cannot perform all the desired actions, and existing models cannot produce actions in accordance with thought. Here, the signals are obtained directly from the brain without any pre-existing sensor for this purpose. The proposed prototype aims to address these problems to a large extent.

Keywords — Artificial limb, Brain wave controlled, EEG technique, Amputees, Stepper motor
I. WORKING PROCEDURE

A. WORKING STATES OF THE BRAIN

BETA STATE: The brain usually operates in the beta wave state, around 13 to 40 Hz. A person in this state has acute concentration.

Fig. 3 Beta

ALPHA STATE: The brain is said to be in the alpha state at 7 to 12 Hz. In this state the body and mind relax, and the mind reaches the gateway to creativity.

Fig. 1 Alpha

THETA STATE: The frequency in this state is around 4 to 7 Hz. Creativity and intuition shoot through the roof, approaching the gateway to enhanced learning and memory.

Fig. 2 Theta

DELTA STATE: The frequency of the signal generated by the brain is less than 4 Hz. In this state a person has a magnetic memory and increased learning capability.

B. BLOCK DIAGRAM

The block diagram of the proposed system can be divided into two sections.

Transmitter section: The signal generated by the brain is captured, processed and finally converted to a digital signal for transmission. The transmitter serves this purpose.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1692–1695, 2009 www.springerlink.com
Receiver section: The processed signals are obtained at the receiving end, matched against the actions programmed in the microcontroller, and the action corresponding to the thought is performed by the proposed prototype.
[Fig. 4 block diagram components: sensor, voltage amplifier, A/D converter, RS232, transmitter; receiver, RS232, microcontroller, D/A converter, stepper driver, relay circuit, power supply, Darlington amplifier, nano DC motors, stepper motor]
Fig. 4 Transmitter and receiver section

C. CIRCUIT DESCRIPTION

The components in the block diagram are briefly described as follows.

Sensor: To capture a person's thought, a mind reader/brain wave sensor is used. Electroencephalography (EEG) is the measurement of electrical activity produced by the brain as recorded from electrodes placed on the scalp. Here we draw on results from the first human experiments using a new electrophysiology sensor called ENOBIO, which uses carbon nanotube arrays to penetrate the outer layers of the skin and improve electrical contact. These tests, which included traditional protocols for the analysis of the electrical activity of the brain (spontaneous EEG and ERP), indicate performance on a par with state-of-the-art research-oriented wet electrodes, suggesting that the envisioned mechanism, skin penetration, is responsible.

Microcontroller: The microcontroller is programmed using MATLAB as the front end. It is programmed for specific actions, so that when a signal generated by the brain is encountered, the corresponding action is performed as per the thought. The microcontroller operates in the range of 4.85 to 5 V. It comprises five ports and eight slots; different programs can be stored in these slots. It is programmed at ports 18 and 19, where the read and write operations are performed. A MAX232 is used for level shifting. The program from the computer is encoded in the microcontroller.

Voltage amplifier: Because the strength of the biological signal is very low, the signal must be amplified by an amplifier with a good SNR. This module conditions the signal to drive the rest of the device.

Converters: To drive the microcontroller, the analog signal generated by the brain is converted to digital using an A/D converter.

RS232: In telecommunications, RS-232 (Recommended Standard 232) is a standard for serial binary data signals between a DTE (Data Terminal Equipment) and a DCE (Data Circuit-terminating Equipment). It is commonly used in computer serial ports. RS-232 devices are classified as DTE or DCE; this defines, at each device, which wires send and receive each signal. The RS-232 standard defines the voltage levels that correspond to logical one and logical zero. Valid signals are plus or minus 3 to 15 volts; the range near zero volts is not a valid RS-232 level. Logic one is defined as a negative voltage; this signal condition is called marking and has the functional significance of OFF.

Darlington amplifier: The Darlington transistor (often called a Darlington pair) is a semiconductor device that combines two bipolar transistors in a single device so that the current amplified by the first is amplified further by the second. This gives a high current gain (written β or hFE) and takes less space than two discrete transistors in the same configuration.
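The RS-232 voltage-level rules described above can be expressed as a small helper; treating exactly ±3 V as still valid is a boundary assumption:

```python
def rs232_logic_level(volts: float) -> str:
    """Classify an RS-232 line voltage per the levels described above.

    Negative 3..15 V: logic one (marking, OFF); positive 3..15 V: logic zero
    (spacing, ON); anything else, including the region near 0 V, is invalid.
    """
    if -15.0 <= volts <= -3.0:
        return "logic 1 (mark)"
    if 3.0 <= volts <= 15.0:
        return "logic 0 (space)"
    return "invalid"

print(rs232_logic_level(-12.0))  # logic 1 (mark)
print(rs232_logic_level(0.5))    # invalid
```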
Integrated packaged devices are available, but it is still common to use two separate transistors. A Darlington pair behaves like a single transistor with a high current gain, approximately the product of the gains of the two transistors: β ≈ β1 × β2.

Relays: One simple method of providing electrical isolation between two circuits is to place a relay between them.

Stepper motor: Since the fingers need to perform precise actions, stepper motors are used at the joints. Unlike a conventional motor, a stepper motor can perform accurate motions by rotating through specific angles. Stepper motors operate much differently from normal DC motors, which rotate when voltage is applied to their
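Rotating through a specific angle maps directly to a step count; the 1.8° step angle below is a common motor value used as an illustrative assumption, not a specification from this paper:

```python
def steps_for_angle(angle_deg: float, step_angle_deg: float = 1.8) -> int:
    """Number of whole steps a stepper motor needs to rotate through angle_deg.

    A 1.8-degree motor gives 200 steps per full revolution (illustrative value).
    """
    return round(angle_deg / step_angle_deg)

print(steps_for_angle(90))   # 50
print(steps_for_angle(360))  # 200
```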
terminals. Stepper motors, on the other hand, effectively have multiple "toothed" electromagnets arranged around a central gear-shaped piece of iron. The electromagnets are energized by an external control circuit, such as a microcontroller.

D. WORKING

The brain, the source of all thoughts, produces impulses according to the thoughts we make. These impulses are always analog in nature and are sent through nerves to all parts of the body. The signals are of constant amplitude and frequency-modulated: the impulse has constant amplitude for one thought with varying frequency, while for another thought there is a sharp difference in amplitude. The energized copper coil produces a current, which activates the printed circuit boards (PCBs) at the other end. A transmitter/receiver device placed externally to the body is connected to the PCBs; the connecting wires pass through the tiny hair pores. The transmitter/receiver device, placed externally to the human body, then takes over the entire command, acting as a secondary brain outside the body. This device relays the message, according to the information obtained from the brain, to the computer or any robot that can act according to
[Fig. 5 working flowchart: brain → nerve impulses → brainwave sensor → electrical signals → external device → programmed microcontroller → match check → stepper motor → interfacing gadget → action as per thought; unmatched signals are stored as new actions]
the instructions from the device. The robot may also be connected through a computer. To operate a computer, the instructions must be in the form of digital signals, but the brain, as already stated, produces only analog signals. Hence the analog signal has to be converted to machine-understandable digital signals, so an analog-to-digital converter is incorporated. The transmission medium from the transmitter/receiver device may be designed according to convenience. It may be a wire, but a wire can be used only over a limited range, so for convenience radio waves are used for transmission. The microcontroller is programmed for certain actions to be performed. Any action to be performed is received from the brain and matched against the actions already programmed in the microcontroller. If the action given by the brain does not match a programmed one, it is stored as a new action. The result is in turn fed to the stepper motor.

E. STAGES OF THE PROTOTYPE

First module: The first module of the prototype is a model that is attached (wrapped) to the hand; movements are made as per our thoughts, and the actions produced by the model are viewed on a monitor.

Second module: The second module is an artificial hand that performs actions as per thought, fixed either on physically challenged or normal people.

Final module: The final outcome is the objective itself: to make the hand perform normal actions, such as peeling oranges, when attached to physically challenged people. It then functions similarly to a normal hand.

F. ADVANTAGES

Performs actions similar to a normal hand. Any action can be performed as per thought. Simple construction and light weight. Capable of performing activities like peeling. Low cost. Responds to external stimuli.

G. DISADVANTAGES
Subject to battery weakness. Cannot lift heavy objects. Restricted from performing a few actions.
II. CONCLUSION

This work is intended for physically challenged people who wish to live like ordinary men and women; with it they can perform their tasks effectively. This technology is certainly a boon to this special community.

REFERENCES

1. www.dailymail.co.uk
2. www.ieeeexplore.ieee.org/ie15/10678/33710/01694113.pdf?arnumber=1604113
3. http://courses.ece.uiuc.edu/ece445/projects/fall2006/project9_proposal.doc
4. www.national academy of sciences.com
5. www.cnnnews.com
6. www.bbc.com
7. Electronics For You (edition: May 2006)
A Mathematical Model to Study the Regulation of Active Stress Production in GI Smooth Muscle Viveka Gajendiran1 and Martin L. Buist1 1
Division of Bioengineering, National University of Singapore, Singapore
Abstract — In the gastrointestinal (GI) system, motility is governed by the contraction and relaxation of smooth muscle (SM) in response to many regulatory factors. SM in a hollow organ like the stomach exhibits two types of contraction: tonic, to maintain the shape of the organ, and phasic, in response to neurotransmitters, hormones or other signaling molecules. Motility disorders such as dysphagia, gastroesophageal reflux disease, irritable bowel syndrome and hypotensive or hypertensive disorders all involve abnormal SM function. Hence, it is important to gain a deeper understanding of contraction and its regulation in GI SM cells. Skeletal muscle cells have force-velocity curves in which shortening velocities are determined only by load and the myosin isoform. In contrast, when smooth muscle activation is altered, e.g. by changing a hormone or agonist concentration, a different set of velocity-stress curves can be obtained. This difference is due to the regulation of both the number of active cross bridges (determining force) and their average cycling rates (determining velocity). The regulatory system depends on the phosphorylation of cross bridges, which in turn depends on cytosolic Ca2+ levels. Thus, a model has been proposed to study the effects of (a) Ca2+ concentration on the kinetics of myosin phosphorylation, (b) cross-bridge cycling rates, and (c) the latch state on tonic and phasic contractions. The objective of the model is to describe the regulation of myosin phosphorylation and active stress production in terms of the intracellular Ca2+ concentration. A mathematical formulation of cross-bridge cycling has been adapted from the literature and the parameters have been fitted to experimental data from GI SM. Here it was assumed that the Ca2+-calmodulin mediated phosphorylation of myosin is the primary determinant of the kinetics of cross-bridge cycling. This model is the first step towards developing a dynamic model of GI SM contraction.
Keywords — Gastrointestinal (GI), Smooth muscle (SM) contraction, Myosin Phosphorylation, Active Stress, Crossbridge model, Latch state
I. INTRODUCTION

Smooth muscle cells line the walls of most of the hollow organs within the body. The ability of these cells to generate force and motion is essential to the normal functioning of the cardiovascular, respiratory, and digestive systems. Since altered smooth muscle contraction may contribute to disease
states, it is important to understand the mechanisms underlying smooth muscle contraction. The functional role of the gastrointestinal tract is to digest and absorb nutrients. These processes are facilitated by the orchestrated movement of the luminal contents from the mouth to the anus [1]. Smooth muscle cells are the force producing elements of the GI tract. The force producing mechanisms of these cells are controlled by the intrinsic electrical slow wave activity and the enteric nervous system through electro-mechanical excitation-contraction coupling and also through pharmaco-mechanical coupling [2,4]. Pacemaker activity and the subsequent slow waves generated by the interstitial cells of Cajal (ICC) provide the background tone of the smooth muscle syncytium [1,2,4]. The tonic contractions contribute to maintaining the shape of the organ against an imposed load. Neurotransmitters from the enteric nervous system, along with hormones and paracrine factors released in response to physiological stimuli, modify SM cell behavior such that phasic contractions are elicited. A tonic contraction also enhances the open probability of voltage dependent Ca2+ channels in SM cells and increases the amplitude of phasic contractile activity in response to various stimuli. The contractile unit of smooth muscle cells consists of thin actin containing filaments and the thick myosin containing filaments [5]. Tiny projections from the myosin filament called cross bridges extend towards the actin filament. These projections are the molecular motors which convert the chemical energy into mechanical work during their cyclic interaction with actin [5]. In smooth muscle, Ca2+ is the trigger for contraction [2]. Increase in intracellular Ca2+ can be due to several distinct mechanisms [3]. When the concentration of intracellular Ca2+ increases, a series of events take place. The ubiquitous cytoplasmic protein calmodulin binds to Ca2+ ions in a ratio of 4:1 [5]. 
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1696–1699, 2009 www.springerlink.com

A Mathematical Model to Study the Regulation of Active Stress Production in GI Smooth Muscle

In the absence of Ca2+, the protein caldesmon binds strongly to the tropomyosin-actin complex on the thin filament, preventing actin-myosin interaction. This inhibition is removed when the Ca2+-calmodulin complex binds to caldesmon. In the thick filament, the Ca2+-calmodulin complex binds to a specific site on the myosin light chain kinase (MLCK), converting it from the inactive form to the active form. This allows MLCK to phosphorylate the regulatory light chain of myosin (MRLC)
resulting in actin-myosin interaction, which greatly potentiates the splitting of ATP and cross-bridge cycling [6]. The presented model is designed on the basis of this mechanism. Apart from the regular cross-bridges, a population known as latch bridges is also produced [6,7]. The latch state is one of reduced cross-bridge cycling rates that depends on low but significant levels of Ca2+-dependent cross-bridge phosphorylation. Latch bridges contribute to sustained force maintenance at low levels of intracellular Ca2+, especially during tonic contraction.

II. METHODS

The Hai and Murphy model [8] is a kinetic model of the interaction of cross-bridge-forming myosin heads with thin-filament actin, the interaction that brings about contraction in SM. The model is based on the hypothesis of two cross-bridge populations: (a) cycling phosphorylated cross-bridges (AMp) and (b) noncycling dephosphorylated cross-bridges (latch bridges, AM). According to the model, both populations contribute to the development of stress in SM. A schematic representation of the four-state model is shown in Fig. 1 and the resulting equations are given below.

Fig. 1 Four-state cycling cross-bridge model proposed by Hai and Murphy [8]. Free myosin (A+M) is phosphorylated (Mp+A) by the kinase (K1) and dephosphorylated by the phosphatase (K2); phosphorylated myosin attaches (K3) to form cycling cross-bridges (AMp) and detaches (K4); attached cross-bridges are dephosphorylated (K5, phosphatase) to latch bridges (AM), rephosphorylated (K6, kinase), or detach from actin (K7).

d[M]/dt = -K1[M] + K2[Mp] + K7[AM]          (1)

d[Mp]/dt = K4[AMp] + K1[M] - (K2+K3)[Mp]    (2)

d[AMp]/dt = K3[Mp] + K6[AM] - (K4+K5)[AMp]  (3)

d[AM]/dt = K5[AMp] - (K7+K6)[AM]            (4)

In addition to the assumptions used in the Hai and Murphy model [8], the following assumptions were made. Only one regulatory mechanism was considered: the phosphorylation of MRLC. In the absence of direct or conclusive evidence for the regulation of MLCP (the phosphatase), K2 and K5 were set to be constants; by assumption K6 = K1, K5 = K2 and K4 = K3/4 [8]. The Hai and Murphy model (Eqns (1)-(4)) was implemented and solved in Matlab using a built-in explicit Runge-Kutta formula, the Dormand-Prince pair. Calcium dependence (Eqn (5)) was added through the regulation of the rate of phosphorylation (K1) by MLCK, as proposed in the uterine contraction model of Bursztyn et al. [9]. Simulations were then run for the time dependent intracellular Ca2+ data produced by the electrophysiology model developed by Corrias and Buist [11]. The rate of phosphorylation was made dependent on the intracellular Ca2+ level, [Ca2+]i, through the relation

K1(t) = [Cai(t)]^n / ([Ca(1/2MLCK)]^n + [Cai(t)]^n)   (5)

where Cai(t) is the time dependent intracellular [Ca2+], Ca(1/2MLCK) is the [Ca2+]i required for half activation of MLCK and n is the Hill coefficient of activation.

Table 1 Parameters used for the simulation (from [9])

NAME           VALUE     UNIT
K2             1.2387    1/s
K3             0.1419    1/s
K7             0.0378    1/s
Ca(1/2MLCK)    256.98    nM
n              8.7613    -

Total myosin phosphorylation and stress were calculated using the following relations:

Stress = AMp + AM             (6)

Phosphorylation = Mp + AMp    (7)
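As a concrete illustration of Eqns (1)-(7), the sketch below integrates the four-state system with a classical fourth-order Runge-Kutta scheme (a fixed-step stand-in for the adaptive Dormand-Prince pair used in the paper) and the Table 1 parameters. The Ca2+ transient is a synthetic stand-in for the slow-wave data of Corrias and Buist [11], which is not reproduced here, so the resulting numbers are illustrative only.

```python
# Sketch of the four-state Hai-Murphy model (Eqns 1-4) with the
# Ca2+-dependent MLCK rate K1 of Eqn 5.  Rate constants come from
# Table 1; the Ca2+ transient is an assumed synthetic waveform, not
# the actual electrophysiology-model input used in the paper.
import math

K2, K3, K7 = 1.2387, 0.1419, 0.0378   # 1/s, Table 1
K4 = K3 / 4.0                          # assumption K4 = K3/4 [8]
Ca_half, n_hill = 256.98, 8.7613       # nM, Table 1

def k1_of_ca(ca_nM):
    """Eqn 5: phosphorylation rate as a Hill function of [Ca2+]i."""
    return ca_nM**n_hill / (Ca_half**n_hill + ca_nM**n_hill)

def ca_transient(t):
    """Synthetic [Ca2+]i (nM): 100 nM baseline with a bump each 20 s wave."""
    phase = (t % 20.0) / 20.0
    return 100.0 + 250.0 * math.exp(-((phase - 0.3) / 0.1) ** 2)

def derivs(t, y):
    M, Mp, AMp, AM = y
    K1 = k1_of_ca(ca_transient(t))
    K6, K5 = K1, K2                        # assumptions K6=K1, K5=K2 [8]
    dM   = -K1*M + K2*Mp + K7*AM           # Eqn 1
    dMp  = K4*AMp + K1*M - (K2 + K3)*Mp    # Eqn 2
    dAMp = K3*Mp + K6*AM - (K4 + K5)*AMp   # Eqn 3
    dAM  = K5*AMp - (K7 + K6)*AM           # Eqn 4
    return [dM, dMp, dAMp, dAM]

def rk4_step(t, y, h):
    """One classical RK4 step for the linear ODE system."""
    k1 = derivs(t, y)
    k2 = derivs(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = derivs(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = derivs(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

y, t, h = [1.0, 0.0, 0.0, 0.0], 0.0, 0.01  # all myosin initially free (M)
while t < 120.0:                            # six 20 s slow waves
    y = rk4_step(t, y, h)
    t += h

M, Mp, AMp, AM = y
stress = (AMp + AM) / 0.8        # Eqn 6, normalized by the 80 % maximum
phosphorylation = Mp + AMp       # Eqn 7
print(f"stress={stress:.3f}  phosphorylation={phosphorylation:.3f}")
```

Because the four rate equations sum to zero, the total myosin fraction M + Mp + AMp + AM is conserved, which gives a useful correctness check on any implementation.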
IFMBE Proceedings Vol. 23
Viveka Gajendiran and Martin L. Buist

III. RESULTS

The gastric smooth muscle electrophysiology model of Corrias and Buist [11] was used to produce [Ca2+]i as a function
of time for six slow waves with a plateau phase membrane potential of -43 mV. This is above the threshold membrane potential needed for the activation of contraction in SM [10,12]. This [Ca2+]i data, shown in Fig. 2, served as the input Cai(t) used to calculate K1(t) from Eqn 5. The values of K1(t) computed from the Cai(t) data were substituted into Eqns 1-4, which were solved for the time-dependent values of [M], [Mp], [AMp] and [AM] shown in Fig. 3. The stress and phosphorylation produced in the SM cell were calculated from Eqns 6 and 7 and are shown in Fig. 4. The stress values were normalized by 0.8 (the maximum stress), as the maximum number of attached cross-bridges (AMp + AM) was assumed to be 80% of the total myosin [8]. The phosphorylation was expressed as a fraction of total myosin.

Fig. 2 Intracellular Ca2+ concentration (in mM) corresponding to slow waves in canine gastric smooth muscle cells (from [11]).

Fig. 3 Variation of the fractions of the species (M, Mp, AMp and AM) along the slow wave corresponding to the Cai(t) in Fig. 2.

Fig. 4 Normalized phosphorylation (solid line) and stress (dotted line) curves produced by the model for the Ca2+ data in Fig. 2.

IV. DISCUSSION
The Hai and Murphy model, modified to include a dependence on the intracellular calcium concentration, has been tested with the Ca2+ data produced by the electrophysiology model for gastric SM [11]. To date, most of the modeling work in this field has been done for a single calcium transient, i.e. one cycle of [Ca2+]i accumulation and decay over a given time range [8,9]. This, however, may not accurately represent the physiological situation: a train of slow waves determines the short and long term intracellular Ca2+, so it is more useful to study the contraction pattern of smooth muscle in response to multiple slow waves. Here a train of six slow waves was chosen as the test data set. The Ca2+ data correspond to slow waves with a plateau phase membrane potential of -43 mV. Slow waves must exceed a 'mechanical threshold' for excitation-contraction coupling to occur. It has been found experimentally that this threshold lies between -40 mV and -60 mV for different SM cells; above the threshold there is a sharp rise in the Ca2+ concentration in many GI SM types, causing active contraction to take place [10,12]. The preliminary results presented here show that slow waves with a plateau phase above the mechanical threshold are able to produce phasic contractions. Latch-bridge formation and its contribution to the stress can be seen clearly in Fig. 4: although the phosphorylation level begins to fall following the peak of the Ca2+ transient, the stress still increases to reach a maximum before decreasing at a later time. Moreover, even when the level of phosphorylation falls to almost negligible values, a considerable amount of stress is still maintained. This can also be seen in the AM (latch-bridge) curve in Fig. 3.
The tonic contraction mechanism of GI SM will be studied in the near future. Once it is established, the intention is to use this behavior to derive realistic initial conditions for the phasic contractions. It is believed that phasic contractions are superimposed on the inherent basal tone of the GI SM and occur in response to stimuli from various agonists [1,3,4]. It has been noted that the contractile response to the Ca2+ current caused by the slow wave is biphasic [12]: the first peak of the slow wave causes contraction in the SM cells and, if the plateau depolarization of the slow wave is above the threshold, a second phase of contraction is initiated, with the amplitude of the contraction depending on the membrane potential and the duration of the plateau phase. The current model produces only a monophasic stress response, as can be seen in the stress curve of Fig. 4. This disagreement between the experimental observations and the model behavior needs to be addressed. The model presented here is the first step towards developing a dynamic model of GI SM contraction. Although this model is based on the assumption that only MLCK is regulated by the intracellular Ca2+ concentration [9], other Ca2+-dependent and Ca2+-independent regulatory systems cannot be excluded. Experiments are being performed to examine whether a thin-filament regulatory pathway is involved in SM contraction [14,15]. Using the experimental results and evidence for such regulatory pathways, the present model will be extended by the inclusion of more control systems. Recent studies have also addressed Ca2+ sensitivity [13]; it has been proposed that the sensitivity of the contractile apparatus to Ca2+ is not constant but varies under different conditions. This behaviour of the SM cell contractile elements can be accounted for by including a Ca2+ sensitivity factor in the model. The future developments do, however, depend upon the availability of concrete experimental evidence and data.

V. CONCLUSIONS

In summary, a simple model, adapted from the Hai and Murphy model [8], has been used to explain the calcium dependence of SM contraction and its regulation. A mathematical formulation of the four states and cross-bridge cycling has been used, with parameters fitted to experimental data, to study the effects of (a) Ca2+ concentration on the kinetics of myosin phosphorylation and (b) the latch state during tonic and phasic contractions. As the model is developed further, we expect it to serve as a bridge between the electrophysiological and mechanical behaviour of the SM cell. This model sits at the cellular level of the cell-tissue-organ hierarchy as seen in the Physiome project. In the future, such models can serve as a tool to make predictions under prescribed conditions and can also be used to study diseases of the GI system.

REFERENCES

1. Szurszewski J.H. (1987) Electrical basis for gastrointestinal motility. In: Physiology of the Gastrointestinal Tract (2nd ed), ed. L.R. Johnson. Raven Press, New York, pp 383-422.
2. Sanders K.M., Koh S.D., Ward S.M. (2006) Organization and electrophysiology of interstitial cells of Cajal and smooth muscle cells in the gastrointestinal tract. In: Physiology of the Gastrointestinal Tract (4th ed), ed. L.R. Johnson. Raven Press, New York, pp 533-576.
3. Makhlouf G.M., Murthy K.S. (2006) Cellular physiology of gastrointestinal smooth muscle. In: Physiology of the Gastrointestinal Tract (4th ed), ed. L.R. Johnson. Raven Press, New York, pp 524-532.
4. Johnson L.R. (1987) Smooth muscle physiology. In: Physiology of the Gastrointestinal Tract (2nd ed). Raven Press, New York, pp 246-262.
5. Horowitz A., Menice C.B., Laporte R., Morgan K.G. (1996) Mechanisms of smooth muscle contraction. Physiol Rev 76:967-1003.
6. Dillon P.F., Aksoy M.O., Driska S.P., Murphy R.A. (1981) Myosin phosphorylation and the cross-bridge cycle in arterial smooth muscle. Science 211(4481):495-497.
7. Dillon P.F., Murphy R.A. (1982) Tonic force maintenance with reduced shortening velocity in arterial smooth muscle. Am J Physiol Cell Physiol 242:C102-C108.
8. Hai C.M., Murphy R.A. (1988) Cross-bridge phosphorylation and regulation of latch state in smooth muscle. Am J Physiol Cell Physiol 254:C99-C106.
9. Bursztyn L., Eytan O., Jaffa A.J., Elad D. (2007) Mathematical model of excitation-contraction in a uterine smooth muscle cell. Am J Physiol Cell Physiol 292:C1816-C1829.
10. Vogalis F., Publicover N.G., Hume J.R., Sanders K.M. (1991) Relationship between calcium current and cytosolic calcium in canine gastric smooth muscle cells. Am J Physiol Cell Physiol 260:C1012-C1018.
11. Corrias A., Buist M.L. (2008) Quantitative description of gastric slow wave activity. Am J Physiol Gastrointest Liver Physiol 294(4):G989-G995.
12. Ozaki H., Stevens R.J., Blondfield D.P., Publicover N.G., Sanders K.M. (1991) Simultaneous measurement of membrane potential, cytosolic Ca2+, and tension in intact smooth muscles. Am J Physiol Cell Physiol 260:C917-C925.
13. Ratz P.H., Berg K.M., Urban N.H., Miner A.S. (2005) Regulation of smooth muscle calcium sensitivity: KCl as a calcium-sensitizing stimulus. Am J Physiol Cell Physiol 288:C769-C783.
14. Gerthoffer W.T., Pohl J. (1994) Caldesmon and calponin phosphorylation in regulation of smooth muscle contraction. Can J Physiol Pharmacol 72(11):1410-1414.
15. Winder S.J., Allen B.G., Clément-Chomienne O., Walsh M.P. (1998) Regulation of smooth muscle actin-myosin interaction and force by calponin. Acta Physiol Scand 164(4):415-426.
Design of Customized Full Contact Shoulder Prosthesis using CT-data & FEA

D. Sengupta1,2, U.B. Ghosh1 and S. Pal1
1 School of Bioscience & Engineering, Jadavpur University, Kolkata-700 032, India
2 Tata Consultancy Services, Technopark Campus, Kariyavattom P.O., Thiruvananthapuram 695581, Kerala, India
Abstract — Shoulder joints are replaced due to trauma, chronic arthritis, osteoarthritis and other disease processes. The available prostheses generally use standard ball diameters and stem lengths for the humeral component; many varieties are available, including modular types with replaceable ball diameters. There is therefore a mismatch between the actual bony cavity and the prosthesis to be inserted, leading to excess bone loss during insertion into the bone, which sometimes calls for an early revision surgery. A patient-specific hemi-shoulder prosthesis was therefore designed from the CT-scan data obtained from a specific patient. The materials considered were medical grade stainless steel (316L), Ti6Al4V and Co-Cr-Mo alloy. This type of prosthesis will be manufactured using computer aided manufacturing techniques. Keywords — Full contact, shoulder prosthesis, arthritis, customization, CT-data, FE-analysis
I. INTRODUCTION

The first shoulder replacement was performed in 1893 by a French surgeon, Pean, for a tuberculosis infection. However, it was not until the 1970s that shoulder replacements were routinely performed. Initially, implant designs were highly constrained devices that did not accurately restore shoulder biomechanics and ultimately failed. Modern total shoulder implants allow for more motion and less constraint. A shoulder replacement involves replacing the head of the humerus and the glenoid with metal and plastic parts that then act as a new shoulder joint [1,2,3]. Shoulder joints are replaced due to traumatic injury, osteoarthritis or other disease processes. The available prostheses generally use standard ball diameters and stem lengths for the humeral component. Many varieties are available, including modular types with replaceable ball diameters; there is therefore a mismatch between the actual bony cavity and the prosthesis to be inserted, leading to excess bone loss during insertion, which sometimes calls for an early revision surgery. A patient-specific hemi-shoulder prosthesis was therefore designed using the CT-scan data obtained from a particular patient. The materials used for the design of the prosthesis were medical grade stainless steel (316L), Ti6Al4V and Co-Cr-Mo alloy.
II. MATERIAL & METHODS

The shoulder is a multifunctional joint consisting of a chain of bones connecting the humerus to the trunk. The shoulder consists of the scapula and clavicle and functions as a movable but stable base for the motions of the humerus. The Sterno-Clavicular joint connects the clavicle and sternum, and the scapula in its turn is connected to the clavicle by the Acromio-Clavicular joint. Another connection between the scapula and the thorax is the Scapulo-Thoracic Gliding Plane, which constrains the possible movements with two Degrees-of-Freedom (DOF) and makes the system a closed chain mechanism. A large number of muscles also control the movement of the joint. The steps involved in processing the CT data of a particular bone from a specific patient are shown in Fig. 1. Image processing of the CT slices (in DICOM format), obtained from a local hospital with informed consent, was performed using the software MIMICS® to generate the exact contour of the inner medullary cavity of the humerus; the subsequent 3D model was then meshed using the Magics® software to refine the finite
Fig. 1 Steps involved in the image processing of the CT slices

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1700–1703, 2009 www.springerlink.com
element mesh. The mesh was then exported to the FEA software Ansys®, where volumes were generated from these areas and meshed with 10-noded tetrahedral elements. This file was then sent to MIMICS® for material property assignment based on gray values. The gray values were converted to Hounsfield units (HU). The gray value is determined by the intensity of the colour: for air (a colourless entity) the gray value is 0, and it increases for denser objects; the corresponding HU value of air is -1024. Before assigning materials to the elements of the volumetric mesh, MIMICS® first calculates a gray value for each element, which is then used in further calculations. MIMICS® uses an accurate method to assign gray values to elements by calculating the exact intersections between voxels (a voxel being the volumetric analogue in a 3D region of a pixel in a 2D region); while being accurate, care has been taken that the calculations can be performed efficiently. For the material properties of bone, a fixed Poisson's ratio of 0.3 was used. For element-by-element material property assignment, MIMICS® calculates the average gray value of all the voxels inside an element in HU, the whole range of densities is divided linearly into a given number of groups, and each group is assigned a Young's modulus according to Eqns (1) and (2) below. The loading conditions for various angles of arm abduction were adopted from Van der Helm's data [4,5], as shown in Table 1.

Table 1 Gleno-humeral joint reaction force for different angles of arm abduction for unloading conditions (Van der Helm 1994)

Load Case  Abduction Angle (º)  FX (N)   FY (N)    FZ (N)    Resultant Force (N)
2          30                   164.46   -16.14     14.03    165.84
3          60                   323.74    -3.68    -36.88    352.85
4          90                   383.71    34.62    -77.28    392.95
5          120                  314.03    45.56   -137.96    346.01
6          150                  137.74    11.86   -134.43    192.83
7          180                   39.78    -3.73    -72.51     82.79

In this study three cases were considered for comparison and design improvement: 1) the normal joint, 2) the joint with a standard prosthesis and 3) the joint with a CT-data based customized shoulder prosthesis, for a better understanding of the design. The customized shoulder implant was designed by taking measurements from the CT scan slices. The CT scan data collected from a local hospital were imported into the image processing software MIMICS®. The slice distance was 1 mm and there were a total of 171 slices. The bone scale was chosen for proper visibility. The primary masking was applied with a threshold ranging from 226 to 1905, where -1024 corresponds to the gray value of air. Region growing was performed on the cortical region of the masked portion and a 3D volume was generated from this region. The polylines created thereafter were exported as area.lis files into the finite element software Ansys for the design of the customized implant. The polylines generated by the software corresponding to the inner medullary cavity, as shown in Fig. 2(A), were chosen and the outer polylines were deleted, as in Fig. 2(B). Two mutually perpendicular planes were drawn in order to add the innumerable small lines created for each of the CT slices during export from MIMICS®. The lines in each of the layers were then added by Boolean operations and areas were created. The coordinates of the centers of the areas were retrieved and joined by a spline to create the central axis. Four coordinates on each of the areas were chosen so that they could be joined to form segmented splines; the coordinates in the chosen layers lying in the same direction were joined by splines. Areas were created, from which volumes were generated to form the humeral stem of the customized implant, as shown in Fig. 3(A) and (B). The range of Hounsfield units in the humerus is -1024 to 1650 (ref. MIMICS®). The density computed from this range is

rho = 689.3044 + 0.673148 x HU   (1)

and Young's modulus (E) is given by

E = 0.0000013 x rho^3.15   (2)

The models were solved in ANSYS® using a PC with an Intel® Pentium IV processor (3.2 GHz, HT, 1536 MB RAM). Each model was solved in approximately 15 minutes; altogether the processing took 11 hours. The post-processing was done in the ANSYS® postprocessor.

Fig. 2 (A) The polylines in the whole humerus, (B) polylines of the inner medullary cavity
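The element-wise material assignment described above can be sketched as follows: an element's average Hounsfield value is converted to density and Young's modulus, and the density range is divided linearly into groups, each with Poisson's ratio 0.3. This follows Eqns (1) and (2) as reconstructed here; the helper names, the number of groups and the sample HU values are illustrative assumptions, not part of the MIMICS® interface.

```python
# Sketch of HU -> density -> Young's modulus conversion and grouping,
# following Eqns (1) and (2).  Units are taken to be kg/m^3 for density
# and MPa for E (assumed from the magnitudes these formulas produce).

def density_from_hu(hu):
    """Eqn 1: apparent density from Hounsfield units."""
    return 689.3044 + 0.673148 * hu

def youngs_modulus(rho):
    """Eqn 2: Young's modulus from density."""
    return 0.0000013 * rho ** 3.15

def assign_material_groups(element_hu, n_groups=10):
    """Bin element densities into n_groups equal-width groups; each group
    gets the modulus of its mid-density, with Poisson's ratio fixed at 0.3."""
    rhos = [density_from_hu(h) for h in element_hu]
    lo, hi = min(rhos), max(rhos)
    width = (hi - lo) / n_groups or 1.0   # avoid zero width if all equal
    groups = []
    for rho in rhos:
        idx = min(int((rho - lo) / width), n_groups - 1)
        mid = lo + (idx + 0.5) * width
        groups.append({"group": idx, "E": youngs_modulus(mid), "nu": 0.3})
    return groups

# Example: a handful of element-average HU values spanning the humerus range
hus = [-200.0, 150.0, 600.0, 1100.0, 1650.0]
for hu, g in zip(hus, assign_material_groups(hus, n_groups=4)):
    print(f"HU={hu:7.1f}  group={g['group']}  E={g['E']:.1f}")
```

As a sanity check, HU = 1650 (dense cortical bone) yields a density near 1800 and a modulus in the tens of GPa, which is the expected order of magnitude for cortical bone.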
Fig. 3 (A) Designed customized prosthesis, (B) prosthesis inserted inside bone, (C) constrained prosthesis.

Fig. 5 Axial stress (MPa) in the X (A) and Z (B) directions at 90º arm abduction in bone inserted with the customized prosthesis when the muscle forces are maximum.
III. RESULTS & DISCUSSION

The results of the study are quite involved, as three materials were tried: SS316L, Ti6Al4V and Co-Cr-Mo alloy. It was found that the choice of material did not change the stress pattern significantly. The variation of the shear stress at 30º and 60º arm abduction for the different materials is shown in Fig. 4 and Fig. 5. Similarly, normal stress and the von Mises stress failure criterion were calculated for the normal humerus, the humerus with a standard prosthesis inserted, and the newly designed customized implant with the three different materials. The corresponding von Mises stress in the customized prosthesis is depicted in Fig. 6(A) and the von Mises strain of the Co-Cr-Mo alloy in Fig. 6(B).
Fig. 4 (A) Variation of shear stress (XZ) in the standard prosthesis and (B) in the customized prosthesis for the different materials.

Fig. 6 (A) Von Mises stress and (B) von Mises strain.

Table 2 Maximum von Mises stress (MPa) in the customized prosthesis at different angles of arm abduction

Abduction Angle (º)  Co-Cr-Mo   Stainless Steel   Ti6Al4V
30                   64.706     95.34             55.565
60                   131.47     118.07            58.164
90                   169.6      95.34             10.104
120                  245.04     132.65            58.164
150                  61.802     --                --
From all these results we may infer, using a detailed 3D finite element analysis (FEA) of load transfer along the humerus, that the stress along the length of the normal humerus varies from 2.0 MPa to 20 MPa. For the stress-strain analysis of the surgical construct, the prosthesis was inserted in the humerus. The failure-criterion
von Mises stress at the interfacial nodes between the inserted prosthesis and the normal bone was in the range 18.0 MPa to 245.0 MPa for the Co-Cr-Mo prosthesis, between 90.0 MPa and 130.0 MPa for the stainless steel prosthesis, and between 10.0 MPa and 95.0 MPa for the Ti6Al4V prosthesis. In all these cases the stress is less than the corresponding yield stress of the metals and of bone, which ensures the safety of the construct under the applied loading conditions. This prosthesis then only needs to be manufactured using CAD-based techniques. Work is now in progress on parametric model generation directly from the CT data, obtaining the complete dimensions of the medullary cavity, the ball diameter and the stem length.
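The safety check described here, comparing peak von Mises stress against material yield stress, can be illustrated with a short sketch. The yield strengths and the sample stress state below are typical textbook values assumed for illustration, not figures taken from this study.

```python
# Illustrative safety check: compute the von Mises equivalent stress
# from a 3D Cauchy stress state and compare it with the yield strength
# of each candidate implant alloy.  Yield values are assumed typical
# handbook figures, not results from the paper.
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress (MPa) from stress tensor components."""
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

# Assumed yield strengths (MPa) for the three implant alloys
yield_strength = {"316L": 290.0, "Ti6Al4V": 880.0, "Co-Cr-Mo": 450.0}

# Hypothetical peak interfacial stress state (MPa), illustrative only
peak = von_mises(180.0, 60.0, -40.0, 55.0, 20.0, 10.0)
for alloy, sy in yield_strength.items():
    status = "safe" if peak < sy else "exceeds yield"
    print(f"{alloy}: peak sigma_vM = {peak:.1f} MPa vs yield {sy:.0f} MPa -> {status}")
```

Note that a purely hydrostatic stress state gives zero von Mises stress, while a uniaxial state returns the applied stress itself, which makes these two cases convenient checks on the formula.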
ACKNOWLEDGMENT We gratefully acknowledge the Department of Science & Technology, New Delhi, Government of India for funding the project & CNCRI, Kolkata for providing CT-data.
REFERENCES

1. Dines D.M., Warren R.F. (1994) Modular shoulder hemiarthroplasty for acute fractures: surgical considerations. Clin Orthop 307:18-26.
2. Chao E.Y., Morrey B.F. (1978) Three-dimensional rotation of the elbow. J Biomechanics 11:57-73.
3. Fenlin J.M. et al. 1994; Flatow E.L. (1995) Prosthetic design considerations in total shoulder arthroplasty. Semin Arthroplasty 6:233-244.
4. Van der Helm F.C.T. (1994) A finite element musculoskeletal model of the shoulder mechanism. J Biomechanics 27:551-569.
5. Van der Helm F.C.T. (1994a) Analysis of the kinematic and dynamic behaviour of the shoulder mechanism. J Biomechanics 27:527-550.

Author: Prof Subrata Pal
Institute: School of Biosc. & Engg., Jadavpur University
Street: 188, Raja S.C. Mullick Road
City: Kolkata - 700 032
Country: India
Email: [email protected]
An Interface System to Aid the Design of Rapid Prototyping Prosthetic Socket Coated with a Resin Layer for Transtibial Amputee

C.W. Lai1, L.H. Hsu1, G.F. Huang2 and S.H. Liu1
1 Department of Mechanical Engineering, National Cheng Kung University, Taiwan
2 Department of Physical Therapy, Fooyin University, Taiwan
Abstract — To employ rapid prototyping (RP) technology to fabricate prosthetic sockets for transtibial amputees, a CAD stump model must be constructed. Current commercial CAD systems require experienced skills to define objects. To simplify the construction procedure of the stump model so that it can be used easily by a prosthetist, an interface system for the specific purpose of defining the stump model should be developed. The main objective of this study is to improve on the tedious process and quality uncertainty of the conventional method of producing prosthetic sockets. This study developed an interface system that allows a prosthetist to modify the shape of the stump model so that an acceptable pressure distribution over the pressure tolerant (PT) and pressure relief (PR) areas can be achieved. The C++ programming language and the Open Graphics Library (OpenGL) were used to develop the interface system. The point data of the stump surface, obtained by a T-scan system comprising a hand-held 3D laser scanner and a tracking system, are used to build the original shape of the stump. The primary shape of the stump model is modified based on the requirements at the PT and PR areas of the specified amputee. As soon as the contact pressures between stump and socket are verified by finite element analysis, the modified stump model can be used to design a specific CAD socket model for production using an RP machine. The interface system directly uses a file of scanned points to build the stump shape and also provides functions to identify the shapes of the PT/PR areas. The modification requirements, including designation of the regions to be modified and the distances to be indented or raised, can be easily manipulated through appropriate interface dialogs. After an RP socket model has been designed, the socket is manufactured using an FDM machine. To reinforce its strength, the RP socket is coated with a resin layer. The measurement of contact pressures has been implemented and its trial use is under way.
Keywords — CAD, transtibial prosthetic socket, rapid prototyping, C++ programming.
I. INTRODUCTION In the traditional process of fabricating prostheses, there are many factors will effect the quality of fit. A shape of the residual limb is acquired by using plaster bandages. And a plaster positive model of the residual limb is made from plaster bandages. Plaster may then be added to the pressure relief areas which are sensitive and removed from the pres-
sure tolerant areas which are loaded for weight bearing of the model. The modification of the plaster depends on the skill and expertness of the prosthetist. CAD/CAM techniques had been viewed as a solution to reduce errors in the fabrication process [1]. This study employed reverse engineering to replace the plaster method to design and fabricate a prosthetic socket [2]. The prosthetic socket can be produced from a CAD model accurately and quickly by using rapid prototyping (RP) technology [3]. However, current CAD systems are not friendly for a prosthetist to operate. If a custom-made interface system is available and easily operated by any user such as a prosthetist, the prosthetic socket may then be fabricated conveniently. II. METHODS A. Scanning Employing the reverse engineering technology, the fist step is to acquire the point data of the shape by scanning the subject. The point data is the basis for subject construction. Therefore, how to measure the subject is the key to subsequent work. In previous study, the point data of stump was acquired by using the CCD (Charge Coupled Device) laser scanner to scan the shape of gypsum positive model. As a result that the CCD laser scanner can’t revolve around the subject, it was necessary to revolve the subject to complete the scanning mission. After all the gypsum positive model is just a rough copy of the stump. The credibility of the point data needed to improve. With technology developing, the T-scan which is a handheld laser scanner can scan the shape of subject easily and quickly (Fig. 1). Using the T-scan to scan the shape of stump directly improves the credibility of the point data of stump. The amputee just sits on a seat and doesn’t need to change his/her posture. Taking the T-scan along the shape of stump, at the same time, the point data had be obtained will display on the monitor. 
The contrast is easy to perceive between the point data acquired by using a CCD laser scanner to measure the gypsum positive model and the point data acquired by using the T-scan to measure the stump directly.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1704–1707, 2009 www.springerlink.com
An Interface System to Aid the Design of Rapid Prototyping Prosthetic Socket Coated with a Resin Layer for Transtibial Amputee
Fig. 1 Scanning the stump using the T-scan

B. Filtering the point data

Since the point data acquired by the T-scan contain millions of points, it is necessary to preprocess them. Filtering out trivial and unnecessary points benefits the reconstruction of the stump model. To simplify the operation when importing the point data into the interface system and modifying the stump surface, the point data were divided into multiple layers, with every layer containing the same number of points.

C. Constructing the stump surface model

After preprocessing, the point data are imported into the interface system and the stump surface model is constructed (Fig. 2). The surface model consists of many small triangular meshes; changing the positions of points changes the shape of the stump. Using this method, the socket surface is designed by modifying the stump surface.

Fig. 2 Constructing the residual limb

D. The interface system

This study applied the C++ programming language and OpenGL to develop an interface system for designing the surface model of the socket. To achieve construction and modification of the stump surface model, the interface system has the following functions:

1. Point data reading. Point data are represented by spatial coordinates X, Y, Z. By reading the point data, the model can be constructed in the window.
2. Surface modification. The interface controls the modification value and the smoothness of the surface after modification.
3. The model, after construction and modification, can be read and modified by other CAD software.

In operation, the program reads the point data to construct the surface model of the stump, and users then define the shape and location of the modification areas. After the maximum modification displacement is given, the program keeps the modification displacement of each point smaller than this maximum so that the modified surface stays smooth. The modified model can be saved for fabrication (Fig. 3).
IFMBE Proceedings Vol. 23
Fig. 3 Operating procedure of the interface system
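The bounded, smoothed displacement that the interface applies to a modification area can be sketched as follows. This is a minimal illustration in Python rather than the authors' C++/OpenGL implementation; the cosine falloff weighting and the flat-patch geometry are assumptions for the example.

```python
import numpy as np

def modify_region(points, center, radius, normal, max_disp):
    """Offset points near `center` along `normal`, scaling each
    displacement by a smooth cosine falloff so that no point moves
    more than `max_disp` and the modified surface stays smooth."""
    d = np.linalg.norm(points - center, axis=1)
    # Weight is 1 at the region center and falls smoothly to 0 at the rim.
    w = np.where(d < radius, 0.5 * (1.0 + np.cos(np.pi * d / radius)), 0.0)
    return points + (max_disp * w)[:, None] * normal

# Example: push a small patch of a flat 5x5 grid inward (negative
# normal), as for a pressure-tolerant area such as the patellar tendon.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
pts = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
out = modify_region(pts, np.array([0.0, 0.0, 0.0]), 0.8,
                    np.array([0.0, 0.0, -1.0]), 3.0)
```

Points outside the chosen radius are untouched, so the modified patch blends into the surrounding surface without a step.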
C.W. Lai, L.H. Hsu, G.F. Huang and S.H. Liu
III. CASE STUDY

A. Acquiring the point data of a stump

After the patient's residual limb is scanned, the point data of the stump are acquired. Preprocessing of the point data comprises planar sectioning (Fig. 4) and filtering, both provided by CATIA: planar sectioning divides the point data into multiple layers, and filtering gives every layer the same number of points. The data saved by CATIA are then converted into TXT format (Fig. 5), with the number of layers in the first row and the number of points per layer in the second row.
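A reader for the layered TXT format described above can be sketched as below. The exact layout (whitespace-separated values, one X Y Z triple per row after the two header rows) is an assumption for illustration.

```python
def read_layered_points(path):
    """Read a layered point-data file: the first row gives the number
    of layers, the second the number of points per layer, and each
    remaining row one X Y Z triple (layout assumed for illustration)."""
    with open(path) as f:
        tokens = f.read().split()
    n_layers, n_per_layer = int(tokens[0]), int(tokens[1])
    coords = [float(t) for t in tokens[2:]]
    assert len(coords) == 3 * n_layers * n_per_layer
    layers = []
    for i in range(n_layers):
        start = i * n_per_layer * 3
        layer = [tuple(coords[start + 3 * j: start + 3 * j + 3])
                 for j in range(n_per_layer)]
        layers.append(layer)
    return layers
```

Keeping the same number of points in every layer, as the filtering step enforces, is what lets the layers be indexed uniformly here.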
Fig. 6 Modification of the patellar tendon area
B. Modification of the stump shape using the point data

After preprocessing, the point data are imported into the interface system and the model of the patient's stump is reconstructed (Fig. 2). According to the PR (pressure-relief) and PT (pressure-tolerant) areas of the stump, appropriate shapes and locations are chosen to modify the stump surface: the PR areas are offset outward from the stump surface, while the PT areas are offset inward. For example, at the patellar tendon, a PT area, a rectangle is chosen to fit the area and offset inward with an appropriate displacement (Fig. 6). The areas modified in this study include the tibia, fibula, patellar tendon and calf muscle (Fig. 7).
Fig. 7 The modified stump model
Fig. 8 Building the solid model of a prosthetic socket

C. Design of a socket model
Fig. 4 Planar sections of the point data
Fig. 5 File format of the data utilized in this study
The modified stump surface is imported into CATIA to design the socket. Because a patient wears a sock on the stump before putting on the prosthetic socket, there should be a gap between the stump and the socket; this study designated a 4.5 mm gap. The modified stump surface is offset outward by 6 mm to form the outer surface of the prosthetic socket, and the solid model of the socket is obtained by shelling the outer surface with a 1.5 mm wall thickness (Fig. 8). To connect an adapter and a shank, the bottom of the socket is given a specific shape. As soon as the socket model is determined, an STL file is transferred to a rapid prototyping machine to fabricate the prosthetic socket (Fig. 9). This preliminary RP socket is then coated with a resin layer to reinforce its flexural strength.
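The offsets described above reduce to simple vertex displacements along outward surface normals. The sketch below only illustrates the arithmetic (4.5 mm sock gap, 6 mm outer offset, 1.5 mm wall), not the CATIA workflow used in the paper; the single-vertex geometry is hypothetical.

```python
import numpy as np

# Offsets used in the paper: 4.5 mm sock gap, a 6 mm outward offset
# forming the outer surface, and a 1.5 mm wall left after shelling.
SOCK_GAP = 4.5
OUTER_OFFSET = 6.0
WALL = OUTER_OFFSET - SOCK_GAP

def offset_surface(vertices, unit_normals, distance):
    """Move each vertex by `distance` along its outward unit normal."""
    return vertices + distance * unit_normals

# Toy example: one vertex on the stump surface, normal along +x.
v = np.array([[100.0, 0.0, 0.0]])
n = np.array([[1.0, 0.0, 0.0]])
outer = offset_surface(v, n, OUTER_OFFSET)  # outer socket surface
inner = offset_surface(v, n, SOCK_GAP)      # inner socket surface
```

Note that shelling the 6 mm outer surface inward by 1.5 mm leaves the inner surface exactly at the 4.5 mm sock gap, which is why the three values are consistent.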
Fig. 9 Importing the socket model into the slicing software for an RP machine

Fig. 10 A preliminary socket fabricated in an FDM machine

IV. CONCLUSIONS

This study developed a method, together with a prototype interface system, for surface modification during the design of a prosthetic socket. The interface system provided the following features:

1. The interface system demonstrated that a prosthetic socket can be designed using the scanned points of a stump. A preliminary socket is then fabricated by an RP machine using the socket model (Fig. 10). This preliminary RP socket is wrapped with a layer of resin to reinforce its strength (Fig. 11). The resin-reinforced socket has been worn by the amputee for trial use, and the interface pressures between the socket and the stump have been measured.
2. The interface system allows a prosthetist to designate the required modification displacement, from which the corresponding modified shape is acquired. The shape to be modified can be adjusted by five parameters that represent the weighting values of the space; this method is quicker than using control points to modify surfaces.
3. The interface also helps to shorten the training time of a prosthetist and to modify pressure-tolerant and pressure-relief areas more easily.
4. The interface can store the digitized socket model, which can be used to fabricate a new socket at the patient's request.

Since the interface system only provides shape modification, the other functions needed in the socket design process, such as shelling the surface model of a stump to form a solid model and defining the detailed shape at the end of the socket, remain to be developed.
Fig. 11 Coating the preliminary RP socket with a resin layer
ACKNOWLEDGMENT

The authors would like to thank the National Science Council, Taiwan, for its support through grant No. 97-2212E-006-105.
REFERENCES
1. Oberg K, Kofman J, Karisson A, Lindstrom B, Sigblad G (1989) The CAPOD system: a Scandinavian CAD/CAM system for prosthetic sockets. Journal of Prosthetics & Orthotics Vol. 1:139-148
2. Zheng S, Zhao W, Lu B (2005) 3D reconstruction of the structure of a residual limb for customising the design of a prosthetic socket. Medical Engineering & Physics Vol. 27:67-74
3. Rogers B, Bosker GW, Crawford RH, Faustini MC, Neptune RR, Walden G, Gitter AJ (2007) Advanced trans-tibial socket fabrication using selective laser sintering. Prosthetics and Orthotics International Vol. 31:88-100

Author: Lai, Chih-Wei
Institute: Department of Mechanical Engineering, National Cheng Kung University, Taiwan
Street: University Road
City: Tainan
Country: Taiwan
Email: [email protected]
Correlation of Electrical Impedance with Mechanical Properties in Models of Tissue Mimicking Phantoms Kamalanand Krishnamurthy1, B.T.N. Sridhar2, P.M. Rajeshwari3 and Ramakrishnan Swaminathan1 1
Department of Instrumentation Engineering, MIT Campus, Anna University, Chennai, 600044, India 2 Department of Aerospace Engineering, MIT Campus, Anna University, Chennai, 600044, India 3 Marine sensors and Electronics, National Institute of Ocean Technology, Chennai, 600100, India
Abstract — Pathological changes in soft tissues are primarily correlated with changes in their mechanical and electrical properties, which are used to differentiate diseased from normal tissue. Although there is evidence associating electrical and mechanical properties in biological systems, their interrelationships are not well established. In this work, an attempt has been made to correlate the electrical impedance of soft-tissue-mimicking phantoms with their mechanical properties. The electrical properties of polyacrylamide gel phantoms, prepared according to the standard protocol, were studied using a precision impedance analyzer. Tensile tests were conducted using a universal testing machine, and the derived mechanical properties, namely breaking stress, breaking strain, initial modulus and Young's modulus, were correlated with the impedance values. It was observed that, for a given gel concentration, the percentage variation in impedance correlates well with the percentage variation in Young's modulus, although the magnitude of variation in Young's modulus was found to be greater than that of the impedance. Similar correlations were observed for the other mechanical properties. This study appears useful because research on tissue-mimicking phantom gels plays an important role in mechanical studies and in studies of ultrasonic bioeffects. In this paper the objectives of the study, the methodology and the significance of the results are discussed in detail.

Keywords — palpation, mechanical properties, electrical impedance, soft tissue, polyacrylamide gel.
I. INTRODUCTION Physicians and surgeons evaluate the tissue behavior to diagnose medical conditions; any changes in tissue condition are seen as indicators of disease or illness. One of the primary interests of the physician is to examine the tissue stiffness or elastic modulus, a property pertaining to the material’s resistance to deformation. This assessment of tissue stiffness is known as palpation and relies on the qualitative determination of tissue elasticity by the physician [1]. Several diseases are known to alter the mechanical properties of soft tissues. During many abdominal operations palpation is used to assess organs, such as the liver, and it is not uncommon for surgeons to palpate tumors that were undetected previously by classic imaging methods such as CT, MRI, or B-scan
ultrasound, because none of these methods currently provides the type of information elicited by palpation. These findings imply that measuring the mechanical properties of tissues could offer a new method of classifying tissues [2]. Many tissue-mimicking phantoms have been developed, and researchers have emphasized that these phantom gels play important roles in biomedical research. Characterizing the mechanical behavior of small tissue samples and tissue-like phantoms can contribute significantly to better elasticity-imaging algorithms [3]. Polyacrylamide gels are similar to tissues in many respects [4]. There is evidence associating electrical and mechanical parameters in biological systems: the electrical activity of the heart generates an active tension that causes the heart to deform, so the mechanical activity of the heart is heavily dependent on the electrical activity; in turn, the mechanical activity affects the propagation of the electrical activity [5]. The purpose of this study is to correlate the electrical impedance of polyacrylamide-based soft-tissue-mimicking phantoms with their mechanical properties.

II. MATERIALS AND METHODS

A. Description of the phantom

Polyacrylamide-based tissue-mimicking phantoms are chemical gels obtained through chemical reactions [4]. The gels are formed by co-polymerization of acrylamide and bis-acrylamide, with a free-radical-generating system initiating the reaction. Polymerization is initiated by tetramethylethylenediamine and ammonium persulphate. Tetramethylethylenediamine acts as an electron carrier and activates the acrylamide monomer, providing an unpaired electron that converts the monomer to a free radical. The activated monomer reacts with unactivated monomer to begin the polymer chain reaction.
The polymer chains are crosslinked by bis-acrylamide, forming a complex web polymer whose structure depends on the polymerization conditions and monomer concentration. To attain good repeatability in gel formation, the concentration of the initiator, the temperature and the pH must be controlled. Factors such as impurities in the chemicals and water lead to incomplete gel formation. Polyacrylamide gels can be prepared to match the mechanical properties of different real tissues by changing the acrylamide concentration.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1708–1711, 2009 www.springerlink.com
B. Preparation of 20% acrylamide gel

A 20% polyacrylamide gel requires a mixture of 19 g acrylamide (extra pure) and 1 g bis-acrylamide (extra pure) in 100 ml deionised water. Ammonium persulphate and TEMED (N,N,N',N'-tetramethylethylenediamine) were added to initiate and catalyze the polymerization reaction.
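The stated concentration can be checked with a one-line calculation of total monomer mass per 100 ml of solvent; the helper name below is illustrative.

```python
def gel_concentration(acrylamide_g, bis_g, water_ml):
    """Total monomer concentration (% w/v) of a polyacrylamide gel:
    grams of acrylamide plus bis-acrylamide per 100 ml of solvent."""
    return 100.0 * (acrylamide_g + bis_g) / water_ml

# The recipe above: 19 g acrylamide + 1 g bis-acrylamide in 100 ml water.
conc = gel_concentration(19.0, 1.0, 100.0)  # 20% gel
```

Varying the acrylamide mass in the same way yields the other concentrations studied in the paper.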
C. Mechanical testing

The mechanical properties of the phantoms were measured using an Instron model 3367 dual-column testing system equipped with a 10 kg load cell and interfaced to a desktop PC. The phantom samples were cut, using a fly-type pillar press, into flat dumbbell-shaped specimens of gauge length 60 mm, width 6.43 mm and thickness 5.04 mm. Tensile tests were performed on the phantoms with the crosshead speed set to 0.25 mm/min. From the destructive mechanical tests performed on phantoms of various concentrations, the mechanical properties breaking stress, breaking strain, initial modulus and Young's modulus were determined at a temperature of 25 °C.
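The derived quantities can be illustrated with a short routine that converts a force-extension record into engineering stress, strain and a fitted modulus, using the specimen dimensions above. The force values in the example are hypothetical, and fitting the modulus over the whole record is only sensible for a linear toy trace.

```python
def tensile_properties(force_N, ext_mm, width_mm=6.43, thick_mm=5.04,
                       gauge_mm=60.0):
    """Engineering stress (MPa), strain, and a fitted modulus (MPa)
    from a force-extension record; default dimensions are the paper's
    dumbbell specimen geometry."""
    area = width_mm * thick_mm               # cross-section in mm^2
    stress = [f / area for f in force_N]     # N/mm^2 == MPa
    strain = [e / gauge_mm for e in ext_mm]
    # Least-squares slope through the origin: E = sum(s*e) / sum(e*e)
    num = sum(s * e for s, e in zip(stress, strain))
    den = sum(e * e for e in strain)
    modulus = num / den
    return stress[-1], strain[-1], modulus   # breaking stress/strain, E

# Hypothetical linear trace whose stress is exactly 100 x strain,
# i.e. a modulus of 100 MPa.
props = tensile_properties([32.4072, 64.8144, 97.2216], [0.6, 1.2, 1.8])
```

In practice the initial modulus would be fitted only over the first part of the curve, and the breaking values taken at the failure point of the record.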
Fig. 1 Impedance and breaking stress vs. gel concentration
D. Electrical testing

The electrical and dielectric parameters of the tissue-mimicking phantoms, namely resistance, capacitance, impedance and phase angle, were measured over a frequency range of 100 kHz to 110 MHz using a 4294A Precision Impedance Analyzer equipped with a two-electrode system comprising copper electrodes. The phantoms were cut into structures of dimensions 60 mm x 6.43 mm x 5.04 mm. The electrical and dielectric properties of the polyacrylamide phantoms were measured for concentrations ranging from 20 to 40 percent. The dimensions of the samples were kept the same for all concentrations, and the investigations were conducted at a temperature of 25 °C. In this study, the entire analysis was made using the electrical impedance measured at a frequency of 100 kHz.

III. RESULTS

In Fig. 1, the percentage variations in electrical impedance and breaking stress are shown as functions of gel concentration ranging from 20 to 50 percent. The percentage variation in breaking stress increases linearly with gel concentration, whereas the percentage variation in impedance increases exponentially over the same concentration range. The correlation coefficients were calculated using the Pearson correlation method; the breaking stress correlates with the electrical impedance with an R value of 0.88.

Fig. 2 Impedance and breaking strain vs. gel concentration

The percentage variations in electrical impedance and breaking strain are shown in Fig. 2 as functions of gel concentration ranging from 20 to 50 percent. The percentage variation in breaking strain increases nonlinearly with the gel concentration, and the breaking strain correlates with the electrical impedance with a correlation value of 0.64. In Fig. 3, the percentage variations in electrical impedance and initial modulus are shown for the varied gel concentrations: the percentage variation in initial modulus increases linearly with gel concentration, and the initial modulus correlates with the electrical impedance with an R value of 0.89. The percentage variations in electrical impedance and Young's modulus are shown in Fig. 4 as functions of gel concentration ranging from 20 to 50 percent. The percentage variation in Young's modulus increases linearly with gel concentration. In all the above cases, the percentage variation in impedance increases exponentially. A correlation value of 0.88 was found between the Young's modulus and the electrical impedance.
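The Pearson coefficient used above can be reproduced in a few lines. The sample percentage variations below are illustrative values shaped like the reported trends (roughly exponential impedance, roughly linear modulus), not the paper's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series,
    as used to relate % variation in impedance to each mechanical
    property."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (made-up) percentage variations at five concentrations:
impedance = [5.0, 12.0, 25.0, 48.0, 90.0]   # roughly exponential
young     = [10.0, 30.0, 50.0, 70.0, 90.0]  # roughly linear
r = pearson_r(impedance, young)
```

An exponential series can still correlate strongly with a linear one over a short range, which is consistent with the high R values reported despite the different growth shapes.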
Fig. 4 Impedance and Young's modulus vs. gel concentration

IV. CONCLUSIONS
Intact electrical and mechanical behavior of tissues is essential for proper physiological function. High degrees of correlation between electrical and mechanical properties have been shown in several cases, such as in cardiac tissue; however, very few results are available on the correlation between the electrical and mechanical behavior of soft tissues. In this work, an attempt has been made to correlate the electrical properties of tissue-mimicking phantoms with their mechanical properties. The phantoms were prepared such that their mechanical properties fall within the physiological range of tissues such as liver. The impedance measured on the phantoms was correlated with the breaking stress, breaking strain, initial modulus and Young's modulus of the polyacrylamide gels. The results demonstrate that changes in electrical impedance correlate well with Young's modulus, initial modulus and breaking stress. Hence these studies appear clinically useful, as the gels are found to be similar to real tissues in many respects.
Fig. 3 Impedance and initial modulus vs. gel concentration
REFERENCES
1. Parag R. Dhar, Jean W. Zu (2007) Design of a resonator device for in vivo measurement of regional tissue viscoelasticity. Sensors and Actuators A 133:45-54
2. Mostafa Fatemi, Armando Manduca, James F. Greenleaf (2003) Imaging elastic properties of biological tissues by low-frequency harmonic vibration. Proceedings of the IEEE Vol. 91, No. 10
3. Ramon Q. Erkamp, Andrei R. Skovoroda, Stanislav Y. Emelianov et al. (2004) Measuring the nonlinear elastic properties of tissue-like phantoms. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control Vol. 51
4. Ken-ichi Kawabata, Yasuharu Waki, Tsuyoshi Matsumura et al. (2004) Tissue mimicking phantom for ultrasonic elastography with finely adjustable elastic and echographic properties.
5. Jonathan P. Whiteley, Martin J. Bishop, David J. Gavaghan (2007) Soft tissue modelling of cardiac fibres for use in coupled mechano-electric simulations. Bulletin of Mathematical Biology 69:2199-2225

Author: Dr. S. Ramakrishnan
Institute: Madras Institute of Technology, Anna University
Street: Chromepet
City: Chennai
Country: India
Email: [email protected]
Biomechanical Analysis of Influence of Spinal Fixation on Intervertebral Joint Force by Using Musculoskeletal Model H. Fukui1, J. Sakamoto1, H. Murakami2, N. Kawahara2, J. Oda1, K. Tomita2 and H. Higaki3 1
Graduate School of Natural Science and Technology, Kanazawa University, Kanazawa, Japan 2 Kanazawa University Hospital, Kanazawa, Japan 3 Faculty of Engineering, Kyusyu Sangyo University, Fukuoka, Japan
Abstract — Evaluation of the intervertebral joint force in vivo is very important for clinical spine problems such as spinal instability, intervertebral disc disorders, and compressive fractures of osteoporotic vertebrae. Direct measurement of the joint force is difficult to obtain approval for, so a computational method to calculate the force is desirable. The spine is constructed mainly of many vertebrae and flexible intervertebral disks, and it is unstable by itself in the standing condition; support from the erector muscles and ligaments keeps the spine upright, so the loading condition of each vertebra depends on the muscle and ligament forces. Biomechanical analysis of the musculoskeletal system, taking a large number of muscle and ligament forces into account, is therefore necessary to determine the intervertebral joint force. Commercial software for modeling the musculoskeletal system is now available and has been applied to clinical, ergonomic and sports biomechanics problems. Spinal fixation surgery is frequently performed for instability or disorders of the intervertebral joints. Although the stability of the joints is recovered and the nervous symptoms related to the disordered joints are relieved, additional trouble at the joints adjoining the fixation is a concern, because the excessive rigidity and the posture change caused by the fixation increase the load on the adjoining joints. To prevent such trouble in advance, the change in adjoining joint force due to the fixation must be evaluated. In this study, the intervertebral joint forces and muscle forces were analyzed with and without spinal fixation using a full-body musculoskeletal model developed in the AnyBody Modeling System (AnyBody Technology). The joint and muscle forces were evaluated under various postures of flexion and extension, and the influence of the spinal fixation on the intervertebral joint forces at the joints adjoining the fixation was discussed.
Keywords — Biomechanics, Musculoskeletal Model, Spine, Spinal Fixation, Adjoining Vertebra
I. INTRODUCTION

Compression of the spinal cord due to spinal instability or intervertebral disk disorders causes pain and palsy. To relieve pressure on the nerve and reduce symptoms, surgery on the vertebral bone and disk is carried out. In such cases, surgery that fixes unstable vertebrae or intervertebral joints with rods and screws is frequently performed; this is spinal fixation.
Although the stability of the fixed joints is recovered and the nervous symptoms related to the disordered joints are relieved, additional trouble occurs at unfixed joints after surgery. Deformations of the vertebral bone and intervertebral disk are frequently caused, especially at the joints adjoining the fixation, and additional fixation surgery is then required. The trouble is a concern because the excessive rigidity and posture change caused by the fixation increase the load on the adjoining joints. To prevent such trouble in advance, the change in adjoining joint force due to the fixation must be evaluated. However, direct measurement of the joint force is difficult to obtain approval for, and computing the joint force is difficult because the loading condition of each vertebra changes with a large number of muscle and ligament forces that depend on the posture at the time. With the recent development of computational mechanics techniques, more realistic biomechanical analyses have become possible, and commercial software for modeling the musculoskeletal system is available, including evaluation of the intervertebral joint forces using a full-body musculoskeletal model. In this study, to evaluate the influence of spinal fixation on vertebral loading conditions, we analyzed the intervertebral joint forces and muscle forces in the cases with spinal fixation using a full-body musculoskeletal model. The joint and trunk muscle forces were calculated under various postures of flexion and extension, and the influence of the spinal fixation on the intervertebral joint forces was discussed.

II. ANALYSIS METHOD

A. The AnyBody Modeling System

The musculoskeletal model was developed using the simulation software the AnyBody Modeling System (AnyBody Technology Inc.). This software handles models of human body mechanisms, and its application areas include medical, rehabilitation, ergonomic, sports and astronautical engineering. By using the AnyBody Modeling System, we can develop a 3-D musculoskeletal model in a script language called AnyScript. A model consists of segments regarded as bones or rigid elements, joints connecting the segments, and muscle-tendon units having physiological characteristics. Driver modules in AnyScript create the posture and movement of the joints. The AnyBody Modeling System calculates the joint and muscle forces from a given posture and external load by inverse dynamic analysis.

Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1712–1715, 2009 www.springerlink.com
B. Analysis model

The analysis model was developed based on the Standing Model (the AnyBody Research Project, Aalborg University, Aalborg, Denmark, http://www.anybody.aau.dk/repository), a full-body human musculoskeletal model (Fig. 1).

Fig. 1 Analysis model: (a) 30 degrees extension posture, (b) upright posture, and (c) 45 degrees flexion posture

The total number of muscle fascicles defined in this musculoskeletal model is 476. In the trunk, 158 fascicles were defined and divided into eight muscles (Fig. 2). Four abdominal muscles were included: rectus abdominus, transversus, obliquus internus and obliquus externus. Quadratus lumborum, psoas major, erector spinae and multifidi were defined as muscles of the lower back [1].

Fig. 2 Trunk of the model in (a) front view and (b) back view. Eight muscles are defined as trunk muscles.

The spine model is composed of nine segments: the sacrum, the five lumbar vertebrae, T12, T11 and a lumped thoracic part above T10. A spherical joint with three degrees of freedom was set between every vertebra from T10 to the sacrum (Fig. 3). These intervertebral joint angles were varied according to the spinal alignment and posture. The height of the model is 1.72 m and the weight 63 kg, assuming a standard Japanese male.

Fig. 3 The dots show the location of the rotational center of the intervertebral joints between the vertebrae

C. Spinal fixation model

Spinal compression fractures commonly occur in the thoracolumbar vertebrae, while disk herniation and spondylolisthesis frequently occur in the lumbar spine. Two spinal fixation cases are therefore assumed: one in which the thoracolumbar vertebrae are fixed and one in which the lumbar vertebrae are fixed. T12, L1 and L2 are fixed in the thoracolumbar fixation model; L3, L4 and L5 are fixed in the lumbar fixation model. Intervertebral joints between fixed vertebrae have no degrees of freedom, so the fixed vertebrae move together as a rigid element when the posture of the model is changed.

D. Analysis condition

The upper-body angle of the model was changed from 45 degrees flexion to 30 degrees extension in 15-degree intervals. The posture of the model was set by changing the intervertebral and hip joint angles. The joint angles of each posture in the no-fixation model were defined using radiograph images of the lumbar spine taken from a young male [2]. In the spinal fixation models, larger joint angles were given to the unfixed joints to compensate for the lack of movement of the fixed joints; the compensating joint angle was distributed equally among the unfixed joints. We calculated the intervertebral joint forces and trunk muscle forces in the cases
with and without spinal fixation under each posture of flexion and extension, and discussed the influence of the spinal fixation on the intervertebral joint forces at the joints adjoining the fixation. The muscle recruitment problem is solved by minimizing the maximal muscle activity under the constraints of the static equilibrium equations. Muscle activity means the muscle force divided by the maximal isometric muscle force at the muscle's current working condition. This solution is equivalent to minimum muscle fatigue and is computationally efficient [3].
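The min/max recruitment criterion can be illustrated for a single joint driven by agonist muscles only, where minimizing the maximal activity has a closed-form solution (all activities become equal at the optimum). The moment arms and maximal forces below are hypothetical; the general multi-joint case can be posed as a linear program, which is what musculoskeletal solvers handle internally.

```python
def recruit_single_joint(moment_Nm, moment_arms_m, max_forces_N):
    """Min/max muscle recruitment for one joint with agonist muscles
    only: minimizing the maximal activity makes all activities equal,
    a = M / sum(r_i * F_i), so each force is f_i = a * F_i. (This
    closed form covers only the one-DOF illustration; activities are
    assumed to stay within 0..1.)"""
    denom = sum(r * F for r, F in zip(moment_arms_m, max_forces_N))
    activity = moment_Nm / denom
    forces = [activity * F for F in max_forces_N]
    return activity, forces

# Hypothetical numbers: two extensors balancing a 50 Nm flexion moment.
a, f = recruit_single_joint(50.0, [0.05, 0.04], [1000.0, 800.0])
```

Sharing the load so that both muscles run at the same fraction of their maximal force is exactly the minimum-fatigue property mentioned in the text.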
III. ANALYSIS RESULTS

A. Influence on trunk muscles

The influence of spinal fixation is larger on the back muscles than on the abdominal muscles. The muscle forces of the back are shown in Fig. 4; the graphs compare the cases of thoracolumbar vertebrae fixation, lumbar vertebrae fixation and no fixation. The results for the no-fixation model show that the quadratus lumborum and psoas major work mostly in extension postures, while the erector spinae works mainly in flexion postures and the multifidi produce force in deep flexion. At 30 and 45 degrees of flexion, the multifidi muscle force of the thoracolumbar fixation model is larger than with no fixation, whereas none of the back muscle forces of the lumbar fixation model exceed those with no fixation; the influence on the muscles maintaining a flexion posture is therefore larger with thoracolumbar fixation than with lumbar fixation. At 30 degrees of extension, the quadratus lumborum and multifidi forces become significantly larger with lumbar fixation: lumbar fixation increases the quadratus lumborum force by about threefold and the multifidi force by more than fourfold when the upper body is extended, while every muscle of the thoracolumbar fixation model shows little change. The influence on the muscles maintaining an extension posture is therefore larger with lumbar fixation than with thoracolumbar fixation. At 15 degrees of extension, the psoas major force is larger than in other postures, because this posture is unstable: the hip joint angle is large in the extension direction while the spine is bent in the flexion direction. The multifidi are most affected by spinal fixation because they lie in the deep muscle layer and attach between the lumbar vertebrae; it appears that spinal fixation imposes excessive force on the muscles near the vertebrae.
B. Influence on the intervertebral joints

The intervertebral joint forces at 45 degrees of flexion are shown in Fig. 5. The graph shows the joint forces in the vertebral axial direction for the cases of thoracolumbar vertebrae fixation, lumbar vertebrae fixation and no fixation. The L5-sacrum (L5-S) and L4-L5 joint forces of the thoracolumbar fixation model are larger than in the no-fixation model, so the loading of the sacrum and lower lumbar vertebrae is increased. This is caused by the change in the moment arm from the lower vertebrae to the
Fig. 4 Muscle forces of the back (quadratus lumborum, psoas major, erector spinae and multifidi) for the no-fixation, thoracolumbar vertebrae fixation and lumbar vertebrae fixation cases, plotted against flexion/extension angle (-30 to 45 deg). Flexion angles are written as positive values and extension angles as negative values.
_________________________________________
IFMBE Proceedings Vol. 23
___________________________________________
Biomechanical Analysis of Influence of Spinal Fixation on Intervertebral Joint Force by Using Musculoskeletal Model
center-of-gravity of the upper body. In deep flexion, the joint force of the thoracolumbar-fixed model is increased. Consequently, the influence of spinal fixation on the intervertebral joints appears larger at the lower vertebral joints when the upper body is flexed deeply.

Intervertebral joint forces at 30-degree extension are shown in Fig. 6. The joint forces at L5-S and L2-L3 of the lumbar-fixed model are considerably larger than in the no-fixation case, and the force at the L2-L3 joint of the thoracolumbar-fixed model is greatly increased. These joints adjoin the fixed vertebrae. The influence on the adjoining intervertebral joints at 45-degree flexion and 30-degree extension is shown in Table 1. In the thoracolumbar-fixed model, the upper side of the fixation is the T11-T12 joint and the bottom side is the L2-L3 joint. In the lumbar-fixed model, the upper and bottom sides of the fixation are the L2-L3 and L5-S joints. Although the forces of the adjoining joints change little in the flexion posture, in the extension posture they increase by about 20% and 30%. The loading of the adjoining vertebrae and discs may therefore be greatly increased, and disorders may develop, when a patient with spinal fixation takes an extension posture.
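As a back-of-the-envelope illustration of the moment-arm effect described above, a planar static equilibrium about a single joint shows how moving the upper-body centre of gravity further from the joint, relative to the extensor moment arm, inflates both the extensor force and the axial joint load. This is only a sketch with hypothetical numbers, not the paper's AnyBody inverse-dynamics model:

```python
def axial_joint_force(upper_body_weight_n, cog_moment_arm_m, extensor_moment_arm_m):
    """Planar static equilibrium about a lumbar joint.

    The extensor muscle must balance the flexion moment of the upper-body
    weight; its force adds to the axial (compressive) joint load.
    """
    f_muscle = upper_body_weight_n * cog_moment_arm_m / extensor_moment_arm_m
    f_joint = upper_body_weight_n + f_muscle  # both compress the joint
    return f_muscle, f_joint

# Illustrative numbers only: 400 N upper-body weight, centre of gravity
# 0.25 m anterior to the joint, extensor moment arm 0.05 m.
f_m, f_j = axial_joint_force(400.0, 0.25, 0.05)
```

With these hypothetical values the 5:1 moment-arm ratio multiplies the 400 N weight into a 2000 N muscle force and a 2400 N axial joint load, which is why a longer lever arm to the centre of gravity raises the lower joint forces.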
Fig. 5 Joint forces in vertebral axial direction in flexion 45-degree (L5-S through T10-T11), for the no-fixation, thoracolumbar vertebrae fixation and lumbar vertebrae fixation cases. Comparison of with and without fixation.
Table 1 Influence on intervertebral joints adjacent to the fixed vertebrae (changes of joint force)

  Fixed part      Flexion of 45-degree             Extension of 30-degree
                  Upper side      Bottom side      Upper side      Bottom side
  Thoracolumbar   +30N (+5.66%)   +20N (+1.96%)    +58N (+19.0%)   +167N (+20.0%)
  Lumbar          -44N (-4.31%)   -260N (-18.2%)   +237N (+28.5%)  +430N (+32.6%)
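The entries of Table 1 are simply absolute and relative changes of adjacent-joint force between the fixed and non-fixed models. A minimal sketch of that bookkeeping (the 835 N baseline is hypothetical, chosen only so the output matches the "+167 N (+20.0%)" style of entry):

```python
def joint_force_change(f_fixed_n, f_nofix_n):
    """Absolute (N) and relative (%) change of an adjacent-joint force
    between the fixed-spine and no-fixation models."""
    delta = f_fixed_n - f_nofix_n
    return delta, 100.0 * delta / f_nofix_n

# e.g. a bottom-side joint force rising from a hypothetical 835 N
# baseline to 1002 N yields a +167 N (+20.0%) table entry.
delta, pct = joint_force_change(1002.0, 835.0)
```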
IV. CONCLUSIONS

In this study, we analyzed the influence of spinal fixation on intervertebral joint forces using a full-body musculoskeletal model developed with the AnyBody Modeling System. The following conclusions were obtained. (1) Spinal fixation produced excessive forces in the muscles near, and attached to, the lumbar vertebrae. (2) Which vertebrae and muscles bear large forces depends on the fixation level and the patient's posture. (3) In deep flexion, the loading of the lower lumbar vertebrae increased when the thoracolumbar vertebrae were fixed; patients with thoracolumbar fixation are therefore advised to avoid deep flexion postures. (4) In extension postures, the loading of the vertebrae and discs adjoining the fixation increased greatly, which may lead to additional disorders; patients with spinal fixation are therefore advised to avoid excessive extension postures. Although only spinal fixation was considered in this paper, biomechanical analysis of the musculoskeletal system can be applied to many orthopedic problems and is expected to contribute significantly to orthopedic therapies.
ACKNOWLEDGMENT

I would like to thank Yasushi Nakada, Graduate School of Natural Science and Technology, Kanazawa University, for his invaluable support.

Fig. 6 Joint forces in vertebral axial direction in extension 30-degree (L5-S through T10-T11). Comparison of with and without fixation.

REFERENCES
1. Mark de Zee, et al. (2007) A generic detailed rigid-body lumbar spine model. Journal of Biomechanics, Vol. 40, Issue 6, 1219-1227.
2. Jiro Sakamoto, et al. (2005) Musculoskeletal analysis of spine and its application for spinal fixation by instruments. Proc. 2005 Int. Symp. Comp. Simulation Biomech., Cleveland, 19-20.
3. Michael Damsgaard, John Rasmussen, et al. (2006) Analysis of musculoskeletal systems in the AnyBody Modeling System. Simulation Modelling Practice and Theory, Vol. 14, Issue 8, 1100-1111.
Preventing Anterior Cruciate Ligament Failure During Impact Compression by Restraining Anterior Tibial Translation or Axial Tibial Rotation

C.H. Yeow1, R.S. Khan1, Peter V.S. Lee1,3,4, James C.H. Goh1,2

1 Division of Bioengineering, National University of Singapore, Singapore
2 Department of Orthopaedic Surgery, National University of Singapore, Singapore
3 Biomechanics Lab, Defence Medical and Environmental Research Institute, Singapore
4 Department of Mechanical Engineering, University of Melbourne, Australia
Abstract — Anterior cruciate ligament injury is highly prevalent in activities that involve large and rapid landing impact loads. We hypothesize that restraining anterior tibial translation or axial tibial rotation can prevent the anterior cruciate ligament from failing at the range of peak compressive load that induces ligament failure when both factors are unrestrained. Sixteen porcine knee specimens were mounted onto a material-testing system at 70-deg flexion. A single 10-Hz haversine impact compression was successively repeated with incremental actuator displacement until ligament failure or visible bone fracture was noted. During impact compression, rotational and translational data of the knee joint were obtained with a motion-capture system via markers attached to the setup. Specimens were randomly classified into four test groups: Q (unrestrained setup), A (anterior tibial translation restraint), R (axial tibial rotation restraint) and C (combination of both restraints). The same impact protocol was applied to all specimens. Q specimens incurred anterior cruciate ligament failure in the form of femoral avulsion; the peak compressive forces during failure ranged from 1.4-4.0 kN. A, R and C specimens underwent visible bone fracture with the ligament intact; the peak compressive forces during fracture ranged from 2.2-6.9 kN. The posterior femoral displacement and axial tibial rotation for A and R specimens respectively were substantially lower relative to Q specimens. Both factors were significantly diminished in C specimens, but the peak compressive force was larger compared to Q specimens. Significant restraining of these factors was able to prevent anterior cruciate ligament failure in an impact setup that induces ligament failure with the factors unrestrained.
Keywords — Injury prevention, anterior cruciate ligament failure, compressive impact, tibial translation, axial rotation

I. INTRODUCTION

Anterior cruciate ligament (ACL) ruptures have been estimated at 95,000 cases annually in the United States, with associated treatment costs of almost one billion dollars [1]. ACL injury is highly prevalent in sports involving large and rapid landing impact loads, such as basketball and skiing. The ACL serves as the primary and a secondary restraint to anterior tibial translation and axial tibial rotation respectively [2]. Hence, excessive anterior tibial translation and axial tibial rotation are potential risk factors in the ACL injury mechanism. ACL knee bracing aims to prevent aggravated anterior tibial translation and axial tibial rotation in order to relieve ACL strain or stabilize an ACL-deficient knee. Several studies have performed functional tests to assess the effectiveness of ACL bracing in response to external loading. Beynnon et al. [3] observed that bracing protected the ligament by substantially mitigating the ACL strain in response to anterior-directed loading and internal-external torque of the tibia. However, Ramsey et al. [4] noted that there were no consistent attenuations in anterior tibial translation, and that bracing the ACL-deficient knee resulted in insignificant kinematic differences in tibiofemoral joint motion. It therefore remains controversial whether these braces are effective in serving their intended functions: there is no clear evidence that they can reduce anterior tibial translation and axial tibial rotation, and relieve ACL loading, during injurious landing. Currently, there is also no direct evidence that restraining anterior tibial translation or axial tibial rotation prevents ACL failure. This study aimed to show that, in a previous experimental setup in which ACL failure can be induced via impact compression, introducing restraint fixtures for anterior tibial translation or axial tibial rotation averts ACL failure. We hypothesize that restraining anterior tibial translation or axial tibial rotation can prevent the ACL from failing at the range of peak compressive load that induces ligament failure when both factors are unrestrained.

II. METHODS

A. Specimen Preparation

Sixteen porcine hind legs (pig age: ~2 months; weight: ~40 kg) were obtained from a local abattoir (Primary Industries, Singapore). Both tibia/fibula and femur were sectioned 15 cm from the knee joint centre along the anatomical axes, with the surrounding soft tissues kept intact. The sectioned ends were then centrally potted in dental cement (Baseliquid & powder, Dentsply, China), secured to steel potting
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1716–1719, 2009 www.springerlink.com
cups using screws, and then mounted onto the Material Testing System (810-MTS, MTS Systems Corporation, USA).

B. Experimental setup

The setup, adapted from a previous study [5], constrained the mounted specimen to 70-deg flexion to simulate a landing posture. The tibia/fibula was limited to axial z-displacement and rotation, while the femur was restricted to transverse x- and y-displacements (Fig. 1). We attached passive markers to the tibial potting cup and femoral stage, and used a motion-capture system (ProReflex-MCU1000, Qualisys Motion-Capture Systems, Sweden) to obtain the marker trajectories necessary for determining the rotational and translational motion of the tibia and femur respectively. A slight posterior preload was applied using a 5-kg weight through a pulley system.

Fig. 1 Experimental setup with both restraints

C. Compression testing

The specimens were randomly classified into four test groups: Q (unrestrained setup), A (anterior tibial translation restraint), R (axial tibial rotation restraint) and C (both restraints, Fig. 1). All specimens were subjected to the same impact protocol. The mounted specimens were adjusted via the MTS to eliminate tensile/compressive preloading. Impact compression was performed under displacement control as a single haversine at 10-Hz frequency [5] to simulate a landing impact. The compression trial was successively repeated with incremental actuator displacement of 1 mm; after each trial, the specimens were returned to the initial position before the subsequent trial. The compressive force response was measured by a triaxial load-cell (9347B, Kistler, Switzerland). A significant drop (>70%) in the compressive force response (Fdrop) was taken as a major ACL failure; Fdrop was estimated as the difference between the peak compressive force (Fpeak) during impact compression and the mean compressive force during the post-compression time period 300-500 ms.
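The displacement command described above, a single haversine pulse at 10 Hz, and the Fdrop criterion can be sketched as follows. `haversine_pulse` and `f_drop` are hypothetical helper names for illustration, not MTS controller functions:

```python
import math

def haversine_pulse(amplitude_mm, freq_hz, t_s):
    """Single haversine displacement command: rises from 0 to the
    amplitude and back to 0 over one period (0.1 s at 10 Hz)."""
    period = 1.0 / freq_hz
    if not 0.0 <= t_s <= period:
        return 0.0
    return amplitude_mm * math.sin(math.pi * t_s / period) ** 2

def f_drop(f_peak_n, post_forces_n):
    """Fdrop: peak compressive force minus the mean post-compression
    force (the study uses the 300-500 ms window)."""
    return f_peak_n - sum(post_forces_n) / len(post_forces_n)

d_mid = haversine_pulse(5.0, 10.0, 0.05)          # pulse peak at mid-period
drop = f_drop(3000.0, [800.0, 700.0, 900.0])      # illustrative forces
```

A drop exceeding 70% of Fpeak would then flag a major ACL failure under the paper's criterion.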
The impact tests ended when either a significant Fdrop was observed (presence of major ACL failure) or a visible bone fracture was present (absence of major ACL failure). Presence or absence of ACL failure was confirmed via dissection. Posterior femoral displacement and axial tibial rotation angle at Fpeak were obtained from the tibial and femoral marker trajectories.

D. Statistical analysis

One-way ANOVA was performed between test groups to compare Fpeak, Fdrop, posterior femoral displacement and axial tibial rotation. All significance levels were set at p=0.05.
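The one-way ANOVA used here reduces to comparing the between-group and within-group mean squares. A self-contained sketch of the F statistic (the resulting F would still have to be compared against the F distribution at the chosen p=0.05 level, which is omitted here):

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square, with the two degrees of freedom."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Illustrative groups (not the study's measurements):
f_stat, df_b, df_w = one_way_anova_f([1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0])
```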
III. RESULTS

All Q specimens underwent ACL failure via femoral avulsion, with a mean Fpeak of 3.0[1.1] kN. The corresponding posterior femoral displacements and axial tibial rotation angles at Fpeak were 18.7[7.3] mm and 9.2[5.9] deg respectively. All A, R and C specimens developed visible bone fracture with the ACL intact. The mean Fpeak values obtained during fracture were 4.9[1.9], 3.9[0.7] and 5.5[1.2] kN for A, R and C respectively. The corresponding posterior femoral displacements were 1.5[0.9], 15.8[2.6] and 0.8[0.4] mm respectively, while the axial tibial rotation angles were 16.1[8.7], 0.9[0.8] and 0.7[0.3] deg (Table 1). C specimens had a significantly higher Fpeak (p

75y models to 0.35mm and 0.2mm to represent cortical thinning in late stage osteoporosis. Loads were applied to simulate in vitro biomechanical testing, compressing the vertebra by 20% of its height. Predicted vertebral stiffness and strength reduced with progressive age changes in microarchitecture, demonstrating a 44% reduction in stiffness and a 43% reduction in strength between the age <50 and age >75 models. Reducing cortical thickness in the age >75 models demonstrated a substantial reduction in stiffness and strength, resulting in a 48% reduction in stiffness and a 62% reduction in strength between the 0.5mm and 0.2mm cortical thickness models. Cortical thinning in late stage osteoporosis may therefore play an even greater role in reducing vertebral stiffness and strength than earlier reductions due to trabecular thinning.

Keywords — Vertebra mechanics, FE modeling, trabecular architecture, osteoporosis, cortical shell
I. INTRODUCTION

Osteoporosis is a disease which affects more than 75 million people in Europe, Japan and the USA, and is the cause of more than 2.3 million fractures annually in Europe and America alone [1]. Osteoporosis is characterized by low bone density and micro-architectural deterioration of bone tissue, resulting in increased susceptibility to fracture [2]. The micro-architectural deterioration includes thinning of the trabeculae, increased spacing between trabeculae and, in the later stages, thinning of the cortical shell. These changes transform the structure from dense and plate-like to sparse and rod-like. It is believed there is an associated change in the failure mechanism, from plastic collapse of the trabeculae to inelastic, or possibly even elastic, buckling of the trabeculae. Owing to the experimental difficulty of investigating this complex structure, the effect of bone loss on trabecular failure mechanisms and whole vertebra mechanics is still poorly understood.

To overcome these experimental difficulties, previous studies have employed computational modeling, using two distinct approaches. In the macro-scale approach, a solid, whole vertebra is simulated and the trabecular bone is approximated as a continuum; the material properties of the vertebral core are varied to represent changes in trabecular structure, and the effect of these changes on whole vertebra mechanics is observed. In micro-scale approaches, an isolated section of the trabecular structure is simulated using either continuum or beam elements. Continuum elements are computationally expensive, so only small regions of bone can be modeled; conversely, computationally efficient beam elements have been used to simulate larger regions of trabecular structure, and the results of these micro-scale models are then extrapolated to a whole vertebra.

This study aimed to create a multi-scale finite element model of an L3 human vertebral body. The architectural changes observed in osteoporosis (trabecular thinning, increased trabecular spacing and cortical shell thinning) were modeled, and the effect of these changes on whole vertebra mechanics and trabecular mechanics explored.

II. METHODS

A finite element model of an L3 human vertebra was analysed under compression to assess the relative effects of cortical and trabecular microstructure on vertebral mechanics. The trabecular structure was modeled using three-dimensional beam elements and the vertebral cortex was simulated with shell elements. Model development involved
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1757–1760, 2009 www.springerlink.com
K. McDonald, P. Little, M. Pearcy, C. Adam
ascertaining the efficacy of the beam elements in simulating trabecular biomechanics, validation of the beam element trabecular lattice, and simulation of an intact vertebra.

A. Modeling the trabecular core

Three-dimensional beam elements were used to represent individual struts within the trabecular lattice. As buckling is an important failure mode of longitudinally oriented trabeculae [3], it was important that the model accurately predicted buckling behavior. Therefore, analyses were performed on various beam element configurations to determine an appropriate trabeculum model. The effects of element type, initial beam curvature, mesh density and solution time increment were investigated. Due to the paucity of data on trabeculum failure mechanisms at the microstructural level (specifically buckling), it was necessary to use a buckling mechanics study based on another material to verify the trabeculum model. Investigations performed by Rahman [4] gave critical buckling loads determined experimentally for solid columns of stainless steel (SUS304). This work covered a comprehensive range of slenderness ratios (14-184) and hence failure modes, from purely elastic to purely plastic. The material properties and slenderness ratios of the trabeculum model were altered to represent the steel columns, and Rahman's experimental results were used to verify the predictions of the model. From these analyses it was concluded that two quadratic beam elements, with a slight initial curvature, were able to predict failure (be it elastic buckling, inelastic buckling, or plastic collapse) with a mean error of 20% over the whole range of slenderness ratios. For the trabeculum model, an initial offset of 0.001mm at the centre of the column was necessary to induce buckling. Once confidence was obtained in the ability of the FE techniques to predict buckling, they were applied to represent a trabecular core model.
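The elastic-versus-plastic dividing line that this verification study probes can be sketched with the classical pin-ended Euler column: when the Euler buckling stress falls below the yield stress, elastic buckling is expected, otherwise plastic collapse (the inelastic transition region in between is ignored in this simple two-way split). The tissue properties below are the paper's (modulus 8 GPa, yield stress 68 MPa); the strut geometry and the classification itself are illustrative, not the quadratic-beam FE model:

```python
import math

def strut_failure_mode(e_pa, sigma_y_pa, length_m, radius_m):
    """Classify an idealized pin-ended circular strut by comparing the
    Euler elastic buckling stress with the yield stress."""
    area = math.pi * radius_m ** 2
    i_sec = math.pi * radius_m ** 4 / 4.0          # second moment of area
    r_gyr = math.sqrt(i_sec / area)                # = radius / 2 for a circle
    slenderness = length_m / r_gyr
    sigma_euler = math.pi ** 2 * e_pa / slenderness ** 2
    mode = "elastic buckling" if sigma_euler < sigma_y_pa else "plastic collapse"
    return slenderness, sigma_euler, mode

# A very slender strut buckles elastically; a stocky one yields first.
slender = strut_failure_mode(8e9, 68e6, 5e-3, 0.05e-3)
stocky = strut_failure_mode(8e9, 68e6, 0.5e-3, 0.1e-3)
```

This mirrors the trend the paper reports across ages: thin, long trabeculae drift toward the elastic-buckling side of the boundary, while thick, short ones collapse plastically.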
A three-dimensional trabecular core was created using a lattice of individual trabeculum beams. Three-node, quadratic beam elements were used to represent the longitudinal trabeculae, and two-node, linear beam elements represented the transverse trabeculae. Since the transverse trabeculae are primarily loaded in tension, the additional complexity of the quadratic beam representation was considered unnecessary. To provide a degree of irregularity to the lattice, as is seen in real trabecular bone, a perturbation factor of 0.3 was applied [5]. The perturbation factor defines the maximum distance each node may be perturbed as a proportion of the trabecular spacing; for example, with a perturbation value of 0.3, each node point was perturbed by ±0-30% of the spacing value. The trabecular spacing and thickness values for the transverse and longitudinal trabeculae were derived from Mosekilde [6]; the values are shown in Table 1. Three trabecular structures were created: age < 50, age 50-75 and age > 75. A tissue modulus of 8GPa and a Poisson's ratio of 0.3 were applied [7, 8]. An elastic-perfectly plastic yield definition was included, with a yield strain of 0.85% and a yield stress of 68 MPa. The yield strain was determined as an average of the reported compressive and tensile yield strains for trabecular bone [9].

Table 1 Trabecular spacing and thickness values for female vertebral trabecular bone for age < 50, age 50-75 and age > 75 [6]

  Model      Transverse     Longitudinal   Transverse      Longitudinal
             Spacing (mm)   Spacing (mm)   Thickness (mm)  Thickness (mm)
  Age > 75   1.145          1.668          0.107           0.201
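The node perturbation described above can be sketched as follows. This is an illustrative lattice generator, not the authors' code, and `perturbed_lattice` is a hypothetical name:

```python
import random

def perturbed_lattice(nx, ny, nz, spacing_mm, perturbation=0.3, seed=0):
    """Regular grid of lattice nodes, each coordinate offset by a uniform
    random amount of up to +/- perturbation * spacing (0.3 in the paper),
    to mimic the irregularity of real trabecular bone."""
    rng = random.Random(seed)
    nodes = []
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                nodes.append(tuple(
                    idx * spacing_mm
                    + rng.uniform(-perturbation, perturbation) * spacing_mm
                    for idx in (i, j, k)))
    return nodes

# e.g. a 2x2x2 corner of the age > 75 lattice (1.145 mm transverse spacing)
nodes = perturbed_lattice(2, 2, 2, 1.145)
```

Beams connecting neighbouring nodes would then inherit this irregularity; anisotropic spacing (different transverse and longitudinal values, as in Table 1) would simply use a per-axis spacing instead of the single value shown here.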
The trabecular core model was verified against experimental results. As well as providing structural parameters for three age groups, the Mosekilde study also provided the compressive strength of the trabecular cores for these age groups. To provide a comparison, trabecular core models were created to replicate the in vitro cylindrical bone samples tested by Mosekilde (radius 3.5mm, length 5mm). To replicate the axial compression test performed in the study, the upper nodes of the cores were free to move only in the axial direction and held in all other degrees of freedom. All bottom surface nodes were held in all degrees of freedom. The upper nodes were displaced in the axial direction until failure of the core occurred. The models were solved using ABAQUS/Standard (version 6.7, Abaqus Inc, RI, USA) using a large displacement (non-linear geometry) quasi-static solution procedure. The maximum compressive strength and stiffness of the cores were determined and compared to the experimental results. Once confidence was gained in the trabecular core model, an intact vertebra was simulated.

B. Modeling the Vertebra

Using a similar methodology to that employed for the trabecular core, an intact vertebra was simulated, whereby the inner trabecular lattice of the vertebra was enclosed by a thin vertebral cortex. Age < 50, age 50-75 and age > 75 vertebra models were produced. The cortex was meshed using three-dimensional, linear shell elements, and the geometry was based on equations given by Mizrahi [10]. The mesh density gave shell elements 2mm in size. The material properties of the cortex were assumed to be the same as those of the trabecular bone [7, 8]. The thickness of the shell elements was 0.5mm, which represents a normal cortical shell [11]. In the age > 75 model, the shell thickness was reduced to 0.35mm, and then again to 0.2mm, to represent shell thinning as observed in the later stages of osteoporosis [11]. This resulted in five vertebra models, all at different stages of osteoporosis. Loads were applied to simulate in vitro biomechanical testing, compressing the vertebra by 20% of its height. The upper endplate was displaced by -6mm axially and held in the transverse plane. The lower endplate was held in all directions. A quasi-static solution step (total length 1 sec) was used with a minimum time increment of 0.01 sec and a maximum time increment of 0.1 sec. As previously stated, the ABAQUS non-linear geometry capability was used to include the effect of large deformations in the solution.

Relative Roles of Cortical and Trabecular Thinning in Reducing Osteoporotic Vertebral Body Stiffness: A Modeling Study

III. RESULTS

The apparent moduli of the trabecular cores and vertebra models were determined using the linear region of the stress-strain graph, between 0-0.4% apparent strain. The maximum compressive strength was taken as the maximum total vertical reaction force reached in the simulation.

A. Trabecular core

Table 2 shows the compressive strength of the various trabecular core models determined experimentally by Mosekilde [6] and the corresponding computed compressive strengths from the trabecular core model. The table also shows the apparent stiffness of the cores determined computationally; however, Mosekilde did not report the stiffness of the cores tested experimentally, so no direct comparison can be made. Other studies have reported that vertebral trabecular bone samples from the lumbar spine have an apparent modulus of 165 ± 110 MPa [12].

Table 2 Compressive strength of the trabecular core samples determined experimentally by Mosekilde and by the FE trabecular core model, and the stiffness of the cores predicted by the FE models

                                               Age < 50     Age 50-75    Age > 75
  Mosekilde max. compressive strength (MPa)    3.91 ± 1.61  1.35 ± 0.64  0.93 ± 0.4
  FE core max. compressive strength (MPa)      2.84         1.21         0.54
  FE core apparent modulus (MPa)               253          138          74

B. Vertebra Model

Figure 1 shows the stress-strain curve for each of the vertebra models. The corresponding stiffnesses and compressive strengths are shown in Table 3.

Fig. 1 Stress versus strain for the vertebra models (curves for age < 50 cort 0.5, age 50-75 cort 0.5, age > 75 cort 0.5, age > 75 cort 0.35 and age > 75 cort 0.2; stress in MPa, strain 0-0.01).

Table 3 FE predicted compressive strength and stiffness of the vertebra models

  Model      Cortical thickness (mm)   Stiffness (N/mm)   Max. compressive strength (kN)
  Age > 75   0.5                       336                3.28
  Age > 75   0.35                      256                2.30
  Age > 75   0.2                       176                1.25
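The apparent-modulus extraction described in the Results, a straight-line fit over the 0-0.4% apparent strain window, can be sketched as a least-squares slope. `apparent_modulus` is a hypothetical helper, and the sample data below are synthetic:

```python
def apparent_modulus(strains, stresses_mpa, max_strain=0.004):
    """Least-squares slope (MPa) of the stress-strain curve over the
    low-strain linear region, 0 to 0.4% apparent strain."""
    pts = [(e, s) for e, s in zip(strains, stresses_mpa) if e <= max_strain]
    n = len(pts)
    se = sum(e for e, _ in pts)
    ss = sum(s for _, s in pts)
    see = sum(e * e for e, _ in pts)
    ses = sum(e * s for e, s in pts)
    # slope of the ordinary least-squares line through the points
    return (n * ses - se * ss) / (n * see - se * se)

# Synthetic curve: linear at 253 MPa up to 0.4% strain, softening beyond.
strains = [0.0, 0.001, 0.002, 0.003, 0.004, 0.01]
stresses = [253.0 * e for e in strains[:5]] + [1.5]
modulus = apparent_modulus(strains, stresses)
```

Points beyond the 0.4% window are excluded, so the post-yield softening of the curve does not contaminate the fitted modulus.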
IV. DISCUSSION

The trabecular core model was able to reproduce the compressive strengths and apparent moduli determined experimentally. The predicted compressive strengths for the cores of various structures were within one standard deviation of Mosekilde's experimental results for the corresponding ages. The apparent moduli determined computationally were within the range of values (165±110 MPa) in the literature [12]. With these results, confidence was achieved in the trabecular beam model. The vertebra model confirms that changes in architecture have a large effect on overall vertebra stiffness and strength. A change in architecture from the age < 50 to the age > 75 case resulted in a 44% decrease in stiffness and a 43% decrease in vertebral strength. With the age > 75 model, a change in shell thickness from 0.5mm to 0.2mm, without
IFMBE Proceedings Vol. 23
_________________________________________________________________
any change in trabecular structure, resulted in a 48% decrease in stiffness and a 62% decrease in compressive strength. These results highlight not only the importance of the trabecular architecture changes that occur with the osteoporosis process, but also the biomechanical importance of the cortical thinning that occurs in the later stages of the disease. A current limitation of this model is that it has yet to be validated against experimental data for a full vertebral body (including the cortical shell). However, validation of the trabecular core model and initial comparisons with the literature indicate that the predictions of the vertebra model are reasonable. Reported vertebral body compressive strengths range from approximately 6.0MPa for a 20-year-old vertebra to 2.6MPa for an 80-year-old vertebra [13]. The predicted compressive strengths for the vertebral models of different ages are comparable with these values, although slightly lower. The premise for this modeling approach was that buckling mechanisms dominate the response of rod-like osteoporotic bone. Hence, replicating the trabecular network using beam elements provides a sophisticated microstructural model capable of simulating plastic collapse, inelastic buckling, or elastic buckling in bone of various ages. A closer look at the trabecular struts in the age > 75 vertebra model shows the trabecular beams undergoing large buckling deformation with no plastic deformation, indicating that the overall failure of the vertebra is due to elastic buckling of the trabeculae. In the age 50-75 model, the beams also experience large amounts of buckling; however, there is also plastic deformation throughout the structure, suggesting that inelastic failure of the trabeculae plays a key role in the vertebra failure. Finally, the beams of the age < 50 model show almost no buckling, yet a high amount of plastic deformation, signifying plastic collapse of the structure.
While further model investigation and validation need to be done before any quantitative data on the trabeculae can be reported, these results highlight the distinctive insight into both trabecular and vertebral mechanics that this model allows. In future work, this model will be validated against human vertebra specimens. Once validated, it will be used to investigate current drug therapies and their effects on bone architecture and vertebral strength, as well as the effect of surgical treatments such as vertebroplasty on trabecular and vertebral strength.
V. CONCLUSION

This paper has presented the development of a novel multi-scale vertebra model produced with beam and shell elements. The model predictions have been validated against experimental data in the existing literature and show good agreement. The investigation into the effects of changes in architecture indicates that while the changes in trabecular architecture have a large effect on vertebral strength and stiffness in the early stages of osteoporosis, cortical thinning may have as great an effect (if not greater) in the later stages.
REFERENCES

1. WHO Scientific Group on the Prevention and Management of Osteoporosis, Prevention and management of osteoporosis: report of a WHO scientific group. WHO Technical Report Series. Geneva: World Health Organization, 2000.
2. Consensus development conference: Diagnosis, prophylaxis and treatment of osteoporosis. American Journal of Medicine, 1991.
3. Townsend, P.R. and Rose, R.M., Buckling studies of single human trabeculae. Journal of Biomechanics, 1975. 8: 199-201.
4. Rahman, M.A., Tani, J., and Afsar, A.M., Postbuckling behaviour of stainless steel (SUS304) columns under loading-unloading cycles. Journal of Constructional Steel Research, 2006. 62(8): 812-819.
5. Jensen, K.S., Mosekilde, L., Mosekilde, L., A model of vertebral trabecular bone architecture and its mechanical properties. Bone, 1990. 11(6): 417-423.
6. Mosekilde, L., Sex differences in age-related loss of vertebral trabecular bone mass and structure--biomechanical consequences. Bone, 1989. 10(6): 425-432.
7. Linde, F., Elastic and viscoelastic properties of trabecular bone by a compression testing approach. Danish Medical Bulletin, 1994. 41(2): 119-138.
8. Keaveny, T.M., et al., Biomechanics of trabecular bone. Annual Review of Biomedical Engineering, 2001. 3: 307-333.
9. Niebur, G.L., et al., High-resolution finite element models with tissue strength asymmetry accurately predict failure of trabecular bone. Journal of Biomechanics, 2000. 33(12): 1575-1583.
10. Mizrahi, J., Silva, M.J., Keaveny, T.M., Edwards, W.T., Hayes, W.C., Finite-element stress analysis of the normal and osteoporotic lumbar vertebral body. Spine, 1993. 18(14): 2088-2096.
11. Mosekilde, L., Vertebral structure and strength in vivo and in vitro. Calcified Tissue International, 1993. 53(Suppl 1): S121-S125.
12. Keaveny, T.M., Pinilla, T.P., Crawford, R.P., Kopperdahl, D.L., Lou, A., Systematic and random errors in compression testing of trabecular bone. Journal of Orthopaedic Research, 1997. 15(1): 101-110.
13. Mosekilde, L. and Mosekilde, L., Normal vertebral body size and compressive strength: relations to age and to vertebral and iliac trabecular bone compressive strength. Bone, 1986. 7: 207-212.
Musculo-tendon Parameters Estimation by Ultrasonography for Modeling of Human Motor System

L. Lan1, L.H. Jin2, K.Y. Zhu1 and C.Y. Wen1

1 Nanyang Technological University, Singapore
2 Tianjin Zhonghuan Semiconductor Joint-stock Co. Ltd, China
Abstract — To provide quantitative insights for analyzing human movement, musculoskeletal models are widely used to predict muscle force output. It is important to estimate the model's parameters accurately on a subject-specific basis. This paper presents an approach to obtaining the parameters in vivo by ultrasound imaging. The origin and insertion points, pennation angle, fascicle length and cross-sectional area of the brachialis are measured by off-line analysis of ultrasound images. The data are used to calculate values of musculotendon length, moment arms, optimal muscle fascicle length, tendon slack length and maximum isometric force. These musculotendon parameters can be used in a musculoskeletal model to predict muscle force and torque.

Keywords — Ultrasound, musculotendon model
I. INTRODUCTION

Models of the musculoskeletal system are widely used to simulate muscle forces in the analysis of human movement [1]. One of the main challenges in modeling is to estimate the musculotendon parameters accurately on a subject-specific basis. Appropriate modeling can provide the underlying muscle contraction and dynamics information. Furthermore, sensitivity and validation studies have shown that modeling and simulation results are significantly affected by the accuracy of the estimated musculotendon parameter values. In biomechanics studies, most researchers have simply adopted previously reported values from cadaver specimens to predict muscle force and torque [2]. However, parameter values vary widely even for the same muscle in living human limbs, and the properties of musculotendon systems also change with age in the same person [3]. To provide quantitative insights in analyzing human movement, it is necessary to estimate the parameters in vivo. Some researchers have used medical imaging techniques for this task, such as ultrasound [4], computerized tomography (CT) [5] and magnetic resonance imaging (MRI) [6]. Considering the disadvantages of MRI and CT, such as high cost and radiation exposure, medical ultrasound imaging may be the better choice. Furthermore, ultrasonography is convenient for repeated measurements, because medical ultrasound images can reveal the borders between fat, muscle and bone.

Previous musculotendon parameter measurements using ultrasound were mainly used to evaluate muscle function [7]. Few reports are available on ultrasound measurements for estimating musculotendon parameters to build a customized, subject-specific musculotendon model. In this study, the human brachialis muscle is measured; it is the elbow flexor with the largest physiological muscle cross-sectional area (PCSA) in the elbow flexor group [8]. The physiological parameters of the musculotendon structure are estimated from a geometric model of the brachialis muscle and medical ultrasound images. These parameters are employed in an extended Hill-type model to predict muscle force production.

II. MODEL DESCRIPTION

In this study, the elbow joint is modeled as a uniaxial hinge joint whose axis is determined by the centers of the capitulum and the trochlear sulcus. The range of elbow angle is defined from 0° (full extension) to 90° (full flexion). Muscles are modeled as line segments attaching at bones and joints. Anatomically, the brachialis muscle originates in the middle of the humerus and inserts on the head of the ulna. We assume that the origin and insertion points of the muscle are concentrated at the centers of the muscle attachment areas. Fig. 1 shows the anatomical relationship of the brachialis muscle at the elbow joint.

The musculotendon model is based on the musculotendon actuator described in [1]. To describe the physiological structure accurately, we use a modified Hill-type formulation by Brown et al. [9] for the force development of the muscle. That is, the muscle force F^M is described as the sum of the force produced by the contractile element, F^CE, and the force produced by the passive elastic element, F^PE:
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1761–1765, 2009 www.springerlink.com
F^M = F^PE + F^CE    (1)
where F^CE is determined by

F^CE = Fmax FV(V) FL(L) E(t)    (2)

where L is the muscle fascicle length L^M normalized to the optimal muscle fascicle length L^M_o. V represents the muscle contraction velocity normalized to the maximum shortening velocity of the muscle, which is simplified as the derivative of L. E(t) is the muscle activation, and Fmax represents the maximal isometric active force. FL is the force-length factor, which is modeled as a polynomial curve,

FL = d_0 + d_1 L + d_2 L^2    (3)

where the coefficients d_i are scaling factors. FV is the force-velocity factor, which is approximated by an exponential function [2],

FV = a / (1 + e^{b(V - c)})    (4)

where a = 1.5, b = 8.0 and c = 0.0866. The parallel elastic element is modeled as a system of two parallel springs. The total force produced by PE is separated into two components: F^PE1 represents the force produced by the non-linear spring that resists stretch in the passive muscle; F^PE2 represents the force produced by the non-linear spring that resists compression during active contraction at short lengths, i.e.,

F^PE = F^PE1 + A(t) F^PE2    (5)

The values of F^PE1 and F^PE2 are given as

F^PE1 = {c^PE1 k^PE1 ln{exp[(L^M - L^PE1_max)/k^PE1] + 1} + KV} Fmax,  F^PE1 ≥ 0    (6)

and

F^PE2 = {c^PE2 (exp(k^PE2 (L^M - L^PE2)) - 1)} Fmax    (7)

where c^PE1, c^PE2, k^PE1, k^PE2, L^PE1, L^PE2 and K are constants.

F^M is transferred by the tendon (SE) to generate the movements (Fig. 2). In this study, the tendon is modeled as a linear spring with a stiffness of K_t. Thus, the tendon length can be determined as

L^T = L^T_s + F^T / K_t    (8)

where F^T is the tendon force, with F^T = F^M cos(α_p), and L^T_s is the tendon slack length, that is, the length at which the tendon just begins to develop force on elongation. The tendon slack length is determined as

L^T_s = L^MT_o - 1.2 L^M_o    (9)

where L^MT_o represents the maximally elongated musculotendon length and L^M_o is the optimal physiological muscle length.

Figure 1. The anatomical sketch of the elbow and brachialis muscle. The relative positions of the origin and insertion points determine the moment arm and the musculotendon length.

Figure 2. A schematic of the Hill-type model for the contraction dynamics of muscle tissue. The fascicles are represented by the parallel contractile element (CE) and passive elastic element (PE). The series-elastic element (SE) represents the combined tendon and aponeurosis [1].
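As a concrete illustration, the Hill-type actuator described above can be sketched in a few lines of code. This is a minimal sketch: the polynomial force-length coefficients and the Fmax value below are placeholder assumptions for illustration, not the paper's fitted subject-specific constants.

```python
import math

# Minimal sketch of the Hill-type actuator: F_M = F_PE + F_CE,
# with F_CE = Fmax * FV(V) * FL(L) * E(t). Coefficients are placeholders.

F_MAX = 231.0          # maximal isometric force in N (order of magnitude only)

def fl(l_norm):
    """Force-length factor, modeled as a polynomial in normalized length."""
    d0, d1, d2 = -1.0, 4.0, -2.0   # placeholder polynomial coefficients
    return max(0.0, d0 + d1 * l_norm + d2 * l_norm ** 2)

def fv(v_norm):
    """Force-velocity factor, approximated by an exponential (sigmoidal) curve."""
    a, b, c = 1.5, 8.0, 0.0866      # placeholder constants
    return a / (1.0 + math.exp(b * (v_norm - c)))

def muscle_force(l_norm, v_norm, activation, f_pe=0.0):
    """Total muscle force: passive part plus activated contractile part."""
    f_ce = F_MAX * fv(v_norm) * fl(l_norm) * activation
    return f_pe + f_ce

# Fully activated isometric contraction at optimal fascicle length:
print(round(muscle_force(1.0, 0.0, 1.0), 1))
```

At optimal length and zero velocity the sketch returns a force close to Fmax, which is the behavior the force-length and force-velocity factors are normalized to produce.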
Then, based on a geometrical model of the musculotendon structure (Fig. 3), the muscle fascicle length L^M is calculated by subtracting the tendon length L^T from the whole musculotendon length L^MT, with the pennation angle (α_p) taken into account:

L^M = (L^MT - L^T) / cos(α_p)    (10)

Finally, the musculotendon variables (the moment arm h and the musculotendon length L^MT) can be calculated from the distance l_l between the origin point and the elbow and the distance l_s between the insertion point and the elbow, based on trigonometric functions (Fig. 1).

III. ULTRASONOGRAPHY MEASUREMENT

For the model described in the last section, the following parameters should be specified to customize the musculotendon model for a specific muscle: the peak isometric muscle force Fmax; the optimal muscle fascicle length L^M_o; the musculotendon length L^MT; the moment arm h; the tendon slack length L^T_s; the maximally elongated musculotendon length L^MT_max; and the muscle's intrinsic maximum shortening velocity Vmax. Among these, the parameters L^MT, h, L^T_s and L^MT_max are determined by the attachment points (origin and insertion), the optimal muscle fascicle length and the pennation angle. The maximum muscle contraction velocity Vmax can be expressed in optimal fiber lengths L^M_o per second, i.e., 10 L^M_o per second for young adults and 8 for older adults [10]. The peak isometric muscle force (Fmax) is assumed to be proportional to the PCSA.

To sum up, the positions of the attachment points, the pennation angle, the optimal muscle fascicle length and the PCSA are required to be measured on a subject-specific basis. Firstly, the position of the muscle attachment points can be directly identified with respect to the corresponding skin surface under the middle point of the ultrasound probe. Secondly, based on Eq. (11), the pennation angles at full extension and half flexion are measured from the ultrasound image to calculate the values of a_p and b_p (Fig. 4); any pennation angle can then be determined from the corresponding elbow angle, which can be measured with a goniometer. At rest, the relationship of the pennation angle α_p versus the joint angle θ can be fitted with a linear function [7], i.e.,

α_p = a_p θ + b_p    (11)

Thirdly, the muscle fascicle length (L^M) can be calculated using a trigonometric method from the longitudinal view of the ultrasonography of the brachialis (Fig. 4); i.e., L^M is given as [7]

L^M = L^F + L^MT1 / sin(α_p) + L^MT2 / sin(α_p)    (12)

where L^F is the visible part of the muscle fascicle, L^MT1 is the distance from the proximal end of the fiber to the bone, and L^MT2 is the distance from the distal end of the fiber to the superficial aponeurosis. Due to the force-length property presented by Zajac [1], for a muscle under maximum isometric contraction we can expect the muscle fascicle length L^M to equal the optimal muscle fascicle length L^M_o. Finally, the PCSA is measured by enclosing the outline of the muscle.

Figure 3. The musculoskeletal structure for ultrasound. All muscle fibers are assumed to be arranged in parallel with the same length and to insert on the tendon at the pennation angle. When the muscle fibers shorten, the pennation angle increases, but the width of the muscle remains constant.

Figure 4. The longitudinal view of the ultrasonography of the brachialis and biceps brachii; the data are collected from subject 1. The white fringe of the humerus bone and the dark muscle fascicles can be easily observed in the image.
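The two-step estimation described above (a linear pennation fit from two imaged joint positions, then the trigonometric fascicle-length calculation) can be sketched as follows. The joint angles and lengths below are invented illustrative numbers, not a subject's measurements.

```python
import math

# Sketch: fit alpha_p = a_p * theta + b_p from two imaged positions (Eq. 11),
# then recover fascicle length from the longitudinal view (Eq. 12).

def pennation_fit(theta1, alpha1, theta2, alpha2):
    """Fit a_p, b_p from pennation angles measured at two elbow angles (rad)."""
    a_p = (alpha2 - alpha1) / (theta2 - theta1)
    b_p = alpha1 - a_p * theta1
    return a_p, b_p

def fascicle_length(l_f, l_mt1, l_mt2, alpha_p):
    """Eq. (12): visible fascicle plus the two hidden ends projected along the fiber."""
    return l_f + l_mt1 / math.sin(alpha_p) + l_mt2 / math.sin(alpha_p)

# Full extension (theta = 0) and half flexion (theta = pi/4), illustrative angles:
a_p, b_p = pennation_fit(0.0, 0.22, math.pi / 4, 0.35)
alpha_90 = a_p * (math.pi / 2) + b_p      # predicted pennation at full flexion
print(round(alpha_90, 3))                 # prints 0.48
```

Because the fit is linear, the pennation angle at any goniometer-measured elbow angle follows directly, which is exactly how the model extrapolates from the two imaged positions.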
The muscle linear dimensions, the lateral dimension (LD) and the anteroposterior dimension (APD), are measured to evaluate the PCSA. The values of LD and APD are measured from the transverse view of the ultrasonography (Fig. 5).

In this research, a B-mode ultrasonography scanner with a 12 MHz, 38 mm linear probe (Ultrasonix Sonix RP) is used for data acquisition. Healthy subjects (4 males, age: 21-28 years) sit on a height-adjustable chair with a solid back. Their forearms are placed on a horizontal plane at the same height as the shoulder and supported by a bracket. The shoulder is in 90° abduction and 0° flexion. During the test, the ultrasound probe is moved along the anterior part of the upper arm to find the best image. To enhance ultrasound conduction, coupling gel is applied between the probe and the skin surface. To find the muscle architecture parameters, ultrasound images and video are recorded and analyzed off-line.

Figure 5. The transverse view of the ultrasonography of the brachialis; the data are collected from subject 3. The LD and APD can be measured from the image.

IV. RESULTS

The estimated parameter values are shown in Table 1.

Table 1. Estimated parameter values

Parameter      Subject 1   Subject 2   Subject 3   Subject 4
Fmax (N)       250.8       211.2       257.4       231
L^M_o (m)      0.095       0.086       0.092       0.086
a_p            0.16        0.17        0.16        0.15
b_p (rad)      0.22        0.18        0.2         0.18

These results agree with the values reported in the literature (Fmax: 184.8 [11], 178.2 [12], 217.8 [13], 254.4 [14]; L^M_o: 0.099 [12], 0.09 [13], 0.0942 [14]).

V. CONCLUSION

Ultrasonography is a low-cost and comfortable method for obtaining musculotendon parameters in vivo. Our study provides an approach to estimating the parameters of the musculotendon model from medical ultrasound images and a geometrical model of the anatomical structure. Since the use of cadaver data in modeling muscle function causes accumulated errors in the results, our approach helps to build a more accurate model than those reported in previous studies.

REFERENCES

1. F. E. Zajac, "Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control," Crit Rev Biomed Eng, vol. 17, no. 4, pp. 359-411, 1989.
2. N. Lan, "Stability analysis for postural control in a two-joint limb system," IEEE Trans Neural Syst Rehabil Eng, vol. 10, no. 4, pp. 249-259, 2002.
3. P. W. Brand, R. B. Beach, and D. E. Thompson, "Relative tension and potential excursion of muscles in the forearm and hand," J Hand Surg [Am], vol. 6, no. 3, pp. 209-219, 1981.
4. K. Sanada, C. Kearns, T. Midorikawa, and T. Abe, "Prediction and validation of total and regional skeletal muscle mass by ultrasound in Japanese adults," European Journal of Applied Physiology, vol. 96, no. 1, pp. 24-31, 2006.
5. R. C. Lee, Z. Wang, M. Heo, R. Ross, I. Janssen, and S. B. Heymsfield, "Total-body skeletal muscle mass: development and cross-validation of anthropometric prediction models," Am J Clin Nutr, vol. 72, no. 3, pp. 796-803, 2000.
6. C. N. Maganaris, V. Baltzopoulos, and A. J. Sargeant, "Changes in the tibialis anterior tendon moment arm from rest to maximum isometric dorsiflexion: in vivo observations in man," Clin Biomech (Bristol, Avon), vol. 14, no. 9, pp. 661-666, 1999.
7. L. Li and K. Y. Tong, "Musculotendon parameters estimation by ultrasound measurement and geometric modeling: application on brachialis muscle," in Proc. 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE-EMBS 2005), 2005, pp. 4974-4977.
8. K. N. An, F. C. Hui, B. F. Morrey, R. L. Linscheid, and E. Y. Chao, "Muscles across the elbow joint: a biomechanical analysis," J Biomech, vol. 14, no. 10, pp. 659-669, 1981.
9. I. E. Brown, S. H. Scott, and G. E. Loeb, "Mechanics of feline soleus: II. Design and validation of a mathematical model," Journal of Muscle Research and Cell Motility, vol. 17, no. 2, pp. 221-233, 1996.
10. D. G. Thelen, "Adjustment of muscle mechanics model parameters to simulate dynamic contractions in older adults," J Biomech Eng, vol. 125, no. 1, pp. 70-77, 2003.
IFMBE Proceedings Vol. 23
_________________________________________________________________
11. H. E. J. Veeger, B. Yu, K.-N. An, and R. H. Rozendal, "Parameters for modeling the upper extremity," Journal of Biomechanics, vol. 30, no. 6, pp. 647-652, 1997.
12. W. M. Murray, T. S. Buchanan, and S. L. Delp, "The isometric functional capacity of muscles that cross the elbow," Journal of Biomechanics, vol. 33, no. 8, pp. 943-952, 2000.
13. K. N. An, K. Takahashi, T. P. Harrigan, and E. Y. Chao, "Determination of muscle orientations and moment arms," J Biomech Eng, vol. 106, no. 3, pp. 280-282, 1984.
14. H. E. J. Veeger, F. C. T. Van Der Helm, L. H. V. Van Der Woude, G. M. Pronk, and R. H. Rozendal, "Inertia and muscle contraction parameters for musculoskeletal modelling of the shoulder mechanism," Journal of Biomechanics, vol. 24, no. 7, pp. 615-629, 1991.
Mechanical Vibration Applied in the Absence of Weight Bearing Suggest Improved Fragile Bone

J. Matsuda1, K. Kurata2, T. Hara3, H. Higaki2

1 Venture Business Laboratory, Niigata University, Niigata, Japan
2 Department of Biorobotics, Kyushu Sangyo University, Fukuoka, Japan
3 Department of Mechanical and Production Engineering, Niigata University, Niigata, Japan
Abstract — Mechanical loading is critical for maintaining bone mass, while weightlessness, such as that associated with reduced physical activity in old age, long-term bed rest, or space flight, invariably leads to bone loss. Fragile bone tissue is more susceptible to fractures. By contrast, extremely low-level oscillatory accelerations, applied without constraint, can increase bone formation. To examine the role of vibration in preventing and improving bone fragility, we tested the effect of vibration on bone structure in a tail-suspended, hindlimb-unloaded (HS) mouse model. Male 22-week-old Jcl-ICR mice were allocated randomly to the following groups: daily-standing control, HS without vibration, HS with vibration at 45 Hz (HS+45Hz), and HS with standing (as an alternative to vibration) (HS+stand). Vibration was given for 5 min/day for 4 weeks. During vibration, a group of mice was placed in a box on top of the vibrating device. The amplitude of vibration was 1.0 mm. After 4 weeks of treatment, the mice were anesthetized and killed by cervical dislocation. Trabecular bone parameters of the proximal tibial metaphyseal region were analyzed morphologically using in vivo micro-computed tomography. In trabecular bone, the microstructural parameters were improved in the HS+45Hz group compared with the HS and HS+stand groups, including bone volume (BV/TV), trabecular thickness (Tb.Th), trabecular separation (Tb.Sp) and trabecular bone pattern factor (TBPf). In conclusion, the results suggest a beneficial effect of vibration in preserving the complexity of trabecular bone.
Keywords — Bone loss, Hind-limb suspension, Vibration, Micro-computed tomography

I. INTRODUCTION

The sensitivity of the skeleton to changes in the mechanical environment is characterized by a rapid and site-specific loss of bone tissue after the removal of function. Conditions such as bed rest, spinal cord injury, and spaceflight can be severely detrimental to the mass, architecture, and mechanical strength of bone tissue, potentially transforming specific skeletal regions into sites of osteoporosis [1-4]. Whole-body vibration (WBV) can have an anabolic effect on bone tissue and may contribute to more fracture-resistant skeletal structures [5]. Therefore, WBV may provide a practical, non-pharmacological alternative for preventing and/or reversing osteoporosis without adverse invasive or pharmacological effects. Despite these potential benefits, few studies have investigated whether WBV can prevent or reverse deleterious changes in bone formation, resorption, or morphology induced by catabolic stimuli. In this study, we analyzed the effects of WBV on osteopenia and unloading-induced bone loss using a mouse model.

II. MATERIALS AND METHODS

We examined the effect of vibration on bone structure in a tail-suspended, hindlimb-unloaded (HS) mouse model (Fig. 1). Twenty male Jcl-ICR mice (Kyudo Co., Ltd., Fukuoka, Japan), 22 weeks old at the start of the experiment, were allocated randomly to the following groups: normal-condition control (Cont), HS without vibration (HS), HS with vibration at 45 Hz (HS+45Hz), and HS with short-term standing as an alternative to vibration (HS+stand). Vibration was given for 5 min/day for 4 wk. During the whole-body vibrational treatment, a group of mice was placed in a box on top of a vibrating device (Fig. 2). The amplitude of vibration was 1.0 mm. After the 4-wk treatment period, the mice were sacrificed by cervical dislocation under anesthesia with diethyl ether. The tibia of each mouse was removed

Fig.1 Depiction of the cage used for the tail-suspended hind-limb unloaded (HS) experiment. Slide guides allow forward-backward and right-left motion, and the tail is held by a clip on a fixation seat.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1766–1768, 2009 www.springerlink.com
and placed in 70% ethanol. Trabecular bone morphology in the proximal tibial metaphyseal region was analyzed using in vivo micro-computed tomography (micro-CT). The trabecular region was selected using contours inside the cortical shell on each two-dimensional image. The measured parameters included the trabecular bone volume/total volume fraction (BV/TV), trabecular thickness (Tb.Th), trabecular separation (Tb.Sp), trabecular number (Tb.N), node number (N.Nd/TV), and trabecular bone pattern factor (TBPf). Differences between the groups were analyzed using the Kruskal-Wallis test. Where a difference was found, the Mann-Whitney U test was applied to test the difference between individual groups. The significance level was 0.05 and all data are presented as mean ± SD.

Fig.2 System for whole-body vibration of mice. The vibrator operates at 45 Hz with an amplitude of 1 mm and is driven by a controller.

III. RESULTS

Table 1 displays the bone morphological parameters analyzed from the micro-CT images. The grading criterion for regulation and/or modeling was the presence of a significant difference in the bone parameters between groups. The BV/TV was reduced in the HS groups (HS, HS+45Hz, HS+stand) compared with the Cont group. There were significant positive effects on BV/TV in response to standing and especially in response to vibration. Although similar positive trends were observed for Tb.N and Tb.Sp in the HS+stand and HS+45Hz groups, the effects were not statistically significant. There was no significant difference in N.Nd/TV between the Cont and HS+45Hz groups.

Table.1 Results from micro-computed tomography analysis of trabecular bone.

Parameter       Control            HS                 HS+45Hz            HS+Stand
BV/TV (%)       0.341 ± 0.063 **   0.119 ± 0.039      0.278 ± 0.133      0.239 ± 0.106
Tb.Th (mm)      0.058 ± 0.003 **   0.044 ± 0.004      0.053 ± 0.003      0.047 ± 0.005
Tb.Sp (mm)      0.191 ± 0.138      0.342 ± 0.279      0.192 ± 0.136      0.189 ± 0.098
Tb.N (1/mm)     5.892 ± 1.159      4.179 ± 2.741      5.167 ± 2.250      4.987 ± 2.021
Nd/TV (1/mm)    30.438 ± 7.492     18.179 ± 12.830    29.601 ± 11.121    21.533 ± 8.787
TBPf (1/mm)     2.652 ± 0.121      3.525 ± 0.304      2.923 ± 0.189      3.287 ± 0.312
** : p < 0.01

IV. DISCUSSION

Preventing the loss of bone attributable to a change in the mechanical environment is an important aspect of effective rehabilitation. The sensitivity of bone to physical stimuli is evident from exercise studies [6, 7], long-term bed rest [8] and local loading, as seen in the humerus of tennis players [9, 10]. The exact mechanical control of bone adaptation is not fully understood. Strain magnitude, strain rate, fluid shear flow and strain energy density have been proposed as the important stimuli for bone adaptation. However, very little is known about actual bone adaptation in the absence of mechanical stimuli. In the present study, we estimated the effects of whole-body vibration on the bone structure of mechanically unloaded mice. Unloaded mice exhibited significant decreases in the values of trabecular bone parameters relative to the control group, and these decreases were significantly prevented by the short-term reloading of mice in the HS+stand and especially the HS+45Hz groups. Additionally, the values of Tb.N and Tb.Sp in the short-term reloaded mice tended to return to the control values. The values of other parameters were also improved in response to 45-Hz vibration (HS+45Hz group). These results emphasize the importance of mechanical loading for the morphological complexity of trabecular bone. Furthermore, these results suggest that losses in bone quality owing to various unloaded conditions may recover with vibrational treatment. Recent studies have indicated that trabecular bone loss and impairment of mechanical properties reduce bone strength and increase fracture risk [11, 12], emphasizing the importance of bone tissue properties for bone mechanical behavior. Our findings suggest that short-term, whole-body vibration may be a practical, non-invasive, and effective means of providing early recovery in cases of bone loss due to mechanical factors.
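The rank-based group comparison used in this study (Kruskal-Wallis, then pairwise Mann-Whitney U tests) can be illustrated with a minimal sketch of the Mann-Whitney U statistic computed directly from ranks. The sample values below are synthetic, chosen only to show the mechanics of the test, not the study's data.

```python
# Minimal rank-based sketch: Mann-Whitney U for two independent samples.

def ranks(values):
    """Midranks of a combined sample (ties get the average rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(a, b):
    """U statistic (the smaller of U1 and U2) for two independent samples."""
    combined = list(a) + list(b)
    r = ranks(combined)
    r1 = sum(r[:len(a)])               # rank sum of the first sample
    u1 = r1 - len(a) * (len(a) + 1) / 2
    return min(u1, len(a) * len(b) - u1)

control = [0.31, 0.35, 0.38, 0.33, 0.36]   # illustrative BV/TV-like values
unloaded = [0.10, 0.14, 0.12, 0.09, 0.16]
print(mann_whitney_u(control, unloaded))   # prints 0.0 (complete separation)
```

A U of zero means the two samples do not overlap at all, which is the most extreme outcome the test can report; in practice the U value is compared against a critical value at the chosen significance level.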
V. CONCLUSIONS

Tail-suspended, hindlimb-unloaded mice subjected to 5 minutes of daily whole-body vibration for 4 weeks showed improved bone volume (BV/TV), trabecular thickness (Tb.Th), trabecular number (Tb.N) and trabecular bone pattern factor (TBPf) compared with the HS and HS+stand groups. These results suggest that whole-body vibration effectively prevented unloading-induced bone loss at a site of active metabolism, such as trabecular bone.

REFERENCES

1. Chen JS, Cameron ID et al. (2006) Effect of age-related chronic immobility on markers of bone turnover. J Bone Miner Res 21:324-331
2. Green DM, Noble PC et al. (2006) Effect of early full weight-bearing after joint injury on inflammation and cartilage degradation. J Bone Joint Surg [Am] 88A:2201-2209
3. Houde JP, Schulz LA et al. (1995) Bone mineral density changes in the forearm after immobilization. Clin Orthop Relat Res:199-205
4. Lang T, LeBlanc A et al. (2004) Cortical and trabecular bone mineral loss from the spine and hip in long-duration spaceflight. J Bone Miner Res 19:1006-1012
5. Rubin C, Turner AS et al. (2001) Anabolism. Low mechanical signals strengthen long bones. Nature 412:603-604
6. Robinson TL, Snow-Harter C et al. (1996) Gymnasts exhibit higher bone mass than runners despite similar prevalence of amenorrhea and oligomenorrhea. J Bone Miner Res 10(1):26-35
7. Snow-Harter C, Whalen R et al. (1995) Bone mineral density, muscle strength, and recreational exercise in men. J Bone Miner Res 7(11):1291-1296
8. Nishimura Y, Fukuoka H et al. (1994) Bone turnover and calcium metabolism during 20 days bed rest in young healthy males and females. Acta Physiol Scand Suppl 616:27-35
9. Huddleston AL, Rockwell D et al. (1980) Bone mass in lifetime tennis athletes. JAMA 244(10):1107-1109
10. Jones HH, Priest JD et al. (1977) Humeral hypertrophy in response to exercise. J Bone Joint Surg Am 59(2):204-208
11. McBroom RJ, Hayes WC et al. (1985) Prediction of vertebral body compressive fracture using quantitative computed tomography. J Bone Joint Surg [Am] 67:1206-1214
12. Silva MJ, Keaveny TM et al. (1997) Load sharing between the shell and centrum in the lumbar vertebral body. Spine 22:140-150
Author: Junpei Matsuda
Institute: Venture Business Laboratory, Niigata University
Street: 2-8050, Ikarashi, Nishi-ku
City: Niigata
Country: Japan
Email: [email protected]
A Biomechanical Investigation of Anterior Vertebral Stapling

M.P. Shillington1,2, C.J. Adam2, R.D. Labrom1 and G.N. Askin1

1 Mater Health Services, Brisbane, Australia
2 Queensland University of Technology, Brisbane, Australia
Abstract — An immature calf spine model was used to undertake anatomical and biomechanical investigations of an anterior vertebral staple used in the thoracic spine to treat scoliosis. The study involved three stages: (1) displacement-controlled testing to determine changes in bending stiffness of the spine following staple insertion, (2) measurement of forces in the staple using strain gauges, and (3) micro-CT scanning of vertebrae following staple insertion to describe the associated anatomical changes. The results suggest that the mechanism of action of stapling may be a consequence of hemiepiphysiodesis causing convex growth arrest, rather than the production of sustained compressive forces across the motion segment.

Keywords — biomechanics, thoracic spine, staples, shape memory alloy.
I. INTRODUCTION Adolescent idiopathic scoliosis (AIS) is a complex three dimensional spinal deformity diagnosed between 10 and 19 years of age. The natural history of curve progression in AIS is dependent on the patient’s skeletal maturity, the curve pattern, and the curve severity. Currently treatment options for progressive scoliosis are limited to observation, bracing, or surgery. While brace treatment is noninvasive and preserves growth, motion, and function of the spine, it does not correct deformity and is only modestly successful in preventing curve progression. In contrast, surgical treatment with an instrumented spinal arthrodesis usually results in better deformity correction but is associated with substantially greater risk. The risks of surgery are related to the invasiveness of spinal arthrodesis, the instantaneous correction of spinal deformity, and the profoundly altered biomechanics of the fused spine. Fusionless scoliosis surgery may provide substantial advantages over both bracing and definitive spinal fusion. The goal of this new technique is to harness the patient’s inherent spinal growth and redirect it to achieve correction, rather than progression, of the curve. This effect is thought to occur as a consequence of the Hueter-Volkmann law which states that increased compressive loads across a physis will reduce growth, while conversely, increased distractive forces will result in accelerated growth [1]. Currently there are several surgical treatments incorporating the fusionless
ideology, one of which is anterior vertebral stapling (see Fig. 1). By applying implants directly to the spine, anterior vertebral stapling is theoretically more advantageous than external bracing because it addresses the deformity directly at the spine and not via the chest wall and ribs, and because it eliminates problems with patient noncompliance during brace treatment. Furthermore, minimally invasive tethering of the anterior thoracic spine by means of an endoscopic approach is also a less extensive procedure than arthrodesis, with no requirement for discectomies, preparation of the fusion bed, or harvest of bone graft. Results for stapling in humans were presented as early as 1954, but the results were disappointing [2]. Correction of the scoliosis was limited because the children had little growth remaining at the time of treatment, and the curves were severe. Some staples broke or became loose, possibly because of motion through the intervertebral disc. Recently, clinical interest in stapling has increased following the release of a new staple designed specifically for insertion into the spine by Medtronic Sofamor Danek (Memphis, TN). These staples are manufactured using nitinol, a shape memory alloy (SMA) composed of nickel and titanium. SMA staples are unique in that the prongs are straight when cooled but clamp down into the bone in a "C" shape when the staple returns to body temperature, providing secure fixation. Despite the increased clinical interest in the use of SMA staples, little is known about the mechanism of their effect or the consequences of their insertion on the adolescent spine.
Fig 1. Radiograph demonstrating anterior vertebral staples inserted into the thoracic spine
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1769–1772, 2009 www.springerlink.com
1770
M.P. Shillington, C.J. Adam, R.D. Labrom and G.N. Askin
The aim of this study was threefold. Firstly, to measure changes in the bending stiffness of a single spinal motion segment following staple insertion. Secondly, to describe and quantify the loading experienced by the staple during spinal movement. Thirdly, to describe the structural changes to the vertebra following staple insertion. II. MATERIALS AND METHODS A. Specimen Preparation Six to eight week old bovine spines have previously been validated as a model for the adolescent spine [3]. Specimens were obtained from the local abattoir and stored frozen at the testing facility. All specimens underwent pre-test CT scanning to exclude vertebral anomalies. Each vertebral column was cut into monosegmental functional spinal units (FSU) consisting of two adjacent vertebrae with intervening disc, facets, and ligaments. The FSU was then carefully denuded of all paraspinal muscle with care to preserve ligaments and bony structures. In addition, both sets of ribs and part of the spinous processes were removed to induce significant instability. Once prepared the specimens were potted in polymethylmethacrylate to facilitate coupling of the specimen to the testing apparatus. B. Surgical Procedure Four pronged nitinol staples were cooled in an ice bath as per recommended surgical procedure to facilitate their deformation. Using standard instruments the staple was opened to a position of 90°. The surgeon then placed a 5mm nitinol staple (Shape Memory Alloy Staple; Medtronic Sofamor Danek; Memphis, TN) just anterior to the insertion of the rib head so that it spanned the disc and adjacent vertebral endplates. Accurate positioning of the staple was confirmed on post-test radiographs. C. Biomechanical Evaluation A displacement controlled six degree-of-freedom robotic facility was used to test each specimen through a predetermined range of motion in flexion, extension, lateral bending, and axial rotation (see Table 1). Each specimen was tested first in an un-stapled (control) state. 
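The stiffness evaluation on the robotic facility can be sketched as a least-squares slope of moment versus rotation (Nm/degree), averaged over repeated cycles after discarding a settling cycle. The moment samples below are synthetic illustration values, not recorded robot data.

```python
# Sketch: rotational stiffness (Nm/degree) as the least-squares slope of
# moment vs. rotation, averaged over cycles after one settling cycle.

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def cycle_stiffness(cycles):
    """Average Nm/degree over repeated cycles, discarding the first settling cycle."""
    usable = cycles[1:]
    return sum(slope(rot, mom) for rot, mom in usable) / len(usable)

rotation = [0.0, 1.0, 2.0, 3.0, 4.0]              # degrees
cycles = [
    (rotation, [0.0, 0.9, 2.1, 2.9, 4.1]),        # settling cycle (discarded)
    (rotation, [0.0, 1.0, 2.0, 3.0, 4.0]),        # slope 1.0 Nm/deg
    (rotation, [0.0, 1.2, 2.4, 3.6, 4.8]),        # slope 1.2 Nm/deg
]
print(round(cycle_stiffness(cycles), 2))          # prints 1.1
```

Comparing this averaged slope between the stapled and un-stapled states of the same specimen is what the paired testing design then evaluates statistically.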
A staple was then inserted using the technique described previously and the testing protocol repeated. A total of fourteen segments were tested, comprising six T3-4, four T5-6, and four T7-8. Force and moment data for each test were recorded via the robot's force transducer. A fixed axis of rotation for the segment was calculated to be five millimeters anterior to the posterior edge of the annulus in the mid-sagittal plane. Using a custom-designed MATLAB (version 6.0, MathWorks Inc., Natick, MA) program, the force transducer data were synchronized with the robot position data and filtered using moving-average methods. The rotational stiffness of the functional spinal unit (FSU) for each applied motion was calculated in Nm/degree of rotation. Each rotational stiffness was calculated as an average of five cycles per test, which were performed following one 'settling' cycle. Paired t-tests were used to compare average stiffness measurements in the stapled and control conditions for each direction of movement.

S_i = [s_i1, s_i2, …, s_iN]^T,   i = 1, 2, …, M   (1)

where s_ij are points in the Cartesian coordinate system, M is the number of faces in the database, and N is the number of points in a single point cloud. In the next step the mean shape S̄ and the covariance matrix C are computed (2):
S̄ = (1/M) Σ_{i=1..M} S_i,   C = (1/M) Σ_{i=1..M} S̃_i S̃_i^T   (2)

The difference between the mean and each object in the database is described by the deformation vector S̃_i = S_i − S̄.
The statistical analysis of the deformation vectors gives information about the empirical modes. Modes represent the geometrical features (shape) but can also carry other information such as texture, temperature maps and others. Only the first few modes carry most of the information, so each original object S_i is reconstructed using some K principal components (3):

S_i ≈ S̄ + Σ_{k=1..K} a_ki φ_k   (3)

where φ_k is the k-th empirical mode (eigenvector of C) and a_ki are the mode coefficients.

* F > F_0.05(2, 20); (e) is the pooled error.
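A minimal numerical sketch of eqs. (2)-(3), using synthetic shape vectors in place of the face database (the array names are ours, and the SVD of the deformation matrix is used as an efficient route to the eigenvectors of C):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 10, 50                           # M objects, N 3D points each
shapes = rng.normal(size=(M, 3 * N))    # rows are the shape vectors S_i

S_bar = shapes.mean(axis=0)             # mean shape
D = shapes - S_bar                      # deformation vectors (rows)
C = (D.T @ D) / M                       # covariance matrix, eq. (2)

# Empirical modes = eigenvectors of C; the SVD of D gives them directly
_, _, Vt = np.linalg.svd(D, full_matrices=False)

K = 5                                   # keep K principal components
a = D @ Vt[:K].T                        # mode coefficients a_ki
S_rec = S_bar + a @ Vt[:K]              # eq. (3) reconstruction

# Using all modes reproduces the originals exactly
S_full = S_bar + (D @ Vt.T) @ Vt
```

Truncating to the first K modes gives a compact approximation; retaining all modes recovers each object exactly.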
could be acquired when the cells are sonicated at 5 W/cm² for 1 min and incubated for 12 h post-sonication. The results of the variance analysis for eA and sN are shown in Table 3 and Table 4. The contribution ratios show that ultrasound intensity is the most significant factor for early apoptosis, and incubation time post-sonication for secondary necrosis.

IV. DISCUSSION

After sonication, the degree of membrane damage caused by the mechanical effects of ultrasound and the degree of repair by the cells determine the type of cell death, as shown in Fig. 7. Higher ultrasound intensity induces more damage to the cell membrane, and in this way early apoptosis and secondary necrosis are increased. Continuous sonication accumulates energy on the cell membrane. Compared with pulsed ultrasound, there is little time for the cells to repair themselves during sonication, so it increases the irreversible damage induced by ultrasound. This may be the reason why more secondary necrosis is induced while early apoptosis does not change significantly with increasing sonication time. Incubation time post-sonication gives damaged cells the time, and more chance, to repair themselves successfully and then develop into apoptosis. This reaches its peak after about 12 h; afterwards, more early apoptotic cells develop into secondary necrosis. This explains why optimal early apoptosis is obtained at about 12 h post-sonication.
V. CONCLUSIONS
This study shows that low-intensity ultrasound induces early apoptosis and secondary necrosis in HepG2 cells. Based on the orthogonal analysis, optimal apoptosis with minimal necrosis is acquired when the cells are exposed to ultrasound at 5 W/cm² for 1 min and incubated for 12 h post-sonication. Our data clearly show that certain conditions to optimize apoptosis induction do exist. For the application in cancer therapy, every parameter, including proper timing, is therefore essential in establishing a protocol for effective induction of apoptosis by ultrasound.
ACKNOWLEDGMENT

This work was supported by the Key Program of the National Natural Science Foundation of China, No. 30630024, and the Doctoral Foundation of Xi'an Jiaotong University, grant No. DFXJTU2005-05.
REFERENCES
1. Lagneaux L, Meulenaer ECd, Delforge A et al (2002) Ultrasonic low-energy treatment: a novel approach to induce apoptosis in human leukemic cells. Exp Hematol 30:1293-1301
2. Sun SY, Hail N, Lotan R (2004) Apoptosis as a novel target for cancer chemoprevention. J Natl Cancer Inst 96:662-672
3. Fulda S, Debatin KM (2003) Apoptosis pathways in neuroblastoma therapy. Cancer Lett 197:131-135
4. Feril LB Jr, Kondo T (2004) Biological effects of low intensity ultrasound: the mechanism involved, and its implications on therapy and on biosafety of ultrasound. J Radiat Res 45:479-489

Author: Y. Feng
Institute: Laboratory of Biomedical Information Engineering of Ministry of Education
Street: No. 28, Xianning West Road
City: Xi'an
Country: P.R. China
Email: [email protected]

IFMBE Proceedings Vol. 23
Visual and Force Feedback-enabled Docking for Rational Drug Design O. Sourina1, J. Torres2 and J. Wang1 1
Nanyang Technological University/School of Electrical and Electronics Engineering, Singapore 2 Nanyang Technological University/School of Biological Science, Singapore
Abstract — Transmembrane helices play basic roles in biology: signal transduction, ion transport and protein folding. While antibodies can be directed towards hydrophilic regions of molecules, transmembrane regions have not been targeted so far. In this paper, we propose a novel approach to search for helix-helix complementary pairs in order to inhibit, or modulate, the function of membrane proteins where point mutations at the transmembrane domain have been found to lead to various forms of cancer, such as the homodimeric epidermal, or fibroblast, growth factor receptors. This method employs visual and force feedback tools to search for the optimal interaction between helices. The search is manual, exploring helix tilt, helix rotation and side chain rotamer selection, with feedback from the models consisting of a repulsion or attraction force. We developed a prototype system that allows real-time interactive visualization and manipulation of molecules with force feedback in a virtual environment. In our system, we implemented a haptic interface to facilitate the exploration and analysis of molecular docking. The haptic device enables the user to manipulate the molecules and feel their interactions during the docking process in a virtual experiment on the computer. In the future, these techniques could help the user to understand molecular interactions and to evaluate the design of pharmaceutical drugs.

Keywords — Visual docking, force feedback, molecular docking, haptic interface
I. INTRODUCTION

During the past decade, efforts have been made to predict complex structures from the structures of individual proteins. Membrane protein structure determination is still the 'Wild West' of structural biology [1]. Structural determination using classical techniques such as X-ray diffraction or Nuclear Magnetic Resonance (NMR) is hindered by the experimental problems associated with lipid-embedded domains. Given the experimental difficulties in membrane protein structure determination, there is a pressing need for prediction methods. Prediction of membrane protein structure consists of two parts: one is topology, the other is helix-helix interaction. Prediction of the location of transmembrane (TM) helices and of topology has in general been successful. It is often possible to develop a limited set of topological models from the sequence. The total success of membrane protein structure prediction, however, will depend largely
on the second challenge, which is to correctly pack topologically arranged helices [2]. In this respect, the fact that the main contribution to transmembrane interhelical packing comes from Van der Waals interactions converts the problem into a docking one [3]. We propose and develop a visual haptic-based molecular docking system to explore the conformational space of helix-helix interaction manually, in the hope of finding an optimal conformation in the minimum amount of time. Docking the transmembrane helices is a difficult task for automatic conformational search algorithms when a polytopic membrane protein structure is explored, because of the exponential increase in conformations as the number of α-helices increases. Given the difficulties in exploring the whole conformational space available in polytopic membrane proteins, our strategy for docking TM helical domains is to use a manual approach in a virtual reality environment. This manual approach greatly simplifies the searching task, as changes in helix register, tilt and rotational orientation of the helices around their long axis are accomplished by hand. Feedback will be provided as attraction or repulsion forces felt by the user, generated after calculating bonding and non-bonding interactions. The paper is organized as follows. In Section II, the proposed method is elaborated. In Section III, the system prototype is described and helix-helix docking examples are given. Conclusions and future work are given in Section IV.

II. APPROACH

There are a number of methods that attempt to solve the docking problem, which entails finding the optimum interaction between two molecular systems by using a scoring function. These methods normally use a conformational search algorithm to explore the conformational space, which is exceptionally time-consuming. In these studies, drug-receptor or protein-protein interactions are the preferred subjects of study.
One reason for this is that the ultimate goal for the use of transmembrane helices is not as immediately obvious. The other is the lack of structural data for membrane proteins, which precludes 'direct' docking attempts between a drug and, for example, a membrane receptor. In complex systems, however, the number of unknowns grows
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1874–1877, 2009 www.springerlink.com
exponentially as the symmetry that exists in homo-oligomers is lost. In this case, additional experimental approaches or demanding computational search methods are necessary. This turns the exhaustive search of conformational space for a 6-TM protein into a formidable task, as rotation, tilt and register can all change in each of the TM helices. One alternative that we propose in our project is to explore the conformational space manually using force feedback, in the hope of finding an optimal conformation in the minimum amount of time. This manual approach greatly simplifies the searching task, as changes in helix register, tilt and rotational orientation of the helices around their long axis are accomplished with visual and haptic feedback. This is especially convenient when exploring the conformational space in a polytopic membrane protein, in which all TM helices are different. Thus, our strategy for docking TM helical domains is to use visual and haptic feedback in a virtual reality environment. Feedback will be provided as attraction or repulsion forces felt by the user. A rough binding region can be decided based on shape complementarity using the visualization system. So far, sphere models corresponding to the Van der Waals surface are implemented in the system we are working on. The key requirement in simulating the ligand-binding process is to have a good model of the interaction between ligand and receptor. During binding, the ligand moves in the potential field created by the receptor's atoms, and the system searches for a stable low-potential configuration. While moving one of the molecules around the binding site based on shape complementarity, the potential energy at each position is calculated and compared; the minimum value of the potential energy is recorded together with the position of the ligand molecule.
The possible docking orientations between molecular systems are sampled with input from force calculations that is fed back to the user through the haptic device. The molecular systems are visualized with their van der Waals surfaces. The force feedback requires a high refresh rate (at least 1 kHz), therefore the interaction energy calculations must be simplified. This is a key challenge in the project. The problem of fast haptic rendering can be solved by computing the interaction forces in advance and storing them in a volumetric grid. After a preliminary study we also found that when the molecular system is manipulated, the Cartesian coordinates corresponding to interactions with local energy minima should be dynamically stored on the volumetric grid as well. When only the forces in rigid docking are required, we can use the Lennard-Jones (L-J) potential alone as the main scoring function, as it has been found to be the most important in transmembrane α-helix interaction [3]. The essential features are approximated quite well by a Lennard-Jones potential [4] (also referred to as the L-J
potential, 6-12 potential or, less commonly, 12-6 potential). There are many different force field models that can be used to simulate proteins and other organic molecules, implemented in AMBER [4], CHARMM [5], MM3 [6], MM4 [7] and MMFF94 [8]. Although each force field is normally developed for a particular type of molecule, they adapt rather well to different structures and atom types. The one we use, described below, is called OPLS-aa [9][10][11]. It was parameterized for use in protein simulations, and also for small organic molecules, and has functional groups for all 20 common amino acids. For homo-atomic pairs, published L-J parameters are available (e.g. [12] for OPLS-aa). For hetero-atomic pairs, the effective values of ε and σ are calculated from those for the homo-atomic pairs; this way of calculation is called a mixing rule. OPLS-aa uses the same non-bonded functional forms as AMBER, and the Lennard-Jones terms between unlike atoms are computed using the geometric-mean mixing rule [13].

III. RESULTS

We are developing a prototype of the Transmembrane α-helices Docking System HMolDock (Haptic-based Molecular Docking), using the haptic device PHANTOM 1.5/6DOF (6 degrees of freedom) [14-15]. The molecular structure file format of the Protein Data Bank (.pdb) [16] is chosen as input, although there is a wide variety of file formats based on standard Cartesian (x, y, z) coordinates (e.g., .mpl, .car, .pdb) for which we will use conversion programs. For now, basic molecular visualization is accomplished. Atom coordinates are read from the input PDB files; the atom radius and corresponding color are determined based on the atom type and the residue it belongs to, which are extracted from the PDB data. Two molecules, or one molecule and a probe, are visualized on the screen. The user can assign a haptic mouse to the probe or to one of the molecules and move the probe/molecule towards or around the other molecule. In Fig. 1, the Van der Waals forces of a molecular system are investigated with the atom probe. An interaction force is calculated at each position, and the resulting attraction/repulsion force is felt by the user through the haptic device. The force direction and magnitude are visualized as a vector. Thus, a probe/molecule can be selected with the haptic mouse and moved around to let the user 'feel' the changing force. In Figure 2, the Van der Waals surfaces of two helices are visualized by the current system prototype. To differentiate the helices, two color schemes are used. The attraction/repulsion force in this case is mostly due to van der Waals interactions, as no charged residues are present.
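A minimal sketch of the force model described above: the 12-6 L-J potential, the geometric-mean mixing rule used by OPLS-aa for unlike atom pairs, and a trilinear lookup into a precomputed volumetric force grid for the 1 kHz haptic loop. Parameter values and the grid layout are illustrative assumptions, not the system's actual data structures:

```python
import numpy as np

def lj_energy(r, eps, sig):
    """12-6 Lennard-Jones potential U(r) = 4*eps*[(sig/r)^12 - (sig/r)^6]."""
    s6 = (sig / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def lj_force(r, eps, sig):
    """Radial force magnitude F = -dU/dr (positive = repulsive)."""
    s6 = (sig / r) ** 6
    return 24.0 * eps * (2.0 * s6 * s6 - s6) / r

def mix(eps_i, eps_j, sig_i, sig_j):
    """Geometric-mean mixing rule for hetero-atomic pairs (OPLS-aa style)."""
    return np.sqrt(eps_i * eps_j), np.sqrt(sig_i * sig_j)

def trilinear(grid, origin, spacing, p):
    """Look up a vector field precomputed on a regular grid at point p."""
    f = (np.asarray(p, float) - origin) / spacing   # fractional grid index
    i = np.floor(f).astype(int)
    t = f - i
    out = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return out

# The L-J minimum sits at r = 2^(1/6)*sigma, where the radial force vanishes
eps, sig = 1.0, 0.34
r_min = 2.0 ** (1.0 / 6.0) * sig
```

Evaluating `trilinear` per haptic frame is a cheap constant-time lookup, which is what makes the precomputed grid compatible with the 1 kHz refresh requirement.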
Fig. 1 Van der Waals forces investigated with the atom probe
Fig. 3 Interaction of αIIb integrin transmembrane helix and a designed antibody-like complementary peptide anti-αIIb, separated
Fig. 4 Interaction of αIIb integrin transmembrane helix and a designed antibody-like complementary peptide anti-αIIb, in contact
Fig. 2 Van der Waals surface model of a homodimer with two different color schemes
To demonstrate an application of our method, let us give an example in molecular medicine. Currently, we are working on models and algorithms for rigid molecular docking. Although the Van der Waals surface is represented in our system using simple atoms, we show here a more complex example. The example in Figure 3 is especially pertinent to molecular medicine, where an important strategy is to achieve function regulation by modulation of protein-protein interactions. In membrane proteins, a recent approach targets the transmembrane region with certain peptides [17]. The rationale is that transmembrane mutations, for example in receptor tyrosine kinases, have been associated with many types of cancer and developmental deficiencies, which are explained by unregulated activation. Inactivation of the
receptor is thus crucial for targeted treatment, and this can be achieved with synthetic α-helices that target the native α-helices of the receptor. Figure 3 shows the two transmembrane domains, target and probe, corresponding to αIIb integrin and a designed anti-αIIb integrin [17], respectively. In Figure 4, the helices move closer and come into contact.

IV. CONCLUSIONS

We propose a novel approach to helix-helix interaction research and the rational design of peptides. We propose and implement a novel approach to searching for helix-helix complementary pairs that will facilitate the future search for membrane helix inhibitors. Novel models and algorithms specific to helix-helix docking will be proposed and developed. For now, drug-receptor or protein-protein interactions are the preferred subjects of study, and the specific features of helix-helix docking are less studied. Drug design with our system will be tested. Based on the preliminary study with our system, we believe that the proposed methods can improve understanding of why certain mutations alter transmembrane interactions and cause disease. We are going to perform in silico experiments on biomolecular docking to study helix-helix interactions in rational drug design.

ACKNOWLEDGMENT

This project is supported by MOE NTU grant RG10/06 "Visual and Force Feedback Simulation in Nanoengineering and Application to Docking of Transmembrane α-Helices".

REFERENCES
1. Torres J, Stevens TJ, Samso M (2003) Membrane proteins: the 'Wild West' of structural biology. Trends Biochem Sci 28:137-144
2. Popot JL, Engelman DM (1990) Membrane protein folding and oligomerization: the two-stage model. Biochemistry 29:4031-4037
3. Bowie JU (1997) Helix packing in membrane proteins. J Mol Biol 272:780-789
4. Weiner SJ, Kollman PA, Case DA et al (1984) A new force field for molecular mechanical simulation of nucleic acids and proteins. J Am Chem Soc 106:765-784
5. Brooks BR, Bruccoleri RE, Olafson BD et al (1983) CHARMM: a program for macromolecular energy, minimization, and dynamics calculations. J Comput Chem 4:187-217
6. Lii J-H, Allinger NL (1991) The MM3 force field for amides, polypeptides and proteins. J Comput Chem 12:186-199
7. Allinger NL, Chen K, Lii J-H (1996) An improved force field (MM4) for saturated hydrocarbons. J Comput Chem 17:642-668
8. Halgren TA (1996) Merck molecular force field. IV. Conformational energies and geometries. J Comput Chem 17:587-615
9. Jorgensen WL, Maxwell DS, Tirado-Rives J (1996) Development and testing of the OPLS all-atom force field on conformational energetics and properties of organic liquids. J Am Chem Soc 118:11225-11236
10. Damm W, Frontera A, Tirado-Rives J, Jorgensen WL (1997) OPLS all-atom force field for carbohydrates. J Comput Chem 18:1955-1970
11. Rizzo RC, Jorgensen WL (1999) OPLS all-atom model for amines: resolution of the amine hydration problem. J Am Chem Soc 121:4827-4836
12. OPLS-aa force field parameters at http://egad.berkeley.edu/EGAD_manual/EGAD/examples/energy_function/ligands/oplsaa.txt
13. Martin MG (2006) Comparison of the AMBER, CHARMM, COMPASS, GROMOS, OPLS, TraPPE and UFF force fields for prediction of vapor-liquid coexistence curves and liquid densities. Fluid Phase Equilib 248:50-55
14. Sourina O, Torres J, Wang J (2008) Visual haptic-based biomolecular docking. Proc of 2008 Int Conf on Cyberworlds, China, 22-24 Sept 2008, pp 240-247
15. Wei L, Sourin A, Sourina O (2007) Function-based haptic interaction in Cyberworlds. Proc of IEEE 2007 Int Conf on Cyberworlds, Germany, 24-26 Oct 2007, pp 225-232
16. PDB - Protein Data Bank, Brookhaven National Laboratory at http://www.rcsb.org/pdb/
17. Yin H, Slusky JS, Berger BW, Walters RS, Vilaire G, Litvinov RI, Lear JD, Caputo GA, Bennett JS, DeGrado WF (2007) Science 315:1817-1822

Author: Olga Sourina
Institute: Nanyang Technological University
Street: 50 Nanyang Ave
City: Singapore
Country: Singapore
Email: [email protected]
A Coupled Soft Tissue Continuum-Transient Blood flow Model to Investigate the Circulation in Deep Veins of the Calf under Compression K. Mithraratne1, T. Lavrijsen2 and P.J. Hunter1 1
2
Auckland Bioengineering Institute, University of Auckland, Private Bag 92019, Auckland, New Zealand 2 Department of Biomedical Engineering, Technische Universiteit Eindhoven, PO Box 513, Eindhoven, The Netherlands
Abstract — A coupled computational model of a 3D soft tissue continuum and a one-dimensional transient blood flow network is presented in this paper. The primary aim of the model is to investigate the reduction in vessel cross-section area and the resulting vessel wall shear stresses (WSS) in response to compression applied on the calf. Application of external compression on the lower leg is a commonly used prophylaxis against thrombus formation in deep veins. The soft tissue continuum model is a tri-cubic Hermite finite element mesh representing all the muscles, skin and subcutaneous fat in the calf, treated as incompressible with homogeneous isotropic properties. The deformed state of the soft tissue due to the applied compression is obtained by solving large (non-linear) deformation mechanics equations using the Galerkin finite element method. The geometry of the main deep vein network is represented by a 1D cubic Hermite finite element mesh. The flow computational model consists of the 1D Navier-Stokes equations and a non-linear constitutive equation describing the vessel radius-transmural pressure relationship. Once compression is applied, the transmural pressure is computed as the difference between the fluid and soft tissue hydrostatic pressures. The latter arises due to the incompressibility of the soft tissue material. The transient flow governing equations are solved using the MacCormack finite difference method. The geometry of both the soft tissue continuum and the vein network is anatomically based and was developed using data derived from magnetic resonance images (MRI). Simulation results from the computational model show reasonably good agreement with results reported in the literature on the degree of deformation in vein cross-section area estimated using MRI.

Keywords — Deep veins, blood flow mechanics, soft tissue mechanics.
I. INTRODUCTION

Deep vein thrombosis (DVT), or the formation of thrombi in the deep veins of the lower limbs, is a common problem in hospitalized patients. It often remains clinically unapparent and resolves without intervention. However, it may lead to other complications such as chronic venous insufficiency and pulmonary embolism. The latter has been estimated to cause about 10% of all hospital deaths [1]. DVT is also known as 'economy class syndrome' as it is linked to prolonged seated immobility on long-haul flights [2,3].
It is believed that there are three thrombogenic factors involved in DVT: stasis, hypercoagulability and injury to the vessel wall [4]. Two types of prophylaxis, medicinal anticoagulants (e.g. heparin) and mechanical methods (compression), are generally used for DVT. Most mechanical methods act on the calf and can be further divided into static and intermittent compression. Of the two compression methods, simplicity and low cost have made the static method more attractive, especially for concerned air travelers. Compression stockings are commonly used for static compression and usually apply uniform pressure over the entire calf. In intermittent compression, inflatable cuffs are wrapped around the patient's calf and periodically inflated using a pump. Despite several clinical and biological studies [5-8] showing the efficacy of the compression method, the mechanism by which it acts is still not well understood. External compression is generally thought to cause a reduction in deep vessel cross-section area, which in turn results in increased flow velocities. Another hypothesis is based on the haemodynamic vessel WSS and its effect on the degree of stimulus for endothelial cell activation [8-9]. It is believed that shear stress and cyclic strain can influence the release of tissue plasminogen activator (t-PA) and its gene expression. t-PA then converts plasminogen to active plasmin, which dissolves fibrin and prevents thrombus formation. A number of studies [4,10] have looked at modeling external compression and the resulting blood flow in deep veins. All these models, however, have either employed images (e.g. MRI) directly [4,10-12] or used the deformation of 2D continuum models (plane strain analysis) of the surrounding soft tissue structures to reconstruct the deformed (collapsed) geometry of the vessels.
The computational model developed in this study is based on a 3D soft tissue continuum model undergoing large deformations (finite elasticity) due to applied external compression, with a 1D flow network of the major deep vessels embedded in it. Coupling between the fluid and the soft tissue is achieved via the vessel wall constitutive equation that describes the relationship between the vessel radius and the transmural pressure. The latter is the difference between the fluid pressure and the soft tissue hydrostatic pressure, which arises due to the incompressibility of the soft tissue material.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1878–1882, 2009 www.springerlink.com
II. MATERIALS AND METHODS

A. Imaging and construction of model geometries

The right leg of a healthy 21-year-old male volunteer was imaged using a 1.5 T MRI scanner (Siemens Magnetom Avanto). The subject was placed in a prone position to minimize compression of the calf, which would otherwise cause deformed or collapsed veins. A multi-slice 2D time-of-flight sequence was used to enhance the contrast between veins and arteries. The images were acquired with a spacing of 5.36 mm and the slice thickness was set at 8 mm. The Bioengineering modeling software CMISS [13] was used to manually segment the MRI images and derive the necessary data for the construction of the model geometries. A tri-linear Lagrange finite element (FE) mesh representing the 3D geometry of the soft tissue continuum, consisting of skin, subcutaneous fat and muscles, was first created and fitted to the digitized data to obtain a tri-cubic Hermite FE mesh [14]. Fig. 1 depicts the fitted soft tissue mesh.
Only the vessels with good contrast relative to the surrounding soft tissue were digitized. These included the popliteal vein (PV), the posterior tibial vein (PTV), the lateral posterior tibial vein (LPTV) and the medial and lateral peroneal veins (MPV, LPV). Furthermore, on each image, four data points were created to define each vessel boundary. The position coordinates of the vessel on each image were then calculated by taking the average of these points. The mean radius of the un-collapsed vessel was also inferred using the same points. A 1D cubic Hermite FE mesh was then created by fitting to the data so obtained. The mean un-collapsed (reference) radius of the vessel was also fitted as a cubic field along the 1D geometry. The fitted FE mesh of the flow network with the radius field is shown in Fig. 2.

B. Soft tissue model

The deformation of the soft tissue continuum in response to the applied compression (pressure) on the calf can be obtained by solving the static Cauchy equation,

∂σ_ij/∂x_j + ρ_s b_j = 0,   i, j = 1..3   (1)
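The per-slice vessel centre and mean radius computed from the four digitized boundary points can be sketched as follows (the coordinates are made-up values in mm, not the study's data):

```python
import numpy as np

# Four digitized points on one vessel boundary in a single MRI slice (mm)
pts = np.array([[10.0, 5.0], [14.0, 9.0], [10.0, 13.0], [6.0, 9.0]])

centre = pts.mean(axis=0)                             # vessel position on this slice
radius = np.linalg.norm(pts - centre, axis=1).mean()  # mean un-collapsed radius
```

Repeating this on every slice yields the centreline points and radius samples to which the 1D cubic Hermite mesh and radius field are fitted.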
where σ_ij are the Cauchy stress tensor components, x_j are the spatial coordinates, b_j are the body force components and ρ_s is the soft tissue density. The mechanical characteristics of the soft tissue were modeled as an isotropic, incompressible Mooney-Rivlin material with homogeneous properties. Thus, the constitutive equation of the soft tissue material can be described by a strain energy density function (SEDF) as follows,

w = c_10 (I_1 − 3) + c_20 (I_1 − 3)² + p (I_3 − 1)   (2)

where w is the SEDF and I_1 and I_3 are invariants of the right Cauchy-Green deformation tensor. The incompressibility constraint I_3 = 1 is appended to the SEDF via a Lagrange multiplier, p (hydrostatic pressure). As mentioned earlier, the soft tissue hydrostatic pressure acts on the external surface of the vessel wall. The hydrostatic pressure was interpolated using tri-linear Lagrange basis functions. The material parameters used in eq. (2) were 10.0 kPa for both c_10 and c_20 [15]. Boundary conditions for the problem were prescribed as follows. All nodal degrees of freedom (dofs) of the inner surface, where the soft tissue is in contact with bones, were fixed. A uniform traction (compression) of 2.67 kPa (20 mmHg) was applied on the outer surface to mimic the action of static compression. The compression pressure of 2.67 kPa was chosen to fall within the range of values used in the experiments reported in reference [11].

Fig. 1 Fitted tri-cubic Hermite FE mesh of soft tissue continuum in the calf

Fig. 2 (a) Fitted 1D cubic FE mesh of the flow network (b) Radius field (cubic Hermite) in the flow network (legend: 1.4 mm, 2.4 mm, 3.4 mm)
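A direct evaluation of the SEDF in eq. (2) with the quoted parameters can be sketched as below (an illustration, not CMISS code):

```python
def sedf(I1, I3, p, c10=10.0, c20=10.0):
    """Strain energy density (kPa) of eq. (2); p is the Lagrange multiplier
    (hydrostatic pressure) enforcing the incompressibility constraint I3 = 1."""
    return c10 * (I1 - 3.0) + c20 * (I1 - 3.0) ** 2 + p * (I3 - 1.0)

# Undeformed reference state: I1 = 3, I3 = 1 gives zero stored energy
# regardless of the hydrostatic pressure
w0 = sedf(3.0, 1.0, 5.0)
```

Note that p contributes nothing when the constraint I3 = 1 is satisfied; in the FE solution it is an additional unknown field determined by the incompressibility condition.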
Eq. (1) was numerically solved using the Galerkin FE method for large (non-linear) deformations of the soft tissue continuum within CMISS [13].

C. Flow model

The blood flow in veins was modeled as an incompressible, homogeneous, Newtonian fluid in axisymmetric laminar flow in a compliant tube. It was further assumed that radial and circumferential flows are negligible compared with the flow in the axial direction and that the axial velocity has a parabolic variation in the radial direction [16]. With these assumptions, the reduced form of the Navier-Stokes equations (continuity and momentum) can be written as,

∂R/∂t + V ∂R/∂x + (R/2) ∂V/∂x = 0   (3)

∂V/∂t + (2α − 1) V ∂V/∂x + 2(α − 1) (V²/R) ∂R/∂x + (1/ρ_f) ∂P/∂x + [2να/(α − 1)] V/R² = 0   (4)
III. RESULTS AND DISCUSSION
0
where V is the mean velocity of the profile, R is the vessel radius, P is the fluid pressure and Uf and Q are fluid density and kinematic viscosity respectively. D is the velocity profile parameter. The velocity profile is given by,
u
fully explicit scheme and possesses second-order spatial and temporal accuracy. The following values were used for fluid density and kinematic viscosity: Uf= 1050 kg/m3 and Q = 3.2 mm2/s. The flow parameter, D was set at 1.1. A range of flow rates reported in the literature was prescribed as inlet boundary conditions. A number of studies have reported the mean popliteal venous flow in the horizontal position: 1.7 ml/s [19], 2.2 ml/s [20] and 3.8 ml/s [21]. Since the anterior tibial vessels and medial posterior tibial vessels were not included in the flow net work, it was assumed that the inlet flow to be half of the reported values. The fluid pressure was prescribed as the boundary condition at the outlet. It was also assumed that the pressure at the outlet (popliteal vein) is equal to the femoral venous pressure. The average of the femoral venous pressure (1.3 kPa) reported in the literature [22-24] was used in simulations. All flow simulations were performed by linearly increasing the flow from zero to the rate required.
J §J 2· ª § r · º ¸¸.V .«1 ¨ ¸ » and J ¨¨ © J ¹ «¬ © R ¹ »¼
2 D D 1
The network geometry was reconstructed with respect to the deformed soft tissue continuum to examine the degree of reduction in vessel cross-section area caused by gross deformation of the tissue continuum (without taking the hydrostatic pressure effects into account). The mean radius change of the venous network due to the gross deformation was found be insignificant (less than 2%).
(5) 9 kPa
10.5 kPa
12 kPa
where u is the axial velocity at r. The constitutive equation of the vein wall which describes the mechanical characteristics (vessel radius– transmural pressure relationship) was defined as a cubic § R · polynomial of radius ratio ¨¨ ¸¸ by fitting the experimental © R0 ¹ data [17] obtained from canine femoral vein, Ptr
P p
§ R a¨¨ © R0
3
· § R ¸ b¨ ¸ ¨R ¹ © 0
2
· § R ¸ c¨ ¸ ¨R ¹ © 0
· ¸ ¸ ¹
(6)
where Ptr is the transmural pressure (difference between fluid and hydrostatic pressures), R0 is the un-collapsed (reference) radius and a, b & c are the polynomial fitting coefficients. The above system of equations was solved numerically using the McCormack’s finite difference method [18]. This finite difference scheme is a two-step (corrector-predictor),
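Once the polynomial coefficients are known, the cubic tube law of the flow model above can be inverted numerically at each grid point to recover the radius ratio from the local transmural pressure. The Python sketch below uses bisection; the coefficients a, b and c are hypothetical placeholders, since the values fitted to the canine femoral vein data [17] are not reproduced in the text.

```python
# Sketch: invert the cubic tube law Ptr = a*x**3 + b*x**2 + c*x, where
# x = R/R0, by bisection. The coefficients below are hypothetical
# placeholders chosen so the polynomial is monotonic; the actual values
# are fitted to canine femoral vein data [17].

def radius_ratio(p_tr, a=5.0, b=-3.0, c=1.0, lo=1e-6, hi=3.0, tol=1e-10):
    """Radius ratio x = R/R0 at transmural pressure p_tr [kPa]."""
    f = lambda x: a * x**3 + b * x**2 + c * x - p_tr
    if f(lo) * f(hi) > 0:
        raise ValueError("root not bracketed by [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

A lower transmural pressure yields a smaller radius ratio, which is the mechanism by which the externally raised tissue pressure narrows the vessel in the coupled model.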
Fig. 3 Hydrostatic pressure distribution of the calf in the deformed state (colour scale: 9-12 kPa)

The hydrostatic pressure distribution in response to the application of external compression is depicted in Fig. 3. The distribution was obtained by fitting the hydrostatic pressure as a tri-linear FE field. The fitted field was then interpolated to determine the external pressure acting on the vessel wall at each finite difference grid point of the venous flow network.
A Coupled Soft Tissue Continuum-Transient Blood flow Model to Investigate the Circulation in Deep Veins of the Calf …
The mean cross-sectional area of the flow network at various flow rates under the externally applied compression is given in Table 1.

Table 1 Change in vessel mean cross-sectional area

Flow rate [ml/s]:  0.25  0.50  1.00  1.50  2.50
Mean area [mm²]:   2.4   2.7   3.2   3.5   4.2
% Reduction:       84    82    79    77    72

The percentage reduction is based on a mean area of 15 mm² at 1.0 ml/s.

The WSS was computed using Eq. (5) (the flow profile equation); it is directly proportional to the axial velocity gradient in the radial direction at the wall, (∂u/∂r)|r=R. Table 2 gives the WSS at the various flow rates simulated.

Table 2 Change in vessel wall shear stress (WSS)

Flow rate [ml/s]:            0.25  0.50  1.00  1.50  2.50
Mean WSS [Pa], uncompressed: 0.15  0.30  0.60  0.90  1.42
Mean WSS [Pa], compressed:   2.20  3.76  6.12  7.96  10.8

The reduction in cross-sectional area at 1.0 ml/s based on an MRI study [10] has been reported to be about 62% ± 10%. The value predicted in the present study compares reasonably well with this experimental result; the overestimation could be due to the soft tissue constitutive properties. The WSS estimated from flow simulations based on the reconstructed vessel geometry at 1.0 ml/s in the same MRI study is 1.2 Pa in the deep veins. The discrepancy could be attributed to (a) the soft tissue mechanical properties and (b) the flow profile: it can easily be shown that the flow profile parameter α determines the gradient of the axial velocity at the wall, which is directly proportional to the WSS.

IV. CONCLUSIONS

A coupled computational model of 3D soft tissue mechanics and a 1D transient flow network was developed. The model was used to investigate the reduction in vessel cross-sectional area and the resulting flow and wall shear stress. The coupling of the soft tissue and the fluid was achieved via the vessel constitutive equation, which is a function of the soft tissue hydrostatic pressure and the fluid pressure.

ACKNOWLEDGMENTS

The work presented in this paper was funded by the Foundation for Research, Science and Technology of New Zealand under project no. 9077/3604215.

REFERENCES

1. Cohen AT, Alikhan R (2001) Prophylaxis of venous thromboembolism in medical patients. Curr Opin Pulm Med 7:332-337
2. Adi Y, Bayliss S et al. (2004) The association between air travel and deep vein thrombosis: systematic review and meta-analysis. BMC Card Dis 4:7
3. Nissen P (1997) The so-called "economy class" syndrome or travel thrombosis. Vasa 26:239-246
4. Downie SP, Raynor SM et al. (2008) Effects of elastic compression stockings on wall shear stress in deep and superficial veins of the calf. Am J Physiol Heart Circ Physiol 294:H2112-H2120
5. Agu O, Hamilton G, et al. (1999) Graduated compression stockings in the prevention of venous thromboembolism. Br J Surg 86:992-1004
6. Salzman EW, McManama GP, et al. (1987) Effect of optimization of hemodynamics on fibrinolytic activity and antithrombotic efficacy of external pneumatic calf compression. Ann Surg 206(5):636-641
7. Morris RJ, Woodcock JP (2004) Evidence-based compression: prevention of stasis and deep vein thrombosis. Ann Surg 239:162-171
8. Dai G, Tsukurov O, et al. (2000) An in vitro cell culture system to study the influence of external pneumatic compression on endothelial function. J Vasc Surg 32(5):977-987
9. Malek AM, Alper SL et al. (1999) Hemodynamic shear stress and its role in atherosclerosis. JAMA 282(21):2035-2042
10. Downie SP, Firmin DN et al. (2007) Role of MRI in investigating the effects of elastic compression stockings on the deformation of the superficial and deep veins in the lower leg. J MRI 26:80-85
11. Dai G, Gertler JP, et al. (1999) The effects of external compression on venous blood flow and tissue deformation in the lower leg. J Biomech Eng 121:557-564
12. Narracott AJ, John GW, et al. (2007) Influence of intermittent compression cuff design on calf deformation: computational results. IEEE EMBS Proc., Lyon, France, 2007, pp 6334-6337
13. http://www.cmiss.org
14. Fernandez JW, Mithraratne K, et al. (2004) Anatomically based geometric modelling of the musculoskeletal system and other organs. Biomech Model Mechan 2:139-155
15. Meier P, Blickhan R (2000) Skeletal muscle mechanics: from mechanism to function. John Wiley & Sons Ltd, pp 207-224
16. Barnard ACL, Hunt WA, et al. (1966) A theory of fluid flow in compliant tubes. Biophys J 6:717-724
17. Dobrin PB, Littooy FN, et al. (1988) Mechanical and histological changes in canine vein grafts. J Surg Res 44:259-265
18. Anderson JD Jr (1995) Computational fluid dynamics: the basics with applications. McGraw-Hill, Singapore
19. Knaggs AL, Delis KT, et al. (2005) Perioperative lower limb venous haemodynamics in patients under general anaesthesia. Br J Anaesth 94:292-295
20. Morita H, Abe C, et al. (2006) Neuromuscular electrical stimulation and an Ottoman-type seat effectively improve popliteal venous flow in a sitting position. J Physiol Sci 56(2):183-186
21. Lurie F, Awaya DJ, et al. (2003) Hemodynamic effect of intermittent pneumatic compression and the position of the body. J Vasc Surg 37:137-142
22. Arnoldi CC, Linderholm H, et al. (1972) Venous engorgement and intraosseous hypertension in osteoarthritis of the hip. J Bone Joint Surg 54B:409-421
23. Andersson LE, Jogestrand T, et al. (2005) Are there changes in leg vascular resistance during laparoscopic cholecystectomy with CO2 pneumoperitoneum? Acta Anaesthesiol Scand 6:360-365
24. Beebe DS, McNevin MP, et al. (1993) Evidence of venous stasis after abdominal insufflation for laparoscopic cholecystectomy. Surg Gynecol Obstet 176:443-447
Author: K. Mithraratne
Email: [email protected]
Finite Element Analysis of Articular Cartilage Model Considering the Configuration and Biphasic Property of the Tissue

N. Hosoda¹, N. Sakai², Y. Sawae² and T. Murakami²

¹ Graduate School of Systems Life Sciences, Kyushu University, Fukuoka, Japan
² Faculty of Engineering, Kyushu University, Fukuoka, Japan
Abstract — Articular cartilage tissue has a high water content (70-80%) and shows biphasic behaviour in which both solid and fluid properties should be considered. Furthermore, the mechanical behaviour of cartilage is depth-dependent, so it is necessary to consider not only the average tissue properties but also the local ones to explain its mechanical and functional behaviour. Previously, we created a cartilage tissue model incorporating the depth-dependent distribution of Young's modulus and applied a two-dimensional finite element method (FEM) based on biphasic theory [1]. The deformed profile of the depth-dependent Young's modulus model immediately after unconfined compression corresponded to the actual profile, confirming that the Young's modulus is distributed in the depth direction. In contrast, the total load capacity in the FEM analysis was about one order of magnitude lower than the experimental one. Immediately after compression at a high rate, there is not enough time for the interstitial fluid to flow within the cartilage, so the whole tissue, including the fluid, behaves like an elastic body; moreover, polymeric materials increase their stiffness at higher strain rates. The apparent elastic modulus is therefore assumed to be larger than the equilibrium Young's modulus. While the total deflection is maintained after compression, interstitial fluid flow gradually occurs and the stress relaxes, with a decrease of the apparent elastic modulus; after sufficient stress relaxation, the apparent elastic modulus becomes the equilibrium Young's modulus. We consider this to be connected to the configuration of the cartilage tissue. The aim of this study is to consider the configuration of the tissue, in addition to its biphasic character, in the mechanical behaviour of cartilage. We created a cartilage tissue model incorporating spring elements, which express the function arising from the collagen fibres, and a depth-dependent Young's modulus distribution. We then analyzed unconfined compression and compared the experimental results with the FEM analysis.
Keywords — Articular cartilage, Compressive deformation, Finite element method, Biphasic analysis, Spring element

I. INTRODUCTION

The human synovial joint possesses superior load-bearing and articulating functions with very low friction and wear, and articular cartilage tissue plays an important role in maintaining this function throughout life. The arthrodial joint forms a connection between two or more bones and functions by supporting and transmitting loads. Cartilage has a high water content (70-80%) and shows biphasic behaviour in which both solid and fluid properties should be considered. One of the representative diseases of the synovial joints, particularly in elderly people, is osteoarthritis. Osteoarthritis causes degeneration and destruction of the articular cartilage, which leads to impaired mobility: degenerated cartilage loses its load-buffering capacity, leading to bone deformation, the formation of osteophytes, and subchondral bone induration and thickening. Especially for the joints of the lower extremities, many reports link age and obesity, or external injury and occupation, to the occurrence of osteoarthritis; consequently, mechanical factors are believed to play a crucial role in its pathogenesis. As a new treatment method, tissue engineering has attracted attention in recent years; at the present stage, however, the mechanical function of tissue-engineered cartilage has not reached the level of native cartilage. Mechanical stimulation during the culture process is known to promote extracellular matrix synthesis and lead to better function, so it is necessary to study the optimum conditions of mechanical stimulation. A better understanding of the mechanical and functional environment around chondrocytes is crucial to clarifying the mechanism of osteoarthritis pathogenesis, assessing the mechanical properties of regenerated cartilage, and determining the optimum mechanical stimulation for chondrocyte metabolism.

A. Articular cartilage
Articular cartilage is composed of chondrocytes and extracellular matrix. The extracellular matrix, mainly proteoglycan and collagen, is produced by the chondrocytes and provides a scaffold for the cells while supporting the load on the cartilage. Proteoglycan is highly hydrophilic; it can therefore store a large amount of water and contributes to the viscoelasticity. The hydrated proteoglycan is distended but confined by the collagen network, so the proteoglycan supports the compressive load while the collagen is placed under tension. Cartilage is categorized into the surface, middle and deep zones, and the calcified zone, according to the orientation of the collagen fibres, the morphology of the cells, and the presence or absence of calcareous deposits (Fig. 1); consequently, the mechanical properties differ with location.
Chwee Teck Lim, James C.H. Goh (Eds.): ICBME 2008, Proceedings 23, pp. 1883–1887, 2009 www.springerlink.com
Fig. 1 Cross-section diagram of cartilage tissue (from the articular surface: surface zone, middle zone, deep zone, tidemark and calcified zone; chondrocytes and collagen fibres indicated)

B. The behaviour of cartilage tissue under mechanical stimuli

The structure and properties of articular cartilage are depth-dependent. If a load acts on the tissue rapidly, the tissue does not have enough time to exude the interstitial fluid and thus behaves much like an elastic body, whereas under a slowly applied load the interstitial fluid is gradually exuded through the tissue. In compression, the permeability of cartilage decreases with increasing compressive strain [2]. When cartilage is compressed at a high rate and then held at a fixed position, a peak stress followed by stress relaxation is observed, as shown in Fig. 2. In our unconfined compression tests at 10-15% strain and 1000 μm/s, the peak stress ranged from 0.71 MPa to 3.76 MPa. To understand mechanotransduction in the chondrocytes within cartilage, the actual time-dependent and depth-dependent stress and strain around the chondrocytes should be clarified.
Fig. 2 Compressive behaviour of articular cartilage under a constant compressive deflection (stress [MPa] and position [mm] versus time [s])

C. Young's modulus distribution depending on the depth

Previously, we performed unconfined compression tests of articular cartilage at a fixed total deformation in a compression tester located on the stage of a CLSM. The local compressive strain was calculated from the change in distance between corresponding living cells, stained with 1 μL Calcein-AM (Molecular Probes) in 1000 μL PBS, in the fluorescence images taken before compression and at equilibrium. We derived the Young's modulus distribution function using the inverse relationship between strain and Young's modulus for nearly uniaxial compression. In this study we applied a compression velocity of 200 μm/s and a compressive strain of 10%. The mean value of the average Young's modulus E0 of the solid phase at equilibrium is 0.37 MPa. The depth dependence of the Young's modulus is given by Eq. (1):

E(x) = 3.66 / (46.2 e^(−6.53 x) + 2.84)    (1)

We normalized the Young's modulus and plotted it against the relative position x in the depth direction in Fig. 3; the articular surface side is depicted as x = 0 and the deep-zone end as x = 1.

Fig. 3 Young's modulus distribution (normalized Young's modulus versus relative position)
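The depth-dependent modulus can be sampled layer by layer for the finite element model described below; the Python sketch does this for 50 layers, assuming Eq. (1) has the sigmoidal form E(x) = 3.66/(46.2 e^(−6.53x) + 2.84) MPa (an assumed reading of the partly garbled printed equation, to be checked against the original paper).

```python
import math

# Sketch: sample an assumed depth-dependent Young's modulus over 50
# layers (x = 0 at the articular surface, x = 1 at the deep-zone end).
# The sigmoidal form below is an assumed reading of Eq. (1).

def young_modulus(x):
    """Young's modulus [MPa] at relative depth x in [0, 1]."""
    return 3.66 / (46.2 * math.exp(-6.53 * x) + 2.84)

LAYERS = 50
E_layers = [young_modulus((i + 0.5) / LAYERS) for i in range(LAYERS)]
# The modulus increases monotonically with depth: the tissue is softest
# at the articular surface, consistent with larger surface strains.
```

Each of the 50 values would then be assigned to one layer of the FE mesh, matching the layered assignment of the Young's modulus used in the model.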
II. FINITE ELEMENT ANALYSIS OF ARTICULAR CARTILAGE MODEL AND BOUNDARY CONDITIONS

A. Finite element model

General-purpose finite element analysis software (ABAQUS v6.5) was used to analyze the biphasic model [3]. The articular cartilage model is a rectangle 2 mm high and 3 mm wide. We used a two-dimensional model as a first step toward planned multi-scale analyses including the chondrocytes; the characteristic behaviours obtained from the two-dimensional model (deformed profile, Mises stress and pore pressure) were verified against a related study using a three-dimensional model, apart from small differences in the stress and pressure values. We modelled the tissue using the biphasic fluid
element (CPE4P). Each element is 40 μm × 40 μm and the total number of elements is 3750. In this model, the upper 10% of the cartilage tissue is the surface zone, the lower 20% is the deep zone, and the middle zone lies between them. The Young's modulus of the model is described by Eq. (1): the articular cartilage model was divided into 50 layers in the depth direction and the corresponding Young's modulus was assigned to each layer. The void ratio e is provided to ABAQUS by the following equation:
e = dVv / (dVg + dVt)    (2)
where dVv is the volume of voids, dVg is the volume of grains of solid material, and dVt is the volume of trapped wetting liquid. In this study we assumed that the 80% water content of the cartilage tissue corresponds to the volume of voids and that the remaining 20% is the sum of the volume of the solid grains and the volume of trapped wetting liquid; the void ratio is therefore 4. The Poisson's ratio and the permeability of the solid phase were assumed constant, using the literature data shown in Table 1 [4]. The analysis was conducted for a test duration of 300 s.

Table 1 Material properties

Young's modulus of solid phase, E0 [MPa]: 0.74
Poisson's ratio of solid phase, ν: 0.125
Permeability, k [m⁴/N·s]: 2.0 × 10⁻¹⁵

B. Spring elements representing the function of the collagen fibres

The structure of articular cartilage tissue is inhomogeneous and anisotropic, and locally different properties and configuration are believed to develop the function of the cartilage. We focused on the function of the collagen fibre network: when the proteoglycan supports a compressive load, the collagen fibres are forced to support a tensile load, because the collagen network confines the proteoglycan. We therefore created a cartilage model that represents the function of the collagen fibres using nonlinear spring elements. As shown in Fig. 4, we connected the nodes of the biphasic elements in the horizontal direction by nonlinear spring elements (SPRINGA) and assumed that these spring elements act only in tensile deformation [5].

Fig. 4 The finite elements and the arrangement of the spring elements (biphasic elements connected horizontally by spring elements)

C. Boundary conditions

To compare the experimental results with the finite element analysis, the boundary conditions for the FEM were set to correspond to the compression test. The FEM simulation of the compression test was performed at a compression velocity of 200 μm/s and a compression ratio of 10%; thus the amount of compression, the compression velocity and the time to reach the prescribed deflection were 0.2 mm (10% of the thickness), 200 μm/s and 1.0 s, respectively. We assumed a rigid plate on top of the cartilage tissue model and applied the compressive displacement through it: the nodes of the top surface were compressed by a displacement of 0.2 mm by the plate, while their displacement parallel to the surface was left unconstrained. The nodes of the lower surface were fixed, with zero displacement in both the x and y directions. To simulate the impermeable compression plates of the compression tester, water was assumed to seep out only through the right and left sides of the rectangular model.

III. RESULTS
In this study we created three cartilage tissue models combining nonlinear spring elements with the biphasic properties of the tissue, analyzed them, and compared the experimental results with the FEM analyses. We first considered the total load capacity of the model, which corresponds to the reaction force on the rigid plate; the experimental total load capacity is the value measured by the load cell. To match the total load capacity of the FEM simulation based on biphasic theory to the experimental value during compression, we defined a nonlinear behaviour for the spring elements (Spring 1 in Fig. 5); this model was named Model 1. In this model the average stress during stress relaxation was too high, as shown in Fig. 6. The spring property during stress relaxation was therefore changed to decrease gradually from Spring 1 to a softer one, as shown in Fig. 5; this model was named Model 2. The total load capacity of Model 2 corresponded to the experimental one.
Fig. 5 The behaviour of the nonlinear spring elements (force [N] versus displacement [mm]; legend: Spring 1)

Fig. 6 Total load capacity (stress [MPa] versus time [s] for the experimental result, the model without springs, Model 1 and Model 2)

However, immediately after compression the deformed profiles of Models 1 and 2 were barrelled and did not agree with the experimental one (Fig. 7). Polymeric materials increase their stiffness at higher strain rates, so the apparent Young's modulus is assumed to be larger than the equilibrium Young's modulus. We therefore adopted an instantaneous elastic modulus, defined as the instantaneous ratio of stress to strain, created the corresponding model, and named it Model 3. The deformed profile of Model 3 immediately after compression corresponded to the actual profile, as shown in Fig. 7.

Fig. 7 Profile of compressed cartilage immediately after compression: (a) cartilage specimen, (b) Model 1, (c) Model 3

IV. DISCUSSION

When the nonlinear spring elements were not used, the peak stress in the FEM model was about one order of magnitude lower than the experimental one; with the nonlinear spring elements, the total load capacity of the model during compression corresponded to the experimental one. It is therefore thought that the collagen network plays an important part in stress development during compression. However, the stress relaxation behaviour of Model 1 did not agree with the experiment: the stress relaxation curve changed only moderately, because the nonlinear spring elements supported the load and the flow of interstitial fluid became slow. In actual cartilage tissue, the tensile deformation of the collagen network is thought to loosen as the fluid flow increases, so that the apparent spring constant decreases. We therefore created the model with a slowly decreasing spring constant; as a result, the total load capacity of the FEM model corresponded to the experimental one from the start of the compression test to equilibrium. To reflect the depth dependence of the Young's modulus distribution in the deformed profile immediately after compression, both the nonlinear spring constant and the instantaneous elastic modulus need to be changed.

V. CONCLUSIONS
In this study we created a cartilage tissue model considering the configuration and biphasic properties of the tissue and analyzed it using biphasic theory. Comparing the experimental results with the FEM analyses, we found the following. (1) With the nonlinear spring elements, the FEM value of the total load capacity during compression corresponded to the experimental one. (2) In this biphasic model, matching the total load capacity of the FEM simulation to the experimental value during stress relaxation requires controlling the behaviour of the nonlinear spring elements. (3) The deformed profile immediately after compression corresponded to the actual profile when both the nonlinear spring constant and the instantaneous elastic modulus were controlled.
REFERENCES

1. Hosoda N, Sakai N, Sawae Y, Murakami T (2008) Depth-dependence and time-dependence in mechanical behaviors of articular cartilage in unconfined compression test under constant total deformation. Journal of Biomechanical Science and Engineering 3:221-234
2. Jurvelin JS, Buschmann MD, Hunziker EB (2003) Mechanical anisotropy of the human knee articular cartilage in compression. Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 217:215-219
3. Wu JZ, Herzog W, Epstein M (1998) Evaluation of the finite element software ABAQUS for biomechanical modeling of biphasic tissues. Journal of Biomechanics 31:165-169
4. Guilak F, Mow VC (2000) The mechanical environment of the chondrocyte: a biphasic finite element model of cell-matrix interactions in articular cartilage. Journal of Biomechanics 33:1663-1673
5. Li LP, Soulhat J, Buschmann MD, Shirazi-Adl A (1999) Nonlinear analysis of cartilage in unconfined ramp compression using a fibril reinforced poroelastic model. Clinical Biomechanics 14:673-682
Author: N. Hosoda
Institute: Graduate School of Systems Life Sciences, Kyushu University
Street: 744 Motooka
City: Nishi-ku, Fukuoka
Country: Japan
Email: [email protected]
Principal Component Analysis of Lifting Kinematics and Kinetics in Pregnant Subjects

T.C. Nguyen¹,², K.J. Reynolds¹

¹ School of Computer Science, Engineering and Mathematics, Flinders University, Australia
² South Australian Movement Analysis Centre, Repatriation General Hospital, Australia
Abstract — Low back pain (LBP) is estimated to affect 50-90% of women during pregnancy, with more than a third of women reporting it as a severe problem that compromises their ability to work during pregnancy and affects normal daily life and sleep patterns. The aim of this study was to investigate the differences in the kinematics and kinetics of lifting in pregnant subjects. Methodology: Fifteen pregnant subjects (age: 32.0 ± 2.3 yrs; height: 165.3 ± 5.5 cm; weight: 71.7 ± 6.4 kg) were tested in the third trimester of pregnancy (32-38 weeks). Twenty-seven retro-reflective markers were placed on various bony landmarks. An eight-camera motion analysis system (VICON 512, VICON, Oxford, UK) was used to record the movements of the body segments, together with synchronised force plate (AMTI, Watertown, MA, USA) parameters, in three dimensions. Eight non-pregnant subjects (age: 29.6 ± 3.1 yrs; height: 167.3 ± 5.6 cm; weight: 65.4 ± 8.4 kg) served as controls. Each subject lifted a 4 kg plastic box (dimensions: 31.5 × 25 × 20 cm), representative of commonly lifted items. The motion of the ankle, knee and hip joints and of the pelvic segment was investigated. Principal component (PC) analysis was applied to 23 kinematic and kinetic variables from each group. The total PC score for each subject was calculated, and a one-tailed Student's t-test was used to test for significant differences between the groups. Results: Significant group differences (p < 0.05) were found. Comparing the EMG reaction times of the erector spinae, gluteus maximus and biceps femoris between the habitually practised side and the non-practised side, the differences between the high- and low-performance groups were not significant (p > 0.05).
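The analysis pipeline described in the abstract (PCA over 23 kinematic and kinetic variables, a total PC score per subject, then a Student's t-test between groups) can be sketched as follows; the single-component projection, the pooled-variance t statistic and any data fed to these functions are illustrative assumptions, not the authors' actual computation.

```python
import statistics

# Sketch of the pipeline described above: project each subject's
# variables onto the first principal component and compare the resulting
# scores between groups with a two-sample t statistic. The
# single-component "total score" is an illustrative assumption.

def pc1_scores(X):
    """Scores of the rows of X on the first principal component,
    found by power iteration on the sample covariance matrix."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(200):  # power iteration toward the leading eigenvector
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(Xc[i][j] * v[j] for j in range(p)) for i in range(n)]

def t_statistic(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (
        (sp2 * (1 / na + 1 / nb)) ** 0.5)
```

With the pregnant and control groups' 23-variable records as the rows of two matrices, `pc1_scores` would yield one score per subject and `t_statistic` the group comparison.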
It was generally inferred that 'stability takes precedence over mobility' across all movements and dancers, so that dance performance (high- versus low-performance groups) can show significant differences (Table 2). This study also compared the order of the complex reaction times of the erector spinae, gluteus maximus and biceps femoris on the habitually practised and non-practised sides; the t-tests within every group showed no significant differences (p > 0.05), indicating that the contraction order of these muscles on the two sides has an apparent sequential relationship. This may be because the number of EMG samples tested was insufficient and the variability between subjects too high; more data would be needed for a more careful and accurate analysis (Table 3). From the comparison of the EMG complex reaction times between the high- and low-performance groups (Table 4), the complex reaction time of the high-performance group (0.61 s) was significantly shorter than that of the low-performance group (0.81 s).

Fig. 5 Voxel-by-voxel analysis of [11C]BF-227 PET images in the comparison between aged normal controls and AD patients (SPM2 analysis, p