Lecture Notes in Control and Information Sciences Editors: M. Thoma · M. Morari
306
Springer Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo
Z. Zenn Bien Dimitar Stefanov (Eds.)
Advances in Rehabilitation Robotics
Human-friendly Technologies on Movement Assistance and Restoration for People with Disabilities
With 249 Figures
Series Advisory Board A. Bensoussan · P. Fleming · M.J. Grimble · P. Kokotovic · A.B. Kurzhanski · H. Kwakernaak · J.N. Tsitsiklis
Editors

Prof. Z. Zenn Bien
Human-Friendly Welfare Robot System Research Center
Department of Electrical Engineering and Computer Science
Korea Advanced Institute of Science and Technology
373-1 Guseong-dong, Yuseong-gu
Daejeon, 305-701
Korea

PhD Dimitar Stefanov, SRCS
Cardiff & Vale NHS Trust
Rehabilitation Engineering Unit
Cardiff, CF5 2YN
UK
and
Institute of Mechanics
Bulgarian Academy of Sciences
Acad. G. Bonchev Street, Block 4
1113 Sofia
Bulgaria
ISSN 0170-8643 ISBN 3-540-21986-2
Springer-Verlag Berlin Heidelberg New York
Library of Congress Control Number: 2004106092

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in other ways, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under German Copyright Law.

Springer-Verlag is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2004
Printed in Germany

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Data conversion by the author. Final processing by PTP-Berlin Protago-TeX-Production GmbH, Berlin
Cover-Design: design & production GmbH, Heidelberg
Printed on acid-free paper 62/3020Yu - 5 4 3 2 1 0
Preface
It is now evident that one of the major application targets of service robots is to serve as assistive devices for the rehabilitation of people with physical disabilities and for elderly people. Rehabilitation robotics (RR) is a relatively young but dynamically developing area of research. Some rehabilitation robots have already left the research laboratories and have become an important part of the everyday lives of a growing number of users in many developed countries. It is expected that, in the near future, rehabilitation robots will become a significant component of welfare service systems worldwide. Initially limited to a small number of relatively simple movement tasks such as object replacement and eating, the applications of rehabilitation robotics, along with various intelligent technologies for movement assistance of people with disabilities, are continuously expanding into new dimensions that aim at improved assistance in many kinds of activities, as well as entertainment, for people with disabilities and for aged people. The intensive development of novel human-machine interfaces, intelligent control algorithms, new materials, and efficient actuators has made it possible to invent and test various advanced design ideas. A common understanding, and the main tendency in rehabilitation robotics design, is that the robots should be human-friendly: the robotic machine and its peripheral devices must be designed so that the user feels comfortable and safe and finds them convenient to use. Recent intelligent robotic devices for movement assistance are often equipped with control strategies that do not impose a high cognitive load on users with severe movement disorders. The idea of organizing this volume was inspired by the 8th International Conference on Rehabilitation Robotics (ICORR 2003), held during April 22-25, 2003 at the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea. The papers presented at the conference naturally reflect the most recent tendencies of R&D in rehabilitation robotics and intelligent assistive technology. With confidence, however, we would like to state that the current book is not just a variant of the conference proceedings. Unlike the papers reported at ICORR 2003, where specific problems and solutions of various subjects were discussed, the chapters of this book contain original, reworked, and generalized material adapted
to the style and objectives of the book. The book contains not only review articles on advanced theoretical ideas in rehabilitation robotics and results from some of the latest projects under development, but also details of new advanced rehabilitation devices that have recently been transferred to industry. A significant part of the book is devoted to the assessment of new rehabilitation technologies and the evaluation of prototype devices with end-users. The safety of rehabilitation robots, historical remarks, and the perspectives of rehabilitation robotics are also discussed. In addition, different from many other books on rehabilitation engineering, the present volume includes a long chapter on robot-assisted neuro-rehabilitation, which is considered one of the latest trends in the area. One of the principal aims of this book is to promote the dissemination of information on the current status of rehabilitation robotics. Our intention was to arrange the book so that it is not just a collection of papers of interest to specialists in a particular area, but rather a book that covers the basics of rehabilitation research and can help beginners, such as students and young researchers, to start their work in this area, or help lecturers who want to introduce their students to the basics of modern rehabilitation technology. To achieve this objective, most of the articles contain a detailed introduction to the problem under discussion and an extended overview of the particular subject matter. The chapters of this book are authored by leading researchers in the field of rehabilitation robotics and represent a large part of the international research community. The book contains 27 chapters, which are grouped into 7 parts. The book begins with an introductory Part 1, devoted to the role of rehabilitation robotics and some important issues of its development; the same part also presents some important milestones in the development of rehabilitation robotics. The chapters included in Part 2 cover three important issues in rehabilitation robotics for the assistance of human movements: conceptions and experimental design, safety issues of rehabilitation robots, and rehabilitation-robot evaluation. Some recent issues in prosthetics and orthotics design are discussed in Part 3. Part 4 is concerned with intelligent wheelchairs, which can be considered special mobile robots designed for indoor user transportation. A recent trend in the design of assistive devices for mobility is mechatronic devices for walking assistance that sense the user's movement intentions and provide gentle gait support, giving the user independence and safety; some examples of such devices are given in Part 5. Part 6 is dedicated to robot-assisted neurorehabilitation; examples of both upper-limb and lower-limb robot-mediated therapy are discussed in that part of the book. The final part of the book (Part 7) discusses the perspectives and trends of rehabilitation robotics.
We trust that the book will provide a comprehensive overview of the field of rehabilitation robotics and will satisfy a large group of readers, including researchers in the field, graduate and postgraduate students, and designers who use RR technology. We believe that the book will become a representative selection of the latest trends and achievements in the area of rehabilitation robotics, and we would be extremely happy if this important goal were achieved. Finally, as the editors of the present volume, we would like to take this opportunity to express our heartiest appreciation to all the authors, who have prepared their chapters with dedication and integrity and contributed to the high standard of this volume. We would also like to thank Dr. Thomas Ditzinger and Ms. Heather King from Springer-Verlag for encouraging us to edit the present book and for their help in arranging this volume.
Daejeon March 2004
Z. Zenn Bien Dimitar Stefanov
Contents
List of Contributors ... XIX

List of Abbreviations ... XXVII

Part I Introduction

1 Advances in Human-Friendly Robotic Technologies for Movement Assistance/Movement Restoration for People with Disabilities
Dimitar Stefanov, Z. Zenn Bien ... 3
1.1 Introduction ... 3
1.2 Areas of the RR Application ... 5
1.2.1 Robotic Systems for Movement Assistance ... 5
1.2.2 Robots for Physical Support and Indoor Navigation ... 7
1.2.3 Robots for Physical Rehabilitation ... 7
1.2.4 Vocational RR ... 8
1.2.5 Emotional Interactive Entertainment Robots ... 9
1.3 Specialized Human-Machine Interface ... 10
1.4 Rehabilitation Robots in the Smart House Design ... 10
1.5 Functional Integration of the Robotic Environment ... 12
1.6 Commercialization of RR ... 13
1.7 Some Issues for Futuristic Intelligent Robotic House Model ... 16
1.8 Concluding Remarks ... 18

2 Rehabilitation Robotics from Past to Present – A Historical Perspective
Michael Hillman ... 25
2.1 Introduction ... 25
2.2 Earliest Work ... 26
2.3 Assistive Robotics ... 27
2.3.1 Fixed Site ... 28
2.3.2 Mobile Robots ... 31
2.3.3 Wheelchair Mounted Manipulators ... 33
2.3.4 Human Machine Interface ... 35
2.4 Mobility ... 35
2.5 Prosthetics and Orthotics ... 36
2.6 Robot Mediated Therapy ... 37
2.7 Robotics in Special Needs Education ... 38
2.8 Robotics in Communications ... 39
2.9 Historical Perspective ... 40
2.10 Commercialisation ... 40
2.11 Alternatives to Robotics in Rehabilitation ... 41
2.12 Conclusions ... 42

Part II Rehabilitation Robots for Assistance of Human Movements
II.1 Conceptions and Experimental Design

3 Toward a Human-Friendly User Interface to Control an Assistive Robot in the Context of Smart Homes
Mounir Mokhtari, Mohamed Ali Feki, Bessam Abdulrazak, Bernard Grandjean ... 47
3.1 Introduction ... 47
3.2 MANUS Assistive Robot ... 48
3.3 Networking Technologies and Developments ... 49
3.4 General Software Architecture ... 50
3.5 User Interface Adaptation ... 51
3.6 Implementation of a Path Planner ... 52
3.6.1 Gesture Library ... 52
3.6.2 Obstacles Avoidance ... 53
3.7 Towards the Co-autonomy Concept ... 54
3.8 Conclusion ... 55

4 Welfare-Oriented Service Robotic Systems: Intelligent Sweet Home & KARES II
Z. Zenn Bien, Kwang-Hyun Park, Dae-Jin Kim, Jin-Woo Jung ... 57
4.1 Introduction ... 57
4.2 Intelligent Sweet Home ... 59
4.2.1 Questionnaire Survey ... 59
4.2.2 Assistive Systems ... 62
4.2.3 Intelligent Man-Machine Interfaces ... 67
4.3 KARES II System ... 72
4.3.1 Questionnaire Survey ... 73
4.3.2 Overall Structure ... 74
4.3.3 Soft Robotic Arm with Visual Servoing ... 77
4.3.4 Intelligent Human-Robot Interfaces ... 79
4.3.5 User Trials ... 82
4.4 Concluding Remarks ... 89

5 “FRIEND” – An Intelligent Assistant in Daily Life
O. Kouzmitcheva, C. Martens, A. Pape, H. She, I. Volosyak, A. Gräser ... 95
5.1 Basic Concepts and Hardware ... 95
5.1.1 The FRIEND Project ... 95
5.1.2 Hardware Structure of FRIEND ... 96
5.1.3 Multi-layered Control Architecture of FRIEND ... 97
5.2 Application and Control ... 100
5.2.1 The “Beverage Serving” Task ... 101
5.2.2 Obstacle Avoidance ... 109
5.2.3 Task Planning ... 111
5.2.4 Demonstration-Based Programming ... 119
5.3 Summary ... 124

6 GIVING-A-HAND System: The Development of a Task-Specific Robot Appliance
M.J. Johnson, E. Guglielmelli, G.A. Di Lauro, C. Laschi, M.C. Carrozza, P. Dario ... 127
6.1 Introduction ... 127
6.2 Background ... 128
6.2.1 Domotic-Robotic Integrated System ... 129
6.2.2 Localized System of Appliances ... 130
6.3 Design Concept for the Giving-A-Hand System ... 132
6.4 Domotic/Telematic and Robotic Assistance ... 133
6.5 The Fetch and Carry Robot Appliance Development ... 133
6.6 User-Centered Development ... 135
6.7 Prototype of a Local Network with the Robot Appliance ... 138
6.8 Summary and Conclusions ... 140

7 Cooperative Welfare Robot System Using Hand Gesture Instructions
Noriyuki Kawarazaki, Ichiro Hoya, Kazue Nishihara, Tadashi Yoshidome ... 143
7.1 Introduction ... 143
7.2 Cooperative Robot System ... 144
7.3 Measurement of Distance Using Stereo Images ... 145
7.4 Detection of the Hand and the Target Object ... 146
7.4.1 Detection of the Hand Area Using Color Image ... 146
7.4.2 Tracking of the Hand Using CP ... 147
7.4.3 Detection of the Object Using Gesture Instruction ... 148
7.5 Recognition of the Hand Gesture ... 149
7.6 Experimental Results ... 150
7.7 Conclusions ... 152

8 Selectable Operating Interfaces of the Meal-Assistance Device “My Spoon”
Ryoji Soyama, Sumio Ishii, Azuma Fukase ... 155
8.1 Introduction ... 155
8.2 Meal-Assistance Device “My Spoon” ... 155
8.3 Operating Interface ... 156
8.4 Basic Operation ... 156
8.4.1 Setup ... 157
8.4.2 Compartment Selection Command Set ... 158
8.4.3 Position Adjustment Command Set ... 158
8.5 Control Modes ... 159
8.5.1 Manual Mode ... 159
8.5.2 Semi-automatic Mode ... 159
8.5.3 Automatic Mode ... 160
8.6 Future Tasks ... 161
8.6.1 Food Recognition by Using Color Image Processing ... 161
8.6.2 Improvements in Operation ... 162
8.7 Conclusion ... 163

9 Enhancing the Usability of the MANUS Manipulator by Using Visual Servoing
A.H.G. Versluis, B.J.F. Driessen, J.A. van Woerden ... 165
9.1 Introduction ... 165
9.2 Visual Servoing ... 167
9.2.1 Vision Aspects of the Visual Servoing ... 168
9.3 Control Architecture ... 168
9.4 Vision System ... 169
9.4.1 Theory ... 169
9.4.2 Implementation ... 170
9.5 Stability ... 171
9.6 Experiments ... 171
9.7 Conclusions and Future Work ... 174

Part II Rehabilitation Robots for Assistance of Human Movements
II.2 Safety Issues of the Rehabilitation Robots

10 A Safety Strategy for Rehabilitation Robots
Makoto Nokata, Noriyuki Tejima ... 177
10.1 Introduction ... 177
10.2 Principles of Safety Standards for Robots ... 177
10.2.1 Framework of New Safety Standards for Robots ... 177
10.2.2 Safety Standard for Machinery ... 178
10.2.3 Risk Assessment Process and Risk Reduction ... 179
10.2.4 Tolerable Risks for Robots ... 180
10.3 Case Study on Safety of Rehabilitation Robots ... 181
10.3.1 Risk Estimation ... 182
10.3.2 Safety Measures of Risk Reduction ... 183
10.3.3 Benefit Estimation ... 183
10.4 Proposal of Risk Assessment Guideline for Rehabilitation Robots ... 184
10.5 Conclusion ... 185

11 Safety Evaluation Method of Rehabilitation Robots
Makoto Nokata, Koji Ikuta, Hideki Ishii ... 187
11.1 Introduction ... 187
11.2 Safety Strategy for Human-Care Robots ... 187
11.2.1 Injury to Humans from Human-Care Robots ... 187
11.2.2 Classification of Safety Strategies ... 188
11.3 Proposing Evaluation Measures of Safety ... 189
11.3.1 Necessity of Safe Quantitative Evaluation ... 189
11.3.2 Selection of Evaluation Measures ... 189
11.4 General Evaluation Method Using Evaluation Measures ... 190
11.5 Deriving Danger-Indexes of Safety Strategy ... 192
11.5.1 Safety Design Strategy ... 192
11.5.2 Safety Control Strategy ... 193
11.6 Proposal of Design Optimization and Practical Examples ... 194
11.6.1 Formulating the Design Optimization Method ... 194
11.6.2 Maximizing Safety Under Fixed Cost ... 195
11.6.3 A New Method of Calculate a Safe Approach Motion ... 196
11.7 Conclusions ... 197

12 Risk Reduction Mechanisms for Safe Rehabilitation Robots
Noriyuki Tejima ... 199
12.1 Introduction ... 199
12.2 Tolerable Risk and Surface Injury ... 199
12.3 Force Limitation Methods ... 201
12.4 A Straight Movement-Type Force Limitation Mechanism ... 202
12.5 A Three-Dimensional Force Limitation Mechanism ... 204
12.6 Reflex Mechanism ... 206
12.7 Conclusions ... 207

Part II Rehabilitation Robots for Assistance of Human Movements
II.3 Rehabilitation-Robot Evaluation

13 Usability of an Assistive Robot Manipulator: Toward a Quantitative User Evaluation
Bessam Abdulrazak, Mounir Mokhtari, Bernard Grandjean ... 211
13.1 Introduction ... 211
13.2 Users Needs Analysis ... 212
13.3 Hardware and Software Organization ... 212
13.3.1 Hardware Architecture ... 213
13.3.2 Software Command Architecture ... 214
13.4 Quantitative Evaluation Method ... 215
13.5 Preliminary Results ... 215
13.5.1 Modes and Time of Use ... 216
13.5.2 Actions Number ... 217
13.6 Discussion ... 219
13.7 Conclusion ... 219

14 Processes for Obtaining a “Manus” (ARM) Robot within The Netherlands
Gert Willem Römer, Harry Stuyt, Geer Peters, Koos van Woerden ... 221
14.1 Introduction ... 221
14.2 Wheelchair Mounted Service Manipulator ARM ... 221
14.3 The Current Process of Providing an ARM to a User ... 223
14.3.1 Informing Users about the Benefits of the ARM ... 224
14.3.2 Indication Criteria ... 224
14.3.3 Stand-Alone Test ... 224
14.3.4 Formal Application and Funding of an ARM ... 226
14.3.5 Mounting the ARM on the Wheelchair ... 226
14.3.6 Training ... 226
14.3.7 Service and Maintenance ... 227
14.4 The Future Process of Prescribing the ARM ... 227
14.5 Summary of Two Recent Dutch Arm-User Evaluations ... 228
14.5.1 User Study Conducted by iRV ... 228
14.5.2 User Study Conducted by hetDorp ... 229

Part III Prostheses and Orthoses

15 Experimental Analysis of the Proprioceptive and Exteroceptive Sensors of an Underactuated Prosthetic Hand
M. Zecca, G. Cappiello, F. Sebastiani, S. Roccella, F. Vecchi, M.C. Carrozza, P. Dario ... 233
15.1 Introduction ... 233
15.2 Mechanical Structure ... 234
15.3 Sensory System ... 235
15.4 Materials and Methods ... 236
15.4.1 Slider Position Sensor ... 236
15.4.2 Tendon Tensiometer ... 237
15.4.3 Thumb Position Sensor ... 239
15.4.4 Force Sensor ... 240
15.5 Conclusions ... 241

16 Design and Testing of WREX
Tariq Rahman, Whitney Sample, Rahamim Seliktar ... 243
16.1 Introduction ... 243
16.2 Design of WREX ... 244
16.3 Gravity Balancing with x ≠ 0 ... 245
16.4 Clinical Testing ... 248
16.5 Results ... 248

Part IV Intelligent Wheelchairs

17 A Concept for Control of Indoor-Operated Autonomous Wheelchair
Dimitar Stefanov, Alexander Avtanski, Z. Zenn Bien ... 253
17.1 Introduction and Related Works ... 253
17.1.1 Methods for Navigation ... 254
17.1.2 Path Planning and Navigation to the Goal ... 256
17.2 Conception of Wheelchair Navigation ... 258
17.2.1 Problem Statement ... 258
17.2.2 Initial Assumptions ... 258
17.3 Localization of the Wheelchair Position ... 260
17.4 Scenario of the Wheelchair Control ... 262
17.5 Navigation System ... 264
17.6 Computer Simulation of the Control Algorithm ... 267
17.6.1 Wheelchair Kinematics ... 267
17.6.2 Modeling of the Sensors and Their Arrangement on the Wheelchair Platform ... 269
17.6.3 Navigation Algorithm of the Simulator ... 272
17.7 Evaluation of the Control Algorithm ... 283
17.7.1 Navigation to Multiple Goals ... 283
17.7.2 Obstacle Avoidance ... 284
17.7.3 Avoiding a “Trap” ... 286
17.7.4 Navigation in a Complex Environment ... 288
17.7.5 Route Generation in Partially Known Environment ... 292
17.8 Future Plans and Concluding Remark ... 294

18 Design of an Intelligent Wheelchair for the Motor Disabled
Chong Hui Kim, Jik Han Jung, Byung Kook Kim ... 299
18.1 Introduction ... 299
18.2 Related Works ... 300
18.3 Requirements ... 301
18.4 System Architecture ... 302
18.4.1 Hardware Configuration ... 302
18.4.2 Software Design for Real-Time System ... 303
18.5 Navigation ... 304
18.5.1 Localization ... 304
18.5.2 Hierarchical Control Architecture ... 306
18.6 Experiments ... 307
18.7 Conclusion ... 309

Part V Mechatronics Devices for Assistance in Walking

19 Electrically Assisted Walker with Supporter-Embedded Force-Sensing Device
Saku Egawa, Ikuo Takeuchi, Atsushi Koseki, Takeshi Ishii ... 313
19.1 Introduction ... 313
19.2 Electrically Assisted Walker ... 314
19.3 Supporter-Embedded Force Sensor ... 315
19.3.1 Requirements for the Force Sensor ... 315
19.3.2 Sensor Structure ... 316
19.3.3 Sensing Method ... 317
19.3.4 Advantages ... 318
19.4 Experiments ... 319
19.5 Discussion ... 321
19.6 Summary ... 321

20 Human-Friendly Care Robot System for the Elderly
Dong Hyun Yoo, Hyun Seok Hong, Han Jo Kwon, Myung Jin Chung ... 323
20.1 Introduction ... 323
20.1.1 The Functions of Do-u-mi Robot ... 323
20.2 Overall System of Do-u-mi Robot ... 325
20.3 Sound Localization ... 326
20.4 Face Tracking ... 327
20.4.1 Face Candidate Extraction ... 328
20.5 Autonomous Navigation ... 330
20.6 Conclusion ... 331

21 Newly Designed Rehabilitation Robot System for Walking-Aid
Choon-Young Lee, Kap-Ho Seo, Changmok Oh, Ju-Jang Lee ... 333
21.1 Introduction ... 333
21.2 Electric Motor Based Gait Rehabilitation System ... 334
21.2.1 System Description ... 334
21.2.2 Experiments ... 336
21.3 Newly Developed Gait Rehabilitation System ... 338
21.3.1 System Description ... 338
21.3.2 Control Method ... 340
21.4 Conclusion ... 342

Part VI Robot-Assisted Neurorehabilitation

22 A Gentle/S Approach to Robot Assisted Neuro-Rehabilitation
Rui Loureiro, Farshid Amirabdollahian, William Harwin ... 347
22.1 Abstract ... 347
22.2 Background to Stroke ... 348
22.3 Gentle/S ... 349
22.3.1 Assumptions ... 350
22.4 Clinical Prototype for Machine Mediated Neurorehabilitation ... 351
22.4.1 Antigravity Mechanism for the Shoulder and Elbow ... 353
22.4.2 Exercises & Movement Guidance ... 354
22.4.3 Different Therapy Modes ... 357
22.5 Clinical Trials ... 357
22.5.1 Outcome Measures ... 358
22.5.2 Data Analysis and Statistical Methodology ... 359
22.5.3 Results ... 360
22.6 Conclusions ... 361

23 Wire Driven Robots for Rehabilitation
Paolo Gallina ... 365
23.1 Introduction ... 365
23.1.1 Advantages of Wire Driven Robots ... 366
23.1.2 Problems Related to Wire Driven Robots ... 367
23.2 Manipulability and Wire Tension Computation ... 367
23.3 NeRebot: An Example of Wire Driven Robot for Rehabilitation ... 369
23.3.1 Software and Control ... 372
23.3.2 Treatment Protocol ... 373
23.4 Conclusions and Future Research ... 374

24 A Wrist Extension for MIT-MANUS
Hermano Igo Krebs, James Celestino, Dustin Williams, Mark Ferraro, Bruce Volpe, Neville Hogan ... 377
24.1 Introduction ... 377
24.2 Specification for a New Wrist Device ... 380
24.2.1 Kinematic Selection ... 381
24.2.2 Actuator Placement and Transmission Selection ... 382
24.2.3 Actuator Selection ... 382
24.2.4 Sensor Selection ... 383
24.3 Alpha-Prototype Overview ... 383
24.4 Robotic Therapy ... 386
24.5 Conclusions ... 388

25 Post Stroke Shoulder–Elbow Physiotherapy with Industrial Robots
András Tóth, Gusztáv Arz, Gábor Fazekas, Daniel Bratanov, Nikolay Zlatov ... 391
25.1 Introduction ... 391
25.2 Analysis of Spastic Upper Limb Physiotherapy ... 392
25.3 System Design and Development ... 394
25.3.1 Mechanical Design ... 394
25.3.2 The Instrumented Orthoses ... 399
25.3.3 Control Design ... 399
25.3.4 User Interface and Programming ... 401
25.3.5 Safety Measures and Devices ... 402
25.4 Testing and Calibration ... 403
25.5 Clinical Results ... 405
25.5.1 Subjects of the Clinical Trial ... 405
25.5.2 Assessment Results ... 406
25.5.3 Analysis of Assessment Results ... 408
25.6 Conclusions ... 409

26 STRING-MAN: A Novel Wire-Robot for Gait Rehabilitation
Dragoljub Surdilovic, Rolf Bernhardt, Tobias Schmidt and Jinyu Zhang ... 413
26.1 Introduction ... 413
26.2 Development Goals ... 414
26.3 Robotic Mechanisms Design ... 414
26.4 Human/Robot Interface ... 419
26.5 Sensory Systems ... 420
26.6 Control Algorithms ... 421
26.7 Conclusion ... 424

Part VII Perspectives and Trends of the Rehabilitation Robotics

27 Great Expectations for Rehabilitation Mechatronics in the Coming Decade
H.F. Machiel Van der Loos, Richard Mahoney, Chantal Ammi ... 427
27.1 Introduction ... 427
27.2 Emerging Demographics and Healthcare Trends ... 428
27.3 Emerging Technologies Relevant to Robotics ... 429
27.4 RoadBlocks and Enablers of Robotic Applications in Rehabilitation ... 431
27.5 Mechatronic/Robotic Applications to Rehabilitation ... 432
27.6 Conclusions ... 432

Subject Index ... 435
Author Index ... 439
About the Editors ... 441
List of Contributors
Abdulrazak, Bessam GET/Institut National des Télécommunications–INSERM U.483, Handicom Lab. Evry, France
[email protected] Amirabdollahian, Farshid The University of Newcastle, C.R.E.S.T., Stephenson Building, Claremont Road, Newcastle Upon Tyne, NE1 7RU, UK,
[email protected] Ammi, Chantal Dept. of Business Administration, National Telecommunications Institute, 9, rue Charles Fournier, 91011 Evry, France,
[email protected] Arz, Gusztáv Budapest University of Technology and Economics, Department of Manufacturing Engineering, Egry J. u. 1. Budapest 1111, Hungary
[email protected] Avtanski, Alexander Savvion Inc., 5104 Old Ironsides Dr, Santa Clara, CA 95054, USA
[email protected] Bernhardt, Rolf Fraunhofer Institute IPK-Berlin, Department Robotics and Automation, Pascalstraße 8-9, 10587 Berlin, Germany
Bien, Z. Zenn Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Bratanov, Daniel University of Rousse, Department of Manufacturing Engineering, Automation & Robotics Laboratory, 8 Studentska str. 7017, Rousse, Bulgaria
[email protected] Cappiello, Giovanni Centro INAIL RTR, Via Vetraia 7, 55049 Viareggio (LU), Italy
[email protected] Carrozza, Maria Chiara ARTS Lab, Scuola Superiore Sant’Anna, Polo Sant’Anna Valdera, viale Rinaldo Piaggio, 34-56025 Pontedera (PI), Italy; Research Centre on Rehabilitation Bioengineering of the Centro Protesi INAIL, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected]
Celestino, James Department of Mechanical Engineering, Massachusetts Institute of Technology, Room 3-173, 77 Massachusetts Ave, Cambridge, MA 02139, USA
[email protected] Chung, Myung Jin Dept. of Electrical Engineering and Computer Science, KAIST, 373-1, Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Dario, Paolo ARTS Lab, Scuola Superiore Sant’Anna, Polo Sant’Anna Valdera, viale Rinaldo Piaggio, 34-56025 Pontedera (PI), Italy; Research Centre on Rehabilitation Bioengineering of the Centro Protesi INAIL, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected] Di Lauro, G.A. Scuola Superiore Sant’Anna, ARTS (Advanced Robotics Technology Systems) Lab, c/o Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy; Research Centre on Rehabilitation Bioengineering of the Centro Protesi INAIL, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected] Driessen, B.J.F. TNO TPD, PO-BOX 155, 2600 AD Delft, The Netherlands,
[email protected] Egawa, Saku Mechanical Engineering Research
Laboratory, Hitachi, Ltd., 502 Kandatsu, Tsuchiura, Ibaraki 300–0013, Japan
[email protected] Fazekas, Gábor National Institute for Medical Rehabilitation, Szanatorium u. 19 Budapest 1528 Hungary
[email protected] Ferraro, Mark Burke Rehabilitation Hospital, 785 Mamaroneck Avenue, White Plains, NY 10605, USA
[email protected] Feki, M.A. GET/Institut National des Télécommunications–INSERM U.483, Handicom Lab. Evry, France
[email protected] Fukase, Azuma Intelligent Systems Laboratory, SECOM Co., Ltd., R& D Center, 8-10-16, Shimorenjaku, Mitaka, Tokyo 181–8528, Japan
[email protected] Gallina, Paolo Department of Energetics, University of Trieste, Trieste, via A. Valerio 10, 34127 Trieste (Italy)
[email protected] Gräser, Axel Institute of Automation, University of Bremen, Otto Hahn Allee NW1, 28359 Bremen, Germany,
[email protected] Grandjean, Bernard GET/Institut National des Télécommunications–INSERM U.483, Handicom Lab. Evry, France
[email protected]
Guglielmelli, E. Scuola Superiore Sant’Anna, ARTS (Advanced Robotics Technology Systems) Lab, c/o Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy; Research Centre on Rehabilitation Bioengineering of the Centro Protesi INAIL, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected]
Harwin, William The University of Reading, School of Systems, Engineering, Department of Cybernetics, Whiteknights, Reading, RG6 6AY, UK
[email protected] Ishii, Sumio Intelligent Systems Laboratory, SECOM Co., Ltd., R& D Center, 8-10-16, Shimorenjaku, Mitaka, Tokyo 181–8528, Japan
[email protected] Hillman, Michael Bath Institute of Medical Engineering, Wolfson Centre, Royal United Hospital, Bath BA1 3NG, UK
[email protected] Ishii, Takeshi Mechanical Engineering Research Laboratory, Hitachi, Ltd., 502 Kandatsu, Tsuchiura, Ibaraki 300–0013, Japan
[email protected] Hogan, Neville Department of Mechanical Engineering, Massachusetts Institute of Technology, Room 3-173, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Brain and Cognitive Sciences Dept., Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
[email protected] Hong, Hyun Seok Dept. of Electrical Engineering and Computer Science, KAIST, 373-1, Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Hoya, Ichiro Department of Welfare Systems Engineering, Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa, 243-0292, Japan
Ikuta, Koji Department of Micro System Engineering, Graduate School of Engineering, Nagoya University
[email protected] Ishii, Hideki Department of Micro System Engineering, Graduate School of Engineering, Nagoya University
[email protected] Johnson, M.J. Scuola Superiore Sant’Anna, ARTS (Advanced Robotics Technology Systems) Lab, c/o Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy; Research Centre on Rehabilitation Bioengineering of the INAIL Centro Protesi, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected] Jung, Jik Hang Department of Electrical Engineering & Computer Science, KAIST. 373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected]
Jung, Jin-Woo Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Kawarazaki, Noriyuki Department of Welfare Systems Engineering, Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa, 243-0292, Japan
[email protected] Kim, Byung Kook Department of Electrical Engineering & Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Kim, Chong Hui Department of Electrical Engineering & Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea,
[email protected] Kim, Dae-Jin Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Koseki, Atsushi Mechanical Engineering Research Laboratory, Hitachi, Ltd., 502 Kandatsu, Tsuchiura, Ibaraki 300–0013, Japan
[email protected] Kouzmitcheva, Olena Institute of Automation, University of Bremen, Otto-Hahn-Allee, NW1, 28359 Bremen, Germany,
[email protected] Krebs, Hermano Igo Massachusetts Institute of Technology, Department of Mechanical Engineering, 77 Massachusetts Ave, 3-137, Cambridge, MA 02139 USA; Weill Medical College of Cornell University, The Winifred Masterson Burke Medical Research Institute, Department of Neurology and Neuroscience
[email protected] Kwon, Han Jo Dept. of Electrical Engineering and Computer Science, KAIST, 373-1, Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Laschi, C. Scuola Superiore Sant’Anna, ARTS (Advanced Robotics Technology Systems) Lab, c/o Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy
[email protected] Lee, Choon-Young Digital Content Research Division, ETRI, 161 Gajeong-dong Yuseong-gu, Daejeon, 305-350 Korea
[email protected] Lee, Ju-Jang Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Loureiro, Rui The University of Reading, School of Systems, Engineering, Department of Cybernetics, Whiteknights,Reading, RG6 6AY, UK
[email protected] Mahoney, Richard Rehabilitation Technology Division, Applied Resources Inc., 1275 Bloomfield Ave., Fairfield, NJ, 07004 USA
[email protected] Martens, Christian RHEINMETALL DEFENCE ELECTRONICS, Brüggeweg 54, 28309 Bremen, Germany,
[email protected] Mokhtari, Mounir GET/Institut National des Télécommunications–INSERM U.483, Handicom Lab. Evry, France
[email protected] Nishihara, Kazue Department of Welfare Systems Engineering, Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa, 243-0292, Japan
[email protected]
[email protected] Peters, Geer RTD hetDorp, Heijenoordseweg 130, NL-6813 GC, Arnhem, The Netherlands,
[email protected] Rahman, Tariq A.I. duPont Hospital for Children, 1600 Rockland Rd, Wilmington, DE 19899
[email protected] Roccella, Stefano Centro INAIL RTR, Via Vetraia 7, 55049 Viareggio (LU), Italy
[email protected] Römer, Gert Willem Exact Dynamics, Edisonstraat 96, NL-6942 PZ, Didam, The Netherlands,
[email protected] Nokata, Makoto Department of Robotics, Faculty of Science and Engineering, Ritsumeikan University
[email protected] Sample, Whitney A.I. duPont Hospital for Children, 1600 Rockland Rd, Wilmington, DE 19899
[email protected] Oh, Changmok Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Schmidt, Tobias Fraunhofer Institute IPK-Berlin, Department Robotics and Automation, Pascalstraße 8-9, 10587 Berlin, Germany
Pape, Andreas Robert BOSCH GmbH, Gasoline Systems, GS/EFA3, Postfach 30 02 40, 70442 Stuttgart,
[email protected] Park, Kwang-Hyun Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
Sebastiani, Francesco Centro INAIL RTR, Via Vetraia 7, 55049 Viareggio (LU), Italy
[email protected] Seliktar, Rahamim School of Biomedical Engineering, Drexel University, 32 and Chestnut Sts, Philadelphia PA 19104
[email protected] Seo, Kap-Ho Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Tejima, Noriyuki Department of Robotics, Faculty of Science and Engineering, Ritsumeikan University
[email protected] She, Haiying Institute of Automation, University of Bremen, Otto-Hahn-Allee, NW1, 28359 Bremen, Germany,
[email protected] Tóth, András Budapest University of Technology and Economics, Department of Manufacturing Engineering, Egry J. u. 1. Budapest 1111 Hungary
[email protected] Soyama, Ryoji SECOM Co., Ltd., Intelligent Systems Laboratory, Medical Welfare Division, 8-10-16 Shimorenjaku, Mitaka, Tokyo 181–8528, Japan
[email protected] Stefanov, Dimitar Cardiff & Vale NHS Trust, Rehabilitation Engineering Unit, Cardiff, CF5 2YN, UK; Institute of Mechanics, Bulgarian Academy of Sciences, Acad. G. Bonchev Street, Block 4, 1113 Sofia, Bulgaria D
[email protected] Stuyt, Harry Exact Dynamics BV, Edisonstraat 96, NL-6942 PZ, Didam, The Netherlands
[email protected] Surdilovic, Dragoljub Fraunhofer Institute IPK-Berlin, Department Robotics and Automation, Pascalstraße 8-9, 10587 Berlin, Germany
[email protected] Takeuchi, Ikuo Mechanical Engineering Research Laboratory, Hitachi, Ltd., 502 Kandatsu, Tsuchiura, Ibaraki 300–0013, Japan
[email protected] Van der Loos, H.F. Machiel, Rehabilitation R& D Center, VA Palo Alto Health Care System, 3801 Miranda Ave. # 153, Palo Alto, CA, 94304 USA
[email protected] van Woerden, J.A. TNO TPD, PO-BOX 155, 2600 AD Delft, The Netherlands,
[email protected] van Woerden, Koos TNO TPD, Stieltjesweg 1, NL-2628 CK, Delft, The Netherlands,
[email protected] Vecchi, Fabrizio Centro INAIL RTR, Via Vetraia 7, 55049 Viareggio (LU), Italy
[email protected] Versluis, A.H.G. TNO TPD, PO-BOX 155, 2600 AD Delft, The Netherlands,
[email protected] Volosyak, Ivan Institute of Automation, University of Bremen, Otto-Hahn-Allee, NW1, 28359 Bremen, Germany,
[email protected] Volpe, Bruce Department of Neurology and Neuroscience, Weill Medical College Cornell University, Burke Medical Research Institute, 785 Mamaroneck Avenue, White Plains, NY 10605, USA; Burke Rehabilitation Hospital, 785 Mamaroneck Avenue, White Plains, NY 10605, USA
[email protected] Yoo, Dong Hyun Dept. of Electrical Engineering and Computer Science, KAIST, 373-1, Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Yoshidome, Tadashi Department of Welfare Systems Engineering, Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa, 243-0292, Japan
[email protected]
Williams, Dustin Interactive Motion Technologies, Inc., 56 Highland Ave, Cambridge, MA 02139, USA
[email protected] Zecca, Massimiliano Scuola Superiore Sant’Anna, ARTS Labs, Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy
[email protected] Zhang, Jinyu Fraunhofer Institute IPK-Berlin, Department Robotics and Automation, Pascalstraße 8-9, 10587 Berlin, Germany Zlatov, Nikolay Cardiff University, Manufacturing Engineering Centre, The Parade P O Box 925 Cardiff CF24 0YF Wales, UK
[email protected]

List of Abbreviations

ACC    active compliance control
ADL    activities of daily living
AGW    automatically guided wheelchairs
ARM    assistive robotic manipulator
BWS    body weight support
COP    center of pressure
DLS    double limb support
DoF    degree of freedom
DoFs   degrees of freedom
EMG    electromyography, electromyographic
FMMNN  Fuzzy Min-Max Neural Networks
FSR    force sensing resistors
HMM    Hidden Markov Model
ICORR  International Conference on Rehabilitation Robotics
LNA    low noise amplifier
LRF    laser range finder
LPM    log-polar mapping
QOL    quality of life
ROL    respect of living
RR     rehabilitation robot
RRs    rehabilitation robots
RTAI   Real-Time Application Interface
SLS    single limb support
TOD    task-oriented design
1 Advances in Human-Friendly Robotic Technologies for Movement Assistance/Movement Restoration for People with Disabilities

Dimitar Stefanov and Z. Zenn Bien
Abstract Rehabilitation robots (RR) are expected to play an important role toward the independent life of older persons and persons with disabilities. Such intelligent devices, embedded into the home environment, can provide the resident with 24-hour movement assistance. Modern home-installed robots tend to be not only physically versatile in functionality but also emotionally human-friendly, i.e. they may be able to perform their functions without disturbing the user and without causing him/her any pain, inconvenience, or movement restriction, instead possibly providing him/her with comfort and pleasure. This paper analyzes the main categories of RR and then discusses some important issues in the future development of an intelligent residential space with human-friendly rehabilitation robots integrated into it.
1.1 Introduction
Recent statistics show a trend of rapid growth in the number of persons with physical disabilities and aged people who need external help in their everyday movement tasks [1]. The problem of caring for older persons and persons with physical disabilities will become more serious in the near future, when a significant part of the increasing global population will be in the group of 65 years of age or over and the existing welfare model is not capable of meeting the increased needs. It is obvious that such a problem cannot be solved by increasing the number of care-givers. On the other hand, an optimistic view is growing that the quality of life of people with movement limitations can be significantly improved by means of various modern technologies, in particular through rehabilitation robots [2, 3]. It is expected that rehabilitation robots (RR) will have a strong positive emotional impact on persons with physical disabilities and older persons, improving their quality of life, increasing their movement independence, and giving them privacy. The same development will also reduce, to some extent, the medical care costs per person.
The idea of rehabilitation robots evolves with the recent development of relevant technology. While the initial conceptions for RR design were mainly concerned with a manipulator usually controlled in a direct mode, the latest design tendencies target multi-agent robotic solutions where more than one home-installed robot is integrated with other home-installed devices. Whereas many service robots for ordinary non-impaired people easily meet requirements that are standardized to some extent, the problems and needs of older persons and persons with physical disabilities vary significantly from person to person, which adds many serious requirements to the RR design. Robots for people with special needs should apply control algorithms based on a small number of commands relevant to the specifics of the user's own motions. The technology solutions should also consider the individual's habits and should meet all safety regulations. The development of sophisticated RR has been strongly influenced by the recent fast progress of various intelligent technologies such as fuzzy logic, artificial neural networks, and evolutionary algorithms; thus, such RRs are sometimes termed "intelligent RRs". The concept of RR for older persons and persons with physical disabilities has been considered important by many social and economic institutions in the advanced countries. In fact, numerous research and demonstration projects on RR have already been completed or are under development all over the world. Many of these projects are funded by international R&D organizations and involve participants from different countries. Research activity on RR is relatively high in Japan, Europe, and the USA, where strong growth of the aged population coincides with the availability of broad high-technology achievements. Some RRs are available on the market as commercial products and have become important helpers for an increasing number of patients, significantly improving their quality of life. The objective of this paper is to provide a summary of some recent works related to the notion of RR for the service of older persons and persons with physical disabilities. The subject matter lies in the interdisciplinary area of many different branches of science and technology, and successful designs are possible only as the result of joint efforts of many specialists in different areas. We find that the technologies and solutions for RR should be human-friendly, i.e. the RR should be designed according to the notion of human-centeredness and should possess a high level of intelligence in their control, actions, and interactions with the users, offering them a high level of comfort and functionality. In this chapter we have included some ideas and tendencies that have been proposed and developed recently. Comments on RR research projects, products, and conceptions are made with a focus on the technology innovations that have been taking place. This chapter does not treat non-technical components of the RR design such as interior design, organization of medical care and care-giver servicing, maintenance, etc. This chapter is organized as follows. In Sect. 1.2, we give a brief classification of the RR regarding their application and introduce some recent projects and commercially-available RR products. Next, an overview of the place of the RR in the arrangement of some smart home projects is given. Our vision of the futuristic robotic smart house is also presented.
1.2 Areas of the RR Application
Rehabilitation robots are intended to meet different aspects of users' needs. Although the area of rehabilitation robotics is relatively new, its intensive development during the last 20 years has formed several different domains.

1.2.1 Robotic Systems for Movement Assistance
Most rehabilitation robots are designed to assist disabled individuals in their everyday movement needs, such as eating, drinking, object replacement, etc. [4]. Currently, three main types of robot schemes are in operation: desktop-mounted robots (workstations), wheelchair-mounted robots, and mobile autonomous robots. In some simple applications, the robot is fixed to a desk or to the floor. The operator is located in a suitable position near the worktable and controls the robot, which performs unaided pick-and-place ADL tasks [5]. Wheelchair-mounted robots [6, 7] can be used both for indoor and for outdoor assistance of the user. The attachment of a robot to the wheelchair significantly increases the movement independence of the user, who can move freely to different locations of the house and can perform manipulative tasks in each position with the help of the rehabilitation robot. The drawbacks of such a solution are inclination of the wheelchair due to the weight of the robot, enlargement of the wheelchair width (which is critical for narrow door passage), and changes of the dynamic characteristics of the wheelchair. The mobile robots are remotely controlled devices that navigate autonomously through the home environment and serve the user, who is located at a certain position (bed, chair, etc.). The cognitive load of the user can be reduced considerably if the robot automatically performs repetitive movements (in a pre-programmed mode of control). Programs can be successfully executed if the robot, the user, and the manipulated objects remain in the same initial position every time the concrete task is performed. In the case of a wheelchair-mounted manipulator, the relative position of the user with respect to the manipulator remains the same, but the relative position between the manipulator and the objects may depend on the wheelchair position. In order to avoid this problem, a technique of vision-based automatic navigation of the gripper is adopted to handle the grasped object (KARES I, KARES II, TAURO, etc.) [8–10], or the user performs end-point control in which the trajectory, orientation, and velocity of the gripper are directly adjusted (HOPE) [5]. Automatically guided wheelchairs (AGW), also known as "go-to-goal wheelchairs", are intended to facilitate transportation of individuals with severe dexterity limitations in their hands. Because of their automatic guidance in the home environment, these wheelchairs are often considered a special class of mobile rehabilitation robots for the transportation of a user. Most of the devices of this category are oriented to application in indoor environments, but recently some results from the design of outdoor autonomous wheelchairs have been reported [11]. Different from standard wheelchairs, where the user directly controls the wheelchair movement, AGWs
autonomously navigate toward the goal. After receiving the user's instruction about the destination point of the wheelchair, the navigation system first generates the travel route and then independently steers the wheelchair to the desired position. The automatic control of go-to-goal wheelchairs dramatically reduces the cognitive load of the user and makes possible the safe movement through narrow doors and corridors. In most wheelchair projects, wheelchair movement is assumed to occur in a structured or semi-structured home environment. Localization of the current wheelchair position is based either on fixed-location beacons strategically placed at pre-defined locations within the operating home environment [12, 13] or on natural landmarks of the structured environment [14]. Beacon-based systems can be further grouped into systems that refer to active beacons (most often fixed active-beacon emitters installed on the walls) and systems which get navigational parameters from passive targets [12]. Usually, passive beacons are of low cost and can be easily attached to the walls, but the procedures of detecting such markers (typically with CCD sensors) and extracting the coded information are rather complicated. The localization systems that are based on active-beacon emitters typically involve simple sensors to identify the beacon positions and apply a simple information decoding procedure. However, such a solution does not allow flexible change of the beacon positions, because each sensor should be separately powered and controlled. The guidepath systems for wheelchair navigation can be considered as a special class of beacon-based systems where the guide tracks are embedded in the floor. The magnetic-tape guidepath scheme [15] involves a strip of flexible magnetic material attached to the floor surface. The magnetic stripe follower is based on an array of fluxgate or Hall effect sensors mounted on board the vehicle. Although the approach is widely used in many material-handling applications, its use for wheelchair guidance in the home environment is limited due to the complexity of the movement routes and the need for frequent reconfiguration of the path. An additional limitation comes from the requirement of embedding the guidepath in the floor. Natural-landmark navigation does not require installation of special beacons, and the algorithms allow faster adaptation of the wheelchair to an unknown home environment. The proposed navigation solutions vary from detection of the location of ceiling-mounted lamps [16] to detection of doorframes and furniture edges [14, 17]. New travel routes can be easily added to the computer memory without the assistance of specialized staff. On the other hand, the solutions based on natural landmarks require much more complicated vision sensors, involve complex algorithms for analysis of the visual scene, and suffer from artifacts caused by ambient light. In order to achieve successful wheelchair navigation in case of absence or malfunctioning of some beacons/landmarks, most of the control algorithms identify the current wheelchair location not only from the information of the beacon navigation system but also from the information on the angular position of the driving wheels (dead-reckoning procedure).
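To make the dead-reckoning procedure concrete, the sketch below shows a minimal differential-drive pose update of the kind such algorithms rely on; it is only an illustration, and the wheel radius, encoder resolution, and track width are hypothetical example values rather than parameters of any wheelchair discussed here.

```python
import math

def dead_reckoning_step(x, y, theta, d_left, d_right, track_width):
    """Update the estimated pose (x, y, heading) of a differential-drive
    wheelchair from the distances travelled by the left and right driving
    wheels since the last update."""
    d_center = (d_left + d_right) / 2.0          # distance travelled by the chassis centre
    d_theta = (d_right - d_left) / track_width   # change of heading
    # integrate along the arc, using the mid-point heading for better accuracy
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Example: wheel displacements derived from encoder ticks (hypothetical values)
wheel_radius = 0.15      # m
ticks_per_rev = 1024
track_width = 0.55       # m, distance between the driving wheels
d_left = 2 * math.pi * wheel_radius * (40 / ticks_per_rev)
d_right = 2 * math.pi * wheel_radius * (48 / ticks_per_rev)
print(dead_reckoning_step(0.0, 0.0, 0.0, d_left, d_right, track_width))
```

Because such an estimate drifts with wheel slip and surface irregularities, it is normally fused with the beacon or landmark measurements described above.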
Apart from the automatic navigation to the goal, most of the automatically guided wheelchairs can automatically perform obstacle avoidance maneuvers, referring to the information from range sensors (mostly sonar or optical retroreflective sensors) [18]. In many wheelchair solutions, the same range sensors are used for running the wheelchair in a semi-autonomous mode in which
the user's instructions can be modified in conjunction with the sensor information on nearby objects. For semi-autonomous control, several schemes have been proposed and tested, such as wall following, people following/avoidance, and narrow corridor passage [14]. The design of a robotic device that combines an autonomously navigated wheelchair with a manipulator installed on it is considered in [19]. It is noted that such a wheelchair can be used in three modes: (1) as a means for autonomous indoor transportation of the user; (2) as a mobile robot that is remotely controlled to deliver different objects to the user when he/she is in bed; (3) as a home inspection robot that is remotely controlled and sends TV images from different places in the user's home. Powered feeders are rehabilitation robots especially designed for feeding patients with severe movement limitations of their upper limbs. Controlling the robot by themselves, such users can eat a normal meal at their own pace. To this class belong Handy 1 [20, 21], My Spoon [22, 23], and ISAC [24–26]. Handy 1 also has optional functions for brushing teeth, shaving, and applying make-up. ISAC implements image recognition in order to determine the exact position of the user's mouth.

1.2.2 Robots for Physical Support and Indoor Navigation
Devices from this group are intended to assist users with both movement weakness and visual impairments. Potential users of these RR are aged people and people with multiple impairments. Such machines are usually designed as a motorized base that gives physical support to the user. Forces caused by the user's body are monitored and used for the control of the platform. The robot control system determines the current location of the platform and outputs voice-synthesized navigation instructions or warnings about potential obstacles on the intended route. Good examples of this conception are the HITOMI project [27] and the PAM-AID project [28–31], [W1]. The WHERE system (Walking and moving HElper Robot system), developed at KAIST [32, 33], aims to assist the user in walking and gait rehabilitation and provides body weight support to the user. The system automatically detects the user's intention regarding walking speed and movement direction. A picture of the WHERE system is shown in Fig. 1.1.

1.2.3 Robots for Physical Rehabilitation
Different from the electromechanical devices for passive motion rehabilitation (such as the Artromot system of ORTHOMED Medizintechnik Ltd) [W2], robot-assisted systems for movement rehabilitation can perform various movement programs and can sense the user's force reactions. In relation to this application, we may mention here the robotic device for stroke rehabilitation in the GENTLE/S Project [34], the robotic therapy system called the Stanford Driver's SEAT [35], the MIME project [36] of the VA Palo Alto Rehabilitation R&D Center, the robotic
system for neuro-rehabilitation of the Newman Laboratory at MIT [37], the system of the University of California, Irvine [38], REHAROB project [39, 40], etc. These robotic systems can be easily programmed to implement different rehabilitation exercises that fit to the concrete needs of the particular users and offer flexible adjustment of various movement parameters such as range of flexion and extension, pause between the sequential motions, force, speed, etc.
Fig. 1.1. Walking and moving helper robot system (WHERE) developed at KAIST (Courtesy of Ju-Jang Lee). The system is used for gait rehabilitation and assists users in walking by providing body weight support. The robot automatically detects user intention regarding walking speed and movement direction
1.2.4 Vocational RR
Different from the robots that are primarily designed to help paralyzed users in their everyday movement activities, vocational rehabilitation robots assist the paralyzed user in concrete occupational activities, such as office work [41, 42], programming (RAID and Epi-RAID) [43], work in chemical and biology laboratories (Walky) [44], visual inspection of hybrid integrated circuits (IRVIS) in a real manufacturing environment [45, 46], and operation of a commercial lathe [47].
1.2.5 Emotional Interactive Entertainment Robots
Similar to virtual reality systems for communication with virtual subjects or game play, the emotional interactive entertainment robots (EIAR) [48] are intended to increase emotional comfort and give some emotional relief to people who live alone. Different from some virtual reality software products, where the virtual creatures appear only as images on the computer screen, the entertainment robots are mechatronic devices that exhibit animal-like behavior. Pet robots are one of the latest tendencies in the development of entertainment home robots. Within the HII House project (Home Information Infrastructure House), National Panasonic has demonstrated a conceptual idea for a user-friendly interface for older people in their homes. National Panasonic has developed an electronic home interface and memory jogger designed as cuddly toys – Tama the robocat and Kuma the robot bear. A speech synthesis device can reproduce a number of phrases as a voice response to particular user voice-activated inputs. The device can also be programmed to remind the user to take his/her medication at a particular time. Failure to respond to the device can activate a direct-dial call to the care-giving staff, who can then check whether or not the user's condition is normal. The interactive robot BECKY [49], developed at KAIST, Korea, demonstrates different behaviour in accordance with the emotional status of the user. BECKY recognizes the current emotion of the user by observing the user's response to its actions and considering the environmental information that can affect the user's emotional status. BECKY can adapt to the user's preferences by a learning mechanism based on neural networks. In addition, BECKY can choose and play some music to restore the user's spirits. The seal robot and the cat robot, developed in Japan [50, 51], are recent results of research on the physical interaction between humans and pet robots. The robots were tested with aged people in a hospital in Japan. To the same category we may assign the AuRoRA Project [52, 53], [W3], which studies the application of robots to education and therapy for children with autism, helping them to develop and increase their communication and social interaction skills. The robot is used as an interactive "toy" whose behaviour can change depending on the child's response. We may note that the applications of rehabilitation robots are not limited to those listed above. Contemporary technology achievements are a premise for the design of more sophisticated robots that meet new aspects of users' needs. As a result, we may expect in the near future some new categories of RR that address more sophisticated tasks, such as helping the user in bathing, changing linen, assisting in changing clothes, helping the user recover after a fall on the floor, lifting the patient from the bed to the wheelchair, cooking, etc.
1.3 Specialized Human-Machine Interface
A Human-Machine Interface (HMI) can translate the user's commands for proper operation of the rehabilitation robots, wheelchair, or other home equipment such as lamps, TV sets, telephones, doors, home security systems, etc. in an easy and efficient way. Movement limitations of users with severe paralysis cause a serious problem for reliable control of home-installed assistive systems. As an HMI, head-tracking devices are widely used because of their ability to produce up to three independent proportional signals that correspond to forward-backward head tilting, left-right head rotation, and lateral head tilting. This manner of control is very natural for the user. Recently, some new head-tracking techniques involving facial detection [54] and optoelectronic detection of light-reflective head-attached markers (Tracker2000, Head Mouse) have been proposed [55, 56]. Some new technologies such as eye-movement control, brain control [57], as well as gesture recognition [58] and facial expression recognition give new opportunities for human-friendly interaction between the user and the home-installed devices and may become a basis for new interface solutions in the near future. Voice control is also considered a natural and easy way of operating home-installed devices, but its application is still limited because of the high dependence of the recognition ratio on the specifics of the voice and the ambient noise level. Recently, some new voice recognition algorithms based on neural networks have provided optimism that those drawbacks can be overcome soon and that voice control could be applied in a noisy environment [59]. The soft computing technique1 offers new perspectives on the application of EMG signals in the control of the home environment, allowing successful extraction of informative signal features in cases where a strong noise signal interferes with the useful EMG signals [60].
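As a hedged illustration of the kind of EMG processing involved (a generic sketch, not the specific algorithm of [60] or of any commercial interface), the fragment below extracts a moving root-mean-square envelope from a raw EMG trace and thresholds it into a binary switch command; the sampling rate, window length, and threshold are assumed example values.

```python
import numpy as np

def moving_rms(emg, window):
    """Moving RMS envelope of a 1-D raw EMG signal."""
    squared = np.square(emg.astype(float))
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

def to_switch_command(emg, fs=1000, window_ms=200, threshold=0.05):
    """Return a boolean array: True where the muscle is considered active.
    fs, window_ms and threshold are illustrative values only."""
    window = max(1, int(fs * window_ms / 1000))
    return moving_rms(emg, window) > threshold

# Example with synthetic data: 1 s of baseline noise with a burst of activity
rng = np.random.default_rng(0)
emg = 0.01 * rng.standard_normal(1000)
emg[400:600] += 0.2 * rng.standard_normal(200)
active = to_switch_command(emg)
print(active[:5], active[450:455])
```

A real interface would add artifact rejection and per-user calibration of the threshold, which is where the soft computing techniques mentioned above come in.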
1.4 Rehabilitation Robots in the Smart House Design Intelligent houses for persons with physical disabilities and for older persons should provide their residents with better environmental conditions for independent indoor life. Numerous smart devices and systems, installed in the house, should be capable of being linked with each other, to process information from the inhabitant or from his/her environment, and to make independent decisions and actions for the survival of the resident in cases of emergency. During the last decade, different concepts of a smart house have been developed and tested under various projects. These developments are oriented to different groups of people with special needs (PSN) and refer to different social infrastructure
1 Soft computing differs from conventional (hard) computing in that it is tolerant of imprecision, uncertainty, and partial truth. It includes neural networks, fuzzy logic, evolutionary computation, rough set theory, probabilistic reasoning, and expert systems.
(that can vary from a single house to a village, nursing home, hospital, etc.). Rehabilitation robots are becoming an important part of the recently developed smart-house models. The "Robotic Room" is a project developed at the Sato Laboratory of the Research Center for Advanced Science and Technology at the University of Tokyo [61, 62]. The Robotic Room can be considered a step toward the realization of intelligent houses for persons with physical disabilities and a further development of the idea of the Techno Houses. The environment of the Robotic Room consists of a ceiling-mounted robot arm (called the "long reach manipulator") and an intelligent bed with pressure sensors for monitoring the inhabitant's posture. The modules can monitor the person's respiration without attachment of any additional sensors to the user's body. The robot arm is intended to bring various objects to the user. The developed life-support infrastructure is to comply with the needs of the rapidly aging society. In the "Intelligent Sweet Home" project at KAIST [63], gestures are adopted for the control of home-installed devices. Two rehabilitation robots are part of the scenario. The first of them is mounted on the user's bed and helps the user in activities such as book reading, object replacement, quilt adjustment, massage, scratching, etc. The experimental design employs a Manus robot. A supplementary mechanical module provides translation of the robot along the bed. The second robot is a mobile one, and its role is to perform transportation tasks. (A Pioneer robot from ActivMedia Robotics, LLC was used in that design.) Ceiling-mounted cameras are used for localization of the current robot position. The same cameras detect the positions of the quilt ends. The interface, called "soft remocon", consists of three ceiling-mounted video cameras that detect the orientation of the user's hand. By pointing at the robots, television, or curtains, the user can choose the device that will be controlled. Special light signals confirm the user's selection. After choosing the device, the user gives his/her instruction to it by pre-defined hand gestures. A voice-generated message confirms the recognized gesture command before its execution. A picture of the KAIST intelligent robotic room is presented in Fig. 1.2.
Fig. 1.2. The intelligent robotic room at KAIST, Korea
Figure 1.3 shows the main idea of the “soft remocon”.
Fig. 1.3. Gesture-based human-machine interface. a Soft remocon – The user's hand gesture is automatically recognized by the TV-based image recognition system and the desired action ("Turn the TV on!") is executed; b Pointing recognition system – By pointing to the concrete object, the user specifies the object that should be replaced, and the robot performs the preliminarily defined task
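One simple way to realize the pointing recognition of Fig. 1.3b – a sketch under assumed inputs, not the actual KAIST implementation – is to cast a ray along the estimated hand direction and select the registered device whose known position lies closest to that ray; the device coordinates below are hypothetical.

```python
import numpy as np

# Hypothetical 3-D positions of controllable devices in room coordinates (metres)
DEVICES = {
    "tv":      np.array([3.0, 0.2, 1.2]),
    "curtain": np.array([0.0, 2.5, 1.8]),
    "robot":   np.array([2.0, 3.0, 0.8]),
}

def point_to_ray_distance(point, origin, direction):
    """Shortest distance from a point to the ray origin + t*direction (t >= 0)."""
    d = direction / np.linalg.norm(direction)
    v = point - origin
    t = max(0.0, float(np.dot(v, d)))   # only look in front of the hand
    return float(np.linalg.norm(v - t * d))

def select_device(hand_pos, hand_dir, max_dist=0.5):
    """Return the name of the device the user is pointing at, or None."""
    best, best_dist = None, max_dist
    for name, pos in DEVICES.items():
        dist = point_to_ray_distance(pos, hand_pos, hand_dir)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# Example: hand at (1, 1, 1) pointing roughly toward the TV
print(select_device(np.array([1.0, 1.0, 1.0]), np.array([1.0, -0.4, 0.1])))
```

The confirmation step described above (light signal plus voice message) would then be issued before any command is actually executed.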
1.5 Functional Integration of the Robotic Environment
In some new developments, RRs are components of the intelligent home environment and act in conjunction with other home-installed devices. In such an arrangement, RRs need to be controlled in a coordinated manner. M3S (Multiple Master Multiple Slave) is a communication strategy especially designed for the functional integration of home-installed rehabilitation devices. The M3S specification started from the TIDE Project [64, 65], and later it was developed as an open standard (available for free). M3S allows users to assemble a specific complete modular system and to extend or modify the system later in a plug-and-play manner. In case of emergency, the user can halt the operation of the whole M3S system by a dead-man switch. The efficiency of this integration strategy has been demonstrated by several evaluations with users from some European countries. The ICAN Project (Integrated Control for All Needs) developed further the idea of functional integration of home-installed devices [66], [W4]. The main objective of the project is to propose optimal control over all home-installed devices by a single interface device, such as a joystick or switch input. A portable device, named the function carrier, distributes the commands of a single interface device to various output devices. For example, depending on the setting of the function carrier, the user will be able to control the wheelchair or the rehabilitation robot using one joystick. The function carrier itself can be designed as a portable computer or palmtop PC that is connected to the interface modules of the separate rehabilitation devices. The overall integration method applies the M3S architecture and communication protocols. ICAN is a collaborative project supported by the Telematics Applications Programme of the European Commission [67].
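The function-carrier idea can be sketched as a small dispatcher that routes the events of one input device to whichever output device is currently selected; this is only a schematic illustration with invented class and device names, and it does not reproduce the actual M3S bus protocol or the ICAN software.

```python
class FunctionCarrier:
    """Routes commands from a single input device (e.g. one joystick)
    to the currently selected output device (wheelchair, robot arm, ...)."""

    def __init__(self):
        self.devices = {}     # device name -> handler accepting a command dict
        self.active = None

    def register(self, name, handler):
        self.devices[name] = handler

    def select(self, name):
        if name not in self.devices:
            raise ValueError(f"unknown device: {name}")
        self.active = name

    def emergency_stop(self):
        """Dead-man switch: broadcast a stop command to every registered device."""
        for handler in self.devices.values():
            handler({"type": "stop"})

    def send(self, command):
        if self.active is not None:
            self.devices[self.active](command)

# Example wiring with placeholder handlers
carrier = FunctionCarrier()
carrier.register("wheelchair", lambda cmd: print("wheelchair:", cmd))
carrier.register("robot_arm", lambda cmd: print("robot arm:", cmd))
carrier.select("wheelchair")
carrier.send({"type": "velocity", "forward": 0.3, "turn": 0.0})  # joystick deflection
carrier.emergency_stop()
```

In the real systems the routing runs over a standardized bus with safety supervision, but the single-input, many-output pattern is the same.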
1.6 Commercialization of RR
Commercialization of rehabilitation robots is often impeded by several factors, such as high cost, low efficiency, and existing welfare regulations. Despite these problems, some rehabilitation robots, such as Manus (Netherlands) [68, 69], Handy1 (UK) [20, 21], Raptor (USA) [70], and My Spoon (Japan) [22, 23], have already become commercially available products that can be used daily by an increasing number of end-users, offering them enhanced movement assistance and comfort in operation. Although research in RR continuously explores new service areas for people with disabilities, it seems that people with disabilities have not taken enough advantage of this technology so far. When MRI was introduced for musculoskeletal imaging, clinicians were resistant to learning the technology because it did not offer anything over CT scans; however, it has now been accepted as a vital part of diagnostic imaging. Similarly, it can be expected that, after a certain period of improvement and advertising, RR will become a vital part of the treatment of people with disabilities. In general, the decision for buying a rehabilitation robot (DB) can be expressed as:
DB = f( (efficiency, appearance, safety, easiness in control) / price )     (1.1)
The efficiency of the RR can be assessed if good answers can be found to the following important questions:
1. What is the list of activities that can be assisted with the robot?
2. What is the importance of each performed task for everyday life and the user's movement freedom? For example, scratching is a quite natural human action that is important for the user, but other activities, such as feeding, bathing, and handling fine objects, have dominant priority in everyday life. The impact of the RR on the quality of life of individuals with movement disabilities can be enhanced by a careful choice of the activities where the RR can help. However, the gap between academic laboratory research and clinical practice continues to
exist. In order to significantly advance the application of RR in rehabilitation practice, the link between scientific investigations in rehabilitation robotics and the clinical application of their results must be strengthened.
3. Does the robot contribute to the user's privacy? Can the robot help with activities that are strictly private? For instance, changing clothes and bathing are strictly private activities, and it is important if the robot can be helpful for them.
4. For what kinds of movement impairments can the robot be applied? A larger user group means better chances for manufacturing the robot in large series.
5. What is the robot's contribution to the completion of each task? For instance, when we consider robot assistance in eating, we must clarify whether the robot can prepare the food by itself or whether it is used to serve food that has been prepared in advance by a human helper. Knowing the robot's characteristics, we should also answer the question of how long the robot is expected to be used each day, i.e. how many hours per day the robot will replace the human helper and how many hours of movement independence it will provide.
6. What is the speed of execution of a certain task with the robot compared with the speed of natural motions? For example, if we assume that a non-impaired person who uses his/her hands usually completes a certain task within 20 seconds, what is the expected speed of execution of a similar task by a user operating the robot, i.e. what is the intensity of the help? On the other hand, the speed should be relatively low because of safety considerations, and it is highly influenced by the skills of the particular user in interacting with the robot.
7. What kind of HMI is used? Does it require any sensor attachments to the user?
8. What is the robot's reliability?
9. What are the results from the test evaluations and the users' feedback about the robot?
10. What is the noise level when the robot is in operation?
11. What kind of energy is used for robot operation, and how long can the robot operate on initially fully charged batteries?
12. What space can be accessed with the robot? Can the robot be used to pick up distant objects (for instance, objects located on the floor)?
13. Is the robot gripper precise enough to handle thin objects and objects with sophisticated shapes?
14. What are the weight and dimensions of the robot? In the case of a wheelchair-mounted robot, it should be calculated how the robot attachment affects wheelchair characteristics such as the ability to pass through narrow doors and the shift of the centre of gravity.
15. What are the lifting power of the robot and its maximal speed?

Easiness in control relates to the following questions:
1. How long does a user need to learn how to operate the robot?
2. Does the robot control demand the user's instructions at each moment of the task execution? What is the level of automatic task performance?
3. Does the robot respond only to exact user commands, or can its control system respond in a correct way to tasks that are set with a certain level of "fuzziness"?
For instance, if the user wants to drop a letter into the mailbox, this usually requires many adjustment maneuvers controlled by the user until the gripper becomes correctly oriented with respect to the mail slot. However, the user's task becomes much easier if the gripper adjustment can be done automatically based on sensor information. Other aspects of the automation of some tasks are: the dosage of the force applied to the object; prevention of sliding of the grasped object; and maintaining the initial object orientation during pick-and-place tasks with liquid containers (cup, spoon, etc.). The appearance of the rehabilitation robot is a very important issue from the aesthetic and psychological point of view. An appropriate robot design should make the robot look natural and not attract others' attention. Safety is an item of major consideration in the robot design. Different from other robots, RRs are intended to work in close proximity to the face and head of a user with limited own motions. RRs are complicated mechatronic devices, produced in very small numbers, and currently the price of such robots is relatively high. The user's readiness to meet the asked price depends on two main factors:
1. the amount provided by insurance companies, government agencies, charity organizations, etc.;
2. the end-user's own financial contribution.
In addition to the purchase price of the robot, we should also take into account the maintenance costs of the robot during the period of its usage. The maintenance costs include labor of highly qualified personnel and will increase considerably if the user's house is very far from the technical service center. When we discuss the commercialization of rehabilitation robots, we should take into account that the design of robots for the service of people with movement disabilities is a relatively new area, and currently existing applications cover only a small number of users' needs. Due to the small number of users, the average cost of a RR is quite high. Currently, RRs are only a tiny part of the service robots, which in turn are a small portion of all manufactured robots. Probably the situation will change dramatically when home robots are developed. Currently, many successful research projects give optimism that we are not far from the era when personal robots will become just a part of the home environment, similar to the refrigerator, video, computer, etc. Perhaps in such a changed situation, the RR will differ from ordinary home robots only by some special functions that they will have added. Apart from their primary functions, such as object replacement, cooking, serving meals, house guarding, partnering in playing games, acting as interlocutors, etc., the new generation of RR will possess an advanced human-friendly interface and will be able to help persons with disabilities and aged people in a much more efficient way.
1.7 Some Issues for Futuristic Intelligent Robotic House Model
A research trend in smart house design is concerned with a futuristic residential space equipped with an advanced monitoring system that observes not only the physical status of the inhabitants but also their behavior and emotional condition. It is also expected that one of the biggest reforms in the home arrangement will be the implementation of various service robots that help the inhabitants in many ways. The actions of the RR will be matched with the operation of other home-installed devices. In this sense, the residential space can be considered a multi-agent robotic system. In this paper, we denote this new concept as the "Intelligent Robotic House" (IRH). One example of such an intelligent structure is shown in Fig. 1.4.
Fig. 1.4. A Futuristic Intelligent Robotic House Model. The intelligent robotic house integrates advanced technology for non-invasive health monitoring, movement assistance, and leisure activities, and offers easy human-machine interaction
Each subsystem in the residential space will be capable of solving any local task, while a central control unit will coordinate the work of all subsystems and will build the general control strategy. Performed actions in the intelligent residential space will be initiated either by the user’s command or from analysis of the information from home installed sensors. The human-friendly HMI will be based on general user’s instructions, addressed to the common control system of the IRH. Complex tasks may involve many home installed devices. For instance, user’s command for bringing some food will activate first some special automatic cooking devices to prepare the food; next, the mobile robots will serve the food and another robot will feed the paralyzed user. In this example, the user does not control the robot directly.
Instead, a special computer plans the task and distributes separate subtasks to the different devices. A block diagram of the Intelligent Robotic Home is given in Fig. 1.5. Here, some of the sensor information is used by several subsystems. For example, the information on the user’s temperature and heart rate is used by the health monitoring system, the system for emotional status recognition, and the home environment controller; while the visual information of the home-installed TV cameras is utilized for gesture recognition, face expression recognition, walk pattern recognition, posture recognition, and for home security. The same visual information is also applied for monitoring the health and emotional status of the user.
Fig. 1.5. Functional architecture of a model Intelligent Robotic House. All home-installed devices and robotic systems are controlled by a common control system. The user can use different modalities in order to communicate with his/her house. Some information is used by several subsystems
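The meal scenario above can be sketched as a simple task planner that expands one high-level user command into ordered subtasks for individual appliances and robots; the device names and subtasks below are hypothetical and only illustrate the coordination idea, not the architecture of Fig. 1.5 itself.

```python
# Hypothetical decomposition table: high-level command -> ordered (device, subtask) pairs
TASK_PLANS = {
    "serve_meal": [
        ("cooking_unit", "prepare_food"),
        ("mobile_robot", "fetch_tray_from_kitchen"),
        ("mobile_robot", "deliver_tray_to_user"),
        ("feeding_robot", "assist_feeding"),
    ],
}

def plan_and_dispatch(command, dispatch):
    """Expand a high-level command and hand each subtask to a dispatch function,
    which would forward it to the corresponding home-installed device."""
    for device, subtask in TASK_PLANS.get(command, []):
        dispatch(device, subtask)

# Example with a placeholder dispatcher that simply logs the subtasks
plan_and_dispatch("serve_meal", lambda dev, task: print(f"{dev} <- {task}"))
```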
The example structure includes three service robots: a kitchen service robot, a robot for movement assistance, and an entertainment robot. The entertainment robot is controlled based not only on the information from its local vision sensors but also on the information from the home-installed vision sensors. Audio and video programs in the intelligent home can be selected automatically according to the recognized current state of emotion of the inhabitant. By monitoring the user's behavior while he/she listens to music or watches a video program, the system learns the user's preferences and includes favorite programs in future play lists.
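A minimal sketch of one possible preference-learning mechanism (an assumption for illustration, not the method of the system described here) is an exponential moving average of how positively the user reacts to each programme, with the play list built from the highest-scoring entries.

```python
class PreferenceModel:
    """Keeps a score per programme, updated from observed user reactions
    (reaction in [0, 1], e.g. derived from the recognized emotional state)."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.scores = {}

    def update(self, programme, reaction):
        old = self.scores.get(programme, 0.5)   # neutral prior for unseen programmes
        self.scores[programme] = old + self.learning_rate * (reaction - old)

    def playlist(self, top_n=3):
        ranked = sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)
        return [name for name, _ in ranked[:top_n]]

# Example: two programmes observed with different reactions
model = PreferenceModel()
model.update("quiet piano", 0.9)
model.update("news channel", 0.3)
model.update("quiet piano", 0.8)
print(model.playlist())
```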
It is expected that the intelligent robotic house will be based on advanced techniques for sensing and health monitoring. Some recent vision-based health monitoring systems refer to recognition of the facial expression and the facial color (paleness) as an indication of the current health status of the patient. Emotion monitoring is another challenging subject for providing information on the human state of emotion. An initial result exploring this idea can be found in [71]. New interface design will offer more convenient, human-friendly, natural, and autonomous, or simply, “intelligent” ways of human-machine interaction. Facial expressions and gestures are indicative of human intention and emotion. According to some statistics [72], 93% of messages in the face-to-face communication situation are transmitted via facial expression, gestures, and voice tone, while only 7% are via linguistic words. Home-installed RR should be highly efficient and capable of responding to the user’s exact commands and also to the user’s intentions with a high level of “fuzziness” as well, treating and executing them in a proper way. It is expected that the new generation interface devices will be able to adapt to the user’s specifics and will be highly resistant to various artifacts. We anticipate that a major reform of the future residential space will take place by the advent of the use of various service robots. Low cost robots with increased functionality and high reliability will extend the range of activities in which the persons with disabilities will be supported. The IRH will implement strong fusion among different sources of sensor information. Collected data from the wearable sensors will be used not only for monitoring of the inhabitant’s health status but the same information can be also considered in the control of the home environment. For example, the room temperature can be adjusted in consideration of the current health condition of the user.
1.8 Concluding Remarks
Rehabilitation robots have become an important goal of development with strong social and economic motivations. Their realization will render a powerful solution for many existing problems of a society with increasing numbers of aged, physically weak, and disabled people and will make human life more pleasant and easier. In this paper, we have tried to formulate and classify some recent tendencies of the research in the area so as to present our vision of the future development of the technology and its organization. Through examples, we have shown that the development strategy of intelligent home-installed technology has changed from the design of separate devices (at the beginning) to a form of integrated system arrangement where many home-installed devices communicate with each other and synchronously serve/monitor different parameters of the house. Because the problem of the aging society is becoming common to many countries, the research problems of smart-home technology with service robots have already become an important subject of international research. Diverse forms of international cooperation, such as international
conferences, international joint projects for development, and result evaluation, are in progress. From the design point of view, we observe that the development strategy is also experiencing a rapid evolution. Although the first designs of home-installed devices were hardware-oriented, the recent strategies are mainly oriented toward intelligent algorithms, where the software solution takes the main part of the whole design. In addition, we have stressed the fact that the future RRs will include human-centered technologies in which important technological components provide human-friendly interaction with the user. The home-installed technology will further be oriented toward a custom-tailored design where the modular-type components of the smart house will meet the individual user's needs, emotional characteristics, and preferences.
References
1.
Saito M (2000) Expanding welfare concept and assistive technology. In: Proc. IEEK Annual Fall Conference, Ansan, Korea 2. Warren S, Craft R (1999) Designing smart health care technology into the home of the future. In: Proc. 1st Joint BMES/EMBS Conf., Atlanta, GA, USA, p 677 3. Lindström JI (2001) From R&D to market products – the TIDE Bridge phase. In: Marinček C, Bühler C, Knops H, Andrich R (eds) Assistive Technology – Added Value to the Quality of Life. Amsterdam, The Netherlands: IOS Press, pp 688–692 4. Harwin W, Rahman T, Foulds R (1995) A review of design issues in rehabilitation robotics with reference to North American research. IEEE Trans. Rehab. Eng 3(1): 3–13 5. Stefanov D (1994) Model of a special orthotic manipulator. Mechatronics 4(4): 401–415 6. Kwee H (2001) Integrating control of MANUS and wheelchair. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, France, pp 107–112 7. Rosier JC, van Woerden JA, van der Kolk LW, Driessen BJF, Kwee HH, Duimel JJ, Smits JJ, Tuinhof de Moed AA, Honderd G, Bruyn PM (1991) Rehabilitation robotics: the MANUS concept. In: Proc. 5th Int. Conf. Advanced Robotics, Pisa, Italy, pp 893 – 898 8. Song WK, Lee H, Bien Z (1999) KARES: Intelligent wheelchair-mounted robotic arm system using vision and force sensor. Robotics and Autonomous Systems 28(1): 83–94 9. Bien Z, Kim DJ, Stefanov DH, Han JS, Park HS, Chang PH (2002) Development of a novel type rehabilitation robotic system KARES II. In: Keates S, Langdon P, Clarkson PJ, Robinson P (eds) Universal Access and Assistive Technology. London, UK, Springer, pp 201–212 10. Pauly M (1995) TAURO – Teilautonomer Serviceroboter für Überwachungsaufgaben. In: Dillmann R, Rembold U, Lüth T (eds) Autonome Mobile Systeme. Berlin, Germany: Springer, pp 30–39 11. Prassler E, Scholz J, Fiorini P (2001) A robotic wheelchair for crowded public environments. IEEE Robotics and Automation Magazine 7(1): 38– 45 12. Baumgartner E, Skaar S (1994) An autonomous vision – based mobile robot. IEEE Trans. Automat. Control 39(3): 493–502
13. Yoder JD, Baumgartner E, Skaar S (1996) Initial results in the development of a guidance system for a powered wheelchair. IEEE Trans. Rehab. Eng 4(3): 143 –302 14. Gomi T, Griffith A (1998) Developing intelligent wheelchairs for the handicapped. In: Proc. Evolutionary Robotics Symp., Tokyo, Japan, pp 461–478 15. Wakuami H, Nakamura K, Matsumara T (1992) Development of an automated wheelchair guided by a magnetic ferrite marker lane. J. of Rehab. Research and Development 29(1): 27–34 16. Wang H, Kang CU, Ishimatsu T, Ochiai T (1996) Auto navigation on a wheelchair. In: Proc. 1st Int. Symp. Artificial Life and Robotics, Beppu, Oita, Japan 17. Kreutner M, Horn O (2001) Contribution to rehabilitation mobile robotics: Localization of an autonomous wheelchair. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, Paris, pp 207–214 18. Yanco H (1998) Wheelesley: A robotic wheelchair system: indoor navigation and user interface. In: Mittal VO, Yanco HA, Aronis J, Simpson R (eds) Assistive Technology and Artificial Intelligence – Application in Robotics, User Interfaces and Natural Language Processing. Heidelberg, Germany: Springer, pp 256–286 19. Stefanov D (1999) Integrated control of a desktop mounted manipulator and a wheelchair. In: Proc. of the Sixth International Conference on Rehabilitation Robotics (ICORR'99), Stanford University, USA, July 1-2, 1999, pp 207 - 214 20. Finney R, Topping M (1997) After sales care provision for the Handy 1 robotic aid to independence. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Bath, UK 21. Topping M, Smith J (1999) The development of Handy 1, a robotic system to assist the severely disabled. International Conference on Rehabilitation Robotics (ICORR’99), 1999, Stanford, CA, pp 244 – 249 22. Ishii S, Tanaka S, Hiramatsu F (1995) Meal assistance robot for severely handicapped people. Proceedings IEEE Robotics and Automation Conference, pp 1308-1313 23. Soyama R, Ishii S, Fukase A (2003) The development of meal-assistance robot 'MySpoon'. Proceedings of the 8th International Conference on Rehabilitation Robotics, pp 88–91 24. Bagchi S, Kawamura K (1994) ISAC: A robotic aid system for feeding the disabled. AAAI Spring Symposium on Physical Interaction and Manipulation, March 1994 25. Kawamura K, Bagchi S, Iskarous M, Bishay M (1995) Intelligent robotic systems in service of the disabled. IEEE Transactions on Rehabilitation Engineering 3(1): 14 –21 26. Kawamura K, Peters II RA, Wilkes MW, Alford WA, Rogers TE (2000) ISAC: foundations in human-humanoid interaction. IEEE J Intelligent Systems 15: 38–45 27. Mori H, Kotani S, Kiyohiro N (1998) HITOMI: Design and development of a robotic travel aid. In: Mittal VO, Yanco HA, Aronis J, Simpson R (eds) Assistive Technology and Artificial Intelligence – Application in Robotics, User Interfaces and Natural Language Processing. Heidelberg, Germany: Springer, pp 221 – 234 28. Lacey G, Mac Namara S, Dawson-Howe KM (1998) Personal adaptive mobility aid for the infirm and elderly blind. In: Mittal VO, Yanco HA, Aronis J, Simpson R (eds) Assistive Technology and Artificial Intelligence – Application in Robotics, User Interfaces and Natural Language Processing. Heidelberg, Germany: Springer, pp 211–220 29. Lacey G, Dawson-Howe KM (1997) Evaluation of robot mobility aid for older persons blind. In Proc. Symp. Intelligent Robot Systems, Stockholm, Sweden 30. MacNamara S, Lacey G (1999) A robotic walking aid for frail visually impaired people. In: Proc. 6th Intl. Conf. 
Rehabilitation Robotics, Stanford, CA, USA, pp 163–169
31. MacNamara S, Lacey G (1999) PAMAID: a passive robot for frail visually impaired people. In: Proc. RESNA Annual Conf., Long Beach, CA, USA, pp 358–361 32. Lee CY, Seo KH, Oh C, Lee JJ (2000) A system for gait rehabilitation with body weight support: Mobile manipulator approach. Journal of HWRS-ERC 2(3): 16–21 33. Lee CY, Seo KH, Kim CH, Oh SK, Lee JJ (2002) A system for gait rehabilitation: Mobile manipulator approach. Proc. of the 2002 IEEE International Conference on Robotics&Automation, Washington DC, May 2002, pp 3254–3259 34. Harwin W, Loureiro R, Amirabdollahian F, Taylor M, Johnson G, Stokes E, Coote S, Topping M, Collin C, Tamparis S, Kontoulis J, Munih M, Hawkins P, Driessen B (2001) The GENTLE/S project: A new method of delivering neuro-rehabilitation. In: Marinček C, Bühler C, Knops H, Andrich R (eds) Assistive Technology–Added Value to the Quality of Life. Amsterdam, The Netherlands: IOS Press, pp 36–41 35. Johnson MJ, Van der Loos HFM, Burgar CG, Shor P, Leifer LJ (1999) Designing a robotic stroke therapy device to motivate use of the impaired limb. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, France, pp 123–132 36. Lum PS, Burgar CG, Shor PC, Majmundar M, Van der Loos HFM (2002) Robotassisted movement training compared with conventional therapy techniques for the rehabilitation of upper limb motor function after stroke. Archives of PM&R, 83: 952 – 959 37. Krebs HI, Hogan N, Aisen ML, Volpe BT (1998) Robot-Aided Neurorehabilitation. IEEE Trans. Rehab. Eng 6(1): 75–86 38. Lum PS, Reinkensmeyer DJ, Lehman SL (1993) Robotic assist devices for bimanual physical therapy: Preliminary experiments. IEEE Trans. Rehab. Eng 1(3): 185–191 39. Arz G, Toth A (2001) REHAROB: A project and a system for motion diagnosis and robotized physiotherapy delivery. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age, ICORR’2001, 7th International Conference on Rehabilitation Robotics. IOS Press, Amsterdam, Evry, France, pp 93–100 40. Toth A, Arz G, Varga Z, Varga P (2001) Conceptual design of an upper limb physiotherapy system with industrial robots. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age, ICORR’2001, 7th International Conference on Rehabilitation Robotics. IOS Press, Amsterdam, Evry, France, pp 109–116 41. Van der Loos HFM, Hammel J, Lees DS, Chang D, Perkash I (1990) Field evaluation of a robot workstation for quadriplegic office workers. Eur. Rev. Biomed. Tech 5(12): 317– 319 42. Van der Loos HFM (1995) VA/Stanford rehabilitation robotics research and development program: Lessons learned in the application of robotics technology to the field of rehabilitation. IEEE Trans. Rehab. Eng 3(1): 46–55 43. Eftring H (1994) Robot control methods and results from user trials on the RAID workstation. In: Proc. 4th Int. Conf. Rehabilitation Robotics, Wilmington, DE, USA, pp 97– 101 44. Neveryd H, Bolmsjö G (1995) WALKY, an ultrasonic navigating mobile robot for persons with physical disabilities. In: Proc. 2nd TIDE Congress, Paris, France, pp 366–370 45. Keates S, Clarkson PJ, Robinson P (2001) Designing a usable interface for an interactive robot. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, France, pp 156–162 46. Clarkson PJ, Keates S, Dowland R (1999) The design and control of assistive devices. In: Proc. Int. Conf. Engineering Design, pp 425–428
47. Oderud T, Tyrmi G (2001) One touch is enough… In: Marinček C, Bühler C, Knops H, Andrich R (eds) Assistive Technology–Added Value to the Quality of Life. Amsterdam, The Netherlands: IOS Press, pp 144–147 48. Shibata T, Tanie K (2001) Emergence of affective behaviours through physical interaction between human and mental commit robot. Journal of Robotics and Mechatronics 13(5): 505–516 49. Do JH, Park KH, Bien Z, Park KC, Oh YT, Kang DH (2001) A development of emotional interactive robot. In: Proc. 32nd Int. Symp. Robotics, Seoul, Korea, pp. 544–549 50. Shibata T, Tashima T, Tanie K (1999) Emergence of emotional behaviour through physical interaction between human and robot. In: Proc. IEEE Int. Conf. Robotics and Automation, Detroit, MI, USA, pp 2868–2873 51. Shibata T, Tashima T, Tanie K (1998) Emergence of emotional behaviour through physical interaction between human and pet robot. In: Proc. IARP 1st Int. Workshop Humanoid and Human Friendly Robotics, Tsukuba, Japan, pp 1–6 52. Dautenhahn K (2003) Roles and functions of robots in human society-implications from research in autism therapy. Robotica 21: 443–452 53. Werry I, Dautenhahn K, Harwin W (2000) Challenges in rehabilitation robotics: A mobile robot as a teaching tool for children with autism. Proc. International Workshop Recent Advances in Mobile Robots, June 29th 2000, De Montfort University, Leicester, UK 54. Xu G, Sugimoto T (1998) Rits eye: a software-based system for realtime face detection and tracing using pan-tilt-zoom controllable camera. In: Proc. 14th Int. Conf. Pattern Recognition, Brisbane, Australia, pp 1194–1197 55. MacBride C, Fleming B, Tanberg BJ (2001) Interdisciplinary team approach to AAC assessment and intervention. In: Proc. 16th Annual CSUN’s Conf. Technology and Persons with Disabilities, Los Angeles, CA 56. Evans DG, Drew R, Blenkhorn P (2000) Controlling mouse pointer position using an infrared head-operated joystick. IEEE Trans. Rehab. Eng 8(1): 107–117 57. Levine SP, Huggins JE, BeMent SL, Kushwaha RK, Schuh LA, Rohde MM, Passaro EA, Ross DA, Elisevich KV, Smith BJ (2000) A direct brain interface based on eventrelated potentials. IEEE Trans on Rehab. Eng 8(2): 180–185 58. Bien Z, Kim JB, Jung JW, Park KH, Bang WC (2000) Issues of human-friendly manmachine interface for intelligent residential system. In: Proc. 1st Int. Workshop Humanfriendly Welfare Robotic Systems, Taejon, Korea, pp 10–14 59. Kasabov N, Kozma R, Kilgour R, laws M, Taylor J, Watts M, Gray A (1997) A methodology for speech data analysis and a framework for adaptive speech recognition using fuzzy neural networks. In: Proc. 4th Int. Conf. Neural Information Processing, Dunedin, New Zealand, pp 1055–1060 60. Han JS, Stefanov DH, Park KH, Lee HB, Kim DJ, Song WK, Kim JS, Bien Z (2001) Development of an EMG-based powered wheelchair controller for users with high-level spinal cord injury. In: Proc. Int. Conf. Control, Automation and System, Jeju Island, Korea, pp 503–506 61. Nakata T, Sato T, Mizoguchi H, Mori T (1996) Synthesis of robot-to-human expressive behaviour for human-robot symbiosis. In: Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Minneapolis, MN, USA, pp 1608–1613
1 Advances in Human-Friendly Robotic Technologies
23
62. Eguchi I, Sato T, Mori T (1996) Visual behaviour understanding as a core function of computerized description of medical care. In: Proc. IEEE/RSJ Int. Con. Intelligent Robots and Systems, Minneapolis, MN, USA, pp 1573–1578 63. Bien Z, Park KH, Bang WC, Stefanov DH (2002) LARES: An intelligent sweet home for assisting older persons and the handicapped. In: Proc. 1st Cambridge Workshop Universal Access and Assistive Technology, Cambridge, UK, pp 43–46 64. Dallaway J, Jackson R, Timmers P (1995) Rehabilitation robotics in Europe. IEEE Trans. Rehab. Eng 3(1): 35–45 65. Nelisse M (1995) M3S: A general-purpose integrated and modular architecture for the rehabilitation environment. In: Proc. 2nd Int. CAN Conf., London, UK, pp 10.2–10.9 66. Willems C (1999) ICAN Integrated Communication and Control for All Needs; Assistive technology on the threshold of the new millennium. In: Proc. 5th AAATE Conf., Dusseldorf, Germany 67. Allen B (1999) Bus systems in a three tiered world, experiences from the ICAN project. In: Proc. Int. Conf. Smart Homes and Telematics, Eindhoven, Netherlands 68. Eftring H, Boschian K (1999) Technical results from MANUS user trials. In: Proc. 6th Int. Conf. Rehabilitation Robotics, Stanford, CA, USA, pp 136–141 69. Gelderblom GJ, de Witte L, van Soest K, Wessels R Dijcks B, van’t Hoofd W, Goossens M, Tilli D, and van der Pijl D (2001) Evaluation of the MANUS robot manipulator. In: Marinček C, Bühler C, Knops H, Andrich R (eds) Assistive Technology – Added Value to the Quality of Life. Amsterdam, The Netherlands: IOS Press, pp 268–273 70. Mahoney R (2001) The Raptor wheelchair robot system. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, Paris, pp 135–141 71. Bien Z, Do JH (2000) Interactive robot for emotion monitoring. In: Proc. Korea-Japan Joint Workshop on Network based Human Friendly Mechatronics and Systems, Seoul, Korea, pp 62–65 72. Mehrabian A (1968) Communication without words. Psychology Today 2(9): 52–55.
2 Rehabilitation Robotics from Past to Present – A Historical Perspective

Michael Hillman
2.1 Introduction

Popular culture presents the image of a robot as a mechanical, humanoid device, often evil in intent. Those with a slightly more informed technical knowledge would point to the industrial robot arm of the automotive factory. Even professional engineers are affected to some extent or other by these popular images. In attempting to survey four decades of the development of rehabilitation robotics it is wise to start from an official definition. The Robot Institute of America defined a robot as "a re-programmable, multifunctional manipulator designed to move material, parts, tools or specialized devices through variable programmed motions for the performance of a variety of tasks." Although this definition was obviously intended for industrial robots, it identifies the key features of programmability, flexibility and movement. Robotics has obviously moved on from that early definition. While robots were initially employed as handling machines in factories, their application is now much wider. In 1987 the Department of Trade and Industry in the UK launched an Advanced Robotics initiative to encourage the wider use of robotics in areas other than factories. It used the following definition of advanced robotics: "The integration of enabling technologies and attributes embracing manipulators, mobility, sensors, computing (IKBS, AI) and hierarchical control to result ultimately in a robot capable of autonomously complementing man's endeavours in unstructured and hostile environments." This is a very wide-ranging definition, but one of its key features is the "integration of technologies". The aim of this integration of advanced technologies is to produce a device that can operate autonomously in an environment which may be unstructured and/or hostile. More recently the term "mechatronics" has been coined as "the synergistic combination of precision mechanical engineering, electronic control and systems thinking in the design of products and processes" (Festo Didactic GmbH Co 1998). In many ways mechatronics and robotics cover much the same territory, and it is the application of this level of technology to "the restoration of a person to an optimal level of physical, mental, and social function and well being" that we are concerned with here. Not just in the UK, but across the world, there are now many examples of "advanced robotics". What is key is not just the level of technology, but that the robots
have moved out of the factory and into the wider, unstructured and sometimes hostile world. The examples cover a wide area and include devices for the exploration both of Mars and of the Earth's oceans, as well as more domestic applications such as filling a car with fuel, mowing the lawn and vacuuming the floor. The use of robotics in rehabilitation is another major area in which robots are coming "out of the factory". The use of robots in rehabilitation is often associated with assistive devices, some used in a vocational environment, some as aids to daily living or, more specifically, as feeding devices. There are however other areas where robotic technology has been applied or might be. One such area is mobility, increasing the usefulness of more traditional powered wheelchairs. Prosthetics and orthotics is another area where robotics has already been applied. A major area at present is robot-mediated therapy. Education is an area where robotic technology has been applied in some instances, and where modern robotic toys might have an impact alongside equipment more specifically aimed at the disabled. Finally, communication is an area which has used advanced computer technology, but where there is scope for incorporating more mechanical technologies as well.
This survey covers 40 years of the development of rehabilitation robotics. Rather than attempting to mention every research group and project, this survey is deliberately selective. The emphasis is on those projects that represent the first in a line of development, those that seem particularly innovative, those that have a commercial significance and those which have been used by the greatest number of real-life users. The choice of projects is also unashamedly personal in that these are the projects that have most influenced the author. The term "robotics" should be interpreted as widely as possible, in an inclusive rather than an exclusive way. The definition above of mechatronics is the one that most simply describes the scope of our survey.
2.2 Earliest Work

Most reviews of rehabilitation robotics cite the work at the CASE Institute of Technology in the early 1960s [24] as the first application of robotics technology to a rehabilitative manipulator. This was a powered orthosis with four degrees of freedom. The exoskeletal structure supported the user's paralysed arm while performing pre-recorded manipulative tasks, these sequences being taught by an able-bodied assistant during training. Interestingly, whereas so much current work uses electrical actuators, this device used pneumatic actuators, with closed-loop position control achieved using incremental encoders. Another early project was the Rancho Los Amigos "Golden Arm" (Fig. 2.1) [24]. This was a seven degree of freedom battery-powered electric orthosis. Several versions were built, and at least one was wheelchair mounted. It was controlled using a form of joint-by-joint control, which was found during evaluation to be not very intuitive.
Fig. 2.1. Rancho Golden Arm
In considering these two early devices it is instructive to put them in the context of the technology of the day. The early 1960s was a time when the integrated circuit had only just been invented, and it was still ten years before the microprocessor. Computers were beginning to come down in size from room size to a more compact cabinet. Neil Armstrong and Buzz Aldrin were not to set foot on the moon until the end of the decade, in July 1969.
2.3 Assistive Robotics

Assistive robotics can be divided into three main areas based on the mobility of the device: firstly, those that operate at a fixed site; secondly, those that may be moved around from one location to another; and thirdly, those devices that are attached to a wheelchair. Many people [30] have surveyed the potential uses of robotics to assist people with physical disabilities, and the following areas have been identified:
• Eating & drinking
• Personal hygiene – washing, shaving, applying make-up
• Work & leisure – particularly computer use, equipment such as hi-fi and video systems, also games
• Mobility – opening doors, windows
• General reaching – up to shelves, down to the floor.
These are all valuable areas. There are many devices dedicated to specific tasks, some designed for people with disabilities, others readily available on the general market. In this case the choice and installation of such devices (not without consultation with the user) predetermines what tasks he or she can carry out. Most assistive robots, however, are designed as a general-purpose tool and intended to be used as the user desires, rather than for any predetermined task. Independence only comes when the user can decide at any time what activity they would
like to do. A good example of this from our own work is the user who gained great satisfaction from using his robot to open his Christmas presents.

2.3.1 Fixed Site

Apart from the two orthotic devices mentioned above, work in the more specific area of assistive rehabilitation robotics started in the mid 1970s. One of the earliest projects was the workstation-based system designed by Roesler [32] in Heidelberg, West Germany. The purpose-designed, five degree of freedom manipulator was placed in a specially adapted desktop environment, using rotating shelf units. Another early workstation system was that of Seamone and Schmeisser [33] at the Johns Hopkins University, supported by the Veterans Administration in the United States from 1974. The arm of this system was based around an electrically powered prosthetic arm mounted on a horizontal track. Various items of equipment (e.g. telephone, book rest, computer discs) were laid out on the simple but cleverly designed workstation table and could be manipulated by the arm using pre-programmed commands. The system thus required that items be in precisely known positions, as there were no sensors on the arm. User input was by simple scanning-switch selection of routines on an LED display. In France, an early project was the Spartacus robot [21], based around a large high-power manipulator from the nuclear industry. The table-mounted arm was able to reach down to the floor or up to a shelf. User control was by an analogue input, particularly a head-position-operated joystick. Safety was of particular importance with this relatively high-power device, and early training of users was done with the arm behind a clear screen. This project is of particular significance in that it led to the Manus project in Holland and the Master project in France. In any review of work in rehabilitation robotics there must be recognition of the continuing work at Stanford University, initiated by Larry Leifer in the Department of Mechanical Engineering, with Machiel van der Loos at the Palo Alto VA Center. They built four generations of DeVAR (Desktop Vocational Assistive Robot) systems [11, 38]. DeVAR III was a tabletop system laid out for daily living tasks, while DeVAR IV was used in a vocational environment. The DeVAR IV system (Fig. 2.2) used the Puma 260 arm, a standard industrial manipulator, mounted upside down on an overhead track, thus making much better use of the available space. This highlights the problem of using a commercially available robot: the work environment has to be tailored around the arm. How this is achieved can be crucial to how successful the system is. The usefulness of the system depends on how many "tasks" can be laid out within the work environment. Whenever a certain task is not available to the robot or the user, the usefulness of the system comes to a halt and a colleague has to be called in to intervene. Obviously in an office environment this can be allowed for, but the ability to work independently has been compromised.
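The scanning-switch selection just mentioned is simple enough to sketch. The following Python fragment is only an illustrative sketch of the general technique and not code from the JHU/APL system: a highlight steps automatically through a list of pre-programmed routines, and a press of the user's switch selects the one currently shown. The routine names, timings and switch interface are assumptions.

```python
import time

# Illustrative sketch of scanning-switch selection: the highlight advances
# automatically and one switch press picks the highlighted routine.
# Routine names and timings are assumed for illustration only.
ROUTINES = ["answer telephone", "turn page", "load disc", "present book rest"]
SCAN_PERIOD_S = 1.5   # dwell time on each item before advancing

def switch_pressed() -> bool:
    """Placeholder for reading the user's switch (assumed hardware interface)."""
    raise NotImplementedError

def scan_select(routines=ROUTINES, period=SCAN_PERIOD_S) -> str:
    """Cycle through routines until the switch is pressed; return the choice."""
    i = 0
    while True:
        print(f"> {routines[i]}")           # would be shown on the LED display
        deadline = time.monotonic() + period
        while time.monotonic() < deadline:
            if switch_pressed():
                return routines[i]          # selected routine is then executed
            time.sleep(0.02)                # poll the switch at ~50 Hz
        i = (i + 1) % len(routines)         # advance the highlight
```

The slow but predictable timing is what makes such an interface usable with a single switch, at the cost of waiting for the desired item to come around.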
Fig. 2.2. DeVAR IV workstation
Besides the engineering work, this project is notable for its use in a real working environment, for eight hours a day, at the Pacific Gas and Electric Company by Bob Yee. The Stanford group has also done a lot of work to justify the high cost of such a system ($50,000–$100,000) in terms of the financial saving relative to the cost of employing a human assistant. While the Stanford system used the Puma arm, initially designed for an industrial application, many other projects have been based around the RT series robot. The RTX robot was designed by Tim Jones of UMI (Universal Machine Intelligence, UK) in 1985. The arm was of what is known as a modified SCARA configuration. This has one vertical degree of freedom and two rotational joints with vertical axes that allow the main arm to move in a horizontal plane. This configuration has proved particularly appropriate for rehabilitation applications. One of the early application areas promoted by the manufacturers was its use in rehabilitation (another application area was in the laboratory). The original RTX robot was followed by higher-powered versions, the RT100 and the RT200. A mobile version, the R-Theta, was also investigated but never commercialised. One early use of the RTX was a workstation system developed by Caroline Fu [9] of Boeing in Seattle for one of their employees. Interestingly, in a number of cases the impetus for using robotics has come from employers, to enable them to retain valuable staff after accidents and to keep their expertise within the company. Another significant use of the RT series robot was by the Master project [5] in France. Their approach to maximising the workspace was to mount the arm at the back of the workstation. This gave good visibility of the whole work area. In setting up any workstation it is vital that the arm does not obscure visibility of the environment, in this case either the large vertical column or the bulk of the arm or gripper in different orientations. One significant area of work using the RT series robot has been not just in designing workstations but also in developing and modifying the software and hardware for rehabilitation applications. The use of interchangeable or alternative grippers is one area, and this has been a part of the Master system. Several
rehabilitation-friendly programming languages have been developed for the RT robots, although different drivers could allow the software to be applied to other robots. One example is CURL, the Cambridge University Robot Language [4]. The RT robots were used as the basis for the RAID project [17], funded by the European Commission. The RAID project, as with all projects under the TIDE initiative (Telematics for the Integration of Disabled and Elderly people), was collaborative and multinational. Amongst the partners were Oxford Intelligent Machines (OxIM, UK), who were then the manufacturers of the RT robots, and the Master project team. The RT series robot was set up in an extended workstation. This is another example of the workspace of the basic robot being extended, in this case by mounting the robot on a horizontal track, enabling it to retrieve paperwork etc. from shelving units. The outcome of the RAID project was commercialised by OxIM (who have now ceased trading) and Afma Robots in France.
So far all the systems we have looked at have used commercially available robots, whether the Puma or the RT robot, with the attendant problem of integrating such robots into a work environment. The other approach is to design and build a manipulator to best suit the environment. The main advantage is that the robot is designed to be best suited to the likely tasks and working environment, rather than modifying the tasks and environment to suit the robot. Commercially there is the advantage of not being reliant on the continuing availability of a commercial device. Against this there are several possible disadvantages. The resulting device, produced in small numbers, is likely to be more expensive than one made for a wider market in higher volumes. This disadvantage may be countered by making compromises to the design appropriate to the specific system requirements. There is likely to be a longer development time and higher cost. Any finished device needs to be of a quality and reliability comparable with the best commercial standards.
Fig. 2.3. Regenesis workstation
This was the approach taken at the Neil Squire Foundation in designing their Regenesis manipulator (Fig. 2.3) [3] to best suit the environment and tasks. Their manipulator was based on a horizontal beam, around which an extending arm could translate and rotate. This gave access to a large working volume and could be mounted at the back of a desk, or across a bed for example. This device was, for a while, made commercially available.
Fig. 2.4. Handy 1
In terms of numbers sold, probably the most successful rehabilitation robot is the Handy 1 (Fig. 2.4) [36], sold by Rehab Robotics (UK). The project originated from the Master's degree project of Mike Topping at the University of Keele (UK). Topping had a neighbour, a young boy named Peter, who had severe problems eating. He used a cheap educational robot to produce a device that allowed Peter to eat independently, at the speed he chose, for the first time. The company has sold at least 250 systems, with very positive feedback on its effectiveness. One of the strengths of this project, and one of the reasons for its success, is that it stemmed from the real problems of an individual client. Another feature is that originally it made no attempt to be multi-functional. However, more recently extensions of the system have been developed for it to be used for applying make-up, painting, washing and shaving. The Handy 1 is essentially a feeding aid based around a robot. By comparison, the Winsford feeder (RTD-ARC, New Jersey, US) has been available for a long while as a feeding aid, but only recently has it been promoted as being a robotic device. With around 2000 units having been sold, its commercial impact is obviously greater than that of the Handy 1. The MySpoon (Secom Co. Ltd, Tokyo, Japan) is available in Japan and is similar in concept. In the UK the Neater Eater (Buxton, UK) was initially designed as a purely manual device, utilising a damped arm, to assist those with a tremor to eat. More recently a powered, programmable version has emerged which may be considered to be a robotic device.

2.3.2 Mobile Robots

Compared with the workstation-based devices, the number of mobile assistive robots is very small and their commercial impact has been negligible. One of the best known is probably the MoVAR system [39] from Stanford University, which is essentially a DeVAR on wheels. The mobile base had sophisticated omni-directional "mechanum" wheels. The MoVAR was controlled
from a console with several monitor screens giving the user feedback from an on-board camera, as well as a map of the environment and a control environment. It was a very capable system, but at the time it was not packaged in a way which would appeal to a general user. With advances in computer technology the idea could be revisited and a more marketable product achieved. As the founder of the Unimation Company, Joe Engelberger has been called the father of robotics. He has had an interest in service robotics and particularly in medical applications. He proposed [6] the use of his Helpmate robot as a fetch-and-carry robot for a disabled person. However, while the Helpmate has been successfully used for moving supplies around a hospital, it has not been used successfully in a rehabilitation application. The cluttered environment of a home is not appropriate for such a mobile robot. More recently the KARES II robot system [2] has been developed at KAIST in Korea. This is a wide-ranging project investigating various modes of user control, including the use of visual servoing, an eye mouse and a haptic suit, as well as the design of the robot arm itself. The arm has been mounted in a number of different configurations, but primarily on a remote-controlled mobile base.
Fig. 2.5. Wessex robot
A different approach was investigated at the Bath Institute of Medical Engineering with their Wessex robot (Fig. 2.5) [13]. Having identified the shortcomings of a fixed-site robotic workstation in a domestic environment, the team designed a non-powered mobile base. While a workstation system works well in a vocational environment, it may be very restricting in a domestic environment, where different tasks are normally carried out in different rooms of the home: for example, washing in the bathroom, listening to music in the living room and eating in the kitchen or dining room. The mobile base was intended to be moved from room to room by a carer, or might be clipped to the front of a wheelchair.
2.3.3 Wheelchair Mounted Manipulators

If a mobile robot can be seen as a mechanical servant or slave, the concept of mounting a manipulator onto a wheelchair provides what may be termed a third arm. One very early wheelchair-mounted robot was designed by Carl Mason [27] at the VA Prosthetics Center in New York. Mechanically it was a well-engineered system, able to reach from the floor to the ceiling. Apparently, though, it was too springy, and the hook end effector (as used by many prosthetic hand wearers) was not very successful. The control was quite basic, being operated on a joint-by-joint basis, which has been found to be unintuitive and time-consuming. By comparison, placing a simple educational robot on a wheelchair is the most basic of arrangements. Zeelenberg's son was diagnosed as having muscular dystrophy. His parents wanted him to make the fullest development of his abilities and skills. His father obtained an educational robot and simply mounted it on the wheelchair tray [41]. With muscular dystrophy it is possible to use a small push-button controller, and this was chosen as the input device. This wouldn't pretend to be a technically sophisticated device, but it arose out of a real need and close collaboration between the developer and user. Amongst the uses of the robot are opening the door, moving chess pieces and using the telephone. One reason why this is a very significant part of our historical survey is that out of this work came the Manus project.
Fig. 2.6. Manus
Manus (Fig. 2.6) [22] is extremely well known and, although it hasn't sold as many units as the Handy 1, it is at least as well respected. The work started as far back as 1984, involving collaboration between the IRV (Institute for Rehabilitation Research) in Hoensbroek, led by Hok Kwee, the Institute for Applied Physics and the TNO Product Centre in Delft, and the Netherlands Institute of Preventive Health Care. It is a sophisticated robotic manipulator able to be mounted on a number of different wheelchairs. It has seven degrees of freedom, as well as a simple gripper.
The extra degree of freedom extends its vertical range, while allowing it to fold compactly at the side of the wheelchair. The mounting of the arm, protruding from the side of the chair, raises the crucial issue for all wheelchair-mounted robots of integrating the arm with the wheelchair, not least to ensure there is no unacceptable increase in overall width, or compromise of the stability of the wheelchair. Many units have been sold to rehabilitation centres, with much development going on around the system, but more importantly there have been significant sales to end-users. Manus is seen as the standard against which other rehabilitation robotic systems are measured and has been commercialised by Exact Dynamics. Further development of Manus is being carried out both by the manufacturers and under the European Commanus project [7].
Fig. 2.7. Raptor
The other wheelchair-mounted manipulator that is available commercially is the Raptor (Fig. 2.7) [26], which is being produced by the Rehabilitation Technologies Division of the Applied Resources Corporation. It makes an interesting comparison with Manus. While Manus is a relatively high-cost, sophisticated device, the Raptor has introduced compromises to bring the cost down to about a third of that of Manus. In particular it has only four degrees of freedom. It will be very interesting to see how these two devices perform commercially and in terms of their effectiveness, given their differences in cost and functionality. It will also be interesting to see how the larger American market affects their viability. A slightly off-beat approach to wheelchair-mounted manipulators came from Jim Hennequin in the UK. He was best known for his Spitting Image satirical puppets, which appeared on UK television. The puppets used pneumatic air muscles, and these were used as the drive actuators of the Inventaid [15] wheelchair-mounted robot. He claimed a high power-to-weight ratio for the air muscles. He also claimed that it was simple enough to be maintained by a back-street fitter.
2.3.4 Human Machine Interface

A mobile assistive robot may be envisaged as a "slave": it can be instructed to fetch an item from the kitchen or to place a book on the shelf. If the technology is adequate for it to operate autonomously, the instructions can be in what is virtually a natural language. A similar approach can be taken to a workstation environment. Since the environment is more structured, it is easier for the robot to operate autonomously. By comparison, a wheelchair-mounted manipulator may be seen as a "third arm" and would normally be controlled in some direct way, for example to move the arm forward or to grip an object. It is one of the great challenges to come up with a control system that can even begin to compete with the way in which those without disabilities freely move their arms and hands. This distinction need not be hard and fast. It is obviously possible to drive a mobile robot around the home in the same way one would drive a remote-controlled car, perhaps with the benefit of an on-board camera. Similarly, task-type commands can be given to a wheelchair-mounted manipulator, for example "grip the red mug in front of you" or simply "reach down to the floor". Different media can be used to communicate such commands to the controlling computer. For natural-language commands speech recognition may well be used, while for direct control a joystick might be seen as more appropriate. For those not able to control a joystick some form of scanning system can be used. The chosen solution in either case may well be a combination of different media and will also depend on the abilities of the user. While in earlier days the interface was a fixed part of the overall system, more recently the user is able to choose the most appropriate interface device.
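To make this distinction concrete, the sketch below shows one possible way of separating the input medium from the command level: whichever device produces the event (speech recogniser, joystick, scanning menu), it is normalised into either a task-level command handed to an autonomous routine or a direct motion command passed straight to the arm. This is an assumed illustrative design, not the architecture of any specific system described here, and all names are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # "task" or "direct"
    payload: dict  # e.g. {"task": "grip", "object": "red mug"} or {"vx": 0.02}

class SpeechInput:
    """A natural-language command maps naturally onto a task-level event."""
    def poll(self) -> Event:
        return Event("task", {"task": "grip", "object": "red mug"})

class JoystickInput:
    """Proportional deflection maps onto a direct Cartesian velocity command."""
    def poll(self) -> Event:
        return Event("direct", {"vx": 0.02, "vy": 0.0, "vz": 0.0})

class Arm:  # stand-in for the manipulator interface
    def run_task(self, **kw): print("autonomous routine:", kw)
    def jog(self, **kw):      print("direct motion:", kw)

def dispatch(event: Event, arm: Arm) -> None:
    """Route a normalised user event to the appropriate control mode."""
    if event.kind == "task":
        arm.run_task(**event.payload)   # planning/sensing handles the details
    else:
        arm.jog(**event.payload)        # user retains moment-to-moment control

dispatch(JoystickInput().poll(), Arm())
dispatch(SpeechInput().poll(), Arm())
```

Because the dispatcher only sees normalised events, a different input device, or a combination of devices, can be substituted to suit the abilities of the individual user without changing the rest of the system.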
2.4 Mobility

Within mainstream robotics a major area of both research and commercial application is that of "Automatic Guided Vehicles" (AGVs). This technology obviously has potential in addressing the mobility needs of people with a wide range of disabilities. All powered wheelchairs operate in two degrees of freedom, conventionally under the direct control of the user, and so are not that much different from a two degree of freedom telemanipulator. Modern powered wheelchairs will often add other powered functions such as leg raisers and seat tilt. Most modern powered wheelchairs also have programmable controllers. Such chairs may be technically sophisticated but would not make the claim of being robotic. As soon as sensors are added, together with a control system which can react to the output of those sensors, the wheelchair becomes an AGV. Such devices are normally referred to as smart wheelchairs. These use sensors to detect objects in the environment. On-board processing of the constantly changing relative environment can offer the user functions such as tracking a wall, going through a door or docking at a table or desk.
One approach is to adapt a standard commercial base. For example, the CALL Centre in Edinburgh, UK has many years' experience in this area. In their initial work [28] they used a standard electric wheelchair to produce a smart wheelchair for children and teenagers. In their latest smart wheelchair the Smart Controller acts as if it were a second joystick plugged into the DX (Dynamic, New Zealand) wheelchair bus system. Various Smart Wheelchair 'tools' can be easily selected in different combinations to suit the pilot and environment. Alternatively, it is possible to build a wheelchair that is "smart" from the outset. Such an approach was used by the CEC TIDE funded OMNI project [14]. This was an omni-directional wheelchair integrated with autonomous control features. The big difference between an AGV and a smart wheelchair is that a powered wheelchair is not normally required to be completely autonomous. The issue is how to handle the conflict between when the user should have an appropriate level of control of the chair and when the smart processor should take over, and vice versa. A different approach to mobility comes from Dean Kamen, who invented the iBOT wheelchair. While the chair may be driven in a conventional way, gyroscopic sensors allow the chair to balance on two wheels or to climb stairs. The safety issues of relying on gyroscopes and processors to provide the basic stability of the device are paramount. In common with other safety-critical "fly by wire" systems, multiple redundancy is used. A commercial company, Independence Technology (US), has recently received FDA approval for the iBOT in the US and hopes to start making them available to selected clinics and rehabilitation centres towards the end of 2003. Not all mobility implies that the person needs to be transported by the device. In 1977 Meldog [35] was developed at the Mechanical Engineering Labs at Tsukuba Science City in Japan. It provided mobility for a blind person by guiding them around city streets, downloading a basic map and using landmark sensors. It would function in much the same way as a guide dog is used in other cultures. With the increasing miniaturisation of electronics and GPS positioning it is possible that the same functionality could be obtained today with a body-worn device, without the problems of kerbs and steps. Many people have investigated simple electronic white sticks [8], which have met with limited success, but there may be scope for a much more sophisticated device.
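The question of when the user and when the smart processor should be in charge can be illustrated with a simple shared-control rule: the user's joystick command is followed unless the range sensors report an obstacle close ahead, in which case forward speed is scaled down and a small corrective turn is added. This is a minimal sketch under assumed sensor and command conventions; it is not the CALL Centre or OMNI algorithm, and all thresholds are invented for illustration.

```python
SLOW_RANGE_M = 1.0   # begin slowing when an obstacle is nearer than this
STOP_RANGE_M = 0.3   # refuse forward motion below this range

def blend(user_speed: float, user_turn: float,
          range_left: float, range_front: float, range_right: float):
    """Return (speed, turn) after blending user intent with sensor readings."""
    nearest = min(range_left, range_front, range_right)
    speed, turn = user_speed, user_turn

    if user_speed > 0.0 and nearest < SLOW_RANGE_M:
        # scale forward speed from full at SLOW_RANGE_M down to zero at STOP_RANGE_M
        scale = max(0.0, (nearest - STOP_RANGE_M) / (SLOW_RANGE_M - STOP_RANGE_M))
        speed = user_speed * scale
        # steer gently towards whichever side has more free space
        turn += 0.2 if range_left > range_right else -0.2

    return speed, turn

# e.g. the user pushes straight ahead while a wall looms on the right:
print(blend(1.0, 0.0, range_left=2.0, range_front=0.8, range_right=0.4))
```

Even this crude rule shows where the design decisions lie: how strongly the controller may override the user, and how control is handed back once the obstacle has been cleared.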
2.5 Prosthetics and Orthotics

It is clear from the early work mentioned above that prosthetics and orthotics have been closely associated with rehabilitation robotics. It is useful at this stage to define a prosthesis as an artificial limb (although the term can also be used for an internal organ or joint) and an orthosis as a device to support or control part of the body. The early devices at CASE and Rancho Los Amigos (mentioned above) were orthotic systems. More recently the Mulos project, funded under the EU TIDE
funding initiative [40], was a powered upper limb orthotic system. Rahman and colleagues at the AI duPont Institute (Wilmington, US) have been intimately involved in the rehabilitation robotics field. They have been involved in the design of both powered and non-powered arm orthoses. In particular, their anti-gravity arm orthosis [31] is noteworthy, although, being a balanced system with no external power supply, it may not come within our definition of a robotic system. Although there is a lot of commercial work in prosthetic arms and hands, very little of this has used robotic technology, but rather has been a development from existing technologies. However, one computer-controlled upper arm prosthesis is the Utah/MIT artificial arm and dextrous hand developed by Jacobsen [16]. Another long-standing project is the work that originated with the Southampton Hand [23] at Southampton University and progressed at the Nuffield Orthopaedic Centre in Oxford. Initially a complex five-fingered hand, the mechanism has been simplified, while retaining the capability of forming the hand into several functional configurations for both precision and power. As a continuation of this work, the ToMPAW project [29] combines the earlier Leverhulme hand prosthesis with the prosthetic arm developed at Edinburgh. There are two main issues in powered prosthetics and orthotics: miniaturisation and power. The problem of miniaturisation is particularly critical for a hand prosthesis, where the complete system has to fit within the outline of a human hand. We have already noted the problems of integrating a robotic system onto a wheelchair; the problems of prosthetics are of a far greater magnitude. Although a hand prosthesis and an upper arm orthosis require different levels of power, both present problems in how to store the energy to give a day's use of the device before recharging is required. With a hand prosthesis the requirement is to fit batteries of sufficient capacity within the hand. For an upper arm prosthesis the energy requirement is far greater. Although the volume and mass constraints are not so difficult, for a truly portable system this is still a major problem area. Besides electrical batteries, compressed CO2 has also been used as a power supply. Blatchford's Intelligent Knee prosthesis (Basingstoke, UK), however, is a non-powered device. It uses sensors to regulate the swing of the knee, dependent on the rate of walking and other programmable values. Although it is a passive rather than an active device, it is truly a robot, although nowhere in Blatchford's publicity is the word "robot" used.
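The scale of the power-storage problem can be made concrete with a back-of-the-envelope calculation. Every figure below is an illustrative assumption (duty cycles, power draws, and a lithium-ion energy density of roughly 150 Wh/kg), not a measurement from any of the devices mentioned; the point is only the order of magnitude: a hand prosthesis can plausibly run for a day on tens of grams of cells, whereas a powered arm device needs several hundred grams.

```python
# Back-of-the-envelope battery sizing; every number is an assumed, illustrative value.
LI_ION_WH_PER_KG = 150.0          # assumed usable energy density of small cells

def daily_energy_wh(active_power_w, duty_cycle, quiescent_w, hours=16):
    """Energy for one day's use: actuation bursts plus always-on electronics."""
    return (active_power_w * duty_cycle + quiescent_w) * hours

hand = daily_energy_wh(active_power_w=5.0,  duty_cycle=0.05, quiescent_w=0.1)
arm  = daily_energy_wh(active_power_w=20.0, duty_cycle=0.20, quiescent_w=0.5)

for name, wh in (("hand prosthesis", hand), ("powered arm", arm)):
    print(f"{name}: ~{wh:.1f} Wh/day, ~{1000 * wh / LI_ION_WH_PER_KG:.0f} g of cells")
# hand prosthesis: ~5.6 Wh/day, ~37 g of cells
# powered arm: ~72.0 Wh/day, ~480 g of cells
```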
2.6 Robot Mediated Therapy

The use of robotics to provide movement therapy for the rehabilitation of patients following stroke has been an area of major growth within rehabilitation robotics over the past few years. It is interesting to see the growth in the number of papers presented in this area at the International Conference on Rehabilitation Robotics (ICORR). Before 1999 there were at most a couple of papers. Since 1999 the number of papers has increased steadily, reaching 15 in 2003.
The reason for the increase is obviously the potential both for more effective therapy and for more cost-effective therapy. Such potential exists for all areas of rehabilitation robotics, but is perhaps most easily quantifiable in this area. A robot could be used to replicate the exercise regime used by a physiotherapist, but it also has potential for other regimes not easily carried out by a human. For rehabilitation following stroke there are three main ways [25] in which robots have been applied:
• Passive, where the movement is externally imposed by the robot while the patient remains relaxed. This movement can maintain the range of motion at the joints.
• Active assisted, where the patient initiates the movement, but the robot assists along a predefined path.
• Active resisted, the opposite case, where the patient must move against a resistance generated by the robot.
At Palo Alto the MIME system [34] can be used in a passive or active mode, or in a bilateral mode in which the patient attempts to move both the affected and unaffected limbs. While MIME uses two six degree of freedom Puma robot arms, the ARM Guide developed by Reinkensmeyer and colleagues uses a one degree of freedom robotic device working in a similar fashion to a trombone slide [18]. The MIT-Manus system [20] is a two degree of freedom system, similarly designed for stroke rehabilitation, and is now available as a commercial product. Another stroke rehabilitation project is the GENTLE/S project [1], which encourages the patient to move against a resisted haptic arm in a computer-generated virtual 3D room. With several different approaches it is important to be able to demonstrate clinical effectiveness. A lot of useful information has been accumulated for the MIME project. After one and two months the group receiving therapy from the robot showed a faster improvement, although after six months there was little difference from those who had received more conventional therapy. While the robot seems to be at least as effective as conventional therapy, it seems to be working in a different fashion, and it may be that different approaches will be needed for different levels of impairment. Most of the current work is concerned with stroke rehabilitation, but in the past robots have also been used with patients with cerebral palsy and following orthopaedic surgery. At Santa Clara University two planar robot arms have been used for the rehabilitation of joints following surgery [19]. The two arms, each with force sensors at the base and gripper, firmly hold two adjacent limb segments (e.g. the upper and lower leg). Using the two robots, the leg is manipulated, with the joint kept under compression for effective rehabilitation.
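The three modes listed above can be viewed as different settings of a single controller acting along the exercise trajectory. The sketch below is one illustrative way of expressing this, using a one-dimensional spring-and-damper assistance law with mode-dependent gains; it is not the control law of MIME, the ARM Guide, MIT-Manus or GENTLE/S, and all gains and signs are assumptions.

```python
def therapy_force(mode: str, x: float, x_ref: float, v: float) -> float:
    """Force (N) commanded by the robot, given the patient's position x and
    velocity v and the current reference trajectory point x_ref (SI units)."""
    error = x_ref - x
    if mode == "passive":
        # the robot imposes the movement: stiff tracking of the reference
        return 800.0 * error - 40.0 * v
    if mode == "active_assisted":
        # the patient initiates; the robot only helps when lagging behind the path
        return 200.0 * error if error > 0 else 0.0
    if mode == "active_resisted":
        # the patient works against a velocity-dependent resistance
        return -60.0 * v
    raise ValueError(f"unknown mode: {mode}")

# e.g. patient 2 cm behind the reference, moving forward at 0.1 m/s:
for m in ("passive", "active_assisted", "active_resisted"):
    print(m, round(therapy_force(m, x=0.10, x_ref=0.12, v=0.1), 1), "N")
```

In practice the gains themselves become part of the therapy, typically being reduced as the patient recovers, which is one reason robots can deliver regimes that are hard for a human therapist to reproduce consistently.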
2.7 Robotics in Special Needs Education

The use of robots in education for those with physical and learning disabilities has received attention over the years. For example, a young child learns much from
simple play activities. However, if he has impaired mobility, these opportunities are greatly restricted. The Cambridge/CUED robot [12], based on a UMI RTX robot with a vision recognition system, allowed the child to interact with his environment in various ways, ranging from simply dropping a toy brick onto a drum to painting or playing board games.
Fig. 2.8. Cambridge University Educational Robot
Many electronic toys are now cheaply available on the mainstream market, and these may provide particular benefit for the disabled. However, AnthroTronix (Maryland, US) are developing what they describe as telerehabilitation tools to motivate and integrate therapy, learning, and play.
2.8 Robotics in Communications

We have already identified that most communication aids are not robots. This certainly applies when communication is through an acoustic medium. Communication can, however, use other senses, especially visual but also tactile. The Dexter hand [10] is intended to act as a finger-spelling hand for those who are deaf/blind. Such people use finger spelling for communication, but this language is not widely known by the general population. Dexter enables someone to input text, for example at a keyboard, which is then converted to finger spelling by Dexter, in order to communicate with a person who uses finger spelling. One difficulty which many physically disabled people encounter is reading a book, magazine or newspaper. Page turning is one of the tasks that comes
highest on the list of priorities for an assistive robot. It is also one of the most difficult, although there are ways to achieve it. There are several page-turners on the market, most of them bulky and expensive. They are often not very effective and limited in what they can achieve. Surely this is a prime area for the application of robotic technologies. For what is essentially a single-function device, cost may be a constraint, but if it were reliable and effective there would be a market for such a device.
2.9 Historical Perspective

Looking back at the past 40 years it is sobering to consider what has been achieved, although challenging to imagine what might be achieved in the next 40. This survey has outlined the progress in rehabilitation robotics from the early orthotic devices, through workstation-based systems, to the start of the commercial products Handy 1 and Manus in the 1980s. The first International Conference on Rehabilitation Robotics (ICORR) was held in 1990 at the AI duPont Institute, Delaware, where 12 papers were presented. For several years it alternated between the US and Europe, and most recently it was held in Korea in 2003 with 66 oral presentations and 27 demonstrations and posters. The term "robot" was first used in Capek's "Rossum's Universal Robots" in the 1920s, from the Czech word for a slave. The film Metropolis in 1927 represented the familiar humanoid robot loved by science fiction. In practical terms robotics can be dated back to the DeVilbiss paint sprayer in the 1930s. In the 1940s the fiction author Asimov formulated the three laws of robotics. In the 1950s the Unimation robot company was formed. There are now 750,000 robots in industrial use. The progress in surgical/medical robotics has been very impressive. From the early work in the 1980s, such robots are now used in neurosurgery, orthopaedic surgery and assistance during endoscopic operations. In rehabilitation, wheelchairs are a much more traditional product, easily defined in what they can and should achieve. Everest & Jennings launched the first modern wheelchair in the 1930s and the first powered wheelchair in the 1950s. Since then they have sold their one millionth wheelchair, and there are many other companies worldwide. It is an industry in which many of the products follow a very traditional design, and perhaps the first truly novel product is the iBOT wheelchair.
2.10 Commercialisation

The real benefit of rehabilitation robotics lies in devices being readily available on the open market, so I believe the number of systems sold commercially is of paramount importance. Table 2.1 is a summary of those systems that are currently commercially available. In a number of cases it has not been possible to quote
numbers of systems sold, as the manufacturers hold this as commercially sensitive information.
Table 2.1. Current (as at April 2003) commercially available rehabilitation robots

Category             System               No. sold   Cost (USD)
Workstation          AfMaster             ?          $50,000
Wheelchair mounted   Manus                >150       $35,000
                     Raptor               13         $12,500
Feeder               Handy 1              >250       $6,300
                     Winsford             2000       $2,499
                     Neater               100        $3,600
                     MySpoon              ?          $3,200
Therapy              MIT Manus – Planar   20         $70,000
                     MIT Manus – Wrist    2          $65,000
Although a number of workstation systems have been available in the past, the AfMaster is the only one currently available. For wheelchair-mounted robots there is the interesting comparison between Manus and Raptor. It will be interesting to see which will win out: the technically and functionally superior Manus, or the lower-cost Raptor with the benefit of the huge US market? Feeders are an interesting lower-cost area, and the Winsford feeder has been available for many years. Handy 1 is the device that is most obviously marketed as being robotic, but its cost is the highest. Therapy-based robots are beginning to become available. Besides the MIT-Manus, the ARM Guide and MIME projects are being commercialised by the Rehabilitation Technologies Division of ARC as the ARC-MIME system.
2.11 Alternatives to Robotics in Rehabilitation

Before concluding this survey, the alternatives to using robots in rehabilitation should be considered. These alternatives should be seen not as competition, but as complementary. Most assistive robots aim to be multifunctional, but stand-alone devices, whether sold specifically for the disabled market or for the mainstream market, can have elements of the same functionality. One area of growth at the moment is the integration of different technological devices and approaches, particularly within a smart house environment. Nowadays many activities can be carried out on a computer without the need to interact with the real world. Examples are computer art and music, computer chess and the whole area of 3D computer games and virtual reality.
Animals are often used in rehabilitation, particularly dogs for the blind, but also for those with mobility and hearing impairments. Monkeys have also been used, although they are more difficult to train. Human carers will always be important. Against the issues of independence must be balanced the need for human interaction and companionship. However sophisticated our technical systems, they are unlikely to match the abilities of a human. In parallel with the development of robotic devices for rehabilitation, much research is continuing into the origins and treatment of debilitating diseases and conditions.
2.12 Conclusions

However good the research may be, success is ultimately measured in assistance being given to patients and disabled people in real life. Research must be seen as a stepping stone to commercial products. This will be through devices being sold and bought commercially (whether through private, institutional or state funding). For devices to succeed commercially, the correct balance between function and cost must be achieved. One way in which this field will expand is with a move away from the traditional idea of a robot "arm" to the concept of using robotic technologies in the most appropriate fashion, sometimes as a multifunctional device and sometimes as a single-function tool. Finally, there is the question of whether it matters that a device is a robot: should we use the "R" word? Commercially the word robot may be used positively to give an impression of technical sophistication. On the other hand, many consumers are frightened by the word robot. But ultimately the most important aspect is the benefit to disabled people and their carers.
References

1. Amirabdollahian F, et al. (2001) Error correction movement for machine assisted stroke rehabilitation. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 60–65
2. Bien Z, et al. (2002) Development of a novel type rehabilitation robotic system KARES II. In: Keates S, et al. (eds) Universal Access and Assistive Technology. Springer, London, pp 201–212
3. Cameron WM (1986) Manipulative appliance development in Canada. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, New York, World Rehabilitation Fund, pp 24–28
4. Dallaway JL, et al. (1993) An interactive robot control environment for rehabilitation applications. Robotica 11: 541–551
5. Detriche JM, et al. (1991) Development of a workstation for handicapped people including the robotized system Master. In: Proc. ICORR, Atlanta
6. Engelberger J (1989) Robotics in Service. The MIT Press, Cambridge, Massachusetts
7. Evers HG, et al. (2001) MANUS towards a new decade. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 155–161
8. Farcy R, Belik Y (2002) Locomotion assistance for the blind. In: Keates S, et al. (eds) Universal Access and Assistive Technology. Springer, London, pp 277–284
9. Fu C (1986) An independent vocational workstation for a quadriplegic. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, New York, World Rehabilitation Fund, pp 42–44
10. Gilden D, Jaffe D (1988) Dexter, a robotic hand communication aid for deaf-blind. Int'l Jnl of Rehabilitation Research 11(2): 188–189
11. Hammel J, et al. (1989) Clinical evaluation of a desktop robotic assistant. J of Rehabilitation Research and Development 26(3): 1–16
12. Harwin WS, Ginige A, Jackson RD (1986) A potential application in early education and a possible role for a vision system in a workstation based robotic aid for physically disabled persons. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, New York, pp 18–23
13. Hillman M, Gammie A (1994) The Bath Institute of Medical Engineering assistive robot. In: Proc. ICORR '94, Wilmington, US, pp 211–212
14. Hoyer H, et al. (1997) An omnidirectional wheelchair with enhanced comfort features. In: Proc. ICORR 97, Bath, UK, pp 31–34
15. Jackson RD (1993) Robotics and its role in helping disabled people. Engineering Science and Education Journal 2(6): 267–272
16. Jacobsen SC, et al. (1982) Development of the Utah artificial arm. IEEE Transactions on Biomedical Engineering 29(4): 249–269
17. Jones T (1999) RAID – towards greater independence in the office and home environment. In: Proc. ICORR 99, Stanford, US, pp 201–206
18. Kahn LE, et al. (2001) Comparison of robot assisted reaching to free reaching in promoting recovery from chronic stroke. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 39–44
19. Khalili D, Zomlefer M (1988) An intelligent robotic system for rehabilitation of joints and estimation of body segment parameters. IEEE Transactions on Biomedical Engineering 35: 138
20. Krebs HI, et al. (2003) Robotic applications in neuromotor rehabilitation. Robotica 21(1): 3–12
21. Kwee HH, et al. (1983) First experimentation of the Spartacus telethesis in a clinical environment. Paraplegia 21: 275
22. Kwee HH, et al. (1989) The MANUS wheelchair-borne manipulator: system review and first results. In: Proc. IARP Workshop on Domestic and Medical & Healthcare Robotics, Newcastle
23. Kyberd PJ, et al. (2001) The design of anthropomorphic prosthetic hands: a study of the Southampton Hand. Robotica 16(6): 593–600
24. Leifer L (1981) Rehabilitative robotics. Robotics Age, May/June 1981, pp 4–15
25. Lum P, et al. (2002) Robotic devices for movement therapy after stroke: current status and challenges to clinical acceptance. Topics in Stroke Rehabilitation 8(4): 40–53
26. Mahoney RM (2001) The Raptor wheelchair robot system. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 135–141
27. Mason CP, Peizer E (1978) Medical manipulator for quadriplegic. In: Proc. Int'l Conf. on Telemanipulators for the Physically Handicapped. IRIA
28. Nisbet P, et al. (1988) The CALL Centre smart wheelchair. In: Proc. First Int'l Workshop on Robotic Applications in Medical and Healthcare, Ottawa, Canada
29. Poulton AS, et al. (2002) Progress of a modular prosthetic arm. In: Keates S, et al. (eds) Universal Access and Assistive Technology. Springer, London, pp 193–200
30. Prior SD (1990) An electric wheelchair mounted arm – a survey of potential users. Jnl of Medical Engineering and Technology 14(4): 143–154
31. Rahman T, et al. (2001) An anti-gravity arm orthosis for people with muscular weakness. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 31–36
32. Roesler H, et al. (1978) The medical manipulator and its adapted environment: a system for the rehabilitation of the severely handicapped. In: Proc. Int'l Conf. on Telemanipulators for the Physically Handicapped. IRIA
33. Seamone W, Schmeisser G (1986) Evaluation of the JHU/APL robot arm workstation. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, New York, pp 51–53
34. Shor PC, et al. (2001) The effect of robotic-aided therapy on upper extremity joint passive range of motion and pain. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 79–83
35. Tachi S, et al. (1985) Electrocutaneous communication in a guide dog robot (MELDOG). IEEE Transactions on Biomedical Engineering 32: 461
36. Topping M (2001) Handy 1, a robotic aid to independence for severely disabled people. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 142–147
37. Topping M, Smith J (1999) The development of Handy 1, a robotic system to assist the severely disabled. In: Proc. ICORR 99, Stanford, US, pp 244–249
38. Van der Loos M (1995) VA/Stanford rehabilitation robotics research and development program: lessons learned in the application of robotics technology to the field of rehabilitation. IEEE Transactions on Rehabilitation Engineering 3(1): 46–55
39. Van der Loos M, Michalowski S, Leifer L (1986) Design of an omnidirectional mobile robot as a manipulation aid for the severely disabled. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, New York, pp 61–63
40. Yardley A, et al. (1997) Development of an upper limb orthotic exercise system. In: Proc. ICORR 97, Bath, UK, pp 59–62
41. Zeelenberg AP (1986) Domestic use of a training robot-manipulator by children with muscular dystrophy. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, New York, pp 29–33
3 Toward a Human-Friendly User Interface to Control an Assistive Robot in the Context of Smart Homes

Mounir Mokhtari, Mohamed Ali Feki, Bessam Abdulrazak, and Bernard Grandjean
Abstract

The design of robots dedicated to persons with disabilities necessitates user involvement in all steps of product development: designing the solution, prototyping the system, choosing the user interfaces, and testing it with users in real conditions. However, before designing any system, it is necessary to understand and meet the needs of the disabled users. In this chapter, we describe our research activity on the integration of a robotic arm into the environment of disabled people who have lost the ability to use their own arms to perform daily living tasks and who are able to use an adapted robot to compensate, even partly, for their problems of manipulation in their environments. To develop a human-friendly interface it is necessary to act on the system itself to make it more flexible and easy to use. Improvement of the assistive robot's functionalities must comply with the user's environment, which is composed of several assistive aids complementary to the robot. To meet this target we should have consistent knowledge of users' needs, taking into account their specific types of disability, their restricted possibilities, and also their level of acceptance of technologies. This requires multidisciplinary competencies in several research areas, such as computer science, networking, robotics, home automation, and also ergonomics, in order to provide standardized functionalities which allow efficient usability of assistive technological aids. This chapter describes the adaptation software architecture developed for an assistive robotic arm, the Manus manipulator, in the context of smart homes, where the robot is considered as one object among others.
3.1 Introduction

Our environment is usually not adapted for people who have lost the ability to use their own lower limbs to walk or their own arms to perform daily living tasks, such as opening a door, eating, or even having access to a computer. To compensate for their incapabilities, people with disabilities often have recourse to assistive technological aids such as an electrical wheelchair to compensate for the moving function, a robot manipulator to move objects in their environment, environmental control systems to control the home environment, and communication
systems to improve their ability to communicate with people or to get access to information by the use of a computer. Consequently, the user is confronted with several heterogeneous systems, imposing several user interfaces, providing multiple and complementary functionalities, and forming a whole complex environment that we describe as a smart environment. This situation is usually described in the literature, and by some industrial companies, as the smart homes concept, which is not necessarily limited to the home environment but also extends to the hospital environment and outside (school, train station, leisure places...). Assistive robotics consists mainly of developing systems aiming to compensate for the motor capabilities of people who have lost the ability to use their own arms, usually due to spinal cord injuries or muscular dystrophies. Several systems have appeared over the last two decades with the goal of performing daily living tasks, among them the Manus manipulator we are using in this research work. Smart homes, also referred to in Europe as domotics, consist of acting on the user's environment to make it more accessible by adding automated controlled systems used through a common user interface, which is defined as an environmental control system. The smart homes context is not limited to the home environment, but extends also to the hospital, the school, the outdoors and so on. In terms of tasks we could say that smart homes are dedicated to controlling systems in the environment, such as doors, windows, lights, TV, VCR and so on, and that assistive robotics is mainly dedicated to manipulating objects, such as gripping an object from the floor, drinking, eating and so on. The aim of our work is to develop a human-friendly user interface, independent of the controlled system and of the communication protocols, which must be flexible and personalized for each end user. The objective is to develop a generic and unified user interface able to control not only the robot manipulator, but also any available equipment in the user's environment, such as the electrical wheelchair, telephone, TV, doors and so on. This implies experimenting with existing and emerging technologies to fit the needs of people with disabilities.
3.2 MANUS Assistive Robot
The MANUS tele-manipulator is a robot mounted on an electric wheelchair (Fig. 3.1). Its objective is to favour the independence of severely handicapped people who have lost upper and lower limb mobility, by increasing their potential activity and by compensating for their prehension impairments. Manus is a robot with six degrees of freedom, a gripper at the extremity of the arm which permits grasping objects (payload of 1.5 kg) in all directions, and a display. All of these are controlled by a 4×4-button keypad or, with the latest prototype version, by a joystick and soon a mouse or a touch screen. The 4×4 keypad gives the user the possibility of handling MANUS, and the display unit shows the current state of the robot.
Fig. 3.1. Manus robot used in a supermarket
3.3 Networking Technologies and Developments
To design a mobile system optimized to offer mobility to handicapped people, we need to determine the main parameters and technological means that can support solutions in residential and outdoor environments. Human-machine interaction in this system involves not only the user interface but also the wireless and wired network protocols that control the devices. Indeed, the human-machine interface makes it possible to control various pieces of environmental equipment while taking their current status ("feedback") into account, whereas wireless networks ensure the desired mobility of the users [2]. Figure 3.2 shows the hardware architecture of our smart home concept and outlines the networking problems raised by heterogeneous systems in the user's environment. The remote controller integrates the human-machine interface and is accessible through an input device suited to each end user (joystick, touch screen, microphone...). The user interface plays a crucial role in managing the functionalities of the various pieces of equipment. Among these we distinguish several types of devices: electrical appliances (white goods), household equipment (brown goods), data-processing equipment (gray goods), and also mobile devices (mobile phones, pocket PCs…). The diversity of these products brings a wide range of networking protocols necessary to manage the whole smart environment, such as radio (e.g. Bluetooth and 802.11), infra-red (e.g. IrDA), Ethernet, and power-line communications. The solution consists in designing a generic user interface independent of the communication protocols.
This approach permits an acceptable response time without overloading the supervisor. Indeed, the supervisor plays the central role by processing the various interconnections between protocols so that control commands reach the corresponding specific devices [5].
Fig. 3.2. Smart homes concept
3.4 General Software Architecture
Based on the Commanus software architecture we have developed a new software architecture which considers the Manus robot as an object of the environment, at the same level as an electric wheelchair or a domotic device such as a TV or VCR (Fig. 3.3). This software architecture is decomposed into three main layers:
− User Interface layer: manages user interface events coming from any input device (keypad, joystick, voice recognition…) selected and configured with the OT software for each end user. Remote maintenance of the system is also planned through the Tele-Maintenance Unit (TMU) [6].
− Human-Machine Interface (HMI) layer: converts user events into actions according to the selected output devices (Manus, TV, VCR…).
− Low Level Controller layer: deals with the specific characteristics of each output device and with its communication protocol (infra-red, radio…).
Full control of the Manus robot has been implemented through this software architecture, and control of home devices through a unified user interface is currently being implemented. A simple sketch of this layering is given after Fig. 3.3.
Fig. 3.3. General software architecture
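To make the layering above more tangible, the following sketch (our illustration, not the Commanus code; all class, device and action names are hypothetical) traces a single user event from the input device, through the HMI layer, down to a protocol-specific low-level controller.

# Minimal sketch of the three-layer decomposition described above.
class LowLevelController:
    """Knows the device-specific characteristics and protocol (infra-red, radio...)."""
    def __init__(self, protocol):
        self.protocol = protocol
    def execute(self, device, action):
        print(f"{self.protocol}: sending '{action}' to {device}")   # stand-in for real I/O

class HMILayer:
    """Converts abstract user events into actions for the selected output device."""
    def __init__(self, controllers):
        self.controllers = controllers        # device name -> LowLevelController
        self.bindings = {}                    # (device, event) -> action
    def bind(self, device, event, action):
        self.bindings[(device, event)] = action
    def on_event(self, device, event):
        action = self.bindings[(device, event)]
        self.controllers[device].execute(device, action)

class UserInterfaceLayer:
    """Collects events from the configured input device (keypad, joystick, voice...)."""
    def __init__(self, hmi):
        self.hmi = hmi
    def key_pressed(self, device, key):
        self.hmi.on_event(device, key)

hmi = HMILayer({"manus": LowLevelController("radio"),
                "tv": LowLevelController("infra-red")})
hmi.bind("manus", "key_5", "open_gripper")
hmi.bind("tv", "key_1", "power_toggle")
ui = UserInterfaceLayer(hmi)
ui.key_pressed("manus", "key_5")              # -> radio: sending 'open_gripper' to manus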
3.5 User Interface Adaptation
Re-designing the software control architecture is not sufficient to give people with severe disabilities access to the smart environment. Each end user, with his or her deficiencies and individual needs, is a particular case requiring a specific configuration of any assistive system. Selecting the most suitable input device is the first step, and the objective is then to adapt the available functionalities to the user's needs. For this purpose we have developed a software configuration tool, called OT (Occupational Therapist interface), which allows a non-expert in computer science to easily configure any selected input device with the help of menus containing activities associated with the action commands of any system, including the Manus robot. The idea is to describe each device using XML, to generate the corresponding functionalities automatically, and to display the actions in an interactive graphical user interface. According to the user's needs and the selected input devices, the OT offers a means to graphically associate the selected actions with input device events (buttons, joystick movements…). The OT software is currently operational and fully compatible with the Manus robot; its extension to other home equipment is under development.
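The XML-driven configuration can be illustrated with a small sketch (ours only; the actual OT file format is not reproduced here, so the tags, attributes and device names are hypothetical): a device description is parsed, its actions are listed, and the therapist's graphical choices reduce to a mapping from input events to action identifiers.

# Minimal sketch of the XML device description and event binding.
import xml.etree.ElementTree as ET

DEVICE_XML = """
<device name="manus">
  <action id="open_gripper" label="Open gripper"/>
  <action id="close_gripper" label="Close gripper"/>
  <action id="fold_arm" label="Fold arm"/>
</device>
"""

def load_actions(xml_text):
    """Return the list of (id, label) actions declared for a device."""
    root = ET.fromstring(xml_text)
    return [(a.get("id"), a.get("label")) for a in root.findall("action")]

def bind_events(actions, events):
    """Associate each available input event with one action, as the OT
    interface lets an occupational therapist do graphically."""
    return dict(zip(events, (a[0] for a in actions)))

actions = load_actions(DEVICE_XML)
mapping = bind_events(actions, ["key_1", "key_2", "joystick_up"])
print(mapping)   # {'key_1': 'open_gripper', 'key_2': 'close_gripper', 'joystick_up': 'fold_arm'}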
Below we focus on the improvements made to the Manus controller to facilitate the use of the robot in real conditions, i.e. in a non-deterministic environment such as the home.
3.6 Implementation of a Path Planner
To favour the integration of Manus in the home environment it was necessary to facilitate its control by providing natural movements of the arm that take obstacles into account.
3.6.1 Gesture Library
In human physiology, any complete natural gesture is described as two-phased: an initial phase that transports the limb quickly towards the target location, and a second, longer phase of controlled adjustment that allows the target to be reached accurately. These two phases are defined respectively as the transport component and the grasp component [4]. Each component is a spatio-temporal transformation between an initial state and a final state of the arm (Fig. 3.4).
Fig. 3.4. Robot configurations characterising a gesture: initial configuration Oi = (xi, yi, zi, yawi, pitchi, rolli) and final configuration Of = (xf, yf, zf, yawf, pitchf, rollf), linked by the gripper trajectory
In our approach we are interested in automating the first phase. The second one requires complex sensors (such as cameras and force sensors) that are, from a usability point of view, not suitable for integration on the Manus. The gesture library contains a set of generic global gestures corresponding to the transport component. Each gesture Gi is characterised by an initial operational variable of the robot workspace (Oi), corresponding to the initial arm configuration, and a final operational variable (Of), corresponding to the final arm configuration. Each operational variable is defined in Cartesian space by the gripper position (x, y, z) and orientation (yaw, pitch, roll). The gestures generated by our system are linked only to the final operational variables: from any initial arm configuration, the path planner generates the appropriate trajectory to reach the stored final configuration.
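A minimal data-structure sketch of the gesture library (ours, with hypothetical poses): each entry stores only the final gripper pose Of, and the transport trajectory is generated from whatever the current pose happens to be.

# Minimal sketch of the gesture library described above (hypothetical names/values).
from dataclasses import dataclass

@dataclass
class Pose:
    x: float; y: float; z: float
    yaw: float; pitch: float; roll: float

# Library of named transport gestures, each reduced to its final pose Of.
GESTURES = {
    "rest_position": Pose(0.20, 0.00, 0.30, 0.0, 0.0, 0.0),
    "table_height":  Pose(0.55, 0.10, 0.75, 0.0, -90.0, 0.0),
}

def plan_transport(current: Pose, gesture: str, steps: int = 20):
    """Interpolate gripper poses from the current pose to the stored final pose
    (obstacle handling is added separately, see Sect. 3.6.2)."""
    target = GESTURES[gesture]
    def lerp(a, b, t): return a + (b - a) * t
    return [Pose(*(lerp(getattr(current, f), getattr(target, f), i / steps)
                   for f in ("x", "y", "z", "yaw", "pitch", "roll")))
            for i in range(1, steps + 1)]

waypoints = plan_transport(Pose(0.3, -0.2, 0.5, 0, 0, 0), "table_height")
print(len(waypoints), waypoints[-1])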
3.6.2 Obstacle Avoidance
To improve the point-to-point mode, which moves the arm blindly without taking environmental obstacles into account, we have designed a new strategy based on the dynamic generation of 3D obstacles. A commonly performed task with the Manus is gripping a glass from a table, as shown in Fig. 3.5. The path planner takes into account the obstacles located inside the robot workspace between the initial and final configurations of the gripper. Physical obstacles are virtually encapsulated in boxes that play the role of forbidden areas. In this case a first box represents the arm column, which must not be crossed during the movement, and a second box represents the table. Intermediate points defining the robot trajectory are generated by an avoidance algorithm based on 3D geometrical calculation [2, 3, 10, 11]. Currently, the 3D virtual boxes are defined statically in order to validate the path planner functionalities; a dynamic definition of forbidden areas is under design, allowing the user to define obstacles according to his or her own changing environment. The path planner integrates these intermediate points when calculating an automatic gesture as defined above. Consequently, control of the arm is simplified for the user, with gains in time and in control efficiency. A simple sketch of the box-based check is given after Fig. 3.5.
Fig. 3.5. Path planning process avoiding obstacle
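The box-based forbidden areas can be sketched as follows (our illustration, not the implemented 3D geometrical algorithm; the box sizes, clearance and sampling test are simplifications): obstacles are axis-aligned boxes, a straight gripper path is checked against them, and a detour waypoint is inserted above the first box the path would cross.

# Minimal sketch of box-shaped forbidden areas and waypoint insertion.
import numpy as np

class Box:
    def __init__(self, lo, hi):                 # opposite corners, metres
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
    def contains(self, p):
        return bool(np.all(p >= self.lo) and np.all(p <= self.hi))

def segment_hits(box, a, b, samples=50):
    """Approximate test: does the straight segment a->b enter the box?"""
    for t in np.linspace(0.0, 1.0, samples):
        if box.contains(a + t * (b - a)):
            return True
    return False

def plan_with_detour(start, goal, boxes, clearance=0.10):
    """Return a waypoint list; add one waypoint above the first box that is hit."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    for box in boxes:
        if segment_hits(box, start, goal):
            over = 0.5 * (start + goal)
            over[2] = box.hi[2] + clearance     # lift the gripper over the box
            return [start, over, goal]
    return [start, goal]

table = Box([0.4, -0.3, 0.0], [0.9, 0.3, 0.72])          # table top as obstacle
print(plan_with_detour([0.2, 0.0, 0.8], [0.7, 0.0, 0.6], [table]))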
This concept is complementary to the co-autonomy concept described below, in which, when an obstacle has not been defined, the user always retains the ability to modify the trajectory generated by the planner.
3.7 Towards the Co-autonomy Concept
The co-autonomy concept was recently introduced as a promising way to design assistive robots intended to meet the needs of disabled people [1]. This concept is based on control sharing between the human and the assistive robot. A similar approach was also proposed for obstacle avoidance in telerobotic systems operating in hazardous environments [3]. Three types of situations were mentioned to define the co-autonomy concept:
1. The user is in total control.
2. The machine is in total control.
3. The user and the machine share the control.
The software command architecture is designed to fit this co-autonomy concept. In the first version of the command architecture, the first and third types of situation can occur. Users are in total control when they use the Cartesian Mode, and share control with an autonomous controller when they use the Point-to-Point Mode. As described in [4], a gesture in the Point-to-Point Mode is controlled by the user by pressing, for example, a keypad button continuously until completion; the gesture stops if the button is released and continues otherwise (we can qualify such control as pseudo-shared control). This was designed to prevent collisions with the user, other persons, or obstacles. However, pressing a keypad button or pushing a joystick until completion of the gesture may be exhausting for some users with severe disabilities. To prevent this fatigue, we decided to include the second type of situation of the co-autonomy concept in the command architecture and to integrate the user in the autonomous control loop, i.e. to allow him or her to intervene during the automated gestures. The user may then, while the arm progresses towards the target, make gripper position adjustments. For example, the path planner could generate a trajectory that would go through an obstacle, so that a collision of the arm with the obstacle would occur; the user may then act on the input device to avoid this collision. Such a situation can be handled, as shown in Fig. 3.6, in three phases: an automatic phase, where the end-effector follows the trajectory processed by the path planner; a semi-automatic phase, where the user intervenes to avoid the obstacle; and finally another automatic phase, when the user stops intervening and a new trajectory towards the target is generated. We have called this control mode the Pointing-and-Doing Mode, which is complementary to the Point-to-Point Mode. As shown in Fig. 3.6, the task is performed in the following phases:
− 1st phase (autonomous): the end-effector follows the trajectory processed by the path planner.
− 2nd phase (semi-autonomous): the user intervenes during the autonomous phase to avoid the obstacle.
− 3rd phase (autonomous): the user stops intervening and a new trajectory towards the target is generated.
A sketch of this switching logic is given after Fig. 3.6.
Fig. 3.6. Pointing-and-Doing mode
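A minimal sketch of the three-phase switching logic (our illustration only; the input dead-band and the replanning call are hypothetical placeholders): while the user acts on the input device his or her velocity command overrides the planner, and as soon as the input goes quiet the arm resumes the planned trajectory.

# Minimal sketch of the Pointing-and-Doing switching logic (not the Manus controller).
import numpy as np

def pointing_and_doing_step(pos, planner_step, user_cmd, dt=0.1, deadband=1e-3):
    """One control cycle.
    pos          : current gripper position (3-vector)
    planner_step : next waypoint proposed by the path planner
    user_cmd     : velocity commanded through the input device (3-vector)
    Returns the new position and the phase that was active."""
    pos = np.asarray(pos, float)
    if np.linalg.norm(user_cmd) > deadband:
        # Semi-autonomous phase: the user's correction overrides the planner.
        return pos + np.asarray(user_cmd, float) * dt, "semi-autonomous"
    # Autonomous phase: follow (or, after an intervention, re-follow) the plan.
    return np.asarray(planner_step, float), "autonomous"

pos = [0.2, 0.0, 0.8]
new_pos, phase = pointing_and_doing_step(pos,
                                         planner_step=[0.25, 0.0, 0.78],
                                         user_cmd=[0.0, 0.1, 0.0])
print(phase, new_pos)     # semi-autonomous [0.2 0.01 0.8]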
3.8 Conclusion
In this chapter we have outlined the developments carried out on the Manus robot within the Commanus project, mainly concerning the software architecture, and the evolution of the control system to adapt it to the smart environment. This system is designed, on the one hand, to reduce the manipulation problems users face when operating the robot in their environment and, on the other hand, to solve problems linked to the user interface. With its new functions, we expect to reduce the task time and the number of commands necessary for complex tasks. Applying the results obtained in assistive robotics to the context of smart homes means considering the Manus robot as a standard object in an environment composed of several objects. The main advantage of this approach is, on the one hand, to promote the use of assistive robotics as one function of a whole system and, on the other hand, to favour the integration of a unified and adaptable user interface.
This work is currently supported by GET1 through a national project on smart homes for dependent people. The continuation of this research, aimed at improving the Manus robot and facilitating its integration into the user's daily environment, is ensured through the AMOR2 project, which is starting with the support of the European Commission.
Acknowledgment The authors would like to thank the people who have actively participated in the research presented here, in particular C. Rose from AFM and J.P. Souteyrand from INSERM U.483 for graphical design. Funds for this project are provided by GET and the Fondation Louis Leprince-Ringuet through the Smart Homes project, in association with ENST Bretagne and ENST Paris, and by the European Commission through the AMOR project.
References
1. Chatila R, Moutarlier P, Vigouroux N (1996) Robotics for the impaired and elderly persons. IARP Workshop on Medical Robots, Vienna, Austria
2. Feki MA (2002) Communication development system in case of smart homes. Engineering Handicam Lab report, ENIS-Sfax, Tunisia
3. Guo C, Tarn TJ, Xi N, Bejczy AK (1995) Fusion of human and machine intelligence for telerobotic systems. IEEE International Conference on Robotics and Automation, Nagoya, Japan, pp 3110–3115
4. Jeannerod M (1981) Intersegmental coordination during reaching at natural visual objects. In: Long J, Baddeley A (eds) Attention and Performance IX. Lawrence Erlbaum Associates, Hillsdale, NJ, pp 153–169
5. Mokhtari M, Abdulrazak B, Feki MA (2003) Human-smart environment interaction in case of severe disability. Proc. 10th International Conference on Human Computer Interaction (HCI'2003), Greece
6. Truche C, Mokhtari M, Vallet C (1999) Telediagnosis and remote maintenance system on Internet network for the Manus robot. Fifth European Conference for the Advancement of Assistive Technology (AAATE'99), IOS Press, Düsseldorf, Germany
1 GET: Groupe des Ecoles des Télécommunications, which federates several telecommunication engineering schools, including INT.
2 AMOR project, EEC Growth programme: Mechatronic upgrade & wheelchair integration of the Manus arm manipulator. Partners involved: Exact Dynamics, TNO-TPD and Koningh in the Netherlands, Ideasis and ExpertCam in Greece, Lund University in Sweden, HMC in Belgium, and INT and AFM in France.
4 Welfare-Oriented Service Robotic Systems: Intelligent Sweet Home & KARES II
Z. Zenn Bien, Kwang-Hyun Park, Dae-Jin Kim, and Jin-Woo Jung
4.1 Introduction
Utopia – it would be a society where the welfare of the people is properly guaranteed. In such a society, each member would live his or her life with the blessed feeling of equality. In particular, it would be a society where the elderly and even the handicapped could live well, independently and comfortably, alongside people without disabilities. It is instructive to note that the number of the elderly is increasing drastically, along with the number of people handicapped by a variety of accidents in our complicated and diversified society [64]. In order to realize a welfare-driven society, it is essential to build an infrastructure with a variety of convenient facilities, high-tech equipment, and systems based on advanced, human-friendly technologies. Rehabilitation robotics is mostly concerned with the application of robotic technology to the rehabilitative needs of people with disabilities as well as the growing population of the elderly [28]. Rehabilitation robotic systems aim to solve daily living problems in individual activities. One may say that the primary role of rehabilitation robotic systems is to provide as much independence as possible so as to improve quality of life. Typically, rehabilitation robotic systems are classified as a kind of service robot in the area of robotic technology, while they are also considered a form of assistive device in rehabilitation engineering. In fact, such robotic systems have two major types of function: one for replacement (or rehabilitation therapy) of the user's impaired function [46, 52, 60] and the other for assisting the user to carry out necessary tasks. In this chapter, we concentrate on the second type of robotic system, with assistive functions, and to this end consider the development of a welfare-driven smart home for a high quality of daily life together with an indispensable assistive robotic system for the severely handicapped. The importance of smart homes for the elderly and the handicapped may be well understood from various existing studies such as the AID project [7], the Smart House project at the University of Sussex, HS-ADEPT [25], HERMES, the Smart Home project at Brandenburg Technical University, the Gloucester Smart House, the SmartBO project [21], the smart house at the University of Colorado [55], the Welfare Techno Houses in Japan, the Robotic Room at the University of Tokyo [56], etc. Considering the existing smart homes, we comment on some of the concepts that are implemented in our Intelligent Sweet Home project. In the proposed smart home, some recent technology innovations are considered as well as some specifics of
the lifestyle and traditions in Korea. Since R&D on smart homes is urgently needed to satisfy people's desire for a more convenient and safe life and to cope with the growing numbers of the elderly and the handicapped, the Intelligent Sweet Home for assisting the elderly and the handicapped, developed at the Human-friendly Welfare Robot System Engineering Research Center at KAIST, aims at developing and testing new ideas for the future smart home and its control. Our work is based on the idea that the technologies and solutions for such a smart home should be human-friendly, i.e. smart homes should possess a high level of intelligence in their control, actions and interactions with the users, offering them a high level of comfort and functionality. According to their implemented form, assistive robotic systems are divided into three kinds: 1) workstation-based systems, 2) mobile robot-based systems, and 3) wheelchair-based systems. Some of the workstation-based systems assist the user by means of voice commands. DeVAR (Desktop Vocational Assistant Robot) [22], TIDE-RAID (Robot for Assisting for Integration for the Disabled) [27], ISAC (Intelligent Soft Arm Control) [39], IST-MATS (Mechatronic Assistive Technology System) [32] and AFMASTER [1] are well-known examples. Basically, the workstation-based system performs various delicate tasks in a stable mode, but its operation is confined to a predefined limited workspace due to its lack of mobility. The mobile robot-based system consists of a robotic arm and a mobile platform. This type of system is used for transporting small baggage, guiding the user, and so on. Walky [18], MoVAR (Mobile Vocational Assistant Robot) [66], TIDE-MOVAID (MObility and actiVity AssIstance system for the Disabled) [16], Care-O-bot I/II [9] and Helpmate [31] are good examples in this category. The wheelchair-based system is currently focused on assisting the daily living activities of the elderly and the physically handicapped, and adopts various user interfaces. MANUS [47], FRIEND [53] and RAPTOR [15] are well-known examples of this category. Since, in assistive robotic systems, human-robot interaction technology is becoming increasingly important for the user's convenience and safety in addition to the autonomous functions of the robotic operations, we report some important results in designing and evaluating KARES II, newly developed at KAIST, considering various human-friendly interfaces and adaptability to the user. The following items are considered very important factors for the future direction of R&D on rehabilitation robotics [18, 33]:
− Intelligent interaction/interface that is adaptable to the level of disability
− Human-friendly design that ensures the user's comfort
− Development of technology for the user's safety
− Increase of the system's autonomy to compensate for the user's laborious direct control.
Considering these factors, this chapter introduces the development of welfare-oriented service robotic systems – Intelligent Sweet Home and KARES II.
4.2 Intelligent Sweet Home
Intelligent Sweet Home consists of four main parts, chosen on the basis of a statistical survey [40, 57] and a questionnaire survey [45] of user demand: an intelligent bed robot system, an intelligent wheelchair system, a transferring system, and human-friendly interfaces including a soft remote control system, an intention reading mechanism and a health monitoring system. Information exchange between the subsystems is performed over the home network using wired and wireless communications. An overall view of the current scenario of the Intelligent Sweet Home is shown in Fig. 4.1. Based on our statistical survey, the target users of the system are primarily the elderly and the physically handicapped, and the functionalities of the system are derived from their demands.
Fig. 4.1. Overall view of Intelligent Sweet Home
4.2.1 Questionnaire Survey
We conducted a questionnaire survey of people with disabilities in their limbs. More than 70% of them were elderly people, and most of them had stayed in a hospital or a rehabilitation center for a long time. The total number of participants was 70. We first asked a number of questions to understand the inconveniences the participants face in daily life, and Table 4.1 shows the results of the survey on what they feel is most inconvenient in their basic living. Their responses concerned eating meals, easing nature, and small activities such as handling nearby objects and slightly moving one's body.

Table 4.1. Causes of inconveniences in daily life
  Function                                  Percentage of respondents
  Eating meal                               51%
  Moving on the bed                         58%
  Bedsore                                   55%
  Transferring between bed & wheelchair     76%
  Easing nature                             96%
  Using home appliances                     76%
  Putting on/taking off one's clothes       90%
Our system is developed to eliminate or ease those inconveniences by assisting the activities of the users. In the following, the systems developed for this purpose are described in more detail.
4.2.1.1 Questionnaire Survey on Intelligent Bed Robot System
We asked the potential users several questions to guide the design of an efficient bed robot system. These questions concerned the lifestyle of the users and the functions they would like the bed to perform. In the survey, the users who stay in bed for more than 10 hours a day were about twice as many as those who stay for less than 10 hours. Table 4.2 shows that the main inconveniences of a current bed system, as felt by the handicapped, are movement on the bed or movement between bed and wheelchair, since they cannot use their arms and legs freely and they tumble and slip on the bed. To overcome this type of difficulty, we have designed the robotic structure to include a supporting bar.

Table 4.2. Inconveniences in bed
  Function                                  # of respondents
  Movement between bed and wheelchair       10
  Movement on bed                           10
  Management of feces and urine             7
  Reading book or newspaper                 5
  Avoidance of decubitus                    5
  Eating                                    4
  Hobbies                                   2
4.2.1.2 Questionnaire Survey on Intelligent Wheelchair System
The powered wheelchair is an important rehabilitation device for the handicapped and the elderly. We carried out a survey to identify the various requirements of potential wheelchair users. The survey was conducted with 62 handicapped persons, of whom 12 were powered wheelchair users and 50 manual wheelchair users. Most of them had a spinal cord injury (SCI), and the powered wheelchair users in particular had SCI at C4–C6. We found that about 51% of the respondents spent over 9 hours daily in the wheelchair; the wheelchair is thus a major rehabilitation device that allows the handicapped to maintain daily life. In spite of its indispensability, many potential users do not adopt the powered wheelchair because of cost, difficulty of transferring between bed and wheelchair, safety, unfriendly appearance, maintenance, and so on. In response to a question regarding the preferred control input device, 55% of those surveyed preferred voice recognition, while 19% wanted a touch screen and 26% a joystick. Most of the participants wished that the wheelchair would support automatic battery charging, prevention of decubitus (which frequently occurs on the hips), door passage, and autonomous navigation, as shown in Table 4.3. Some participants suggested that driving a
car while seated in the wheelchair, a standing/lying function for blood circulation, smooth motion control, and adjustment of the chair height are also desirable.

Table 4.3. Most necessary functions in wheelchair
  Function                      Percentage of responses
  Automatic battery charging    19.3%
  Prevention of decubitus       18.4%
  Door passage                  18.4%
  Autonomous navigation         15.8%
  Home appliance control        14.0%
  Others                        14.1%
4.2.1.3 Questionnaire Survey on Transferring System between Bed and Wheelchair
The purpose of this survey of the handicapped was to find out what kind of help in moving between bed and wheelchair they would find convenient and necessary. In total, 45 people answered the questions. Among them, 35 people can move between bed and wheelchair by themselves, while the other 10 people need assistance. Table 4.4 shows the result of the survey.

Table 4.4. Transfer process between wheelchair and bed
  By oneself (can move by own force)
    Transfer process: 1. Put away the armrest of the wheelchair; 2. Put the legs down on the floor between bed and wheelchair; 3. Use both arms and move the hips to the bed
    Necessity of an assisting system: necessary, 13 persons; unnecessary, 10 persons; no response, 6 persons
    Reason: comfort
  With the help of family (difficult to move by oneself)
    Transfer process: 1. Put away the armrest of the wheelchair; 2. Put the legs down on the floor between bed and wheelchair; 3. Hold the arms towards the bed and transfer the body with the helper's assistance
    Necessity of an assisting system: necessary, 14 persons; unnecessary, 1 person; no response, 1 person
    Reason: safety
Among those who responded to the survey, we found that about 70% said that the current way of transferring is very inconvenient or uncomfortable, and that most of them cannot take a shower or use the toilet without assistance.
4.2.1.4 Questionnaire Survey on Home Appliance Control by Intelligent Man-Machine Interface
We asked the participants to choose, from the following options, the reason why conventional remote controllers are inconvenient for the handicapped:
1. It is difficult to bring the controller to the user
2. The buttons are too small to push
3. Pushing the buttons requires too much effort
4. It is cumbersome to need an individual controller for each appliance
5. No problem.
Among these options, most respondents checked no. 1 and no. 4, as shown in Fig. 4.2. From this result, we have confirmed that it is necessary to develop a system that can control most home appliances in a natural and easy way.
Fig. 4.2. The reason for inconvenience of conventional remote controllers
4.2.2 Assistive Systems
4.2.2.1 Intelligent Bed Robot System [37]
Based on the survey we conducted, we have developed an intelligent bed robot system composed of a pressure-sensor-laid bed and a manipulator, as shown in Fig. 4.3. Most previous research has focused on systems that monitor the patient's posture and motion on the bed [56]. We, however, propose a robotic system that can actively help the patient using a robotic manipulator. While the patient is on the bed, the pressure sensors monitor his posture and motions; when he moves on the bed, the robotic manipulator can support his body.
Fig. 4.3. Intelligent bed robot system
In developing the pressure sensor-laid mattress, a set of Force Sensing Resistors (FSR) is used as a pressure sensor as shown in Fig. 4.4. The resistance value
of the FSR decreases in proportion to the applied force on the active surface. The measurement range of force is up to 10 kg/cm², and the size of the pressure mattress is 1900 mm × 800 mm. The spatial interval is 70 mm in the vertical direction and 50 mm in the horizontal direction. This sensor pad is divided into three modules in order to correspond to reclining beds. The sampling frequency of the images is 10 Hz with a resolution of 10 bits. By using these pressure distribution images, posture and gross movement estimation are realized.
Fig. 4.4. Arrangement of pressure sensors in bed mattress
In designing the robotic manipulator, a parallel manipulator was proposed for supporting the patient's body. The mechanism consists of three actuated arms, which are attached to the mobile base via driven revolute joints, as shown in Fig. 4.5. A 50 W DC motor is used as the actuator. Three passive arms are also attached to an upper platform via passive revolute joints. This parallel manipulator is built on a moving platform that is actuated by a 50 W DC motor. The parallel manipulator is responsible for subtle motion at a fixed position of the mobile platform, while the mobile platform can move between the ends of the bed so that the manipulator can reach any position on the bed.
Fig. 4.5. Linkage mechanism in bed robot
4.2.2.2 Intelligent Wheelchair System [41]
We have developed an intelligent wheelchair that can help in the daily life of the elderly and the physically handicapped. We first analyzed a commercial powered wheelchair and developed an interface board between the wheelchair and a PC controller based on real-time Linux (RTAI). The system has two incremental encoders and a laser range finder to sense the environment, localize its position, and detect obstacles, as shown in Fig. 4.6. Fig. 4.7 shows a laser range finder scan from the intelligent wheelchair, and Fig. 4.8 shows consecutive localization results obtained in the Intelligent Sweet Home of Fig. 4.1.
Fig. 4.6. Intelligent wheelchair
Fig. 4.7. Result of laser range finder
Fig. 4.8. Localization result
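As a rough illustration of how the two incremental encoders contribute to localization (our sketch; the wheel geometry and tick resolution are hypothetical, and the actual system also corrects the estimate with the laser range finder), a standard differential-drive odometry update looks like this:

# Minimal differential-drive odometry sketch (illustrative values only).
import math

WHEEL_RADIUS = 0.15      # m
TRACK_WIDTH  = 0.55      # m, distance between the two driven wheels
TICKS_PER_REV = 2048     # encoder resolution

def odometry_update(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one pair of encoder readings into the pose estimate (x, y, theta)."""
    dl = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d_center = (dl + dr) / 2.0
    d_theta  = (dr - dl) / TRACK_WIDTH
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

pose = (0.0, 0.0, 0.0)
pose = odometry_update(*pose, d_ticks_left=400, d_ticks_right=420)
print(pose)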
Since charging the wheelchair battery is one of the tiresome burdens for the handicapped, we concurrently developed a battery charging station and plug for autonomous charging. Fig. 4.9 shows the prototype system. In order to ensure mechanical and electrical safety, we have used several micro-switches and relays.
In the undocked state, all electrodes of the charging station and plug are electrically disconnected. During the docking process, charging is initiated by relay control from the PC.
Fig. 4.9. Battery charging station and plug
4.2.2.3 Transferring System
We have developed a robotic system that can transfer the handicapped between bed and wheelchair. This system consists of a mobile part, a man-machine interface part and a charging system. Fig. 4.10 and Table 4.5 show the mobile part of the system and its specification, respectively. The sling part was specially designed for safety and comfort. The man-machine interface part, shown in Fig. 4.11, was developed to be usable by both left- and right-handed operators, and even by the handicapped themselves. We have also designed a charging system, shown in Fig. 4.12, which is applicable to the other subsystems and guarantees safety. Since charging the battery is one of the tiresome burdens for the handicapped, the battery charging station and plug will be very helpful to them.
Fig. 4.10. Transferring system
Table 4.5. Specification of the mobile part
  Part                            Value
  Overall length                  1330 mm
  Narrowest external base width   650 mm
  Docking foot inside width       650 mm
  Docking foot height             250 mm
  Unit net weight                 100 kg
  Max speed                       0.4 m/s
Fig. 4.11. Man–machine interface of transferring system
Fig. 4.12. Automatic charging system in transferring system
4.2.2.4 Home Network and Management System [62]
Information exchange between the subsystems is performed over the home network using wired and wireless communications based on Ethernet. The network configuration adopts both server-client and peer-to-peer methods. When a new subsystem is added or an existing subsystem is removed, the server updates the address map and sends it to each subsystem. The server also collects the states of the subsystems and displays the information through a graphical user interface. Some operations, on the other hand, are achieved using peer-to-peer communication; for example, when the user wants to move from the bed to the wheelchair, the corresponding subsystems communicate directly with each other so that they can move to the specified place according to a predefined procedure. For effective management of Intelligent Sweet Home, outdoor management is also investigated. Even when the elderly or the handicapped person is left alone at
home, the conditions of the home environment can be controlled and checked by caregivers over the Internet.
4.2.3 Intelligent Man-Machine Interfaces
4.2.3.1 Soft Remote Control System [5, 19, 58]
A significant part of Intelligent Sweet Home is devoted to the development of innovative solutions for a more natural, human-oriented interface for controlling home-installed devices. From the survey results we have learned that frail handicapped users would feel much more comfortable if the applied HMI did not require any sensors to be attached to the user. In our study, we propose a soft remote control system and a voice recognition system. The soft remote control system is a concept for controlling the robot and home appliances by predefined hand gestures, sensed remotely by ceiling-mounted CCD cameras. There are two modes of control of the home environment [5]:
− Simple mode: the user selects a device by pointing to it, and then dwelling or voice is used to confirm the selection and activation of the selected device. Pointing to the same device again turns it off. This mode is applied to on-off control of home appliances such as the TV, lamps, curtains (opening/closing), etc.
− Extended mode: the user first activates, by a hand gesture, a mode in which a list of tasks and services appears on the TV screen. Then, pointing to the TV and moving the hand, the user selects from the menu a command to be executed. Finally, by taking a certain hand posture or using voice commands, the user confirms the selected command and initiates its execution. This mode can be used for changing TV channels, setting home environmental parameters such as indoor temperature, light intensity and the sound volume of audio devices, as well as for selecting pre-programmed tasks that will be executed automatically by the robot or other home-installed devices.
Three ceiling-mounted CCD color cameras with pan/tilt motion are used to acquire the image of the room. For simple identification of the commanding hand against the complex background, it is assumed that the user wears a colored (red & blue) hand band. The colored hand band is tracked by means of the condensation algorithm [30]. Then, image segmentation is applied to extract the hand color region from the neighborhood of the hand band region. For representation of the raw data, a feature extraction procedure is also included. It is followed by a pointing recognition procedure that recognizes the pointing gesture and calculates the orientation angle and pointing direction of the hand. The control procedure ends with sending the appropriate IR signal to control the home appliance. We have tested pointing gestures in Intelligent Sweet Home as shown in Fig. 4.13: when the user points at the TV, the pointing gesture is recognized and the TV is turned on/off. As an extended version, we are testing pointing gestures with a 3×3 menu matrix on the TV (Fig. 4.14). With the menu, we can control
appliances including home robots. By taking hand posture and orientation into consideration, this system allows the number of possible commands to be extended further and enhances the user's freedom of movement.
Fig. 4.13. Soft remote controller used in Intelligent Sweet Home: hand pointing recognition
Fig. 4.14. Extended mode of Soft Remote Controller with 3×3 menu matrix
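The device-selection step of the soft remote controller can be sketched geometrically (our illustration only; the device positions, angular threshold and IR call are hypothetical placeholders): the recognized hand position and pointing direction are compared with the known locations of the appliances, and the appliance best aligned with the pointing ray is selected.

# Minimal sketch of pointing-based device selection (not the implemented recogniser).
import numpy as np

DEVICES = {                      # known 3D positions of appliances in the room (m)
    "tv":   np.array([3.0, 0.5, 1.0]),
    "lamp": np.array([0.5, 2.5, 1.8]),
}

def select_device(hand_pos, pointing_dir, max_angle_deg=15.0):
    """Return the device whose direction from the hand best matches the
    pointing direction, if within the angular threshold."""
    d = np.asarray(pointing_dir, float)
    d = d / np.linalg.norm(d)
    best, best_angle = None, max_angle_deg
    for name, pos in DEVICES.items():
        to_dev = pos - np.asarray(hand_pos, float)
        to_dev = to_dev / np.linalg.norm(to_dev)
        angle = np.degrees(np.arccos(np.clip(np.dot(d, to_dev), -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

def toggle(device):
    print(f"IR command sent: toggle {device}")    # stand-in for the IR emitter

target = select_device(hand_pos=[1.5, 1.5, 1.2], pointing_dir=[1.0, -0.6, -0.1])
if target:
    toggle(target)               # e.g. "IR command sent: toggle tv"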
4.2.3.2 Intention Reading in Bed [37]
Until now there have been many intelligent bed systems, but most of them focused on monitoring the user's behavior on the bed using temperature, pressure, or vision sensors. Such a system can tell what the user is doing but cannot tell what he wants to do. In contrast, our research also focuses on finding out what the user wants to do, i.e. on using the user's intention as an input to the system, as shown in Fig. 4.15. This function is very important for the elderly and the handicapped since they have limited mobility and limited ability to manipulate devices. To recognize the user's intention, we use the pressure sensors on the bed mentioned in Sect. 4.2.2.1. When the user intends to lift his/her body up, the center of pressure (COP) moves toward the hips and the total contacting area (TA) becomes bigger. When the user intends to lower his/her body down, the COP moves toward the head and
the TA becomes smaller. Based on these observations, we use the COP and TA as features for the recognizer and use a Hidden Markov Model (HMM) as the recognizer, since the intention feature data take a sequential form, as shown in Fig. 4.16. From an experiment with 6 persons, we found that the intention recognition rate is directly proportional to the ratio of the intention interval to the motion interval. Fig. 4.17 shows the recognition results of the body-lowering HMM and the body-lifting HMM during the motion; the y-axis is the recognition rate and the x-axis is time. The motion started at x = 60 and ended at x = 140. By setting a proper intention interval, we could obtain the user's intention to move. We are now studying the effect of the mattress on the pressure distribution and the classification of various intentions, including turning or moving.
Fig. 4.15. Interaction between user and intelligent bed robot
Fig. 4.16. Intention recognition algorithm
Fig. 4.17. Recognition rates: (a) body lowering, (b) body lifting
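To make the COP/TA features concrete, here is a small sketch (ours; the array size, contact threshold and the toy decision rule are hypothetical — the actual recognizer is an HMM over the feature sequence, not the simple comparison used here):

# Minimal sketch of COP/TA feature extraction from a pressure image.
import numpy as np

def cop_and_ta(pressure, contact_threshold=0.05):
    """Center of pressure (row, col) and total contact area (cell count)."""
    p = np.asarray(pressure, float)
    total = p.sum()
    rows, cols = np.indices(p.shape)
    cop_row = (rows * p).sum() / total
    cop_col = (cols * p).sum() / total
    ta = int((p > contact_threshold).sum())
    return cop_row, cop_col, ta

def toy_intention(feature_sequence):
    """Crude stand-in for the HMM: compare COP drift and TA change over time.
    Row index is assumed to increase from the head toward the hips."""
    (r0, _, ta0), (r1, _, ta1) = feature_sequence[0], feature_sequence[-1]
    if r1 > r0 and ta1 > ta0:       # COP moves toward the hips, area grows
        return "lift body up"
    if r1 < r0 and ta1 < ta0:       # COP moves toward the head, area shrinks
        return "lower body down"
    return "no clear intention"

frame_a = np.zeros((27, 16)); frame_a[5:15, 4:12] = 1.0    # toy pressure images
frame_b = np.zeros((27, 16)); frame_b[8:20, 4:12] = 1.0
print(toy_intention([cop_and_ta(frame_a), cop_and_ta(frame_b)]))  # lift body up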
4.2.3.3 Health Monitoring System [4, 49]
A human health monitoring system is one of the intelligent man-machine interfaces in the sense that it can detect unconscious human intention [67]. For example, when someone catches a cold, his or her body generates heat during sleep; in this case, the health monitoring system can detect the heat from the user and trigger a proper action such as raising the room temperature. Since the elderly and the handicapped are prone to sudden illness, a health monitoring system is much needed and important [38]. Thus, the health monitoring system must be able to detect basic vital signs by telemetry, diagnose the urgency level of a disease, and inform doctors or caregivers over the network, as illustrated in Fig. 4.18. If the system has prior information about the user's health, its reliability will be higher. Finally, the elderly and the handicapped can gain confidence about their health by using this health monitoring system.
Fig. 4.18. Diagram for human health monitoring system
For Intelligent Sweet Home, we have developed telemetry for various bio-signals with a 4-channel wireless bio-signal monitoring system, as shown in Fig. 4.19.
Test results for various bio-signals are shown as examples in Figs. 4.20, 4.21 and 4.22.
Fig. 4.19. 4-ch wireless biosignal monitoring system
Fig. 4.20. EMG test result during human walking
Fig. 4.21. Human ECG test result
Fig. 4.22. EEG test result during human sleep
4.3 KARES II System
In 1996 we developed the KARES I system as a wheelchair-based rehabilitation robot system [64]. KARES I can perform four basic tasks (picking up a cup from a table, picking up an object from the floor, bringing a cup close to the user, and turning a switch on/off on the wall) to help severely physically handicapped persons. KARES I consists of a 6-DOF robotic arm with a mono-vision system for visual servoing, a voice recognizer, a 6D force-torque sensor and a 3D input device (SpaceBall™). All the subsystems are integrated into a powered wheelchair platform. Although KARES I addresses many of the factors pointed out in [18, 33], human-friendliness and adaptability to the level of disability have to be considered for a better system. Furthermore, owing to the flexibility of the rubber tires of the powered wheelchair, vibration of the robotic base causes very unstable operation of the robotic arm during a task. The lessons learned from KARES I and the analysis of many conventional rehabilitation robotic systems lead us to address two technical issues as follows:
− A new kind of rehabilitation robot system is required for comfortable and robust operation. Although KARES I compactly integrates various sensors and technologies, precise operation is hard to achieve because of the vibration of the robotic base caused by the rubber tires of the powered wheelchair. One may say that a workstation-based system (e.g. ISAC of Vanderbilt University [39]) is one solution to the vibration problem, since such a system lets the robot operate in a stable mode. However, the workstation-based system cannot provide enough workspace owing to its limited mobility. Thus, a novel combination of a wheelchair-based system and a workstation-based system can be adopted to realize a futuristic rehabilitation robotic system. In this case, a mobile base plays a key role thanks to its stable operation while stopped as well as its ability to move freely.
− To cope with the variety of handicaps, the man-machine interaction/interface should be realized in a modular form. In contrast to conventional rehabilitation robot systems, a futuristic rehabilitation robot system has to provide a wide range of services for a variety of handicaps. A modularized approach to dealing with different degrees of disability gives the user the benefit of minimum redundancy in components/subsystems, and thus improves affordability. In the KARES II system, various kinds of interaction/interface are implemented so that the user can choose according to the degree of disability.
Compared with conventional rehabilitation robots, KARES II shows several unique features:
− KARES II, a product based on Task-Oriented Design (TOD), is adaptable to the user according to his/her level of disability. From the able-bodied to the severely spinal-cord-injured (lesion level C4 or C5), KARES II can support twelve predefined tasks. The twelve tasks were collected from six months of sur-
veys of the handicapped in factories and clinics. Based on the survey, these twelve specific tasks were defined according to their usability, feasibility and relevance to the rehabilitation purpose.
− KARES II possesses high-level autonomy along with various human-robot interaction interfaces.
− KARES II is a successful combination of a fixed workstation-type system and a wheelchair-based system. Through our previous experience with KARES I, we have confirmed that this hybrid type of system is operationally effective in many cases.
− Opinions gathered in user trials have been used for refining not only the technical aspects but also the aesthetic (or human-friendly) design of the KARES II system.
This section describes some important requirements and the design philosophy, as well as the overall system structure, of the KARES II system, which we have developed for assisting the daily living activities of the physically handicapped.
4.3.1 Questionnaire Survey
The KARES II system is implemented according to the principle of Task-Oriented Design (TOD) [10], which ensures realization of the predefined tasks and, as a byproduct, may further attain additional tasks owing to the flexible nature of a robotic system. Specifically, as a first step we surveyed the basic activities of end-users (i.e. spinal-cord-injured people and other physically handicapped people) and caregivers for six months (see Table 4.6).

Table 4.6. Information for samples of survey
  Period              1999.1 – 1999.6
  Locations           hospital (1), industrial workplace (3), asylum (6)
  Type (numbers)      quadriplegia (21), poliomyelitis (9), mental disorder (6), others (4)
  Living situations   inpatient (24), outpatient (or dwelling at home) (16)
After examining the activities collected through the surveys, about 150 items were listed as possible tasks in a brainstorming process. Then these items were categorized according to their usability, feasibility and suitability for assistive purposes. Finally, twelve basic tasks were determined, as described in Table 4.7. These tasks are the ultimate target of the TOD which has guided our development of the subsystems such as the robotic arm, the necessary user interfaces, and other hardware modules. Along with the notion of TOD, we have taken the concept of "human-friendliness" into consideration as a design philosophy. Since the robotic arm is very likely to interact with human users, safety should be guaranteed when the robotic arm makes contact with the user. For the user interfaces, easy accessibility to the system is required, since most handicapped people are not skilled at operating robotic systems. For human-friendliness of the robotic arm, we have
adopted active compliance control for the safety of a user in contact with the robotic arm. For easier accessibility to the system, all the user interfaces are designed for fast execution of each task. In addition, attention has been paid to the appearance of every subsystem of KARES II so that the subsystems look human-friendly and comfortable. Another important design philosophy of KARES II is "modularization of subsystems": in view of the variety of levels of handicap, the accessible interfaces should differ, for cost effectiveness and simplicity, and the modularized subsystems make it possible to construct a personally optimized system.

Table 4.7. Twelve tasks for KARES II
  Task no.  Task name                              Distance between user & robot hand
  1         Serving a meal                         Near
  2         Serving a beverage                     Near
  3         Wiping/Scratching face                 Near
  4         Shaving                                Near
  5         Picking up objects                     Far
  6         Turning switches on/off                Far
  7         Opening/Closing doors                  Far
  8         Making tea                             Far
  9         Pulling a drawer                       Far
  10        Playing games                          Near/Far
  11        Changing CD/tapes                      Near/Far
  12        Removing papers from printer/fax       Near/Far
4.3.2 Overall Structure
4.3.2.1 H/W Structure of KARES II System
As shown in Fig. 4.23, the KARES II system consists of the wheelchair platform with various user interfaces (positions 3–6) and the mobile platform with a robotic arm for compliance control and visual servoing (positions 1 and 2 in Fig. 4.23). The mobile platform is essential for tasks that involve operations far away from the user; its mobile base provides mobility and extends the workspace of the KARES II system. Considering the twelve tasks in Table 4.7, we have found that the mobile platform is very effective for tasks that must be carried out at spots far from the user, such as picking up an object, turning switches on/off and opening/closing a door. We have also concluded that the mobile base does not need to be omni-directional, but it must be autonomous. For manipulation, a six-DOF robotic arm with all revolute joints was developed to perform the twelve predefined tasks. It has PUMA-type Denavit-Hartenberg parameters [17], and the lengths of the links are optimized for the predefined tasks. The design procedure begins with the task points of the twelve predefined tasks and, as shown in Fig. 4.24, the optimized arm is obtained [10].
Fig. 4.23. KARES II system: conceptual view
Fig. 4.24. Design procedure of KARES II
As the wheelchair platform for the human-robot interfaces, a powered wheelchair with programmable capability is adopted [29], so that the interface we developed can also control the wheelchair itself.
4.3.2.2 S/W Structure of KARES II System: I/O Relations and Control Architecture
Fig. 4.25 shows the I/O relations among all the subsystems of KARES II. As can be seen from Fig. 4.25, the KARES II system includes various human-robot interfaces for smooth communication and comfortable interaction between the user (the elderly and people with spinal cord injury) and the robotic arm (position 1). Each interface commands velocity and position in response to the user's inputs. The user can control the robotic arm as well as the wheelchair itself using various interfaces such as the EMG interface (position 6), the Eye-mouse (position 3), the head interface (position 4) and the shoulder interface (position 5); the choice may be dictated by the level of disability. In the case of the shoulder interface, the user can also acquire status information about the system through a haptic feedback function. In addition, the visual servoing subsystem (position 2) can provide two kinds of services:
(i) an autonomous service in which the user's intervention is not necessary, and (ii) a human-friendly service in which the user's intention can be acquired via the user's facial expressions. The latter is considered a form of feedback from the user to the robot.
Fig. 4.25. KARES II system: I/O relations among each subsystem
Realization of the KARES II system as a whole is possible only if the system is constructed on an efficient control architecture. Fig. 4.26 shows the control architecture of KARES II. When the user selects a certain task through the GUI, an overall task sequencer (OTS) decides the necessary interfaces and the sequence for the task. Based on the arrangement made by the overall task sequencer (OTS), a sub-module task sequencer (STS) decides the corresponding module's actions, such as requesting information from a sub-module, issuing commands to the actuators, notifying the result (if any), and so forth. The sequencer acts as the central coordination unit within the control architecture [54]. Even though we have confirmed that the integrated system based on this control architecture works reasonably well, there is room for optimization. For improvement, we find that a top-down architectural approach is preferable to the evolutionary design approach, in consideration of the following aspects [54]:
− Enhancement and reuse of current software
− Modularity of subsystems to support easy exchange of components
− Implementation of a distributed system concept in order to achieve scalable computing performance.
Fig. 4.26. Control architecture for KARES II system
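As a rough illustration of the OTS/STS coordination described above (our sketch only; the task definition, sub-module names and step lists are hypothetical):

# Minimal sketch of the OTS/STS coordination (not the KARES II code).
class SubModuleTaskSequencer:
    """Executes the steps that belong to one sub-module (arm, vision, base...)."""
    def __init__(self, name):
        self.name = name
    def run(self, step):
        print(f"[{self.name}] executing: {step}")
        return "done"                     # stand-in for the real result/notification

class OverallTaskSequencer:
    """Decides which sub-modules are needed and in which order for a task."""
    def __init__(self, sub_sequencers):
        self.subs = sub_sequencers        # name -> SubModuleTaskSequencer
        self.tasks = {
            "serve_beverage": [("vision", "locate cup"),
                               ("arm", "grasp cup"),
                               ("arm", "bring cup to user")],
        }
    def execute(self, task):
        for module, step in self.tasks[task]:
            result = self.subs[module].run(step)
            if result != "done":          # abort the sequence on failure
                print(f"task '{task}' aborted at step '{step}'")
                return
        print(f"task '{task}' completed")

ots = OverallTaskSequencer({"arm": SubModuleTaskSequencer("arm"),
                            "vision": SubModuleTaskSequencer("vision")})
ots.execute("serve_beverage")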
4.3.3 Soft Robotic Arm with Visual Servoing
In this section we describe two human-robot interaction technologies employed for the robotic arm of KARES II. First, the robot system is designed to have a compliance function. Notably, as shown in Table 4.7, Task 3 and Task 4 require it in the course of task execution. The compliance function increases the level of safety when an unexpected collision with the user occurs, and it can also provide more comfortable services to the user. Second, the robotic arm is equipped with a visual servoing function. This function is required not only for detecting and locating an object autonomously but also for basic intention reading by analyzing the facial expressions of the user.
4.3.3.1 Active Compliance Control of the Robotic Arm [10, 11, 12]
Since our robotic arm should exhibit various values of compliance according to the twelve tasks, active compliance control (ACC) is implemented on the arm. In our approach, we have implemented ACC without any force or torque sensors. For ACC, the controller requires not only the position measurement but also the force, which is usually measured with force or torque sensors. However, we have adopted a sensorless measurement method for a simple and low-cost design. The method for sensorless torque measurement is as follows. In a static contact situation, the output torque of a motor (τ_motor) is equal to the external torque (τ_ext) due to the contact. Using this fact, we can easily know
τ_ext while the motor of the robotic arm is controlled to exert τ_motor. Unfortunately, this simple method requires some hard constraints, such as negligible back-
lash and friction. These constraints can hardly be satisfied by robotic arms with speed-reducing gears, since most gears, by nature, exhibit friction and backlash. As a remedy, a cable-driven mechanism is adopted for speed reduction; the cable-driven mechanism is known to have negligible friction and backlash [65], and accordingly it enables sensorless torque sensing. To realize the desired compliance, Time-Delay Control Based Compliance Control is proposed [12]. This scheme is easily implemented while providing efficient control performance. The resulting compliance increases the level of safety against unexpected collisions and, furthermore, makes the user feel more comfortable when performing contact-type tasks such as shaving and wiping the face. To confirm the proposed compliance control algorithm, a simple experiment was conducted: for a static configuration of the robotic arm, an external force is applied as shown in Fig. 4.27. The desired compliances are C_d1 = 2.21 deg/Nm, C_d2 = 1.17 deg/Nm, and C_d3 = 0.39 deg/Nm. The results show that the desired compliances are realized at the first three joints, which verifies that the torque-sensorless compliance control works well.
Fig. 4.27. Schematic diagram for the experiment
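A much-simplified sketch of the compliance relation exploited here (ours; it is not the Time-Delay Control scheme of [12], only the basic idea that the joint reference yields by C_d degrees per N·m of estimated external torque, with all numbers chosen arbitrarily):

# Minimal sketch of sensorless compliance (not the Time-Delay Control of [12]).
def compliant_reference(theta_des_deg, tau_motor_static_Nm, C_d_deg_per_Nm):
    """Joint reference shifted according to the desired compliance C_d."""
    tau_ext = tau_motor_static_Nm            # static contact: tau_ext = tau_motor
    return theta_des_deg + C_d_deg_per_Nm * tau_ext

# Desired compliances of the first three joints, as in the experiment above.
C_d = [2.21, 1.17, 0.39]                     # deg/Nm
theta_des = [10.0, 45.0, -20.0]              # nominal joint references (deg), hypothetical
tau_motor = [0.8, 1.5, 2.0]                  # commanded torques at contact (N·m), hypothetical

theta_ref = [compliant_reference(q, t, c) for q, t, c in zip(theta_des, tau_motor, C_d)]
print(theta_ref)    # e.g. joint 1 yields by 2.21 * 0.8 = 1.768 deg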
4.3.3.2 Visual Servoing [43, 63]
The visual servoing module (Fig. 4.25, position 2) is adopted to provide vision-based control for the autonomy of the robotic arm [13, 59] and to implement a human-friendly interface for face recognition and intention reading. With the first version of our wheelchair-based rehabilitation robotic system, KARES I [64], we found that visual servoing is not an easy task owing to the requirements of real-time control and robustness to varying illumination; in particular, performance deteriorates because of vibration of the robotic base supported by the flexible rubber tires of the wheelchair. In the KARES II system, we have separated the robotic arm from the wheelchair platform and have used a vision technique called "space-variant vision" for real-time control and robustness to varying illumination. For effective execution of the predefined tasks, we have used a novel stereo camera head in an eye-in-hand configuration [63]. The developed small-sized, light-weight stereo camera head is installed on the robotic arm in an eye-in-
hand configuration [14]. For fast image processing, log-polar mapping (LPM), a kind of space-variant vision technique, is adopted [6]. Since the LPM image is invariant to scaling and rotation, in addition to its high image reduction ratio (22:1 in our system), it is very suitable for visual servoing with an eye-in-hand camera configuration [36]. We have also performed an "intention reading" experiment utilizing the visual images obtained through visual servoing. We assumed that the user can show the intention to drink or not to drink by opening or closing the mouth, and thus implemented an intention reading skill based on information about the user's mouth [43]. Fig. 4.28 shows sequential images of the user's face with different degrees of mouth openness and the result of the intention reading. From the extracted features of the user's mouth, we can easily estimate the positive/negative level of the user's intention to drink. With 110 facial images, this method achieves a reasonable classification rate (92.7%) [43].
Fig. 4.28. Intention reading from sequential images: (a) face images, (b) extracted intention
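For illustration, the following is a minimal numpy sketch of log-polar sampling around the image centre. The ring/wedge counts and the nearest-neighbour sampling are illustrative assumptions; the KARES II implementation and its 22:1 reduction ratio use their own mapping parameters.

import numpy as np

def log_polar_map(img, n_rings=32, n_wedges=64, r_min=2.0):
    """Resample a grayscale image onto a log-polar grid around the image centre.

    Rings are spaced logarithmically in radius and wedges uniformly in angle,
    which makes the mapped image (approximately) invariant to scaling and
    rotation about the centre while strongly reducing the amount of data.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)

    # Logarithmic radial samples and uniform angular samples.
    rho = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_rings))
    phi = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)

    rr, pp = np.meshgrid(rho, phi, indexing="ij")            # (n_rings, n_wedges)
    ys = np.clip(np.round(cy + rr * np.sin(pp)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(pp)).astype(int), 0, w - 1)
    return img[ys, xs]                                        # nearest-neighbour sampling

A full camera frame is thus reduced to n_rings x n_wedges samples, with high resolution near the centre (e.g. the tracked mouth region) and coarse resolution in the periphery.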
4.3.4 Intelligent Human-Robot Interfaces
For the user of KARES II, there are four types of human-robot interfaces, as will be described shortly, and it is proposed that the user choose a proper combination of interfaces according to his/her level of disability. Such a combination has the advantage of guaranteeing better reliability of the system. Table 4.8 gives a guide for selecting appropriate interfaces according to the level of disability. In fact, the selection of an interface is determined not only by the level of disability but also by the residual functional ability of the user.

Table 4.8. Appropriate human–robot interfaces according to the level of disability and the residual functional ability
Residual functional ability   Eye-mouse   Head interface   Shoulder interface   EMG interface
Head/neck                     Ο           Ο                ×                    Ο
C4 shoulder (partial)         Ο           Ο                ¨                    Ο
C5 shoulder                   Ο           Ο                Ο                    Ο
Arms (partial)                Ο           Ο                Ο                    Ο
4.3.4.1 Eye-Mouse [42, 68]
For people with severe motor disabilities, such as a C4 lesion, to use the KARES II system, the Eye-mouse system is recommended as an input device. The user can indicate the position of the object that he/she wants to grasp and can command the robot to act on the object by using the Eye-mouse on a computer mounted on the wheelchair. Many techniques have been reported for obtaining the eye-gaze direction [23]. These methods can be divided into two types: contact methods and non-contact methods. In a non-contact method, no device is attached to the user's head; instead, a sensor near the user estimates the user's eye-gaze direction. A CCD camera [3, 20] has been popular, since it needs no attached device that may inconvenience the user. However, this approach has lower accuracy than the contact methods and needs a head-tracking system under the free-head condition. In a contact method, the head pose and eye movement are obtained using a device attached to the user's head [2, 61]. This approach is more accurate than the non-contact methods and needs no head-tracking system; its disadvantage is the inconvenience of the attached device. In spite of this inconvenience, we adopt the contact method, because an interface system for the handicapped should be accurate and reliable under the free-head condition. Some commercial systems adopting the contact method are available (e.g., [2, 61]), but their designs are not suitable for supporting the handicapped in daily life, so we have developed our own system. We have developed a human-friendly Eye-mouse system based on the opinions of the handicapped (Fig. 4.29). Here, the head pose is measured using a magnetic sensor receiver mounted on a cap, and the eye-gaze direction is acquired by an image-based method using a CCD camera, an IR LED, and a mirror [20, 44, 48].
Fig. 4.29. Proposed Eye-mouse system
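As a geometric illustration of how the two measurements could be fused, the sketch below transforms an eye-in-head gaze direction into the world frame using the head pose from the magnetic sensor and intersects the resulting ray with the monitor plane. The frame definitions and calibration are assumptions for illustration; the actual Eye-mouse calibration in [42, 48] differs.

import numpy as np

def gaze_point_on_screen(head_pos, R_head, gaze_dir_head,
                         screen_origin, screen_normal, screen_x, screen_y):
    """Intersect a gaze ray with the monitor plane and return 2D screen coordinates.

    head_pos, R_head : head position and orientation (magnetic sensor, world frame)
    gaze_dir_head    : unit gaze direction in the head frame (from the eye image)
    screen_origin    : a corner of the screen; screen_x / screen_y are unit vectors
                       spanning the screen plane; screen_normal is its unit normal
    """
    d = R_head @ gaze_dir_head                        # gaze direction in world frame
    denom = d @ screen_normal
    if abs(denom) < 1e-9:
        return None                                   # gaze parallel to the screen
    t = ((screen_origin - head_pos) @ screen_normal) / denom
    if t <= 0:
        return None                                   # looking away from the screen
    p = head_pos + t * d                              # 3D intersection point
    rel = p - screen_origin
    return np.array([rel @ screen_x, rel @ screen_y]) # in-plane coordinates [m]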
4.3.4.2 Biosignal-Based Interface [26]
The EMG (electromyogram) signal is the electrical manifestation of the neuromuscular activation associated with a contracting muscle. KARES II adopts an EMG interface for users with disabilities who can move their shoulders or head, for controlling a robotic arm or a powered wheelchair. We have developed a small-sized, low-noise EMG amplifier with a differential amplifier to remove common-mode noise, two biquad (2nd-order) notch filters to reduce hum noise [34], and a band-pass filter to remove high-frequency components. To extract the user's intentions from the muscle movements of the shoulders, we defined the basic motions shown in Fig. 4.30. To tackle the user-dependency problem exhibited in many previous works, we propose an algorithm capable of classifying the bio-signals obtained from different subjects into the predefined classes, using a fuzzy c-means algorithm and a rough-set-based technique that selects a necessary and sufficient set of features out of all the extracted feature combinations [26]. The overall signal processing procedure is briefly explained as follows. EMG signals of the predefined motions are measured from four predetermined muscles (channels) with electrodes attached to each subject. A second-order high-pass Butterworth filter with a 30 Hz cut-off frequency is used to reduce low-frequency noise such as motion artifacts. Well-known features such as the integral absolute value (IAV), variance (VAR), zero crossings (ZC), and frequency ratio (FR) are extracted from the denoised EMG signals to classify the predefined motions. By applying a well-established feature extraction algorithm [26] to these numerous extracted features, we obtain minimized feature combination sets that carry enough information for complete classification. The minimized feature combination sets extracted by the proposed algorithm are used as input-output pairs to make Fuzzy Min-Max Neural Networks (FMMNN) learn the motions. After learning, the FMMNN gives the classified results and actuates the robotic arm based on the user's movement. In the experiment¹, the basic motions are recognized with success rates of approximately 90% for four untrained users.
Fig. 4.30. Basic 8 motions
¹ Each user has 10 trials of each motion for validation.
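The sketch below computes, for one EMG channel and one analysis window, the filtering step and the four features named above. The sampling rate and the low/high band split used for the frequency ratio are assumed values; the feature-selection and FMMNN stages of [26] are not shown.

import numpy as np
from scipy.signal import butter, filtfilt

def emg_features(x, fs=1000.0, f_split=120.0):
    """Return [IAV, VAR, ZC, FR] for one windowed EMG channel.

    x       : raw EMG samples of a single analysis window
    fs      : sampling rate [Hz] (assumed)
    f_split : frequency separating the low and high bands for FR (assumed)
    """
    # 2nd-order high-pass Butterworth, 30 Hz cut-off, to suppress motion artifacts.
    b, a = butter(2, 30.0, btype="highpass", fs=fs)
    x = filtfilt(b, a, x)

    iav = np.sum(np.abs(x))                          # integral absolute value
    var = np.var(x)                                  # variance
    zc = int(np.sum(x[:-1] * x[1:] < 0))             # sign changes = zero crossings

    # Frequency ratio: low-band vs high-band spectral energy.
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    fr = spec[freqs < f_split].sum() / max(spec[freqs >= f_split].sum(), 1e-12)
    return np.array([iav, var, zc, fr])

In the described system, such feature vectors from the four channels form the candidate feature pool from which the rough-set-based selection picks the minimal combination before FMMNN training.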
4.3.4.3 Head&Shoulder Interface [50, 51]
The head interface is a two-DOF interface for people with a C4 lesion. It is used for body-operated control of a wheelchair and a robotic arm. The force sensitive resistor (FSR) is a suitable element for developing a human-robot interface satisfying the guidelines because of its characteristics: low price, ease of force measurement, arbitrary shape, and thinness. Human head motion was analyzed in order to determine the motion detection range of the head interface. The average maximum tilt angles are 41° to the front, 73° to the rear, and 60° to the right and left sides. A head interface valid in the analyzed range (73°) has been developed, as shown in Fig. 4.31(a). The shoulder interface is a wearable sensor suit that converts human body motion into useful commands [50, 51]. Human shoulder motion was analyzed in the same way as the head motion. The average maximum ranges of shoulder motion are 7.5 cm to the front, 7 cm to the rear, 10.1 cm in the upward direction, and 2.5 cm in the downward direction. We decided that the lifting motion of the shoulder is the most useful for human-robot interaction. A tension sensor measuring the lifting motion of the shoulder has been developed, as shown in Fig. 4.31(b).
Fig. 4.31. Main components of head/shoulder interface: (a) angle sensor, (b) tension sensor
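As an illustration of how such 2-DOF sensor readings can be turned into motion commands, the following sketch applies a deadzone and picks the dominant axis. The normalization of the FSR/tension signals and the threshold value are assumptions; the real interfaces use their own signal conditioning.

def body_signal_to_command(fwd_back, left_right, deadzone=0.15):
    """Map normalized 2-DOF interface readings (scaled to [-1, 1]) to a command.

    Values inside the deadzone are ignored so that the resting posture does not
    generate motion; when both axes exceed the deadzone, the larger one wins.
    """
    if abs(fwd_back) < deadzone and abs(left_right) < deadzone:
        return "stop"
    if abs(fwd_back) >= abs(left_right):
        return "forward" if fwd_back > 0 else "backward"
    return "right" if left_right > 0 else "left"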
4.3.5 User Trials
We have performed user trials by letting six subjects with spinal cord injury use the various functions of the KARES II system. The six subjects were rehabilitants of the Korean National Rehabilitation Center (NRC) in Seoul who were taking part in rehabilitation programs (see Table 4.9). The user trials basically consisted of the following steps. First, we showed the subjects how to execute the scenario of "Task 2" in Table 4.7 using the integrated system; this helped the subjects understand the objective of the KARES II system. Next, we let the subjects gain experience by actually operating and observing the various interface subsystems/modules of the system. In every case, short questionnaires were given to the users together with detailed explanations by us.
Next, based on the collected qualitative answers and quantitative measurements for each subsystem, we have drawn figures that depict the distribution of the satisfaction degrees (0-100%) according to the predefined evaluation aspects of each subsystem². From these procedures, we have obtained the results of the user trials.

Table 4.9. Information for subjects (residual motor ability: head/neck, shoulder, arms)
Subject ID   Sex   Age   Lesion level   Head/neck   Shoulder   Arms   Technical aids
A            M     33    C4             ¨           Ο          ×      none
B            M     36    C5             Ο           Ο          ×      none
C            M     35    C4             ¨           Ο          ×      none
D            M     51    C5             Ο           Ο          Ο      powered wheelchair
E            M     21    C4             Ο           ¨          ×      none
F            M     31    C5             Ο           Ο          ¨      powered wheelchair
4.3.5.1 Robotic Arm
When the task of shaving or face-cleaning is conducted by the robotic arm, there is some form of contact between the trial subject and the robot hand. For such a task, the robot must have some level of compliance for the safety of the user. The appropriate magnitude of compliance may depend on the individual and on the kind of task. In order to implement comfortable shaving and face-cleaning tasks, we investigated the preferred compliance level for each task by performing the task for the users, followed by a set of questionnaires. Three levels of compliance, shown in Table 4.10, are used in the user trial for both tasks.

Table 4.10. Three levels of compliance [deg/Nm]
           1st axis   2nd axis   3rd axis
Level 1    1.232      0.7815     0.1309
Level 2    2.053      1.303      0.218
Level 3    4.107      2.61       0.436
Compliance is realized at each of the 1st, 2nd, and 3rd axes in the form of joint compliance, taking the link lengths into consideration. In Table 4.10, Level 1 is the lowest compliance level (i.e., the highest stiffness), and Level 3 is the highest compliance level (i.e., the lowest stiffness). Based on the investigation of human arm compliance in [24], we designed the three levels so that the first is lower than that of the human arm, the second is nearly the same as that of the human arm, and the third is higher than that of the human arm.
² Predefined evaluation aspects are diverse due to each subsystem's unique characteristics.
For the shaving task, we applied the three levels of compliance to the six subjects and found that the preferences of the handicapped are distributed as shown in Fig. 4.32³. Fig. 4.32 (a) and (b) show the degree of safety and the degree of satisfaction for each compliance level, obtained from the interviews for the shaving task. As shown in Fig. 4.32, Level 2 and Level 3 receive higher preference with regard to safety and comfortable feeling. This result shows that, for the shaving task, the trial subjects prefer compliance Level 2, which corresponds to the compliance of a weak nurse's arm [24]. For the face-cleaning task, we found that all three levels of compliance in Table 4.10 fail to give a satisfactory result: the degree of satisfaction is very low. The six subjects preferred a compliance lower than the values presented in Table 4.10, which means that the face-cleaning task needs a stronger contact force with higher stiffness to be performed satisfactorily. We also found that it is preferable to hold the towel at a fixed position and let the trial subject move his/her head to clean the face.
Fig. 4.32. Evaluation of shaving task by the subjects: (a) safety, (b) ease of use
4.3.5.2 Visual Servoing
The visual servoing subsystem consists mainly of the newly developed stereo camera head in an eye-in-hand configuration, a function for object recognition, a function for face recognition, and a function for intention reading based on facial expression recognition. In the user trials, we asked all the subjects for their opinions on the physical appearance of the visual servoing mechanism. Fig. 4.33 shows the evaluation summary for the visual servoing subsystem. As shown in Fig. 4.33, the stereo camera head should be redesigned to provide a more human-friendly appearance. Also, most of the subjects pointed out difficulties with the language of our GUI owing to its small font and English expressions; for easier use and a higher degree of satisfaction, the GUI should be organized in the users' mother tongue with a larger character size. Regarding the object recognition function, the set of recognizable objects should be predefined with the user candidates and should be enlarged so as to handle the various objects encountered in daily life.
³ In every figure, each degree of satisfaction is represented by a boxplot (Boxplot 2002).
Finally, we report that all the users are satisfied with the function for intention reading. For more general use of intention reading, recognition of parts of the face other than the mouth is recommended.
Fig. 4.33. Results from user trials for visual servoing
4.3.5.3 Eye-Mouse
We have carried out user trials with the proposed Eye-mouse system. The accuracy results of the estimated eye-gaze direction in Table 4.11 were obtained with a distance of about 500 mm between the user and the monitor. The two numbers in each cell of the first and second rows of the table are the horizontal error and the vertical error, respectively. From these errors, a suitable size for the buttons of the interface program was computed, as shown in the third row of the table. The fourth row gives the corresponding maximum resolution for a 15" monitor. It is noted that the result is worse than the resolution (14×12) obtained in the laboratory [48], and that the results differ between users. The possible reasons are as follows:
1. To extract the eye-gaze direction using the Eye-mouse, the center coordinate of the eye must be obtained anatomically with respect to the receiver of the magnetic sensor.
2. Since eye movements include saccadic motion, concentration is needed for the users to fix their gaze on one point.
3. Because the proposed method of tracking the pupil is vision-based, it can be affected by the illumination of the surroundings.
From the survey results shown in Fig. 4.34, we may say that most of the users were satisfied with the structure of the interface program. In particular, the convenience of the additional 'OK' button for the click operation was appreciated. The users also said that it is easy to control the pan/tilt unit thanks to the 'automatic pushing' function that is triggered when the mouse pointer moves onto a button.
The satisfaction degree regarding the design and the ease of wearing the system turned out to be acceptable. The users wanted the system to be light and not too tight. They also pointed out the problem of perspiration at the contact surfaces. Since the proposed method is a 'contact' method, tiredness will be a critical problem when the device is used for a long period of time; thus, the level of tiredness needs to be observed over a longer period of user trials.

Table 4.11. Experimental results of accuracy of Eye-mouse
Subject                              1               2              3              4               5               6
Error Mean (pixel)                   (-36.1, -6.7)   (6.3, -7.1)    (1.3, 20.0)    (-44.5, -8.9)   (-49.9, -4.8)   (66.4, -5.7)
Error STD (pixel)                    (50.4, 45.1)    (98.4, 88.4)   (32.6, 15.4)   (63.9, 79.8)    (46.1, 34.6)    (31.3, 43.1)
Possible Button Size (mm)            51.5 × 31.1     62.3 × 57.2    20.2 × 21.2    64.6 × 53.2     57.2 × 23.6     58.2 × 29.2
Possible Monitor Resolution (15")    5.9 × 7.4       4.9 × 4.0      15.1 × 10.8    4.7 × 4.3       5.3 × 9.6       5.2 × 7.9
Fig. 4.34. Results of inquiries about Eye-mouse
4.3.5.4 Head&Shoulder Interface
Two experiments were performed to evaluate the performance of the shoulder and head interfaces, each of which produces 2-DOF signals. The experimental procedure is as follows:
1. Wear the new interface and try to produce the four direction signals (forward, backward, right, and left). This step familiarizes the user with the new interface.
2. Operate the real wheelchair while adapting oneself to the moving mechanism with the new interface.
3. While driving the wheelchair along the predefined path shown in Fig. 4.35, the total elapsed time from the start to the end point, the number of collisions, and the recognition rate of the interface are measured.
4. Evaluate the performance of the new interface.
Fig. 4.35. Predefined path for wheelchair control
The wheelchair was controlled along the predefined path shown in Fig. 4.35⁴. Table 4.12 summarizes the results for head interface control and shoulder interface control. We conducted another experiment in which the same procedure was carried out by an able-bodied person using a hand-operated joystick. In this baseline case, the elapsed time is 21 seconds and the average number of collisions is 0.5. Compared with this baseline, the travelling time of the handicapped subjects is roughly twice as long and collisions occur more frequently, up to five per run. However, people with spinal cord injury do not need much adaptation time to control the powered wheelchair with our new interfaces: they drive the wheelchair along the path with two collisions on average. This user-trial result shows great potential for applicability, in that even the severely handicapped can control the powered wheelchair without any assistance. The results also show that our interfaces may not work perfectly for all the handicapped. In the shoulder interface test, the first and the fifth subjects could not use each shoulder independently, and therefore the experiment was not carried out for them. In the head interface test, the first subject had difficulty tilting the head for the roll motion, so he could not turn in the appropriate direction. From the survey results in Fig. 4.36 (a) and (b), we may say that the subjects were satisfied with the overall impression of both interfaces. Most of the people who used the shoulder interface worn under an overcoat said it is a very attractive device because it does not attract other people's attention.
⁴ Based on Jones et al. (1998), we modified the path.
Table 4.12. Experimental results of two interfaces

                          Head interface                     Shoulder interface
Subject                   1    2    3    4    5    6         1    2    3    4    5    6
Elapsed Time (s)          –    45   42   58   92   120       –    34   30   113  –    28
Number of Collisions      –    2    0    2    2    5         –    2    3    2    –    1
Recognition rate (%)      –    80   80   65   80   60        –    80   65   55   –    80
Fig. 4.36. Survey results about two interfaces: (a) head interfaces, (b) shoulder interface
4.3.5.5 EMG Interface
Wheelchair control was also performed with the EMG interface. To assess the performance of the EMG interface objectively, we measured the total elapsed time and the number of collisions while driving along the predefined path shown in Fig. 4.35. After the experiment, we interviewed the users with a questionnaire for subjective evaluation. We tested two control modes, as shown in Table 4.13. The difference between Mode 1 and Mode 2 is the command for forward movement. In Mode 1, the wheelchair goes forward only as long as the user holds the forward command motion, i.e. both shoulders up. In Mode 2, the same motion acts as a toggle switch that makes the wheelchair go forward or stop, depending on its current state. We attached four electrodes (two channels, bipolar type) to both trapezius muscles for measuring the EMG signals.

Table 4.13. Motion commands for controlling the wheelchair
Motion               Wheelchair motion in Mode 1   Wheelchair motion in Mode 2
Initial state        Stop                          Current state hold (forward/stop)
Both shoulders up    Forward movement              Forward/Stop (toggle)
Right shoulder up    Right movement                Right movement
Left shoulder up     Left movement                 Left movement
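The sketch below illustrates the two command modes of Table 4.13, assuming the EMG classifier already outputs one of the motion labels "rest", "both_up", "right_up", or "left_up"; the label names and class structure are illustrative, not taken from the implementation.

class EmgWheelchairController:
    """Mode 1: forward only while both shoulders stay up (momentary command).
    Mode 2: each 'both shoulders up' event toggles between forward and stop."""

    def __init__(self, mode=1):
        self.mode = mode
        self.forward_latched = False       # used only in Mode 2
        self._prev = "rest"

    def step(self, motion):
        if motion == "right_up":
            cmd = "right"
        elif motion == "left_up":
            cmd = "left"
        elif self.mode == 1:
            # Momentary forward command.
            cmd = "forward" if motion == "both_up" else "stop"
        else:
            # Toggle on the rising edge of the 'both shoulders up' motion.
            if motion == "both_up" and self._prev != "both_up":
                self.forward_latched = not self.forward_latched
            cmd = "forward" if self.forward_latched else "stop"
        self._prev = motion
        return cmd

The toggle in Mode 2 is what lets the user rest while the wheelchair keeps moving, at the cost of the response-delay ambiguity discussed below.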
After the experiment, we found that the forward command in Mode 1 made the users tired, because they had to maintain the forward motion until the wheelchair reached the target position. In Mode 2, however, the users gave the forward command easily by raising both shoulders. The difficulty with the forward command in Mode 2 is the response delay of the wheelchair, which makes the user unsure whether the controller has accepted the command correctly. After using the controller for a while, however, the users adapted to this delay and felt more comfortable than with the forward command in Mode 1. The users also commented on the electrode attachment and its appearance. Some users disliked the procedure of attaching the electrodes to the skin (including skin preparation). The overall wheelchair control performance of the subjects is not as good as that of ordinary persons, but we have ascertained that an EMG interface based on head or shoulder movement can be applied as a wheelchair controller for users with spinal cord injury at lesion levels C4 and C5 (see Fig. 4.37).
Fig. 4.37. Results from user trial for EMG interface
4.4 Concluding Remarks
As a preliminary result of developing welfare-oriented robotic systems, we have suggested and implemented several subsystems that help the elderly and the handicapped live independent daily lives. Starting from the design stage, we have collected feedback based on evaluations by the handicapped. We believe that one of the most important things in welfare-oriented robotic systems is to adopt various service robots that help the inhabitants in many ways. The idea of the Intelligent Sweet Home is treated at present not as science fiction but as an important goal with strong social and economic aspects. Its realization will give a solution to many existing problems of the welfare society and will make the lives of the handicapped and the elderly, as well as human life in general, much more pleasant and easier. We have also shown that, by hybridizing a workstation-based frame and a wheelchair-based one, a novel type of rehabilitation robotic system can be realized
to have the advantages of both types, and that a modularized man-machine interface/interaction can be realized to cope with the variety of handicaps. For realizing the user's input commands and the interaction mechanism, various human-robot interfaces, including the Eye-mouse, the head/shoulder interfaces, and the EMG signal interface, have been developed so as to cope with different levels of disability. Based on our experience of developing the Intelligent Sweet Home and KARES II, and on the feedback from the users, we have concluded that the proposed systems need further improvement in several aspects, as follows:
− Further study is needed to design a convenient operation methodology of the system for novice users and for long-term use. A more sensitive and wider intention-reading capability of various kinds is desirable for human-friendly interaction.
− Although each subsystem performs its own functions well, we find that a central decision maker is desirable as a means of communication between the subsystems for exchanging necessary information. If this kind of decision maker is installed, each subsystem can work fully without additional software programming; the subsystems only need to send the information requested by the decision maker. Also, if a new task has to be added, the system operator only needs to modify the decision maker. In this way, the system can be made simpler and more flexible.
− It is necessary for each subsystem to check whether the information exchange is proceeding properly. This capability is needed to prevent a reduction in safety caused by communication failures.
Acknowledgement
This research was supported by the Human-friendly Welfare Robot System Engineering Research Center (sponsored by KOSEF) at KAIST and by the Ministry of Science and Technology of Korea as a part of the Critical Technology 21 Program on "Development of Intelligent Human-Robot Interaction Technology". We would like to acknowledge various forms of support from Prof. Ju-Jang Lee, Prof. Byung Kook Kim, Prof. Jin-Oh Kim, Prof. Jong-Tae Lim, Prof. Heyoung Lee and their student staffs in developing the Intelligent Sweet Home, as well as helpful comments on the extended mode of the soft remote control system from Dr. Dimitar Stefanov. We also acknowledge a variety of help from Prof. Pyung-Hun Chang, Prof. Myung Jin Chung, Prof. Dong-Soo Kwon, and their student staffs in developing the KARES II system, and assistance from Dr. Byung-Sik Kim and his staff at the National Rehabilitation Center, Korea, in the user trials.
References
1. Afma-robots (2003) AFMASTER, http://www.afma-robots.com
2. ASL501 (2003) Model 501. http://www.a-s-l.com/501_home.htm
3. ASL504 (2003) Model 504. http://www.a-s-l.com/504_home.htm
4. Bang W, Stefanov D, Jung J, Kim M, Lee J, Lee H, Bien Z (2001) Human-friendly health monitoring system for service to the elderly and disabled. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2001), Evry Cedex, France, pp 333–339
5. Bien Z, Park KH, Kim JB, Do JH, Stefanov D (2003) User-friendly interaction/interface control of intelligent home for movement-disabled people. Proceedings of the 10th International Conference on Human-Computer Interaction, Crete, Greece, Vol. 4, pp 304–308
6. Bolduc M, Levine MD (1998) A review of biologically motivated space-variant data reduction models for robotic vision. Computer Vision and Image Understanding 69(2): 170–184
7. Bonner S (1998) AID HOUSE: Edinvar housing association smart technology demonstrator and evaluation site. Proceedings of the 3rd TIDE Congress, Helsinki, Finland, pp 396–400
8. Boxplot (2002) http://www.shodor.org/interactivate/activities/boxplot/
9. Care-O-bot (2003) http://www.care-o-bot.de/english/Care-O-bot_2.php
10. Chang PH, Park HS (2003) Development of a robotic arm for handicapped people: a task-oriented design approach. Autonomous Robots 15(1): 81–92
11. Chang PH, Park HS, Park J, Jung JH, Jeon BK (2001) Development of a robotic arm for handicapped people: a target-oriented design approach. Proceedings of the 7th International Conference on Rehabilitation Robotics (ICORR2001), pp 84–92
12. Chang PH, Kang SH, Park HS, Kim ST, Kim JH (2003) Active compliance control for the disabled with cable transmission. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 84–87
13. Chen N, Parker GA (1994) Inverse kinematic solution to a calibrated PUMA 560 industrial robot. Control Engineering Practice 2: 239–245
14. Choi J (2001) Design of a behavior-based controller using a novel camera head and its application to service robots (in Korean). MS Thesis, KAIST, Korea
15. Colello MS, Mahoney RM (2002) Commercializing assistive and therapy robotics. Universal Access and Assistive Technology, Keates S et al. (eds), pp 223–234
16. Conte G, Longhi S, Zulli R (1996) Motion planning for unicycle and car-like robots. International Journal of Systems Science 27(8): 791–798
17. Craig JJ (1989) Introduction to robotics: mechanics and control. Addison-Wesley Publishing Co.
18. Dallaway JL, Jackson RD, Timmers PHA (1995) Rehabilitation robotics in Europe. IEEE Transactions on Rehabilitation Engineering 3: 35–45
19. Do JH, Kim JB, Park KH, Bang WC, Bien ZZ (2002) Soft remote control system using hand pointing gesture. International Journal of Human-friendly Welfare Robotic Systems 3(1): 27–30
20. Ebisawa Y (1998) Improved video-based eye-gaze detection method. IEEE Transactions on Instrumentation and Measurement 47(4): 948–955
21. Elger G, Furugren B (1998) SmartBO – an ICT and computer-based demonstration home for disabled people. Proceedings of the 3rd TIDE Congress, Helsinki, Finland, pp 392–395
22. Erlandson RF (1995) Applications of robotic/mechatronic systems in special education, rehabilitation therapy, and vocational training: a paradigm shift. IEEE Transactions on Rehabilitation Engineering 3: 22–32
23. Glenstrup AJ, Engell-Nielsen T (1995) Eye controlled media: present and future state. B.S. Dissertation, Copenhagen University
24. Gomi H, Kawato M (1997) Human arm stiffness and equilibrium-point trajectory during multi-joint movement. Biological Cybernetics 76: 163–171
25. Hammond J, Sharkey P, Foster G (1996) Integrating augmented reality with home systems. Proceedings of the 1st International Conference on Disability, Virtual Reality and Associated Technologies ECDVRAT '96, pp 57–66
26. Han JS, Bang WC, Bien ZZ (2002) Feature set extraction algorithm based on soft computing techniques and its application to EMG pattern classification. Journal of Fuzzy Optimization and Decision Making 1: 269–286
27. Harwin WS, Rahman T, Foulds RA (1995) A review of design issues in rehabilitation robotics with reference to North American research. IEEE Transactions on Rehabilitation Engineering 3: 3–13
28. Hillman M (1998) Introduction to the special issue on rehabilitation robotics. Robotica 16: 485
29. Hillman M, Hagan K, Hagan S, Jepson J, Orpwood R (2002) The Weston wheelchair mounted assistive robot – the design story. Robotica 20: 125–132
30. Isard M, Blake A (1998) Condensation – conditional density propagation for visual tracking. International Journal of Computer Vision 29(1): 5–28
31. ISRA (1995) The service robot market, an in-depth study from the international service association. ISRA
32. IST-MATS (2003) http://www.bcdi.be/en/projects/data.html
33. Iwata H, Hoshino H, Morita T, Sugeno S (1999) A physical interference adapting hardware system using MIA arm and humanoid surface covers. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 1216–1221
34. Johnson DE (1975) Rapid practical designs of active filters. John Wiley & Sons
35. Jones DK, Cooper RA, Albright S, DiGiovine M (1998) Powered wheelchair driving performance using force- and position-sensing joysticks. Proceedings of the IEEE 24th Annual Northeast Bioengineering Conference, pp 130–132
36. Jruger V (1995) Optical flow computation in the complex logarithmic plane. Diploma Thesis, University of Kiel, Germany
37. Jung JW, Lee CY, Lee JJ, Bien ZZ (2003) User intention recognition for intelligent bed robot system. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 100–103
38. Kawarada A, Takagi T, Tsukada A, Sasaki K (1998) Evaluation of automated health monitoring system at the 'welfare techno house'. Proceedings of the 20th IEEE/EMBS, pp 1984–1987
39. Kawamura K, Isakarous M (1994) Trends in service robots for the disabled and the elderly. Proceedings of IROS'94, pp 1647–1654
40. KIHASA (2000) National survey of the disabled persons. Korea Institute for Health and Social Affairs
41. Kim CH, Jung JH, Kim BK (2003) Design of intelligent wheelchair for the motor disabled. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 92–95
42. Kim DH, Kim JH, Chung MJ (2001) A computer interface for the disabled using eye-gaze information. International Journal of Human-friendly Welfare Robotic Systems 2(3): 22–27
43. Kim DJ, Song WK, Han JS, Bien Z (2003) Soft computing based intention reading techniques as a means of human-robot interaction for human centered system. Journal of Soft Computing 7: 160–166
44. Kim JH, Lee BR, Kim DH, Chung MJ (2003) Eye-mouse system for people with motor disabilities. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 159–163
45. Kim Y, Park KH, Seo KH, Kim CH, Lee WJ, Song WG, Do JH, Lee JJ, Kim BK, Kim JO, Lim JT, Bien ZZ (2003) A report on questionnaire for developing Intelligent Sweet Home for the disabled and the elderly in Korea living conditions. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 171–174
46. Krebs HI, Hogan N, Volpe BT, Aisen ML, Edelstein L, Diels C (1999) Robot-aided neuro-rehabilitation in stroke: three-year follow-up. Proceedings of ICORR1999, pp 34–41
47. Kwee HH (1998) Integrated control of MANUS manipulator and wheelchair enhanced by environmental docking. Robotica 16(5): 491–498
48. Lee BR (2002) A real-time eye-gaze tracking system using infrared rays and vision sensor (in Korean). M.S. Dissertation, Korea Advanced Institute of Science and Technology
49. Lee H, Bien Z (2002) Variable bandwidth filter for reconstruction of bio-medical signals with time-varying instantaneous bandwidth. Proceedings of the 2nd Joint EMBS/BMES Conference, Houston, USA, pp 141–142
50. Lee K, Kwon DS (2000) Sensors and actuators of wearable haptic master device for the disabled. Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp 371–376
51. Lee K, Kwon DS (2001) Wearable master device for spinal injured persons as a control device of motorized wheelchairs. Journal of Artificial Life and Robotics 4(4): 182–187
52. Lum PS, Burgar CG, Shor PC, Majmundar M, Van der Loos HFM (2002) Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper limb motor function after stroke. Archives of Physical Medicine and Rehabilitation 83: 952–959
53. Martens C, Ivlev O, Graser A, Lang O, Ruchel N (2001) A FRIEND for assisting handicapped people. IEEE Robotics and Automation Magazine 8(1): 57–65
54. Martens C, Kim DJ, Han JS, Graeser A, Bien Z (2002) Concept for a modified hybrid multi-layer control architecture for rehabilitation robots. Proceedings of the 3rd International Workshop on Human-friendly Welfare Robotic Systems, Daejeon, Korea, pp 49–54
55. Mozer MC (1999) An intelligent environment must be adaptive. IEEE Intelligent Systems and Their Applications 14(2): 11–13
56. Nakata T, Sato T, Mizoguchi H, Mori T (1999) Synthesis of robot-to-human expressive behavior for human-robot symbiosis. Proceedings of the 1996 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 1608–1613
57. NSO (2001) The future estimated population. National Statistical Office
58. Park KH, Bien ZZ (2003) Intelligent Sweet Home for assisting the elderly and the handicapped. Proceedings of the 1st International Conference on Smart Homes and Health Telematics (ICOST2003), Paris, France, pp 151–158
59. Peters II RA, Bishay M, Cambron ME, Negishi K (1996) Visual servoing for service robot. Robotics and Autonomous Systems 18: 213–224
60. Rao R, Agrawal SK, Scholz JP (2000) A robot test-bed for assistance and assessment in physical therapy. Advanced Robotics 14(7): 565–578
61. SMI (2003) 3D VOG Video-oculography. http://www.smi.de/3d/index.htm
62. Song WG, Lim JT (2003) Design and management in smart home. Proceedings of the 1st International Conference on Smart Homes and Health Telematics (ICOST2003), Paris, France, pp 33–37
63. Song WK, Bien Z (2003) Blend of soft computing techniques for effective human-machine interaction in service robotic systems. Fuzzy Sets and Systems 134: 5–25
64. Song WK, Lee H, Bien Z (1999) KARES: Intelligent wheelchair-mounted robotic arm system using vision and force sensor. Robotics and Autonomous Systems 28(1): 83–94
65. Townsend WT (1988) The Effect of Transmission Design on Force-Controlled Manipulator Performance. PhD Thesis, MIT
66. Van der Loos HFM (1995) VA/Stanford rehabilitation robotics research and development program: lessons learned in the application of robotics technology to the field of rehabilitation. IEEE Transactions on Rehabilitation Engineering 3: 46–55
67. Yamaguchi A, Ogawa M, Tamura T, Togawa T (1998) Monitoring behavior in the home using positioning sensors. Proceedings of the 20th IEEE/EMBS, pp 1977–1979
68. Yoo DH, Chung MJ (2002) Vision-based eye gaze estimation system using robust pupil detection and corneal reflections. International Journal of Human-friendly Welfare Robotic Systems 3(4): 2–8
5 „FRIEND“ – An Intelligent Assistant in Daily Life O. Kouzmitcheva, C. Martens, A. Pape, H. She, I. Volosyak, and A. Gräser
Abstract Research and development in the field of rehabilitation robots has produced a multiplicity of rehabilitation robots available as off-the-shelf products or laboratory prototypes (e.g. [1]). The application scope of these systems is large and covers areas such as support for everyday tasks, assistance in vocational surroundings, or support in health care. An in-depth analysis reveals that rehabilitation robots which are intended for flexible use, rather than for individual special applications, offer services only on a relatively low level of abstraction, i.e. the direct low-level control of the system [2] remains with the user. This leads to a high cognitive load with accompanying loss of concentration, especially for persons depending on interfaces like speech control or eye movement trackers. In order to relieve the users of this kind of tiresome control, the treatment of tasks on a higher abstraction level becomes desirable [3]. The system should be able to perform chains of actions that are repeated during daily-life tasks autonomously and/or with the minimum necessary user interaction.
5.1 Basic Concepts and Hardware
5.1.1 The FRIEND Project
With the motivation to overcome the situation explained above, the Institute of Automation (IAT), University of Bremen, has been developing the rehabilitation robotic system FRIEND since 1997. The system belongs to the category of "intelligent wheelchair mounted manipulators". It focuses on users with high spinal cord injury who are unable to control the manipulator by means of a keyboard or joystick. The system shall offer support during daily-life activities so that its users become independent of care personnel. The strategic objective of the FRIEND project is to offer life autonomy for approximately 2 hours per day. Besides the fact that this is one of the main requirements mentioned by the users, the fulfilment of this objective would have a strong impact on the economic acceptance of the rehabilitation robotic system. In order to reach this strategic objective, the IAT has chosen the so-called "pour a drink" task as the first daily-life task to be offered by the FRIEND system. The realization of a robust and (semi-)autonomous execution of this task unveils a number of challenging technical problems to be solved that are also representative
for other tasks. Therefore, the investigation and realization of the "pour a drink" task have the potential to yield a general method for robust high-level task execution in rehabilitation robotic systems. Within this chapter, the hardware and software structure of FRIEND are presented. Afterwards, Sect. 5.2 focuses on the realization of the "pour a beverage" task with the help of FRIEND.
5.1.2 Hardware Structure of FRIEND
FRIEND (see Fig. 5.1) consists of an electric wheelchair (SPRINT, Meyra, Germany) and a 6-DOF robot arm (MANUS, Exact Dynamics, Netherlands). The arm is connected via a CAN bus interface to a PC that is mounted on the back of the wheelchair. As the user interface, an off-the-shelf speech recognition system is used. Other input devices that are better adapted to the user's needs may replace speech recognition.
Fig. 5.1. Front view of FRIEND
FRIEND operates in a flexible, human-centred environment. It must be able to react to environmental changes dynamically during robot arm movements. For this purpose, the system possesses a number of sensors for environmental perception. For visual perception, the system is equipped with an adjustable stereo-camera system, which is mounted behind the user's neck, and a gripper-mounted camera. Using visual sensors for environmental perception has two sides: on the one hand, it offers the largest amount of information about the current environmental state.
On the other hand, the complexity of signal processing and pattern recognition algorithms based on visual information is tremendous. Additionally, in many cases the reliability of the perceived information is insufficient for robust task execution. Therefore, the IAT developed a "smart" tray that is mounted at the front side of the wheelchair. The "smart" tray (see Fig. 5.2) provides information about the weights of objects placed upon it through a scale. The location of objects relative to a coordinate system fixed to the tray is measured through a touchpad. As shown in the succeeding sections, the reliable information offered by the tray, in combination with the visual information perceived by the cameras, facilitates robust task execution. Besides the sensor systems mentioned so far, the gripper of the robot arm is equipped with a force-sensitive foil to perform sensitive grasping actions.
Fig. 5.2. Smart Tray. a top-view on scale surface, b matrix foil of position sensor
5.1.3 Multi-layered Control Architecture of FRIEND
The integration of sensors, actuators, and human-machine interfaces into a computer program is a challenging task from the viewpoint of software engineering. A control architecture is required that offers an infrastructure for task planning, user interaction, and resource administration, as well as for the dynamic activation and deactivation of closed-loop control processes for the realisation of reactive operations. A dominant design principle for this kind of problem is the so-called hybrid multi-layer architecture. This type of architecture combines the main design principles for autonomous physical agents, i.e. the reactive and the deliberative agent. Example architectures based on this principle are 3T [4], TCA [5], and SmartSoft [6]. Hybrid multi-layer architectures consist of three layers. The bottom layer realizes the reactive part of the robotic system (the autonomous physical agent), i.e. the direct coupling of sensorial input to actuator control.
Within this layer, concurrently running and interacting behaviours determine the interaction of the system and the environment. The top layer, called the deliberator, plans operators on the highest level of abstraction. Here, methods from the field of classical artificial intelligence are used. Between these two layers resides the sequencer. The sequencer creates task schemes based on reactive behaviours that are offered as higher-level abstraction operators to the deliberator. Besides this "gluing" functionality, this layer is responsible for the deadlock-free execution of reactive behaviours that have to share limited system resources. Even though the latter aspect is crucial for safe execution, the avoidance of such situations cannot be guaranteed by some implemented architectures [5]. Here, the formal specification and analysis of all possible action sequences on the sequencer level, as presented in Sect. 5.2.5, is necessary. As shown in [7], traditional three-layered control architectures do not meet the needs of the field of rehabilitation robotic systems. For this purpose, a modified layered control architecture was designed that enables autonomous execution interrupted by man-machine interactions (see Fig. 5.3). The main design principle is still based on the hybrid multi-layer concept, so that the integration of reactive behaviours as well as deliberative capabilities remains possible. Within the modified architecture, a human-machine interface (HMI) replaces the deliberator. Additionally, there is a direct control path from the HMI to the actuators and sensors, so the system can rely on the user's cognitive capabilities whenever necessary. The sequencer plays the role of a discrete event controller (DEC) that is responsible for the proper generation of action sequences related to high-level commands. The arrow from the HMI to the sequencer indicates such a command, like pouring and offering a drink. After the receipt of a command, the sequencer performs the following steps. First, the sequencer has to load command-related task knowledge to fix the information necessary for the determination of the current internal and environmental state. The state information is required for the succeeding generation of the command-related action sequence and is stored within the symbolic part of the world model. On this level of abstraction, a state is defined as a binary vector where each vector element represents a single fact about the environment or the system itself. The determination of the facts that are necessary for a complete state description takes place during the process of task knowledge modelling. In order to acquire the current state at the beginning of the task execution, the sequencer queries the world model about the desired facts. If no information is available, monitoring commands or user interactions are activated by the sequencer in order to update the state description within the world model. After the initial state has been determined successfully, the sequencer generates sequences of executable elementary operations (EEOPs) that are passed over to the reactive layer or the human-machine interface. From the sequencer's point of view, EEOPs are split up into five categories:
• Reactive operators (direct sensor and actuator coupling, like visually controlled gripping of an object)
• Monitoring commands (e.g. identification of objects)
• User commands (e.g. moving a camera in the direction of an object)
• Calculation operators (e.g. trajectory planning)
• Direct actuator control (e.g. gripper movements).
Fig. 5.3. Modified hybrid multi-layer control architecture
In order to indicate its execution result, an EEOP returns a discrete value, like Success or NotFound, to the sequencer. Depending on this value, the sequencer decides whether to proceed with the execution or to modify the generated sequence, i.e. to perform a re-planning step. The reactive layer is located on the lowest level of the control architecture. This layer consists of software servers that encapsulate the hardware of sensors (e.g. cameras) and actuators (e.g. the robot arm). Each server offers its services, i.e. monitoring operations or actuator control, via client-objects.
Each client-object can be instantiated within the context of another process, i.e. the sequencer or the human-machine interface. By means of the client-objects, EEOPs for monitoring, direct control, reactive operation, or user interaction are constructed. For instance, if a reactive operation has to be performed, e.g. visually guided grasping of objects, the necessary clients for sensor input and actuator control are instantiated and connected dynamically within the context of the EEOP. The exchange of information between EEOPs is performed via the sub-symbolic part of the world model. Here, besides the administration of the required resources, the sequencer controls the related flow of information. For instance, if visual position information about an object is required by a reactive operation, a monitoring operation that produces this information has to be executed first. If the monitoring operation fails, the sequencer is informed via an appropriate return value and can react to this unforeseen situation. The following section describes the development of the executable elementary operators that are necessary for the execution of the "beverage serving" task. First, each operator and its integration into the FRIEND system are described separately by means of a stepwise explanation of the task scenario. Afterwards, a task planning approach that makes use of these operators and that is used within the control architecture is presented.
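The interplay of EEOPs and sequencer can be pictured with the small sketch below; the return values, the callable EEOP interface, and the trivial replan placeholder are illustrative assumptions, not the FRIEND implementation.

from enum import Enum, auto

class Result(Enum):
    SUCCESS = auto()
    NOT_FOUND = auto()
    FAILED = auto()

def replan(failed_op, result, world_model):
    """Placeholder re-planning step: a real sequencer would, e.g., insert a
    monitoring EEOP that updates the missing fact before retrying."""
    return [failed_op]

def run_sequence(eeops, world_model, max_replans=3):
    """Execute EEOPs in order; on a non-success result, re-plan a bounded
    number of times, otherwise hand control back to the user/HMI."""
    queue, replans = list(eeops), 0
    while queue:
        op = queue.pop(0)
        result = op(world_model)          # each EEOP returns a discrete Result
        if result is Result.SUCCESS:
            continue
        if replans >= max_replans:
            return False                  # give up: the HMI asks the user for help
        replans += 1
        queue = replan(op, result, world_model) + queue
    return True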
5.2 Application and Control
This section explains the realization of the "beverage serving" task by means of the FRIEND system. First, the scenario is described on an abstract and intuitive level. This sketchy explanation is followed by a detailed description of the technical realisation of the executable elementary operators for object detection and object manipulation, such as visually guided object grasping, obstacle avoidance, and the pouring process itself. To obtain robust behaviour, this operation is executed under closed-loop control that exploits weight information from the "smart" tray and adapts a trajectory obtained via the method of demonstration-based programming. A conclusion of the task explanation is that the ability of task planning becomes a necessary prerequisite if autonomous task execution is required. Therefore, a real-time suitable task planning approach based on enhanced assembly planning methods is introduced. The resulting task planning component is integrated into the sequencer of FRIEND's control architecture and makes use of its executable elementary operators. Finally, the demonstration-based programming method for the creation of operators is presented in more detail. This programming method helps to create executable elementary operations more easily than traditional engineering approaches, which enhances the system's flexibility and robustness during task execution.
5.2.1 The „Beverage Serving“ Task
The scenario can be described as follows: a person (e.g. care personnel) arbitrarily places a glass and an already opened bottle on top of the tray. Afterwards, the user enters the single command "Serve Drink", and the system shall fill the glass with the beverage inside the bottle and move it to the user's mouth autonomously. After the user finishes drinking, the glass is put back on the tray, ready for the next drink-serving process. Even though some hard restrictions, like an empty glass and an already opened bottle, have been introduced, the realization of the autonomous execution of this task is still a great technical challenge. First, the system has to detect and grasp the bottle. Second, it has to locate the neck of the bottle relative to the glass to perform the succeeding pour-in action. Third, the pouring process has to be performed and observed to fill the glass sufficiently. Afterwards, the bottle has to be placed back on the tray and the glass has to be grasped. Finally, the glass has to be moved into the vicinity of the user's mouth. In the following, the realization of these steps is described in detail.
5.2.1.1 Object Detection
The ability of object detection is a fundamental requirement for autonomous task execution. Within this scenario, object detection based on natural characteristics is required in order to avoid artificial object markings. It turned out that the colour of an object is suitable for this purpose. Unfortunately, using colour information introduces some additional problems compared to grey-value image processing. Different external influences, like variable lighting conditions, can cause large fluctuations in the objects' colours, and it might happen that differently coloured objects possess the same colour within the captured image. In order to overcome this problem, our object detection algorithm makes use of the HSV (hue-saturation-value) colour space. The main advantage of this colour space is that the H and S channels for objects of the same colour are almost independent of changes in illumination conditions. On the basis of this colour space, a new method for automatic object recognition based on a fuzzy decision system is used for object classification [8]. The result of the method is shown in Fig. 5.4 (a), (b). The system detects the bottle as well as the glass in both images of the stereo-camera system. This is symbolized by the bounding ellipses within the pictures to the right. The detection process itself works at a frame rate of 10 fps, achieved on a Pentium IV 2.8 GHz platform with an image size of 512 × 256 pixels. This frame rate is necessary for smooth movements of the robot arm when it is operating in visual servoing control mode. The main disadvantage of the presented object identification and detection method is that it does not reason about the detected objects. Within a cluttered and unstructured environment, it might happen that wrong objects are identified because of changing illumination conditions. By using the 'smart' tray, we can cope with this problem: with the information from the tray, implausible detections can be excluded. Fig. 5.4 depicts the identified objects necessary for the pouring action, obtained by using the image information in combination with the touchpad.
Fig. 5.4. Object identification. a stereo camera view, b detected objects, c raw sensor information, d touchpad view
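As an illustration of HS-based colour segmentation, the OpenCV sketch below thresholds the hue/saturation channels and fits a bounding ellipse, as in Fig. 5.4 (b). The fixed hue/saturation bounds stand in for the fuzzy decision system used in the actual method; OpenCV 4 is assumed for the findContours return signature.

import cv2
import numpy as np

def detect_coloured_object(bgr, hs_low=(100, 80), hs_high=(130, 255), v_min=30):
    """Detect one coloured object (e.g. a blue bottle) by thresholding H and S,
    which are comparatively insensitive to illumination changes."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([hs_low[0], hs_low[1], v_min], dtype=np.uint8)
    upper = np.array([hs_high[0], hs_high[1], 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                 # fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)       # bounding ellipse of the detected object

In the described system, a detection of this kind in both stereo images, cross-checked against the tray's touchpad readings, yields the object hypothesis used for the pouring task.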
5.2.1.2 Object Approaching, Grasping, and Crossover
Based on the image information obtained by object detection, the pouring action can be performed. In order to implement this part of the scenario, the gripper of the robot arm has to grasp the identified bottle and move it to the vicinity of the glass to execute the pouring action in closed-loop mode. For this purpose, image-based visual servoing as well as the "look-and-move" approach are used.
Vision-Based Object Manipulation
A human being grasps objects almost invariably with the aid of vision. He or she uses visual information to locate and identify objects and to decide how to grasp them. Additionally, visual information is used for obstacle avoidance during the movement of the hand towards the desired object, as well as for the correct alignment of the hand. The latter aspect, i.e. hand-eye coordination, is the basic principle of the visual servoing method for robust robot arm control. The method uses visual information as the feedback value in a closed control loop, so that approaching objects becomes independent of calibration errors of the vision system, such as camera or position calibration errors. In visual servoing, it is necessary to generate gripper motions from visual observation. The visual controller (see Fig. 5.5) uses the locations of features on the image plane directly for feedback.
Fig. 5.5. Conventional visual controller
The controller includes an image Jacobian matrix, which describes the relationship between changes in image-plane coordinates and changes in world coordinates [9]. During the control process, the system simultaneously tracks the robot arm and the target, e.g. the bottle. Within this context, the image error e^I (i.e. the control error) is defined as the distance in both images between the reference point r^I_actual, e.g. the gripper, and the target point r^I_desired, e.g. the bottle. Driving this error to zero in both images is equivalent to a 3D movement of the gripper into the vicinity of the object to be gripped (see Fig. 5.6: u^W is the output of the controller, r^W the position of the robot gripper in world coordinates). This conventional visual servoing is a well-known method and has been realised for many applications worldwide. The disadvantage of the method is that fixed cameras with a constant, usually small, focal length are used. This reduces the complexity of the control algorithm, but at the same time it restricts the workspace of the system.
Fig. 5.6. Visual servoing based grasping of a glass
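For illustration, the following is one common single-camera formulation of an image-based visual servoing step using the point-feature interaction matrix; the FRIEND controller's exact structure (stereo images, extended pan-tilt and zoom loops) differs, and the gain, focal length, and depth estimates here are assumptions.

import numpy as np

def ibvs_velocity(features, features_des, depths, f=1.0, gain=0.5):
    """Compute a camera twist v = -gain * L^+ * e that drives image feature
    errors to zero.

    features, features_des : (N, 2) current and desired image points
    depths                 : estimated depth Z of each feature point [m]
    Returns a 6-vector (vx, vy, vz, wx, wy, wz).
    """
    e = (features - features_des).reshape(-1)           # stacked image error
    L = []
    for (x, y), Z in zip(features, depths):
        # Interaction (image Jacobian) matrix rows for one point feature.
        L.append([-f / Z, 0, x / Z, x * y / f, -(f + x * x / f), y])
        L.append([0, -f / Z, y / Z, f + y * y / f, -x * y / f, -x])
    L = np.asarray(L)
    return -gain * np.linalg.pinv(L) @ e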
In order to enhance the flexibility of the system, it must be possible to manipulate objects that are not placed on the tray. Even in the “gripping from the tray” scenario, it might happen that the robot gripper leaves the common field of view of the fixed cameras, so that vision based control becomes impossible. To overcome this problem adjustable cameras are used. Both cameras are mounted on pan-tilt heads and have variable focal lengths. In this case, the common field of view covers the whole workspace of the robot arm [10]. To use the additional degrees of freedom two additional control loops (Pan-Tilt and Zoom) are included for each camera (see Fig. 5.7).
Fig. 5.7. Extended visual controller
The pan-tilt control loop keeps the object in the centre of the image and the zoom loop controls the image size so that the objects of interest always have a sufficient resolution (see Fig. 5.8).
Fig. 5.8. Adaptation of image resolution and camera position during pan-tilt head (PTH) control
The application of these enhanced control loops increases the robustness of the visual-servoing-based grasping process. To enhance the robustness of the system further, the visual information and the information from the 'smart' tray are merged. With this additional information, the gripper position can be adjusted with the necessary accuracy.
"Look-and-move" Based Object Manipulation
First, we briefly recall the classical 'look-and-move' paradigm. This approach uses visual sensor information to compute the 3D position and orientation of the target object in order to report a 3D pose for the robot to achieve. The ability to compute the relative 3D position and orientation of the object in Cartesian space implies that:
1. a 3D model of the object is available,
2. the visual features used to recognize and locate the object are represented in the model,
3. the visual sensor is calibrated in order to be able to estimate Cartesian positions and orientations.
In order to overcome time-consuming calibration and to exclude the necessity of image models for objects, the 'smart' tray is used as an additional sensor. This tray provides information about the placement of the objects relative to the tray coordinate frame. By combining this information with the results of the colour image processing described in 5.2.1.1, the 3D position of identified objects can be calculated easily with sufficient accuracy. At first glance, this modified 'look-and-move' method seems to be a suitable technique to control the robot arm in order to manipulate different objects. However, it still has some disadvantages:
1. No real-time correction of the robot path is possible.
2. Errors in calibration directly affect the accuracy with which the desired position is determined.
3. Information about 3D object characteristics, e.g. height, is required in order to compute the target position of the robot arm.
4. The provided sensor information is restricted to the X and Y directions.
5. The number of executable tasks is restricted to a priori known objects that have to be placed on the tray.
In order to overcome the problem of storing object-related data and to be able to grasp and manipulate different objects anywhere in the workspace of the robot arm, a combination of visual servoing and 'look-and-move' is used. This avoids the disadvantages of the look-and-move approach and the inaccuracy of visual servoing due to the small focal length. For grasping and manipulation of the objects on the tray the result is as follows:
1. After the first object identification, the robot gripper is moved into the vicinity of the object to be gripped by means of visual servoing.
2. Based on the information provided by the 'smart' tray, the gripper position is adjusted with sufficient accuracy.
3. After the gripper has reached its target position relative to the desired object, a pre-programmed grasp action is executed.
(A brief illustrative sketch of this sequence is given at the end of this subsection.) In order to execute the pouring task, the bottle has to be moved into a predetermined relative position close to the glass. For the execution of this motion, both techniques described above can be used. The difference to the 'gripping task' lies in the definition of the desired position and the reference point for control. In this case, the gripped bottle determines the reference point instead of the gripper. The desired position is calculated from the desired relative position of the bottle and the glass determined by the pouring trajectory. After the desired position is reached, the beverage pouring can start.
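The grasping sequence described above (visual servoing into the vicinity of the object, refinement via the 'smart' tray, pre-programmed grasp) can be summarised in the following short sketch. The helper callables stand in for the corresponding FRIEND subsystems; their names and the pixel tolerance are assumptions made for illustration only.

def grasp_object_on_tray(servo_step, image_error, tray_position, adjust_gripper,
                         execute_grasp, tolerance=5.0):
    # 1. visual servoing moves the gripper into the vicinity of the identified object
    while image_error() > tolerance:
        servo_step()
    # 2. the 'smart' tray information (X/Y in the tray frame) refines the gripper position
    x, y = tray_position()
    adjust_gripper(x, y)
    # 3. a pre-programmed grasp action closes the gripper around the object
    execute_grasp()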
5.2.1.3 Beverage Pouring

To guarantee that no beverage splashes out of the glass, the pouring trajectory as well as the flow have to be controlled. The pouring trajectory is determined by the movement of the bottle tip with respect to the glass and the slope of the bottle. While it is difficult to model this trajectory, it is much easier to obtain it through demonstration [11]. That is, a human instructor demonstrates how the bottle moves during a pouring process. The demonstrated movement is measured and recorded with a position sensor providing 6-DOF information. This information is transformed to obtain the pose of the bottle tip with respect to the centre of the glass opening. Hence, the acquired pouring trajectory is independent of specific bottles and glasses, and can easily be applied in a pouring task for various objects. In addition, this demonstrated trajectory is chosen as a general one, which covers most pouring processes regardless of different filling levels. For the utilization in the system, the trajectory is stored in the form of a list data structure. Figure 5.9 shows the whole procedure. As a result, a human being's experience of beverage pouring has been observed and stored in the robot system.
Fig. 5.9. Generation of the pouring trajectory. a Demonstration, b List of trajectory, c Transformation
When a pouring process begins, the acquired trajectory is used as a reference for the movement of the bottle, while the actual pose of the bottle is controlled by the beverage flow, i.e., the actual trajectory is a modification of the originally observed trajectory. Since the initial filling level of the bottle is unknown, the observed trajectory may be repeated partly or wholly until the desired filling weight is achieved. If the whole trajectory has been executed and the filling set point is still not reached, the system stops the pour-in task and offers a warning message because the bottle is empty.
To eliminate the influence of the initial conditions on the actual pouring task, such as the shapes of bottle and glass and the initial liquid level, a closed control loop is applied. The 'smart' tray measures the filling weight and derives the beverage flow from it. Figure 5.10 shows the set-up when beverage pouring begins.
Fig. 5.10. System set-up for pouring beverage
The control loop is shown in Fig. 5.11, where Win, Fref and Factual stand for the desired weight to be filled, the desired flow range and the actual flow, respectively. As a result, the pouring process is accomplished without interaction of the user. Due to the closed-loop control, a very simple structure is able to handle different initial and boundary conditions.
Fig. 5.11. Schema of pouring control loop
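One cycle of the control loop of Fig. 5.11 can be sketched as follows. The way the bottle pose is advanced or retracted along the demonstrated trajectory, as well as the interface of the function, are assumptions made for illustration; only the signals Win, Fref and Factual are taken from the figure.

def pouring_control_step(w_filled, w_in, f_actual, f_ref, traj_index):
    # w_filled   - weight already poured into the glass (measured by the 'smart' tray) [g]
    # w_in       - desired filling weight Win [g]
    # f_actual   - actual flow Factual, derived from the tray weight signal [g/s]
    # f_ref      - desired flow range Fref as (f_min, f_max) [g/s]
    # traj_index - current position within the demonstrated pouring trajectory
    if w_filled >= w_in:
        return traj_index, True               # filling set point reached, stop pouring
    f_min, f_max = f_ref
    if f_actual < f_min:
        traj_index += 1                       # tilt further along the demonstrated trajectory
    elif f_actual > f_max:
        traj_index -= 1                       # move back along the trajectory to reduce the flow
    return traj_index, False                  # continue pouring

If the index runs past the end of the stored trajectory before the set point is reached, the calling routine would stop the pour-in task and issue the "bottle empty" warning described above.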
5.2.1.4 Put an Object Down

After the pouring process is finished, the bottle has to be placed back on the tray. Visual servoing, as used by humans, would imply that:
1. the image processing is able to recognize the tray shape and all objects which are still placed on it;
2. a free position on the tray has been calculated in both camera images;
3. the contact of the two planes in the stereo images (laying the object down on the tray bottom) must be determined.
The visual servoing procedure is very robust and accurate as long as the image error between two objects has to be reduced. In the case that the image error is defined as the difference between two virtual points in the image, the inaccuracy of the cameras and the correspondence problem in image processing become noticeable. To realize the put-down action accurately and collision-free, the image processing needs to recognize small details in the scene with high accuracy. This requires elaborate image processing and calibration algorithms, which can become very time consuming. The use of the "smart" tray avoids most of the problems mentioned above. First, the system has to determine a free position based on the touchpad information. In order to simplify the search, the biggest object-free area on the tray is determined (Fig. 5.12).
Fig. 5.12. Flow chart "put down object"
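The flow chart of Fig. 5.12 translates into a procedure of the following shape. The callables are placeholders for the corresponding FRIEND operations and the plausibility test is simplified; both are assumptions made for illustration.

def put_down_object(search_free_position, move_gripper_above, step_down,
                    weight_step_detected, weight_step_position, open_gripper,
                    max_steps=100):
    target = search_free_position()          # biggest object-free area on the tray
    move_gripper_above(target)
    for _ in range(max_steps):               # move downwards towards the tray surface
        step_down()
        if weight_step_detected():           # tray reports a weight increase / 'new' object
            # plausibility check: the new object has to appear within the expected area
            if weight_step_position() == target:
                open_gripper()
                return True
    return False                             # no contact detected within the allowed steps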
On the basis of this area, the put-down position is calculated relative to the tray coordinate system, and the gripper can be moved above this position. Starting from this position, the gripper is moved downwards in the direction of the tray surface. During this process, the system checks the tray data continuously in order to detect the contact between the object and the tray: if the system detects an increase of the weight and the touchpad detects a "new" object, an additional plausibility check is executed. If the new object appears within the expected area, the system stops the movement of the robot and opens the gripper.

5.2.2 Obstacle Avoidance

In the preceding sections, it was assumed that no obstacles have to be considered. Within realistic scenarios, the possibility of obstacles has to be taken into account and obstacle avoidance has to be performed. First, the obstacles have to be detected. In our scenario, each object that is located between the gripper and the target is treated as an obstacle, regardless of its actual depth position. For the manipulator action, only obstacles that are in the neighbourhood of the manipulator and observable in both camera images are taken into consideration. For reasons of simplification, it is assumed that the obstacles can be distinguished sufficiently from the background by their colour. They are described by surrounding rectangles, and it is assumed that the objects, or at least parts of the obstacles, are in the same plane as the gripper and target. To detect an obstacle, a small rectangular area (region of interest, ROI) between the gripper and the object is defined. The area is bounded by the centre of gravity (COG) of the blobs at the target and the blue light emitting diode at the gripper (see Fig. 5.13). If an object is detected in this region within both images and the two ratios of the object width to the ROI width are equal, i.e. w11 : w12 ≅ w21 : w22, it can be concluded that there is an obstacle between the gripper and the target object.
Fig. 5.13. Defining the region of interest. a left camera view, b right camera view
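The width-ratio test can be written down directly; the tolerance used to decide whether the two ratios are "equal" is an assumption made for illustration.

def obstacle_between(w11, w12, w21, w22, tol=0.1):
    # w11, w12 - object width and ROI width in the left camera image [pixels]
    # w21, w22 - object width and ROI width in the right camera image [pixels]
    # an obstacle is assumed if the two ratios are (almost) equal,
    # i.e. w11:w12 is approximately w21:w22
    return abs(w11 / w12 - w21 / w22) < tol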
After the detection of the obstacle, it is necessary to generate a suitable trajectory which excludes a collision. For the realisation of the manipulator movement
from the start to the target position, the path is divided into discrete steps. The procedure proceeds by generating a provisional set point for each movement step. Afterwards, this set point is used by the image-based visual servoing system. After each step the system checks whether the objects are still in the centre of the image and whether the resolution is sufficient. Otherwise, the angles of the pan-tilt heads and the focal lengths are adjusted. This method guarantees that the robot arm and the objects of interest are kept in the common field of view. A fixed trajectory can be calculated easily with the help of the epipolar geometry [12]. However, this reduces the ability to react in different situations. For example, the robot arm might move out of the field of view while executing a significantly larger obstacle avoidance motion. Another problem could be an inadequate camera resolution, so that the gripper cannot be detected in the image. Figure 5.14 shows the provisional set points for the movement of the robot arm. A provisional set point is a virtual point without a detectable image feature in the real scene. The gripper and the object are also represented by virtual points, i.e. the light emitting diode above the gripper has to be transformed to the centre of the gripper to determine the image-based grasping point [13].
Fig. 5.14. Front view of the image
The gripper {G}, the object to be grasped {O}, and the obstacle are shown in Fig. 5.14. The gripper moves to point {1}, where the image processing detects the obstacle. Then, the gripper performs an evasive motion towards {2}. This procedure is repeated until the object {O} is approached. After each movement step, the PTH- and zoom-control guarantee that the image resolution suffices for gripper as well as object detection within the next step. The foil sensor of the "smart" tray, which offers redundant information about the position and width of objects and obstacles, supports the detection process.
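The generation of provisional set points can be sketched as a simple stepping scheme in the image plane. The step size, the evasion offset and the termination criterion are assumptions for illustration; the actual system additionally adjusts the pan-tilt heads and the zoom after each step, as described above.

import numpy as np

def provisional_set_points(gripper, target, obstacle_detected, evade_offset,
                           step=0.1, max_steps=50):
    # gripper, target   - image positions of {G} and {O} as 2D vectors
    # obstacle_detected - callable(point) -> True if an obstacle blocks the direct path
    # evade_offset      - 2D offset applied for the evasive motion (e.g. towards point {2})
    points = []
    current = np.asarray(gripper, dtype=float)
    goal = np.asarray(target, dtype=float)
    for _ in range(max_steps):
        nxt = current + step * (goal - current)      # provisional step towards {O}
        if obstacle_detected(nxt):
            nxt = nxt + np.asarray(evade_offset, dtype=float)
        points.append(nxt)
        current = nxt
        if np.linalg.norm(goal - current) < step:    # object {O} approached
            break
    return points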
5.2.3 Task Planning

So far, the description of the sequence of operations that are necessary for the execution of the "beverage serving task" was quite straightforward. The bottle has to be grasped, moved into a position relative to the glass so that the pouring process can be started, and placed back on the tray afterwards. A deeper investigation of the situation reveals that the assumptions made implicitly, like "the bottle can be grasped directly" or "the cameras will offer all necessary information about the manipulated object", cannot be guaranteed under real-world conditions. The following example underlines these statements and offers a method to derive plans for the control of unknown situations. The method is explained for a particular case. In order to perform a pouring action safely, the system has to determine whether the bottle can be grasped directly or whether an object that is standing close to the bottle has to be relocated first, so that the bottle can be grasped afterwards. For this decision the system analyses the location of objects on the tray. For each object, a "safe gripping" area is defined (Fig. 5.15). If these areas of two objects intersect, relocation is necessary. For instance, if the glass stands in front of the bottle (from the gripper's point of view it hides the bottle), the system has to put the glass on another "free" place. The "relocation" consists of the following subtasks: grasp the glass in front of the bottle, lift the glass and put it down on a free place on the tray. Afterwards, the bottle can be grasped and moved close to the glass, ready for the pouring process.
Fig. 5.15. Analysis of the objects' placement on the tray. a pour in process can be performed directly, b relocation is required
The relocation process described above can be interpreted as the result of a task planning process performed by the robotic system. First, the system reasons about the current environmental situation. Afterwards, on the basis of this situation, it plans the actions to be performed next in order to reach the target situation related to the task to be solved. It is evident that task planning becomes a necessary prerequisite if robust task execution on a relatively high level of abstraction is demanded.
The demand for more tasks to be executed by the system reinforces this requirement. Task planning problems have been investigated in the field of artificial intelligence (AI) for the last 30 years, with the objective of creating fully autonomous agents, e.g. robots. Until now, domain-independent planners from this area like STRIPS, ABSTRIPS, NOAH, MOLGEN, DEVISER and SIPE [14, 15] are still not robust and efficient enough to work in real-world robotic systems like flexible assembly systems or service robots [16]. For instance, these systems suffer from the combinatorial explosion of the search space and do not take the uncertainty of the environment into consideration. In order to overcome these drawbacks, domain-dependent assembly planning methods have been enhanced, so that the planning of tasks on a higher level of abstraction becomes possible for real-world applications [16]. Within these approaches, as much knowledge about the task as possible is integrated in advance, so that the planning problem can be solved with search methods of low computational cost. Here, the choice of an adequate task representation data structure is essential.

5.2.3.1 Task Representation

In the field of assembly planning, so-called AND/OR-graph data structures are frequently used to represent domain-dependent state relations between parts of a product within an assembly process. These data structures are compact and efficient to search. In the following, AND/OR-graphs as well as their task planning counterparts, the so-called AND/OR-nets, are informally introduced. For a formal definition of AND/OR-nets see [16]. Figure 5.16 depicts the structure of an AND/OR-graph. Each node of the AND/OR-graph represents either a single part (e.g. {A}), a subassembly (e.g. {A,B,C}) or the final product (e.g. {A,B,C,D}). Nodes are connected via AND-hyperarcs that represent feasible assembly or disassembly operations performed, e.g., by a manipulator. One side of the arc refers to the target node that represents the result of an assembly operation. The other side is connected with at least two nodes that represent the parts or the subassemblies before the assembly operation. For example, {A,B,C,D} is connected with {A,B,C} and {D}. Alternatively, {A,B,C,D} can be constructed via the assembly of {A,B} and {C,D}. Cao and Sanderson [16] proposed an extension of AND/OR-graphs, so-called AND/OR-nets, which consider certain state representations emerging especially within robotic task planning scenarios. For this application, nodes represent objects and their geometrical relations within certain task scenarios. The definition of AND/OR-nets, in contrast to AND/OR-graphs, allows two nodes to contain the same objects. These nodes differ in an internal state that represents a current geometrical object constellation or object state.
Fig. 5.16. AND-OR graph example
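In code, such an AND/OR-graph can be represented very compactly, e.g. as a mapping from a node to its alternative AND-hyperarcs. The concrete hyperarcs below are one possible reading of Fig. 5.16 and are given for illustration only; the formal definition is found in [16].

# nodes are sets of parts; each AND-hyperarc links a target node to the
# parts or subassemblies it can be assembled from
and_or_graph = {
    frozenset("ABCD"): [(frozenset("ABC"), frozenset("D")),
                        (frozenset("AB"), frozenset("CD"))],   # two alternative assemblies
    frozenset("ABC"):  [(frozenset("AB"), frozenset("C"))],
    frozenset("AB"):   [(frozenset("A"), frozenset("B"))],
    frozenset("CD"):   [(frozenset("C"), frozenset("D"))],
}

# e.g. and_or_graph[frozenset("ABCD")] lists the feasible operations that
# produce the final product {A,B,C,D}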
The structure of an AND/OR-net will be explained by means of the "beverage serving" task: Fig. 5.17 depicts the corresponding AND/OR-net. The bottle {Bo}, the tray {Tr}, the robot arm {Ro} and the glass {Gl} appear in different situations. For instance, the node [Bo,Gl,Tr]0 represents the object constellation in which the empty glass stands on the tray next to the filled bottle, whereas [Bo,Gl,Tr]1 represents the filled glass standing on the tray next to the empty bottle. Cao and Sanderson distinguish such nodes with a predetermined internal state number (attached to the set of objects). Operators that are associated with the AND-arcs are labelled AOP (assembly operator) and DOP (disassembly operator). The AND/OR-net structure fixes all operation sequences that are possible for the execution of a task. In order to generate a feasible task plan based on the net depicted in Fig. 5.17, the initial as well as the desired target state have to be acquired. The target is specified during the process of task modelling, because it has a fixed association to a high-level command. The initial state, however, has to be determined within a monitoring process. Therefore, it is necessary to enhance the information offered by the AND/OR-net structure, so that particular states of objects as well as relations between objects can be determined and associated with a state. For this purpose, facts are introduced and connected with the nodes of the AND/OR-net [17]. For instance, node [Bo,Gl,Tr]0 can be associated with the facts:
1. StandsOn( Bo, Tr ) = TRUE
2. StandsOn( Gl, Tr ) = TRUE
3. IsEmpty( Gl ) = TRUE
4. ...
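A sketch of how such facts could be attached to the net nodes and compared against the monitored world state follows; the fact values assigned to [Bo,Gl,Tr]1 are an assumption for illustration.

node_facts = {
    "[Bo,Gl,Tr]0": {"StandsOn(Bo,Tr)": True, "StandsOn(Gl,Tr)": True, "IsEmpty(Gl)": True},
    "[Bo,Gl,Tr]1": {"StandsOn(Bo,Tr)": True, "StandsOn(Gl,Tr)": True, "IsEmpty(Gl)": False},
}

def node_matches(node, observed):
    # a node describes the current state if all of its facts hold in the observed world model
    return all(observed.get(fact) == value for fact, value in node_facts[node].items())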
Fig. 5.17. Example AND/OR-Net for the “Beverage Serving” Task
It has to be guaranteed that each fact that appears inside the AND/OR-net can be determined within an initial monitoring process. The initial monitoring process determines the initial state either by means of a monitoring operation, like the object detection described above, or with the help of the user. For the example of the "beverage serving task", the initial state is established by the subsumption of the nodes [Ro]0 and [Bo,Gl,Tr]0. This represents the state that the gripper of the robot arm is empty and the filled bottle as well as the empty glass are standing on the tray. The target state is established by the nodes [Bo,Gl,Tr]1 and [Ro]0. That represents the filled glass and the empty bottle standing on the tray and the empty gripper of the robot arm. The problem of creating a feasible sequence of operators (a task plan) that, when executed, will drive the system from the initial to the target state is now reduced to a graph search problem. Because of the finite size of the net as well as its implicit restrictions on feasible operations, the search methods used in [16] are real-time suitable and do not suffer from the halting problem.

5.2.3.2 High-Level-Plan Generation

As shown in [16], an arbitrary AND/OR-Net can be transformed into an equivalent Petri-Net. This offers the possibility to perform the planning process based on the reachability graph of the resulting Petri-Net. An informal description of the AND/OR-Net to Petri-Net transformation algorithm is depicted in Fig. 5.18.
Fig. 5.18. AND/OR-Net to Petri-Net transformation
The Petri-Net resulting from the transformation of the AND/OR-Net in Fig. 5.17 is depicted in Fig. 5.19. The markings in the places [Bo,Gl,Tr]0 and [Ro]0 represent the initial state of the "beverage serving task". In the case of a Petri-Net, a task plan is equivalent to a sequence of transitions that transforms the initial marking of the Petri-Net into the target marking. For the "beverage serving task", the target marking is to have marks in [Ro]0 and [Bo,Gl,Tr]1.
Fig. 5.19. Petri-Net corresponding to the "pour in a drink" task
The high-level plan generated from the Petri-Net depicted in Fig. 5.19 is:
1. GripObject( Ro, Bo )
2. MoveObject( Ro, Bo )
3. PourIn( Ro, Bo, Gl )
4. MoveObject( Ro, Bo )
5. PutDownObject( Ro, Bo, Tr )
6. Depart( Ro ).
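Finding such a plan is a reachability search over the markings of the net. The following sketch uses a plain breadth-first search rather than the fuzzy-Petri-net reasoning of [16], and the pre- and post-sets of the transitions are an assumed reading of Fig. 5.19; the place names, however, follow the figure.

from collections import deque

# transition name -> (places consumed, places produced); assumed reading of Fig. 5.19
transitions = {
    "GripObject(Ro,Bo)":        ({"[Bo,Gl,Tr]0", "[Ro]0"}, {"[Bo,Gl,Ro,Tr]0"}),
    "MoveObject(Ro,Bo) [lift]": ({"[Bo,Gl,Ro,Tr]0"}, {"[Bo,Ro]0", "[Gl,Tr]0"}),
    "PourIn(Ro,Bo,Gl)":         ({"[Bo,Ro]0", "[Gl,Tr]0"}, {"[Bo,Ro]1", "[Gl,Tr]1"}),
    "MoveObject(Ro,Bo) [back]": ({"[Bo,Ro]1", "[Gl,Tr]1"}, {"[Bo,Gl,Ro,Tr]2"}),
    "PutDownObject(Ro,Bo,Tr)":  ({"[Bo,Gl,Ro,Tr]2"}, {"[Bo,Gl,Ro,Tr]1"}),
    "Depart(Ro)":               ({"[Bo,Gl,Ro,Tr]1"}, {"[Bo,Gl,Tr]1", "[Ro]0"}),
}

def plan(initial_marking, target_marking):
    # breadth-first search for a firing sequence that reaches the target marking
    queue, visited = deque([(frozenset(initial_marking), [])]), set()
    while queue:
        marking, sequence = queue.popleft()
        if target_marking <= marking:
            return sequence
        if marking in visited:
            continue
        visited.add(marking)
        for name, (pre, post) in transitions.items():
            if pre <= marking:                       # transition is enabled
                queue.append(((marking - pre) | post, sequence + [name]))
    return None

print(plan({"[Bo,Gl,Tr]0", "[Ro]0"}, {"[Bo,Gl,Tr]1", "[Ro]0"}))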
A detailed description of the "planning" algorithm can be found in [18]. Relating this to the description of the technical realization of the EEOPs required for the "beverage serving task", it is obvious that the operations on the AND/OR-net level are not executable by the system directly. Each operator on this level of abstraction consists of a composition of the EEOPs that are necessary for the realization of the operator's sub-task. Due to this kind of composition, operators on the AND/OR-net level are called composed operators (short: COPs).

5.2.3.3 Low-Level-Plan Generation and Execution

In order to create a sequence of EEOPs that can be executed by the system directly, the COPs of the AND/OR-net level plan have to be decomposed. For this purpose, a library of COP-related generic Petri-Nets is provided. Each generic Petri-Net describes the behaviour of the system during the execution of the COP from the sequencer's point of view (see Fig. 5.3). The decomposition of a task planning problem into different levels of abstraction is a natural approach, because even humans plan their activities in a rough manner first, before they think about the details afterwards. Besides the accompanying ergonomic advantages, the hierarchical representation of a task planning problem results in a smaller search space (in the case of Petri-Nets, a smaller reachability graph) and therefore reduces the computational cost of the task plan generation, provided that independent modules can be assumed [14]. Each generic Petri-Net possesses a list of formal parameters that represent the actuator, the objects to be manipulated or object-specific data. Within the decomposition step, the formal parameters of this list are replaced with actual parameters that represent the actual objects of the task scenario. This establishes the connection between concrete objects and the resulting EEOP-level plan. The atomic constructional elements of the COP-associated Petri-Nets are EEOPs, descriptions of facts inside the world-model and representations of system resources (e.g. sensors) required for EEOP execution. Because it is possible that the AND/OR-net level plan contains COPs that are executable in parallel, a mutually exclusive use of system resources has to be guaranteed. In the case of shared resources, all Petri-Nets related to parallel executable COPs are merged into a single Petri-Net that is planned afterwards. As depicted in Fig. 5.20, each place of the Petri-Net that represents a fact within the world-model starts with the keyword "FAC". The initial markings of these places have to be queried from the world-model or determined with the help of monitoring EEOPs. The initial markings of the remaining places have to be known in advance. They are used for net-flow-control purposes, like the initial activation of a monitoring operation. Each transition of the net represents a possible realization of an EEOP, i.e. the execution of an EEOP together with one of its possible return values. The transitions that belong to the same EEOP are gathered in groups of at
least two members. Each of these EEOP realizations within a group returns a different value, so that all possible execution results of an EEOP are taken into account within the Petri-Net model. To underline this aspect graphically, these transition groups are surrounded by a rectangle that produces the impression of a single transition¹ (short: EXOR-transition). Because the chosen way of modeling unknown behavior does not require modifications of the Petri-Net formalism, the simulation and verification of the nets is still possible by means of off-the-shelf software tools. During the modeling process, i.e. the creation of the COP-related Petri-Nets, all possible return values of the EEOPs have to be taken into consideration. The idea is that all possibilities for erroneous behavior can be implemented in advance and verified within the context of Petri-Nets with reachable target markings. In order to avoid infinite loops of automatic fault elimination steps, user commands are integrated into the Petri-Nets directly. The decomposition of a COP during the planning process will be explained by means of a simplified Petri-Net version of the COP MoveObject() (see Fig. 5.20). Due to its low structural complexity, this operator is suitable for describing the basic design concepts. As shown in the AND/OR-net depicted in Fig. 5.17, MoveObject() is responsible for the transportation of an already gripped object to a free place within the workspace of the manipulator. It is assumed that MoveObject() subsumes the EEOPs CoarseApproach(), SearchFreePos() and User(SearchFreePos). The movement part of the operator is performed with the help of CoarseApproach(). As described in the introduction, CoarseApproach() uses the stereo-camera system (SCam) for the determination of a collision-free gripper trajectory, so that a calculated target position can be reached. In the case of MoveObject(), the target position is defined as the free position in the workspace. The pre-place FreePosKnown announces the necessity of this information. If a free position is unknown to the system, the monitoring operation SearchFreePos() has to be started first. It is assumed that this operation makes use of the stereo-camera system as well. In case the monitoring operation fails, the user is involved in the search procedure. Then, he or she has the opportunity to provide the required information by controlling system parts directly, or to abort the task execution. This kind of user involvement reduces the complexity of the system because a complex deliberative process is handed over to the user. The formal parameters of the COP MoveObject() are Robot and Object. If MoveObject() represents the COP instance that connects the nodes [Bo,Gl,Ro,Tr]2, [Gl,Tr]1 and [Bo,Ro]1 within the AND/OR-Net of Fig. 5.17, the actual parameters are Ro and Bo. These parameters have already been predefined during the AND/OR-Net construction. The instantiation of the parameters determines the semantic interpretation of places and transitions within the generic Petri-Net. For instance, FAC.IsGripped( Robot, Object ) changes to FAC.IsGripped( Ro, Bo ), so that the actual, i.e. initial, marking of this place can be queried from the world-model or with the help of a monitoring EEOP.
¹ Within the class of Fuzzy-Petri-Nets, such a transition type is called a mutual-exclusive transition [16].
Fig. 5.20. Generic Petri-Net for COP MoveObject
For the generation of a sequence of EEOPs from the instantiated generic Petri-Net, the same algorithms as for the AND/OR-net can be used. The EEOP sequence generated from the instantiated Petri-Net depicted in Fig. 5.20 is:
1. SearchFreePos( FreePos.EEL².Pos ) = Known
2. CoarseApproach( Ro.EEL.Pos, FreePos.EEL.Pos ) = Success.
This plan is passed over to the execution system within the sequencer (see Fig. 5.3). After an EEOP has been processed, the execution system compares its expected return value with the actual one. In case of a difference, a new EEOP sequence has to be generated in order to react to the unpredicted behavior. For this purpose, the instantiated Petri-Net related to the COP fires the EXOR-transition with the actual return value, so that a new initial marking emerges. Starting from this initial marking, a new path leading to the target marking can be searched.
² Elementary Executable Level: addresses the geometrical data part of the world-model.
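The execute-and-replan cycle described above can be sketched as follows; search_plan, execute_eeop and fire_exor are placeholders for the planner, the sequencer's execution system and the EXOR-transition firing, respectively, and are not the actual FRIEND interfaces.

def execute_with_replanning(search_plan, execute_eeop, fire_exor,
                            initial_marking, target_marking, max_replans=10):
    marking = initial_marking
    for _ in range(max_replans):
        plan = search_plan(marking, target_marking)       # e.g. reachability search on the instantiated net
        if plan is None:
            return False                                   # no path to the target marking
        for eeop, expected_return in plan:
            actual_return = execute_eeop(eeop)             # performed by the execution system
            if actual_return != expected_return:
                marking = fire_exor(eeop, actual_return)   # new initial marking after the EXOR-transition
                break                                      # abandon the rest of the plan and replan
        else:
            return True                                    # whole plan executed as expected
    return False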
5.2.4 Demonstration-Based Programming

The task planning described above is based on the prerequisite that the task knowledge has already been implemented in the robot system. In other words, the robot system possesses a list of individual actions and a sequence of these actions as well, as already presented in Fig. 5.19. Here, the term 'action' has the same meaning as COP (composed operator). The question arises immediately how the actions and the sequence can be transferred to the robot. The effort to answer this question begins with transferring the knowledge about a single action to the robot. A common way to achieve this goal is the classical engineering method. This standard engineering method would consist of an in-depth task analysis first, followed by the modelling of the action. For this, the relationships between the parameters of the objects concerned need to be known (e.g. between the bottle and the flow). Only on the basis of an exact model of each action and sufficient knowledge of the relationships between parameters is it possible to program a robot to accomplish the action. But how can this expertise be explored and built? It is very natural to come to the idea of observing how a human being achieves this action, as has been done for the "pour in" action, because a normal human can execute a daily life action such as pouring a drink without apparent difficulty. The observation brings the benefit of exploring the skill a human being uses for the action. Here, "skill" denotes "the learned power of doing a thing competently", which a human exerts without consciousness. As an example, the heuristic observation of a human demonstrator pouring a drink from a bottle into a glass on a table yields the following facts:
1. Before the pouring begins, the bottle is gripped by the demonstrator and directed to an initial relative position to the glass. This position depends on the sizes of the glass and bottle.
2. The bottle rotates around one axis at a suitable speed until the beverage comes out; meanwhile, the distance between the bottle tip and the centre of the glass opening as well as that between the bottle bottom and the table is kept at a constant value.
3. After the first beverage flows out, the flow of the liquid is controlled to be not so large that the liquid splashes out of the glass, and not so small that the liquid drips.
4. The filled liquid level is monitored continuously to avoid the glass being overfilled.
5. At the end of pouring the bottle is moved in a special way to avoid the liquid dripping.
6. In the whole pouring process the human being applies a closed-loop structure to monitor the liquid outflow.
At this point, the above qualitative information from observation seems sufficient for programming the robot to execute a "pour in" action. One could set the reference values, like the value of the flow, by means of trial and error. This turns out to be very tedious and time-consuming. It is much easier to measure and evaluate these set
values directly from the demonstration process, for which an observation system is needed. At first view, the construction of an observation system increases the complexity and the extra cost of solving the problem. But if one takes into account that the observation system can be utilized for the observation of other daily life actions, that the results acquired are optimal because they come from human demonstration, and that parts of the observation equipment are already a constituent of the robot system, a good trade-off can be found. In the current FRIEND system, only a 6-DOF Polhemus sensor had to be added for the observation purpose. The camera system serves as part of both the robot and the observation system. Though the initial motivation for demonstration is to build an action model, or in other words, to acquire the skill for executing the action, the question unavoidably arises whether it is necessary to program the robot to execute the actions with the same sensory mechanism as that of a human being. A human demonstrator depends mainly on visual information, and it is very natural to conclude that a robot should also take the visual information from the camera system. Due to the high data flow and the complex image-processing algorithms, however, the efficiency of a visual system in robot applications is not very high. This problem can be solved if simpler sensors are applied. As an example, a scale-based tray is applied in FRIEND to obtain the flow information by differentiating the weight data. An additional advantage of simpler sensors is that a shorter cycle time can be realized, which allows a higher bandwidth of the control loop. This scale-based tray can be regarded as part of the observation system as well. Once an observation system has been set up, a quantitative analysis of the demonstration is possible. Figure 5.21 shows a demonstrated trajectory with a sampling period of 200 ms. It can be seen that the position of the bottle tip with respect to the centre of the glass opening in the x-axis direction stays within ±10 mm. In the y-axis direction, it is also within ±10 mm after the liquid starts to flow out (at this point the roll angle is about -75°). In the z-direction, it is about 130 mm at the beginning, and when the tilt angle reaches approximately 80 degrees, it stays around 33.4 mm until the end of the pouring. The roll angle changes relatively fast before the liquid flows out and more slowly after that. In contrast to the large change in the roll angle, the other angles remain within a limited range, between 0° and -12° for pitch and -4° to 17° for yaw. This trajectory reflects the intuitive pouring motion of a human being.
Fig. 5.21. Time history of a pouring trajectory. a position history, b orientation history
Figure 5.22 shows the flow history in the demonstration trials with different bottles and glasses. In this figure, three parts are indicated: the start-up, intermediate and final sections. In the start-up and the final sections the flow changes very rapidly. In the intermediate section, though the flow has a high variance as well, it is kept above a minimum level. In this part the flow depends on the filling level, and dripping is avoided by maintaining a minimum flow.
Fig. 5.22. Flow depending on filling ratio for different demonstrations
The information from the quantitative analysis above can be applied in robot programming in two ways: the first is to simply implement the acquired information in the robot system, and the second is to abstract the information into a skill strategy and then design a simple function to accomplish the skill. In the first case, as already described, the demonstrated pouring trajectory is taken as a reference for a closed-loop control of the flow to achieve the pouring action. In the second case, a composed trajectory that consists of a rotation and an artificial method of keeping the bottle tip at a constant distance from the centre of the glass opening, together with the closed-loop control, can also achieve a similar result. This is logical because the function of the pouring trajectory is to keep the relative position and orientation of the bottle tip with respect to the centre of the glass opening. There exist many trajectories which serve the same purpose; the demonstrated trajectory is only one possibility. Even a human being applies different trajectories for the same pouring action. However, some skills remain unchanged, like the constant distance between the bottle tip and the centre of the glass opening. Demonstration is needed to acquire such a skill, but it is not necessary to copy the demonstrated data themselves if a more simplified design based on the knowledge of the skills can achieve the same functionality. The same conclusion can also be drawn from the flow control. The behaviour of a human demonstrator for the flow control, as shown in Fig. 5.22, shows great variation. Nevertheless, regardless of these variations, the common skills mentioned above remain. If a flow control is implemented, it is not necessary to copy a flow history exactly. But the skill for such a flow control, with the set flow points, should be implemented via an appropriate design.
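The second way can be illustrated with a small geometric sketch: instead of replaying the recorded poses, the bottle pose is computed from the tilt angle while the tip is kept at a constant distance from the centre of the glass opening. The geometry (planar, grasp point on the bottle axis) and all numerical values are assumptions made for illustration.

import math

def composed_pour_pose(tilt_deg, tip_offset=10.0, grip_to_tip=250.0):
    # tilt_deg    - roll angle of the bottle [deg]
    # tip_offset  - constant distance of the bottle tip above the glass opening centre [mm]
    # grip_to_tip - distance from the grasp point to the bottle tip along the bottle axis [mm]
    tilt = math.radians(tilt_deg)
    tip_x, tip_z = 0.0, tip_offset                     # the tip is held over the glass opening
    grasp_x = tip_x + grip_to_tip * math.sin(tilt)     # grasp point placed along the bottle axis
    grasp_z = tip_z + grip_to_tip * math.cos(tilt)
    return grasp_x, grasp_z                            # grasp-point position relative to the glass centre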
One important fact to be emphasized is that during the demonstration phase the human being is part of a closed control loop. With the help of the sensory system (the eyes) and the controller (the brain), the human being measures and evaluates the actual values of important variables, compares the actual value with the set point and controls the movement of the body. This closed-loop control behaviour dominates the demonstration phase in structure as well as in the recorded data, and it is responsible for the great robustness of human behaviour and the ability to deal with variations of the initial values. This also explains the fact that, regardless of the different behaviour of a human demonstrator even in the same action, the action goal can still be achieved, because the closed-loop structure guarantees that the key skills needed in the action are kept. In this sense, we come to the conclusion that it is essential to implement such a closed-loop structure in robot programming as well. Until now, it has been stated that the method we are using is demonstration-based programming (DbP). It is necessary to distinguish it from the method of programming by demonstration (PbD). This can be explained from the aspects of both motivation and implementation. The purpose of DbP is to help a robot application engineer with the programming procedure. This is reflected in exploring the action model and in the analysis and acquisition of the skill for the action. The analysis and programming of the skill is left to the human being himself. PbD, on the other hand, is a concept that originated from the idea of automatic programming, which leaves robot programming to inexperienced users. Conceptually, the only inputs required to generate the control command sequences for the robot system are the description of the objects involved in the task and the high-level task specification. Thus, the key idea of PbD is to offer the user a proper programming and cooperation interface, through which the system can observe a human performing a task, understand it, and perform the task with minimal human intervention. This is further illustrated in Fig. 5.23 from the level of the programmer and the robot.
Fig. 5.23. Difference between DbP and PbD
In this context, a robot system capable of PbD is built on the following conditions:
1. A demonstration system is available.
2. The robot system has prior knowledge of the elemental operations. This is actually the database of actions with which the robot has been programmed. An action is a sensor-motor primitive that allows the robot to interact with its environment.
3. The robot has the capability to robustly execute these actions.
4. The connection between demonstration and program generation has been developed. This means that the functions for the automatic analysis of the task as well as for the sequence generation have been developed.
When considering the fact that a daily life task is actually a combination of basic elemental actions, as described in Fig. 5.19, it becomes immediately evident that a PbD system with the functionalities mentioned above is very valuable in service robot applications. In this sense, the action sequences as well as the transitions among the actions can be recognized by PbD and translated directly into a robot command sequence. This not only means that the sequence of a task can be transferred to the robot automatically, which answers the second question raised at the beginning of this section, but also opens the prospect of teaching the robot new tasks in a much easier way. In conclusion, the demonstration-based programming method DbP focuses more on programming the robot with actions like "pour in", whereas PbD is more suitable for programming a complete action sequence. Our current efforts in the FRIEND system are to equip the system with robust elemental actions like pour in. It is our future plan to extend the system to be capable of PbD.
5.3 Summary

In this chapter, the rehabilitation robotic system FRIEND has been presented. After the technical description of its hardware and software structure, the realization of a 'beverage serving' task was explained. Even though the requirements for this task are relatively restrictive, the realisation of its autonomous execution turns out to be a great technical challenge: objects involved in the task have to be detected, grasped and moved into different positions. Additionally, the pouring process itself has to be observed and controlled autonomously. Here, the basic principle of our approach is to introduce feedback structures, so that different sub-tasks of the task execution can be performed autonomously and are robust against environmental changes. Within this example, visual servoing has been used for grasping and moving objects. The automated pouring process, executed within a closed control loop, also uses information obtained via programming by demonstration in combination with weight information from a 'smart' tray. The automatic combination of these sub-tasks, realized within an overall control architecture, results in the execution of the 'beverage serving' task with minimal user interaction. The collaboration between semi-autonomous actions offered by the system and user interactions for the provision of required information (e.g. object identification) or direct control turns out to be a promising concept for the realization of assistive devices, especially within an evolutionary development process. The system shall rely on the user's cognitive capabilities whenever full autonomy leads to an unmanageable technical complexity, so that a robust functioning system can be offered right from the start of the project. Further development steps can concentrate
on the reduction of the necessary user interactions. We claim that the exploitation of this principle will offer more independence in daily life for handicapped people, especially those who are not able to control robotic systems directly. A future plan of this research is to further increase the robustness of the autonomous execution of the FRIEND system. As an example, the glass should not be restricted to being empty; the system should recognize such a situation and derive the right decision and the corresponding action. Another concern is to improve the flexibility of the system, i.e. to accomplish further tasks, such as opening a door.
References

1. Dallaway JL, Jackson RD, Timmer PHA (1995) Rehabilitation Robotics in Europe. In: IEEE Transactions on Rehabilitation Engineering 3: 33–45
2. Dario P et al. (2002) EURON Research Roadmaps 2002. Research Roadmaps of the European Robotics Research Network 2002, http://www.euron.org
3. Kawamura K, Bagchi S, Iskarous M, Bishay M (1995) Intelligent robotic systems in service of the disabled. In: IEEE Transactions on Rehabilitation Engineering 3(1): 14–21
4. 3T A User Guide. Metrica Inc. Robotics and Automation Group, NASA Johnson Space Center, Houston, TX 77058, February 13, 1996
5. Bastia J, Fedor C, Goodwin R, Simmons R (1997) Task Control Architecture – Programmers Guide to Version 8.0, Manual version: May 1997
6. Schlegel C, Wörz R (1999) The software framework SmartSoft for implementing sensorimotor systems. Proc. IROS, Kyongju, Korea, October 1999, pp 1610–1616
7. Martens C, Kim DJ, Han JS, Gräser A, Bien Z (2002) Concept for a Modified Hybrid Multi-Layer Control-Architecture for Rehabilitation Robots. Proc. Third International Workshop on Human-friendly Robotic Systems, Daejeon, South Korea, January 21–22, 2002, pp 49–54
8. Volosyak I, Gräser A (2003) Automatic object recognition using fuzzy decision system for the reha-robot FRIEND. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 76–79
9. Hager GD, Chang WC, Morse AS (1995) Robot Hand-Eye Coordination Based on Stereo Vision. In: IEEE Control Systems Magazine 15: 30–39
10. Radchenko O, Pape A, Gräser A (2003) Visual Servoing with adjustable zoom-cameras. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 51–54
11. She H, Martens C, Gräser A (2003) Application of Programming by Demonstration in the Rehabilitation Robotic System FRIEND. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 39–42
12. Hosoda K, Sakamoto K, Asada M (1995) Trajectory Generation for Obstacle Avoidance of Uncalibrated Stereo Visual Servoing without 3D Reconstruction. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, Pittsburgh, PA, USA, pp 29–34
13. Pape A, Radchenko O, Gräser A, Jang H, Bien Z (2003) Obstacle avoidance with visual control of adjustable zoom-cameras. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 294–297
14. Norvig P, Russell S (1995) Artificial Intelligence – A Modern Approach. Prentice Hall Series in Artificial Intelligence, Prentice Hall
15. Qiang Y (1997) Intelligent Planning – A Decomposition and Abstraction Based Approach. Springer Verlag
16. Cao T, Sanderson AC (1996) Intelligent Task Planning Using Fuzzy Petri Nets. World Scientific Publishing
17. Martens C, Schüttler J, Gräser A (2003) Logical Verification of AND/OR-net Structures for Task-Knowledge Representation in Service Robotics Scenarios. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 16–19
18. Martens C (2003) Generation of parallel executable control sequences for rehabilitation robotic systems on the basis of hierarchical Petri-Nets. In: Lohman B, Gräser A (eds) Methoden und Anwendungen der Automatisierungstechnik, pp 73–85
6 GIVING-A-HAND System: The Development of a Task-Specific Robot Appliance M.J. Johnson, E. Guglielmelli, G.A. Di Lauro, C. Laschi, M.C. Carrozza, and P. Dario
Abstract The rapidly changing demographics in industrialized nations create a pressing need for effective personal assistive aids that appeal to elderly and disabled users. Our goal is to design robotic aids that are not only affordable and commercially viable but also have universal appeal and benefit. To do so, we explore the creation of the robot appliance, a personal robotic aid with the ability to function within a localized assistive system in a specific environment within the home such as the kitchen. This chapter presents our concept of the robot appliance and details of one of two design studies involving our concept for task-specific, robotic aids. We discuss the GIVING-A-HAND system concept and the results of interviews with elderly and medium-to-high disabled persons that prioritized and refined requirements for the robotic appliance component of the system: a small, counter-top mobile robot, “Addams Hand” that users can remotely control to interact with common kitchen appliances to perform fetch-and-carry tasks.
6.1 Introduction

The rapidly changing demographics in industrialized nations create a pressing need for effective personal assistive aids that appeal to elderly and disabled users. In industrialized nations such as the US and Italy it is predicted that by the year 2025 about one of two persons will be over the age of 65. If we follow disability trends associated with aging, we can predict that in this new society 16% or more of these persons over the age of 65 will be living with one or more impairments that disrupt their ability to complete activities of daily living in their homes. Therefore, our goal is to design personal robotic aids that are not only affordable and commercially viable but also have universal appeal and benefit to this increasing pool of potential users. In designing personal robots to assist elderly and disabled persons in their home, two main approaches can be envisaged: the first approach is to develop the human-like personal assistant, which is a general purpose, do-it-all robot that is able to assist in multiple activities of daily living [2]; the second approach is to
develop the task-specific personal robot, which is dedicated to completing one activity of daily living [8]. The first approach can be seen in the futuristic ideas exemplified in the humanoid robot waiters in such films as the American movie Bicentennial Man, by Nicolas Kazan. One concrete example is MOVAID [3], a general-purpose, personal assistant for the home. The second approach is exemplified by Handy 1, which can assist persons with severe disability to eat, to apply make-up, or to paint [13]. Humanoid assistants would be ideal helpers because they would seamlessly integrate within our home environments and provide truly versatile assistance. The technology for individually affordable, safe, and fully acceptable humanoid assistants for elderly and disabled persons is still many years away. Until the technology becomes commonplace, the cost of humanoid personal systems will remain prohibitive with cost to benefit ratios that are too high. In the face of these challenges, we are exploring implications within the second approach. We advocate a movement away from creating robots that are multipurpose with the ability to function throughout the residential environment to creating robots that are appliance-like with the ability to function as one of a localized system of assistive aids in a specific environment within the home. The advent of smaller and cheaper microprocessors and micro-technology makes this idea of the affordable "robot appliance" even more attainable in the short term than a humanoid assistant.
6.2 Background

Our evolution toward the view of personal assistance in the home as a localized, modular system of appliance-like aids followed a sequence that closely patterns the general evolution of the field of rehabilitation robotics. The general-purpose robot assistant was first envisioned as fixed workstations such as DEVAR [5], as intelligent wheelchairs such as TIDE-OMNI, as wheelchair-mounted manipulators such as the Manus arm in the SPRINT-IMMEDIATE project and later on as mobile robots as in the URMAD project [2]. Later efforts to combine the performance of fixed workstations with the versatility of mobile robots, such as in the MOVAID project [2, 3], were positive movements toward creating more favorable cost to benefit ratios in personal home assistance. The MOVAID system was a distributed system that consisted of a mobile, semi-autonomous robot with a number of fixed workstations to which the mobile unit could physically dock. The workstations were in the kitchen and bedroom and allowed the performance of activities of daily living such as preparing a meal, pouring a glass of water, cleaning the countertop surfaces, and removing soiled linens from the bed. Despite user acceptance during validation trials with the MOVAID system, the lessons learned indicated that a complete solution for personal assistance, especially for disabled persons, must be more distributed, more reconfigurable and better able to integrate existing domotic, telematic, and consumer products.
The concepts of modularity and distribution were stressed further, up to the formulation of the idea of an integrated modular assistive system, as in the P3 Project [4, 7]. These concepts are reviewed in the following section in terms of the philosophy behind personal robotic assistance and the lessons learned during validation and simulation experiments. The accumulation of design experience and lessons has led us to explore the creation of robot appliances: low-cost, customizable, modular, and more task-focused robots. These personal robotic aids would have the ability to function within a localized system of assistive aids, which are tied to a specific environment within the home, such as the kitchen.

6.2.1 Domotic-Robotic Integrated System

The introduction of the integrated modular home system concept represents a move toward a real application and cost reduction in technological assistance in the home [7]. This concept proposed the sub-division of the personal assistant into robotic modules and off-the-shelf domotic modules that can be integrated through a domotic network to create a smart home environment. Figure 6.1 illustrates this concept. The robotic system included the technological aids (robotic arms, mobile bases, electrical wheelchairs), while the domotic network included standard domotic devices such as the home lighting system, air conditioning, and door and window control. The devices shared tasks in intuitive and cost-saving ways. The addition of the domotic components lowered costs due to the use of largely commercially available products and lowered complexity by replacing the need for the robot to perform some complex tasks in the home. For example, domotic door control devices instead of robots were used to open doors; thus, they reduced the complexity of the required robot controller. In order to better design the functionality and the modularity of the system, clinical trials were conducted with potential end users (disabled people and assistants), with existing robotic and domotic devices, in real-life scenarios, in tasks that were chosen based on the users' highest-priority tasks. Trials with these individuals helped identify the priorities for the system functionality and led to the following requirements for the robotic modules and the network: • reprogrammable (use flexible and accessible interfaces) • multifunctional (useful for various priority tasks) • modular (adaptable to user's injury level) • reconfigurable (useful in priority areas of the home such as the bathroom and the kitchen) • integrated with a domotic system • compliant controllers (ability to apply variable force to modulate interaction with the environment) • portable (reasonable weight and size).
Fig. 6.1. Integrated Modular System of Aids
6.2.2 Localized System of Appliances

The requirements detailed in the P3 project indicated that the robotic modules should be smaller, more task-specific, and within a networked system that permits quick and easy re-configuring and programming. As a result, the next evolution introduces a network of domotic, telematic, and appliance-like robot modules that function separately or as a unit within a local environment in the home. Figure 6.2 illustrates an example of this localized network in the kitchen environment consisting of standard appliances such as the microwave oven and a sample robot appliance for fetch-and-carry tasks. The concept of the appliance contains within it several features: the idea of simplicity, adequate performance, reasonable reliability, direct mapping between task and user need, and reasonable cost. The term "appliance" is applied to a device or instrument designed to perform a specific function, specifically an electric device, such as a microwave or coffeemaker, for household use. On the other hand, the term "information appliance" is applied to the emerging generation of home appliances, which specialize in information (knowledge, facts, graphics, images, video or sound) and are designed to perform a specific activity and to share information within a family of appliances [9].
Fig. 6.2. The concept of using domotic and telematic technology to network appliances in the kitchen
From these descriptions, our basic definition of the robot appliance emerges. The robot appliance is a task-specific device, i.e., it is designed to perform a specific activity or set of activities. The robot appliance goes beyond both standard and information appliances in that it must not only be specific in function and capable of receiving, storing, and sharing information but also capable of acting autonomously or semi-autonomously on the information received. There are many types of task-specific mechatronic and robotic assistive aids that already exist within the home environment. Some examples of existing robotic aids are the small mobile robots that haul around food, cooking robots such as the Bimby, and feeding robot aids such as the Neater Eater and the Handy 1 [13]. A localized modular system of aids capitalizes on home automation and domotic networks to create an integrated system where existing task-specific robotic aids and new robot appliances can be incorporated and made to communicate and interact with each other. Of course, salient to this concept is the need to design special domotic assistants to negotiate the interactions between users and the system and between users and an individual robot appliance. These innovative control interfaces must be designed to optimize the degree of interaction between the user and the product during task performance, while taking advantage of the user's capabilities [4]. We conducted design case studies of two task-restricted robot appliances for a local kitchen environment: a fetch-and-carry robot appliance (GIVING-A-HAND) and a robot appliance for eating (SELFEED). Here we describe in detail only the GIVING-A-HAND concept, the user-centered development process used, and the resulting priority requirements for the fetch-and-carry robot appliance, Addams Hand.
6.3 Design Concept for the Giving-A-Hand System

From a series of preliminary interviews conducted with highly disabled potential users with severe upper arm dysfunction, we arrived at the initial system concept. The interviewees uniformly expressed a desire to use the kitchen. Some key tasks they wanted to accomplish in the kitchen without fatigue were getting a drink independently, making a snack independently, and preparing a meal alone or participating with relatives or a caregiver in preparing a meal. From these insights, we developed a design protocol that would meet users' access needs in many kitchen tasks. A kitchen task can be divided into three major stages: the setting-up stage, the processing stage, and the enjoying stage. For the example of cooking, Fig. 6.3 shows that the setting-up stage involves cutting up the food, the processing stage involves some type of tool or appliance to do the cooking, and the enjoying stage involves eating or drinking the products of the processing stage. Non-disabled persons can navigate these stages without problems, while elderly users may need some assistance in the setting-up stage. Medium- to high-level disabled users need even more assistance in all three stages, especially the last two.
Fig. 6.3. Three stages in a typical kitchen task such as cooking
Fig. 6.4. The Giving-A-Hand system design concept
The GIVING-A-HAND design concept builds on the cooperative use of domotic, telematic, and robotic technologies to drive down the cost of assistive technology and increase access. As illustrated in Fig. 6.4, the concept assumes that human assistance is available and can be given during the setting-up stage, and proposes the use of domotic/telematic technologies in the processing stage and robotic assistance in the enjoying stage. The scope of human assistance depends on the task and the user's ability. For example, high-level disabled users would most likely need a human assistant to prepare the meal and place it on the countertop so that, when needed, the user, with the help of the technology assistants, can reheat and eat it.
6.4 Domotic/Telematic and Robotic Assistance

The concept proposes the use of domotic and telematic assistants in the form of wireless access technology and board-level Ethernet controllers to provide a local area network (LAN). The LAN connects all kitchen appliances together (i.e., those used in the food processing stage, such as the microwave) and permits them to be alternatively controlled using a portable PC-based universal remote. The system would also connect any robotic assistant into the network (Fig. 6.2). We are designing a remote control that permits users with varying levels of disability to direct high-level actions of the appliances (e.g., setting the time on the microwave) and low- to high-level actions of a mobile robot assistant (e.g., steering). Robotic assistance enables persons with severe disabilities to participate independently in the enjoying stage. One specific idea proposed for this stage is the development of a low-cost, mobile robot appliance that can interface with key appliances and assist in the completion of complex manipulation tasks, specifically those priority tasks highlighted during our preliminary interviews. The robot would be controlled using the universal remote and be programmed to recognize each appliance and perform the manipulations needed for each one.
6.5 The Fetch and Carry Robot Appliance Development

Figure 6.5 illustrates the scenario-of-use for the robotic assistant called "Addams Hand". The robot is envisioned as a small, countertop, mobile aid that is programmed and controlled by the disabled person (or any user) to take food from an appliance such as the microwave oven and deliver it to a tray or to an eating robotic assistant such as a Neater Eater [12]. It could also be programmed to fetch items from another appliance, such as a cooking robot (Bimby), and deliver them to the eating surface. The Addams Hand concept is unique in that it offers a smaller, lower-cost, and less invasive robotic solution than existing kitchen aids such as MOVAID [3] and CAPDI [1].
Fig. 6.5. A scenario-of-use depicting the robotic assistant called Addams Hand interfacing with two appliances to assist the user in the enjoying stage
Fig. 6.6. A concept scene showing the robot interacting with a microwave and using the ARTS Lab prosthetic hand [10]
The concept for the first robot prototype is shown in Fig. 6.6. With a functioning goal that is similar to the general purpose MOVAID robot, the mobile robot
appliance within the localized network compensates for mobility disabilities typical of persons with severe upper limb impairments. Unlike MOVAID, this robot appliance (in the kitchen network) is restricted to completing fetch-and-carry tasks on a countertop, i.e., moving items from one appliance to another, such as a plate or a beverage from the microwave to an eating robot. A fully functional prototype of the fetch-and-carry robot appliance (Addams Hand) is still being designed with the following main requirements:
• Low cost (< $4000)
• Small footprint (31 cm in diameter)
• Movable on a countertop
• Capacity to manipulate a variety of objects up to 10 N with a prosthetic hand
• Portable (< 7.5 kg)
• Member of a domotic network
• Ability to be controlled by a universal remote
• Ability to move autonomously or semi-autonomously
• Ability to communicate with other network appliances.
6.6 User-Centered Development

In order to identify additional requirements for Addams Hand, interviews were conducted. Subjects completed a questionnaire to assess their attitude toward technology in the kitchen and a questionnaire on the desired functional characteristics for the robot. A total of nine elderly users and four medium-to-severely disabled users participated. All four disabled participants and six of the nine elderly participants completed all aspects of the interview. The elderly participants (82 ± 9.62 years) were limited in completing manipulations required for kitchen tasks such as taking objects from the refrigerator, opening and closing bottles, cutting food, pouring a drink, or grasping objects. The disabled participants (41.25 ± 9.62 years) had physical limitations due to spinal cord injuries or multiple sclerosis. They required full assistance with preparing meals. All subjects were cognitively able to understand the questionnaires; assistance was given to any subject who had difficulty completing the questionnaire due to impairment. To assess subjects' attitudes toward technology in the kitchen, participants were shown a slide-show presentation that featured current and fictional assistive aids for the kitchen environment. These aids were characterized as low, medium, or high technological design. For each category of aids shown and described, subjects were asked to decide whether they liked or disliked the concept. If they were unsure, they were asked to mark the category "I don't know." To gather users' ideas on the functional characteristics that should be incorporated into the robot, subjects were asked to complete a questionnaire that listed eighteen items. The requirements encompassed design issues such as controllability,
safety, appearance, environment-of-use, cost, accuracy, and usability. After a discussion of the concept for the robot and its scenario-of-use, participants scored each of the items with a 9 (very important), 6 (important), 3 (less important), or 1 (not important).

Priority Functional Requirements

Only the functional requirements survey results are reported (Table 6.1). The data from the functional requirements survey were analyzed by taking the median of the scores given for each functional characteristic. Table 6.1 shows the functional characteristics ranked in order of the level of importance given to them, as captured by the median score of the responses of all participants. The results indicate that the participants considered 8 of the 18 functional requirements "very important." The elderly and disabled subjects were in agreement on only 4 of the 8 "very important" requirements. They all agreed that it was very important that the robot be easy to control, safe when interacting with the user and its environment during the handling of objects, and affordably priced. In addition, elderly subjects desired the robot to be usable safely on the kitchen countertop, while disabled users desired the robot to handle objects such as a plate (empty or filled) and a beverage glass (empty or full), as well as to respond to their demands in a timely and robust manner. Users' desire to easily control the robot translates into our need to create a user interface with high usability: effective in its ability to transmit to the robot the desired actions of the users, efficient in allowing users' commands to be interpreted accurately, and satisfying in giving the user comfort and pleasure during use. Users' concerns with affordability and safety are common themes observed in the rehabilitation robotics literature [5, 6, 11]. In the case of the Addams Hand prototype, creating a low-cost system is a priority. We aim to lower system cost by decreasing the number of degrees of freedom of the robot, by utilizing some commercially available components, by simplifying the number of tasks the robot needs to perform, and by creating a modular environment in which the robot performs tasks. We intend to use vision, proximity sensors, and a combination of supervisory and autonomous control to improve system safety on all three interaction levels. Disabled users' desire for a robot that can successfully assist in key manipulation tasks involving reaching and gripping agrees with the results reported by Stranger et al. 1994 [11]. That study reported that disabled users' task priorities were reaching, gripping, and picking up objects from shelves and floors, and preparing food and drinks, along with eating and drinking. Given the small number of participants, it was important to determine whether participants' responses were useful inputs to the design process. We assessed whether the data were consistent with larger questionnaire studies such as the study conducted under the MOVAID project [3], the user task priority review published by Stranger et al. 1994 [11], and other reviews [5, 14]. We saw that the
data derived from these interviews with potential users were for the most part consistent with the literature and thus offer valid insights into the designing of the GIVING-A-HAND system prototype.
Table 6.1. Functional characteristics ranked in terms of the median score derived from the data of all participants. The median scores for the elderly (N = 6) and disabled (N = 4) subgroups and for all participants (N = 10) are shown. Raw scores: 9 (very important), 6 (important), 3 (less important), 1 (not important)

No.   Functional requirement                                                       Elderly   Disabled   All
1     Easy to control (by all: elderly and disabled)                               9         9          9
2     Picks up objects of various sizes and weights (e.g., straw, plate, glass)    6         9          9
2a    Pick up a glass of water (empty or full)                                     7.5       9          9
2b    Pick up a plate (empty or filled with food)                                  7.5       9          9
3     Delivers objects safely (user interaction)                                   9         9          9
4     Delivers objects intact (not broken or spilled)                              9         9          9
5     Usable on the kitchen countertop (safely)                                    9         7.5        9
6     Affordable (low cost)                                                        9         9          9
7     Interacts with distinct appliances (e.g., Bimby, microwave, etc.)            7.5       9          9
8     Available on demand and usable for a specific period                         7.5       9          9
9     Manages unexpected situations                                                6         9          7.5
2c    Pick up a straw                                                              6         8          6.5
10    Completes task in reasonable time                                            6         9          6
11    Easy to transport (move from one location to another)                        6         6          6
12    Quick device set-up (turn on quickly and easily)                             4.5       6          6
13    Gets objects from tabletop and appliances                                    3         7.5        6
14    Moves on floor or tabletop                                                   6         7.5        6
15    Recognizes and stops at appliances                                           6         6          6
16    Delivers objects to accurate place                                           6         6.5        6
17    Allows user to see objects being grasped                                     6         7.5        6
18    Design makes the product attractive                                          3         3          3
In summary, the results indicated that eight of the 18 requirements will be given priority in our design. Of these 8, the 4 requirements shared by the two user groups are given the highest priority. The 3 (of 8) requirements cited only by disabled users will also be included so as to satisfy their special needs. By continuing to query users throughout the design process, we will further define each requirement and assess acceptable levels of implementation for each.
6.7 Prototype of a Local Network with the Robot Appliance

A functional prototype of a networked system of two standard appliances and a prototype of a robot appliance-like aid for fetch-and-carry tasks have been developed as an early prototype of a modular integrated assistive system under the GIVING-A-HAND project. Figure 6.7 illustrates the domotic assistant and the local network of appliances, which consists of two standard appliances and the simple prototype of the fetch-and-carry robot appliance. The appliances are a microwave oven and a multifunctional device for preparing and cooking food. This network was primarily developed to enable the user to control these three appliances through a domotic assistant adapted to their use. Figure 6.8 illustrates the resulting prototype of the robot appliance and the next-generation prototype. The prototype consisted of a modified mobile base from Parallax and a Robot Oz 4-DoF arm. The mobile base was equipped with infrared sensors to avoid obstacles and a simple line-tracking sensor to permit structured and safe movement on the countertop. A distributed control architecture, consisting of two dedicated PIC microcontrollers (Basic Stamp boards from Parallax) that receive and send control and information signals to the mobile base and arm, implements cheaper behavior-based protocols to reduce system control cost and overhead. A lack of standard protocols for accessing and modifying the functions on appliances necessitated the creation of special control interfaces. Control of the Bimby and the microwave via the domotic assistant was facilitated by dedicated control boards and specially designed interface boards. The control boards permit the appliances to be directly connected to a LAN and their control panels to be driven via digital outputs. The interface cards converted the current and voltage of the appliances to the appropriate TTL signals. An applet on the domotic assistant communicates with the control card through a socket, using the IP address of the card and a predefined port. The applet permits the user to change the state of the outputs of the card. A wireless connection and control protocol enabled the robot to move freely on the countertop. A serial onboard wireless LAN bridge (Symbol CB1000) transferred the data coming from a serial RS232 connection over a wireless TCP/IP network. A wireless access point (Buffalo AirConnect) connects the bridge to the network. The lack of a standard protocol for sharing information between appliances emerged as a possible cause of integration difficulties. Special communication protocols were developed to connect our appliances.
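As an illustration of the appliance-control path just described, the short sketch below shows how a domotic-assistant applet might switch one digital output of a network-connected control card over a TCP socket. The IP address, port, and ASCII command format are assumptions made for the example; they are not the protocol actually used in the GIVING-A-HAND prototype.

```python
import socket

CARD_IP = "192.168.0.50"     # assumed address of the appliance interface card
CARD_PORT = 5000             # assumed predefined port of the card

def set_output(output_id: int, state: bool) -> str:
    """Open a socket to the control card and change the state of one output."""
    command = f"OUT {output_id} {'ON' if state else 'OFF'}\n"   # assumed command format
    with socket.create_connection((CARD_IP, CARD_PORT), timeout=2.0) as sock:
        sock.sendall(command.encode("ascii"))
        reply = sock.recv(64)            # e.g. an acknowledgement string from the card
    return reply.decode("ascii", errors="replace").strip()

if __name__ == "__main__":
    # e.g. press the "start" button of the microwave, assumed to be wired to output 3
    print(set_output(3, True))
```

In a full system, each appliance's control-panel functions would be mapped to outputs in this way, and the universal remote would issue the same kind of commands on behalf of the user.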
Fig. 6.7. Local network prototype in the kitchen
Fig. 6.8. A simple functional prototype of Addams Hand: a fetch-and-carry robot appliance for the kitchen
The prototype of the fetch-and-carry robot appliance gave insights into the mechanical, control, and cost issues involved in the design. In developing a lower-cost and more task-specific robot, we learned that we must often sacrifice performance, precision, and flexibility. In developing more functional robot appliances, we must reduce technical complexity and cost, balance residual user ability against machine controllability, and increase performance while minimizing invasiveness in the use environment.
6.8 Summary and Conclusions

Our goal is to design personal robotic aids that are not only affordable and commercially viable but also have universal appeal and benefit for the increasing pool of potential disabled and elderly users. We proposed the development of robot appliances that are task-specific and part of a network where they can interact with other information and robotic appliances. We discussed the details of the GIVING-A-HAND system. Work continues on the design and development of both the feeding robot appliance and the fetch-and-carry robot appliance, as well as on the implementation of the local network. Implementation of this network of aids permits us to examine the challenges inherent in integrating and controlling both robot appliances and standard appliances within one domotic network. A functional prototype of a domotic network, consisting of a prototype robotic appliance for fetch-and-carry tasks and standard appliances for the kitchen environment, revealed that the lack of standard protocols for sharing information and for modifying on-board functions was the main barrier to implementation of our proposed system of aids. The next steps of the work will be the implementation and clinical validation of two different prototypes of a novel robotic feeding appliance and the realization of a second-generation prototype of the assistive system with the fetch-and-carry robot.
Acknowledgments

The work was supported by a National Science Foundation–NATO Postdoctoral Fellowship (DGE-0107998) and the core funds of the INAIL RTR Centre and the ARTS Lab of the Scuola Superiore Sant'Anna.
References

1. Casals A, Merchan R, Portell E (1999) CAPDI: A robotized kitchen for the disabled and elderly. In: Bühler C, Knops H (eds) Assistive Technology on the Threshold of the New Millennium. IOS Press
2. Dario P, Guglielmelli E, Allotta B (1996) Robotics in medicine. IEEE Robotics and Automation Society Magazine 3(3): 739–752
3. Dario P, Guglielmelli E, Laschi C, Teti G (1999) MOVAID: A personal robot in everyday life of disabled and elderly people. Technology and Disability 10(2): 77–93
4. Guglielmelli E, Dario P, Laschi C, Fontanelli R, Susani M, Verbeeck P, Gabus J (1996) Humans and technologies at home: from friendly appliances to robot interfaces. IEEE Int'l Workshop on Robot and Human Communication, pp 71–79
5. Hammel J, Hall K, Lees D, Leifer L, Van der Loos HFM, Perkash I, Crigler R (1995) Clinical evaluation of a desktop robotic assistant. Journal of Rehabilitation Research & Development 26: 1–16
6. Harwin WS, Rahman T, Foulds RA (1995) A review of design issues in rehabilitation robotics with reference to North American research. IEEE Transactions on Rehabilitation Engineering 3(1): 3–13
7. Laschi C, Guglielmelli E, Teti G, Dario P (1999) A modular approach to rehabilitation robotics. 2nd EUREL Workshop on Medical Robotics, Pisa, Italy, pp 85–89
8. Mahoney R (1997) Robotic products for rehabilitation: status and strategy. Proc. of the Int. Conf. on Rehabilitation Robotics (ICORR), Bath, UK, pp 12–22
9. Norman DA (1998) The Invisible Computer: Why good products can fail, the personal computer is complex, and information appliances are the solution. MIT Press, Cambridge, Massachusetts
10. Sebastiani F, Suppo C (2000) Analisi e sviluppo di meccanismi sotto-azionati per una protesi di arto superiore. Scuola Superiore Sant'Anna, Pisa, Italy
11. Stranger CA, Anglin C, Harwin WS, et al (1994) Devices for assisting manipulation: a summary of user task priorities. IEEE Transactions on Rehabilitation Engineering 2(4): 256–265
12. The Neater Eater (http://www.michaeli.u-net.com/main.htm)
13. Topping M, Smith J (1999) The development of Handy 1: A robotic system to assist the severely disabled. Proceedings of the International Conference on Rehabilitation Robotics (ICORR 99), San Francisco, CA, July, pp 244–249
14. Van der Loos HFM (1995) VA/Stanford rehabilitation robotics research and development program: lessons learned in the application of robotics technology to the field of rehabilitation. IEEE Transactions on Rehabilitation Engineering 3(1): 46–55
7 Cooperative Welfare Robot System Using Hand Gesture Instructions

Noriyuki Kawarazaki, Ichiro Hoya, Kazue Nishihara, and Tadashi Yoshidome
Abstract

This paper describes a cooperative work system in which manipulators and humans interact through hand gesture instructions. The goal of our system is that the manipulator can work with humans in the same working space according to the instructions of hand gestures. Our cooperative welfare robot system is composed of the manipulator, a PC, and trinocular stereovision hardware. Since the system has to recognize the position and posture of the hand in the three-dimensional workspace, we use the trinocular stereovision hardware. The three-dimensional positions of the hand and object are obtained through range images. The gesture is recognized based on the length and width of the hand. We propose a new method in which the hand area is divided into two blocks in order to recognize the hand gesture rapidly. Experimental results show the effectiveness of our system.
7.1 Introduction

The development of robots has been significant in production environments such as factories. Expectations are high for intelligent robot systems that work cooperatively with human beings in daily life and in medical treatment and welfare (Fig. 7.1). Smooth interfacing between human beings and robots is essential if robots are to be operated by people in daily life. Anyone can operate a robot with ease by giving instructions through gestures, just as people communicate with gestures. Since the motion of a human hand expresses intentions naturally and viscerally, information on tasks, human intentions, and instructions, among others, can be obtained by observing the motion of the human hand. This interaction has been the subject of extensive research in recent years. An intelligent manipulator system using tracking vision has been developed [1]. A control algorithm for a service robot performing a hand-over task was proposed [2], and a new user interface called "Active Interface" for interacting with human beings has been presented [3]. Human actions are utilized in human-robot interaction [4]. A method for recognizing head/hand gestures from a sequence of range images was presented, and algorithms for real-time visual recognition of human action sequences were developed [5, 6]. Other related work includes a human
gesture recognition method using pattern space trajectories [7], the proposal of "interactive sensing" to let a robot find a human in a complex background [8], and a method that uses multiple color extraction, stereo tracking, and template matching to recognize human pointing actions [9]. Various types of image processors for robots have also been proposed [10]. In particular, a high-performance robot vision system implemented as a transputer-based vision system has been developed [11, 12]. This paper presents a cooperative welfare robot system using hand gesture instructions. The goal of our system is that the manipulator can work with humans in the same workspace according to the instructions of hand gestures. Since the system has to recognize the position and posture of the hand in the three-dimensional workspace, we use trinocular stereovision hardware. The three-dimensional positions of the hand and object are obtained through range images. Since recognizing the hand gesture by pattern matching of the whole hand is very time-consuming, we use the characteristic dimensions of the hand. Moreover, we propose a new method in which the hand area is divided into two blocks in order to recognize the hand gesture rapidly. This paper is organized as follows. The concept of our cooperative robot system is provided in Sect. 7.2. The measurement of distance using stereo images is presented in Sect. 7.3. The detection of the hand and the target object is described in Sect. 7.4, and recognition of the hand gesture is presented in Sect. 7.5. Several experimental results are discussed in Sect. 7.6. Conclusions are provided in Sect. 7.7.
Fig. 7.1. An intelligent robot system
7.2 Cooperative Robot System

Our cooperative welfare robot system is shown in Fig. 7.2. This system is composed of a manipulator, trinocular stereovision hardware, and a PC. The manipulator used here has six degrees of freedom of motion and a mechanical hand. Since the system has to recognize the position and posture of the hand in real time, we use the trinocular stereovision hardware. The three-dimensional positions of the hand and object are calculated through range images obtained from the trinocular stereovision hardware. The goal of our system is that the manipulator works with the human in the same working space according to the instructions of hand gestures. In our system, the operator gives hand gesture instructions to the manipulator conversationally. For example, when the operator points with the forefinger at an object, the manipulator picks up the object and hands it over to the operator.
Fig. 7.2. Cooperative Robot System
7.3 Measurement of Distance Using Stereo Images

The trinocular stereovision hardware consists of a three-camera module. The camera module simultaneously obtains three images of the scene (Fig. 7.3), from which the system is able to determine the distance to an object in the scene. The system calculates the amount of shift between the left image and the right image using the Sum of Absolute Differences (SAD) correlation method. The horizontal disparity d_h and the three-dimensional position (X_h, Y_h, Z_h) of the target pixel based on the horizontal baseline are calculated as follows:

d_h = x_l - x_r        (7.1)

X_h = b_h (x_l + x_r) / (2 d_h)        (7.2)

Y_h = b_h (y_l + y_r) / (2 d_h)        (7.3)

Z_h = b_h f / d_h        (7.4)

where f is the focal length, b_h the horizontal baseline, (x_l, y_l) the position of the target pixel in the left image, and (x_r, y_r) the position of the target pixel in the right image. The three-dimensional position (X_v, Y_v, Z_v) of the target pixel based on the vertical baseline is calculated according to the same procedure, using the top image
and the right image. The actual three-dimensional position (X, Y, Z) of the target pixel is then selected from (X_h, Y_h, Z_h) and (X_v, Y_v, Z_v) in view of the correlation of the images.
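The following minimal sketch implements Eqs. (7.1)–(7.4) directly; the function and variable names are ours, and the selection between the horizontal- and vertical-baseline estimates is reduced to a simple comparison of correlation (SAD) scores, which is one plausible reading of "in view of the correlation of the images".

```python
def triangulate_horizontal(xl, yl, xr, yr, f, bh):
    """3-D position of a pixel from the horizontal stereo pair, Eqs. (7.1)-(7.4).

    (xl, yl), (xr, yr): coordinates of the same point in the left and right
    images; f: focal length; bh: horizontal baseline (consistent units).
    """
    dh = xl - xr                       # horizontal disparity, Eq. (7.1)
    if dh == 0:
        raise ValueError("zero disparity: point is at infinity")
    Xh = bh * (xl + xr) / (2.0 * dh)   # Eq. (7.2)
    Yh = bh * (yl + yr) / (2.0 * dh)   # Eq. (7.3)
    Zh = bh * f / dh                   # Eq. (7.4)
    return Xh, Yh, Zh

def select_position(pos_h, sad_h, pos_v, sad_v):
    """Keep the estimate whose stereo match is better (assumption: a lower
    SAD value means a better correlation)."""
    return pos_h if sad_h <= sad_v else pos_v
```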
Fig. 7.3. Geometric relationship between three images
7.4 Detection of the Hand and the Target Object

7.4.1 Detection of the Hand Area Using the Color Image

First, the system has to detect the hand area in the image of the workspace. The hand area is detected based on the RGB pixel values of flesh tint in the color image. The color image is digitized as 24-bit RGB pixel values, so that each element of RGB (red, green, and blue) has 8 bits, or 256 levels of brightness [13, 14]. We examined the RGB pixel values of the flesh tint of several people. In order to detect the hand area in the color image, we define the "RGB range" as follows:

R: 96 – 177        (7.5)

G: 53 – 118        (7.6)

B: 49 – 109        (7.7)

|R − G| / |G − B| = 5.5 – 10.8        (7.8)
The RGB values are apt to be influenced by the lighting. Therefore, we use the hue of the flesh tint in order to reduce the influence of the light. The transformation from RGB values to hue is

H = tan^{-1} [ (0.7 R − 0.59 G − 0.11 B) / (−0.3 R − 0.59 G + 0.89 B) ]        (7.9)

We obtained the hue of flesh tint through experiment and define the "hue range" as

H: 105 – 135        (7.10)
The area of flesh tint is detected roughly in the color image using the "hue range", and the noise is removed using the "RGB range".

7.4.2 Tracking of the Hand Using the CP

After the hand area is detected using the "RGB range" and the "hue range" of the color image, we determine the center position of the hand, called the CP, in order to trace the hand. The position of each pixel of flesh tint is obtained based on the stereo images. Since the size of a human fist is approximately that of a sphere with radius 40 mm, the system searches for the center of the sphere with the maximum density of flesh-tint pixels (Fig. 7.4). The center of this sphere is regarded as the CP of the hand. As shown in Fig. 7.5, the area of flesh tint of the hand is obtained based on the "RGB range" and the "hue range", and the hand is then localized based on the CP. Once the CP is detected, the hand is traced by tracking the CP.
Fig. 7.4. Pixels of flesh tint
Fig. 7.5. Detection of the hand
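A rough sketch of the detection and tracking steps of Sects. 7.4.1 and 7.4.2 is given below. It assumes an 8-bit RGB image and a registered per-pixel 3-D position map, treats the hue of Eq. (7.9) and the range (7.10) as degrees, and replaces the sphere search with a brute-force scan over sub-sampled candidate centers; the array names, the positions being expressed in metres, and the NumPy implementation are illustrative choices, not the authors' code.

```python
import numpy as np

def flesh_mask(rgb):
    """Rough skin mask from the hue range, refined by the RGB range."""
    R, G, B = [rgb[..., i].astype(float) for i in range(3)]
    hue = np.degrees(np.arctan2(0.7 * R - 0.59 * G - 0.11 * B,
                                -0.3 * R - 0.59 * G + 0.89 * B))   # Eq. (7.9)
    rough = (hue >= 105) & (hue <= 135)                            # Eq. (7.10)
    ratio = np.abs(R - G) / np.maximum(np.abs(G - B), 1e-6)
    rgb_ok = ((R >= 96) & (R <= 177) & (G >= 53) & (G <= 118) &
              (B >= 49) & (B <= 109) & (ratio >= 5.5) & (ratio <= 10.8))
    return rough & rgb_ok                                          # Eqs. (7.5)-(7.8)

def find_cp(points, mask, radius=0.040):
    """Centre position (CP) of the hand: the flesh-tint point whose 40 mm
    neighbourhood contains the most flesh-tint points.

    points: (H, W, 3) array of 3-D positions in metres from the range image;
    mask: boolean output of flesh_mask.
    """
    pts = points[mask]                             # (N, 3) flesh-tint positions
    best, best_count = None, -1
    for c in pts[:: max(1, len(pts) // 500)]:      # sub-sample candidate centres
        count = int(np.sum(np.linalg.norm(pts - c, axis=1) < radius))
        if count > best_count:
            best, best_count = c, count
    return best
```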
7.4.3 Detection of the Object Using the Gesture Instruction

It is assumed that objects are put on the table and that the height of the table is known. The system has to detect the object on the table when the operator points at it with the forefinger. The farthest point from the CP in the hand area is called the FP; it corresponds to the tip of the forefinger. After the detection of the FP, we define the pointing vector V directed from the CP to the FP. The pointing vector V intersects the surface of the table at the object position OP. The OP is calculated from the geometric relationship in Fig. 7.6, and the equations for the OP (X, Y, Z) are given below. Because the object lies around the OP on the table, the system searches around the OP in order to detect the object.

X = C_X − (C_X − F_X)(Y − C_y) / (F_y − C_y)        (7.11)

Y = C_h − T_h        (7.12)

Z = C_Z + (C_X − F_X)(Y − C_y)(F_Z − C_Z) / [(F_y − C_y)(C_X − F_X)]        (7.13)

where CP = (C_X, C_y, C_Z), FP = (F_X, F_y, F_Z), C_h is the height of the camera from the floor, and T_h is the height of the table from the floor.

Fig. 7.6. Geometric relationship between CP, FP and OP
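A small sketch of the OP computation is shown below. It follows Eqs. (7.11)–(7.13); note that the factor (C_X − F_X) appears in both the numerator and the denominator of Eq. (7.13) as printed and therefore cancels, which is how it is implemented here. The function name and argument conventions are assumptions.

```python
def object_position(cp, fp, camera_height, table_height):
    """Intersection of the pointing vector (CP -> FP) with the table plane.

    cp and fp are (X, Y, Z) triples in the camera frame, with the Y axis
    oriented as in Fig. 7.6; camera_height and table_height are measured
    from the floor, as C_h and T_h above.
    """
    cx, cy, cz = cp
    fx, fy, fz = fp
    y = camera_height - table_height                 # Eq. (7.12)
    x = cx - (cx - fx) * (y - cy) / (fy - cy)        # Eq. (7.11)
    z = cz + (y - cy) * (fz - cz) / (fy - cy)        # Eq. (7.13) after cancellation
    return x, y, z
```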
7.5 Recognition of the Hand Gesture

As shown in Fig. 7.7, we define several instructions using hand configurations. We make the manipulator move in accordance with the instructions of hand gestures. For example, when the operator opens the hand upward (Inst. 2), the manipulator hands the object to the operator.
Fig. 7.7. Instructions of hand gestures: Inst. 1 Grasp, Inst. 2 Deliver the object, Inst. 3 Approach, Inst. 4 Stand by
Since it is very time-consuming to recognize the hand gesture by pattern matching of the whole hand configuration, we use the characteristic dimensions of the hand. In order to recognize the hand configuration rapidly, we divide the hand area into two blocks: the hand block and the finger block (Fig. 7.8).

Fig. 7.8. Block division of the hand area
The finger block is defined as the flesh-tint area that is more than 60 mm distant from the CP of the hand. As shown in Fig. 7.8, we define three characteristic dimensions (A, B, and C) of the hand in order to recognize the hand gesture rapidly. As shown in Fig. 7.9, hand gestures are classified by branching on these dimensions. The length A is the distance from the CP to the FP. If A is less than 60 mm, we consider that the operator closes the hand, and the hand gesture means instruction 1. If A is more than 60 mm, we calculate the length B, which is the maximum width of the hand block. If B is less than 60 mm, we consider that the operator opens the hand upward, and the hand gesture means instruction 2. If B is more than 60 mm, we calculate the length C, which is the maximum width of the finger block. If C is less than 30 mm,
we consider that the operator points at the object with the forefinger, and the hand gesture means instruction 3. Otherwise, we consider that the hand gesture means instruction 4. As the width of the human forefinger is less than 30 mm, we define the threshold at 30 mm in order to distinguish instruction 3 from instruction 4. Because we use the three characteristic dimensions rather than the whole hand configuration, the hand gesture is determined rapidly.
Fig. 7.9. General flow of the hand gesture recognition
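The decision tree of Fig. 7.9 can be written compactly; the sketch below mirrors the thresholds given above (60 mm on A and B, 30 mm on C). The function name and return convention (the instruction number) are illustrative.

```python
def classify_gesture(a_mm, b_mm, c_mm):
    """Gesture classification over the three characteristic dimensions:
    A = CP-to-FP distance, B = maximum width of the hand block,
    C = maximum width of the finger block (all in millimetres)."""
    if a_mm < 60:
        return 1   # closed hand               -> Instruction 1: grasp
    if b_mm < 60:
        return 2   # open hand, upward         -> Instruction 2: deliver the object
    if c_mm < 30:
        return 3   # pointing with forefinger  -> Instruction 3: approach
    return 4       # any other configuration   -> Instruction 4: stand by
```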
7.6 Experimental Results

We conducted several experiments in order to clarify the effectiveness of our system. The sequence of hand gesture instructions is shown in Fig. 7.10. In our system, the operator gives gesture instructions to the manipulator conversationally. The manipulator waits for a gesture instruction and then acts on it. The experimental results are shown in Fig. 7.11. The manipulator waits for the instruction of the hand gesture at the initial position (Fig. 7.11a). After the operator points at the target object with the forefinger, the manipulator moves the mechanical hand over the object (Fig. 7.11b). As shown in Fig. 7.11c, the manipulator grasps the object according to instruction 1. Figure 7.11d shows that the manipulator picks the object up and hands it over to the operator. The manipulator can move according to the instructions of hand gestures in real time.
Fig. 7.10. Sequence of hand gesture instruction
Fig. 7.11. Experimental results from the gesture control of the robot. a Initial position (instruction 4), b the operator points at the target object with the forefinger (instruction 3) and the robotic gripper moves over the object, c the posture of the operator's hand corresponds to instruction 1 and the robotic gripper grasps the object, d the manipulator delivers the object to the operator's hand, whose configuration corresponds to instruction 2
7.7 Conclusions

In this paper, we proposed a cooperative welfare robot system using hand gesture instructions. In our system, the hand gesture is recognized and the manipulator works based on it. The hand area is detected accurately based on the "RGB range" and "hue range" of the color image. In order to recognize the hand gesture, we use the characteristic dimensions of the hand, and we proposed a method in which the hand area is divided into two blocks so that the hand gesture can be recognized rapidly. In our system, the operator gives gesture instructions to the manipulator conversationally. The effectiveness of our system was demonstrated by several experimental results. In future work, we will define more kinds of gesture instructions for the practical application of our system.
References

1. Kashiwagi N, Kawarazaki N, Nishihara K (1998) Manipulator work system using vision. Proc. of the IEEE Int. Workshop on Robot and Human Communication, pp 251–255
2. Agah A, Tanie K (1997) Human interaction with a service robot: mobile-manipulator handing over an object to a human. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 575–580
3. Yamasaki N, Anzai Y (1995) Active interface for human-robot interaction. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 3103–3109
4. Pavlovic VI, Sharma R, Huang TS (1997) Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(7): 677–695
5. Kuniyoshi Y, Inaba M, Inoue H (1992) Seeing, understanding and doing human task. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 2–9
6. Wren CR, Azarbayejani A, Darrell T, Pentland AP (1997) Pfinder: real-time tracking of the human body. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(7): 780–785
7. Nagaya S, Seki S, Oka R (1996) Pattern space trajectory for gesture spotting recognition. Proc. of the 3rd Japan-France Congress on Mechatronics, pp 208–211
8. Inamura T, Inaba M, Inoue H (1998) Finding human based on the interactive sensing. Proc. of Intelligent Autonomous Systems, pp 86–92
9. Mori T, Yokokawa T, Sato T (1998) Recognition of human pointing action based on color extraction and stereo tracking. Proc. of Intelligent Autonomous Systems, pp 93–100
10. Moribe H, Nakano M, Kuno T, Hasegawa J (1987) Image preprocessor of model-based vision system for assembly robots. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 366–371
11. Inoue H, Tachikawa T, Inaba M (1992) Robot vision system with a correlation chip for real-time tracking, optical flow and depth map generation. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 1621–1626
12. Morita T (1999) Tracking vision system for real-time motion analysis. Advanced Robotics 12(6): 609–617
13. Sonka M, Hlavac V, Boyle R (1999) Image processing, analysis, and machine vision. International Thomson Publishing
14. Russ JC (1999) The image processing handbook. CRC Press, in cooperation with IEEE Press
8 Selectable Operating Interfaces of the Meal-Assistance Device "My Spoon"

Ryoji Soyama, Sumio Ishii, and Azuma Fukase
8.1 Introduction

Eating is a basic need for humans. In fact, it is so basic that most people are not aware of the motions that they use while eating. For some physically disabled people, eating can be laborious or a bit embarrassing since they must rely on a caretaker to feed them. "My Spoon" is a meal-assistance device designed to assist these people during meals. Its basic concept is to allow a user to "eat a meal freely, at any pace and choosing favorite foods in any desired order." By using My Spoon, it is now possible to eat together with friends and family. After several trials and prototypes [3, 4, 5], My Spoon has been marketed in Japan since May 2002.
8.2 Meal-Assistance Device "My Spoon"

As shown in Fig. 8.1, My Spoon comprises an operating interface adjustable to the user's condition, a 5-DOF manipulator arm, a 1-DOF end-effector (spoon and fork), and a dedicated meal tray sectioned into four compartments [1, 2]. The small size of the robot system reflects the assumed domestic usage. The robot body weighs approximately 6 kg and measures 370 (L) × 280 (W) × 270 (H) mm. Power consumption is approximately 30 W. The user uses a body part that can be moved relatively freely and selects a control interface that is suitable for that part. Through the interface, the user can move the manipulator to any desired position. When in position, the spoon and fork will grasp the food and relay it to the mouth of the user. Anticipated users are those who cannot move their hands freely and need help while eating. Current users include those who have spinal cord injuries or muscular dystrophy, but can
• move their head freely and take in food brought to the mouth
• swallow normally
• understand how to operate the device
Fig. 8.1. Meal-Assistance Device “My Spoon”
8.3 Operating Interface

Different users have different physical disabilities. The parts of the body with which the user can maneuver to control the machine can vary greatly, as can the ability to understand the operation of the machine. Moreover, some users may become tired if frequent manipulations are required, while others may become weary of eating if an extended amount of time is required for a meal. Therefore the operating interface must be versatile given these conditions. In short, it is necessary to reduce the number of control operations while shortening the time required to eat. Although reducing the number of control operations will deprive the user of some eating options, we believe that we have made the interface sufficiently flexible to target the vast majority of potential users. As a result of the field trials, we have adopted a chin-controlled joystick as the standard operating interface, with an optional reinforced joystick or push-button as a replacement for the chin-controlled joystick (Fig. 8.2).
8.4 Basic Operation

After initial setup, two types of commands, compartment selection and position adjustment, are necessary to operate My Spoon. For general use, a user must be able to understand both of these commands. A user will first select the desired compartment, and then adjust the position of the manipulator within the compartment.
Fig. 8.2. Some examples of robot interface. a Standard chin-controlled joystick, b Optional reinforced joystick, c Optional push button
8.4.1 Setup

Initial setup is important to ensure safe operation. This also enables My Spoon to accommodate different users with different physical disabilities. Setup entails
• Registering the "home" position of the manipulator in front of the mouth of the user.
• Adjusting the joystick or push-button sensitivity to one of three preset values.
• Selecting a spoon and fork from the two available sizes (Fig. 8.3). These exclusive attachments are designed for quick and easy interchange.
• Adjusting the placement of the cup stand (Fig. 8.4) for easy access.
Fig. 8.3. Variable-size spoon and fork sets
Fig. 8.4. Cup stand
8.4.2 Compartment Selection Command Set

The included meal tray is partitioned into four rectangular compartments. The compartment selection command selects the compartment that contains the item that the user would like to eat. Pushing forward selects the upper-left compartment; pushing right, the upper-right; pulling back, the lower-right; and pushing left, the lower-left compartment. After the compartment is selected, the manipulator automatically moves to the left-most edge of the compartment while rotating so that the spoon and the fork are positioned perpendicularly to the tray. The manipulator then starts to descend into the tray (Fig. 8.5).

8.4.3 Position Adjustment Command Set

This command set becomes effective automatically after the compartment selection mode finishes. The position of the spoon and the fork can now be finely adjusted by manually moving the joystick in the desired direction. However, an input in the left direction signifies the end of adjustment, and the fork will slide down and grasp the food (Fig. 8.6). After grasping the food, the manipulator ascends, rotates to a position parallel with the tray, and moves back toward the home position.
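As a small illustration of the compartment selection command set, the sketch below maps the four joystick directions to the four tray compartments; the direction strings, dictionary, and function name are illustrative and are not taken from the My Spoon firmware.

```python
COMPARTMENT_BY_DIRECTION = {
    "forward": "upper-left",
    "right":   "upper-right",
    "back":    "lower-right",
    "left":    "lower-left",
}

def select_compartment(direction: str) -> str:
    """Interpret a joystick direction as a compartment selection command."""
    try:
        return COMPARTMENT_BY_DIRECTION[direction]
    except KeyError:
        raise ValueError(f"unknown joystick direction: {direction!r}")

# Note that during the position adjustment command set the same "left" input
# instead signals the end of adjustment and triggers the grasp, so the meaning
# of a joystick direction depends on which command set is currently active.
```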
Fig. 8.5. Compartment selection sequence. a Compartment selection using joystick, b Manipulator movement with rotation, c Movement toward selected compartment, d Final position
Fig. 8.6. Position adjustment and grasping sequence. a Manipulator position adjustment, b Fork descends, c Grasp food; d Manipulator return (with food)
8.5 Control Modes

This basic operation allows a user to eat using only a few joystick inputs. Some users may find it troublesome to repeat the same operations every time they eat. Mentally impaired users may find it difficult to understand the operation of the system, and others may find it difficult to finely control the arm due to other disabilities. Therefore there are two other, simpler control modes, each requiring less user input, for a total of three control modes. The first mode allows full user control of timing, compartment, and food selection. The second allows full control of timing and compartment selection. The third mode allows control of timing only. In each mode, the user can freely select when to eat, although there will be a restriction on food selection.

8.5.1 Manual Mode

This is the default operating mode, requiring both compartment selection and position adjustment.
1. One joystick operation is required to select the tray compartment.
2. Several operations are required to move the spoon and fork to the desired position. One additional operation is necessary to grasp the food.
3. The manipulator automatically ascends, rotates to a position parallel with the tray, and moves towards the mouth.
This mode provides the most flexibility in eating. However, due to the number of operations required, the push-button cannot be used as an operating interface for this mode.

8.5.2 Semi-automatic Mode

In this mode, a tray compartment is selected by controlling the joystick as in manual mode. However, position adjustment within the selected compartment is not performed, and the foods are grasped one after another in a predetermined sequence (Fig. 8.7). This mode of operation is suitable for users with a relatively low level of manual dexterity. However, due to the number of operations required, a joystick must be used as the operating interface.
1. One joystick operation is required to select the tray compartment.
2. Foods are grasped in a predetermined sequence.
3. The manipulator automatically ascends, rotates to a position parallel with the tray, and moves towards the mouth.
Fig. 8.7. Predetermined sequence and position for food selection within a compartment
Fig. 8.8. Predetermined sequence for compartment selection
8.5.3 Automatic Mode

In this mode, the user can only specify when to pick up food. As in semi-automatic mode, foods within a compartment are selected automatically. However, compartment selection is not available; compartments are selected in a predetermined sequence (Fig. 8.8). A user operation consists only of a single joystick movement in any direction or a push of an optional button (Fig. 8.2). The user cannot select the desired food.
1. One joystick or push-button operation is required to initiate motion.
2. The manipulator moves to a compartment in a predetermined sequence.
3. Foods in the compartment are grasped in a predetermined sequence.
4. The manipulator automatically ascends, rotates to a position parallel with the tray, and moves towards the mouth.
In automatic mode and semi-automatic mode, the manipulator may return to the user without any food since it moves in a predetermined sequence. Therefore, in these modes, the spoon will "glide" slightly above the tray surface for a few centimeters before grasping, to avoid returning to the user with an empty spoon.
8.6 Future Tasks

While the current My Spoon functions well, there are still several areas that can be improved, namely enhancing the accuracy in grasping food and preventing an empty spoon from returning to the user. To add this functionality, image-processing technology may be incorporated for sensing and automation.

8.6.1 Food Recognition by Using Color Image Processing

The purpose of adding color image processing to My Spoon is to identify the position and distribution of food in the food tray. This will simplify the operation, making it more convenient to use.

8.6.1.1 Fundamental Image Processing

Since the color and shape of food encompass a very wide range, accurately extracting food items is a difficult problem. Although My Spoon is mainly used indoors, there are still many different kinds of light sources, not to mention that the lighting may change during operation. By using the color of the bottom of the lower-right compartment of the meal tray as the default tray color, thresholds for further image processing can be set dynamically.

8.6.1.2 Color Sample Extraction and Threshold Calculation

Although there are many ways to derive thresholds from a color sample, we have used hue and saturation from an HSV input for robustness against external environmental changes. By mapping the hue on the x-axis and saturation on the y-axis, the hue-saturation distribution of the bottom of the meal tray can be seen in Fig. 8.9.
Fig. 8.9. Hue - Saturation Distribution
Fig. 8.10. Meal tray image. a Input image, b the same image after processing
To determine which areas of the tray contain food, the hue and saturation of each pixel are determined. If the hue and saturation of a pixel are within the bounds of the hue-saturation distribution of the meal tray, then the pixel is classified as meal tray. Likewise, if they fall outside the bounds of the meal tray distribution, then the pixel is classified as food. An input image and an image after processing are shown in Fig. 8.10.
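A minimal sketch of this classification step is given below. It assumes per-pixel hue and saturation arrays and a mask marking the tray color sample; the use of percentile bounds to delimit the tray's hue-saturation distribution is an assumption, since the text only states that thresholds are derived from that distribution.

```python
import numpy as np

def food_mask(hue, sat, tray_sample):
    """Classify each pixel as tray or food.

    hue, sat: per-pixel arrays on any consistent scale (e.g. from an HSV
    conversion of the camera image); tray_sample: boolean mask covering the
    bottom of the lower-right compartment, used as the colour sample.
    """
    h_lo, h_hi = np.percentile(hue[tray_sample], [1, 99])
    s_lo, s_hi = np.percentile(sat[tray_sample], [1, 99])
    is_tray = (hue >= h_lo) & (hue <= h_hi) & (sat >= s_lo) & (sat <= s_hi)
    return ~is_tray      # everything outside the tray distribution is food
```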
8.6.2 Improvements in Operation

8.6.2.1 Preventing an Empty Spoon from Returning to the User

In automatic and semi-automatic mode, each compartment is subdivided into nine partitions, and My Spoon will grasp food from each compartment in a predetermined sequence. By using image processing to determine which partitions are empty, new sequences can be dynamically generated to prevent unnecessary movements in those partitions. In Fig. 8.11, partitions 1, 6 and 8 in the compartment have been identified as containing food; thus only three operations are required to grasp all of the food. A sketch of this occupancy test is given after Fig. 8.11.

8.6.2.2 Improving the Usability of Manual Mode

In manual mode, the spoon and fork position must be accurately adjusted. By determining food locations using image processing, a directional input from the joystick will allow My Spoon to automatically find the nearest food item in that direction. Thus, by adding a pose calculation after the binarization routine, the probable food locations can be labeled, allowing calculation of the probable center of mass.
Fig. 8.11. Image of a single compartment. a Input image, b Image after processing, c Food locations (o: found, ×: not found)
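The occupancy test mentioned in Sect. 8.6.2.1 could look like the following sketch, which splits a compartment's binary food mask into the 3 x 3 grid of partitions and reports the occupied ones; the grid indexing and the minimum-fraction threshold are assumptions.

```python
import numpy as np

def occupied_partitions(food_mask, min_fraction=0.02):
    """Report which of the nine partitions of a compartment contain food.

    food_mask: boolean array for one compartment (True = food pixel);
    min_fraction: assumed minimum share of food pixels for a partition to
    count as occupied.  Partitions are numbered 1-9, row by row.
    """
    h, w = food_mask.shape
    found = []
    for idx in range(9):
        r, c = divmod(idx, 3)
        cell = food_mask[r * h // 3:(r + 1) * h // 3,
                         c * w // 3:(c + 1) * w // 3]
        if cell.mean() > min_fraction:
            found.append(idx + 1)
    return found   # e.g. [1, 6, 8] as in Fig. 8.11: only three grasps are needed
```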
If several food items are in close proximity, there is a chance that they may be mislabeled as one food item instead of several items. We are considering edge detection and the color schemes within a label as ways to overcome this problem; however, this has not been implemented yet.
8.7 Conclusion

We have developed "My Spoon", a meal-assistance device commercially available in Japan. My Spoon is operated by combining several motions. The system has three modes of operation, each with a different level of user control. Although the standard control interface is a chin-controlled joystick, optional control interfaces such as a reinforced joystick and a push button are available. By combining these options, the system can be customized to fit a wide variety of physical conditions. Future tasks include enhancing the accuracy of food grasping and preventing an empty spoon from returning to the user. With this in mind, we are exploring the use of image processing technology for sensing and automation.
References

1. Ishii S et al (2002) Development of the meal assistance robot. In: Proceedings of the 17th RESJA Annual Conference, pp 443–446
2. Ishii S et al (2001) Safety of the meal assistance robot. In: The Journal of Japanese Society for Medical and Biological Engineering, p 200
3. Ishii S et al (1996) A meal assistance robot for people with quadriplegia – 4th report. In: Proceedings of the 11th RESJA Annual Conference, pp 351–356
4. Tanaka S et al (1995) A meal assistance robot for people with quadriplegia. In: Proceedings of the 10th RESJA Annual Conference, pp 311–314
5. Ishii S et al (1992) Meal assistance robot as a device for people with quadriplegia. In: Proceedings of the 7th RESJA Annual Conference, pp 79–82
9 Enhancing the Usability of the MANUS Manipulator by Using Visual Servoing

A.H.G. Versluis, B.J.F. Driessen, and J.A. van Woerden
Abstract

Controlling MANUS, a wheelchair-mounted manipulator, can put a high cognitive load on the end user. Visual servoing techniques can help to reduce this load. This paper describes a visual servoing system that can assist the end-user in carrying out daily living tasks. No a-priori information about the object to be manipulated is used (allowing operation in unstructured environments). The system is designed to operate in direct collaboration with the end-user (collaborative control).
9.1 Introduction

MANUS is a wheelchair-mounted manipulator, meant to assist severely handicapped people in carrying out daily living tasks, such as eating, drinking, and scratching. The manipulator has six rotational degrees of freedom for positioning and orienting the gripper, one degree of freedom for opening and closing the gripper, and one (optional) degree of freedom for lifting the entire manipulator. MANUS is designed to operate in an unstructured environment, where the user is responsible for driving the robot to the required position (telemanipulation). Compared with industrial robots, MANUS has low accuracy and low repeatability.¹ These deficiencies have to be compensated for by the end-user, who guides the system to the required position based on visual observations (dotted lines in Fig. 9.1). Although users are able to manipulate many objects using this control architecture, the cognitive load on the end-user while accomplishing a certain task can be serious (especially when accurate motions are required). In this research, it is investigated how a camera system mounted on the gripper of the MANUS (eye-in-hand configuration) can help the end-user in carrying out a certain task. The control of a robot by means of a camera, visual servoing, is a widely explored field of science. As MANUS has to operate in a very unstructured environment with little or no knowledge of the objects, building a fully autonomous system is very difficult.
¹ This is a result of the high friction and backlash caused by the design choice to put the motors and gearbox in the main shaft of the manipulator (keeping the size and weight of the MANUS low).
Fig. 9.1. Traditional MANUS control scheme
Also, the MANUS philosophy is that the end-user should always be 'in control'. Especially for the latter reason, it is not the intention to build a fully autonomous system, but rather to incorporate the user into the control loop. The research is therefore not only directed towards building a vision system that can produce some control in unstructured environments, but also towards how the user and the visual servoing can work together in executing a task. The research field that combines user input with machine control is called "collaborative control" and was first used in teleoperation of vehicles, for example in [1]. In that work, a robot vehicle is described that engages in an active dialogue with the user to determine the course of action. The application of collaborative control to MANUS is quite new, see [2]. In that work, the user also has to interact with the robot through a dialogue. In our approach, the user can input commands to the robot directly, and the vision system merely assists in the movement in an intuitive way. The control scheme needed for this is depicted in Fig. 9.2. In this paper, several approaches to visual servoing are considered and a solution for integrating visual servoing in the MANUS controller is proposed. The performance of the visual servo is demonstrated in a test case where the objective is to pick up a colored beaker.
Fig. 9.2. Vision based control scheme
9.2 Visual Servoing
Two widely used architectures for visual servoing are image based visual servoing (IBVS) and position based visual servoing (PBVS) (see Fig. 9.3) [1–5]. In PBVS a 3D world model is constructed by the vision system. This 3D world model is used by a path planner, which will guide the arm to the desired position. Calculating a 3D world model can be carried out using different techniques (stereo vision, laser triangulation, usage of a-priori knowledge about the objects, etc.). These techniques are not preferred, since they need extra hardware or explicit a-priori information (which is in most cases not available in unstructured environments). In IBVS, the required position of the manipulator is defined in terms of image features (e.g. the required location of the corners of a cube). The visual servo continuously calculates the actual values of these features, and calculates a correcting control action in feature space. Feature errors are translated to robot co-ordinates (defined in Cartesian space or joint space). This requires the calculation of the inverse of an image Jacobian (J_i), defined in the following equation:
df = J_i(^c x_o) · d(^c x_o)    (9.1)

where f represents the feature vector and ^c x_o the pose of the object with respect to the camera frame.
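As a rough illustration of how a feature error is turned into a motion command, the following sketch computes a camera-frame correction using the inverse of the image Jacobian. It is not the authors' implementation; the function name, the gain and the use of a pseudo-inverse are assumptions made for the example.

    import numpy as np

    def ibvs_step(f_desired, f_measured, J_i, gain=0.5):
        """One image-based visual servoing update (illustrative sketch).

        f_desired, f_measured : feature vectors (e.g. pixel coordinates)
        J_i                   : image Jacobian relating camera-frame motion
                                to feature motion, df = J_i * dx
        Returns a Cartesian correction in the camera frame that drives the
        feature error towards zero.
        """
        error = f_desired - f_measured            # error in feature space
        # Map the feature error back to camera-frame co-ordinates with the
        # (pseudo-)inverse of the image Jacobian.
        return gain * np.linalg.pinv(J_i) @ error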
Fig. 9.3. Position based visual servoing PBVS (upper) vs. image based visual servoing (IBVS) (lower)
9.2.1 Vision Aspects of the Visual Servoing As stated earlier, the MANUS has to be able to function in unstructured environments and with various objects, putting high constraints on the vision system [6]. To reduce the complexity of these constraints, the objects can be equipped with markers. Although this approach has proven to work well [7], it limits the versatility of the system. Other examples, in which MANUS cooperates with the user, can be found in [8]. Here, the MANUS functions in conjunction with an automated wheelchair. For each object that is to be manipulated, a 3D model is formed using several shots from different angles. An operator is asked to segment and classify the object. In addition, he is asked to supply a gripping strategy. The main differences with our approach are that the operator is not part of the control loop and that a separate learning step is necessary. Also, all DOF's are controlled, while the essence of our approach is that DOF's are distributed between vision and user. Our visual servoing system is used in two manners. When MANUS is far away from an object, only generic image features are used (e.g. size of object, position of center of gravity, etc). The visual servo is responsible for driving the gripper close to the object to manipulate. When the manipulator is ‘close’ to the object, either the end-user takes over the control of the visual-servoed degrees of freedom, or a second visual servoing algorithm is activated, for fine-positioning the gripper. For this, a-priori information can be used to identify the object and the position of the gripper with respect to the object. Only the first manner of use is described in this paper.
9.3 Control Architecture
The control system must support the combination of autonomous control (via visual servoing) and direct user control, which may be defined in different co-ordinate systems. Figure 9.4 shows the scheme used. Whenever a user wants to execute a certain task, a task scheduler selects a set of image features from a database. Different features may be selected during different stages of the task. The features are compared with the measured features, resulting in a feature error. The inverse image Jacobian J_i^-1 translates feature errors to Cartesian errors (with respect to the camera frame). These errors are multiplied by a selection matrix S, which is a diagonal matrix of ones and zeros. A one in element S_ii implies that the i-th Cartesian DOF is actively used for visual servoing. In parallel, the end-user may drive the robot in his/her preferred co-ordinate system. The specified user input is transformed to the camera frame too, and added to the output of the visual controller. The controller uses the combined signals to control the robot. This architecture has the advantage that tasks need not be carried out completely by the visual controller. Instead, it is possible to merge user inputs with
visual servo inputs, allowing some DOF's to be controlled by the user, and some DOF's to be controlled by the visual controller. Note that when the entire selection matrix S is set to 0, the traditional MANUS controller is realized.
Fig. 9.4. Visual control architecture
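A minimal sketch of this merging step is given below. It is not the actual MANUS controller code; the DOF ordering, the function names and the assumption that the user command has already been transformed to the camera frame are illustrative only.

    import numpy as np

    def merge_commands(dx_vision, dx_user_cam, s_diag):
        """Combine visual-servo and user commands in the camera frame.

        dx_vision   : Cartesian correction produced by the visual controller
        dx_user_cam : user input already transformed to the camera frame
        s_diag      : diagonal of the selection matrix S (1 = DOF actively
                      used for visual servoing, 0 = DOF not servoed)
        """
        S = np.diag(s_diag)
        # Vision acts only on the selected DOFs; the user command is simply
        # added, so an all-zero S recovers the traditional MANUS controller.
        return S @ dx_vision + dx_user_cam

    # Example: vision controls x, y and roll; the user keeps z, yaw and pitch.
    s_diag = np.array([1, 1, 0, 1, 0, 0])   # assumed order: x, y, z, roll, yaw, pitch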
9.4 Vision System
9.4.1 Theory
In the first stage of task execution, we wish to rely on generic features, applicable to a wide range of objects. These features should supply servoing information, i.e. information that can be used to control the selected DOFs of the manipulator. Examples of generic features are the center of mass of an object, the size of the object, or the angle of the ‘long axis’ of the object. Since the system operates in unstructured environments, with changing illumination, the quality of the extracted features will vary strongly. In order to cope with this, confidence measures are introduced, which measure the reliability of the calculated features. During control, the features with the highest confidence measures can be selected. Note that the number of features must be at least equal to the number of actively controlled DOF's. If no alternative features can be selected, the visual servo must stop, and the end-user will be informed (e.g. through an audio signal).
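The selection rule just described can be summarized in a few lines. The sketch below is only an illustration; the data layout, the confidence threshold and the function name are assumptions and not part of the original system.

    def select_features(candidates, n_dof, min_conf=0.5):
        """Pick the most reliable features for servoing (illustrative only).

        candidates : list of (name, value, confidence) tuples, where the
                     confidence measure expresses how reliable the extracted
                     feature currently is (e.g. under changing illumination)
        n_dof      : number of actively controlled degrees of freedom
        Returns the selected features, or None if too few reliable features
        remain, in which case the servo stops and the end-user is warned.
        """
        usable = [c for c in candidates if c[2] >= min_conf]
        usable.sort(key=lambda c: c[2], reverse=True)
        if len(usable) < n_dof:
            return None                  # stop servoing, alert the end-user
        return usable[:n_dof]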
9.4.2 Implementation
The vision system is equipped with a low cost color camera. Wide-angle lenses were used, preventing failure of feature calculation at close proximity. Occlusion is dealt with by using the confidence measures (Hu moments and the number of pixels on the border). For segmentation of an object, normalized color information is used, providing brightness independent color segmentation. From the segmented image, several features are calculated. For servoing in the ^c x and ^c y directions, the center of mass of the pixels belonging to the object is calculated by taking the median of the pixel coordinates. This is more robust than the mean of the coordinates, as can be seen in Fig. 9.5. A feature for the roll is obtained by calculating the center of mass for both the upper and lower half of the object. The angle of these two points with the y-axis is used as a measure of the roll. This way, the system tends towards a symmetric image in which long objects are aligned with the y-axis. More than one possibility exists to use the features for visual servoing. For example, the center of mass can be used for controlling the ^c x and ^c y of the gripper, or for controlling the yaw and pitch orientation angles. Naturally, this requires a proper selection matrix S and image Jacobian J_i.
Fig. 9.5. Calculated features. a The cup, b centers of mass using median (light gray, more robust) and mean (white), c centers of mass in case of segmentation error (gray square), d roll feature, cup divided in two halves
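A possible implementation of these features is sketched below. It is a simplified reconstruction based on the description above (the segmentation input, array layout and function name are assumptions), not the original vision code.

    import numpy as np

    def object_features(mask):
        """Generic features of a segmented object (illustrative sketch).

        mask : 2-D boolean array, True for pixels classified as the object,
               e.g. by normalized-colour segmentation.
        """
        ys, xs = np.nonzero(mask)
        # Median of the pixel coordinates: more robust against segmentation
        # errors than the mean (cf. Fig. 9.5b/c).
        cx, cy = np.median(xs), np.median(ys)
        # Roll feature: centres of mass of the upper and lower half of the
        # object; the angle of the connecting line with the image y-axis
        # tends to zero when a long object is aligned with the y-axis.
        mid = (ys.min() + ys.max()) / 2.0
        upper, lower = ys < mid, ys >= mid
        roll = np.arctan2(np.median(xs[lower]) - np.median(xs[upper]),
                          np.median(ys[lower]) - np.median(ys[upper]))
        size = xs.size                   # object size as a coarse feature
        return cx, cy, roll, size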
9.5 Stability
Our setup has some special consequences for the stability of the system. Although the relationship between the position of the object with respect to the camera and the features in the image is linearized through the Jacobian, the Jacobian itself is dependent on ^c x_o. This means that the 'gain' between one pose parameter and its feature is influenced by other pose parameters. For instance, the center of mass appears to be moving faster in the camera image when an object is close. This is caused by the perspective of the camera. According to the well-known pinhole model, the projection Jacobian has the following form:
(Δx_i, Δy_i)^T = J_xyz · (Δ^c x_o, Δ^c y_o)^T,  with  J_xyz = diag( foc_x / ^c z_o , foc_y / ^c z_o )    (9.2)
where foc_x and foc_y represent the focal distance in the x and y directions, ^c x_o and ^c y_o the position of the object with respect to the camera frame, and x_i and y_i the corresponding point (pixel location) in the camera image. The Jacobian has large diagonal values for smaller values of ^c z_o, which may result in instability. Whilst this is a problem in all IBVS setups, here it cannot be compensated, because not all pose parameters, like ^c z_o, are known. We solved this problem by considering possible cases of cross coupling and tuning the gain to a stable system in the worst case of this cross coupling. The system should then be stable in all other cases. Additionally, some constraint may be imposed on a DOF to pose a limit on this worst case. For the aforementioned example of the center of mass, this means that the movement is constrained to a minimum Z and the gain is tuned to a stable system for this Z_min.
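This worst-case tuning can be illustrated as follows. The numerical values for the focal length and the minimum distance are assumptions chosen for the sketch, not values from the experiments, and the scaling rule is only one simple way to realize the idea described above.

    import numpy as np

    def image_jacobian_xy(foc_x, foc_y, z_obj):
        """Pinhole-model Jacobian of Eq. 9.2 (x and y translation only)."""
        return np.array([[foc_x / z_obj, 0.0],
                         [0.0, foc_y / z_obj]])

    # Since the object depth ^c z_o is not measured, the loop gain is tuned
    # for the smallest depth z_min that the motion is constrained to; for any
    # larger depth the effective loop gain only becomes smaller, so the loop
    # stays stable.
    z_min = 0.15                  # assumed minimum gripper-object distance [m]
    foc = 500.0                   # assumed focal length [pixels]
    gain = 0.5 * z_min / foc      # scale the gain by the worst-case Jacobian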
9.6 Experiments Several experiments have been done using different selection matrices (and consequently different Jacobians). The task was to pick up a colored beaker. The experiments served two goals: 1. To investigate the stability of the proposed visual control architecture. 2. To investigate the usability issues of the combined visual/user control system. The results of one typical experiment are shown in Figs. 9.6 and 9.7. In this experiment, the user controlled the Z-direction (i.e. the distance between the gripper and the beaker) manually, whilst the visual servo controlled the X, Y and roll DOF's. Yaw and pitch angles remained constant.
Figure 9.6 shows the pose of the MANUS during task execution, calculated from the motor angles. As explained earlier, the MANUS has a significant amount of backlash. This means that the position of the motors differs from the actual pose of the gripper. Consequently, a moving motor does not automatically imply that the gripper is also moving. This is also shown in Fig. 9.7 (which applies to the same experiment). Three signals are given. The first signal is the Y co-ordinate of the gripper, calculated from motor co-ordinates. The second signal shows the Y co-ordinate measured in the camera image. The third signal shows the input of the end-user. It can be seen that though the motor is moving, the real gripper position is relatively constant. Another experiment was done concerning the effect of cross coupling on the stability. First, the arm is moved towards the cup, whilst being servoed in the ^c x and ^c y directions (Fig. 9.8). The gain is clearly too high, as the arm starts to oscillate in ^c x as the distance in ^c z gets smaller. The gain is set to a stable value at this distance (Z_min). (Z_min is small enough to pick up the cup.) As we move away from the cup, the servo remains stable throughout the trajectory, as would be expected. Other experiments confirm this outcome, and show that the proposed architecture resulted in a stable control performance.
Fig. 9.6. Results of experiments
Fig. 9.7. Motor position vs. gripper position
Fig. 9.8. The user moves in ^c z and vision servoes the ^c x direction. The black straight lines in the ^c z graph indicate the position of the cup
9.7 Conclusions and Future Work
We proposed a visual control architecture that allows semi-autonomous visual control of the MANUS manipulator. Using the control architecture it is possible to select DOF's that will be controlled by visual servoing, whilst other DOF's are directly controlled by the end-user. The experiments show that the total controller is stable. Differences between the motion of the motors and the motion of the gripper can be explained by the amount of backlash that exists in the manipulator. More experiments will be done to investigate the best value of the selection matrix S for different tasks. This will be ascertained by user trials. Subsequently, the feature-tracking algorithm should be improved. Furthermore, the second stage of the visual servoing algorithm (close to the object) should be developed. In this stage a flexible way of using a-priori information will be introduced.
References
1. Fong T, Thorpe C, Baur C (1999) Collaborative control: A robot-centric model for vehicle teleoperation. AAAI 1999 Spring Symposium: Agents with Adjustable Autonomy, Stanford, CA, March 1999
2. Martens C, Ruchel N, Lang O, Ivlev O, Graser A (2001) A FRIEND for assisting handicapped people. IEEE Robotics & Automation Magazine, 7(1): 57–65
3. Hutchinson S, Hager G, Corke P (1996) A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5): 651–670
4. Espiau B, Chaumette F, Rives P (1992) A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3): 313–326
5. Hager G (1997) A modular system for robust hand-eye coordination using feedback from stereo vision. IEEE Transactions on Robotics and Automation, 13(4): 582–595
6. Weiss LE, Sanderson AC, Neuman CP (1987) Dynamic sensor-based control of robots with visual feedback. IEEE Journal on Robotics and Automation, RA-3: 404–417
7. Corke P, Hashimoto K (eds) (1993) Visual control of robot manipulators – A review. World Scientific, Singapore
8. Woodfill J, Zabih R (1997) Motion based tracking for dynamic, unstructured environments. Computer Science Department, Stanford University
9. Martens C (2001) Interactive controlled robotic system FRIEND to assist disabled people. In: Proceedings ICORR 2001, p 148
10. Matsikis A, Schmitt M, Rous M, Kraiss K. Ein Konzept für die mobile Manipulation von unbekannten Objekten mit Hilfe von 3D-Rekonstruktion und Visual Servoing. RWTH Aachen, http://www.techinfo.rwth-aachen.de/Forschung/MSR/Manus/
10 A Safety Strategy for Rehabilitation Robots Makoto Nokata and Noriyuki Tejima
10.1 Introduction
It was reported previously that many MANUS users thought that the MANUS was useful, but not useful enough to gain a totally independent lifestyle, for several reasons: it moves too slowly, it cannot handle heavy goods, and its manipulative arm is not long enough. It would technically be easy to develop a faster, more powerful and bigger robot than the MANUS to meet these requests; however, such a high-performance robot would be too dangerous for everyday use. The consideration of safety aspects is especially important as rehabilitation robots with higher performance enter the market. The problem has been discussed in a special committee of the Japan Robot Association according to ISO. In this paper, a safety strategy for rehabilitation robots is proposed as a result of the discussion.
10.2 Principles of Safety Standards for Robots
10.2.1 Framework of New Safety Standards for Robots
For industrial robots, ISO 10218 "Manipulating Industrial Robot – Safety" was established in 1992. The basic strategies for safety in the standard were that robots should be isolated from humans and that they must be turned off when they cannot be isolated. However, these strategies cannot be applied to rehabilitation robots because they must work near or in contact with humans. We should establish a new safety framework for rehabilitation robots. Recently, a safety standard system for machinery has been established, in which the safety standards for manipulating industrial robots are included. In Europe, a certification system has been carried out according to it. Manufacturers can affix the CE Marking (CE is an abbreviation for "Conformité Européenne", French for 'European Conformity') to their own products after the products are certified by themselves or by a notified body. When such a certification system is functioning, designers do not need to take responsibility for accidents. Accidents caused after enough risk reduction has been carried out should be tolerated according to the certification system. Accordingly, the process of certification is most important. Designers must properly perform the iterative process of risk assessment and risk reduction. Though there is no safety standard for rehabilitation robots, they should obey the safety standard system for machinery. As stated above, the risk for rehabilitation
robots cannot be reduced in the same manner as for industrial robots, and the residual risk may be difficult to tolerate. Therefore, it is necessary to develop rational protective measures for rehabilitation robots based both on the basic concepts of safety standards and from the point of view of enhancing users' QOL (Quality of Life).
10.2.2 Safety Standard for Machinery
The safety standard system for machinery has been established in a pyramidal structure shown in Fig. 10.1. In this system, standards at the top prescribe basic concepts of safety, standards below them prescribe common technologies, and standards at the bottom prescribe precise technologies for each type of machinery, such as manipulating industrial robots. The basic concepts of safety are protective measures according to risk assessment and disclosure of residual risk. According to ISO/IEC Guide 51:1999, safety is defined as "freedom from unacceptable risk" and risk is defined as "combination of the probability of occurrence of harm and the severity of that harm". Tolerable risk is defined as "risk which is accepted in a given context based on the current values of society". A level of tolerable risk is not clearly stated in the standard and should be decided according to the current values of society, state-of-the-art technology, legal problems and so on. Safety is relatively described by risk in terms of probabilities. There can be no absolute safety: some risk will remain, defined as residual risk. Nobody can say that accidents or disasters are absolutely avoidable. For a guarantee of safety, there must be grounds for tolerating accidents after an adequate risk reduction process is implemented.
Fig. 10.1. A pyramidal structure of the safety standard system for machinery
10.2.3 Risk Assessment Process and Risk Reduction
According to ISO 12100-1:1992, safety measures are a combination of the measures incorporated at the design stage and those measures required to be implemented by the user. Safety measures at the design stage should be performed as a combination of risk assessment and risk reduction as listed below. The first four processes are iterated until the remaining risks become tolerable, and finally point 5 is applied:
1. specify the limits of the machine,
2. identify the hazards and assess the risks,
3. remove the hazards or limit the risks as much as possible,
4. design guards and/or safety devices against any remaining risks,
5. inform and warn the user about any residual risks.
When specifying the limits, use limits, space limits and time limits should be determined. In the process, it is important to take reasonably foreseeable misuse into account, such as incorrect behavior resulting from normal carelessness and the reflex behavior of a person in case of malfunction, incident, failure and so on. Hazards are the origin or the nature of the expected harm. For identifying the various hazards, past data and experiences will be useful. After hazard identification, risk estimation should be performed for each hazard. According to ISO 14121:1999, the risk is derived from a combination of the following elements (Fig. 10.2):
1. the severity of harm;
2. the probability of occurrence of that harm, which is a function of:
   o the frequency and duration of the exposure of persons to the hazards;
   o the probability of occurrence of a hazardous event;
   o the technical and human possibilities to avoid or limit the harm.
Previously, the risk was defined as the product obtained by multiplying the four elements; now, however, it can be estimated by various functions of them. There are many methods of risk estimation. For risk reduction, an inherently safe design should be striven for by completely removing the hazards or limiting the risks first of all, such as removal of sharp edges, minimizing or removal of kinetic energy, usage of low voltage and observing ergonomic principles. Against hazards which cannot be avoided or sufficiently limited by an inherently safe design, safeguards should be applied, such as fixed guards, movable guards, interlocking guards and trip devices. After the risk reduction, the risks for the machine with the safety measures will be assessed again. As risks cannot be eliminated completely, it should be considered which of the risks should preferentially be reduced or which measures will provide cost-effective performance. In carrying out this process, it is necessary to take account of:
• the safety of the machine,
• the ability of the machine to perform its function,
• the usability of the machine,
• the manufacturing and operational cost of the machine,
in that order of preference.
It is necessary to inform and warn the users about residual risks. The instructions and warnings will prescribe the procedures and operating modes intended to overcome the relevant hazards. If a particular training is required, it must be indicated.
Fig. 10.2. Risk assessment and risk reduction of safety measures at the design stage (ISO 14121:1999, ISO 12100-1/-2)
10.2.4 Tolerable Risks for Robots
As mentioned above, a level of tolerable risk for machinery should be decided according to the standard which the risk assessor declares. This is suitable for industrial robots: because the workers whom they may injure do not directly benefit from their use, their safety should be certified objectively. On the contrary, users of rehabilitation robots directly benefit from them. Users may accept their use on account of these benefits even when the designer cannot reduce the associated risks sufficiently. Safety standards for medical devices will help us to better consider safety for rehabilitation robots. For example, surgical robots are highly beneficial if the patient's outcome is successful; however, the operative outcomes are not always a success. For such machinery, it is not necessary for designers to take responsibility for accidents if the following steps were taken prior to their usage: that the device was in fact declared as "state-of-the-art", that the residual risks were clearly detailed to the patients, and that the patients consented to their use.
10.3 Case Study on Safety of Rehabilitation Robots
In general, the risk assessment and the risk reduction of machinery are carried out according to ISO/TR 12100-1 "Safety of machinery – Basic concepts, general principles for design" and ISO 14121:1999 "Safety of machinery – Principles of risk assessment". In Japan, a special committee for standardizing rehabilitation robots was established by the Japan Robot Association in 2001. The committee members, who are researchers of medical and rehabilitation robots, carried out a case study assessing several medical and rehabilitation robots according to ISO/TR 12100-1:1992 and ISO 14121:1999. The aim of this case study is to clarify the key points of risk assessment and risk reduction for these robots. Risk assessment case studies were carried out for the following medical and rehabilitation robots, using the block chart shown in Fig. 10.3, which is Fig. 10.2 modified according to ISO 14971 "Medical devices – Application of risk management to medical devices".
Fig. 10.3. The iterative process to achieve safety, which is Fig. 10.2 modified according to ISO 14971
• Medical robots:
  o Neurosurgical robot
  o Laparoscopic surgery robot
  o Continuous passive motion device (CPM)
• Rehabilitation robots:
  o Meal assistance robot
  o Mobile ceiling lift
  o Bed transfer
  o Pet robot / mental care robot
This section reports some of the results as follows: 1. risk estimation, 2. risk reduction, 3. benefit estimation.
10.3.1 Risk Estimation
In this section, some formulas proposed for estimating the risk of machinery are commented on. The risk related to the considered hazard can be calculated by the following equation:
R = Q · F · C · N    (10.1)
where
R: risk related to the considered hazard,
Q: probability of occurrence of harm,
F: frequency and duration of exposure,
C: severity of possible harm that can result from the considered hazard,
N: number of exposed people.
The same equation (Eq. 10.1) is used by the special committee for standardization to estimate the risk of marketed medical and rehabilitation robots. However, this approach shows several significant disadvantages. Some of them are briefly commented on below:
(a) "R: risk related to the considered hazard" is influenced by the differences between users and by the body condition of the cared person. In the case of in-home care, a caretaker or a cared person has to operate a medical/rehabilitation robot by himself/herself. Most caretakers or cared persons are not familiar with the operation, so "Q: probability of occurrence of harm" becomes large owing to incorrect operation or misuse. Even with correct operation and movement, robots can injure a patient whose joints are stiff or whose bones are fragile (e.g. due to osteoporosis), so Q is high. As a result, the risk of medical and rehabilitation robots is influenced by the differences between users and the body condition of the cared person. This is far different from the risk of machinery, which can be estimated on the assumption that the user is a specialist operator with a normal healthy body.
(b) The difference in the state of the robot's work space greatly influences "R: risk related to the considered hazard". Stretchers and lifters carry a residual risk of the user falling. The damage depends on the place of the fall: for example, the damage from falling onto a bed is low, but onto a rigid floor
is high. A manipulator also exposes the user to the risk of a collision accident; the probability of an accident depends on the room space and the user's position. As mentioned above, the difference in the state of the robot's work space must be considered when estimating the risk of medical and rehabilitation robots.
(c) There is little judgment material for determining "Q: probability of occurrence of harm" and "C: severity of possible harm". Compared with general machinery, there are few statistical data in accident reports on medical treatment and rehabilitation apparatus. The number of accident occurrences, including slight injuries, is unknown, so it is extremely difficult to determine "Q: probability of occurrence of harm". In addition, there is no method to calculate "C: severity of possible harm". In the present circumstances, these values are estimated experimentally or subjectively by the risk assessor.
(d) The risk cannot be expressed correctly by the multiplication of risk elements as in Eq. 10.1. On the assumption that the risk factors in Eq. (10.1) are mutually independent, the risk level is calculated by multiplying each of them. However, there is a certain correlation between "F: frequency and duration of exposure" and "Q: probability of occurrence of harm" for medical and rehabilitation robots. Hence a correct estimate cannot be obtained by simply using Eq. (10.1).
10.3.2 Safety Measures of Risk Reduction
Although some safety measures of risk reduction are convenient for machinery, they are not useful for rehabilitation robots. In the case of a continuous passive motion (CPM) device, an emergency stop device is not useful for persons with a spinal cord injury, because they can feel no pain and cannot judge when to stop the robot. Generally speaking, the inherent safety of industrial robots is easily guaranteed by setting up a fence around the robot. Furthermore, several types of safety measures, such as interlocking devices, enabling devices, hold-to-run control devices and so on, are available to reduce the risk. Therefore the risk can be made infinitely small. However, it is impossible to realize inherent safety of medical and rehabilitation robots, because these robots cannot work without touching a person. Few safety measures suitable for these robots have been developed.
10.3.3 Benefit Estimation
Several benefits of using rehabilitation robots are reported by the case study (Table 10.1). Most of the benefits must be quantified by use of QOL (Quality of Life), ROL (Respect of Living) and ADL (Activities of Daily Living), but it is too difficult to quantify them objectively. For example, the benefit of "mobility" changes according to the extent of the gait disorder, so it is necessary to subjectively consider the daily life of the targeted cared person.
Table 10.1. Benefits of using rehabilitation robots and the quantification factors

User (mostly carer)
  Benefit: improvement of working condition (e.g. reduction of lumbago)
  Quantification factors: QOL, ROL, tiredness, working time, cut-down medical expenses for lumbago
Cared person
  Benefit: acquisition of an independent life, expansion of life space, being mentally relieved
  Quantification factors: QOL, ROL, ADL
10.4 Proposal of Risk Assessment Guideline for Rehabilitation Robots
This section proposes a safety strategy for rehabilitation robots according to the results of the case study mentioned above. The proposed guideline for risk assessment and risk reduction is shown in Fig. 10.4.
Fig. 10.4. Proposed guideline of safety strategy for rehabilitation robots
Fig. 10.4 has a similar structure to that shown in Fig. 10.3. The differences are the following additions:
• Determination of limits: user, the extent of handicap, the condition of health, the ability of operation and so on.
• A third person who can make an objective judgment with technical knowledge evaluates the contents of the carried-out risk assessment.
• Judgment on whether the apparatus is introduced or not is made by the carer, cared person and manager in consideration of the benefit.
10.5 Conclusion
This paper presented the study of a safety strategy for rehabilitation robots. The principles of safety standards for robots were reported, especially the framework of safety standards and the risk assessment and risk reduction process for machinery. As a result, it became clear that it is necessary to form a new safety strategy for rehabilitation robots based on the principles and process of safety standards for machinery. According to these results, a case study assessing several medical and rehabilitation robots was carried out according to ISO/TR 12100-1:1992 and ISO 14121:1999. The problems of assessing these robots were discussed in a special committee of the Japan Robot Association, and a safety strategy for rehabilitation robots has been proposed. Determination of limits for special factors, judgment by a certification system, and the benefit of robots for the user have been added to the strategy. However, the creation of a safety standard system for rehabilitation robots has only just started, and the safety strategy proposed in this paper has not yet reached the stage of perfection. Future works are: (1) development of risk reduction technology for rehabilitation robots, (2) study of estimation/evaluation methods of risk for them, and (3) establishment of a certification system in order to guarantee that risk assessment is carried out.
11 Safety Evaluation Method of Rehabilitation Robots Makoto Nokata, Koji Ikuta, and Hideki Ishii
11.1 Introduction
An aged society is coming soon. Human-care robots must be realized to nurse aged and disabled persons. Human-care robots will need to work around elderly people and be in physical contact with them; therefore conventional safety strategies for industrial robots cannot be applied to human-care robots. It is now necessary to make a new study of safety in the space where a human and a machine will exist together. In this chapter, we investigate the human injury caused by robots and machines, and then we classify safety design and control strategies for robots. Next, we propose an evaluation method of safety for human-care robots and define evaluation measures which describe the degree of safety. Next, we apply our method to evaluate several safety design and control strategies, and then we prove the viability of our safety evaluation method. These proposed methods enable us to optimally distribute cost among several safety strategies, and to derive a suitable approach motion of a multi-link manipulator towards a human. The validity and effectiveness of these methods are demonstrated by numerical analysis. As a result, the design and control that increase safety are successfully obtained.
11.2 Safety Strategy for Human-Care Robots
11.2.1 Injury to Humans from Human-Care Robots
We gave thorough consideration to the possibility of injury to humans from human-care robots and machines. The causes of injury may be classified as follows:
1. mechanical injury – shock (internal bleeding, fracture of a bone), scar (bleeding, contagion);
2. electric injury – electric shock (death from shock, burn), electromagnetic wave (cancer, leukemia);
3. acoustic injury – boom (hardness of hearing), low frequency sound (insomnia, neurosis).
In this research, we chose the safety strategy to prevent mechanical injury as the subject of the study. Though protecting humans from electric and acoustic injury is possible by making use of insulators or soundproofing materials, it is very
difficult to isolate the mechanical damage in the workspace of the robot. For securing human-care robots which work around humans, many kinds of design and control strategies are indispensable. But many complex and difficult problems are confronted in the design and the control of the robot.
11.2.2 Classification of Safety Strategies
We classify safety strategies as follows:
1. pre-contact safety strategy;
2. post-contact safety strategy.
A pre-contact safety strategy consists of minimizing human injury before a human-robot collision. A post-contact safety strategy consists of reducing the injury after the collision. In analogy to car safety strategies, the former corresponds to avoiding a collision by means of an anti-lock brake system (ABS), the latter to absorbing the shock by means of an air bag or a side door beam. This discussion can also be classified as follows from the viewpoint of a human-care robot user or designer:
1. safety design strategy (minimizing injury by design);
2. safety control strategy (minimizing injury by control).
Table 11.1 shows the classification of safety strategies.
Table 11.1. Classification of safety strategies
Control strategy – before collision: avoid collision (distance, speed, posture); after collision: minimize impact force (attenuation, diffusion)
Design strategy – before collision: moment of inertia, stiffness, weight; after collision: cover, joint compliance, surface, shape
To implement robot design safety strategies, we have already developed the cybernetic actuator [1] and a non-contact magnetic gear [2], which have force-limiting functions. Other strategies have been devised, such as force-limiting equipment using electrorheological fluid [3], force control, a shock absorption cover [4], chamfering, etc. Little research has been carried out on safety evaluation methods, such as the dangerousness of actuator arrangement [5] and safety in human control [6]. International safety standards have defined safety as "freedom from unacceptable
risk of harm," and thus estimate only the risk of harm [7]. This estimation method lacks a quantitative basis because it relies on the use of insufficiently provable data. Furthermore, the estimation methods of safety vary with the researchers. As a result, we cannot compare the various strategies, so each case study has been carried out separately from beginning to end. The reasons are attributable to the vagueness of the concept of safety. Everyone thinks that it is difficult to calculate the degree of safety or dangerousness and the contribution of each safety design and control strategy to the overall safety performance of robots, so nobody tries to do it.
11.3 Proposing Evaluation Measures of Safety
11.3.1 Necessity of Quantitative Safety Evaluation
It is necessary to define ''evaluation measures'' for devising general safety strategies for human-care robots. Evaluation measures enable us to compare the effect of each safety strategy on the same scale and to optimize the design and control of human-care robots. In the field of information science, Dr. Shannon defined information as the degree of entropy, and thereby advanced information theory remarkably [8]. In the robotics field, Dr. Uchiyama and Dr. Yoshikawa defined the measure of manipulability, which has enabled us to compare the manipulation performance of various kinds of robots uniformly [9]. The former definition doesn't express enough about the quality of the information; the latter doesn't express various kinds of control performance completely. But we cannot deny their contribution to science and engineering. If we overcome some differences of opinion and define general evaluation measures for human-care robots, we will be able to achieve similar effects.
11.3.2 Selection of Evaluation Measures
First, we examined in detail the occurrence process of collision accidents. In ISO 12100, some formulas for estimating the risk of machinery are proposed. A typical equation for the risk related to the considered hazard is the following:
R = Q · F · C · N    (11.1)

where
R: risk related to the considered hazard,
Q: probability of occurrence of harm,
F: frequency and duration of exposure,
C: severity of possible harm that can result from the considered hazard,
N: number of exposed people.
Many researchers have analyzed the ''Q: probability of occurrence of harm'' caused by human error, manipulation and so on. Their main topic is how to reduce the probability of an accident and how to estimate it. Little attention has been paid to the relation between the design and control of human-care robots and the dangerousness of injury. In the event of a careless collision between robots and humans, the degree of ''C: severity of possible harm that can result from the considered hazard'' can be expressed as Eq. 11.2 by using only the main factors, design and control.
C = f(design) · g(control)    (11.2)
In this research, we have been taking the standpoint of studying ''what design or control can minimize human injury'' at the occurrence of an accident. Put another way, our aim is to make a quantitative evaluation of the effectiveness of safety design or control measures, and to minimize the dangerousness on the condition that Q, the probability of occurrence, is 1. What should the evaluation measures be? A human-care robot works around humans who move irregularly. We consider an appropriate safety strategy while adapting the classified design/control safety strategies mentioned previously. A safety design strategy is a means for reducing the injury to a human after an irregular collision. A safety control strategy is a means for minimizing the injury before a human-robot collision. It is important to estimate not the occurrence rate but the injury due to collision. No matter what the cause of the collision accident may be, the shock of mechanical injury depends on the impact force, and the scar depends on the impact stress. Namely, we consider impact force and impact stress as evaluation measures.
11.4 General Evaluation Method Using Evaluation Measures
In this section, we propose a general quantitative method of evaluation using evaluation measures. First, we define the critical impact force Fc as the minimal impact force that causes injury to a human. Next, we define the danger-index α as the producible impact force F of the robot against Fc, in Eq. 11.3.
α = F / Fc    (α ≥ 0)    (11.3)
Strictly speaking, the value of force Fc varies according to age, sex and body part. But we use one representative value for realizing the generality of safety evaluation. In exceptional cases such as eyes, where Fc is very low, these body parts are treated as a singular point. Another evaluation is needed for such points. Next, we consider the overall danger-index provided by some safety strategies. We express the characteristic of safety strategies for minimizing the impact force
by using a block chart, which is popular in the control field. For example, the producible impact force is the input, a safety strategy is a factor, its danger-index is a transfer function, and the injury to a human is the output. The index is dependent on the transfer function. In this system, several factors are connected with each other in series. The characteristic of the whole system can be expressed as the multiplication of the individual transfer functions. The total danger-index of the whole robot, α_all, is expressed by the multiplication shown in Eq. 11.4. This equation enables us to quantify the effect of safety strategies on the same scale:
α_all = ∏_{i=1}^{n} α_i    (11.4)
where ''n'' is the total number of safety strategies and ''i'' is the index of a safety strategy. As an example, we consider the case of reducing the impact force by a perfect shock absorption material. Even if a robot collides with a human, the impact force on the human is qualitatively 0 because it is isolated by the material. The danger-index α_j of the shock absorption material is therefore expressed as 0 by the proposed evaluation method of safety. The total danger-index, obtained by multiplying the individual indexes, then also results in 0, which obviously agrees with intuition. Too many safety strategies reduce the working or operating ability of the robot. This problem can be solved by devising a safety strategy on the condition that the required working ability is satisfied, or by calculating the optimum solution between Eq. 11.4 and the efficiency of the robot's work. This is an advantage produced by a quantitative evaluation of dangerousness. Defining the impact force and the danger-index before improvement as F0 and α0 respectively, the improvement rate η can be calculated by Eq. 11.5.
η = α0 / α = (F0 / Fc) · (Fc / F) = F0 / F    (11.5)
Since Fc is canceled in Eq. 11.5, we can simply compare the situations before and after applying safety strategies. The algorithm of our safety evaluation method is the following:
1. investigating the factor of damage to a human as an evaluation measure;
2. calculating the impact force F of each safety strategy;
3. calculating the danger-index α from Eq. 11.3;
4. executing the general evaluation of safety by using the total danger-index;
5. discussing the safety strategy from the result.
This method enables us to evaluate the effect of each safety strategy or of all of them together.
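The three defining equations translate directly into a few helper functions. The sketch below only illustrates the bookkeeping of the method; the function names are ours, and the 490 N default is the representative critical force used later in this chapter.

    def danger_index(F, Fc=490.0):
        """Eq. 11.3: producible impact force over the critical impact force."""
        return F / Fc

    def total_danger_index(alphas):
        """Eq. 11.4: strategies act in series, so the overall danger-index
        is the product of the individual indexes."""
        total = 1.0
        for a in alphas:
            total *= a
        return total

    def improvement_rate(F_before, F_after):
        """Eq. 11.5: Fc cancels, so strategies can be compared directly."""
        return F_before / F_after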
11.5 Deriving Danger-Indexes of Safety Strategies
In this section, examples of safety design and control strategies are given to show the practical derivation of a danger-index.
11.5.1 Safety Design Strategy
First, we propose a linear approximate model of each safety strategy and solve it individually. The aim of the approximation is to extract only the effect of a safety factor and remove the effects of other factors as much as possible. Usually, models and equations are made which satisfy all effects of the boundary conditions at the same time. This method requires their reconsideration whenever the conditions are changed, and if more phenomena are considered, the equations become complicated and the number of unknown variables increases. For evaluating and comparing safety strategies, it is necessary not only to consider all phenomena strictly but also to quantify the safety with the aim of wide applicability. As a result, we work out the danger-index of each safety strategy by using a linear approximate model individually. This research supposes a collision accident between a human and a robot, and each safety strategy for reducing the damage from the collision is discussed. As an example of a safety design measure, reducing the robot weight in order to minimize the impact force is treated as follows. The impact force F is derived as Eq. 11.6 from Newton's equation of motion. This impact force F of the robot against the critical one yields the danger-index α, Eq. 11.7.
F = m·a    (11.6)

α = m·a / Fc    (11.7)
As an example, the danger-index is shown when the robot material is changed from steel (density: 7.86 × 10³ [kg/m³]) to aluminum (density: 2.69 × 10³ [kg/m³]). When the robot moves at 1 [m/s²], the danger-index α is 0.34. Or, if replaced with a plastic (density: 1.40 × 10³ [kg/m³]), the index α is 0.18. In short, if the weight is reduced by half, α is halved too. Similarly, it is possible to derive danger-indexes of several other design strategies, such as absorbing impact force by a soft cover, safety joint compliance, minimizing impact stress caused by shape, reducing surface friction and so on. The equations of these danger-indexes have been shown in [10].
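Reading the quoted indexes as values relative to the original steel arm (an interpretation on our part), they follow directly from the density ratio, since the mass, and hence m·a in Eq. 11.7, scales with the material density for the same arm volume and acceleration:

    # Relative danger-index for the weight-reduction example (Eqs. 11.6-11.7),
    # taking the steel arm as the reference (alpha = 1).
    rho_steel, rho_alu, rho_plastic = 7.86e3, 2.69e3, 1.40e3   # densities [kg/m^3]

    alpha_alu = rho_alu / rho_steel            # ~0.34, as quoted in the text
    alpha_plastic = rho_plastic / rho_steel    # ~0.18
    print(round(alpha_alu, 2), round(alpha_plastic, 2))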
11.5.2 Safety Control Strategy Danger-index equations of safety control strategy are derived in this research. If dynamical analysis or consideration of extra parameters are needed, the safety is evaluated by using some assumptions. For example of safety control strategy, ''Effect of keeping distance'' is shown as follows. Sufficient distance between a human and a robot produces enough time to reduce impact force by braking, actions to avert collision, and so on. When the approaching speed of a robot (mass: m) is reduced at acceleration a from distance l. Time until collision ∆t is obtained by Eq. 11.8, when v>0 and a>0.
l = v·∆t − a·∆t²/2,   ∆t = v/a − √( (v/a)² − 2l/a )    (11.8)
The collision speed becomes v − a∆t , and impact force F and danger-index α are expressed as in Eqs. 11.9 and 11.10. We assume that the impact force does not become a negative value.
F = m · ((v − a·∆t) − v′) / dt    (11.9)

α = F / Fc = (m / Fc) · ((v − a·∆t) − v′) / dt    (11.10)
Here, we examine a nursing motion by a multi-joint manipulator. First, a ''normalization technique of impact force'' is introduced in order to pick up the effect of distance. In Eqs. 11.8–11.10, the acceleration a has no influence on the effect of distance and differs between robots. The velocity after collision v' cannot be specifically determined before the collision. These parameters are determined by the assumption that the impact force is 1 N (normalized impact force). Therefore, the unknown parameters in these equations, obtained from this technique, are taken as a = 1 [m/s²], v' = 0 [m/s], dt = 1.0 [s]. That is the normalization technique. We consider a concrete example of a robot with a mass of 10 kg approaching a human from a distance of 0.5 m at a velocity of 2 m/s. The time until collision ∆t, calculated from Eq. 11.8, is 0.27 s. The impact force F0, obtained from Eq. 11.9, is 64.65 N. The critical impact force Fc is 490 N, that is, 10% of the force which the human head can withstand without injury; a safety factor of 10 on Fc is introduced on our own terms. Strictly speaking, Fc changes according to age, sex and body part, but we use 490 N as one representative value for realizing the generality of the danger evaluation. If another value of Fc is needed, the safety is evaluated by replacing just the equation of the impact force F in Eq. 11.3. Of course, exceptional cases exist, such as the eye, where Fc is very low; these are treated as singular points, and another evaluation is needed for them. The danger-index α0
calculated from Eq. 11.10 is 0.13. When the robot is set up 1.0 m apart from the human, ∆t, F and α are 0.59 s, 24.15 N and 0.049, respectively. The improvement rate η is 3.01. The result revealed quantitatively that the danger was decreased to almost 30%. Similarly, it is possible to derive danger-indexes of several other control strategies, such as a safe approaching velocity, a safe posture and so on. The equations of these danger-indexes have been shown in [11].
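The time-to-contact part of this example is easy to reproduce. The sketch below evaluates Eq. 11.8 for the two distances used above; the impact force and danger-index then follow from Eqs. 11.9 and 11.10 once a value of dt is chosen.

    import math

    def time_to_contact(v, a, l):
        """Eq. 11.8: time until collision when the robot decelerates at a
        from initial speed v and distance l (valid while v*v >= 2*a*l)."""
        return v / a - math.sqrt((v / a) ** 2 - 2.0 * l / a)

    v, a = 2.0, 1.0                      # normalised parameters from the text
    for l in (0.5, 1.0):
        dt_c = time_to_contact(v, a, l)  # 0.27 s and 0.59 s
        v_coll = v - a * dt_c            # residual speed at contact
        print(f"l = {l} m: time to contact {dt_c:.2f} s, "
              f"collision speed {v_coll:.2f} m/s")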
11.6 Proposal of Design Optimization and Practical Examples
This section proposes a design and control optimization using our danger evaluation method.
11.6.1 Formulating the Design Optimization Method
First, we calculate the cost performance of the safety methods. When the cost of safety method i (i = 1, 2, …, n) is ∆y_i and the increase in the improvement rate is ∆η_i, then the improvement rate per cost φ_i is expressed as Eq. 11.11:

φ_i = ∆η_i / ∆y_i    (11.11)
The improvement rate η_i of safety method i is expressed as Eq. 11.12, which is the increased improvement rate (invested cost y_i × φ_i) plus 1; the 1 is the improvement rate before improving:

η_i = 1 + y_i·φ_i    (11.12)
Practical examples of optimizing the cost distribution are maximizing safety under a fixed cost and minimizing the total cost under fixed safety. These examples use three safety methods: decreasing weight, modifying shape and protective surfacing. The improvement rate per unit cost of each method is derived by our danger evaluation method. The safety method of decreasing weight by replacing the stainless steel of a robot arm (100 × 80 × 300 [mm], ρ_sus = 7.87 [g/cm³]) by duralumin (ρ_dur = 2.80 [g/cm³]) is treated as follows. The danger-index can be expressed as Eq. 11.7, and the improvement rate is derived by Eq. 11.13,
η = α0 / α = (ρ_sus·V·a / Fc) · (Fc / (ρ_dur·V·a)) = 7.87 / 2.80 = 2.81    (11.13)

where V is the volume of the material and a is the acceleration at a collision.
The costs will come to $364, which consists of a material expense of $64 plus wages of $300. The increase in the improvement rate is the value derived by Eq. 11.13 minus 1 (1 is the value of the improvement rate before the improvement). As a result, the improvement rate per cost is expressed as Eq. 11.14.
φ_weight = (2.81 − 1) / 364 = 0.005    (11.14)
By modifying the shape by planing off the four corners (R5), we obtain φ_shape = 0.0034 (∆η_shape = 0.67, ∆y_shape = $200). A protective surfacing of soft material (thickness: 10 [mm], E = 5.0 [MPa], 4 sides) gives φ_surface = 0.0154 (∆η_surface = 2.16, ∆y_surface = $140).
11.6.2 Maximizing Safety Under Fixed Cost
This section solves the problem of maximizing safety under a fixed cost. The optimized cost distribution is obtained by satisfying a maximal total improvement rate Tη (Eq. 11.15) and a constant total cost Y (Eq. 11.16).
Tη = (1 + y_1*·φ_1)(1 + y_2*·φ_2) ⋯ (1 + y_n*·φ_n) : max    (11.15)

y_1* + y_2* + ⋯ + y_n* = Y : const    (11.16)
If the total cost of improving one robot arm is $500, each cost can be obtained by substituting the improvement rates per unit cost shown in the above section into Eq. 11.15 and Y = $500 into Eq. 11.16. The safety can be improved 9.76 times by distributing the $500 among decreasing weight $227.05, modifying shape $132.95 and protective surfacing $140.00; specifically, replacing 62% of the iron with duralumin, chamfering 66% of the corners and covering 100% of the surface with rubber. As a result, it is possible to quantitatively determine the enforcement percentages of the safety methods. Another combination, such as decreasing weight $360.00 (98%) and protective surfacing $140.00 (100%), can increase safety 8.85 times, or decreasing weight $300.00 (83%) and modifying shape $140.00 (100%) can increase safety 3.91 times. These results clarify that the above combination is the best-optimized cost distribution. As a result, this method enables us to quantitatively optimize safety design methods while considering cost, and makes it easy to execute them efficiently.
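The same optimum can be found numerically. The brute-force sketch below searches for the budget split that maximizes Eq. 11.15 under the constraint of Eq. 11.16; the grid step, the variable names and the upper spending limits read from the text are our assumptions.

    # Improvement rate per dollar and maximum spend for the three measures,
    # taken from Eqs. 11.13-11.14 and the surrounding text.
    phi   = {"weight": 0.005, "shape": 0.0034, "surface": 0.0154}
    y_max = {"weight": 364, "shape": 200, "surface": 140}
    budget = 500
    step = 5

    best_total, best_split = 0.0, None
    for yw in range(0, y_max["weight"] + 1, step):
        for ys in range(0, y_max["shape"] + 1, step):
            yu = budget - yw - ys              # remainder goes to surfacing
            if not 0 <= yu <= y_max["surface"]:
                continue
            t_eta = ((1 + yw * phi["weight"]) *
                     (1 + ys * phi["shape"]) *
                     (1 + yu * phi["surface"]))
            if t_eta > best_total:
                best_total, best_split = t_eta, (yw, ys, yu)

    # Prints a split close to the $227 / $133 / $140 optimum quoted above.
    print(best_total, best_split)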
11.6.3 A New Method to Calculate a Safe Approach Motion
This section proposes a new method for calculating a safe approach motion. The new method minimizes the total amount of the danger-index (Eq. 11.17) while considering the tolerable danger-index [12].
∫_0 α(t) dt → min    (11.17)
Fig. 13.5. Mode contribution by number of actions. a Pilot mode; b Point To Point mode; c Relative mode; d Commanus mode
Using Manus with all new modes available at the same time also requires fewer actions and less time to perform the tasks (Fig. 13.5d and Fig. 13.6d), which confirms that the new modes contribute to the reduction of the number of actions and to the reduction of the execution time.
Fig. 13.6. Mode contribution by execution time. a Pilot mode; b Relative mode; c Point To Point Mode; d Commanus mode
13.6 Discussion
During the evaluations, we noticed the following:
• The standard relation between movement and mode showed that the Cartesian, Joint or Pilot modes were generally used to carry out movements of high amplitude. The Relative mode remains suited to the final phase, for example to insert a floppy disk into a computer.
• The visual feedback contributed to simplifying the training phase; this was reflected in the reduction of the training duration: a session of 20 minutes was sufficient for the majority of the users, in contrast to several sessions previously.
• Use of the OT configuration tool, dedicated to defining the control mapping on the input device, by the occupational therapists was appreciated by the users. We noticed that it was necessary to carry out a series of input device configurations, on average 2 to 5 iterations, to arrive at a personalized version for each user.
• The training of the Pilot mode remains difficult because it forces the user to imagine a virtual reference frame fixed on the gripper. But for those who succeeded in this step, the expert users, it permitted a decrease in the execution time of the tasks. Some users even found it more intuitive in comparison with the movement of the hand.
• These results confirmed our former evaluations on the favorable contribution of the Point-to-Point mode.
13.7 Conclusion
The evaluation of the new architecture allowed us to bring some improvements to the system. The first trials with disabled patients showed their interest regarding the newly added modes. The results obtained are only preliminary, and we cannot yet pronounce on the real contribution of the new command architecture modes in the daily use of Manus at home and outside. More evaluations in real-life conditions with the help of disabled people are necessary to test all the new functions offered by the proposed new system. This new architecture allows plugging in different input devices, which can be selected according to each end-user. To offer personalization of the input device, we have developed a user-friendly "OT" tool, which allows users to select any appropriate device and modify the configuration mapping of actions according to each robot mode. Evaluation of the contribution of this on-line mapping and adaptation of input devices has already started, and the results will be presented in the near future. The continuation of this research work is ensured through a new European project, AMOR, which started in May this year and which is a logical continuation of the
Commanus project. The development realized during Commanus will lead to a new command architecture for Manus, which will be integrated in the AMOR¹ project. The aim is to propose a new generation of the Manus robot taking into account the users' requirements.
Acknowledgments The authors would like to thank the users from the rehabilitation hospital of Garches who participated actively in these experiments. Funds for this project were provided by the European Commission.
1 AMOR project EEC Growth program: Mechatronic upgrade & wheelchair integration of the Manus Arm manipulator. Partners involved: Exact dynamics, TNO-TPD and Koningh in the Netherlands, Ideasis and ExpertCam in Greece, Lund University in Sweden, HMC in Belgium, and INT and AFM in France
14 Processes for Obtaining a “Manus” (ARM) Robot within the Netherlands Gert Willem Römer, Harry Stuyt, Geer Peters, and Koos van Woerden
14.1 Introduction
This chapter describes both the current (Anno Domini 2003) and future processes of prescribing the Assistive Robotic Manipulator (ARM, also known as “Manus”) to potential clients/users within the Dutch public health insurance system. In addition, the results of two studies conducted in the Netherlands relevant to the cost-effectiveness of the ARM, the indication criteria, and the targeted user groups are summarized and discussed.
Fig. 14.1. Two ARM users pouring a beer
14.2 Wheelchair Mounted Service Manipulator ARM
The wheelchair mounted service manipulator ARM (also known as "Assistive Robotic Manipulator" and "Manus") is produced by Exact Dynamics in the Netherlands, and assists disabled people having very limited or non-existent hand and/or arm function (see Figs. 14.1 and 14.2).
Fig. 14.2. The ARM and its components.
Table 14.1. Physical characteristics and properties of the ARM

Property: Value
Degrees of Freedom (DoF): 6 + gripper + lift unit (total 8)
Reach: 80 cm + 25 cm (lift unit, optional)
Weight: 13 kg (ARM only), 18 kg (incl. lift unit)
Lift capacity: up to 2 kg
Gripper: 2 fingers, with 3-point grasping finger tips
Gripper clamping force: 20 N
Repeatability: ±1.5 mm
Velocity: 9.9 cm/s (max. joint velocity 30º/s)
Safety features: slip-couplings; limited speed, acceleration, and gripper force; and more
Power supply: 24 V DC, 3 A (max.)
Input devices: joystick, keypad, switches, sip & puff, UniScanner, EasyRider, etc.
Display: 5×7 LED matrix with buzzer
Control modes: Cartesian mode, joint mode
RoI: 1 to 2 years (see Sect. 14.5.2)
The typical ARM user may suffer from muscular dystrophy, multiple sclerosis (MS), cerebral palsy, rheumatism, or spinal-cord lesions. The ARM allows a variety of Activities of Daily Living (ADL) tasks to be carried out in the home, at work, and outdoors. These tasks include drinking from a glass, removing an item from a desk, scratching one's head, discarding an item in a trash receptacle, handling a floppy disk, shopping, or posting a letter. The ARM can be operated using a wide range of input devices that include, but are not limited to, a keypad (sixteen-button,
with a 4×4 grid), or a joystick (including the joystick of the wheelchair). Additionally, a headband- or spectacle-mounted laser pointer, or another specially adapted device, can be devised and constructed to be operated by a non-disabled body part, such as the chin. Table 14.1 lists some technical characteristics of the ARM. Since its commercial introduction more than ten years ago, the ARM has proved to be a safe, efficient, and highly appreciated assistive device. The time needed to complete an ADL task is a characteristic that determines the performance of a rehabilitation robot [4]. That is, if a rehabilitation robot is capable of carrying out tasks quickly, it is a good robot. Table 14.2 lists the typical times, required by trained users, to carry out some typical ADL tasks using the ARM.
Table 14.2. Typical times to complete ADL tasks using the ARM

Task: Time [min]
Pick up a water bottle from a table, fill a glass with water, and place the bottle back on the table: 1.25
Pick up a glass of water from a table, bring it to one's mouth, and take a sip (no straw): 1
Pick up an object from the floor and put it on a table: 1.6
Grab a chip from a plate, bring it to one's mouth, eat the chip, and return the gripper to the plate: 1
Pick up and answer a mobile phone: 1
Training required to teach an inexperienced user how to pick up an object from the floor: 5–10
14.3 The Current Process of Providing an ARM to a User
The distributor for the ARM in the Netherlands is Revalidatietechniek hetDorp (known as RTD), and they are currently the organization responsible for the process of prescribing an ARM for potential users, including:
• informing potential users of the benefits of the ARM,
• establishing the indications for each client based on the indication criteria,
• assessment of the technical modifications required to attach the ARM onto the wheelchair,
• filing of the formal application for reimbursement for the ARM,
• installation of the ARM on the wheelchair,
• training the user.
The manufacturer, Exact Dynamics, is responsible for the service and maintenance of each ARM.
14.3.1 Informing Users about the Benefits of the ARM
Rehabilitation specialists at RTD are familiar with the features and benefits of the ARM for severely handicapped individuals. These specialists inform potential clients, nationwide, of the effectiveness and benefits of the ARM. Potential users have also contacted RTD directly after learning of the existence of the ARM, either through a network of current ARM users, the media, or a rehabilitation center. Once interested in the ARM, a potential client consults one of the specialists at RTD, who then guides them through the rest of the procurement process. Such a specialist is hereinafter referred to as an “ARM-therapist”.
14.3.2 Indication Criteria
The following medical, social, and physical requirements must be met in order for an ARM to be prescribed. The (potential) user must:
• have very limited or non-existent arm and/or hand function, and be unable to carry out ADL tasks independently (without the help of another aid),
• use an electric powered wheelchair,
• have cognitive skills sufficient to learn how to operate and control the ARM,
• have a strong will and determination to gain independence by using the ARM,
• have a social environment, including caregivers, friends, and/or relatives, that encourages the user to become more independent by using the ARM.
A formal indication is determined by an ARM-therapist, based on these criteria, to assess the maximum functional benefit of the ARM for each client.
14.3.3 Stand-Alone Test
If the indication criteria (previous section) are met, the client is given the opportunity to explore and test the ARM in a so-called “stand-alone” setup. In this setup, the ARM is mounted on a self-supporting fixture next to which the wheelchair can be positioned (see Fig. 14.3). This setup allows the client to explore their individual ability to operate the ARM using different control devices. It not only helps the potential client to decide whether or not he or she wants to use the ARM, but also allows the ARM-therapist to determine whether the user is capable of learning how to operate it. In addition, the ARM-therapist determines which technical (electrical and mechanical) modifications to the wheelchair are required to optimize the operation of the ARM. For example, the mounting location of the ARM on the wheelchair must be determined based on the physical characteristics of the client and the design of the individual wheelchair itself. The posture of each user and their field of view also affect the final positioning of an ARM installation. Generally, the ARM is mounted in front of the user, on the left side (Fig. 14.1) or right side (Fig. 14.4) of the wheelchair.
Fig. 14.3. Using a stand-mounted ARM, this user was able to pick up, manipulate, and position a small object, after only 5 to 10 minutes of training
Fig. 14.4. An ARM mounted on the right side of a wheelchair
Also at this point, the most effective input control device for each client is determined and specified (e.g. wheelchair joystick, keypad, chin-control, switches, buttons, etc.). Currently, users in the Netherlands control their ARM with either (their original wheelchair) joystick (about 80%), or the 4×4 keypad (about 20%). The keypad option is preferred by clients that still have some hand function because it offers them a slightly more convenient way to operate the ARM.
14.3.4 Formal Application and Funding of an ARM
Once the previous phase is successfully completed, the ARM-therapist files a formal request for funding of the ARM. Currently, funding for the ARM is not yet provided by the Dutch social security system or the public health insurance system, but by a three-year governmental exploratory grant in the framework of the AWBZ (the Dutch Act of Particular Medical Expenses). This grant, ending in 2004, covers the purchase and supply of approximately 240 ARMs, and includes their five-year service contract, the adaptation of the wheelchair, and the training of the user (Sect. 14.3.6). Each ARM is purchased from and serviced by Exact Dynamics (Sect. 14.3.7). Part of the formal application process for each ARM requires that the user's local municipality, and the associated wheelchair service organization, grant permission for the modification of the wheelchair on which the ARM is to be installed. This is because, according to the WVG (the Dutch Act for Supplies for the Handicapped), local municipalities fund and legally own each user's wheelchair. This formal bureaucratic process can take up to several months and obviously needs to be simplified in the future.
14.3.5 Mounting the ARM on the Wheelchair
Once the funding of the ARM and the permission to adapt the wheelchair have been obtained, the wheelchair is shipped to RTD, where it is fitted with the ARM. This integration includes:
• mechanical adaptations, such as the quick-change mount used to easily attach and detach the ARM,
• electrical modifications, including the ARM's power supply, rewiring of the wheelchair's joystick if necessary, or the functional integration of the ARM with an on-chair Environmental Control Unit (ECU) or scanner.
Depending on the complexity of each installation, the modification can take one to two days. During the ARM installation period, the client may generally apply to receive a replacement wheelchair from their local municipality.
14.3.6 Training
Each client is trained by an ARM-therapist, in their own home, to operate their ARM. Usually, only one session is required to familiarize the user with the operational characteristics of the ARM; safety issues are also discussed at this session. On average, it takes 5 to 10 minutes of training for an inexperienced user to pick up an object for the first time (see Fig. 14.3). Depending on the user, up to six additional training sessions, at two-week intervals, may be necessary to teach tips and tricks for performing ADL tasks efficiently. After three months, the use of the ARM is evaluated, which may result in more training, additional technical modifications to the wheelchair, or the installation of another input control device.
14.3.7 Service and Maintenance The ARM is serviced by Exact Dynamics. Each robot, funded by the exploratory grant (Sect. 14.3.4), is sold with the optional five-year service-contract that includes the annual maintenance, mechanical, electrical, and software updates, repair of (manufacturing) defects, and a helpdesk support line. A user will report malfunction of an ARM to RTD. If the problem cannot be repaired on-site by RTD, the ARM is then shipped to Exact Dynamics for repair. Usually the ARM is returned to RTD within 48 hours for reinstallation.
14.4 The Future Process of Prescribing the ARM
As mentioned previously, the ARM is not yet funded by the Dutch social security or social health insurance systems, but by a governmental exploratory grant (Sect. 14.3.4), which explains the laborious and bureaucratic process of prescribing an ARM for a user. It is expected that the ARM will be on the so-called “prescription list for medical aids” from 2005, which implies that the reimbursement of all costs of the ARM (including wheelchair adaptation, etc.) will be the responsibility of a single organization, i.e. the social health insurance companies. This will greatly simplify the process of prescribing an ARM. RTD currently operates from a single location in the Netherlands. By the end of 2003, four rehabilitation centers, distributed evenly over the northern, southern, eastern, and western regions of the Netherlands, will take over tasks from RTD. These centers are independent from RTD and will be responsible for increasing public awareness of the ARM and informing potential users of the opportunities and possibilities of using the ARM. If a potential user applies for an ARM at his or her social health insurance company, the insurance company will apply for a formal indication (Sect. 14.3.2) at a rehabilitation center. This indication will be carried out at the indication center by a team of specialists consisting of a medical doctor, an ergonomic engineer, a physiotherapist, and a rehabilitation technician from RTD. When a potential client meets the requirements, the stand-alone test (Sect. 14.3.3) will be carried out at a rehabilitation center. The team will determine whether the potential user has sufficient cognitive skills to control the ARM. Once a positive determination is made, the wheelchair will be shipped to RTD for integration of the ARM. It is still uncertain whether training will be offered to each user in group settings at a rehabilitation center, or individually at the user's home. Each user will have the opportunity to test their ARM for three months and will then be evaluated. Once a definite prescription has been made, the regional rehabilitation center will be the intermediary between the user, RTD, and Exact Dynamics.
14.5 Summary of Two Recent Dutch ARM-User Evaluations
This section summarizes and discusses two major Dutch ARM-user evaluations.
14.5.1 User Study Conducted by iRV
In 1999, the Dutch Council of Health Care Insurance (CvZ) commissioned the Dutch Institute for Rehabilitation Issues (iRV) to carry out a study to analyze the target user group, the cost-effectiveness, and the indication criteria of the ARM [1, 2]. It was estimated that the group of potential users in the Netherlands who could benefit from the ARM ranges from about 800 to 2000 individuals. This estimate was based on medical diagnosis, the availability of a powered wheelchair, user age, individual personal characteristics, the intended use of the ARM, and the user's environment. Given the population of the Netherlands (16 million), the number of potential users therefore ranges from 0.005% to 0.0125% of the Dutch population. Indication criteria were formulated and evaluated, which resulted in the criteria described in Sect. 14.3.2. Part of the study was to determine the effect of the ARM on the independence of specific ARM users and on their perception of the changes to their quality of life. The study compared the activities of 13 long-term (> 4 years) ARM users to those of 21 non-ARM users having a comparable level of impairment. The activities of both groups were analyzed with respect to individual levels of independence, required assistance, perceived quality of life, and more. The observations included eating, drinking, self-care activities like washing and brushing teeth, removing objects from the floor or out of a cupboard, feeding pets, and operating typical devices such as a VCR. Statistical evaluation showed that 10% of the users applied the ARM for more than 4 hours per day, 30% for 2 to 2½ hours per day, and 60% for less than 2 hours per day, i.e., about 2 hours per day on average. It was noted that ARM users carried out about 40% more ADL tasks themselves than did the non-ARM users. In addition, ARM users required about 30% less assistance to carry out those tasks, indicating greater independence. For the ARM users, assistance was mainly required to prepare the specific task, like uncorking a bottle of wine, while pouring and drinking the wine was then carried out by the ARM users themselves. Moreover, ARM users reported an increased feeling of independence and autonomy, which led to a higher level of satisfaction and pride when they accomplished these activities unassisted. Although the latter benefits of the ARM cannot be expressed in terms of money, they are of course of great value. Detailed results of this study can be found in de Witte et al. [1] and Gelderblom et al. [2].
Discussion
Dutch ARM users usually have numerous additional ADL aids available, and their homes are furnished with a high degree of home automation. It is therefore likely that an ARM user who lacks these additional aids will (need to) use
the ARM for more than the reported 2 hours per day. The expenses on additional ADL aids could then be saved.
14.5.2 User Study Conducted by hetDorp
In 1998, almost in parallel with the iRV study, the Siza Dorp Group (of which RTD is a subsidiary) started an independent user study [3]. The main focus of this study was to quantify the net savings on the labor costs of ADL caregivers due to the reduced ADL assistance required by ARM users. Eight non-ARM users, ranging in age from 21 to 37, were selected based on the indication criteria (Sect. 14.3.2), provided with an ARM, and trained how to use it. Next, one-week observations of the users took place at 3-month intervals, comprising a total of 4 weeks in 12 months. During these observations, the amount and duration of ARM usage, as well as the amount and duration of ADL assistance, were recorded (see Table 14.3).
Table 14.3. Averaged times (in hours per day) of ADL assistance provided, with and without the use of an ARM [3]

              Without ARM          With ARM             Difference
              min   avg.  max      min   avg.  max      min   avg.  max
Assistance    2.9   3.7   5.8      1.4   2.8   4.8      0.7   1.2   1.8
ARM usage     0     0     0        0.6   1.5   3.7      0.6   1.5   3.7
A wide variation in ARM usage and in the ADL assistance required was noted. This variation was attributed to the cognitive and physical capabilities of each user, including any lack of desire to use the ARM. The results show that, due to the use of the ARM, at least 0.7 to 1.8 hours per day can be saved on the labor of ADL caregivers. With an average hourly rate of 28 euro for Dutch ADL assistance, this results in a saving of 7,154 to 18,396 euro per year. Detailed results of this study can be found in van den Brand and van de Ven [3].
Discussion
It can be argued that the measured reduction of 0.7 to 1.8 hours per day in ADL assistance is conservative, for the following reasons. The group of ARM users that was tested had additional aids available, as well as a high degree of home automation. It is therefore likely that an ARM user lacking these additional aids will use the ARM more and, as a result, will save more on assistance than the reported 0.7 to 1.8 hours per day. Also, expenses for additional ADL aids are then saved. Unfortunately, this study does not report which cognitive and physical limitations impede the use of the ARM, or how. Such information is relevant for improving or modifying the ARM (e.g., a different or optimized input device) so as to result in
increased ARM usage and therefore increased savings on ADL assistance. Also, the individual desire and level of determination to use the ARM are important factors governing the degree of ADL assistance required. It is reasonable to expect that, in the future, once a user has an ARM available, he or she will be required to use it for a minimum number of hours each day to realize the cost benefit. These aspects indicate that a saving on ADL assistance of 2 to 3 hours per day could be achieved. It is expected that the trend of reduced ADL assistance will be amplified by the future introduction of a personal care budget. This method of financial support will offer handicapped individuals the opportunity to select and acquire the level of human or technical aid they desire. The cost of a standard ARM, including a 3-year warranty, is about € 25,000, excluding local sales tax, wheelchair modification, and training. With the estimated savings on ADL assistance of 2 to 3 hours per day, this implies a return on investment of about 1 to 1.5 years. An even more favorable return on investment is obtained when the fact that the ARM user is able to work (again) is incorporated in the calculation. This is especially important for countries without social security or public health insurance systems.
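The annual-savings and return-on-investment figures quoted above follow from simple arithmetic; a 365-day year reproduces the quoted euro totals exactly. The short Python sketch below restates that arithmetic. The hourly rate, hours saved, and device cost are taken from the text; everything else (names, rounding) is illustrative only.

```python
# Back-of-envelope reproduction of the savings and return-on-investment (RoI)
# figures quoted in Sect. 14.5.2 and its discussion. Input values are from the
# text; the calculation itself is only an illustrative sketch.

HOURLY_RATE_EUR = 28        # average Dutch ADL-assistance rate (from the text)
DAYS_PER_YEAR = 365
ARM_COST_EUR = 25_000       # standard ARM incl. 3-year warranty (from the text)

def annual_saving(hours_saved_per_day: float) -> float:
    """Yearly saving on ADL assistance for a given daily reduction in hours."""
    return hours_saved_per_day * HOURLY_RATE_EUR * DAYS_PER_YEAR

# Measured reduction of 0.7 to 1.8 h/day (Table 14.3) -> 7,154 to 18,396 euro/year
print(annual_saving(0.7), annual_saving(1.8))        # 7154.0  18396.0

# Expected reduction of 2 to 3 h/day versus the device cost of 25,000 euro
for hours in (2, 3):
    roi_years = ARM_COST_EUR / annual_saving(hours)
    print(f"{hours} h/day saved -> RoI = {roi_years:.1f} years")
# Prints roughly 1.2 and 0.8 years; compare with the chapter's quoted 1 to 1.5 years.
```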
References
1. de Witte LP et al. (2000) Manus een helpende hand. iRV report
2. Gelderblom GJ et al. (2001) Cost-effectiveness of the MANUS robot manipulator. Proc. of the International Conference on Rehabilitation Robotics (ICORR 2001), pp 340–345
3. van den Brand J, van de Ven A (2000) Onderzoeksrapport project Manus robot manipulator. Siza Dorp Group report
4. Römer GRBE, Johnson M, Driessen B (2003) Towards a performance benchmark for rehabilitation robots. Proc. 1st International Conference on Smart Homes and Health Telematics, September 24–26, 2003, Paris, France, pp 159–164
15 Experimental Analysis of the Proprioceptive and Exteroceptive Sensors of an Underactuated Prosthetic Hand M. Zecca, G. Cappiello, F. Sebastiani, S. Roccella, F. Vecchi, M.C. Carrozza, and P. Dario
Abstract
The development of a prosthetic hand able to replicate as much as possible the grasping and sensory features of the natural hand represents an ambitious project for scientists. State-of-the-art technology is still far from providing engineers with components whose performance is similar to that of their natural models, and active prosthetic hands can only be a pale replication of the missing natural limb. This chapter presents current research efforts towards the development of a self-adaptive and anthropomorphic prosthetic hand. In particular, the chapter focuses on the problem of replicating the natural sensory system of the hand with an artificial proprioceptive and exteroceptive sensory system.
15.1 Introduction
The hand is the end effector of the upper limb, which in humans serves the important function of prehension, as well as being an important organ for sensation and communication [11]. The development of a prosthetic hand able to replicate as much as possible the grasping and sensory features of the natural hand represents an ambitious project for scientists, because of its challenging characteristics and performance: a large number of degrees of freedom (22 DoFs), redundancy and complexity of proprioceptive and exteroceptive sensors, and advanced control [6]. State-of-the-art technology is still far from providing engineers with components whose performance is similar to that of their natural counterparts, and prosthetic hands can only be a pale replication [3]. Commercial hand prostheses have one or two degrees of freedom (DoFs), providing finger movements and thumb opposition. Due to this lack of DoFs, such devices are characterized by low grasping functionality [4]. In order to overcome these limitations, and to enhance the dexterity and usability of myoelectric hand prostheses, a self-adaptive and anthropomorphic prosthetic hand has been developed [9]. In particular, this chapter is focused on the problem of replicating the natural sensory system of the hand with an artificial sensory system designed and fabricated according to a biomechatronic approach [3].
15.2 Mechanical Structure
In general, cosmetic requirements force the engineers to incorporate the prosthetic device in a glove and to keep the size and mass of the entire device compatible with those of the human hand. The combination of robust design goals, cosmetics, and the limitations of available components can be met only with a drastic reduction of DoFs compared to those of the natural hand [2]. Because of this, prostheses are characterized by low grasping functionality and thus do not allow adequate encirclement of objects in comparison with the human hand. This low flexibility and low adaptability of the artificial fingers lead to instability of the grasp in the presence of an external perturbation, as illustrated in [10]. In order to enhance the dexterity of prosthetic hands while keeping an intrinsic actuation solution and a simple control algorithm, we adopted an innovative design approach based on underactuated mechanisms [5, 8]. The result of these efforts is a three-fingered anthropomorphic hand called the RTR II hand (Fig. 15.1) [9].
Fig. 15.1. The RTR II prosthetic hand (on the left) and its actuation and transmission system (on the right)
This hand weighs about 320 grams and has nine DoFs in total, but only two motors. The index and middle fingers are identical (both have three phalanges), while the thumb has two phalanges, as in the human hand. The hand is based on a tendon transmission system (Fig. 15.1). The tension of the tendons generates a torque around each joint, by means of small pulleys, and produces the flexion movement; this transmission structure acts in the same way as the flexor digitorum profundus [5, 7]. The extension movement is realized by torsion springs. The adduction and abduction movements of the thumb are realized by means of a four-bar linkage.
The actuation system consists of two DC motors with different characteristics and functions:
• the first motor (Minimotor S.A., mod. 1727 006 C, with 20/1 Minimotor S.A. gearhead) acts on a slider, providing all the fingers with the flexion/extension movements for the power grasp (a detailed view of the slider is shown in Fig. 15.4);
• the second motor (Minimotor S.A., mod. 1219 006 C, with 10/1 Minimotor S.A. gearhead) provides the adduction and abduction movements of the thumb (positioning grasp, with less power).
15.3 Sensory System
The hand sensory system is the core of the prosthetic device. It is necessary to enable automatic control of grasping tasks without requiring special attention and effort from the user. In addition, the sensory system is designed with the idea of providing the amputee with cognitive feedback about the grasping task being performed [1]. For these reasons, and according to a biomechatronic approach [3], the artificial sensory system is aimed at replicating the natural sensory system, providing both proprioceptive and exteroceptive sensing abilities.
Fig. 15.2. Photograph of the prototype of the RTR II hand
In summary, the sensory system is composed of different sensors (Fig. 15.2):
• proprioceptive position sensors – the position of the slider actuating the tendon transmission is monitored by a Hall-effect sensor, which detects the position of the slider along its stroke during the flexion/extension movements of the fingers, like the physiological angular sensors in the joint capsules [6, 13];
• proprioceptive joint angular position sensors – the angular displacement of the thumb during adduction/abduction movements is measured by a Hall-effect sensor embedded in the joint structure, like the physiological angular joint sensors in the joint capsules [6, 13];
• proprioceptive tendon force sensors – a tension sensor has been fabricated in order to continuously monitor the cable tension applied by the motors, like the Golgi tendon organ in series with the muscle [6, 13];
• exteroceptive force sensors – an artificial mechanoreceptor is obtained by means of an FSR sensor embedded in a silicone cap at the thumb tip. This sensor behaves like the physiological skin mechanoreceptors [6, 13].
The force sensor has been applied only on the thumb tip, which is significantly involved in all the functional grasping tasks [12]. The following subsections describe the sensory system and its performance in detail.
15.4 Materials and Methods
The calibration of the tensiometer and of the FSR pressure sensor has been done using an INSTRON R4464 testing machine (Instron Corporation, Canton, Massachusetts, USA) with a static load. The calibration of the two position sensors has been done manually with a Rupac digital caliper 1165. The data have been pre-processed with custom-made electronic boards. All the signals have been acquired using an acquisition board (National Instruments™ DAQ Card 1200) and processed by a custom LabVIEW™ interface to visualize in real time the output (in volts) versus the applied load or displacement. All data have been saved on a PC for post-processing and further reference.
15.4.1 Slider Position Sensor
A qualitative measurement of the phalanges' positions is obtained by detecting the displacement of the slider, on which a Hall-effect sensor (model SS496B, Honeywell Inc, Freeport, IL, USA) is mounted. Twelve magnets (model 103MG5, Honeywell Inc, Freeport, IL, USA) have been mounted in front of the slider in order to generate a monotonic magnetic field (Fig. 15.2). Thanks to a finite-element (FE) simulation, a configuration of magnets able to generate an appropriate distribution of the magnetic field has been established.
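Because the magnet array produces a monotonic field along the stroke, the controller can map the Hall-sensor voltage to a slider position with a simple calibration table and linear interpolation. The following sketch is illustrative only: the table values are invented and are not measurements from the RTR II hand.

```python
# Illustrative voltage-to-position lookup for a monotonic Hall-sensor characteristic.
# Calibration pairs (voltage in V, slider position in mm) are invented for illustration.
CAL_TABLE = [(1.84, 0.0), (2.16, 5.0), (2.48, 10.0), (2.80, 15.0), (3.12, 20.0), (3.44, 25.0)]

def slider_position_from_voltage(v: float) -> float:
    """Piecewise-linear interpolation of the calibration table (clamped at the ends)."""
    if v <= CAL_TABLE[0][0]:
        return CAL_TABLE[0][1]
    if v >= CAL_TABLE[-1][0]:
        return CAL_TABLE[-1][1]
    for (v0, x0), (v1, x1) in zip(CAL_TABLE, CAL_TABLE[1:]):
        if v0 <= v <= v1:
            return x0 + (x1 - x0) * (v - v0) / (v1 - v0)
    raise ValueError("unreachable for a monotonic table")

print(slider_position_from_voltage(2.30))  # about 7.2 mm
```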
The experimental analysis (Fig. 15.3) confirmed the simulation, and the final on-board calibration showed good linearity and repeatability (enhanced by reducing the machining and assembly tolerances) [9]. With a power supply of 5 V, the output of the sensor can be approximated by:

Vout = 0.0643·Xslider + 1.8371,   R² = 0.9901      (15.1)

where R² is defined as:

R² = 1 − [ Σj (yj − ŷj)² ] / [ Σj (yj)² − Σj (ŷj)² ]      (15.2)

R², the coefficient of determination, is a number from 0 to 1 that reveals how closely the estimated values of the trendline correspond to the actual data. A trendline is most reliable when its R-squared value is at or near 1.
Fig. 15.3. Hall-sensor output voltage versus linear slider stroke
15.4.2 Tendon Tensiometer In the RTR II hand, the transmission cables are fixed on one end to the index and middle distal phalanges and, on the other end, they are connected to the linear slider through the two compression springs of the differential mechanism (Fig. 15.4). The cables act directly on two mobile elements, which compress the springs during the adaptive grasp of an object of irregular shape. The force sensor is obtained by sensorizing a mechanical component acting as a mechanical stop for the cable and able to strain itself under the tension of a grasping cable.
Fig. 15.4. Cross section of the linear slider
In order to obtain an elastic strain of the component and an appropriate mechanical strength, a classic structural analysis with FE methods (using two symmetry planes and a linearity assumption) was used to optimize the dimensions of the cantilever in the design phase. The tendon tensiometer is based on two strain-gauge sensors (model ESU-0251000, Entran Device Inc, Fairfield, NJ, USA). The micromechanical structure has been fabricated to obtain a deformable cantilever (Fig. 15.4), in order to continuously monitor the cable tension applied by the motors, like the Golgi tendon organ in series with the muscle [6, 13]. A cone-shaped tip, fixed to the load cell, has been used to apply the load. A Wheatstone bridge, followed by a signal amplifier and a low-pass RC filter with a cutoff frequency ft = 100 Hz, has been used to detect the variation of the resistance of the two strain gauges. The output of the tensiometer Vout is related to the applied tension Tcable by the following equation:

Vout = 26.349·Tcable − 0.3732,   R² = 0.9996      (15.3)
The sensing device has shown good dynamic behavior, sensitivity, and repeatability (Fig. 15.5); a small hysteresis and time delay have been detected, due to the differential mechanism of the hand (there is a spring under the strained component) [9].
Fig. 15.5. Output response of the tensiometer
15.4.3 Thumb Position Sensor In order to sense the position of the thumb, a round-shaped cap with two magnets (model 103MG5, Honeywell Inc, Freeport, Il, USA) has been assembled at its base, at the center of rotation of the four bar link mechanism providing abduction/adduction capabilities to the thumb (Fig. 15.2). A Hall effect sensor (model SS496B, Honeywell Inc, Freeport, Il, USA), located in front of the cap, determines the angle displacement of thumb metacarpus.
Fig. 15.6. Response of the thumb position sensor
The output of the position sensor Vout is related to the angular position of the thumb θthumb by the following equation:

Vout = 131.1·θthumb − 319.76,   R² = 0.9575      (15.4)
The sensor has an operative range of 30° and has shown good sensitivity and repeatability (Fig. 15.6).
15.4.4 Force Sensor
An FSR pressure sensor (part #400, Interlink Electronics, Camarillo, CA, USA), 5 mm in diameter and 0.3 mm in nominal thickness, has been embedded at the thumb tip: the whole distal phalange, with the FSR at the volar side, was immersed in a thumb-shaped shell containing melted silicone. Once the silicone polymerization was complete, a force-sensitive thumb tip was obtained. The hand was locked with the force sensor facing upwards, and a cylinder (5 mm in diameter), fixed to the load cell of the testing machine, was used to apply the load. The output of the FSR force sensor Vout is related to the applied force FFSR by the following equation:

Vout = −0.2887·ln(FFSR) + 1.2867,   R² = 0.9754      (15.5)
Preliminary experiments have shown low hysteresis and high repeatability (Fig. 15.7). The sensor gives information on the static pressure over a large area (more than 5 mm) and has shown good dynamic characteristics. As a consequence, the developed force sensor can be likened, in some of its features, to the FA II and SA II physiological mechanoreceptors [6, 13].
Fig. 15.7. Output response of the thumb force sensor
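Taken together, Eqs. (15.1) and (15.3)–(15.5) let a controller convert raw sensor voltages into physical estimates by simple algebraic inversion. The sketch below is illustrative only: it assumes the published calibration constants and leaves the engineering units exactly as in the original equations (they are not restated here), and the example readings are invented.

```python
import math

# Illustrative inversion of the published calibration curves (Eqs. 15.1, 15.3-15.5).
# Constants are those reported in the chapter; units follow the original calibrations.

def slider_position(v_out: float) -> float:
    """Invert Eq. (15.1): Vout = 0.0643*X + 1.8371."""
    return (v_out - 1.8371) / 0.0643

def tendon_tension(v_out: float) -> float:
    """Invert Eq. (15.3): Vout = 26.349*T - 0.3732."""
    return (v_out + 0.3732) / 26.349

def thumb_angle(v_out: float) -> float:
    """Invert Eq. (15.4): Vout = 131.1*theta - 319.76."""
    return (v_out + 319.76) / 131.1

def fingertip_force(v_out: float) -> float:
    """Invert Eq. (15.5): Vout = -0.2887*ln(F) + 1.2867."""
    return math.exp((1.2867 - v_out) / 0.2887)

if __name__ == "__main__":
    # Hypothetical raw readings (V); the numbers are made up for illustration.
    print(slider_position(2.5), tendon_tension(1.0), thumb_angle(3.0), fingertip_force(0.9))
```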
15.5 Conclusions
Commercial hand prostheses have just one or two degrees of freedom (DoFs), providing finger movements and thumb opposition. Due to this lack of DoFs, such devices are characterized by low grasping functionality. In order to overcome these limitations, and to enhance the dexterity and usability of myoelectric hand prostheses, a self-adaptive and anthropomorphic prosthetic hand has been developed. In this chapter, the sensory system of this hand, called the RTR II hand, has been described. The proprioceptive and exteroceptive sensory structure has shown good performance in terms of operative range, repeatability, and linearity. At present, experiments on controlling the hand by means of electromyographic (EMG) signals are being carried out, in which the force and position sensors are exploited to implement closed-loop control of the grasping hand, so that the user's involvement is limited to identifying the initial parameters of the grasping task (the required grasping force level and the grasp type). The expected result is an increase in the usability of the prosthesis.
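The closed-loop grasp control described above is reported as ongoing work, so the following is only a speculative sketch of one way such a loop could be organized: a proportional controller drives the flexion motor until the fingertip FSR reading reaches the force level selected by the user. The hardware interface, gain, and timing values are all hypothetical, not part of the RTR II implementation.

```python
# Speculative sketch of a closed-loop grasp-force controller of the kind described
# as ongoing work in Sect. 15.5. The hardware interface (read_fingertip_force,
# set_flexion_motor_speed), the gain, and the timing are all hypothetical.
import time

KP = 0.8              # proportional gain (hypothetical)
MAX_SPEED = 1.0       # normalized motor command limits
DT = 0.01             # control period [s]

def grasp(target_force_n: float, read_fingertip_force, set_flexion_motor_speed,
          tolerance_n: float = 0.1, timeout_s: float = 3.0) -> bool:
    """Close the hand until the fingertip force reaches target_force_n (within tolerance)."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        error = target_force_n - read_fingertip_force()
        if abs(error) <= tolerance_n:
            set_flexion_motor_speed(0.0)
            return True                      # grasp force reached
        command = max(-MAX_SPEED, min(MAX_SPEED, KP * error))
        set_flexion_motor_speed(command)     # positive: close, negative: open
        time.sleep(DT)
    set_flexion_motor_speed(0.0)
    return False                             # timed out before reaching the target
```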
Acknowledgements
This work has been carried out at the RTR Research Centre in Rehabilitation Bioengineering (Viareggio, LU, Italy) of the INAIL Prosthetic Centre, funded by INAIL (National Institute for Insurance of Injured Workers), and originated from a joint initiative promoted by INAIL and Scuola Superiore Sant'Anna. This work is also supported in part by funds from the CYBERHAND Project (“Development of a Cybernetic Hand”, IST-FET Project #2001-35094).
References
1. ARTS, CRIM Labs (2001) The CYBERHAND Project, Development of a Cybernetic Hand. IST-FET Project (#2001 35094)
2. Carrozza MC, Massa B, Dario P, Lazzarini R, Zecca M, Micera S, Pastacaldi P (2002) A two DOF finger for a biomechatronic artificial hand. Technology & Health Care 10: 7–89
3. Carrozza MC, Massa B, Micera S, Lazzarini R, Zecca M, Dario P (2002) The Development of a Novel Prosthetic Hand – Ongoing Research and Preliminary Results. IEEE Trans Mechatronics 7: 108–114
4. Dechev N, Cleghorn WL, Naumann S (2001) Multiple Finger, passive adaptive grasp prosthetic hand. Mechanism Machine Theory 36: 1157–1173
5. Hirose RS, Ma S (1999) Coupled tendon-driven multijoint manipulator. In: Proceedings of the 1999 IEEE Conf. on Robotics and Automation, pp 1268–1275
6. Kandel ER, Schwartz JH, Jessel TM (2000) Principles of Neural Science. McGraw-Hill/Appleton & Lange
7. Kapandji IA (1982) The Physiology of the Joints. Vol. 1: Upper Limb. Churchill Livingstone, Edinburgh
8. Laliberté T, Gosselin CM (1998) Simulation and design of underactuated mechanical hands. Mechanism Machine Theory 33: 39–57
9. Massa B, Roccella S, Carrozza MC, Dario P (2002) Design and development of an underactuated prosthetic hand. In: Proceedings of the 2002 IEEE Conf. on Robotics and Automation, pp 3374–3379
10. Ruthier F, Rancourt D, Gosselin CM (1995) Design of a hand prosthesis based on kinematic principles. In: Proceedings of the 1995 Myoelectric Controls Powered Prosthesis Symposium, pp 53–56
11. Tubiana R (1981) The Hand. W.B. Saunders Company, West Washington Square, Philadelphia
12. Vecchi F, Micera S, Zaccone F, Carrozza MC, Sabatini AM, Dario P (2001) A sensorized glove for applications in biomechanics and motor control. In: Proceedings of the 2001 Conference of the International FES Society
13. Webster JG (1988) Tactile Sensors for Robotics and Medicine. John Wiley & Sons
16 Design and Testing of WREX Tariq Rahman, Whitney Sample, and Rahamim Seliktar
Abstract
A passive, gravity-balanced, 4-DoF arm orthosis was built for children with arm weakness, such as in muscular dystrophy. The orthosis is identically gravity balanced in 3D with linear elastic elements. It has an exoskeletal configuration and is attached to the back of a wheelchair. Algorithms that yield an improved gravity-balancing scheme are derived, and a new mechanical structure of the arm is presented. Our experience with user trials is reported, along with early results using the Jebsen test of hand function and other subjective user feedback.
16.1 Introduction
WREX, the Wilmington Robotic Exoskeleton, has been developed to assist people with muscular weakness in moving their arm in 3D. The arm is gravity balanced for the weight of the person and of WREX. The person's arm is placed in WREX, which then allows him or her to navigate the arm in space with residual strength. WREX is primarily intended for people with muscular dystrophy and spinal muscular atrophy. People with these conditions lose the ability to place their arm in space due to weakness; the distal muscles are less affected, and sensation remains intact. The balanced forearm orthosis (BFO) is among the earliest devices to assist people with arm weakness [1]. The BFO, for the most part, however, is a planar device that does not assist in elevation. The first computerized orthosis was developed at the Case Institute of Technology in the early 1960s [2]. The manipulator was configured as a floor-mounted, four degree-of-freedom, externally powered exoskeleton. Control of this manipulator was achieved using a head-mounted light source to trigger light sensors in the environment. Rancho Los Amigos Hospital continued the Case orthosis work and developed a six-degree-of-freedom, electrically driven “Golden Arm” [3]. The Rancho ‘Golden Arm’ had a similar configuration to the Case arm but no computer control. It was significant, however, in that it was mounted on a wheelchair and was found to be useful by people who had disabilities with intact sensation, resulting from polio or multiple sclerosis. The Rancho ‘Golden Arm’ was controlled at joint level by seven tongue-operated switches, which made operation very tedious. A number of other projects have developed arm orthoses, including the Burke Rehabilitation Center arm [4], the Hybrid Arm Orthosis [5], and the PODEUM system [6]; however, to date the BFO remains the only affordable and realistic option. This project addresses some of the practical issues of design, affordability, and user acceptance of such a device.
16.2 Design of WREX
WREX is a 2-link, 4-DoF exoskeletal arm that is attached to the wheelchair (Fig. 16.1). It is gravity balanced using linear elastic elements. Details of the original structure and equations are given in [7, 8]. The following design changes are proposed based on pilot trials with users [9]:
1. Make the lengths of the links adjustable.
2. Make a custom mount to the wheelchair.
Fig. 16.1. Earlier prototype of WREX mounted on a subject’s wheelchair
Based on user trials in the home, the feedback was that the lengths of the links do not conform to the length of the natural limb. Since our design had three different sizes to accommodate all subjects, it would be impossible to adjust for those that fall between sizes. Therefore, a plan was made to have link sizes that were adjustable. This was impossible with the existing design. Therefore a major redesign of WREX was called for. The original design used off-the-shelf bungee cords with specific stiffnesses. The cords were tied in a loop and mounted on the parallel linkage of the upper and lower links. The bungee passed over four points in order to yield the exact balancing and to conform to the equations, as shown in Fig. 16.2. The four contact points added to
the friction in the movement and added mechanical complexity to the design. It was therefore decided to attempt to have the bungees connected only at two points. A two-point connection would allow the links to be adjusted in length and not compromise the exact balancing constraint. However a two-point connection requires that the resting length of the spring be zero, or the stiffness line pass through the origin. This requirement was impossible to meet since none of the elastic elements tried conformed to these constraints. However, there were some elements whose stiffness characteristics came close to those described above.
Fig. 16.2. The bungee in a loop, which circumvents the zero-resting-length requirement
16.3 Gravity Balancing with x0 ≠ 0
If the resting length x0 of the elastic element is not required to be zero, then a number of available elastic elements can be utilized.
Fig. 16.3. Schematic of WREX with forearm link and upper arm link
The equations of motion for the orthosis are derived as follows, referring to Fig. 16.3. The moment about the elbow for the lower arm link is given by:

Me = m2·g·l2·sin θ2 − K2·[1 − x20 / √(a2² + b2² + 2·a2·b2·cos θ2)]·a2·b2·sin θ2      (16.1)

and the moment about the shoulder for both links is given by:

Ms = m1·g·l1·sin θ1 − K1·[1 − x10 / √(a1² + b1² + 2·a1·b1·cos θ1)]·a1·b1·sin θ1 + 2·m2·g·l1·sin θ1      (16.2)
The following values were chosen to match the dimensions and weights of the children to be tested: m1 = 3.2 kg, m2 = 2 kg, l1 = 252 mm, l2 = 101 mm, K1 = 0.15 kg·m, K2 = 0.046 kg·m, b1 = 229 mm, b2 = 152 mm, a1 = a2 = 25 mm, x10 = x20 = 38 mm. The values m1 and m2 are the weights of the child's upper arm and forearm, respectively, added to the weights of the two links of the orthosis. The weights of the person's upper arm and forearm are taken as 2.7% and 2.2% of body weight, respectively. For this illustration a person weighing 81.6 kg is chosen.
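To make Eq. (16.1) concrete, the sketch below evaluates the elbow moment numerically. It uses the geometric values quoted above, but the spring rate is an assumption: it is chosen as the value that would balance gravity exactly if the resting length were zero. The printout therefore only illustrates how a non-zero resting length x20 leaves a residual moment; it does not reproduce the prototype's stiffness or Fig. 16.4.

```python
import math

# Illustrative evaluation of the elbow moment in Eq. (16.1). Geometry and masses
# are from the text; the spring rate below is NOT the prototype value - it is the
# "ideal" rate m2*g*l2/(a2*b2) that would give exact balance for zero resting length.
g = 9.81                              # m/s^2
m2, l2 = 2.0, 0.101                   # forearm mass [kg] and lever arm [m]
a2, b2, x20 = 0.025, 0.152, 0.038     # spring attachment geometry [m]
K2 = m2 * g * l2 / (a2 * b2)          # assumed spring rate [N/m]

def elbow_moment(theta: float) -> float:
    """Residual elbow moment (Eq. 16.1) in N*m for elbow angle theta in radians."""
    spring_len = math.sqrt(a2**2 + b2**2 + 2 * a2 * b2 * math.cos(theta))
    spring_term = K2 * (1 - x20 / spring_len) * a2 * b2 * math.sin(theta)
    return m2 * g * l2 * math.sin(theta) - spring_term

for deg in range(0, 181, 30):
    th = math.radians(deg)
    gravity_only = m2 * g * l2 * math.sin(th)
    print(f"{deg:3d} deg: without orthosis {gravity_only:5.2f} N*m, "
          f"residual with spring {elbow_moment(th):5.2f} N*m")
```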
Fig. 16.4. Torque at the elbow (in lb·in) required to extend the forearm through 180 degrees (elbow angle theta in radians). The bottom curve is the torque with the orthosis; the top curve is the torque without the orthosis. These figures are for an 81.6 kg person
Fig. 16.5. Torque at the shoulder (in lb·in) due to the weight of the whole arm, versus shoulder angle theta (in radians). The bottom curve is the torque required with the orthosis; the top curve is the torque of the arm without the support of the orthosis
As can be seen from Figs. 16.4 and 16.5, the torques at the shoulder for these particular variables are approximately linear. The amount of non-linearity is insignificant when compared to the sinusoidal curvature obtained when there is no orthosis compensation. These curves are presented for comparison on the same graphs. These dimensions were then implemented on the prototype orthosis, shown in Fig. 16.6.
Fig. 16.6. WREX shown in use by a subject in the home
The new prototype allows the link lengths to be adjusted and the stiffness to be changed by adding or removing Therabands (Smith and Nephew, Germantown, Wisconsin, USA). Therabands are ideal because their elastic behavior is consistent, and they come in various sizes that are easily identifiable by color. The new unit is made from telescoping steel rods, arranged as a parallelogram for the upper arm and a single link for the forearm. The connection from the wheelchair to the origin of the shoulder link is custom made by bending steel tubing to suit the individual subject. The arm trough is also custom made for each subject: the subject's arm is plaster cast, a positive mold is fabricated from the cast, and then a negative polyethylene brace is made. This brace is then attached to the forearm link.
16.4 Clinical Testing
Testing of WREX comprises user trials in the subject's home along with controlled laboratory experiments. The subject is first fitted with a custom unit; this comprises a wheelchair attachment, an arm brace, and length and stiffness adjustments. The subject is then tested without the orthosis on the Jebsen test of hand function. The Jebsen is a standardized instrument made up of a series of timed tasks. These tasks are indicative of activities of daily living such as feeding, writing, and picking up objects [9]. The user is then asked to take WREX home for two weeks. They are encouraged to use it as much as possible in a variety of settings, including school and home. They are then brought back into the lab and tested on the Jebsen with WREX. It is hypothesized that a period of two weeks is sufficient to overcome the learning curve. They are also tested on the WeeFIM, which is a functional independence measure. The WeeFIM is a comprehensive instrument that covers many aspects of disability; for this study only the questions related to grooming and feeding are included. Fifteen subjects have been recruited for the study. Inclusion criteria are a diagnosis of MD or SMA, an arm rating of 2–3 on the manual muscle scale, and use of a wheelchair.
16.5 Results
The early indications presented here are based on feedback from three subjects. The Jebsen test has seven tasks related to activities of daily living: writing, card turning, small-object manipulation, simulated feeding, checker stacking, large light-object manipulation, and large heavy-object manipulation. The simulated feeding and small-object manipulation tasks showed significant improvements in the times for these activities (Table 16.1).
Table 16.1. Scores from the Jebsen test of hand function for the three subjects
          Object manipulation time (s)     Simulated feeding time (s)
Subject   Arm        Arm with WREX         Arm        Arm with WREX
1         21         9                     32         14
2         27         13                    25         15
3         35         15                    38         19
For these two tasks, the time required with WREX was roughly half of that without it. The small-object manipulation consists of picking up bottle tops, paper clips, and coins and placing them in a can. The simulated feeding has the subject picking up dried beans with a spoon and putting them in the same can. Some of the subjective comments were that the subjects could now raise their hand in school and could feed themselves various foods they were unable to eat before, such as spaghetti. They could also perform activities such as building with Lego blocks, playing at sword fighting, and throwing a baseball. The results presented here come from early in the testing process; however, they provide an indication of the potential success of this type of intervention. The clinical trials are ongoing and should yield statistically significant results on the use of WREX.
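A quick calculation over the values in Table 16.1 makes the size of the improvement explicit; the code below only restates the table and adds no new data.

```python
# Per-subject time reductions computed from Table 16.1 (values copied from the table).
jebsen_times = {
    # subject: (object manipulation: arm, arm+WREX, simulated feeding: arm, arm+WREX)
    1: (21, 9, 32, 14),
    2: (27, 13, 25, 15),
    3: (35, 15, 38, 19),
}

for subject, (obj_arm, obj_wrex, feed_arm, feed_wrex) in jebsen_times.items():
    obj_cut = 100 * (obj_arm - obj_wrex) / obj_arm
    feed_cut = 100 * (feed_arm - feed_wrex) / feed_arm
    print(f"Subject {subject}: object manipulation time reduced {obj_cut:.0f}%, "
          f"simulated feeding time reduced {feed_cut:.0f}%")
# All reductions lie between roughly 40% and 60% of the original task time.
```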
Acknowledgments This research has been funded by the Nemours Biomedical Research, and the U.S. Department of Education, Field Initiated Grant # H133E30013. Thanks to Mena Scavina DO, Michael Alexander MD and Alisa Clark RN for the clinical trials.
References
1. Lunsford TR, Wallace JM (1995) The orthotic prescription. In: Goldberg B, Hsu J (eds) Atlas of Orthotic and Assistive Devices-Biomechanical Principles and Applications (Third Edition), Mosby
2. LeBlanc M, Leifer L (1982) Environmental control and robotic manipulation aids. Engineering in Medicine and Biology Magazine, December, pp 16–22
3. Allen JR, Karchak A Jr., Bontrager EL (1972) Final project report, design and fabricate a pair of Rancho anthropomorphic manipulators. Technical Report, The Rancho Los Amigos Hospital Inc., 12826 Hawthorn Street, Downey, CA 90242
4. Stern PH, Lauko T (1975) Modular designed, wheelchair based orthotic system for upper extremities. Paraplegia 12: 299–304
5. Benjuya N, Kenney SB (1990) Hybrid Arm Orthosis. Journal of Prosthetics and Orthosis 2(2): 155–163
6. Galway R, Naumann S, Sauter W, Somerville J (1991) The evaluation of a powered orthotic device for the enhancement of upper-limb movement (PODEUM). Final report submitted to The National Health Research and Development Program of Health and Welfare Canada, Project # 6606-3835-59
7. Rahman T, Ramanathan RR, Seliktar R, Harwin W (1995) A simple technique to passively gravity-balance articulated mechanisms. ASME Transactions on Mechanisms Design 117(4): 655–658
8. Rahman T, Sample W, Seliktar R, Alexander M, Scavina M (2000) A body-powered functional upper limb orthosis. VA Journal of Rehabilitation Research and Development 37(6): 675–680
9. Jebsen RH, Taylor N, Trieschmann RB, Trotter MJ, Howard LA (1969) An objective and standardized test of hand function. Arch Phys Med Rehab 50: 311–319
17 A Concept for Control of Indoor-Operated Autonomous Wheelchair Dimitar Stefanov, Alexander Avtanski, and Z. Zenn Bien
Abstract
The present chapter introduces a navigation algorithm for a wheelchair in a semi-structured home environment. The user sets the end position and orientation of the wheelchair, and the controller autonomously steers the wheelchair from the current position to the goal. The algorithm includes a procedure for the automatic creation of an initial map of the wheelchair's surroundings, which is used afterwards for the composition of a collision-free path from the starting point to the target. Calculation of the present wheelchair position is based on the TV images from ceiling-mounted cameras that detect the wheelchair at all steps of its movement toward the goal. To prevent navigation failures in situations where the wheelchair markers cannot be clearly detected by the cameras, information about the angular rotation of the driving wheels is used to calculate the current position. In order to set the movement task, the user refers to a wheelchair-mounted monitor on which the images from all rooms are presented. After selecting the TV image of the room where the target is located, the user specifies the end position and orientation of the wheelchair by pointing to them directly on the image. The information collected by on-board range sensors during wheelchair operation is used for obstacle avoidance and map updating. The map can be additionally corrected and modified by the user. A special simulator, ROSI (RObotic SImulator), has been designed for computer testing of the proposed algorithm. Finally, some design principles of the simulator and results from the initial evaluation of the algorithm with the simulator are discussed.
17.1 Introduction and Related Works
Autonomous guided wheelchairs (AGW) can be a promising solution for the indoor transportation of older persons and people with severe dexterity limitations. Utilization of such wheelchairs offers the user easy and independent access to different positions in the home. On receiving the user's instruction about the goal, the navigation system autonomously steers the wheelchair to the goal. In order to respond better to the user's needs, most of the recent developments in this research area aim at the design of wheelchairs that not only can follow the path to the goal but also have the ability to compose the route toward the end position and
to automatically modify the initially generated path during task execution if obstacles appear on the originally intended route. AGWs dramatically reduce the number of user commands and significantly decrease the cognitive load on the user. Despite the various solutions for the autonomous guidance of industrial vehicles (such as automatic guided carriers for flexible manufacturing systems, mobile industrial robots, etc.), only a small number of these ideas can be applied to the control of powered wheelchairs that operate in home or office environments, for several important reasons. First, the wheelchair systems should be relatively cheap in order to be affordable to a great number of users. Second, the wheelchair performance should be extremely safe and risk-free for the user, for nearby people and pets, and for the home furniture and appliances. Third, the wheelchair navigation systems should possess a certain level of intelligence in order to be able to perform successful avoidance of obstacles and to operate in semi-structured or unknown environments. Fourth, such systems should be human-friendly and easily controlled by users who may not be technically educated and who may have a certain level of movement disability.
17.1.1 Methods for Navigation
Most of the developments in this area concern wheelchairs that operate in a structured or semi-structured home environment. Regarding their navigation algorithm, AGWs can be classified into two large groups: beacon-based AGWs and AGWs with natural-landmark navigation. The beacon-based navigation systems determine the current wheelchair position by detecting beacons that are strategically placed at pre-defined locations within the home environment, measuring the distances to these beacons, and calculating the angles of the beacon directions. Some recent wheelchair projects apply navigation by natural landmarks of the structured environment, such as furniture edges, doorframe edges, etc. The procedures for landmark identification are usually based on processing the visual information from one or two cameras. The procedure includes identification of the reference edges, calculation of the length of each landmark and of its orientation with respect to the floor plane, and estimation of the landmark's elevation from the floor. The beacon-based systems can be further grouped into systems that refer to active beacons (light-emitting, sound-emitting, or radio-wave-emitting beacons installed on the walls or furniture) and systems that refer to passive beacons. The latter usually utilize specific images whose patterns, dimensions, reflection characteristics, and colors are unique in the environment. Such markers, usually of low cost, can be easily attached to the walls and simply rearranged within the house, but the techniques for their detection typically involve CCD visual sensors and require more complicated computing procedures. As an example of wheelchair navigation based on passive markers, we may mention the wheelchair systems developed at the University of Notre Dame [1, 2]. The wheelchair is navigated through special markers attached to the walls and to the furniture. The reference paths are physically “taught” to the system during the setup procedure. In the
“run” mode, the error between the current estimated position and the reference path is calculated and used to control the wheelchair steering in order to follow the reference path. The user selects the destination and then the wheelchair system drives the chair to the desired location. In the “run” mode, the navigation system not only follows the pre-composed path to the goal but also automatically modifies the route if obstacles on the initial path are detected. After the obstacle is overcome, the wheelchair returns to the original trajectory. The navigation systems with active beacons are usually more resistant to ambient-light artifacts and use simpler hardware for detection of the current position than those with passive markers, but the rearrangement of the active beacons can be a problem, since each active beacon should be separately powered and controlled. The guidepath navigation systems can be considered as a special class of beacon-based systems in which the beacons are embedded in the floor. The guide track can be designed as a reflective or colored tape attached to the floor, as a cable that emits a high-frequency electromagnetic field, as a permanent-magnet array embedded in the floor, or as a magnetic tape with pre-recorded information tracks on it. Although the approach is widely used in many material-handling applications, its utilization in the design of home-operated wheelchairs is limited due to the complexity of the wheelchair movement routes and the requirement for easy reconfiguration of the path. An additional limitation arises from the necessity of embedding the guidepath in the floor. A wheelchair navigated via a magnetic-tape guidepath was reported in [3]. The solution includes a strip of flexible magnetic material attached to the floor surface and a magnetic-stripe follower based on an array of fluxgate or Hall-effect sensors mounted on board the vehicle. The natural-landmark navigation schemes vary from detection of ceiling-mounted lamps and calculation of their positions [4] to detection of doorframes and furniture edges [5, 6]. In the VAHM project, the vertically oriented edges of the home furniture are used as natural landmarks for navigation [7, 8]. Each natural landmark is identified by the metrics of its edges and their distance to the ground. The TAO-1 wheelchair, developed at Applied AI Systems Inc., uses two CCD color cameras for landmark-based navigation and infrared sensors for obstacle detection and collision avoidance [9]. Mobile robot localization by representation of a path as sequential color strings is proposed in [10]. The code of each string includes information about the color and geometric characteristics of the vertical edges of the furniture. In general, the navigation systems based on passive beacons possess greater flexibility than those with active beacons. Since natural-landmark navigation does not require the installation of any special beacons, the user can easily modify the existing travel routes and can add new travel paths without assistance from specialized technical staff. On the other hand, the solutions based on natural landmarks require more sophisticated sensors, involve complex algorithms for analysis of the visual scene, and require more powerful hardware.
Similar to the strategies adopted in the design of mobile robots, most algorithms for autonomous wheelchair guidance run two procedures for estimation of the current wheelchair position simultaneously: a landmark navigation procedure and a dead-reckoning navigation procedure that allows calculation of the
current wheelchair position by memorizing the coordinates of the previously determined position and applying to it the direction and distance traveled since that point. Calculations are usually based on measurement of the rotation angles of the driving wheels, which is typically realized by wheel-embedded encoders. Despite the sensitivity of the dead-reckoning method to errors resulting from wheel slippage and tire deformation, the combined approach significantly improves the wheelchair performance in situations when one or more beacons/landmarks are missing, malfunctioning, or temporarily hidden from the navigation sensors by other objects that lie between them. The odometry information plays a dominant role in the positioning of some autonomous wheelchairs [11, 12]. The wheelchair path can be described as the sequence of the angular positions that the driving wheels take at each point. Such a path representation is quite simple and its computation does not require substantial hardware resources. During the task execution, the odometry module provides only rough position information and the exact wheelchair position is calculated from the information from ultrasonic range sensors or eye-safe infrared scanners. When the wheelchair operates in semi-autonomous navigation mode, the wheelchair control is shared between the user and the automatic controller [13]. In that mode the user sets the general direction of the wheelchair and the controller modifies the user’s commands in order to prevent possible collisions with obstacles. That mode facilitates successful doorway passage and can help users who are unable to give precise commands because of tremor, vision problems, etc. The algorithm is based on processing of the information from wheelchair-mounted proximity sensors. Since the AGW design usually includes sensors for obstacle detection and applies algorithms for automatic avoidance, in most cases the same AGW are used not only in fully autonomous mode but also in semi-autonomous mode. As an example, we may refer to the "NavChair", a wheelchair developed at the Mobile Robotics Lab of the University of Michigan [14, 15]. The user sets the general direction of travel and the NavChair follows it. It automatically avoids obstacles while trying to maintain the user-specified direction as closely as possible. Twelve Polaroid ultrasonic sensors are used for both obstacle detection and wall following. The TinMan supplementary wheelchair controller, developed at the KISS Institute for Practical Robotics (KIPR), is a special control module that sits between the joystick and the existing wheelchair controller and modifies joystick signals in order to avoid obstacles detected by its proximity sensors [16, 17]. The navigation system of the “Wheelesley” wheelchair employs infrared, sonar, and Hall-effect sensors [18]. The user sets the desired movement direction and the controller generates commands for collision avoidance and for centering the chair in the hallway. The Drive assistant system, developed at VTT Machine Automation, Finland, uses ultrasonic sensors for environmental perception and modifies the user’s commands in case of obstacle avoidance [19].
17.1.2 Path Planning and Navigation to the Goal

Approaches to mobile robot navigation can be classified into three categories: model-based approaches, sensor-based approaches, and hybrid approaches [20]. Model-based approaches (also called “global planning methods” [21, 22] or “functional architectures”) refer to a priori knowledge about the environment. The general control strategy is to build a world model, plan actions with respect to goals, and then execute the planned path via the steering system. Techniques for model-based path generation include connectivity graphs [23], Voronoi diagrams, cell decomposition [24, 25], artificial potential fields [26–29], etc. Although these approaches can successfully generate a path from the initial point to the goal, their operation is limited to structured environments and cannot meet the complexity of an environment that changes relatively quickly or cannot be precisely described. Sensor-based approaches (so-called “reactive planning systems” or “reactive control systems”) [30–33] are adaptive to unstructured environments and their operation is based entirely on sensor information. Sensor-based approaches apply a “behavioral architecture” [34], where multiple tasks run in parallel. Each task refers to the robot’s sensory input and forms a stimulus-response mechanism or “behavior”. The behavior set typically includes goal attraction, wall following, and obstacle avoidance, which contribute to successful and safe navigation in a dynamic environment. A drawback of these approaches is that they do not always guarantee success of the mission and the robot may get lost in the environment. The hybrid approaches [35, 36] try to fuse the strengths of both model-based and sensor-based approaches in order to achieve larger capabilities. These methods refer to pre-acquired knowledge of the environment. A planner is used to generate a path from an incomplete global description of the environment and to give sub-goals to a navigator that realizes the local control. This approach is usually realized by a hierarchical control structure that has a set of three enclosing loops: the functional loop, the reactive loop, and the reflexive loop. The functional loop is responsible for path planning and refers to the map that is permanently updated with the information from the proximity sensors. The reactive loop refers to the data of a set of range sensors and deals with local motion, path following, and localization issues. The reflexive loop includes a set of bumper sensors and deals with imminent collision detection; raw sensor data is available within this loop. The layers have priorities in the decision-making process: the reflexive loop has the highest priority, followed by the reactive and the functional loop. Depending on its priority, the command of a certain loop can disable or modify commands of other loops that are in conflict with it (a minimal sketch of such priority-based arbitration is given at the end of this section). The navigation algorithm discussed in this work is based on the hybrid approach. The present chapter proposes a control strategy for an indoor-operated autonomous wheelchair. The wheelchair user can access different places in his/her home by designating only the position and the orientation of the wheelchair at the end point. The wheelchair controller automatically plans the initial path from the current position to the goal and additionally modifies it during the task execution, while building a collision-avoidance strategy. The approach also considers wheelchair operation in a semi-structured environment. The chapter is organized as fol-
lows. In Sect. 17.2, we introduce the navigation problem and the main considerations behind the proposed algorithm. In Sect. 17.3, we explain the proposed localization of the current wheelchair position. The scenario of wheelchair control and the way the operator sets the final position and orientation of the wheelchair are discussed in Sect. 17.4. A navigation system based on the algorithm is described in Sect. 17.5. A computer simulator based on the proposed approach is presented in Sect. 17.6, together with the main assumptions on the modeling of the home environment, the wheelchair kinematics, the sensor modeling, and the control of the wheelchair model. Some simulation results and comments on the wheelchair behavior in some interesting situations are given in Sect. 17.7. Future plans and conclusions appear in Sects. 17.8 and 17.9, respectively.
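To make the priority scheme of the three loops concrete, the following minimal Python sketch shows one possible way to arbitrate their commands. The function name, the command format, and the example velocity values are assumptions introduced only for this illustration; they are not taken from the referenced architectures.

```python
# Hedged sketch: priority arbitration between the reflexive, reactive, and
# functional loops of a hybrid controller. Names and interfaces are illustrative.

def arbitrate(reflexive_cmd, reactive_cmd, functional_cmd):
    """Return the motion command of the highest-priority loop that has one.

    reflexive_cmd  -- e.g. an emergency stop issued on bumper contact (or None)
    reactive_cmd   -- e.g. a local obstacle-avoidance correction (or None)
    functional_cmd -- the next step of the globally planned path (or None)
    """
    if reflexive_cmd is not None:      # highest priority: imminent collision
        return reflexive_cmd
    if reactive_cmd is not None:       # medium priority: local sensor reaction
        return reactive_cmd
    return functional_cmd              # lowest priority: planned path following


# Example: a bumper hit overrides both the avoidance maneuver and the plan.
cmd = arbitrate({"v": 0.0, "w": 0.0}, {"v": 0.2, "w": 0.5}, {"v": 0.4, "w": 0.0})
print(cmd)  # -> {'v': 0.0, 'w': 0.0}
```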
17.2 Conception of Wheelchair Navigation

17.2.1 Problem Statement

After delivery to the user’s home, the wheelchair should be adjusted in accordance with the user’s needs and home specifics. The overall procedure includes initial data collection, creation of a database about the characteristics of the home environment, composition of the wheelchair routines, and fine-tuning of the autonomous wheelchair system. Since the adjustment process includes many special operations, further modification of the existing movement programs can usually be done only by highly qualified programmers and rehabilitation specialists. The user cannot do this by himself. Even in the case of a small re-adjustment of the existing programs, specialists have to visit the user’s home to perform such work. The present research aims at the development of a flexible algorithm for autonomous wheelchair control that allows users with serious movement impairments to customize the initial wheelchair settings by themselves and to access home positions that have not been included in the initial set. The approach allows composition of new movement routines and modification of existing movement programs. Apart from that, the approach concentrates on the problems of automatic organization of a home map and automatic adaptation to an unknown home environment.

17.2.2 Initial Assumptions

17.2.2.1 Assumptions Regarding the House Environment:

1. The user’s house consists of several rooms and the user can move from one room to another by means of a wheelchair with a suitable interface. An appropriate lifting system is available for transferring from the bed to the wheelchair and vice versa.
2. The home environment is semi-structured, i.e. most of the objects are placed at known positions, but during the wheelchair operation some of them may be removed or relocated. New objects can be added as well.
3. The wheelchair can freely enter any room, since either all doors in the operating environment are permanently open or an adequate system operates the door when the wheelchair approaches it.

17.2.2.2 Assumption Regarding the User’s Interface

The interface to be used is customized in accordance with the user’s own movement abilities. Recently, some advanced head-tracking techniques involving facial detection [37, 38] and optoelectronic detection of light-reflective head-attached markers have been proposed [39, 40]. New technologies such as eye-movement control, brain control [41], and gesture recognition [42] give new opportunities for natural, human-friendly interaction and offer new perspectives for efficient human-wheelchair communication.

17.2.2.3 Four Modes of Wheelchair Operation:

1. Navigation to a new goal position. – By means of a suitable interface, the user sets the position to which the wheelchair should go. Afterwards the wheelchair autonomously composes the route to the goal and transports him/her to that position.
2. Pre-programmed mode. – In order to simplify the way of instructing the wheelchair, the user can build his/her own library of predefined goal positions. In order to be transported to one of these positions, the operator just selects it and initiates its automatic execution. This mode facilitates the user’s interaction with the wheelchair and reduces the time for setting instructions.
3. Regime of initial data acquisition. – After installation at the user’s home, the wheelchair should collect some initial information regarding the unknown environment. Such information about the room geometry and obstacle locations is needed for the initial path planning procedures. We assume that the wheelchair is equipped with sensors that detect nearby obstacles and that a suitable algorithm is applied to process the sensor information and to represent it as a graphic image (map) where the contours of the rooms and the obstacles are specified. We consider two variants for gathering data about the home environment: (a) Following a specific algorithm, the wheelchair autonomously moves to different places of the user’s house in order to explore the unknown environment. The maneuvers for exploration of the home environment may be tiring and annoying for the user if he/she rides the wheelchair during the data acquisition process. Therefore we assume that the exploratory maneuvers should be performed in autonomous wheelchair mode. (b) After the wheelchair delivery at the user’s home, a person from the service team directs the wheelchair to different places and “shows” it the new home environment. During that “human-guided intro-
duction” the wheelchair sensors collect information about the geometry and position of different obstacles and walls. The approach requires human involvement but makes the wheelchair teaching process faster. The initial teaching gives only basic information about the home environment. During exploitation of the wheelchair, its sensors continuously monitor the home environment and update the existing database with the latest changes, making it more precise and detailed.
4. Semi-autonomous navigation. – This mode is applied for successful doorway passage and can help users who are not able to provide precise commands because of motor or visual limitations. Based on the information from the proximity sensors, the algorithm modifies the user’s commands and prevents potential collisions with obstacles.
17.3 Localization of the Wheelchair Position

Successful autonomous navigation can be realized if the coordinates of the wheelchair position, the position of the target, and the obstacle positions are represented in one and the same coordinate system. Knowing the wheelchair position is important for two main reasons: 1. The path planning procedure requires the initial wheelchair position. 2. If an obstacle is detected by the wheelchair sensors, its coordinates are expressed with respect to the coordinates and heading of the wheelchair. In order to solve the localization problem, we suppose that a sufficient number of ceiling-mounted TV cameras are installed within the house. The sensing areas of these cameras cover the whole region where the wheelchair and its user may be located. At least one camera detects the wheelchair at each moment of its operation (Fig. 17.1). Since the location of each camera is known a priori, coordinates measured by each camera can easily be re-calculated into a common coordinate system.
Fig. 17.1. Home environment and the ceiling-mounted cameras
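Since the pose of every ceiling-mounted camera is known in advance, a point detected in a camera's local floor-plane frame can be re-calculated into the common house frame with a planar rigid transform. The short sketch below illustrates the idea; the camera pose values and the function name are hypothetical.

```python
import math

def camera_to_house(x_cam, y_cam, cam_pose):
    """Map a point given in a camera's local floor-plane frame into the
    common house frame, using the camera's known 2-D pose (x, y, theta)."""
    cx, cy, theta = cam_pose
    xh = cx + x_cam * math.cos(theta) - y_cam * math.sin(theta)
    yh = cy + x_cam * math.sin(theta) + y_cam * math.cos(theta)
    return xh, yh

# Hypothetical camera mounted at (4.0 m, 2.5 m), rotated 90 deg relative to the house frame.
print(camera_to_house(1.0, 0.0, (4.0, 2.5, math.pi / 2)))  # -> approximately (4.0, 3.5)
```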
More precise and faster localization of the wheelchair can be achieved if special passive markers with specific shape, pattern, and color are attached to it, as illustrated in Fig. 17.2. These markers are arranged so that they can be easily detected by the TV cameras along the possible paths.
Fig. 17.2. A diagram of the applied wheelchair localization. a ceiling-mounted TV cameras measure the cue positions and wheel-embedded encoders measure the rotation of the driving wheels; b cue example
Fig. 17.3. Localization of the current wheelchair position
The wheelchair is considered as a rigid body and its position is then determined by the calculated positions of the attached markers. Since the approaches for calculation of marker positions from a TV image are well researched and applied in many areas (in human movement analysis, for example) [43–45], we will not discuss the way the marker positions are calculated in this chapter. In order to prevent possible navigation failure at positions where the markers cannot be clearly seen by the cameras, dead reckoning can additionally be applied for more precise calculation of the current position. The overall block diagram of the proposed wheelchair navigation scheme is shown in Fig. 17.3. The images from all TV cameras are transferred to two monitors, located on the wheelchair and near the bed, respectively. Apart from navigation purposes, the installed TV system can be used for home surveillance.
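The interplay between the marker-based camera fixes and dead reckoning can be sketched as follows. The class and method names, as well as the way a fix is reported (an absolute pose, or None when the markers are hidden), are assumptions made only for the illustration.

```python
class PoseTracker:
    """Hedged sketch: maintain the wheelchair pose by dead reckoning and
    overwrite it whenever the ceiling cameras deliver a marker-based fix.
    Data formats are assumptions made only for this illustration."""

    def __init__(self, x=0.0, y=0.0, theta=0.0):
        self.x, self.y, self.theta = x, y, theta

    def apply_odometry(self, dx, dy, dtheta):
        # Relative displacement computed from the wheel encoders since the
        # last update (dead reckoning); accumulates drift over time.
        self.x += dx
        self.y += dy
        self.theta += dtheta

    def apply_camera_fix(self, fix):
        # 'fix' is an absolute (x, y, theta) computed from the wheelchair
        # markers seen by a ceiling camera, or None if the markers are hidden.
        if fix is not None:
            self.x, self.y, self.theta = fix


tracker = PoseTracker()
tracker.apply_odometry(0.05, 0.0, 0.01)      # encoder-based step (drifts)
tracker.apply_camera_fix((1.20, 3.40, 0.0))  # absolute correction when markers are visible
```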
17.4 Scenario of the Wheelchair Control

Setting of the target is organized in an easy and comprehensible manner. In order to specify the desired location, the user refers to the wheelchair-mounted monitor where, initially, the images from all rooms are shown. At first, using an appropriate interface, he/she selects the TV image of the room where the desired end location is situated. The same interface is then used for zooming the selected image (or part of it) up to the size of the whole screen, which gives additional convenience in the precise setting of the goal position. By pointing at the TV image, the user sets the goal position. As a result, a colored circle and a flashing line passing through it appear on the image. The circle represents the goal position and the flashing arrow line corresponds to the wheelchair orientation at the goal position. By means of an appropriate command (for instance, by selecting the arrow end and dragging it), the user can rotate the flashing arrow line in order to set the desired wheelchair orientation at the goal position. After setting the desired end position and the final wheelchair orientation, the user applies a new command that initiates the automatic execution of the transportation task. The wheelchair starts to navigate autonomously toward the goal. The sequence of the above procedures for setting the goal position and orientation is illustrated in Fig. 17.4. In most cases, the two-dimensional TV image is not sufficient for precise setting of the desired end position (since the cameras are usually oriented at a certain angle to the wheelchair movement plane, and some obstacles may be located between the TV camera and the desired location). The position specified by pointing on the TV image usually represents a place that is located NEAR the desired position rather than the exact location. Therefore we assume that after arriving near the final position the user will set the exact wheelchair position by direct control. During the execution of the transportation task, the wheelchair sensors collect information about the home environment. In order to be verified by the user, this information is shown on the monitor screen as a plan that represents the contours
of all rooms and obstacles. The current wheelchair location is also specified on the map. The user and the service personnel can modify the map. Wheelchair movement in free-from-obstacle zones can be restricted by marking them on the plan as "banned" (for example, it is not desirable to route the wheelchair through zones that may cause discomfort to the user, such as passages near heaters or powerful fans, open areas during rain, etc.). Alternatively, a zone can be marked as "not free from obstacles" when multiple reflections of the emitted sensor signals cause false sensor readings.
Fig. 17.4. Sequence of setting the goal position and orientation
The automatically generated map can be used not only for verification of the collected information but also as an alternative way of selecting the goal location in the pre-programmed mode of operation. Instead of referring to a list of desired end positions, the user can set the desired location by pointing at it directly on the map (Fig. 17.5). In order to be easily recognized and selected, the pre-programmed targets can be marked on the screen with flashing symbols of varying shapes and colors.
Fig. 17.5. Map of the house and favorite places specified on it; 1, 2, 3, 4, 5 – predefined end positions
17.5 Navigation System

The proposed navigation algorithm can be realized by means of a hierarchical control structure with two layers, named here the subsystem for global navigation and the subsystem for local navigation, respectively (Fig. 17.6). Depending on the position of the switch SW, the wheelchair control system can operate in either fully autonomous mode or semi-autonomous mode. The user can choose the control mode by an appropriate command that activates the switch. When the fully autonomous mode is selected, the output of the human-machine interface (HMI) is connected to the subsystem for global navigation and the user’s instructions regarding the desired end position and orientation are transferred to it. If the semi-autonomous mode (the second state of the switch SW) is set, the signals from the HMI are transferred to the subsystem for local navigation and the user controls the wheelchair directly.
Fig. 17.6. Block diagram of the navigation system
When the wheelchair runs in the fully autonomous mode, the subsystem for global navigation builds the wheelchair’s route to the goal and sends to the subsystem for local navigation a sequence of instructions for the wheelchair movement toward the goal. The subsystem performs the following tasks:
1. Building a map of the home environment. – The map contains information about the positions of the walls, doorways, and obstacles. For its composition, the subsystem for global navigation refers to the data from the TV images, the encoders of the driving wheels, and the sensors for obstacle detection mounted on the wheelchair platform.
2. Calculation of the goal position. – Since the user sets the end position and orientation by pointing on the TV image, the subsystem should “translate” the user’s instruction by calculating the position of the pointer (x, y, Θ) with respect to the coordinates of the same map.
3. Calculation of the current wheelchair coordinates. – The wheelchair position and orientation should be defined in terms of the coordinate system of the map. In order to perform that task, the subsystem for global navigation refers to the information from the TV images. After recognition of the wheelchair-
installed markers in the visual scene and calculation of their positions, the wheelchair position and orientation are computed. In cases when the visual information is insufficient for successful calculation of the current wheelchair position, a dead-reckoning procedure based on analysis of the rotation angles of the driving wheels can be additionally applied.
4. Path planning. – The subsystem for global navigation computes the wheelchair’s route to the goal by referring to the information regarding the goal position, the current wheelchair location, and the location of the free-from-obstacle zones in the existing map. Apart from that, the calculations include a procedure for optimization of the composed path (removal of loops, finding the shortest way to the goal, choice of path segments with maximal width, reference to the user’s preferences, etc.).
5. Generating instructions to the subsystem for local navigation. – During the execution of the movement task, the subsystem for global navigation refers to the planned path and sends to the subsystem for local navigation a sequence of instructions regarding the position and direction that the wheelchair should take in the next step toward the goal. The instructions also contain information about the distance from the current wheelchair location to the goal, which is further used by the subsystem for local navigation to calculate the appropriate wheelchair speed for precise maneuvers and exact positioning at the goal.
6. Map update. – During the task execution, the wheelchair-mounted sensors scan the environment for obstacles. Newly detected obstacles are added to the map and old obstacles whose presence is not confirmed are removed from the same map.
In order to minimize the size of the onboard-mounted hardware, the image processing procedures can be distributed between two computing modules (one mounted onboard and the other stationary), connected via a wireless data link (see Fig. 17.3). The local navigation subsystem receives instructions from the upper hierarchical level and controls the driving wheels of the wheelchair. An array of wheelchair-mounted range sensors supplies the subsystem with information about nearby obstacles. The wheelchair executes the exact movement instructions and follows the initially composed path if no obstacles on the intended route are detected. If the wheelchair sensors signal an unknown obstacle, the subsystem for local navigation starts a maneuver for safe obstacle avoidance and tries to keep the user-specified direction of the wheelchair movement as closely as possible. Path modification is based on the existing map and the information from the range sensors. The information regarding the location and geometry of the new obstacle is sent to the global navigation subsystem for the subsequent map update. In semi-autonomous mode, the local navigation subsystem modifies only those user commands that are not precise and would lead to a collision with walls or obstacles.
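Task 4 above (path planning on the occupancy map) can be realized with any standard grid search. The breadth-first sketch below is only a minimal illustration on a small grid; it omits the optimization steps (loop removal, preference for wide segments, user preferences) mentioned in the text.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Minimal breadth-first path planner on an occupancy grid.
    grid[y][x] == 1 marks an occupied cell, 0 a free cell;
    start and goal are (x, y) cell coordinates. Returns a list of cells
    from start to goal, or None if no free path exists."""
    if grid[goal[1]][goal[0]] == 1:
        return None
    came_from = {start: None}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in came_from):
                came_from[(nx, ny)] = current
                queue.append((nx, ny))
    return None

# Tiny example: a 4 x 3 map with an obstacle partially blocking the way.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(plan_path(grid, (0, 0), (3, 2)))
```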
17.6 Computer Simulation of the Control Algorithm

In order to test the proposed navigation algorithm, a computer simulator named RoSi (Robotic Simulator) was developed. The simulation is conducted for a semi-structured home environment. The user can choose the initial conditions for the planned task (structure of the home environment, obstacle shape and location, starting position, goal positions of the wheelchair, speed, mode of control, etc.). The task execution conforms to the proposed navigation algorithm. The simulator demonstrates the behavior that the real wheelchair would have during execution of the following control tasks:
1. Exploration. – We assume that the floor plan and the obstacle positions are unknown to the wheelchair control system. In order to get enough information regarding the modeled home environment and operate successfully in it, the simulator explores the surroundings in the process of the task execution and creates a map, which represents the locations of all obstacles, the available routes, the positions of the walls, etc.
2. Route planning. – The simulator tries to find a route to the goal. The initially generated route may be inaccurate if the map information is incomplete.
3. Tracking the planned route.
4. Obstacle avoidance. – In order to realize successful navigation to the goal position, the navigation algorithm modifies the initially planned route if any obstacles on it are detected.
5. Wall following. – Moving along a wall while keeping a certain lateral distance from it is a useful component of the obstacle-avoidance strategy. That regime can facilitate smooth passage through narrow corridors and doorways.
6. Control assistance (semi-autonomous mode). – The semi-autonomous control mode can be particularly helpful to a certain category of disabled users who experience difficulties in operating an ordinary powered wheelchair in narrow corridors, in doorways, or near closely located obstacles. In order to let people with such disabilities evaluate the proposed control algorithm, we decided to include this mode in the computer simulation.
In order to explain the operation of the simulator and its structure, we first comment on some significant design issues such as the algorithm for wheelchair movement, the representation of the home environment, and the mechanisms for building the control strategy adopted in the simulator.

17.6.1 Wheelchair Kinematics

The wheelchair movements represented by the simulator correspond to the movement of a four-wheeled wheelchair with two front castor wheels and two rear driving wheels. The scheme of the mobile platform is shown in Fig. 17.7, where the driving wheels are indicated as A and B and the front wheels are denoted as C
and D. The speed ratio of the driving wheels determines the change of the wheelchair movement direction.
Fig. 17.7. Kinematic model of the wheelchair
Let us denote the delta-rotations of the driving wheels by dϕ1 and dϕ2, respectively. The delta-translations of the driving wheels can be determined as:

dr1 = R · dϕ1    (17.1)

dr2 = R · dϕ2    (17.2)

where R is the radius of the driving wheels. The wheelchair displacement dX along the path of travel can be expressed by the equation:

dX = (dr1 + dr2) / 2    (17.3)

The change of the wheelchair heading dΘ is a function of the displacements of the left and right driving wheels:

dΘ = (dr1 − dr2) / b    (17.4)
where b is the distance between the driving wheels. Modeling of the wheelchair and its movement is based on the following assumptions:
1. The wheelchair is modeled as a platform with rectangular shape, as the one shown in Fig. 17.7.
2. The wheelchair can move forward and backward.
3. Unlike most approaches for autonomous navigation, where the robot movement is represented as shifting from one grid cell to another, the present simulation considers a wheelchair model that can move in 5 discrete directions: strong left (LL), left (L), straight (S), right (R), and strong right (RR), as shown in Fig. 17.8. Wheelchair headings are counted relative to the wheelchair platform, not relative to the map coordinates.
4. The movement speed of the wheelchair model is constant and differs for each of the above directions. The straight movement speed is the highest. Movements to the left and right (L and R, respectively) are performed at medium speed. A low speed is chosen for the LL and RR movements. Backward motions are the slowest and equal for all directions.
5. During its movement toward the target, the wheelchair moves mainly forward. Backward movements are used for short periods of time, mainly for recovering from harmful situations.
6. Acceleration and deceleration of the wheelchair model are limited to predefined values. The wheelchair gradually accelerates to the nominal speed and stops smoothly.
7. The wheelchair moves on a flat floor.
8. The wheelchair moves without sliding.
Fig. 17.8. Movement directions of the wheelchair model
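A direct dead-reckoning step according to Eqs. (17.1)–(17.4) is sketched below. The wheel radius, the wheel base, and the encoder increments in the example are hypothetical values chosen only to show the update.

```python
import math

def pose_update(x, y, theta, dphi1, dphi2, R, b):
    """One dead-reckoning step following Eqs. (17.1)-(17.4).
    dphi1, dphi2 -- delta-rotations of the two driving wheels [rad]
    R            -- driving-wheel radius [m]
    b            -- distance between the driving wheels [m]"""
    dr1 = R * dphi1                 # Eq. (17.1)
    dr2 = R * dphi2                 # Eq. (17.2)
    dX = (dr1 + dr2) / 2.0          # Eq. (17.3): displacement along the path
    dTheta = (dr1 - dr2) / b        # Eq. (17.4): change of heading
    x += dX * math.cos(theta)       # project the displacement onto the map axes
    y += dX * math.sin(theta)
    theta += dTheta
    return x, y, theta

# Hypothetical step: R = 0.15 m, b = 0.55 m, one wheel turns slightly faster.
print(pose_update(0.0, 0.0, 0.0, 0.10, 0.12, 0.15, 0.55))
```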
17.6.2 Modeling of the Sensors and Their Arrangement on the Wheelchair Platform

The simulation algorithm is based on the assumption that two types of sensors for obstacle detection are mounted on the wheelchair periphery: range sensors and contact sensors. Range sensors give an approximate distance to the nearest obstacle in the particular direction in which they are oriented. Since the range sensors are fixed to the wheelchair platform and their position and orientation are known, a simple algorithm is applied for calculation of the distance to each obstacle and its angular location relative to the wheelchair platform. In certain situations, the distance sensors may not provide full coverage of the surrounding area. The contact sensors are intended to detect obstacles that have not been revealed by the
distance sensors. Activation of one or more contact sensors discontinues further wheelchair movement in the direction of the obstacle.
Fig. 17.9. Representation of a range sensor in the simulator. a sensor detection area and its zones; b schematic representation of a sensor; c an object in the sensing zone; d schematic representation of the detected obstacle
Autonomous robots usually use ultrasonic, laser-scanning, or position-sensing detectors (PSD) for ranging of nearby obstacles [46]. The detection zone of such range sensors is typically shaped as a cone or pyramid. As will be discussed in Sect. 17.6.3.1, the simulator considers a two-dimensional model of the home environment. That is why the detection zones of the wheelchair-mounted range sensors are also represented in the same model as two-dimensional areas, shaped as triangles as shown in Fig. 17.9. In this chapter we do not refer to a concrete range sensor model. Instead, the computer simulation is based on the assumption that the range sensors have viewing angles of +30º ~ -30º. The distance to the obstacle is resolved into four sensing zones (Z1 ~ Z4) and the state of each zone is represented by a sensing point. A point is considered to be in the "on" state if an obstacle is detected in that zone. In Fig. 17.9 b, the activated points are marked as solid black circles and the non-activated ones as empty circles. Information from the two proximal sensing points (Z1 and Z2) is used for obstacle detection, while the signals from the distal sensing points (Z3 and Z4) are used for building a map of the home environment. The problem of selecting a minimum number of sensors and their optimal arrangement on autonomous guided vehicles (the so-called sensor modeling problem) has been explored for many years, and many methods for its solution have been proposed [47]. The problem is not an objective of the simulator at present. Instead, we assume that the sensor system consists of 7 range sensors and 7 contact sensors, all of them arranged on the wheelchair periphery in the way shown in Fig. 17.10. Dotted lines in the figure indicate the orientation of the sensors.
Fig. 17.10. Arrangement of the range sensors and contact sensors on the wheelchair platform
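The discretization of a measured range into the four sensing zones Z1 ~ Z4 can be sketched as follows; the zone boundary distances are hypothetical, since the chapter does not refer to a concrete range-sensor model.

```python
def zone_states(distance, boundaries=(0.3, 0.6, 1.0, 1.5)):
    """Return the on/off states of the four sensing points Z1..Z4.
    The point of the zone in which the detected obstacle lies is set to 'on';
    the boundary distances (in metres) are hypothetical."""
    states = [False, False, False, False]
    lower = 0.0
    for i, upper in enumerate(boundaries):
        if lower <= distance < upper:
            states[i] = True
            break
        lower = upper
    return states

# An obstacle measured at 0.8 m lies in zone Z3.
print(zone_states(0.8))  # -> [False, False, True, False]
```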
17.6.3 Navigation Algorithm of the Simulator

The navigation task presented by the simulator is to find a path from a start position to a target and traverse it without collision. Navigation may be decomposed into three sub-tasks: (a) mapping and modeling the environment; (b) path planning and selection; and (c) path following and collision avoidance. The relationships between these tasks and their solution in the simulator are discussed in the next three sections.

17.6.3.1 Map of the Indoor Environment

Over the last two decades various approaches to map building have been proposed, such as occupancy grids, free-space maps, composite maps, etc. [48–52]. In the proposed approach, the initial map of the home environment is based upon the readings of the states of the two distal sensing points of each range sensor (Fig. 17.9 b, sensing points 3 and 4). This tactic allows collecting initial information on the obstacles without coming into dangerous proximity to them. The developed simulator adopts grid-based mapping and represents the home environment as a two-dimensional space divided into equal squares. Each square is represented by a pixel. In the first version of the simulator, for simplicity, we used a monochrome bitmap in which the obstacles are denoted by black color and associated with the value “1” and the free space is represented by white color and associated with the value “0”. In this way one pixel is described by just one bit. The grid density is 65536 x 65536 lines, which is quite enough for precise simulation of office and home environments. In order to simplify the simulator algorithm, all three-dimensional obstacles are represented by their projection on the floor. On the occupancy grid each obstacle is embodied by the set of grid cells that are completely or partially occupied by the obstacle (Fig. 17.11). The choice of the map density should be made very carefully. The smaller the grid cells, the more precisely the obstacles are modeled. On the other hand, operation with a large number of cells reduces the calculation speed. The simulator uses a large number of grid nodes and snapping does not result in significant errors. For example, if a map with the same density is applied to represent an apartment with dimensions 12 x 12 meters (144 m² floor area), the map resolution in both the X and Y directions will be 0.18 mm, i.e. the snap error in this particular case will not exceed 0.1 mm. The initial map, composed from the data of the introductory exploration of the new environment, is relatively coarse, but despite its low accuracy it is still sufficient for a successful path planning process. During the wheelchair operation, the information from all sensing zones of the range sensors is analyzed and the resolution of the primary map quickly improves. As a result, the accuracy of the subsequent path planning procedures increases.
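The grid snapping discussed above reduces to a simple quantization of floor-plane coordinates. The sketch below reuses the 12 x 12 m apartment example; the function names are illustrative only.

```python
GRID_LINES = 65536                    # grid density used in the simulator
APARTMENT_SIZE = 12.0                 # metres on each side (the 12 x 12 m example)
CELL = APARTMENT_SIZE / GRID_LINES    # ~0.18 mm per cell

def world_to_cell(x, y):
    """Snap a floor-plane point (in metres) to its occupancy-grid cell."""
    return int(x / CELL), int(y / CELL)

def cell_to_world(ix, iy):
    """Return the centre of a grid cell in metres (snap error below half a cell)."""
    return (ix + 0.5) * CELL, (iy + 0.5) * CELL

ix, iy = world_to_cell(3.25, 7.80)
print(ix, iy, cell_to_world(ix, iy))  # round trip differs by well under 0.1 mm
```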
Fig. 17.11. Representation of obstacles on the map
As already discussed in Sect. 17.4, the proposed approach is based on the idea that the user specifies a goal position by pointing at it directly on the TV image. It is also assumed that the current wheelchair position will be calculated from the TV images. In both tasks the point of interest is represented with respect to the coordinate system of a concrete camera. Since the wheelchair operates in the whole indoor area, it is more convenient if the navigation system refers to one unified map of the whole house rather than to a number of maps that represent only small parts of it. The common map significantly facilitates such important procedures as path planning and wheelchair position tracking. The map merging problem in this chapter is quite similar to the problem of building a map from data collected by different robots that explore different parts of the environment [53]. In contrast, in the particular case the map merging problem is simplified by the facts that all cameras are fixed to the ceiling (considered to be a solid two-dimensional rigid body), the cameras do not change their view angle, and their mutual positions remain the same for the whole period of exploitation of the wheelchair system.

17.6.3.2 Map Update

It is quite likely that some obstacles in the home environment change their positions, appear, or disappear. In order to provide correct functioning of the wheelchair control system, the current map should be updated with the information from the onboard-mounted range sensors. However, in some situations, range sensors can output false signals for obstacle existence. Some possible reasons are:
• Interference between sensors. – It may occur with specific configurations of obstacles or specific surface properties of the detected objects. The activation of one sensor by the emitted signal of another may cause an incorrectly determined distance to the obstacle.
• False signal due to detection of a moving object, which is then treated as a static obstacle.
• Sensor signal caused by random noise.
In order to make the control system more resistant to sensor artifacts, the decision to modify the map by adding or removing an obstacle is made only if the change of the state of a map cell (“occupied” or “free of obstacle”) is repeatedly confirmed. The approach is based on the assumption that the occupancy of a particular cell is not determined in a binary way but depends on a certain weight coefficient ω, whose value is determined as follows:
ω_i = ω_{i-1} + a·k    if 0 < ω_{i-1} + a·k < 1    (17.5)

ω_i = 1    if ω_{i-1} + a·k ≥ 1    (17.6)

ω_i = 0    if ω_{i-1} + a·k ≤ 0    (17.7)

where ω_i is the weight coefficient during the i-th scanning of the cell occupancy, ω_{i-1} is the weight coefficient during the (i−1)-th scanning of the cell occupancy, k is the increment of the change of the weight coefficient, a = +1 if an obstacle is detected in the particular cell during the i-th scanning, and a = −1 if an obstacle is not detected in the particular cell during the i-th scanning. Since the map is monochrome, each cell state (S) can be described as “0” or “1” only. In order to conform to that rule, we make the following assumption:

S_i = S_{i-1}    if 0 < ω_i < 1
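A minimal sketch of the weighted cell update of Eqs. (17.5)–(17.7), together with the derived binary cell state, is given below; the value of the increment k is an assumption made only for the example.

```python
def update_cell(omega, state, obstacle_detected, k=0.25):
    """One scan update of a map cell following Eqs. (17.5)-(17.7).
    omega -- current weight coefficient of the cell, 0 <= omega <= 1
    state -- current binary cell state (0 free, 1 occupied)
    k     -- increment of the weight change per scan (assumed value)"""
    a = 1 if obstacle_detected else -1
    omega = min(1.0, max(0.0, omega + a * k))  # Eqs. (17.5)-(17.7): clamp to [0, 1]
    if omega >= 1.0:          # occupancy repeatedly confirmed
        state = 1
    elif omega <= 0.0:        # absence repeatedly confirmed
        state = 0
    # otherwise the binary cell state is kept unchanged
    return omega, state

# With k = 0.25, four consecutive detections are needed before a free cell
# starting at omega = 0 is marked as occupied on the monochrome map.
omega, state = 0.0, 0
for _ in range(4):
    omega, state = update_cell(omega, state, obstacle_detected=True)
print(omega, state)  # -> 1.0 1
```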
Vol. 294: Benvenuti, L.; De Santis, A.; Farina, L. (Eds) Positive Systems: Theory and Applications (POSTA 2003) 414 p. 2003 [3-540-40342-6] Vol. 295: Kang, W.; Xiao, M.; Borges, C. (Eds) New Trends in Nonlinear Dynamics and Control, and their Applications 365 p. 2003 [3-540-10474-0] Vol. 296: Matsuo, T.; Hasegawa, Y. Realization Theory of Discrete-Time Dynamical Systems 235 p. 2003 [3-540-40675-1] Vol. 297: Damm, T. Rational Matrix Equations in Stochastic Control 219 p. 2004 [3-540-20516-0] Vol. 298: Choi, Y.; Chung, W.K. PID Trajectory Tracking Control for Mechanical Systems 127 p. 2004 [3-540-20567-5] Vol. 299: Tarn, T.-J.; Chen, S.-B.; Zhou, C. (Eds.) Robotic Welding, Intelligence and Automation 214 p. 2004 [3-540-20804-6] Vol. 300: Nakamura, M.; Goto, S.; Kyura, N.; Zhang, T. Mechatronic Servo System Control Problems in Industries and their Theoretical Solutions 212 p. 2004 [3-540-21096-2] Vol. 301: de Queiroz, M.; Malisoff, M.; Wolenski, P. (Eds.) Optimal Control, Stabilization and Nonsmooth Analysis 373 p. 2004 [3-540-21330-9] Vol. 302: Filatov, N.M.; Unbehauen, H. Adaptive Dual Control: Theory and Applications 237 p. 2004 [3-540-21373-2] Vol. 303: Mahmoud, M.S. Resilient Control of Uncertain Dynamical Systems 278 p. 2004 [3-540-21351-1] Vol. 304: Margaris, N.I. Theory of the Non-linear Analog Phase Locked Loop 303 p. 2004 [3-540-21339-2] Vol. 305: Nebylov, A. Ensuring Control Accuracy 256 p. 2004 [3-540-21876-9]