Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Alfred Kobsa University of California, Irvine, CA, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen TU Dortmund University, Germany Madhu Sudan Microsoft Research, Cambridge, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Gerhard Weikum Max Planck Institute for Informatics, Saarbruecken, Germany
6777
Vincent G. Duffy (Ed.)
Digital Human Modeling Third International Conference, ICDHM 2011 Held as Part of HCI International 2011 Orlando, FL, USA, July 2011 Proceedings
Volume Editor Vincent G. Duffy Purdue University School of Industrial Engineering and Department of Agricultural and Biological Engineering 315 N. Grant Street, West Lafayette, Indiana 47907, USA E-mail:
[email protected] ISSN 0302-9743 e-ISSN 1611-3349 ISBN 978-3-642-21798-2 e-ISBN 978-3-642-21799-9 DOI 10.1007/978-3-642-21799-9 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: 2011929223 CR Subject Classification (1998): H.5, H.1, H.3, H.4.2, I.2-6, J.3 LNCS Sublibrary: SL 3 – Information Systems and Applications, incl. Internet/Web and HCI
© Springer-Verlag Berlin Heidelberg 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Foreword
The 14th International Conference on Human–Computer Interaction, HCI International 2011, was held in Orlando, Florida, USA, July 9–14, 2011, jointly with the Symposium on Human Interface (Japan) 2011, the 9th International Conference on Engineering Psychology and Cognitive Ergonomics, the 6th International Conference on Universal Access in Human–Computer Interaction, the 4th International Conference on Virtual and Mixed Reality, the 4th International Conference on Internationalization, Design and Global Development, the 4th International Conference on Online Communities and Social Computing, the 6th International Conference on Augmented Cognition, the Third International Conference on Digital Human Modeling, the Second International Conference on Human-Centered Design, and the First International Conference on Design, User Experience, and Usability. A total of 4,039 individuals from academia, research institutes, industry and governmental agencies from 67 countries submitted contributions, and 1,318 papers that were judged to be of high scientific quality were included in the program. These papers address the latest research and development efforts and highlight the human aspects of design and use of computing systems. The papers accepted for presentation thoroughly cover the entire field of human–computer interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas. This volume, edited by Vincent G. Duffy, contains papers in the thematic area of digital human modeling (DHM), addressing the following major topics:
• Anthropometry applications
• Posture and motion modeling
• Digital human modeling and design
• Cognitive modeling
• Driver modeling
The remaining volumes of the HCI International 2011 Proceedings are: • Volume 1, LNCS 6761, Human–Computer Interaction—Design and Development Approaches (Part I), edited by Julie A. Jacko • Volume 2, LNCS 6762, Human–Computer Interaction—Interaction Techniques and Environments (Part II), edited by Julie A. Jacko • Volume 3, LNCS 6763, Human–Computer Interaction—Towards Mobile and Intelligent Interaction Environments (Part III), edited by Julie A. Jacko • Volume 4, LNCS 6764, Human–Computer Interaction—Users and Applications (Part IV), edited by Julie A. Jacko • Volume 5, LNCS 6765, Universal Access in Human–Computer Interaction— Design for All and eInclusion (Part I), edited by Constantine Stephanidis • Volume 6, LNCS 6766, Universal Access in Human–Computer Interaction— Users Diversity (Part II), edited by Constantine Stephanidis
• Volume 7, LNCS 6767, Universal Access in Human–Computer Interaction— Context Diversity (Part III), edited by Constantine Stephanidis • Volume 8, LNCS 6768, Universal Access in Human–Computer Interaction— Applications and Services (Part IV), edited by Constantine Stephanidis • Volume 9, LNCS 6769, Design, User Experience, and Usability—Theory, Methods, Tools and Practice (Part I), edited by Aaron Marcus • Volume 10, LNCS 6770, Design, User Experience, and Usability— Understanding the User Experience (Part II), edited by Aaron Marcus • Volume 11, LNCS 6771, Human Interface and the Management of Information—Design and Interaction (Part I), edited by Michael J. Smith and Gavriel Salvendy • Volume 12, LNCS 6772, Human Interface and the Management of Information—Interacting with Information (Part II), edited by Gavriel Salvendy and Michael J. Smith • Volume 13, LNCS 6773, Virtual and Mixed Reality—New Trends (Part I), edited by Randall Shumaker • Volume 14, LNCS 6774, Virtual and Mixed Reality—Systems and Applications (Part II), edited by Randall Shumaker • Volume 15, LNCS 6775, Internationalization, Design and Global Development, edited by P.L. Patrick Rau • Volume 16, LNCS 6776, Human-Centered Design, edited by Masaaki Kurosu • Volume 18, LNCS 6778, Online Communities and Social Computing, edited by A. Ant Ozok and Panayiotis Zaphiris • Volume 19, LNCS 6779, Ergonomics and Health Aspects of Work with Computers, edited by Michelle M. Robertson • Volume 20, LNAI 6780, Foundations of Augmented Cognition: Directing the Future of Adaptive Systems, edited by Dylan D. Schmorrow and Cali M. Fidopiastis • Volume 21, LNAI 6781, Engineering Psychology and Cognitive Ergonomics, edited by Don Harris • Volume 22, CCIS 173, HCI International 2011 Posters Proceedings (Part I), edited by Constantine Stephanidis • Volume 23, CCIS 174, HCI International 2011 Posters Proceedings (Part II), edited by Constantine Stephanidis I would like to thank the Program Chairs and the members of the Program Boards of all Thematic Areas, listed herein, for their contribution to the highest scientific quality and the overall success of the HCI International 2011 Conference. In addition to the members of the Program Boards, I also wish to thank the following volunteer external reviewers: Roman Vilimek from Germany, Ramalingam Ponnusamy from India, Si Jung “Jun” Kim from the USA, and Ilia Adami, Iosif Klironomos, Vassilis Kouroumalis, George Margetis, and Stavroula Ntoa from Greece.
This conference would not have been possible without the continuous support and advice of the Conference Scientific Advisor, Gavriel Salvendy, as well as the dedicated work and outstanding efforts of the Communications and Exhibition Chair and Editor of HCI International News, Abbas Moallem. I would also like to thank the members of the Human–Computer Interaction Laboratory of ICS-FORTH, and in particular Margherita Antona, George Paparoulis, Maria Pitsoulaki, Stavroula Ntoa, Maria Bouhli and George Kapnas, for their contribution toward the organization of the HCI International 2011 Conference. July 2011
Constantine Stephanidis
Organization
Ergonomics and Health Aspects of Work with Computers Program Chair: Michelle M. Robertson Arne Aarås, Norway Pascale Carayon, USA Jason Devereux, UK Wolfgang Friesdorf, Germany Martin Helander, Singapore Ed Israelski, USA Ben-Tzion Karsh, USA Waldemar Karwowski, USA Peter Kern, Germany Danuta Koradecka, Poland Nancy Larson, USA Kari Lindström, Finland
Brenda Lobb, New Zealand Holger Luczak, Germany William S. Marras, USA Aura C. Matias, Philippines Matthias Rötting, Germany Michelle L. Rogers, USA Dominique L. Scapin, France Lawrence M. Schleifer, USA Michael J. Smith, USA Naomi Swanson, USA Peter Vink, The Netherlands John Wilson, UK
Human Interface and the Management of Information Program Chair: Michael J. Smith Hans-Jörg Bullinger, Germany Alan Chan, Hong Kong Shin’ichi Fukuzumi, Japan Jon R. Gunderson, USA Michitaka Hirose, Japan Jhilmil Jain, USA Yasufumi Kume, Japan Mark Lehto, USA Hirohiko Mori, Japan Fiona Fui-Hoon Nah, USA Shogo Nishida, Japan Robert Proctor, USA
Youngho Rhee, Korea Anxo Cereijo Roibás, UK Katsunori Shimohara, Japan Dieter Spath, Germany Tsutomu Tabe, Japan Alvaro D. Taveira, USA Kim-Phuong L. Vu, USA Tomio Watanabe, Japan Sakae Yamamoto, Japan Hidekazu Yoshikawa, Japan Li Zheng, P. R. China
Human–Computer Interaction Program Chair: Julie A. Jacko Sebastiano Bagnara, Italy Sherry Y. Chen, UK Marvin J. Dainoff, USA Jianming Dong, USA John Eklund, Australia Xiaowen Fang, USA Ayse Gurses, USA Vicki L. Hanson, UK Sheue-Ling Hwang, Taiwan Wonil Hwang, Korea Yong Gu Ji, Korea Steven A. Landry, USA
Gitte Lindgaard, Canada Chen Ling, USA Yan Liu, USA Chang S. Nam, USA Celestine A. Ntuen, USA Philippe Palanque, France P.L. Patrick Rau, P.R. China Ling Rothrock, USA Guangfeng Song, USA Steffen Staab, Germany Wan Chul Yoon, Korea Wenli Zhu, P.R. China
Engineering Psychology and Cognitive Ergonomics Program Chair: Don Harris Guy A. Boy, USA Pietro Carlo Cacciabue, Italy John Huddlestone, UK Kenji Itoh, Japan Hung-Sying Jing, Taiwan Wen-Chin Li, Taiwan James T. Luxhøj, USA Nicolas Marmaras, Greece Sundaram Narayanan, USA Mark A. Neerincx, The Netherlands
Jan M. Noyes, UK Kjell Ohlsson, Sweden Axel Schulte, Germany Sarah C. Sharples, UK Neville A. Stanton, UK Xianghong Sun, P.R. China Andrew Thatcher, South Africa Matthew J.W. Thomas, Australia Mark Young, UK Rolf Zon, The Netherlands
Universal Access in Human–Computer Interaction Program Chair: Constantine Stephanidis Julio Abascal, Spain Ray Adams, UK Elisabeth André, Germany Margherita Antona, Greece Chieko Asakawa, Japan Christian Bühler, Germany Jerzy Charytonowicz, Poland Pier Luigi Emiliani, Italy
Michael Fairhurst, UK Dimitris Grammenos, Greece Andreas Holzinger, Austria Simeon Keates, Denmark Georgios Kouroupetroglou, Greece Sri Kurniawan, USA Patrick M. Langdon, UK Seongil Lee, Korea
Zhengjie Liu, P.R. China Klaus Miesenberger, Austria Helen Petrie, UK Michael Pieper, Germany Anthony Savidis, Greece Andrew Sears, USA Christian Stary, Austria
Hirotada Ueda, Japan Jean Vanderdonckt, Belgium Gregg C. Vanderheiden, USA Gerhard Weber, Germany Harald Weber, Germany Panayiotis Zaphiris, Cyprus
Virtual and Mixed Reality Program Chair: Randall Shumaker Pat Banerjee, USA Mark Billinghurst, New Zealand Charles E. Hughes, USA Simon Julier, UK David Kaber, USA Hirokazu Kato, Japan Robert S. Kennedy, USA Young J. Kim, Korea Ben Lawson, USA Gordon McK Mair, UK
David Pratt, UK Albert “Skip” Rizzo, USA Lawrence Rosenblum, USA Jose San Martin, Spain Dieter Schmalstieg, Austria Dylan Schmorrow, USA Kay Stanney, USA Janet Weisenford, USA Mark Wiederhold, USA
Internationalization, Design and Global Development Program Chair: P.L. Patrick Rau Michael L. Best, USA Alan Chan, Hong Kong Lin-Lin Chen, Taiwan Andy M. Dearden, UK Susan M. Dray, USA Henry Been-Lirn Duh, Singapore Vanessa Evers, The Netherlands Paul Fu, USA Emilie Gould, USA Sung H. Han, Korea Veikko Ikonen, Finland Toshikazu Kato, Japan Esin Kiris, USA Apala Lahiri Chavan, India
James R. Lewis, USA James J.W. Lin, USA Rungtai Lin, Taiwan Zhengjie Liu, P.R. China Aaron Marcus, USA Allen E. Milewski, USA Katsuhiko Ogawa, Japan Oguzhan Ozcan, Turkey Girish Prabhu, India Kerstin Röse, Germany Supriya Singh, Australia Alvin W. Yeo, Malaysia Hsiu-Ping Yueh, Taiwan
Online Communities and Social Computing Program Chairs: A. Ant Ozok, Panayiotis Zaphiris Chadia N. Abras, USA Chee Siang Ang, UK Peter Day, UK Fiorella De Cindio, Italy Heidi Feng, USA Anita Komlodi, USA Piet A.M. Kommers, The Netherlands Andrew Laghos, Cyprus Stefanie Lindstaedt, Austria Gabriele Meiselwitz, USA Hideyuki Nakanishi, Japan
Anthony F. Norcio, USA Ulrike Pfeil, UK Elaine M. Raybourn, USA Douglas Schuler, USA Gilson Schwartz, Brazil Laura Slaughter, Norway Sergei Stafeev, Russia Asimina Vasalou, UK June Wei, USA Haibin Zhu, Canada
Augmented Cognition Program Chairs: Dylan D. Schmorrow, Cali M. Fidopiastis Monique Beaudoin, USA Chris Berka, USA Joseph Cohn, USA Martha E. Crosby, USA Julie Drexler, USA Ivy Estabrooke, USA Chris Forsythe, USA Wai Tat Fu, USA Marc Grootjen, The Netherlands Jefferson Grubb, USA Santosh Mathan, USA
Rob Matthews, Australia Dennis McBride, USA Eric Muth, USA Mark A. Neerincx, The Netherlands Denise Nicholson, USA Banu Onaral, USA Kay Stanney, USA Roy Stripling, USA Rob Taylor, UK Karl van Orden, USA
Digital Human Modeling Program Chair: Vincent G. Duffy Karim Abdel-Malek, USA Giuseppe Andreoni, Italy Thomas J. Armstrong, USA Norman I. Badler, USA Fethi Calisir, Turkey Daniel Carruth, USA Keith Case, UK Julie Charland, Canada
Yaobin Chen, USA Kathryn Cormican, Ireland Daniel A. DeLaurentis, USA Yingzi Du, USA Okan Ersoy, USA Enda Fallon, Ireland Yan Fu, P.R. China Afzal Godil, USA
Ravindra Goonetilleke, Hong Kong Anand Gramopadhye, USA Lars Hanson, Sweden Pheng Ann Heng, Hong Kong Bo Hoege, Germany Hongwei Hsiao, USA Tianzi Jiang, P.R. China Nan Kong, USA Steven A. Landry, USA Kang Li, USA Zhizhong Li, P.R. China Tim Marler, USA
Ahmet F. Ozok, Turkey Srinivas Peeta, USA Sudhakar Rajulu, USA Matthias Rötting, Germany Matthew Reed, USA Johan Stahre, Sweden Mao-Jiun Wang, Taiwan Xuguang Wang, France Jingzhou (James) Yang, USA Gulcin Yucel, Turkey Tingshao Zhu, P.R. China
Human-Centered Design Program Chair: Masaaki Kurosu Julio Abascal, Spain Simone Barbosa, Brazil Tomas Berns, Sweden Nigel Bevan, UK Torkil Clemmensen, Denmark Susan M. Dray, USA Vanessa Evers, The Netherlands Xiaolan Fu, P.R. China Yasuhiro Horibe, Japan Jason Huang, P.R. China Minna Isomursu, Finland Timo Jokela, Finland Mitsuhiko Karashima, Japan Tadashi Kobayashi, Japan Seongil Lee, Korea Kee Yong Lim, Singapore
Zhengjie Liu, P.R. China Loïc Martínez-Normand, Spain Monique Noirhomme-Fraiture, Belgium Philippe Palanque, France Annelise Mark Pejtersen, Denmark Kerstin Röse, Germany Dominique L. Scapin, France Haruhiko Urokohara, Japan Gerrit C. van der Veer, The Netherlands Janet Wesson, South Africa Toshiki Yamaoka, Japan Kazuhiko Yamazaki, Japan Silvia Zimmermann, Switzerland
Design, User Experience, and Usability Program Chair: Aaron Marcus Ronald Baecker, Canada Barbara Ballard, USA Konrad Baumann, Austria Arne Berger, Germany Randolph Bias, USA Jamie Blustein, Canada
Ana Boa-Ventura, USA Lorenzo Cantoni, Switzerland Sameer Chavan, Korea Wei Ding, USA Maximilian Eibl, Germany Zelda Harrison, USA
Rüdiger Heimgärtner, Germany Brigitte Herrmann, Germany Sabine Kabel-Eckes, USA Kaleem Khan, Canada Jonathan Kies, USA Jon Kolko, USA Helga Letowt-Vorbek, South Africa James Lin, USA Frazer McKimm, Ireland Michael Renner, Switzerland
Christine Ronnewinkel, Germany Elizabeth Rosenzweig, USA Paul Sherman, USA Ben Shneiderman, USA Christian Sturm, Germany Brian Sullivan, USA Jaakko Villa, Finland Michele Visciola, Italy Susan Weinschenk, USA
HCI International 2013
The 15th International Conference on Human–Computer Interaction, HCI International 2013, will be held jointly with the affiliated conferences in the summer of 2013. It will cover a broad spectrum of themes related to human–computer interaction (HCI), including theoretical issues, methods, tools, processes and case studies in HCI design, as well as novel interaction techniques, interfaces and applications. The proceedings will be published by Springer. More information about the topics, as well as the venue and dates of the conference, will be announced through the HCI International Conference series website: http://www.hci-international.org/ General Chair Professor Constantine Stephanidis University of Crete and ICS-FORTH Heraklion, Crete, Greece Email:
[email protected]
Table of Contents
Part I: Anthropometry Applications The Effects of Landmarks and Training on 3D Surface Anthropometric Reliability and Hip Joint Center Prediction . . . . . . . . . . . . . . . . . . . . . . . . . Wen-Ko Chiou, Bi-Hui Chen, and Wei-Ying Chou
3
An Automatic Method for Computerized Head and Facial Anthropometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jing-Jing Fang and Sheng-Yi Fang
12
3D Parametric Body Model Based on Chinese Female Anthropometric Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Peng Sixiang, Chan Chee-kooi, W.H. Ip, and Ameersing Luximon
22
Anthropometric Measurement of the Feet of Chinese Children . . . . . . . . . Linghua Ran, Xin Zhang, Chuzhi Chao, and Taijie Liu
30
Human Dimensions of Chinese Minors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xin Zhang, Yanyu Wang, Linghua Ran, Ailan Feng, Ketai He, Taijie Liu, and Jianwei Niu
37
Development of Sizing Systems for Chinese Minors . . . . . . . . . . . . . . . . . . . Xin Zhang, Yanyu Wang, Linghua Ran, Ailan Feng, Ketai He, Taijie Liu, and Jianwei Niu
46
Part II: Posture and Motion Modeling Motion Capture Experiments for Validating Optimization-Based Human Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Aimee Cloutier, Robyn Boothby, and Jingzhou (James) Yang
59
Posture Reconstruction Method for Mapping Joint Angles of Motion Capture Experiments to Simulation Models . . . . . . . . . . . . . . . . . . . . . . . . . Jared Gragg, Jingzhou (James) Yang, and Robyn Boothby
69
Joint Torque Modeling of Knee Extension and Flexion . . . . . . . . . . . . . . . . Fabian Guenzkofer, Florian Engstler, Heiner Bubb, and Klaus Bengler Predicting Support Reaction Forces for Standing and Seated Tasks with Given Postures-A Preliminary Study . . . . . . . . . . . . . . . . . . . . . . . . . . . Brad Howard and Jingzhou (James) Yang
79
89
XVIII
Table of Contents
Schema for Motion Capture Data Management . . . . . . . . . . . . . . . . . . . . . . Ali Keyvani, Henrik Johansson, Mikael Ericsson, Dan L¨ amkull, and ¨ Roland Ortengren
99
Simulating Ingress Motion for Heavy Earthmoving Equipment . . . . . . . . . HyunJung Kwon, Mahdiar Hariri, Rajan Bhatt, Jasbir Arora, and Karim Abdel-Malek
109
Contact Area Determination between a N95 Filtering Facepiece Respirator and a Headform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhipeng Lei and Jingzhou (James) Yang
119
Ergonomics Evaluation of Three Operation Postures for Astronauts . . . . Dongxu Li and Yan Zhao
129
In Silico Study of 3D Elbow Kinematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kang Li and Virak Tan
139
Implicit Human-Computer Interaction by Posture Recognition . . . . . . . . . Enrico Maier
143
Optimization-Based Posture Prediction for Analysis of Box Lifting Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tim Marler, Lindsey Knake, and Ross Johnson
151
Planar Vertical Jumping Simulation-A Pilot Study . . . . . . . . . . . . . . . . . . . Burak Ozsoy and Jingzhou (James) Yang
161
StabilitySole: Embedded Sensor Insole for Balance and Gait Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Peyton Paulick, Hamid Djalilian, and Mark Bachman
171
The Upper Extremity Loading during Typing Using One, Two and Three Fingers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jin Qin, Matthieu Trudeau, and Jack T. Dennerlein
178
Automatic Face Feature Points Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . Dominik Rupprecht, Sebastian Hesse, and Rainer Blum
186
3D Human Motion Capturing Based only on Acceleration and Angular Rate Measurement for Low Extremities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Christoph Schiefer, Thomas Kraus, Elke Ochsmann, Ingo Hermanns, and Rolf Ellegast Application of Human Modeling in Multi-crew Cockpit Design . . . . . . . . . Xiaohui Sun, Feng Gao, Xiugan Yuan, and Jingquan Zhao A Biomechanical Approach for Evaluating Motion Related Discomfort: by an Application to Pedal Clutching Movement . . . . . . . . . . . . . . . . . . . . . Xuguang Wang, Romain Pannetier, Nagananda Krishna Burra, and Julien Numa
195
204
210
Table of Contents
Footbed Influences on Posture and Perceived Feel . . . . . . . . . . . . . . . . . . . . Thilina W. Weerasinghe and Ravindra S. Goonetilleke Postural Observation of Shoulder Flexion during Asymmetric Lifting Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xu Xu, Chien-Chi Chang, Gert S. Faber, Idsart Kingma, and Jack T. Dennerlein An Alternative Formulation for Determining Weights of Joint Displacement Objective Function in Seated Posture Prediction . . . . . . . . . Qiuling Zou, Qinghong Zhang, Jingzhou (James) Yang, Robyn Boothby, Jared Gragg, and Aimee Cloutier
XIX
220
228
231
Part III: Digital Human Modeling and Design Videogames and Elders: A New Path in LCT? . . . . . . . . . . . . . . . . . . . . . . . Nicola D’Aquaro, Dario Maggiorini, Giacomo Mancuso, and Laura A. Ripamonti
245
Research on Digital Human Model Used in Human Factor Simulation and Evaluation of Load Carriage Equipment . . . . . . . . . . . . . . . . . . . . . . . . . Dayong Dong, Lijing Wang, Xiugan Yuan, and Shan Fu
255
Multimodal, Touchless Interaction in Spatial Augmented Reality Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Monika Elepfandt and Marcelina Sünderhauf
263
Introducing ema (Editor for Manual Work Activities) – A New Tool for Enhancing Accuracy and Efficiency of Human Simulations in Digital Production Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lars Fritzsche, Ricardo Jendrusch, Wolfgang Leidholdt, Sebastian Bauer, Thomas Jäckel, and Attila Pirger
272
Accelerated Real-Time Reconstruction of 3D Deformable Objects from Multi-view Video Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Holger Graf, Leon Hazke, Svenja Kahn, and Cornelius Malerczyk
282
Second Life as a Platform for Creating Intelligent Virtual Agents . . . . . . Larry F. Hodges, Amy Ulinski, Toni Bloodworth, Austen Hayes, John Mark Smotherman, and Brandon Kerr A Framework for Automatic Simulated Accessibility Assessment in Virtual Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nikolaos Kaklanis, Panagiotis Moschonas, Konstantinos Moustakas, and Dimitrios Tzovaras Cloth Modeling and Simulation: A Literature Survey . . . . . . . . . . . . . . . . . James Long, Katherine Burns, and Jingzhou (James) Yang
292
302
312
XX
Table of Contents
Preliminary Study on Dynamic Foot Model . . . . . . . . . . . . . . . . . . . . . . . . . Ameersing Luximon and Yan Luximon
321
Three-Dimensional Grading of Virtual Garment with Design Signature Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Roger Ng
328
A Model of Shortcut Usage in Multimodal Human-Computer Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stefan Schaffer, Robert Schleicher, and Sebastian Möller
337
Multimodal User Interfaces in IPS2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ulrike Schmuntzsch and Matthias Rötting
347
The Application of the Human Model in the Thermal Comfort Assessment of Fighter Plane’s Cockpit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Haifeng Shen and Xiugan Yuan
357
Mass Customization Methodology for Footwear Design . . . . . . . . . . . . . . . Yifan Zhang, Ameersing Luximon, Xiao Ma, Xiaoling Guo, and Ming Zhang
367
Part IV: Cognitive Modeling Incorporating Motion Data and Cognitive Models in IPS2 . . . . . . . . . . . . . Michael Beckmann and Jeronimo Dzaack Study on Synthetic Evaluation of Human Performance in Manually Controlled Spacecraft Rendezvous and Docking Tasks . . . . . . . . . . . . . . . . Ting Jiang, Chunhui Wang, Zhiqiang Tian, Yongzhong Xu, and Zheng Wang Dynamic Power Tool Operation Model: Experienced Users vs. Novice Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jia-Hua Lin, Raymond W. McGorry, and Chien-Chi Chang An Empirical Study of Disassembling Using an Augmented Vision System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Barbara Odenthal, Marcel Ph. Mayer, Wolfgang Kabuß, Bernhard Kausch, and Christopher M. Schlick
379
387
394
399
Polymorphic Cumulative Learning in Integrated Cognitive Architectures for Analysis of Pilot-Aircraft Dynamic Environment . . . . . . . . . . . . . . . . . . Yin Tangwen and Shan Fu
409
A Context-Aware Adaptation System for Spatial Augmented Reality . . . Anne Wegerich and Matthias Rötting
417
Table of Contents
XXI
Using Physiological Parameters to Evaluate Operator’s Workload in Manual Controlled Rendezvous and Docking (RVD) . . . . . . . . . . . . . . . . . . Bin Wu, Fang Hou, Zhi Yao, Jianwei Niu, and Weifen Huang
426
Task Complexity Related Training Effects on Operation Error of Spaceflight Emergency Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yijing Zhang, Bin Wu, Xiang Zhang, Wang Quanpeng, and Min Liu
436
The Research of Crew Workload Evaluation Based on Digital Human Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yiyuan Zheng and Shan Fu
446
Part V: Driver Modeling A Simulation Environment for Analysis and Optimization of Driver Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ola Benderius, Gustav Markkula, Krister Wolff, and Mattias Wahde
453
Learning the Relevant Percepts of Modular Hierarchical Bayesian Driver Models Using a Bayesian Information Criterion . . . . . . . . . . . . . . . . Mark Eilers and Claus Möbus
463
Impact and Modeling of Driver Behavior Due to Cooperative Assistance Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Florian Laquai, Markus Duschl, and Gerhard Rigoll
473
Predicting the Focus of Attention and Deficits in Situation Awareness with a Modular Hierarchical Bayesian Driver Model . . . . . . . . . . . . . . . . . . Claus Möbus, Mark Eilers, and Hilke Garbe
483
The Two-Point Visual Control Model of Steering - New Empirical Evidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hendrik Neumann and Barbara Deml
493
Automation Effects on Driver’s Behaviour When Integrating a PADAS and a Distraction Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fabio Tango, Luca Minin, Raghav Aras, and Olivier Pietquin
503
What is Human? How the Analysis of Brain Dynamics Can Help to Improve and Validate Driver Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sebastian Welke, Janna Protzak, Matthias Rötting, and Thomas Jürgensohn
513
Less Driving While Driving? An Approach for the Estimation of Effects of Future Vehicle Automation Systems on Driver Behavior . . . . . . . . . . . . Bertram Wortelen and Andreas Lüdtke
523
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
533
The Effects of Landmarks and Training on 3D Surface Anthropometric Reliability and Hip Joint Center Prediction Wen-Ko Chiou1, Bi-Hui Chen2, and Wei-Ying Chou1 1 2
Chang Gung University, Wen-Hwa 1st Road, Kwei-Shan Tao-Yuan, Taiwan, 333, R.O.C. Chihlee Institute of Technology, Sec. 1, Wunhua Rd., Banciao District, New Taipei City, Taiwan, 313, R.O.C.
[email protected],
[email protected],
[email protected] Abstract. Deforming 3D scanned data is an important and necessary procedure for the development of dynamic three-dimensional (3D) scanned anthropometry. The inaccuracies in joint center will cause error in deformation. Bell et al. developed the equations to predict hip joint center (HJC) based on anthropometric measurement of inter-anterior superior iliac spine distance (IAD). However, no previous study has reported on the reliability of IAD measurements in 3D scanned data, and therefore the effect on HJC estimates needs to be determined. Four measurers (2 trained/ 2 untrained) were recruited into this study to collect measurements of IAD in 3D scanned data under two situations (with/ without landmarks). The intra-class correlation (ICC) and technical error of measurement (TEM) were used to assess the reliability of the measurements. Results showed the untrained group had the lowest reliability and validity of IAD measurement in the without landmarks situation, and the error of HJC prediction in this situation was significantly higher than in the other situations (p 0.001). Both of training and use of landmarks improved the validity of measurement and HJC prediction; compared with training alone, attaching landmarks can significantly improve the reliability of measurement.
<
Keywords: Reliability; Hip joint center; Three-dimensional body scanner; Trained; Landmarks.
1 Introduction With the growing application requirements of dynamic three-dimensional (3D) scanning data, an accurate and reliable method to deform these data needs to be defined. The accurate identification of joint centers is of great importance for 3D scanning data deformation. Studies have shown that inaccuracies in identifying joint centers will cause joint translations, and have a considerable influence on body kinematics and kinetics [1-2]. The hip, knee, and ankle joints are used to define the anatomical frame of the lower extremities [3]. While the joint centers of the knee and ankle are easier to locate, the deeply located hip joint is not easily identified. Predictive methods are commonly used to estimate the hip joint center (HJC) clinically. Bell, Pedersen, & Brand [4] described a method to estimate the location of the HJC in all three planes (x, y, z) using a fixed percentage of the inter-anterior superior iliac spine distance (IAD). However, the reliability of these measurements in 3D scanning data and the effect of IAD measurement results on HJC estimates have yet to be determined. ISO 20685:2010 addresses protocols for the use of 3D body scan systems in the acquisition of body shape data and measurements that can be extracted from 3D scanning data. To reduce error in 3D scanning, this international standard proposes attaching reflective landmarks to the skin over anatomical landmarks prior to scanning. Landmarks are tools which help in characterizing both the size and the shape of skeletal structures in human populations. Previous studies indicated that most of the anatomical landmarks are difficult to detect without palpating the body and placing a reflective landmark on the site prior to scanning [5]. This procedure can improve the reliability and accuracy of the anthropometric results. However, this method requires trained staff to attach the landmarks to the subjects, at a considerable cost in time and money. Moreover, the effect of the landmarks on the 3D surface anthropometric reliability of IAD has yet to be determined. This paper investigates the effect of landmarks on the measurement reliability of IAD and HJC prediction. Previous studies have described the effects of anthropometric training on measurement reliability. Almost all previous studies of anthropometric measurements used trained staff to take the measurements [6-9]. Sebo, Beer-Borst, Haller, & Bovier [10] also demonstrated that the reliability of measurements improved after a one-hour training in anthropometric measurements. However, most staff engaged in 3D scan image deformation have little opportunity to receive training in human anatomy. This may raise issues regarding the reliability and accuracy of 3D surface anthropometrics. This paper separates the measurers into two groups, a trained group and an untrained group, to investigate the effects of landmarks on the measurement reliability of IAD and HJC prediction in the two groups. The purposes of this study are to quantify the 3D surface anthropometric reliability of IAD measurement, and to report on the effect of IAD measurement differences on HJC prediction, calculated from the equations developed by Bell et al. [4]. We compare the results in four groups: no landmarks / no training (NLNT); no landmarks / trained (NLT); landmarks / no training (LNT); landmarks / trained (LT). The effect on the measurement reliability of IAD and the prediction error of HJC is discussed.
2 Methods 2.1 Data Collection of 3D Whole Body Scanned Models In this study, the Chang Gung Whole-Body Scanner (CGWBS) was utilized to capture the intricacies of the human body [11]. The CGWBS scans a maximum cylindrical volume of 190 cm in height and 100 cm in diameter, which accommodates most human subjects. The CGWBS system is designed to withstand shipping and repeated use without the need for alignment or adjustment.
Twenty subjects (9 male, 11 female), with a mean (SD) age of 21.4 (0.7) years, weight of 56.8 (9.37) kg, and height of 166.1 (1.7) cm were recruited into this study. The subjects had no history of lower limb problems, and gave informed consent to participate in this study, which was approved by the Chang Gung Memorial Hospital Institutional Review Board (no: 97-2538B). The subjects, wearing a standard garment and cap, stood erect with their shoulders held in 20° to 30° of abduction and feet shoulder-width apart. During scanning, the subjects had to hold their breath for about 10 s. All procedures and parameters were in accordance with the previous study [11]. Each subject completed two trials of scanning. The relevant anatomical landmarks of the right and left antero-superior iliac spine (ASIS) were marked in the first trial by a trained researcher. The line distance of IAD was measured in eight of the subjects by the trained researchers (using a Martin-type anthropometer) after scanning. 2.2 Data Collection from Scanned Data Four measurers were recruited for this study. Two were familiar with human anatomy (trained measurers); the other two had not received any training in human anatomy (untrained measurers). Following a simple introduction to human anatomy and the location of the ASISs through three pictures from anatomy textbooks [12], the measurers were asked to locate the coordinates of the right and left ASISs (x, y, z) on the scan images in two trials. The software then calculated the line distance of IAD from the right and left ASIS coordinates, and the test-retest reliability of the IAD measurement was calculated. Anthro3D software (Logistic Technology, Taiwan) was used to perform these location tasks on the 3D scanning data (Fig. 1) and to calculate the line distance of IAD. The x axis was defined as the anterior-posterior direction, the y axis as the medial-lateral direction, and the z axis as the superior-inferior direction. The coordinates of the HJC (x, y, z) were calculated using the equations presented by Bell et al. [4]. With the coordinate system origin located halfway between the right and left ASISs, the equations estimate the HJC at 22% IAD posterior, 30% IAD inferior, and 36% IAD lateral to the point of origin.
Fig. 1. (a) After the coordinates of the right ASIS (point X) and left ASIS (point X’) are located on the scan images without landmarks, the software calculates the IAD (line XX’). (b) After the coordinates of the right ASIS (point Y) and left ASIS (point Y’) are located on the scan images with landmarks, the software calculates the IAD (line YY’).
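To make the prediction step concrete, the following is a minimal sketch of how HJC coordinates could be derived from two digitized ASIS points using the Bell et al. [4] percentages quoted above. It is an illustration, not the authors' implementation: the function name is hypothetical, and the sign conventions (right hip; +x anterior, +y toward the subject's left, +z superior) are assumptions that would have to match the scanner's coordinate frame.

```python
import numpy as np

def predict_hjc(right_asis, left_asis):
    """Estimate the right hip joint center from the right and left ASIS
    coordinates, using the Bell et al. percentages of IAD.

    Assumed axes: x = anterior-posterior (+x anterior), y = medial-lateral
    (+y toward the subject's left), z = superior-inferior (+z superior).
    For the left hip, the medial-lateral offset changes sign.
    """
    right_asis = np.asarray(right_asis, dtype=float)
    left_asis = np.asarray(left_asis, dtype=float)
    iad = np.linalg.norm(left_asis - right_asis)  # inter-ASIS distance
    origin = (right_asis + left_asis) / 2.0       # midpoint between the ASISs
    # HJC lies 22% of IAD posterior, 36% lateral, 30% inferior to the origin.
    return origin + np.array([-0.22 * iad, -0.36 * iad, -0.30 * iad])
```

Because every offset is a fixed percentage of IAD, any error in the measured IAD propagates linearly into all three HJC coordinates, which is why the measurement reliability examined below matters for the prediction.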
2.3 Statistical Analyses This study calculated the IAD from the coordinates of the right and left ASISs to assess the intra-measurer and inter-measurer reliability of the four situations (NLNT, NLT, LNT, and LT). Intraclass correlation (ICC) was used for this purpose [13-15]. Good to excellent reliability was accepted at an ICC of 0.75 or higher, as in previous research [16]. For the inter-measurer reliability, the data gathered from each measurer's first trial measurements were compared with the second trial measurements. For the intra-measurer reliability, the data gathered from trained measurer 1 were compared with trained measurer 2, and the data gathered from untrained measurer 1 were compared with untrained measurer 2, using the first-trial measurement data.

The technical error of measurement (TEM) was also used to verify the degree of imprecision when performing and repeating anthropometrical measurements (within-measurer) and when comparing them with measurements from other measurers (between-measurer). It is the most common way to express the error margin in anthropometry. The values of TEM in this study were calculated with the commonly used formula. The lower the TEM obtained, the better the precision of the examiner in performing the measurement.

ISO 20685:2010 describes the methodology for validating measurements extracted from 3D scanning images. The standard for accuracy is the result of the corresponding traditional measurement, taken by a skilled anthropometrist. The difference between an extracted measurement and the corresponding traditional measurement on actual subjects should be calculated to show the accuracy of the extracted measurement. In this study, the IAD of eight subjects was measured by the trained researchers using a Martin-type anthropometer. The differences between the results of the traditional measurement and the measurements extracted from the 3D scanning images were calculated to show the accuracy of the IAD measurements.

A one-way ANOVA (SPSS 16.0 for Windows, 2008) was performed to detect the group effect (NLNT, NLT, LNT, and LT) on IAD measurement and on the signed/absolute errors of HJC prediction. The coordinates of the HJC (x, y, z) were calculated using the equations presented by Bell et al. [4]. Duncan's post hoc test was used to detect which group was significantly different from the others. The significance level was set at 0.001. The IAD results derived from the traditional measurement were used to calculate the HJC coordinates of the eight subjects (as a gold standard), using Bell's equations [4]. The differences between the results gathered from the four groups and the traditional measurement were calculated to represent the HJC prediction errors. The differences in absolute distance along each axis were compared to determine the HJC prediction errors of the four groups in the three anatomical planes (x/y/z). The differences in signed distance were also compared to determine the general direction of the difference.
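For reference, the commonly used TEM formula mentioned above can be written out; in the form given by Perini et al. [17], with $d_j$ the difference between the two paired measurements for subject $j$ and $N$ the number of subjects,

$$\mathrm{TEM} = \sqrt{\frac{\sum_{j=1}^{N} d_j^{2}}{2N}}, \qquad \text{relative TEM} = \frac{\mathrm{TEM}}{\bar{x}} \times 100\%$$

where $\bar{x}$ is the mean of all the measurements being compared.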
3 Results 3.1 Reliability of IAD Measurement Table 1 summarizes the TEM results of IAD measurement. Generally, the intra-measurer TEMs were greater than the inter-measurer TEMs. NLNT has the highest intra-measurer and inter-measurer TEMs (16.8 mm and 13.2 mm), and NLT has the second highest (11.8 mm and 7.1 mm). LT shows the smallest intra-measurer and inter-measurer TEMs (2.3 mm and 2.1 mm); it is interesting to note that Duncan's post hoc test showed no significant difference between LNT and LT.

Table 1. TEM results of IAD measurement (n=20)

                           NLNT (1)   NLT (2)   LNT (3)   LT (4)   Duncan's post hoc test
Intra-measurer TEM (mm)      16.8       11.8      5.2       2.3     1>2>3>4
Relative TEM (%)              7.0        5.5      2.2       1.2     1>2>3=4
Inter-measurer TEM (mm)      13.2        7.1      2.7       2.1     1>2>3=4
Relative TEM (%)              4.9        3.0      1.0       0.9     1>2>3=4
Fig. 2. The intra-measurer and inter-measurer ICCs of IAD measurement (n=20)
The inter-measurer ICCs were higher than the intra-measurer ICCs (Fig. 2). All groups had good reliability (ICC > 0.75), except for the intra-measurer ICC of NLNT. NLNT showed the lowest inter-measurer and intra-measurer ICCs of the four groups. The groups measured with landmarks (LNT and LT) showed excellent reliability (ICC > 0.9). 3.2 Validity of IAD Measurement The mean value obtained from NLNT showed the greatest discrepancy from the traditional measurement (25.2±15.1 mm). Duncan's post hoc test showed the result of NLNT was significantly greater than the other groups (NLT: 10.2±7.2 mm; LNT: 7.9±4.5 mm; LT: 7.7±3.8 mm).
3.3 Validity of HJC Prediction The coordinates of the HJC (x, y, z) were calculated using the equations presented by Bell et al. [4]. Fig. 3 shows the general direction of the HJC prediction errors. NLNT tended to predict the HJC locations as more posterior, lateral, and inferior. Fig. 5 shows the HJC prediction error of the four groups (NLNT/NLT/LNT/LT). Overall, the maximum errors were found in the y axis. The maximum prediction errors in the x, y, and z axes were found in the NLNT group, at 4.8 mm, 9.1 mm and 7.6 mm, respectively. The ANOVA showed significant differences between the four groups (NLNT/NLT/LNT/LT) in the x, y and z axes. Duncan's post hoc test showed NLNT was significantly different from the other groups (NLT/LNT/LT). There were no significant differences between NLT, LNT and LT.
Fig. 3. Comparison of the absolute HJC prediction error (SD) of the four groups (n=8). *Significantly different (p < 0.001).
4 Discussion 4.1 The Effect of Training and Landmark Placement on the Reliability of IAD Measurement NLNT showed the lowest IAD measurement reliability among the four groups (NLNT/NLT/LNT/LT). Even though NLT showed better reliability than NLNT, the relative TEM results showed the imprecision of NLT to be 5.5% (intra-measurer) and 3.0% (inter-measurer); these values are not considered acceptable (relative TEM < 2%) [17]. These results show that, whether or not the measurer is trained, reliability is low when landmarks are not used in measuring IAD on 3D scanning data. The results of LNT and LT showed excellent reliability, with ICCs > 0.9. Compared with training, attaching landmarks significantly increased the reliability of measurement. These results demonstrate that landmarks can play a more important role than training in improving reliability.

4.2 The Effect of Training and Landmarks on the Validity of IAD Measurement ISO 20685:2010 describes the 3D scanning methodology used to establish internationally compatible anthropometric databases. In this methodology, the measurers should be skilled anthropometrists, and landmarks are tools for improving the accuracy of measurement. Our study compared the four groups (NLNT/NLT/LNT/LT) and found that, except for NLNT, the results of NLT/LNT/LT are similar. The results showed that landmarks were the only assistive tool needed to improve measurement accuracy for the trained measurers; however, when the measurers had not received training in anatomy, the landmarks were important to the correctness of the measurement. Location of the ASIS by trained measurers with landmarks is considered the best method to improve the reliability and the validity of the measurement (ISO 20685:2010). However, our study still found a mean measurement difference of about 7 mm in the LT group. The errors can be explained by an additional source of error related to the placement of the landmarks during the identification of ASIS locations. Small variances in landmark placement can therefore affect measurement results significantly [16]. A previous study also reported a maximum measurement difference of 30 mm when waist circumference was measured at four different, yet closely located, points [18].

4.3 The Effect of Training and Landmarks on the Validity of HJC Prediction Our study compared the four groups (NLNT/NLT/LNT/LT) and found the results of three groups (NLT/LNT/LT) are similar. The results for the reliability and validity of IAD measurement are also similar, showing that the predicted HJC coordinates were affected by the measurement differences of IAD. NLNT shows a large difference compared with the other groups, with a maximum error in the y axis of 20.9 mm. These results show that measurements obtained by untrained measurers have low validity in HJC prediction when no landmarks are used. Since most of the staff engaged in 3D scanning image deformation have little chance to receive anatomical training, there is a need to solve the reliability and validity problem in IAD measurement on 3D scanning data. Moreover, our results showed that trained measurers also needed landmarks to improve measurement reliability. This indicates that, to improve the correctness of IAD measurement and decrease HJC prediction errors, attaching landmarks on the ASISs is a better solution than providing training.
5 Conclusion This study successfully quantified the magnitude and reliability of intra- and inter-measurer anthropometric measurements, and showed the effect that measurement differences have on predicted HJC locations. Results showed that the predicted HJC locations were affected by IAD measurement differences. Landmarks can help to decrease IAD measurement differences as well as the prediction errors of HJC, especially in measurements obtained by untrained measurers.
References 1. Piazza, S.J., Okita, N., Cavanagh, P.R.: Accuracy of the functional method of hip joint center location: Effects of limited motion and varied implementation. Journal of Biomechanics 34, 967–973 (2001) 2. Stagni, R., Leardini, A., Cappozzo, A., Grazia Benedetti, M., Cappello, A.: Effects of hip joint centre mislocation on gait analysis results. Journal of Biomechanics 33, 1479–1487 (2000) 3. Cappozzo, A., Catani, F., Della Croce, U., Leardini, A.: Position and orientation in space of bones during movement: Anatomical frame definition and determination. Clinical Biomechanics 10, 171–178 (1995) 4. Bell, A.L., Pedersen, D.R., Brand, R.A.: A comparison of the accuracy of several hip center location prediction methods. Journal of Biomechanics 23, 617–621 (1990) 5. Nurre, J.H.: Locating landmarks on human body scan data. In: Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp. 289–295 (1997) 6. Moreno, L.A., Joyanes, M., Mesana, M.I., González-Gross, M., Gil, C.M., Sarría, A., Gutierrez, A., Garaulet, M., Perez-Prieto, R., Bueno, M., Marcos, A.: Harmonization of anthropometric measurements for a multicenter nutrition survey in Spanish adolescents. Nutrition 19, 481–486 (2003) 7. Nagy, E., Vicente-Rodriguez, G., Manios, Y., Beghin, L., Iliescu, C., Censi, L., Dietrich, S., Ortega, F.B., De Vriendt, T., Plada, M., Moreno, L.A., Molnar, D.: Harmonization process and reliability assessment of anthropometric measurements in a multicenter study in adolescents. International Journal of Obesity 32, 58–65 (2008) 8. Johnson, W., Cameron, N., Dickson, P., Emsley, S., Raynor, P., Seymour, C., Wright, J.: The reliability of routine anthropometric data collected by health workers: A crosssectional study. International Journal of Nursing Studies 46, 310–316 (2009) 9. Weiss, E.T., Barzilai, O., Brightman, L., Chapas, A., Hale, E., Karen, J., Bernstein, L., Geronemus, R.G.: Three-dimensional surface imaging for clinical trials: Improved precision and reproducibility in circumference measurements of thighs and abdomens. Lasers in Surgery and Medicine 41, 767–773 (2009) 10. Sebo, P., Beer-Borst, S., Haller, D.M., Bovier, P.A.: Reliability of doctors’ anthropometric measurements to detect obesity. Preventive Medicine 47, 389–393 (2008) 11. Lin, J.D., Chiou, W.K., Weng, H.F., Fang, J.T., Liu, T.H.: Application of threedimensional body scanner: observation of prevalence of metabolic syndrome. Clinical Nutrition 23, 1313–1323 (2004) 12. Moore, K.L., Dalley, A.F.: Clinically Oriented Anatomy. Lippincott Williams & Wilkins, Philadelphia (1999) 13. McGraw, K.O., Wong, S.P.: Forming Inferences about Some Intraclass Correlation Coefficients. Psychological Methods 1, 30–46 (1996) 14. Shrout, P.E., Fleiss, J.L.: Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin 86, 420–428 (1979) 15. Weir, J.P.: Quantifying test-retest reliability using the intraclass correlation coefficient and the SEM. Journal of Strength and Conditioning Research 19, 231–240 (2005)
16. Burkhart, T.A., Arthurs, K.L., Andrews, D.M.: Reliability of upper and lower extremity anthropometric measurements and the effect on tissue mass predictions. Journal of Biomechanics 41, 1604–1610 (2008) 17. Perini, T.A., de Oliveira, G.L., dos Santos Ornellas, J., Palha de Oliveira, F.: Technical error of measurement in anthropometry. Revista Brasileira de Medicina do Esporte 11, 81–90 (2005) 18. Wang, J., Thornton, J.C., Bari, S., Williamson, B., Gallagher, D., Heymsfield, S.B., Horlick, M., Kotler, D., Laferrère, B., Mayer, L., Xavier Pi-Sunyer, F., Pierson Jr, R.N.: Comparisons of waist circumferences measured at 4 sites. American Journal of Clinical Nutrition 77, 379–384 (2003)
An Automatic Method for Computerized Head and Facial Anthropometry Jing-Jing Fang and Sheng-Yi Fang Department of Mechanical Engineering, National Cheng-Kung University, Tainan, Taiwan {fjj,n18981220}@mail.ncku.edu.tw
Abstract. Facial anthropometry plays an important role in ergonomic applications. Most ergonomically-designed products depend on stable and accurate human body measurement data. Head and facial anthropometric dimensions provide detailed information on head and facial surfaces to develop well-fitting, comfortable and functionally-effective facial masks, helmets or customized products. Accurate head and facial anthropometry also allows orthognathic surgeons and orthodontists to plan optimal treatments for patients. Our research uses an automatic, geometry-based facial feature extraction method to identify head and facial features, which can be used to develop a highly-accurate feature-based head model. In total, we have automatically located 17 digital length measurements and 5 digital tape measurements on the head and face. Compared to manual length measurement, the average error, maximum error and standard deviation are 1.70 mm, 5.63 mm and 1.47 mm, respectively, for intra-measurement, and 2.07 mm, 5.63 mm and 1.44 mm, respectively, for inter-measurement. Compared to manual tape measurement, the average error, maximum error and standard deviation are 1.52 mm, 3.00 mm and 0.96 mm, respectively, for intra-measurement, and 2.74 mm, 5.30 mm and 1.79 mm, respectively, for inter-measurement. Nearly all of the length and tape measurement data meet the 5 mm measuring error standard. Keywords: anthropometry, head and face, feature-based.
1 Introduction Facial anthropometry is very important for the design and manufacture of products which rely on access to a database of accurate head size measurements to ensure comfort and utility, such as helmets, masks, eyeglasses and respirators [1, 2]. Traditionally, anthropometric measurements were taken subjectively by an experienced technician using an anthropometer, sliding calipers, spreading calipers, and a measuring tape. This traditional measurement method is very time-consuming and highly dependent on the measurer's skill, and thus cannot provide objective, accurate and reproducible measurement data. An automated measurement system could therefore be very useful to ergonomic designers.
Three-dimensional body scanning technology has matured considerably in recent years with the development of scanning platforms such as AnthroScan [3], Cyberware [4], Vitronic [5], AC2 [6], and others. The scanner establishes a three-dimensional data point cloud, which can then be used to produce a mesh model. However, the raw data from the scanner and the resulting point cloud lack detail and related information, thus limiting their utility in follow-up applications. The post-processing software is mostly designed for general purposes and tends to use general triangulation methods, such as Delaunay triangulation, leaving it unable to generate finer meshes or retain important geometric data. Fang's [7] work used image processing methods to identify 67 facial feature points and 24 feature lines beyond those identified in the MPEG-4 definitions [8], and is much more suitable for anthropometric applications. A number of large three-dimensional anthropometric surveys have been conducted around the world in the past few years, such as the CAESAR (Civilian American and European Surface Anthropometry Resource) project [9] and the Taiwan Human Body Bank [10]. These anthropometric data can be used in ergonomic design to create more comfortable earphones, eyeglasses, respirators, masks, helmets, and other products to enhance the safety of workers in hazardous environments [11-20]. However, manual data measurement presents some problems. The face presents a smaller surface area relative to the rest of the body, and even slightly misplaced markers on the face can result in disproportionately serious errors. Geometric variations on the face are also more complex than those found on the rest of the body, thus increasing the scope for error. This research continues a robust line of research in marker-less facial feature identification. We establish a highly-accurate optimized mesh model from the original point cloud, scanned by a non-contact infrared body scanner, according to these facial features. A B-spline curve is used to simulate tape measurement and automatically measure feature line lengths. Fig. 1 shows the industrial applications of our research. First, the results can be used to design more effective and comfortable ergonomic products. Second, highly-accurate head and facial measurements can lead to better clinical diagnosis and assessment, be used to better plan surgical symmetry, predict post-surgical results, and compare pre-surgical goals with post-surgical results.
Fig. 1. Industrial applications of our research
The paper is organized as follows: Section 2 introduces the anthropometry landmarks on the head and face. We divide the head model into longitudinal-and-latitudinal meshes, allowing us to use automatic methods to make head and facial anthropometry measurements. Section 3 presents all 17 length measurements and 5 tape measurements, and compares them with manual measurements. Finally, we discuss the advantages and disadvantages of this study, and suggest possible future work.
2 Methods 2.1 Anthropometry Landmarks We used a computed tomography scanner (Biograph; Siemens AG, Berlin, Germany) to obtain high-density scan data from a plastic mannequin, identifying facial features using an automatic feature identification method [21]. This automatic marker-less method detects head and facial features according to geometric variations. In total, the method can detect 67 feature points and 24 feature lines on the head, and is an objective identification method in that it avoids the subjective feature identification results inherent in manual measurements by differently-skilled technicians. In our previous work, the primary landmarks were located on the head scan data. Most anthropometry landmarks can be detected by geometry-based feature identification methods, but some are defined on the bone surface and have no geometric characteristics on the soft tissue. For instance, to resolve the unclear geometric characteristics of the soft tissue, the gonion feature on the chin line is defined according to the golden ratio between the chin feature and the ear position. All the identified features used in the following applications are listed as follows:
• Centerline: Op-Opisthocranion, V-Vertex, G-Glabella, S-Sellion, Prn-Pronasale, Sn-Subnasale, Sto-Stomion, Chi-Chin
• Eyes: P-Pupilla
• Nose: Al-Alare
• Ears: Pa-Preaurale
• Mouth: Ch-Chelion
• Bone: Zyf-Zygofrontale, Ft-Frontotemporale, Zy-Zygomatic, Go-Gonion, Me-Menton
2.2 Model Construction We used the method described in [7] to develop multi-resolution head models suited to different applications. For medical applications, we can construct the head model using high-resolution, highly-accurate meshes, while other applications require only lower-resolution meshes with small file sizes. The method locates longitude and latitude lines based on the location of facial feature points, and adjusts the grid position so that feature points in the grid can be maintained without distortion after reconstruction. We also used a genetic algorithm to optimize the number of meshes
and the errors between the original point cloud and the constructed mesh model, using the following equation:

$$\min f = W_1 \, 2x_1(x_2 - 1)^2 + W_2\, e + W_3\, p^2$$
$$\text{s.t.}\quad g_1 = e - 2 \le 0, \qquad g_2 = p - 0.05 \le 0 \tag{1}$$
where x1 and x2 are the numbers of longitudinal and latitudinal lines, respectively, e is the error of the constructed model, p is the ratio of long-narrow triangular meshes to the total number of triangular meshes, and W1, W2, W3 are the weighting factors (0.7, 0.2, and 0.1, respectively). The errors can be divided into three categories: the distance between an original scan point and a mesh face, the distance between an original scan point and a mesh edge, and the distance between an original scan point and a mesh vertex. Among these three error types, we take the minimum as the error for the specific scan point.
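To make the three error categories concrete, the following sketch (ours, in plain NumPy, not the authors' implementation) computes the per-point error as the minimum of the face, edge, and vertex distances, and averages it over the cloud as a stand-in for e in Eq. (1):

```python
# Sketch (ours, not the paper's code): per-point error as the minimum of the
# face/edge/vertex distance categories described above.
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (covers the edge and vertex cases)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def point_triangle_error(p, a, b, c):
    """Minimum of the three distance categories for one scan point and triangle."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    q = p - np.dot(p - a, n) * n              # projection onto the triangle plane
    # Barycentric coordinates of the projection decide the face case.
    v0, v1, v2 = b - a, c - a, q - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    if v >= 0 and w >= 0 and v + w <= 1:      # inside: point-to-face distance
        return abs(np.dot(p - a, n))
    return min(point_segment_distance(p, a, b),   # otherwise an edge or vertex
               point_segment_distance(p, b, c),
               point_segment_distance(p, c, a))

def model_error(points, triangles):
    """Stand-in for e in Eq. (1): mean per-point distance to the mesh (brute force)."""
    return float(np.mean([min(point_triangle_error(p, a, b, c)
                              for a, b, c in triangles) for p in points]))
```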
2.3 Head and Facial Anthropometry

Head and facial anthropometry can be classified into two main categories: straight-line distances and surface distances. Straight-line distances are the projection distance between two feature points on a specific plane, i.e., the sagittal, coronal, or transversal plane. For example, the nose protrusion is the projection distance on the sagittal plane between the pronasale and the subnasale. In our research, the straight-line distance is called the length measurement, which can easily be determined by the Euclidean distance

$$\text{Length} = d(q, p) = \sqrt{\sum_{i=1}^{n} (q_i - p_i)^2} \tag{2}$$
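A minimal sketch of such a length measurement follows (ours; the plane normals are illustrative assumptions about the head model's axes, not taken from the paper). Both landmarks are projected onto the measurement plane before Eq. (2) is applied:

```python
# Sketch (ours): projected length measurement; the axis conventions are assumed.
import numpy as np

PLANE_NORMALS = {                              # assumed axes: x = lateral,
    "sagittal":    np.array([1.0, 0.0, 0.0]),  # y = vertical,
    "coronal":     np.array([0.0, 0.0, 1.0]),  # z = anterior-posterior
    "transversal": np.array([0.0, 1.0, 0.0]),
}

def length_measurement(q, p, plane):
    """Eq. (2) applied to the projections of q and p onto the given plane."""
    n = PLANE_NORMALS[plane]
    q_proj = q - (q @ n) * n                   # drop the component along the normal
    p_proj = p - (p @ n) * n
    return float(np.linalg.norm(q_proj - p_proj))

# Example: nose protrusion = projected distance between pronasale and subnasale
# on the sagittal plane (the coordinates here are hypothetical):
# length_measurement(np.array([0.1, 2.0, 9.5]), np.array([0.1, 0.9, 8.1]), "sagittal")
```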
Surface distance is the length of a feature curve that passes through three specific feature points along the head surface. For example, the Bitragion-Subnasale curve passes through the left and right tragion and the subnasale, and its arc length is the total length of this curve. In the garment industry, the arc length is also called the tape measurement. In practice, we define a plane E: ax+by+cz+d=0 passing through FA, FB, and FC. Then, we can determine the intersection point set {Fi} of plane E and the head model; this intersection point set is represented as a polyline Γ. However, this polyline may be a concave polygon, which differs from reality: when taking tape measurements, the tape contacts only the convex part of the body and thus forms a convex hull around the body. The convex hull of the polygon Γ is obtained by

$$H_{\mathrm{convex}}(\Gamma) = \left\{ \sum_{i=1}^{k} \alpha_i p_i \;\middle|\; p_i \in \Gamma,\ \alpha_i \in \mathbb{R},\ \alpha_i \ge 0,\ \sum_{i=1}^{k} \alpha_i = 1,\ k = 1, 2, \ldots \right\} \tag{3}$$
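A sketch of this step (ours, assuming SciPy is available): the planar intersection points are expressed in a 2D basis of plane E and reduced to their convex hull, mimicking how the tape wraps the cross-section.

```python
# Sketch (ours, assuming SciPy): convex hull of the plane/mesh cross-section.
import numpy as np
from scipy.spatial import ConvexHull

def convex_cross_section(points_3d, plane_normal, plane_point):
    n = plane_normal / np.linalg.norm(plane_normal)
    u = np.cross(n, [0.0, 0.0, 1.0])          # build a 2D basis (u, v) spanning E
    if np.linalg.norm(u) < 1e-8:              # normal was parallel to z
        u = np.cross(n, [0.0, 1.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    rel = np.asarray(points_3d) - plane_point
    pts_2d = np.column_stack([rel @ u, rel @ v])
    hull = ConvexHull(pts_2d)
    # Hull vertices come back ordered counterclockwise, i.e. already a polyline.
    return np.asarray(points_3d)[hull.vertices]
```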
A rough estimate of the tape measurement can be obtained by summing the inter-vertex distances of H_convex(Γ), but its precision depends on the resolution of the mesh surface. To obtain a well-estimated measurement, Leong employed a B-spline approximation technique to regenerate the surface curve [22]. In our research, we adopt this method and use an order k = 4 B-spline curve to fit the polyline

$$C(u) = \sum_{i=1}^{n} N_{i,k}(u)\, P_i, \qquad u \in [t_{k-1}, t_{n+1}) \tag{4}$$
where Pi is a control point and Ni,k is the kth-order B-spline basis function. After constructing the B-spline curve corresponding to the specific polyline, the tape measurement can be determined by numerical integration,

$$\text{Length} = \int_{u=0}^{u=1} \left\| C'(u) \right\| du \tag{5}$$
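The sketch below (ours, assuming SciPy) mirrors this: a cubic B-spline (order k = 4, i.e. degree 3, SciPy's k=3) is fitted to the convex-hull polyline and the curve speed is integrated numerically.

```python
# Sketch (ours, assuming SciPy): B-spline fit and arc-length integration, Eq. (5).
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.integrate import quad

def tape_measurement(polyline_xyz, smoothing=0.0):
    pts = np.asarray(polyline_xyz, dtype=float)
    tck, _ = splprep([pts[:, 0], pts[:, 1], pts[:, 2]], k=3, s=smoothing)
    speed = lambda u: float(np.linalg.norm(splev(u, tck, der=1)))
    length, _ = quad(speed, 0.0, 1.0, limit=200)   # numerical integration
    return length
```

For a closed curve such as the head circumference, splprep's per=1 argument would close the spline first (again an assumption about the implementation, not taken from the paper).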
In Fig. 3, the black points Fi, Fi+1, and Fi+2 are the intersection points of the plane E with the triangular meshes ∆j and ∆j+1. The red line is the original polyline connecting the intersection points, and the blue curve (the dashed line) is the corresponding fitted B-spline curve.
Fig. 2. Plastic mannequin
Fig. 3. Convex-hull B-spline curve approximation
3 Results

Our research uses an automatic feature extraction method to identify head and facial features from a scanned head model. We propose a multi-resolution mesh construction method to reconstruct the head mesh model, and apply a genetic optimization method to obtain an optimized mesh count with good accuracy. The optimum mesh number is 120×70, the resulting error is about 0.069 mm, and the long-narrow triangular mesh percentage is about 2.66%. The head mesh model is shown in Fig. 4, where the black lines are the mesh edges, the green points are the feature points, and the red lines are the feature lines.
Fig. 4. Optimized head mesh model
We also present a digitized head and facial anthropometry method providing a total of 17 length measurements (shown in Fig. 5) and 5 tape measurements (shown in Fig. 6). Length measurements are listed in Table 1. The Proj. plane column in Table 1 lists the projection planes used to determine the corresponding distance between the two features listed in the anthropometry landmarks column. Tape measurements are listed in Table 2; most of the feature curves are open curves running from the start point through the mid-point to the end point. Only one feature curve, the head circumference, is a closed-loop curve, and it is determined by measuring the circumference of the head surface above the eyebrow ridges.
Fig. 5. The length measurements
Fig. 6. The tape measurements
Table 1. Definitions of length measurement anthropometry

No. | Feature name | Proj. plane | Anthropometry landmarks
1 | Maximum frontal breadth | coronal | Left and right zygofrontale
2 | Minimum frontal breadth | coronal | Left and right frontotemporale
3 | Interpupillary breadth | coronal | Left and right pupilla
4 | Nasal root breadth | coronal | The root of the left and right bridge of the nose
5 | Nose breadth | coronal | Left and right alare
6 | Bizygomatic breadth | coronal | Maximum horizontal breadth between left and right zygomatic arches
7 | Bigonial breadth | coronal | Left and right gonion
8 | Lip length | coronal | Left and right chelion
9 | Menton-sellion length | sagittal | Menton and sellion
10 | Subnasale-sellion length | sagittal | Subnasale and sellion
11 | Nose protrusion | sagittal | Pronasale and subnasale
12 | Facial projection | sagittal | Sellion and tragion
13 | Chin projection | sagittal | Menton and tragion
14 | Sellion Z | sagittal | Vertex and sellion
15 | Right tragion X | sagittal | Sellion and opisthocranion
16 | Head breadth | coronal | The maximum horizontal breadth of the head above the ears
17 | Head length | sagittal | Glabella and opisthocranion
Table 2. Definitions of tape measurement anthropometry

No. | Feature name | Start point | Mid-point | End point
1 | Bitragion coronal arc | left tragion | vertex | right tragion
2 | Bitragion frontal arc | left tragion | forehead | right tragion
3 | Bitragion subnasale arc | left tragion | subnasale | right tragion
4 | Bitragion chin arc | left tragion | chin | right tragion
5 | Head circumference | maximum circumference of the head above the eyebrow ridges (closed curve)
Five volunteers used traditional measuring tools, including a measuring tape (accuracy: 0.1 mm) and calipers (accuracy: 0.01 mm), to obtain these head and facial anthropometry data for the plastic mannequin, for comparison with our automatic method. The volunteers' measurement error results are listed in Table 3. From these experiments we can see that the maximum error most often occurs on the maximum and minimum frontal breadth features (i.e., the distance between the zygofrontale or frontotemporale) in the length measurement group, and on the bitragion chin arc (i.e., the arc length between the left and right tragion passing through the chin point) in the tape measurement group.
Volunteer No. 1 repeated the measuring process 5 times for an intra-volunteer comparison, with the results listed in Table 4. The maximum error most often occurred in the feature lines related to the chin point, e.g., the menton-sellion length or the bitragion chin arc. The zygofrontale and frontotemporale are features on the orbital bone, and the chin point is a feature point defined by the mandible, so it is difficult to distinguish these features from the nearby soft tissue; volunteers may measure such a feature line or feature arc from different positions, resulting in more significant errors. We can also see that more than 88% (in most cases, nearly 100%) of the length-measurement data and more than 60% (in most cases, nearly 100%) of the tape-measurement data show errors under 5 mm, which is the accepted standard of anthropometry.

Table 3. Comparison results of inter-volunteer measurement data (unit: mm)
(Table 3 reports the mean, SD, and maximum of the inter-volunteer measurement errors, separately for the length-measurement and the tape-measurement group.)

The Upper Extremity Loading during Typing Using One, Two and Three Fingers

(Caption fragment of the joint angular velocity table:) Bold numbers are condition effects with P ≤ 0.05. Joint angular velocities were mean rectified. Flexion, adduction, supination and internal rotation were positive, and extension, abduction, pronation and external rotation were negative.
Table 2. Mean (SD) of joint torque (Ncm) statistics across six subjects for three typing conditions: using one, two or three fingers. Conditions with different superscript letters are significantly different, and the corresponding means are represented by the letters such that A > B. Bold numbers are condition effects with P ≤ 0.05. Flexion, adduction, supination and internal rotation were positive, and extension, abduction, pronation and external rotation were negative.

(The table lists mean and peak-to-peak joint torques for the wrist, forearm, elbow and shoulder, grouped into flexion, adduction and internal rotation (supination) components, each under the one-, two- and three-finger conditions.)
Acknowledgements. This work was funded in part by NIOSH R01 OH008373 and the NIOSH ERC grant at Harvard University (T42 OH008416-05).
Automatic Face Feature Points Extraction

Dominik Rupprecht, Sebastian Hesse, and Rainer Blum

Hochschule Fulda – University of Applied Sciences, Marquardstr. 35, 36039 Fulda, Germany
{Dominik.Rupprecht,Sebastian.Hesse,Rainer.Blum}@informatik.hs-fulda.de
Abstract. In this paper we present results of an approach to automatically equip a three-dimensional avatar with a model of a user's individual head. For the generation of the head model, certain so-called face feature points must be extracted from a face picture of the user. A survey of several state-of-the-art techniques and the results of an extraction approach are given for the following points: the middle of each iris, the nasal wings, and the mouth corners.

Keywords: Avatar, Face Feature Points, Integral Projection, Circle Detection, E-Commerce.
1 Introduction

High-quality three-dimensional virtual human bodies (avatars) are playing an increasingly important role in information technology, for example in networked 3D worlds like Second Life (http://secondlife.com/) and Twinity (http://www.twinity.com/), in the simulation of processes, or in virtual try-ons of clothing. In the field of creating avatars, and above all customized ones, especially in the context of an e-commerce application, the customers' acceptance of their personal avatar is very important. One possible solution to increase the users' acceptance of their avatar might be to equip it with an individual, three-dimensional model of their own head.

For this challenge, the automatic face modeling software "FaceGen" [1] provides a basis. Given frontal human face pictures as input, a textured three-dimensional face model is generated. Unfortunately, the software first requires the user to manually identify several so-called face feature points. But, as the context of this work calls for a fluent web shopping experience, it is advisable to completely automate the head model generation process, including the face feature extraction. This would spare the customers any tedious extra work.

The approach presented here combines several state-of-the-art techniques. It differs from similar face recognition techniques, where detection of regions and template matching are often applied. In contrast, for the context at hand, it is essential to identify accurate points in order to obtain a good model and texture as the result of the automatic head generation process.
1.1 Objectives and Significance

In this paper we present intermediate findings of the ongoing research project KAvaCo (http://www.kavaco.info/). The focus of its research activities is to improve customized avatars for use in an innovative, interactive system supporting apparel retail via the Internet. The results reported here were obtained in the Bachelor thesis of this paper's coauthor Sebastian Hesse. So that our findings can be interpreted correctly and transferred to other research contexts, we provide details on the study's context.

1.2 The KAvaCo Project

On the one hand, the aim of the ongoing research project KAvaCo is to develop an interactive system for generating humanoid, anatomically correct, animatable virtual humans. On the other hand, the avatars are used as representations of individual customers in e-commerce and are evaluated as such. Imitation refers both to geometric properties such as body sizes and shapes and to visual appearance, such as skin or hair color. All of the necessary information can be entered by the users themselves. For a realistic representation of the head, photos are processed, which can be provided by the customers with little effort using conventional digital cameras. The optimal design of the user interaction is a central goal for both the creation and the use of the avatars.

In addition to the technical progress, the project also investigates the until now only partially clarified scientific question of what demands customers make on their personal virtual twins in the context of e-commerce. Concerning the representation, for example, it has to be clarified which level of realism is desired by customers, whether individual "problem areas" and characteristics should be concealed, or whether an idealized representation achieves greater acceptance. The results of the research will be applied and tested in the context of clothing retail. Here it comes into play that the developed avatars can be used directly in systems for virtual try-on of made-to-measure and ready-made clothing. The use for other applications, such as in the area of public health, for example the visualization of weight gain and loss, is also planned.
2 Method

The face modeling software employed by the project, "FaceGen", needs eleven points from the frontal face image for its computation. In this work, techniques and approaches were considered for the points of the eyes (feature point: middle of each iris), the nose (nasal wings) and the mouth (mouth corners). The approach finally used to extract the facial feature points is divided into two stages. The first is to reduce the image data to small areas of interest for the later detection. This means that the feature points do not have to be searched for in the whole face image, but only in small zones. Afterwards, in the second stage, the required points are extracted with several techniques, applied to the regions where the feature points are expected. The detailed procedure is shown in the following sections.
To evaluate the results of each concept, a set of 100 frontal face images was acquired from [2]. To be as representative as possible, the persons on the photos were chosen from different nationalities, ages and genders. Each of the approaches was tested with all pictures and documented. The detected sections and features were logged to validate them against human-specified ones. Classification numbers were defined to decide how robust the region detection and the feature extraction are: for the region detection, the classification numbers are the false-positive, false-negative and right-positive rates in percent; for the feature extraction, it is the variance between the identified and the ideal position in pixels.
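A small sketch of these two classification numbers (ours; the data layout is hypothetical):

```python
# Sketch (ours): the evaluation criteria described above, with hypothetical inputs.
import numpy as np

def detection_rates(n_right_positive, n_false_positive, n_false_negative, n_images):
    """Region-detection robustness, expressed in percent of the test images."""
    return {"right-positive %": 100.0 * n_right_positive / n_images,
            "false-positive %": 100.0 * n_false_positive / n_images,
            "false-negative %": 100.0 * n_false_negative / n_images}

def feature_variance_px(detected_xy, ideal_xy):
    """Feature-extraction criterion: pixel distance to the human-specified point."""
    d = np.asarray(detected_xy, dtype=float) - np.asarray(ideal_xy, dtype=float)
    return float(np.linalg.norm(d))
```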
3 Region Detection

3.1 Detection Concept

As said before, the first part of the identified solution reduces the area that is subject to the later detection to small regions. The detection area is constrained in a "top-down" manner to so-called regions of interest (ROI): here, first the face, then the eyes, mouth and nose. The region detection initially makes use of Haar-like feature extraction with AdaBoost (see e.g. [3]). Haar-like feature extraction is an object recognition technique that uses weak features to detect domain objects; with the learning algorithm AdaBoost, a classifier is constructed for the detector. The identification of the detection area starts by searching the part of the image that contains the frontal face. This area is then divided into three further ones:

1. Area – upper face third, for searching the eyes region
2. Area – middle face third, for searching the nose region
3. Area – lower face third, for searching the mouth region

This division of the face is done by intuition, because the position of the searched regions in normal faces is given by human anatomy. These new regions are treated as search candidates for the remaining three ROI. Due to this division into smaller areas, the detector needs less computation, and the detection of false regions is also reduced.

A test iteration with the 100 human faces showed that the detection of the face and eyes regions has a good detection rate: only two percent of these two regions were not detected. But the false-positive rate of the other two regions was too high (30 percent for the nose and 47 percent for the mouth), so this concept had to be extended to counteract the high false-positive rate.

3.2 Extended Detection Concept

In a second concept, some extensions are made to the one reported in the previous section in order to decrease the false-positive rate of the mouth and nose regions. Therefore, a validation was implemented which checks the regions against a set of rules. The detected regions are related to each other and to the face area; the primary point of reference is the region of the eyes, which has a very good detection rate. The violation of a rule excludes a region from the list of potential candidates. The rules were set up under the condition that they are neither too detailed nor too general, to avoid excluding too many correctly detected regions or too few wrongly detected ones.

Even though the eye regions have good results, rules are implemented to prevent a false detection here as well. A false detection can occur in the area of the nostrils, because the integral projections of two eyes and of the nostrils can be very similar. Because the nostrils are closer together than the eyes, the width of the eyes region is related to the width of the face: if the width of the eyes region falls below 30 percent of the face region width, it is excluded. For validation, the nose region is related to the eyes region: two rules check that the nose region's horizontal position lies within the eyes region, and another rule ensures that the nose region is not above the eyes. The mouth region uses the same rules as the nose region, except that the rule for the vertical position is not applied, because the mouth is only searched for in the lower face third.

Using these rules, the false-positive rate could be reduced by 10.03 percent in the mouth region and by 11.43 percent in the nose region. But there were still many detected areas that were too small to be correct ones. Thus, the rules were adjusted by analyzing the smallest region in the context of the other regions. It turned out that the mouth and nose regions fit about 60 times into the facial region; extending the rules to check this threshold decreased the false-positive rate by a further 23.22 percent for the mouth region and 6.82 percent for the nose region. The false-positive rate was now low enough to develop and use techniques to extract the desired feature points in the detected regions, as sketched below.
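The following sketch (ours, using OpenCV's stock Haar cascades rather than the classifiers trained for the project) illustrates the top-down constraint into face thirds and the plausibility rules of Sect. 3.2:

```python
# Sketch (ours, assuming OpenCV): face detection, division into thirds,
# and the rule-based validation of candidate regions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_thirds(gray):
    """Detect the frontal face and split it into the three search zones."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    third = h // 3
    return {"face": (x, y, w, h),
            "eyes_zone": (x, y, w, third),               # upper face third
            "nose_zone": (x, y + third, w, third),       # middle face third
            "mouth_zone": (x, y + 2 * third, w, third)}  # lower face third

def regions_valid(face, eyes, nose, mouth):
    """Sect. 3.2 rules: eyes width, horizontal containment, position, 1/60 size."""
    fx, fy, fw, fh = face
    ex, ey, ew, eh = eyes
    if ew < 0.30 * fw:                     # eyes region at least 30% of face width
        return False
    if nose[1] < ey:                       # nose region must not be above the eyes
        return False
    for rx, ry, rw, rh in (nose, mouth):
        if rx < ex or rx + rw > ex + ew:   # horizontally within the eyes region
            return False
        if rw * rh < (fw * fh) / 60.0:     # implausibly small regions are excluded
            return False
    return True
```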
4 Feature Extraction

Facial feature points can be identified through their various properties. Their extraction is also highly relevant in domains like gesture detection, emotion recognition and the transfer of real facial expressions onto an avatar. Such feature points are often used to identify a face in an image; conversely, detecting the face region first with a different, global approach can help to make a subsequent feature extraction more robust and effective. The feature extraction approach presented here makes use of several image processing techniques, which are described in the following sections.

4.1 Extraction of the Center of the Eye

The eye is one of the most distinctive regions in the human face. Two well-recognizable attributes are the elliptic geometry of eye and iris and their brightness distribution. One possibility is to use the geometric attribute of the eye to detect its center: to detect the iris, the Circular Hough Transformation can be applied to the respective region. Another technique to find the iris is the integral projection, with which the brightness distribution of the eye region can be extracted. The following sections analyze both techniques.
Circle Detection. To detect the iris with the Circular Hough Transformation (cf. [4]), it is necessary to transform the region from a color to a grayscale image; most image processing algorithms are applied on grayscale or binary images because they contain less information than color images. Thereafter, possible image artifacts are removed with a Gaussian filter. Now the iris can be highlighted by an edge detector, here the Canny filter. The result of the Canny filter has the benefit that edges are represented as one-pixel-wide lines (also called binary edges). The image delivered by this pre-processing step forms the basis for the subsequent circle detection. To prevent falsely detected circles, the smallest and largest acceptable radii are calculated dynamically relative to the height of the eye region: the minimum radius has to be greater than one tenth of the eye region height, and the maximum radius has to be smaller than half of the region height.
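A compact version of this chain (ours, assuming OpenCV; note that cv2.HoughCircles runs the Canny step internally via its param1 threshold):

```python
# Sketch (ours, assuming OpenCV): grayscale -> Gaussian smoothing -> Hough circle
# search with radius bounds tied to the eye-region height.
import cv2

def detect_iris(eye_bgr):
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)        # remove image artifacts
    h = gray.shape[0]
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1,
        minDist=h,                                  # expect one iris per region
        param1=100,                                 # upper Canny edge threshold
        param2=20,                                  # accumulator vote threshold
        minRadius=h // 10,                          # > one tenth of region height
        maxRadius=h // 2)                           # < half of region height
    if circles is None:
        return None
    x, y, r = circles[0][0]                         # strongest candidate first
    return int(x), int(y), int(r)
```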
Fig. 1. Results of the Circular Hough Detection
The circle detection gives good results for many eye regions (see Fig. 1, part a). But too many of the result images show that this method also produces a lot of falsely detected circles (see Fig. 1, part b). This is the result of too dark an illumination of the images or of eyes that are not opened wide enough. An increase of the contrast or the change from binary to grayscale images did not produce better results, so this approach was not analyzed any further.

Integral Projection. As already mentioned in Section 4.1, another method to detect the eye is the integral projection. This method uses the brightness distribution in the eye region. To detect the iris, it is necessary to calculate the vertical and horizontal integral projections. As shown in [5], the coordinates of the iris can be extracted from the two integral projections: the x-coordinate can be derived from the horizontal and the y-coordinate from the vertical projection. The calculation of the projection can be disturbed by several factors such as dark bushy eyebrows, eyelashes and dark shadows, so a pre-processing step is needed to decrease the effect of these factors. After transforming the image into a grayscale one, a median filter is used to eliminate image artifacts. To highlight the iris and remove reflections in the eye, morphological operations are used. A method called "erosion" enlarges the dark areas in an image; that is how the iris can be highlighted, but it also highlights shadows, eyebrows and eyelashes. This can be counteracted by increasing the contrast (Fig. 2 a) and using "dilation" (Fig. 2 b) to shrink dark areas before applying the erosion method (Fig. 2 c).
Fig. 2. Preprocessing the image for the integral projection
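In code, the pre-processing and projection step might look as follows (our sketch, assuming OpenCV; the kernel sizes and contrast gain are illustrative choices, not the thesis' values):

```python
# Sketch (ours): integral projection of the eye region after morphological
# pre-processing; the darkest column/row give the iris coordinates.
import cv2
import numpy as np

def iris_by_integral_projection(eye_bgr):
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                         # remove image artifacts
    gray = cv2.convertScaleAbs(gray, alpha=1.5, beta=-40)  # increase contrast
    kernel = np.ones((3, 3), np.uint8)
    gray = cv2.dilate(gray, kernel)               # shrinks dark areas (hairs, shadows)
    gray = cv2.erode(gray, kernel, iterations=2)  # enlarges dark areas: the iris
    horizontal = gray.sum(axis=0)                 # column sums -> x of darkest column
    vertical = gray.sum(axis=1)                   # row sums    -> y of darkest row
    return int(np.argmin(horizontal)), int(np.argmin(vertical))
```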
The pre-processing method used delivers good results, so that the integral projection can extract the darkest area. A large part of the disturbing factors can be eliminated through the contrast enhancement and the use of dilation. Almost all of the x-coordinates extracted from the horizontal and y-coordinates extracted from the vertical integral projection were inside the iris. Only bad exposure and big shadows disturb the extraction too much; but such face images would also result in bad textures and cannot be used for the three-dimensional head generation anyway, so the exposure problem was not analyzed further. In the end, the results of this method were usable. For the head generation process with "FaceGen", feature points with a variance of less than 20 pixels from the ideal position can be used; only 12.24 percent of the detected points were too far away and could not be used for the automated head generation process.

4.2 Extraction of the Nasal Wings

In contrast to the eyes, the nose does not have such easily recognizable attributes. Most noticeable are the tip of the nose, the nostrils and the contours of the nasal wings. In the following, two approaches to track the nose are described.

Contour Tracking. The pre-processing for the contour tracking is similar to that for the circle detection of the eyes. Often the nasal wings differ in brightness from the surrounding face, so that they can be extracted as edges; the result of the Canny filter is the basis for this method. An algorithm is applied to the edge image which follows the edges to find contours. The contours that are farthest to the right and left potentially represent the nasal wings. Inspecting the edge maps for the 100 test faces, it is obvious that this method cannot deliver useful results. The brightness differences between the nasal wings and the surrounding face were not big enough; often it was not even possible to mark the nasal wings manually, because of a smooth transition between the nasal wings and the surrounding face region (Fig. 3 a). Only if the exposure is good enough are the edges recognizable. Another problem is the falsification of the edge image by facial hair such as beards (Fig. 3 b). Brunelli and Poggio [6] used this method to find the nose, but it is not useful for extracting the nasal wings.
Fig. 3. Results of the contour tracking of the nose
Integral Projection. As Hua et al. [5] describe, the integral projection can be used to find the tip of the nose or the nostrils. For pre-processing, the region only has to be transformed into a grayscale image, and possible image artifacts have to be removed using the median filter. The nose region is then analyzed with the horizontal projection: the recognized low points are the nostrils, and the amplitude in between represents the tip of the nose. The vertical position of the nostrils can be extracted by calculating the vertical integral projection, where the lowest point mostly corresponds to the dark nostrils. The effect of the nostrils in the integral projection cannot be highlighted by using erosion. Sometimes the tip of the nose is beneath the nostrils, so that they are not visible and hence not detectable.
Fig. 4. Results of the integral projection to identify the nostrils
In some cases this method could deliver acceptable results (Fig. 4 a). But depending on the face tilt and the shape of the nose, the nostrils may not be identified (Fig. 4 b). A good and consistent exposure of the nose is also important; otherwise the horizontal projection is useless (Fig. 4 c). A further problem is that the position of a nostril does not imply the position of the respective wing of the nose, which is what is needed for the head generation process. So this method could not be used to detect the wings of the nose either.

4.3 Extraction of the Corners of the Mouth

The mouth has some distinctive attributes. In most cases the mouth is darker than the face; this characteristic can be used to extract the contour of the mouth and its width and position. At first, the contour tracking algorithm of Section
4.2 was used to extract the mouth, but the results were not usable. There are too many disturbing factors like beards, small lips or makeup; these circumstances make an extraction with this method impossible. As already mentioned, the mouth is darker than the surrounding face, so it is possible to extract it by applying a vertical and a horizontal integral projection in the mouth region. In the pre-processing step, the region is transformed into a grayscale image, then dilation is used to reduce the effect of dark hairs, and afterwards erosion marks the dark lips. In the horizontal projection, the width of the mouth can be extracted by searching for the two outer flanks, and the vertical projection is useful to detect the vertical position. Here it was also not possible to extract the feature points precisely enough. The vertical position is often recognizable as a low point in the vertical integral projection, but a dark beard or bad lighting conditions disturb the extraction massively. In [6] it is described how to extract the width of the mouth with the horizontal projection, but the effect of the disturbing factors is too big. The shape of the mouth also varies so much that the lips were often not recognizable. So this method cannot be used to extract the required feature points either.
5 Conclusion

This paper shows that there are many different ways to try to extract the required feature points from a face photo. The chosen approach of detecting the face first and then separating the regions of interest for a subsequent feature detection appears to be a good choice. The Haar-like feature extraction with AdaBoost identifies the face with a very high right-positive rate, and the implemented rules, which validate the detected ROI, helped to reduce the initially high false-positive rate when selecting the three regions of interest for the eyes, the nose and the mouth.

The quality of the analyzed approaches for the feature extraction varies a lot. The integral projection is the best choice for finding the centers of the eyes: almost 90 percent of the given eyes could be detected. But for the other two feature points, the algorithms are too vulnerable to disturbing factors like facial hair or bad exposure. The differences in brightness between nose and face are mostly too small to detect the edges of the nose, and the extraction of the mouth has similar problems: the shape of the mouth, which varies greatly from face to face, as well as beards, impede useful results.

As future development, it is planned to investigate further procedures to get better results. One possible solution may be the combination of different approaches, for example template matching with transformable templates, to find the edges of mouth and nose; these edges can then be used to extract the desired feature points. Another possible attempt might be to separate the mouth from the rest of the face not only by its brightness distribution but by its color, and subsequently detect its corners.

Acknowledgments. This research was financially supported by the German Federal Ministry of Education and Research within the framework of the program "Forschung an Fachhochschulen mit Unternehmen (FHprofUnd)".
References
1. Singular Inversions Inc. - FaceGen, http://facegen.com/
2. SmartNet IBC LTD - Human references for 3D Artists and Game Developers, http://www.3d.sk/
3. Viola, P., Jones, M.J.: Robust real-time object detection. International Journal of Computer Vision 57, 137–154 (2004)
4. Gupta, P., Mehrotra, H., Rattani, A., Chatterjee, A., Kaushik, A.K.: Iris Recognition using Corner Detection. In: Proceedings of the 23rd International Biometric Conference (IBC 2006), Montreal, Canada (2006)
5. Hua, G., Guangda, S., Cheng, D.: Feature points extraction from faces. In: Proceedings of Image and Vision Computing, New Zealand (2003)
6. Brunelli, R., Poggio, T.: Face recognition: Features versus templates. IEEE Transactions on Pattern Analysis and Machine Intelligence 15, 1042–1052 (1993)
3D Human Motion Capturing Based Only on Acceleration and Angular Rate Measurement for Low Extremities

Christoph Schiefer1,2, Thomas Kraus1, Elke Ochsmann1, Ingo Hermanns2, and Rolf Ellegast2

1 RWTH Aachen University, Institute of Occupational and Social Medicine, Medical Faculty, Pauwelsstraße 30, 52074 Aachen, Germany
2 IFA, Institute for Occupational Safety and Health of the German Social Accident Insurance, Alte Heerstrasse 111, 53757 Sankt Augustin, Germany
{cschiefer,tkraus,eochsmann}@ukaachen.de, {christoph.schiefer,ingo.hermanns,rolf.ellegast}@dguv.de
Abstract. Human motion capturing is used in ergonomics for the ambulatory assessment of physical workloads in the field. This is necessary to investigate the risk of work-related musculoskeletal disorders. For more than fifteen years the IFA has been developing and using the motion and force capture system CUELA, which is designed for whole-shift recordings and analysis of work-related postural and mechanical loads. A modified CUELA system was developed based on 3D inertial measurement units to replace all mechanical components in the present system. Each unit consists of an accelerometer and gyroscopes, measuring acceleration and angular rate in three dimensions. Orientation determination based on angular rate carries the risk of integration errors due to sensor drift of the gyroscope. We introduced "zero points" to compensate for gyroscope drift and reinitialize the orientation computation. In a first evaluation step, the movements of the lower extremities are analyzed and compared to the optical motion tracking system Vicon.

Keywords: ambulatory workload assessment, inertial tracking device, motion capturing, CUELA, ergonomic field analysis.
1 Introduction

Human motion capturing is used in ergonomics for the ambulatory assessment of physical workloads in the field. This is necessary to investigate the risk of work-related musculoskeletal disorders. For more than fifteen years, the Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA) has been developing and using the motion and force capture system CUELA (computer-assisted recording and long-term analysis of musculoskeletal loads), which is designed for whole-shift recordings and analysis of work-related postural and mechanical loads in ergonomic field analysis [1, 2, 3]. Typical fields of application are measurements in office, factory or construction environments, as well as the handling of building machines and other vehicles [4, 5, 6]. The main focus of the motion analysis is on body angles and motion pattern detection.
The CUELA system uses a combination of gyroscopes, inclinometers and goniometers to measure human body angles. Using mechanical components provides satisfactory results; however, at some workplaces the freedom of movement of the test person is restricted, or the use of mechanical components is impossible altogether. Therefore, a new sensor that uses 3D inertial measurement units (IMUs) was designed for the CUELA system to replace all mechanical components. The IMU consists of a 3D accelerometer and three single-axis gyroscopes measuring acceleration and angular rate. Orientation determination based on angular rate carries the risk of integration errors due to sensor drift of the gyroscope. To compensate for the integration error, other inertial-sensor-based human motion capturing systems use complementary sensors like magnetometers [7, 8]. The magnetometers proved not to be suitable for measurements at real workplaces, as ferromagnetic materials or electromagnetic radiation in the vicinity of the magnetometer distort the sensor readings [9]. This article describes an alternative approach to 3D human motion capturing based only on acceleration and angular rate. In a first evaluation step, the movements of the lower extremities are analyzed and compared to the optical motion tracking system Vicon [10].
2 Methods

2.1 Motion Capturing and Analysis

The IMU is designed as a USB human interface device with a size of 6.0 cm × 2.5 cm × 0.8 cm and consists of three ADIS16100 gyroscopes mounted perpendicular to each other and an SCA3000 triaxial accelerometer. In this study, the device works at a sampling frequency of 50 Hz; higher sampling rates are also possible. A battery-powered USB hub was designed for the power supply of the sensors, so they can be attached to a PC or to a mobile device acting as a data logger. Elastic and breathable straps are used to fix the sensors below or above the clothing at the body segments of interest; a person wearing the system is not hindered in their movement and can move freely. To support and simplify the motion analysis, measurements are synchronously video recorded.

The orientation of each body segment equipped with an IMU is determined by a quaternion-based algorithm. An orientation quaternion qt represents the current orientation state of the IMU in relation to the initial orientation q0. Depending on the dynamical state of the IMU, qt is updated in two ways. In the static case, indicated by an acceleration vector length of 1 g, the accelerometer measures the gravitational acceleration and is therefore interpreted as an inclinometer. A quaternion qacc,t is calculated that represents the orientation of the IMU at time t, using the factored quaternion algorithm (FQA) [11]. The elevation and roll components of qacc,t are determined by the FQA; the azimuth component of qacc,t originates from integrating the vertical angular rate component, which is obtained by transforming the measured angular rate vector from the body coordinate frame into the global coordinate frame.

In the dynamic state of the IMU, the gravitational acceleration component is superimposed by other acceleration effects. In this case the gyroscope readings, measuring
angular rate in the body coordinate frame, are used to update the previous orientation quaternion qt-1. The resulting quaternion qω,t is fused with qacc,t by spherical linear interpolation, using a weight factor that depends on the dynamical state of the IMU. The weight factor is computed by a Gaussian bell function with the current acceleration vector length as input.

To enhance the long-term accuracy and reduce the orientation drift error, an automated initialization routine based on so-called "zero points" is used. In static situations, with the subject facing an initial direction, the gyroscope offset and the azimuth angle are set to zero; for later analysis, these zero points are identified in the video. This method proved to be more effective than using a high-pass filter for offset compensation of the gyroscope, as the filter also removes "offsets" resulting from an actual turn of the person. Depending on whether the human movement is reconstructed in real time or later for detailed analysis, the algorithm can be applied uni- or bidirectionally to the arriving or stored data. As the motion data are computed forward and backward in time during post-processing, the positive initializing effect of the zero points is doubled. All results presented in this article are based on the bidirectional motion analysis.

The motion data of each body segment are fed into a simplified human body model (Fig. 1). The human model uses constraints on the degrees of freedom of human joints to prevent body segments from moving in a way that is impossible for a healthy person. The translation of the human body in 3D space is not taken into account in this model: the hip center point is fixed at a predefined height above the origin of the coordinate frame, but its orientation can change freely.
Fig. 1. Visualization of the simplified human body model
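One fusion step of this scheme could look roughly as follows (our sketch using SciPy's Rotation class; the original is a custom quaternion implementation, and the Gaussian width sigma and the yaw extraction are illustrative choices):

```python
# Sketch (ours, assuming SciPy): one orientation update combining gyro
# integration, accelerometer inclination, and SLERP with a Gaussian weight.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

G = 9.81  # gravitational acceleration, m/s^2

def zero_point(static_gyro_samples):
    """Zero point: subject static, facing the initial direction; the mean gyro
    reading becomes the new bias estimate (and the azimuth is reset to zero)."""
    return np.mean(static_gyro_samples, axis=0)

def fuse_step(q_prev, gyro_rad_s, acc_m_s2, dt, sigma=0.4):
    # Dynamic path: integrate the body-frame angular rate onto the previous state.
    q_gyro = q_prev * Rotation.from_rotvec(gyro_rad_s * dt)
    # Static path: the accelerometer as an inclinometer (elevation/roll only);
    # gravity carries no heading, so the azimuth of the gyro path is reused.
    acc_n = acc_m_s2 / np.linalg.norm(acc_m_s2)
    tilt = Rotation.align_vectors([[0.0, 0.0, 1.0]], [acc_n])[0]
    yaw = q_gyro.as_euler("ZYX")[0]
    q_acc = Rotation.from_euler("z", yaw) * tilt
    # Gaussian bell weight: trust the accelerometer only near |a| = 1 g.
    w = float(np.exp(-(((np.linalg.norm(acc_m_s2) - G) / (sigma * G)) ** 2)))
    return Slerp([0.0, 1.0], Rotation.concatenate([q_gyro, q_acc]))(w)
```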
2.2 Evaluation of the CUELA Inertial System

To evaluate the inertial-sensor-based CUELA system, it is compared with the optical motion tracking system Vicon in a laboratory environment. Optical motion tracking systems excel in accuracy and are often used for stationary motion tracking and for the evaluation of other motion tracking systems [12, 13]. As the degree of freedom of leg
movement is restricted in comparison to other joints and segments of the human body, we decided to start the evaluation of the motion analysis algorithm at the lower extremities. Eleven healthy subjects voluntarily participated in the evaluation study. Their anthropometric characteristics, given in Table 1, varied intentionally to ensure that the human model works independently of the anthropometry.

Table 1. Anthropometric characteristics of the study participants

| | Female (n = 5), Mean (SD) | Male (n = 6), Mean (SD) | Total (n = 11), Mean (SD) | Min | Max |
| Age (years) | 28.2 (3.5) | 39.6 (7.8) | 34.4 (8.4) | 24 | 49 |
| Weight (kg) | 60.6 (8.2) | 82.5 (7.5) | 72.5 (13.6) | 48 | 91 |
| Height (cm) | 167.8 (4.5) | 179.0 (6.2) | 173.9 (7.9) | 164 | 190 |
Our Vicon setup tracks passive markers with twelve cameras at a frequency of 100 Hz. Twenty reflective markers were placed on the lower extremities of the subjects, following the Plug-in-Gait marker placement protocol; the positions are at the feet, knees and hips. The Vicon BodyBuilder tool is used to compute the corresponding joint center points based on the marker set. All subjects were also equipped with five IMUs at the following positions: frontally at the left and right lower leg as well as the left and right upper leg, and at the back at hip height (see Fig. 2). The segment lengths of each subject were adapted in our human model using the mean segment lengths measured with Vicon BodyBuilder.

The subjects were asked to perform different standardized movements: standing upright, stepping to the front with alternating legs and to the left and right side, squatting on the ground in different ways, climbing stairs and a ladder up and down, and the turn on point (see Fig. 3). All movements were repeated three times. Some of the movements needed an additional setup. A ladder was put into the measurement field, and the subjects were asked to climb it until their hip reached a height of about two meters, so as not to exceed the upper bound of the measurement volume. Climbing stairs and the turn on point were combined into one exercise: a stairway with three steps up and down was put into the measurement field, and the subjects were asked to walk the stairs up and down, turn on point, walk the stairs back and turn again, three times.

Both systems recorded the sensor readings synchronously and continuously; the time for changing the measurement setup was not taken into account, and the adjusted measurement time was about 2.7 minutes. Zero points were set only before and after a subject performed a task block. To generate comparable data, both systems determined the joint center points of the ankle, knee and hip on the left and right side. As the CUELA model does not capture translation, the translational part of the movements was eliminated by a customized BodyBuilder script. Because the two human models differ in their complexity, the joint center points turned out not to be an adequate comparison criterion: offsets between corresponding joint center points could not be removed, as their direction is not constant during 3D movement.
Fig. 2. Subject with CUELA inertial (left) and additionally with Vicon markers (middle - right)
Fig. 3. Subjects performing the tasks step to front, to the side, climbing stairs and ladder
For that reason, we took the left and right knee angles according to the neutral zero method, and the hip azimuth angle, as comparison criteria. The knee angle is computed as the angle enclosed by the vectors pointing from the knee to the ankle and hip joint center points. The hip azimuth angle is computed from the vector pointing from the right to the left hip joint center. The root mean square error (RMSE) of the time course of the angles is used as an objective quality criterion and was computed for all subjects.
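Both criteria are simple to state in code (our sketch; the 0°-at-straight-leg convention follows the neutral zero method as read from the plots):

```python
# Sketch (ours): knee angle from joint centers and RMSE against the Vicon
# reference time course.
import numpy as np

def knee_angle_deg(hip, knee, ankle):
    u = np.asarray(hip) - np.asarray(knee)
    v = np.asarray(ankle) - np.asarray(knee)
    cos_enclosed = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    enclosed = np.degrees(np.arccos(np.clip(cos_enclosed, -1.0, 1.0)))
    # Straight leg: vectors are antiparallel (enclosed angle 180°), i.e. 0° flexion.
    # The 3D angle is confined to 0°..180°, so hyperextension cannot read as
    # negative; this is the limitation discussed in the Results section.
    return 180.0 - enclosed

def rmse(cuela_deg, vicon_deg):
    d = np.asarray(cuela_deg, float) - np.asarray(vicon_deg, float)
    return float(np.sqrt(np.mean(d ** 2)))
```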
3 Results

Figure 4 shows the time course of the knee angles of one subject during the stair climbing task with turn on point, measured by Vicon and CUELA. The Vicon curve is largely covered by the CUELA curve. The neutral zero method defines the knee angle as follows: the transition point from knee flexion to extension corresponds to the neutral knee position and is defined as 0°; in the knee extension case, the knee angle is defined as negative. This is problematic in our way of calculating the knee angle, as the angle between two vectors in 3D is only defined in a range of 0° to 180°. This effect is observable in this example in the left knee angle determined by CUELA.
Fig. 4. Knee angles during stair climbing and turn on point of the left and right knee
The corresponding hip azimuth angle is shown in Figure 5. Again, the CUELA curve largely covers the Vicon curve, even though the azimuth angle is determined by angular rate integration. The six plateaus with small amplitudes represent the hip rotation while climbing the stairs up and down; the plateau transitions correspond to the 180° turn on point behind the steps.
Fig. 5. Hip azimuth angle during stair climbing and turn on point
For an objective analysis of the difference between the Vicon and CUELA measurements, the RMSE of each subject and task is computed for the knee and hip azimuth angles, using Vicon as the reference. The mean RMSE over all subjects is shown in Table 2.

Table 2. Mean RMSE values of CUELA inertial using Vicon as reference

| Task | Knee angle, mean RMSE in degrees (SD) | Hip azimuth angle, mean RMSE in degrees (SD) |
| squatting | 4.6 (±1.8) | 3.4 (±1.8) |
| step to front | 5.5 (±1.3) | 5.8 (±2.6) |
| step to side | 5.7 (±2.1) | 7.1 (±2.3) |
| ladder climbing | 5.7 (±1.7) | 5.0 (±1.7) |
| stair climbing and turn by 180° | 6.4 (±1.4) | 6.4 (±2.3) |
The mean RMSE values of the knee angles range from 4.6° (squatting) to 6.4° (stair climbing and turn by 180°). For the hip azimuth angle they range from 3.4° (squatting) to 7.1° (step to the side).
4 Discussion
In the literature, an RMSE between 1° and 6° is often reported for inertial-sensor-based motion tracking systems [7, 14, 15, 16]. Most of these systems use complementary sensors to increase the accuracy of the gyroscope and accelerometer measurement. Our results, in the range of 3.4° to 7.1°, are comparable without the use of complementary sensors. In our evaluation study we compared both systems in general. A lot of factors
can therefore influence our results besides the sensor and algorithm accuracy: the markers and IMUs can generate movement artifacts independently of each other, the human models computing the joint center points differ in their complexity, and the calculation of knee angles from vectors in 3D is problematic, to name some external impacts. Reducing the complexity of the comparison may improve our results by eliminating such external disturbances; for example, fixing the Vicon markers directly to the IMUs would allow measuring the IMU movement independently of movement artifacts and of the different human body models. The calculation of common medical angles based on the body segment quaternions needs investigation to get a better basis for comparing both systems independently of gimbal lock. Introducing zero points essentially increased the accuracy of the azimuth angle determination and of the gyroscope drift compensation; the correlation between the number of zero points and the long-term accuracy needs to be investigated in the future.
5 Conclusion

For the ergonomic field analysis of human motion, we developed an inertial-sensor-based human motion capturing system. It captures movement without using mechanical components, magnetometers or other additional components, to allow measurements at various real workplaces. Zero points are used to reinitialize the orientation computation in order to reduce gyroscope drift and its consequences. In this evaluation study, we analyzed the motion of the lower extremities and compared it to the optical motion tracking system Vicon, using the mean RMSE values of the knee and hip azimuth angles as quality criteria. The resulting RMSE values are somewhat higher than, but in the range of, published evaluation data of comparable systems, and they are expected to become smaller with further optimization. The CUELA inertial system, based only on accelerometers and gyroscopes, is a viable starting point for the long-term analysis of human motion at real workplaces.
References 1. Ellegast, R.P., Kupfer, J.: Portable posture and motion measuring system for use in ergonomic field analysis. In: Ergonomic Software Tools in Product and Workplace Design, pp. 47–54 (2000) 2. Hermanns, I., Raffler, N., Ellegast, R.P., Fischer, S., Göres, B.: Simultaneous field measuring method of vibration and body posture for assessment of seated occupational driving tasks. International Journal of Industrial Ergonomics 38, 255–263 (2008) 3. Ellegast, R., Hermanns, I., Schiefer, C.: Workload Assessment in Field Using the Ambulatory CUELA System. In: Duffy, V.G. (ed.) ICDHM 2009. LNCS, vol. 5620, pp. 221–226. Springer, Heidelberg (2009) 4. Glitsch, U., Ottersbach, H.J., Ellegast, R.P., Schaub, K., Franz, G., Jäger, M.: Physical workload of flight attendants when pushing and pulling trolleys aboard aircraft. Int. Journal of Ind. Ergonomics 37, 845–854 (2007)
5. Ditchen, D., Ellegast, R.P., Herda, C., Hoehne-Hückstädt, U.: Ergonomic intervention on musculoskeletal discomfort among crane operators at waste-to-energy-plants. In: Bust, P.D., McCabe (eds.) Contemporary Ergonomics, pp. 22–26. Taylor & Francis, London (2005) 6. Weber, B., Hermanns, I., Ellegast, R., Kleinert, J.: A person-centered measurement system for quantification of physical activity and energy expenditure at workplaces. In: Karsh, B.T. (ed.) EHAWC 2009. LNCS, vol. 5624, pp. 121–130. Springer, Heidelberg (2009) 7. Plamondon, A., Delisle, A., Larue, C., Brouillette, D., McFadden, D., Desjardins, P., Larivière, C.: Evaluation of a hybrid system for three-dimensional measurement of trunk posture in motion. Applied Ergonomics 38, 697–712 (2007) 8. Yun, X., Bachmann, E., Kavousanos-Kavousanakis, A., Yildiz, F., McGhee, R.: Design and implementation of the MARG human body motion tracking system. In: International Conference on Intelligent Robots and Systems, vol. 1, pp. 625–630 (2004) 9. Bachmann, E., Yun, X., Peterson, C.: An investigation of the effects of magnetic variations on inertial/magnetic orientation sensors. In: IEEE International Conference on Robotics and Automation, vol. 2, pp. 1115–1122 (2004) 10. Vicon, http://www.vicon.com 11. Yun, X., Bachmann, E., McGhee, R.: A Simplified Quaternion-Based Algorithm for Orientation Estimation From Earth Gravity and Magnetic Field Measurements, vol. 57(3), pp. 638–650 (2008) 12. Bergmann, J.H., Mayagoitia, R.E., Smith, I.C.: A portable system for collecting anatomical joint angles during stair ascent: a comparison with an optical tracking device. Dynamic Medicine 8, 3 (2009) 13. Mayagoitia, R.E., Nene, A.V., Veltink, P.H.: Accelerometer and rate gyroscope measurement of kinematics: an inexpensive alternative to optical motion analysis systems. Journal of Biomechanics 35(4), 537–542 (2002) 14. Sabatini, A.: Quaternion-based extended Kalman filter for determining orientation by inertial and magnetic sensing. IEEE Transactions on Biomedical Engineering 53(7), 1346–1356 (2006) 15. Foxlin, E., Harrington, M., Altshuler, Y.: Miniature 6-DOF Inertial System for Tracking HMDs. In: SPIE, Orlando, FL, vol. 3362 (1998) 16. Bachmann, E.R.: Inertial and Magnetic Tracking of Limb Segment Orientation for Inserting Humans into Synthetic Environments. Naval Postgraduate School, United States Navy (2000)
Application of Human Modeling in Multi-crew Cockpit Design

Xiaohui Sun, Feng Gao, Xiugan Yuan, and Jingquan Zhao

School of Aeronautical Science and Engineering, Beihang University, 37 Xueyuan Road, Beijing
[email protected], {gaofeng,yuanxg}@buaa.edu.cn
Abstract. Based on the needs of multi-crew cockpit ergonomic design, we set up a parameterized digital human model. By virtually driving the digital human model to operate the devices of the cockpit, potential conflicts in the process of multi-crew coordination were identified, and solutions for the device layout were proposed. This method takes human factors into account early in multi-crew cockpit design, and design efficiency is improved greatly.

Keywords: Human modeling, Cockpit design, Crew Coordination.
1 Introduction
The tasks in a multi-crew cockpit are not only completed by individual pilots but depend even more on crew coordination. In the coordination process, the diversity of the crew and of their cognition causes all kinds of conflict, some of which may affect the safety of the airplane. Statistics show that aircraft accidents caused by coordination problems account for more than 70% of all accidents [1]. Most current human-factors studies analyze the field of vision in the cockpit, the working area, the arrangement of equipment and the comfort of the working position [2-4], but studies on conflicts in the multi-crew cockpit are scarce. Therefore, to improve flight safety and avoid conflict, it is necessary to analyze the working efficiency of coordination at the cockpit concept-design stage using digital human models.
2 Research on Human Modeling
2.1 Human Statistics
The accuracy of the human model's dimensions is an important factor in assessing working efficiency. A digital human model based on anthropometric statistics can resolve this accuracy issue. From the digital human body, a virtual working environment is built for simulation, and this virtual working environment supports the analysis of working efficiency with high credibility [4].
Human body dimensions include static dimensions and dynamic dimensions. The static dimensions of the human model in this paper follow the “Chinese male pilots' measurement” standard (GJB 4586-2003). Twenty-five measurements related to working efficiency were used, including limb dimensions (e.g., stature, sitting height and arm length) and dimensions of specific points (e.g., the distance between the eyes and the breadth across the hips). The dynamic ranges were defined from the literature [5-8] on human joint range of motion, yielding dynamic-range data for 28 body parts (e.g., the neck, chest, shoulders and elbows).
2.2 Digital Human Model
The model adopted a surface-model method, dividing the human body into a skeleton layer and a segment layer, with the segment layer overlying the skeleton layer. The skeleton layer conforms to the human topological structure and can express body information precisely: adjusting each joint's position drives the segment layer to change accordingly, so the movement of the digital human model is conveniently controlled.
Fig. 1. Human skeleton model
When the human skeleton model was set up, the composition of the skeleton and the function of every joint were considered. The skeleton was divided into 66 segments and 65 joints, comprising 131 degrees of freedom. Given the features of the pilot's job and the needs of cockpit analysis, the fine movements of the trunk and hands of the digital human model had to be simulated; the trunk and the hands were therefore divided into 50 segments, matching the real composition of the human body. The human skeleton model is shown in Fig. 1.
2.3 Posture Control of Human Model
In the human-machine ergonomic evaluation of a cockpit, the human model must be set to a specific posture according to the working condition. To realize this, the paper determined the positions of the model's segments by specifying joint angles, i.e., forward kinematics. A static posture depends on the angles of the whole body's 65 joints, which comprise 131 degrees of freedom; once all joint angles are given, the posture is completely determined. In practice, joint-angle data are obtained in two ways: the first derives each joint angle from the pilots' specific tasks and the cockpit arrangement; the second acquires the angles from a motion capture system (MCS). Because the first method requires many angles to be decided manually, a Vicon system was used to acquire the pilots' movement information in this paper. The captured data points are shown in Fig. 2. The acquired data were corrected to produce a posture file containing the spatial coordinates and the main joint angles; the human model posture is then created by reading this file.
Fig. 2. The captured data points of the MCS
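As a minimal sketch of the forward-kinematics posture control described above (not code from the paper), the fragment below builds a tiny planar joint chain, clamps each requested angle to the joint's dynamic range in the spirit of Section 2.1, and computes the implied segment positions. The chain, joint names, segment lengths and range limits are illustrative assumptions; the paper's actual model has 65 joints and 131 degrees of freedom.

```python
import math

class Joint:
    """One rotational DOF of the skeleton layer (planar simplification)."""
    def __init__(self, name, seg_length, rom_deg, parent=None):
        self.name, self.seg_length, self.parent = name, seg_length, parent
        self.rom_deg = rom_deg        # (min, max) dynamic range, degrees
        self.angle_deg = 0.0          # angle relative to the parent segment

    def set_angle(self, angle_deg):
        """Clamp the requested angle to the joint's dynamic range."""
        lo, hi = self.rom_deg
        self.angle_deg = max(lo, min(hi, angle_deg))

    def distal_point(self):
        """Forward kinematics: world position of this segment's far end."""
        chain, j = [], self
        while j is not None:
            chain.append(j)
            j = j.parent
        x = y = theta = 0.0
        for j in reversed(chain):                 # walk root-first
            theta += math.radians(j.angle_deg)
            x += j.seg_length * math.cos(theta)
            y += j.seg_length * math.sin(theta)
        return x, y

# Hypothetical shoulder-elbow chain; lengths (m) and ROMs are placeholders.
shoulder = Joint("shoulder", 0.30, (-60.0, 180.0))
elbow = Joint("elbow", 0.27, (0.0, 145.0), parent=shoulder)

posture = {"shoulder": 40.0, "elbow": 70.0}       # as read from a posture file
for j in (shoulder, elbow):
    j.set_angle(posture[j.name])
print(elbow.distal_point())                       # wrist position implied by angles
```

Setting only joint angles, as here, is what makes the posture fully determined once all 131 degrees of freedom are given.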
3 The Application Research of Human Model
3.1 The Analysis of Human Factors in the Multi-crew Cockpit
In the multi-crew cockpit, crew coordination is inevitably needed to complete tasks, which makes human-factors issues more complex than in a single-pilot cockpit. The coordination process is therefore the main object of ergonomic analysis and evaluation for the multi-crew cockpit. Olson (1999) [9] defined coordination as follows: coordination can be viewed either as a cooperative process which requires agents to flexibly and adaptively work together towards goal attainment, or as a conflict-resolution process in which interactive participation is not required; instead the focus is on detecting and resolving conflicting goals and actions. Multi-crew coordination is therefore analyzed from two aspects: (1) the coordination process; (2) the conflicts during coordination.
From an analysis of multi-crew coordination, Grubb and Simon (2002) [10] proposed the Crew Coordination Model (CCM). In this model, crew cooperation occurs on the following occasions: giving input when the plan changes, informing when situation awareness changes, giving input when the aircraft state changes, and supervising while an operation is performed. The cooperation can be divided into three modes (taking a two-pilot cockpit as an example): one pilot operates while the other monitors; the two monitor interactively; the two operate interactively.
Regarding multi-crew conflict, following Jehn's (1995) [11] view, conflict is divided into task conflict, relationship conflict and process conflict. In the multi-crew cockpit, task conflict refers to inconsistency in the content of the crew's tasks; relationship conflict refers to incompatible interpersonal relations between crew members; process conflict refers to disagreement over how to complete the task, especially over monitoring and operation during cooperation. Whatever their content, conflicts affect the crew's operating performance and increase the crew's workload. This paper mainly studies the ergonomics of process conflict, which was analyzed and evaluated with the digital human model.
3.2 The Application Example of Human Model
Take a two-person crew as an example, with the cooperation mode in which one pilot operates while the other monitors. The coordinated task was that the crew needed to re-enter data when situation awareness changed. In this coordination mode, the right-seat pilot is in charge of entering the new flight data, operating the EICAS and controlling the devices on the overhead panel, while the left-seat pilot is in charge of monitoring the outside view, the right-seat pilot's operating results and the changes on the panel.
In the multi-crew cockpit design process, the process conflict in this coordination mode was analyzed and assessed with the human model, as shown in Fig. 3; the human model in Fig. 3 is the 50th-percentile model. The assessments included: whether the position and arrangement of the monitor lay in the best field of vision; whether the controls operated by the right-seat pilot were inside the field of vision; and whether the results of an operation were blocked while the right-seat pilot was operating. The vision-analysis interface is shown in Fig. 4.
Fig. 3. The analysis of the multi-crew cockpit design
Fig. 4. The analysis interface of the vision
The accessibility analysis of the right-seat pilot assessed whether the controls were within the pilot's reach envelope; the accessibility-analysis interface is shown in Fig. 5. In this analysis, the posture and movement control of the human model was accomplished through the model's control interface (Fig. 6).
Fig. 5. The analysis interface of the accessibility
Fig. 6. The control interface of human model
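The two assessments just described reduce, in their simplest geometric form, to a cone test for the field of vision and a distance test for accessibility. The sketch below is a hedged illustration only: the cone half-angle, the arm length and all cockpit coordinates are invented, and a production DHM tool would use far richer vision and reach models.

```python
import math

def in_field_of_vision(eye, gaze_dir, target, half_angle_deg=30.0):
    """True if target lies within a viewing cone around the gaze direction."""
    v = [t - e for t, e in zip(target, eye)]
    dot = sum(a * b for a, b in zip(v, gaze_dir))
    norm_v = math.sqrt(sum(a * a for a in v))
    norm_g = math.sqrt(sum(a * a for a in gaze_dir))
    angle = math.degrees(math.acos(dot / (norm_v * norm_g)))
    return angle <= half_angle_deg

def reachable(shoulder, target, arm_length_mm):
    """True if target is inside a spherical reach envelope of the arm."""
    d = math.sqrt(sum((t - s) ** 2 for t, s in zip(target, shoulder)))
    return d <= arm_length_mm

# All coordinates (mm) are hypothetical cockpit positions, not design data.
eye, gaze = (0.0, 0.0, 1100.0), (1.0, 0.0, -0.2)
eicas_display = (700.0, -300.0, 950.0)
print(in_field_of_vision(eye, gaze, eicas_display))                     # vision check
print(reachable((0.0, -200.0, 900.0), eicas_display, arm_length_mm=740.0))  # reach check
```

Running both checks for every control a pilot must see or touch is the automated counterpart of the manual assessment list above.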
4 Summary
By analyzing the human factors of multi-crew cockpit design with a digital human model, this paper evaluated the process conflict in the multi-crew cockpit from the aspects of vision and accessibility. Based on the evaluation results, advice was given on multi-crew cockpit design, so that conflict in the cockpit can be reduced and the safety and efficiency of multi-crew flight operation increased. However, the conflicts in a multi-crew cockpit are influenced by many factors, and this paper analyzed only the process conflict; the influence of task conflict and relationship conflict on multi-crew cockpit design needs further research.
References
[1] Orlady, H.W., Orlady, L.M.: Human Factors in Multi-crew Flight Operations (1999)
[2] Wang, L.J., Yuan, X.G.: Geometrical dummy used for ergonomic evaluation on early stage cockpit conceptual design. J. Space Medicine & Medical Engineering 14(6), 439–443 (2001)
[3] Su, R.-E., Xue, H.-U., Song, B.-F.: Ergonomic virtual assessment for cockpit layout of civil aircraft. J. Systems Engineering-Theory & Practice 29(1), 185–191 (2009)
[4] Libo, Z.: Research on technology and visualization of ergonomics analysis and assessment for aircraft cockpit. Beihang University, Beijing (2008)
[5] Zhu, D., Zheng, L.: Human Anatomy. Fudan University Press, Shanghai (2002)
[6] GJB 2873-1997: Human engineering design criteria for military equipment and facilities. Standards Press of China, Beijing (1997)
[7] GJB/Z 131-2002: Human engineering design handbook for military equipment and facilities. Standards Press of China, Beijing (2002)
[8] Zheng, X., Jia, S., Gao, Y., et al.: The Modern Biodynamics of Movement. National Defense Industry Press, Beijing (2002)
[9] Olson, W.A.: Supporting Coordination in Widely Distributed Cognitive Systems: The Role of Conflict Type, Time Pressure, Display Design, and Trust. University of Illinois (1999)
[10] Grubb, G., Simon, R.: Development of candidate crew coordination training methods and materials. ARI Contractor Report (2002)
[11] Jehn, K.A.: A multi-method examination of the benefits and detriments of intra-group conflict. J. Administrative Science Quarterly 40, 530–557 (1995)
A Biomechanical Approach for Evaluating Motion Related Discomfort: Illustration by an Application to Pedal Clutching Movement
Xuguang Wang1, Romain Pannetier1,2, Nagananda Krishna Burra1, and Julien Numa1
1 Université de Lyon, F-69622, Lyon, France; IFSTTAR, UMR_T9406, LBMC, Université Lyon 1
2 Renault SAS, Service Facteur Humain, Conduite et Vie à Bord, 1 avenue du Golf, Guyancourt, France
[email protected] Abstract. In this paper, a motion related discomfort modelling approach based on the concept of “less constraint movement” has been proposed and illustrated by a case study of clutching pedal movements. Using a multi-adjustable car mock-up, 6 existing pedal configurations were tested by 20 subjects (5 young and 5 older males, 5 young and older females) and compared with those freely adjusted pedal positions, called ‘less constraint’ configurations. From questionnaire and motion analysis of the experimental data, it was observed that pedal resistance had a dominant effect on discomfort perception. The pedal position adjustment seemed to mainly reduce the discomfort at the beginning of travel. The ergonomic criterion for pedal design should therefore take into account two main factors: 1/ pedal resistance, 2/ a good trade-off between pedal position at the beginning of travel and its end position. The movements corresponding to less constraint configurations will be used as reference data for defining ergonomic criterion. In particular, The focus should be put on the hip, knee and ankle joint torques at the end of travel and the joint angles at the beginning and end of pedal travel. Keywords: Discomfort, Clutch pedal, Biomechanics, Ergonomics, Task-oriented movement.
1 Introduction
Digital human simulation is believed to be one of the key technologies for proactive design. Almost all components of a product can be digitalized using today's CAD (Computer-Aided Design) technologies. Being able to evaluate customer satisfaction through digital human simulation at the early stage of product design therefore becomes an absolute necessity for reducing the design cycle. It can easily be understood that such an advanced digital human simulation tool can be applied in any large industrial domain where the designed system is addressed to a high number of users (clients/operators) and its physical mock-up is difficult to reproduce. However, although DHMs have existed for over 30 years, the current models do not
fulfil all the requirements desired by the users; refer to the handbook of Digital Human Modeling (DHM) recently edited by Duffy (2008) for an exhaustive review of DHM research and development. The main reasons are, on the one hand, high expectations from users and, on the other hand, the high complexity of human modelling. Despite recent progress in motion simulation, current DHMs are mainly limited to geometric and kinematic representations of humans, which are enough for a visually human-like simulation of motions. But objective discomfort criteria for evaluating a task are still missing, making it difficult for a design engineer to judge whether the designed product will be appreciated in terms of ease-of-use and comfort. Discomfort is induced by interactions with the environment and by internal biomechanical constraints affecting the musculoskeletal system. We believe that a simulation of dynamics and muscular activity is required in order to evaluate discomfort. Meanwhile, functional human data, such as joint maximum range of motion and joint muscle strength, are necessary for validating biomechanical models and for defining task-related discomfort criteria. However, few consistent and reliable data are available that can be integrated into a DHM. Therefore, a three-year European collaborative research project named DHErgo (Digital Humans for Ergonomic design of products, www.dhergo.org) was launched in 2008 to develop dynamic, musculoskeletal human models for ergonomic applications.
The objective of this paper is to present the biomechanical approach adopted in DHErgo for evaluating motion-related discomfort. The approach is illustrated with one of the three case studies performed in this project: the lower-limb movement when operating a clutch pedal.
2 General Motion Related Discomfort Modelling Approach
An integrated approach has been adopted for modelling both motion and discomfort (Fig. 1). An objective evaluation of discomfort requires a realistic motion simulation, whereas a realistic motion simulation has to take discomfort criteria into account. Typically, our research begins with an experiment on voluntary subjects under a well-planned experimental design. For each movement to be studied, a motion capture system (e.g., Vicon) measures the trajectories of markers attached to the body surface. Meanwhile, external contact forces are also required if joint forces and torques are to be estimated. An individual digital mannequin can be defined by superposing a digital model onto the subject's photos in a calibrated space (see, for instance, Seitz and Bubb, 2001). In DHErgo, to improve the accuracy of the individualized model, anatomical landmarks are also palpated (Robert et al., 2010). For musculoskeletal models, an attempt has also been made to scale muscle strength to the measured maximum joint strength (Pannetier et al., 2011). With the marker trajectories, the initial digital anthropometric model and the external forces as inputs, the movement can be reconstructed kinematically using a global optimization procedure (Ausejo et al., 2006), and joint forces and torques can then be estimated using an inverse dynamic procedure. For instance, inverse dynamic analysis was recently applied to truck (Monnier et al., 2009, 2010) and car ingress/egress movements (Causse et al., 2009, 2011). Using a static optimisation method, muscle forces can also be estimated (see, for instance, Fraysse et al., 2009). Then, motion analysis can be performed in order to: 1/ identify and understand motion control strategies; 2/ structure motion data according to the subject's characteristics, the task and the motion control strategies; 3/ understand the sources of discomfort and search for possible relationships between discomfort and biomechanical parameters. Moreover, some basic biomechanical data, such as joint maximum range of motion and joint strength, are required for motion simulation and discomfort evaluation. Meanwhile, specific efforts are needed to develop efficient motion adaptation algorithms for adapting an existing motion to a new simulation scenario.
Fig. 1. Integrated approach for simulating motion and discomfort (flow diagram: task-specific experiments and basic data collection feed individualized multi-body and musculoskeletal models, motion reconstruction, motion and discomfort analysis, and finally predictive models of motion and discomfort)
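To make the inverse dynamic procedure concrete, the sketch below computes quasi-static joint torques for a planar lower limb as the moment of the measured pedal force about each joint. Gravity and inertial terms are neglected and every number is a placeholder; the project's actual reconstruction (Ausejo et al., 2006) is a full 3D optimization over the whole body, so treat this only as the shape of the computation.

```python
def joint_torque_2d(joint_xy, force_xy, contact_xy):
    """Quasi-static torque at a joint: moment of the contact force about it.
    2D cross product r x F; segment weights and accelerations are ignored."""
    rx = contact_xy[0] - joint_xy[0]
    ry = contact_xy[1] - joint_xy[1]
    return rx * force_xy[1] - ry * force_xy[0]

# Hypothetical planar geometry (m) and pedal reaction force (N).
pedal = (0.80, 0.20)
pedal_force = (-120.0, 60.0)   # force exerted by the pedal on the foot
for name, joint in [("ankle", (0.75, 0.25)),
                    ("knee", (0.45, 0.45)),
                    ("hip", (0.00, 0.55))]:
    print(name, round(joint_torque_2d(joint, pedal_force, pedal), 1), "N·m")
```

In the full pipeline these torques would be computed at every captured frame, giving the joint-torque time histories that the discomfort analysis later draws on.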
It should be pointed out that existing ergonomic assessment methods such as OWAS (Karhu et al., 1997), RULA (McAtamney and Corlett, 1993), REBA (Hignett and McAtamney, 2000) and OCRA (Occhipinti, 1998) were initially developed for the observation of working postures in industry. Only a rough estimation of posture is usually required, either from direct visual estimation or from recorded video. In addition, the postural evaluation criteria were decided by a group of ergonomics experts. These methods can certainly be helpful for detecting the main risk factors of a workplace, but they can hardly be used for the ergonomic evaluation of a product such as a vehicle. Any task-oriented motion is more or less constrained by the environment, and a design engineer often needs to choose one solution among several alternatives. If we accept the idea that better comfort is obtained when people can make appropriate adjustments by themselves, then less constrained motions can be obtained experimentally; for this, a multi-adjustable mock-up is usually required. These less constrained motions, also called “neutral” motions by Dufour and Wang (2005), can then serve as reference data against which a proposed solution is compared. The proposed discomfort modelling approach for the ergonomic assessment of a task-related motion can therefore be summarized as follows:
1. Identify the main critical design parameters that may affect discomfort perception and develop working hypotheses.
2. Plan the experiment. In this step, one has to propose an appropriate experimental design with identified independent variables related to the main critical parameters defined in Step 1. Appropriate apparatus should be chosen to manipulate the independent variables and to measure the dependent responses. For product design optimisation, a multi-adjustable experimental mock-up is usually needed, with facilities allowing the participants to choose their preferred adjustments easily. In addition to motion capture with measurement of external contact forces, a discomfort questionnaire is the only direct way to “measure” the participants' subjective feelings.
3. Conduct the experiment with a sample of voluntary participants.
4. Process and analyse the data. Three types of analysis have to be performed, mainly by comparing imposed and freely adjusted configurations: a) questionnaire analysis to understand the effects of the independent variables on perceived discomfort (see the sketch after this list); b) motion analysis to identify the main motion characteristics (keyframes, motion control strategies, etc.); c) a search for possible relationships between perceived discomfort and the biomechanical parameters from the motion analysis.
5. Define ergonomic criteria based on the data analysis for implementation in a DHM tool.
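As a toy illustration of step 4a above, the imposed-versus-free comparison can begin with a paired difference of discomfort ratings for one configuration. The rating scale and all values below are fabricated; the paper does not specify its questionnaire arithmetic.

```python
import statistics

def paired_mean_diff(imposed, free):
    """Mean discomfort difference, imposed minus freely adjusted (step 4a)."""
    return statistics.mean(i - f for i, f in zip(imposed, free))

# Hypothetical per-subject discomfort ratings on an arbitrary 1-10 scale.
imposed_ratings = [6, 5, 7, 4, 6]   # one imposed pedal configuration
free_ratings = [4, 4, 5, 3, 5]      # same subjects, freely adjusted pedal
print(paired_mean_diff(imposed_ratings, free_ratings))  # > 0: imposed rated worse
```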
3 Application to Pedal Clutching Movement
In the DHErgo project, the pedal clutching movement was chosen as the first case study to check how the proposed approach could help improve pedal design when using a DHM tool.
3.1 Data Collection
Two types of data were collected: 1/ data characterizing each participant in terms of anthropometry, ranges of motion, joint strength and joint torque perception, and 2/ clutching motions for several pedal configurations proposed by the three car manufacturers involved in DHErgo. A pedal configuration is mainly characterized by seat height, pedal resistance at the end of travel, pedal travel length and travel inclination angle (Wang et al., 2000). Six different pedal configurations were tested. A less constrained configuration with an adjustable pedal position was also tested, allowing the effects of pedal position to be investigated; for this, a multi-adjustable experimental mock-up similar to that used in a past study (Wang et al., 2000; Fig. 2) was used. Twenty subjects participated in the experiment: they were divided into 4 groups (young male, young female, older male, older female), each with statures close to its respective average value. A large number of physical parameters related to clutch pedal operation were measured: whole-body motion with external surface markers, the three-axis force applied on the pedal, the pressure between subject and seat, and EMG signals of the main muscle groups of the lower limb. Meanwhile, discomfort feelings when clutching were
also collected through a questionnaire. Prior to the clutch pedal experiment, data characterizing each subject's physical capacity of the left lower limb were also collected (Fig. 2): joint strength and joint torque perception, and joint range of motion. All these data were used to estimate biomechanical parameters such as joint angles, joint torques and muscle forces, for a better understanding of the perceived discomfort when clutching and for proposing a predictive discomfort model of pedal clutching.
Fig. 2. Experimental set-up for the measurement of ankle joint maximum dorsiflexion (a), of the hip maximum flexion torque and four other intermediate force levels (b), and of the clutching pedal movements (c)
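One plausible way to combine the per-subject strength measurements with the estimated torques, sketched here under assumption since the paper does not give a formula, is to express each joint torque as a fraction of the subject's measured maximum, giving dimensionless utilization ratios comparable across joints and subjects.

```python
def torque_utilization(joint_torque_nm, max_strength_nm):
    """Fraction of the subject's maximum voluntary torque used at a joint."""
    return abs(joint_torque_nm) / max_strength_nm

# Placeholder values for one subject at the end of pedal travel (N·m).
estimated = {"hip": 55.0, "knee": 40.0, "ankle": 18.0}       # hypothetical
max_strength = {"hip": 160.0, "knee": 120.0, "ankle": 45.0}  # hypothetical
ratios = {j: torque_utilization(t, max_strength[j]) for j, t in estimated.items()}
print(ratios)   # a high ratio flags a candidate source of perceived discomfort
```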
3.2 Main Experimental Results
Here only the main results of the data analysis are presented, to illustrate how the proposed discomfort modelling approach was applied to the pedal clutching operation. More details will be presented elsewhere (Pannetier et al., 2011; Burra et al., 2011). In this case study, the responses to the questionnaire and the biomechanical parameters were analysed according to three independent variables:
• Subject group (Group or G): 4 groups according to age and gender
• Configuration (Config or C): 6 existing pedal configurations provided by three car manufacturers
• Type of configuration (ConfigType or T): imposed or freely chosen
Here we mainly focus on the comparison between imposed and free pedal configurations.
Range of Pedal Adjustment. After testing each of the imposed pedal configurations, the participants were asked to change only the pedal position, without modifying the other car parameters. Table 1 summarizes the pedal adjustment in x (longitudinal, + backwards), y (lateral, + towards the right) and z (vertical, + upwards) with respect to the six imposed configurations. The adjustments in the x, y and z directions were on average -8.8, -21.0 and -16.8 mm, respectively, and depended strongly on the pedal configuration. A subject-group effect was observed only in the x and z directions. Globally, the participants preferred to put the pedal slightly further forward (Δx = -8.8 mm on average), a little to the left (Δy = -21.0 mm on average) and slightly lower (Δz = -16.8 mm on average).
Table 1. Means and standard deviations of the adjustment in x (longitudinal, + backwards), y (lateral, + towards right) and z (vertical, + upwards) with respect to the six imposed pedal positions
Config   N     Δx (mm)              Δy (mm)             Δz (mm)
C1       20    -3.0±17.7            -41.0±18.8          -45.5±32.9
C2       20    -3.4±20.6            -4.2±12.7           -46.6±24.8
C3       20    -1.4±32.6            0.9±4.0             -43.5±30.8
C4       20    -3.9±24.8            -31.0±22.6          -4.4±25.7
C5       60    -14.9±18.0           -18.3±14.0          0.2±27.9
C6       20    -13.8±23.1           -37.9±16.3          4.6±36.4
Total    160   -8.8±22.5 C*,G*      -21.0±20.6 C***     -16.8±36.5 C***,G**

*p < 0.05, **p < 0.01, ***p < 0.001 (C: configuration effect; G: subject-group effect)
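The superscripts in Table 1 flag configuration (C) and subject-group (G) effects. An analysis of that shape could be run as sketched below with a two-way ANOVA; the long-format data frame is filled with fabricated noise (only the factor structure of 20 subjects by 6 configurations mirrors the experiment), so the printed p-values are meaningless placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
configs = ["C1", "C2", "C3", "C4", "C5", "C6"]
groups = ["young_male", "young_female", "older_male", "older_female"]

# Fabricated long-format records: one pedal x-adjustment per subject x config.
rows = []
for g in groups:
    for subj in range(5):                       # 5 subjects per group
        for c in configs:
            rows.append({"group": g, "config": c,
                         "dx_mm": rng.normal(-8.8, 22.5)})  # placeholder noise
df = pd.DataFrame(rows)

# Two-way ANOVA for configuration (C) and subject-group (G) main effects on dx.
model = ols("dx_mm ~ C(config) + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```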