Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison (Lancaster University, UK)
Takeo Kanade (Carnegie Mellon University, Pittsburgh, PA, USA)
Josef Kittler (University of Surrey, Guildford, UK)
Jon M. Kleinberg (Cornell University, Ithaca, NY, USA)
Friedemann Mattern (ETH Zurich, Switzerland)
John C. Mitchell (Stanford University, CA, USA)
Moni Naor (Weizmann Institute of Science, Rehovot, Israel)
Oscar Nierstrasz (University of Bern, Switzerland)
C. Pandu Rangan (Indian Institute of Technology, Madras, India)
Bernhard Steffen (University of Dortmund, Germany)
Madhu Sudan (Massachusetts Institute of Technology, MA, USA)
Demetri Terzopoulos (New York University, NY, USA)
Doug Tygar (University of California, Berkeley, CA, USA)
Moshe Y. Vardi (Rice University, Houston, TX, USA)
Gerhard Weikum (Max-Planck Institute of Computer Science, Saarbruecken, Germany)
3166
Matthias Rauterberg (Ed.)
Entertainment Computing – ICEC 2004 Third International Conference Eindhoven, The Netherlands, September 1-3, 2004 Proceedings
Springer
eBook ISBN: 3-540-28643-8
Print ISBN: 3-540-22947-7
©2005 Springer Science + Business Media, Inc.
Print ©2004 by International Federation for Information Processing, Laxenburg.
All rights reserved. No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher.
Created in the United States of America.
Visit Springer's eBookstore at http://ebooks.springerlink.com and the Springer Global Website Online at http://www.springeronline.com
Preface
The advancement of information and communication technologies (ICT) has enabled their broad use in the private and personal domain, and ICT-related industries are directing their business targets to home applications. Among these applications, entertainment will differentiate ICT applications in the private and personal market from those in the office. Comprehensive research and development on ICT applications for entertainment will be decisive for the promotion of ICT use in the home and in other places for leisure. So far, engineering research and development on entertainment has never been firmly established in the academic community. On the other hand, entertainment-related industries such as the video and computer game industries have grown rapidly over the last 10 years, and today the turnover of the entertainment computing business exceeds that of the movie industry. Entertainment robots are drawing the attention of young people, and the RoboCup event attracts more participants year by year. Entertainment technologies cover a broad range of products and services: movies, music, TV (including upcoming interactive TV), VCR, VoD (including music on demand), computer games, game consoles, video arcades, gambling machines, the Internet (e.g., chat rooms, board and card games, MUDs), intelligent toys, edutainment, simulations, sport, theme parks, virtual reality, and upcoming service robots.

The field of entertainment computing focuses on users' growing use of entertainment technologies at work, in school and at home, and on the impact of these technologies on their behavior. Nearly every working and living place has computers, and over two-thirds of children in industrialized countries have computers in their homes as well. All of us would probably agree that adults and children need to become competent users to be prepared for life and work in the future.
Children's increasing use of entertainment technologies in particular brings with it both the risk of possible harm and the promise of enriched learning, well-being and positive development. In the near future, digital technologies will become more powerful and affordable for all users at every level, in digital networks and in product offerings. An increasing number of people will be able to compile, program, edit, create and share content; as a result, they will gain more control over, and become more immersed in, media experiences. But beyond the technical challenges, the social implications for human behavior will be of the greatest importance. We need a media ecology movement to heighten consciousness and to fight the waste and pollution that the media produce. It is indeed a question of the mental environment of our children and future generations. The questions we must ask ourselves are: Do we give them a world that is challenging, stimulating, inspiring and really entertaining? Do we encourage their intelligence, creativity and curiosity? To address and hopefully answer these questions, and to advance the newly born area of entertainment technologies, it is important to build a good relationship between academia and industry, and to set up a task force. This was the main motivation that prompted the International Federation for Information Processing (IFIP) General Assembly, in August 2000, to approve the setting up of the Entertainment Computing
Specialist Group (SG16) under the auspices of IFIP and the Committee for Cooperation with Industry (CCI). The first efforts of SG16 were directed toward demonstrating that the subject was mature enough to attract the broad interest of the ICT community. For this purpose a technical event, the 1st International Workshop on Entertainment Computing (IWEC), was planned, and an IWEC Steering Committee was appointed (Bruce Blumberg from the MIT Media Lab, USA; Marc Cavazza from the University of Teesside, UK; Jaap van den Herik from the Universiteit Maastricht, The Netherlands; Tak Kamae from the Laboratories of Image Science and Technology, Japan; Donald Marinelli from Carnegie Mellon University, USA; Ryohei Nakatsu from ATR, Japan; Matthias Rauterberg from the Technische Universiteit Eindhoven, The Netherlands; and Demetri Terzopoulos from the University of Toronto, Canada). The first important opportunity came when IFIP TC13 on Human-Computer Interaction kindly offered a time slot for a first international panel on entertainment computing at the prestigious INTERACT 2001 conference in Tokyo, Japan, in July 2001. The IWEC Steering Committee accepted this kind offer to increase the visibility of SG16 and IWEC, and at the panel many participants showed interest in entertainment computing. The following year, the 1st International Workshop on Entertainment Computing (IWEC 2002) was successfully held at Makuhari, Japan, on May 14–17, 2002. It attracted over 100 participants, and over 60 papers were published in the proceedings by Kluwer (edited by Ryohei Nakatsu and Junichi Hoshino). IWEC 2002 featured many high-quality papers and several interesting technical demonstrations, clear evidence that entertainment computing had already become an important technical area.
At IWEC 2002 we had an extended SG16 meeting, at which it was agreed unanimously that the formation of a new technical committee (TC) on entertainment computing should be proposed formally to IFIP at the 2002 General Assembly in Montreal. Building on the success of IWEC 2002, SG16 organized the next event in the series, the International Conference on Entertainment Computing (ICEC 2003), which was held on May 8–10, 2003 at the Entertainment Technology Center of Carnegie Mellon University, Pittsburgh, USA. ICEC 2003 was also successful, with more than 100 attendees, 20 carefully selected papers, several prestigious keynote talks, and invited panels. All ICEC 2003 papers were accepted by ACM for inclusion in the ACM online digital library. To complete the first around-the-world cycle "Japan–USA–Europe", the 3rd International Conference on Entertainment Computing (ICEC 2004) was held in Europe, at the Technische Universiteit Eindhoven, during September 1–3, 2004. Around 150 attendees from academia and industry participated in this successful conference, whose program included 27 full papers. In several parallel sessions, full papers, short papers, posters, system demonstrations and industry exhibitions were presented. The program also included three well-received keynote talks, three specially invited topic talks, and an outstanding super-chess contest organized by Jaap van den Herik. For more information about ICEC 2004, see the conference homepage: http://www.icec.id.tue.nl/
For making ICEC 2004 such an outstanding event, we thank the following people who volunteered in its organization: Jaap van den Herik and Anton Nijholt as co-chairs, Jacques Terken as review chair, and Ben Salem as treasurer and chair of the organizing committee. We also thank all members of the different committees, in particular the long list of distinguished experts from all over the world on the scientific and industrial program committee, the several sponsors, all cooperating societies, and, last but not least, all researchers who submitted and presented their outstanding research results at ICEC 2004, documented in this book. We gratefully acknowledge their contributions, effort and valuable input.
Eindhoven, June 28, 2004
Matthias Rauterberg
Committees
Chair
Matthias Rauterberg (Technische Universiteit Eindhoven, The Netherlands)

Co-chairs
Jaap van den Herik (Universiteit Maastricht, The Netherlands)
Anton Nijholt (University of Twente, The Netherlands)

Review Chair
Jacques Terken (Technische Universiteit Eindhoven, The Netherlands)
Program Committee
Espen Aarseth (University of Copenhagen, Denmark)
Matt Adcock (CSIRO ICT Centre, Australia)
Samir Akkouche (Université Claude Bernard Lyon 1, France)
Elisabeth André (University of Augsburg, Germany)
Sebastiano Bagnara (Politecnico di Milano, Italy)
Christoph Bartneck (Technische Universiteit Eindhoven, The Netherlands)
Trevor Batten (Media Art, The Netherlands)
Franck Bauchard (Ministère de la Culture, France)
Maurice Benayoun (Université Paris 1, France)
Aude Billard (Swiss Federal Institute of Technology, Switzerland)
Mark Billinghurst (University of Canterbury, New Zealand)
Mats Björkin (Göteborg University, Sweden)
Edwin Blake (University of Cape Town, South Africa)
Don Bouwhuis (Technische Universiteit Eindhoven, The Netherlands)
Jonah Brucker-Cohen (Trinity College Dublin, Ireland)
Brad Bushman (University of Michigan, USA)
Marc Cavazza (University of Teesside, United Kingdom)
Liming Chen (Ecole Centrale de Lyon, France)
Adrian Cheok (National University of Singapore, Singapore)
Jeffrey Cohn (University of Pittsburgh, USA)
Roger Dannenberg (Carnegie Mellon University, USA)
John Debenham (University of Technology, Australia)
Jürgen Enge (Zentrum für Kunst und Medientechnologie, Germany)
Loe Feijs (Technische Universiteit Eindhoven, The Netherlands)
Sidney Fels (University of British Columbia, Canada)
Franz Fischnaller (University of Illinois at Chicago, USA)
Christian Freksa (University of Bremen, Germany)
Masahiro Fujita (SONY, Japan)
Catherine Garbay (CNRS, France)
Bill Gaver (Royal College of Art, United Kingdom)
Ian Gibson (Academy of Interactive Entertainment, Australia)
Andrew Glassner (Coyote Wind Studios, USA)
Martin Goebel (fleXilution, Germany)
Tom Gross (Bauhaus-University Weimar, Germany)
Reinder Haakma (Philips Research, The Netherlands)
Sture Hägglund (Linköping University, Sweden)
Michael Haller (Upper Austria University of Applied Sciences, Austria)
Dong-Han Ham (ETRI, Korea)
Goffredo Haus (State University of Milan, Italy)
Ernst A. Heinz (International University, Germany)
Michael Herczeg (University of Luebeck, Germany)
Jaap van den Herik (Universiteit Maastricht, The Netherlands)
Yibin Hou (Beijing University of Technology, China)
Hiroyuki Iida (University of Shizuoka, Japan)
Wijnand IJsselsteijn (Technische Universiteit Eindhoven, The Netherlands)
Ebroul Izquierdo (University of London, United Kingdom)
Anker Helms Jørgensen (University of Copenhagen, Denmark)
Oussama Khatib (Stanford University, USA)
Gudrun Klinker (Technical University Munich, Germany)
Karl-Friedrich Kraiss (RWTH Aachen, Germany)
Thomas Landspurg (In-Fusio, France)
Fatima Lasay (University of the Philippines, Philippines)
James Lester (North Carolina State University, USA)
Peri Loucopoulos (UMIST, United Kingdom)
Henry Lowood (Stanford University, USA)
Michael Macedonia (Georgia Tech, USA)
Don Marinelli (Carnegie Mellon University, USA)
Jeroen van Mastrigt (Hooghe School voor Kunst, The Netherlands)
Hitoshi Matsubara (Future University-Hakodate, Japan)
Frans Mayra (University of Tampere, Finland)
Gary McDarby (MediaLab Europe, Ireland)
Ivica Mitrovic (University of Split, Croatia)
Frank Nack (CWI, The Netherlands)
Ryohei Nakatsu (Kwansei Gakuin University, Japan)
Anton Nijholt (University of Twente, The Netherlands)
Dietmar Offenhuber (Ars Electronica Futurelab, Austria)
Michio Okada (ATR Network Informatics Laboratories, Japan)
Kees Overbeeke (Technische Universiteit Eindhoven, The Netherlands)
Mark Overmars (Utrecht University, The Netherlands)
René Paré (Grafico de Poost, The Netherlands)
Paolo Petta (Medical University of Vienna, Austria)
Paul Plöger (FH Bonn-Rhein-Sieg, Germany)
Andriana Prentza (National Technical University of Athens, Greece)
Matthias Rauterberg (Technische Universiteit Eindhoven, The Netherlands)
Theresa-Marie Rhyne (North Carolina State University, USA)
Ben Salem (Technische Universiteit Eindhoven, The Netherlands)
Jonathan Schaeffer (University of Alberta, Canada)
Nikitas Sgouros (University of Piraeus, Greece)
Takanori Shibata (AIST, Japan)
Andy Sloane (University of Wolverhampton, United Kingdom)
Otto Spaniol (RWTH Aachen, Germany)
Pieter Spronck (Universiteit Maastricht, The Netherlands)
Scott Stevens (Carnegie Mellon University, USA)
Norbert Streitz (Fraunhofer IPSI, Germany)
Kazuo Tanie (AIST, Japan)
Naoko Tosa (Entertainment Computing Labs., Japan)
Bodo Urban (Fraunhofer Institute for Computer Graphics, Germany)
Frans Vogelaar (Kunsthochschule für Medien Köln, Germany)
Magdalena Wesolkowska (Concordia University, Canada)
Lars Wolf (Technical University Braunschweig, Germany)
Jeremy Wyatt (University of Birmingham, United Kingdom)
Ken Young (University of Warwick, United Kingdom)
Organizing Committee

Chair
Ben Salem (Technische Universiteit Eindhoven, The Netherlands)

Secretary
Helen Maas-Zaan (Technische Universiteit Eindhoven, The Netherlands)
Marline Tiessen (Universiteit Maastricht, The Netherlands)
Nora Tonnaer (Technische Universiteit Eindhoven, The Netherlands)

Treasurer
Ben Salem (Technische Universiteit Eindhoven, The Netherlands)

Web Design
Christoph Bartneck (Technische Universiteit Eindhoven, The Netherlands)
Student Volunteers Erik van Alphen, Willeke van de Linden, Serge Offermans, Joep van Poppel, Rik Runge, Dick Rutten, Linda Valk, Harry Vermeulen, Thomas Visser (Technische Universiteit Eindhoven, The Netherlands)
Sponsors
TU/e: Technical University of Eindhoven
TU/e-ID: Department of Industrial Design
TU/e-JFS: J.F. Schouten School for User-System Interaction Research
KNAW: Royal Netherlands Academy of Arts and Sciences
NWO: Netherlands Organisation for Scientific Research
ERCIM: European Research Consortium for Informatics and Mathematics
IOP-MMI: Innovation-Oriented Research Program Human-Machine Interaction
Cooperating Societies
Association for Computing Machinery: SIGCHI and SIGGRAPH
Association for Robotics & Automation
Associazione Italiana per l'Intelligenza Artificiale
Australian Computer Society
British Computer Society
Computer Professionals for Social Responsibility
Computer Society of India
Digital Games Research Association
Dutch Chapter of SIGCHI
Dutch Computer Society
Francophone Human-Computer Interaction Association
German Informatics Society: SIG Communication & Distributed Systems
Icelandic Society for Information Processing
Norwegian Computer Society
Philippine Computer Society
Royal Institution of Engineers in the Netherlands
Swiss Informatics Society
Usability Professionals' Association
IFIP SG16
A new Technical Committee (TC) on Entertainment Computing was proposed to IFIP (approval pending) as follows:
TC Title: Entertainment Computing

Aims: To encourage computer applications for entertainment and to enhance computer utilization in the home, the technical committee will pursue the following aims:
- to enhance algorithmic research on board and card games
- to promote new types of entertainment using information technologies
- to encourage hardware technology research and development to facilitate implementing entertainment systems
- to encourage non-traditional human interface technologies for entertainment

Scope:
1. Algorithms and strategies for board and card games (algorithms for board and card games; strategy controls for board and card games; level setups for board and card games).
2. Novel entertainment using ICT (network-based entertainment; mobile entertainment; location-based entertainment; mixed reality entertainment).
3. Audio (music informatics for entertainment; 3D audio for entertainment; sound effects for entertainment).
4. Entertainment human interface technologies (haptic and non-traditional human interface technologies; mixed reality human interface technologies for entertainment).
5. Entertainment robots (ICT-based toys; pet robots; mental commit robots; emotion models and rendering technologies for robots).
6. Entertainment systems (design of entertainment systems; entertainment design toolkits; authoring systems).
7. Theoretical aspects of entertainment (sociology, psychology and physiology for entertainment; legal aspects of entertainment).
8. Video game and animation technologies (video game hardware and software technologies; video game design toolkits; motion capture and motion design; interactive storytelling; digital actors and emotion models).
9. Interactive TV and movies (multiple-view synthesis; free-viewpoint TV; authoring technologies).
10. Edutainment (entertainment technologies for children's education; open environment entertainment robots for education).
Members: Ryohei Nakatsu is named as the first chair of this TC (contact: [email protected]), Matthias Rauterberg as vice-chair, and Claudio Pinhanez as secretary.

TC Activities: The 3rd International Conference on Entertainment Computing (ICEC 2004) was organized; the next ICEC will be held in 2005 in Japan. SG16 became a sponsor of the 10th Advances in Computer Games Conference (ACG-10), which was held in November 2003 in Graz, Austria. Two panel sessions were organized: (1) at the IFIP TC13 INTERACT conference in 2001 (Japan), and (2) at the IFIP World Computer Congress in 2002 (Canada). An additional Topical Day, "Virtual Realities and New Entertainment", was held at the IFIP World Computer Congress in August 2004 (France).

TC Publications: Ryohei Nakatsu and Junichi Hoshino (eds., 2003). Entertainment Computing. Kluwer Academic Publishers. Matthias Rauterberg (ed., 2004). Entertainment Computing – ICEC 2004. Lecture Notes in Computer Science, Vol. 3166, Springer-Verlag.
Working Groups (WG) Under the TC 'Entertainment Computing'

WG16.1 Digital Storytelling
Scope: Storytelling is one of the core technologies of entertainment. Especially with the advancement of information and communication technologies (ICT), new types of entertainment called video games have been developed, in which interactive story development is the key that makes these games really entertaining. At the same time, however, the difference between interactive and conventional storytelling has not been well studied. Also, as the development of interactive storytelling requires a great deal of time and manpower, it is crucial to develop technologies for automatic or semiautomatic story development. The objective of this working group is to study and discuss these issues.
Members: Marc Cavazza is named as the first chair of WG16.1 (contact: [email protected]).
WG16.1 Activities: Several conferences and workshops on digital storytelling already exist; establishing links between IFIP and these events is the first activity of WG16.1.

WG16.2 Entertainment Robots
Scope: Robots are becoming one of the most appealing forms of entertainment, and new entertainment robots and pet robots are becoming popular. From a theoretical point of view, compared with computer-graphics-based characters and animations, the robot is an interesting research object because it has a physical embodiment. Taking these issues into consideration, it was decided at the SG16 annual meeting that a new working group on entertainment robots be established.
Members: Hitoshi Matsubara is named as the first chair of WG16.2 (contact: [email protected]).
WG16.2 Activities: As its first activity, WG16.2 organized a national workshop on entertainment computing, Entertainment Computing 2003, held on January 13–15, 2003 in Osaka, Japan. It attracted more than 120 attendees and 30 papers.
WG16.2 Publications: The proceedings were published by the IPSJ (Information Processing Society of Japan) as a special issue on "Entertainment Computing", IPSJ Symposium Series, No. 1, 2003.

WG16.3 Theoretical Basis of Entertainment
Scope: Although huge entertainment industries already exist, such as video games, toys and robots, little academic attention has been paid to questions such as: What is the core of entertainment? What technologies could create new forms of entertainment? And how can the core technologies of entertainment be applied to other areas such as education and learning? The main objective of this WG is to study these issues.
Members: Matthias Rauterberg is named as the first chair of WG16.3 (contact: [email protected]).

Anyone who is qualified and interested in active participation in one of the working groups is kindly invited to contact one of the WG chairs.
Editor’s Note
ICEC 2004 attracted 62 full-paper submissions, 40 short-paper submissions, 8 poster submissions and 4 demo submissions, 114 submissions in total. Based on a thorough review and selection process carried out by the 93 international experts from academia and industry who served on the program committee, a high-quality program was compiled. The program committee comprised experts from all over the world: 3 from Australia, 3 from Austria, 3 from Canada, 1 from China, 1 from Croatia, 2 from Denmark, 1 from Finland, 6 from France, 15 from Germany, 2 from Greece, 2 from Ireland, 2 from Italy, 8 from Japan, 1 from Korea, 16 from the Netherlands, 1 from New Zealand, 1 from the Philippines, 1 from Singapore, 1 from South Africa, 3 from Sweden, 1 from Switzerland, 7 from the United Kingdom, and 12 from the United States. The final decisions were made by the review and conference chairs, based on feedback from at least two reviewers per submission, available online via the conference management tool. As a result, 27 full papers were accepted as submitted; of the remaining acceptable full-paper submissions, 9 were accepted as short papers and 12 as posters, while 14 were definitively rejected. Of the short-paper submissions, 19 were accepted as submitted, 8 were accepted as posters, 2 as demo papers, and 11 were definitively rejected. Of the poster submissions, 3 were accepted and 5 rejected; of the demo submissions, 3 were accepted and 1 rejected. In the end, 27 full papers, 27 short papers, 18 poster papers and 3 demo papers, together with 3 keynote papers and 3 specially invited topic papers, were compiled and are presented in this book.
All papers were allocated to one of the following topics: (1) advanced interaction design; (2) art, design and media; (3) augmented, virtual and mixed reality; (4) computer games; (5) human factors of games; (6) intelligent games; (7) mobile entertainment; (8) sound and music; and (9) visual media engineering. Within each topic, papers are ordered as follows: full papers, short papers, demo papers, and poster papers.
Table of Contents
I Invited Presentations

Ludic Engagement and Immersion as a Generic Paradigm for Human-Computer Interaction Design
   Craig A. Lindley ..... 3
Realization of Tai-Chi Motion Using a Humanoid Robot
   Takenori Wama, Masayuki Higuchi, Hajime Sakamoto, Ryohei Nakatsu ..... 14
Building Better Systems for Learning and Training: Bringing the Entertainment Industry and Simulation Technology Together
   William R. Swartout ..... 20
Game Intelligence: From Animal Play Behavior to Entertainment Computing
   Marion Bönsch-Kauke ..... 21
Effects of Violent Video Games on Aggressive Behavior, Helping Behavior, Aggressive Thoughts, Angry Feelings, and Physiological Arousal
   Brad Bushman ..... 22
New Behavioural Approaches for Virtual Environments
   Marc Cavazza, Simon Hartley, Jean-Luc Lugrin, Paolo Libardi, Mikael Le Bras ..... 23
II Advanced Interaction Design

"Kuru-kuru Pitcher": A Game for the Internet Chair
   Kazuya Adachi, Michael Cohen, Uresh Duminduwardena, Kayoko Kanno ..... 35
Fun and Sports: Enhancing the Home Fitness Experience
   Wijnand IJsselsteijn, Yvonne de Kort, Joyce Westerink, Marko de Jager, Ronald Bonants ..... 46
Manipulating Multimedia Contents with Tangible Media Control System
   Sejin Oh, Woontack Woo ..... 57
"Tangible Influence": Towards a New Interaction Paradigm for Computer Games
   Marco Vala, Ana Paiva, Rui Prada ..... 68
Computer Supported Collaborative Sports: Creating Social Spaces Filled with Sports Activities
   Volker Wulf, Eckehard F. Moritz, Christian Henneke, Kanan Al-Zubaidi, Gunnar Stevens ..... 80
Optical-Flow-Driven Gadgets for Gaming User Interface
   Zoran Zivkovic ..... 90
The Human-Information Workspace (HI-Space): Ambient Table Top Entertainment
   Andrew J. Cowell, Richard May, Nick Cramer ..... 101
Game-Driven Intelligent Tutoring Systems
   Marco A. Gómez-Martín, Pedro P. Gómez-Martín, Pedro A. González-Calero ..... 108
Practice! YUBIMOJI AIUEO for Japanese Hand Language Learning
   Takao Terano, Fusako Kusunoki, Yasushi Harada, Miki Namatame ..... 114
The Bush Telegraph: Networked Cooperative Music-Making
   Rodney Berry, Mao Makino, Naoto Hikawa, Masami Suzuki ..... 120
III Art, Design, and Media

Live Role-Playing Games: Implications for Pervasive Gaming
   Jennica Falk, Glorianna Davenport ..... 127
Animating Conversation in Online Games
   Hannes Högni Vilhjálmsson ..... 139
From Artistry to Automation: A Structured Methodology for Procedural Content Creation
   Timothy Roden, Ian Parberry ..... 151
Commedia Virtuale: Theatre Inspiration for Expressive Avatars
   Ben Salem ..... 157
Take the Money and Run? An Ethical Approach to the Relation Between Game Research and Game Industry
   Miguel Sicart ..... 163
Moved by Movements: How Character Movements Cue Us to Form Specific Genre and Affective Impressions
   Valentijn Visch ..... 168
Improvisation in Theatre Rehearsals for Synthetic Actors
   Tony Meyer, Chris Messom ..... 172
Enjoyment and Entertainment in East and West
   Matthias Rauterberg ..... 176
IV Augmented, Virtual, and Mixed Reality

Interactive Props and Choreography Planning with the Mixed Reality Stage
   Wolfgang Broll, Stefan Grünvogel, Iris Herbst, Irma Lindt, Martin Maercker, Jan Ohlenburg, Michael Wittkämper ..... 185
The Interactive and Multi-protagonist Film: A Hypermovie on DVD
   André Melzer, Sebastian Hasse, Oliver Jeskulke, Inga Schön, Michael Herczeg ..... 193
Apply Social Network Analysis and Data Mining to Dynamic Task Synthesis for Persistent MMORPG Virtual World
   Larry Shi, Weiyun Huang ..... 204
How Realistic is Realism? Considerations on the Aesthetics of Computer Games
   Richard Wages, Stefan M. Grünvogel, Benno Grützmacher ..... 216
Read-It: A Multi-modal Tangible Interface for Children Who Learn to Read
   Ivo Weevers, Wouter Sluis, Claudia van Schijndel, Siska Fitrianie, Lyuba Kolos-Mazuryk, Jean-Bernard Martens ..... 226
Exploiting Films and Multiple Subtitles Interaction for Casual Foreign Language Learning in the Living Room
   Victor Bayon ..... 235
CLOVES: A Virtual World Builder for Constructing Virtual Environments for Science Inquiry Learning
   Yongjoo Cho, Kyoung Shin Park, Thomas Moher, Andrew E. Johnson, Juno Chang, Min Cheol Whang, Joa Sang Lim, Dae-Woong Rhee, Kang Ryoung Park, Hung Kook Park ..... 241
SEITV – Interactive Multimedia Leisure/Educational Services for Digital TV in MHP
   Julián Flórez, Igor García, Iker Aizpurua, Céline Paloc, Alejandro Ugarte, Igor Jainaga, Jesús Colet, Xabier Zubiaur ..... 248
Tangible Augmented Reality Modeling
   Ja Yong Park, Jong Weon Lee ..... 254
Human Body Tracking for Human Computer Intelligent Interaction
   Jong-Seung Park, Sang-Rak Lee ..... 260
A Graphical System for Interactive Rendering of Objects in an Augmented Reality Scenery
   Uwe Berner, Norbert Braun, Sofia Kolebinova ..... 266

V Computer Games
TEAM: The Team-Oriented Evolutionary Adaptability Mechanism
   Sander Bakkes, Pieter Spronck, Eric Postma ..... 273
Size Variation and Flow Experience of Physical Game Support Objects
   Loe M.G. Feijs, Peter Peters, Berry Eggen ..... 283
Enhancing the Performance of Dynamic Scripting in Computer Games
   Pieter Spronck, Ida Sprinkhuizen-Kuyper, Eric Postma ..... 296
Open-Source Game Development with the Multi-user Publishing Environment (MUPE) Application Platform
   Riku Suomela, Eero Räsänen, Ari Koivisto, Jouka Mattila ..... 308
Player-Centered Game Environments: Assessing Player Opinions, Experiences, and Issues
   Penelope Sweetser, Daniel Johnson ..... 321
An Application of Game-Refinement Theory to Mah Jong
   Hiroyuki Iida, Kazutoshi Takahara, Jun Nagashima, Yoichiro Kajihara, Tsuyoshi Hashimoto ..... 333
The Design and Implementation of Multi-player Card Games on Multi-user Interactive Tabletop Surfaces
   Shwetak N. Patel, John A. Bunch, Kyle D. Forkner, Logan W. Johnson, Tiffany M. Johnson, Michael N. Rosack, Gregory D. Abowd ..... 339
Entertainment Feature of a Computer Game Using a Biological Signal to Realize a Battle with Oneself
   Shigeru Sakurazawa, Nagisa Munekata, Naofumi Yoshida, Yasuo Tsukahara, Hitoshi Matsubara ..... 345
AI: the Missing Link in Digital Game Interface Design?
   Darryl Charles, Daniel Livingstone ..... 351
Engaging Game Characters: Informing Design with Player Perspectives
   Penelope Drennan, Stephen Viller, Peta Wyeth ..... 355
Emergent Stories in Massively Multiplayer Online Games: Using Improvisational Techniques to Design for Emotional Impact
   Brenda Harger, David Jimison, Eben Myers, Ben Smith, Shanna Tellerman ..... 359
VI Human Factors of Games

Towards a Framework for Design Guidelines for Young Children's Computer Games
   Wolmet Barendregt, Mathilde M. Bekker ..... 365
Social Translucence of the Xbox Live Voice Channel
   Martin R. Gibbs, Kevin Hew, Greg Wadley ..... 377
Artifact-Based Human-Computer Interface for the Handicapped
   Ki-Hong Kim, Hong-Kee Kim, Wook-Ho Son ..... 386
A Home Page Is Where the Heart Is: Using Games Based Design Techniques to Enhance Contact Centre Interfaces
   Nicola J. Millard, Paul K. Buckley, Faye Skinner, Rosita Venousiou ..... 393
Avoiding Average: Recording Interaction Data to Design for Specific User Groups
   Nick Fine, Willem-Paul Brinkman ..... 398
Physiological Response to Games and Non-games: A Contrastive Study
   Karina Oertel, Gösta Fischer, Holger Diener ..... 402
VII Intelligent Games

Probabilistic Opponent-Model Search in Bao
   Jeroen Donkers, Jaap van den Herik, Jos Uiterwijk ..... 409
Agent Wars with Artificial Immune Systems
   Gayle Leen, Colin Fyfe ..... 420
MMOG Player Classification Using Hidden Markov Models
   Yoshitaka Matsumoto, Ruck Thawonmas ..... 429
Expanding Spheres: A Collision Detection Algorithm for Interest Management in Networked Games
   Graham Morgan, Kier Storey, Fengyun Lu ..... 435
Electronic Augmentation of Traditional Board Games
   Clim J. de Boer, Maarten H. Lamers ..... 441
Strategy Selection in Games Using Co-evolution Between Artificial Immune Systems
   Donald MacDonald, Colin Fyfe ..... 445
Level of Detail Modelling in a Computer Game Engine
   Francisco Ramos, Miguel Chover ..... 451
XXII
Table of Contents
VIII Mobile Entertainment Networked Mobile Gaming for 3G-Networks Amjad Akkawi, Sibylle Schaller, Oliver Wellnitz, Lars Wolf
457
Mobile Games for Training Tactile Perception Grigori Evreinov, Tatiana Evreinova, Roope Raisamo
468
Emotionally Loaded Mobile Multimedia Messaging Timo Saari, Marko Turpeinen, Jari Laarni, Niklas Ravaja, Kari Kallinen
476
“Why Is Everyone Inside Me?!” Using Shared Displays in Mobile Computer Games Johan Sanneblad, Lars Erik Holmquist
487
Associated Emotion and Its Expression in an Entertainment Robot QRIO Fumihide Tanaka, Kuniaki Noda, Tsutomu Sawada, Masahiro Fujita
499
Position-Aware IEEE 802.11b Mobile Video Services Rafael Asorey-Cacheda, Francisco J. González-Castaño, Enrique Costa-Montenegro, Ignacio López-Cabido, Andrés Gómez-Tato, José Carlos Pérez-Gómez
505
A Human-Pet Interactive Entertainment System over the Internet Lee Shang Ping, Farzam Farbiz, Adrian David Cheok
509
Developing and Evaluating Mobile Entertainment Applications: The Case of the Music Industry Vasilios Koutsiouris, Pavlos Vlachos, Adam Vrechopoulos
513
An Entertaining Way to Access Web Content Giacomo Poretti, Alberto Sollberger
518
Design of an Interface for Technology Supported Collaborative Learning – The RAFT Approach Lucia Terrenghi, Marcus Specht, Moritz Stefaner
522
IX Sound and Music
iFP: A Music Interface Using an Expressive Performance Template Haruhiro Katayose, Keita Okudaira
529
Sound Pryer: Adding Value to Traffic Encounters with Streaming Audio Mattias Östergren
541
Harmonics Table: Audiovisual Expression of Group Interaction on a Sensing Table Sangwoong Hwang, Hyunchul Park, Chansuk Yang, Manjai Lee
553
Hello-Fish: Interacting with Pet Fishes Through Animated Digital Wallpaper on a Screen Sunyean Jang, Manjai Lee
559
Background Music Generation Using Music Texture Synthesis Min-Joon Yoo, In-Kwon Lee, Jung-Ju Choi
565
A Progressive Sounding Object Model in Virtual Environment Qiong Zhang, Taiyi Chen
571
X Visual Media Engineering
Automatic Visual Data Management System Jae-Ho Lee, Sung-Hoon Park, Young-Jin Choi, Whoi-Yul Kim
579
Development of Extemporaneous Performance by Synthetic Actors in the Rehearsal Process Tony Meyer, Chris Messom
586
An Efficient CLOD Method for Large-Scale Terrain Visualization Byeong-Seok Shin, Ei-Kyu Choi
592
Integrating Ideas About Invisible Playgrounds from Play Theory into Online Educational Digital Games Darryl Charles, Moira McAlister
598
EffecTV: A Real-Time Software Video Effect Processor for Entertainment Kentaro Fukuchi, Sam Mertens, Ed Tannenbaum
602
Web-Based Tool for Analyzing Emotions Through Images and Generating a Music Therapy System Taesik Kim, Hyeyoung Kim
606
Turning Photo Annotating Tasks into Instant Messaging Fun: Prototyping, User Trials, and Roadmapping Yuechen Qian, Loe M.G. Feijs
610
Author Index
615
Invited Presentations
Ludic Engagement and Immersion as a Generic Paradigm for Human-Computer Interaction Design
Craig A. Lindley
Institution Technology, Art and New Media, University of Gotland, Cramergatan 3, SE-621 57 Visby, Sweden
[email protected]

Abstract. Ludic systems are interactive media productions typically generalised under the heading of computer games, but actually integrating elements of game play, simulation or modeling, and narrative. The success of ludic systems lies in their highly effective modes of player engagement and immersion. Game play, simulation and narrative have their own respective forms of engagement and immersion that have often been overlooked in the development of models for human-computer interaction. As game systems become more ubiquitous, technical platforms will evolve to support more game-like interaction in general. This will facilitate the development of many applications having ludic engagement and immersion modes that dissolve some of the distinctions between work and play, providing the potential for alleviating tedium in many computer-based work tasks.
M. Rauterberg (Ed.): ICEC 2004, LNCS 3166, pp. 3–13, 2004. © IFIP International Federation for Information Processing 2004

1 Introduction

Computer games are the most successful commercial application of interactive media, perhaps with the exception of the World Wide Web as a complete system. Commercial computer games integrate elements of pure game play, simulation and narrative, resulting in very rich media experiences with different levels of meaning having complex interrelationships. Computer games did not evolve directly from any existing established media industry. Their origins lie in simulation systems and computer graphics, and while computer games have borrowed much from other media (including genre conventions and cinematic techniques), the computer game industry has evolved independently until more recent generations started to involve cross-licensing arrangements between media forms (e.g. computer games and cinema). This historical evolution of computer games has occurred within its own emerging subculture. The earliest games include projects developed as side interests or after-hours work, and the culture of gamers has grown out of the subcultures of war gamers, table-top role-playing gamers and computer geeks. Gamer culture is closer to the culture of comic books than high art, or even television and film. Even with the current huge commercial success and popularity of computer games, the form is often dismissed as the lightest form of entertainment, more contemptible than television, poisoning the minds of children and wasting the time of adults. Despite this bad press from those unable to relate to the medium, games are now gaining greater respectability. Game studies has emerged over the last few years as a
new and serious field of academic enquiry. Research projects are addressing issues of technology and form in order to raise the perceived cultural quality of games, not just as an entertainment medium, but also as an art form. Games as a medium are gaining acceptance as a form with as much validity as any other expressive medium. In terms of engagement and immersion, games have been far more successful than most if not all other computer software applications. It is surprising from this perspective that games have nevertheless been so easily dismissed, rather than studied more intensively for what they achieve in human-computer interaction. Entertainment applications are surely valid in their own right, and there is much to be said for subjecting the dichotomy between “entertainment” and “serious” applications to intensive critique, both of the emotional loading of those terms and of how games function in economic, sociological, psychological and existential terms. Beyond this, game form can be abstracted from its external functions as a general model of an engaging and immersive medium that can have effective applications in many areas. Klabbers [7] cites potential areas of application for game systems, including: business administration, public administration, educational institutions, environment, health care, human services, international relations, military, religion, technology, human settlements and imaginary worlds. This broad field of potential applications is not needed to justify taking games seriously; the huge market for games and their force of engagement and immersion is enough to justify a strong focus on game media. The broader application areas justify this interest from a much broader field than those interested only in entertainment or art: games provide a very strong paradigm for human-computer interaction across all possible application areas.
Since games are not simply games, they are perhaps more accurately referred to as ludic systems, systems that integrate game play with other formal systems of interaction and time structure. This paper discusses the different interaction forms involved from the perspective of how those forms structure the play experience. This discussion includes analysis of the forms of engagement and immersion involved, focusing in particular on how the different systems of temporal order involve different models of engagement and immersion.
2 Games, Models, and Narratives as Independent Formal Subsystems of Ludic Space

Ludic systems may involve three different kinds of formal system determining the temporal structure of the experience of players. Each of these types of formal system can be realized in ways that operate independently of the others, although any given ludic system may involve more than one of these forms to varying degrees and using different strategies of interrelationship or integration. The three types of formal system are: the formal system of games, the formal system of the model, and the formal system of narratives. In Lindley [8] this trichotomy was expressed in terms of game, simulation and narrative, respectively. However, to be more accurate, simulation must be considered together with fabrication in the case of systems for which correspondence with an external system is of limited relevance, and the concept of the model captures this more effectively than that of simulation.
3 Games and Game Play

Computer games encompass a vast range of different kinds of interactive media productions. In the broadest possible sense we call all of these things games. However, it is much more useful to adopt a narrower definition of game that can be associated with different modes of engagement and immersion from those of other formal ludic semiotic subsystems. Hence a game will be defined as follows: A game is a goal-directed and competitive activity conducted within a framework of agreed rules. This can be referred to as the ludic or ludological definition of game, the kind of definition at the heart of traditional game theory. This definition captures many features of the definitions considered by Juul [6], but omits criteria based upon issues of pragmatics or function that are highly subject to external accidents of history and context. In being goal-directed, a game involves activity aimed towards some final state. In being competitive, that final state has some kind of payoff value or measure of progress associated with it that may be associated with win/lose criteria. The win/lose criteria may involve absolute measures (first one to achieve a score of 9, or to complete level X) or relative measures (highest score after 10 minutes wins). Competition may be between real players or some combination of real and simulated players (e.g. single-player strategy games). The definition does not require players to learn the rules; it only requires that activity obeys the rules, and that players implicitly or explicitly agree to those rules. For games played without computational support, players need to understand the rules. However, as Juul [6] notes, one of the advantages of computer games is that the machine enforces the rules, relieving the player from the need to know all of the rules in detail and supporting rule sets far too complex for manual operation.
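Juul’s point above — that the machine can enforce rule sets the player never reads in full — can be illustrated with a minimal sketch. The game, its legal moves and the first-to-9 win criterion below are all invented for illustration:

```python
# Minimal sketch of machine-enforced rules (hypothetical game):
# the program, not the player, polices legality and the absolute
# win criterion "first one to achieve a score of 9".

class SimpleGame:
    def __init__(self):
        self.scores = {"A": 0, "B": 0}
        self.winner = None

    def play_move(self, player, points):
        # Rule enforcement: invalid moves are rejected outright, so the
        # player never needs to know the full rule set in detail.
        if self.winner is not None:
            raise ValueError("game over")
        if player not in self.scores:
            raise ValueError("unknown player")
        if points not in (1, 2, 3):       # only these moves are legal
            raise ValueError("illegal move")
        self.scores[player] += points
        if self.scores[player] >= 9:      # absolute win criterion
            self.winner = player


game = SimpleGame()
for pts in (3, 3, 3):                     # player A accumulates 9 points
    game.play_move("A", pts)
print(game.winner)                        # -> A
```

The same structure scales to rule sets far too complex for manual operation; only the validity checks grow, not the player's burden.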
The rules establish what you, as a player, can or cannot do, and what the consequences of actions may be within the game. Learning to play a game, making progress within a game and completing or winning a game are a matter of learning how to interact within the game system and its rules in a way that supports progress. This is a matter, not necessarily of learning the game rules, but of learning a gameplay gestalt¹, understood as a pattern of interaction with the game system. Playing the game is then a matter of performing the gestalt. It is what the player does, within the system and as allowed by the rules of the game. A gameplay gestalt can have many forms for a particular game, capturing different playing styles, tactics and approaches to progressing through the game and (perhaps) eventually winning. In general, it is a particular way of thinking about the game state from the perspective of a player, together with a pattern of repetitive perceptual, cognitive, and motor operations. A particular gameplay gestalt could be unique to a person, a game, or even a playing occasion. Recurrent gameplay gestalts can also be identified across games, game genres, and players. Some examples of gameplay gestalts in computer games include:
¹ A gestalt may be understood as a configuration or pattern of elements so unified as a whole that it cannot be described merely as a sum of its parts.
Action games: shoot while being hit, strafe to hiding spot, take health, repeat.
RPGs: send fast character to lure enemy from group, all characters kill enemy, take health, repeat.
Strategy games: order peasants, send to work, order soldiers, send to perimeters, repeat while slowly expanding the perimeters (up to the point of runaway win/lose); OR: move x archers to tower y every n minutes to head off the enemy camel musketeers from the east who arrive every n minutes.
In general: confront barrier, save if successful, reload and retry if unsuccessful.

Patterns like these may or may not be explicitly designed for by the creators of a game. If designers do take them into account, it may be in supporting the development and emergence of these patterns in play, rarely by forcing them on the player. It may also be that a game has a single winning gestalt. This may not be a simple repetitive interaction pattern, but perhaps the performance of one very specific pattern (e.g. in the case of puzzle games), or a generic pattern performed repetitively in variable instantiations. The rules and ludic medium of a game imply and support a range of different types of valid actions performed by the player, typically with scope for variations in how those actions are performed, and how and when different actions are selected and sequenced. A move within a game is an abstraction over player action, mapping it to an abstracted action of significance within the rule set and independent of local, personal and idiosyncratic variations in performance; a move is a connotation of a physical action and its perceptual consequences, allowed and facilitated by the framing of the game (I can move a chess piece on the board at any time, but I only make a move in the game of chess when I’m playing the game). Hence a player performs actions having conventional connotations as moves within the formal system of the game.
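The “in general” gestalt (confront barrier, save if successful, reload and retry if unsuccessful) is a repetitive interaction loop, and can be sketched as such. The barrier encounter and its success probability below are hypothetical:

```python
import random

def confront_barrier(rng):
    """Hypothetical encounter: succeeds with some fixed probability."""
    return rng.random() < 0.5

def play_with_gestalt(rng, barriers=5):
    # The generic gestalt from the text: confront barrier,
    # save if successful, reload and retry if unsuccessful.
    cleared = 0
    retries = 0
    while cleared < barriers:
        if confront_barrier(rng):
            cleared += 1      # save if successful, move on
        else:
            retries += 1      # reload and retry from the last save
    return cleared, retries

# A seeded run: the pattern of retries varies, the gestalt does not.
print(play_with_gestalt(random.Random(0)))
```

The point of the sketch is that the gestalt is a stable performance pattern independent of any particular game instance or outcome sequence.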
Those actions are likely to be highly stylized according to the game, and actions too dissimilar to the stylized set will be regarded as fouls or cheats if their performer intends them to have in-game significance, or as extra-ludic actions potentially frustrating other players if they are not intended to have in-game significance. A gameplay gestalt, as described above, is a player’s pattern of move performances. Douglas and Hargadon [4] discuss the function of schemas (or schemata) in conditioning affective responses to experiences of interactive media, with a particular focus upon narrative pattern schemas, how satisfaction of the expectations embodied in a schema functions to create immersion within an unfolding narrative, and how disruption of expectations requires engagement with the unfamiliar narrative structure. Schemas are understood as cognitive frameworks determining what is known about the world, objects within it, tasks performed and what is perceived. Once a schema is associated with a situation, relevant scripts are called from it that shape perception, navigation and interaction within that scenario. A gameplay gestalt may (once developed) be interpreted as constituting a kind of schema related to, but not identical with, those discussed by Douglas and Hargadon [4]. When a gameplay gestalt functions as a reusable schema, the player will attempt to apply it within games that appear to resemble those within which the schema has previously been successful. Hence as players we may find ourselves repeating game play patterns without exploring new interaction possibilities provided with new games, as long as the established patterns support progress within the game. Douglas and Hargadon note that it is highly normative schemas that enable readers of texts to ‘lose’ themselves in the text in what they refer to as an immersive affective
experience. Douglas and Hargadon contrast this with the engaged affective experience, where contradictory schemas or elements defying conventional schemas tend to disrupt reader immersion in the text, “obliging them to assume an extra-textual perspective on the text itself, as well as on the schemas that have shaped it and the scripts operating within it”. Applying these distinctions to games, a new kind of game in which previously established game play gestalts do not work might in these terms be said to require players to become involved in the engaged affective experience required for developing new gameplay gestalts (or indeed to initially determine which already available gestalts apply within a specific game). Once a successful gestalt is found or formed, the player shifts from engaged affective experience to immersed affective experience in the performance of the gestalt. This process of player engagement and subsequent immersion within a game experience is strongly associated with the exploration, choice and performance of game moves. The potential for choosing moves results in a very loosely predefined time structure, such that games are not strongly a priori time-structured in their design. A specific type of move is likely to have some time structure at least implicit within the rule-derived constraints bounding valid ways of performing it. But this is minimal, and the temporal structure of gameplay gestalts is an emergent structure developed during play by the player. Larger scale time structures specific to game form are reflected in the language of rounds, bouts, matches, tournaments, seasons and campaigns. The most primitive complete game experiences, at which a point of win or loss is reached, are bouts or rounds.
Higher level game structures tend to be highly repetitive patterns of these simple game experiences, largely concerned with the organization of opponents and extension of the simple competitive situation of a game to include a broader field of opponents, with a view to obtaining a global performance or game play ranking obtained by accumulation of the results of many bouts. An interesting difference between textual forms and game forms is that while game genres manifest genre schemata within the fixed design of the game artifacts, the schemata involved in play, referred to here as gameplay gestalts, are inventions of the player. Hence the schemata connoting a first-person shooter (involving features like: first-person point of view in a 3D world with player character mobility, being armed and facing antagonistic opponents, etc.) might invoke in the player a gameplay gestalt (a kind of performance schema) of the form: shoot while being hit, strafe to hiding spot, take health, repeat. This may succeed across many games conforming to the appropriate set of first-person-shooter schemata for a given player, but different players may use very different game play gestalts. In this case, game schemata, which might characterise genres, rule sets, and game types, inform design processes and facilitate recognition of game types by players. But game play schemata, or gameplay gestalts, are a more personal and emergent phenomenon, cognitive adaptations by the player facilitating progress within a game of a particular type. This means that it is the gameplay gestalt, or game play schema, that is the primary learning outcome of game play. It is the player’s structuring or pattern development and selection activity that creates engagement with games, and the ongoing performance of an acquired or selected pattern that creates immersion in game play.
As Douglas and Hargadon [4] suggest, and as previously observed by Turkle [9], game play often reaches states of timeless loss of self-consciousness, involving the state of consciousness that Csikszentmihalyi [2] calls flow, a state at the boundaries between engagement and immersion, of being totally absorbed in meeting a constantly unfolding challenge. Contrary to Douglas and Hargadon’s view, however, this need not have anything to do with narrative. For games it is a state of pure game play, in the present terms, a state of selecting and performing gameplay gestalts, an immersion in performance, in the internal world of a game, but not in the internal world of a narrative. The performance is itself a process requiring attention, and for successful flow the demands of performing an integrated cognitive and instrumental pattern must fall above thresholds of boredom and within the bounds of what is achievable and sustainable for the player in a flowing state of concentration. Csikszentmihalyi characterizes flow, an experience that is simultaneously challenging and rewarding, as one of the most enjoyable and valuable experiences that a person can have.
4 Narrative

The narrative structure of experience within ludic systems may be either pre-authored at the performance level, or an emergent pattern based upon combinatorial principles built into the generation level. Narrative in the strongest sense is perceived when a structure conforms to a very specific narrative pattern, the kind of narrative schema referred to by Douglas and Hargadon [4]. An example of such a pattern, commonly found both in commercial cinema and in computer games, is the three-act restorative structure (see [3]). The three-act restorative structure has a beginning (the first act) in which a conflict is established, followed by the playing out of the implications of the conflict (the second act), and completed by the final resolution of the conflict (the third act). The three-act restorative model includes a central protagonist, a conflict involving a dilemma of normative morality, a second act propelled by the hero’s false resolution of this dilemma, and a third act in which the dilemma is resolved once and for all by an act that reaffirms normative morality. Each act within the three-act structure culminates in a point of crisis, the resolution of which propels the plot into the following act, or to the final resolution. This structure is very common and highly immersive for audiences having strong anticipations of the structure’s definitive elements. In computer games incorporating a pre-specified three-act restorative structure within the player’s experience, the central conflict form often manifests recursively (i.e. the structure is repeated at different levels of temporal scale). For example, the overall restorative three-act model may be applied to the game experience as a whole, with the dramatic arc being completed when the user finishes the game. At this level the story is usually not interactive, since act one, key scenes within the story of act two (i.e.
primary plot points), and the playing out of the consequences of the final resolution in act three are typically achieved by cut scenes: non-interactive, pre-rendered video or animation sequences. The next level down within the recursive structure is that of the game level. The game level is designed for the pursuit of a goal, that of the player reaching the end of the level, which progresses the player through the second act of the higher-level three-act structure of the game narrative. There is rarely if ever a one-to-one correspondence between game levels and acts; more typically, the first act and the end of the third act are presented via cut scenes, with playable game levels summing to form a highly extended second act followed by the final resolution of the third act as the end of game play (e.g. by overcoming the final and toughest enemy, usually a demonic character at the heart of
the central conflict in the story). Although experience within a level typically has much of the structure of a match, a contest, a league or a tournament, the sense of level-specific narrative development can be enhanced by increasing difficulty through a level, or by an internal dramatic structure that emphasizes the point of completing the level, such as the defeat of a level boss, the big barrier creature at the end of the level. The false resolution that drives act two of the three-act restorative model at the highest structural level may be seen manifesting repetitively with each game level: when the game level is resolved (completed), the player finds themselves at the beginning of the next game level full of conflicts. At the next level of the recursive decomposition of game structure, there is often a series of smaller scale conflicts and challenges within a game level, which may include monsters to be defeated or avoided, puzzles to be solved, or treasures, clues or keys that must be found in order to progress in the current or future game levels. Usually it is only this lowest level of the game plot that is highly interactive; these are actually the individual games played by the player (by the definition above). The linear and non-interactive cut scenes framing game play are revealed in a predefined order, and within a level all players usually start in the same place and must have completed the same set of tasks in order to complete the level. The low level and interactive parts of the game are played by performance of a gameplay gestalt, a repetitive pattern of moves by which the player progresses through the individual games of a level. Hence game play usually has little if any bearing on the story being told; the story is for the most part a structure imposed on top of, and different from, game play.
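The recursive decomposition just described — non-interactive cut scenes framing playable levels, with only the lowest level of challenges highly interactive — can be sketched as a nested structure. This is one reading of the text, not any real engine’s level format:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    interactive: bool                 # cut scenes are non-interactive
    children: list = field(default_factory=list)

# Three-act restorative structure as typically manifested in a game:
# acts one and three delivered as cut scenes, playable levels summing
# to an extended second act, individual challenges at the lowest level.
game = Segment("game", False, [
    Segment("act 1 cut scene: conflict established", False),
    Segment("act 2: playable levels", True, [
        Segment("level 1", True, [
            Segment("puzzle", True),
            Segment("monster fight", True),
            Segment("level boss", True),
        ]),
        Segment("mid-plot cut scene", False),
        Segment("level 2", True),
    ]),
    Segment("act 3 cut scene: final resolution", False),
])

def playable_leaves(seg):
    """Collect what the player actually plays: interactive leaf segments."""
    if not seg.children:
        return [seg.name] if seg.interactive else []
    return [n for c in seg.children for n in playable_leaves(c)]

print(playable_leaves(game))
# -> ['puzzle', 'monster fight', 'level boss', 'level 2']
```

The sketch makes the paper’s point concrete: the story skeleton is fixed and non-interactive, while interactivity is confined to the leaves.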
4.1 Narrative and Gameplay

The question of the relationship between gameplay and narrative is often regarded as problematic [1]. Considering the nature of engagement and immersion involved in each form, this should not be surprising. The apprehension of an experience as a narrative requires the cognitive construction of a narrative schema or narrative gestalt, a cognitive structure or pattern allowing the perception and understanding of an unfolding sequence of phenomena as a unified narrative. In playing a computer game, one must learn and then perform one or more gameplay gestalts in order to progress through the tasks of the game. To experience a ludic system as both a game and as a narrative requires the apprehension of a narrative gestalt unifying the flow of game experiences into a coherent narrative structure. But narrative gestalts and game play gestalts are not the same kind of patterns. Hence the tension between gameplay and narrative can be viewed as arising from competition between these respective gestalt formation and performance processes for perceptual, cognitive, and motor effort. This can be a competition both for cognitive resources, and for formal influence in creating the apprehension of the experience. Within the range of effort required for immersion and engagement, and especially for the experience of flow, if gameplay consumes most of the player’s available cognitive resources, there will be little scope left for perceiving complex narrative patterns (e.g. we forget the motivation behind the character’s battles, the names and relationships among the characters, etc.). Moreover, the narrative adds little to player immersion and engagement in game play (who cares, it’s fun anyway!). On the contrary, cut scenes delivering plot points can be experienced as irritations when they interrupt the immersion and flow of game play, leading
to forms of engagement that are uninteresting, since the player can only wait to re-enter the play experience using the same patterns as before the cut scene interrupted it. Conversely, focusing on the development of the sense of narrative (e.g. in the case of multipath movies or branching hypertexts) reduces the player’s need for and interest in a highly engaging or immersive gameplay gestalt. In general, either game play or the narrative is the site of immersion. Transitions between game play and narrative cut scenes create engagement, signaling “now you should watch” or “now you should play”. This can work well, but generally it requires engagement to be minimized by minimizing differences in the audiovisual style of the cut scenes and by introducing them so that their rhythm flows with that of game play performance. An alternative strategy for overcoming this tension is to develop game play mechanics that are fundamentally dramatic, in that their consequences do affect the higher-level narrative patterns of the game. While the three-act restorative structure is a useful heuristic for writers wishing to determine a high-level story structure, it is too high level for effective integration with game play. For this, the basic game mechanics must constitute narrative action on the part of the player. This suggests a strategy for achieving narrative game play by moving away from high-level predefined narrative schemata towards a more detailed integration of principles for narrative generation within the mechanics of player interaction. That is, narrative game play requires a more object-oriented approach (see [5]), in which game characters encapsulate narrative potential, and the specific narratives that emerge over time are a function of the players’ history within a ludic world. Such a form of game play may not involve repetitive interaction patterns, just as most stories involve development rather than repetition.
The primitive actions of a narrative are dramatically significant actions. A series of dramatically significant actions leads to a resolution (according to a specific narrative structure). For a player, the question driving play (for a truly interactive narrative) becomes: which move should I perform next, in this evolving dramatic situation? The game, if viewed as such, then has a closer resemblance to chess or a puzzle game. The series of choices made by the player will form a pattern of dramatic development, closer in nature to solving a puzzle than to winning a game. For a persistent world there will be no final solution state, and the game play experience will be a construction (fabrication) of the dramas and emotions of a life.
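One possible rendering of this object-oriented strategy — characters encapsulating narrative potential, with the dramatic actions on offer depending on the player’s history in the world — is sketched below. The classes, names and trigger logic are entirely hypothetical:

```python
class Character:
    """A character encapsulating narrative potential: the dramatically
    significant actions it offers depend on the history of play so far."""

    def __init__(self, name, grudges=()):
        self.name = name
        self.grudges = set(grudges)   # characters this one cares about

    def offered_actions(self, history):
        # Narrative potential: the available dramatic moves are a
        # function of the player's history within the ludic world.
        actions = [f"talk to {self.name}"]
        if any(f"betray {g}" in history for g in self.grudges):
            actions.append(f"{self.name} seeks revenge")
        return actions


history = []
ally = Character("Mira", grudges=["Kellen"])   # hypothetical names

history.append("betray Kellen")   # a dramatically significant player move
print(ally.offered_actions(history))
# -> ['talk to Mira', 'Mira seeks revenge']
```

Each dramatically significant move changes which moves make sense next, so the emergent narrative is a trace of the player's choices rather than a pre-authored plot.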
5 The Model

A simulation can be defined as: a representation of the function, operation or features of one process or system through the use of another. For game worlds, it must be possible for the simulation model to be a model of a system or world that is a fiction or fabrication. This makes model a preferable term, since it does not need to be a model of any aspect of reality, while a simulation is more typically understood as representing some aspect of a real system. As noted above, all computer games involve a level of modeling at which the performance space of the game is fabricated. The experience of the game for the player as a game in the strict sense, as a narrative and/or as a simulation is then a matter of the degree to which the player experience is structured by the formal systems of a game or narrative. If there is little or no pre-structured game or narrative form shaping the play experience, the simulation system becomes the dominant structure informing
the shape of that experience. The presence of predefined game or narrative forms within the structure of the play experience tends to push the sense of the ludic system as a simulation into the background, since game play and narrative provide strong imperatives of purpose motivating player action. This sense of purpose may be implicit but strongly suggested within a simulation game. For example, Sim City implicitly conveys the game objective of growing your city, and doing so may result in the emergence of repetitive play patterns. But this can be varied, and it is possible to have one’s city destroyed by a natural disaster, or to play with dysfunctional cities in different forms, or simply with very different kinds of cities to see how they develop. The game then functions as a “toy” as much as a game; for a simulation, play is much more a matter of exploring the space of possible experiences supported by the simulation, in which case the purpose of play is provided much more strongly by the player. Authored and pre-specified time structures are then typically manifested by:

- discrete event modelling of physical phenomena, such as the motions of virtual physical objects within a 3D space subjected to modeled gravitational forces
- growth and decay of game world objects (e.g. buildings within zones in Sim City)
- events manifesting probabilistically, such as arrivals or departures of game objects (e.g. wandering monsters or enemies)
- functional modeling of complex systems (e.g. vehicle simulators and models)

A model-based ludic system may involve no specific repetitive and goal-oriented activities, no obvious end state (other than the player getting bored) and no large-scale predefined patterning of the play experience. Play patterns emerge over the course of running a model, can be completely different for different runs, and most likely have never been anticipated by the designers of the model.
Repetitive action may be used to operate a model, but may not be directed to any specific final goal by the player/operator. In terms of engagement and immersion, this is a matter of how the player chooses to operate the model, and can include modes of play more typical of toys, or experiences of flow that may be associated with other kinds of activities than game play. The extent to which a ludic system is structured and experienced predominantly as a model is reflected in the ability of players to define their own narratives and/or games and game systems within the modeled world. In this sense a model provides a field of play, within which it is up to the players to define how they will play. Integration of more specific narrative and game play patterns provides more ready-made purposes for interaction, but the model substrate of ludic systems always makes it an option for the player not to play the provided games or not to follow pre-structured narratives. For a game that is a functional model with little to no predefined narrative or game elements, players need to engage more heavily to create and sustain their own purposive behaviors. Narratives and games provide purposes and modes of action that can be activated in order to meet those purposes. Engagement in that case facilitates the discovery of schemas of game or narrative form (i.e. for specific genres of games or narratives) that provide criteria for the development and/or selection of schemas of play or viewing.
12
C.A. Lindley
6 Conclusion

This paper has considered the presence of the formal systems of games, narratives and models within ludic systems. Each kind of formal system has schemas that make it recognizable in terms of temporal structure and interaction possibilities. The familiarity or unfamiliarity of schemas leads to either immersion or engagement in the ludic experience. It has also been discussed how learning to play a game is a matter of learning one or more gameplay gestalts, or game play schemas, in combination with selecting appropriate schemas already learned as play patterns applicable to making progress within a game of a particular type. Developing or selecting one or more appropriate gestalts or schemas for the ludic experience involves first an experience of engagement with the ludic system, followed by immersion within the ludic experience. Engagement and immersion facilitate the player entering a state of flow, which provides a form of playing pleasure based upon the pattern of interaction. These formal models do not rely upon issues of the thematics of a production. Themes suggest a mapping onto a real or fictional world; for example, one can have a fantasy chess set, a cats and dogs chess set, or an abstract chess set. Themes can obviously have a role in the appeal of a particular game to a particular player (otherwise all ludic systems would be abstract). However, the essential form of a game can be transferred across different themes, just as a particular narrative structure can be realized in different scenarios. Genres often involve thematic conventions, but these can be stretched; a strategy game could deal with historical scenarios, while the same mechanics can also be applied to fantasy scenarios, science fiction, farmyard animals, or just about anything else. Similarly the context and situation of a ludic system can radically redefine its functions and meaning.
This means that the basic mechanics, form and structure of a ludic system can be effective in terms of immersion, engagement and flow across many different thematic manifestations and functions. Ludic play is therefore seen to be a generic mechanism. It is, therefore, possible to regard ludic schemas with their implications for engagement, immersion and flow as a general design approach for human-computer interaction across a broad range of applications. Understanding the experiential implications of game play patterns is a contribution to understanding the broad appeal of computer games. But since these patterns are largely dependent upon the abstract system and mechanics of games, they could be adopted as interaction modes across many applications not traditionally regarded in terms of entertainment. The result is an approach to interaction design that renders interaction entertaining, whatever the overall functions and context of a system may be. The challenge in pursuing this paradigm lies within the many possible application areas, and consists of mapping abstract moves of games onto meaningful functions and useful effects within the domain, together with defining a purpose of interaction served by those moves and leading to the creative orchestration of moves into an immersive experience of play by system users who are also players. Play then does not need to be limited to purposes of entertainment, and repetitive processes of work can be transformed into experiences of optimal flow.
References
1. Aarseth E. J., Cybertext: Perspectives on Ergodic Literature, The Johns Hopkins University Press, 1997.
2. Csikszentmihalyi M., Flow: The Psychology of Optimal Experience, Perennial, 1991.
3. Dancyger K. and Rush J., Alternative Scriptwriting, Second Edition, Focal Press, 1995.
4. Douglas J. Y. and Hargadon A., “The pleasures of immersion and engagement: schemas, scripts and the fifth business”, Digital Creativity, Vol. 12, No. 3, pp. 153–166, 2001.
5. Eladhari M., Objektorienterat berättande i berättelsedrivna datorspel (“Object Oriented Story Construction in Story Driven Computer Games”), Masters Thesis, University of Stockholm, 2002. http://zerogame.interactiveinstitute.se/papers.htm
6. Juul J., “The Game, The Player, The World: Looking for a Heart of Gameness”, Proceedings, Level Up Digital Games Research Conference, pp. 30–45, Copier M. and Raessens J., Eds., 4–6 November 2003, Utrecht.
7. Klabbers J. H. G., “The Gaming Landscape: A Taxonomy for Classifying Games and Simulations”, Proceedings, Level Up Digital Games Research Conference, pp. 54–67, Copier M. and Raessens J., Eds., 4–6 November 2003, Utrecht.
8. Lindley C. A., “Game Taxonomies: A High Level Framework for Game Analysis and Design”, Gamasutra feature article, 3 October 2003. http://www.gamasutra.com/features/20031003/lindley_01.shtml
9. Turkle S., The Second Self: Computers and the Human Spirit, Simon and Schuster, New York, 1984.
Realization of Tai-Chi Motion Using a Humanoid Robot

Takenori Wama 1, Masayuki Higuchi 1, Hajime Sakamoto 2, and Ryohei Nakatsu 1

1 Kwansei Gakuin University, School of Science and Technology, 2-1 Gakuen, Sanda, 669-1337 Japan
{scbc0057,x78072,nakatsu}@ksc.kwansei.ac.jp
http://www.ksc.kwansei.ac.jp
http://ist.ksc.kwansei.ac.jp/index.html

2 Hajime Laboratory, Kobe, Japan
[email protected]
http://www.hajimerobot.co.jp/
Abstract. Even though research and development of humanoid robots has increased in recent years, the major topics of research generally focus on how to make a robot perform specific motions such as walking. However, walking is only one of the complicated motions humans can perform. For robots to play an active role in society as our partners, they must be able to simulate various kinds of human actions precisely. We chose tai-chi as an example of complicated human actions and succeeded in programming a robot to perform the 24 fundamental tai-chi actions.
1 Introduction

Many companies and universities are currently doing research and development into humanoid robots. These robots are equipped with a certain amount of flexibility at their robotic “joints,” making it possible for them to perform various motions. However, most of these studies investigate little outside of rising or walking actions, ignoring the rest of the actions that humans can perform. As a result, little research has fully investigated and utilized robotic flexibility. Indeed, since walking and rising are good examples of complicated and dynamic actions, it is valuable to study them. At the same time, however, it is expected that in the near future humanoid robots will be introduced into society to become our partners at home and in the workplace. Therefore, robots must not only walk or rise but also perform various kinds of human-like operations naturally. Robots must also use these motions to communicate with humans. Based on this basic concept, we tried to reproduce smooth full-body actions in a commercially available humanoid robot. We selected the motions of tai-chi, a Chinese martial art form, because smooth movements condensed from all human actions for exercising the entire body are essential to it. Therefore, our goal is to design tai-chi actions, install them, and develop a humanoid robot that can perform them.
M. Rauterberg (Ed.): ICEC 2004, LNCS 3166, pp. 14–19, 2004. © IFIP International Federation for Information Processing 2004
2 Humanoid Robot

We decided to use a robot developed at the Hajime Laboratory to reproduce the smooth exercises of tai-chi. This humanoid robot is equipped with 22 servomotors, enabling it to simulate smooth, human-like motions by simultaneously controlling all of these motors. The hardware specifications of the robot are shown in Table 1, and its appearance is shown in Fig. 1.
Fig. 1. Appearance of the humanoid robot
3 Tai-Chi

There are five major schools of tai-chi: Chin, Kure, Son, Bu and Yo, of which Yo is the most commonly practiced style. Yo’s extended version has been officially established by the Chinese government. Established forms of tai-chi include the 24-form, the 48-form, the 88-form, and others. In creating tai-chi motions, we chose the 24-form tai-chi because, even though it is the simplest form of tai-chi, it still contains the strong points of the other schools.
4 Motion Editor

A motion editor was used to design the robot’s motions, as shown in Fig. 2. Figure 2 also shows the Front, Right, and Top views, which display the robot from the front, the right, and above, respectively. In the Perspective view, we can rotate the robot image 360 degrees in any direction to gain a panoramic view. The angle of each motor is controlled by the parameters in the upper right part of the figure. The number on the left-hand side shows the data assigned to each motor, and the number on the right-hand side shows the angle of each motor. The transition time for each motion is decided by the parameter in the lower right part of the figure. Moreover, since we can create and store 32 motions in one file, 24 files were created to store all of the 24-form tai-chi motions.
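The editor's per-motor angles, per-motion transition time, and 32-motions-per-file limit suggest a simple data layout. The following is a minimal sketch of such a structure; the class and field names are ours, not the editor's:

```python
from dataclasses import dataclass, field

NUM_MOTORS = 22        # the robot described here has 22 servomotors
MOTIONS_PER_FILE = 32  # the editor stores up to 32 motions per file

@dataclass
class Motion:
    """One edited motion: a target angle (in degrees) per motor, plus
    the transition time used to interpolate into that pose."""
    angles: list       # one target angle per servomotor
    transition_ms: int # time to reach the target pose

@dataclass
class MotionFile:
    motions: list = field(default_factory=list)

    def add(self, motion):
        # Enforce the editor's per-file limit and the motor count.
        if len(self.motions) >= MOTIONS_PER_FILE:
            raise ValueError("a file holds at most 32 motions")
        assert len(motion.angles) == NUM_MOTORS
        self.motions.append(motion)

f = MotionFile()
f.add(Motion(angles=[0.0] * NUM_MOTORS, transition_ms=500))
```

Twenty-four such files, one per tai-chi form, would then hold the whole motion set.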
Fig. 2. User interface screen of the motion editor
5 Motion Design

Basically, each motion was created manually using a motion editor. By connecting each created motion, a series of smooth actions was generated. The detailed process is described below.
5.1 Creation of Motion with the Motion Editor

As described in Sect. 4, we exhaustively studied each tai-chi motion through magazines and videos. Then we divided each continuous motion into a series of short key motions, and key frames were decided for each motion. Next, the posture for each key motion was set using the motion editor, which then output the control data for each servomotor. Throughout this process, we had to create each motion while maintaining as much balance as possible.
5.2 Check of Continuous Action

Before connecting the motions created in Sect. 5.1 into a series of motions, it was necessary to check for any incongruities by comparing them with human tai-chi motion from magazines and videos. Tai-chi, essentially, is comprised of a series of continuous actions that do not stop from beginning to end. When the robot does tai-chi, however, there is a short pause where motions are connected, because of the specifications of the motion editor that we used. If we concentrate on watching the tai-chi motion of the robot, however, there is little sense of incongruity.
5.3 Motion Adjustment on an Actual Robot

Each tai-chi motion created in Sect. 5.2 was then installed into the robot and checked. Since the robot’s center-of-gravity balance could only be verified on the actual physical robot rather than in simulation, this whole process was the most important and time-consuming one. Small differences in the center-of-gravity between the computer-graphics robot and the physical robot sometimes could not be recognized in the motion editor. If the robot fell down during a tai-chi motion, the motor angles had to be adjusted, or an additional key frame had to be inserted between two key frames. In this way, we checked the series of robot motions for incongruity, eventually obtaining complete motion data.
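Inserting an intermediate key frame between two existing poses amounts to interpolating the motor angles. A minimal sketch, assuming simple linear interpolation (the paper does not specify the interpolation scheme):

```python
def interpolate_keyframe(pose_a, pose_b, t):
    """Linearly interpolate an intermediate key frame between two motor
    poses (lists of angles in degrees), with t in [0, 1].
    A hypothetical stand-in for the manual key-frame insertion above."""
    assert len(pose_a) == len(pose_b) and 0.0 <= t <= 1.0
    return [a + (b - a) * t for a, b in zip(pose_a, pose_b)]

# Midpoint pose between two three-motor poses:
mid = interpolate_keyframe([0.0, 10.0, -20.0], [30.0, 10.0, 0.0], 0.5)
# mid == [15.0, 10.0, -10.0]
```

In practice the interpolated pose would still need the balance check on the physical robot described above, since a pose that is stable at both endpoints need not be stable in between.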
6 Demonstration of Tai-Chi Motion Based on Human-Robot Interaction

We extended our study and added a speech recognition tool called Julian to the robot, enabling it to perform a tai-chi demonstration based on communication with humans. When a user utters a command sentence, the key words are extracted by the speech
Fig. 3. Composition of the conversation system using the robot
recognition tool, converted into the control command for the robot, and sent to it (Fig. 3). The control data for each motion is loaded in the robot’s microcomputer. Corresponding to the control command sent from the Server PC, the microcomputer reads out the appropriate control data to control the robot’s movement. At various exhibitions we demonstrated our robot having simple conversations with the audience and showing them various tai-chi motions. In the future, various applications are possible based on such interaction between humans and this type of humanoid robot. For example, a robot can talk or chat with humans and act as their partner. It can also entertain humans by showing them tai-chi or dancing with them. Moreover, forms of entertainment that currently use computer characters, such as fighting games and role-playing games, could be performed with humanoid robots. In the near future there could be a growing market for such interactive games.
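The keyword-to-command step of this pipeline can be sketched as a simple lookup. The keywords and command names below are illustrative assumptions; they are not taken from the actual Julian grammar or the robot's command set:

```python
# Hypothetical mapping from spoken keywords (as recognised by a tool
# such as Julian) to control commands stored in the microcomputer.
COMMAND_TABLE = {
    "tai-chi": "PLAY_TAICHI_SEQUENCE",
    "bow": "PLAY_BOW",
    "stop": "HALT_MOTION",
}

def utterance_to_command(utterance):
    """Extract the first known keyword from a recognised utterance and
    return the corresponding control command (None if no keyword)."""
    for word in utterance.lower().replace(",", " ").split():
        if word in COMMAND_TABLE:
            return COMMAND_TABLE[word]
    return None

cmd = utterance_to_command("Please show us some tai-chi")
# cmd == "PLAY_TAICHI_SEQUENCE"
```

The Server PC would send `cmd` over the link to the microcomputer, which reads out the matching pre-stored motion data.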
7 Conclusion

In this paper we investigated the realization of tai-chi motions for a humanoid robot and created a control database of human-like tai-chi motions. However, tai-chi is only one of the complicated motions and actions of humans. A humanoid robot must be able to perform various human-like motions, so a robot needs to be capable of autonomous motion, action, and behavior. The autonomy of a robot raises many research issues. We want to prepare a database containing various kinds of fundamental motions and achieve any desired motions and actions by combining these basic motion units. For the preparation of such a database, it is necessary for the motion editor to grasp the center-of-gravity balance and make it easy for the user to design robot motions. In addition, it is necessary to introduce new technologies for the humanoid robot. For example, if a robot encounters stairs, it must use image processing to recognize their height and inclina-
tion. Thus, research on image processing is also required, as is research on accurate speech recognition. At present the functions of humanoid robots are very limited. However, we believe that someday autonomous robots like the androids of science-fiction movies will emerge and be introduced into society.
Building Better Systems for Learning and Training: Bringing the Entertainment Industry and Simulation Technology Together

William R. Swartout
Institute for Creative Technologies, University of Southern California, 13273 Fiji Way, Marina del Rey, CA 90265, USA
[email protected]
http://www.ict.usc.edu
Abstract. In 1999, the Institute for Creative Technologies (ICT) was established at the University of Southern California. The ICT was intended to explore a question: what would happen if researchers who understood the technology of simulation and virtual reality worked in close collaboration with people from the entertainment industry who understood how to create compelling stories and engaging characters? What synergies would emerge? Would it be possible to create much more immersive simulation systems for training and learning? In the brief period since the opening of the ICT, we are starting to see the answers to these questions and understand the promise of this approach. In this keynote talk, I will describe some of the insights that have emerged from this collaboration, the major research efforts we have undertaken in areas such as graphics, artificial intelligence and sound, and the integrating virtual reality applications we have produced in areas such as training and leadership development.
M. Rauterberg (Ed.): ICEC 2004, LNCS 3166, p. 20, 2004. © IFIP International Federation for Information Processing 2004
Game Intelligence: From Animal Play Behavior to Entertainment Computing

Marion Bönsch-Kauke
Ahrenshooper Str. 33/0501, 13051 Berlin, Germany
[email protected] Abstract. Playing a game, cooperative, competitive or mixed motive, keeps our attention; good vibrations and a sense of humor ‘arouse’ interests; desire leads to approach (AIDA): this entertains you. This remark is only true if the state activities in play, gamble or game to achieve real “playing” from a psychological point of view. Profound, extended and long-lasting scientific studies from animal play behavior to progress in culture of mankind have investigated childhood, adolescence and late adulthood up to simulations in cyberspace century. The results of these studies show clearly that game behavior can be described in at least five stages: 1. Relaxed field; 2. Quasi-experimental operating; 3. Imitating and imagining a “make-believe” world; 4. Selection of regularities; 5. Fairness (respect for the rules: moral and responsibility). The peak or crown of game behavior is called “Game Intelligence”. The social-psychological facts of entertainment computing must be taken into consideration when developing present and future inventions.
References
[1] Kauke, Marion (1992). Spielintelligenz. Spielend lernen – Spielen lehren? Spektrum Akademischer Verlag, Heidelberg, Berlin, New York.
[2] Kauke, Marion (1998). Kooperative Intelligenz. Sozialpsychologische und spielexperimentelle Grundlagen der Interaktivität zwischen Partnern. Spektrum Akademischer Verlag, Heidelberg.
M. Rauterberg (Ed.): ICEC 2004, LNCS 3166, p. 21, 2004. © IFIP International Federation for Information Processing 2004
Effects of Violent Video Games on Aggressive Behavior, Helping Behavior, Aggressive Thoughts, Angry Feelings, and Physiological Arousal

Brad Bushman
Institute for Social Research, University of Michigan, 426 Thompson Street, Ann Arbor, MI 48106, USA
[email protected]
http://www.umich.edu/~bbushman
Abstract. Meta-analytic procedures were used to review the results from 85 studies on violent video game effects. Violent video games increase aggressive behavior, aggressive thoughts, angry feelings, and arousal levels, and decrease helpful behaviors. The effects occurred for both children and adults and for both males and females. The more violent the video game, the stronger the effects. Violent video game effects are also increasing over time.
References
[1] Anderson, C. and Bushman, B. (2001). Effects of violent video games on aggressive behaviour, aggressive cognition, aggressive affect, physiological arousal, and pro-social behavior: a meta-analytic review of the scientific literature. Psychological Science, Vol. 12, No. 5, pp. 353–359.
[2] Anderson, C. and Bushman, B. (2002). Effects of media violence on society. Science, Vol. 295, pp. 2377–2379.
M. Rauterberg (Ed.): ICEC 2004, LNCS 3166, p. 22, 2004. © IFIP International Federation for Information Processing 2004
New Behavioural Approaches for Virtual Environments

Marc Cavazza 1, Simon Hartley 1, Jean-Luc Lugrin 1, Paolo Libardi 2, and Mikael Le Bras 1

1 School of Computing, University of Teesside, TS1 3BA, Middlesbrough, United Kingdom
[email protected]

2 Department of Electronics for Automation, University of Brescia, via Branze 38, Brescia, I-25123, Italy
Abstract. We describe a new approach to the behaviour of 3D environments that supports the definition of physical processes and interactive phenomena. The work takes as a starting point the traditional event-based architecture that underlies most game engines. These systems discretise the environments’ Physics by separating the objects’ kinematics from the physical processes corresponding to object interactions. This property has been used to insert a new behavioural layer, which implements AI-based simulation techniques. We introduce the rationale behind AI-based simulation and the techniques we use for qualitative Physics, as well as a new approach to world behaviour based on the induction of causal impressions. This is illustrated through several examples on a test environment. This approach has implications for the definition of complex world behaviour or non-standard physics, as required in creative applications.
1 Introduction

It is a common view in interactive systems research to consider that, while graphics and visualisation have made significant progress over recent years, behavioural aspects are somehow lagging behind and have not sustained a similar pace of progression. This generic statement concerns both the simulation of realistic Physics for 3D worlds and the behaviour (generally AI-based) of autonomous entities populating them. One well-known instantiation of this statement consists in saying that the added value of future computer games will derive increasingly from the AI technology they incorporate, although this statement probably needs to be revisited from a more fundamental perspective. More realistic physical modelling also constitutes a challenge, in particular in terms of computational resources. This problem is currently approached by discretising physical simulation to reflect the actual events taking place in the virtual world, in particular those arising from interaction between world entities. There has been, generally speaking, little research on the notion of integrated world behaviour, which goes beyond the isolated simulation of objects’ physical behaviour to consider the world’s physical phenomena from a more global perspective. Such an approach would be centred not only on physical objects but also on processes affecting objects, and on the aggregation of objects into systems or devices whose behaviour

M. Rauterberg (Ed.): ICEC 2004, LNCS 3166, pp. 23–31, 2004. © IFIP International Federation for Information Processing 2004
cannot be deduced directly from simple physical simulation, but requires a higher level of conceptual modelling. It should also consider relations between events occurring in the 3D world, and how these can be perceived as causally related by the user. In this paper, we introduce a novel approach to the integrated behaviour of virtual worlds, which is based on the use of AI techniques derived, among others, from qualitative simulation. We first describe the rationale for this approach and the system architecture, which is organised around the Unreal Tournament 2003™ game engine [1], whose event-based system serves as a baseline layer for integration of high-level behaviour. We then discuss two novel methods supporting 3D world’s behaviour, which are qualitative physics and causal propagation, and how their basic components and formalism have been adapted to the specific constraints of real-time visualisation.
1.1 Physics Modelling, Event-Based Systems, and Interaction

Comprehensive modelling of all physical events in a virtual environment would be a formidable task, impossible to achieve in real time. In interactive virtual environments, basic physical behaviour is implemented in an interactive fashion to maintain a response rate which is acceptable for user interaction. This has led to the rationale that, in order to maintain this interaction rate, the physical simulation is discretised. Kinematic aspects within these systems tend to be simulated through traditional numerical approaches, while more complex mechanical events (objects breaking or exploding) are pre-calculated. In other words, the overall dynamics of objects is subject to traditional physical simulation, while interactions between objects (collisions, etc.) constitute discretisation units. This saves considerable computation at the level of these events, whose pre-computed consequences can be triggered as a consequence of event recognition. The condition for such a system to work efficiently is the availability of an “event system”, i.e. mechanisms for recognising in real time the occurrence of such events and supporting the programming of cause-effect associations. In most cases, event systems are derived from the basic collision detection mechanisms of the graphics engine. These systems are able to produce event primitives corresponding to collision or contact between objects, or objects entering and leaving areas or volumes in the 3D environment. This is not specific to game engines, as event-based systems play an important role in VR software as well [2]. However, traditional game engines use their event system essentially as an API supporting the ad hoc development of objects’ behaviours associated with specific instances of events.
The starting point for this research was to use the inherent discretisation of Physics in 3D engines to integrate high-level behavioural mechanisms that could support complex behavioural simulation on a principled basis.
1.2 System Architecture

The system comprises a graphic environment, composed of the UT 2003 engine, and external physical simulation modules (called the QP engine and the causal engine, see below), developed in C++. The software architecture is based on UDP communication, supported through the UDPLink class in UT 2003. The messages exchanged between the UT 2003 environment and the behavioural modules correspond, on one
side, to the activation conditions of various behaviour instances run by the engine. On the other side, the engine sends messages to update object states, which are interpreted by the Unreal Environment [3]. The Unreal Tournament engine extensively relies on event generation to support many of its interaction aspects and, most importantly, the mechanism for event generation is accessible to redefine specific behaviours. Formally, an event can be characterised as an encapsulated message, which is generated by an Event Source, this being an object of the environment. Examples of such basic events are: Bump(Actor Other), Touch(Actor Other), UnTouch(Actor Other), ActorEnteredVolume (Actor Volume), etc. The Unreal Tournament Engine implements two different kinds of event: the basic (primitive) events, which are low level events defined within the game engine (derived from the collision detection procedures in the graphic engine), and the programmed events. The latter are events whose definitions are scripted and can thus be programmed by the system developer. This is a mechanism by which the system can parse low-level events into high-level events corresponding to a semantic description of the world.
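The parsing of primitive engine events into high-level semantic events can be sketched as follows. The event names `Bump` and `Touch` follow the primitives listed above; the object properties and the resulting high-level event names are illustrative assumptions, not the system's actual ontology:

```python
# Minimal sketch of parsing low-level engine events into high-level
# events carrying a semantic description of what happened in the world.

def parse_event(primitive, source, other, properties):
    """Map a primitive collision event between two named objects to a
    semantic (programmed) event, given a table of object properties."""
    if primitive == "Bump" and properties.get(source) == "fragile":
        return ("ObjectShattered", source, other)
    if primitive == "Touch" and properties.get(other) == "fluid_source":
        return ("FluidContact", source, other)
    # Anything unrecognised stays at the primitive level.
    return ("Unclassified", primitive, source)

props = {"glass": "fragile", "tap": "fluid_source"}
high = parse_event("Bump", "glass", "floor", props)
# high == ("ObjectShattered", "glass", "floor")
```

In the actual system this parsing is scripted within the engine's event mechanism rather than performed by an external function; the sketch only conveys the low-level-to-high-level mapping.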
1.3 Techniques for World Behaviour

The notion of world behaviour generalises that of virtual world Physics to encompass any kind of dynamic generation of events in the virtual world. In traditional physical simulation, pre-defined consequences are triggered in response to specific events, such as a glass exploding when hit by a missile. This approach can be generalised by considering the principles according to which events can be related to one another. For instance, physical simulation can be entirely discretised using laws of physics to produce causal chains of elementary events. This approach to symbolic reasoning on physical systems corresponds to an AI technique known as Qualitative Physics [3]. On the other hand, it is also possible to produce world behaviours by directly relating events to one another so as to create artificial causal chains. We describe both methods in the remainder of this paper. As with any AI method, but more specifically with those modelling worlds, we should first discuss the knowledge representation formalisms that support their implementation.
2 Ontology and Representations

The principled definition of behaviour in a symbolic system relies on the appropriate description of actions and processes, as well as object properties, which are determinants for their involvement in certain classes of actions and processes. It is thus necessary to develop an ontology for a given environment’s Physics. Importantly, this ontology will describe both objects and relevant actions in the environment.
Fig. 1. An Example Qualitative Process describing Fluid Flows. This process is interactively triggered by moving the glass under running water.
2.1 Representing Actions and Processes

The default mechanism for representing actions in UT 2003™, which is representative of a large class of interactive 3D systems, consists in directly associating physical events to a set of possible consequences depending on the objects involved. For instance, the impact of a fragile object on a hard surface will be associated with this object being broken, through specific, ad hoc scripting. This description should be supported by an appropriate formalism for change-inducing events, which should clearly identify actions and their consequences. The second step consists in defining an ontology of such events, i.e. describing the most important high-level events that can be recognised in the virtual environment. We have termed these change-inducing events Context Events (CE) to reflect their semantic nature. Typically, a CE is represented using an action formalism inspired by those serving similar functions in planning and robotics, such as STRIPS [4] or the operator representation in the SIPE system [5].
Fig. 2. Context Events constitute high-level descriptions of actions in the virtual environment. Here, a fragile object breaking on impact (instantiated to a falling glass).
These representations originally describe operators responsible for transforming states of affairs in the world. They tend to be organised around pre-conditions, i.e. conditions that should be satisfied for them to take place, and post-conditions, i.e. those world changes induced by their application. Our formalism for CE comprises three main fields, which are analogous to the SIPE representation. The first field, called trigger, contains the basic event from which the CE can be recognised and which prompts instantiation of the corresponding CE. The condition field is a formula testing properties of the objects involved in such a CE. The effect field corresponds to the consequence part of the CE and contains the effect to be applied to the objects affected by the CE (Figure 2). Another kind of action representation consists of discretised physical processes such as those used by qualitative physics. These Qualitative Processes (QP) encapsulate the expression of physical laws relevant to a given high-level process, e.g. liquid flows, heat transfer, etc. An example qualitative process is shown in Figure 1. It formalises several relevant aspects, from the conditions that trigger the activation of a process to the formulas which determine the evolution of key variables (influence equations). The set of formalised qualitative processes constitutes an ontology of physical transformations for a given virtual world.
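The trigger/condition/effect structure of a Context Event can be sketched directly. The breaking-glass example follows Figure 2; the class layout, property names and dictionary-based objects are illustrative assumptions, not the system's implementation:

```python
# A sketch of the Context Event (CE) representation, analogous to a
# STRIPS-style operator with trigger, condition and effect fields.

class ContextEvent:
    def __init__(self, trigger, condition, effect):
        self.trigger = trigger      # primitive event that instantiates the CE
        self.condition = condition  # predicate over the objects involved
        self.effect = effect        # change applied to the affected object

    def apply(self, event_name, obj):
        """Fire the CE if the trigger matches and the condition holds."""
        if event_name == self.trigger and self.condition(obj):
            self.effect(obj)
            return True
        return False

# A fragile object breaks on impact (cf. Figure 2).
break_on_impact = ContextEvent(
    trigger="Bump",
    condition=lambda o: o.get("fragile", False),
    effect=lambda o: o.update(state="broken"),
)

glass = {"fragile": True, "state": "intact"}
fired = break_on_impact.apply("Bump", glass)
# glass["state"] == "broken"
```

A non-fragile object receiving the same `Bump` would fail the condition and remain unchanged, which is exactly the role of the condition field.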
2.2 Objects’ Representation

Object descriptions vary greatly depending on the kind of processing which is applied to them. Often, part-whole relationships and functional properties tend to dominate symbolic object descriptions. In our specific context, the main role of object descriptions is to determine the kind of actions and processes they can take part in, as well as relating an object’s visual appearance to the transformations that can be applied to it. In the first instance, we have organised these representations according to several dimensions: i) the object’s mechanical properties (e.g. breakable, movable, etc.), ii) its functional properties (e.g. object as container, fluid source, support, etc.) and iii) its visual properties (which include the object’s appearance, but also the visual translation of certain of its behaviours, for instance for a glass to tilt, etc.). We have made a choice for the overall granularity of the representation, where direct relations can map properties of a given field onto another, in order to avoid the computational overhead of managing a complex semantic network. Another aspect of object representation consists in relating objects to the kind of qualitative processes they can take part in. The integration of Qualitative Physics in the virtual environment’s basic mechanisms is achieved through the redefinition of a special class of physical objects: qualitative process objects, or QP objects. This follows traditional implementation techniques by which classes of objects are defined depending on the computations they can trigger from the interactions they participate in (i.e., in UT 2003, objects manipulated by the native physics engine, Karma™, are defined as members of the class of “Karma™ objects”).
M. Cavazza et al.

The QP objects have several properties: i) they are associated with an event-interception mechanism that attaches data-driven procedures for recognising the pre-conditions of the QPs in which they can take part; ii) their properties can be defined through qualitative variables, which are the key variables defined within QPs and are involved in qualitative proportionalities and influence equations; iii) they are associated with physical states that have a visual translation, including in terms of transitions between states (e.g. animations showing a recipient filling, a liquid evaporating, etc.). These states correspond to landmark values for the qualitative variables.
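A minimal sketch of property iii), under assumed landmark values and state names not taken from the paper, could associate a qualitative variable with landmark thresholds, each mapped to a visual state:

```python
# Hypothetical sketch of a QP object: a qualitative variable whose landmark
# values select a visual state (e.g. the animation shown for a filling glass).

class QPObject:
    def __init__(self, landmarks):
        # landmarks: ordered list of (threshold, visual_state) pairs
        self.landmarks = landmarks
        self.level = 0.0  # the qualitative variable, stored numerically here

    def visual_state(self):
        """Return the visual state for the highest landmark reached."""
        state = self.landmarks[0][1]
        for threshold, name in self.landmarks:
            if self.level >= threshold:
                state = name
        return state

glass = QPObject([(0.0, "empty"), (0.5, "half-full"), (1.0, "full")])
glass.level = 0.6
print(glass.visual_state())  # -> half-full
```

Crossing a landmark is exactly the moment at which a state-transition animation would be triggered in the virtual world.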
3 Qualitative Physics in Virtual Environments
Of the various approaches that have been described in qualitative physics, we have opted for Qualitative Process Theory (henceforth QPT) [6], essentially for its representational properties. QPT descriptions are centred on physical processes (e.g. liquid flows, heat transfer, etc.) whose states are described through the values of qualitative variables. We gave a brief introduction to this formalism in previous sections. In essence, relations between variables are described through influence equations and qualitative proportionalities. The former correspond to the actual dynamics of the process, for instance that the amount of liquid in a recipient increases with the inflow. The latter maintain "static" relationships between variables, such as the fact that the mass of liquid in the container is proportional to its volume. The QPT formalism is well adapted to integration in virtual environments, for several reasons: i) the explicit description of a QP's pre-conditions supports the definition of procedures activating the QP simulation from physical events involving objects in the virtual world (this is the basic mechanism for integrating QPs in the interactive environment); ii) the kind of causality associated with QP descriptions can be matched to user interventions; and iii) QPT has been successfully used to define ontologies with a significant number of processes, representing a whole subset of physical processes for a given world. In terms of actual implementation, pre-conditions are encoded in specific UnrealScript™ (the programming language of UT 2003, serving as an API) procedures associated with the virtual world's objects, in order to trigger the activation of relevant QPs. In that sense, pre-conditions are not, strictly speaking, part of the QP representation actually implemented.
However, all the other elements of the QP representations (qualitative variables, qualitative proportionalities and influence equations) are implemented within the QP engine. Their actual use by the engine during simulations is discussed in the following sections. Figure 3 shows the behaviour of the system simulating the filling of a glass from a running tap. When an object that can behave as a recipient is aligned with a liquid flow (here the beer tap), this satisfies the pre-condition of a filling process and activates the corresponding liquid-flow QP on that object. The process running in the QP engine updates the value of the amount of liquid in the glass through its influence equations. In a similar fashion, qualitative proportionalities update the total mass of the glass, as well as the height of the liquid. These variables transform the state of the filling glass in the virtual world by updating its physical properties (e.g. weight) as well as its appearance. The overall dynamics is dictated by the QP simulation process, the speed of the filling-glass animation being an approximation of these dynamics. The overall simulation remains interactive: at any time, the user can remove the glass from the running tap, which interrupts the process while retaining the physical properties of the glass (the amount of liquid filled into the glass).
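The filling-glass cycle just described can be sketched as follows; the units, rates, and field names are illustrative assumptions, not values from the ALTERNE system:

```python
# Sketch of the glass-filling behaviour: an influence equation I+(amount,
# flow-rate) integrated step by step, then qualitative proportionalities
# Q+(mass, amount) and Q+(height, amount) propagated to dependent variables.

def liquid_flow_step(glass, flow_rate, dt=0.1):
    """One QP-engine cycle for the liquid-flow process."""
    # Influence equation: amount grows with the inflow, capped at capacity.
    if glass["aligned_with_flow"] and glass["amount"] < glass["capacity"]:
        glass["amount"] = min(glass["capacity"], glass["amount"] + flow_rate * dt)
    # Qualitative proportionalities: mass and liquid height follow amount.
    glass["mass"] = glass["empty_mass"] + glass["amount"]
    glass["height"] = glass["amount"] / glass["base_area"]
    return glass

glass = {"aligned_with_flow": True, "amount": 0.0, "capacity": 0.2,
         "empty_mass": 0.3, "base_area": 0.004, "mass": 0.3, "height": 0.0}
for _ in range(60):
    liquid_flow_step(glass, flow_rate=0.05)
print(glass["amount"])  # -> 0.2  (filling stops at the glass's capacity)
```

Removing the glass from the flow simply makes the pre-condition false (`aligned_with_flow = False`), which halts the process while the accumulated amount, mass, and height persist, matching the interactive behaviour described above.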
New Behavioural Approaches for Virtual Environments
Fig. 3. Several Qualitative Processes operating simultaneously can be visualised in the virtual world.
Fig. 4. The Causal Engine can dynamically generate new consequences for events taking place in the virtual world by inhibiting the default outcome and substituting it with alternative effects. It relies on semantic descriptions of the actions and objects involved.
A typical description of world behaviour contains many QPs that can interact dynamically, so as to reflect the behaviour of complex systems or devices that would be difficult to model in an integrated fashion through numerical simulation.
4 Redefining the Laws of Causality
Causal simulation normally relates an event occurring in the virtual world to its logical consequences: it is a technique for generating behaviours using symbolic descriptions of causal chains relating physical events. In that sense, qualitative physics, as introduced above, incorporates causality within the symbolic description of physical processes and the physical laws governing them. However, if we consider the behaviour of a virtual world to be the one perceived by human users rather than an absolute one, it should be characterised by the causal links attributed by the user to sequences of events. Hence, a mechanism that generates event co-occurrences on a principled basis will elicit the perception of causal relations between these events. In return, the modification of causal laws will determine original virtual-world behaviours. This mechanism is implemented in a "causal engine", a system intercepting events in the virtual world and re-arranging their occurrence using specific knowledge about the desired behaviour of the virtual world [7]. The causal engine operates continuously through sampling cycles initiated by the occurrence of actions in the virtual world. Basically, the occurrence of events affecting world objects initiates a sampling cycle, during which the system recognises potential events and stores them while inhibiting their effects (it could be said that it "freezes" them). The causal engine then transforms these "frozen" events, altering their effects, before re-activating them. This re-activation then initiates a new sampling cycle. The causal engine operates by recognising high-level events (introduced above as Context Events, or CEs), whose semantic properties are used to generate new causal associations. These high-level events (such as breaking, filling, launching, etc.) are recognised from primitive events obtained from the graphics engine.
For instance, the candidate CE for the glass breaking is triggered by the glass hitting the table surface. This means that, during a sampling cycle, a break(?glass) CE will be instantiated upon recognition of the hit(?table, ?glass) basic event, once the CE's conditions are satisfied. This CE will be the target for effect modifications in the causal engine. These modifications of CEs are carried out through the application of specific knowledge structures, called Macro-Operators (henceforth MOps). MOps use world knowledge (for instance, on the physical properties of objects) to modify appropriate CE parameters. For instance, the object which should break as an effect of the CE could be replaced by a similar, "breakable", object. We can illustrate the behaviour of the causal engine with a simple example. The test case we will consider is that of a glass being grasped, then dropped by the user from a certain height onto the surface of a table, which can also hold other objects, such as similar glasses. The default physical behaviour would be for the glass to break on impact (Figure 4, top line), as would be directly encoded as an object behaviour in a physics engine. Figure 4 represents several alternative behaviours. The default object of the break CE, i.e. the falling glass, is substituted with the other glass standing on the table. The basis for this substitution is that the two objects are similar (actually identical) and in close spatial relation.
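The sampling cycle and a similarity-based substitution MOp can be sketched as follows; the predicate names and data layout are hypothetical, chosen only to mirror the description above:

```python
# Illustrative sampling-cycle sketch: a recognised CE is "frozen", a
# Macro-Operator (MOp) substitutes a similar object as its target, and the
# modified effect is then re-activated.

def similar_substitution_mop(ce, world):
    """MOp: redirect the CE's effect to a similar, breakable, nearby object."""
    original = ce["object"]
    for name, obj in world.items():
        if (name != original and obj["kind"] == world[original]["kind"]
                and obj["breakable"] and obj["near"](world[original])):
            ce["object"] = name  # substitute the default target
            break
    return ce

def run_cycle(ce, world, mops):
    for mop in mops:                          # transform the frozen CE...
        ce = mop(ce, world)
    world[ce["object"]]["state"] = "broken"   # ...then re-activate its effect
    return ce["object"]

world = {
    "falling_glass": {"kind": "glass", "breakable": True, "state": "intact",
                      "near": lambda other: True},
    "table_glass":   {"kind": "glass", "breakable": True, "state": "intact",
                      "near": lambda other: True},
}
broken = run_cycle({"name": "break", "object": "falling_glass"}, world,
                   [similar_substitution_mop])
print(broken)  # -> table_glass
```

Swapping in a different MOp (one that targets the table, or one that replaces "break" with "tilt and spill") yields the other alternative behaviours of Figure 4 without touching the event-recognition machinery.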
The resulting impression is depicted in Figure 4 (bottom line, left): the glass falls on the table and, upon impact, it is the adjacent glass that breaks. From the user's perspective, the normal cause-effect sequence is disrupted: the triggering event of a given CE, in this case the glass falling on a table, will be followed not by its default consequence (the falling glass breaking), but by an alternative effect (e.g. a nearby glass breaking without being directly hit). The causal engine can generate multiple alternative behaviours for a given CE: in this example, the table can break rather than the falling glass (Figure 4, bottom line, centre), or the adjacent glass on the table can tilt, spilling its contents (Figure 4, bottom line, right). The causal engine can generate alternative cause-effect relationships for the complete set of events occurring in a virtual world, so as to redefine the overall physical behaviour experienced by the user.
5 Conclusion and Perspectives
We have presented a new approach to the implementation of a virtual world's physical behaviour. This approach is based on the simulation of physical phenomena using symbolic computation rather than numerical integration. It uses established AI techniques, such as qualitative simulation, as well as a novel approach to the explicit definition of laws of causality. This approach enables the description of the overall "laws of physics" of a given world, supporting in particular the description of alternative laws of physics. The simulation method is compatible with the operation of native physics engines, which can still take charge of the non-discretised aspects of the simulation (e.g. object kinematics): in that sense, even as symbolic methods, these techniques do not compromise the response time of the overall system.
Acknowledgements. This work has been supported in part by the European Commission through the ALTERNE project, IST-38575.
References
1. Lewis, M. and Jacobson, J.: Game Engines in Scientific Research. Communications of the ACM, Vol. 45, No. 1, pp. 27–31, 2002.
2. Jiang, H., Kessler, G.D. and Nonnemaker, J.: DEMIS: a Dynamic Event Model for Interactive Systems. ACM Virtual Reality Software and Technology 2002, Hong Kong, 2002.
3. Cavazza, M., Hartley, S., Lugrin, J.-L. and Le Bras, M.: Qualitative Physics in Virtual Environments. ACM Intelligent User Interfaces, pp. 54–61, 2004.
4. Fikes, R.E. and Nilsson, N.J.: STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2 (3-4), pp. 189–208, 1971.
5. Wilkins, D.E.: Causal reasoning in planning. Computational Intelligence, Vol. 4, No. 4, pp. 373–380, 1988.
6. Forbus, K.D.: Qualitative Process Theory. Artificial Intelligence, 24 (1-3), pp. 85–168, 1984.
7. Cavazza, M., Lugrin, J.-L., Hartley, S., Libardi, P., Barnes, M.J. and Le Bras, M.: ALTERNE: Intelligent Virtual Environments for Virtual Reality Arts. Smart Graphics 2004 Symposium, Banff, Canada, Lecture Notes in Computer Science, Vol. 3031, Springer-Verlag, 2004.
Advanced Interaction Design
“Kuru-kuru Pitcher”: A Game for the Internet Chair

Kazuya Adachi, Michael Cohen, Uresh Duminduwardena, and Kayoko Kanno
Spatial Media Group, University of Aizu, Aizu-Wakamatsu, Fukushima-ken 965-8580, Japan
{m5081101,mcohen,m5062101,s1090061}@u-aizu.ac.jp
Abstract. We have developed second-generation prototypes of the Internet Chair, a novel internet appliance. The first generation explored using the chair as an input device; the prototype described here is a pivot (swivel, rotating) chair deployed as an output device, a rotary motion-platform information appliance. Its haptic display modality is yaw, dynamically synchronized with wireless visual displays and spatial audio in a rotation-invariant virtual space. In groupware situations, like teleconferencing, chat spaces, or multiplayer gaming, such orientation is also used to twist iconic representations of a seated user, avatars in a virtual world, enabling social situation awareness. Using its audio display modality, transaural speakers (without crosstalk), “nearphones” embedded in the seat headrest, the system can present unencumbered binaural sound with soundscape stabilization for multichannel sound image localization. As a haptic output modality, chairs with servomotors render kinesthetic and proprioceptive cues, twisting under networked control to direct the attention of a seated subject, orienting seated users like a “dark ride” amusement park attraction, or under active user control, local and/or distributed. The chair, manifesting as personal LBE (location-based entertainment), can be used in both stand-alone and networked applications. We have developed a multiplayer game that exploits some unique features of our networked rotary motion-platform, loosely resembling a disk/disc access driver, in which “spindled” players race to acquire circularly arrayed, dynamically arriving targets.
Keywords: {augmented, enhanced, hybrid, mediated, mixed} reality/virtuality, haptic interface, information furniture, location-based entertainment (LBE), motion platform, networked appliance, soundscape stabilization.
1 Introduction
There are more chairs in the world than windows, desks, computers, or telephones. According to a metric of person-hours used, and generalized to include couches, stools, benches, and other seat furniture, the chair is the most popular tool on earth, with the possible exceptions of its cousin the bed, and eyewear. The Internet Chair1 begins to exploit such ubiquity, as an instance of information furniture. The first generation explored using the chair as an input device [1]. (The introduction to that same

1 www.u-aizu.ac.jp/~mcohen/spatial-media/IC/
M. Rauterberg (Ed.): ICEC 2004, LNCS 3166, pp. 35–45, 2004. © IFIP International Federation for Information Processing 2004
K. Adachi et al.
reference also surveys some instances of cybernetic chairs.) It determines which way a seated user is facing, adjusting dynamic maps and soundscape presentations accordingly. We have developed a second-generation prototype of our Internet Chair deployable as an output device, a motion-platform information appliance. Dubbed “Schaire” (pronounced “schaire”, for “share-chair”), our prototypes can twist under networked control, synchronized with panoramic visual displays and spatial audio for propriocentric consistency. Besides entertainment computing, this research belongs to fields variously described as or associated with “calm technology” or “ubicomp” (ubiquitous computing2), a containment hierarchy for which is proposed in Table 2, driving displays at different scales, a taxonomy of which is shown in Table 1.
1.1 Soundscape Stabilization
The direction one’s body is oriented differs from the way one’s head is turned (a primary parameter for auditory directionalization), which in turn differs from the way one’s eyes (and often one’s attention) point. Nevertheless, a system that tracks the orientation of a pivot (swivel, rotating) chair provides a convenient first-order approximation of all of these attributes. Informal experiments suggest that seated body tracking alone provides adequate parameterization of dynamic transfer-function selection for auditory directionalization [6], while serving as a cue to others in groupware contexts (virtual conferences, concerts, and cocktail parties) about directed attention. The propriocentric sensation is linked with soundscape stabilization: invariance preserving the location of virtual sources under reorientation of the user, for world-referenced spatial audio.
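The core of soundscape stabilization reduces to a simple compensation, sketched here (the degree convention is an assumption for illustration): the azimuth used for audio rendering is the world-referenced source azimuth minus the tracked chair yaw, so a virtual source stays put as the seated user rotates.

```python
# Minimal sketch of soundscape stabilization: subtract the tracked chair yaw
# from the world-referenced source azimuth to get the head-relative azimuth
# used for directionalization.

def rendered_azimuth(source_azimuth_deg, chair_yaw_deg):
    return (source_azimuth_deg - chair_yaw_deg) % 360

# A source due east (90 degrees) heard by a user who has turned to face east
# is rendered straight ahead:
print(rendered_azimuth(90, 90))  # -> 0
```

As the chair spins, re-evaluating this each frame keeps the source fixed in the world frame rather than rotating with the listener.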
1.2 Social Situational Awareness
Gaze awareness is an understanding of the direction of another’s attention [7,8]. Besides the usual reasons for turning around (to face an interesting direction), users can use rotation to disambiguate front-back sounds, for example, and to signal to other users a direction of regard. A networked chair can be linked to visual displays of virtual spaces, which iconify distributed users as avatars, as suggested by Table 4. Calibrating the Schaire to align real and virtual worlds amplifies natural situational awareness so that, for example, teleconferees’ voices can be directionalized to their actual locations. With a suitably acoustically transparent audio display, such signals can be registered with the actual environment for alignment of real and synthesized cues. In general, both an icon/avatar’s visual presentation and audio manifestation are directionally dependent: icons by having non-symmetric attributes, and sources and sinks by being non-isotropic (non-omnidirectional). The multicast spinning of audio sources communicates to other users in a chat space, both visually and acoustically, as the loudness of non-omnidirectional sources changes when speakers face or turn away from sinks.
2 www.didi.com/sorcerersapprentice
3 www.jabra.com/products/mobiles.htm
4 www.elumens.com/produts/visiondome.html
5 www.ambisonics.net
6 www.mew.co.jp/cyberdome
7 www.imax.com
Fig. 1. CVE Architecture: groupware suite
2 Implementation: Session-Integrated Multimodal I/O Clients
We have designed and implemented an architecture and framework [9] to support a collaborative virtual environment (CVE), allowing distributed users to share multimodal virtual worlds. Our CVE architecture (shown in Fig. 1) is based upon a client/server (C/S) model, and its main transaction shares the state of virtual objects and users (avatars) by effective multicast, via replicated unicast of position parameters to client peers in a session. The client/server architecture integrates multimodal control/display clients, including the haptic renderer, a spiral visualizer and intensity panner, and a perspective renderer, described in the following subsections.
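The replicated-unicast transaction can be sketched as follows; the class and method names are hypothetical, chosen only to illustrate the session model, not taken from the authors' framework:

```python
# Sketch of the CVE session transaction: the server shares avatar state by
# replicated unicast, forwarding each position update to every other client
# peer in the session.

class SessionServer:
    def __init__(self):
        self.clients = {}  # client_id -> callback receiving updates

    def join(self, client_id, callback):
        self.clients[client_id] = callback

    def update(self, sender_id, params):
        """Replicated unicast: forward the new parameters to all peers."""
        for client_id, callback in self.clients.items():
            if client_id != sender_id:
                callback(sender_id, params)

received = []
server = SessionServer()
server.join("haptic", lambda sender, p: received.append(("haptic", sender, p)))
server.join("visual", lambda sender, p: received.append(("visual", sender, p)))
server.update("haptic", {"azimuth": 45})
print(received)  # only the visual client receives the haptic client's update
```

In the real system the callbacks would be network sends over the session connection; heterogeneous clients (haptic renderer, visualizer, perspective renderer) all consume the same position parameters.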
2.1 Azimuth-Display Output Modality
Our second-generation prototype, developed with partners Mechtec8 and Yamagata University, features a powerful servomotor (with about 3–4 newton-metres of torque, adjustable to limit maximum speed and force) for force display [11][12] and computer-mediated rotation of the chair: a visceral sensation as it whirls around to direct the attention of a seated subject with adjustable insistence/forcefulness, like a “dark ride” amusement park attraction, or subtly nudges the user in a particular direction. As shown in Fig. 3, an arbitrary number of similarly equipped chairs can be networked, with application-determined distribution of cybernetic torque [13]. In practice, each Schaire uses two session channels, one to track its realtime orientation and one to anticipate its rotational “target.” (Details of the chair driver are the subject of a forthcoming article.)

8 www.mechtec.co.jp

Fig. 2. For its haptic output modality, servomotors render kinesthetic force display, rotating each Schaire under networked control. Note the nearphones straddling the headrest.
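The two-channel scheme (reported orientation vs. rotational target) can be sketched as a servo loop; the step size and degree convention are illustrative assumptions, not the chair driver's actual parameters:

```python
# Hypothetical sketch of the chair servo: the orientation channel reports the
# current yaw, the target channel supplies the commanded azimuth, and each
# cycle steps the chair toward the target along the shortest arc, with the
# step capped to limit maximum speed.

def servo_step(current_deg, target_deg, max_step=5):
    error = (target_deg - current_deg + 180) % 360 - 180  # shortest arc
    step = max(-max_step, min(max_step, error))
    return (current_deg + step) % 360

yaw = 350
for _ in range(5):
    yaw = servo_step(yaw, 10)
print(yaw)  # -> 10  (crossed 0 degrees by the shortest arc)
```

Capping the per-cycle step is one plausible way to realize the "adjustable insistence/forcefulness" described above: a small cap nudges, a large cap whirls.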
Fig. 3. Simplified concept: nearphone-equipped rotating seats exchanging torque via networked mutual tractor beams
2.2 (Soundscape-Stabilized Swivel-Seat Spiral-Spring): Spatial Audio Output Modality
We have developed interfaces for such a servomotor-equipped chair [14], one graphical instance of which is shown in Fig. 4. This GUI displays and controls the azimuth of the Schaire using a spiral-spring metaphor, and also allows positioning of audio sources, directionalized from resident sound files or captured in realtime from analog streams. As an alternative to transaural loudspeakers (providing crosstalk-cancelled binaural cues), speaker arrays, and normal headphones, we are using “nearphones”, external loudspeakers placed near but not on the ears, straddling the headrest of a chair: a hybrid of circumaural headphones (which block ambient sounds) and loudspeakers, as shown in Fig. 5. Our groupware suite features other heterogeneous clients. Of particular relevance for the game described in the next section is a computer-graphic rendering of a space, allowing various camera positions, including endocentric (from the point-of-view of the avatar), egocentric or tethered (attached to but separate from the avatar), and exocentric (totally detached from the avatar) perspectives [15, pp. 125–147]. For cable-less operation, necessitated by the spinning chair, these clients run on laptop computers networked via Wi-Fi. (In our lab we use various Mac iBooks and PowerBooks with the AirPort9 wireless option.)
9 www.apple.com/airport
Fig. 4. Soundscape-stabilized spiral-spring swivel-seat interface. A spiral-spring metaphor is used to calculate the restoration force delivered through a servomotor; the yaw of the chair (as indicated by the total deflection of the spring) and the azimuth of a virtual sound source (as indicated by the plus sign, set here at 0° (NW)) are used to control intensity stereo panning for playback of audio files and streams. (Developed by Kenta Sasa and Shiyougo Ihara with Takashi Wada, and extended by Newton Fernando with Dishna Wanasinghe.)
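The spiral-spring metaphor of Fig. 4 amounts to a Hooke's-law restoration: torque proportional to the spring's total deflection, pulling the chair back toward the spring's rest azimuth. The sketch below illustrates this; the stiffness value is purely illustrative, not the interface's actual gain.

```python
# Sketch of the spiral-spring restoration force: a Hooke's-law torque
# opposing the total deflection of the spring (stiffness is an assumed value).

def spring_torque(deflection_deg, stiffness=0.5):
    """Restoration torque opposing the deflection (arbitrary units)."""
    return -stiffness * deflection_deg

print(spring_torque(90))  # -> -45.0
```

Because the spring can wind through multiple turns, the deflection is a total accumulated angle rather than a wrapped azimuth, which is what lets the GUI distinguish "one full turn plus 30 degrees" from "30 degrees".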
2.3 Kuru-kuru Pitcher Game: Graphical Output Modality and Input
We have developed a game called “Kuru-kuru Pitcher” featuring the motion platform, programmed using Java3D10 [16][17][18]. (“Kuru-kuru” means “rotation” in Japanese.) Users seated in the Internet Chair shoot at blocks arranged around the players like the hours on a clock. (For example, the twelfth block is to the (logical) North, and the third block is to the East.) A software game manager streams instructions about which blocks should be targeted. Players race to rotate their chairs, aiming at and shooting the targets, acquiring them in any order. The game’s main graphical window includes three panes, as shown in Fig. 5. The left pane shows an exocentric view of both players; the right panes show egocentric views for both players. When the “Shoot” button is pushed, one’s ball is pitched, and the animation is displayed in the game windows (the yellow sphere). If one’s ball hits a correct block, it disappears from the space (and displays) of both players.
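The clock-face target layout described above can be sketched as follows; the tolerance value and function names are illustrative assumptions, not taken from the game's implementation:

```python
# Illustrative sketch of the game's target geometry: twelve blocks at clock
# positions around the seated player (block 12 at the logical North), with a
# shot registering a hit when the chair's azimuth is within tolerance of the
# target block's azimuth.

def block_azimuth(block):
    """Clock position -> azimuth in degrees (12 o'clock = 0 = North)."""
    return (block % 12) * 30

def shot_hits(chair_azimuth, block, tolerance=15):
    diff = abs((chair_azimuth - block_azimuth(block) + 180) % 360 - 180)
    return diff <= tolerance

print(block_azimuth(3))   # -> 90  (East)
print(shot_hits(92, 3))   # -> True
print(shot_hits(180, 3))  # -> False
```

Racing to acquire dynamically arriving targets then reduces to repeatedly rotating the chair to each target's azimuth and shooting while within tolerance.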
Fig. 5. Triple-paned game window, showing one exocentric (left) and two egocentric (right) views. The player has just pitched a yellow ball at a (missed) target, while the opponent has rendered his/her avatar semi-invisible (yielding the translucent occlusion seen in the bottom right of the top right pane).
10 java.sun.com/products/java-media/3D/
As a further situational cue to game state, each player’s nearphones display music spatialized to the opponent’s relative azimuth, using the utility client described in the previous section. Mapping the twelve block positions to the twelve notes of a chromatic scale, we are experimenting with two styles of audio cues: playing the respective note as a rotating chair passes that azimuth, and spatializing a melody composed from the notes associated with the target blocks. One’s avatar transparency can be adjusted dynamically, to make oneself invisible, so that the opponent confronts a “ghost player.” An exotic feature of our game is the ability of each player to swap positions with the other, using the “Thrash” button, naturally also swapping the egocentric display panes on the right of the game window. Even though the player chairs are shown adjacent in the game window, their coexistence is of course only virtual, as the chairs are in separate physical spaces. For instance, one player can’t shoot the other. In this sense, then, the players can be said to inhabit separate but coextensive spaces. Player behavior can be likened to that of a disk/disc drive access algorithm, where the targets are circularly arrayed sectors of dynamically arriving page faults to access, and the players are the spindle. (The metaphor is limited, though, as there is no direct analog to our game’s multiplayer competitive aspects, spatialized audio cues, transparency feature, or “Thrash”-swappable positions.) Strategy involves determining the most efficient acquisition order, in consideration of the other player and one’s own position and motion, adjusted dynamically to accommodate newly arrived targets. The winner is the one with the most blocks at the end of a timed round.
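The chromatic mapping just described can be sketched as follows; the choice of root note (middle C, MIDI 60, for block 12) is an assumption for illustration, not specified in the paper:

```python
# Sketch of the chromatic-scale audio cue: map the twelve clock-position
# blocks to the twelve semitones of a chromatic scale, here expressed as
# MIDI note numbers with block 12 -> C (60), block 1 -> C# (61), and so on.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def block_to_midi(block, root=60):
    """Map a clock-position block to a semitone above the root note."""
    return root + (block % 12)

def block_to_name(block):
    return NOTE_NAMES[block % 12]

print(block_to_name(12), block_to_midi(12))  # -> C 60
print(block_to_name(3), block_to_midi(3))    # -> D# 63
```

Either cue style follows from this mapping: sound the note for whichever azimuth the chair is passing, or sequence the notes of the currently targeted blocks into a melody.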
3 Conclusion
Information furniture interpolates between the convenience of spatially immersive displays and the personalness of carried or worn interfaces. The Schaire mediates between its occupant, a seatee, and information: networked multimedia, including realtime audio and data streams for groupware. Our game, platformed on the rotary motion-platform, encourages propriocentric consistency, while allowing rejection of the perspective singularity characteristic of immersive systems. Its interface’s multimodal, multiplayer features blur the distinction between endo-, ego-, and exocentric systems by integrating an egocentric audio display with fluid visual perspective control into inter-personal LBE that is literally sensational.
Acknowledgments. This research was supported in part by grants from the Gigabit Research Project and the Fukushima Prefectural Foundation for the Advancement of Science and Education.
References
1. Cohen, M.: The Internet Chair. IJHCI: Int. J. of Human-Computer Interaction 15 (2003) 297–311. Special Issue: Mediated Reality. ISSN 1044-7318.
2. Streitz, N.A., Siegel, J., Hartkopf, V., Konomi, S., eds.: Cooperative Buildings: Integrating Information, Organizations, and Architecture (CoBuild’99: Proc. of the Second Int. Wkshp.), Pittsburgh, Springer (LNCS 1670) (1999). ISBN 3-540-66596-X.
3. Lewis, T.: Information appliances: Gadget netopia. (IEEE) Computer (1998) 59–68. ISSN 0018-9162.
4. Mann, S.: Smart clothing: The wearable computer and wearcam. Personal Technologies 1 (1997) 21–27. ISSN 0949-2054 and 1433-3066.
5. Billinghurst, M., Starner, T.: Wearable devices: New ways to manage information. (IEEE) Computer 32 (1999) 57–64.
6. Koizumi, N., Cohen, M., Aoki, S.: Japanese patent #3042731: Sound reproduction system (2000).
7. Ishii, H.: Translucent multiuser interface for realtime collaboration. IEICE Trans. on Fundamentals of Electronics, Communications and Computer Sciences (Special Section on Fundamentals of Next Generation Human Interface) E75-A (1992) 122–131. ISSN 0916-8508.
8. Ishii, H., Kobayashi, M., Grudin, J.: Integration of inter-personal space and shared workspace: ClearBoard design and experiments. TOIS: ACM Trans. on Information Systems (Special Issue on CSCW’92) 11 (1993) 349–375.
9. Kanno, T., Cohen, M., Nagashima, Y., Hoshino, T.: Mobile control of multimodal groupware in a distributed virtual environment. In Tachi, S., Hirose, M., Nakatsu, R., Takemura, H., eds.: Proc. ICAT: Int. Conf. on Artificial Reality and Tele-Existence, Tokyo: University of Tokyo (2001) 147–154. ISSN 1345-1278; sklab-www.pi.titech.ac.jp/~hase/ICATPHP/upload/39_camera.pdf; vrsj.t.u-tokyo.ac.jp/ic-at/papers/0l147.pdf.
10. Cohen, M., Martens, W.L.: Spatial Media Research at the University of Aizu. JVRSJ: J. Virtual Reality Society of Japan 6 (2001) 52–57. ISSN 1342-6680; www.u-aizu.ac.jp/~mcohen/welcome/publications/spatial-media.ps.
11. Burdea, G.C.: Force and Touch Feedback for Virtual Reality. John Wiley & Sons, New York (1996). ISBN 0-471-02141-5.
12. Rosenberg, L.B.: A Force Feedback Programming Primer (for PC Gaming Peripherals Supporting I-Force 2.0 and Direct-X 5.0). (1997) www.force-feedback.com.
13. Brave, S., Ishii, H., Dahley, A.: Tangible interfaces for remote collaboration and communication. In: CSCW’98: Proc. Conf. on Computer-Supported Cooperative Work, Seattle, Washington, ACM (1998) 169–178. ISBN 1-58113-009-0.
14. Cohen, M., Sasa, K.: An interface for a soundscape-stabilized spiral-spring swivel-seat. In Kuwano, S., Kato, T., eds.: Proc. WESTPRAC VII: 7th Western Pacific Regional Acoustics Conf., Kumamoto, Japan (2000) 321–324. ISBN 4-9980886-1-0 and 4-9980886-3-7.
15. Barrilleaux, J.: 3D User Interfaces with Java 3D. Manning Publications (2001). ISBN 1-88477-790-2.
16. Sowizral, H., Rushforth, K., Deering, M.: The Java 3D API Specification. Second edn. Addison-Wesley (2000). ISBN 0-201-71041-2.
17. Palmer, I.: Essential Java 3D fast: developing 3D graphics applications. Springer (2001). ISBN 1-85233-394-4.
18. Walsh, A.E., Gehringer, D.: Java 3D API jump-start. Prentice-Hall (2002). ISBN 0-13-034076-6.
Fun and Sports: Enhancing the Home Fitness Experience

Wijnand IJsselsteijn1, Yvonne de Kort1, Joyce Westerink2, Marko de Jager2, and Ronald Bonants2
1 Human-Technology Interaction Group, Department of Technology Management, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
{w.a.ijsselsteijn,y.a.w.d.kort}@tue.nl
2 Personal Care Institute, Philips Research Eindhoven, 5656 AA Eindhoven, The Netherlands
{joyce.westerink,marko.de.jager}@philips.com
Abstract. The current paper describes research aimed at elucidating our understanding of technology factors that may help users of home exercise equipment stay motivated to do regular work-outs. In particular, we investigated the effects of immersion and of coaching by a virtual agent on the intrinsic motivation and sense of presence of participants cycling on a stationary home exercise bike. A basic two-by-two within-subjects experimental design was employed, whereby participants were presented with a virtual racetrack at two levels of immersion (high vs. low) and two levels of a virtual coach (with vs. without). Results indicate a clear positive effect of immersion on both motivation and presence. The virtual coach significantly lowered the perceived-control and pressure/tension dimensions of intrinsic motivation, but did not affect the enjoyment dimension. The presence of the virtual coach also reduced negative effects associated with VEs.
1 Introduction
An object frequently encountered at yard sales, or gathering dust in the attic, is the stationary exercise bike. This is most likely because exercising on a stationary bike at home is usually quite boring. On the other hand, cycling outside can be time-intensive, uncomfortable (e.g. bad weather, smog), or even dangerous in places not well adapted to bikers. Thus, there appears to be a clear need for exercise equipment in the home that is more stimulating and more gratifying to use, so that people’s motivational levels will not plummet after the initial enthusiasm that led to the purchase of the exercise equipment has faded away. The current study deals with the question of whether virtual environments (VEs) and biofeedback presented via a virtual coach can help raise motivation for engaging in physical exercise. We hypothesized that offering a more immersive environment in which the user feels present would heighten the fun the user is having, and would thus have a beneficial effect on the user’s motivation. Additionally, we expected that a virtual coach providing biofeedback information on training intensity, specifically heart rate, would increase motivation as well, as it helps goal-setting and raises perceived control and competency, both of which help boost motivation.
M. Rauterberg (Ed.): ICEC 2004, LNCS 3166, pp. 46–56, 2004. © IFIP International Federation for Information Processing 2004
1.1 Motivation
Motivation is the concept we use to describe the forces acting on or within an organism to initiate and direct behavior (e.g. [1]). We usually distinguish between intrinsic and extrinsic motivation, where intrinsic motivation refers to engaging in an activity purely for the pleasure and satisfaction derived from doing the activity, whereas extrinsic motivation refers to engaging in a variety of behaviors as a means to an end and not for their own sake [2]. Intrinsic motivation is often considered more powerful, and as leading to more stable behaviour, than extrinsic motivation, and is highly relevant for sports. Below we discuss how immersion and feedback are thought to influence intrinsic motivation.
1.2 Immersion and Presence

Slater & Wilbur [3] refer to immersion as the objectively measurable properties of a VE. According to them, it is the "extent to which computer displays are capable of delivering an inclusive, extensive, surrounding, and vivid illusion of reality to the senses of the VE participant" (p. 604). Thus, immersion refers to the system's ability to shut out sensations from the real world, to accommodate many sensory modalities with rich representational capability, and to offer a panoramic field of view and sound. Presence can be conceptualised as the experiential counterpart of immersion. It has been defined as the sense of 'being there' in a mediated environment (e.g., [4], [5]) and, more recently, as the "perceptual illusion of non-mediation" [6], which broadens the definitional scope somewhat to include social factors. Various empirical studies have demonstrated a positive effect of immersion factors on presence, including field of view, stereoscopic imagery, interactivity, pictorial realism, spatial audio, and haptic feedback [7-10]. Presence is generally considered a positive outcome of immersive environments, leading to engagement and more intense enjoyment. If presence can make fitness a more engaging and fun experience, it is likely to boost intrinsic motivation to train.
1.3 Biofeedback

The term biofeedback was originally used to describe laboratory procedures (developed in the 1940s) in which trained research subjects were provided with information about their own brain activity, blood pressure, muscle tension, heart rate, and other bodily functions that are normally not under voluntary control, with the purpose of exerting conscious control over them. Today, biofeedback is often used as a training technique in which people are taught to improve their health and performance by using signals from their own bodies. In the current experiment, heart rate was measured and, based on this information, feedback was provided to the participant by a virtual social agent, who could encourage participants to do better, tell them they were doing great, or tell them to slow down a little if the heart rate became too high. In this way, the coach acted both as an extrinsic motivator and as a provider of feedback on the impact of the exercise. This information is likely to enhance the person's perceived control and competence, and it stimulates goal-setting and adherence: the information underlines the person's efforts and progress.
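The coach's three-way feedback rule described above can be sketched as a simple heart-rate-zone mapping. This is a minimal illustration, not the study's actual implementation; the zone thresholds and message wording are invented for the example:

```python
def coach_feedback(heart_rate, lower=100, upper=150):
    """Map a heart-rate reading (bpm) to one of three coach messages.

    The thresholds `lower` and `upper` are illustrative placeholders;
    a real application would derive per-person training zones.
    """
    if heart_rate > upper:
        return "Slow down a little."  # intensity too high
    if heart_rate < lower:
        return "You can do better, push a bit harder!"  # encourage
    return "You are doing great, keep it up!"  # within the target zone
```

For instance, `coach_feedback(160)` returns "Slow down a little.", mirroring the coach's reaction when the heart rate became too high.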
W. IJsselsteijn et al.
2 Method

2.1 Design

A two-by-two within-subjects experimental design was employed, in which participants were presented with two levels of Immersion (high vs. low) and two levels of Virtual coach (with vs. without).
2.2 Participants

Twenty-four employees of Philips participated in the study, none of whom engaged in frequent physical exercise. The male/female distribution was even; the average age was 41.3 years.
2.3 Equipment and Setting

The experiment was conducted in the HomeLab at the Philips Research laboratories in Eindhoven, The Netherlands. HomeLab is a future-home simulation: a test laboratory that looks like a normal house and thus provides a relatively natural context in which to test participants' behaviour with the home fitness application. The room in which the experiment took place was darkened to prevent bright sunlight from unpredictably influencing the visibility of the screen. Participants were asked to seat themselves on a racing bicycle placed on a training system with variable resistance. The bicycle stood in front of a wall-mounted screen onto which the environment and the coach were projected (see Figure 1).
Fig. 1. Composite photograph of the experimental setup, with the stationary racing bicycle placed in front of the wall-projected virtual environment. Participants’ viewing distance was approximately 2.20m, with an image size of 1.60m by 1.10m.
Fig. 2. Immersive condition with virtual coach. Virtual environment generated by Tacx T1900 ‘i-magic’ VR Trainer software. Virtual Coach kindly provided by Philips PDSL.
2.4 Stimuli

The high immersion condition showed a fairly detailed, interactive, computer-generated visualization of a person cycling on a racing bicycle through a landscape. Interaction with the environment took place via the handlebars (for direction) and biking velocity. The low immersion condition showed an abstract bird's-eye view of a racetrack, with a dot indicating the position of the biker. Interaction with the environment was less rich, since participants did not have to use the handlebars to stay on track, nor could they influence the velocity of the dot on the track (although most participants were not aware of this). In the condition with the virtual coach, an avatar-like female appeared every minute (see Figure 2). She gave feedback to the participant based on heart-rate information measured with a chest belt. In the condition without the virtual coach, this image did not appear.
2.5 Dependent Variables

The main dependent measures were intrinsic motivation and presence. Motivation was measured using an existing, well-validated questionnaire, the Intrinsic Motivation Inventory (IMI), consisting of six subscales: interest/enjoyment (the most central one to motivation), perceived competence, value/usefulness, perceived control/choice, felt pressure and tension, and effort [11]. For measuring presence, various methods have been used or proposed to date (for a review, see [12]). The ITC Sense of Presence Inventory [13] provides sufficient sensitivity while having proven reliability and validity. It consists of four subscales: spatial presence, engagement, ecological validity, and negative effects. In addition, the heart rate and velocity of the participant were measured and recorded. Heart rate was used as input for the coach's directions; average speed was taken as a corroborative behavioural measure of motivation, since one would expect participants to work harder during their exercise when motivation is higher.
2.6 Procedure

Participants, in sports clothing, received a short introduction upon entering the exercise room. After putting on the chest belt for heart rate measurement, they mounted the bicycle for the first session. The procedure consisted of four sessions, the order of which was fully counterbalanced across participants. After every session, participants filled out the IMI and the ITC-SOPI, which also gave them 10 minutes to recover from the exercise. The total experiment took about 1.5 hours to complete.
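With four conditions and 24 participants, full counterbalancing works out neatly: there are exactly 4! = 24 distinct session orders, so each participant can be assigned a unique order. A sketch of how such a schedule can be generated (the condition labels are invented shorthand for the four cells of the 2x2 design, not the paper's terminology):

```python
from itertools import permutations

# The four cells of the 2x2 design (Immersion x Coach);
# labels are shorthand invented for this sketch.
conditions = ["high/coach", "high/no-coach", "low/coach", "low/no-coach"]

# 4! = 24 distinct session orders, one per participant.
orders = list(permutations(conditions))

# Assign participant 1..24 a unique ordering of the four sessions.
schedule = {p + 1: order for p, order in enumerate(orders)}
```

Each order contains every condition exactly once, so every condition appears in every serial position equally often across participants.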
3 Results

For both the ITC Sense of Presence Inventory (ITC-SOPI) and the Intrinsic Motivation Inventory (IMI), components were computed based on the factor structures validated in earlier studies. Subsequently, repeated measures analyses of variance (REMANOVA) were performed on these components according to the full model, with Immersion (high vs. low) and Coach (with vs. without) as within-subjects factors. Results are reported for the intrinsic motivation components first, then for presence. Lastly, we report bivariate correlations between the various components.
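For a two-level within-subjects factor such as Immersion or Coach, the F-test with one numerator degree of freedom is equivalent to the square of a paired t-test on per-subject contrast scores, which makes a minimal sketch of the analysis easy to write. The contrast values below are invented for illustration and are not the study's data:

```python
import statistics
from math import sqrt

def within_subjects_F(contrasts):
    """F(1, n-1) for a one-df within-subjects effect, computed as the
    square of a paired t statistic on per-subject contrast scores
    (e.g. each participant's high-immersion mean minus low-immersion mean).
    """
    n = len(contrasts)
    t = statistics.mean(contrasts) / (statistics.stdev(contrasts) / sqrt(n))
    return t * t

# Hypothetical per-subject contrasts (high minus low immersion),
# invented for this sketch only.
speed_contrasts = [2.1, 1.8, 2.5, 1.9, 2.2, 2.0, 1.7, 2.4]
F = within_subjects_F(speed_contrasts)  # compare against F(1, n-1)
```

With the study's 24 participants, the denominator degrees of freedom are 24 - 1 = 23, matching the F(1,23) statistics reported for the velocity data.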
3.1 Intrinsic Motivation

The six IMI components were all subjected to full-model REMANOVAs. Four scales (interest/enjoyment, perceived competence, value/usefulness, and perceived control) showed significant effects of Immersion: all scores were higher for high immersion. The last two of these scales (value/usefulness and perceived control) also showed a significant effect of the virtual coach, as did the pressure/tension scale: value/usefulness was higher, while perceived control and pressure were lower with the coach present. The effort/importance scale did not show any significant results. Means of the most important scales are visualized in Figures 3-5 and reported in Table 1; statistics are reported in Table 2. Average velocity was used as a corroborative behavioural measure of motivation. Indeed, the velocity scores showed the same pattern of results as the questionnaire data. There was a main effect of Immersion F(1,23)=65.73, p