Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Alfred Kobsa University of California, Irvine, CA, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen TU Dortmund University, Germany Madhu Sudan Microsoft Research, Cambridge, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Gerhard Weikum Max Planck Institute for Informatics, Saarbruecken, Germany
6616
Francisco V. Cipolla Ficarra Carlos de Castro Lozano Mauricio Pérez Jiménez Emma Nicol Andreas Kratky Miguel Cipolla-Ficarra (Eds.)
Advances in New Technologies, Interactive Interfaces, and Communicability First International Conference, ADNTIIC 2010 Huerta Grande, Argentina, October 20-22, 2010 Revised Selected Papers
Volume Editors Francisco V. Cipolla Ficarra HCI Lab., 24121 Bergamo, Italy E-mail: [email protected] Carlos de Castro Lozano EATCO Research Group, University of Córdoba, Spain E-mail: [email protected] Mauricio Pérez Jiménez University of La Laguna, Department of Drawing, Design and Aesthetic, Spain E-mail: [email protected] Emma Nicol University of Strathclyde, Computer and Information Sciences, UK E-mail: [email protected] Andreas Kratky University of Southern California, Interactive Media Division, USA E-mail: [email protected] Miguel Cipolla-Ficarra HCI Lab., 24121 Bergamo, Italy E-mail: fi[email protected] ISSN 0302-9743 e-ISSN 1611-3349 ISBN 978-3-642-20809-6 e-ISBN 978-3-642-20810-2 DOI 10.1007/978-3-642-20810-2 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: 2011926011 CR Subject Classification (1998): H.4, H.5, C.2, H.3, D.2, I.2 LNCS Sublibrary: SL 3 – Information Systems and Application, incl. Internet/Web and HCI
The user interface is the environment par excellence where the latest breakthroughs in the formal and factual sciences converge. In the design of current and future interactive systems, the presentation of content on screen is the key to the success of the rest of the components that make up an avant-garde computer science structure. The year 2010 opened an interesting decade in which to consolidate communicability, especially with the constant (r)evolution of the interfaces of interactive systems. Right now we are starting to see the first results of the intersection of scientific knowledge to increase the quality of telecommunications in the daily life of millions of users. However, interactive systems will continue to be programmed with regard to the design of the interfaces, using the latest advances in software and the constant progress of hardware. A democratization of future models in human–computer interaction will ease interaction in immersive multimedia environments oriented toward education, health, work and leisure time. The current era of interactive communication makes us reflect and work daily to meet the needs of the societies to which we belong, and it tends to improve the quality of life of each of their members. In this environment the new technologies can and must be within reach of everyone. The state of the art will ultimately lead to future technological tendencies born from the interaction between humans, the constant technological (r)evolution, and the environment. This is a place where the intersection of knowledge deriving from the formal and factual sciences can enrich, in a masterful way, each of the research projects presented here, related to the latest generation of on-line and off-line interactive systems.
The (r)evolution of the Net must allow humans to take a steady flight toward new horizons where technological breakthroughs are shared with the base of the pyramid – the general public – in the least amount of time possible. Important steps have been taken in that direction in the last few years thanks to the globalization of telecommunications and social networks. However, the costs stemming from free access to digital information and/or the legislation in force still prevent that flight in many societies for millions of potential users of interactive multimedia systems. In the current scientific environment we intend to build a bridge of solutions to problems, suggesting innovative solutions and future guidelines for action thanks to the lessons learned from the research we have conducted or that is currently in progress. The Program Committee of the conference consisted of Albert, C. (Spain), Alcantud-Marín (Spain), Anderson, S. (USA), Balsamo, A. (USA), Bellandi, V. (Italy), Bleecker, J. (USA), Buzzi, M. (Italy), Cáceres Díaz, A. (Puerto Rico), Cipolla-Ficarra, M. (Italy and Spain), Colorado-Castellary, A. (Spain),
De Castro-Lozano, C. (Spain), Díaz-Pérez, P. (Spain), Dormido-Bencomo, S. (Spain), El Sadik, A. (Canada), Fogli, D. (Italy), Fotouhi, F. (USA), Garrido-Lora, M. (Spain), Giaccardi, E. (Spain), Giulianelli, D. (Argentina), Griffith, S. (Jamaica), Grosky, W. (USA), Guarinos-Galán, V. (Spain), Guerrero-Ginel, J. (Spain), Hadad, G. (Argentina), Hourcade, J. (USA), Ilavarasan, V. (India), Jones, C. (Argentina), Kratky, A. (Germany), Landoni, M. (Switzerland), Lau, A. (Australia), Lau, F. (China), Lebrón-Vázquez, M. (Puerto Rico), Levialdi-Ghiron, S. (Italy), Marcos, C. (Argentina), Mon, A. (Argentina), Moreno-Sánchez, I. (Spain), Možina, K. (Slovenia), Negrón-Marrero, P. (Puerto Rico), Nicol, E. (UK), Pastor-Vargas, R. (Spain), Pérez-García, F. (Spain), Pérez-Jiménez, M. (Spain), Pestano-Rodríguez, J. (Spain), Pino-Mejias, J. (Spain), Ramos-Colón, E. (Puerto Rico), Read, T. (Spain), Rodríguez, R. (Argentina), Rubio-Royo, E. (Spain), Ruipérez, G. (Spain), Ruíz-Medina, C. (Spain), Ryu, Y. (South Korea), Sainz de Abajo, B. (Spain), Salvendy, G. (USA), Sánchez-Bonilla, M. (Spain), Sánchez-Montoya, R. (Spain), Silva-Salmerón, J. (Canada), Stanchev, P. (USA), Styliaras, G. (Greece), Tamai, T. (Japan), Torres-Gallardo, E. (Puerto Rico), Väänänen-Vainio-Mattila, K. (Finland), Valeiras-Reina, G. (Spain), Veltman, K. (Canada), Vera, P. (Argentina), Villarreal, P. (Argentina), Zato-Recellado, J. (Spain). They supported the preparation of the conference. I would like to thank all of the authors and speakers for their effort, as well as the referees for their kind collaboration.
Finally, special thanks go to the people and authorities of Huerta Grande (Córdoba, Argentina), Casa Serrana (Huerta Grande), Carlitur (La Falda), Agencia Córdoba Turismo, Maria Ficarra (ALAIPO and AInCI), Anna Kramer, Ingrid Beyer and Alfred Hofmann (Springer), Comune di Brescello (Reggio nell'Emilia, Italy), Gabriele Carpi (Pro Loco – Brescello), Virginio dall'Aglio (International Film Festival, Brescello) and all those who financially supported the international conference. October 2010
Francisco V.C. Ficarra
Acknowledgements
City Council of Huerta Grande, Córdoba – Argentina
Pro loco di Brescello (R.E.) Italy
Film Festival
Table of Contents
Advances in New Technologies, Interactive Interfaces and Communicability: Design, E-Commerce, E-Learning, E-Health, E-Tourism, Web 2.0 and Web 3.0 .......... 1
Francisco V. Cipolla Ficarra

Autonomatronics™ .......... 8
Alfredo Medina Ayala

Wiki Tool for Adaptive, Accessibility, Usability and Collaborative Hypermedia Courses: MediaWikiCourse .......... 16
C. de Castro, E. García, J.M. Rámirez, F.J. Burón, B. Sainz, R. Sánchez, R.M. Robles, J.C. Torres, J. Bell, and F. Alcantud

Making "Google Docs" User Interface More Accessible for Blind People .......... 20
Giulio Mori, Maria Claudia Buzzi, Marina Buzzi, Barbara Leporini, and Victor M.R. Penichet

An Interactive Information System for e-Culture .......... 30
Susana I. Herrera, María M. Clusella, María G. Mitre, María A. Santillán, and Claudia M. García

Research and Development: Business into Transfer Information and Communication Technology .......... 44
Francisco V. Cipolla Ficarra, Emma Nicol, and Valeria M. Ficarra

Reducing Digital Divide: Adult Oriented Distance Learning
Daniel Giulianelli, Graciela Cruzado, Rocío Rodríguez, Pablo Martín Vera, Artemisa Trigueros, and Edgardo Moreno

Database Theory for Users Unexpert: A Strategy for Learning Computer Science and Information Technology
Francisco V. Cipolla Ficarra

SIeSTA: Aid Technology and e-Service Integrated System .......... 159
C. de Castro, E. García, J.M. Rámirez, F.J. Burón, B. Sainz, R. Sánchez, R.M. Robles, J.C. Torres, J. Bell, and F. Alcantud

SIeSTA Project: Products and Results .......... 171
Carlos de Castro Lozano, Javier Burón Fernández, Beatriz Sainz de Abajo, and Enrique García Salcines

Computer Graphics and Mass Media: Communicability Analysis
Francisco V. Cipolla Ficarra and Miguel Cipolla Ficarra
Advances in New Technologies, Interactive Interfaces and Communicability: Design, E-Commerce, E-Learning, E-Health, E-Tourism, Web 2.0 and Web 3.0

Francisco V. Cipolla Ficarra
HCI Lab. – F&F Multimedia Communic@tions Corp.
ALAIPO: Asociación Latina de Interacción Persona-Ordenador
AINCI: Asociación Internacional de la Comunicación Interactiva
Via Pascoli, S. 15 – CP 7, 24121 Bergamo, Italy
[email protected]
Abstract. We give a summarized historical review of technological breakthroughs and the impact these have had, and will have, in the case of computing linked to telecommunications in the next few years. It is a 360° analysis which takes into account the social factors and the context into which the human being is inserted as a fulcrum of the cosmos in knowledge and scientific advance.

Keywords: Computer Science, Information Technology, Software, Hardware, Human-Computer Interaction.
millions of people in the world thanks to television. With this means of communication the global village of the new millennium had its start. It was the screen that, little by little, was introduced from the workplace and research labs into homes, accompanied by a central unit of electronic processing, the computer [8] [9]. This central processing unit (CPU), made up of memory, an arithmetic and logic unit, and a control unit for input and output through the different peripherals, represents not only the great technological breakthrough of humankind in the last century (obviously, without considering the CPU components separately, since we could go as far back as the abacus, which is still common in China and Japan for making calculations), but also the horizon of many researchers in the labs of thousands of universities scattered around the earth. That horizon has been widened by the eagerness for discovery and invention that constantly keeps hardware ahead of software, with areas of the American and Asian continents daily inventing new products and Europeans devoting themselves to explaining said inventions or discoveries. This reality is easy to see by simply visiting the libraries of North American universities and comparing them with those of European universities. In the former, the study texts are accompanied by an impressive bibliography of science-fiction books, whereas in the latter students stick to the mandatory textbooks. In the Old World, university educational reforms in some member countries mean that subjects related to computer science, multimedia, interface design, etc. [10] [11] [12] cannot be assigned more than 120 pages of reading, whether in technical or humanities subjects. That is, a clear nod to the cavemen's civilization.
One of the factors delaying the advance of the new or latest technologies, from the point of view of an observer of these changes, is the lack of reading during the university period.
2 Screen Culture

The arrival of television in many homes boosted, in many economically developed societies, the creation of a great network of libraries to widen the audience's knowledge of images in motion. Then came the computer, following the premise of making every user a content editor, a premise that has been fulfilled from the technological point of view thanks to the advances of the Internet, free access to information, and communicability [5]. In the countries where the rules of interactive communication among human beings are fulfilled, there is a tendency to read and write more in front of the computer screen. In this sense, the feedback process is continuous, and it bolsters the relationships between the inhabitants of a given community and with the rest of the global village [11] [13] [14]. However, this continuous reading and writing of messages has led to a loss of creativity for bureaucratic reasons (contracting goods and/or services, reviewing project costs, solving the human factors of the work environment, etc.), which must be dealt with even from home, outside working hours. That is, the human being must be available for interactive communication around the clock, seven days a week (24/7). Here is a small involution inside the (r)evolution of the new technologies, that is, a diminution of quality in daily life. Perhaps the creativity of the artists of
the last century related to geometry (although they knew nothing of mathematics) would not have yielded such precious examples, from the scientific point of view, as the impossible figures of Maurits Cornelis Escher [15].
Fig. 1. Example of creativity on paper support by Escher
Fig. 2. Escher and the geometry
Fig. 3. The artistic images boost the advance of the sciences and depict the convergence of human genius in many cases
These figures have served to visualize important algorithms and even as models for 3D representation on the computer screen [16]. It is precisely at this limit of 3D representation on a 2D screen that the great qualitative evolution of computer graphics can be seen, with all its derivations toward the most varied sectors of society, such as education, entertainment, medicine, e-work, e-commerce, rural tourism, etc. [17-20]. All these activities require new professionals oriented toward the communicability of interactive systems [5]. Systems that are constantly being reduced to the point where they will be measured in millimetres, such as a television on a wristwatch or the computer system that pumps blood in an artificial heart. However, the human being keeps asking for socialization, and it is no accident that the Web 2.0 phenomenon has changed the way internet users interrelate. A step beyond is Web 3.0, where the human being wants to interact with intelligent houses or home automation, and to see information displayed on screens that cover the wall, the ceiling, or the floor of a room. That is, to multiply the virtual spaces of interaction to infinity thanks to access to databases interrelated among themselves. Now, in this continuous loop of technological breakthrough, it is necessary not to forget the three main elements necessary to all human beings – food, shelter and clothing – because the gap between churning out products for a privileged consumer elite, together with the servicing of this consumption, and the lower ranks of the population pyramid is opening geometrically, especially in the Latin countries. Therefore, technological breakthroughs must be designed and thought out for use by the greatest number of
possible users on the whole planet, in the shortest time and at the lowest possible cost. This precondition is valid both for products and for on-line and off-line interactive services.
Fig. 4. "Cosmos Escheriano" (Cipolla-Ficarra, F. 1997). An example used to demonstrate that, within three months, a fine arts/journalism student can make computer animations without previous knowledge of technical English or of commercial computer graphics applications. Images like these made it possible in the 1990s to democratize 2D and 3D images in some cities of Southern Europe through the use of a simple PC (assembled, without a brand) and low-cost commercial software: 3D Studio Max.
3 Conclusion

The breakthroughs in the new technologies are generated from the human being's position in the cosmos, where the future must be essential. Evidently, without forgetting the past, but the past must remain in the background of those achievements obtained with the evolution of computer science, interactive design, on-line and off-line hypermedia systems, virtual reality, scientific visualization, etc. Here a continuous balance between theory and practice is necessary, for instance in the European and American university environments.
Now, it is in the combination and intersection of the formal and factual sciences that the basis of scientific growth lies. This knowledge must respect essential principles such as the free circulation of obtained results for the common good of humankind, the transparency of conclusions, and the eradication of the dynamic persuader, whose only aim is personal promotion in the social networks, even if this entails setting up a vertical or dictatorial structure over anything circulating on the Internet. The reliability of contents, and the mechanisms for detecting these phenomena, will be essential in the current era of communicability expansion. In this context of continuous evolution, the way of presenting information on computer screens (which daily decrease in size), and the access to it, will be essential for the new models of interaction with databases, in which all the knowledge and activities of today's civilization are gradually being stored. It is necessary to build a federated system of such activities, open to all users. Finally, technological breakthroughs do not wreak havoc on their own. It is the human being who has the final power to turn them into chaos or cosmos. Ideally, technology should stand at the service of all, with the goal of increasing the quality of life for the base of the human population.
References

1. Cipolla-Ficarra, F.: Persuasion On-Line and Communicability: The Destruction of Credibility in the Virtual Community and Cognitive Models. Nova Publishers, New York (2010)
2. Heller, D.: Aesthetics and Interaction Design – Some Preliminary Thoughts. Interactions 12(5), 48–50 (2005)
3. Shneiderman, B.: Creativity Support Tools. Communications of the ACM 45(10), 116–120 (2002)
4. Holman, D., Vertegaal, R.: Organic User Interfaces: Designing Computers in Any Way, Shape, or Form. Communications of the ACM 51(6), 48–55 (2008)
5. Cipolla-Ficarra, F.: Quality and Communicability for Interactive Hypermedia Systems: Concepts and Practices for Design. IGI Global, Hershey (2010)
6. De Roure, D.: e-Science and the Web. IEEE Computer 43(5), 90–93 (2010)
7. Wilson, J.: Toward Things That Think for the Next Millennium. IEEE Computer 33(1), 72–76 (2000)
8. Boehm, B.: Making a Difference in the Software Century. IEEE Computer 41(3), 33–38 (2008)
9. Steane, A., Rieffel, E.: Beyond Bits: The Future of Quantum Information Processing. IEEE Computer 33(1), 38–45 (2000)
10. Wood, L., Skrebowski, L.: The Future's Here; It's Just Unevenly Distributed. Interactions 11(2), 76–79 (2004)
11. Wilson, C., Guzdial, M.: How to Make Progress in Computing Education. Communications of the ACM 53(5), 35–37 (2010)
12. White, W., et al.: Better Scripts, Better Games. Communications of the ACM 52(3), 42–47 (2009)
13. Piniewski, B., et al.: Empowering Healthcare Patients with Smart Technology. IEEE Computer, 27–34 (2010)
14. Priami, C.: Algorithmic Systems Biology. Communications of the ACM 52(5), 80–88 (2009)
15. Escher Interactive CD-ROM. Abrams, New York (1996)
16. Grammenos, D.: The Ambient Mirror: Creating a Digital Self-image Through Pervasive Technologies. Interactions 16(2), 46–50 (2009)
17. Baskinger, M.: Pencils Before Pixels: A Primer in Hand-Generated Sketching. Interactions 15(2), 28–36 (2008)
18. Lin, M., et al.: Physically Based Virtual Painting. Communications of the ACM 47(8), 40–47 (2004)
19. Ncube, C., Oberndorf, P., Kark, A.: Opportunistic Software Systems Development: Making Systems from What's Available. IEEE Software 25(6), 38–41 (2008)
20. Rashid, A., Weckert, J., Lucas, R.: Software Engineering Ethics in a Digital World. IEEE Computer 42(6), 34–41 (2009)
Autonomatronics™

Alfredo Medina Ayala
Walt Disney Imagineering Research and Development
1401 Flower St., Glendale, CA 91221, USA
[email protected]
Abstract. Entertainment robots in theme parks are well known. In this paper, we briefly discuss some background of automated robots and define terms that help describe methodologies and concepts for autonomous shows within a flexible narrative. We assert that some basic rules and concepts that entertainers apply during a show apply to autonomous interactive shows as well. We describe our multimodal sensory setup and how we applied these basic rules and concepts to a show. We assert that this is uniquely important in the study of autonomous robots for theme park and location-based entertainment.
robots to perform tasks, our robots perform in shows. We have to solve a set of unique challenges to make our robotic characters come across as believable, with distinct personalities, and with whom our guests can form an emotional connection. These automated robots are designed to look alive through expressive motion and audio, but they differ from other types of robots in that they do not respond to external stimuli from our guests. The second type are interactive audio-animatronics, but these depend heavily on direct human manipulation to achieve lifelike interaction and game play. Thus, we have coined a third type, Autonomatronics (see Fig. 1), that combines both types described above minus the direct human manipulation of the audio-animatronics. Expressive motion in this type is a hybrid of preprogrammed and real-time synthesized animation and blending [1][14].
Fig. 1. Prototype Autonomatronics tracking a person’s face
As mentioned above, our prototype Autonomatronics robot consists of multiple sensors that can be broken into four major components for clarity (see Fig. 2).
• Vision
− Can recognize facial expressions and infer emotional state
− Uses a 3D vision system to locate items of interest
− Can analyze facial features and deduce specific traits
• Hearing
− Uses state-of-the-art, grammar-based speech recognition software
• Speech
− Has the ability to hold structured conversations
− Draws from a large library of potential dialogue
− Can analyze and mimic specific speech
• Brain
− Makes real-time decisions based upon external stimuli
− Responds independently to an audience's actions or choices
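The four components above can be read as a classic sense–decide–act loop: the vision and hearing components emit stimuli, and the brain maps each stimulus to a speech or gesture action. The following is a hypothetical Python sketch of that loop; all class, method, and action names are illustrative only, not part of the actual show-control system:

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    """A single observation from one of the sensory components."""
    source: str   # e.g. "vision" or "hearing"
    kind: str     # e.g. "smile", "utterance"
    value: object

class Brain:
    """Toy decision maker: maps incoming stimuli to speech/gesture actions."""
    def decide(self, stimulus: Stimulus) -> str:
        if stimulus.source == "vision" and stimulus.kind == "smile":
            # Vision found a smiling guest: act on that guest.
            return f"zoom_on_guest:{stimulus.value}"
        if stimulus.source == "hearing" and stimulus.kind == "utterance":
            # Hearing recognized speech: hand it to the speech component.
            return f"say:I heard '{stimulus.value}'"
        # No rule matched: keep the idle performance running.
        return "idle"

brain = Brain()
print(brain.decide(Stimulus("vision", "smile", 3)))
print(brain.decide(Stimulus("hearing", "utterance", "blue")))
```

The key property this models is the one the paper stresses: the brain always returns some action (falling back to "idle"), so the show never stalls waiting for input.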
Fig. 2. Autonomatronics four major components
Each of these categories has its own technical challenges. Breazeal and Fong describe the challenges in the design of interactive robots in general [5, 6, 7, 11]. One major challenge is vision tracking of multiple objects with persistence [3]. However, Reid describes a general solution to the data-association problem of tracking multiple objects, with known limitations for real-time use [9]. We still have to consider difficult detection in unconstrained environments [14]. Voice recognition is another formidable challenge: word recognition is hard in low signal-to-noise-ratio environments such as theme parks, and voice recognition for children is a further challenge [12, 13].
2 The Narrative

Bruce and Nourbakhsh describe robotic theaters, where robots play out drama [8]. At the most basic level, the fabula is defined as "a series of logically and chronologically related events that are caused or experienced by actors" [2]. We assert that some basic rules and concepts that entertainers apply during a show apply to autonomous interactive shows as well. One of the golden rules is: never relinquish control to the audience. Secondly, keep the show going at its tempo, but always be prepared for the unexpected. The goal is to get through the show within a given time boundary, so knowing what to sacrifice while still delivering an entertaining show is crucial. This leads us to the concept of a flexible narrative with given time boundaries. Figure 3 illustrates a linear narrative that could define a ride experience in a theme park. Each node (1, 2, ..., n) can represent a fabula, where the actor could be an audio-animatronic within a ride experience or show. The point is that you move from start to end in a linear fashion. The length of the arrow determines the length of the show. Figure 4 illustrates a flexible narrative for robotic autonomous interaction, similar to a non-interactive narrative except that it uses a non-linear curve. The start and end are
the same, but the length of the curve is longer than in non-interactive narratives, thus representing a longer time to go from start point to end point in a show. The area of the bounding box represents the total amount of time for the show. Thus, depending on the interaction, shows vary in time, but always lead to a conclusion.
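The flexible narrative of Figure 4 can be sketched as a scheduling problem: each fabula node has a duration, some nodes are optional, and the controller sacrifices optional beats (never the start or the end) until the show fits its time boundary. Below is a minimal Python sketch; the act names, durations, and the specific drop-last-optional policy are all invented for illustration:

```python
def plan_show(acts, budget):
    """acts: list of (name, min_secs, max_secs, optional).
    Returns the act names to play so the total minimum time fits the
    budget; optional acts are sacrificed first, preserving start and end."""
    chosen = list(acts)

    def total(acts_):
        return sum(a[1] for a in acts_)

    # Drop optional acts (never the first or last) until we fit.
    while total(chosen) > budget:
        droppable = [a for a in chosen[1:-1] if a[3]]
        if not droppable:
            break  # nothing left to sacrifice; show will run long
        chosen.remove(droppable[-1])
    return [a[0] for a in chosen]

acts = [("intro", 30, 40, False), ("count", 40, 60, True),
        ("smiles", 60, 90, True), ("name", 40, 60, True),
        ("color", 50, 80, True), ("photo", 60, 80, False)]
print(plan_show(acts, 360))  # six-minute boundary: everything fits
print(plan_show(acts, 200))  # tighter budget: later optional acts are cut
```

This captures the rule stated above: the show always reaches its conclusion ("photo"), and what varies with the interaction is which middle beats survive.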
Fig. 3. Non-interactive Narrative
Fig. 4. Illustrated Interactive Narrative
3 Autonomatronics Play Test

The Autonomatronics play test was designed to test this flexible narrative concept with a show time boundary of six minutes [14]. We had the opportunity to play test the autonomous show at Disney's D23 conference [15]. The show played every fifteen minutes, 8 hours a day, for 5 days. The actors consisted of two characters: the first was an audio-animatronic bird called "Glayds" and the second was the new Autonomatronics robot called "Otto" (see Fig. 5). Below we list the six acts with a brief description of each:
• Act I: Introduction of characters.
• Act II: Otto counts the number of guests and responds depending on the count.
Fig. 5. Picture of Glayds (left) and Otto (right)
• Act III: Otto looks for a specific smile across the audience.
• Act IV: Otto asks a specific audience member for their name, then pitch-shifts and repeats it for fun.
• Act V: Otto asks a selected audience member by name for their favorite color and responds by repeating the color and changing the color of his tie.
• Act VI: Otto asks the audience to move closer together for a group photo and adds the favorite color to the photo.
Figures 6, 7 and 8 describe three of the six acts using the goal-event-outcome graphical representation. Figure 6 shows Act III, in which Otto looks for smiles. Throughout the entire six minutes, the Autonomatronics vision system scanned audience members for smiles. Otto would select the audience member who exhibited the largest smile, making eye-to-eye contact and zooming in on the selected audience member. The second selection looked for the smallest smile, which allowed Otto the opportunity to encourage that audience member to smile. This proved to be a fun game for the selected audience members, because their image would appear on stage. Figure 7 shows captured zoom shots of a smiling face and a non-smiling face as they appeared on a large screen above Otto. Figure 8 illustrates our Animatronics voice recognition system used in Act V. The goal here was to have Otto ask an audience member what their favorite color was, and then instantaneously repeat back the color mentioned by the guest. We reinforced this by changing the color of his tie to match the requested color. If the selected audience member gave a non-reproducible color, for example "polka dots", then Otto would
Fig. 6. Otto looks for smiles
Fig. 7. Image Captured showing smile and no smile detection
Fig. 8. Our Animatronics voice recognition system
respond by saying, "I can't do polka dots, what's your second favorite color?". The audience perceived that Otto had awareness and was fully responsive to the person he was addressing. Lastly, we describe Otto taking a picture in Figure 9. Otto's goal was to take a group picture. Otto had the ability to ask the audience to get closer if audience members were outside the picture frame. If audience members were outside the frame on the left or right border, Otto would gesture for them to get closer to the center, or have the audience members at the edges scoot left or right depending on guest location. After several tries, Otto would take a picture and reinforce awareness by overlaying a bow tie that matched the color the audience member had selected earlier in the show (see Figure 10).
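The Act V fallback described above is essentially a guarded dialogue step: accept the color if it is reproducible on the tie, otherwise re-ask without ever stopping the show. A hypothetical Python sketch follows; the color set and the exact phrasing are illustrative, not the show's actual dialogue library:

```python
# Colors the tie hardware can supposedly reproduce (illustrative set).
REPRODUCIBLE_COLORS = {"red", "green", "blue", "yellow", "purple", "orange"}

def handle_color_request(heard: str):
    """Returns (reply, tie_color). tie_color is None when Otto must re-ask."""
    color = heard.strip().lower()
    if color in REPRODUCIBLE_COLORS:
        # Repeat the color back and change the tie to match it.
        return (f"{color.capitalize()}! Watch my tie.", color)
    # Non-reproducible answer: keep tempo by asking for a second favorite.
    return (f"I can't do {color}, what's your second favorite color?", None)

print(handle_color_request("Blue"))
print(handle_color_request("polka dots"))
```

Note that the function never blocks on a "correct" answer, mirroring the golden rule of never relinquishing control to the audience.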
Fig. 9. Act VI – Otto takes a picture

Fig. 10. Otto's show-guest picture with graphics overlay
4 Conclusions

The results of these trials showed that a narrative with a dynamic fabula can work to deliver individually unique shows. We asserted that by never relinquishing control and never stopping for the correct answer, we were able to deliver magical autonomous moments to our guests. To realize the benefits of this technology, we must add greater capability to our Autonomatronics characters to make them come across as believable characters with distinct personalities, with whom our guests can form an emotional connection. To that end, future work will add the ability to remember individuals' faces and new dynamic turn-taking techniques, in order to sustain natural conversations.
References

1. Arikan, O., Forsyth, D.A.: Interactive Motion Generation from Examples. ACM Transactions on Graphics 21(3), 483–490 (2002)
2. Bal, M.: Narratology – Introduction to the Theory of Narrative. U. Toronto Press (2002)
3. Beymer, D., Konolige, K.: Real-Time Tracking of Multiple People Using Continuous Detection. In: IEEE Frame Rate Workshop (1999)
4. Boult, T.E., Chen, L.H.: Analysis of Two New Stereo Matching Algorithms. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 177–182 (1988)
5. Breazeal, C.: A Motivational System for Regulating Human-Robot Interaction. In: Proc. AAAI 1998, pp. 31–36. AAAI Press, Menlo Park (1998)
6. Breazeal, C., et al.: Interactive Robot Theatre. Communications of the ACM 46(7), 76–85 (2003)
7. Breazeal, C.: Designing Sociable Robots. MIT Press, Cambridge (2002)
8. Bruce, A., Knight, J., Nourbakhsh, I.: Robot Improv: Using Drama to Create Believable Agents. In: AAAI Workshop Technical Report WS-99-15 of the 8th Mobile Robot Competition and Exhibition, pp. 27–33. AAAI Press, Menlo Park (1999)
9. Reid, D.: An Algorithm for Tracking Multiple Targets. IEEE Transactions on Automatic Control 24(6), 843–854 (1979)
10. Erell, A., Weintraub, M.: Spectral Estimation for Noise Robust Speech Recognition. In: Proc. Speech and Natural Language Workshop, Cape Cod. Morgan Kaufmann, San Francisco (1989)
11. Fong, T., Nourbakhsh, I., Dautenhahn, K.: A Survey of Socially Interactive Robots. Robotics and Autonomous Systems 42(3-4), 143–166 (2003)
12. Mangu, L., Brill, E., Stolcke, A.: Finding Consensus in Speech Recognition: Word Error Minimization and Other Applications of Confusion Networks. Computer Speech and Language 14(4), 373–400 (2000)
13. Potamianos, A., Narayanan, S., Lee, S.: Automatic Speech Recognition for Children. In: Proceedings Eurospeech, Rhodes, Greece (1997)
14. Scheirer, W., Rocha, A., Heflin, B., Boult, T.: Difficult Detection: A Comparison of Two Different Approaches to Eye Detection for Unconstrained Environments. In: The Third IEEE Int. Conf. on Biometrics: Theory, Applications, and Systems – BTAS 2009, Washington, D.C. (2009)
15. http://d23.disney.go.com/articles/091209_NF_FS_Blog3.html
16. http://disneyland.disney.go.com/disneyland/en_US/parks/attractions/detail?name=MrLincolnAttractionPage (Current 2010)
17. http://www.youtube.com/watch?v=FSH9LWnYFZM&feature=related (Current 2010)
Wiki Tool for Adaptive, Accessibility, Usability and Collaborative Hypermedia Courses: MediaWikiCourse C. de Castro1, E. García1, J.M. Ramírez1, F.J. Burón1, B. Sainz2, R. Sánchez, R.M. Robles1, J.C. Torres1, J. Bell1, and F. Alcantud1
1 Department of Computer Science, University of Córdoba, Campus of Rabanales, Madrid-Cádiz Road, km. 396-A, Albert Einstein Building, 14071 Córdoba, Spain {egsalcines,jburon,ma1caloc}@uco.es 2 Department of Communications and Signal Theory and Telematics Engineering, Higher Technical School of Telecommunications Engineering, University of Valladolid, Campus Miguel Delibes, Paseo de Belén nº 15, 47011 Valladolid, Spain [email protected]
Abstract. Recently published social protection and dependence reports reaffirm that the elderly, the disabled, and those in situations of dependency objectively benefit from continuing to live at home with assistance from their direct family. Currently in Spain - amongst the elderly and people in a situation of dependency - 8 out of every 10 people stay at home. The end result is that direct family relations carry out 76% of the daily-routine tasks where aid is needed1. Associations for people with disabilities, however, report not only a lack of adequate aid services, but a lack of direct-family assistance as well. It is necessary, therefore, for an "evolution" or overhaul of the social and health service provision systems. The elderly, people in situations of dependency, and people with disabilities should be provided with enough resources and aids to allow them to decide their own future2. Keywords: authoring tools; adaptive hypermedia; collaborative work; wiki; SCORM.
1 Introduction Collaborative learning systems encourage the development of tools that facilitate the work of the designer and author of online courses, such as INDESAHC and Shaad, and of groupware systems such as KnowCat. WikiCurso is a Web authoring tool that enables authors to share, use, adapt and create learning objects in the content repository of a knowledge network.
1 CERMI, "Discapacidad severa y vida autónoma", Centro Español de Representantes de Personas con Discapacidad (CERMI), 2002. Available at: http://www.cermi.es/documentos/descargar/dsyva.pdf
2 García Alonso, J. V. (coordinator), "El movimiento de vida independiente. Experiencias internacionales", Fundación Luis Vives, Madrid, 2003. Available at: http://www.cermi.es/documentos/descargar/MVIDocumentoFinal.pdf
2 WikiCurso WikiCurso (WC) is based on INDESAHC, an integrated system for the generation of adaptive hypermedia courses: a desktop tool developed by the EATCO research group, with which dozens of courses have been produced for institutions and companies. The drawback of INDESAHC is that, being a desktop tool, it does not allow open and direct collaboration among the different types of authors who make up a multimedia production team (coordinators, writers, designers, media producers, engineers and so on). This limitation is overcome by MWC (MediaWikiCourse), which incorporates the INDESAHC functionality and implements it with wiki tools such as MediaWiki, the software with which the online encyclopedia Wikipedia has been built.
Fig. 1. Wikicursos
The MWC manager incorporates a knowledge-based system that, from the actions of the thematic-network users, analyzes and advises on the status of partnerships, using interaction models so as to enhance the authors' contributions to the corporate repository and increase user participation in the knowledge network. MWC is being developed using the AJAX group of technologies. The interface of the application and the didactic model are the same as in the original version, ensuring that existing users can easily migrate to the Web version without prior training. The interactive scenarios (learning objects) produced with this tool are Web pages that are accessible (compliant with the WAI standards), usable (following the heuristic rules of Jakob Nielsen, AIPO and ACTD), adaptive (in the sense of De Bra), meet the SCORM standards and comply with Creative Commons licensing. INDESAHC is an authoring tool for creating and evaluating interactive multimedia courses whose structure is based on themes, lesson plans, concepts and scenarios.
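For reference, a SCORM-packaged learning object of the kind described here is declared by an imsmanifest.xml file in the package root. The following is a minimal SCORM 1.2 sketch; the identifiers and file names are illustrative assumptions, not taken from the tool:

```xml
<!-- Minimal sketch of a SCORM 1.2 manifest for one learning object
     (identifiers and file names are illustrative) -->
<manifest identifier="wikicurso.lesson1" version="1.2"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <organizations default="ORG">
    <organization identifier="ORG">
      <title>Lesson 1</title>
      <item identifier="ITEM1" identifierref="RES1">
        <title>Scenario 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- each interactive scenario is a Web page packaged as a SCO -->
    <resource identifier="RES1" type="webcontent"
              adlcp:scormtype="sco" href="scenario1.html">
      <file href="scenario1.html"/>
    </resource>
  </resources>
</manifest>
```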
Currently the tool is being used as part of the Red EVA (Virtual Learning Environment network) coordinated by the International University of Andalusia, which integrates the Instituto Superior Politécnico "José Antonio Echeverría" (Cuba) and the Universidad Técnica Particular de Loja (Ecuador); a large number of universities and institutions in Spain and Latin America also participate through the virtual space, with the objective of developing courses collaboratively.
Fig. 2. Wikicursos: Modules selection
INDESAHC is a disconnected desktop tool, so course authors cannot openly collaborate while developing a course; they have to split the topics among themselves and then merge them, which is not efficient for the level of collaboration outlined in the foundations of the Red EVA. In response to the constraints identified, we intend to develop a tool with the same course-development facilities as the current INDESAHC, but incorporating the version-control methods currently implemented by wiki tools such as Wikipedia. The Web INDESAHC is being developed using the AJAX group of technologies, with an interface similar to the original version, ensuring that existing users can easily migrate to the new version without any prior training. As an online application that allows users to modify the scripts and content defined by others, it is necessary to establish a hierarchy of roles in the system, such as a course coordinator responsible for ensuring the consistency of the course and for reverting any modifications deemed incorrect. For this, the system must record every action carried out by users at all times and provide tools to compare with previous versions, so that changes made by other users can be undone. This record also makes it possible to track what each user has contributed and to extract statistics such as the percentage of activity per user or per course. The courses developed with this tool may be imported into the desktop tool and vice versa.
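The wiki-style version control described above can be sketched in a few lines: every edit is recorded, a coordinator can revert changes by re-publishing an earlier revision, and activity statistics fall out of the same log. This is our illustration only; the names and structure are assumptions, not the actual MWC/INDESAHC code.

```python
# Hypothetical sketch of wiki-style version control for a course page.
from dataclasses import dataclass, field


@dataclass
class Revision:
    author: str
    content: str


@dataclass
class CoursePage:
    title: str
    history: list = field(default_factory=list)  # every action is recorded

    def edit(self, author: str, content: str) -> None:
        self.history.append(Revision(author, content))

    @property
    def current(self) -> str:
        return self.history[-1].content if self.history else ""

    def diff_with(self, index: int) -> tuple:
        """Compare an earlier revision with the current text."""
        return (self.history[index].content, self.current)

    def revert_to(self, index: int, coordinator: str) -> None:
        """The coordinator undoes later changes by re-publishing an old revision."""
        self.edit(coordinator, self.history[index].content)

    def activity(self) -> dict:
        """Percentage of recorded actions per author."""
        totals = {}
        for rev in self.history:
            totals[rev.author] = totals.get(rev.author, 0) + 1
        return {a: 100 * n / len(self.history) for a, n in totals.items()}


page = CoursePage("Lesson 1")
page.edit("writer", "Concepts and scenarios, v1")
page.edit("designer", "Concepts and scenarios, v1 + broken layout")
page.revert_to(0, "coordinator")  # coordinator reverses the bad edit
print(page.current)               # "Concepts and scenarios, v1"
print(page.activity())
```

Because the revert is itself recorded as a new revision, nothing is ever lost and the same history feeds both the undo facility and the per-user statistics.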
References 1. Advanced Distributed Learning: Shareable content object reference model (SCORM): The SCORM overview (2005), http://www.adlnet.org 2. Alcantud, F.: Teleformación: Diseño para todos. Universitat de València Estudi General, Servei de Publicacions (1999) 3. Gay, G., et al.: Learning management system ATutor (2005), http://www.atutor.ca/ 4. BlackBoard Inc.: Learning management system BlackBoard (2005), http://www.blackboard.com/ 5. Brusilovsky, P., Schwarz, E., Weber, G.: ELM-ART: An intelligent tutoring system on World Wide Web. In: Lesgold, A.M., Frasson, C., Gauthier, G. (eds.) ITS 1996. LNCS, vol. 1086, pp. 261–269. Springer, Heidelberg (1996) 6. Brusilovsky, P., Eklund, J., Schwarz, E.: Web-based education for all: A tool for developing adaptive courseware. Computer Networks and ISDN Systems 30(1-7), 291–300 (1998) 7. Brusilovsky, P.: Adaptive Hypermedia. In: User Modeling and User-Adapted Interaction, vol. 11, pp. 87–110. Kluwer Academic Publishers, Netherlands (2001) 8. Brusilovsky, P.: Developing adaptive educational systems: From Design Models to Authoring tools. In: Authoring Tools for Advanced Technology Learning Environments, pp. 377–409. Kluwer Academic Publishers, Netherlands (2003) 9. Carro, R.M., Pulido, E., Rodríguez, P.: TANGOW: Task-based Adaptive learNer Guidance on the WWW. Computer Science Report, Eindhoven University of Technology, pp. 49–57 (1999) 10. Collins, A., Brown, J.S., Newman, S.E.: Cognitive apprenticeship: Teaching the crafts of reading, writing and mathematics. In: Knowing, Learning and Instruction: Essays in Honor of Robert Glaser, pp. 453–494. Lawrence Erlbaum Associates, New Jersey (1989)
Making "Google Docs" User Interface More Accessible for Blind People Giulio Mori1, Maria Claudia Buzzi1, Marina Buzzi1, Barbara Leporini2, and Victor M. R. Penichet3
1 CNR-IIT, via Moruzzi 1, 56124 Pisa, Italy 2 CNR-ISTI, via Moruzzi 1, 56124 Pisa, Italy 3 Computer Systems Department, University of Castilla-La Mancha, Albacete, Spain {Giulio.Mori,Claudia.Buzzi,Marina.Buzzi}@iit.cnr.it, [email protected], [email protected]
Abstract. Groupware systems are increasingly embedded in our everyday life, both at the office and at home. Thus groupware systems should offer easy interaction for all, including the differently-abled. In this paper we describe the design and implementation of a modified version of Google Docs (http://docs.google.com) interfaces for collaborative editing of documents. Although consisting of only a few Web pages (login, document list, text editing) this modified version shows how it would be possible to enhance interaction via screen reader and voice synthesizer with this popular groupware system, while maintaining its appealing “look&feel”. Keywords: Accessibility, usability, groupware, collaborative editing, screen reader, blind.
navigability as well as interaction. An interface should satisfy people's needs (utility), be easy to use in a gradual learning process (learnability and memorability) and limit user error (few and easily remedied errors) [9]. Guidelines have been proposed in the literature for designing usable Web content. One authoritative source is the World Wide Web Consortium (W3C), whose Web Accessibility Initiative (WAI) defines accessibility guidelines for Web content, authoring tools and user agent design. The W3C Web Content Accessibility Guidelines (WCAG), published in the framework of the WAI, are general principles for making Web content more accessible and usable for people with disabilities [16]. The WCAG 2.0 are organized into four principles: clear perception of content information (content perceivable), complete interaction with an interface in its functions (interface elements operable), comprehension of meaning (content understandable), and maximizing the interface's compatibility with new assistive technologies and devices (content robustness) [16]. A blind person navigating Google Docs via screen reader encounters various problems [2]. In this paper we describe the design and implementation of a modified version of the Google Docs interfaces for collaborative editing of documents (Fig. 1). Specifically, the login, document list, and text editing pages were implemented incorporating accessibility criteria. This proposal is only one possible solution for providing easier navigation via screen reader, showing that it is possible to enhance the user experience while maintaining an appealing "look&feel".
Fig. 1. Google Docs login (a); selecting a document type in the Main page (b)
The paper is organized into five parts. Section 2 briefly illustrates issues regarding interaction via screen reader and Section 3 presents some related works in the field. Section 4 describes the modified Google Docs UIs optimized for interaction via screen reader; finally, Section 5 introduces a short discussion, and the paper concludes with future work.
2 Interacting via Screen Reader Blind people usually interact with the computer via screen reader, voice synthesizer and keyboard, perceiving UI content aurally and sequentially. This interaction may lead to serious problems in perceiving the content of Web pages. Specifically, the screen reader causes:
• Content serialization and overload. Content is announced sequentially, as it appears in the HTML code. This process is time-consuming and annoying when parts of the interface (such as the menu and navigation bar) are repeated on every page. As a consequence, blind users often quit a screen reading at the beginning, preferring to navigate by Tab key from link to link, or explore content row by row via arrow keys.
• Mixing content and structure. With Web content, the screen reader announces the most important interface elements such as links, images, and window objects as they appear in the code. This is important for helping the blind user figure out how the page is organized, but requires additional cognitive effort to interpret.
• Content out of order. Depending on the HTML code, the text might be announced in the wrong order: for instance, if a table's content is organized in columns, the screen reader announces the content out of order.
This can lead to perception issues, such as lack of context, lack of an interface overview (if the content is not organized in logical sections) and difficulty understanding UI elements or working with form controls (if not appropriately organized for interaction via keyboard). More details are available in [7]. The screen reader is software that sits between the computer OS (operating system) and the browser on one side and the user on the other, making the interaction more complex; advanced commands must be learned by heart to operate this assistive technology proficiently. For this reason, when designing for blind users it is essential to consider the overall interaction, involving the perceptual, motor and cognitive systems of the Human Processor Model [3]. The cognitive aspect of an interaction is extremely important, since learning techniques relevant for sighted people may not be effective for the visually impaired. Thus, alternative ways to deliver content should be provided. Furthermore, a blind person may develop a different mental model of both the navigation structure and the visual UI, so it is crucial to provide a simple overview of the system as well as of the content.
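The "content serialization" and "content out of order" problems above can be reproduced with a few lines of code: a linearizer that collects text in HTML source order, as a screen reader does, interleaves the two columns of a layout table instead of reading each column top to bottom. The markup and the collector below are our illustrative example, not code from the paper.

```python
# Illustration: screen-reader-style linearization follows HTML source order.
from html.parser import HTMLParser


class Linearizer(HTMLParser):
    """Collect text chunks in document (source) order, as a screen reader would."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:  # skip the inter-tag whitespace
            self.chunks.append(text)


# A layout table whose two columns are meant to be read top to bottom.
layout = """
<table>
  <tr><td>News: item 1</td><td>Links: home</td></tr>
  <tr><td>News: item 2</td><td>Links: contact</td></tr>
</table>
"""
reader = Linearizer()
reader.feed(layout)
print(reader.chunks)
# Source order interleaves the two columns:
# ['News: item 1', 'Links: home', 'News: item 2', 'Links: contact']
```

The sighted reader sees two coherent columns; the aural rendering alternates between them row by row, which is exactly the out-of-order effect described above.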
3 Related Works Web 2.0 and Rich Internet Applications transformed the Web from a simple collection of hypertext and images created by individual programmers into multimedia and dynamic content increasingly built collaboratively by users. This evolution has increased the complexity of user interfaces and Web layouts. Since groupware environments vary greatly regarding functions, interfaces, cooperation schemes and more, it is difficult to generalize very specific findings, whereas it is easier to compare homogeneous classes of groupware applications. Regarding usability of on-line
content available in the World Wide Web, Takagi et al. suggest spending more time on the practical aspects of usability rather than focusing on the syntactic checking of Web pages, since some aspects are difficult to evaluate automatically, such as ease of understanding page structure and interface navigability [12]. Cooperative environments are particularly interesting and useful in the educational field, where knowledge is assembled cooperatively. Khan et al. [5] performed a usability study in an educational environment on ThinkFree, a collaborative writing system, with four novice and four experienced users. Specifically, authors compared ThinkFree to Google Docs by means of a user test with Think Aloud protocol, a post-test questionnaire to collect user feedback and interviews to validate gathered results. Although ThinkFree proved effective for the proposed tasks, efficiency and availability of resources were more limited than in Google Docs. Schoeberlein et al. [11], revising recent literature on groupware accessibility and existing solutions, have highlighted the need for future research. Authors observed that most articles address the needs of a specific category of differently-abled persons. In particular, visually-impaired people with little or no visual perception experience objective difficulties when interacting with a complex layout via screen reader, and are frequently studied. The use of groupware systems by a blind user often requires considerable computer skills. For simplifying access to a popular groupware system (i.e. Lotus Notes), Takagi et al. developed a self-talking client to allow blind people to access groupware main functions efficiently and easily, masking the user from the complexity of the original visual interface [13]. 
Recently, Kobayashi developed a client application (Voice Browser for Groupware systems, VoBG) to enable visually impaired persons inexperienced with computer technology to interact with a groupware system that is very popular in Japan (Garoon 2). The VoBG browser intercepts Web pages generated by the groupware server, parses their HTML code and simplifies their content and structure on the fly into a format more understandable for target users [6]. Baker et al. adapted Nielsen's heuristic evaluation methodology to groupware; by means of a usability inspection conducted by expert and novice evaluators, they showed that this methodology can also be applied effectively by novice inspectors, at low cost [1]. Ramakrishnan et al. [10] investigate usability assessment in "information management systems", groupware environments characterized by mostly asynchronous usage, integrating and adapting Nielsen's usability heuristics. Awareness, one of the main properties of a groupware system, is also one of the accessibility principles: a user should be able to perceive by means of the screen reader when portions of the UI reload and to know the associated event (e.g. a new person joining the chat, a new message arriving on the board, a new user working on the document, and so on). To fill this gap, the WAI group is working on the Accessible Rich Internet Applications specification (WAI-ARIA) to make dynamic Web content and applications (developed with Ajax, (X)HTML and JavaScript) more accessible to people with disabilities [15]. Using WAI-ARIA, Web designers can define roles to add semantic information to interface objects, mark regions of the page so users can move rapidly around the page via keyboard, define live regions, etc. [15]. Thiessen gave an example of using WAI-ARIA to design and implement a chat, highlighting some limitations of live regions [14]. However, this problem is common
with emerging standards, since browsers and assistive technologies need to conform to the new specifications, and this takes some time before reaching stable implementations.
4 Designing Modified Google Docs UIs In a previous paper [2] we analyzed interaction with Google Docs via screen reader in order to understand the problems encountered by blind people when writing a document collaboratively. Specifically, we examined the Google Docs log-in, Main (after log-in access) and Document Editing (to create/modify an item of 'document' type) pages, with a usability and accessibility inspection. The interaction was carried out with the screen reader JAWS for Windows Ver. 10.0 (http://www.freedomscientific.com), and both the MS Internet Explorer (IE) version 8.0 and Mozilla Firefox version 3.0.5 browsers. Google Chrome was excluded since it does not work with JAWS. Blind people usually perceive page content aurally and sequentially when accessing the Web by screen reader and voice synthesizer, and they mainly navigate via keyboard. This type of interaction with Web pages and user interfaces (UIs) leads to several content-perception problems. We performed an analysis of Google Docs [2] in order to test its accessibility and usability when interacting by means of a screen reader and voice synthesizer. 4.1 Original Google Docs UIs: Accessibility Problems Verifying the degree of accessibility of the Google Docs user interfaces via screen reader was a preliminary step in our study [2]. Results showed that several main functions of Google Docs are practically inaccessible via keyboard, making interaction very frustrating for blind users. Thus, no further usability analysis was considered at this level.
Specifically, the main accessibility problems detected by our inspection via screen reader can be summarized as follows:
a) Some interactive elements cannot be detected by the screen reader nor accessed via keyboard (since they are not standard (X)HTML elements and their labels are announced by the screen reader as simple text), making some tasks impossible to complete (for example, links in the main page built with alternative techniques, without providing focus via keyboard).
b) Users can have difficulty orienting themselves on the interface, with no possibility of quickly accessing its main functions (such as creating or accessing a document) or the document list.
c) There are various compatibility issues between JAWS and Google Docs using the Internet Explorer and Firefox browsers; this generates some differences in detecting UI elements as well as in the interaction modality.
d) Lack of the summary attribute for tables used for layout purposes does not quickly provide useful information on their content. A short, functional and descriptive summary can facilitate navigation through special commands (such as the letter "t" to quickly reach the next table), or make the table content more understandable when reading sequentially via arrow keys, without having to read all cells to get an overview.
e) The editor is practically inaccessible. The main menu (file, edit, view, insert, format, etc.) and the style formatting toolbar (font type or size, etc.) cannot be reached via keyboard, while the bold, italic and underline functions can only be used through shortcuts.
f) Some dialogue windows are not accessible at all (no accessible message on informative windows).
More details are available in [2]. Based on the accessibility issues observed in the analysis, we fixed the detected problems by implementing a basic version of modified Google Docs UIs. Specifically, we worked only on the log-in and Main pages and on the Document Editor (to create/modify an item of 'document' type). 4.2 The Modified Google Docs UI: A Proposed Solution The modified UI maintains the same "look & feel" as the original Google Docs (so that sighted users can interact with the familiar UI), while supporting many facilities to improve navigation for blind users. We focused only on the user interfaces (aiming at improving user interaction), preserving the navigation links between pages; not all functions of the original Google Docs interfaces are yet implemented. The modified pages are based on the original Google Docs pages, but they have been cleaned of unneeded code (such as the JavaScript functions responsible for the behavior of interface elements). We chose this solution -- instead of implementing the interfaces from scratch -- to maintain the same "look & feel". Figure 2 shows the Main ("all items") page of the modified UI, split into five numbered areas.
Fig. 2. Modified Google Docs UI: the "all items" page, split in five areas
New standard (X)HTML interactive widgets (buttons, links, pull-down menus, etc.) have been used in the cleaned interfaces, with a more accessible result: interactive elements are completely reachable and their labels are announced by the screen reader (4.1 section, point a). The layout has been modified to facilitate user navigation, giving a blind user the possibility of jumping quickly from one point to another (4.1 section, point b). To this aim, the Main page has been divided into five areas (Fig. 2).
Each area in Fig. 2 (a standard (X)HTML div) has been associated with a standard WAI-ARIA suite landmark role, thus a blind user is no longer forced to interact sequentially with the interface, but can move quickly to different areas (by pressing a special shortcut that provides a list of areas navigable via arrow keys). However, the standard landmarks of WAI ARIA suite, being intended for the main common sections of any Web page, are very general. We chose predefined banner, contentinfo, search, navigation and main landmarks (associated with a numbered area of Fig. 2) but their names do not provide a particularly significant orientation for the blind user. The WAI-ARIA suite also makes it possible to define personalized regions that may better fulfill user needs. Unfortunately, at the moment JAWS v.10 or 11 for Windows and both the MS Internet Explorer (IE) version 8.0 and Mozilla Firefox version 3.6 browsers do not correctly support customized regions (only the name “region” is announced). For this reason we have decided to also implement a complementary solution using hidden labels [8]. Hidden labels are a sort of bookmark in the interface; they are not visualized but are considered fixed interaction points by the screen reader. Each area of Fig. 2 has been associated with a hidden personalized label (as well as a standard landmark). This solution allows a blind user to move from one area to another, making interaction easier and more understandable. The user can either activate landmarks by pressing a special key combination on the keyboard (showing a navigable list via arrow keys), or can press the “h” key to jump to the next hidden label (by adding the shift key, it is possible to reach the previous one). On the main interface, the list of available documents has been arranged in a table, like the original Google Docs (each row containing a document), since the screen reader allows one to jump easily from one row to another. 
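The landmark-plus-hidden-label scheme just described might look like the following markup sketch. This is our illustration only: the element contents, the label texts and the CSS class name are assumptions, not the actual modified-UI code; what is faithful to the text is the pairing of a standard WAI-ARIA landmark role with an off-screen heading that JAWS reaches with the "h" key.

```html
<!-- Each area: a standard WAI-ARIA landmark plus a hidden label as fallback -->
<div role="banner">
  <h2 class="hidden">Header area</h2> <!-- "hidden" moves the heading off-screen via CSS -->
  ...
</div>
<div role="navigation">
  <h2 class="hidden">Document list menu</h2>
  ...
</div>
<div role="main">
  <h2 class="hidden">Document list</h2>
  <table summary="List of your documents, one per row">
    ...
  </table>
</div>
```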
However, the 'summary' attribute has been added to the <table> tag, to clarify its meaning (4.1 section, point d). The editor page (showing the document) of the proposed interface is composed of a toolbar and a text area (Fig. 3). Compared to the original editor (inaccessible, 4.1 section, point e), the new page is now accessible: the toolbar buttons (save, bold, italic, underlined, left, center, right, justified) and the pull-down menus (Paragraph, Font Family, Font Size) are reachable via keyboard, and their associated functions can be activated immediately. The blind user can write in the text area and change font properties using either toolbar widgets or key shortcuts; (s)he can also modify text alignment and receive feedback when selecting text (word by word). When deciding how to provide an accessible editor, we analyzed several possible solutions; the most interesting were: 1) a set of code examples provided by the Illinois Center (ARIA examples including HTML, JavaScript and CSS files) [17], 2) the Dijit Rich Text editor [19], and 3) the TinyMCE editor, an open-source JavaScript-based HTML WYSIWYG editor [18]. After testing the three, we chose the TinyMCE editor because it works well with the screen reader JAWS (with both Mozilla Firefox and MS Internet Explorer) and is ready to use. We are currently working on building a customizable editor, making the interaction via screen reader with the most popular browsers more satisfying and easier to use.
Fig. 3. Modified Google Docs UI: the editor page
4.3 Discussion Different browsers may render content differently and behave differently with the same screen reader. Tests performed with JAWS on the proposed UIs have shown a good degree of accessibility using both the Internet Explorer and Mozilla Firefox browsers, removing the compatibility issues observed in our preliminary inspection (4.1 section, point c). Without any preliminary knowledge of the UI structure (which a sighted person can perceive at a glance), blind people spend a great deal of time exploring the page in order to find the desired content or elements. The modified UIs provide two ways to create a "logical structure of content" for the interface: ARIA landmarks and hidden labels. With this structure, screen reader commands allow the user to move rapidly to the desired part of the page. The WAI-ARIA solution is certainly preferable to the one based on hidden labels, but until screen readers support customizable landmarks -- rather than only the generic ones now identified simply as "search", "banner", and so on -- it is not really useful for obtaining an appropriate overview of the contents available in the page. Concerning perception, all elements of the modified UIs are focusable via Tab key and operable via keyboard. However, the editor integrated in our solution, although fully accessible, has some usability limits: 1) the blind user is unable to focus immediately on the editing area, skipping the toolbar; 2) the toolbar widgets should be grouped by function similarity, so that the blind user could quickly jump (via a special key) from one toolbar group to another, eliminating the effort of scanning all the widgets sequentially, which can be frustrating when there are a great number of them and the user alternates between the editing area and the long toolbar. Such grouping of the toolbar was realized by the Illinois Center [17].
At this stage we designed and implemented a possible solution to make the Google Documents user interface easier for a blind user. The next step will be to provide a more complete prototype, which includes the proposed UIs, and at the same time is able to simulate all functions needed to carry out user testing to gather data on the proposed solution. For this aim, a server that emulates a reduced set of Google Docs functions is required. In terms of usability and more accessible interaction, i.e. to allow greater control of the UI, it is necessary to also catch dynamic events, such
as when a dialogue pop-up window appears on the screen (e.g. user notification) or the collaborative environment changes (e.g. a new user joins or leaves the group). Currently, this feature is not accessible via screen reader in the original Google Docs UIs. By using server side Ajax and WAI-ARIA live regions it would be possible to fix this problem, making informative messages accessible (4.1 section, point f).
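A WAI-ARIA live region of the kind mentioned here could be sketched as follows. This is illustrative markup, not the actual Google Docs or prototype code; the element id and the function name are invented for the example.

```html
<!-- A "polite" live region: the screen reader announces updates to this
     element without stealing keyboard focus from the editing area -->
<div id="notifications" aria-live="polite" aria-atomic="true"></div>
<script>
  // Called, for example, when the server reports a collaboration event
  // such as a new user joining the document.
  function notify(message) {
    document.getElementById("notifications").textContent = message;
  }
</script>
```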
5 Conclusions and Future Work We have described the implementation of a modified version of Google Docs user interface for collaborative editing of documents, to allow a more satisfying experience for users relying on a screen reader and voice synthesizer. To assure that the user interface is usable and accessible via screen reader, we used standard solutions to conform to WAI WCAG 2.0 and ARIA principles, criteria and techniques. At the moment, only the Google Docs login, main page and document editing have been implemented, integrating an open source accessible editor in the proposed solution. Our implementation of the modified UIs showed that with relatively little effort it is possible to also make a complex interaction environment more accessible and usable, and this does not impact on the graphic and appealing “look&feel” of Google Docs. Lack of accessibility of Google Docs may have a dramatic negative impact on user efficiency when carrying out basic tasks such as selecting and updating a document. In future work, we plan to complete the development phase, making this initial prototype operative in order to perform a user test with a sample of blind users with the original and modified Google Docs user interfaces, to evaluate subjective as well as objective data and to assess the proposed solution.
References

1. Baker, K., Greenberg, S.: Empirical Development of a Heuristic Evaluation Methodology for Shared Workspace Groupware (2002)
2. Buzzi, M.C., Buzzi, M., Leporini, B., Mori, G., Penichet, V.M.R.: Accessing Google Docs via Screen Reader. In: Miesenberger, K., Klaus, J., Zagler, W., Karshmer, A. (eds.) ICCHP 2010. LNCS, vol. 6179, pp. 92–99. Springer, Heidelberg (2010)
3. Card, S.K., Moran, T.P., Newell, A.: The Psychology of Human-Computer Interaction, pp. 29–97. Lawrence Erlbaum Associates, London (1983)
4. ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability (1998)
5. Khan, M.A., Israr, N., Hassan, S.: Usability Evaluation of Web Office Applications in Collaborative Writing. In: 2010 International Conference on Intelligent Systems, Modelling and Simulation, Liverpool, United Kingdom, January 27-29, pp. 147–151 (2010)
6. Kobayashi, M.: Voice Browser for Groupware Systems: VoBG - A Simple Groupware Client for Visually Impaired Students. In: Miesenberger, K., Klaus, J., Zagler, W.L., Karshmer, A.I. (eds.) ICCHP 2008. LNCS, vol. 5105, pp. 777–780. Springer, Heidelberg (2008)
Making "Google Docs" User Interface More Accessible for Blind People
7. Leporini, B., Andronico, P., Buzzi, M., Castillo, C.: Evaluating a modified Google user interface via screen reader. Universal Access in the Information Society 7(1-2). Springer, Heidelberg (2008)
8. Leporini, B., Paternò, F.: Applying web usability criteria for vision-impaired users: does it really improve task performance? International Journal of Human-Computer Interaction (IJHCI) 24, 17–47 (2008)
9. Nielsen, J.: Usability Engineering. Morgan Kaufmann, San Diego (1993)
10. Ramakrishnan, R., Goldschmidt, B., Leibl, R., Holschuh, J.: Heuristics for usability assessment for groupware tools in an information management setting (2004)
11. Schoeberlein, J.G., Yuanqiong, W.: Groupware Accessibility for Persons with Disabilities. In: Stephanidis, C. (ed.) UAHCI 2009. LNCS, vol. 5616, pp. 404–413. Springer, Heidelberg (2009)
12. Takagi, H., Asakawa, C., Fukuda, K., Maeda, J.: Accessibility Designer: Visualizing usability for the blind. In: 6th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 177–184 (2004)
13. Takagi, H., Asakawa, C., Itoh, T.: Non-Visual Groupware Client: Notes Reader. In: Proceedings of the Center on Disability Technology and Persons with Disabilities Conference. California State University, Berkeley (2000)
14. Thiessen, P., Chen, C.: Ajax Live Regions: Chat as a Case Example. In: Proceedings of the 2007 International World Wide Web Conference (W4A) (2007)
15. W3C: WAI-ARIA Best Practices. W3C Working Draft, February 4 (2008), http://www.w3.org/TR/wai-aria-practices/
16. W3C: Web Content Accessibility Guidelines 2.0, December 5 (2008), http://www.w3.org/TR/WCAG20/
17. Illinois Center for Information Technology and Web Accessibility: ARIA Examples, http://test.cita.illinois.edu/aria/index.php
18. TinyMCE - JavaScript WYSIWYG Editor, http://tinymce.moxiecode.com/
19. Dijit Rich Text Editor, http://www.dojotoolkit.org/reference-guide/dijit/Editor.html
An Interactive Information System for e-Culture Susana I. Herrera, María M. Clusella, María G. Mitre, María A. Santillán, and Claudia M. García International Institute Galileo Galilei, Universidad Nacional de Santiago del Estero and Universidad Católica de Santiago del Estero [email protected], [email protected] {asantillan.ccs,cgarcia.cmu, gabriela.mitre.presidente}@fundaringenio.org.ar
Abstract. This study proposes the design of an interactive web information system for promoting cultures. It is the result of research on Santiago del Estero's culture and on human-computer interaction topics, which began about five years ago. Culture is a hypercomplex phenomenon, which is why it is studied from two different and complementary approaches: Cultural Studies and Systemics. With regard to methodology, retroprospectivation is used to design the e-culture system. This strategy involves a systemic modelling process evolving from an existed model (ancient Santiago's culture) to an existing model (current Santiago's culture), then to an operating model (an interactive model for Santiago's culture) and finally to a meta model. Retroprospectivation allows changing the order of the described evolution. The operating model is useful for the analysis and design of the e-culture system. It is based both on an interaction model for knowledge management (guiding the analysis) and on emotional system design (mainly guiding user interface design). The full validation of the operating model is still in process. Once it has been sufficiently validated, by adapting it to other cultures, the development of the meta model is expected. The latter would be a cross-culture web application model that contributes to promoting cultural identities for academic, sporting, trade and tourism purposes.

Keywords: e-culture, human-computer interaction, emotional design, retroprospectivation.
1 Introduction

In the last thirty years, scientific discoveries and science-based technical innovations have produced a remarkable technological revolution in which human beings from all cultures around the world are participating.
Transformations in such a short term have affected notions of time and space, impacting on people's lifestyles, the practical routines of professions, social styles, religion, politics and communication from all to all. For that dialogue to be effective, it is necessary to understand the peculiarities of each culture. In the past, homogenization was the cultural goal; now diversity is sought, and losing individual profiles for the sake of ultra-modernity is not a wise alternative. Today's world requires that each culture develop historical consciousness and the ability to think and act according to its cultural values. However, the concept of identity should not be set in opposition to globalization: each culture must be present in this new globalized world while showing its firmly established identity. In this study, culture is presented as a hypercomplex phenomenon. It involves an epistemic process [7] of building that identity. This process starts from the native origins of a community and continues with the endogenous changes produced by new cultures that are received through physical, intellectual or virtual immigration. As a hypercomplex process, we propose to study culture from two different approaches: the Cultural Studies and Systemic perspectives.
Fig. 1. The methodology: systemic modeling using retroprospectivation
Currently, the new Information and Communication Technologies (nICTs) provide many tools to facilitate cultural knowledge and promotion. In this sense, this paper aims to design an Interactive Information System for e-culture. Web applications can be accessed on a massive scale by people of different cultures, using different languages. On the other hand, cultural communication and promotion will be effective only if user interaction is optimized. Considering
culture as a highly subjective, complex and emotional object, it is important that the web application design be made using the emotional systems design paradigm. Regarding the methodology, systemic retroprospective modeling [25] is used (see Figure 1). This process begins with the conception of the Meta Model (1°) to be achieved: in this case, the e-culture model corresponding to an Interactive Web Information System for the promotion of Latin American cultures. Taking Santiago's culture as a case study, the existed model is built (3°); it consists of the history of that culture. After that, the existing model is developed (2°), which includes twelve characteristics of Santiago's culture. From these, an operating model is being designed and adapted (4°), which can be implemented computationally. It is based both on an interaction model for knowledge management (guiding the analysis) and on emotional system design (mainly guiding user interface design). The operating model guides the analysis and design of e-culture information systems. From this model, the desired meta model will be obtained. Since it is based on a formal abstraction, it is possible to apply it to different content, such as other local cultures. This approach is analogous to the innovative and sustainable knowledge management proposal. Using the latter, the Trayegnosis process can be systemically disaggregated into: Prognosis (anticipation of future knowledge), Diagnosis (current or alternative knowledge to be confirmed), Retrognosis (knowledge sought in the past to sustain present knowledge) and Proyegnosis (existed and existing knowledge that allows building projects). The next sections are structured as follows. In Section 2 the study of the culture phenomenon is presented, from the approaches given by Cultural Studies and Systemics, laying the foundations of a systemic model to represent this phenomenon.
Section 3 presents software engineering issues as a basis for the proposed operating model; it deals with Human-Computer Interaction models (mainly the PPKMS interaction model) and Emotional Systems Design. Section 4 provides the operating model for the Santiago's e-culture system, a model that guides the steps of analysis and system design. Finally, Section 5 presents the conclusions and future work.
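The retroprospective sequence described above (the meta model is conceived first, then the existed, existing and operating models are built) can be sketched as a simple data structure. This is purely a reading aid under our own naming assumptions, not the IIGG's formal notation:

```python
# Illustrative sketch (names are assumptions): the four models of the
# retroprospective modelling process and the order in which they are built.

MODELS = {
    1: "Meta Model (e-culture model for Latin American cultures)",
    2: "Existing Model (twelve characteristics of Santiago's culture)",
    3: "Existed Model (history of Santiago's culture)",
    4: "Operating Model (interactive IS; can be implemented computationally)",
}

# Retroprospectivation lets the construction order differ from the numbering:
# 1 (conception) -> 3 -> 2 -> 4.
BUILD_ORDER = [1, 3, 2, 4]

def build_sequence():
    """Return the model names in the order in which they are actually built."""
    return [MODELS[n] for n in BUILD_ORDER]

for step, model in enumerate(build_sequence(), start=1):
    print(f"step {step}: {model}")
```

The point of the sketch is simply that numbering (1°-4°) and construction order are decoupled, which is what "changing the order of the described evolution" means in the abstract.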
2 Culture as a Complex and Evolutionary System

The new lifestyle of the inhabitants of the planet has brought about a strong cultural revolution that seriously challenges national identities. This new time has also been called the era of emptiness [15]: the differences vanish underneath a cover of similarities; reality is deprived of meaning in a space without territoriality where everything works under the disposable sign of consumption. That is why it is nowadays mandatory to study cultures in order to help them assume the present and envisage the future. The expansion of the benefits of Science and Technology, the endogenous changes, and competitiveness on an international scale must bear the seal of the identities, together with their own voices and (self-)images. Within this frame, and from the Cultural Studies and Systemic approaches, culture is considered as a complex phenomenon, and from a concrete example of a cultural system, a model for the representation of a culture is proposed.
2.1 Culture

Culture emerged in the course of human evolution through processes of natural selection. Nowadays, culture is highly developed and its evolution is multi-lineal [8]:

• Culture is historic, because it is social heritage.
• It is structural, since it is a combination of ideas, symbols, and interrelated modeling behaviors.
• It is symbolic, because it is based on meanings that are arbitrarily assigned and shared by a society.
• It is a message that can be decoded in its contents and rules.
• Culture is not a homogeneous block, since there are multiple mentalities, which result in diversity in unity, in internal dynamics, and in conflict and change.
Culture is a dynamic synthesis, on the ground of the individual and collective consciousness, of the historical, material, and spiritual reality of a society. The cultural manifestations are the different ways in which this synthesis is expressed at each stage of its evolution. Through the cultural manifestations, values are discerned, options are made, individuals express themselves, become aware of themselves as individuals and as members of a group, see themselves as projects in progress, question their own deeds, search for new meanings and create works that transcend [28]. Concisely, culture is an abstract totality of such a magnitude that it cannot be understood without clues for comprehension and without a systemic perspective. The field of Cultural Studies is a theoretical-conceptual tool to approach culture, which is considered as a complexity [Ref De Carvalho Jorge]. One of the main phenomena studied is the way in which the cultures of different social groups, within a certain context, behave when facing a dominant culture. In the field of Cultural Studies3, researchers seek a sustained rationality, generative of knowledge, and the inclusion of interdisciplinary perspectives in the analysis of contemporary cultural phenomena. These perspectives are provided by disciplines such as Anthropology, Sociology, Philosophy, Literature, History, Arts, Political Economy, Communication, and Cinematography. In Cultural Studies, perception is oriented by placing oneself at a distance before the focus returns to oneself; the reflections are confronted through dialogues with different approaches, and the researcher moves forward with a broad spectrum. The difference is assimilated and sharpened into categories of perception and understanding, as an organized system that permits getting to know oneself and enriching the confrontation with others. Individual identity is built this way. Then, these perspectives blend into a single systemic synthesis that reveals the codes and semantic unities for comprehension.
3 It began in Birmingham (United Kingdom) in the 1960s, when Hoggart began to study the traditions of the literate (1957) and understood that there are other ways of being literate, using deconstruction. It then expanded to the Anglo-Saxon countries, the U.S.A., Asia, and Australia.
2.2 Culture as a Complex Phenomenon

Culture, as an object of study, is a complex phenomenon. It is studied from a systemic paradigm, which can be enriched by the transdisciplinary features that allow the concurrence of other complementary perspectives. On the basis of the conceptualizations elaborated by prestigious researchers on systemic studies [3, 9] and of the work done by the IIGG [6, 5, 23, 7, 27], culture can be defined as a complex phenomenon that consists of the adaptation of a group of values and norms by a large group of people, which tends toward the creation of a dynamic, stable entity that persists by itself for a considerable period of time in history. Its main systemic (structural and functional) characteristics [26] are the following:

• Elements: beliefs, language, aesthetics, folklore.
• Limit: cultures generally have definite spatial limits.
• Evolution: values and norms are transmitted from generation to generation, over a time of psychological prints or marks, which guarantee their survival (historic consciousness); there is a process of evolution and decadence, until the culture disappears, but it leaves traces that contribute to the formation of a new culture. A member of a cultural system views his system according to the traces he has received, from which it is very difficult to escape [2].
From the Systemics point of view it is important to stress the relationship between culture and society: a culture provides the meanings that make it possible for people to have relationships. Society is a web of interactions that unite people through shared meanings and senses. This socio-cultural process takes place in a socio-space, in a historic time. Individuals undergo a complex process of acquisition of cultural meaning in the form of knowledge and skills for social life (practical, abstract and artistic ones), and through these, individuals integrate into society. Transdisciplinary Systemics is an appropriate cognitive and decisional approach to culture because, when only analysis and detachment of the elements are used, the inter-relationships are broken and it becomes impossible to re-integrate the whole and to observe and study the dynamic behaviors, such as the evolutions and the equilibriums. Therefore it is appropriate to utilize the conceptual instruments and tools provided by Systemics to study culture as a complex transdisciplinary phenomenon, proceeding retroprospectively [25, 7]. Cultural Studies are a complement to Systemics because they allow the interdisciplinary, meta-disciplinary, reflexive and pluralistic views which are necessary to approach current issues related to the research of cultures. They enrich the range of the questions under study, covering a great number of variables and connections.

2.3 A Systemic Model for Studying and Promoting Culture

Given a cultural system, it can be studied using systemic modeling, which is transdisciplinary and retroprospective. The group of researchers at the IIGG has developed a model of culture from the study of their own culture, called santiagueñidad4.

4 It refers to the set of distinctive features of the culture of Santiago del Estero, a province in northeastern Argentina. In this paper, it is called Santiago's culture.
Starting from the empirical approximation provided by a look at this concrete culture, a systemic model is proposed for the study and promotion of a culture. This model is based on components that can be operationalized and processed in information systems. The components which provide the meaning and make it possible for a culture to come into being were selected from the cognitive, affective, and productive dimensions [12]. They are:

• Components from the cognitive dimension: the modalities in the use of the language, beliefs, knowledge, ways of thinking and regarding, the religious attitude, and the life styles are modalities of knowledge of the world which function as matrixes, overlapping other axiological hierarchies. They articulate the symbols and their meanings, since they constitute global mental representations that integrate a particular view of the world.
• Components from the affective dimension: one of the main ones is the sensitivity in everyday interaction, which reveals the social factors conditioning it and clears up the ideologies that praise it.
• Components from the productive dimension: the particularities of dancing, music, and singing, the crafts and cooking practices, the historical epics and the literary productions in their varied styles are modalities of acting upon the world.
3 Interactive Information Systems

The previous sections presented a systemic model of culture that can be implemented in an information system (IS). To manipulate cultural objects in an effective way, this system must take into account a set of technological features, the most important of which is interaction. The success of an interactive IS depends on the interaction model involved as well as on specific design issues to be taken into account in this type of system. These topics are studied by two areas of Informatics: Human-Computer Interaction (HCI) and Interaction Design (ID). The most relevant findings of IIGG research on these topics are presented in this section.

3.1 Interaction Models from a New Perspective

The study of the relationship between human factors and computer systems began in 1982, the year in which the Association for Computing Machinery (ACM) created its Special Interest Group on Computer-Human Interaction (SIGCHI) [1]. As a result of IIGG research on this subject, many human-computer interaction models were obtained. Figure 2 shows the evolution of these models since 2002. The Informatics Interaction Systemic Models (2002-2007) have evolved according to context changes, from low to high complexity, trying to achieve an optimal human-computer relationship. The evolution of the complex biosystem-technosystem relationship began with the Symbiotic Interactive System for Assisted Management, or Modelo # 0 [4]. Then another model was built based on adaptive symbiosis; it was called Modelo # 1 [17]. Afterwards, a model with more demanding symbionomic conditions was developed, known as Modelo # 2 [18]. Finally, an interactive model
Fig. 2. Evolution of IIGG Human-Computer Interaction Models
involving systemic conditions, in addition to symbiosis and symbionomic features, was obtained; it was called Modelo # 3 [19]. The Systemic Models for Data-Information-Knowledge-Intelligence-Wisdom Management (2008-2010) have incorporated time into their representations. From this new vision, the systemic modeling procedure is based on the retroprospective methodology previously described in the Introduction (see Figure 1). The main model is known as Modelo # 4 [20]. Models # 0 to # 4 have been validated by applying them to real-life case studies (called replications). In the next section, one of these replications, which is appropriate for e-culture modeling, is synthetically described. The IIGG is currently working on Integrated Models for Conceptual and Design Processes. Among them, Modelo # 5 addresses the interaction of postgraduate training, based on e-learning and life-long-learning education, and Modelo # 6 optimizes interaction using mobile and ubiquitous computers, allowing mentor assistance.

3.2 An Interaction Model for Knowledge Management

The Personal and Professional Knowledge Management System (PPKMS) model [10] is a replica of Modelo # 4. It has systemic features to support the current complexity of the relationship between technosystem (Ts), biosystem (Bs) and sociosystem (Ss). This relationship is represented as Ts::Bs::Ss. It is an evolutionary, agile and expansive model.
The PPKMS model uses the Reference Scenario of Significance Growth [13]; it takes into account the scenario determined by three basic significance layouts: the bio-evolutional, personal and cultural layouts. This scenario is limited by axes corresponding to biophysical growth, learning capacity and knowledge system. In this space, the meaning vector takes different meaning degrees: Data, Information, Knowledge, Intelligence, Wisdom (DIKIW values). The model is based on the retroprospective method applied to knowledge. Additionally, the variable time [16] was incorporated into the model; it allows continuously changing interactions between Bs, Ts and Ss, so that an evolutionary system results: Ts ↔ Bs ↔ Ss. This model has four structural components, inherited from Modelo # 4 [10]:

1. The central core is the Bs: people or organized groups of people who have infonomic, symbiotic, symbionomic and systemic capabilities. The Bs takes its own DIKIW values, which can be described by dual, dialectic and 4th-order cybernetic components [21].
2. The container is the Ts. The Ts systematically grows and varies as technologies do. The Bs is connected to a universal network (the Internet). The variable time dominates this component, in relation to physical and virtual distance. Considering these variables (time and distance), the Ts can be: Proximal (equipment and techniques within immediate reach, such as TV, radio, microwave), Medial (equipment and techniques used to increase the user's extent, such as a phone), or Distal (multimedia communication networks, computers connected to social networks, home automation).
3. The habitat is the Ss. The interaction between Bs and Ts is developed inside it. It allows and facilitates the eco-evolutionary movement (general and global movement), being modified by the Bs::Ts interaction. It has the following characteristics:

• Bio (life): features defined by ethnicity, heritage, fixed or chosen spatial areas, lineages, dynasties, strain.
• Psychosocial: social manners selected by emotion, belonging or commitment dimensions; customs associated with abilities, skills and competences, e.g. respect for technology, arts and sciences.
• Cultural: requirements defined by religious, racial, geographic and professional groups.
4. The meaning and direction are components that determine the quantum speed of perception, intuition, prediction, insight and advance design of desirable futures (Fd), possible futures (Fp) and preferred futures (Fr) that serve broad Bs satisfiability. The Ts::Bs::Ss relationship is constantly moving in that direction and meaning. The functional component is expressed through three features that characterize the structural components:
• Ontogenical: features that generate the core values of the constituent entity.
• Typological: features that describe the logic of the typing process, using creative activities that define relevant-pertinent classes and stereotypes.
• Social: requirements to meet demand (mainly specified by the Bs) and supply; this offer may not respond to real needs (technological consumerism, frequently generated by the Ts).
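As a reading aid, the four structural components and the DIKIW meaning degrees described above can be sketched as plain data types. This is a minimal sketch under our own naming assumptions (class and field names are not part of the published PPKMS model):

```python
from dataclasses import dataclass, field
from enum import Enum

class Meaning(Enum):
    """Degrees taken by the meaning vector (the DIKIW values)."""
    DATA = 1
    INFORMATION = 2
    KNOWLEDGE = 3
    INTELLIGENCE = 4
    WISDOM = 5

@dataclass
class Biosystem:            # central core (Bs): people or organized groups
    capabilities: list      # infonomic, symbiotic, symbionomic, systemic
    meaning: Meaning = Meaning.DATA

@dataclass
class Technosystem:         # container (Ts), classified by time and distance
    reach: str              # "proximal" | "medial" | "distal"

@dataclass
class Sociosystem:          # habitat (Ss): bio, psychosocial, cultural traits
    traits: dict

@dataclass
class PPKMS:
    """The Ts::Bs::Ss relationship plus sense and direction (Fd/Fp/Fr)."""
    bs: Biosystem
    ts: Technosystem
    ss: Sociosystem
    futures: dict = field(default_factory=lambda: {"Fd": [], "Fp": [], "Fr": []})

model = PPKMS(
    bs=Biosystem(["infonomic", "symbiotic", "symbionomic", "systemic"]),
    ts=Technosystem(reach="distal"),   # e.g. computers on social networks
    ss=Sociosystem(traits={"cultural": "Santiago's culture"}),
)
```

The sketch only fixes the vocabulary of the model; an actual e-culture IS would attach behavior (interaction, knowledge management) to these components.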
From the PPKMS model it is possible to design systems, algorithms, processes and other devices used by Informatics and Infonomics [24]. This requires the operationalization of the layouts, considering each of the mentioned components and variables [10]. It marks the meaning and direction necessary to achieve innovation, an essential ingredient for effective and efficient knowledge management.

3.3 Interactive Design from an Emotional Perspective

ID defines the structure and behavior of interactive ISs. It tries to create meaningful relationships between people and the products and services they use, from computers to mobile devices to appliances and beyond. ID practices are evolving with the world [11]. There are underlying concepts that drive the practice of ID; the major ones are [14]: goal-driven design, interface as magic, usability, affordances, and learnability. Since this research is about modeling cultural objects, it is necessary to take into account Donald Norman's design ideas: the emotional design of ISs [22]. According to Norman, the successful design of interactive systems requires involving three levels of human emotion: visceral, behavioral and reflective. The visceral level is pre-conscious; at this level the first sensation of the application takes place, created from its appearance. The behavioral level impacts on the use of the application; the experience itself has several aspects: functionality, performance and usability. Only at the reflective level do consciousness and the higher levels of feelings and emotions take place; only here is the full impact of thought and emotion experienced. This level varies depending on culture, experience, education and individual differences. There is another difference between the three levels of emotions: time. The visceral and behavioral levels relate to the present; they involve feelings and experiences generated while the user is running the application.
The reflective level, however, extends far beyond the present: reflection makes people remember the past and contemplate the future. Regarding application design, the three levels of emotion generate three different kinds of design process, which can be summarized as follows:

• Visceral Design -> Appearance. Physical characteristics are most important at this level (form, appearance, aesthetics). To be effective it requires the collaboration of visual and graphic artists.
• Behavioral Design -> Appropriate and efficient usage. Good behavioral design is based on functionality, understandability and usability. It is an essential part of the IS design process, from user requirements analysis to rapid prototyping tested with prospective users.
• Reflective Design -> Self-image, personal satisfaction, memories. Reflective-level operations determine the user's global impression of the application. At this level people think of the product, reflecting on its appearance and on the experience of using it. The total impact of an application comes from the reflective level, using retrospective memory. The essence of reflective design is that it is all in the spectator's mind.

There are many ISs that involve complex phenomena evoking memory and identity (e.g. culture). To achieve effective interaction in those ISs, a reflective design is needed. This means that the following aspects should be taken into account in the design process: a) good graphic design; b) music and dance; c) films; d) photographs; e) objects that evoke memories, i.e. things that have personal associations; f) self-feelings. In short, attractive applications work better: this attraction produces positive emotions and generates more creative mental processes.
4 e-Culture Interactive IS: Analysis and Design

This section presents the core of the proposal: how to build an interactive IS for e-culture? According to the interaction models proposed by the IIGG, especially the PPKMS model, the Bs::Ts::Ss relationship is continuously adapting to context changes and trends. Therefore, it is imperative to consider that interactive systems are based on large social networks that use the Internet. This justifies that cultural ISs should be implemented as web applications: e-culture, a new phenomenon. An interactive e-culture IS is effective if and only if it is capable of transmitting cultural events allowing users to create new cultural knowledge, by means of digital resources (nICTs) and the Internet. The development process of such a system requires special attention to two stages of Software Engineering: the analysis and design processes. The analysis process should take into account the PPKMS interaction model, while the design process must take into account the reflective emotion level in the design of the user interfaces. In order to optimize the analysis activities, the IIGG has developed a systemic model of Santiago's culture, using the retroprospectivation process. Then, an interactive model for Santiago's culture was obtained by adding interaction characteristics of the PPKMS model. This model is presented in Section 4.1. With regard to design optimization, Section 4.2 presents a set of emotional design issues to be considered at this stage.

4.1 An e-Culture Interactive Model for Santiagueñidad

Using the retroprospectivation process, a new model was developed: the Santiago's Culture Existing Model. It was obtained by considering the cultural model proposed in Section 2 and the Santiago's Culture Existed Model. The latter consists of a descriptive model of the history of Santiago's culture, which can be synthesized in the following lines. Santiago's culture is the product of an uninterrupted cultural construction over the centuries. It started approximately 1700 years ago, with the experiences of the first
American ethnic groups in relation with their habitat5. As time went by, they received the cultural heritage from other ethnic groups: the Spaniards, in the 16th century, as they settled and colonized the territory, the Africans, in the 17th and 18th centuries, which arrived as slaves and freed slaves, and the Syrian-Lebanese and Italian, between the 19th and 20th centuries as immigrants. It is a strongly symbolic culture, with a special artistic wealth (which succeeded in reaching out to other places in the world) that possesses material, procedural, and spiritual diacritical6 elements which confer it an identity of differentiating power. To sum up, the Santiago’s culture generates unique perceptions, representations, and ways of thinking and acting, based on an ethic ecologist evaluation and highly religious which underlies in the American Hispanic bio-cultural mythic background. The Santiago’s culture Existing Model is a systemic model that contains twelve elements that identify the inhabitants of the province of Santiago del Estero. These features are described using 12 “noun verbs”: Creeres (beliefs): a universe of tales from the mythical tradition of profound and sacred significance in the social context. Hablares (Orality, modalities in the use of the mother tongue): richness of the language by the contribution of regional terminology; syntactic, semantic, and pragmatic modalities; the way of speaking. Cantares (Songs, music and singing): musical genres, lyrics of extraordinary expressive intensity, instruments, composers, modalities of performance, esthetics; representatives. Bailares (Dancing): collective style in the way of dancing, choreographic productions, performers, composers, motivation for dancing. Contares (Story-telling): literary production in varied styles; the tradition of story-telling and the ways in which the stories are told –tales, legends, fables, riddles, sayings, proverbs. 
Pensares (Ways of thinking, source of thought): our life style and our life philosophy shown through literary productions (poems, rhymes, novels, essays). Saberes (Practical knowledge, alternative medicine): the art of healing –traditional practices- modalities, medicines, healing by incantation. Haceres (Crafts, cooking and handicrafts): traditional cuisine, typical dishes, events, homemade desserts; current traditional handicrafts; pottery, weaving, wick basket making, instrument making, leather works; millenary rupestrian designs and engravings. Ceramics art passed down by millenary aboriginal cultures, current meanings of the designs. Sentires (Sentiments, popular religiosity): present patron saint festivals, rituals, pilgrims, personal sacrifice pious offerers, musicians, the festival, the day of the Death, Candle lighting, infant waking, the wedding reception, and communitarian esthetics in social manifestations. Vivires (Living, way of life) home, mud thatched shack, its practicality, its ecological legacy, hospitality, courtyard and mate, games; man’s environment. Luchares (Fighting, heroic epics): guerrillas, look outs, heroic fights. 5 6
5 It is characterized by a plain forest situated in the Chaco region of Argentina.
6 These are elements chosen by groups to represent themselves, both to themselves and to others.
An Interactive Information System for e-Culture
41
Mirares (watching, perception and its influence on the construction of reality): the paradigms, self-perception and the perception of others.
From the Existing Model, an Operating Model was built: an interactive IS that has the same dimensions, features, and components as the PPKMS model. The features of Santiago's culture corresponding to the functional components are:
• ontogenical: beliefs, orality, ways of thinking, practical knowledge, sentiments, and watching, because they represent how a person who belongs to Santiago's culture shows their essential values;
• typological: songs, dancing, living, and fighting, because they describe features of their logical typing;
• through story-telling and crafts, the person (who belongs to Santiago's culture) responds to what the context offers and demands.
The structural components of this model are defined as:
• Core (Bs): the person or group of people who study Santiago's culture;
• Container (Ts): all the technological tools (ICTs and nICTs) that the Bs uses to study the culture;
• Habitat (Ss): the past and present society that characterizes Santiago's culture, and the future society, which is constantly changing and becoming more complex over time;
• Sense and Direction: the meaning of the constantly changing society, that is, where it is heading.
This Operating Model allows the IS analysis process to be optimized in terms of interaction conditions. Its scope is Santiago's culture. Validating it will make it possible to obtain the Meta Model: an interactive e-culture IS for Latin American cultures.
4.2 From the Conceptualization to the System Design
The Operating Model presented in the previous section can be enriched so as to extend its usefulness from the analysis stage to the design stage. To this end, emotional design principles are introduced into the model to optimize the design of user interfaces.
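For illustration only, the structural components of the Operating Model and the grouping of cultural features by functional component can be sketched as a small data structure. This is not the authors' implementation; all identifiers are our assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch (not the paper's implementation): cultural features
# grouped by the functional component they correspond to in the text.
ONTOGENICAL = {"beliefs", "orality", "ways_of_thinking",
               "practical_knowledge", "sentiments", "watching"}
TYPOLOGICAL = {"songs", "dancing", "living", "fighting"}
CONTEXT_RESPONSE = {"story_telling", "crafts"}

@dataclass
class OperatingModel:
    core: str                                     # Bs: who studies the culture
    container: list = field(default_factory=list)  # Ts: ICT/nICT tools
    habitat: str = "past, present and future society of Santiago del Estero"  # Ss
    sense_and_direction: str = "where the changing society is heading"

    def component_of(self, feature: str) -> str:
        """Classify a cultural feature by functional component."""
        if feature in ONTOGENICAL:
            return "ontogenical"
        if feature in TYPOLOGICAL:
            return "typological"
        if feature in CONTEXT_RESPONSE:
            return "context offer/demand"
        raise ValueError(f"unknown feature: {feature}")

model = OperatingModel(core="research group", container=["web application"])
print(model.component_of("dancing"))  # typological
```

The sketch only fixes the vocabulary of the model; a real system would attach content, media, and interaction rules to each feature.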
To obtain an IS that successfully promotes Santiago's culture, it is necessary to consider the following design features:
• High interaction, which implies a design process that takes into account the three levels of emotion: the visceral, behavioural, and reflective levels.
- Visceral design is related to the appearance of the user interfaces and mainly reflects the components dancing and living.
- Behavioural design concerns the logic of interaction and the functionality of the system. Does the system offer interaction appropriate to communicating and promoting Santiago's culture?
- Reflective design is the most important: it is the level best suited to communicating cultural components, given their complexity. Through reflective design, interaction can be optimized primarily for the following cultural elements: beliefs, story-telling, songs, ways of thinking, practical knowledge, crafts, sentiments, fighting, and watching. To achieve this reflective aspect, music (and other sounds that evoke memories), videos, photographs, and drawings are necessary; they must be included in the user interfaces.
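The mapping above, from emotional-design levels to cultural elements and interface media, can be made explicit in code. This is a hypothetical sketch of the rule described in the text; the media suggested for visceral elements are our assumption.

```python
# Hypothetical sketch: Norman's three levels of emotional design mapped to
# the cultural elements they primarily communicate, as described in the text.
DESIGN_LEVELS = {
    "visceral":    {"dancing", "living"},
    "behavioural": set(),   # governs interaction logic, not specific elements
    "reflective":  {"beliefs", "story-telling", "songs", "ways of thinking",
                    "practical knowledge", "crafts", "sentiments",
                    "fighting", "watching"},
}

REFLECTIVE_MEDIA = ["music", "evocative sounds", "video", "photography", "drawings"]

def media_for(element: str) -> list:
    """Suggest interface media for a cultural element (illustrative rule)."""
    if element in DESIGN_LEVELS["reflective"]:
        return REFLECTIVE_MEDIA
    if element in DESIGN_LEVELS["visceral"]:
        return ["visual appearance", "animation"]  # assumed media for this level
    return []

print(media_for("songs"))  # ['music', 'evocative sounds', 'video', 'photography', 'drawings']
```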
• Taking into account that Internet use shows a strong upward trend, e-culture systems should be implemented as web applications.
• Considering the trends in mobile and ubiquitous computing, e-culture systems should include mobile, context-aware versions, which are especially useful for users who are immersed in the culture without knowing it.
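The mobile, context-aware bullet above can be sketched as a simple selection rule: when the user is physically inside the region but unfamiliar with the culture, introductory content is foregrounded. The bounding box and content labels below are invented for illustration.

```python
from dataclasses import dataclass

# Approximate lat/lon bounds for Santiago del Estero (illustrative only).
SANTIAGO_BBOX = (-29.5, -25.5, -65.5, -61.5)

@dataclass
class UserContext:
    lat: float
    lon: float
    knows_culture: bool

def in_region(ctx: UserContext) -> bool:
    """Crude bounding-box test standing in for real location services."""
    lat_min, lat_max, lon_min, lon_max = SANTIAGO_BBOX
    return lat_min <= ctx.lat <= lat_max and lon_min <= ctx.lon <= lon_max

def select_content(ctx: UserContext) -> str:
    if in_region(ctx) and not ctx.knows_culture:
        return "introductory"   # visitor inside the culture's habitat
    if ctx.knows_culture:
        return "in-depth"
    return "promotional"

visitor = UserContext(lat=-27.8, lon=-64.3, knows_culture=False)
print(select_content(visitor))  # introductory
```

A production system would replace the bounding box with proper geolocation and user profiling; the point is only that context (location plus familiarity) drives content selection.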
5 Conclusions and Future Work This article has proposed the optimization of the analysis and design of interactive e-culture ISs. To do this, using retroprospectivation, an Existing Model of Santiago's culture was developed (from the Existed Model), and from it an Operating Model of Santiago's e-culture was obtained. This Operating Model is based on an IIGG interaction model as well as on emotional design, and it guides the analysis and design of the Santiago e-culture system. The Operating Model is currently being validated, using web application prototypes, as part of a case study: promoting a northwest Argentinian culture through e-culture. When this validation is complete, development of the Meta Model will begin, involving culturally universal dimensions. The objective of this model is to optimize the analysis and design processes of e-culture systems so that they can be adapted for the study and promotion of any Latin American culture. Only our cultural identity will give us the strength to build a place that ensures equity, progress and quality of life for our communities. This will allow us to generate our own messages for inclusion in the worldwide dialogue, which is why building technological tools to facilitate that dialogue is so important. E-culture is the new IS dimension that will promote the contemporary and future dialogue between different cultures, and in this way all communities will reassess their identity in a globalized world.
Research and Development: Business into Transfer Information and Communication Technology
Francisco V. Cipolla Ficarra1,2, Emma Nicol3, and Valeria M. Ficarra2
1 HCI Lab. – F&F Multimedia Communic@tions Corp. – ALAIPO: Asociación Latina de Interacción Persona-Ordenador, Via Pascoli, S. 15 – CP 7, 24121 Bergamo, Italy
2 AINCI: Asociación Internacional de la Comunicación Interactiva
3 Department of Computer and Information Sciences, University of Strathclyde, UK
[email protected], [email protected], [email protected]
Abstract. A set of techniques for industrial qualitative evaluation is presented, to verify the distance between the real internal functioning of large textile firms and the false corporate image they project to the outside. The current work focuses on three main software-related aspects: management applications, textile CAD, and human factors. It is also shown how the theories, models and paradigms of software engineering and systems engineering generated on the American continent may serve to detect the anti-models within Southern Europe's over-centennial textile industrial sector. Finally, it will be seen how the ISO quality standards for services are used simply for commercial or publicity purposes. Keywords: Quality, Software Management, Textile, CAD/CAM, Social Factors, Hardware, IBM AS/400, Telecommunications, R&D, ICT, ISO 9001.
This set of standards originated in 1987, on the basis of the British standard BS 5750. The production of goods and services, however, increased dramatically with the Internet from 1994 onwards, and the original standards were aimed at production rather than services. Consequently, it was agreed to make the standards less bureaucratic and suitable for all kinds of organizations, that is, for the services and products of both the public and the private sector. The most recent revision is that of 2008 (ISO 9001) [1]. As with many research lines within software engineering, these standards live in the theoretical, university, or normative context. However, there are cases in which the standards stray so far from theory that they are totally denatured by the human factors inside private or public organizations. Many theoretical and/or practical investigations, on topics such as software architecture, semantic parsing, digital memories, compilation for object-oriented languages, advertising keywords on web pages, etc., carried out by authors such as [4-8], are diametrically opposed to what happens where these notions are applied. The problem centres on the national certification bodies and the human factors. That is, to verify that the requirements of the quality standards are complied with, certification bodies audit their implementation and maintenance and issue a certificate of conformity. These bodies are supervised by international organisms that regulate their activities, but in some geographical areas of the Alps, Apennines, or Pyrenees this surveillance is nil, even where the organizations are assisted by consultancy companies before implementing the standards. When choosing a consultancy firm, it is necessary to define the needs of the project; it is with regard to these needs that the firm has to choose among the different offers on the market.
The problem lies in the fact that those who select and manage these quality standards will try, with all the computing media at their disposal, to continue the famous policy of changing everything so that everything stays the same, and to sustain the boycott in the textile industries, even under external audits. This behaviour, described later on, has caused great financial losses in the implementation of management systems, for instance. The other problem that feeds this behaviour is that the requirements of the standards are generic, since they must be applicable to any company regardless of factors such as activity, total number of clients, production size, leadership style, etc. The requirements therefore establish the "what" but not the "how". In settings where corruption runs from the administration of the data network through the management system and into the textile CAD/CAM, it is in this lack of definition of the "how" that the greatest divergence arises between the administrative-productive reality and the computing mirage in the over-centennial textile sector of some mountainous areas of Southern Europe. Moreover, an implementation project entails that the firm develop specific criteria and apply them, through the Quality Management System, to its own activities. By developing criteria consistent with its activity, the firm builds its Quality Management System. Therefore, in order to be certified against the ISO 9001 guidelines, it is the organization that has to choose the scope to be certified and the processes or areas it wants to involve in the project, choose a registrar, submit to the audit and, after successfully completing it, undergo a yearly inspection to keep the certification.
In summary, an implementation project for the quality guidelines usually entails the following operations, into which we have inserted some advice for eradicating, completely or partially, the mirage effect in computing systems [1] [2] [3]:
• Understanding and knowing the requirements of the guidelines and how they affect the activity of the firm or industry. In the case of the computing and systems department, one must include all the internal and external staff who work daily on the firm's systems, requesting written reports on their operations and on each step they take in routine maintenance operations, for instance.
• Analyzing the real situation of the organization, that is, where you start from and where you intend to go. In the case of large industries, one of the main shareholders must be involved.
• Building a Quality Management System faithfully from each specific action.
• Documenting the processes required by the guidelines, as well as those required by the activity of the firm, and having those in charge sign the documents.
• Detecting the training needs of the firm. During implementation it will be necessary to train the staff in quality policy and in those aspects of quality management that help them understand the contribution of their activity to the product or service provided by the firm. Internal audit tools must also be generated for the people who will work in that position. In cases of industrial boycott, the boycott is usually carried over into the quality system, because the people trained are usually the very ones who keep the mirage, rather than the reality, of the over-centennial industries growing.
• Carrying out internal audits. Ideally, the internal heads of each area should be excluded.
• Using the Quality System, recording its use, and improving it over a period of months.
• Working with a mixed team of anonymous external staff and employees from outside the departments under review.
• Requesting the certification audit. This is an activity that should be supervised by the shareholders of the big firms. In these settings the audit should serve to check cyclically the quality of the goods and services, but here we face yet another Achilles' heel in some industrial realities that avail themselves of computing and software engineering to generate mirages. If the auditor finds areas of non-compliance, the organization has a deadline to adopt corrective measures without losing the validity of the certification or the continuity of the certification process (depending on whether certification had already been obtained or not). Where there are mirages, however, they persist for decades. A way to depict graphically these mirages of the textile industrial reality against the theoretical reality of software engineering is as follows:
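The operations listed above form an ordered workflow. As a sketch, they can be tracked as a checklist; the step names are our paraphrase of the text, not an official ISO 9001 artifact.

```python
# Illustrative sketch (our paraphrase, not an official ISO 9001 artifact):
# the certification workflow as an ordered checklist with progress tracking.
STEPS = [
    "understand guideline requirements",
    "analyze the real situation of the organization",
    "build the Quality Management System",
    "document and sign processes",
    "train staff and prepare internal audit tools",
    "carry out internal audits",
    "use, record, and improve the Quality System",
    "request the certification audit",
    "pass yearly inspections",
]

def next_step(completed: set) -> str:
    """Return the first pending step, or None once certification is maintained."""
    for step in STEPS:
        if step not in completed:
            return step
    return None

done = {"understand guideline requirements",
        "analyze the real situation of the organization"}
print(next_step(done))  # build the Quality Management System
```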
Fig. 1. The contextual and geographical factors of an over-centennial industry (the mountains) constitute the reverse anti-model of the ISO quality guidelines/certification (the little stone)
This reality is not confined to the textile industrial sector but extends to all the small, medium-sized, and big firms located in those geographical areas. The rubber stamp of the ISO guidelines contributes to and boosts the exports of those industries, inside and outside the EU. Obtaining that certification while denaturing qualitative reality is common in some oases of the anti-models of software and industrial engineering. For instance, it is possible to find factories devoted to assembling components for machinery for dispensing fizzy and non-fizzy beverages (pre/post-mix manufacturing), exporting to many countries with ISO certification (Technischer Überwachungs-Verein: TÜV Management Service – UNI EN ISO 9001), whose central headquarters run an AS/400 for under 30 users, that is, with exaggerated yearly maintenance costs for such a small number of users. We find the mirage in an average PC with less than 1 GB of memory managing the internal and external communications (intranet, extranet, and Internet) with Lotus Domino; yet the head of the computing systems (a professional engineer "made in UniBg") depicts it in the organizational chart of the network as a server. This entails the continuous breakdown of the false server, maintenance costs for outsourcing services (generally, in those areas, anything related to hardware is hired out as an external service), and an endless series of labour conflicts with employees who can no longer work with the computing systems at their disposal. Moreover, the management of the factory has zero computing knowledge but punitive mechanisms in its favour, such as warning letters for contract breaches or labour failures directed at employees who are in fact victims of a structure that is the reverse of organizational models in the civilized world. All of this means that, with the passing of time, the human factors that fuel the industrial boycott only worsen. That is, in that firm or entrepreneurial group there is no computer network converging on the server when messages are read, but rather a diverging network. Graphically, this reality can be depicted as follows:
Fig. 2. A white box PC depicted as the e-mail server in a factory that assembles components for drink-dispensing machinery (pre/post-mix manufacturing)
This is an example of how the software and hardware organizational charts inside a small or medium-sized industry can have zero value and yet be accepted for a TÜV Management Service – UNI EN ISO 9001 certification. It is a first state of the art for Southern Europe, in which we have avoided naming institutions, training programmes, etc., in order to preserve anonymity and respect the privacy of information. Finally, we have decided to leave aside those industries that constitute anti-models in the research and development sector within engineering and the social sciences.
2 The Map of the Net: Realities and Mirages in the Textile Sector "Made in Italy" In large industrial structures it is normal for the head of the new communication and information technologies to have at his disposal a map of all the servers, switches, firewalls, printers, IP numbers, etc. Obviously, in the anti-model of international software engineering in the regions we are describing, no such maps exist (Figure 3); there are only mental maps among those in charge of the outsourcing of the network, for instance. The interested reader can find a more thorough description of that reality in the following bibliography [9] [10] [11] [12]. Consequently, textile firms in those mountainous regions, with a long tradition in cotton, flax fabrics, etc., lack network maps. Obviously, obtaining the map from the outsourcer is not an easy task at all, and it entails tough labour conflicts inside and outside the organizations to which a systems analyst belongs, for instance. The axes of the conflict, within the human factors, are: the entrepreneurial or industrial management, uneducated in the new technologies although they do possess college degrees; the hypothetical heads of the computing and systems office, as a rule young and hired without college studies to prevent them from moving to direct competitors of the textile industry for which they work; and a social context that favours technical and ethical distortions owing to the mindset of those areas or of given times in history. These three axes, joined to the economic factor, destroy the quality principles of software engineering in textile industries that bill over €150 million in the Italian Alps (Lombardy region). For instance, only the external or outsourcing technicians possess all the information on the interrelations among the different hardware devices:
Fig. 3. First sketch of a network (2004): an over-centennial textile industry in the process of changing its management system in the Lombardy region, Italy
The economic factor, linked to the human factor in those industries, plays a role similar to the thin silver lining that turns the transparency of glass into a mirror. This distorting metamorphosis, as it relates to software engineering, computing systems and human factors, may be summed up as follows:
• Families that make up a vertical structure in the industrial management and operate by "carrot and stick" to hide their ignorance of the new technologies.
• Young, inexperienced and irresponsible heads of the internal computing and systems department in the firms or industries. Their level of studies does not go beyond technical high school. They spend their time teaching dysfunctions to their internal and external (outsourcing) subordinates, or making queries to obtain printed listings, yet they participate in the government of the industrial group. In Annex 1 it can be seen how their responsibility is nil, through examples of bills for services rendered without specifications, or hundreds of thousands of euros spent maintaining Access databases for the threads lab, production planning, etc. Some examples of industrial sabotage by those internal heads can be found in the following bibliography [10] [12].
• Resorting to on-line pastimes (casino), videogames of the Windows operating system (Solitaire or Minesweeper), and self-editing of personal pictures (Photoshop or Paint Shop Pro) to stretch out the daily working hours while lists are printed or information is backed up.
• Outsourcing of software and hardware that freely controls all the firm's real information, from the content of the general manager's e-mail down to the firewalls, including the listing of the entrepreneurial group's daily profits and losses.
• Incorporating new hardware without training the internal staff in its correct use; for instance, going from a chaos of servers of all brands and with diverse operating systems to one big server (e.g., HP EVA series), oversized for the potential number of users: a server that could serve a city hall or a university is used by fewer than 300 users.
• Joining the systems of the industry's various headquarters, at home and abroad, on a single server by framing the IP addresses, that is, as if they were different servers.
Billing services to different headquarters when in reality the work is done on a single server, from outside the industry, through e-work or outsourcing.
• Leaving gates open in the firewalls for illegal entry and the capture of confidential information, or of information stored in the marketing databases, for later sale to the industrial competition.
• Resorting to industrial social funds for European R&D projects in management software, when there are plenty of commercial products suited to that kind of industry. These funds are later diverted to the purchase of PCs, replacement of servers, network wiring, etc.: an accounting reality of a European textile industry with nine-figure yearly billing.
• Use and abuse of marketing to create a corporate image based on the quality of a product that is not made 100% within the borders stated on the label, resorting in external publicity to images of the capital of the autonomous region where they are located, to keep up the mirage of high quality in services, subsidized directly and indirectly by the EU through computing and systems funding.
All of these factors influence, directly and indirectly, the ability to know the real situation of the network, the operating systems of the computers and servers, the state of the computers and their peripherals, etc. Getting a first approximation of the network entails, in some firms, paying for hours and hours of non-existent services or unnecessary products from the providers of outsourcing services, since they hold the real knowledge of
the network. The labour mindset in some self-appointed technologically outstanding regions of Southern Europe means that asking for the design of the network, using applications such as Microsoft Visio or SmartDraw, signals an intention to change the suppliers of outsourcing or hardware services, for instance. Consequently, the data obtained verbally, or even in written reports, are not 100% reliable (Figure 3).
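The network map this section says is missing can be sketched as a structured inventory of devices, the kind of artifact a tool like Visio would render graphically. All device names and addresses below are invented for illustration.

```python
from dataclasses import dataclass
from ipaddress import ip_address

# Illustrative sketch: the network inventory the text says these firms lack.
# Every name and address here is invented.
@dataclass
class Device:
    name: str
    kind: str        # "server", "switch", "firewall", "printer", ...
    ip: str
    managed_by: str  # "internal" or "outsourcing"

inventory = [
    Device("mail-01", "server",   "192.168.1.10", "outsourcing"),
    Device("fw-edge", "firewall", "192.168.1.1",  "outsourcing"),
    Device("prn-lab", "printer",  "192.168.1.77", "internal"),
]

# Validate addresses, then show who really holds the knowledge of the network.
for d in inventory:
    ip_address(d.ip)  # raises ValueError on a malformed address
outsourced = [d.name for d in inventory if d.managed_by == "outsourcing"]
print(outsourced)  # ['mail-01', 'fw-edge']
```

Even a minimal record like this, kept internally, would remove the supplier's monopoly on network knowledge that the section describes.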
3 Managerial Leopardism "Leopardism", or "lampedusian management", that is, "to change something so that nothing changes", is a paradox laid down in the novel "The Leopard" by the Italian writer Giuseppe Tomasi di Lampedusa [13]. The original quote expresses the following apparent contradiction: "If we want everything to stay the way it is, it is necessary that everything changes" [13]. Now, in software engineering, through its methods and techniques, changes to management systems, encompassing the different aspects of an organization's products and/or services, must be carried out according to a series of guidelines, with the purpose of increasing the quality of the services and/or products and cutting costs. In some geographical realities those costs stem from the purchase of high-end servers in the 1990s (for instance, IBM AS/400) for the management of small or medium-sized organizations with between 30 and 120 users, plus the operating and maintenance costs of those servers. Evidently, having high-end software and hardware implies counting on technicians, analysts and engineers capable of solving the multilayered problems these can present, from programming to network management. To this end, some universities, theoretically secular or public but under religious control, begin to develop branches of study to fill those vacant professional posts. This is how hybrid engineering degrees are born, such as management engineering aimed at the textile sector, where subjects such as management in textile & fashion production, management, economics, Internet, information and technology, etc., reveal a kind of Lombardian educational collage that fails to define a professional profile from the engineering point of view. These are rather professionals in whom economic activities are mixed up with computing, organizational and administrative ones.
Theoretically, in three years, passing these courses equips the future professionals with a basic range of competences in industrial engineering and a thorough knowledge of management, business organization, supply chain management, business economics and investment appraisal. The key to leopardism lies in turning these engineers into investment agents inside the textile firms or industries of the areas we are analyzing. They sell the mirage of cutting costs in all the productive sectors, marketing, etc. through the new information and communication technologies. To this end they begin to study the possibility of obtaining European subsidies for projects aimed at replacing all the industrial software and hardware, including the possibility of moving production to third countries where the cost per worker-hour is lower. This last equation becomes the constant mirage presented to the industrial management and the shareholders of the industrial group for as long as the project of changing the management system lasts. Consequently, this Lombardian college degree does not train experts in computing or in any of the other areas previously mentioned, but rather manipulators or destroyers of human resources. Everything is summed up in the equation of cost per hour of an employee. From this point, leopardism begins, contradicting the main principles of the ISO guidelines and of software engineering, to mention just two examples. Without deep academic training and/or working experience in the textile and computing sectors, the management engineer carries out a study of the state of the art of the market in management systems, reaching the conclusion that no commercial system adapts to the intrinsic reality of the organization. Consequently, it is necessary to develop all the management and production systems, etc., from zero. This conclusion is partially shared by the heads of the management system to be replaced: evidently, for them it can mean losing their privileged position on the industrial staff, so they set off a series of stratagems from within the software to boycott the whole system. First, the developers of the new software do not have available the documentation of the programs, flowcharts of the processes, databases, related files, etc. Second, 100% of the information network is controlled from the outsourcing. Third, the constant difficulties of the change lead to leopardism, or lampedusian management, between the historical bosses and boycotters of the systems and those responsible for setting the new management system in motion. Fourth, costs multiply exponentially because each stage of the project, from commissioning the new management system down to replacing all the entrepreneurial or industrial software and hardware, is exaggeratedly extended. Fifth, a continuous loop of bizarre excuses is generated in the new project in order to stretch the implementation timing (Annex 1 gives several examples).
Sixth, the external programmers adapt to the rhythm and reality existing inside the organization, thus emulating the behaviour of the hypothetical head of the computer systems. Seventh, the historical and new heads of the management system merge in their way of working. A classical English postcard offers an interesting and typical summary, as can be seen in Figure 4.
Fig. 4. English humour sums up perfectly the reality of the calculation and computing centres of some allegedly developed regions of the EU from the point of view of ICT
In short, with the new management system the heads who generate the European anti-model of software engineering keep their jobs inside the organizational staff of the Lombardian textile industries. The managerial engineers adapt to the mirage and collaborate in selling it to the industrial management and the shareholders, eliminating from the organization everyone who resists the corruption of rising costs. The managerial engineer applies the cost-reduction equation only to human
Research and Development: Business into Transfer Information
53
resources, because from the technological point of view, changing the software and the hardware, and prolonging for years the definitive start-up of the new management system, multiplies the initial budget by two, three or four in some areas of the Italian Alps, for instance. Evidently, neither staff training in the new management system nor new hardware purchases take place, because these would affect the cost equation of human resources. Finally, one way for the industrial management to hide this reality from the shareholders, suppliers, clients, etc. is to have the local industrial union reward the heads of the software engineering anti-model for the alleged breakthroughs achieved with the introduced changes. However, all of them forget the internal and external sector that watches over the safety of the information circulating inside and outside the internet.
4 Resistance to the Leak of Confidential Information: Intranet and Extranet

One of the main problems that technicians and system analysts must face during a software changeover, in which staff both internal and external to the organization are involved, is the safety of the information. The first issue is access to the firewall (set up with Linux, for instance) and to the software that controls information security, such as the commercial products Tivoli, Nsauditor, ShareAlarmPro, etc. The second is the qualification of users' access to files or databases for reading, modification and other operations. The third is control of the CAD/CAM system. The first two belong to the field of network management, whereas the third is tied to the banking or accounting information of the firm and is the most delicate environment of a textile industry, because it stores all the designs of a collection to be presented at the main international fashion-design fairs. Whoever controls those fields carries great responsibility, even if in the organizational chart he or she figures as a simple systems technician or computer expert. Evidently, the managerial engineer has no technical knowledge of clothing design, and the hypothetical insiders of the calculation centre are interested in following the work schedule of Figure 5. The style or CAD department produces the designs and passes that information to the management system, which forwards it to billing, production, shipping of the product (cloth), etc. The cycle of a textile firm is born in the stylistic or creative area of the designs [14]. This is another of the variables that, in the example of the anti-model we are describing, has not been considered by the managerial engineer. Obviously, he or she will also defend the change before the management for cost reasons.
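The second safeguard listed above, qualifying each user's access to files or databases for reading or modification, can be sketched as a simple permission check. Everything here (roles, resources, permission sets) is an illustrative assumption, not the configuration of any system described in the text.

```python
# Minimal sketch of per-user access qualification for files and databases.
# Roles, resources and permission sets are illustrative assumptions only.

PERMISSIONS = {
    "designer":        {"cad_files": {"read", "modify"}},
    "accountant":      {"billing_db": {"read", "modify"}, "cad_files": {"read"}},
    "outsourced_tech": {"billing_db": {"read"}},
}

def is_allowed(role: str, resource: str, operation: str) -> bool:
    """Return True only if the role is qualified for the operation on the resource."""
    return operation in PERMISSIONS.get(role, {}).get(resource, set())

# A designer may modify CAD files, but an outsourced technician may not
# alter the billing database:
is_allowed("designer", "cad_files", "modify")          # → True
is_allowed("outsourced_tech", "billing_db", "modify")  # → False
```

The point of such a table is precisely what the case study lacks: a written, auditable record of who may touch the delicate environments of the firm.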
Nevertheless, changing the CAD/CAM system in the midst of the chaos of the new management system may entail a total stoppage of production. The alternative, in the face of the doubling, tripling, etc. of the timelines planned at the start, is to keep two systems working in parallel; that is, to modify the CAD/CAM system so that it works both with the system intended to be implemented and with the old management systems. Once the new management system has been implemented, the working CAD/CAM system will be attacked, whether from the point of view of visual simulation or by placing internal staff to verify manually the data fields at the moment of transferring the
information from the designers to the databases of the parallel management systems (a clear example of industrial boycott, described in [10], because oddly enough nobody controlled those costs). Nevertheless, this has been the vital sector par excellence, the one that has abided by all the quality guidelines of software engineering, where technicians and programmers have worked anonymously in order to sustain, for years, the production of an over-centennial textile industry. Even with the new CAD/CAM system in place, the old and new computer-aided design systems for stylists and customers (i.e., Dolce & Gabbana, Prada, Grifoni, Dutti, Barbari, Tom Ford, Giorgio Armani, Hackett London, Zara) will continue to run in parallel for a couple of years.
Fig. 5. The information transfer between the textile CAD system and the management information system, the point at which sabotage of a textile industry is highly frequent whenever a renovation process of the management system is under way
Fig. 6. The simulation of fabrics should not be modified while a new management system is being set in motion
The generation of solid structures for the safety of the information, structures that respect the quality guidelines, is not simple in environments such as those described. The reasons lie not so much in the cost of the hardware or the software, because Linux offers important free tools both to set up a firewall and to control the network: the volume of information circulating inside it, the detection of eventual risks and how to solve them,
etc., with free-access programs. Besides, a firewall may be set up on a personal computer wired to the network. The human-factor conflict arises the moment somebody from the lower ranks of the industry's internal staff controls those who allegedly carry out the controlling tasks in the internal and external organizations while the new software is being implemented over so many months. In practice, changing the access passwords for the different points of the network may consume several hours of the working day for political reasons: these reasons involve constant consultation with the industrial directors about whether or not to allow access to the information in the files, for instance. The human and technical cost borne by these people has been very high, because they are guided by the quality principles of software engineering, in the safety measures that prevent information leaks, and in the continuous maintenance of production for both management systems (new and old) through CAD. The price of that cost is high because on the one hand there are totally irresponsible heads of sectors and fosterers of industrial boycotts, and on the other an industrial management uneducated in the new information and communication technologies, which cannot recognize the importance of the work being carried out. Yet these are real European examples of the new millennium, firms that market national and international clothing brands and bill hundreds of millions of euros. They are real cases that render totally insignificant the latest breakthroughs reaching us from the software sector, such as ICT for developing nations [15], nanotechnology [16], textile-based computing [17], quantum computing [18], etc.
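As the paragraph above notes, basic network monitoring needs no costly tooling; even Python's standard socket module can probe which TCP ports answer on a host. This is only an illustrative sketch of risk detection under assumed hosts and ports, not the toolchain used in the industries described.

```python
import socket

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports on which a TCP connection attempt succeeds."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection was accepted
                found.append(port)
    return found

# E.g., probe a few well-known service ports on the local machine:
open_ports("127.0.0.1", [22, 80, 443, 8080])
```

Unexpected open ports found this way are exactly the kind of "eventual risk" a modest, zero-cost audit can surface before confidential designs leak.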
In situations like those presented, the risk to the safety of the information comes more from insiders than from outside, because the laissez-faire attitude of the organization may be the main source of leaks of confidential information, especially if those allegedly in charge have a modus operandi like that presented in [10] and [12]. For instance, for them the ideal solution is to authorize all internet users to have free access to the databases of the industry. In realities of this kind, the internal human factor renders useless all the investments made in computer security and everything related, directly or indirectly, to it: fingerprint readers, iris scanning, the continuous change of server and computer passwords, etc., not to mention the physical and human measures around the location of the servers (electronic locks, safety cameras, surveillance staff, etc.). Here is another example of how the change of a management system by inexperienced people who allegedly control costs only increases them.
5 Heuristic Techniques to Wipe Out Mirages from the Software

The technological breakthroughs intended for the textile industrial sector, which as a rule is very reluctant to adopt them, require correct planning and start-up by real specialists in computing and systems engineering. These specialists, aided by accountants, can evaluate the benefits and costs of the project, set up the budgets and implementation timelines, manage the programming errors, wipe out potential unforeseen hindrances in the training of the staff, etc.
Hybrid degrees, such as short-duration engineering degrees at non-secular colleges, or at allegedly secular colleges managed by religious organizations, only add costs to a project of streamlining or adaptation to the new technological challenges. In such educational, college and labour contexts, one of the ruling principles is to leave no written documentation of the tasks to be carried out, the decisions made, etc. That is, there is no textual communication; everything is verbal. Obviously, the heuristic evaluator of realities that are anti-models of software engineering will have to resort to the techniques of direct observation, interviews with the users or agents participating in the project, and think-alouds. The compiled data serve for writing reports, producing statistics and generating eventual graphics with the results. Evidently, this evaluator has to be an expert in eradicating the subjective factors of the agents who suffer the poor implementation of the new system. The field of study must focus not only on the agents or internal users of the organization but also on the programmers, analysts, systems technicians, etc. belonging to other service firms, for instance. The goal is to obtain data revealing the degree of the prevailing mirage and of the damage to the quality principles of software engineering. For instance, an external technician may receive from the internal head a file consisting of an endless sequence of 1s and 0s, without spaces or field names, as a single record, from which the different numerical, alphabetic, alphanumeric, etc. fields must be worked out so that the data of the new management system can be added to them. In the opposite direction, an external programmer may hold the data of the CAD management module whose field structure does not match the real information circulating in the AS/400.
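The undocumented single-record file described above can only be decoded once the field boundaries have been inferred by inspecting the data. A minimal sketch, assuming hypothetical field names and widths:

```python
# Hypothetical sketch of recovering fields from an undocumented fixed-width
# record (the single undelimited "register" described above). The field
# names and widths below are assumptions made for illustration only.

FIELD_LAYOUT = [("article_code", 6), ("quantity", 4), ("warehouse", 2)]

def parse_record(record: str) -> dict:
    """Slice one undelimited record into named fields by position."""
    fields, pos = {}, 0
    for name, width in FIELD_LAYOUT:
        fields[name] = record[pos:pos + width]
        pos += width
    return fields

parse_record("001042001503")
# → {'article_code': '001042', 'quantity': '0015', 'warehouse': '03'}
```

With proper written documentation of the layout, as the quality guidelines require, no such guesswork would be necessary.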
Only written information (for instance, e-mails), direct observation of the documents combined with the answers obtained in interviews (assistance in the face of difficulties at the user's workplace), and think-alouds (groups of users trying to understand how the beta version of the system works) are effective techniques in these realities of mirages, where the users choose to stay anonymous for work reasons. Let us remember that in some geographic areas of Southern Europe a managerial engineer in a textile industry occupies the highest position in the organizational chart, regardless of his academic knowledge or experience in the sector. The following table of the techniques used for analysis shows the various alternatives for obtaining information from the users or agents participating in the project of technological innovation [18], in the face of the problems entailed by the start-up of a new management system. The agents or users have been split into the following areas: accountancy (ac), sales (sl), labs (lb), production (pr), production planning (pp), stylists or designers (st), human resources (hr), and external technical service (outsourcing), analysts and external programmers (et). The profiles of the categories are workers (wk), employees (ep), heads (hd) and executives (ex).
Techniques                 Users                        Categories
Direct observation         ac, lb, sl, pr, st, et, hr   wk, ep, hd
Interviews                 sl, lb, hr, ac, st           ex, hd
Consistency inspections    sl, st, pr                   wk, ep, hd
Cognitive walkthroughs     et, pr, st, al               wk, ep
Users surveys              sl, lb                       hd, ep
User feedback              et, ac, sl, lb, pp           wk, ep, hd
Beta-testing               (not specified)              wk, ep, hd
6 Learned Lessons and Future Works

The awards granted by the industrial unions for technological innovation in certain over-centennial textile industries have a value equal to zero in given geographical areas of the Lombardian Alps. The implementation and development of those systems constitute, in many cases, an international anti-model of software engineering, usability engineering and systems engineering: the absolute contradiction of the formal and factual sciences, whose origin is to be found in the universities of those areas. In those university centres the secularism of European education does not prevail. Constantly changing the curricula of the state universities as if they were private religious institutions only generates professionals with little training, potential destroyers of the industrial activity of the region. It is easy to see how the industrial power of an area decays with the insertion of professionals for whom the cost equation takes precedence over quality and over the international guidelines by which it is ruled, as in the case of the ISO guidelines for products and services. The only ones who avert industrial paralysis at the moment of implementing a new management system are the non-local professionals, inserted as low-level technicians in anonymous positions, who hold the management and (total or partial) control of data safety and especially of the CAD/CAM system. This system is essential for the production of clothes. Sometimes this important task goes unrecognized by the industrial management, since they live in the everlasting mirage of the software anti-model, thanks to their direct collaborators both internal and external to the organization. As a rule, only a professional internal to the organization can carry out a heuristic analysis of the real problems of implementing the new management software.
From this starting point he can quickly find the best technical and human solutions. However, this freedom and operative quality usually has its days numbered from the employment point of view, especially in environments where corruption or eternal industrial sabotage reigns. As has been seen in the real example of the over-centennial textile industry, the new technological investment has not only increased the costs (multiplying the original budget by two, three or four) but also amplified the human factors that feed the everlasting flaws in the functioning of the computer systems, whether for software and/or hardware reasons. Leopardism inside the industry is located at the intersection between the heads of the old and the new management system. The written documentation of the communications among the participants setting a new textile management system in motion is nil or very scarce, especially in those regions of Southern Europe that are surrounded by mountains. Therefore, we think it fitting to carry out a discourse analysis of the messages received in the context of computer and systems engineering, resorting to semiotics and linguistics. The goal of these future works is to draw the profiles of the industrial saboteurs through software engineering.
7 Conclusion

The transfer of the new technologies of communication and information is not easy in areas where what Ferdinand de Saussure called parochialism prevails. Although in these places an attempt is made to give the term a positive connotation, parochialism is an endless source of conflicts at the intersection and/or union of computing, telecommunications, interactive systems, etc., through the human factors. The instruments to detect it derive from the social sciences, and through a heuristic analysis the scope of the industrial sabotage, or of the anti-model of software engineering, can be established with greater accuracy, especially in the quality context. To eradicate it in those geographical areas, however, is practically impossible, because ignorance of the new prevails in the management of the universities, industries, businesses, etc. Besides, the ISO guidelines for the services of those organizations also lack value, because the internal and external audits are totally manipulated. Nor are the awards of the sector's associations for innovation or technological modernization sufficient endorsement to guarantee the predominance of reality over the qualitative mirages of software. Finally, the greater the verticality in the over-centennial family firm, the further it is from the qualitative management guidelines, including in the software sector. This entails stronger controls on superfluous expenses, such as computer consumables (colour printer cartridges, or the special paper for printing clothing simulations in the style or design area, for instance), and lesser or no control of the payments to the outsourced technicians, who spend hours and hours deciphering the flow of computer data in the management system, since they lack complete and accurate written documentation. The software anti-models in Southern Europe are a source of conflicts deriving from human factors disguised under the name of quality.
References

1. Tricker, R.: ISO 9001:2008 for Small Businesses: With Free Customisable Quality Management System Files! Butterworth-Heinemann, Burlington (2010)
2. Thorpe, B., Sumner, P.: Quality Management in Construction. Gower, Hants (2004)
3. Kehoe, R., Jarvis, A.: ISO 9000-3: A Tool for Software Product and Process Improvement. Springer, Berlin (1995)
4. Goodman, J.: Semiring Parsing. Computational Linguistics 25(4), 573–605 (1999)
5. Kruchten, P.: Software Architecture and Agile Software Development: A Clash of Two Cultures? In: ICSE, vol. (2), pp. 497–498 (2010)
6. Czerwinski, M., et al.: Digital Memories in an Era of Ubiquitous Computing and Abundant Storage. Communications of the ACM 49(1), 44–50 (2006)
7. Bebenita, M., et al.: Stream-Based Dynamic Compilation for Object-Oriented Languages. In: Oriol, M., Meyer, B. (eds.) TOOLS EUROPE 2009. LNBIP, vol. 33, pp. 77–95. Springer, Heidelberg (2009)
8. Yih, W.-T., Goodman, J., Carvalho, V.: Finding Advertising Keywords on Web Pages. In: The 15th International World Wide Web Conference, pp. 213–222. ACM Press, New York (2006)
9. Parikh, M., Gokhale: Legal and Tax Considerations in Outsourcing. In: Information Systems Outsourcing: Enduring Themes, New Perspectives and Global Challenges, pp. 137–160. Springer, Berlin
10. Cipolla-Ficarra, F.: Vademecum for Innovation through Knowledge Transfer: Continuous Training in Universities, Enterprises and Industries
11. Cipolla-Ficarra, F., Cipolla-Ficarra, M., Ficarra, V.: Copyright for Interactive Systems: Stratagems for Tourism and Cultural Heritage Promotion. In: Cipolla Ficarra, F.V., de Castro Lozano, C., Nicol, E., Kratky, A., Cipolla-Ficarra, M. (eds.) HCITOCH 2010. LNCS, vol. 6529, pp. 136–147. Springer, Heidelberg (2011)
12. Cipolla-Ficarra, F.: Software Management Applications, Textile CAD and Human Factors: A Dreadful Industrial Example for Information and Communication Technology
13. Gilmour, D.: The Last Leopard: A Life of Giuseppe Tomasi di Lampedusa. Eland Publishing, London (2007)
14. Cipolla-Ficarra, F., Rodriguez, R.: CAD and Communicability: A System that Improves the Human-Computer Interaction. In: Jacko, J.A. (ed.) HCII 2009, Part IV. LNCS, vol. 5613, pp. 468–477. Springer, Heidelberg (2009)
15. Cleverley, M.: Emerging Markets: How ICT Advances Might Help Developing Nations. Communications of the ACM 52(9), 30–32 (2009)
16. Peters, W.: Nanotechnology: Environmental, Health & Safety Issues. Nova Science, New York (2009)
17. Post, E., et al.: E-broidery: Design and Fabrication of Textile-Based Computing. IBM Systems Journal 39(3-4), 840–860 (2000)
18. Svore, K., et al.: A Layered Software Architecture for Quantum Computing Design Tools. IEEE Computer 39(1), 74–83 (2006)
Annex #1
Fig. 7. The problems deriving from access to the database of the new management system are falsely attributed to errors or slowness in the internal network or intranet. In Bergamo Province (Lombardy, Italy), this error screen of the new management system, caused by the access to the programs, files and databases, led to the purchase of a high-range server, because the hypothetical errors of the management system were blamed on the data network.
Fig. 8. A special pastime for external/outsourced technicians and the internal AS/400 manager: watching the graphical evolution of the different servers in the Windows Task Manager
Fig. 9. Billing of 15 hours of service without detailing the activity performed
Fig. 10. Billing for the maintenance of Access databases (without detailing the tasks performed), databases which are systematically boycotted to pander to the friends who carry out the computer work on an outsourcing basis (€160 per hour in the new millennium)
Fig. 11. Evaluation of the network to certify its vulnerability. Activating this application was not easy because of the industrial sabotage rife inside and outside the centennial textile industry during the process of changing the management system.
Reducing Digital Divide: Adult Oriented Distance Learning

Daniel Giulianelli, Graciela Cruzado, Rocío Rodríguez, Pablo Martín Vera, Artemisa Trigueros, and Edgardo Moreno

National University of La Matanza, Department of Engineering and Technological Research, School of Continuing Education - Department of Education, Buenos Aires, Argentina
{dgiulian,graciela,rrodri,pablovera,artemisa,ejmoreno}@unlam.edu.ar
Abstract. This paper shows the digital divide that exists between the communities that use ICTs (Information and Communication Technologies) in their everyday lives, including access to local government web sites, and those that are totally unaware of the subject. A survey regarding information technology knowledge was performed in several communities with very different sociocultural characteristics. The results of the survey show, from a technological scope, the width of the technological gap that separates those communities. Aware of this digital divide, this research presents the results of implementing a strategy that allows deprived citizens, who live in the border zones, to acquire information technology knowledge. This strategy includes face-to-face learning, distance learning, the use of mass communication media, practice in the national university's computer labs, and theory and practice exams. Keywords: Digital divide, Mass Media, Communication, Technology, Distance Learning, Training, e-Government.
1 Introduction
The digital divide is a term that refers to the gap between the people, groups and countries with effective access to ICTs (Information and Communication Technologies) and those with scarce or no access to them. Nowadays it is very difficult to live without computers and the internet. Our society has become a digital and information society, and ICTs support our everyday lives within it. Social network services, learning, work, e-commerce, leisure, government services, and so on, are only a few of the ICT applications that people can use and enjoy. It is therefore very important for a country's population to be able to access ICTs. Otherwise, as a counterpart to the technological progress that many communities enjoy, there are others that are technologically excluded.
The digital divide concerns opposing opportunities for access to, training in and use of ICTs. This gap can be classified by age, gender, income level, neighbourhood, etc. "The social groups that have taken advantage of this progress for their members' benefit have acquired a material and intellectual development level that separates them from other, less privileged social groups" [1]. That is why it is possible to assert that a digital divide exists between the communities that use ICTs in their everyday life, almost without noticing their presence, and those that have no knowledge of the subject. One possible definition of the digital divide is: "the technological distance among individuals, families, companies, interest groups, countries and geographical areas in their opportunities for information access, communication technologies and internet used for a wide range of activities" [3]. "Digital literacy" is influenced by the lack of economic resources to access technological education and, in several cases, by generational issues. Teaching information technology to adults requires a very particular approach. Adult students frequently feel that they are not able to learn, and they also think their technological ignorance is so deep that it will be very difficult to reach technology. Quite the opposite, young students without any previous computer experience will not give up until they succeed in reaching their goal. People who grew up in times when technology was not the boom it is nowadays feel like foreigners in this information society, where technological knowledge is essential. Often it is the children who teach and guide their parents in using a computer.
2 Digital Divide Determination
In order to determine how wide this digital divide is, extensive research was carried out on a sector of Argentina's population. To achieve this goal, the research team conducted a survey in Buenos Aires Province, more precisely in La Matanza County. This county was selected for the following reasons: (1) to learn the real needs of the population, which led to the creation of the public University of La Matanza (UNLaM), to which the research team belongs; (2) La Matanza County has an area of 323 km2 and a population density higher than 4,644 inhabitants per km2. Fifteen towns belong to this county, and they can be classified into three population belts, each including communities with different sociocultural characteristics; (3) the coexistence of marginal and residential neighbourhoods allows the determination of the bounds that make it possible to establish the width of the digital divide. It is important to highlight that the socioeconomic level decreases from the first belt to the third. The third belt contains the most marginal communities and, in addition, its inhabitants have to travel farther to reach educational centres. In order to survey all the towns of La Matanza County, the research team designed a survey form, which was distributed to the high schools collaborating with the research. These schools distributed it to the inhabitants of each town according to precise age and gender ranges, so as to obtain a representative sample.
Moreover, in the towns of the third belt, university assistants from the Social Studies Department of La Matanza University reached the poorest neighbourhoods to survey the people living in the area. Two universes and two different ways of completing the survey were defined, in order to minimize errors and cover all of the county's towns: (1) a sample of 4 inhabitants per 10,000 inhabitants of the county (0.04%) was reached through the high schools collaborating with the research; (2) just for the specific case of the towns included in the third population belt, and in order to decrease the possibility of errors, a fieldwork was developed that reached 14 inhabitants per 10,000 (this percentage represents 0.14%). The total number of surveys obtained, whether performed by university assistants or obtained with the high schools' collaboration, was analyzed to check their veracity. This goal could be achieved because some auto-checkable questions were included in the forms, which allowed the research team to detect contradictory answers. The number of valid survey forms was 1029.

2.1 Obtained Data Analysis

The survey form allows many comparisons among the three population belts. These comparisons address different subjects, such as general knowledge level, information technology knowledge, economic issues, etc., in order to understand the situation in which each community is immersed. The results concerning training and technology are the most significant. To measure the digital divide, some indicators belonging to different categories were selected:
1. Information technology knowledge level: It is possible to highlight that in all three belts a large portion of the population declares that they have no information technology knowledge. The figure increases with the distance from the first to the third belt. Figure 1 shows the population percentage with no information technology knowledge. In addition to the information shown in Figure 1, it can be observed that the highest percentage of inhabitants declaring excellent information technology knowledge belongs to the first belt, and is only 3%.
Fig. 1. Population’s percentage that doesn’t have any information technology knowledge
2. Population interested in learning information technology: Inhabitants who answered that they have no information technology knowledge were asked whether they were interested in learning it. The results show that in all three belts more than 50% of the people want to learn information technology. In the third belt, 75% of the respondents declared that they are interested in learning it. Figure 2 shows, with a dark gray bar, the percentage per belt of the people interested in learning information technology and, with a light gray bar, the opposite answer.
Fig. 2. Percentage, by belt, of the people interested in learning computer science
3. Microsoft Office knowledge: The question regarding the Microsoft Office knowledge level was asked only of the people who answered that they have information technology knowledge. The level detected in the three belts was poor, and the third belt presented the lowest Microsoft Office knowledge level. Adding to the information shown in Figure 3, it may be highlighted that the third belt shows the highest ignorance percentage, "none" being the most selected option, with 52%.
4. E-mail checking: In the three belts, less than 50% of the population navigates the internet to check e-mail. The third belt has the lowest percentage of e-mail checking. These figures are no surprise, because that population has few economic resources and, besides, there are fewer cyber cafés there. Habitual internet access is very difficult for those who live in the third belt.
Fig. 3. Percentage of the population with very little or no knowledge of the Office suite
D. Giulianelli et al.

5. Elementary school education: As distance grows and economic resources decrease, not only technological ignorance increases but also the difficulty of accessing elementary education. It is important to highlight that the survey was administered to people at least 15 years old; a person of this age should be attending high school. However, for different reasons, 10% of the population of the third belt has not finished elementary school. Figure 4 shows the situation of the three belts regarding completion of elementary school: the percentage of people who completed elementary school decreases towards the third belt.
Fig. 4. Percentage of population that completed elementary school
6. People who attended training courses: Only 8% of the third belt's surveyed population declared that they had attended training courses, and 28% of that 8% declared that they had abandoned the training before it finished (see Figure 5).
Fig. 5. Percentage of the population that had attended training courses
2.2 Digital Divide Measurement

The results shown above, which are only some of those obtained by the research, indicate that as economic resources decrease and distance grows, the possibilities for an inhabitant to access education shrink. For that reason, and in
order to simplify its representation, the research considers the digital divide established between the first and the third belt. Only one indicator, the information technology knowledge level, is compared across the belts. This comparison allows marking a clear gap among communities. The gap that separates the different communities must be viewed dimensionally, using the information supplied by the survey, from different angles:

1. Technological knowledge: To represent this category, the research took two topics: the percentage of the population that has no information technology knowledge, and the percentage of the population that does not browse the Internet to check e-mail.
2. Training possibility: On the one hand, the number of inhabitants who had attended and finished training courses and, on the other hand, those who had abandoned the training before it finished. It also takes into account the percentage of the population that considers distance learning a great training possibility, because there are few or no travel costs and the training time can be chosen by the trainee.
3. Socioeconomic issues: Although the survey form has several questions regarding the number of persons living in the family house and whether they have their own vehicle and of which kind, the significant questions for determining the digital divide are oriented to technology. That is why the research is based on indicators such as whether the person has cable or satellite TV service or only over-the-air TV.
Figure 6 shows the graphic for items 1 to 3. The x-axis shows the indicators explained above. The upper line represents the results of the third-belt survey, where the deficiency percentages are higher; the lower line shows the same indicators for the first belt.
Fig. 6. Digital divide between first and third belts
The area enclosed between the upper line (corresponding to the third belt) and the lower line (corresponding to the first belt) is the digital divide.
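The divide measure of Figure 6 can be sketched numerically as the area between the two indicator lines. In the snippet below, the percentage values are illustrative placeholders, not the survey's actual figures; only the construction (area between two polylines over the indicator axis) follows the text.

```python
# Sketch of the divide measure in Figure 6: the area between the
# third-belt and first-belt indicator lines.

def divide_area(third_belt, first_belt):
    """Trapezoidal area between two polylines sampled at x = 0, 1, 2, ..."""
    gaps = [t - f for t, f in zip(third_belt, first_belt)]
    # Sum of trapezoids over unit-width intervals along the indicator axis.
    return sum((a + b) / 2 for a, b in zip(gaps, gaps[1:]))

# Hypothetical deficiency percentages for the indicators on the x-axis
# (no IT knowledge, no e-mail use, unfinished training, no cable/satellite TV).
third = [60, 55, 70, 65]
first = [30, 40, 45, 35]
print(divide_area(third, first))  # 70.0
```

The larger this area, the wider the divide between the two communities for the chosen set of indicators.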
3 Strategy to Reduce the Digital Divide

Aware of the existence of the digital divide, and of its meaning for the involved population, it was decided to train the most marginal communities. The research team studied several ways of bringing knowledge to those communities by means of free training. Although there was massive interest in the training, some drawbacks arose: (1) it was impossible for the interested people to travel to the university to attend the training, due to lack of money; (2) it was also impossible to find available computers in some marginal neighborhoods, because some schools do not have computers and, even worse, some other schools do not have electricity. Moreover, some organizations in the area of the population to be trained have computers, but they do not allow people from other communities to use them because of security problems.

Due to these problems, the university offered its computer labs for the training practice and provided buses for transportation. As the research went on, other problems arose, such as the attendance schedule, because it was very difficult to find a time when everybody was able to attend. The research team then decided to offer distance education: face-to-face learning for the practice in the university computer labs, and distance learning to teach the topics necessary for the practice. The university radio station was the mass medium selected to broadcast the lessons. Microprograms, 20 minutes long, were implemented twice a week and repeated in different time slots. But just when the mobility and equipment problems were solved, a new problem arose: the radio's range.
It is important to highlight that in some towns, for example, within a range of four blocks the university's radio signal was received clearly, but in the next block the signal was blocked by other local radio stations. The minimum proposed age of the people who were going to attend the training was 22 years. The high school principals of two marginal neighborhoods made a list of possible trainees and handed out the sign-up forms among the schools' neighbors, students' families, etc. A total of 148 sign-up forms were received and each one was analyzed. As a result of that analysis, the most deprived population, who could only attend free training, was selected.

The sign-up form had questions regarding the information technology knowledge level and whether the applicant was able to receive the university's radio signal at home. Those applicants with some information technology knowledge, or who could not receive the radio signal at home, were notified of the reasons why they would not be able to attend the training. Some of them declared that although they could not receive the radio signal at home, they could go to a neighbor's or relative's home to hear the radio program. The high school principals offered a school room with a radio to allow people to hear the program at school.

In parallel, three members of the research team were trained to teach radio lessons. In distance education, contrary to face-to-face education, the teacher cannot watch the students' faces and expressions, ask questions to check whether they have understood the subject, or check how quickly they answer [2]. It is also very difficult to teach about computers to students who have never seen or used one. For that reason, graphical material was produced and delivered to the students through the high schools, allowing them to look at
the material while the teachers explained it during the radio program. Figure 7 shows three of these graphical materials as an example.

The microprograms ran twice a week for two months. Each of the sixteen microprograms was broadcast live with the following structure: (1) brief review of the previous class; (2) title and development of the day's class; (3) conclusions of the day's class. The lesson developed in a pleasant environment where the teacher chatted with the program's host, who made the explanations more interactive. Two telephone numbers were repeated frequently during the class: one for the students to call and ask questions, and the other to send text messages. All the questions received during the microprogram were answered live in the discussion block.

Three modules were developed: Hardware, Software and Internet. Each of them had theoretical issues explained during the radio program, complemented with four practice sessions, three hours long each, carried out in the computer labs. Finally, a theory and practice exam was taken. Figure 7 shows the first page of each module. There were 11 pages in all.
Fig. 7. General view of the first page of each module
4 Obtained Results

As a result of the trainee selection explained above, 84 persons attended this first training experience. Three practice groups were formed in order to reduce the number of trainees per practice class and to perform better monitoring of each of them. The assignment of trainees to groups was made exclusively according to each person's availability to attend the practice. The three groups had 34, 27 and 23 students respectively. Table 1 shows the age ranges of the trainees.

Table 1. Trainees' age ranges
Age range            Percentage
Less than 25         2%
Between 25 and 40    37%
Between 40 and 60    35%
More than 60         26%
As a result of the training plan with the research team's proposed methodology, the following categories were obtained:

• Approved: trainees who attended 75% of the practice classes and passed the final theory and practice exam, either at the first attempt or through a retake exam.
• Attended: trainees who attended 75% of the practice classes but did not take the final exam.
• Disapproved: trainees who attended 75% of the practice classes but failed the final exam.
• Absent: trainees who attended less than 75% of the practice classes.

Figure 8 shows the percentages reached by each category: Approved (first attempt or retake), Attended, Disapproved and Absent. Only 16% of the population that participated in the research had access to a computer, whether at home, at a relative's or neighbor's home, or at the workplace. Because the practice sessions were held in the university labs, the trainees were advised to practice more somewhere else, such as cybercafes. Many trainees brought questions to each following class about exercises that they had tried and could not finish. This commitment made the training successful: 66% of the students passed the final exam (see Figure 8).
Fig. 8. Obtained results
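The four outcome categories above follow directly from two observable facts per trainee: attendance and exam result. The classification rule can be sketched as follows; the 75% threshold comes from the text, while the function itself and the sample calls are illustrative, not the study's actual processing code.

```python
# Illustrative classification of a trainee into the study's four categories.

def categorize(attendance, took_exam, passed):
    """Classify a trainee by attendance fraction and final exam result."""
    if attendance < 0.75:
        return "Absent"          # below the attendance threshold
    if not took_exam:
        return "Attended"        # attended enough but skipped the exam
    return "Approved" if passed else "Disapproved"

print(categorize(0.80, True, True))    # Approved
print(categorize(0.90, False, False))  # Attended
print(categorize(0.75, True, False))   # Disapproved
print(categorize(0.50, True, True))    # Absent
```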
Desertion was very low: only 8% of the attendees did not finish the training; in a few cases the reason was that it was impossible for the person to attend the practice because of the schedule. Only 7% of the attendees failed the exam; however, these same persons completed the final survey after learning that they had failed, and wrote very good remarks about the training.

The digital divide that separates the different communities of La Matanza County is wide, but this research showed that it is possible to take actions to reduce it. Given the importance of ICTs in today's world, the research tries to find a channel to provide the population of La Matanza County with a tool that allows them to lose their fear of technology and, in addition, to decide what to do with the acquired knowledge.
The research demonstrates that it is possible to implement plans that reduce the digital divide, the mass communication media being a very good channel to broadcast knowledge.
6 Related Work

It is essential to establish strategies that reduce the internal digital divide that exists within each country. Even nowadays there are 776 million adults in the world who can neither read nor write; more than 25 million of them live in Latin America, mainly in Mexico and Brazil. In Argentina, 2.1% of the adult inhabitants (more than 800,000 persons) can neither read nor write. These figures have been published by UNESCO [4]. Obviously, these people cannot access computer training, and to this percentage may be added people who know how to read and write but cannot access computer training for other reasons, the main one being the lack of economic resources.

The research team shares the concern to reduce this digital, technological and also generational divide with other research works on the same subject:

• The research work [5] shows the inequity of Mexican society from a simple observation of the latest census data. This work presents a learning proposal and social practices for adults in marginal city areas.
• The INEA (National Institute for Adults Education) has offered several types of training since 1982 and has considered the mass communication media a viable alternative, also for literacy [4].

The current paper shows that it is possible to use the mass communication media to offer technological training that can contribute to reducing the digital divide.
7 Conclusion

The current research has proved that a digital divide exists even among neighboring communities. In contrast to the shortage of economic resources that prevents marginal communities from accessing technology and training, the enthusiasm to learn can overcome any type of barrier; it is only necessary to implement strategies that allow these communities to reach the knowledge. The research presents a strategy that was carried out with good results. Most of the people who attended the training said that they had never hoped to receive an invitation from a university professor, let alone to attend training at the university.

The satisfaction of the research team was not only the percentage of approved trainees; even before knowing that percentage, everybody was very satisfied by the mere fact of knowing it was a task with high social content. We received persons who had never been in contact with a computer, we shared their efforts, and we enjoyed their progress in each practice session. The trainees were able to take a final exam in which they answered theoretical questions and solved exercises using a computer. This training was implemented to help the
attendees to manage in an increasingly digital and technological world, to share information technology knowledge within their families, and to get better jobs. It is just a humble grain of sand toward reducing the digital divide.
References

1. Serrano, S., Martinez-Martinez, E.: La brecha digital: Mitos y Realidades, pp. 4–10. Universidad Autónoma de Baja California, México (2003)
2. Ramírez-Ramírez: La Educación a Distancia como instrumento de lucha contra la pobreza y de fortalecimiento democrático en América Latina. Universidad Estatal a Distancia de Costa Rica
3. Vázquez, A., et al.: Comunicando Comunidades: Redes Informáticas y el Partido de La Matanza. Universidad Nacional de La Matanza (2008)
4. INEA (Instituto Nacional para la Educación de los Adultos), http://www.inea.gob.mx/ineanum/
5. Público, D.: Ecuador también es un territorio libre de analfabetismo. Quito (2009), http://www.radiolaprimerisima.com/noticias/general/60315
Didactic Software for Autistic Children

Silvia Artoni, Maria Claudia Buzzi, Marina Buzzi, and Claudia Fenili
Abstract. In this paper we describe the aims and requirements of a project devoted to designing and developing Open Source didactic software (SW) for children in the autism spectrum disorder, conforming to the Applied Behaviour Analysis (ABA) learning technique. In this context, participatory design with therapists and the children's parents is necessary to ensure a usable product that responds to these children's special needs and respects the education principles and constraints of the ABA methodology.

Keywords: Autism, ABA, didactic SW, usability.
The success of the child's therapy depends on the full coherence of the program, which should be respected not only at school and during the therapy sessions, but also at home with parents and relatives. An intensive and coherent ABA program schedule (at least 8 hours/day) facilitates the child's acquisition of knowledge. The main goal of our project is to answer this question: "Can a computer SW program, i.e., didactic modules designed for autistic children, promote learning?" Potential advantages of using technology to support the learning process of autistic children are:

• Computers can provide multimedia content. Delivering content in auditory and visual formats allows better exploitation of the child's abilities. For instance, in addition to audio, the name of the object may be shown on the screen as a string to facilitate reading (in programs such as sequences, alphabet, etc.).
• Efficiency: the program can be set up immediately, reducing delays between exercises.
• Facilitation of building an abstraction process (photo, drawing, outline, sketch).
• Autistic children are usually very attracted by technology and interaction with electronic devices, such as computers, cell phones, mp3 players, and TV.
However, computer-based ABA therapy cannot replace classic face-to-face ABA therapy for several reasons:

• The computer, being an object, does not directly promote social inclusion.
• In order to avoid errors, a therapist or a relative must assist the child during the SW sessions.
• The generalization to the real world might be a concern.
• Spending several hours a day in front of a screen is not recommended.
In this paper we describe the participatory design (also involving therapists and children's parents) of a didactic SW program for children in the autism spectrum disorder, in conformance with Applied Behavior Analysis (ABA).
2 Related Works

Putnam and Chong investigated the actual home use of SW specifically designed for autism. The results of an on-line survey (spread through associations) filled out by 114 respondents shed light on the limited diffusion of specific SW (only 8%), while SW for cognitive disability was used by 25% [13]. This result indicates the need to move from research prototypes to engineered SW specifically designed for autistic children, possibly according to their preferences.

Augmentative and Alternative Communication (AAC) is a technique that, by increasing the user's perceptions, provides an alternative method for communicating, and thus is used for learning disabilities and neurological pathologies. Recent studies have confirmed the efficacy of electronic therapy based on AAC. Hirano et al. [7] designed and implemented a visual scheduling system (vSked) for planning and organizing children's activities in the classroom, and observed not only increased efficiency
for caregivers but also benefits and improvements in student-student and student-teacher communication. Recently, Pino and Kouroupetroglou have made available ITHACA, an Open Source framework for building adaptable, modular, multilingual, cheap and sustainable component-based AAC products [12]; however, at the moment the framework is not downloadable on-line but is distributed only by the authors.

Participatory design actively involves all stakeholders in the design process to help ensure that the created product meets their needs and is very usable [18]. Participatory design is essential when designing for autistic persons. Hirano et al. successfully applied participatory design in developing the vSked system [7]. De Leo and Leroy involved special education teachers in designing SW for facilitating communication with children with severe autism via smart phones [4].

A large branch of research is devoted to providing usable tools to assist therapists of children with autism. Kientz, Monibi and Hayes based their studies and their SW development on participatory design [6], [10], [9]. Kientz et al. [9] designed and developed two systems for facilitating efficient child monitoring (both progress and behavior): 1) Abaris, supporting teams executing Discrete Trial Training therapy, building indices into videos of therapy sessions and allowing easy seeking into the data; 2) CareLog, for collecting and analyzing behavioral data (unplanned incidents, also called "problem behaviors"). Furthermore, sensors were used to monitor stimming behaviors (self-stimulatory movements) in order to understand the cause of an uncomfortable situation. Hailpern et al. [5] investigated the use of computers for assessing the behavior of nonverbal children.
By defining a set of dependent variables for use in video annotation, called A3, it is possible to systematically analyze the interactions of nonverbal children, with the computer capturing feedback related to attention, engagement and vocal behavior.

Although many digital products are available for augmentative communication (e.g., GoTalk, Tango, Dynavox, Activity Pad), teachers and therapists have experienced low usability and flexibility; training is required for set-up and customization, making it difficult for parents to use them at home. Furthermore, they are expensive [6].

Mobile communication tools are a promising field in AAC research. Moving AAC from specialized devices to a standard mobile platform offers many advantages: first of all drastic cost reduction, then greater flexibility, simpler and faster customization, small size and ubiquity, and a familiar environment (the cell phone) for the children. Monibi and Hayes implemented a library of virtual cards for autistic children's activities on a Nokia N800 (the Mocoto prototype); the preinstalled card library may be easily extended with pictures or other digital images, and desktop software allows setup and customization of activities (e.g., size and number of cards, audio cues, etc.) [10]. Sampath et al. propose a system for autism using AAC that allows bidirectional communication between child and caregivers: a gateway on a handheld device converts between pictures and spoken language, completing the communication loop (receptive and expressive) [16].

Furthermore, pervasive technologies are investigated for monitoring user behavior. To enhance the social skills and abilities of persons with ASD, Kaliouby and Goodwin
built a suite of wearable technologies (cameras, microphones, sensors) for capturing, analyzing, and sharing (via wireless network) their social-emotional interactions, in a fun and engaging way [8].
3 The Project

We believe that technology can further enhance the lives of children with autism, for instance by creating more sophisticated eLearning tools. Research focusing on electronic educational programs will complete the pedagogical framework. Despite the considerable amount of research on SW to enhance children's learning and support caregivers, to our knowledge the few open source free SW products for teaching ABA are limited in functions or do not work well, while most of the stable products are commercial. Considering the high incidence of autism ([1], [11]), this is a very important issue.

The project mainly aims to define an ad hoc educational methodology for ASD children and to create didactic computer-based courses in order to render therapy more effective and efficient. The idea is to map ABA principles into a specific SW suitable for therapists and children, with modules designed to enhance children's cognitive processes, language development, and the recognition of emotions. According to recent studies ([2], [3]), early intervention in children affected by autism disorder is more effective for learning and developing social abilities, so this project focuses primarily on teaching young children (2-6 years old).

To simplify child-computer interaction and allow a modality "similar" to physical ABA therapy, we chose to use touch-screen devices and vocal synthesis to announce the commands of the learning modules (exercises). The language required is simple and minimal (short sentences without articles, e.g., "touch apple", "match yellow"); however, caregivers may also speak in order to integrate the commands in the best way to stimulate the child.

ABA therapy is based on AAC and Discrete Trial Teaching (DTT). DTT consists of a sequence of trials repeated several times, depending on the child's needs: 1) Mass Trial: basic trials ensuring the child's success (at first there is a prompt, progressively eliminated). 2) Distracter phase: first a neutral distracter is added, and next a non-neutral one; then two neutral distracters are added, and next two non-neutral ones. At first there is a prompt, progressively eliminated. 3) Extended Trials (choice between 3 items), executed by 2 different therapists. 4) Random Rotation of learned items.

We have defined categories of items to be learnt (forms, colors, genre, food, numbers, etc.). For each item of each category, the trial sequences are repeated in this order: matching (i.e., image/image, image/word, word/image, word/word), receptive (e.g., "touch apple") and expressive (e.g., "what is this?"). Generalization is carried out by changing the discriminative stimulus (i.e., the therapist's command), the position of the items on the screen, and the visual features of the element (photo, drawing, outline, sketch).

The design phase has involved face-to-face meetings with therapists, caregivers and parents, observation of ABA therapies, discussions via mailing list, and video and
audio conferences. Figure 1 shows a logical scheme of the SW architecture. The three main SW components are:

• Didactic SW, i.e., modules for learning categories of articles on which operations (match, touch, order, etc.) are executed with increasing degrees of difficulty. A concept of sessions, with data on the exercises and progress of each child, must be created to allow a coherent therapy in class and at home, with everyone involved in the learning process: teacher, therapist and parents.
• Monitoring SW, to control the child's progress, is a key component of the methodology. It merges two data sources: 1) computer-recorded data (events generated by children, such as pointing, drag-and-drop, touch zone, elapsed time for accomplishing the task, etc.) and 2) data annotations by therapists/caregivers, necessary for specifying whether a prompt was given (type, %) and for registering additional notes. Additionally, changes in programs (e.g., moving to a previous difficulty level) may highlight the child's weaknesses.
• Data analysis SW allows conversion of raw data into easy-to-use graphics and tables showing the child's learning progress. Graphics of a child's progress should be available via a Web interface to therapists and parents, to allow decisions on how best to direct the educational program and make the learning process more effective.
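The monitoring component's merge of the two data sources can be sketched as a join on a shared trial identifier. The field names and sample records below are hypothetical, not the project's actual schema; the sketch only illustrates combining computer-recorded events with therapist annotations.

```python
# Sketch: join computer-recorded trial events with therapist annotations.

def merge_trial_data(events, annotations):
    """Join event records and therapist notes on a shared trial id."""
    notes = {a["trial_id"]: a for a in annotations}
    merged = []
    for e in events:
        a = notes.get(e["trial_id"], {})
        merged.append({
            "trial_id": e["trial_id"],
            "elapsed_s": e["elapsed_s"],     # computer-recorded timing
            "correct": e["correct"],         # computer-recorded outcome
            "prompt": a.get("prompt"),       # therapist: e.g. "full", "partial"
            "note": a.get("note", ""),       # therapist: free-text remark
        })
    return merged

events = [{"trial_id": 1, "elapsed_s": 4.2, "correct": True}]
annotations = [{"trial_id": 1, "prompt": "partial", "note": "hesitated"}]
print(merge_trial_data(events, annotations))
```

From such merged records, the data analysis component could then derive the progress graphics mentioned above.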
Our project covers all these aspects as a whole and not as separate components. A pilot test with several autistic children (both low- and high-functioning) will allow us to perform SW tuning and customization. Thus, during the whole project, data analysis will also drive both 1) SW updates and 2) refinements of the educational methodology that implements ABA principles on electronic devices.
Fig. 1. Scheme of the ABA eLearning environment for children with autism
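The DTT progression described in this section (mass trial, distracter phases, extended trial, random rotation) can be sketched as a simple sequence generator. The phase names and the choice of three items in the extended trial follow the text; the generator itself and the sample items are an illustrative assumption, not the project's code.

```python
# Illustrative sketch of one DTT cycle for a single target item.
import random

def dtt_sequence(target, distracters, learned):
    """Yield (phase, items shown) pairs following the DTT progression."""
    yield ("mass trial", [target])
    # Distracter phase: one distracter (neutral, then non-neutral),
    # then two distracters (neutral, then non-neutral).
    for n in (1, 2):
        yield ("distracter (neutral)", [target] + distracters[:n])
        yield ("distracter (non-neutral)", [target] + distracters[:n])
    yield ("extended trial", [target] + distracters[:2])  # choice of 3 items
    rotation = learned + [target]
    random.shuffle(rotation)                              # random rotation
    yield ("random rotation", rotation)

for phase, items in dtt_sequence("apple", ["circle", "car"], ["dog", "sun"]):
    print(phase, items)
```

In the real therapy each phase is repeated with a progressively faded prompt, which a fuller implementation would track per trial.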
Obviously, computer-based therapy is only one medium for promoting learning, and classic face-to-face one-to-one ABA therapy should be alternated with it for two reasons:

• The computer does not involve the child in physical social interaction, so an accurate benefit-cost analysis should be performed individually for each child (although "emotion recognition" may be one educational module).
• ABA requires a long period of daily therapy, at least 8 hours per day. This is a long time to be facing a screen and is potentially damaging to the child's eyesight.
It is very important to design the UI avoiding any possible visual auto-stimulation for the child; for instance, an interface with all icons in view might be preferable to carousel solutions activated via mouse or touch. One interesting option is to customize the UI according to the child's abilities (low-, medium- and high-functioning children) and, if possible, their preferences. As previously mentioned, each child has personal abilities that vary over a wide range, so the SW must be easily configurable to meet both specific needs (receptive communication, expressive communication, etc.) and temporary needs (the child fails an exercise on an acquired article, so the therapist may immediately jump to a lower level to reiterate and consolidate the confusing concept).

According to the ISO usability definition, didactic software should guarantee efficacy, efficiency, and user satisfaction. Other fundamental design criteria required of the SW are: modular and scalable, allowing easy addition of new modules, items (to acquire) and programs; customizable, to better adapt to each child's needs and abilities; multilingual, to make the SW easily exportable to the Internet community as a benefit for the whole network; open, to guarantee interoperability; robust, for solid error handling.

Last, privacy and security are an important concern: all data must be kept anonymous (access via nickname) and secured with the appropriate technologies (login, certificates, etc.), and specific views restricted to the proper child's data must be created.
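The per-child customization and anonymity requirements above can be sketched as a small configuration record. All field names and values are hypothetical illustrations of the requirements, not the project's actual settings format.

```python
# Hypothetical per-child configuration reflecting the requirements above:
# anonymous nickname, functioning level, UI language, current difficulty.
from dataclasses import dataclass

@dataclass
class ChildProfile:
    nickname: str     # anonymous identifier, per the privacy requirement
    functioning: str  # "low", "medium" or "high"
    language: str     # UI language, per the multilingual requirement
    level: int        # current difficulty level

    def drop_level(self):
        """Jump to a lower level to consolidate a confusing concept."""
        self.level = max(1, self.level - 1)

p = ChildProfile(nickname="bluebird", functioning="medium", language="it", level=3)
p.drop_level()  # the therapist steps back after a failed exercise
print(p.level)  # 2
```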
4 Conclusion

In this paper we describe a start-up project aimed at designing and developing didactic SW modules and monitoring SW for teaching children in the ASD spectrum efficiently and effectively. In this context, participatory design is essential in order to ensure adherence between how therapists, parents and caregivers imagine the SW (functions, UIs, reports, etc.) and a final product that fulfils children's needs.

One of the main challenges addressed by the project is the design phase of the user interface. ABA therapy follows a complex sequence of structured steps, and therapists need flexibility to adapt the therapy to children's responses. Furthermore, didactic SW modules require customization (i.e., setting of preferences) to meet children's specific needs and exploit their abilities. Mapping all this complexity onto a simple user interface is a critical point. To manage this task effectively, the project will rapidly deploy prototypes to conduct a pilot test with a few autistic children, which is crucial for gathering therapist feedback and improving the SW's usability.
The involvement of therapists covers all aspects of the project and is not limited to the UI design. Didactic modules will be published on-line under a GNU license and offered free to all: teachers, parents, therapists, and educators. There is an urgent need to focus different efforts in the same direction, to create efficient and satisfactory educational tools for autistic children. Distributed Participatory Design (DPD), or Participatory Design @large, i.e., the possibility of benefiting from the social contributions of researchers, therapists, parents and caregivers in general to collaboratively improve the SW, is a key feature of open source SW. Different experiences (social, cultural, educational), suggestions, ways of seeing things, and ideas can significantly contribute to a better SW design.
S. Artoni et al.
Database Theory for Users Unexpert: A Strategy for Learning Computer Science and Information Technology

Francisco V. Cipolla Ficarra 1,2

1 HCI Lab. – F&F Multimedia Communic@tions Corp., ALAIPO: Asociación Latina de Interacción Persona-Ordenador
2 AINCI: Asociación Internacional de la Comunicación Interactiva
Via Pascoli, S. 15 – CP 7, 24121 Bergamo, Italy
[email protected]
Abstract. A set of strategies is presented for teaching computing to adult users inexpert in the handling of computers, using notions from databases, systems engineering and computer science. The steps taken to reach optimal results in a short time and at reduced cost are described. These strategies and stratagems are the result of a long teaching process in the professional training courses of the European Social Fund in the Mediterranean basin. A series of practical examples, or case studies, used in the generation of these strategies is also shown. Keywords: Database, Computer Science, Information Technology, Software, Hardware, Education, Users, Communicability.
European Social Fund were allocated even to neighbourhoods that met certain requirements concerning teachers, premises and computer equipment, for instance. In this sense, there was a greater democratization of professional training with European social funds in Spain than in Italy during the 1990s. In Italy, the same industrialists who were decentralizing textile, mechanical, computer and electronic production were responsible for organizing these courses for the unemployed. Consequently, there is greater retraining of workers towards the new technologies in Spain than in Italy. The main challenge in these neighbourhood academy courses is to achieve a homogeneous and motivated learning group in the least possible time [3]. In the case of computer science, the first step to take with the group is learning to handle the computer; a prerequisite is learning technical English. Therefore, our working environment, from the technical point of view, is made up of the following areas: systems, computer science, databases, logic programming and technical English. A graphical representation can be seen in Figure 1, although at first sight certain areas may appear to overlap because of ambiguity and linguistic vagueness [4], such as the difference between systems, computer science and information in Southern Europe. For instance, a graduate in information sciences in Spain is a journalist, at a lower level than what is understood as a social communicator in the Spanish-speaking countries of Latin America (whose curricula include computer science, for instance), whereas in Italy a graduate in information sciences is a computer analyst.
Fig. 1. Areas of knowledge on which one works with the multilayered group of professional training students
The current work is organized in the following way: presentation of the basic notions used, eradicating potential ambiguities; formation of an active, participative and motivated group; realization of the first practical examples; enumeration of the required strategies; and generation of a qualitative guide to be followed in professional training.
2 Software and Hardware for Education
One of the main problems in this kind of group is that it is essential for its members to use a common language. In this sense, the presentation of the technical terms of computer science,
leaving aside the pronunciation issue, may help to unite the workgroup [5]. Therefore it is important to start with the physical aspects of the computer, such as the electronic components, the information supports, the input/output peripherals that may be connected, etc. [6] [7]. In this regard, seeing from the inside how a USB key, a hard disk, a monitor or a printer is made usually motivates the adult student, increasing curiosity towards the electronic side of the computer and the peripherals connected to it, and decreasing the fear of causing irreparable damage to the computer through a mistake. The goal is thus to remove the first barrier between the user and the electronic tool, as the computer is usually regarded. In the same way, parallels with other home appliances may be useful, such as the television remote control, the loudspeakers of the audio equipment, etc. Once this first stage has been completed, one starts to analyze the components from the point of view of the system, starting with the minimal unit of information, the structure of the CPU (explaining how each of its parts works), the several magnetic and optical information supports (a historical view is very positive, so that students understand how storage media have shrunk down to the current era of microcomputing) and an analysis of the peripherals that may be connected to the computer [8] [9]. Before talking to the group about databases, one must differentiate between the terms “data” and “information” [10] [11] [12]. From a classical point of view of communication, information is the result of a process into which some data have been introduced. Said process may be carried out manually or with the use of a computer [6]. Moreover, information may turn back into data if feedback of the system takes place. All of this can be depicted in the following way:
Fig. 2. Feedback cycle between data and information
In a computer, data and information are binary. Bits combine to make up the different units of information measure: bytes, kilobytes, megabytes, gigabytes, etc. The important thing in these cases is to differentiate system from structure. The first leads us to the core of computer science and all its derivations, such as the Central Processing Unit (CPU) and the diverse peripherals, among other components. The second, structure, is more relevant when it comes to organizing a database. Through the notions of bit, byte, kilobyte, megabyte, gigabyte, etc., several exercises can be set for the conversion among them. It is a way for most students to go over the basic operations of division, multiplication, etc. with decimals, creating a situation of equal difficulty to be overcome inside the group, since many of them don't remember how to do that, and others don't know
how to do it. Next, the essential notions of the binary, octal and hexadecimal numeral systems are introduced, followed again by conversion exercises between these systems to strengthen the formation of a homogeneous workgroup inside the course. At this moment it is important to turn to the concepts underlying databases, explaining how such systems work and how they are made [13] [14]. The notions of character, field, record and file, and the several types of relational and federated databases, may help the inexpert user to understand better how an ATM, the electronic health card, the fiscal identification code, etc. work. That is, examples from daily life begin to be inserted and related to the concepts of systems and computing. Traditionally, a database is a set of data used in a specific computer system, whether of an educational, scientific, administrative, workplace or any other kind [15] [16]. Moreover, a database is made up of two kinds of information that belong to different levels of abstraction:
• The data
• The structures or metadata
The first represent the entities of the system to be modelled; the properties of such entities are described in the shape of values: numeric (day, month, year, etc.), alphabetic (name, second name, city, etc.), alphanumeric (an address, i.e. street plus number, or a car licence plate with letters and numbers, to mention a couple of examples), etc. Such data can be grouped or classified into categories in relation to their common structure: books, authors, year of edition, etc. The structures or metadata describe the common features of the several categories of data, such as their names and types. In the case of a book on paper support, the main data can be briefly structured in the following way: title of the book, author, publisher, ISBN number, place of print, total number of pages, etc.
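The unit and base conversion exercises described above can be sketched as follows. This is a minimal illustration assuming the classical 1024-based units; the helper names are ours, not from the course material.

```python
# Conversion exercises among units of information measure and numeral systems.

UNITS = {"byte": 1, "kilobyte": 1024, "megabyte": 1024**2, "gigabyte": 1024**3}

def to_bytes(value, unit):
    """Convert a quantity expressed in a given unit into bytes."""
    return value * UNITS[unit]

def in_bases(n):
    """Show the same number in the binary, octal and hexadecimal systems."""
    return {"bin": bin(n), "oct": oct(n), "hex": hex(n)}

print(to_bytes(2, "megabyte"))   # 2097152
print(in_bases(255))             # {'bin': '0b11111111', 'oct': '0o377', 'hex': '0xff'}
```

Students can check their paper-and-pencil conversions against such automatic exercises.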
From the computing point of view, each one of these data makes up a field. The length of the field is the total number of characters that make it up, including spaces (each space counts as one character). The field type depends on the type of data that will be stored:
1. Computer alphabetization through the database
2. Francis Vincent
3. BCN 3000
4. 84-7711-113-3
5. Barcelona
6. 220
Therefore, the first two fields are alphabetical; the first has an extension of 45 characters, including the blanks. The third is alphanumeric with 15 characters. The sixth is numeric with three characters, and so on. In computer systems these fields are grouped to make a record. As can be seen, so far it is a matter of combining definitions with practical examples about the generation of a database. Later on, the notion of a code is introduced; that is, the participants in the courses learn in the first lessons that, when a database is generated, every record generally has a code. Said code, with the purpose of gaining speed at the moment
of processing, should preferably be numeric and placed at the beginning of the record. For instance, with a bar code reader one can know automatically the price of a book at the moment of purchasing it at the bookshop. All of these records, in turn, make up a file. Therefore, an interesting way to roughly summarize all these notions for the students is that a file is a set of records, a record is a group of fields, a field is an association of characters, and characters are made up of bits. A bit is the minimal unit of information and may have two states: 1 and 0. A graphic representation is the following:
Fig. 3. A way to sum up the structuring of information in a database
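The book record above and the file/record/field/character/bit hierarchy can be combined in a small calculation. This is a sketch assuming a single-byte character encoding (8 bits per character); the six-character field names are illustrative.

```python
# Each field's length is the number of characters it occupies, spaces included.

record = {
    "TITLE":  "Computer alphabetization through the database",
    "AUTHOR": "Francis Vincent",
    "PUBLIS": "BCN 3000",
    "ISBN":   "84-7711-113-3",
    "CITY":   "Barcelona",
    "PAGES":  "220",
}

field_lengths = {name: len(value) for name, value in record.items()}
print(field_lengths["TITLE"])              # 45 characters, blanks included

record_chars = sum(field_lengths.values())  # a record is a group of fields
record_bits = record_chars * 8              # a character is made up of bits
file_records = 100                          # a file is a set of records
print(record_bits, file_records * record_bits)
```

Note that the declared length of a field in a real database (e.g. 15 characters for the alphanumeric publisher field) may exceed the length of the content actually stored.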
Here it is necessary to differentiate between analogical and binary databases. A classical example of the first case is the metal filing cabinet, where a human being classifies the data inserted into each of the folders of its several drawers: a manual, human process. In the second case, binary or digital, there is an electronic mechanism designed to carry out such operations, whose heart in the computer is the CPU (Central Processing Unit). Once the students have assimilated the basic concepts presented so far, it is necessary to begin a new stage, which consists of creating databases on paper support, but as if it were a game. What is intended in this stage is to follow the ideas of Seymour Papert, that is, to consider learning as a game [17].
3 Database and Logic Programming
So far we have considered the main aspects of databases from the theoretical point of view. However, organization is one of the most important activities, since what is put into practice is the logical aspect, that is, the essence of databases [6]. In this stage, as in the whole previous process, it is important to create a cooperative environment inside the classroom, using concepts familiar to everybody, such as a book, a magazine, a newspaper, a DVD, etc. It is important to establish the link between the work on paper and the real possibility of inserting these data into a database generated with an office automation application such as Microsoft Access.
For instance, if someone asks us to generate a database for a library, the first thing to be done is to put into play the inference mechanisms of communication and the mnemonic aspects of the field names, that is, to make them easy to remember when there is a high number of them. For instance, a way to abridge the fields to six characters is:
- Book code: BOOCOD
- Author's first and second name: AUTHOR
- Title of the book: TITBOO
- Year of publication: YEAPUB
- Country of edition: COUEDI
- Name of the publisher: NAMPUB
Although this may look like a trivial task at first sight, it is interesting from the logical point of view to start writing on paper the several components that surround people, so that later on they can be inserted into a database. A good mental exercise consists of determining the length of the record, working for instance with an average of 25 fields, which cannot repeat themselves. When sufficient practice is acquired, one can easily reach as many as 50 fields, which shows a high level of accuracy and detail in the database. Obviously, once this good structure is reached for a library, it can be exported (with the corresponding modifications) to organize the films in a video library, the articles of magazines and newspapers in a periodicals library, the pictures in a photo album, etc. The important thing is to consider that the first field of the record must always be a code. It is advisable that the students study several alternatives so that this code is numeric, in order to gain speed when the computer processes the information, should the database have a very high number of records. As for the extension of the fields, several Windows applications have default limitations; however, it is important to analyze the cases with a higher level of detail and to allow a greater length, bearing in mind the cultural diversity inside the EU, for instance. In this sense, it is advisable to start with the analysis of a Microsoft Access table showing the different kinds of fields and the intrinsic features of each one: logical (yes/no), monetary, date/time, memo, counter, etc.
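A hypothetical helper in the spirit of the six-character abbreviations above. The three-plus-three-letter rule is our assumption: the paper abbreviates by hand, and irregular cases such as AUTHOR do not follow it.

```python
# Abridge a field name to six characters: three letters from the first
# significant word plus three from the last, skipping connective words.

def mnemonic(field_name, stopwords=("of", "the", "and")):
    words = [w for w in field_name.lower().split() if w not in stopwords]
    if len(words) == 1:
        return words[0][:6].upper()
    return (words[0][:3] + words[-1][:3]).upper()

print(mnemonic("Book code"))             # BOOCOD
print(mnemonic("Year of publication"))   # YEAPUB
print(mnemonic("Name of the publisher")) # NAMPUB
```

Such a rule only automates the regular cases; the teacher and students still choose by hand when the result is not memorable.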
4 A Practical Example
The features of digital newspapers can serve to make up a personal database: the news items that interest us can be recorded daily in a hard disk directory. An easy way to classify them is by date, following the English-speaking model, that is, month, day and year. In our courses we generally started with national newspapers, such as “Público” in Spain, and then moved on to international ones, in order to insert other variables deriving from languages and cultures. In our practical analysis, one starts from several national publications and goes on comparing them, trying to find the advantage of those papers in which, at the moment of filing the articles in the computer, every article keeps the name of the printed edition, something which does not happen in other cases; consequently, the user must introduce the name of the article that he/she wants to file. This element, which can seem trivial, is very important for generating databases in a fast way. Once the subject has been defined, we create a table with six columns and two rows:

DATART | MEDIA | TYPEPU | SUBJEC | TITLE | LINK
09.28.2010 | Público | Newspaper | Computer Science | Blackberry launches its tablet to compete with the iPad | http://www.publico.es/science/338880/blackberry/lanza/tableta/competir/ipad

In the first row we insert the names of the fields, some in a mnemonic format, others left to the choice of the reader. The first field holds the date of the article, followed by the communication medium, the kind of publication, the subject, the title and the Internet link. In the case of the title, in order to save the effort of writing it, one can copy it directly from the article. Finally, it is important to insert the source of the information; in our case, a link to the Público address. To that purpose, it is necessary to create a hyperlink or link. The final result can be appreciated in the table, which will become an interesting database for daily consultation, for instance. From now on, when we are connected to the Internet and want to see a news item in context, we only need to click on the link: the website of the paper will open automatically and one will be able to access its data hyperbase, with the date and the title of the article. Here it is necessary to make clear that sometimes the subject (computer science, in our case) is a section inside a digital publication.
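Filing a news item with the table's mnemonic fields might be sketched like this. The dict-based storage and helper names are illustrative stand-ins for the real database; note that the month.day.year string must be re-ordered before sorting chronologically.

```python
# A personal news database: one record per article, keyed by date.

articles = []

def file_article(datart, media, typepu, subjec, title, link):
    """Store one news item under the six-character field names of the table."""
    articles.append({"DATART": datart, "MEDIA": media, "TYPEPU": typepu,
                     "SUBJEC": subjec, "TITLE": title, "LINK": link})

file_article("09.28.2010", "Público", "Newspaper", "Computer Science",
             "Blackberry launches its tablet to compete with the iPad",
             "http://www.publico.es/...")

def date_key(rec):
    # month.day.year must be re-ordered as (year, month, day) to sort by date
    m, d, y = rec["DATART"].split(".")
    return (y, m, d)

for rec in sorted(articles, key=date_key):
    print(rec["DATART"], rec["TITLE"])
```

With the records sorted this way, the daily consultation described in the text amounts to scanning the list and following the stored link.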
5 Raising the Analysis Level and Generation of More Detailed Examples
Once the students have carried out some practical exercises with magazines, newspapers, books, etc., it is important that they start to distinguish data that change over time from those that are unique and do not expire, such as the code of an article in a stock. The goal is that they know how to differentiate, from the diachronic point of view, the stable fields from the variable ones, placing the stable ones at the beginning of the record of a file and leaving the most variable at its end. In this sense it is advisable to start working with personal data, that is, the information of every person who attends the course, such as the student card or identity card. The advantage of working with the identity card is that all the participants have one and can look it up; that is, it is an immediate source of information. On it they can see that there are data which cannot change over time, such as the date of birth or the fiscal identification number, in contrast to other data which can change, such as the names or the number of the identity card. Later on, a first level of analysis can be carried out with the home address, given the variables that an address entails: street, avenue, boulevard, square, roundabout, flat, door, attic, stairs, floor (high or low), etc. Moreover, other codifications can be introduced to occupy less space in the length of the fields, such as the letters on the number plates of European cars: F = France, B = Belgium, NL = Netherlands, etc. In the following table we can see a first grouping of these data from the point of view of personal information,
which may change or not. For instance, the date of birth is unique, whereas the residence may change over a lifetime.

Field | Abbreviation | Content
Date of birth | DATBIR | 30001080
NIF or ID | NIF-ID | 12345678F
Name | NAME | Maria Victoria
Second name | SENAME | Vicente Larrea
Home address | HOMADR | Sotkstraaktswartier
St. number | STREET | 10
Flat | FLAT | First
Door | DOOR | 2nd
Stairs | STAIRS | A
City | CITY | Maastricht
Country | COUNTR | NL
Zip code | ZIPCOD | 6211 LK
Phone | PHONE | 3143 3881234
Fax | FAX | 31433881234
Email | NUMBER | [email protected]
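The stable-fields-first ordering recommended above can be sketched as follows. The stable/variable classification is taken from the discussion in the text and is, of course, debatable.

```python
# Place fields that are stable over time at the beginning of the record,
# leaving the most variable ones at its end.

STABLE = ["DATBIR", "NIF-ID", "NAME", "SENAME"]

def order_record(field_names):
    """Return the field list with stable fields first, variable fields after."""
    stable = [f for f in field_names if f in STABLE]
    variable = [f for f in field_names if f not in STABLE]
    return stable + variable

fields = ["CITY", "DATBIR", "PHONE", "NIF-ID", "NAME"]
print(order_record(fields))   # ['DATBIR', 'NIF-ID', 'NAME', 'CITY', 'PHONE']
```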
Excluding personal data, there are other fields whose characteristic of being stable or variable in time depends on the point of view from which one works. For instance, an email address is stable from the point of view of the computer system, but in a set of personal data on a business card it may change the moment the employee changes company. The same happens with the zip code: although zip codes depend on international organizations and used to be numeric with 4-5 figures, they currently tend towards the Anglo-Saxon system of mixing numbers and letters. Therefore, these aspects may also be debated inside the group when carrying out the design of the database. When the group of students has assimilated these concepts, they can start to design databases in a swift, autonomous way, establishing parallels among them, for instance between a database to organize a bookshop and one for a video library. In Annex 1 we have a short version of them, with bi-directional references among some fields, because they are identical:
• Video library: Code, title, year of production, year of release, awards, support (DVD, VHS, etc.), original title, original language, subtitled, production country, main and supporting actors and actresses, director, black and white, colour or a combination of both, genre, whether the images are real or computer created, based on a literary work?, duration in minutes, collection or series or trilogy, targeted public (children, teenagers, family, adults), price, etc.
• Bookshop: Code, ISBN number, authors, genre, year of writing, year of publication, original language/s, total number of pages, type of covers, size of the book (height, width, thickness in mm), illustrations, does it include digital support (CD or DVD)?, publisher, country of printing, does the book belong to a collection?, volume number, book for the average reader or for the visually impaired and the blind (Braille), price, etc.
In these lists it can be seen that there are certain common denominators which allow one to speed up the design process, such as the fields located in the first area,
that is, the stable ones. Once this stage has been completed, one can start to insert the real data in Microsoft Access. The notions presented allow greater dexterity when explaining the functioning of this commercial application. With the tables one may slowly create the first relationships among them, through the selection of the fields that will serve as guides or keys. Then the notion of a query may slowly be introduced, together with the ordering of the records through a field that allows the contents to be sorted in ascending or descending order, for instance. Depending on the degree of interest and motivation of the student group, other aspects of databases and their relation to the Internet may be explored.
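The notions of table, key and ascending/descending ordering can also be tried out in SQL. Since Microsoft Access is not scriptable here, this sketch uses Python's built-in sqlite3 module as a stand-in relational engine; the table and field names follow the text's mnemonics but are otherwise our own.

```python
# A minimal table with a numeric primary key and an ordered query.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE books (BOOCOD INTEGER PRIMARY KEY, TITBOO TEXT, YEAPUB INTEGER)"
)
con.executemany(
    "INSERT INTO books VALUES (?, ?, ?)",
    [(1, "Database design", 1990), (2, "Logic programming", 1987)],
)

# Order the records by year of publication, descending.
rows = con.execute("SELECT TITBOO FROM books ORDER BY YEAPUB DESC").fetchall()
print([r[0] for r in rows])   # ['Database design', 'Logic programming']
```

The same CREATE/INSERT/SELECT cycle mirrors what the students do interactively through the Access table and query designers.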
6 The Interface with the First Databases
Lastly, the issue of the interfaces may be approached, because it implies theoretical and design notions that sometimes may divide and atomize the group over design, style, taste or preference issues in the personalization of the information on the computer screen. A way to bring the group together consists of explaining the different meanings of the colours as they are applied to several daily life systems and linked to the uses and customs of peoples [18] [19]. Once these theoretical notions are covered, it is important that, working as a group and using Access, for instance, the first masks or interfaces are created for the databases that have been made. In this case, it is important that the fields that contain information from the databases are well differentiated from those that show that information on the screen. The latter may have diverse extensions, equipped with buttons to insert a yes or a no from the keyboard, images, titles, etc. In this regard it is important that the student knows how to organize the information of a database in the interface, following the principles of the Divine Proportion as stated by Leonardo Da Vinci, occupying minimal space and, if possible, with a minimalist style [18]. The goal is to gain speed when carrying out queries [20] or introducing changes in the information stored in the database.
7 Lessons Learned
Regardless of the knowledge and experience an adult person may have, someone forced to a certain extent by the labour market to acquire new computer knowledge daily, following if possible a face-to-face methodology in the real classroom, the first goal is evidently that the student starts to think in a logical way, that is, assimilates the essential mechanisms needed to interact with commercial applications and then, if interested, goes deeper into their programming. The great advantage of working in real classrooms is that the teacher can see the progress made collectively by the students who participate in the professional training courses. In many cases these students need to be motivated, not only for didactic reasons, but also because attendance at these courses is legally mandatory in the EU when, for instance, the student is unemployed and receives an unemployment subsidy. The workgroup does not only foster learning, but also keeps students constantly motivated and attentive to the notions presented to them. In a few words: prompting adults to learn computing as if it were a game, as Seymour Papert contended.
8 Conclusion
The combination of notions deriving from computer science, systems, telecommunications, logic and database design can be taught to an adult public in 20 lessons of 45 minutes each. The essential strategy is to form a homogeneous group at the start and to level the students in such a way that those with previous knowledge or experience can help the classmates who lack these notions. The use of paper for design is essential, in the case of databases too. Through the generation, ordering and population of databases one does not only learn the functioning of commercial applications, but also logical diagramming or programming, which is an essential basis for producing good programmers. Consequently, the methodological strategies presented in this work are the result of 20 years of refinement and have been tested with around 10,000 students: adult students, generally without knowledge or previous experience in the use of computers, who have made it apparent that by following the normal pyramid structure of theoretical information through to the practical aspects, excellent results are obtained. These results can be reached in a short time and without great costs, in contrast with the Computer Aided Education courses that are generally used for adult students inexperienced in computer science.
Acknowledgments. A special thanks to Emma Nicol (University of Strathclyde), Maria Ficarra (Alaipo & Ainci – Italy and Spain) and Carlos for their help.
References
1. Lok, B.: Toward the Merging of Real and Virtual Spaces. Communications of the ACM 47(8), 48–53 (2004)
2. Cipolla-Ficarra, F.: Blended Learning and Multimedia for Adult People. In: CD Proceedings HCI International 2005, Las Vegas (2005)
3. Cipolla-Ficarra, F., Cipolla-Ficarra, M.: Attention and Motivation in Hypermedia Systems. In: Jacko, J.A. (ed.) HCII 2009, Part IV. LNCS, vol. 5613, pp. 78–87. Springer, Heidelberg (2009)
4. Cipolla-Ficarra, F.: Quality and Communicability for Interactive Hypermedia Systems: Concepts and Practices for Design. IGI Global, Hershey (2010)
5. Cipolla-Ficarra, F.: Virtual Classroom and Communicability: Empathy and Interaction for All. In: Jacko, J.A. (ed.) HCII 2009, Part IV. LNCS, vol. 5613, pp. 58–67. Springer, Heidelberg (2009)
6. Date, C.: An Introduction to Database Systems. Addison-Wesley, Massachusetts (1990)
7. Finstad, K., et al.: Bridging the Gaps Between Enterprise Software and End Users. Interactions, 10–14 (2009)
8. Mashey, J.: The Long Road to 64 Bits. Communications of the ACM 52(1), 45–53 (2009)
9. Gray, J.: Evolution of Data Management. IEEE Computer 29(10), 47–58 (1996)
10. Power, M., Trope, R.: Heuristics for De-identifying Health Data. IEEE Security & Privacy 6(4), 58–61 (2008)
11. Aiken, P., et al.: Measuring Data Management Practice Maturity: A Community's Self-Assessment. IEEE Computer 40(4), 42–50 (2007)
12. Ambler, S.: Test-Driven Development of Relational Databases. IEEE Software 24(3), 37–43 (2007)
13. Ploski, J., et al.: Introducing Version Control to Database-Centric Applications in a Small Enterprise. IEEE Software 24(1), 38–44 (2007)
14. Meyer-Wegener, K.: Database Management for Multimedia Applications. Multimedia, 105–119 (1994)
15. Finstad, K., et al.: Bridging the Gaps Between Enterprise Software and End Users. Interactions, 10–14 (2009)
16. Gemmell, J., Bell, G., Lueder, R.: MyLifeBits: A Personal Database for Everything. Communications of the ACM 49(1), 88–95 (2006)
17. Papert, S.: The Children's Machine. Basic Books, New York (1993)
18. Cipolla-Ficarra, F., et al.: Advances in Human-Computer Interaction: Graphics and Animation Components for Interface Design. In: Cipolla Ficarra, F.V., de Castro Lozano, C., Nicol, E., Kratky, A., Cipolla-Ficarra, M. (eds.) HCITOCH 2010. LNCS, vol. 6529, pp. 73–86. Springer, Heidelberg (2011)
19. Cipolla-Ficarra, F.: HECHE: Heuristic Evaluation of Colours in HomepagE. In: DVD-ROM Proceedings Applied Human Factors and Ergonomics, Las Vegas (2008)
20. Quinn, B., et al.: Learning Software Engineering at a Distance. IEEE Computer 23(6), 36–43 (2006)
Annex 1

Video library fields: Code, Title, Year of production, Year of release, Awards, Support (DVD, VHS, etc.), Original title, Original language, Subtitled, Production country, Main and supporting actors and actresses, Director, Black and white/colour/combination of both, Genre, The images are real or computer created, Based on a literary work?, Duration in minutes, Collection or series or trilogy (COLSER), Targeted public: children, teenagers, family, adults (TARPUB), Price (PRICE), etc.

Bookshop fields and their mnemonic names:
- Code: CODE
- ISBN number: ISBNUM
- Title: TITLE
- Authors: AUTHOR
- Genre: GENRE
- Year of writing: YEAWRI
- Year of publication: YEAPUB
- Original language/s: ORILAG
- Total of pages: TOTPAG
- Type of the covers: TYPCOV
- Size of the book (height, width, thickness in mm): SIZEBO
- Illustrations: ILLUST
- Does it include digital support (CD or DVD)?: DIGSUP
- Publisher: PUBLIS
- Country of printing: COUPRI
- Does the book belong to a collection?: BOOCOL
- Volume number: VOLNUM
- Book for the average reader or for the visually impaired and the blind (Braille): BOOREA
- Price: PRICE
Design Configurable Aspects to Connecting Business Rules with Spring Juan G. Enriquez, Graciela Vidal, and Sandra Casas Universidad Nacional de la Patagonia Austral, Unidad Académica Río Gallegos – Campus Universitario 9400, Río Gallegos, Argentina [email protected]
Abstract. The requirements for separating business rule connectors show that an aspect-oriented approach is needed, since aspects are ideal for encapsulating the crosscutting connector code. Some contributions show that AOP reduces dependencies and coupling; thus, better reuse is achieved and maintenance efforts are reduced. However, other shortcomings appear, directly associated with the nature of the connections and the limitations of AO languages such as AspectJ. The focus of this paper is the problem of connecting business rules to the core functionality of OO software. Therefore, this work sets out to explore, propose and experiment with the design and implementation of aspectual connectors with the Spring AOP framework. Spring AOP mechanisms are very different from those of AspectJ, so the connectors can be more flexible, easier to maintain and more reusable. Keywords: Business Rules, Aspect-Oriented Programming, Crosscutting Concerns, Spring, AspectJ.
1 Introduction

Business rule implementation is an essential part of any enterprise system; it applies to many facets of business behavior that support policy and strategy changes. Business rules range from the simple, such as sending email after users have completed their purchase on an e-commerce site, to the complex, such as suggesting additional items a user might want to purchase based on the user profile, recent purchases, and product availability. Business rules tend to change over time due to new policies, new business realities, and new laws and regulations. Current mechanisms of implementing business rules require embedding the rule evaluation, or the calls to it, in the core modules of the system, thus causing the implementation to be scattered over multiple modules. A change in the rule specification requires changes in all the modules involved. These modifications are invasive and time-consuming. Further, because business rules (such as a discounting scheme) are much more volatile than core business logic (such as sales), mixing them together causes the core system to become just as volatile.

The Aspect-Oriented paradigm (AOP) [2] offers a way to separate concerns and to ensure good modularization in software systems. AOP introduces a new abstraction, called an aspect, to encapsulate a crosscutting concern. The requirements for separating business rule connectors show that an AO approach is needed, since aspects are ideal for encapsulating the crosscutting connector code. Some contributions [3] show that AOP reduces dependencies and coupling; thus, better reuse is achieved and maintenance efforts are reduced. However, other shortcomings appear, directly associated with the nature of the connections and the limitations of AOP languages such as AspectJ [4]. The focus of this paper is the problem of connecting business rules to the core functionality in OO software.
Therefore, this work explores, proposes and experiments with the design and implementation of aspectual connectors using the Spring AOP framework [5]. Spring AOP mechanisms are very different from AspectJ's, so the connectors can be more flexible, easier to maintain, and more reusable. The outline of this paper is as follows: Section 2 exposes the shortcomings of connecting business rules with the AspectJ approach; Section 3 presents a Spring overview; Section 4 unfolds our approach to connecting business rules with Spring; Section 5 presents a case study; and Section 6 gives our conclusions.
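The scattering problem can be sketched in plain Java. The classes and the 10% discount policy below are our own illustration, not the paper's case study: the first version evaluates a rule inline in a core module, so every policy change means editing the core class; the second keeps the same rule behind an interface, so the core module never changes when the rule does.

```java
// Rule embedded in the core module: any change to the discount policy
// forces an invasive edit of TangledSale itself.
class TangledSale {
    double total(double amount) {
        if (amount > 100.0) {          // business rule mixed with core logic
            return amount * 0.9;       // 10% discount policy, hard-coded
        }
        return amount;
    }
}

// The same rule kept behind an interface: the core module depends only
// on the abstraction, so policies can be swapped without touching it.
interface DiscountRule {
    double apply(double amount);
}

class TenPercentOver100 implements DiscountRule {
    public double apply(double amount) {
        return amount > 100.0 ? amount * 0.9 : amount;
    }
}

class Sale {
    private final DiscountRule rule;
    Sale(DiscountRule rule) { this.rule = rule; }
    double total(double amount) { return rule.apply(amount); }
}
```

The second shape is the starting point for the aspectual connectors discussed in the rest of the paper, which additionally remove the explicit call to the rule from the core class.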
2 Connecting Business Rules with the AspectJ Approach

The Aspect-Oriented paradigm (AOP) offers a way to separate concerns and to ensure good modularization in software systems. AOP introduces a new abstraction, called an aspect, to encapsulate a crosscutting concern. It also introduces the mechanisms to compose aspects and components. These mechanisms allow the aspect to crosscut OO abstractions. Because the AOP paradigm aims at overcoming the limitations of OOP, it must be implemented for each OO language. The de facto standard of AOP is AspectJ, which is an extension of Java. Cibrán [6] presents contributions that are directly related to the problem of the connections between business rules and the basic functionality in OO software with aspects. These are briefly summarized next.
− AspectJ allows decoupling the different parts that constitute the rule connection into separate aspects that can be reused independently. However, this results in a proliferation of aspects which is hard to manage.
− Generally, aspect relations are expressed in the same aspects that are being related (an exception is the explicit precedence relation), which reduces aspect reusability and composability.
− On the other hand, reusability of aspect code is possible through inheritance. Moreover, it is observed that AspectJ has some very powerful and low-level features that are used for solving a wide range of problems, for example percflow.
− However, sometimes the same features are used to solve semantically different concerns, thus hindering program understandability and portability.
− AspectJ's pointcuts are fragile, as they directly point to concrete places in the core application's execution.
− Additionally, instantiation and initialization are controlled by the framework itself, which can be an advantage (as in situations where the instantiation depends on complex pointcuts) but can also be restrictive when more controlled instantiation is desired.
− Although static AOP approaches, such as AspectJ, do not allow the dynamic pluggability of rule integration code, they provide a more fine-grained description of the events upon which rules can be applied. In addition, these approaches allow introducing unanticipated data required by rules quite easily in the application at hand.
− AspectJ supports load-time weaving of aspects. This allows aspects to be compiled separately, as no details about the core application on which the aspects are to be woven are needed at compilation time. In AspectJ, changes in the aspect logic are reflected by recompiling and reloading the aspects.
3 Spring AOP Framework Overview

Spring is an open source framework, created by Rod Johnson and described in his book [7]. It was created to address the complexity of enterprise application development. Spring makes it possible to use plain-vanilla JavaBeans to achieve things that were previously only possible with EJBs. However, Spring's usefulness is not limited to server-side development. Any Java application can benefit from Spring in terms of simplicity, testability and loose coupling. Spring is a lightweight dependency-injection (IoC) and aspect-oriented container and framework. Furthermore, the Spring IoC container does not depend on AOP, meaning that developers do not need to use AOP if they do not want to. AOP complements Spring IoC to provide a very capable middleware solution.

Spring 2.0 introduced a simpler and more powerful way of writing custom aspects, using either a schema-based approach or the @AspectJ annotation style. Both of these styles offer fully typed advice and use the AspectJ pointcut language, while still using Spring AOP for weaving. Spring AOP is implemented in pure Java. There is no need for a special compilation process. Spring AOP does not need to control the class loader hierarchy, and is thus suitable for use in a J2EE web container or application server. Spring AOP defaults to using standard J2SE dynamic proxies for AOP proxies. This enables any interface
(or set of interfaces) to be proxied. Spring AOP currently supports only method execution join points (advising the execution of methods on Spring beans). Field interception is not implemented, although support for field interception could be added without breaking the core Spring AOP APIs. Spring AOP's approach to AOP differs from that of most other AOP frameworks. The aim is not to provide the most complete AOP implementation, but rather to provide a close integration between the AOP implementation and Spring IoC in order to help solve common problems in enterprise applications. The Spring Framework's AOP functionality is normally used in conjunction with the Spring IoC container. Aspects are configured using normal bean definition syntax: this is a crucial difference from other AOP implementations.

3.1 Schema-Based AOP Support

Within Spring configurations, all aspect and advisor elements must be placed within an <aop:config> element. An <aop:config> element can contain pointcut, advisor, and aspect elements. Using the schema support, an aspect is simply a regular Java object defined as a bean in the Spring application context. The state and behavior are captured in the fields and methods of the object, while the pointcut and advice information is captured in the XML. An aspect is declared using the <aop:aspect> element, and the backing bean is referenced using the ref attribute. A pointcut represents, for example, the execution of any business service in the service layer. A named pointcut can be declared inside an <aop:config> element, enabling the pointcut definition to be shared across several aspects and advisors, or it can be declared inside an aspect. Five advice kinds are supported; they are declared inside <aop:aspect> using <aop:before>, <aop:after>, <aop:after-returning>, <aop:after-throwing> and <aop:around>, and they have exactly the same semantics as AspectJ advice. Examples are shown in Listings 1 and 2, where the audience bean is referenced as an aspect and performance is a named pointcut.
Listing 1. Aspect declared at the top level
In these examples, takeSeat and applaud are methods of the audience aspect. In Listing 1 the aspect is declared at the top level and includes several advice and pointcut definitions. In Listing 2, a pointcut is declared outside the aspect definition, so it can be linked to several different aspect definitions.
Listing 2. Pointcut declared at the top level
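As a sketch of the two declaration styles just described (the Performance bean, the pointcut expression and the choice of advice kinds are our assumptions, not taken from the paper), the XML might look like the following:

```xml
<!-- Listing 1 style: the aspect declared at the top level, with its own
     pointcut and advice; the ref attribute points at the backing bean. -->
<aop:config>
  <aop:aspect ref="audience">
    <aop:pointcut id="performance"
        expression="execution(* Performance.perform(..))" />
    <aop:before pointcut-ref="performance" method="takeSeat" />
    <aop:after-returning pointcut-ref="performance" method="applaud" />
  </aop:aspect>
</aop:config>

<!-- Listing 2 style: the pointcut declared at the top level, so several
     aspects can share it. -->
<aop:config>
  <aop:pointcut id="performance"
      expression="execution(* Performance.perform(..))" />
  <aop:aspect ref="audience">
    <aop:before pointcut-ref="performance" method="takeSeat" />
  </aop:aspect>
</aop:config>
```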
4 Configurable Aspectual Spring Connector

The volatility of the business rules is a problem for software developers. The dynamics of organizations generates new business rules, disables existing ones, or modifies some aspect of the current ones. When a software system is in operation, it is necessary that this type of modification can be carried out in a fast and easy way. For this reason, AOP is convenient, since it provides mechanisms that allow business rules to be connected or integrated into the domain without the need to alter these components. Just as stated in [8], AOP facilitates the constant evolution of this type of concern. However, the design and implementation of these aspectual connectors is not a trivial task. They depend not only on the business rules, but also on the architecture and design of the system in which they are to be integrated. A business rule in one system may require a basic connector, while in another system the same business rule requires a more complex connector. Next, we present guidelines for the implementation and configuration of the aspectual connectors through several sequential steps.

Step 1: Define a Business Rule

The business rules are classes that implement condition(), action() and apply() methods, as suggested by the Object Rule pattern [8]. A simple example is shown in Listing 3.

Step 2: Define the Connection Aspect

The aspectual connector should activate the business rule when program execution reaches a specific event (the call or execution of a method of any business class). This is achieved by means of two elements: the connection aspect and the AOP XML configuration. The connection aspect is a bean that participates in two configurations.
First, this bean is responsible for activating the business rule (invoking the apply method); the business rule is included as an attribute (property) of the bean and will be fired when the triggerBR method is executed. An example of the connection aspect is the AspectConnection class, in Listing 4.

interface BusinessRule {
    boolean condition (..);
    void action (..);
    void apply (..);
}

class BR#1 implements BusinessRule {
    boolean condition (..) { // if statement }
    void action (..) { // do something }
    void apply (..) {
        if (condition(..)) then action(..);
    }
}
Listing 3. Interface BusinessRule and a concrete business rule
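A runnable plain-Java rendering of this pattern may help; here the Order context, the threshold rule and the simplified AspectConnection are illustrative assumptions of ours. In Spring, the rule would be injected as a bean property and triggerBR would be bound to an event by the AOP configuration rather than called directly.

```java
// The Object Rule pattern from Listing 3, made concrete. The generic
// context parameter stands in for the ".." placeholder arguments.
interface BusinessRule<C> {
    boolean condition(C ctx);
    void action(C ctx);
    default void apply(C ctx) {
        if (condition(ctx)) {
            action(ctx);
        }
    }
}

// Illustrative context and rule (not from the paper): flag large orders.
class Order {
    double amount;
    boolean flagged;
    Order(double amount) { this.amount = amount; }
}

class LargeOrderBR implements BusinessRule<Order> {
    public boolean condition(Order o) { return o.amount > 1000.0; }
    public void action(Order o) { o.flagged = true; }
}

// A minimal stand-in for the connection aspect: it holds the injected
// rule and fires it when triggerBR is reached. In Spring, an advice
// bound in the AOP XML configuration would invoke triggerBR.
class AspectConnection {
    private BusinessRule<Order> br;   // set via <property name="br" .../>
    void setBr(BusinessRule<Order> br) { this.br = br; }
    void triggerBR(Order o) { br.apply(o); }
}
```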
The business rules require information in order to evaluate the condition. This information can be global or contextual. Global information denotes system parameters, such as the system date. Contextual information entails any specific data values or objects of the event, such as the state of the customer or the items of the purchase. In this last case, the connection aspect must obtain the required information and transmit it to the business rule. In Spring this is possible by including an argument in the triggerBR method: an object of the JoinPoint or ProceedingJoinPoint class (the former is required when the advice is before or after, the latter when the advice is around). These objects allow obtaining information from the event context, such as the "this" reference or the arguments of the intercepted method. Now, in the application context configuration, we need to inject a concrete business rule into the connection aspect. For this reason, the concrete business rule is declared as a bean, and this bean is injected into the connection aspect declaration, in the br property. An example is shown in Listing 5.
<bean id="brsd" class="..." />           <!-- declare a concrete business rule -->

<bean id="aspcon" class="...">           <!-- connection-aspect declaration -->
    <property name="br" ref="brsd" />    <!-- business rule injection -->
</bean>
Listing 5. Business rule injection in connection aspect bean
Step 3: AOP-XML Configuration

So far, the connection aspect (aspcon) has not been bound to any business event. It requires a second configuration, which is carried out entirely in the AOP-XML configuration. It is necessary to use the same bean id, in this case "aspcon", which is referenced by an <aop:aspect> element. According to the AOP schema support, this can be done in two ways, as presented in Listing 6.
Listing 6. AOP XML configurations
In the first possibility, inside the AOP XML configuration (<aop:config>), the pointcut (<aop:pointcut>) has been defined as a top-level element. Then it is possible to associate one or more aspects with it. Each aspect (<aop:aspect>) has a specific advice (after/before/around). In the second possibility, inside the AOP XML configuration, the connection aspect has been defined as the top-level element. Then it is possible to associate one or more pointcuts and advices with it.
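A sketch of these two configurations, under stated assumptions: the pointcut id pcBR and the choice of after advice are ours, and the pointcut expression is left as a placeholder.

```xml
<!-- Possibility 1: pointcut as a top-level element of <aop:config> -->
<aop:config>
  <aop:pointcut id="pcBR" expression="..." />
  <aop:aspect ref="aspcon">
    <aop:after pointcut-ref="pcBR" method="triggerBR" />
  </aop:aspect>
</aop:config>

<!-- Possibility 2: the connection aspect as the top-level element,
     with its pointcuts and advice declared inside it -->
<aop:config>
  <aop:aspect ref="aspcon">
    <aop:pointcut id="pcBR" expression="..." />
    <aop:after pointcut-ref="pcBR" method="triggerBR" />
  </aop:aspect>
</aop:config>
```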
5 Case Study: A Simple Electronic Auction

A simple electronic auction was chosen as a case study to design and configure a set of business rules and their connections with Spring. The auction manages Offers,
Bids and Users. Only registered users can create offers and bids. A bid is validated when its value is higher than the offer's base price. A bid's value can be modified, or the bid canceled, while the offer is active. An offer has some properties: user, product, base price, type, condition, bids, previous winner, winner, date, start and end time, and profit. When an offer is closed, the system checks whether there is a winning bid; if so, the winner is allocated the offer and the profit is calculated. The winner is the one who offered the highest price. Figure 1 presents a diagram that represents the functionality of the Electronic Auction.
Some business rules are required: to verify that users are registered, to calculate auction profits, to apply discounts, to assign user categories, to give scores to the winners per category, to control extreme values of bids, to control the length of time offers remain active, to define the winning bid in case of a tie, to extend the deadline of the offer if there are no bids, and to control the time for cancelling a bid already made. The set of business rules to be applied is:

BR#1: when the offer is closed, if there is a winning bid, a 3% profit is calculated on the base value.
BR#2: when the offer is closed, if there is a winning bid and the value of the bid is higher than 10000, an extra 1% profit is calculated on the base value.
BR#3: if the value of a bid is higher than 200% of the offer's base value, the bid is canceled and thus not added.

The business rules are encapsulated in different classes, as indicated in Step 1 of Section 4; these classes are: BaseProfitBR (BR#1), UpperProfitBR (BR#2) and MaxBidBR (BR#3). Three connection aspects are needed, and the injection of business rules is the following:
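The three rules can be sketched in runnable plain Java. The Offer and Bid shapes below are simplified assumptions of ours; only the rule logic follows the paper's descriptions.

```java
// Simplified stand-ins for the auction domain objects.
class Bid {
    final double value;
    Bid(double value) { this.value = value; }
}

class Offer {
    final double basePrice;
    Bid winner;
    double profit;
    Offer(double basePrice) { this.basePrice = basePrice; }
}

// BR#1: on closing, 3% profit on the base value if there is a winner.
class BaseProfitBR {
    void apply(Offer o) {
        if (o.winner != null) {
            o.profit += o.basePrice * 0.03;
        }
    }
}

// BR#2: on closing, an extra 1% on the base value if the winning bid
// exceeds 10000.
class UpperProfitBR {
    void apply(Offer o) {
        if (o.winner != null && o.winner.value > 10000.0) {
            o.profit += o.basePrice * 0.01;
        }
    }
}

// BR#3: a bid above 200% of the offer's base value is rejected.
class MaxBidBR {
    boolean accepts(Offer o, Bid b) {
        return b.value <= o.basePrice * 2.0;
    }
}
```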
The BR#1 and BR#2 rules are applied on the same pointcut, that is to say, on the same event: when the offer is closed. To specify the order of execution of the rules, the 'order' attribute is used in the aspect definition, which fixes the precedence between the aspects; when it is necessary to change this order, only the XML configuration is modified. The BR#3 rule is applied when a new bid is added. The AOP XML configuration is the following:
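A sketch of such a configuration, assuming aspcon1, aspcon2 and aspcon3 are the connection-aspect beans for BR#1, BR#2 and BR#3, and that closeOffer() and addBid() are the intercepted Auction methods (these names and the pointcut expressions are our assumptions):

```xml
<aop:config>
  <aop:pointcut id="pcClose" expression="execution(* Auction.closeOffer(..))" />
  <aop:pointcut id="pcAddBid" expression="execution(* Auction.addBid(..))" />

  <!-- BR#1 and BR#2 share the same event; the order attribute
       fixes the precedence between the two aspects -->
  <aop:aspect ref="aspcon1" order="1">
    <aop:after pointcut-ref="pcClose" method="triggerBR" />
  </aop:aspect>
  <aop:aspect ref="aspcon2" order="2">
    <aop:after pointcut-ref="pcClose" method="triggerBR" />
  </aop:aspect>

  <!-- BR#3 runs around addBid so it can cancel out-of-range bids -->
  <aop:aspect ref="aspcon3">
    <aop:around pointcut-ref="pcAddBid" method="triggerBR" />
  </aop:aspect>
</aop:config>
```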
Table 1. Summary of required changes in connection software elements

Requirement                                                    Modify XML   Modify Connection Aspect   Static/Dynamic
Define the order between two or more aspects when the
business rules are associated to the same event (pointcut)     YES          NO                         Dynamic
Change (configuration) or remove an existing business rule     YES          NO                         Dynamic
Add a new business rule                                        YES          YES                        Static
Add or remove attributes and operations of the
connection aspect                                              NO           YES                        Static
Change the advices on the rules                                YES          NO                         Dynamic
6 Conclusions

This work has presented how to design and implement the connection between business rules and core functionality using Spring aspects. After the experiences presented here, and others not described in this paper, we conclude that Spring offers some interesting features: a declarative style for integrating events and business rules, the ease of carrying out modifications and adaptations without requiring source code manipulation, the partial reuse of configurations (pointcuts, aspects, business rules), and the capacity to associate aspects in a simple way. These factors have made it possible to connect business rules of different levels of complexity with the AOP support in a flexible way. However, the aspect instantiation model (singleton) is an important restriction, particularly when it is necessary to implement complex connections. Table 1 presents a summary of the software elements that should be modified when the requirements change because of business rules volatility.
References

1. BRG: Defining Business Rules: What Are They Really? Business Rules Group (2001), http://www.businessrulesgroup.org/
2. Kiczales, G., Lamping, J., Mendhekar, A., Maeda, C., Lopes, C., Loingtier, J., Irwin, J.: Aspect-Oriented Programming. In: Liu, Y., Auletta, V. (eds.) ECOOP 1997. LNCS, vol. 1241. Springer, Heidelberg (1997)
3. Cibrán, M., D'Hondt, M., Jonckers, V.: Aspect-Oriented Programming for Connecting Business Rules. In: Proceedings of the 6th International Conference on Business Information Systems, Colorado (2003)
4. Kiczales, G., et al.: An Overview of AspectJ. In: Lee, S.H. (ed.) ECOOP 2001. LNCS, vol. 2072, p. 327. Springer, Heidelberg (2001)
5. Official Site of the Spring Framework, http://www.springsource.org/
6. Cibrán, M., D'Hondt, M.: Composable and Reusable Business Rules Using AspectJ. In: Workshop on Software Engineering Properties of Languages for Aspect Technologies (SPLAT) at the International Conference on AOSD, Boston (2003)
7. Johnson, R.: Spring Reference 3.0.0.RC1 (2004-2009)
8. Moreira, A., Araújo, J., Whittle, J.: Modeling Volatile Concerns as Aspects. In: Dubois, E., Pohl, K. (eds.) CAiSE 2006. LNCS, vol. 4001, pp. 544–558. Springer, Heidelberg (2006)
Photography and Computer Animation for Scientific Visualization: Lessons Learned

Francisco V. Cipolla Ficarra (1, 2) and Lucy Richardson (3)

1 HCI Lab. – F&F Multimedia Communic@tions Corp., ALAIPO: Asociación Latina de Interacción Persona-Ordenador, Via Pascoli, S. 15 – CP 7, 24121 Bergamo, Italy
2 AINCI: Asociación Internacional de la Comunicación Interactiva
3 Electronic Arts – Vancouver, Canada
[email protected], [email protected]
Abstract. In the current work, a series of communicability techniques to increase scientific visualization are discussed for the case of orthodontics in an off-line interactive system. This interactive system can be regarded as a kind of guideline that has survived the passing of time. Additionally, the design categories are described that have made it possible to draw up a high-quality system with scarce financial resources, since the software and the hardware used can be regarded as of domestic use. It is also shown how a simple information structure, joined to dynamic and static media, can facilitate the diffusion of science among inexperienced users. Finally, a brief state of the art of scientific visualization is presented.

Keywords: Scientific Visualization, Computer Animation, Photography, Design, Communicability, Multimedia.
reality in commercial graphical computing, for instance, because the common denominator among them is simulation. From the decade of the 1980s onwards, the main research centres have devoted themselves to it, and the results have started to be seen, for instance, in the global modelling of the environment, or environment protection. It is possible to divide this modelling into: weather forecasting, atmospheric contamination, ocean currents, and biosphere models. In some way, the four essential modes of simulation are depicted here: making forecasts, observing simulated phenomena, finding optimal solutions and controlling complex events. As for the biotechnological metamorphosis joined to telecommunications, it has for a long time determined the axis of the changes of the current international society that was shaped in the last two decades of the 20th century.

Scientific visualization starts from a reality that has to be examined and decomposed into data, which are generally abstract, such as mathematical models, for instance [7]. From these, a concrete image of that abstraction must be generated for a spectator who wants to analyze and understand the reality that surrounds him/her. The main problem of the scientific visualization process consists in finding the visualization technique that achieves the most appropriate results. The visualization techniques are one of the essential parts of the visualization process and depend to a great extent on the kind of data to be visualized (scalar, vector, tensor) and on the dimension of the domain in which these data will be depicted (one, two, three dimensions or multidimensional). Consequently, we have the complexity (number of dimensions) of the data: if, for instance, the intention is to visualize four or more dimensions related to the physical environment at the same time, whereas on the computer screen we have two, the question is how complexity can be reduced to a degree which, on the one hand, does not overwhelm the spectator and, on the other hand, does not leave out important data. Simultaneously, we have the issue of its use. Scientific visualization is used with several purposes, which may be grouped into three great sets of specific use: exploratory analysis, confirmative analysis and presentation of results:

• Exploratory analysis: a set of data is gathered without a specific hypothesis present beforehand. These data undergo an interactive information-search process that will yield as a result a visualization that supports a hypothesis about the data set. Let us remember the difference between data and information: information is the result of an analysis process starting from the data. In turn, the information may become the data of a new process in a continuous loop, through feedback.
• Confirmative analysis: one has a set of data on which a hypothesis is made. A processing of said data is carried out, which generates a visualization through which it is possible to validate or reject the initial hypothesis.
• Presentation of results: specific data about the results are known; a process is carried out that gives as a result a visualization that stresses the veracity of said facts.

The current work is structured in the following way: a brief state of the art of the static and dynamic media for scientific visualization and the 2D and 3D visualization techniques; later on, a descriptive heuristic analysis of an interactive system is made, from the point of view of each of the design categories: layout,
navigation, content, connection, panchronic and structure, categories that keep a bidirectional relationship among themselves [8]. Finally, the essential reasons are given why domestic virtual reality has not reached development levels such as those of 3D emulation on 2D screens [9] [10] [11], for instance in the medicine sector.
2 The Importance of Visualization

In Western culture, and especially since the Renaissance, images have been necessary to describe abstract concepts [12]. Geometry has been a major help for the development of mathematics and physics, whereas in the design and visualization of space it has been useful to boost engineering. Besides, with the democratization of hardware and software for computer graphics and computer animation in the nineties and at the start of the new millennium, scientific visualization has expanded to the academic milieu outside the university, for instance. Traditionally, and according to Focardi, the basic elements of visualization are [7]: definition of the visualization goals, definition of the visual language, definition of the operations and of the graphical functions, and image-building techniques. Scientific visualization belongs to the computer graphics sector, and inside it to the subset of computer animation. Its use in on-line and off-line interactive systems has demonstrated over time to be something very positive for the democratization of graphic computing and the sciences [8] [13]. Its use combined with other dynamic and/or static media inside the narration of the content, for instance, eases the understanding of the subjects being dealt with. This understanding is appreciated by specialist users in the sector as well as by users at large. For the latter group it is necessary to adjust the scientific language to everyday language: an operation that must be carried out by a real professional. In the example about orthodontics we have analyzed, the variable of the simulation of the parts of the human organism is present, which had its democratization genesis in the 1980s [14]. From the first moment it was used basically as an observation tool.
Such a concept has changed with time, since the level of detail of the images has grown, from the resolution point of view (for instance, pixels per inch) up to the most varied techniques to approach the details of static images. An example is the visualization of the works of art of the Renaissance to be printed on special tissues (figures 1 and 2). In these cases it is necessary to increase the resolution of the images until the last detail is captured in the most realistic possible way on a new canvas. A variety of 100% natural cotton cloth, with a weight of 410 grams per square meter, can guarantee an excellent quality in the details and the lighting effects of the classical paintings. Besides, the use of diverse natural dyes allows magnificent results to be reached. That is, there is no loss of clarity, and absolute chromatic truthfulness to the original painting is kept. Some of the images on the website www.haltadefinizione.com have 28 billion pixels – about 3,000 times more than the resolution of a photograph made with a normal commercial digital camera.
Fig. 1. The high resolution of the images allows a high quality visualization of the masterworks of painting on special tissues
Fig. 2. With the current techniques of qualitative visualization, the limits between the real and the virtual may escape the human eye
Now the model of considering visualization as a resource for observation has changed with the passing of time and the evolution of software and hardware, since the levels of detail have been refined, as has been seen in the previous examples, and the aspect of analysis that visualization entails has boosted the functions of constructive synthesis. Therefore, in our days it is possible to come across:

• Observation of regular and abnormal cases.
• Dynamic observation of visualization sequences.
• Dynamic exploration of the images.
• Interaction with the images, added to the modification of the simulation conditions.
From the point of view of the image as such, visualization is not restricted to reproducing on the computer screen the shape of an existing object, since it can elaborate
representative forms of phenomena that do not correspond to the reality of the object [7]. In the context of scientific visualization, one starts from algorithmic expressions distributed in space; that is, it can be said that a simulation is the result of allocating numbers to regions of space. For instance, if we apply the option of deformation of an object submitted to pressure (imitable with commercial computer graphics programs, for instance), the values correspond to three-dimensional objects. In contrast, if we are contemplating the movement of the air produced by the blades of a fan, the values correspond to possible experimental results, that is, values that would be found by a certain measurement procedure if it were applied at that point. Now, bearing in mind that human beings perceive messages through the eyes and the ears, it is necessary to establish a visual language. Therefore, it is necessary to define what shapes are associated with particular selections of values inside a numeric field. Certain categories of shapes, surfaces and colours must be defined, regarded as essential inside the system that is being studied. In some way we are facing a symbolic system, which in the case of visualization does not have an isolated meaning, but means more when considered as a whole. Hence the importance of choosing the most representative elements of the system, so that scientific visualization makes up a recognisable system. Otherwise, one would fall into a misunderstanding of the image, given the wealth of information that is concentrated in it.
3 Three-Dimensional Visualization Techniques on 2D Screens

Obviously, if we are using graphic symbols of great complexity, this entails resorting to techniques capable of achieving more complete visual effects, and with high quality, as if we were dealing with a picture of a resolution similar to that used for the representation of figures 1 and 2. For instance, if three-dimensional images are used, one should access them from several angles, eliminating those elements that prevent reaching the final goal, such as the tissues that surround the human heart, the muscles of the arms, etc. In some way, such principles are inside the notion of virtual reality, since they allow the shifting of the observer in the three directions of the coordinates (x, y and z). Additionally, there are several visualization techniques for scientific images, as in the case of the combination of virtual reality and holography. Evidently it is not easy to build a three-dimensional image of very high accuracy, as required in medicine, on a bi-dimensional screen. This entails limits of software, design and communicability, among others, to be overcome. For instance, stereoscopic vision glasses make it possible to overcome said difficulty. In a rudimentary way, they can be described as two small screens of stereoscopic view, which generate images of a high degree of realism, in such a way that a simple movement of the user or observer will change the point of view inside the visualized scene; helped by interactive gloves, the movements of the hand are turned into input signals for the computer. In this case, it is important to realize that the computer plays an important role, because with a powerful processor the speed of the generation of high-resolution images can be increased, which also gives greater realism. Also with holography a high level of realism can be reached, since it generates a 3D object in space almost identical to the
Photography and Computer Animation for Scientific Visualization: Lessons Learned
real one. Moreover, in computer-based object design, surface and depth effects can be applied: lights and shadows, transparency of solids and surfaces, textures emulating reality, special effects, etc. These three-dimensional images can be built from scratch on the computer screen with CAD (Computer-Aided Design) applications, commercial or otherwise, or with a three-dimensional scanner that generates virtual objects from real ones. In the 1990s, an easy way to reduce the production costs of images for scientific visualization in orthodontics was to produce digital scale models of dental anatomy from casts using 3D scanner systems. This was the strategy followed in the CD-ROM "Interactive Visualization in 3D of the Inverse Anchorage Technique in Orthodontics" [14]. Analyzing the main categories into which interactive design can be divided (layout, navigation, structure, content, connection and panchronism), we find that navigating this off-line interactive system is very easy for users. Communicability is also high, even for those who regard themselves as inexpert [8]. The interface is very simple, and from the point of view of structure the contents are divided into large collections of information. The presence of dynamic and static media enriches the system, since high-quality photographs converge with computer animations in which, for example, the morphing technique is used for before-and-after comparisons of treatments performed by the orthodontics specialist. Next, we present a heuristic analysis of the content to identify some features of the design. The main screen offers four main collections, each containing several sub-collections with the specific information. The main options are: concept, description, materials and treatment.
The treatment collection contains several guided links, such as treatments and extraction. The latter offers several linearly guided links for the maxillary and mandibular zones: cementing, extraction, cementing and subsequent levelling, subsequent retraction, anterior retraction, stabilization and settlement. Starting from a real photograph and a three-dimensional simulation of the teeth from diverse angles, we see that the mouth of a young woman presents a malformation known technically in orthodontics as type III. The audio is activated and explains the details of the malformation. The computer animation then allows one to get a better picture of the problem. In Figure 3 (intraoral frontal and intraoral profile), for instance, it can be seen how two teeth must be extracted and braces then applied, achieving the excellent result shown in Figure 5. The computer animation presents, step by step, the arrangement of all the dental pieces and shows how the (real) wiring system works once inserted in the mouth of the teenager, with overall views from different angles and in detail, as correct scientific visualization requires (Figure 4). The realism reached here is surprising: one can even see illumination effects that convey the distance between the tooth and the braces the patient has to wear to correct her pathology.
F.V. Cipolla Ficarra and L. Richardson
Fig. 3. Combination of digital photography, analog information (cephalogram) and 3D simulation
Fig. 4. With computer graphics and computer animation, the arrangement of the dental pieces can be seen in overall views from different angles and in detail
Fig. 5. Medical information (X-rays and cephalograms) and digital photography
The results achieved after applying the whole mouth-fixing process can also be seen in the X-rays and cephalograms (Figure 5). A morphing of photographs shows how a kind of bump in the cheeks is eliminated. What was extraordinary in the 1990s and reserved for special cases, such as having a digitalized radiography, is today commonplace in many places where the health service is public, that is, receiving it on a CD. As for the morphing technique, its didactic character remains valid in our times. Evidently, the geometric transformations of computer animation have become another element of audiovisual narrative in many films with special effects. Yet morphing is not, as some apocalyptics of educational marketing claim, a totally obsolete technique applied for merely commercial purposes. In this sense, a more integrated vision [15] of the special effects deriving from the different environments of computer graphics is necessary when the goal is to communicate contents to the base of the local and world population pyramid. From the point of view of information structure, this is a classical example of the normal pyramid for presenting the contents at the start, while the sub-collections use a kind of inverted pyramid in order to speed up the reading of the contents. The guided links ease access to the information, even for expert users interested in a linear reading of the contents, whether textual or in audio, such as the explanations given by the orthodontics experts at each step taken to solve the mouth problem. The interface of the system follows the essential concepts of the Macintosh style guide [16].
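The before-and-after comparisons described above rest on a simple idea: intermediate frames are interpolated between two images of the same subject. As a minimal sketch, the blending component of a morph can be written in a few lines; a full morph would additionally warp corresponding feature points, which is omitted here. Images are represented as plain lists of pixel rows for illustration.

```python
def cross_dissolve(before, after, steps):
    """Generate intermediate frames between two aligned grayscale images.

    `before` and `after` are lists of pixel rows of equal size. A full
    morph would also warp corresponding feature points; this sketch
    shows only the pixel-blending component.
    """
    frames = []
    for i in range(steps):
        t = i / (steps - 1)  # interpolation parameter in [0, 1]
        frame = [[(1.0 - t) * b + t * a for b, a in zip(brow, arow)]
                 for brow, arow in zip(before, after)]
        frames.append(frame)
    return frames

# Toy 2x2 "images": the first frame equals `before`, the last `after`.
before = [[0.0, 0.0], [0.0, 0.0]]
after = [[255.0, 255.0], [255.0, 255.0]]
frames = cross_dissolve(before, after, steps=5)
print(frames[2][0][0])  # midpoint frame pixel: 127.5
```

The didactic value noted in the text comes precisely from this gradual transition: the patient or student sees the treatment outcome emerge continuously rather than as an abrupt cut.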
4 Lessons Learned and Future Research Directions

The essential "Grand Challenges" stated by Kenneth Wilson have been met in less than two decades. One of them was scientific visualization, achieved in the work examined here without resorting to virtual reality. In the 1990s, virtual reality was
presented in Southern Europe as the panacea for visiting virtual spaces related mainly to architecture, videogames and computer art. In videogames and computer art, however, the works presented did not show a sufficient level of creativity and originality. Technical failings could be detected in many creations, from the modelling of the three-dimensional objects to the lighting, including navigation. In those settings such failings were regarded as secondary; in many cases they became yet another element of the artistic work, which could carry a high dose of abstraction. Evidently, the investments in software and hardware were considerable and the results did not justify the costs. In such a context, scientific visualization, which requires high accuracy, could not expand through the normal canons of reality simulation, but rather through emulation. The analyzed system shows how the combination of 2D and 3D can become a design guide for interactive systems aimed at those parts of medicine where the accuracy requirements are very high. Information structured in a simple way facilitates not only communicability but also the usability of the system among inexpert users. The use of medical material such as radiographies, computerized axial tomographies, cephalograms, etc., combined with 3D computer animations, can yield realism and high didactic value compared with the 3D images of virtual systems in which movement through the virtual environment is prized over the rendering of the final images. Diachronic comparisons of images in medicine always have a positive value. They not only show a before and an after; from the psychological point of view, the morphing technique is also very positive for the patient, for instance before undergoing operations that affect physical appearance, particularly of the face, as in orthodontics.
Our future work will aim at creating a database of off-line multimedia systems on CD-ROM or DVD-ROM support which can widen the set of quality examples in interactive design. We also plan to draw up a style guide for Web 2.0 and Web 3.0 on the combination of static and dynamic media related to scientific visualization aimed at medicine.
5 Conclusions

At first sight the CD-ROM may appear to many designers of interactive systems to be an obsolete support for interactive information. It is not, since images generated on analog supports (i.e., radiographies) in current medicine have been carried over to digital support on CD-ROM. The analyzed system shows that the bases of interactive design established in the 1990s are not only still in force but have become something of a guideline for the design of interactive products related to medicine. The cardinal points of these interactive multimedia systems are simplicity of use, universality of contents and communicability. Although scientific visualization is an environment in which large volumes of data must be processed in order to depict the reality surrounding the human being as accurately as possible, several techniques deriving from the interactive design of the 1990s can facilitate that task. Moreover, an evaluator of the communicability of these off-line interactive systems can quickly detect the main features of the different design categories. Especially in the category of the
presentation of the information in the interface and the combination of dynamic and static media, isotopic lines can be traced to the other design categories, such as the way the information and the navigation through the different contents are structured. It has also been seen that in scientific visualization the temporal component is almost always present. Consequently, computer animations play an important role in helping to understand the world that scientific visualization tries to recreate. In the case of orthodontics, photographs combined with 3D have cut down the costs of a high-communicability interactive product.
Acknowledgments. A special thanks to Electronic Arts (Madrid, Spain), Emma Nicol (University of Strathclyde), Sonia Arellano (Universidad de Puerto Rico), Maria Ficarra (Alaipo & Ainci – Italy and Spain) and Carlos for their help.
References

1. Dougherty, M., et al.: Unifying Biological Image Formats with HDF5. Communications of the ACM 52(10), 42–47 (2009)
2. Rubin, D., et al.: Annotation and Image Markup: Accessing and Interoperating with the Semantic Content in Medical Imaging. IEEE Intelligent Systems 24(1), 57–65 (2009)
3. Weitzel, M., et al.: A Web 2.0 Model for Patient-Centered Health Informatics Applications. IEEE Computer 43(7), 43–50 (2010)
4. Saltz, J., et al.: e-Science, caGrid, and Translational Biomedical Research. IEEE Computer 41(11), 58–66 (2008)
5. Kari, L., Rozenberg, G.: The Many Facets of Natural Computing. Communications of the ACM 51(10), 72–83 (2008)
6. Wilson, K.: Grand Challenges to Computational Science. Future Generation Computer Systems 5(2-3), 171–189 (1989)
7. Focardi, S.: La simulazione della realtà. Editrice il Rostro, Milano (2007)
8. Cipolla-Ficarra, F.: Quality and Communicability for Interactive Hypermedia Systems: Concepts and Practices for Design. IGI Global, Hershey (2010)
9. Scharver, C., et al.: Designing Cranial Implants in a Haptic Augmented Reality Environment. Communications of the ACM 47(8), 32–39 (2004)
10. Hunter, P., et al.: Multiscale Modeling: Physiome Project Standards, Tools, and Databases. IEEE Computer 39(11), 48–53 (2006)
11. Ackroyd, K., et al.: Scientific Software Development at a Research Facility. IEEE Software 25(4), 44–51 (2008)
12. Barrow, J.: Le immagini della scienza. Mondadori, Milano (2009)
13. Minnery, B., Fine, M.: Neuroscience and the Future of Human-Computer Interaction. Interactions 16(2), 70–75 (2009)
14. Digital Illusion CD-ROM: Técnica de Anclaje Diverso en Ortodoncia. Matrust, Barcelona (1995)
15. Eco, U.: Apocalittici e integrati. Bompiani, Milano (2001)
16. Apple: Macintosh Human Interface Guidelines. Addison-Wesley, Massachusetts (1992)
Re-viewing 3D – Implications of the Latest Developments in Stereoscopic Display Technology for a New Iteration of 3D Interfaces in Consumer Devices

Andreas Kratky

University of Southern California, Interactive Media Division, School of Cinematic Arts, 900 West 34th Street, SCA 201, Los Angeles, CA 90089-2211
[email protected]
Abstract. The widespread interest in 3D imagery fueled by recent successful film experiences has triggered a wave of research and investment in immersive 3D visualization techniques. A new domain of deployment for these display systems will be that of handheld mobile devices, with diverse and heterogeneous usage scenarios. The design of interfaces for these devices and usage patterns poses a complex set of research questions revolving around the integration of virtual space, real space and social space. This article identifies some of the main threads and research directions in this field.

Keywords: Spatial Representation, Auto-stereoscopic Display Systems, Ubiquitous Computing, HCI, 3D User Interfaces.
the viewer. These devices range from cameras to computing applications and mobile devices. Fujifilm introduced a consumer-grade stereoscopic still camera equipped with a stereoscopic display, along with image viewing frames with stereoscopic displays for the images shot with the 3D camera; a growing number of stereoscopic computer monitors are entering the market; and there are already solutions that extend the display systems of handheld devices such as Apple's iPod, iPhone and iPad product line into stereoscopic viewing, either on the market or about to enter it. The fact that these latter devices are tools for general users who interact with them obliges interface designers to rethink their design concepts and to consider the implications depth-managed display systems will have on interaction techniques and workflows. Just as filmmakers specializing in depth-managed 3D filmmaking have to rethink their visual vocabulary, interface designers have to adjust their professional practice to enable users to interact seamlessly with the new class of devices with stereoscopic display systems. This paper reviews some established interface design practices and investigates the challenges posed by the recent class of handheld mobile devices in view of their adoption of stereoscopic display systems. It concludes with an outline of the usability and communicability issues raised by this shift to stereoscopic representations.
2 A New Iteration of 3D User Interfaces

2.1 Perceptual Aspects of the GUI as a Prototypical 3D Interface

Rendering data in three dimensions has a long-standing tradition in human-computer interaction, and many efforts have been made to use 3-dimensional information rendering to ease the understanding of data structures. Even the common window-based GUI, though its archetypal incarnation is a 2-dimensional layout of overlapping windows, uses the visual language of spatial rendering to harmonize a multiplicity of concurrently running applications within one coherent containing space. The only spatial cue used in this context is the depth stacking and overlapping of windows, which employs occlusion to direct and represent the focus of the viewer. Occlusion is one of the important cues human beings use to infer depth. In later iterations of the window-based GUI, shading was added to the vocabulary of depth cues in order to render user-manipulable elements 3-dimensionally and make them appear more "haptic", so that they can easily be distinguished from non-manipulable elements. The notion of an interface containing multiple windows derives from the document metaphor: each window is a document (i.e., a piece of paper) shuffled around with other papers on the desktop. The spatial extension of this document space is therefore minimal, since paper has minimal thickness. With a stronger focus on handling complex data sets, efforts were undertaken to extend the depth dimension of the manipulated environment and move towards "information terrains". While the flat approach offers only two dimensions onto which aspects of the data can be mapped, the extension to a third dimension adds another axis for mapping statistical clusters and proximity measures of data sets across several parameters.
Three-dimensional information spaces provide the benefit that users can navigate the represented data using the experience human beings have of navigating real space. Walk-throughs and fly-throughs, and the organization of data into structures like cities, buildings and rooms, provide effective means of making the information space legible to the user [5]. Extensive research has gone into the construction and evaluation of 3-dimensional renditions of complex data, but most of the applications informed by this research remain within specialized domains. This is due on one hand to expensive and cumbersome computing and display technologies, and on the other to the specialized nature of solutions for particular kinds of data sets. While human beings are very good at evaluating depth cues such as shading, relative size, perspective, occlusion, parallax and contrast in order to interpret information encoded in the third dimension, it was mostly expensive and cumbersome to add the very effective depth cue of stereoscopic parallax. Complex display systems capable of feeding distinct information to the left and right eye, comprising head-mounted displays (HMD), polarized projection systems, CAVE environments and others, were required for this purpose, restricting the application of stereoscopic rendition largely to specialized fields like the automotive industry or simulation environments [6]. Without stereoscopic depth information it is much harder to intuitively decipher 3-dimensional representations and to navigate the information space [7]. Because the display systems necessary for stereoscopic images were difficult to make available to a general public, the important depth cue of stereopsis was missing from consumer applications, making 3D information spaces harder to use.
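The stereoscopic parallax discussed above can be made concrete with a little geometry. For eyes separated by a distance e, a viewer at distance D from the screen, and a point at distance Z, similar triangles give a horizontal screen disparity of e(Z − D)/Z: positive behind the screen plane, negative in front, zero on the plane itself. A minimal sketch, with the numeric values chosen here purely for illustration:

```python
def screen_disparity(eye_sep, viewer_dist, point_dist):
    """Horizontal disparity on the screen plane (same units as inputs).

    Derived by similar triangles: positive values place the point
    behind the screen, negative values in front of it, and zero puts
    it exactly on the screen plane.
    """
    return eye_sep * (point_dist - viewer_dist) / point_dist

# Illustrative values: 6.5 cm eye separation, viewer 50 cm from screen.
print(screen_disparity(6.5, 50.0, 50.0))   # on the screen plane: 0.0
print(screen_disparity(6.5, 50.0, 100.0))  # behind the screen: 3.25
print(screen_disparity(6.5, 50.0, 25.0))   # in front of it: -6.5
```

A stereoscopic display system is, in essence, any mechanism that delivers image pairs offset by exactly these disparities to the correct eyes.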
Even though several attempts were made to deliver real-time 3D content over the Internet, with VRML or Shockwave 3D, it was never adopted on a broad basis, due to usability and navigation issues [8].

2.2 A New Class of Devices Raises New Questions

With the significant growth in attention towards 3D-enabled display systems and a steady outflow of these devices into the consumer market, we will re-encounter many of the questions dealt with in the past in the context of specialized applications for data visualization or 3D modeling, now in a more universal context targeted at general-purpose computing, text processing and gaming. While the game industry has made progress in bringing the virtual worlds of games in stereoscopic 3D onto the home computer screen, these efforts have not encompassed other aspects of computer operation such as file management. We now see a new iteration of design studies for operating systems that use 3-dimensional rendering to optimize screen real estate, and these approaches will benefit from the increasing general availability of stereoscopic display systems. Already in the 1990s several studies of operating systems with 3D visualization were circulating, such as the File System Navigator (FSN) by Silicon Graphics, and recent patent filings by Apple Inc. indicate that research in this direction has sprung up again [9]. While development of novel 3D interfaces for the desktop environment will be able to draw on previous research and development, another set of questions is raised by the new class of mobile devices equipped with 3D display systems. For these devices to succeed, it is important to develop seamless and unencumbered
viewing solutions. The use of glasses for stereoscopic vision is not likely to succeed in this field, so auto-stereoscopic solutions will have to be employed. The new class of devices is well positioned to enable, on rather small screens, solutions that have traditionally been problematic on larger ones. Most mobile devices have screens between 3.5 and 10 inches in diagonal, which makes it much easier to solve the viewpoint dependence of 3-dimensional representations. It is difficult to accommodate a large viewing angle on large auto-stereoscopic screens, but the problem is much easier to solve on small screens, where the viewer is very likely to be at a certain distance from the screen and at a rather well-defined position. The use of lenticular lens systems, which employ an array of lenses to direct the interlaced portions of left- and right-eye images to the respective eyes, is one of the most common technologies for producing an auto-stereoscopic display that requires no glasses or other tools to achieve the left and right channel separation. Small lens arrays are problematic when the position of the viewer can vary significantly, but the small screens of phone or tablet devices minimize this problem. Besides encoding the parallax between the viewpoints of the left and right eye, it is also possible to obtain parallax information through head movement: when viewers move their head with respect to the viewed scene, the change in viewpoint produces a parallax that is interpreted as depth information. This kind of parallax viewing is normally implemented with complicated head-tracking mechanisms that locate markers fixed to the head, allowing the viewer's position relative to the virtual scene rendered in 3D to be reconstructed.
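The column interleaving that a lenticular display relies on can be sketched in a few lines. This is an illustrative simplification (two views, one pixel column per lens position, images as lists of rows), not a description of any particular product:

```python
def interleave_columns(left, right):
    """Build a lenticular-style interlaced image from two eye views.

    Alternating columns are taken from each view; the lens array then
    steers even columns to one eye and odd columns to the other.
    Both views are lists of rows and must share the same dimensions.
    """
    interlaced = []
    for lrow, rrow in zip(left, right):
        row = [lrow[x] if x % 2 == 0 else rrow[x]
               for x in range(len(lrow))]
        interlaced.append(row)
    return interlaced

# Toy 1x4 "images": left-view pixels land in even columns,
# right-view pixels in odd columns.
left = [["L0", "L1", "L2", "L3"]]
right = [["R0", "R1", "R2", "R3"]]
print(interleave_columns(left, right))  # [['L0', 'R1', 'L2', 'R3']]
```

Each eye thus sees only half of the horizontal resolution, which is one reason the technique suits small, high-density screens better than large ones.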
Here again the small screen sizes of mobile devices help, since they force viewers to stay within certain boundaries in order to see the screen at all, which in turn allows their position to be inferred. Since viewers of a small, lightweight device are less likely to move their head in order to "look around" an object on the screen than to tilt the device itself, the gyro-sensors built into most of these devices can be used to detect the tilting motion and calculate the resulting shift in the image. This simple technique produces a rather strong sensation of depth without relying on glasses or other tools, and the two approaches in tandem can produce very compelling results. The larger and much less studied set of questions raised by the new class of mobile devices concerns their hybrid usage scenario. Mobile devices serve an application spectrum ranging across text, voice and image communication, Internet and data browsing, document processing, geographical navigation and entertainment. These tasks are not new, but they are increasingly solved with computing technology, and in the usage profile of handheld mobile devices they constitute a new challenge for user-interface design.
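The gyro-driven parallax just described amounts to mapping a tilt angle to a horizontal offset of the rendered scene layers, with nearer layers shifted more than distant ones. A minimal sketch under assumed, simplified conditions; the gain constant and layer depths are invented for illustration:

```python
import math

def parallax_offsets(tilt_deg, layer_depths, gain=40.0):
    """Horizontal pixel offset per scene layer for a given device tilt.

    Nearer layers (smaller depth value) shift further than distant
    ones, which the visual system reads as depth. `gain` is an
    arbitrary tuning constant, not a physically derived value.
    """
    shift = gain * math.sin(math.radians(tilt_deg))
    return [shift / depth for depth in layer_depths]

# Tilting the device 10 degrees: the near layer (depth 1) moves
# furthest, the far layer (depth 4) barely moves.
offsets = parallax_offsets(10.0, [1.0, 2.0, 4.0])
print(offsets)
```

In a real application the tilt angle would come from the device's gyro or accelerometer APIs; the per-layer offsets would then be applied when compositing the scene.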
3 User Interface Design Challenges of Ubiquitous Computing

3.1 Convergence of Applications and Overlapping of Virtual and Real Space

As computing becomes truly ubiquitous, with devices that not only have sufficient computing power but also the industrial design qualities that make them desirable
and portable at the same time [10], HCI specialists have to adjust their concepts and designs for the hybrid and highly integrated tasks performed with these devices. Developed from smart phones into a wide range of computing applications, these devices were originally conceived for communication, and this will surely remain their dominant field of application. But communication goes far beyond voice calls and text messaging: it includes chat and video chat and a broad range of social networking applications, each with its dedicated interface more or less ported from the desktop computer to the handheld device. The challenge will be to provide a growing integration with the other mainstay of mobile applications: navigation software. Probably the second most important function mobile devices support is wayfinding in the real world. With integrated GPS units and navigation software, or simply Google Maps, users rely on their cellphones to guide them to their destination. With functions like tracking the geo-locations of friends, social recommendations of places, or the virtually manifested presence of people in real spaces as enabled by applications like Foursquare [11], the integration of social networking and geolocation services becomes ever stronger. Future interfaces will therefore need to accommodate a mapping of virtual proximities and relations onto real-space locations. At the same time, real space plays an increasing role in file organization, as geo-tags are more and more adopted as a way of organizing files such as photos and film clips produced with the handheld device and automatically georeferenced.
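Organizing geo-referenced files of the kind just mentioned can be as simple as bucketing coordinates into a coarse grid. A hypothetical sketch; the grid resolution, file names and coordinates are all invented for illustration:

```python
from collections import defaultdict

def group_by_location(photos, cell_deg=0.01):
    """Group geo-tagged photos into grid cells of about 1 km.

    `photos` is a list of (filename, latitude, longitude) tuples; the
    result maps a quantized (lat, lon) cell to the files inside it.
    """
    groups = defaultdict(list)
    for name, lat, lon in photos:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        groups[cell].append(name)
    return dict(groups)

photos = [
    ("a.jpg", 34.0205, -118.2856),  # two shots near the same spot...
    ("b.jpg", 34.0207, -118.2859),
    ("c.jpg", 41.9028, 12.4964),    # ...and one far away
]
groups = group_by_location(photos)
print(len(groups))  # 2 location clusters
```

An interface built on such grouping would then present the clusters on a map rather than as a flat file list, which is exactly the shift from file space to real space the text describes.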
In the developing field of crowd-sourced data gathering, a number of measurement and monitoring tasks build on the capacity of modern mobile devices to record sound and images together with geo-references, using them for participatory sensing projects. We can anticipate that this kind of sensing will be carried out by a large number of users, both to better understand their immediate environment and to contribute to larger-scale monitoring projects. The data collected range from personal life-logging [12] to air-quality measurement [13]. Future user-interface design will have to accommodate this kind of activity, as well as the representation of the collected data in relation to the real space of the user's environment. Alongside the described usage scenarios, users will access entertainment applications such as movies and games, which are very likely to deliver content in stereoscopic 3D formats soon. When such content is consumed, for example, during a journey through real space, the representations of the virtual space of social networking contacts, the real space of the journey and the virtual space of the entertainment application merge, and need to form a seamless, integrated and at the same time controllable experience. Part of this hybridized experience will also involve data sets stemming from Internet searches and, increasingly, application interfaces for tasks like note taking, text processing and spreadsheet editing that have commonly been handled on desktop or laptop computers and are moving into the mobile setting as well. As a last piece in the puzzle of future interface components, we should mention the use of mobile handheld devices to control a wide range of appliances, public furniture such as information kiosks, and robotic machinery.
The usage scenario as a remote control for home entertainment machines such as TV sets will not require complex interface support, but robotic and vehicle-control applications will greatly benefit from
well-designed 3-dimensional representations that allow the user to make judgments about distance and other spatial properties; such judgments are well supported by stereoscopic displays. Applications of this genre are still prototypes, but it is likely that in one form or another we will see them come to consumer devices [14].

3.2 Designing Hetero-spatial Representations

The described convergence of virtual space, real space, abstract data spaces, and individual as well as social space raises a wide array of research questions and will make human-computer interface designers rethink their concepts. It is the coming task of HCI specialists to enable users of mobile handheld devices to negotiate the overlapping and integration of multiple spaces by supporting them with appropriate interfaces. Most of these application domains have so far been treated in an isolated fashion, and research exists on how to solve various visualization tasks individually. The common way of integrating these individual solutions into an enveloping framework, such as running different visualizations inside one operating system, employs the window metaphor. As with documents in the classical desktop-metaphor GUI, the notion of individual documents positioned on the desktop is extended to comprise multiple windows containing different applications. Each window opens a view into one coherent space (such as an application or the rendition of a virtual space) delimited by the bounds of the window. The spaces contained in windows coexist next to each other within an environment that mostly still follows the desktop paradigm. This metaphor of a viewing frame is in keeping with the basic rules of linear perspective, which has been the common way of depicting spatial information since the Renaissance.
A perspective window conveys the impression of looking through the frame into a space that extends beyond it and is governed by consistent rules [15]. This concept not only keeps the different views and spaces separate; it is also suitable for rendering the perspectival projection on a flat screen. The user of a window-based interface negotiates the different spaces by managing the windows. In the setting of small-screen stereoscopic depictions of spaces, we will have to reconsider this metaphor. The spatial impression created by the stereoscopic rendition is likely to "break" the frame and extend beyond it, and the coexistence of frames delimiting the stereoscopic views can create visual artifacts that interfere with the spatial impression. This difficulty is compounded by the need to resolve multiple frames with different spatial characteristics. In this sense we have to reconsider the composite nature of information delivered by presenting different frames with different information contexts. Experience in editing 3-dimensional moving images has made it clear that it is easier for the viewer when the frame of reference is kept constant, abandoning the composite experience of a fast montage of short sequences showing different viewpoints, which is the standard editing technique today. It appears beneficial instead to shoot longer sequences and let the camera travel across the space in order to adopt a different viewpoint. This not only keeps the frame of reference intact throughout the displacement; it also enhances the spatial effect by introducing parallax through the camera movement.
Another reason to rethink the window metaphor is the limited screen real estate of mobile devices, which makes it difficult to accommodate multiple windows and allow easy navigation among them. In most current applications the entire screen is used, and the mobile device itself becomes the frame delimiting the represented virtual space against the real space surrounding it. This means that interface elements such as buttons will coexist with the rendition of the space in the same 3-dimensional environment, moving away from the currently established metaphor of toolbars that keep interaction elements in a space distinct from the context manipulated with them. Combining the rendition of a virtual space with the surrounding real space also introduces a problem of eye accommodation: the eye has to switch focus between the different environments, and it is difficult to view them together. This is a known problem in aerospace and vehicle navigation, where displays are combined with the view of the environment through which the vehicle is traveling; the construction of a head-up display (HUD) that merges correctly with the view of the environment has triggered a great deal of research. Besides the problem of focus, questions of attention are involved [16]. Valuable research has been done in the domain of mixed-reality environments, though applications in this area are currently mainly at the prototype or experimental stage and have not moved on into consumer applications [17]. Combinations of mixed-reality approaches with stereoscopic rendition have also been undertaken [18]. Both of these research threads will need to be combined and applied to the small screen of the handheld device. While many relevant findings exist in the domain of visual computing, much less is to be found in the area of visualizing social spaces.
In most data visualizations or renditions of spaces there is little possibility for subjective influence by the user. Most visualizations of shared virtual spaces look the same for every user, which is helpful in providing a shared reference and in easing communication about the space, but in many cases corresponds neither to the actual perception of users nor to their particular interests and objectives [18]. Our concept of rendering social space has to be informed by an understanding of the social relations that constitute places of interaction and of how these influence the conduct of individuals [19].
4 Conclusions and Future Research

From the described indicators we can conclude that 3-dimensional stereoscopic interfaces will be part of our future computing devices, and that this shift from 2-dimensional display systems to 3-dimensional systems will bring a multiplicity of new questions to human-computer interaction and interface design. A particular research emphasis will be on the concept and design of interfaces accommodating the hybrid usage scenarios of mobile computing devices. At the current time these devices are still in an early phase, and attachments enabling stereoscopic displays are not yet fully integrated. Elements such as stereoscopic cameras built into the devices, which could produce full stereoscopic images for 3-dimensional augmented-reality displays, are still missing. We can see, though, that the technology is reaching market readiness and will be available soon. Design strategies as well as evaluation criteria to measure their effectiveness have to be formulated now, while the hardware is getting ready. The purpose of this article is to identify a set of interlinked problems and circumscribe some of the basic research directions in order to facilitate this formulation process.
References

1. The Numbers: Movie Avatar. Nash Information Services, http://www.the-numbers.com/movies/2009/AVATR.php (retrieved October 15, 2010)
2. Box Office Mojo: Avatar. Service of the Internet Movie Database IMDb (2009), http://www.boxofficemojo.com/movies/?id=avatar.htm (retrieved October 15, 2010)
3. Good Morning America: Will ‘Avatar’ make viewers nauseous? ABC News, http://abcnews.go.com/GMA/avatar-movie-making-viewers-nauseous/story?id=9370714&page=1 (retrieved October 15, 2010)
4. Verrier, R.: 3-D technology firm RealD has a starring role at movie theaters. Los Angeles Times, March 26 (2009), http://articles.latimes.com/2009/mar/26/business/fi-cotown-reald26 (retrieved October 15, 2010)
5. Mariani, J.: Visualization Approaches and Techniques. In: Visualization of Structure and Population within Electronic Landscapes, Deliverable 3.1, The escape Project, ESPRIT Long Term Research Project 25377, pp. 8–34 (1998)
6. Holliman, N.S.: 3D Display Systems. In: Dakin, J.P., Brown, R.G.W. (eds.) Handbook of Optoelectronics, vol. II. Taylor & Francis, Abington (2006)
7. Schor, C.: Spatial constraints of stereopsis in video displays. In: Ellis, S.R., Kaiser, M., Grunwald, A.J. (eds.) Pictorial Communication in Virtual and Real Environments, pp. 546–557. Taylor & Francis, Abington (1993)
8. Abásolo, M.J., Della, J.M.: Magallanes: 3D Navigation for Everybody. In: Proceedings of the ACM Conference GRAPHITE, pp. 135–142 (2007)
9. Chaudri, I.A., et al.: Multidimensional Desktop. US Patent and Trademark Office, Patent Application 20080307360 (2008), http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PG01&s1=20080307360&OS=20080307360&RS=20080307360 (retrieved October 15, 2010)
10. Negroponte, N.: Being Digital, p. 71. A. Knopf, New York (1995)
11. Foursquare Inc., social networking provider, http://foursquare.com/ (retrieved October 15, 2010)
12. Byrne, D., Jones, G.J.F.: Towards Computational Autobiographical Narratives through Human Digital Memories. In: Proceedings of the 2nd ACM Workshop on Story Representation, Mechanism and Context, pp. 9–12 (2008)
13. Kanjo, E., et al.: MobGeoSen: facilitating personal geosensor data collection and visualization using mobile phones. Pers. Ubiquit. Comput., 599–607 (2008)
14. Wang, M., Ganjimeh, T.: Remote controlling an autonomous car with an iPhone. Tech. Rep. B-10-02, Free University of Berlin, pp. 1–6 (2010)
15. Panofsky, E.: Perspective as Symbolic Form. Zone Books (2005)
16. Roscoe, S.N.: The eyes prefer real images. In: Ellis, S.R., Kaiser, M., Grunwald, A.J. (eds.) Pictorial Communication in Virtual and Real Environments, pp. 577–585. Taylor & Francis, Abington (1993)
17. Coutrix, C., Nigay, L.: Mixed Reality: A Model of Mixed Interaction. In: Proceedings of the ACM Conference on Advanced Visual Interfaces (AVI), pp. 43–50 (2006)
18. Lindfors, C., et al.: ASTOR: An Autostereoscopic Optical See-Through Augmented Reality System. In: Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2005)
19. Mariani, J.: Visualization Approaches and Techniques. In: Visualization of Structure and Population within Electronic Landscapes, Deliverable 3.1, The escape Project, ESPRIT Long Term Research Project 25377, pp. 8–34 (1998)
20. Crabtree, A.: Remarks on the social organization of space and place. Journal for Mundane Behaviour 1(1), 25–44 (2000)
Software Management Applications, Textile CAD and Human Factors: A Dreadful Industrial Example for Information and Communication Technology

Francisco V. Cipolla Ficarra (1,2) and Valeria M. Ficarra (2)

HCI Lab. – F&F Multimedia Communic@tions Corp.
(1) ALAIPO: Asociación Latina de Interacción Persona-Ordenador
(2) AINCI: Asociación Internacional de la Comunicación Interactiva
Via Pascoli, S. 15 – CP 7, 24121 Bergamo, Italy
[email protected], [email protected]
Abstract. In the current work we measure the distance between computer science theory and the reality surrounding the terms “computer science” and “systems”. We show how, because of the aesthetic factors of a management system’s interface, directors often make decisions that for a period negatively affect the functioning of centennial companies devoted to quality textile production in the cotton sector. Additionally, we discuss as an example the details of the Lombardy textile world, where human factors prevail over the new technologies. Finally, a heuristic and diachronic analysis makes plain the mechanisms used for continuous computer sabotage within a network of European industries that yearly invoice each other six-digit figures in euros. Keywords: Software, Hardware, Technology, Textile, Industry, Management Applications, Human Factors.
polyester, etc. in Southern Europe. It must be taken into account that several people work each loom, in three or four daily shifts; that is, relocating production leaves many people without a job, especially women in the case of cotton and wool. In the case of Northern Italy there was a certain reluctance to go to Asia, production having previously been taken to Eastern Europe (Czech Republic, Poland, etc.), since the hourly price of labour among loom workers was lower there and the enterprises received EU subsidies, whether for the purchase of new machinery or even for opening shops in less developed areas of the same country. Many of these industrialists would attach the “made in EU” label when in reality the production, in whole or in part, was not made within the EU [1]. Some, relying on the quality of their products and on international marketing assistance, placed the “made in Italy” label when, for example, the weaving of the material was done in the Czech Republic. Later on, a movement began to support “Design Italy” or “Style Italy”. However, these labels mask the production reality, because the designers and/or the CAD software they use are already to be found in India or China. This is a direct consequence of the fact that those centennial industries, with great influence on the economy of the provinces or autonomous regions regarded as the industrial engine, have not invested in R&D for decades, to such an extent that they did not even have their own R&D departments at the industrial headquarters. One only hears them talk about R&D, and about the high quality of their end products, in the marketing they conduct through mass-media communication. Some of those industrialists have seized on the opening of labs or university departments related to the textile industry at the universities where they exert, from the inside and the outside, great power and influence.
The power they possess is such that industrial groups invoicing up to six digits implement R&D labs according to their own needs, but with public funds or funds subsidized by the European Social Fund and other bodies with clout in Brussels. A classical example is CAD systems for textile industrial design [1]. These labs organize programmes, seminars and textile CAD/CAM master’s courses for students, when the reality is that an elite of textile industries has the latest CAD/CAM breakthroughs at its disposal for free and can use the students to experiment with the problems that potential employees may have when interacting with those breakthroughs in real work in those industries. In other words, the fee-paying students are a kind of free guinea pigs for the industrialists of the area. Aside from that, the reality in Europe indicates the transfer of quality textile production to the East (Turkey, India and China), including design and style. Some of those industrialists, who pose as icons in their countries of origin, have started to build in the new emerging textile powers the looms and the software that makes them work.
2 Interface and Users in the Software of Textile Management

Without any doubt, one of the main characteristics of the self-styled “engine regions” of Europe, such as Baden-Württemberg (Germany), Catalonia (Spain), Lombardy (Italy) and Rhône-Alpes (France), is that in the 1980s they were able to develop software and even hardware to be exported within and beyond the European borders. One example is Olivetti in Italy. However, in the late 80s and mainly in the 90s, the small industrial realities of the Lombardy region bet on oversized hardware
and software equipment relative to the number of internal users, such as IBM’s AS/400 system in the Lombardy province of Bergamo. This multiplatform system has its origins in the late 70s with the IBM System/38 and was developed under the code name Pacific. The current system has its interface controlled by CL (Control Language) menus and commands that can in principle be described as intuitive. Besides, the operating system, OS/400, is based on objects and libraries, which allows integration with the DB2/400 database. The integration between data and programmes is reliable and guarantees the stability of the system. Over the years the denominations have changed: System i, iSeries and, currently, IBM i. IBM i supports operating systems such as GNU/Linux, AIX and Windows (via an integrated Intel board), etc. The programming languages with which one can work are BASIC, C, COBOL, Java, PHP, RPG and SQL. The hardware originally had an IBM CISC CPU, but in the mid 90s a migration to RISC CPUs took place, based on 64-bit PowerPC microprocessors. Current systems are based on the POWER7 processor (a unification of the IBM System i and System p platforms under the name IBM Power Systems). Without any doubt, this is a system of slow but constant evolution over time. That survival and adaptability to new user requirements is due to the Machine Interface (MI), which isolates the hardware behind a set of APIs: a huge advantage, since the operating system and the application programmes can profit from constant hardware breakthroughs without having to be recompiled [3]. Recompiling functioning programmes is usually very critical in an entrepreneurial or industrial environment, where the stability and reliability of the system are essential.
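The machine-interface idea described above can be sketched in miniature. The following Python fragment is only an illustrative analogue (none of these class or method names correspond to actual IBM code): application code targets a stable abstract layer, so the underlying platform implementation can be swapped, as in the CISC-to-RISC migration, without touching the callers.

```python
from abc import ABC, abstractmethod

# Miniature analogue of the Machine Interface: callers depend only on
# this abstract layer, never on a concrete platform.
class MachineInterface(ABC):
    @abstractmethod
    def read_record(self, key: str) -> str: ...

class CiscBackend(MachineInterface):
    def read_record(self, key: str) -> str:
        return f"CISC:{key}"

class RiscBackend(MachineInterface):
    def read_record(self, key: str) -> str:
        return f"RISC:{key}"

def application(mi: MachineInterface) -> str:
    # The application is written once against the MI...
    return mi.read_record("ORDER-001")

# ...and keeps working unchanged when the platform migrates.
print(application(CiscBackend()))  # CISC:ORDER-001
print(application(RiscBackend()))  # RISC:ORDER-001
```

The design choice mirrored here is the one the text credits for the AS/400’s longevity: stability comes from the indirection layer, not from freezing the hardware.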
Besides, the services and functions of the application cannot be suspended or interrupted while the users are working with the system. Evidently, the AS/400, today IBM i, is very efficient. However, one of the main problems users have encountered is the interface of the applications. As a rule, the software management applications (accountancy, invoicing, customers, banks, etc.) in enterprises, industries, financial institutions, public and private universities, etc. have a black background with white, yellow or green letters; that is, they have stayed anchored in the dawn of computing. A user accustomed to the interfaces of home computers with Windows or Macintosh operating systems, to videogames on consoles, or to practising virtual sports with systems like Wii Fit, for instance, who then sits in front of an IBM screen with a black or blue background after spending an afternoon or evening in front of other hardware and software, evidently loses motivation, and sometimes attention, in applications developed in environments whose interfaces are far from the evolution of on-line and off-line interactive systems [4]. In the USA, one way to bridge this difference between the work environment and the playful one is to use black as the background colour of many websites related to videogames, new technologies, etc. In the countries of Latin America, however, light blue and white prevail, a trend that since the new millennium has been adopted in Europe in the most varied types of portals, from personal information sites to newspapers, universities, etc. These are two very common colours, the colours of the sky with its clouds. We are facing a universal isotopy, one that little by little may change the meanings those colours carry when analyzed individually in the different cultures.
Another advantage of using white or its variants is that pages render very quickly, and such pages are associated with most office computing programmes, Microsoft’s for example. An example of the extent to which the interface of a system may be influential is found in a centennial industry of the Bergamo textile sector at the beginning of the new millennium. Starting from the interface issue, and from others deriving from the maintenance cost of the IBM AS/400, one can end up changing the whole commercial, administrative and productive management system, with the consequent need to purchase new computers, printers, servers, uninterruptible power supplies, network security systems and CAD/CAM systems, and to change the wiring, cabinets and switches for the network connection of the computers, etc. These costs may lead those who control them, who in this specific Lombardian province are the so-called management engineers, to insert the interface variable as one of the reasons justifying the cost of the change [5] [6] [7]. Now, this engineer is a hybrid between a graduate in business management and an advisor in accountancy control who handles statistical data and is allegedly a technician in computing and/or systems, holding a short degree of under four years but wielding great power of decision over the policies followed in the vertical structures of Lombardian family businesses. His technical knowledge is practically null. However, he is a hybrid professional who ends up in the staff office controlling the labour costs of the textile workers, for instance, and finding how to cheapen costs by displacing production out of the countries where the headquarters of the businesses he indirectly manages are located. This function may interest the management of the enterprise or industry, which assigns him the activity of changing the management system from a local or national one to an international one.
Evidently it is not for interface issues that a whole management system running on a reliable IBM platform is changed. The motives are rather political: the industries want to cheapen production costs and obtain EU financial help to streamline the whole computing and telematic system, even when their invoicing and profits reach nine-digit figures in euros. Theoretically, with the new system an initial study of user preferences is made in order to obtain interfaces and interactive systems that prompt acceptance and do not trigger situations of resistance or rejection of change. Rejection of change is one of the human factors that within a productive environment may have a very high impact, because in some business situations it can degenerate to such an extent that it shuts down entrepreneurial activity. Of course, there is also the human factor, on the side of the management engineer, of wanting to stand out from the rest of the group, given the position he holds in the entrepreneurial and/or industrial organization chart (below or at the same level as the owners or general managers of the company). On the other hand, there is the resistance put up by all those used to working with tools they do not wish to change. The human factors in these working environments may considerably speed up or slow down the implementation of a new system. In the example we are analyzing there is also the management engineer’s lack of knowledge of the real market of the new technologies and of the solutions developed at low cost [8] [9] [10]. For instance, since the start of the new millennium there have been applications that can take any AS/400 screen and turn it into a likeable interface, even in Internet Explorer, as can be seen in the following figures:
Fig. 1. Low-cost commercial applications, with an impact on management, that quickly emulate new interfaces (PowerTerm Host Publisher – www.ericom.com)
In the previous screens it can be seen how, from the year 2000 onwards, a simple commercial product of very low cost per computer or user allows web connectivity to a variety of hosts and databases. PowerTerm Host Publisher publishes data from IBM AS/400 systems, IBM mainframes, HP/3000, UNIX, etc. The application can also retrieve data from any database, such as Oracle and Microsoft SQL Server. Evidently, in the case we are analyzing, the directors’, shareholders’ and other decision bodies’ lack of knowledge of the latest technological breakthroughs has benefited, on the one hand, those who wanted change and, on the other, those who resisted it. That is, a curious case of industrial sabotage and implementation of costly novelties under the label of long-term saving.
3 Changes and Resistance to Changes in the ICT Sector

Maintenance contracts for the AS/400 systems were renewed at constantly increasing prices in a territory where having that technology was not a need dictated by the volume of users or of production, but rather a status issue, like those who in a great city prefer a luxury car for going to the office rather than taking a cab, the underground or a bus: with the former one draws attention and places oneself in the upper echelon of unnecessary consumption, whereas with the second alternative resources are well administered so as to survive in the long term. That is what happened in the province of Bergamo in the 90s, where the technicians and programmers could not cope with customers’ requests to modify and/or personalize the applications. Software and hardware were sold to businesses with few users, or even with many users but for monotonous and repetitive tasks that could have run on other, more economical hardware platforms [11]. This led the businesses to constantly require technicians, programmers, systems analysts and computing engineers, and yet the local university did not have that professional profile available, nor does it now.
Consequently, technicians leaving the institutes specialized in computing science took these jobs at an early age: technical knowledge of the technological reality of the moment, but without preparation for the long run. These were technicians who worked thanks to the availability of a phone connection, that is, outsourcing; without landlines or mobiles, the knowledge faded away. In spite of this lack of training and will to study, in the middle of the first decade of the new century the local youngsters earned an average of 70–80 euros per hour to install a server or build the interface of a Microsoft Access application. It was a period in which the industries, from the point of view of systems and computing, were real software and hardware labs for the providers of those materials and services, who were paid outlandishly. For instance, the servers were formatted weekly, the backup systems on magnetic tape did not run daily, the Access databases of the fabric labs were constantly remade or modified, etc. Whereas some experimented with the hardware, others learned only to make queries for the AS/400, that is, the logical connectors that 15- and 16-year-old students learn in computing schools. Consequently, some businessmen started to hire those technicians, paying them an overblown salary per month or per hour; in fact this was part of an external industrial sabotage. These young technicians came into conflict with the internal computing staff, but once again, for cost reasons, the internal technical staff were demoted in their posts: for instance, those with responsibility for systems and computing were moved to looking after the phone network, the electronics or the maintenance of the PCs.
In their place were put these young technicians who, faced with complex realities such as the operation and maintenance of the AS/400, devoted themselves only to carrying out queries to satisfy the demands of internal statistics, for instance overall invoicing or daily or weekly sales. The rest of the activities were diverted to outsourcing, paying very high sums for banal tasks that would have suited computing-school students [6] [9]. In businesses like this, the staff who had set up the systems, and who also had a good deal of experience, were daily scorned by their young superiors in front of the external technicians. This staff automatically became directly responsible for all the mismanagement of the services deriving from the network, the applications, the hardware, the electricity, etc. Evidently, within the human-factors environment we have a “clamp” situation: on one side, the external technicians, who used this industry as a great laboratory for perfecting the products they sold to other customers, charged exaggerated figures for their services; on the other side, there was the new internal staff who tolerated all that lack of control, since they did not pay for the outsourcing services and only produced long lists of sales and profit figures. At the same time, the directors could ignore certain controls, since they were focused on the commercial sector, marketing, investment, etc. What is more, many of them worried about the external image of the industrial firm, or about extracting the maximum possible profit in the short term, instead of setting up an R&D department in a centennial industry or keeping production inside the borders within which the cloth is allegedly 100% made. In short, the contrary movement, or resistance to change, was both internal and external to the industry: a complex situation that technological breakthroughs in software and hardware can do nothing to eradicate.
From the point of view of the programming of the new applications, the responsible internal staff devote themselves to passing partial or mistaken information into the
data flow: for instance, about how the databases and the relations between their fields are structured, or about the logical connectors of the queries, which must be correctly related to the fields of the various files in order to print or display on screen the information one wants to consult or edit. These and other actions slow down the chronogram, or work plan, of the new programmers and generate remarkable losses for the enterprise hired to elaborate the new system of the textile industry. Fulfilling a work chronogram may mean having 50 programmers and systems analysts available for a modest reality of 120 users, distributed across three headquarters in a national territory, but who constantly oppose change and indirectly foster industrial sabotage. This industrial sabotage has been bolstered, for instance, by those alleged external experts in Microsoft Access who control the industrial production systems through the quality management of the fabric lab or the planning of the activities on the looms. Internally, there will be someone charged with erasing components from the database, or simply with brusquely switching off a server: the tables of the Access databases then do not close properly and have to be opened manually to eliminate the errors in the registers damaged by the sudden blackout. One of the key sectors in a textile business is the area of cloth design, known to some as the style or product department. It is responsible for supplying the main input to the data flow of loom production and, simultaneously, to marketing management, the dispatch of finished products, etc. As a rule, it is an environment open to change, because its members are interested in having the latest technological breakthroughs related to design, the treatment of colours, the printing of fabric models, the simulation of patterns with a high level of realism with regard to the real fabric, etc.
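The silent breakage produced by mis-related “logical connectors” can be shown with a toy example. The table and field names below are hypothetical, not taken from the industry discussed; the point is that a query joining the wrong fields runs without any error, so the sabotage is invisible until someone inspects the numbers.

```python
import sqlite3

# In-memory stand-in for the management database described in the text.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
CREATE TABLE customers (customer_id INTEGER, name TEXT);
INSERT INTO orders VALUES (1, 10, 500.0), (2, 20, 300.0);
INSERT INTO customers VALUES (10, 'Mill A'), (20, 'Mill B');
""")

# Correctly related fields: each order matched to its customer.
good = con.execute("""
    SELECT c.name, o.amount
    FROM orders o JOIN customers c ON o.customer_id = c.customer_id
    ORDER BY c.name
""").fetchall()

# Wrongly related fields (order_id joined to customer_id): the query
# still executes cleanly, but the report it feeds is empty or partial.
bad = con.execute("""
    SELECT c.name, o.amount
    FROM orders o JOIN customers c ON o.order_id = c.customer_id
""").fetchall()

print(good)  # [('Mill A', 500.0), ('Mill B', 300.0)]
print(bad)   # [] -- no order_id matches any customer_id here
```

This is the failure mode the text attributes to deliberately misreported field relations: nothing crashes, the chronogram simply slips while programmers hunt for a bug that was planted in the schema description.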
The resistance to change inside the computing sector of the textile industries may spring from the inside and from the outside, for several reasons, such as: age (some young people think of climbing quickly to the top of the organization chart, others of retiring); lack of interest in, or of a culture of, the constant learning and updating of the staff; a lousy constitution of the chain of command in relation to the studies of the staff (a graduate who has just finished his studies cannot aspire to the post of a B.A. or an engineer in computing and/or systems with several years of experience inside the industry or business); overblown income from non-programmed technical services; or the disappearance of a real environment in which to carry out hardware and software experiments while those hours are billed as server maintenance, Access databases for production planning, etc. A summary of these distortions in the computing context allows us to detect the real distance between the theoretical and the practical sector in the framework of the textile industries of Southern Europe, especially the Lombardy region.
4 Myths and Realities about the Change of Systems in Great Textile Industries

The great expansion of the AS/400 among small and middle-sized industries and/or service companies in Lombardy in the 90s forced many small businesses to revise the strategies of the investments they had made and to change the platform of their management systems. A great exception were the universities, where almost a third of the yearly budget was absorbed by the maintenance of the hardware and the software
(use licences, for instance). In the case of some textile businesses those changes turned into a long process of building made-to-measure systems. Oddly enough, in the Pyrenees this process took fewer than 6 months to set in motion with triple the number of users and the same software, whereas in the Italian Alps it took three years: in the Pyrenees there were 900 potential users of the new management system, whereas in the Alps there were not even 150. Some of the keys to these differences lie in the following listing, a kind of heuristic compass for detecting situations of industrial sabotage through computing and systems, as well as those ideal environments where there is harmony in the working group in the face of a new software and hardware project. In brackets are marked the positive factors, which eradicate sabotage (+), and the negative elements, which boost it (-):
• The business or industrial management is family-ruled or hierarchical (-)
• There are no members of the business and/or industrial management who belong to the Industrial Union of the area or region of jurisdiction (+)
• The business management is made up of people other than the owners and has a horizontal structure with delegation of work (+)
• There are management engineers in the business management (-)
• The heads of the departments are young people without college studies (-)
• The heads of the departments may be a married couple (-)
• The heads of the departments do not take responsibility for the purchase of equipment which is then not used by their collaborators or inside the enterprise/industry (-)
• The computer material is bought from the suppliers recommended by the industrial union, associations, guilds of the area, etc. (-)
• Those responsible for the internal computer services leave their collaborators alone in the face of malfunctions of the networking operating systems (Internet, intranet and extranet), software management applications, printers, etc. (-)
• Those responsible for the internal computer services end their working day according to the legally pre-established schedules, without caring that some services are not working, because they regard these as the responsibility of their collaborators (-)
• The heads of the departments may be people related to each other, for instance a married couple, siblings, cousins, etc. (-)
• There is no documentation of the flowchart of the computer processes (-)
• There is no detailed and/or commented description of the programmes, the files and the databases (-)
• There is no map of the networking systems (servers, firewalls, switches, computers, etc.) with their corresponding IP (Internet Protocol) addresses, connections, etc. (-)
• The internal staff are not retrained with courses on the evolution of management software, office computing applications, operating systems, etc. (-)
• The internal staff must automatically obey the orders of the external technicians (-)
• The internal staff have access to the Internet and are not controlled or spied on with programmes or special applications (+)
• The internal staff may make outgoing international calls for technical issues, that is, there is no registration or phone blocking (+)
• The external technicians are controlled by the internal staff while carrying out their work (+)
• The external technicians control the internal staff and use their customers’ computer equipment to carry out personal tests (-)
• The external technicians have less power of decision in the purchase of new technologies than the internal staff (+)
• The external technicians charge for their services according to the average trade-union fee per hour of work (+)
• The external technicians cannot solve problems on their own and need to be connected via the Internet or by phone with colleagues in order to solve them (-)
• The internal staff and the external technicians are not relatives or acquaintances (+)
• The purchase of computing material and outsourcing services is managed autonomously by the heads of departments or areas (-)
• The praxis for knowing the state of the art of the new technologies is to hire external enterprises to perform these services (-)
• The internal staff must keep an agenda of the activities they carry out, hour after hour and day after day, and then present it to the heads of their areas (-)
• Programmes are set up in the network that spy on the work of the internal staff but exclude that of the external technicians [12] (-)
• The agenda of activities related to the maintenance and/or extension of the network is set by the external staff (-)
• The members of the internal staff cannot communicate directly with the board of directors; they can only do so through the heads of their areas (-)
• The computers, peripherals, software, etc. are replaced or updated when the directors obtain subsidies from projects presented to the EU (-)
This is a first listing of the components which directly and indirectly affect the correct functioning of a business or industry at the moment it decides to change its computer system totally or partially, whether from the software and/or
hardware point of view. The problem lies in the fact that in some regions of Northern Italy, for instance, the term "computer technician" carries more weight than a PhD in computer science [1]. Industrial sabotage through the computer systems is virtually guaranteed when the heads of the computer services/systems are relatives (a couple, for instance) or life-long friends (from childhood, high school or college). In these situations only an external audit can determine whether everything works in accordance with quality principles and rules, computer research and computer ethics, such as legal compliance with respect to the privacy of the internal staff, inside and outside an industry or business.
5 Lessons Learned and Future Work In contrast to what computer theory and the administration theory of large or medium-sized enterprises/industries assert in an endless string of compendiums, published in several languages, there are businesses where the projects to change the management
F.V. Cipolla Ficarra and V.M. Ficarra
systems have not yet been planned. Improvisation is the common denominator in these environments: it slows down the launch of the new system, strains the human factors until they turn into a kind of resistance to change, and so on. It may even lead to the over-purchase of software and hardware compared with actual needs, for instance, buying servers sized for 200-250 users in Europe when, a few kilometres away, the same hardware runs the systems of the provincial capital's city hall. This is a clear example of why long-term planning is essential, and of why areas or departments should not be headed by inexperienced staff who do not transmit their knowledge or experience to the new software developers. As a rule, all of this triggers huge chaos that forces a business with a nine-figure turnover to keep two management systems, the old and the new, running in parallel for a long period, and even to beg people who did not participate in the project, on weekends, "to go urgently to the firm because nothing is working there". Evidently, these are directors who do not know the difference between software and hardware. Additionally, in the face of industrial sabotage, they must pay huge sums out of their own pockets to avoid stopping textile production. These directors' ignorance of the human factors is so great that they even go so far as to grant ICT awards, through the industrial union founded by their forefathers, to heads of department who directly or indirectly caused the industrial sabotage and brought about considerable losses in sales because of the disruptions or slowness in textile production. 
These alleged experts may claim at the beginning that the maintenance costs of the hardware and software are high, forcing the simplification of the commercial software; yet at the end of the implementation cycle of the new management system, the cost tables for raw materials, production, profits, etc., that used to be made with Excel or Access will still be made with those Microsoft applications, thus fulfilling the old maxim "Everything transforms, nothing gets lost", joined to the saying from Lampedusa's novel "The Leopard": "If we want things to stay as they are, things will have to change". In our next works we intend to keep advancing in describing the relationship between human factors, computing and the systems that exist in certain environments surrounded by mountains, such as the Alps, which Saussure characterized as parochial. These are areas where the textile industry was once important enough to turn these regions into engines of Europe, but whose growth has nowadays slowed down considerably.
6 Conclusion We are currently in the era of the expansion of communicability. However, the present work has made apparent that there are traditionally productive European sectors where introducing the latest technological breakthroughs entails doubling or tripling the costs compared with similar businesses inside the very borders of the EU. The computer interface may play an essential role when decisions are made to change a management system, adding to the imagined possibility of having complete control of an industry through a few keys on a screen. The reality of industrial enterprises is complex because alongside the classical sectors of production and marketing there is also the creative aspect. A creativity that implies
Software Management Applications, Textile CAD and Human Factors
originality and quality in the end products, which is achieved with R&D labs inside the industries, not outside them. In this way, the costs need not be reimbursed, directly or indirectly, by the inhabitants of the community where a textile industry has been located for decades or centuries, an industry that, year after year, has earned huge profits but has not invested in its telecommunications and computer systems. Before designing new systems of entrepreneurial management it is necessary to evaluate the real state of the art of the new information and telecommunications technology, that is, to put this task into the hands of internal experts. Moreover, these experts must have a high academic level as well as varied working experience. This wealth of experience and training will allow them to face with creativity and imagination the obstacles that arise in keeping to a work schedule. The main goal is to reach the launch date of the new system with the lowest financial cost and the least strain on the human factors.
Acknowledgments A special thanks to Maurice Uristone (New York University), Emma Nicol (University of Strathclyde), Maria Ficarra (Alaipo & Ainci – Italy and Spain), Nilda Varese di Cremona and Carlos for their help.
References
1. Cipolla-Ficarra, F., Rodríguez, R.: CAD and Communicability: A System That Improves the Human-Computer Interaction. In: Jacko, J.A. (ed.) HCII 2009, Part IV. LNCS, vol. 5613, pp. 468–477. Springer, Heidelberg (2009)
2. Hicks, M.: Collaborate to Innovate? Getting Fresh Small Company Thinking into Big Company Innovation. Interactions 17(3), 39–43 (2010)
3. Broy, M.: The ‘Grand Challenge’ in Informatics: Engineering Software-Intensive Systems. IEEE Computer 39(10), 72–80 (2006)
4. Cipolla-Ficarra, F., Cipolla-Ficarra, M.: Attention and Motivation in Hypermedia Systems. In: Jacko, J.A. (ed.) HCII 2009, Part IV. LNCS, vol. 5613, pp. 78–87. Springer, Heidelberg (2009)
5. Boehm, B., Turner, R.: Management Challenges to Implementing Agile Processes in Traditional Development Organizations. IEEE Software 22(5), 30–39 (2005)
6. Schuff, D., Louis, R.: Centralization vs. Decentralization of Application Software. Communications of the ACM 44(6), 88–94 (2001)
7. Beatty, R., Williams, C.: ERP II: Best Practices for Successfully Implementing an ERP Upgrade. Communications of the ACM 49(3), 105–109 (2006)
8. Hardaway, D.: Replacing Proprietary Software on the Desktop. IEEE Computer 40(3), 96–97 (2007)
9. Ebert, C.: Open Source Software in Industry. IEEE Software 25, 52–53 (2008)
10. Blaze, M.: Dynamic Trust Management. IEEE Computer 42(2), 44–52 (2009)
11. Hassler, V.: Open Source Libraries for Information Retrieval. IEEE Software 22(5), 78–82 (2005)
12. Paulson, L.: Key Snooping Technology Causes Controversy. IEEE Computer 35(3), 27 (2002)
Abstract. In this article the generation and exploitation of open educational resources in "virtual attendance" are discussed. Virtual attendance is defined as the combination of synchronous and asynchronous ICT tools used to provide distance-education students with the same educational experience that conventional students receive in face-to-face classes. It combines synchronous and asynchronous video and Web conferencing and recording systems using smart boards and digital repositories, and is contextualized with the temporal and structural planning, sequencing and collaborative tools available in UNED's VLE, aLF. OERs play a fundamental role here, since they provide a conceptual and legal framework within which the content generated in virtual attendance can be freely created, modified and exploited. It is argued that such an application of OERs reflects the inherently user-generated-content philosophy that underlies the Web 2.0, where a significant proportion of our students are already contributing content in a wide range of areas online. Hence, it is argued that the communication and content creation tools present in virtual attendance can potentiate the generation of new OERs, moving them into mainstream higher education and thereby improving the educational process. Keywords: Virtual attendance, Virtual Learning Environments, Open Educational Resources.
The Generation and Exploitation of Open Educational Resources in Virtual Attendance
As will be seen in this article, virtual attendance can be defined as the combination of synchronous and asynchronous ICT tools used to provide distance-education students with the same educational experience that conventional students receive in face-to-face (henceforth, F2F) taught classes. Specifically, synchronous and asynchronous video and Web conferencing and recording systems are combined with smart boards and digital repositories, and contextualized with the temporal and structural planning, sequencing and collaborative tools available in a virtual learning environment (henceforth, VLE), to provide an integrated environment that offers a simple and uniform way to generate, manage and exploit different modalities of information and to communicate with other students, lecturers and tutors. In contrast to most universities, UNED has a long history of technological and methodological innovation. Both the university's VLE and its conferencing / digital repository system were developed in house [2, 3]. The former, aLF, is a heavily customized version of dotLRN, which has been adapted and extended to meet both the university's needs and the demands of the European Higher Education Area (henceforth, EHEA). The latter, AVIP (Audio Video over Internet Protocol), was initially developed to meet the needs of the regional study centres, and there are now over 300 AVIP-enabled classrooms in Spain. For virtual attendance to become consolidated within UNED, it is necessary to establish a conceptual framework within which the "content" generated from virtual attendance (both materials and recordings of different types of communication) can be managed and effectively exploited, both within and outside of the university. Open educational resources (henceforth, OERs) offer just such a conceptual and legal framework, in which content can be freely shared, modified and extended without affecting the intellectual ownership of the author or of anyone who has added to the content. 
OERs and their underlying technological, methodological and business models are currently a very hot topic (motivated in part by universities' need to lower costs and attract students [4]). Many questions remain about how to generate and manage them sustainably; answering these is necessary to overcome their limited uptake in mainstream education. Since UNED has been involved in the generation of quality OERs since its very beginning, over thirty years ago, the university has a great deal of experience with these questions, and considerable progress has been made in this area. With the appearance of the Web 2.0 and the inherently user-generated-content philosophy that underlies it, it is no surprise that students are now more disposed to generate content for the subjects they are studying. Such materials range from course summaries to solutions to specific problems that arise in their studies. While in the past this information was not typically shared within student groups (somewhat difficult in any case in distance education), it is now more common. What virtual attendance needs to achieve is to provide tools for students that potentiate this tendency, together with a content management framework so that the work of one year's students is available to subsequent generations. In the remainder of this article, these questions are considered in detail, together with an analysis of the underlying architecture that makes virtual attendance not only possible but also effective.
T. Read et al.
2 The Technological Architecture of Virtual Attendance In the introduction, the architecture of virtual attendance was defined as being made up of the combination of the VLE aLF and the hybrid conferencing and digital content management system AVIP. In this section these two systems are discussed, together with the way they are integrated. It is not surprising that ICT has such an important role in a university like UNED. Over the years its use has grown and it now forms a central part of both the teaching and administrative processes (for example, UNED is moving towards being a paperless university: more than 90% of student admissions are undertaken over the Internet, and examinations are no longer transported to the local study centres on paper but sent as encrypted electronic files). When UNED's virtual campus was started over a decade ago, a commercial e-learning platform was selected, namely WebCT. With time it became evident that this system was not sufficiently flexible for the university's needs, and hence the platform aLF (a system developed by researchers in the School of Computer Science) was gradually introduced as a substitute. The two systems were used in parallel for some years (for different types of courses); however, with the appearance of the EHEA, the decision was made to concentrate all development on aLF. While the great majority of F2F universities and other academic institutions offer some e-learning, its essentially complementary nature, together with low student numbers, means that one of many different off-the-shelf VLEs can be used; in these cases, very little work is required to configure existing online systems to provide the technological infrastructure for taught courses. In UNED, given its distance-education-based nature, all ICT used (both for teaching and administration) has to be very robust and scalable, preferably OS-portable, and interoperable with other systems. 
Other open source VLEs were considered as alternatives to aLF, such as the popular PHP-based e-learning platform Moodle [5], but none was considered capable of fulfilling UNED's requirements. In order to meet the demands of teaching within the EHEA and adapt aLF for virtual attendance, it has been customized in two respects [6]: new tools have been developed, and several workspaces (conceptual and structural abstractions in which information and tools can be shared between different groups, classes or communities) have been provided. This abstraction lets users work with a large variety of tools, organized around three clearly distinguished workspaces, in a flexible and reusable way. Firstly, the personal workspace provides users with an agenda, a space for documents, and links to the personal pages of other users, courses, communities, etc. A key difference between aLF and other VLEs is that students maintain their personal space on the platform even when no current course is being undertaken. This facility is more than just a portfolio, since it enables students to maintain their presence online. Different types of users have access to different tools depending upon their profile; for example, administrators and teaching staff have tools for following and correcting student work. Secondly, the community workspace provides users with access to the different types of collaborative groups in which they can participate (e.g., teaching teams, research projects, associations, departments, faculties). Hence, relevant organizational and communication tools are offered to make collaboration possible (e.g., forums with notification services via e-mail; work management tools that enable different types of
information to be shared; and task sequencing tools such as a group agenda with appointments and weekly task planning). Thirdly and finally, the course workspaces provide users with access to the information and tools needed to study a particular course (e.g., document management via tasks, summaries, notes, course guides and FAQs; activity planning via weekly planning integrated with the course tasks; and other miscellaneous resources such as shared course files, course content, exams, etc.). One of the key changes to aLF in recent years has been the redesign of the course workspace to present materials and activities in a modular/temporal fashion (as illustrated in figure 1), where the weekly workload can be easily configured by the teaching staff and accessed by the students. While materials and activities can still be structured in terms of content, following the EHEA philosophy, which focuses on the continuous evaluation of personal and group activity, the teaching staff need to provide students with an integral plan of what they need to do and when. Hence, work is presented on a weekly basis, combining both the materials that should be studied during the week and the activities that should be undertaken with them. Such an access metaphor is useful both for the students, since they always know what they should be doing, and for the lecturers, since they have a unified interface for the preparation of courses and materials. Within this context, the virtual attendance tools can be scheduled and made available to students like any other kind of activity in aLF, the results of which are stored and managed in the same way as any other.
Fig. 1. Modular/temporal course perspective in aLF
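The modular/temporal course structure described above (a weekly workload combining the materials to study and the activities to undertake with them) can be sketched as a simple data model. The class and field names below are hypothetical illustrations and are not part of aLF's actual implementation:

```python
from dataclasses import dataclass, field


@dataclass
class Activity:
    """A scheduled course activity (e.g. a forum task or an AVIP recording)."""
    title: str
    kind: str  # e.g. "reading", "forum", "avip_recording"


@dataclass
class Week:
    """One week of the course plan: materials plus activities."""
    number: int
    materials: list = field(default_factory=list)
    activities: list = field(default_factory=list)


@dataclass
class CoursePlan:
    course: str
    weeks: list = field(default_factory=list)

    def workload(self, week_number):
        """Return everything scheduled for one week, materials first."""
        for w in self.weeks:
            if w.number == week_number:
                return w.materials + [a.title for a in w.activities]
        return []


plan = CoursePlan("Intro to Linguistics")
plan.weeks.append(Week(1, materials=["Course guide"],
                       activities=[Activity("Presentation forum", "forum")]))
print(plan.workload(1))  # ['Course guide', 'Presentation forum']
```

Because each week bundles its materials with its activities, both students and lecturers query a single structure to see what is due, which mirrors the unified weekly interface described in the text.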
The AVIP system combines digital whiteboards and the high-quality videoconferencing systems used for many years in UNED with lightweight web conferencing and digital recording technology, to provide an integrated conferencing and digital repository system whose functionality can be specified at three levels:
1. Level 1 (1+): Classrooms equipped with standard commercial videoconferencing systems and networked digital whiteboards. Each session is recorded digitally, including both the materials used on the whiteboard and the interaction and participation of the speaker and audience (present in the classroom or connected from other AVIP-equipped rooms, or even from home [see level 2+ below]). These sessions are semi-automatically marked up with metadata to facilitate their storage and re-use. Different metadata formats are currently being used and tested, ranging from SCORM 2004 (following the IEEE 1484.12.1:2002 standard) to IMS Common Cartridge. They include 15 elements taken from Dublin Core and are optimized for semantic agents using associated RDF files.
2. Level 2: Access to previously recorded sessions (in different formats and from different client platforms). Here, a registered user can access the recordings in three ways:
   a. By connecting directly to the AVIP Web portal and searching for the recording.
   b. Via a link to the recording included in a Web page or an e-mail message.
   c. Via a direct reference from within the aLF VLE. The advantage of providing access to a recording in this way is that it can be included as part of a learning activity, from which the students can undertake other tasks, for example, discussing the recording in a forum and then collaboratively developing a report about it.

   From a practical and pragmatic perspective, being able to access a recording online, at any time, is of great value to the students. In many cases, due to their personal commitments, students who are unable to attend an F2F taught class will also be unable to attend a videoconference that requires them to be in an AVIP-enabled classroom in a particular regional UNED study centre. In such cases, they can access the recording at a later date and catch up on what was taught in the classroom. Furthermore, if the recording is included in aLF and there is a forum associated with it, the student can still ask questions of the teaching team.
3. Level 2+: A web-conferencing / online recording tool that uses a lightweight Flash-based client together with shared digital resources and desktop application control. The key functionality that the AVIP system provides here is not just the ability to establish a web conference with other people using the same desktop client, but also the ability to hook into level 1 videoconferences. The exact quality of the signal the user receives depends upon the network bandwidth available.

As can be appreciated, AVIP permits both synchronous (small-group conferencing, presentations, desktop-application control, etc.) and asynchronous communication (recorded presentations, videos, audio only, etc.). Since this system was developed around conferencing technology, its basic functionality requires a moderated conference session to be set up and, therefore, that students request the right to participate before being able to do so. Hence, they could not set up informal sessions without a
teacher being present, which limited AVIP's usefulness as a virtual attendance tool. The new AVIP-PRO [7] was conceived to overcome this limitation and provide students with a more flexible tool. It is a variation on the standard version of the level 2+ tool that enables video recordings to be made online without having to be in a moderated AVIP session. This gives rise to a tool (the user interface can be seen in figure 2 below) that has been used in many different ways: from the preparation of simple audio/video messages that can be attached to e-mail messages, to recordings that form part of specialized oral discourse training (of particular relevance in disciplines such as Law and Business), real-time oral evaluation (fairly common in face-to-face institutions with a reasonable student-teacher ratio), and the training and evaluation of second-language oral competences online.
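The level 1 recordings described above are said to carry the 15 Dublin Core elements. A minimal sketch of such mark-up follows; the element names come from the Dublin Core Metadata Element Set, but the record values, the function name and the choice of a plain-XML serialization are invented for illustration (the article mentions SCORM/IMS packaging and RDF, which are not reproduced here):

```python
# Build a minimal Dublin Core record for a recorded AVIP session
# using only the Python standard library.
from xml.etree import ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"


def dc_record(fields):
    """Serialize a dict of Dublin Core element -> value as XML."""
    ET.register_namespace("dc", DC_NS)  # use the conventional "dc" prefix
    root = ET.Element("record")
    for name, value in fields.items():
        el = ET.SubElement(root, f"{{{DC_NS}}}{name}")
        el.text = value
    return ET.tostring(root, encoding="unicode")


# Hypothetical values for one recorded session.
xml = dc_record({
    "title": "Week 3 lecture recording",
    "creator": "Teaching team",
    "date": "2010-10-20",
    "format": "video/mp4",
    "language": "es",
})
print(xml)
```

A full record would carry all 15 elements (rights, identifier, subject, etc.); only five are shown to keep the sketch short.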
Fig. 2. Example display of a student using AVIP-PRO
With the completion of the modifications and the integration of aLF and the AVIP system, virtual attendance became a practical reality, in the sense that both the teaching staff and the students had access to a complete set of interoperable tools [8] that emulate the way in which F2F classes actually work, i.e., content generation and manipulation can be undertaken in a rich synchronous and asynchronous fashion, mediated by different types of communication, collaboration and coordination. However, one question still needed to be answered: how to manage the generation and exploitation of the content that virtual attendance produces and requires, in a way that permits its flexible modification, reflecting the Web 2.0 philosophy, while at the same time protecting the rights of its authors. As was commented in the introduction, OERs offer a framework for this, as will be seen in the next section.
3 Open Educational Resources OERs provide open access to high-quality digital educational materials. Article 26 of the Universal Declaration of Human Rights notes the intrinsic right that everyone has to education, and that "technical and professional education shall be made generally available" [9]. Universities and other educational institutions play an important part in
this movement. Similarly, 'The Cape Town Open Education Declaration' [10] goes on to note that: "This emerging open education movement combines the established tradition of sharing good ideas with fellow educators and the collaborative, interactive culture of the Internet. OERs are built on the belief that everyone should have the freedom to use, customize, improve and redistribute educational resources without constraint. Educators, learners and others who share this belief are gathering together as part of a worldwide effort to make education both more accessible and more effective". Following the Web 2.0 philosophy prevalent at the moment, and implicit in the generation of much of the material appearing on the Web, it can be seen that lecturers and students are both consumers and producers of OERs, provided they are given the right tools, somewhere to leave their work online, and legal protection to prevent others from copying their work without citing them. The use of special licences for OER content, such as Creative Commons [11], makes this a reality. In order for OERs to succeed they must not be considered as something separate from the taught curriculum (and class activity in the case of F2F learning), but have to be integrated into the educational process. As was noted in the introduction, the generation of OERs has always taken place in UNED. This happened originally in an informal way, and in September 2006 UNED started its participation in the OpenCourseWare project, setting up its own portal, which is a reference for these materials in Spanish [12]. While OERs are used in UNED, they are not yet central to the educational process; for this to happen two things need to take place: 'mainstreaming' and 'multiplication'. The former is the transference of the successful results of programmes and initiatives to the appropriate decision makers in local, regional, national and European systems. 
The latter is the persuasion of individual end-users to adopt and/or apply the results of programmes and initiatives. The authors argue that incorporating OERs into virtual attendance is key to establishing the use of open content, because it provides tools that make them easy to generate, modify and exploit. Furthermore, the tools present in virtual attendance give both teaching staff and students the opportunity to create resources about specific topics that are of mutual interest but that cannot be directly incorporated into the core curriculum. Those resources can then be followed by students, teachers and other teaching staff with similar interests, creating a whole new learning community around them, where people share their knowledge, questions and ideas. A similar phenomenon is already happening on the Web, where lecturers freely make use of wikis, tweets and blogs to share their ideas or the topics they are interested in. Those resources effectively create open discussions with other experts and are a source of knowledge available to everyone. There are, however, two problems in this case. Firstly, with time these resources can be removed from the Web, get lost in the ever-growing body of online knowledge, or become mixed with materials of dubious educational quality. Secondly, materials put on the Web often have no licence associated with them, and therefore people can use them without citing the source. The use of the digital repositories present in aLF and AVIP enables the availability and quality of the OERs stored there to be controlled, thereby avoiding these problems. Now that the overall architecture underlying virtual attendance has been presented, together with the details of each part, the next section presents some practical examples of its use.
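The repository-side control of availability and licensing just described can be sketched as a simple filter over resource records: only material carrying an explicit open licence is exposed for reuse, with attribution preserved. The licence identifiers follow Creative Commons naming, while the records, field names and function below are hypothetical:

```python
# Hypothetical OER repository filter: expose only openly licensed
# resources, keeping the author field so attribution is preserved.
OPEN_LICENCES = {"CC-BY", "CC-BY-SA", "CC-BY-NC", "CC0"}

resources = [
    {"title": "Course summary, week 1", "licence": "CC-BY-SA", "author": "student A"},
    {"title": "Old slide deck", "licence": "all-rights-reserved", "author": "lecturer B"},
    {"title": "Recorded tutorial", "licence": "CC-BY", "author": "tutor C"},
]


def reusable(records):
    """Keep only records whose licence permits sharing and modification."""
    return [r for r in records if r["licence"] in OPEN_LICENCES]


for r in reusable(resources):
    print(f'{r["title"]} by {r["author"]} ({r["licence"]})')
```

In a real repository the licence would travel with the resource metadata (e.g. as a Dublin Core rights element), but the gatekeeping logic is essentially this simple membership test.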
4 Applying Virtual Attendance While this technology is relatively new in UNED, it is already being used in several areas that can be presented here. As noted in previous sections of this article, its component parts, namely aLF and AVIP, have been widely used for several years with very positive results. In this section applications of the combination of both are presented. The first is its use in distance language learning courses. It is notoriously difficult to undertake oral training with large student numbers, and with conventional methods it is simply not feasible. In institutions with small numbers of students, actual F2F oral evaluation can be undertaken, and in distance education the telephone has been used for such purposes, or even Web-conferencing tools like Skype. However, as student numbers rise and synchronous evaluation becomes impossible, a new tool is needed with which tasks can be prepared so that students can practise online oral communication in a flexible way. Virtual attendance enables this: the AVIP-PRO tool can be used together with the task-based structure of aLF to provide a way of carrying out oral testing. In this case, the use of AVIP-PRO within aLF has three roles associated with it: firstly, the lecturer or person responsible for setting up the task to be performed and evaluated; secondly, the student, who will undertake the activity; and thirdly and finally, the tutor or person who will undertake the evaluation. Given the flexibility of the system, the recorded results can be used as seeds for further activities, the results of which are left as OERs for future students. The second application is its use in introductory/preparatory courses, which are provided to potential students before they even start studying at the university. There is a well-recognized problem in distance education whereby students quite often come to study without many of the basic skills needed for this process. 
Virtual attendance has enabled resources to be provided to address this problem, and provides a mechanism for existing students to participate in the process as mentors. The open nature of the resources generated in this process does not just benefit future students; it also provides publicity for the university and, therefore, attracts potential students. Examples of these resources include course summaries, debates and recorded videos of students giving explanations. The third application is professional training courses. An important part of UNED's social charter is the provision of training courses to members of the general public who do not require a university degree qualification. Typically these people are already part of the workforce and are therefore unable to attend F2F taught classes. Distance education is perfect for them, and virtual attendance is particularly suitable since it enables them to participate actively in their own educative process without the need to attend the regional study centres. The flexible tool set available here, thanks to the combination of aLF and AVIP, enables students to maintain rich and fluid communication with their tutors, their lecturers and their peers, and enables them to work collaboratively in the preparation of the assignments they have to undertake on these courses. Fourthly and finally, some initial work has been undertaken to connect the virtual attendance tools to existing social networks. A common phenomenon is that while some students are very active in their chosen social network (Twitter, Facebook, etc.), their participation in the university's VLE is somewhat lacking. To try to remedy this problem, experiments are being undertaken to provide a way of linking
the tools and content present in virtual attendance to the social networks, so that they are available to students without their having to actively connect to the university's ICT systems. The previous examples of the application of virtual attendance have all been initiated by teaching staff in UNED. It is hoped and expected that as students become familiarized with virtual attendance and its tools, they will start to use them spontaneously on their own initiative. This step is important if the generation of OERs is to become mainstream within the university.
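The three-role oral-assessment workflow described in this section (a lecturer sets the task, a student submits a recording, a tutor evaluates it) might be modelled as follows. All class, method and URL names here are hypothetical and do not reflect the actual AVIP-PRO implementation:

```python
# Sketch of the lecturer / student / tutor workflow for online
# oral assessment, as described in the text.
class OralTask:
    def __init__(self, prompt, set_by):
        self.prompt = prompt
        self.set_by = set_by      # the lecturer who defined the task
        self.submissions = {}     # student -> recording URL
        self.grades = {}          # student -> (tutor, grade)

    def submit(self, student, recording_url):
        """A student attaches an AVIP-PRO-style recording to the task."""
        self.submissions[student] = recording_url

    def evaluate(self, tutor, student, grade):
        """A tutor grades a submitted recording; ungraded work can
        later be reused as a seed for further activities."""
        if student not in self.submissions:
            raise ValueError("no recording submitted")
        self.grades[student] = (tutor, grade)


task = OralTask("Describe your home town in English", set_by="lecturer1")
task.submit("student42", "https://example.org/rec/42")  # hypothetical URL
task.evaluate("tutor7", "student42", "B+")
print(task.grades["student42"])  # ('tutor7', 'B+')
```

Separating submission from evaluation mirrors the asynchronous nature of the system: recordings can be made at any time and graded later, and the retained submissions are exactly the material that can be left as OERs for future students.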
5 Conclusions and Future Work

This article has presented virtual attendance as a combination of synchronous and asynchronous ICT tools used to provide distance education students with the same educational experience that conventional students receive in F2F classes. It is made up of synchronous and asynchronous video and Web conferencing and recording systems, combined with smart boards and digital repositories, and contextualized with the temporal and structural planning, sequencing and collaborative tools available in the UNED’s VLE, aLF. OERs have been selected as a conceptual and legal framework in which content produced in virtual attendance can be freely generated, modified and exploited. The materials and communications that students can generate here range from course summaries, to solutions to specific problems that arise in their studies, to complementary explanations and concepts. Historically, the UNED has always participated in the generation of OERs, but only through its teaching staff. The difference now, with the new Web-ready generation of students, is that this balance can change. Furthermore, it has been argued that while in the past student-generated content was not typically shared beyond small student groups, with virtual attendance it can now be shared in a wider educational context. The main contribution of this research work is that virtual attendance provides students with a set of tools that foster the generation of student content, together with a content management framework, so that the work of one year’s students is available to subsequent generations. The applications of virtual attendance made at the UNED up until now have been successful and offer a promising future.
A further question that needs to be addressed in the future is how to motivate the generation and exploitation of OERs in a general sense, using the tools available in virtual attendance, so that they become a central part of the educational process. A possible solution to this problem, which is being studied, is the role of corporate social responsibility in encouraging companies to fund OER generation for higher education. Here, external funding would offset the cost of the production and maintenance of OERs within virtual attendance, thereby increasing both the amount of resources available and the social impact of the entire process. This would be beneficial to the students, the universities and also the companies that provide the funding. For example, if a given company funds the development of OERs related to its field of competence, then it will indirectly be helping to train staff who will be better equipped to work in that company or with its products or services.
The Generation and Exploitation of Open Educational Resources in Virtual Attendance
141
References

1. Read, T., Ros, S., Rodrigo, C., Pastor, R., Hernández, R.: The UNED ICT Architecture for ‘Virtual Attendance’. In: Proceedings of the 23rd ICDE World Conference on Open Learning and Distance Education, Maastricht (2009)
2. Pastor, R., Ros, S., Hernández, R., Boticario, J., Read, T.: Open source and e-learning in UNED. In: Proceedings of the International Open Software Conference, Badajoz (2007)
3. Rodrigo, C., et al.: Aulas AVIP y Tecnología de Colaboración en Línea (AVIP Classrooms and Online Collaboration Technology). In: Boletín RedIRIS (2010), http://www.rediris.es/difusion/publicaciones/boletin/88-89
4. Guntram, C.: Open Educational Practices and Resources. OLCOS Roadmap 2012. European Commission (2007), http://www.olcos.org
5. Dougiamas, M., Taylor, P.: Moodle: Using Learning Communities to Create an Open Source Course Management System. In: Lassner, D., McNaught, C. (eds.) Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2003, pp. 171–178. AACE, Chesapeake (2003)
6. Pastor, R., Read, T., Ros, S., Hernández, R., Hernández, R.: Virtual communities adapted to the EHEA in an enterprise distance e-learning based environment. In: Proceedings of HCI 2009, International Conference on Human Computer Interaction, San Diego (2009)
7. Read, T., et al.: AVIP-PRO: oral training and evaluation tool for e-learning in the UNED. To be published in the Proceedings of EADTU 2010 (2010)
8. Read, T., Verdejo, F., Barros, B.: Incorporating interoperability into a distributed eLearning system. In: Proceedings of ED-MEDIA 2003 (Association for the Advancement of Computing in Education; AACE), Hawaii (2003)
9. http://www.ohchr.org/EN/UDHR/Pages/Introduction.aspx
10. http://www.capetowndeclaration.org
11. http://creativecommons.org/about/licenses
12. Ros, S., Reula, E., Hernández, R., Read, T., Pastor, R.: El portal de Curso en abierto de la UNED: Innovación metodológica, organizativa y tecnológica (The UNED Open Course Portal: Methodological, Organizational and Technological Innovation). In: Proceedings of OCWC Global 2010, Hanoi (2010)
New Platforms of the Violence in Internet María del Mar Ramírez-Alvarado and Inmaculada Gordillo Department of Audiovisual Communication, Advertising and Literature Communication Faculty, University of Seville, Spain {delmar,ingoal}@us.es
Abstract. Over the last few decades, some of the most defining sociological changes have involved communications, especially the permeation of information technologies and their effects in all areas of life. However, does this virtual world replicate the same problems as the real world? This article investigates the characteristics of violence on the Net and in other sociological areas. First, it analyses the traces of patriarchal culture that overlook or even justify violence against women. Secondly, it addresses the fact that two parallel realms coexist in the virtual world: the potential benefits in many fields on one hand, and the development of erotic/pornographic networks and their related products and services on the other. Keywords: New Technologies, Internet, Cyberculture, Cybersex, Cyberviolence.
It would not be reckless to say that the Internet is one of the human creations that has changed most rapidly. The reason is obvious: it was an invention intended for connecting computers, and both computers and the IT industry at large have evolved at a staggering rate. Regardless of the Internet’s velocity of change, it has some practically unmatched features for promoting communication:

- It is an open network (race, culture, age, and other factors do not matter)
- It works permanently (all day long and around the globe)
- It is a highly bidirectional and multidirectional medium (it allows any participant to be both emitter and receiver of messages)
- It makes fast or real-time communication possible
- It is a global medium without geographical boundaries [1].
Faced with the appearance of these new spaces for communication, a question arises: does this virtual world replicate the same problems as the real world?
2 Relationship Violence against Women: From the Virtual to the Real World

According to the United Nations Development Fund for Women (Unifem) [2-5], women are not safe from violence in any country, a fact proven by the 20% to 50% of women who have been victims of some kind of aggression by men at some time in their lives. Data collected around the globe suggest that half of the women who die from homicide were murdered by their current or former husbands or partners. Furthermore, there are many who think that women are the ones who cause their abusers to act against them. This is, incredibly, one of the most deeply rooted myths of our time: that men only exert their correcting authority within the borders laid out by nature and tradition. In fact, violent behaviour towards women has undoubtedly occurred from the beginnings of patriarchal society until the present day, justified and viewed as normal. There is no shortage of cases extracted from the annals of history which support this fact. The Greek deities, for instance, progressively changed their image from warring, justice-bearing and wise to maternal, dependent, and submissive. We might recall how many accounts portray the rape of goddesses as a means to achieve certain ends. In the Middle Ages, for instance, the principle of fragilitas sexus – women’s supposed physical, psychological, and moral frailty – was established. It legitimized men as the owners and lords of women. “The woman is bound to the man because of her physical and mental weakness,” said Thomas Aquinas in the 13th century. In cases of adultery, there even existed the legal figure of “uxoricide”, which permitted a husband “filled with such righteous pain” to kill his wife; the “shameless woman” deserved a punishment that could range from beating to death [6]. In the century of Enlightenment, J. J. Rousseau stated that women were made to obey men and had to bear tyranny without complaining and learn to suffer injustice [7].
Later, in the Contemporary Age, society evolved in different directions; nevertheless, many countries still preserved men’s rights over women, exerted first by the father and then by the husband. In many cases laws established that the husband owed protection to his wife and she owed obedience to her husband.
Although the outlook changed significantly during the 20th century thanks to historical movements such as feminism, women continue to suffer abuse resulting from socio-cultural conditioning that places them in positions inferior and subordinate to men. The situation becomes more complex because society has grown to accept and tolerate this gender-based violence, understood as a punishment the woman “deserved” (“she had it coming!”) or received “for her own sake”. The response has been denial, or concealment of the violence as a private matter. Not surprisingly, then, in a new cyberspace such as the one created by the advent of the Internet, traces of that patriarchal culture which turns a blind eye to, and in many cases justifies, violence against women still survive and perpetuate themselves.
3 Androcentrism in the Cyberculture

The consolidation of the Internet is a driving force of globalisation. Excluded from this globalisation are those who lack the money to access IT and, thereby, the knowledge to understand and manage it. With the evolution of the new technologies, the gap between men and women in both rich and poor countries has widened, generating great inequalities. Most people logging on to the Net are men who live in industrialized countries. Consequently, women are being excluded from this means of control and dissemination of information and communication that will undoubtedly determine our societies’ future. There is also the fact that minority languages are not present on the Net, which prevents many women from accessing relevant information and documents. Moreover, it seems evident that, with few exceptions, men run the digital media, culture, and business. Generally, the large corporations in this field have an employment distribution by gender in which women hold a large share of low and intermediate positions and hardly ever have access to higher levels; that is, senior management with significant decision-making power. Progress in female participation in directorial positions is still very slow. This low female participation in the management of many Internet firms influences the selection and treatment of information, as well as the use of language from an androcentric perspective. The acceptance of this fact has led, for instance, the Beijing Platform for Action to set as an objective the increase in women’s access to and participation in managerial and directorial positions in the media (and in the digital media specifically), thus ensuring the promotion of non-stereotypical female images.
Although there has been an important increase in women’s participation in IT careers, men lead participation in the industry and, as surveys show, IT users are predominantly male, as most PCs in households are used by men, regardless of percentage variations from country to country. We are therefore talking about an industry in which the male presence, for many reasons, turns out to be dominant. It is, consequently, not surprising that gender-based inequalities occur similarly in this universe of cyberspace communication. First, the “access denial” to knowledge, to the detriment of some groups, and the overwhelming volume of information moving along computer networks is, initially, a social and political issue of great proportions. This lack of access to new technologies in poor nations (consumers of the IT junk disposed of by wealthy nations) is usually viewed as
terribly unfair (and it actually is). Nevertheless, in light of this evidence, women are also lagging behind in this field. It is worth bearing in mind that accessing and managing information is a source of power.
4 Signs of Alarm in the Cyberspace

Recently the Asociación para la Investigación de Medios de Comunicación (Association for Media Research) in Spain [8], which conducts the most important audience surveys in that country, published the results of the 7ª Encuesta a Usuarios de Internet. Navegantes en la Red (7th Survey of Internet Users: Net Surfers). Conducted between November and December 2004 with 50,000 respondents, it analysed the new consumption habits of Spanish Web surfers. An interesting finding from the study involves the use of the Internet not only to communicate with other people, but also for social relationships. Almost half the users (46.3%) have made new personal relationships through the Net and many (61.6%) have taken those relationships outside the Internet. Although this finding is not so revealing in itself, it does give a clue as to the enormous possibilities the Internet has for encouraging personal relationships. Let us also keep in mind how, on several occasions, the Net’s potential has been used for the defence and promotion of human rights and of a culture of respect for citizens and of peace. In this respect, “digital activism” is increasingly in vogue through diverse organizations and movements. Among many, we could mention The Penelopes (www.penelopes.org), a feminist association created in France in 1996, which uses the Internet to spread and process information by and for women around the world. The Penelopes produce a monthly online magazine covering current topics on feminism and womanhood all over the globe, with a special emphasis on analysis and criticism of the most relevant topics on gender-based inequalities. Their site is available in French, English, and Spanish. In 1999, The Penelopes were in charge of co-producing the first interactive feminist TV show on the Internet in history. The right to information is basic, because it implies the right to reflection, expression, and opinion.
The Penelopes’ philosophy holds that content should be developed by all men and women. In this line of thinking, the use and management of IT is a basic step in strengthening women’s chances of accessing information. Information is not an end in itself, but a powerful political tool. The goal is to increase women’s strengths and to construct an alternative to the world order [9]. Nonetheless, just as in the real world, two parallel realms coexist in the virtual world. Although this situation is obviously and undoubtedly useful and offers a plethora of advantages, some alarming signs have started to appear. The Internet has fostered a radical shift in the way humans do business, and forecasts point out that e-commerce will soon reach billions of dollars; this expansion has reached such highly profitable areas as degrading sex and its related products and services. One of the main consequences of these changes is the elimination of trade barriers and thus the intensification of competition in this area, to the extent that many companies devoted to eroticism/degrading sex have redirected their sales strategies towards the potential of the Internet. Practically all businesses and corporations need to be present and offer services on the Web, and this is also true for degrading sex.
Many companies in that business area have seen their revenue increase with no social responsibility and, in many cases, tax-free. In fact, a major issue is that users can become content providers very easily. It is also true that there is an increasing number of services and communities where webmasters can restrict - or at least reduce - users’ ability to access sexual content or to share pornographic/degrading sex material. The fierce competition on the Internet drives higher traffic in ever more shocking, impacting, cruel, or stronger images. Every year millions of surfers log onto porn sites around the globe and are regular consumers of the industry’s products and services. According to the American firm Nielsen/NetRatings, specialized in audience surveys and Internet market research, the six most visited websites among men aged 35 and over in Hong Kong in 2001 included a utility payment service, three sites with financial information on investment, economy, and markets, and two adult sites: porncity.net and Japanesegirls.com [10]. The Net has become a hotbed for the worldwide proliferation of businesses and ventures linked to diverse interests: trafficking in women, porn sales, child prostitution, child molester rings, etc. The effectiveness of these virtual connections has therefore allowed the expansion of the consumer base and, consequently, of the number of real victims. In addition, cases of sexual harassment have become a growing phenomenon that is hard to control given users’ anonymity. Sadly, child degrading sex rings are increasingly frequent, some of which law enforcement agencies eventually dismantle, making evident to what extent the Net is used for this purpose. According to a report published in 2003 by Anesvad [11], an association that has been working against the sexual exploitation of children in Spain, the Internet has given child degrading sex content greater visibility, publicity, and accessibility.
The Internet has also encouraged the creation of spaces such as chat rooms and virtual communities specially aimed at enabling paedophiles and child molesters to contact each other, intermediaries, or minors themselves. Photo and video material can be instantly disseminated, avoiding all customs or police controls [11]. According to this report, an estimated four million sites worldwide with child sex content were visited over two billion times per year. There are also over 2,400,000 pay websites with child erotism/degrading sex. In 2002, Anesvad conducted a research project on child pornography on the Internet named “Nymphasex” (www.anesvad.org/nymphasex), presented as a fake website offering a variety of “services with minors”, such as “inexperienced / submissive girls”, “ready for anything / just for you”. Its main objective was to fight child cybersex on the Internet by sensitizing society at large to this issue, as the information displayed on the site left no room for doubt: “sexually exploited little girls”, “subdued because their will is nullified”, as “they do it for free, against their will”. According to the number of visits to Nymphasex, the main consuming countries of child degrading sex are the United States in first place (41.96%) and Spain in second (37.34%) [11]. Despite recent laws passed to protect women and children from the dissemination of images degrading them, or banning different forms of lewdness in the media or on the Internet, we can say without hesitation that there is no clear legislation against pornography on the Internet. Obviously, virtual sex hides behind anonymity, and there are many who argue that any prohibition in this matter could be regarded as an attack on free speech. Furthermore, as it is a realm that escapes territoriality, there are problems stemming from the different national legislations.
5 Erotica in Videogames

Several current studies have shown that, at early stages, girls and boys are equally interested in learning information technology and equally able to access it. Differences arise as they grow, mostly during adolescence. The huge storage capacity of computers has remarkably increased the range and complexity of games. They are more affordable, have higher quality, and – in many cases – are promoted or shared freely over the Internet. No longer are flying saucers, aliens, monsters, or Pac-Men the leading characters of these diversions. Now they are animated characters that battle, kill in cold blood, or are chopped up mercilessly in the most violent fights to the death. The best-selling games are filled with horrifying effects, blood spattered all over and bathing the computer screen, along with the most sinister weapons (chainsaws, torches, nail guns, etc.), chopped-off heads and beating hearts snatched by enemies obeying the shout “finish him”. It would be hard to estimate how many simulated murders an average gamer commits in any given hour. This passion for computer games has spread to the Internet, where users from the furthest and most disparate places in the world meet in cyberspace to play a match, or engage in full-contact battles (allowed by the features of the most popular applications and games) of their favourite games. Nevertheless, the Internet is the showroom for the state of the art and more. Nowadays, there are erotic games in which it is possible to strip the leading characters naked by removing patches. Many interactive programs offer increasingly “real” cybersex experiences. For example, Playboy.com has an area of free games such as Playmate Strip Poke, Playboy Puzzles, Strip Tac Toe (a nude version of the classic Tic Tac Toe), or Playmate Breakout, allowing gamers to remove clothes until the female nude of the day is fully exposed. All this, plus access to the Playboy Casino online. “Hello, my name is Lynda.
Do you like my body?” Users have two options: they may get the body of this seductive Lynda by clicking “Yes” or, if they prefer, get another girl of their choice from the virtual catalogue, where the girls are even classified by complexion or ethnicity: brunettes, blondes, redheads, white, black, Asian, Latina, Russian… The postures are provocative. Besides, there is a toy box or toolbox available with all kinds of devices that will make Lynda (or Maxi, Valerie, Lori, whichever is chosen) tremble with pleasure. Whips, cuffs, and oversized penises are the artefacts that will make the girl feel more satisfied and ask for more action. She will never complain or refuse anything that is uncomfortable; she will submissively do as asked and never show pain [12]. The combination of music, sound, and dialogue increases the virtual realism of these eager women. Correspondingly, many developments are taking place in virtual reality. Through 3D simulation, a person equipped with helmet, goggles, and gloves will be able to experience new erotic fantasies. Experiments in touch-screen technologies, along with gloves and data suits providing users with tactile information about what is seen on screen, allow the user to directly manipulate pictures. It is known that Internet porn sites and some porn CD-ROMs play a major role among teens’ and kids’ favourite games, as they are having their first sexual experiences with virtual women. What happens when these young people, back in the real world, relate to flesh-and-blood women who react as human beings and not like VR animated characters?
The development of increasingly sophisticated programs has promoted the production of drawings, graphics, and animations of virtual pornography. In the case of child degrading sex, the situation is alarming, as it involves the creation of computer games: 3D images of minors performing sex acts (produced with programs that synchronize body movement, actions, and words) which are not regarded as a felony. For instance, in a trial that took place in the U.S., the Supreme Court ruled in favour of free speech [11]. Crime or not, these “not real” images, products of fantasy, undoubtedly encourage the purchase of other materials with “real” degrading sex content. Without any doubt, in the artificially configured reality of a computer, a social sanction against violence towards women and children is impossible. In relationships with virtual human beings, there is no direct perception of violent acts, which are at all times mitigated through the computer. In fact, in her work The First Sex, the American anthropologist Helen Fisher comments that a communications company, polling thousands of users through its website, realised that women use the Net more rapidly and effectively. Fisher says, in line with her hypothesis (tomorrow’s society is going to need specifically female skills, and there are fields changing in such a way that women will be irreplaceable in them), that “if women start supplying a good part of the programming in this powerful medium, their interests are going to saturate the world in the coming decades” [13]. To this extent, women are the first and not the second sex, as Simone de Beauvoir called it [14].
6 Lessons Learned and Conclusions

To begin these closing thoughts, we may wonder about the place of violence in general, and of gender-based violence specifically, on the Internet. The answer is perhaps somewhat tautological, because violence in virtual spaces is exerted, in our opinion, in ways and against individuals very similar to those of the real world, trapping the weakest, poorest, most vulnerable and traditionally oppressed people. As far as women are concerned, traces of the patriarchal culture still prevail, endorsing violence, to all extents, as one of its main tools of domination, support, and permanence. Concerning communication, the theoretical models are not comprehensive enough to reflect what has happened and what is going on. Presently, it is a complex process to establish the identity of emitters and receivers, as well as to understand how messages are produced and what their characteristics are. There are no consistent theoretical models which respond to the global changes taking place with the advent of new technologies and the Internet. What are the characteristics of communication and information? How do we give theoretical answers as to what is to come? We are privileged witnesses to the appearance of new liberties being shaped by means of the Net, but also of new kinds of slavery. The gap between nations and social groups who have access to the Internet and those who do not is widening. How do we address this issue from the standpoint of communication studies? What connections can we make between the development of nations and the new technologies? The outlook has its shadows, but its lights too. Many positive actions are under way all over the world. As for the new technologies, some governments are developing programs to promote equal access for the groups left behind in the use of IT, as well as
training and teaching projects in this field. Additionally, the Internet is becoming a medium that promotes the exchange of news and cooperation through many strategies: the increasing use of e-mail; mailing lists to organize activities, debate topics on the Net, or collect signatures for petitions to governments; bulletin boards; discussion forums; virtual communities with similar interests; website construction with help from several NGOs, etc. In remote areas there are Internet connection centres, and some NGOs are specially devoted to giving workshops on computing and web surfing. From a theoretical standpoint, there has been much discussion about women and the media, and about the creation of new communication models starting from a shift of paradigms and contents. One of the most significant developments in recent years has been the passing of laws to protect women and children from the dissemination of images that may be degrading to them, or banning diverse forms of smut in the media and advertising. Similarly, in some mainstream media and on the Net, there are efforts to balance men and women in directorial positions. In addition, there has been an increase in women studying careers related to IT, communications, advertising, and journalism, a great number of whom pursue their careers in the labour market. Nevertheless, it is necessary to bring the law closer to the reality of Internet violence, starting with a deep knowledge and analysis of its new frontiers, limits, and characteristics. Similarly, appropriate legislation regulating expressions of violence on the Net from the criminal, civil, labour, social, and family standpoints has to be passed, in a way that spares anyone reporting a felony the pilgrimage through a multitude of law enforcement agencies and jurisdictions.
The dissemination of content on the Internet against human dignity in general, and against women and children specifically, should be met with procedures, practices, and actions at all levels (judicial, social, political, etc.). It is necessary to balance the rights to confidentiality and free speech with the defence of dignity, privacy, and respect for one’s image, and with protection from violence taking place both in real life and in cyberspace.
References

1. Sádaba, I., Roig, G.: Internet: Nuevos escenarios, nuevos sujetos, nuevos conflictos (Internet: New Scenarios, New Subjects, New Conflicts). In: Aparici, R., Sáez, V.M.M. (coords.) Cultura Popular, Industrias Culturales y Ciberespacio, pp. 399–424. Universidad Nacional de Educación a Distancia, Madrid (2003)
2. Unifem: Not a Minute More: Ending Violence Against Women. United Nations Development Fund for Women, New York (2003)
3. Unifem: Not a Minute More: Facts and Figures. United Nations Development Fund for Women, New York (2004)
4. Drezin, J., Lloyd-Laney, M.: Making a Difference: Strategic Communications to End Violence Against Women. United Nations Development Fund for Women – UNIFEM, New York (2003)
5. Drezin, J.: Picturing a Life Free of Violence: Media and Communications Strategies to End Violence Against Women. United Nations Development Fund for Women – UNIFEM, New York (2001)
6. Lorente-Acosta, M.: Mi marido me pega lo normal. Agresión a la mujer: realidades y mitos (My Husband Beats Me the Normal Amount. Aggression against Women: Realities and Myths). Ares y Mares, Barcelona (2001)
150
M. Mar Ramírez-Alvarado and I. Gordillo
7. Asociación para la Investigación de Medios de Comunicación en España: 7ª Encuesta a Usuarios de Internet (Navegantes en la Red) (2004), http://www.aimc.es
8. Gamboa, J.: Analizando la apropiación de las tecnologías de la información y de la comunicación como herramienta para el cambio social (Analysing the Appropriation of Information and Communication Technologies as a Tool for Social Change). In: Ramírez-Alvarado, M.d.M. (coord.) Medios de Comunicación y violencia contra las mujeres, pp. 65–72. Fundación Audiovisual de Andalucía, Sevilla (2003)
9. Nielsen/NetRatings: Show Me The Money! ‘Thirty Something’ Men Head For Finance, Investment Sites (2001), http://www.nielsen-netratings.com
10. Anesvad: Informe sobre la pornografía infantil en Internet (Report on Child Pornography on the Internet) (2003), http://www.anesvad.org/informe.pdf
11. Gerstendörfer, M.: Computer as the place of violence. International Feminist Magazine 5, 40–43 (1996)
12. Fisher, H.: El primer sexo. Las capacidades innatas de las mujeres y cómo están cambiando el mundo (The First Sex: The Innate Capacities of Women and How They Are Changing the World). Taurus, Madrid (2000)
13. Liniers, R., Cruz, M.: La imagen de la mujer en Internet (The Image of Women on the Internet) (2002), http://www.uc3m.es/uc3m/inst/MU/cruz_rubio.html
Gnome Desktop Management by Voice

Alberto Corpas1, Mario Cámara1, and Guillermo Pérez2

1 Consorcio Fernando de los Ríos, C/ José Luis Pérez Pujadas, s/n, 18006 Granada, Spain
{alberto.corpas,mario.camara}@juntadeandalucia.es
2 Intelligent Dialogue Systems, Avda. de los Descubrimientos 11, 41092 Seville, Spain
[email protected]
Abstract. There are open source tools available for voice-based desktop control. Unfortunately, these tools generally only work for languages that have reliable open source large-vocabulary acoustic models. This paper describes the steps needed to build a specific small-vocabulary acoustic model covering a reduced set of desktop control commands. It includes a review of the state of the art in open source voice technologies and open source desktop control tools. It also describes the experimental work carried out in Spanish. This language is especially well suited for this task because 1) it does have a large-vocabulary acoustic model that can be used as a baseline, and 2) it has specific regional peculiarities that give an insight into the benefits of building small-vocabulary specialized acoustic models. Keywords: Computer Science, Voice Recognition, E-accessibility.
Currently, one of Guadalinfo's priorities is to bridge the digital divide for the portion of the population that is even more at risk [2]: the elderly and people with disabilities. In this paper we describe a project undertaken in the Guadalinfo community to improve computer access for people with severe physical impairments. This work is motivated by the importance that new tools and developments can have in the lives of people with disabilities [6, 7].

In the following subsections, an overview of the state-of-the-art tools is given. In subsection 1.1, the open source speech recognition engines are described. In subsection 1.2, the desktop control tools are summarized.

1.1 Speech Recognition Tools

1.1.1 HTK
The Hidden Markov Model Toolkit (HTK) [3] is a set of modules implemented to train, build and test hidden Markov models. It was initially developed by Cambridge University, but today it is owned by Microsoft. However, Microsoft has licensed it back to Cambridge University, which can redistribute it and is currently providing support for new developments. The set of tools included in HTK is friendly and easy to use; in the research community, HTK is the most commonly used toolkit for speech recognition. Although owned by Microsoft, HTK has a quite flexible license policy. It is fully open source and lets users and developers modify and distribute the model training components. There are only some restrictions on the commercial usage of the decoding component.

1.1.2 Sphinx
Developed at Carnegie Mellon University, "CMU Sphinx" [4] is actually a series of systems including training tools ("SphinxTrain"), decoding components and pronunciation dictionaries. There are four versions available:
• Sphinx2: Focused on real-time applications; currently deprecated.
• Sphinx3: Includes new algorithms proven to provide the highest recognition rates. However, it is not suitable for real-time applications.
• Sphinx4: Developed in Java, a re-implementation of Sphinx3 that improves modularization and flexibility.
• PocketSphinx: A version of Sphinx that can be used in embedded systems.

The Sphinx community is very active, with several mailing lists and very frequent posts. Recently, the community has issued a set of new releases, including a new Spanish acoustic model, a new conversion tool for HTK models and a new minor version of Sphinx4. The license policy of Sphinx is completely open: its code can be used, modified and distributed with almost no restrictions.
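All of the toolkits above are built around hidden Markov models. As a rough illustration of the underlying model family, and not of any toolkit's API, the sketch below runs the forward algorithm on a toy two-state discrete-observation HMM; every probability in it is made up for the example.

```python
# Minimal forward algorithm for a discrete-observation HMM.
# Illustrates the model family HTK/Sphinx train; the two-state model
# and all probabilities below are toy values, not a real acoustic model.

def forward(obs, init, trans, emit):
    """Return P(obs) under the HMM (init, trans, emit).

    init[i]     -- P(state_0 = i)
    trans[i][j] -- P(state_{t+1} = j | state_t = i)
    emit[i][o]  -- P(observation o | state i)
    """
    n = len(init)
    alpha = [init[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [{"a": 0.5, "b": 0.5}, {"a": 0.1, "b": 0.9}]
p = forward(["a", "b", "b"], init, trans, emit)
print(round(p, 6))
```

In a recognizer, one such model is trained per phoneme (or per word, for very small vocabularies) and decoding picks the model sequence with the highest likelihood.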
1.1.3 Julius
Julius [5] is language-independent decoding software for speech recognition, optimized for large vocabulary continuous speech applications. It was developed in Japan through several research projects and is currently maintained by the Interactive Speech Technology Consortium (ISTC). Unlike HTK or Sphinx, Julius is a pure decoding solution, without specific training tools. It relies on external acoustic and language models; namely, it works with HTK [3] format models.

1.2 Desktop Control Tools

1.2.1 Gnome Voice Control
Gnome Voice Control [9] is a voice control system for the Gnome [8] desktop. It was developed in the C programming language during the Google Summer of Code 2007. Currently the system is only available in English. In addition to a voice recognition engine, it makes use of other libraries such as GStreamer, a library used to create multimedia applications that provides a framework for dealing with audio and video signals. It also includes other basic desktop application development libraries and Gnome accessibility libraries. Currently, this system comprises 32 basic desktop control commands. Gnome Voice Control uses the CMU Sphinx recognition engine, including the "PocketSphinx" libraries. PocketSphinx is used instead of Sphinx4 because it is optimized for applications requiring a very short run time (it is especially recommended by the CMU Sphinx developers for real-time applications). PocketSphinx is also considered to be more flexible with respect to changes in the acoustic and language models. Currently, the Gnome Voice Control project has very low activity, with almost no recent contributions. Regarding the license policy, Gnome Voice Control is distributed under the GPL, similar to the Sphinx license.

1.2.2 Simon
Simon [10], the "Speech Interaction Daemon", is a voice-based desktop control system consisting of three modules:
• Simon: The Simon GUI (Simond client).
• Simond: A server running the recognition engine along with the acoustic and language models.
• Ksimond: A graphical application used to configure Simond.

This system was developed by the University of Cambridge for the voice control of KDE-based desktops. Simon is a client/server application in which different users can be defined with different acoustic models and customized commands. The user's commands are sent to the Simond server, which uses the recognition engine to return a transcription to the client. Afterwards, the Simon client executes the requested action.
Unlike Gnome Voice Control, Simon does not have a set of predefined commands. Instead, each user must define the commands she wants to have available. Simon is therefore a more flexible system, since it allows the user to define custom commands for specific applications. Simon uses Julius as its recognition engine, with acoustic and language models generated with HTK. Regarding the license policy, Julius does not place any restriction on the usage and distribution of the tool.

1.2.3 Vedics
The VEDICS [11] tool was born in 2010 as a college project in India, where three undergraduate students implemented a desktop voice management solution. The solution was announced to the community through the Gnome accessibility list, and the developers have also posted some demonstration videos on YouTube. VEDICS uses a Gnome API called at-spi that provides access to everything appearing on the screen. Using at-spi, VEDICS generates a new recognition grammar for each new desktop state; VEDICS can therefore recognize virtually "anything", including file names. VEDICS reads the contents of the screen through the at-spi API and then calls the Sphinx tool "MAKEDICT", which is responsible for generating the pronunciations of all possible commands and existing files. It then generates the recognition grammars and combines them with an existing acoustic model. VEDICS uses Sphinx as its recognition engine, sharing the same license policy.
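The VEDICS-style dynamic grammar idea can be sketched as regenerating a Julius-format word list from whatever commands are currently on screen. The command words and phoneme strings below are hypothetical and the .voca/.grammar layouts are reduced to bare essentials; a real system would derive pronunciations with a grapheme-to-phoneme tool rather than hard-coding them.

```python
# Sketch of VEDICS-style dynamic grammar generation: rebuild a
# simplified Julius vocabulary file from the commands currently
# available.  Phoneme strings are illustrative placeholders.

def build_voca(commands):
    """commands: dict mapping word -> phoneme string (hypothetical)."""
    lines = ["% NS_B", "<s> sil", "% NS_E", "</s> sil", "% COMMAND"]
    lines += [f"{word}\t{phones}" for word, phones in sorted(commands.items())]
    return "\n".join(lines) + "\n"

# One command per utterance, framed by silence (simplified grammar rule).
GRAMMAR = "S : NS_B COMMAND NS_E\n"

commands = {"abrir": "a b r i r", "salir": "s a l i r"}
voca = build_voca(commands)
print(voca)
```

On each "new desktop status" the tool would rerun `build_voca` with the refreshed command set and hand the result to the decoder.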
2 Design Choices

2.1 Acoustic Models
In order to provide an acoustic model for a language that does not have one available, the alternatives are the following:
• Making use of the acoustic model of a foreign language. The idea would be to generate the closest transcription of every command using the phonemes of the foreign language.
• Building a specific small vocabulary model with a predefined set of commands.
• Building a general-purpose large vocabulary model.

The first alternative should be immediately discarded, since the WER (Word Error Rate) [12] will be very high in most cases. One should only consider it if the languages are very similar (e.g., Galician and Spanish) and there are very few resources allocated for the task. The third alternative is ideal, since it provides the highest degree of flexibility and reusability. However, in order to build such a model, one would need thousands of hours of voice recordings. The number of man-hours is also much higher (building the dictionaries, defining the recording corpus, labelling, testing, etc.).
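The WER [12] used to compare these alternatives is the word-level edit distance (substitutions, insertions, deletions) between the reference transcription and the recognizer output, divided by the reference length. A minimal implementation:

```python
# Word Error Rate: Levenshtein distance over words, normalized by the
# number of words in the reference transcription.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

print(wer("abrir el menu", "abrir menu"))  # one deletion out of three words
```

Note that a WER computed this way can exceed 100% when the hypothesis contains many insertions.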
Finally, the second alternative looks like the best trade-off between recognition accuracy and resources needed. In this paper a small vocabulary based approach is described.

2.2 Designing the Corpus
The approach described in this paper lets developers build a small vocabulary speech recognition solution that is to be integrated into a desktop control tool. One of the first choices to be made is the size of the corpus, since there is no fixed number that defines what "small vocabulary" means. This decision implies a trade-off between two factors:
• Recognition accuracy [13]: The bigger the corpus, the worse the WER of the solution.
• Usability: The bigger the corpus, the better the user's perception of the system, since the user is given a wider range of functionality.

From our point of view, a set of 30-50 commands should give enough functionality for a desktop control application while keeping the WER below 15% with 500-1,000 recordings. Another key point when designing the corpus is defining the commands so that they are not phonetically similar: the closer two commands are from a phonetic perspective, the lower the recognition accuracy that will be obtained. For instance, in Spanish one should find alternatives to the pair of commands "cerrar" (to close) and "centrar" (to centre), since they are phonetically similar.

2.3 Recording the Corpus
In order to record the specific small vocabulary corpus there are several alternatives:
• Local on-premises recording
• Remote recording (telephone access)
• Remote recording (web access)

The first alternative would imply that the voice donors have to travel to the premises where the voices are recorded. The advantage of this approach is total control over how each recording is done. However, this approach has two major drawbacks: 1) it is much more complicated from a logistics point of view, requiring dedicated resources for every recording, and 2) there is a risk of biasing the corpus towards the local accent.
The choice between the second and third alternatives depends on the final usage of the application. The goal of this work is desktop control by voice, so the third alternative (web access, and therefore recording the voices with PC microphones) is much closer to the final environment than the second one (telephone access). There are open source initiatives like VoxForge [14] whose goal is to compile large vocabulary corpora via the web; the code of these web pages could be reused for this task.
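The corpus-design guideline from Sect. 2.2 (avoiding phonetically close pairs such as "cerrar"/"centrar") can also be screened automatically. The sketch below compares command spellings with a similarity ratio as a rough proxy for phonetic distance; the threshold and command list are arbitrary examples, and a real check would compare phoneme transcriptions instead of letters.

```python
# Flag command pairs that are likely to be confusable by the
# recognizer.  Spelling similarity stands in for phonetic similarity
# here purely for illustration (reasonable for Spanish, whose
# orthography is close to phonemic).

from difflib import SequenceMatcher

def confusable_pairs(commands, min_ratio=0.7):
    """Return pairs of commands whose similarity ratio >= min_ratio."""
    pairs = []
    for i, a in enumerate(commands):
        for b in commands[i + 1:]:
            if SequenceMatcher(None, a, b).ratio() >= min_ratio:
                pairs.append((a, b))
    return pairs

commands = ["cerrar", "centrar", "abrir", "minimizar"]
print(confusable_pairs(commands))
```

Any flagged pair signals that one of the two commands should be renamed before recording the corpus.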
2.4 Building the Acoustic and Language Models
As pointed out in previous sections, the HTK toolkit is widely used by the community when working with speech applications. It is user-friendly, well documented and, above all, the models it generates are compatible with all the possible recognition engines (needing an additional conversion step for Sphinx). The other alternative would be using the specific Sphinx tools, but as a major drawback, the resulting models would only be usable by the Sphinx decoder.

2.5 Recognition Engine
The three recognition engines (Sphinx, Julius and HTK) are all perfectly suitable for the task at hand. The HTK decoder, however, has some licensing restrictions, so one should go for Julius or Sphinx if there are any redistribution or commercialization plans. In this work we have chosen Julius as the baseline recognition engine. Even though Sphinx would also be a good option, Julius lets us work natively with HTK models and has a slightly faster response time.

2.6 Desktop Control Tool
Regarding the baseline desktop control tool, there are three alternatives: Vedics, Simon and GVC (Gnome Voice Control). Both Vedics and Simon have some technical advantages over GVC: Simon includes a much more advanced GUI and has a higher degree of flexibility; Vedics is integrated with at-spi so that the user can eventually refer to any element on the screen. Furthermore, Vedics and Simon are currently active projects, while GVC has been frozen for a while, with no recent contributions. However, Vedics and Simon are better suited to large vocabulary speech recognition, because they are designed to be adapted to new environments and new sets of commands. Also, in the case of Simon, specific work would be necessary to decouple it from the KDE-specific libraries. Taking all these advantages and drawbacks into account, GVC seems to be the best alternative.
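As a sketch of the bookkeeping a training run with a toolkit like HTK needs, the snippet below generates a pronunciation dictionary and a prompts list for a small command set. The one-letter-one-phone mapping leans on Spanish spelling being close to phonemic and ignores the real digraphs (ll, ch, qu, ...); it is a hypothetical stand-in for a curated lexicon, and the file layouts are simplified rather than exact HTK formats.

```python
# Generate the two text artifacts small-vocabulary training relies on:
# a pronunciation dictionary (word -> phone sequence) and a prompts
# file (utterance id -> transcription).  The naive letter-to-phone
# mapping below is a placeholder for a proper lexicon.

def naive_phones(word):
    return " ".join(word.lower())

def htk_dictionary(commands):
    return "\n".join(f"{w.upper()}  {naive_phones(w)}"
                     for w in sorted(commands)) + "\n"

def prompts(recordings):
    """recordings: list of (utterance_id, command) pairs."""
    return "\n".join(f"{uid} {cmd.upper()}" for uid, cmd in recordings) + "\n"

cmds = ["abrir", "cerrar"]
print(htk_dictionary(cmds))
print(prompts([("spk01_000", "abrir"), ("spk01_001", "cerrar")]))
```

With 500-1,000 recordings, scripts like this keep the corpus metadata consistent across speakers while the acoustic models themselves are trained by the toolkit.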
3 Working in Community

Once the design choices are made, the first steps are to build the recording web page and to inform the community about the project. The recording web page developed for this project can be found in [15]. This web page includes an applet that lets donors use their microphones to record their own voices. In order to maximize the number of recordings received, two actions have been taken:
• Including a Virtual Assistant in the web page. Its role is to explain to donors how to proceed. It is expected that this Virtual Assistant will minimize the number of wrong recordings and will attract more donors.
• Including a contest among the donors. The winner will be awarded a multimedia hard disk.

A second step that must be taken is sharing the project with the several communities working on related tasks. Communicating the plans early can avoid unnecessary mistakes and shed light on some of the choices that have to be made. The communication can be done through the different mailing lists that group together the efforts of the open source community. These include the recognition tools (HTK, Sphinx and Julius have their own mailing lists for newbies, developers, etc.), the desktop control tools (Simon, GVC and probably Vedics in the near future) and other related groups like VoxForge. During the work described in this paper, all of the communities mentioned above have been informed. It has been a useful exchange, with plenty of advice that has contributed to the progress of this project.
4 Preliminary Results

Since the project is in its initial stage, only preliminary results can be given. At the time of writing, 280 voices have been recorded; the minimum estimate is 500 recordings for a WER below 15%. However, a first acoustic model has been built with as few as 100 recordings, with very promising results: an informal internal evaluation with three testers obtained a WER of around 30%.
5 Conclusions and Future Work

This project is under development and its completion is estimated for December 2010. At that moment, full results will be available on the recognition rates obtained by real local users of the tool. In this paper, an easy and cheap procedure to create a new corpus has been described, along with how to use this newly available corpus to enable desktop control by voice for users with physical disabilities. The methods shown in this paper, although still under development, are of great interest to the many users whose languages have no open access corpora at the moment. Thus, the replication of this experience can be paramount for enhancing the accessibility of alternative systems in any language.
References 1. http://www.guadalinfo.es 2. Dobransky, K., Hargittai, E.: The disability divide in Internet access and use. Information, Communication & Society 9(3), 313–334 (2006) 3. http://htk.eng.cam.ac.uk/ 4. http://cmusphinx.sourceforge.net/ 5. http://julius.sourceforge.jp/en_index.php
6. Scherer, M.: Outcomes of assistive technology use on quality of life. Disability & Rehabilitation 18(9), 439–448 (1996)
7. Helander, E.: Training in the community for people with disabilities, World Health Organization, vol. 2 (1989)
8. http://www.gnome.org/
9. http://live.gnome.org/GnomeVoiceControl
10. http://simon-listens.org/
11. http://sourceforge.net/projects/vedics/
12. Zechner, K., Waibel, A.: Minimizing word error rate in textual summaries of spoken language. In: ACM International Conference Proceeding Series, vol. 4, pp. 186–193 (2000)
13. Sennhauser, R.: Improving the Recognition Accuracy of Text Recognition Systems Using Typographical Constraints. Electronic Publishing 6(3), 273–282 (1993)
14. http://www.voxforge.org/
15. http://www.indisys.eu/donatuvoz
SIeSTA: Aid Technology and e-Service Integrated System

C. de Castro¹, E. García¹, J.M. Ramírez¹, F.J. Burón¹, B. Sainz², R. Sánchez, R.M. Robles¹, J.C. Torres¹, J. Bell¹ and F. Alcantud¹

¹ Department of Computer Science, University of Córdoba, Campus of Rabanales, Madrid-Cádiz Road, km. 396-A, Albert Einstein Building, 14071 Córdoba, Spain
{egsalcines,jburon,ma1caloc}@uco.es
² Department of Communications and Signal Theory and Telematics Engineering, Higher Technical School of Telecommunications Engineering, University of Valladolid, Campus Miguel Delibes, Paseo de Belén nº 15, 47011 Valladolid, Spain
[email protected]
Abstract. Recently published social protection and dependence reports reaffirm that the elderly, the disabled, and those in situations of dependency objectively benefit from continuing to live at home with assistance from their direct family. Currently in Spain, amongst the elderly and people in a situation of dependency, 8 out of every 10 people stay at home. The end result is that direct family relations have the responsibility of performing 76% of the tasks of the daily routine where aid is needed¹. Associations for people with disabilities, however, report not only a lack of adequate aid services but also a lack of direct-family assistance. It is necessary, therefore, for an "evolution" or overhaul of the social and health service provision systems. The elderly, people in situations of dependency, and people with disabilities should be provided with enough resources and aids to allow them to decide their own future².

Keywords: HCI, iTV, Accessibility.
1 Home Control and Support Programs - Providing Health and Social Services

Recently published social protection and dependence reports reaffirm that the elderly, the disabled, and those in situations of dependency objectively benefit from continuing to live at home with assistance from their direct family. Currently in Spain, amongst the elderly and people in a situation of dependency, 8 out of every 10 people stay at home. The end result is that direct family relations have the responsibility of

¹ CERMI, "Discapacidad severa y vida autónoma", Centro Español de Representantes de personas con discapacidad (CERMI), 2002. Available at: http://www.cermi.es/documentos/descargar/dsyva.pdf
² García Alonso, J.V. (coordinador), "El movimiento de vida independiente. Experiencias internacionales", Fundación Luis Vives, Madrid, 2003. Available at: http://www.cermi.es/documentos/descargar/MVIDocumentoFinal.pdf
performing 76% of the tasks of the daily routine where aid is needed [1]. Associations for people with disabilities, however, report not only a lack of adequate aid services but also a lack of direct-family assistance. It is necessary, therefore, for an "evolution" or overhaul of the social and health service provision systems. The elderly, people in situations of dependency, and people with disabilities should be provided with enough resources and aids to allow them to decide their own future [2].

Structural changes in social and health services could potentially increase the well-being of a country's citizens through self-care programming and proactive management and prevention of disease. Automated home platforms can act as an accessibility instrument which permits users to avoid, compensate for, mitigate, or neutralize the deficiencies and dependencies caused by living alone. At the same time, they can improve the quality of the user's life by easing domestic device activation and the availability of external assistance resources. An automated home platform could improve the quality of services given to citizens, as well as optimize resource consumption. At this time, however, automated platforms present some limitations: reduced functionality, insufficient technological infrastructure support, high installation and management costs, lack of privacy, complexity, etc.

Home technologies are becoming the paradigm of Environmental Intelligence systems. They consist of a combination of computation technologies and intelligent interfaces which provide an ideal setting for developing adaptive systems. These systems are based on user information and automatically change functionalities and interaction aspects in order to accommodate the preferences and requirements of different people.
They can reorganize themselves through independent agents which react to changes in their environment, or make decisions proactively before those changes occur. This concept is revolutionizing the idea of the digital home.

SIeSTA: the adaptive, usable, and accessible user interface
SIeSTA is a new user interface concept which endeavors to achieve the objectives defined by Human-Computer Interaction. The iFreeTablet is the combination of different physical devices (tablet-PC, remote control, web camera, communication devices, home electronics, digital medical systems, and in general any element which can be integrated into or connected to a PC) and logical devices (applications) created for SIeSTA (Aid Technology and e-Service Integrated System). The iFreeTablet incorporates assistive technologies and new person-to-computer interface trends, such as multimodal ubiquitous systems and adaptive and intelligent hypermedia systems.

SIeSTA was designed to integrate computer science, the most recent "environmental intelligence" ubiquitous computation advances, and the newest user-machine intelligent interaction concepts into a human setting. Environmental intelligence consists in the creation of a series of everyday objects whose interactive qualities are "smooth" and non-intrusive. The ability of a person to communicate with the surrounding environment provides a range of possibilities for assistance in daily tasks, especially in areas related to services for people in a situation of dependency.

SIeSTA [3] is based on software (operating systems, web platforms, authoring tools and applications) derived from the Concept and Function Amplifier System (Concept Board or Keyboard), which was patented by the EATCO research group in 1988 [4]. This project was financed by IMSERSO, as was the web platform user interface
IPTVMunicipal [5], which was developed in 2007-2008 by CPMTI S.L. in collaboration with EATCO, the RedEspecial foundation, and Plan Avanza Contenidos Digitales from the Ministry of Industry, Tourism, and Commerce.
Fig. 1. iFreeTablet
Fig. 2. Concept board and IPTVMunicipal main category menu
Although it shares many of the characteristics (educational, professional, and entertainment tools) advertised by similar projects such as OLPC [6], Classmate [7], iPad [8], Google Tablet [9], etc., the iFreeTablet is "designed for everyone." This is to say that the iFreeTablet was developed to conform to the needs of people who typically don't have access to technology: the disabled, the elderly, children, people in situations of dependency, women in rural zones, etc. This product was the result of investigations performed by usability laboratories in collaboration with the Access Unit at the University of Valencia, following all accessibility models in order to
achieve the SIMPLIT certificate from the Biomechanics Institute of Valencia (IBV) and AENOR [10]. SIMPLIT is the first certificate which ensures that a product is easy to use and designed for the elderly [11]. As a result, the needs and experience of distinct user types provided the necessary framework to create a simple, consistent, accessible, usable, and adaptive interface that can be accessed via distinct means (multimodal), in any setting (ubiquitous), and that uses Environmental Intelligence, multi-agent intelligent systems, and the semantic web as aid technologies in order to achieve its objectives.

Human-to-computer interaction is possible through the iFreeTablet's multitouch screen, web camera, voice and movement recognition system, RFID system, or remote control (Natural Interaction System - iFreeSIN). Because of this, the barriers for people who have disabilities, created by high-cost or limited-access devices such as the keyboard, mouse, or special remote controls, are removed. The iFreeTablet has the functionality of a personal computer and the ergonomics and interface of a television. Internet, office applications, multimedia centers (music, movies, games, education, etc.), and digital home and medical device controls are accessible through an integrated system of local applications and an accessible web platform based on the Concept Desktop.

The Concept Desktop: a new desktop concept
Internet Protocol Television (IPTV) is currently the most common television/video signal distribution system using broadband IP connections. IPTV represents an alternative mechanism for the distribution of live and stored content, all of which is available through computers via the internet. Currently, however, no standard exists for the development of interactive IPTV interfaces.
This is mainly due to differences in human-to-device interface technology, such as a larger screen with more resolution, and to the fact that the majority of remote controls have not been designed to deal with voice or movement recognition. One of the components paired to work with the iFreeTablet is a colored remote control (the iFreeMando). This device was the culmination of usability studies that focused on the elderly, the disabled, and children. Pressing a button, depending on the context or scenario, works the same as pressing a keyboard key, a keyboard combination, or a mouse click. An application, in turn, detects which action was taken and then performs that task.
Fig. 3. Two iFreeMando models adapted with accelerometer and gyroscope
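The context-dependent button behavior described above can be sketched as a simple dispatch table in which the same colored button maps to different actions depending on the active scenario. Every scenario, button color, and action name below is hypothetical.

```python
# Sketch of iFreeMando-style dispatch: one table per scenario maps a
# button color to an action.  All names are illustrative placeholders,
# not part of the real SIeSTA software.

CONTEXT_BINDINGS = {
    "media_player": {"green": "play", "red": "stop", "yellow": "next_track"},
    "mail":         {"green": "send", "red": "delete", "yellow": "reply"},
}

def dispatch(context, button):
    """Return the action bound to `button` in `context`, or None."""
    return CONTEXT_BINDINGS.get(context, {}).get(button)

print(dispatch("media_player", "green"))  # play
print(dispatch("mail", "green"))          # send
```

Keeping the bindings in data rather than code is what lets the same six physical buttons drive every scenario of the interface.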
The Concept Desktop interface is designed in such a way that any system entity (category, application, scenario, etc.) can be accessed via a remote control with six colored buttons. This simplifies the system to the point where a user is only faced with six options at a time and can activate any action or application with one movement, gesture, or voice command. The SIeSTA "white book" or interface guide [12] clarifies the interface specifications which an application adapted for the Concept Desktop should meet. This guide was made in hopes of encouraging the development of free software (whether for GNU/Linux operating systems or web applications) which can be integrated into SIeSTA. Software which follows these guidelines would not only be accessible for any type of user, whether disabled or not, but could also be compatible with a variety of different platforms such as interactive IPTV, third-generation mobile telephones, tablets, pads, UMPCs, and computers.

The Concept Desktop, which is the SIeSTA operating system interface, is software designed to offer the user comfortable and friendly interaction. This interaction is achieved through a fully graphical interface consisting of icons, buttons, and tool bars that can be pushed, scrolled, and dragged [13]. The purpose of the Concept Desktop is to achieve accessible, adaptable, and usable interaction between an operating system and people in a situation of dependency (the disabled, the elderly, etc.). The distinct SIeSTA interface elements or concepts are classified as: ontologies, categories, scenarios, galleries, viewers, applications, resources, activities, content, metadata, semantic web, and intelligent multi-agents.
Fig. 4. SIeSTA category menu
Ontology: in computer science, an ontology refers to the exhaustive and rigorous design of a conceptual scheme within one or various determined areas, with the end result of easing information communication and sharing between different systems and entities [14].
Categories/subcategories: A category is one of the most abstract and general notions by which an entity can be recognized, differentiated, and classified into a hierarchy. Entities that are similar or have common characteristics form one category, and in turn those categories which share aspects can form a superior category [15]. The distinct Concept Desktop activities and contents are grouped (by default) into six categories which appear in the main configurable SIeSTA menu: 1. Leisure, 2. Home, 3. Health, 4. Education, 5. Communication, and 6. Preferences.

The category "leisure" is subdivided into: multimedia, games, social networks, YouTube, and web news.
• The subcategory multimedia is divided into: video club, television, music, photos, and online radio.
  - Video club contains: online movies, iFreeTablet movies, and external device movies.
  - Television contains: iFreeTablet channels and TDT channels.
  - Music contains: online music, iFreeTablet music, external device music, Spotify, and recorder.
  - Photos contains: server photos, iFreeTablet photos, external device photos, and take photos and videos (Cheese).
• The subcategory games is divided into: Linux and internet.
• The subcategory social networks is divided into: iFreeSocial, Facebook, Tuenti, and Twitter.

The category "education" is subdivided into: office, book reader (FBReader), Wiki-courses, PDF reader (Evince), paint, writer (Xournal), text editor (Gedit), and events.
• The subcategory office contains OpenOffice.
  - OpenOffice contains: the "Writer" word processor, the "Calc" spreadsheet, and "Impress" presentations.

The category "home" is subdivided into: lights, devices, and computers.

The category "health" is subdivided into: phonendoscope, tensiometer, measurements, and video-assistance.

The category "communication" is subdivided into: internet navigation, video-chat, mail, calls, messenger, and RSS reader.
• The subcategory mail is divided into: send mail, receive mail, and agenda.

The category "preferences" is subdivided into: accessibility system, controls, connectivity, favorites, modify personal information, backend, internet connection, document explorer, remote control connection, VPN, and connectivity.
• The subcategory accessibility system is divided into: screen sweeper, auditive menu zoom, head-to-cursor gesture control, Orca, and menu sound system.
  - Screen sweeper contains: deactivate, activate, deactivate sound, and activate sound.
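The default category tree can be modeled as a nested dictionary, with a helper that returns the (at most six) options visible at a given menu level. Only a fragment of the tree is encoded below; the names follow the text, but the data structure itself is an illustrative assumption, not the actual SIeSTA implementation.

```python
# A fragment of the Concept Desktop category hierarchy as nested dicts.
# Empty dicts mark categories whose children are omitted for brevity.

MENU = {
    "Leisure": {
        "Multimedia": {"Video club": {}, "Television": {}, "Music": {},
                       "Photos": {}, "Online radio": {}},
        "Games": {}, "Social networks": {}, "YouTube": {}, "Web news": {},
    },
    "Home": {}, "Health": {}, "Education": {},
    "Communication": {}, "Preferences": {},
}

def options_at(path, menu=MENU):
    """Return the option names at `path` (a list of category names)."""
    node = menu
    for name in path:
        node = node[name]
    return list(node)

print(options_at([]))  # the six top-level categories
print(options_at(["Leisure", "Multimedia"]))
```

Since each level holds at most six entries, each option can be bound directly to one of the six colored buttons of the remote control.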
• The subcategory controls is divided into: connect to projector, sound control, brightness control, and connect to printer.
• The subcategory backend is divided into: menu, health, and home.
• The subcategory connectivity is divided into: iFreeMando and mobile telephone.

More categories, subcategories, applications, and content can be added through iFreeMenu, a web application designed to configure websites and web platforms.

Scenarios: Each SIeSTA interface concept is a composition of base information units called scenarios. A scenario is a template which contains the distinct interface types that determine interactivity and navigation. Several scenario types exist: galleries, viewers, and interactive objects.

Galleries: A way to present an organized collection of information elements (photos, videos, PDFs, etc.) is through a content gallery.

Viewers: Once an element in a gallery is selected, it can be visualized through a viewer, which provides options and more detail.

Applications: In computer science, an application is a type of program designed as a tool for carrying out certain tasks. There are two application types: local (OpenOffice Writer, for example) and web (e.g., Wiki-courses). Local applications that have been integrated into SIeSTA still haven't been adapted to the Concept Desktop. The majority of web applications integrated into SIeSTA have been personalized by CPMTI.

Content: Texts, images, photos, videos, learning objects, etc.

Metadata: In general, metadata is data that describes other data. This is analogous to the use of indexes to find information. Libraries, for example, use cards which specify authors, titles, publishing houses, and places to find books. Metadata works in this way as well.

Semantic web: Based on the idea of adding semantic and ontological metadata to the World Wide Web.
This additional information, which describes content, its significance, and its relation to other data, should be provided in a formal manner so that automatic machine processing is possible. The objective is to improve the Internet by increasing communication between systems using “intelligent agents.”

Intelligent agents: These are computer programs, operating without human intervention, which look for information.

Many Ibero-American, European, and Spanish groups and centers have aided in the development of the iFreeTablet. Among these groups are ACCESO from the University of Valencia, directed by Francisco Alcantud; the SIA group (Applied Intelligent Systems) [16] from the Polytechnic University of Madrid, directed by José Gabriel Zato; the Didactic Engineering laboratory of the Philological Section, directed by Germán Ruipérez; Antonio Rodríguez de la Heras, director of the Technological and Cultural Institute at the University Carlos III [17]; the Information Society research group coordinated
166
C. de Castro et al.
by Miguel López Coronado [18]; and the Virtual Teaching Center of the University of Huelva, directed by Alfonso Infante Moro [19]. The iFreeTablet is ideal as an easy-to-understand educational computer for teaching children. It integrates the e-Aprendo platform based on Moodle, with free and open learning objects (interactive multimedia courses under a Creative Commons license). The iFreeTablet bases its technology on the following development premises:

• EATCO's patented “Concept and Function Amplifier System”
• Usability, accessibility, and adaptability “for everyone”
• Compatibility with any system that has a Linux kernel or FreeBSD
• Use of the Concept Desktop as blueprint
• Interactive accessibility with the PC and digital TV
• Emphasis on natural and multimodal interaction
2 Physical Characteristics

Physically, the iFreeTablet is the combination of a touch-screen tablet PC and a remote control, paired with hardware support for various devices which permit interaction with other systems. The tablet PC has a 10.2-inch screen with 1024x600 pixel resolution and a 1.6 GHz Intel Atom Mobile N470 processor, with the following components:

• 160 GB SATA hard drive
• Ethernet connection, WLAN WiFi
• 1.3-megapixel camera
• 3 USB ports, 1 VGA port, 1 earphone jack, 1 microphone jack, 1 internal microphone, 1 RJ-45 LAN port, 1 DC-in jack, 1 4-in-1 card reader
• 1 DIMM slot, 2 mini PCI-E slots for WiFi 802.11b/g (54 Mbps) and 3G/3.5G HSDPA/WCDMA cards
• 5-hour battery life, 35 W adaptor, thermal cooling system with intelligent fan
• Kensington lock security
• Dimensions: 28 x 18 cm, 2 cm thick
• Weight: 1.03 kg (including battery)
Nucleus
The nucleus of the iFreeTablet is the adaptation of Ubuntu GNU/Linux and the IPTV-Municipal Web platform to the specific characteristics and hardware components of the device.

iFreeTablet Applications

For Leisure
This module, comprising a multimedia center, games, educational courses, and interactive TV, serves as the base entertainment service.
Fig. 5. SIeSTA Multimedia resources center menu
For Games
From the iFreeTablet, the user can access categorized Linux- or web-based games (sports, logic, arcade, etc.).
Fig. 6. Game menu
For Education
From the iFreeTablet, the user can access Guadalinex EDU applications, interactive multimedia courses such as the European Computer Driving License (ECDL),
OpenOffice, web browsers, Mozilla mail, and any free or Creative Commons-licensed application.
Fig. 7. Education menu
Interface
The design of the iFreeTablet interface focuses on person-to-computer interaction, with usability and accessibility as the main objectives. Accessibility consists of providing access to content without any limitation, avoiding any barriers that disabilities can create. The International Organization for Standardization defines usability, in ISO/IEC 9126, as the “ability of software to be understood, learned, and utilized by the user and attract the user's attention in specific use conditions.” This definition stresses the internal and external product attributes which contribute to usability, functionality, and efficiency. Usability does not depend only on the product, however, but also on the user. For this reason, no product is intrinsically usable; rather, each product has the capacity to be used in a particular context by particular users. Usability cannot be evaluated through isolated study. By using the Concept Desktop as a standard, simplifying actions, providing a fully graphic interface, and creating multimodal interaction, the principal requirements of usability and accessibility are achieved.
References

1. De Castro, C., et al.: Tecnologías de la información y las comunicaciones y discapacidad, dependencia y diversidad. Fundación Vodafone (2005)
2. De Castro, C., García, E.: Herramienta autor INDESAHC para la creación de cursos hipermedia adaptativos. Revista Latinoamericana de Tecnología Educativa (RELATEC), pp. 1–12 (2004)
3. García, E., Romero, C., Ventura, S., De Castro, C.: An architecture for making recommendations to courseware authors through association rule mining and collaborative filtering. User Modeling and User-Adapted Interaction (UMUAI), vol. 19, pp. 99–132. Springer, Berlin (2009)
4. García, E., Romero, C., Ventura, S., De Castro, C.: Sistema recomendador colaborativo usando minería de datos distribuida para la mejora continua de cursos e-learning. IEEE RITA: Revista Iberoamericana de Tecnologías del Aprendizaje (2008)
5. García, E., Romero, C., Ventura, S., de Castro, C.: Using rules discovery for the continuous improvement of e-learning courses. In: Corchado, E., Yin, H., Botti, V., Fyfe, C. (eds.) IDEAL 2006. LNCS, vol. 4224, pp. 887–895. Springer, Heidelberg (2006)
6. García, E., Romero, C., Ventura, S., De Castro, C.: Algoritmos evolutivos para descubrimiento de reglas de predicción en la mejora de sistemas educativos basados en web. IE Comunicaciones (2005)
7. García, E., Romero, C., Ventura, S., de Castro, C.: Using rules discovery for the continuous improvement of e-learning courses. In: Corchado, E., Yin, H., Botti, V., Fyfe, C. (eds.) IDEAL 2006. LNCS, vol. 4224, pp. 887–895. Springer, Heidelberg (2006)
8. De Castro Lozano, C., et al.: Wiki tool for adaptive, accessibility, usability, collaborative hypermedia courses: Wikicourse. In: Congress: Current Developments in Technology-Assisted Education (2006)
9. Burón Fernández, F., Artiles, A., García Salcines, E., de Castro, C.: E-aprendo, virtual learning management based on Moodle. In: Congress: Current Developments in Technology-Assisted Education (2006)
10. García Salcines, E., de Castro Lozano, C.: Producción de materiales educativos en páginas web. In: Congress: IV Conferencia Internacional de la educación y la formación basada en las tecnologías: Online Educa Madrid (2004)
11. García Salcines, E., de Castro Lozano, C.: Herramienta autor INDESAHC para la creación de cursos hipermedia adaptativos. In: Congress: XII Jornadas universitarias de tecnología educativa (2004)
12. De Castro, C., et al.: Sistema de desarrollo integrado para cursos hipermedia adaptativos. In: Congress: IV Congreso de educación en ingeniería y tecnología (2004)
13. De Castro, C., et al.: Sistema ampliador de funciones o conceptos para ordenador. Application number: 8802199, July 13 (1988)
14. De Castro, C., et al.: Trazador bucal para ordenador. Application number: 9301765, August 06 (1993)
15. De Castro, C., et al.: SIeSTA sistema integrado de e-servicios y tecnologías de ayuda. Application number: 052645879, May 16 (2005)
16. De Castro, C., et al.: TELEDOMEDIA. Application number: 2.645.878, May 16 (2005)
17. De Castro, C., et al.: e-APRENDO. Application number: 2.673.413, May 16 (2005)
18. De Castro, C., et al.: E-TRABAJO. Application number: 2.673.399, May 16 (2005)
19. De Castro, C., et al.: E-PORTAL. Application number: 2.673.406, May 16 (2005)
20. De Castro, C., et al.: CIBERLANDIA. Application number: 2.672.226, May 16 (2005)
21. De Castro, C., et al.: INDESAHC. Application number: 2.673.406, May 16 (2005)
22. De Castro, C., et al.: GUION EDITOR. Application number: 2023, September 17 (2001)
23. De Castro, C., et al.: HAM VISUAL. Application number: 2570, September 17 (2001)
24. De Castro, C., et al.: HAM CD. Application number: 2569, September 17 (2001)
25. De Castro, C., et al.: HAM WEB. Application number: 2028, September 17 (2001)
SIeSTA Project: Products and Results

Carlos de Castro Lozano2, Javier Burón Fernández2, Beatriz Sainz de Abajo1, and Enrique García Salcines2

1 Department of Communications and Signal Theory and Telematics Engineering, Higher Technical School of Telecommunications Engineering, University of Valladolid, Campus Miguel Delibes, Paseo de Belén nº 15, 47011 Valladolid, Spain
[email protected]
2 Department of Computer Science, University of Córdoba, Campus of Rabanales, Madrid-Cádiz Road, km 396-A, Albert Einstein Building, 14071 Córdoba, Spain
{egsalcines,jburon,ma1caloc}@uco.es
Abstract. The goal of this project is the development of several computer systems, defined as “deliveries,” that can be exchanged between customers and developers over the course of the project. These deliveries are classified as relating either to project management or to the project objectives.

Keywords: Human-Computer Interface, iTV, Accessibility.
1 Introduction

The project management deliveries refer to documents about the project situation. In this case, the objective is to control the project, and the following items are included: project planning, budget, costs, work reports, documents about planning or quality control, risk studies during execution, etc. The objective-related deliveries refer to documents, applications, prototypes, or devices concerning the information system and the development of the computer system. These deliveries are: system requirements, system specifications, design documentation, source code, executable programs, hardware devices, user manuals, etc.
2 Project Management Deliveries

2.1 System Specification
• System description (DFDs, etc.)
• Data requirements
• Telecommunication requirements
• Hardware requirements
• Integration test planning

2.2 Design
1. System Description in Detail
• Programs, reusable modules, and objects
• Files and databases
• Transactions
• Data dictionary
• Procedures
• System load and response times
• Software and hardware interfaces
2. Control System Description
3. Advised Alternative Designs
4. Advised Programming and Design Standards
4.1 Advised implementation techniques: own code, purchase of code packages, external hiring, etc.
4.2 Planning of computer application tests
4.3 Execution
1. Final design documents of the system and of every application
2. Final outlines of the system and computer applications
3. Detailed description of every application's logic
4. Input and output descriptions (files, screens, lists, etc.)
5. List of applications with comments
6. Execution processes, if necessary (JCL, scripts, etc.)
7. Test results of every unit
8. Test results of every application
9. Integration test results
10. Guide for system operators
11. Training program for operators
12. Manual for system users
13. In every stage of the project, documents about the evaluation and planning of the next stage and the rest of the project must be added. Also, the project index will be updated.
3 Objectives-Related Deliveries

Deliveries related to the project objectives will be specified for every stage or activity:
1. Process redesign
2. Analysis of requirements and planning
3. Execution
4. Evaluation and budget planning
5. Project monitoring

1. Deliveries of the stage “Process Redesign”
The delivery of this first stage is the Project Support document. This study will contain:
1. Reference models among international and national experience
2. State of the art of Information and Communication Technologies applied to elderly and disabled persons
3. Process and product development methodology
4. Improvement and automation of social-sanitary care and geriatric healthcare systems
5. Sanitary programs
6. Improvement and automation of home care systems
7. Dependency management training for health technicians, care professionals, and end users
8. Design of an integration platform oriented to e-services and assistive technologies, applied to a test house
2. Deliveries of the stage “Requirements Definition and Planning”
The deliveries of this stage are two documents:

Document 2.1 Improvements and Innovation. It shows the causes of the possible changes in the process and the selection of possible improvements to be introduced.

Document 2.2 Project Planning. Design and development of an overall plan showing how every stage, activity, and task will be carried out, with a chronogram, and where the applications of the SIeSTA configuration will be specified: Servicios de Asistencia a Domicilio, Validación del Plan de Cuidados, Tecnologías de Ayudas y e-Servicios y Plan de Aprendizaje para la Calidad de Vida y Manejo de la Dependencia (Home Care Services, Care Plan Validation, Assistive Technologies and e-Services, and Learning Plan for Quality of Life and Dependency Management).
4 Deliveries of the Stage “Project Execution”

This stage covers the acquisition, development, adaptation, and integration of the final products belonging to the different modules included in the SIeSTA system.

4.1 Subproject 3.1: Content Generator and Administrator
Based on the content management system Teledomedia-Moodle-Typo3 (source code) and on the multimedia systems production authoring tools INDESAHC and TCAutor, this will be the core system that allows the SIeSTA information and services to be organized. It will contain four main modules: User Interface, Multimedia Database Management, Adaptive Virtual Agent, and the INDESAHP authoring tool.

Module 3.1.1 User Interface. Based on the standard use of Internet and interactive TV protocols. The WWW will be used as the tool for diffusion of contents and services, with HTML combined with the CGI protocol and the Java language, as well as .NET technology and open-source PHP.

Module 3.1.2 Multimedia Database Agent. It will be based on a system of classification of data by categories and on a simple, adaptive, and intuitive design for data search criteria. The generic character of this module will allow us to extend the management of data as much as our application field requires. This module will be composed of the following applications:

• Administrative Management. It will allow automated control of the administration of the system. Related to the management and activity-monitoring module and to the other e-service applications, an interface will attempt to unify all the query options and access to this information.
• Management and administration of the server. It will allow all administration tasks of the SIeSTA server to be carried out through web pages.

Module 3.1.3 Author System. Multimedia authoring tools for the automatic generation of adaptive hypermedia systems for interfaces based on web pages and interactive TV.
It will enable the author of contents to generate interactive, accessible, and adaptive multimedia products. This module will be composed of the following applications or products:

• INDESAHP (Integrated Development System for Adaptive Hypermedia Products), whose objective is to automate the stages of script development, programming, and integration of an adaptive hypermedia product for interactive TV.
• GuionEditorTVI, an application for writing scripts of multimedia products for interactive TV.
• EPRules, a tool for the discovery of information and the evaluation and tracking of users' aptitudes, needs, or preferences, with the objective of designing an adaptive, personalized system.
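The classification-by-categories idea described above for the multimedia database agent (Module 3.1.2) can be sketched in a few lines. This is a purely illustrative model, not the SIeSTA implementation; all item and category names are hypothetical:

```python
# Catalog where each multimedia item is tagged with one or more
# categories; searches narrow by category rather than free text.

from collections import defaultdict

class MediaCatalog:
    def __init__(self):
        self._by_category = defaultdict(list)

    def add(self, item, *categories):
        for cat in categories:
            self._by_category[cat].append(item)

    def search(self, *categories):
        """Return items present in every requested category."""
        if not categories:
            return []
        sets = [set(self._by_category[c]) for c in categories]
        return sorted(set.intersection(*sets))

catalog = MediaCatalog()
catalog.add("photo-001.jpg", "images", "family")
catalog.add("intro.pdf", "documents", "courses")
catalog.add("lesson1.avi", "video", "courses")

course_items = catalog.search("courses")
course_videos = catalog.search("courses", "video")
```

Because the index is generic (any string can be a category), the same structure scales to new application fields simply by adding new category names, which is the flexibility the module description calls for.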
4.2 Subproject 3.2: Virtual Learning Community and Communication
The deliveries of each subproject can be classified as products or modules, and services. The modules or products in subproject 3.2 are based on the learning system Teledomedia-Moodle, developed by the EATCO research group and registered by the RedEspecial España Foundation. The services derived from subproject 3.2 are those that were previously proposed for a Digital Home. Next, we describe the modules and services of subproject 3.2:

• Module 3.2.1 Textual Communication. It will facilitate the creation of channels and/or materials for textual communication, for use in real time (chat) or asynchronously (electronic mail, forums). This module will be the base of the Textual Communication service, very useful for blind or visually impaired people.
• Module 3.2.2 Audiovisual Communication. The implementation has been based on the use of free commercial software or open software, depending on the characteristics of the application field. We can distinguish:
- Real-time audiovisual communication, with modules corresponding to unicast and multicast communication. In the unicast case, a free, NetMeeting-compatible messaging application has been integrated. For multicast videoconferencing, the option is the use of the characteristic Mbone tools (VIC). This system will be the base of the Videoconference service, which allows a conversation to be maintained with one or several people while their images are being received. Also, the Unified Messaging service will allow mobile phone, PDA, and PC users to connect their messages with the SIeSTA system so that they can be shown on the television or on any other output (audio) peripheral, thereby integrating and simplifying communications.
- Audiovisual communication on demand.
Video streaming applications served by a server at the speed required by the client, which allows the line requirements needed to receive an emission to be adapted to the connection conditions (RTC, ISDN, ADSL, Frame Relay, ...). The software chosen to implement this communication type has been QuickTime, RealVideo, and Windows Media. This system will be the base of the Video on Demand service, which consists of the reception of general-interest or thematic channels through an ADSL line or any other broadband access; it includes access to a video club and the selection of movies from a virtual video library.
• Module 3.2.3 Multimedia Products and Services Agent. It will be in charge of organizing the products and services that compose the system: hyperdocuments, links, images, audio, video, courses, games, etc. It will be related to the Catalog of Services and Products.
This module will be the base of the Entertainment service, which includes the virtual mini hi-fi system, the virtual DVD player, distribution lists, calendars, bulletin board, debate forums, virtual cafeteria, virtual library, game rooms, virtual stores, and widgets to watch favorite programs or to consult the weather, the state of the highways, the airports, the stock market, the evolution of sporting events, shopping, games, etc.
• Module 3.2.4 Catalog of Services and Products. Through this module a service or product can be selected, acquired, and executed in such a way that the data of the user's requests are registered and sent to the adaptive virtual assistant. It will not only display a list of the available services and products; it also includes all the logistics necessary to access the related information, to search or download its contents, and to facilitate its later execution. This module will be the base of the Adaptive Multimedia Virtual Library service, which will be able to provide consultation and acquisition services for books, documents, audio, video, and courses, whether in-house or from other suppliers. It is also constituted as a multiplatform that combines television broadcasts from the traditional channels and from future interactive digital television (satellite or digital terrestrial television): TV à la carte, game rental, teleshopping, electronic commerce, telebanking, etc.
• Module 3.2.5 Learning and Collaborative Work. Based on the e-learning system Teledomedia-Moodle and the Collaborative Project Manager (GPC), it will allow the creation of virtual work spaces shared by all types of user profiles (facilitators, technicians, caretakers, tutors, professors, doctors, patients, students, etc.) that facilitate learning processes, project development, and collaborative work.
This module will give rise to the Tele-training service, which implies the creation of a virtual “classroom” to follow or deliver on-line courses, and the Teleworking service, which will allow the user to work from home.

4.3 Subproject 3.3: Control of the Home Automation System Installed in the House: Accessible Interactive Digital Adaptive House (CADIA)
This subproject has one module or deliverable product:

Module 3.3.1: Kit for the installation of a local area network (LAN) that allows the different digital devices of the home to be connected using standard protocols (X10, WLAN, Ethernet, HomePNA); a home automation control system, accessible through the Internet, for the management of the home's digital devices and of automatisms that ensure the security of the housing, environmental control, and energy saving, and that facilitate heavy tasks by communicating alarms or incidents to a central server. This kit is the base of the Digital Home Management services, which comprise the management of the operation of all the digital devices of the home, both locally and remotely, through the Internet.
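The kind of control layer such a kit implies can be sketched as follows. This is an illustrative model only (device names, message formats, and the controller API are all hypothetical, not part of the SIeSTA kit): devices on the home LAN are addressed uniformly regardless of the underlying protocol (X10, WLAN, Ethernet, HomePNA), and alarms are collected for forwarding to a central server.

```python
# Uniform addressing of heterogeneous home devices plus an alarm log
# destined for a central server.

class Device:
    def __init__(self, name, protocol):
        self.name, self.protocol, self.on = name, protocol, False

    def switch(self, on):
        self.on = on
        # A real kit would emit a protocol-specific frame here.
        return f"{self.protocol}:{self.name}:{'ON' if on else 'OFF'}"

class HomeController:
    def __init__(self):
        self.devices = {}
        self.alarm_log = []  # would be forwarded to the central server

    def register(self, device):
        self.devices[device.name] = device

    def command(self, name, on):
        return self.devices[name].switch(on)

    def raise_alarm(self, source, message):
        self.alarm_log.append((source, message))

hc = HomeController()
hc.register(Device("hall-light", "X10"))
hc.register(Device("front-door", "WLAN"))
msg = hc.command("hall-light", True)
hc.raise_alarm("gas-sensor", "gas leak detected")
```

The point of the uniform `command` entry point is that local and remote (Internet) management can share the same code path, which is what the kit description requires.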
To classify these services, two categories can be established: Independent Living Support services, and services to facilitate communication, comfort, and leisure.

Services to facilitate communication and leisure are those related to comfort, security, the environment, and energy economy in the home. They can be classified in the following categories: automation and access control, control of technical alarms, air conditioning, control and diagnosis of appliances and energy saving, and security.

Automation and access control
• Illumination by presence detection
• Lifestyle programming
• Control and management of energy
• Electronic access to the home (video entry systems)
• Control of access schedules and visit profiles

Control of technical alarms
• Detection of gas, water, fire, and smoke leaks
• Warnings and automatic calls (telephone, e-mail, SMS)
• Automatic execution of preventive actions: shutting off the water supply valve, opening doors and blinds, etc.

Air conditioning
• Control of boilers and air conditioning
• Watering control

Control and diagnosis of appliances and energy saving
• Remote switching of appliances on and off
• Night programming for energy saving
• Telediagnosis by means of remote checking of the state of the appliances
• Remote reading of water, gas, and electricity meters, etc.

Security
• Video surveillance
• Telesecurity

Independent Living Support services are focused on socio-sanitary care and the care of people in situations of dependency. They can be classified in the following categories:

Automatic decision-support system that allows:
• Planning the day or the week, reminding the user of the task to be carried out at each moment
• Providing step-by-step guides associated with tasks of interest or planned by the user: programming the washing machine, dosage and administration of medications, preparing food according to a menu predefined by a specialist, or a guided chart of rehabilitative physical exercises that does not require the active participation of a specialist
• Facilitating the establishment of telephone calls with family or friends

Assisted system for rehabilitation and delay of dependency:
• Formative activities for relearning, with the purpose of avoiding bad habits in older adults
• Rehabilitative activities, in the form of simple games which, by means of the inclusion of adaptive mechanisms and the specialist's supervision, the users progress through little by little, with the objective of maintaining, improving, or rehabilitating a series of areas that can be affected by the passage of time, neurodegenerative diseases, or accidents involving brain damage
• Activities to evaluate and improve the level of attention and concentration, that is, whether the individual is capable of focusing attention on a point, maintaining it for the desired time, and changing the point of attention at will. By means of the presentation of on-screen stimuli to which the individual must respond immediately, it is possible to intervene in the individual's alertness capacity
• Quick detection of visual perception problems: visual agnosias, hemianopsias (tachistoscopic evaluation of attention)
• Memory problems: amnesias, semantic and/or procedural memory loss. Training of visual memory to detect functional memory problems in day-to-day tasks, through activities in which a task is organized to be carried out step by step
4.4 Subproject 4: Platform for the Integration of e-Services for Interactive TV
The modules or deliverable products of this subproject are:

Module 4.4.1: Mixed PC-TV platform with an integral architecture for the provision of e-services, so that the user uses a single communication mechanism. This architecture should be open so that future services of interest can be integrated, dynamically and transparently, with the same homogeneous interface. The architecture is “independent” of the technology, since it would be implemented following an SOA pattern (Service-Oriented Architecture). These e-services could be implemented in diverse technologies (.NET, Java, JINI, PHP) and would be “consumed” by the user through a common graphical user interface. These services could make use of communications technologies (UMTS, xDSL, ...) to offer or send on-line information, or use only the local resources of the user's hybrid PC-TV. The starting open-source projects are Freevo and MythTV for Linux, MediaPortal for Windows, and the Widgets of Tiger.
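The SOA pattern named above can be illustrated with a minimal sketch: each e-service implements one common interface and registers with the platform, so the user-facing GUI consumes every service, present or future, through the same entry point. All class and service names here are hypothetical, not part of the project's deliverables:

```python
# One abstract contract for all e-services; the platform routes every
# GUI request through a single call() mechanism.

from abc import ABC, abstractmethod

class EService(ABC):
    name: str

    @abstractmethod
    def invoke(self, request: dict) -> dict:
        """Handle a request and return a response, technology-independent."""

class WeatherService(EService):
    name = "weather"

    def invoke(self, request):
        # A real service might fetch data over UMTS/xDSL; here it is canned.
        return {"service": self.name, "city": request["city"], "forecast": "sunny"}

class Platform:
    def __init__(self):
        self._services = {}

    def register(self, service: EService):
        # New services can be added dynamically, keeping the platform open.
        self._services[service.name] = service

    def call(self, name, request):
        # The GUI always goes through this single homogeneous entry point.
        return self._services[name].invoke(request)

platform = Platform()
platform.register(WeatherService())
resp = platform.call("weather", {"city": "Córdoba"})
```

Because the GUI depends only on the `EService` contract, a service written in another technology could be wrapped behind the same interface without changing the client, which is the point of the SOA pattern.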
Module 4.4.2: Chip for person-interactive TV interaction and ubiquitous interfaces, covering supervision systems, monitoring, environment control, and location (wandering control), which communicates with all the digital devices of the home.

Module 4.4.3: Prototype of a universal, ubiquitous remote control with speech recognition, based on the buccal tracer with speech recognition developed by the EATCO group.

Module 4.4.4: Integrated system of on-line checkups with domestic devices (electrocardiograph, spirometer, glucometer, blood pressure monitor, pulse oximeter, etc.), sending the data via the Internet to a central server where a professional will issue a preventive diagnosis and send it to the user, or will activate an alarm mechanism.

Module 4.4.5 Adaptive Virtual Agent. An automatic decision-support system, it will use intelligent agents (bots) that will guide or inform the user about the different events that arise in the system. They will be able to carry out different tasks, among them serving as guides for the different modules of SIeSTA, working as clerks in virtual stores (thereby facilitating commercial transactions by means of electronic commerce), or simply supporting the user upon request. Likewise, they will behave transparently to the user in administration tasks, data searches, etc. But mainly they will be able to act as adaptive virtual agents, adapting to the user's rhythm and needs and advising and helping in habitual tasks and activities. In principle the intelligent agents (bots) will use systems based both on algorithmic intelligence and on artificial intelligence. These bots will have access to different databases that will serve as support so that the bot can offer information to the users of the SIeSTA system.

Module 4.4.6 Hospitalization at Home.
A system that facilitates early detection or rehabilitation programs for cognitive problems by means of continuous monitoring and control of vital parameters.

Module 4.4.7 Wireless user terminals and ambient technologies for daily life (invisible motes, ubiquitous interfaces, contents, applications for generic services, 3G communications, GPS, new geographical localization systems, home automation networks, AmI platforms, RFID tracking systems, ubiquitous computing, advanced network technology, intuitive intelligent interfaces embedded in everyday objects and environments, etc.).

Module 4.4.8 Specific middleware services (patient identification service, access control and user privileges service, signal capture and device configuration modules, access service to the monitoring devices, clinical history server, module for reading non-standard clinical histories and conversion to preENV 13606, and others).

Module 4.4.9 Natural language recognition and synthesis system.

4.5 Subproject 5: Automation of a Center of Assistive Technologies for Personal Autonomy (CTAP)
Module 5.1 System of automatic control of the intelligent building:
• Supplies: water, gas, electricity
• Air conditioning (cooling, heating, and ventilation)
• Illumination (interior and exterior)
• Parking controls
• Elevators
• Schedule control (entrances, exits, presence, ...)
• Negative-pressure conduits for cleaning (vacuum)
• Uninterruptible power supplies, generator sets
• Protection against sabotage and malfunctions
• Presence detectors
• Detectors of vibrations and seismic movements
• Computer security, backups, access to information, and data encryption
• Detection of static electricity and dust levels
• Closed-circuit television
• Perimeter surveillance
• Control and blocking of accesses
• Surveillance of premises and objects
• Anti-intrusion protection
• Control and confirmation of surveillance rounds
• Emergency communications
• Connection with the police, firefighters, or others
• Equipment for confirmation of the previous systems and verification of alarms
• Fire detection (smoke and flame)
• Detection of gas escapes or leaks
• Activation and/or monitoring of anti fire propagation equipment
• Fire-fighting systems (sprayers or sprinklers)
• Automatic smoke evacuation
• Miscellaneous alarms
• Signaling and emergency public address
• Emergency telephony (internal or external)
• Connection with the police, firefighters, or others
• Equipment for confirmation of the previous systems and verification of alarms
• Air conditioning by areas: control of temperature and humidity
• Heat exchange between areas of the building and use of the outside air
• Use of energy accumulators (cold or heat) to shift consumption to lower-cost time bands (for example, night tariff)
• Possible use of alternative energy sources (solar, wind, ...)
• Statistical information (numeric or graphical analysis) of consumption points
• Automatic activation of illumination (with schedules and with presence detectors)
• Control of hot water circuits
• Control of accesses, elevators, freight lifts, and itineraries
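Several of the items above (fire and gas detection, automatic preventive actions, warnings by telephone, e-mail, and SMS) follow one common pattern that can be sketched briefly. This is an illustrative model only; the alarm types, actions, and channels are assumptions, not the building system's actual configuration:

```python
# Alarm chain: a detector event triggers an automatic preventive action
# and a notification over each configured channel.

PREVENTIVE_ACTIONS = {
    "gas":   "close gas valve",
    "water": "shut off water supply valve",
    "fire":  "activate sprinklers and open evacuation doors",
}

def handle_alarm(kind, notify):
    """Run the preventive action for an alarm and send notifications.

    `notify` is any callable taking (channel, message), e.g. a sender
    that dials a telephone, mails, or texts.
    """
    action = PREVENTIVE_ACTIONS.get(kind, "raise generic alarm")
    for channel in ("telephone", "e-mail", "SMS"):
        notify(channel, f"{kind} alarm: {action}")
    return action

sent = []
action = handle_alarm("water", lambda ch, msg: sent.append((ch, msg)))
```

Separating detection, preventive action, and notification keeps each alarm type a one-line configuration entry, which matches the long per-subsystem lists above.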
Module 5.2 Automatic control of the systems for prevention and treatment of dependence (electrotherapy, mechanotherapy, hydrotherapy, ozone therapy, laxotherapy, etc.)
SIeSTA Project: Products and Results
Module 5.3 Immersive virtual reality and augmented reality system for the treatment of phobias and for sexual education (teleoperation, haptic devices, immersive stereoscopic CAVE projection technology, modeling, animation and rendering of infographic sequences, production, sound design and development of anti-phobia programs, virtual caress workshop, etc.).

4. Deliverable of the phase "Evaluation and Analysis of Costs"

The deliverable of this phase is a document that contains the following points:

Tests:
• Test plan of the system (up to date).
• Report of the test results.
• Description of the tests, the expected result, the obtained result and the work required to correct the deviations.
• Results of the tests of the documentation.

Installation:
• Detailed contingency plans for operation, system failure and recovery.
• Post-installation review plan.
• Installation report.
• Letter of acceptance of the system.

Maintenance:
• List of shortcomings detected in the system.
• List of improvements requested by the users (if they do not give rise to new projects).
• Detailed trace of the changes carried out in the system.
• Records of the reg
Computer Graphics and Mass Media: Communicability Analysis

Francisco V. Cipolla Ficarra 1,2 and Miguel Cipolla Ficarra 2

1 HCI Lab. – F&F Multimedia Communic@tions Corp. – ALAIPO: Asociación Latina de Interacción Persona-Ordenador
2 AINCI: Asociación Internacional de la Comunicación Interactiva
Via Pascoli, S. 15 – CP 7, 24121 Bergamo, Italy
[email protected]
Abstract. In the current work we study the existing relationships between computer graphics and mass media. The main purpose is to establish a methodology of qualitative and creative analysis of the different layers that make up the existing interrelations, which are scarcely visible to computer animation designers and to the users or receivers of these contents. The methodology known as "onion-iceberg" allows us to establish the first isotopies on the level of the content of these computer productions. Additionally, a study of the state of the art is made, bearing in mind the diachronic and synchronic factors of technological evolution, and also the diffusion of these contents in the mass media and on the Internet.

Keywords: Computer Graphics, Computer Animation, Communicability, Mass Media, HCI, Evaluation.
to the infographic synthesis in 3D (Computer Generated Imagery – CGI, and 3D Computer Animation), not to the 3D illusion upon watching the film, the sense recently spread by cinema marketing. The acronym 3D has had a wider diffusion than the term "stereoscopic". However, the three-dimensionality of the images at the origin is more meaningful, as will be seen later on.
2 Computer Animation, Mass Media and Creativity: Evolution

In almost five decades, computer animation has progressed and spread internationally, both as a technique and as a medium, at a fast pace. Currently, the computer is an alternative to the camera that allows the visualization of anything imagined in motion. And computer animations, whatever their genre, are more and more attractive and remarkable. One way of grouping computer animations in relation to the mass media or their genre is as follows:

1. Development and research with the latest technologies
2. Creative experimentation
3. Short and feature films
4. Visual effects
5. Scientific dissemination
6. Marketing, entertainment and corporate image in TV
In the first two it can be seen how the origins of computer animation are located, starting with the first drawings made by Ivan Sutherland with a computer (Sketchpad) in 1963 [5]. In the same period we have the first interactively controlled animation, that is, the first videogame in history: Spacewar. Other examples inside the first environment, whose titles are given in brackets, are [2] [5] [6] [7]:
• Application of genetic algorithms to create morphologies of creatures which have to act in a simulated three-dimensional environment (Evolving Virtual Creatures).
• Modelling of natural landscapes and visualization of atmospheric effects (First Flight).
• Demonstration of the creative possibilities of the particle technique: explosions, a water cascade, a snow storm, etc. (Particle Dreams).
• Virtual encounter between two famous actors, Marilyn Monroe and Humphrey Bogart, at a time when the realism of human characters was very hard to accomplish (Rendez-vous à Montréal).
• Recreation of the Parthenon of Athens, starting from the current construction and the remains that are kept at the British Museum.

Creative experimentation entails the creativity factor, a notion that raises a rhetorical question: what is meant by creativity? Creativity may derive from three different sources: inspiration (the Muses, as the ancient Greeks called them), experience (the more one knows, the more creative one is), and luck (the moment in which, for random reasons, one finds what one had been looking for a long time [8]). The first two are contradictory. When one achieves something creative that responds neither to the first postulate nor to the second, evidently it is the play of
destiny or randomness. The experimental environments of computer animation, in their origins, often responded to random factors, especially in the artistic context. Now, inside the second group of genres of animations, we can mention the following examples [2] [5]:
• One of the first computer animations of free creation made in the USA (Butterflies).
• One of the first experimentations with fractal images accompanied by music (A Volume of Julia Sets).
• One of the representative works by John Whitney, pioneer and driver of audiovisual digital art (Arabesque).
• Coral-like and organic shapes in evolution, whose author is the Englishman William Latham (Biogenesis).
• Numeric models of the image and the sound matching the colours (Limosa).
• A sample of algorithmic art which uses mathematical concepts, musical composition and interactivity (Lormalized).
• A spatial immersion in Pablo Picasso's painting which reveals usually hidden aspects (Picasso's Guernica).

All these works made apparent the artistic potential of animated graphic engineering or computer animation, where final errors in the images during the rendering process, for instance, do not have serious consequences, as they would have in a scientific or accuracy-critical context, such as a component of a car engine or the pillars that support a building made in a CAD (Computer Aided Design) environment. The production environments discussed so far have set landmarks in the international history of animation; the third group corresponds to short films and feature films. In this regard, some examples of short films are [2]:
• Short film based on the nineteenth-century science fiction novel by Edwin Abbott about the 2D and 3D of reality (Flatland).
• An articulated lamp plays with its child (Luxo Jr., by Pixar).
• An icon of a pencil, escaped from the interface of an Apple Macintosh II, tries to go back to the screen (Pencil Test).
• The first project for a film made entirely in 3D (prior to Toy Story), based on the universe of the French cartoonist Moebius (Starwatcher).
• The inconveniences of being the toy of a baby who does not respect his/her toys at all (Tin Toy).

Starting from the short films, the next breakthrough of computer animation was the big screen, that is, the cinema. From that moment on, productions in the USA have grown quickly compared to the rest of the world, thanks to the growing demand for these productions. Initially they were aimed at kids and teenagers, but currently they are aimed at every age. The short films have served to create that bridge. In some cinematographic productions, real images of actors and actresses are combined with 2D and 3D [7]. Below are some examples of cinema productions:
• The adventures and rivalries of a boy's toys, astronaut Buzz and cowboy Woody. The first film made entirely by computer (Toy Story).
• The travel of a boy in a magical train to find Santa Claus. A film made with real actors but visualized with 3D actors and environments (The Polar Express).
• The adventures of a Nordic warrior based on an Old English epic poem. A film with 3D actors and environments visualized from real actors (Beowulf).

With the passing of time, the genres of creative experimentation and of development and research with the latest technologies found in feature films a way to make known the potential of visual and special effects. Sometimes these effects were required by the film script, and the movie industry paid millions of dollars for a few seconds of animation, as was the case of the film Terminator 2 and the liquid metal cyborg [2]. Among the main visual effects, as examples we have:
• Generation of life on a dead planet, visualized through fractal techniques (Star Trek II: The Wrath of Khan).
• A submarine adventure with an extraterrestrial presence. A living water being in the shape of a worm is a character of this film and could only be visualized through a computer (The Abyss).
• Wave effects, froth, fog, etc., around a fisherman in the midst of an impressive storm (The Perfect Storm).
• Successive transformations of some animals. These were the first morphing effects in the cinema (Willow).
• The initial sequence of the film, with a zoom from the Earth towards the limits of the universe with the sound background of human radio broadcasts, theoretically as far as they can be listened to (Contact).

Scientific dissemination takes place through a genre based on the contents of history, geography, astronomy and physics documentaries, such as can be seen in the following group of examples [9] [10]:
• Visualization of the space mission to Mars and the actions of the exploring vehicle (Mars Exploration Rover).
• In the television series Cosmos by Carl Sagan: 4 billion years of evolution told in 40 seconds of computer animation (Evolution).
• A sort of cosmic magnifying glass through the most complete 3D map of the universe that exists, starting in the Himalayas and finishing in the Big Bang (The Known Universe).
• A television series that recreates nature and the animal world during the time of the great saurians (Walking with Dinosaurs).
• 3D models of the body of a man and a woman. The data, of public knowledge, serve to create animated visualizations of human anatomy (The Visible Human Project).
• The researcher James Blinn explaining the principles for achieving realism in 3D images (The Quest for Realism).

Finally, we have the genre where computer-made images, little by little, have been related to the demographic basis of the population, such as publicity or
institutional spots, channel idents or transitions on television, music videoclips, etc. These are pieces where creativity and originality have converged at the same time:
• The first singer in 3D animation (Dozo).
• Videoclip with the song by the Rolling Stones and some innovative animated characters on the wire (Hard Woman).
• Videoclip of the musician Peter Gabriel.
• The publicity heroes of our time set on stage in a world of logotypes and brands (Logorama).
• Flying logos were 3D animations typical of the 80s. This piece treats the topic with humour and imagination (Flying Logos).

However, in some of them one runs the risk of falling into plagiarism, since they are simple reproductions of others, improved from the technical point of view, but that does not keep them from being simple imitations. The original creativity of a professional of computer animation must gather a series of variables and components such as imagination, observation, and knowledge of the issue he/she is approaching, among others. In this regard there are several definitions of creativity [8]:
• Barron understands creativity as "the disposition towards originality".
• Parker defines it as "the art of seeking, testing and combining, in new ways, knowledge and information. The production of any new thing is essentially subjective, that is, what is new is new for the subject".
• On his side, Kneller claims that "creative thought is innovating, explorative, impatient with conventionalisms, directed towards the unknown, and not determined".
• Lastly, Sillamy tells us that "creativity is the disposition to create that exists potentially in every individual and in all ages, in a narrow dependence on the sociocultural environment". This natural tendency towards personal realization needs favourable conditions to be expressed.

Without any doubt, graphical software for carrying out computer animations has been and remains an ideal pathway for creativity.
Some factors that have an influence on creative thought are: fluidity or productivity (the approaches opened in the design stage or in the animation storyboard boost the production of plentiful alternatives to the original script); variety (sometimes one sees only one dimension of the problem); flexibility of perception and the generation of new ideas, which give us the measure of intellectual wealth (one has to identify the old canons and exchange them for a wealth of points of view); and originality, which entails the creation of unprecedented answers to specific situations.
3 The Onion-Iceberg Method for the Heuristic Evaluation of Computer Animation

With the technological evolution of graphic software, commercial or not, in many cases it is difficult to differentiate at first sight computer animation from normal video. This is due to the fact that the final images of the rendering can be additionally retouched, frame by frame, with self-editing
programmes such as Photoshop, for instance. Not for nothing do some authors claim that the image is dead and that we should go back to textual messages [11].

In the first step of our method, we intend to split each one of the layers that make up the generative process of computer-animated images, in search of the isotopic lines which give one sense or another to an audiovisual work [12] [13], whether it is on the computer screen, on television or in the cinema, to mention just some examples. In the first place we must consider the genres to which the creations made by computer belong, establishing the first intersections among them from a perspective of content, such as development and research in the latest technologies, scientific dissemination and creative experimentation. Later on, we may consider from the diachronic point of view entertainment, marketing and corporate image in television, linked to the jump of the computer image to the short film and the feature film, including visual effects. Simultaneously, it is necessary to consider the kind of images with which we are working, that is, 2D, 3D, combinations of 2D and 3D, wireframe models with rendering, or combinations of rendering with the wireframe model. The colour of the components of the different scenes is essential, since there is a whole range of implicit meanings in their selection, which change from culture to culture [14] [15]. The textures occupy a special section, with the different kinds of materials that can be applied to an object to turn it into wood or into metal. Even in those cases it is important to evaluate the photographs that can be incorporated as textures: photographs that entail a series of techniques and special effects which, in rendering, can merge with the 2D or 3D context of the computer animation and which sometimes go totally unnoticed by the human eye [16] [17].
Illumination is another of the key elements that make up any of the scenes, with its matching features, such as: ambient, semi-directional, indirect, global, skylight, spot/direct (free or target), etc. In regard to the special effects, it is important to consider them from the physical point of view (translation, rotation, forces, deflectors, geometric/deformable, particles and dynamics, etc.) and the natural one (rain, fog, hail, wind, fire, etc.). Obviously, in animation the camera plays an essential role, and its main components are: the camera planes (overall, close-up, American, etc.), its movements (zoom-in, zoom-out, static, subjective), the lenses and the field of view (FOV). Then we have the timing and the speed of the animation, that is, whether there are pauses, reverse movements, fast advances, etc. We may find other essential elements, such as the typography used in the titles, because they also open a range of denotations and connotations on the plane of meaning, if we stick to a semiotics definition, for instance [12]. There is also the use of video, cartography and/or maps, drawings, illustrations and other kinds of works by artists of international renown, etc., which make up the universe of static and dynamic images. The graphic in Annex #1 summarizes the main components of the animation, that is, the different onion layers. At the same time, we must take into account that in computer animation we only see the end result. Nevertheless, it is important to know and analyze the iceberg of stages that make it up, to be able to establish the isotopic line, too.
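As an illustration only (the method in the text is qualitative; the layer names and the scoring scheme here are our own shorthand, not part of the original method), the onion layers could be encoded as a simple checklist that a heuristic evaluator scores per animation:

```python
# Hypothetical checklist sketch of the "onion" layers described in the text.
# Layer names paraphrase the paper; the 0-10 scoring is illustrative only.

ONION_LAYERS = [
    "genre intersections",  # development/research, experimentation, dissemination...
    "image type",           # 2D, 3D, 2D+3D, wireframe, wireframe+rendering
    "colour",               # culture-dependent implicit meanings
    "textures",             # materials, photographic textures
    "illumination",         # ambient, indirect, global, skylight, spot/direct...
    "special effects",      # physical and natural effects
    "camera",               # planes, movements, lenses, FOV
    "timing",               # pauses, reverse movements, fast advances
    "typography",           # denotations/connotations of the titles
    "other media",          # video, maps, drawings, illustrations
]

def evaluate(scores: dict) -> float:
    """Average the evaluator's 0-10 scores over the layers actually rated."""
    rated = [scores[layer] for layer in ONION_LAYERS if layer in scores]
    return sum(rated) / len(rated) if rated else 0.0

# Example: an evaluator rates three layers of a given animation.
print(evaluate({"colour": 8, "camera": 6, "timing": 7}))  # → 7.0
```

Such a checklist only covers the visible part of the method; the submerged "iceberg" stages still require the evaluator's background knowledge.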
Computer animation as a cinematographic art starts on paper, with the first sketches of the characters and the storyboard. The making of an animation can briefly be divided into several sections or stages: generation of the animation; definition of the characters; scenery (objects, colours, textures, kind of illumination, etc.); character animation (style of movement of the characters in the face of given situations or obstacles to be overcome in movement, facial animation and dialogues, etc.); the insertion of the objects into the scenes; the definition of the different kinds of lighting; the incorporation of the cameras, their lenses, standpoint, their movement, etc. Later on, one proceeds to the animation itself, including the special effects, the making of the first tests and the correction of mistakes, until the final animation is produced with high-quality rendering [18–22]. Schematically, it can be summed up in the following way:
Fig. 1. Summary of the main steps that make up a “classical” computer animation
Now we face the rhetorical question: where is the originality of an animation? It is important to consider several aspects which may not be visible a priori but which are related to the combination of techniques and the programming of special modules, aside from the temporal factor. That is, the user of the commercial applications makes several experiments when incorporating textures, light reflection effects or transparencies, and develops special solutions where techniques intersect in order to gain time; and gaining time in these cases entails cutting down the production cost and therefore the final cost. For instance, in Maya, when inserting the different lights into the scenes to create an atmosphere in accordance with the storyboard, one can program for every scene a script in MEL (Maya Embedded Language) so that it automatically sets the shading, lighting and rendering values in the scene at the moment the file is opened. The temporal factor refers to the solutions presented in the 80s or 90s, where some works combined 2D and 3D. For instance, real human figures were inserted which appeared as 2D in a 3D animation, as can be seen in the corporate identification of Telemadrid (Figure 2) or in the formation of the identification of the second TV channel of the Basque Country, starting from a person who walks until reaching the shape of a number 2 (Figure 3).
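The per-scene automation described above can be sketched in plain Python rather than actual MEL; the preset names, attribute names and values below are invented for illustration, and a real Maya script would set scene attributes through the Maya API instead:

```python
# Illustrative sketch of per-scene preset automation (not real Maya/MEL code).
# A preset stores the values an artist would otherwise set by hand in each scene.

PRESETS = {
    "night_exterior": {"light_intensity": 0.3, "shadow_softness": 0.8, "render_samples": 64},
    "day_interior":   {"light_intensity": 1.0, "shadow_softness": 0.4, "render_samples": 32},
}

class Scene:
    """Minimal stand-in for a scene whose attributes the script sets on open."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}

def on_scene_open(scene, preset_name):
    """Apply the stored preset automatically, as the MEL script would on file open."""
    for attr, value in PRESETS[preset_name].items():
        scene.attributes[attr] = value
    return scene

# Example: opening a shot triggers the "night exterior" preset.
scene = on_scene_open(Scene("shot_012"), "night_exterior")
print(scene.attributes["render_samples"])  # → 64
```

The gain is exactly the one the text describes: the repetitive setup cost is paid once, when the preset is written, instead of in every scene.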
Fig. 2. Telemadrid – Ostra Delta

Fig. 3. ETB 2 – Ostra Delta
These are the technical aspects, but evidently everything starts from a script that tells a story, and it is important that it be simple, universal and with a dose of humour, to overcome the linguistic barriers in international circulation. There is no originality when one makes a simple cut, copy and paste from video to computer animation. It is important that the analyst of computer animation have a wide background of technical knowledge of computer graphics and its history, including cinema, television and literature [17].
4 Lessons Learned and Future Works

The state of the art has allowed us to see that technology transfer, creativity and the social communication media have enabled the creation of an entertainment industry in less than 50 years, whose productions or final products are consumed by millions of people in the world. In the current work the bases have been set down for a diachronic and synchronic analysis of computer animations, with the purpose of reaching creativity and originality in the least possible time while also cutting down production costs. Evidently, the isotopic lines that exist in a 2D and/or 3D work require wide previous knowledge of the evolution of the diverse genres that make up the animations, and also of the software and hardware. This methodology has been developed starting from the analysis of the pioneers in computer animation and the mass communication media. As a future work, we will make an interactive system that
allows one to summarize the data of a finished animation in order to determine, within the different layers and in the submerged area of the iceberg, which works have a higher communicability, considering the creativity, universality, simplicity and originality of the characters used, the computer graphics techniques and the script.
5 Conclusion

The current work makes apparent that the isotopic lines existing in computer animation have derived from the mass media, first from television and then from the movies. These lines allow us to identify the works that have a higher or lower creativity level, whether from a technical point of view, deriving both from the evolution of the hardware and the software, or from the content, based essentially on the screenplay or storyboard. This is a sector of product analysis aimed at the public at large; it allows for the presence of a heuristic evaluator, and it can increase the quality of computer animations, especially their communicability, in order to widen the potential receivers and/or users. Besides, computer animation is a useful tool to avoid bureaucratic red tape when reconstructing real spaces, objects, told stories, etc., without needing the permits of the local authorities, whether provincial, regional or national. It has also been seen how, in the division of genres that make up the different kinds of computer animation, there have always been intersections among them. When these intersections of techniques and contents are generated, originality is present and a high level of creativity has been reached in the computer-made work.
Acknowledgments

A special thanks to Emma Nicol (University of Strathclyde), Electronic Arts (Madrid, Spain), Microsoft (Madrid, Spain), Autodesk (Barcelona, Spain), Maria Ficarra (ALAIPO & AINCI – Italy and Spain) and Carlos for their help.
References
1. Kaprow, A.: New Media Applications in Art and Design. ACM Siggraph, New York (1991)
2. Vaz, M., Duignan, P.: Industrial Light & Magic: Into the Digital Realm. Ballantine Books, Hong Kong (1996)
3. Bolter, J., Knoespel, K.: Word and Image in Multimedia. Multimedia, 237–243 (1994)
4. Bryan-Kinns, N.: Anthropomorphizing Mass Communications. Interactions 11(2), 57 (2004)
5. Rosebush, J.: Historical Computer Animation: The First Decade 1960–1970. ACM Siggraph, New York (1992)
6. Magnenat-Thalmann, N., Thalmann, D.: Virtual Worlds and Multimedia. Wiley, New York (1993)
7. Smith, A.: The Reality of Simulated Actors. Communications of the ACM 45(7), 36–40 (2002)
8. Weisberg, R.: Creativity: Genius and Other Myths. Freeman & Company, New York (1986)
9. Cipolla, R., Pentland, A.: Computer Vision for Human-Machine Interaction. Cambridge University Press, Cambridge (1998)
10. Terzopoulos, D.: Artificial Life for Computer Graphics. Communications of the ACM 42(8), 32–42 (1999)
11. Debray, R.: Vie et mort de l'image. Gallimard, Paris (1992)
12. Nöth, W.: Handbook of Semiotics. Indiana University Press, Indianapolis (1995)
13. Garrand, T.: Writing for Multimedia. Focal Press, Boston (1997)
14. Clubb, O.: Human-to-Computer-to-Human Interactions (HCHI) of the Communications Revolution. Interactions 14(2), 35–39 (2007)
15. Nielsen, J., Pernice, K.: Eyetracking Web Usability. New Riders, Berkeley (2010)
16. Menache, A.: Understanding Motion Capture for Computer Animation and Video Games. Academic Press, San Diego (2000)
17. Cipolla-Ficarra, F.: Heuristic Evaluation of Animated Help in Hypermedia. In: CD Proc. HCI International, Las Vegas (2005)
18. Luebke, D., Humphreys, G.: How GPUs Work. IEEE Computer 40(2), 96–100 (2007)
19. Foley, J., et al.: Introduction to Computer Graphics. Addison-Wesley, Massachusetts (1993)
20. Hearn, D., Baker, P.: Computer Graphics. Prentice Hall, London (1994)
21. Williams, R.: The Animator's Survival Kit, Expanded Edition: A Manual of Methods, Principles and Formulas for Classical, Computer, Games, Stop Motion and Internet Animators. Faber and Faber, London (2009)
22. Kerlow, I.: The Art of 3D Computer Animation and Effects. Wiley, New Jersey (2009)
Annex #1
Communicability and Animation Attributes: Universality; Simplicity; Originality; Creativity; Humour; Visual realism; Cognitive imagination; Equilibrium between the movements/animation context and the body of the characters; Naturalness of the images and components; Isotopies of the contents; Synchronism between dynamic and static components; Harmony of the interaction between media components (cameras, lighting, textures, colours, FX and contexts); Diachrony and synchrony (temporal interrelations).
Author Index

Alcantud, F. 16, 159
Artoni, Silvia 73
Ayala, Alfredo Medina 112
Bell, J. 16, 159
Burón, F.J. 16, 159
Buzzi, Maria Claudia 20, 73
Buzzi, Marina 20, 73
Cámara, Mario 151
Casas, Sandra 92
Cipolla Ficarra, Francisco V. 1, 44, 81, 102, 121, 182
Cipolla Ficarra, Miguel 182
Clusella, María M. 30
Corpas, Alberto 151
Cruzado, Graciela 62
de Abajo, Beatriz Sainz 171
de Castro, C. 16, 159
de Castro Lozano, Carlos 171
Enriquez, Juan G. 92
Fenili, Claudia 73
Fernández, Javier Burón 171
Ficarra, Valeria M. 44, 121
García, Claudia M. 30
García, E. 16, 159
Giulianelli, Daniel 62
Gordillo, Inmaculada 142
Hernandez, Roberto 132
Herrera, Susana I. 30
Kratky, Andreas 8
Leporini, Barbara 20
Mitre, María G. 30
Moreno, Edgardo 62
Mori, Giulio 20
Nicol, Emma 44
Pastor, Rafael 132
Penichet, Victor M.R. 20
Pérez, Guillermo 151
Ramírez, J.M. 16, 159
Ramírez-Alvarado, María del Mar
Read, Timothy 132
Richardson, Lucy 102
Robles, R.M. 16, 159
Rodrigo, Covadonga 132
Rodríguez, Rocío 62
Ros, Salvador 132
Sainz, B. 16, 159
Salcines, Enrique García
Sánchez, R. 16, 159
Santillán, María A. 30
Torres, J.C. 16, 159
Trigueros, Artemisa 62
Vera, Pablo Martín 62
Vidal, Graciela 92