Interactive Technology and Smart Education
PROMOTING INNOVATION AND A HUMAN TOUCH
ISSN 1741-5659
Volume 4, Issue 4, November 2007
www.emeraldinsight.com/itse.htm
Contents, Vol 4 No 4, November 2007

SPECIAL ISSUE: Papers from the IEEE International Workshop on Multimedia Technologies for E-Learning (MTEL)
Guest Editors: Gerald Friedland, Lars Knipping and Nadine Ludwig

Guest editorial (Gerald Friedland, Lars Knipping and Nadine Ludwig) 179
Vector graphics for web lectures: experiences with Adobe Flash 9 and SVG (Markus Ketterl, Robert Mertens and Oliver Vornberger) 182
Authoring multimedia learning material using open standards and free software (Alberto González Téllez) 192
E-learning activity-based material recommendation system (Feng-jung Liu and Bai-jiun Shih) 200
Educational presentation systems: a workflow-oriented survey and technical discussion (Georg Turban) 208
Founder and Editor-in-Chief: Dr Claude Ghaoui, School of Computing & Mathematical Sciences, Liverpool John Moores University, Byrom Street, Liverpool L3 3AF, UK. Email: [email protected]

Honorary Advisory Editor: Professor Alistair Sutcliffe, University of Manchester, UK

Editorial Advisory Board: Anne Adams, UCL Interaction Centre, UK; Petek Askar, Hacettepe University, Turkey; Ray Barker, British Educational Suppliers Association, UK; Maria Bonito, Technical University of Lisbon, Portugal; Marie-Michèle Boulet, Université Laval, Canada; Sandra Cairncross, Napier University, UK; Gayle J. Calverley, University of Manchester, UK; John M. Carroll, Penn State University, USA; Chaomei Chen, Drexel University, USA; Sara de Freitas, Birkbeck University of London, UK; Alan Dix, Lancaster University, UK; Khalil Drira, LAAS-CNRS, France; Bert Einsiedel, University of Alberta, Canada; Xristine Faulkner, London South Bank University, UK; Terence Fernando, University of Salford, UK; Gerhard Fischer, University of Colorado, CO, USA; Monika Fleischmann, Fraunhofer Institute for Media Communication, Germany; Giancarlo Fortino, University of Calabria, Italy; Gerald Friedland, Freie Universität Berlin, Germany; Bernie Garrett, University of British Columbia, Canada; Lisa Gjedde, Danish University of Education, Denmark; Ugur Halici, Middle East Technical University, Turkey; Lakhmi Jain, University of South Australia, Australia; Joanna Jedrzejowicz, University of Gdansk, Poland; Joaquim A. Jorge, Technical University of Lisbon, Portugal; Athanasios Karoulis, Aristotle University of Thessaloniki, Greece; Lars Knipping, Freie Universität Berlin, Germany; John R. Lee, University of Edinburgh, UK; Paul Leng, Liverpool University, UK; Anthony Lilley, magiclantern, UK; Zhengjie Liu, Dalian Maritime University, China; Nadia Magnenat-Thalmann, University of Geneva, Switzerland; Terry Mayes, Glasgow Caledonian University, UK; Toshio Okamoto, University of Electro-Communications, Japan; Martin Owen, NESTA Futurelab, UK; Vasile Palade, Oxford University, UK; Roy Rada, University of Maryland, MD, USA; Elaine M. Raybourn, Sandia National Laboratories, NM, USA; Rhonda Riachi, Oxford Brookes University (ALT), UK; Kerstin Röse, University of Kaiserslautern, Germany; Joze Rugelj, University of Ljubljana, Slovenia; Eileen Scanlon, Open University, UK; Jane K. Seale, University of Southampton, UK; Helen Sharp, Open University, UK; Vivien Sieber, University of Oxford, UK; David Sloan, University of Dundee, UK; Andy Smith, University of Luton, UK; Paul Strickland, Liverpool John Moores University, UK; Josie Taylor, Open University, UK; Malcolm J. Taylor, Liverpool University, UK; Thierry Villemur, LAAS-CNRS, France; Weigeng Wang, University of Manchester, UK
Interactive Technology and Smart Education (2007) 179–181 © Emerald Group Publishing Limited
Guest editorial
Guest Editors:
Gerald Friedland, International Computer Science Institute, Berkeley, California, USA
Lars Knipping, Department of Mathematics, Berlin Institute of Technology, Berlin, Germany
Nadine Ludwig, MuLF – Center for Multimedia in eLearning and eResearch, Berlin Institute of Technology, Berlin, Germany
INTRODUCTION

Ever since the advent of automatic computation devices, efforts have been made to answer the question of how to properly integrate them into education and take advantage of their capabilities. Educational multimedia systems promise to make learning easier, more convenient, and thus more effective. Classroom teaching enriched by vivid presentations, for example, promises to improve learner motivation. Concepts can be given a perceivable form in a video, and important details can be made more observable. Video capturing of lectures has become common practice for producing distance education content directly from the classroom. Simulations allow students to explore experiments that they could otherwise not conduct physically. Today, almost every university claims to have a strategy for using the opportunities provided by the Internet and digital media to improve and advance traditional education. However, the question of how multimedia can really make education more exploratory and enjoyable has not yet been answered completely. In fact, we are just beginning to understand the real contribution of multimedia to education. For example, various web sites and lecture videos produced as part of the "e-learning hype" often do not exploit the full potential of multimedia for teaching. How can we better support participant interaction in classrooms and lecture halls? What are the best tools for the development of educational multimedia material? How can we make the production of educational material easier and existing applications more reusable? In addition, new technologies and trends – such as mobile and semantic computing – open up new and exciting opportunities for teaching with multimedia and for the creation of multimedia learning material. How can these new trends in multimedia research be used to improve multimedia education, or education in general?

In order to find answers to these and many other questions, we organized the second IEEE International Workshop on Multimedia Technologies for E-Learning (MTEL) in connection with the 9th IEEE International Symposium on Multimedia. Building on the success of the first MTEL workshop in 2006, our goal was to attract researchers and educators from the multimedia community as well as researchers from other fields, such as semantic computing and HCI, who are working on issues that could help improve multimedia education as well as teaching and learning in general. Based on the discussion among these experts with different backgrounds, the workshop aimed to identify new trends and highlight future directions for multimedia-based teaching.
This special issue of Interactive Technology and Smart Education presents four papers that were carefully selected by the program committee for publication in this journal. They have been extended by the authors according to the reviewers' suggestions. We hope that these articles will inspire even more creativity in the overlap between human-centered and technology-centered research. The following paragraphs provide a short overview of the selected articles.

"Vector graphics for web lectures: experiences with Adobe Flash 9 and SVG" presents experiences made during the development and everyday use of two versions of the lecture recording system virtPresenter. The first of these versions is based on SVG, while the second is based on Adobe Flex 2 (Flash 9) technology. The authors point out the advantages vector graphics can bring to web lectures and briefly present a hypermedia navigation interface for web lectures that is based on SVG. They also compare the Flash and SVG formats and conclude by describing changes in workflows for administrators and users that have become possible with Flash.

"Authoring multimedia learning material using open standards and free software" deals with avoiding drawbacks such as license costs and dependency on software companies when distributing interactive multimedia learning materials. The author proposes using open data standards and free software as an alternative without these inconveniences, although the available authoring tools are commonly less productive. The proposal is based on SMIL as a composition language, in particular the reuse and customization of the SMIL templates used by INRIA for their technical presentations. The author also proposes a set of free tools to produce presentation content and design, focusing on RealPlayer as the delivery client.

In "E-learning activity-based material recommendation system", an application that uses LDAP and JAXB to reduce the load on search engines and the complexity of content parsing is described. Additionally, by analyzing the logs of learners' learning behaviour, likely keywords and associations among the learning course contents are derived. The article also describes how metadata of learning materials from different platforms is integrated and maintained in the LDAP server.

Finally, "Educational presentation systems: a workflow-oriented survey and technical discussion" presents an overview of the processes before, during and after an educational presentation. The different processes are presented in the form of a workflow, which is also used to present, analyze and discuss different systems, including their individual tools covering the different phases of the workflow. After this overview of systems, the different approaches are discussed with respect to the workflow. This discussion provides specific technical details and differences of the systems in focus.
ACKNOWLEDGEMENTS

The Guest Editors wish to thank Claude Ghaoui, ITSE Editor-in-Chief, and the dedicated reviewers for their detailed and thoughtful work. They were:

Abdallah Al-Zoubi, Princess Sumaya University for Technology, Jordan
Michael E. Auer, Carinthia Tech Institute, Austria
Helmar Burkhart, University of Basel, Switzerland
Paul Dickson, University of Massachusetts, USA
Berna Erol, Ricoh California Research Center, USA
Rosta Farzan, University of Pittsburgh, USA
Claude Ghaoui, Liverpool John Moores University, UK
Wolfgang Hürst, University of Freiburg, Germany
Sabina Jeschke, University of Stuttgart, Germany
Ulrich Kortenkamp, Paedagogische Hochschule Gmuend, Germany
Ying Li, IBM T.J. Watson Research Center, USA
Marcus Liwicki, University of Bern, Switzerland
Robert Mertens, University of Osnabrück, Germany
Jean-Claude Moissinac, ENST Paris, France
Thomas Richter, University of Stuttgart, Germany
Anna Marina Scapolla, University of Genova, Italy
Georg Turban, Darmstadt Institute of Technology, Germany
Nick Weaver, ICSI Berkeley, USA
Debora Weber-Wulff, FHTW Berlin, Germany
Marc Wilke, University of Stuttgart, Germany
Peter Ziewer, Munich Institute of Technology, Germany

We would like to thank all authors for their quick revision and extension of the articles presented herein. Their commitment made it, again, possible to release this special issue so quickly after the workshop.
REFERENCE

Friedland, G., Knipping, L. and Ludwig, N. (2007), "Second IEEE International Workshop on Multimedia Technologies for E-Learning", Proceedings of the 9th IEEE International Symposium on Multimedia, IEEE Computer Society, Taichung, Taiwan, pp. 343-95.
ABOUT THE GUEST EDITORS

Dr Gerald Friedland is currently a researcher at the International Computer Science Institute in Berkeley, California. Prior to that, he was a member of the multimedia group of the computer science department of Freie Universität Berlin. His work concentrates on intelligent multimedia technology with a focus on methods that help people to easily create, edit, and navigate content, aiming at creating solutions that "do what the user means". He is program co-chair of the 10th IEEE International Symposium on Multimedia and of the Second IEEE International Conference on Semantic Computing. In addition to the second IEEE International Workshop on Multimedia Technologies for E-Learning, he also co-chaired the first ACM Workshop on Educational Multimedia and Multimedia Education. He has received several international research and industry awards, among them the European Academic Software Award in 2002 for the creation of the E-Chalk system in cooperation with Lars Knipping. He is also a member of the editorial advisory board of ITSE.

Dr Lars Knipping is a researcher at the mathematics department of Technische Universität Berlin. He belongs to the board of editors of ITSE and to the editorial team of iJET (International Journal of Emerging Technologies in Learning). Before joining Technische Universität Berlin he worked as a scientific consultant in a research project for a state-funded TV broadcaster, the "Sender Freies Berlin", followed by positions as researcher and instructor in the multimedia group at the computer science department of Freie Universität Berlin and as lecturer in International Media and Computing at the FHTW Berlin. Dr Knipping received his PhD degree for his work on the E-Chalk system and holds MSc degrees in both mathematics and computer science.

Nadine Ludwig graduated from Technische Universität Ilmenau with a degree in Computer Science in 2005. In her thesis she described the integration of remote laboratories into Learning Content Management Systems via SCORM. Since May 2006 Ms Ludwig has been part of the MuLF Center at Technische Universität Berlin as a research associate. Currently she is working on her PhD thesis in the field of Semantics and Modularization of Learning Objects in Cooperative Knowledge Spaces.
Interactive Technology and Smart Education (2007) 182–191 © Emerald Group Publishing Limited
Vector graphics for web lectures: experiences with Adobe Flash 9 and SVG
Markus Ketterl, Virtual Teaching Support Center, University of Osnabrück, Osnabrück, Germany. Email: [email protected]
Robert Mertens, Fraunhofer IAIS, Schloß Birlinghoven, Sankt Augustin, Germany. Email: [email protected]
Oliver Vornberger, Department of Computer Science, University of Osnabrück, Osnabrück, Germany. Email: [email protected]

Abstract
Purpose – The purpose of this paper is to describe vector graphics for web lectures, focusing on experiences with Adobe Flash 9 and SVG.
Design/methodology/approach – The paper presents experiences made during the development and everyday use of two versions of the lecture-recording system virtPresenter. The first of these versions is based on SVG, while the second is based on Adobe Flex 2 (Flash 9) technology. The paper points out the advantages vector graphics can bring for web lectures and briefly presents a hypermedia navigation interface for web lectures that is based on SVG. The paper also compares the formats Flash and SVG and concludes with describing changes in workflows for administrators and users that have become possible with Flash.
Findings – Vector graphics are an ideal content format for slide-based lecture recordings. File sizes can be kept small and graphics can be displayed in superior quality. Information about text and slide objects is stored symbolically, which allows texts to be searched and objects on slides to be used interactively, for example, for navigation purposes. The use of vector graphics for web lectures is, however, a trend that has begun only recently. A major reason for this is that multiple media formats have to be combined in order to replay video and slides.
Originality/value – The paper offers an insight into vector graphics as an ideal content format for slide-based lecture recordings.
Keywords: Lectures, Worldwide web, Graphical user interfaces, Presentation graphics, Multimedia, Teaching aids
Paper type: Research paper
1. INTRODUCTION
Vector-based graphics formats offer a number of possibilities for the realization of web lecture interfaces for slide-based talks. One major advantage is that they support capturing content in a symbolic manner, which is a requirement for searching text in a recording (Lauer and Ottmann, 2002). They also offer superior picture quality. Last but not least, vector-based graphics formats enable developers to realize a high degree of interactivity that can be used for implementing advanced navigation concepts as described in (Mertens et al., 2006d). They can also be used to tackle a number of layout problems as further described in (Mertens et al., 2006b). Vector graphics are, however, not very common in web lectures. This article presents the authors' experience with two different vector graphics formats, Scalable Vector Graphics (SVG) and Adobe's new Flex 2 (Flash 9 based) technology, for content presentation and control in the web lecture system virtPresenter. The SVG-based version of the lecture recording system has been used at the University of Osnabrück and at the University of Applied Sciences Osnabrück since summer 2003. During this time, users with different backgrounds, knowledge and expectations have experienced the system in everyday use. The Adobe Flex 2 based counterpart was introduced in February 2007 after a seven-month development and testing period. Apart from small changes concerning further system requirements and improvements, this new version has been in productive use since March 2007.

The article is organized as follows: Section 2 points out the advantages vector graphics can bring for web lectures and briefly presents a hypermedia navigation interface for web lectures that is based on SVG. Section 3 describes experiences with this SVG-based interface and points out difficulties that arose during its use in a number of university courses. Section 4 compares Flash and SVG with respect to their use in lecture recording. Section 5 introduces the Flash-based successor of the SVG-based interface. Section 6 describes changes in workflows for administrators and users that have become possible with Flash. Section 7 briefly summarizes the work presented in this article and refers to future projects and ideas.
2. ADVANTAGES OF VECTOR GRAPHICS IN WEB LECTURES

The advantages of using vector graphics for content representation in web lectures can be summarized in a couple of words: vector graphics store content in a symbolic way, vector graphics can be enlarged without loss of quality, and many vector graphics formats allow for interactive on-the-fly manipulation of contents. The aim of this section is to show why these properties of vector graphics are useful by showing how each of them improves web lectures.
2.1 Symbolic Representation of Contents and Interactivity

The original virtPresenter user interface shown in Figure 1 was developed to implement a hypermedia navigation concept for lecture recordings (Mertens, 2007). Hypermedia navigation consists of five elements: full text search, bookmarks, backtracking, structural elements and footprints (Bieber, 2000). Full text search is realized by searching the text of the slides in the slide overview. Search results are highlighted by an animation that grows and shrinks them repeatedly. Both the ability to search the slides directly and to animate search results is based on the properties of SVG (symbolic representation and on-the-fly manipulation). Bookmarks are realized as a functionality that allows for selecting arbitrary passages and storing them for later viewing or exchanging them with other students. Backtracking is implemented by storing the play position whenever the user navigates to another play position; thus each navigation action can be undone. In order to facilitate orientation at the stored play positions, replay begins at their time index minus three seconds. Structural elements are realized in two ways. The simpler one consists of next/previous buttons that allow navigating to the next or previous slide or animation step. A more sophisticated realization of structural elements is the interactive slide overview implemented in virtPresenter (Mertens et al., 2006c). In the overview, those parts of a slide that were animated during the original presentation can be clicked on with the mouse. The recording then starts replay at the time index at which the respective animation took place during the lecture. To realize these features, the slide documents are analyzed and script code containing the respective time indices is added automatically to the animated elements of a slide (Mertens et al., 2007). The implementation of this step was relatively easy due to the symbolic representation of the slide elements in SVG.

Figure 1 VirtPresenter 1.0 user interface

Footprints serve the purpose of showing users which parts of a hyperdocument they have already visited. In classic hypertext, this is done by colouring visited and non-visited links differently. Since web lectures are time-based media, another approach had to be found. In virtPresenter, coloured parts of the timeline indicate that the corresponding passages of the recording have already been watched by the user. Multiple visits are indicated by deeper shadings. The footprints are stored symbolically as pairs of start and end time indices. They are drawn on the fly when a lecture is watched. This has been realized by the use of animated SVG rectangles. The different colour shadings are created by overlapping semi-transparent rectangles. This brief description shows that the properties of SVG as a vector graphics format have been crucial for the realization of the virtPresenter user interface. Especially the implementation of footprints, bookmarks and full text search has been facilitated immensely by SVG.
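The footprint bookkeeping itself is simple: viewed passages are kept as (start, end) pairs and rendered as overlapping semi-transparent rectangles over the timeline, so that repeated viewing automatically appears darker. The following sketch expresses this idea in ActionScript 3, the language of the Flex-based player introduced in Section 5 (the SVG version uses analogously animated SVG rectangles). It is not code from virtPresenter, and all names are illustrative.

```actionscript
package {
    import flash.display.Sprite;

    // Illustrative timeline overlay: each watched interval becomes a
    // semi-transparent rectangle; overlapping intervals render darker.
    public class FootprintBar extends Sprite {
        private var totalSeconds:Number;   // duration of the recording
        private var barWidth:Number;       // pixel width of the timeline
        private var barHeight:Number;

        public function FootprintBar(total:Number, w:Number, h:Number) {
            totalSeconds = total;
            barWidth = w;
            barHeight = h;
        }

        // intervals: Array of {start:Number, end:Number} pairs in seconds,
        // collected whenever the user leaves a play position.
        public function draw(intervals:Array):void {
            graphics.clear();
            for each (var iv:Object in intervals) {
                var x:Number = (iv.start / totalSeconds) * barWidth;
                var w:Number = ((iv.end - iv.start) / totalSeconds) * barWidth;
                graphics.beginFill(0x3366CC, 0.3);  // 30% alpha; overlaps add up
                graphics.drawRect(x, 0, w, barHeight);
                graphics.endFill();
            }
        }
    }
}
```

A viewer would call draw() with the accumulated intervals whenever playback pauses or a navigation action occurs.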
2.2 Superior Picture Quality

Good picture quality of lecture slides is important even for standard usage scenarios (Ziewer and Seidl, 2002). However, it becomes even more important when the lecture slides are shown on a large screen, as in the scenario depicted schematically in Figure 2. In this scenario, the lecture is replaced by a cinema-like session in which the recording of the lecturer and the slides are presented to the audience on two large screens. This scenario has been carried out successfully at the University of Osnabrück a number of times (Mertens et al., 2005). Since the slides are shown on a large screen, bad picture quality becomes even more obvious than during replay on a standard computer display. At the University of Osnabrück, the slides used were in SVG and were thus presented in the same quality as in the original lecture.

Figure 2 Lecture slides on large screens
3. LESSONS LEARNED
The SVG-based version of the viewer interface was first developed in 2003 and improved in various steps. The main focus of the development was to implement the hypermedia navigation concept for lecture recordings described in section 2 and, in more detail, in (Mertens et al., 2004). At the time when development of the SVG-based version began, SVG seemed to be a promising choice for a content format to be used in lecture recordings. SVG is an XML-based vector graphics format and was expected to grow in importance. We had expected that SVG renderers supporting the required subset of the SVG standard would soon become available on more platforms than Windows and that their performance would increase to rival that of Macromedia Flash (now Adobe Flash). Things have, however, developed in a different direction. While all the features described in (Mertens et al., 2004) could be realized with a combination of JavaScript, SVG and Real Video, the technology used led to a number of problems in everyday use.

Loading and rendering speed has proven to be a major problem when combining SVG and Real technology. Table 1 compares slide loading times of the SVG and the Flash-based implementation (further described in sections 4 and 5). It also shows loading times for an optimized version of the SVG slides in which background graphics in the slides (logos) had been deleted to speed up rendering. The testing environment was a Windows XP system with an AMD Athlon 64 based processor with 2.01 GHz and 1 GB RAM. The tests were made locally on that system, without interference from an internet connection. The test measures the elapsed time until a slide object is loaded and fully available in the main application.

Table 1 Slide loading with SVG and Flash

                                           SVG             SVG optimized   Flash
Average slide loading time (ms)            164*            120*            67
Average slide loading time + video (ms)    430** (Real)    243** (Real)    81 (Flash)
Average slide size (KB)                    54              25              28

Measured on 13 different converted PowerPoint slides. System: Windows XP, AMD Athlon 64 processor, 2.01 GHz, 1 GB RAM. *Outliers: 520, 635. **Outliers: 7300, 6349, 2280, 4300.

As some interactivity and animation features of SVG that are only supported in the Adobe SVG Viewer (ASV) had been used in the interface, replay was only possible with the ASV for Microsoft's Internet Explorer (IE). This viewer plug-in, however, exhibits low rendering speeds, and its support will be discontinued in January 2008. This is especially problematic when many slides have to be shown at once, as is the case for overviews. Switching from one slide to another also happens with a noticeable delay. The Real video player buffers data when users navigate in the video; this buffering also slows down the interface's response times noticeably. Another problem with SVG was that the required plug-in only exists for Microsoft's Internet Explorer. Even though Adobe had implemented plug-in versions for other browsers, only the one for IE supports the subset of the SVG specification required for the implementation. This rules out platform independence for the interface. Last but not least, the fact that plug-ins are required for both Real Video and SVG poses an obstacle for first-time users of the interface.

The use of the SVG-based interface has been evaluated in a number of courses. In these evaluations, the above-mentioned points were shown to have a considerable negative impact on user acceptance. In 2006, three courses were evaluated with a questionnaire developed for the evaluation of e-learning at the University of Osnabrück. For abbreviation purposes, these courses are referred to in this paper as courses A, B and C. Table 2 summarizes relevant details on the courses. Figure 3 shows how the students judged download times of the recordings. No actual download was offered; the term "download times" thus refers to the loading and rendering times of the viewer interface. By and large, the numbers in the figure do not seem too critical at first sight. In practice, however, the interface takes considerably longer to load than other material found on the course web site.
Also, the results show that while the loading times have been acceptable for most students, they have not been acceptable for all of them.

Figure 4 shows how many students reported problems using virtPresenter. The problem descriptions were entered as free-text answers in the questionnaires. In course A, no student reported a problem. This might be due to the fact that students were given very detailed instructions. Having a non-technical background, the students very likely followed these instructions closely. The questionnaires also showed that all students in course A used IE. In the other courses, the questionnaires showed that some students did not use IE, even though they had been instructed that using another web browser would cause problems with the interface. In contrast to course A, courses B and C were attended by a number of students with technical backgrounds. The questionnaires led to the assumption that some of these students, being used to solving problems by trial and error, tried to use the interface with browsers other than IE, disregarding the information that it would not work on these browsers. Seemingly unaware of the fact that the interface was not supported under these settings, the students reported the system behaviour as faults. From one problem description it even became clear that the student had not installed any SVG viewer.

In order to counter the effects described above, a number of improvements had been devised for the SVG-based version of the interface: for example, a nearly equivalent solution with QuickTime video instead of Real video that also works with SVG for the slide representation, and a Flash 6 based thumbnail overview component for faster slide loading and interface response. This approach of mixing technologies did not solve the problems either, because users then had to install another plug-in, QuickTime instead of Real, as well as the Flash plug-in. Moreover, the reaction time of the interface could not be improved by this approach. As a preliminary workaround, plug-in and browser checks had been added to the original version. These measures alert users if they try to use the interface with the wrong software settings and thus reduced the number of bug reports caused by accessing the interface with such settings. Also, a number of enhancements had been added to avoid unnecessary loading of slides when slide changes happen at a high frequency. These approaches have, however, been limited by the technology setting in which they were employed. In order to overcome these problems, we have turned to Adobe Flex 2 in combination with the open source Red5 streaming server backend as described in section 5.

Figure 3 Lecture recordings download times

Figure 4 User problems

Table 2 Course details

Course A: Fundamentals of Biblical Theology. Didactic setting: lecture took place as usual, all students could attend and the recordings were provided as an add-on. Number of students: 25.
Course B: Internet Technologies. Didactic setting: lecture took place at one university and was transmitted to another one; recordings were provided as an add-on. A more detailed description of the scenario can be found in (Hoppe et al., 2007). Number of students: 27.
Course C: Managing Innovation and Projects. Didactic setting: same as course B. Number of students: 19.

4. TECHNOLOGY REVIEW: FLASH VS. SVG
In a strict sense, the new interface does not yet reach the functional range of the old virtPresenter interface described in (Mertens et al., 2006a; Mertens, 2007). This is mainly due to the fact that the new version does not yet feature an automatically generated thumbnail slide overview, which is crucial to a number of functionalities implemented in the SVG-based version (Mertens et al., 2004, 2006d). The thumbnail overview is used both to visualize the connection of navigation actions to the structure of a talk (Mertens et al., 2006d) and to allow structure-based navigation on the level of animations within a slide. The latter is realized by clickable slide elements that allow for direct navigation to the replay position at which the corresponding slide element first appeared on screen during the recorded lecture (Mertens et al., 2004). However, the reimplementation was necessary due to frequent user problems with unsupported computer platforms, wrong browsers or browser settings, or missing plug-ins.

The underlying shared infrastructure (Mertens et al., 2007) was enhanced to export, besides different podcast formats, Flash content (Flash video and Flash slides) (Ketterl et al., 2006b, 2007a). Adobe's Presenter (formerly named Breeze) is now also a part of the automatic lecture recording production chain. This software component enables a fast PowerPoint-to-Flash conversion that could be fully automated as well. It was selected for the new process because it is reliable and now even affordable for a smaller university project. Today there are some open source or commercial PowerPoint-to-Flash export systems besides the Adobe product on the market. However, Adobe Presenter currently seems to be the only system that fits into our automated production chain; the other systems could not be integrated as they cannot be started from other programmes. A problem with Adobe Presenter is that this component exports only Flash 6 slides in its current version. The communication between old Flash objects and new Flash 9 objects is not ideal at the moment; handling slides based on older Flash versions in a Flash 9 application is difficult, for example. A prototype version which also features slide-based navigation is depicted in Figure 5 on the left-hand side. Nevertheless, the time for post-processing (video and slide conversion, slide text analysis and building all the required software files for the interface) could be reduced from previously about three hours down to only about one hour for a 1.5 hour lecture. Of particular importance here is that the Flash video conversion is much faster than our previous Real video conversion. Our initial recording format is still MPEG-2, because this video format is of good quality and can be converted into many different video/audio formats during post-processing.

Figure 5 VirtPresenter 2 Flex technology based interface

Figure 5 (right) depicts the revised and newly implemented Flex 2 based web interface. Besides the objective of using it on any computer platform without adjustments, the aim was that people without a technical background could use the interface as easily as internet experts. On the right-hand side of Figure 5 one can find an area where users can choose from a list of recorded lectures or search text in the recordings. Figure 6 shows this lecture list (section a) and search results (section b) in a more detailed view. The lecture list gets updated over an RSS notification mechanism. Our positive experiences with Apple's iTunes, its popular Music Store and the podcast subscription facility were an inspiration here (Ketterl et al., 2006a, b). The main reason why we do not use Apple's iTunes (or other podcatcher software) and podcast technology as the main distribution facility is that the navigation possibilities in podcasts are limited compared to the navigation options in the virtPresenter system. Further inquiries into navigation in lecture podcasts and how lecture podcasts are being used in contrast to normal lecture recordings are ongoing. Several examination results with student users and external users are described in (Schulze et al., 2007) for virtPresenter and in (Hürst and Welte, 2007) for a system used at the University of Freiburg.

Figure 6 RSS updated lecture overview with lecture search

In the revised virtPresenter system, users can subscribe to lecture recordings using our internal university learning management system Stud.IP (www.studip.de). The virtPresenter interface gets updated and shows the lecture recordings as soon as they are available. Aside from that, external users can subscribe to the recordings (like subscribing to a normal podcast with podcatcher software such as Apple's iTunes) and can, for example, view recordings that are open for public viewing. These recordings are presented on a public website. In short, this means that students as well as external viewers use the same interface for different recordings. They do not need to switch between applications and there is no need to follow additional links in other browser windows. The interface can also be used if a link from our lecture website or the LMS points to a specific lecture or a specific time index in a recording. This is done by interpreting assigned URL parameters. The feature is a further extension of a functionality implemented for the SVG-based version and described in further detail in (Mertens et al., 2005). Section b in Figure 6 also depicts the possibility to search in the recordings. Users can search not only in one web lecture but in all recordings they have subscribed to. The search results are presented in a hierarchical tree overview similar to Adobe's Acrobat. The results can be selected and are linked directly to the corresponding lecture recording section.

Due to the changeover to Flex 2 technology, users can navigate fluently in the recordings with a new time scrubber component (see Figure 7). In the SVG-based version, visible scrolling in the sense of (Hürst and Müller, 1999) was only possible with the slides used in the recording; in the Flex 2 based version, it is possible for both slides and video. Presently we highlight slide borders in the timeline and show the lecture slide title directly above the respective area of the timeline. The sections which have already been viewed by the user are colour-coded. When a lecturer uses the mouse cursor during the presentation, this is also logged by the underlying recording system and the data can be presented in the user interface as well. The Flex-based interface responds considerably more quickly than the old one (see Table 1). Delays resulting from slide loading, jumps to other sections or disturbing video buffering, which we had in the old Real video and SVG based version, are not noticeable anymore. Even a complete reload of the system due to a browser refresh is quick. The interface was tested on Windows, Linux and Mac OS X platforms, all with the Flash 9 player plug-in. The results described were alike on all platforms.

Figure 7 Timeline with slide border visualization and slide title overview
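The deep-linking behaviour described above boils down to reading a start-time parameter and seeking the stream once playback has begun. The following fragment is a minimal ActionScript 3 / Flex 2 sketch of that idea, intended to sit in the <mx:Script> block of a player component; it is not actual virtPresenter code, it assumes the HTML wrapper passes the query-string values through as flashVars, and the parameter and helper names are illustrative.

```actionscript
import mx.core.Application;
import flash.net.NetStream;

// Jump to a time index given in the URL, e.g. ...?lecture=algo07-12&time=1325
// Assumes the wrapper page forwards query parameters as flashVars.
private function applyDeepLink(ns:NetStream):void {
    var params:Object = Application.application.parameters;
    if (params.time != null) {
        var seconds:Number = Number(params.time);
        if (!isNaN(seconds) && seconds > 0) {
            ns.seek(seconds);      // seek the Flash video stream
            syncSlides(seconds);   // illustrative: align the slide view, too
        }
    }
}

// Illustrative placeholder: in a real player the slide area would be
// updated to the slide that was visible at this time index.
private function syncSlides(seconds:Number):void {
    // look up the slide whose start time is <= seconds and display it
}
```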
5. VIRTPRESENTER 2.0: HOW FLASH 9 MADE THINGS FAST

For the new implementation of the lecture recording system we used Adobe's Flex 2 technology (introduced in June 2006) for the user interface and for user interaction. Flex 2 is based entirely on ActionScript 3, which was introduced as a revised and extended programming language as part of Adobe's new Flash 9 player. Flex applications are deployed as compiled byte code that is executed within the Flash player runtime. The core of Flex is the developer-centric Flex framework, a library of ActionScript 3 objects that provide a foundation for building rich internet applications. Writing applications with Flex is similar to developing in .NET or Java (Kazoun and Lott, 2007). Flex also provides a wealth of useful components, so that developers do not have to build everything from scratch. Besides the convenient developer framework, it is important in our scenario that neither a special browser version nor a combination of different plug-ins has to be installed on the users' computers (as was needed in the SVG-based implementation). The user only needs the Flash player plug-in for viewing the web lecture recordings. The current plug-in version is Flash 9, which is available for browsers on Windows (IE, Firefox and Opera), Apple (Safari, Firefox) and Linux (Firefox) as well. Normally this plug-in can be installed without difficulties or special computer knowledge. Besides, this software component is very popular and widespread nowadays (Téllez, 2007). That means that no special browser adjustments or compatibility checks are required; the same version will work on different computer platforms as a cross-browser solution.

The plug-in's basis for ActionScript 3 is a newly implemented virtual machine called ActionScript Virtual Machine 2 (AVM2) that converts byte code into native machine code. It is more like a Java Virtual Machine (Java VM) or the .NET Common Language Runtime (CLR) than a browser script engine. The most important advantage (and a main reason why we are using Flash 9) is that the new browser environment is faster than previous versions and uses much less memory on the computer (Adobe, 2007). We could confirm this assertion in our daily work with the new Flex 2 framework. Student users report that they like how fast the new interface responds and reacts to user interaction. Further user acceptance and problem surveys are planned for February 2008.

A further component is important for fast response times. As mentioned before, a main problem was the video buffering of the Real player in the interface, so a dedicated and reliable video server is also required. Like most universities, we have a fairly good server infrastructure backend, so we could have used Adobe's recommended but expensive Flash Media Server 2 for working with recorded lecture videos. Instead of this expensive solution, we have for a couple of months now used an open source Flash streaming server implemented in Java called Red5 (Red5, 2007). The adoption was an experiment, because this open source server deployment was not really stress-tested, barely documented and only available in version 0.6 (currently version 0.6.3 is available). The server has worked very stably, even during the critical exam time at the end of the term. Our productive streaming system during that time was a Windows XP system with a 2.8 GHz Intel dual-core Xeon processor and 4 GB RAM. This video server system is more than adequate, with sufficient reserves in case of user request peaks. At present there is no need to use Adobe's expensive Flash Media Server 2 solution in our production environment.
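For readers unfamiliar with the Flash 9 streaming API, the following ActionScript 3 sketch shows the basic pattern for playing a recorded lecture video from an RTMP server such as Red5. It is an illustrative minimal example rather than virtPresenter code; the server URL and stream name are placeholders, and in a Flex interface the Video object would be wrapped in a UIComponent.

```actionscript
package {
    import flash.display.Sprite;
    import flash.events.NetStatusEvent;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    // Minimal RTMP playback: connect to the streaming server, open a
    // NetStream on that connection and attach it to a Video display object.
    public class LecturePlayer extends Sprite {
        private var nc:NetConnection = new NetConnection();
        private var ns:NetStream;
        private var video:Video = new Video(640, 480);

        public function LecturePlayer() {
            addChild(video);
            nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
            // Placeholder application URL on a Red5 (or other RTMP) server.
            nc.connect("rtmp://streaming.example-university.de/virtPresenter");
        }

        private function onStatus(e:NetStatusEvent):void {
            if (e.info.code == "NetConnection.Connect.Success") {
                ns = new NetStream(nc);
                ns.client = { onMetaData: function(md:Object):void {} };
                video.attachNetStream(ns);
                ns.play("lecture-2007-11-05");  // placeholder stream name
            }
        }
    }
}
```

Because buffering and seeking are handled by the NetStream, navigation actions such as the deep-link seek sketched earlier operate on the same object.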
6. BEHIND THE SCENES: ADMINISTRATION AND WORKFLOWS

Lecture recording with virtPresenter makes use of a fully automated recording and an extended production chain described in (Ketterl et al., 2007a). While this process is fully automated, a number of administration tasks still remain. Currently we manage and generate eighteen web lecture recordings per week, with additional podcasts (Ketterl et al., 2006a), from different university courses in different rooms, plus some additional recordings for special occasions like conferences and workshops. This number increases steadily. The lecture recording system is tightly connected to the learning management system Stud.IP used at the University of Osnabrück. We have also defined more general interfaces that make metadata such as the name of the course, the name of the lecturer and data for full text search available to other systems like content portals or search engines. These interfaces also allow authentication to be handled by the other system; thus users do not have to log in separately in the lecture viewer since they are authenticated externally, e.g. by the portal. Normally the recordings are assigned to the web page of the course in the university LMS. Figure 8 shows what this integration looks like in our university LMS Stud.IP. The recordings can additionally be tagged with further metadata or can be stored in other database systems, from where further platforms can use them as well. At present we are working on a rights management system for the recordings that will serve the purpose of defining whether episodes are available to university members, publicly (for example via distribution over Apple's iTunes music store (Ketterl et al., 2006a)), as part of a course exchange programme with other universities, or on a pay-per-view basis.

Figure 8 Lecture recordings in the learning management system

A recurring administration task at the end of a study term is to put the web lecture recordings offline on a DVD or CD, for data backup purposes or for students and lecturers wishing to watch the lecture recordings offline. The normal approach in our production system was to copy the recorded video, the lecture slides and the complete source code for the web interface onto that offline medium. In addition to the fact that it is not very convenient for users to start the recordings by clicking a specific file link in the DVD file system, this had the drawback that the complete (possibly copyrighted) material is on the offline medium as well. Over the internet, we had at least user authentication to protect the content. A more attractive and promising way to reduce administration effort and to keep the content protected is to use Adobe's new integrated runtime environment called AIR (prior development name Apollo). AIR stands for Adobe Integrated Runtime. The environment is a new cross-platform desktop runtime that allows web developers to use web technologies to build and deploy rich internet applications and web applications on the desktop (Chambers et al., 2007).

Over the last few years, there has been an accelerating trend of applications moving from the desktop to the web browser. With the maturation of the Flash Player runtime and Ajax-type functionality it became possible for developers to offer richer application experiences without disturbing page refreshes. This means that the Flex implementation of the web lecture system can be installed offline on a Windows PC or on a Macintosh system (a Linux version is promised by Adobe to appear by the end of 2007) and it will behave like any other application on the system. On Windows, for example, the virtPresenter web lectures now appear offline in the start menu and in the Windows taskbar. As a drawback, users have to install the AIR runtime on their system. The adoption of this technology in general is still in question: why should users prefer a web-like application on their normal desktop computers? In contrast to this approach, there are other projects and ideas that focus on the web as an operating system (Vahdat et al., 1996) or on new alternative technologies as described in the next section. In the literature one can find further examples of using RIAs on the desktop and ideas for adopting this technology (Chambers et al., 2007). In our lecture recording production environment, AIR solves some of the offline-related problems. We can offer virtPresenter recordings as AIR versions for standard download in case of a Red5 streaming server breakdown. Another prospect is that users do not need to be online while watching the lecture recordings, since the AIR application can include all required files. The offline application gets updated through a new interpretation of the associated RSS files whenever the computer is online, and new data (new lecture recordings) can then be transferred into the offline version. For the simple lecture recording data backup mentioned at the beginning of this section, AIR is not an option, because the content is encapsulated in the AIR application and it is problematic to disassemble it.
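Both the online lecture list and the offline AIR version are kept up to date by re-reading an RSS feed. A minimal ActionScript 3 sketch of that update step is shown below; it is illustrative rather than virtPresenter code, the feed URL is a placeholder, it assumes a plain RSS 2.0 feed whose items carry the recording title and link, and the helper updateLectureList stands in for whatever list component or local store consumes the entries.

```actionscript
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.events.Event;

// Reload the lecture list from an RSS 2.0 feed (placeholder URL).
private function refreshLectureList():void {
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onFeedLoaded);
    loader.load(new URLRequest("http://lectures.example-university.de/feed.rss"));
}

private function onFeedLoaded(e:Event):void {
    var rss:XML = new XML(URLLoader(e.target).data);
    var recordings:Array = [];
    // E4X: iterate over all <item> elements of the feed.
    for each (var item:XML in rss.channel.item) {
        recordings.push({
            title: String(item.title),
            url: String(item.link),
            published: String(item.pubDate)
        });
    }
    // Illustrative: hand the parsed entries to the list component or the
    // offline store, which would add any recordings not seen before.
    updateLectureList(recordings);
}
```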
7. CONCLUSION AND FUTURE RESEARCH
During the last few years, Flash has evolved into an ideal content format for web lectures. Especially the fact that both slides and video can be replayed with one single browser plug-in makes web lecture interfaces built upon this technology easy to use for almost anyone. This paper has demonstrated the feasibility of a Flex 2 based user interface for web lectures and it has shown that this technology can be used to improve usability and ease the administrative workload.
With AIR it is even possible to protect content in offline versions of a web lecture. Given that AIR and AIR- or Flash-like approaches (Silverlight (Cohen, 2007), the JavaFX family, or Google Gears) are rumoured to be supported by a number of mobile devices in the near future, AIR could also open up more perspectives for the interactive presentation of web lectures on mobile devices. If AIR on mobile devices worked just like conventional AIR applications, it would be possible to produce learning content that can be used for normal websites and for m-learning modules at the same time, that is, without expensive device adjustments. Our lecture podcasts (audio, video and enhanced podcast versions) (Ketterl et al., 2006a, b) were a step towards supporting mobile users with fine-grained lecture recordings. In combination with additional mobile self-assessments as developed for the system presented here (Ketterl et al., 2007b) and for other systems (Hürst et al., 2007), learning on the go becomes possible. The podcast technology has a drawback at present for mobile learners: mobile users cannot, for example, give feedback to the lecturer due to technical limitations of the devices and of the podcast technology. With full AIR support on mobile devices, it is likely that these problems could be solved easily, as one AIR application could run on different platforms (mobile, internet and desktop).

Another branch we are pursuing in the Flex-based version of the interface is implementing social navigation functionalities that had previously been tested in the SVG-based version (Mertens et al., 2006a). Flex 2 does, however, open new perspectives for social navigation in lecture recordings. The reduced loading times allow for editing and rearranging content on the client side without having to change its server-side representation. It is also easier to embed the player in other web sites. To prove this, some of our lecture recordings and the newly implemented Flex 2 based virtPresenter interface have been integrated as an application in the social community Facebook. An issue that still remains to be solved is how navigation can be facilitated in re-arranged and re-structured content.
REFERENCES

Adobe Systems Incorporated (2007), "Flex 2 technical overview: technical whitepaper", available at: www.adobe.com/products/flex/whitepapers/pdfs/flex2wp_technicaloverview.pdf (accessed December 2007).
Bieber, M. (2000), "Hypertext", in Ralston, A., Reilly, E. and Hemmendinger, D. (Eds), Encyclopaedia of Computer Science, 4th ed., Nature Publishing Group, pp. 799-805.
Chambers, M., Dixon, R. and Swartz, J. (2007), Apollo for Adobe Flex Developers Pocket Guide, Adobe Developer Library, O'Reilly Media Inc., Sebastopol, CA.
Chambers, M., Dura, D. and Hoyt, K. (2007), Adobe Integrated Runtime (AIR) for JavaScript Developers, Adobe Developer Library, O'Reilly Media Inc., Sebastopol, CA.
Cohen, B. (2007), "Silverlight technical articles. Silverlight architecture overview: technical whitepaper", Microsoft Corporation, April, available at: http://msdn2.microsoft.com/en-us/library/bb428859.aspx (accessed December 2007).
Hoppe, U., Klostermeier, F., Boll, S., Mertens, R. and Kleinefeld, N. (2007), "Wirtschaftlichkeit von Geschäftsmodellen für universitäre Lehrkooperationen – eine Fallstudie", Zeitschrift für E-Learning, Vol. 3 No. 2, pp. 29-40.
Hürst, W. and Müller, R. (1999), "A synchronization model for recorded presentations and its relevance for information retrieval", 7th ACM International Conference on Multimedia, Orlando, FL, pp. 333-42.
Hürst, W. and Welte, M. (2007), "An evaluation of the mobile usage of e-lecture podcasts", Proceedings of the Mobility Conference on Mobile Technology, Applications and Systems, Singapore, September 2007.
Hürst, W., Jung, S. and Welte, M. (2007), "Effective learnquiz generation for handheld devices", Proceedings of the 9th Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI 2007), Singapore.
Kazoun, C. and Lott, J. (2007), Programming Flex 2, Adobe Developer Library, O'Reilly Media Inc., Sebastopol, CA.
Ketterl, M., Mertens, R. and Morisse, K. (2006a), "Alternative content distribution channels for mobile devices", Microlearning International Conference on Micromedia & eLearning 2.0: Getting the Big Picture, Innsbruck, 8-9 June 2006, pp. 119-30.
Ketterl, M., Mertens, R. and Vornberger, O. (2007a), "Vector graphics for web lectures: comparing Adobe Flash 9 and SVG", Workshop on Multimedia Technologies for E-Learning (MTEL), IEEE International Symposium on Multimedia 2007, Taichung, Taiwan, 10-12 December 2007, pp. 389-95.
Ketterl, M., Heinrich, T., Mertens, R. and Morisse, K. (2007b), "Enhanced content utilisation: combined re-use of multi-type e-learning content on mobile devices", IEEE Multidisciplinary Engineering Education Magazine, Vol. 2 No. 2, pp. 61-4.
Ketterl, M., Mertens, R., Morisse, K. and Vornberger, O. (2006b), "Studying with mobile devices: workflow and tools for automatic content distribution", World Conference on Educational Multimedia, Hypermedia & Telecommunications, ED-Media 2006, Orlando, FL, June 2006, pp. 2082-8.
Lauer, T. and Ottmann, T. (2002), "Means and methods in automatic courseware production: experience and technical challenges", Proceedings of the World Conference on E-Learning in Corp., Govt., Health. & Higher Education, E-Learn 2002, Montreal, Quebec, Canada, 15-19 October 2002, pp. 553-60.
Mertens, R. (2007), "Hypermediale Navigation in Vorlesungsaufzeichnungen: Nutzung und automatische Produktion hypermedial navigierbarer Aufzeichnungen von Lehrveranstaltungen", PhD thesis, Universität Osnabrück, Osnabrück.
Mertens, R., Farzan, R. and Brusilovsky, P. (2006a), "Social navigation in web lectures", ACM Hypertext 2006, Odense, Denmark, 23-25 August 2006, pp. 41-4.
Mertens, R., Friedland, G. and Krüger, M. (2006b), "To see or not to see? Layout constraints, the split attention problem and their implications for the design of web lecture interfaces", World Conference on E-Learning in Corporate, Government, Healthcare & Higher Education, E-Learn 2006, Honolulu, HI, 13-17 October 2006, pp. 2937-43.
Mertens, R., Ketterl, M. and Vornberger, O. (2006c), "Interactive content overviews for lecture recordings", IEEE ISM 2006 Workshop on Multimedia Technologies for E-Learning (MTEL), San Diego, CA, 11-13 December 2006, pp. 933-7.
Mertens, R., Ketterl, M. and Vornberger, O. (2007), "The virtPresenter lecture recording system: automated production of web lectures with interactive content overviews", International Journal of Interactive Technology and Smart Education (ITSE), Vol. 4 No. 1, pp. 55-66.
Mertens, R., Brusilovsky, P., Ishchenko, S. and Vornberger, O. (2006d), "Time and structure based navigation in web lectures: bridging a dual media gap", World Conference on E-Learning in Corporate, Government, Healthcare & Higher Education, E-Learn 2006, Honolulu, HI, 13-17 October 2006, pp. 2929-36.
Mertens, R., Ickerott, I., Witte, Th. and Vornberger, O. (2005), "Entwicklung einer virtuellen Lernumgebung für eine Großveranstaltung im Grundstudium", Proceedings of the Workshop on e-Learning 2005, HTWK Leipzig, 11-12 July 2005, pp. 197-210.
Mertens, R., Schneider, H., Müller, O. and Vornberger, O. (2004), "Hypermedia navigation concepts for lecture recordings", World Conference on E-Learning in Corporate, Government, Healthcare & Higher Education, E-Learn 2004, Washington, DC, November 2004, pp. 2480-7.
Red5 Open Source Streaming Server (2007), available at: http://osflash.org/red5 (accessed December 2007).
Schulze, L., Ketterl, M., Gruber, C. and Hamborg, K.C. (2007), "Gibt es mobiles Lernen mit Podcasts? Wie Vorlesungsaufzeichnungen genutzt werden", 5. e-Learning Fachtagung Informatik (DeLFI), Siegen, Germany, September 2007, pp. 233-44.
Téllez, A.G. (2007), "Authoring multimedia learning material using open standards and free software", IEEE International Symposium on Multimedia 2007 Workshop on Multimedia Technologies for E-Learning (MTEL), Taichung, Taiwan, 10-12 December 2007, pp. 383-9.
Vahdat, A., Dahlin, M. and Anderson, T. (1996), "Turning the web into a computer", Technical report, University of California, Berkeley, CA.
Ziewer, P. and Seidl, H. (2002), "Transparent teleteaching", 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE), Auckland, New Zealand, December 2002, Vol. 2, pp. 749-58.
191
Interactive Technology and Smart Education (2007) 192–199 © Emerald Group Publishing Limited
Authoring multimedia learning material using open standards and free software
Alberto González Téllez Departamento de Informática de Sistemas y Computadores, Valencia, Spain Email:
[email protected]
Abstract
Purpose – The purpose of this paper is to describe a technique for authoring multimedia learning material with open standards and free software, focusing on the case of synchronized multimedia presentations.
Design/methodology/approach – The proposal is based on SMIL as the composition language. In particular, the paper reuses and customizes the SMIL template used by INRIA for their technical presentations. It also proposes a set of free tools to produce presentation content and design, focusing on RealPlayer as the delivery client. The integration of multimedia compositions developed with the proposed technique into the authors' e-learning platform is also presented.
Findings – Technological support for learning and teaching has become widespread due to the ubiquity of computers and the internet. In particular, e-learning platforms permit the any-time-and-any-place distribution of interactive multimedia learning materials. There are commercial tools available to author this kind of content, usually based on proprietary formats. This option has drawbacks such as license costs and dependency on a software company. Using open data standards and free software is an alternative without these inconveniences, but the available authoring tools are commonly less productive. This shortcoming is particularly important to non-technical authors and could be addressed by open source collaboration.
Originality/value – The paper presents a way to author multimedia learning material using open standards and free software.
Keywords: Multimedia, E-learning, Teaching aids, Computer software
Paper type: Research paper
1. INTRODUCTION
Digital format learning/teaching materials are commonly used in universities due to classroom computer availability and to the added capabilities that computer-based delivery offers compared to the classic blackboard-only method. In fact, the classroom has been extended by ubiquitous e-learning platforms (Sakai, Moodle, WebCT, etc.) that impose the use of digital formats for learning content. Computer authoring tools permit the creation of dynamic presentations with animation effects, audio and video clips to make knowledge transference more effective. For instance, the widely used presentation editor PowerPoint is able to produce narrated presentations by adding a speaker voice track to slides. A step forward is taken by tools like eChalk (Jeschke, 2006) that allow the recording of all live activity on a pen-based input device or electronic whiteboard, including the lecturer's voice, and the delivery of the recorded content to Java-aware web clients. Common presentations authored with office suite tools are intended to be used locally on the computer where they are stored. Web format is supported as an export option, but the format obtained is usually not well suited for the Internet (i.e. slides converted to bitmaps, lack of streaming support, etc.).
HTML extended with Flash and JavaScript is more suitable for web delivery and is nowadays the general choice for web content authors. The main reason is that very good commercial authoring tools are available and almost all clients and platforms support these formats. In this context Flash is the part that adds multimedia support and, compared to HTML and JavaScript, it is a proprietary format. In spite of being the de facto standard for multimedia on the web (i.e. YouTube is based on Flash), it has the shortcoming of tying authors to Adobe and to its commercial decisions. Portability is not a problem because the Flash plug-in is available on Windows, Linux and MacOSX. Open format alternatives to Flash are SVG and SMIL, two XML compliant languages standardized by the World Wide Web Consortium (W3C). XML is a W3C effort to promote the definition and adoption of open and application-independent data formats. SVG stands for Scalable Vector Graphics and is intended for designing static and animated vector graphics. SMIL (Synchronized Multimedia Integration Language) (W3C SMIL site) permits combining and synchronizing several independent media in a presentation. The presence of SVG and SMIL on the web is nowadays clearly surpassed by Flash, but successful open source initiatives such as Firefox and Helix may change this scenario in the future. There are two other alternatives to make the web multimedia capable: ActiveX controls and Java applets. ActiveX is a Windows-only technology and it is successful due to the current dominance of Windows clients on the Internet. Java applets are supported on all Java-aware platforms and are very convenient for implementing small web-compatible interactive applications. We make use of applets to enrich our learning documents with interactive simulators (González, 2003) and, as we will see later, to include multimedia compositions in our e-learning platform. The availability of the Java plug-in for all common web clients and operating systems makes this technology a good development platform for e-learning environments (Jeschke, 2006).
In this work we propose a technique to develop multimedia contents for the Internet based on open standards, particularly SMIL, which has been used for many years in the context of lecture recording (INRIA site, Yang et al., 2001, Ma et al., 2003, Joukov, 2003, Hunter, 2001). Our proposal is comparable to the one appearing in Yang et al. (2001) but it is simpler and more concrete in the sense that all the tools and procedures required are presented and available. It includes a set of free and in most cases open source authoring tools. A main goal of the proposal is multiplatform support (particularly considering Windows, Linux and MacOSX) in the delivery and production processes.
2. CONTENT PRODUCTION SCHEME
We have been working in recent years on the utilization of open and XML compliant formats to produce teaching content. As a result we have developed an authoring environment to produce and manage content based on Docbook (González, 2006; González, 2007). Until now the content media was limited to text and static graphics, focusing on paper format delivery. In the academic year 2006-2007 our university started PoliformaT, an e-learning platform based on Sakai (Mengod, 2006). This has opened some working directions for us. One of them is based on the fact that Sakai has chosen the IMS formats for learning content (IMS site), particularly the IMS content package and IMS QTI languages. Our previous decision in favour of XML has been wise because Docbook content can be automatically translated to IMS format by means of XSLT. This is quite feasible because Docbook is well structured and format independent. A second working line is based on the possibility of delivering more dynamic content (multimedia compositions) containing animations, video, audio and user interaction. Our learning content anatomy has text as its backbone; we are classic in this respect. Text combined with static graphics is delivered in paper format (PDF) and in web format (HTML). The web format is extended by means of multimedia compositions based on SMIL; this extension is the topic of this work. Our multimedia compositions are classified according to the following sequence of increasing structural complexity (referred to in Section 4.1 as types A, B and C):
1. Static image with voice narration.
2. Computer animation or natural video with synchronized voice narration.
3. Multiple media synchronized with voice narration or lecturer video.
2.1 Media Formats and Media Player
An important decision to make when dealing with multimedia on the Internet is the selection of the target client. Web clients only directly support HTML, JavaScript and bitmap graphics (JPEG, PNG and GIF). Other content like vector graphics, audio and video requires specific plug-ins (Rogge, 2004) and therefore specific formats. The authoring of this kind of content is thus strongly conditioned by the target client. Some of the most common multimedia clients are:
• Windows Media Player (Microsoft).
• QuickTime (Apple).
• RealPlayer (Realnetworks, Helix).
• Flash (Adobe).
• Mplayer (open source).
• VLC (open source).
Excluding Windows Media Player, which is only available on Windows, all of these players are multiplatform. Flash is without doubt the one that wins in terms of the amount of content published on the web. RealPlayer has been overshadowed by Windows Media Player but it is still alive (release 11.0 was delivered in November 2007) and it has the interesting feature of having a linked open source initiative named Helix (Helix site). Helix was started by Realnetworks and includes several open source projects, among them several players, the Helix server and streaming formats. RealPlayer supports SMIL 2.0, briefly described in the next section, which allows composition structures and user interaction capabilities that surpass the ones offered by proprietary formats (Pihkala, 2006). In (Bulterman, 2003) SMIL is proposed to encode peer-level annotations that allow dynamic expansion of multimedia presentations. The counterpart is that SMIL players tend to have limitations in the features they support and even errors (Eidenberger, 2003). And, last but not least, there is no media content standardization for SMIL. After balancing pros and cons we have chosen SMIL as the language to create our multimedia learning/teaching material. The purpose of SMIL is to define the spatial and temporal integration of several media in a multimedia composition and to establish the user interaction with the composition. The previous considerations indicate that it is advisable to choose a target client among the available SMIL-aware clients. This defines precisely the media formats to use and the portion of the SMIL specification that is properly supported and therefore reliable. RealPlayer is our choice because it is available on Windows, Linux and MacOSX and it supports an extensive subset of the SMIL 2.0 specification. In spite of being a proprietary player it has the interesting feature mentioned previously of being related to the open source project Helix. RealPlayer supports, among others, the following formats:
• Text: plain text and Realtext.
• Images: JPEG.
• Audio: Realaudio.
• Animations: Realvideo.
• Natural video: Realvideo.
Realaudio and Realvideo are specially designed for streaming delivery and are the most convenient audio and video formats to obtain good synchronization results in SMIL constructs played by RealPlayer.
3. SMIL
SMIL (Synchronized Multimedia Integration Language, W3C SMIL site) is the XML W3C standard intended to define the synchronized integration of text, graphics, audio and video in multimedia presentations. SMIL permits the definition of the spatial and temporal composition of several media and of the interaction between the media inside the presentation and between the presentation and the user. Being XML compliant, only a plain text editor is required to create SMIL documents by hand, and it is also straightforward to generate them automatically. The SMIL document structure is similar to HTML; there is a root element <smil> with two child elements, <head> and <body>. The <head> element is the document header and contains several kinds of metadata elements. The most important one is <layout>, which defines the spatial regions of the presentation as shown for instance in Figure 1. The <root-layout> element defines the features (background color, size, etc.) of the main presentation panel. The <layout> element also includes the definition of the spatial regions that will contain the presentation media. Every region is defined by a <region> element that sets its location and size; it also assigns a unique identifier (id attribute) to the region in order to be able to make references to it from the content part of the document. The header section can also include descriptive metadata in <meta> elements that will permit the inclusion of the document in an automatically managed content repository. After the header we have the <body> element, which includes the references to the media shown in the presentation and their spatial and temporal locations. Every media item is included by means of a media element such as <img>, <audio>, <video> and <text>, using the attribute src to specify the location path of the media file. Spatial location is defined by means of the region attribute, which is set to a region identifier defined by the id attribute of a <region> element. Time behavior is defined by means of a nested composition of <seq> and <par> elements that define sequential and parallel playing, respectively. Inside a <seq> or <par> element we can have <switch> elements intended to select from a media collection the ones that comply with some condition (i.e. the presentation language). Every media element has attributes that establish its timing behavior: begin for the start time, end for the end instant and dur for the media playing duration. SMIL also has links, implemented with the <a> element, that allow user interaction with the presentation. Links can point to content locations (as in HTML) and to temporal locations. Temporal link destinations are defined by means of <area> elements included in temporal media elements (i.e. locations in a video clip). In Section 4.2 we describe how this is performed in our compositions.
Figure 1 Layout section example
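To make the structure above concrete, the following is a minimal SMIL sketch in the spirit of this description (region names, media file names and durations are invented for illustration; the element and attribute names follow the SMIL 2.0 specification):

<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <!-- main presentation panel -->
      <root-layout width="800" height="600" backgroundColor="white"/>
      <!-- named regions referenced from the body -->
      <region id="title"   left="0"   top="0"  width="800" height="60"/>
      <region id="slides"  left="0"   top="60" width="600" height="540"/>
      <region id="speaker" left="600" top="60" width="200" height="150"/>
    </layout>
  </head>
  <body>
    <par>
      <!-- lecturer narration plays in parallel with the slide sequence -->
      <video src="lecturer.rm" region="speaker"/>
      <seq>
        <!-- each slide is shown for its own duration -->
        <img src="slide1.jpg" region="slides" dur="30s"/>
        <img src="slide2.jpg" region="slides" dur="45s"/>
      </seq>
    </par>
  </body>
</smil>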
3.1 INRIA's SMIL Template
The French research institution INRIA (Institut National de Recherche en Informatique et en Automatique) has chosen SMIL as the format for their technical presentations (INRIA site). Our interest in SMIL began after noticing how well these presentations are reproduced with RealPlayer on Windows, Linux and MacOSX. It has been possible to analyze these presentations and to reuse their template because SMIL is an open format and INRIA has not defined privacy restrictions. INRIA presentations are about one hour long and they are made using two designs; the first one has a root presentation that links to partial presentations that are several minutes long. The second design includes the whole presentation in one SMIL document. Our presentations are conceived as small pieces in a text-backboned lecture, so their length will be up to 5 or 10 minutes and the second template is more adequate. The spatial design of the selected SMIL template defines the regions shown in Figure 2:
• Title: it includes the presentation titles.
• Slides: it shows the presentation slides (i.e. JPEG images); it can include sub-regions to show different types of media inside a slide.
• Temporal link index: it includes temporal links to presentation time locations.
• Lecturer: it shows a narrative lecturer video.
• Logo: it includes the institution logo.
Figure 2 INRIA technical presentation
The presentation timing structure has a root <par> element that contains the slide sequence, the menu links and the narrative lecturer video. The sequence of slides is made by putting the elements (<img> or <video>) corresponding to every slide inside a <seq> element. If a slide is made up of different media then it corresponds to a <par> element that includes all the media. The timing control of the slide sequence is implemented by means of the dur attribute inside every slide element. The narrative lecturer video covers the whole presentation and is encoded in Realvideo. The menu links point to <area> locations in the lecturer video that are also synchronized with the timing defined in the slide sequence. A more detailed analysis can be performed by looking at the SMIL source of a presentation. This can be done by clicking on the clip source entry in the floating menu when a presentation is played with RealPlayer. We have found INRIA technical presentations a good example of SMIL's capability to create multimedia presentations. Our customization of their design template, in order to elaborate our multimedia material, is described in Section 4.1.
4. AUTHORING TOOLS
After establishing the technology to use, we have to select a set of good enough authoring tools to produce the media we are going to include in our multimedia presentations. We have a preference for free, open source and multiplatform tools. In Table 1 we propose a set of free tools available on Windows and Linux. Real Producer Basic is a free product from Realnetworks that permits capturing and converting audio and video to Real formats. The converted streams cannot be edited inside Real Producer Basic; therefore the media editing, if required, should be performed before conversion. Impress is the OpenOffice presentation editor and we have found it good enough to produce teaching content. CamStudio and xvidcap are screen video recorders, both of them open source. They allow producing demos or animations by recording screen videos (i.e. Impress animations). Finally, JEdit is an open source text editor written in Java with several extensions. One that is particularly relevant here is the XML extension, which is very adequate for editing SMIL documents.
Table 1 Authoring tools
Tool type           Windows               Linux
Audio capture       Real Producer Basic   Real Producer Basic
Video capture       Real Producer Basic   Real Producer Basic
Screen video cap    CamStudio             xvidcap
Animations          Impress               Impress
SMIL editor         JEdit                 JEdit
4.1 SMIL Template Customization
The multimedia compositions that we are interested in are the ones mentioned in Section 2. It is very straightforward to reuse the INRIA SMIL template to create these types of multimedia compositions. To produce a type A composition we only have to make the following changes in the SMIL template (a minimal sketch of the resulting type A body is given at the end of this subsection):
• Delete the link menu.
• Replace the <video> narration by an <audio> element.
• Delete the <area> elements in the narration element.
• Reduce the slide sequence to a single <img> element.
A type A composition is converted into a type B composition by replacing the <img> element in the slide sequence by a <video> element. To produce a type C composition we only have to define the slide sequence and the synchronization between the link menu, the narrative video or audio and the slide sequence. A detailed explanation is given in Section 4.2. An example of a type C composition is shown in Figure 3. The lecturer video has been replaced by an audio track and a GIF animation in order to reduce the amount of storage or network bandwidth required.
Figure 3 Customization example
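Returning to the simplest case, a minimal sketch of what the body of a type A composition looks like after the changes listed above (file names and duration are invented):

<body>
  <par>
    <!-- voice narration -->
    <audio src="narration.ra"/>
    <!-- single static image shown for the whole narration -->
    <img src="diagram.jpg" region="slides" dur="120s"/>
  </par>
</body>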
4.2 Production Process
The hardware equipment required is very accessible and is composed of a PC, speakers, a microphone and a digital video camera. The production process has two main steps:
1. Content creation (slides, audio clips, video clips, etc.).
2. Content integration using SMIL.
A common content item is a slide sequence created by a presentation editor like PowerPoint or Impress. PowerPoint allows exporting a presentation in JPEG or PNG format in such a way that every slide is exported as a JPEG or PNG file. Impress has an HTML export option that also exports every slide as a JPEG file. If a slide includes animation the previous technique is not adequate. An animated slide can be captured using one of the screen capture utilities proposed in Section 4. The capture process will generate a video clip that is later converted into Realvideo format in order to get a good result in RealPlayer. This conversion is performed by means of Real Producer Basic, which supports several video formats as input, like uncompressed AVI and DV. Natural video obtained with a video camera (webcam, camcorder, etc.) can also be included by performing the same conversion as for screen capture clips. We have found that a target bandwidth between 256 and 512 kbps for Realvideo gives satisfactory results for both screen recordings and natural video. The presentation narration is produced by recording an independent audio or video clip for every presentation item. This can be done by means of the capture capability of Real Producer Basic, which directly generates Realaudio and Realvideo formats. The inconvenience is that the captured clip cannot be edited. If audio and video editing is needed then a capture utility (i.e. Nero 7) that generates a Real Producer Basic compatible format is required. When all the individual clips are available in Realaudio or Realvideo formats they are glued into a single narration by means of the rmeditor console utility included in Real Producer Basic.
After having obtained all the presentation content items and the presentation narration, the next step is to customize the SMIL template (i.e. using JEdit). The customization process has two dimensions:
1. Spatial. Definition of the presentation layout (slide region, link region, title region, etc.).
2. Temporal. Definition of temporal behavior and synchronization (slide durations and time link locations).
Temporal design is the most complex and is performed in three steps:
1. Get the duration of every individual narration clip (t_slide,dur). This is indicated by RealPlayer when playing the clip.
2. Obtain the sequence of slide starting times (t_slide,start). This can be computed from the t_slide,dur values using a spreadsheet, as the formula below shows.
3. Design the temporal link index by grouping slides and obtaining the location of every anchor in the presentation timeline (t_link,start) from the t_slide,start values.
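In other words, if slides are numbered from 1 and played back to back, the start time of slide n used in step 2 is simply the accumulated duration of the preceding clips:

t_{\mathrm{slide,start}}(n) \;=\; \sum_{k=1}^{n-1} t_{\mathrm{slide,dur}}(k)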
The values obtained in the temporal design are included in the SMIL file in the following way:
• The dur attribute of the elements corresponding to the slide sequence (<img> for static slides, <video> for dynamic slides) is set to t_slide,dur.
• The begin attribute of the <area> elements inside the <video> or <audio> element that corresponds to the presentation narration is set to t_link,start.
Finally, the href attribute of the <a> elements that make up the link menu is set to the values of the id attribute of the associated <area> elements defined in the narration. An example of setting temporal links is shown in Figure 4.
Figure 4 Defining temporal links
A presentation of this paper can be found at the "SMIL presentation of this paper" reference; looking at its source code illustrates the previous description.
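A sketch of the resulting fragment, with invented durations, identifiers and file names, shows how the three kinds of values fit together:

<par>
  <!-- slide sequence: dur holds the measured narration length of each slide -->
  <seq>
    <img src="slide1.jpg" region="slides" dur="40s"/>
    <img src="slide2.jpg" region="slides" dur="65s"/>
  </seq>
  <!-- narration: <area> anchors mark the time locations the menu points to -->
  <audio src="narration.ra">
    <area id="part1" begin="0s"/>
    <area id="part2" begin="40s"/>
  </audio>
  <!-- link menu: href refers to the id of the corresponding <area>;
       each entry is shown in its own sub-region of the index area (layout not shown) -->
  <a href="#part1"><text src="entry1.txt" region="index1" dur="105s"/></a>
  <a href="#part2"><text src="entry2.txt" region="index2" dur="105s"/></a>
</par>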
4.3 Automatic Generation of SMIL Composition
The manual generation of SMIL compositions following the procedure in the previous section is not very attractive. A good feature of SMIL is that its XML compliance allows easy automatic generation by means of standard XML tools like XSLT. Once the author has generated the media (slides, audio clips, title and table of contents) an XSLT stylesheet is used to produce the SMIL file without any further user intervention. The most difficult issue is temporal synchronization, but fortunately Realnetworks media formats can be converted to text by means of the "rmeditor" utility, particularly using the "-d" option. The generated text file includes the temporal length of the file in milliseconds. In order to hide this process from the user, a simple front end implemented in Java obtains the temporal length of the clips and executes the XSLT stylesheet in order to generate the complete SMIL composition. The front end also includes layout customization of the four presentation regions: slide, table of contents, title and icon.
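As a rough sketch of this idea, the stylesheet below assumes a hypothetical intermediate document that lists the slides together with the durations measured by rmeditor, and emits the corresponding slide sequence of the SMIL body; the real stylesheet would also generate the layout and the link menu:

<!-- hypothetical input: <presentation><slide file="slide1.jpg" dur="40s"/>...</presentation> -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/presentation">
    <smil>
      <body>
        <seq>
          <!-- one <img> per slide, with the measured duration -->
          <xsl:for-each select="slide">
            <img src="{@file}" region="slides" dur="{@dur}"/>
          </xsl:for-each>
        </seq>
      </body>
    </smil>
  </xsl:template>
</xsl:stylesheet>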
5. POLIFORMAT INTEGRATION
The integration into our LMS platform (PoliformaT) of the multimedia material developed with the proposed technique has faced some problems. The main one is related to the PoliformaT content delivery protocol, which is HTTPS, and the lack of HTTPS support in RealPlayer. It would be nice if our university administration became convinced of the convenience of our technique and decided to start up a Helix server coordinated with PoliformaT, but in the meantime we looked for a "PoliformaT alone" solution. After some caveats and several tests we have developed a solution based on storing multimedia clips compressed in zip format in PoliformaT. A Java applet, located in the learning content web page, downloads the zip file, decompresses it locally and finally launches RealPlayer playing the local copy of the clip. This solution is feasible because our compositions will be about 5 minutes long, giving a zip file size of about 10 Mbytes. The download over a common broadband Internet connection takes from 10 to 20 seconds. A drawback of the solution is that some configuration has to be performed on the client side. In particular, the applet needs permission to read and write in a local folder and to connect to the PoliformaT HTTPS port. It also needs permission to execute RealPlayer. Applet permissions can be defined in a text file with a specific name, location and syntax (Lai et al., 1999). Every user can have a configuration for applet permissions in a file named ".java.policy" located in the user's home directory. Permissions are defined by including "grant" entries in the permission file. To establish the required permissions for our applet a grant entry is therefore required. Supposing a user named "agonzale" and an applet working folder "poliformat" located in the user's home folder, the grant entry in Windows Vista is as shown in Figure 5. The grant is restricted to the content URL in PoliformaT and four permissions are established:
1. To read the user home folder path and (on MacOSX only) the file encoding property.
2. To read and write in the working folder.
3. To execute "realplay.exe".
4. To connect to the PoliformaT server on the HTTPS port.
The grant entry syntax depends on the operating system, particularly on the file path syntax, and is therefore slightly different on Windows Vista, Windows XP, Linux and MacOSX.
Figure 5 Grant entry to allow required applet permissions
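As an illustration only, a grant entry of this kind could look roughly as follows on Windows Vista; the codeBase URL, server name and RealPlayer install path are placeholders that have to match the actual PoliformaT deployment and client machine:

grant codeBase "https://poliformat.upv.es/-" {
    // 1. read the user home folder path
    permission java.util.PropertyPermission "user.home", "read";
    // 2. read and write in the applet working folder
    permission java.io.FilePermission "C:\\Users\\agonzale\\poliformat\\-", "read,write";
    // 3. execute RealPlayer
    permission java.io.FilePermission "C:\\Program Files\\Real\\RealPlayer\\realplay.exe", "execute";
    // 4. connect to the PoliformaT server on the HTTPS port
    permission java.net.SocketPermission "poliformat.upv.es:443", "connect";
};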
The applet design is simple; it includes a "Start" button, a progress bar that shows the zip download progress and a clip preview image. A set of parameters indicates to every applet instance the zip file to download and the SMIL file to play. Figure 6 shows a screen shot with the applet embedded in a learning content HTML page inside PoliformaT and RealPlayer launched by pressing the applet "Start" button. The solution has been successfully tested with Internet Explorer (32-bit version) on Windows XP and Windows Vista, and with Firefox on Windows XP, Windows Vista, Linux (Ubuntu 7.10 and OpenSuSE 10.3) and MacOSX 10.4 (Tiger).
Figure 6 RealPlayer launched from the applet located in PoliformaT
Some improvements can be added to the developed solution in order to make it more secure; for instance, the applet can be signed and a checksum can be computed in order to ensure that the program started by the applet really is RealPlayer. The permission configuration procedure is performed automatically by means of a Java application that generates the ".java.policy" update and offers a friendly user interface. This application is implemented in Java because Java applications are portable and they do not have local access restrictions apart from the ones that apply to the user. The application requires very little user intervention; it only asks the user for the path location of RealPlayer in case it is unable to locate it in the common paths used by the user's operating system.
6. CONCLUSIONS AND FUTURE WORK
In this work we have presented a proposal to produce multimedia compositions based on SMIL and Realnetworks technology linked to the open source project Helix. In particular, we have chosen RealPlayer as the target multimedia client and the Realaudio and Realvideo formats to deliver audio and video media. The proposal includes three types of multimedia compositions of increasing complexity and their implementation by customizing the SMIL template used by INRIA for its technical presentations. A set of free, and mostly open source, authoring tools is proposed for Windows and Linux. Therefore, following the presented technique, lecturers can author multimedia presentations at a very low cost. The integration of our multimedia compositions into our Sakai-based e-learning platform PoliformaT has been solved relying on Java applet technology. We have observed that students like multimedia material as a complementary resource in face-to-face classes, and
as self-learning material it is much more helpful than static and silent material. We have not yet performed a quantitative evaluation of the impact of including multimedia compositions in our teaching documents. This impact will be measured by the level of student participation, the lecturer evaluation polls and the students' academic results.
REFERENCES
Bulterman, D. (2003), "Using SMIL to encode interactive, peer-level multimedia annotations", Proceedings of the 2003 ACM Symposium on Document Engineering, Grenoble, France, pp. 32-41.
Bulterman, D. and Rutledge, L. (2004), SMIL 2.0: Interactive Multimedia for Web and Mobile Devices, Springer-Verlag, Heidelberg.
Eidenberger, H. (2003), "SMIL and SVG in teaching", Internet Imaging V, Proceedings of the SPIE, Vol. 5304, pp. 69-80.
González, A. (2003), "Interactive applets for introductory courses on computer architecture", International Conference on Engineering Education 2003, Valencia, Spain.
González, A. (2006), "Teaching document production and management with Docbook", II International Conference on Web Information Systems and Technologies (WEBIST 2006), Setúbal, Portugal.
González, A. (2007), "Authoring reusable slide presentations", III International Conference on Web Information Systems and Technologies (WEBIST 2007), Barcelona.
Helix project, available at: http://helixcommunity.org
Hunter, J. and Little, S. (2001), "Building and indexing a distributed multimedia presentation archive using SMIL", Proceedings of the 5th European Conference on Research and Advanced Technology for Digital Libraries, pp. 415-28.
IMS Global Learning Consortium, available at: www.imsglobal.org/
INRIA technical presentations, available at: www.inria.fr/multimedia/exposes
Jeschke, S., Knipping, L. and Pfeiffer, O. (2006), "The eChalk system: potential of teaching with intelligent digital chalkboards", Current Developments in Technology-Assisted Education 2006, m-ICTE2006, pp. 996-1000.
Joukov, N. and Chiueh, T. (2003), "Lectern II: a multimedia lecture capturing and editing system", Proceedings of the 2003 International Conference on Multimedia and Expo, ICME '03, Vol. 2, pp. 681-4.
Lai, C., Gong, L., Koved, L., Nadalin, A. and Schemers, R. (1999), "User authentication and authorization in the Java platform", 15th IEEE Annual Computer Security Applications Conference, pp. 285-90.
Ma, M., Schillings, V., Chen, T. and Meinel, Ch. (2003), "T-Cube: a multimedia authoring system for eLearning", Proceedings of E-Learning 2003, 7-11 November, Phoenix, Arizona, pp. 2289-96.
Mengod, R. (2006), "PoliformaT, the Sakai-based on-line campus for UPV – history of a success", 5th Sakai Conference, Vancouver, BC, Canada, 30 May-2 June 2006.
Pihkala, K. and Vuorimaa, P. (2006), "Nine methods to extend SMIL for multimedia applications", Multimedia Tools and Applications, Vol. 28, pp. 51-67.
Rogge, B., Bekaert, J. and Van de Walle, R. (2004), "Timing issues in multimedia formats: review of the principles and comparison of existing formats", IEEE Transactions on Multimedia, Vol. 6 No. 6, pp. 910-24.
SMIL presentation of this paper, available at: www.disca.upv.es/agt/mtel2007/mtel2007.ram
W3C SMIL standard, available at: www.w3.org/TR/2005/REC-SMIL2-20050107/
Yang, Ch., Yang, Y. and Lin, K. (2001), "A SMIL-based lesson recording system for distance learning", Proceedings of the 2001 Conference on Distributed Multimedia Systems (DMS2001), pp. 486-9.
Interactive Technology and Smart Education (2007) 200–207 © Emerald Group Publishing Limited
E-learning activity-based material recommendation system
Feng-jung Liu Department of Digital Arts and Multimedia Design, TAJEN University, Ping-Tung, Taiwan Email:
[email protected] and
Bai-jiun Shih Department of Management Information System, TAJEN University, Ping-Tung, Taiwan Email:
[email protected]
Abstract
Purpose – Computer-based systems have great potential for delivering learning material. However, problems are encountered, such as the difficulty of learning resource sharing, the high redundancy of learning material, and the deficiency of the course brief. In order to solve these problems, this paper aims to propose an automatic inquiring system for learning materials which utilizes the data-sharing and fast-searching properties of the Lightweight Directory Access Protocol (LDAP) and the Java Architecture for XML Binding (JAXB).
Design/methodology/approach – The paper describes an application that utilizes the techniques of LDAP and JAXB to reduce the load of search engines and the complexity of content parsing. Additionally, through analyzing the logs of learners' learning behaviors, the likely keywords and the associations among the learning course contents are ascertained. The integration of the metadata of the learning materials in different platforms and its maintenance in the LDAP server is specified.
Findings – As with a general search engine, learners can search contents by using multiple keywords concurrently. The system also allows learners to query by content creator, topic, content body and keywords to narrow the scope of materials.
Originality/value – Teachers can use this system more effectively in their education process to help them collect, process, digest and analyze information.
Keywords: E-learning, Teaching aids
Paper type: Research paper
1 INTRODUCTION
Computer-based systems have great potential for delivering learning material (Masiello et al., 2005), which frees teachers from handling mechanical matters so they can practice far more humanized pedagogical thinking. However, information comes from different sources embedded with diverse formats in the form of metadata, making it troublesome for computerized programs to create professional materials (Shih et al., 2007). The major problems are:
1. Difficulty of learning resource sharing. Even if all e-learning systems follow a common standard, users still have to visit individual platforms to gain appropriate course material contents, which is comparatively inconvenient.
2. High redundancy of learning material. Due to the difficulty of resource sharing, it is hard for instructors to figure out the redundancy of course materials, and this results in a waste of resources, physically and virtually. Even worse, the consistency of course content is endangered, which might eventually slow down the innovation momentum of course materials.
3. Deficiency of the course brief. It is hard to abstract the course summary or brief automatically in an efficient way, so most courseware systems only list the course names or the unit titles. This information is insufficient for learners to judge the quality of course content before they enroll in certain courses.
To solve the problems mentioned above, we propose an automatic inquiring system for learning materials which utilizes the data-sharing and fast-searching properties of LDAP. Our system not only emphasizes friendly search interfaces, but also excavates association rules from the log data of learning activities. Meanwhile, collaborative filtering is employed to improve the reliability of the search results. In this paper, a detailed description of the system construction is provided, followed by pedagogical application suggestions on possible alternatives for the integration of the technology into the learning field. In the literature review section, LDAP and JAXB are introduced. Then, we give a full review of our experience constructing the LMS supported by the National Science Committee in Taiwan, and explain how the related techniques were applied. Both technical and educational evaluations of the system are proposed. At last, our conclusion brings some ideas and suggestions on the possible alternative integration of technology into learning in all fields.
2 LITERATURE REVIEW
It is seen that a key factor of hypermedia-based learning is a customizable cognitive style, as it suits users' information processing habits, representing an individual user's typical modes of perceiving, thinking, remembering and problem solving. From the perspective of learning theories, cognitive psychologists recognize that knowledge has a basic structure which presents the inter-relationships between concepts. Anderson (1980) distinguished knowledge into declarative and procedural types to identify their characteristics as abstract or practical functions. In understanding knowledge, Ausubel (1968) provided two strategies, progressive differentiation and integrative reconciliation, for making meaningful learning, focusing on systematic methods of learning (Ausubel, 1968). The implication of these theories is twofold: one, knowledge has an internal structure to sustain itself; two, systematic retrieval and understanding of knowledge is an effective method for learning (Shih et al., 2007). In order to effectively use technology to assist the education process, helping learners to collect, process, digest, and analyze information, we introduce data mining, association and collaborative filtering technologies, describing how the technology facilitates the processing of data, and how the dynamic map can achieve the pedagogical goal.
The "information overload" problem is even more emphasized with the growing amount of text data in electronic form and the availability of information on the constantly growing World Wide Web (Mladenic et al., 2003). When a user enters a keyword, such as "pencil", into a search engine, the returned result is often a long list of web pages, many of which are irrelevant, moved, or abandoned (Smith et al., 2003). It is virtually impossible for a single user to filter out quality information under such an overloading situation (Shih et al., 2007). New contextual knowledge objects can be found in new contexts in an organization, regardless of potential manually-assigned categories, and can be gathered in groupware-based organizational memories (Klemke, 2000; Nonaka et al., 1995). Effectively processing, managing, sharing and applying this huge amount of data has become a major task of knowledge management. In the system, we utilized the techniques of the LDAP directory server and JAXB to reduce the load of developing a search engine and the complexity of parsing the contents (Li et al., 2005). Additionally, we applied data-mining techniques to support the e-learning material recommendation. Below, we give a brief illustration of the relevant techniques: LDAP and JAXB, association rules and collaborative filtering.
2.1 Search and Storage
The Lightweight Directory Access Protocol, LDAP, is an application protocol for querying and modifying directory services running over TCP/IP. LDAP was originally intended to be a "lightweight" alternative protocol for accessing X.500 directory services through the simpler (and now widespread) TCP/IP protocol stack. An LDAP directory is a tree of directory entries and is mainly made up of two parts: the first part is the database of the directory, which has a well-defined data schema describing the attributes of the data; the other part is the access protocol used to query and handle the database. LDAP deployments today tend to use Domain Name System (DNS) names for structuring the topmost levels of the hierarchy. Currently, lots of network resource management applications are implemented with LDAP technology, such as DNS, mail systems, telephone directories, etc. The properties of LDAP are listed as follows:
1. Fast searching of data. LDAP utilizes the properties of data hierarchy, caching technology and innovative index technology to offer a fast inquiry service.
2. Extendable data schemata. The data schema mainly describes and defines the attributes of entries in the directory tree. LDAP allows users to define data schemas by themselves and therefore makes schema specification more flexible.
3. Multiple access permissions and data encoding. Besides being able to establish access permissions individually according to users' specifications, LDAP also supports security mechanisms such as Secure Socket Layers (SSL), Transport Layer Security (TLS) and the Simple Authentication & Security Layer (SASL).
4. Suitable for the inquiry of large amounts of data. The directory database is designed under the assumption that the frequency of reading is greater than the frequency of writing. It can improve the usability and dependability of data by duplicating data extensively.
2.2 Data Association
The association rule concept was proposed by Agrawal et al. (1993). Its main purpose is to find the relations among items in a large amount of data by analyzing transaction logs and calculating the support degree and the confidence degree, which are used to evaluate whether an association is reasonable. The Apriori algorithm, also proposed by Agrawal et al., is one of the most representative association rule algorithms. Its key steps are described as follows (the standard definitions of support and confidence are recalled at the end of this subsection):
1. Step 1: Produce the candidate sets from the database, and find the large itemsets that satisfy the minimum support degree.
2. Step 2: From the large itemsets derived in Step 1, find the rules that satisfy the minimum confidence degree.
Step 1 is time-consuming because the database has to be scanned many times. Therefore, many algorithms, like the DIC algorithm (Brin et al., 1997), the DHP algorithm (Park et al., 1995) and the AprioriTid algorithm (Agrawal et al., 1994), have been proposed successively to improve the performance. In this paper, we suggest the use of association rule mining techniques to build an e-learning material recommendation system that tries to intelligently recommend on-line learning activities or shortcuts in the course web site to learners, based on the actions of previous learners, in order to improve the course content navigation as well as to assist the on-line learning process. No matter what network connections users use to surf the internet, business administrators can make use of the analysis of users' behaviors or network trading activities to promote some items to their customers. It is therefore very important to use association rule techniques to derive the associations among items. The proposed recommendation system utilizes these techniques to uncover learning behaviors with the same essence of learning and exploits these associations to offer recommendations for advanced learning or to provide relevant learning materials to learners.
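For reference, the support and confidence of a rule A ⇒ B are the standard association rule measures (T denotes the set of logged transactions, e.g. search sessions):

\mathrm{support}(A \Rightarrow B) = \frac{|\{t \in T : A \cup B \subseteq t\}|}{|T|}, \qquad
\mathrm{confidence}(A \Rightarrow B) = \frac{\mathrm{support}(A \cup B)}{\mathrm{support}(A)}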
2.3 Collaborative Filtering
Collaborative filtering is a personalized recommendation mechanism, also called people-to-people correlation. Currently, it is applied extensively in all kinds of e-commerce to infer a user's interest in other products or services through the analysis of the user's data or behavior. According to the method used, collaborative filtering can be divided into the following models:
1. Rule-based filtration. Data such as users' preferences or interests are obtained by questionnaires or other means and are used as the reference for recommendation.
2. Content-based filtration. The system recommends other relevant content according to the content the user has selected; the recommended content has mostly been used or consulted by other users before.
3. Activity-based filtration. By collecting users' activity information, the relations between contents are inferred. Such a mechanism is usually applied on websites without a member system.
Prior work in collaborative filtering has dealt with improving the accuracy of the predictions. In the proposed system, we adopted collaborative filtering instead of complicated content processing, and provided accurate recommendations of possible keywords for contents. The system uses this simple collaborative filtering mechanism as a replacement for complicated content feature extraction: setting up content feature profiles is difficult and complicated work, whereas analyzing the search strings that users type in allows the keywords of a course to be found from the users' experience. Although this is not as accurate as full feature extraction, it is still a feasible method.
3 SYSTEM DESIGN
The system is divided into four parts: collecting indexing data, inquiring services, association rules and collaborative filtering. Figure 1 explains the relation between the users and the system and the processing flow of the usage data. In the system model, an LMS is used to provide the e-learning content. But in real life there exist various LMS platforms, so we designed an interface for content registration. Once materials stored on the various LMS platforms are registered, the web spider process parses the content through the registration information and stores the parsing results in the LDAP server. Thus, the system design is applicable to all e-learning content accessible via the HTTP protocol. Finally, we designed a web-based system that lets users query and learn the content in a friendly way. We present details of the four parts below.
Figure 1 System architecture
3.1 Collecting Indexing Data
The part that collects indexing data is similar to other web spider search engines. On parsing, the parser automatically traverses the complete course content and distinguishes the topic and the body of the content by means of the HTML tags. Moreover, it is able to easily transform the data into XML documents compliant with the XML schema by utilizing the Marshaller mechanism of JAXB (Ort and Mehta, 2003). Figure 2 shows a part of the program with the JAXB Marshaller mechanism for the recommendation system. Besides, it is easy to validate and access the data of the XML document using the get and set functions of JAXB for unmarshalling. In the last step, the processed data are stored into the LDAP database by means of the Java Naming and Directory Interface (JNDI).
Figure 2 Parts of the program of the JAXB Marshaller mechanism
3.2 Inquiring Services
Usually, learners in an e-learning system are concerned with what learning resources are related to their topics, what subjects are included in the course, etc. Through the data sharing mechanism of LDAP, the metadata of the learning materials distributed over several platforms are stored in the same directory database. With such a deployment, teachers can easily inquire about related learning resources and enrich them with innovation. Learners just search through the uniform interface of the system to learn about materials in several systems, instead of repeating the same queries on different LMS sites. Besides the inquiring services, another task of this part is to collect the users' operating behavior, including search strings, entry links, etc. In the system implementation, we treated each search string as a transaction, and the keywords in the search string are represented as the items. We could then use these transaction logs to mine association rules from the user behaviors. Additionally, the usage logs created by students following the resource links provided by the inquiring services are treated as important data for collaborative filtering.
3.3 Association Rule Mining
In the system design, we adopted both the Apriori algorithm (Agrawal et al., 1994) and the tree-based algorithm (Han et al., 2000) to develop the association rules for the learning material recommendation system. For example, the information that a learner who queried with keyword A also tended to query with keyword B at the same time is represented by the following association rule:
A ⇒ B
Every keyword is treated as an item. A set of items is referred to as an itemset. In this system, an itemset can also be seen as the conjunction of a sequence of search keywords entered by a learner in the same session. If an itemset satisfies the minimum support, then it is a frequent itemset; this means that most learners used the keywords in the frequent itemset to look up content they were interested in. The process of association rule mining for the system is described in the following. When a user inquires about course contents through the query interface, the system parses the query strings into keywords of length one and stores them in the database. These keywords act as the large one-itemsets of the Apriori algorithm. SQL is used to filter and sort the large one-itemsets according to the minimum support. Then, combinations of these filtered items are formed, and the number of transaction records in the database that contain the same elements as the composed subsets is calculated. An itemset is called a frequent itemset if it satisfies the minimum support threshold. Moreover, the reasonable rules are derived from those items that satisfy the minimum confidence threshold. The minimum support threshold and the minimum confidence threshold influence the reliability of the filtered results: if they are set to values that are too small, the reliability of the derived results is reduced; if they are set too high, the number of results becomes too small to infer accurate predictions.
3.4 Collaborative Filtering
The general collaborative filtering approach recommends related information to the users, but we used the mechanism to find and provide the keywords of a course. Usually, text mining is treated as one of the general techniques to find the keywords in a context. However, the related text mining techniques cover many complicated algorithms and mechanisms, which are not easy to implement. Thus, we analyze the transaction log of learners' search strings to infer the likely keywords of the course contents. The main steps are described as follows:
1. Step 1: Calculate the frequency of the keywords used by learners to inquire about their preferred subjects.
2. Step 2: Utilizing the result of the step above, filter the items by a minimum threshold and store them into the LDAP server.
Though the reliability of the measured results is not verified yet, the proposed mechanism is simple to implement and good for automatic material-inquiring services instead of complicated text processing.
4 SYSTEM IMPLEMENTATION
In the system, we developed the backend program for collecting the related data in the Java programming language. In addition to the properties of cross-platform portability and ease of use, directly adopting the abundant packages with powerful functions, such as HttpClient in the Apache Jakarta project (http://jakarta.apache.org/commons/httpclient/) and HtmlParser (http://htmlparser.sourceforge.net/) created by derrickoswald, is good for reducing the cost of system implementation.
4.1 Collecting the Index Data
As shown in Figure 3, when content providers upload their learning materials to the LMS systems, the process passes the pathname or URL address of the uploaded materials to the web spider-like process for collecting index data. The web spider parses all linked documents of the course contents. As an example, for course content made with the StreamAuthor package (CyberLink, http://www.cyberlink.com/multi/products/main_7_ENU.html), the content mostly consists of a lot of HTML documents translated from MS PowerPoint formatted files. We defined and implemented the PptHtmlParser class, derived from the HtmlParser interface, to parse the materials created by the StreamAuthor package. Besides, we adopted the JAXB mechanism to transform the result generated by the HtmlParser process into XML documents fitting the predefined schema. Because of the unmarshalling mechanism, it is easy to access data elements in the XML documents with a minimum burden of programming (Figure 4).
Figure 3 The data flow of the recommendation system
Figure 4 The schema for JAXB
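The exact structure is the one defined by the schema in Figure 4; purely as an illustration of the idea, an indexed unit could be represented by an instance document along these lines (the element names here are invented for the example, not the system's actual schema):

<unit>
  <creator>teacher name</creator>
  <topic>Unit 2: Is this your PENCIL?</topic>
  <keywords>
    <keyword>pencil</keyword>
    <keyword>Conversation</keyword>
  </keywords>
  <body>plain text extracted from the slide HTML</body>
  <url>http://lms.example.edu/courses/english1/unit2</url>
</unit>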
4.2 Searching Services
One of the advantages of e-learning is that it is easy to get huge histories of learning activities, which are saved as log data by the e-learning activities. In the system, we treat each material inquiry as a transaction and the search strings as items. By logging these user actions, including the query strings and the links users clicked for advanced study, the data can be used for association rule mining and collaborative filtration. Besides providing content searching by strings, we used the idea of association rules to train the recommendation system to suggest interesting topics associated with the current topic.
4.3 Recommendation with Association Rule and Collaborative Filtration
In the system implementation, the association rule is mainly adopted to find the relations between the keywords learners used for searching the content, and collaborative filtration is applied to automatically filter the correct keywords of each course. Such mechanisms greatly benefit learners in finding the materials they are interested in quickly and correctly. We believe that as the amount of material grows, the content navigation recommendation that the system provides to learners will perform better and better.
5 RESULTS
As with a general search engine, learners can search contents by using multiple keywords concurrently. The system also allows learners to query by content creator, topic, content body and keywords to narrow the scope of materials. As shown in Figure 5, a learner wants to search for contents containing the word "pencil". The system then filters and displays the links to contents containing the word "pencil" from the LDAP server, with related keywords at the top. Learners can click these content links for study according to what they want, or follow other entries of related keywords suggested by the system to learn more about associated materials. The system main page shows a simplified table of contents with multiple levels of embedded categories. Users start from a first-level keyword search, and the related material is tabulated just like in a regular search engine. Below each topic, the collaboratively filtered keywords are presented. Users can follow a link to reach associated topics (course material). In this way,
users can recursively trace down the topic tree with virtually infinite search levels. All results are presented in XML form and comply with the SCORM requirement (SCORM, 2004, http://www.adlnet.gov/). In Figure 5, we demonstrate a search result for the keyword "pencil". Users first type in the keyword and the system responds with "Unit 2: Is this your PENCIL?", a unit whose title contains the target keyword. Below that, two keywords, "pencil" and "Conversation", defined by the teacher of Unit 2 or filtered by collaborative filtration, are also cited. Users can click on the hyperlinks and read the course content or explore the recommended topics. Meanwhile, the system has found related topics generated from the user log. We can see that in our system previous users were interested in "pen" and the teacher's name (of course, learners can trace down these links if they want to).
Figure 5 The results of searching for the word "pencil"
5.1 Retrieval of Documents
As described in Section 4.1, when teachers upload their learning materials to the LMS systems, the pathname of the uploaded material is passed to the web spider-like process for collecting index data, which parses all linked documents of the material; materials created by the StreamAuthor package (mostly HTML documents translated from MS PowerPoint formatted files) are parsed with the PptHtmlParser class derived from the HtmlParser interface, and the result is transformed by the JAXB mechanism into XML documents fitting the defined schema, whose data elements are then easy to access thanks to the unmarshalling mechanism. Currently, there are 28 URLs of courses registered in the recommendation system, containing about 574 course units in total. According to the usage logs, users perform their content searches with 2~3 keywords (2.39 on average), and each query request takes about 0.65 seconds. To assess the association efficiency, we observed the usage log over a one-hour period and found that 317 out of 517 queries followed system recommendation links, that is, approximately 60 percent of the queries followed the system recommendation. This fact also implies a high "hit-rate" of the associations, which we intend to examine in a future survey.
6
CONCLUSION AND FUTURE WORK
From the educational perspective, teachers can use this system in their educational process to help them collect, process, digest, and analyze information more effectively, while learners gain a complete view of the subject matter as they browse the course contents. In this paper, we designed a material recommendation system that aims to improve learners' performance based on the learning activities of previous learners. In the implementation, we integrated LDAP and JAXB to reduce both the effort of developing the search engine and the complexity of content parsing. In the present e-learning situation, it is difficult to integrate all e-learning platforms from the various vendors. However, as digital content grows explosively, a resource-sharing mechanism should not be limited to a material-inquiry service across diverse LMS platforms. We therefore proposed an integrated learning-activity-based mechanism to assist users with automatic material recommendation and established a prototype system. We are currently collecting a large number of user learning logs in order to evaluate the effectiveness of the material recommendation system, and we believe such a deployment will help achieve better learning performance and higher learner satisfaction. An educational evaluation is scheduled for the forthcoming year, after a threshold amount of user experience has been accumulated. We plan to investigate the usability and instructional value of the LMS, including: the presentation of material categories and trouble-free search for materials; the convenience of data retrieval; the meaningfulness and usefulness of resources;
the value of the system's assistance in discovering new knowledge; the extent to which it meets instructional needs; the appraisal of the overall conceptual presentation of large amounts of information; and the level of acceptance and comprehensive understanding of the learning materials. At the same time, a focus group will be interviewed to gather more in-depth feedback. The proposed LMS supports systematic learning as well as constructive learning and can effectively guide users through systematic browsing and inquiry. With this function, it works more as a dynamic research tool than as static learning material. The LMS also sustains constructive learning: although the functionality of a topic map is formulaic and systematic, it is also suitable for task-based learning. From the constructivist point of view, learners need resources from multiple sources for independent research, and the mechanism can accommodate the exploration of various learning styles, tendencies of interest, and professional abilities. More importantly, this guidance is provided not by teachers working in classrooms, but by an autonomous system supported by a professional team with a wide array of resources. It turns learning into an information-guided dynamic. The material therefore helps users "discover" new knowledge by presenting both explicit and implicit knowledge, so that they are able to see ideas and concepts that would otherwise remain unexpected. This process matches the basic principles of constructivist learning. From the usability perspective, the LMS can carry out autonomous processing and presentation, providing teachers and learners with an autonomous abstract environment. Even when facing substantial documents, computers can replace human labor in efficiently processing the tedious algorithmic work while maintaining high-level humanistic and professional analysis. Teachers can apply it on their teaching websites and let the system compile thematic materials for them, saving the time spent copying and pasting, coding, and rewriting. The system is customizable and interactive: the interface interacts with users, opening up layers of information with every selection and inquiry. Hence, the route taken by every user is different, and the system returns different results for every choice, providing proper support for customized learning. Best of all, it is easy to operate. Most search engines, websites, and databases are designed to carry documents with different formats, content areas, and inquiry methods; users unfamiliar with each system can get lost in it, and every inquiry can take much energy and time. In contrast, this system is easy to use, and it is simple for users to grasp the key terms generated from the knowledge content. Users do not need to spend much time or energy to get hold of the main theme of the massive resources. Even learners without the prerequisite knowledge or sufficient subject understanding to reach into the depth of the content can still get started quickly, beginning from a search term.
ACKNOWLEDGEMENT
The authors are grateful to the anonymous reviewers of this paper for their insightful comments. This research is funded by National Science Council Grant NSC962221-E-127-005 in Taiwan, Republic of China.
REFERENCES
Agrawal, R. and Srikant, R. (1994), "Fast algorithms for mining association rules", Proceedings of the 20th VLDB Conference, Santiago. Agrawal, R., Imielinski, T. and Swami, A. (1993), "Database mining: a performance perspective (special issue on learning and discovery in knowledge-based databases)", IEEE Transactions on Knowledge and Data Engineering, Vol. 5 No. 6, pp. 914-25. Anderson, J.R. (1980), Cognitive Psychology and Its Implications, W. H. Freeman and Company, San Francisco.
Apache Software Foundation, Jakarta Project-HttpClient, available at: http://jakarta.apache.org/commons/httpclient/ Ausubel, D.P. (1968), Educational Psychology: A Cognitive View, Holt, Rinehart & Winston, New York. Brin, S., Motwani, R., Ullman, J.D. and Tsur, S. (1997), "Dynamic itemset counting and implication rules for market basket data", ACM SIGMOD Conference on Management of Data, pp. 265-76. CyberLink StreamAuthor, available at: www.cyberlink.com/multi/products/main_7_ENU.html Derrickoswald and Somik, HTML Parser, available at: http://htmlparser.sourceforge.net/ Han, J., Pei, J. and Yin, Y. (2000), "Mining frequent patterns without candidate generation", Proceedings of the ACM SIGMOD Conference on Management of Data. Klemke, R. (2000), "Context framework – an open approach to enhance organizational memory systems with context modeling techniques", Proceedings of the 3rd International Conference on Practical Aspects of Knowledge Management (PAKM2000), Basel, Switzerland, pp. 30-1. Li, S.-T. and Lin, C.-H. (2005), "On the distributed management of SCORM-compliant course contents", Advanced Information Networking and Applications, AINA 2005, 19th International Conference, 28-30 March, Vol. 1, pp. 221-6. Masiello, I., Ramberg, R. and Lonka, K. (2005), "Attitudes to the application of a web-based learning system in a microbiology course", Computers & Education, Vol. 45, pp. 171-85. Mladenic, D. and Grobelnik, M. (2003), "Feature selection on hierarchy of web documents", Decision Support Systems, Vol. 35, pp. 45-87. Nonaka, I. and Takeuchi, H. (1995), The Knowledge-Creating Company – How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, New York, NY. Ort, E. and Mehta, B. (2003), Java Architecture for XML Binding (JAXB), available at: http://java.sun.com/developer/technicalArticles/WebServices/jaxb Park, J.S., Chen, M.-S. and Yu, P.S. (1995), "An effective hash-based algorithm for mining association rules", Proceedings of ACM SIGMOD, pp. 175-85. SCORM 2004 3rd ed., available at: www.adlnet.gov/ Shih, B.-J., Shih, J.-L. and Chen, R.-L. (2007), "Organizing learning materials through hierarchical topic maps: an illustration through Chinese herb medication", Journal of Computer Assisted Learning, Vol. 23 No. 6, pp. 477-90. Smith, K.A. and Ng, A. (2003), "Web page clustering using a self-organizing map of user navigation patterns", Decision Support Systems, Vol. 35, pp. 245-56. Smolnik, S. and Erdmann, I. (2003), "Visual navigation of distributed knowledge structures in groupware-based organizational memories", Business Process Management Journal, Vol. 9 No. 3, pp. 261-80.
Interactive Technology and Smart Education (2007) 208–217 © Emerald Group Publishing Limited
Educational presentation systems: a workflow-oriented survey and technical discussion
Georg Turban Technische Universität Darmstadt, Darmstadt, Germany Email:
[email protected]
Abstract
Purpose – Within the last few years, a number of presentation systems have been developed to assist higher education. This article aims to provide an overview of available systems and to highlight differences in their individual intentions and technical approaches.
Design/methodology/approach – The article consists of a comprehensive system and literature review and provides a taxonomy. Well-known systems are categorized and discussed, including their individual approaches.
Findings – The advantages and disadvantages of different approaches are presented. The discussion also provides readers with information relevant for rating systems according to their personal needs.
Research limitations/implications – The categorization of presentation systems can be extended and applied to the categorization of audience response systems.
Practical implications – A source of information that assists users in choosing an appropriate existing presentation system and developers in designing new ones.
Originality/value – This article presents a workflow-oriented taxonomy for educational presentation systems, which is used to analyze several systems. In addition, the different underlying conceptual and technical approaches of the systems are discussed. The provided information is useful for both users and developers of such systems.
Keywords: Presentations, Education, Teaching aids
Paper type: Research paper
1
INTRODUCTION
Within the last few years, an increasing number of educational presentation systems have been developed by different groups. Thus, selecting the appropriate product, or avoiding reinventing the wheel, is becoming more difficult. The website e-teaching.org (Hesse, 2007) currently lists and describes over 100 presentation systems or systems that are generally useful for higher education.
Educational presentations, including their preparation and postprocessing, can be described by a simplified workflow. A workflow-oriented survey of presentation systems is an appropriate way to provide users with an overview of the different systems. The workflow supports the discussion of different systems, provides a common vocabulary for communication between different stakeholders such as users and developers, and eases the classification and understanding of presentation systems for both.
The workflow, the systems and the table describing them are used for a discussion that involves different systems and stages. Different aspects are extracted and discussed extensively across systems and phases. These aspects illustrate the different approaches, including technical decisions, underlying solutions and details. The discussion benefits users who want to get an overview of systems and of the workflow phases, so that they can better evaluate the different approaches and select an appropriate solution; it also presents options that can influence developers' system design. For both parties, the workflow-oriented presentation of the content – in contrast to other articles that typically focus on systems rather than workflow phases – highlights the complexity of the different systems with respect to the different phases. This article provides the following contributions:
1. a more exhaustive literature and system review of educational presentation systems;
2. a workflow-oriented description of the phases before, during and after a presentation; and
3. aspects and major criteria for the appropriate selection of an existing system or development of a new one.
The remainder of the article is organized as follows. In section 2, the workflow, including its different phases, is described in general terms in order to be comprehensible for all readers. This introduction is illustrated by real-world examples. Section 3 presents the different systems using the general workflow phases and ends with an overview of presentation systems in tabular form. The discussion in section 4 focuses on all phases and selected aspects of the presented systems. The article concludes with section 5, which also covers future trends regarding the development and usage of presentation systems.
2
WORKFLOW
Publications such as Aldrich (2007), Belenkaia et al. (2004) and Abowd (1999) contain workflow-oriented views on educational presentation systems. The given workflows usually rely on the phases before, during and after a lecture. The work presented in Abowd (1999) distinguishes between a preproduction, live capture, postproduction and access phase. Other authors, such as Benbunan-Fich (2002), distinguish systems along the two dimensions of time and place; the latter typically use these dimensions to distinguish the kind of software system (e.g. distance education vs. face-to-face learning support) rather than the different phases inspected here. Other publications, such as Burdet et al. (2007), focus on the different persons involved in an educational scenario and also differ from the intention of the work presented here. We focus on a chronological analysis (please refer to Figure 1) and on the presentation of systems and components that cover the same-place/same-time category of the time/place categorization mentioned above. Examples of processes during the phases shown in Figure 1 are presented in the following subsection and used for the analysis of the presentation systems.
2.1
Before, During and After the Presentation
The phases before the presentation cover the authoring of content using a system and the kinds of prepared content that can be integrated. Apart from compatibility with widespread presentation formats, important issues such as the quality of potential conversions need to be considered in these phases. In this context, converted files often fail to preserve animations, videos or any other dynamic content of a presentation file. These phases are important since they might restrict the number of formats a lecturer is able to use during the presentation. Within the workflow chain, they influence the complexity of systems and the technical details developers have to deal with. Considering all phases improves the understanding of the advantages and disadvantages of available systems, including their individual approaches. During the presentation, augmenting the content using freeform ink annotation plays an important role (Anderson et al., 2005b). It is also important to collect and record most events (e.g. slide transitions and movements of the stylus), including temporal information, so that lecture recordings create an impression during replay that is close to reality. Later phases within the workflow allow comparing the effort required of students to obtain and replay the presentation files. After the presentation, it is sometimes requested by the lecturer, or simply technically necessary, to modify and edit the recorded content. Some presentation systems produce specific formats (e.g. only readable on specific platforms or using commercial systems) that need to be converted to a more popular format. Another aspect of selecting a specific format is that authors might prefer their work to be published in a format that cannot be directly edited by others. Processes such as delivery and access by students are also covered by these phases of the workflow. The material can be spread digitally using CDs/DVDs or published on the Internet. A new way of publishing lectures is podcasting (Bell, 2007). Later phases concern the accessibility of the presented systems for students and how well the impression of a live lecture can be reconstructed. Systems that concentrate on student-to-student cooperation are therefore not the focus of this work.
Figure 1 Workflow that covers phases before, during and after educational presentations
3
DESCRIPTION OF SYSTEMS
In the following subsections, presentation systems that are typically used in higher education today are inspected. In addition, we cover some older or simpler systems in order to enable a better understanding of the later discussion. This section continues with subsections covering the different systems, described with respect to the phases presented in section 2. Systems that are relevant for the discussion but not presented separately for each phase are grouped in a common subsection near the end. At this stage the description does not compare systems with each other, but provides the reader with typical systems, their functionality and tools, and the background of their underlying approaches. The results are summarized in a table that also contains further systems; the table is used in the later discussion, which compares the systems and their individual approaches and solutions with respect to the workflow phases. The table gives an overview of which workflow parts a system supports and which tools it offers, and therefore clearly states the processing phases covered by each system. Using this table, the discussion in the next section also addresses technical details and handling/usability for the lecturer.
3.1
Windows Journal
Windows Journal covers different phases within the processing chain, and its use in the classroom has been evaluated by publications such as Enriquez (2007), Weitz et al. (2006) and Frolik et al. (2004). Journal is part of the Microsoft Windows XP Tablet PC Edition operating system and includes a preinstalled virtual printer driver. Similar to PowerPoint, Microsoft offers a freely available viewer version that cannot create documents and does not include the virtual printer driver. The virtual printer driver can be used to convert any content that can typically be printed on real paper into the Journal format. This content does not include animations or embedded videos as known, for example, from PowerPoint. Apart from this type of content, the user can also create and use different kinds of backgrounds (e.g. blank, colored or ruled) in Journal. During the lecture, the selected background can be augmented using different pens, markers, text and images. Freeform ink annotations can be converted using the recognition engine of the operating system. Since Journal does not separate the lecturer's and audience's views, lecturers have reported both pros and cons of using it in the lecture hall. After the lecture, the viewer allows viewing the augmented Journal files. Users often also use additional tools to convert the Journal format to, for example, PDF, in order to increase the number of students able to view the files. Publishing the Journal files directly instead allows students who own Journal to modify these files.
3.2
PowerPoint
Among the many presentation systems, Microsoft's PowerPoint is one of the most popular. PowerPoint covers many phases within the workflow, but focuses on the phases of authoring and presentation. Before the lecture, PowerPoint can be used to create slides including multimedia data such as text, images, audio and video. In this phase, it is also possible to assign specific timings to the slides and thereby preauthor the slide transitions. During the presentation, freeform ink annotations can be added on top of the slides. The annotations can be saved together with the original presentation, but do not include temporal information. PowerPoint allows recording an audio stream that is split at every slide transition, with the snippets assigned to the corresponding slides. Assigning a video stream during the presentation is not supported by PowerPoint. Although it is possible to combine PowerPoint with Microsoft's media encoder, this approach does not really fit educational scenarios: freeform ink annotations are omitted, and slides have to be assigned to the recording manually afterwards. Obviously this functionality is better suited to producing tutorials than to assisting teaching during the lecture.
PowerPoint files can be viewed using the freely available PowerPoint viewer. More and more presentation systems of other vendors, such as OpenOffice, provide an import function for the PowerPoint format. The recorded PowerPoint presentations can be replayed on Windows-based systems using Internet Explorer.
3.3
Classroom Presenter
Before the lecture, the lecturer has to convert his PowerPoint files to so-called slide decks (Simon et al., 2003). Earlier versions of Classroom Presenter provided a standalone converter application; newer versions provide an add-in for PowerPoint that can be used to convert presentation files directly from PowerPoint. The add-in also provides special objects (shown in Figure 2) that can be included in the presentation, such as instructor notes that are hidden from the audience during the presentation. Using Classroom Presenter, the lecturer can add ink annotations on top of slides or blank backgrounds and ask students to submit their notes and slides, which he can reintegrate into his own slide deck. Apart from the final slide decks or exported image sets, Classroom Presenter itself does not support a special kind of playback. In combination with Conference XP (Anderson et al., 2008), solutions were created that record the slides in addition to a video of the lecturer and can play back the captured content as shown in Figure 2.
3.4
E-chalk
Part of E-chalk's philosophy is that teachers are provided with an easy-to-use and familiar environment (Knipping, 2005). E-chalk is based on the so-called chalkboard metaphor and does not require preauthoring or converting existing content, although images or applications (e.g. applets) can be prepared for integration during the lecture. Using different pens, the lecturer develops content on the chalkboard during the lecture. The content can be scrolled in a similar way to transparencies on overhead projectors. The most important features can be controlled using the stylus instead of the keyboard. Additional content can be integrated using interfaces to computer algebra systems such as Maple, handwriting recognition, and CGI scripts that deliver text or pictures. The voice and, optionally, the video of the lecturer can be recorded in addition to all events on the chalkboard. The development of the board content can be viewed live by remote learners or replayed asynchronously in combination with the audio or video of the lecturer. Playback is possible using common web browsers and the E-chalk player applet shown in Figure 3. E-chalk can store lectures in a database or a learning management system.
3.5
Lecturnity
Lecturnity consists of different applications that cover different workflow phases (Figure 4). The Lecturnity assistant supports the lecturer before and during the lecture. The assistant also comes with an add-in for PowerPoint that allows starting a Lecturnity-based recording directly from an opened PowerPoint presentation. It is important to note that this involves a preprocessing step that transparently converts the PowerPoint files to an internal format, which cannot reproduce videos or animations available in the original presentation. Furthermore, the assistant enables the user to configure, for example, the screen recording that is performed during the presentation. The assistant contains basic annotation tools such as pens and markers that allow the user to augment the presentation while recording the slides together with the audio or video of the lecturer. For after-lecture support, the editor, publisher and player were developed. The editor allows cutting, copying and merging parts of the recording. In addition, different tracks, such as the audio track, can be extracted and delivered separately. Audio-only versions of lectures have become popular, especially since iPods and MP3 players are frequently used by students (Kennedy et al., 2006).
Figure 2 Classroom Presenter's PowerPoint add-in (left), Classroom Presenter (center) and Conference XP (right)
Figure 3 E-chalk (left), Exymen editor (center) and E-chalk applet for lecture replay (right)
Figure 4 Lecturnity assistant (left), editor (center) and player (right)
The publisher mainly creates two different representations of a lecture recording, namely an offline and an online representation for distribution on CDs and via the Internet, respectively. Additionally, the quality of each representation can be selected, and the content can be classified and combined into learning modules. For viewing the recordings either on- or offline, different players can be used. Navigation is possible using, for example, full-text search and thumbnails (Hürst et al., 2004).
3.6
Other Systems
Camtasia: Camtasia is screen recording software that is frequently used for lecture recording and the creation of tutorials; for samples please refer to Robinson (2007) and Anderson (2006). The software can record everything shown on the screen, including demonstrations, arbitrary applications and dynamic content.
Lectern II: Lectern II and Microsoft's Windows Journal both rely on a virtual printer driver that can be used to convert every print job into a document in their own format. Lectern II provides a modified enhanced-metafile printer. The desired preprocessing step can be performed easily by inexperienced users and covers all printable content, as it would appear on different sheets of printed paper. Conceptually, this approach ignores the specifics of the original file format. Just like printed paper, the converted files cannot represent many features of the original content: animations, sounds or videos are lost.
Moowinx: Moowinx (formerly named ActiveSlide) has been included in the survey because it has interesting features that differ from common presentation systems. It contains basic features of PowerPoint, such as the possibility to add objects to a slide. While PowerPoint allows animating objects along predefined paths, Moowinx allows programming an environment using functions that can, for example, simulate the hyperbolic flight of objects on earth. In addition, any object can be moved during the presentation. A sample of the latter functionality is an ordering problem that can be solved interactively rather than just displayed using different snapshots or a specific animation sequence.
TeleTeachingTool: The system relies on VNC and is a platform-independent solution (Ziewer, 2004). Using VNC as a basis allows presenting and augmenting the desktop (and therefore any content) of a remote system on a system present in the lecture hall. The content can be distributed live or replayed using a provided player. The system allows reconstructing slide transitions by analyzing the recorded content after the lecture.
3.7
Overview of systems
Table 1
Comparison of educational presentation systems and systems used for educational presentations
Lecture time/ system Camtasia (TechSmith, 2007) Classroom, 2000 (Abowd, 1999) Classroom Presenter (Anderson et al., 2005) e-chalk (Knipping, 2005) Gueneva (Burdet et al., 2007) Lectern II (Joukov et al., 2003) Lecturnity (Hürst et al., 2004) moowinx (Pharus53 2007) Multimedia lecture board (Vogel, 2003)
Before
During
Authoring
Preparation
Presentation
–
–
–
–
Camtasia Studio Camtasia Studio Screen Recording Screen Recording and Presentation and Presentation Class Pad Class Pad
–/PowerPoint
Classroom Presenters PowerPointimporter –
Post processing
Access
common video editor
common streaming server/common video player Class Pad
–
Classroom Presenter (and optionally Conference XP) e-chalk
–
–/(Conference XPs Webviewer video replay tool)
Exymen
–
UI for uploading
Lectern II
Lectern Server Extensions
Real player
Assistant
UI for – scheduling recordings Lecterns Lectern II virtual/EMF printer driver Assistant Assistant
e-chalk web-based player (applet) common video-players
Assistant
Editor/Publisher
moowinx
–
moowinx
moowinx
moowinx
Lecturnity Player/web-browser Adobe flash player
–
mlb document creator
PowerPoint (DeAntonio, 2006) Projector Box (Denoue et al., 2005)
PowerPoint
–
Multimedia lecture Multimedia lecture – board board/Interactive Media on Demand System PowerPoint PowerPoint –/PowerPoints HTML-export
–
–
ProjectorBox hardware device (RGB-grabber)
ProjectorBox capture software component
ProjectorBox server software component
T-Cube (Ma et al., 2003) TeraVision (Leigh et al., 2002) TeleTeachingTool (Ziewer et al., 2002) Windows Journal (Microsoft, 2007)
–
–
T-Cube
Encoding Server
–
–
any video source
–
running VNC-Server
TTT-Recorder
VBox hardware (server) TTT-Recorder
Post Processor/ Streaming Server – VBox hardware (client) TTT-Editor TTT-Player
– / any “printable” content
virtual printer driver
Windows Journal
Windows Journal
– –
–
Classroom Presenter
Augmentation
After
e-chalk
Multimedia lecture board/Internet explorer PowerPoint (Viewer)/ web-browser Web UI/Corporate Memory Media Player (Hilbert et al., 2006) Real Player
Windows Journal Windows Journal (Viewer)
4
DISCUSSION
The overview of systems according to their support for the various workflow phases shown in Table 1 is multifarious, and the systems vary with respect to their coverage. Regarding the different phases, it should be noted that even systems supporting a similar number of phases can vary heavily in their intention, approach, underlying technical solution and effort. In addition, the analysis of the different systems has shown that design decisions for specific workflow phases heavily influence other workflow phases.
Decisions within the first phase, especially, are fundamental design decisions that affect the whole system design. Systems that aim to support as many presentation formats as possible, as well as videos and applications, obviously have different problems to solve than systems that concentrate on a single format, potentially restricted to still images. Differences regarding the processing of static vs. dynamic content are therefore highlighted in the following. Systems that require little effort in earlier stages, but are still able to support many input formats and applications, often rely on screen recording approaches. For the latter systems, the challenges are typically shifted to phases in the middle or near the end of the workflow, since they have to deal with high frame rates, performance issues and huge amounts of data. The following sections focus on this aspect and on the intention of such systems.
4.1
Before
The requirements and complexity of systems and their implementations vary depending on the decision whether to support the presentation of static or dynamic content. Presenting and augmenting content is often simplified to the problem of processing static images. Many systems, including Classroom Presenter (Anderson et al., 2005), Multimedia Lecture Board (Vogel, 2003) and Lectern II (Joukov et al., 2003), chose this approach and therefore require converting presentation files to image sets. Annotating dynamic content is similar to annotating videos (Bulterman, 2004), but requires additional, expensive processes (e.g. screen capturing and real-time rendering to various destinations such as the displays of lecturers and students). Systems that accept any content do not require authoring the content using a specific application. Instead (please refer to Figure 5), the lecturer is able to use any software and often even any operating system before the lecture. During the lecture, a network connection acquires the remote content, which is then processed by the additional presentation system on a computer located in the lecture hall. The distributed presentation scenario in Figure 5 corresponds to the work of the University of Cambridge (Li, 2000) and the TeleTeachingTool (Ziewer, 2002).
Figure 5 Distributed presentation scenario
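To make the cost of the screen recording approach concrete, the following minimal Java sketch captures the primary screen at a fixed rate using java.awt.Robot. It is a generic illustration of the technique, not code from any of the surveyed systems; the frame rate, duration and in-memory storage are placeholder choices, and real systems compress frames or store only changed regions to cope with the resulting data volume.

import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

// Minimal screen-recording loop; even a few seconds of uncompressed
// full-screen frames already illustrates the data-volume problem.
public class ScreenRecorderSketch {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Dimension size = Toolkit.getDefaultToolkit().getScreenSize();
        Rectangle area = new Rectangle(size);
        List<BufferedImage> frames = new ArrayList<>();

        int fps = 5;                         // placeholder frame rate
        long interval = 1000L / fps;
        for (int i = 0; i < fps * 10; i++) { // record roughly 10 seconds
            frames.add(robot.createScreenCapture(area));
            Thread.sleep(interval);
        }
        System.out.println("Captured " + frames.size() + " frames");
    }
}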
4.2
During
Systems can be grouped according to their complexity (resulting from support of static vs. dynamic content during the presentation) into two groups. Systems that support only static content typically require converting presentations to images. When working on images, the presentation and augmentation during the lecture involve fewer technical challenges. The main functionality has been implemented by hundreds of painting applications, and additional functionality, such as separating the views of lecturer and students, storing the modified slide images, or distribution, can be implemented straightforwardly. In such static scenarios, the processing of events such as slide transitions is also easier than in dynamic scenarios. Systems that process arbitrary, dynamic content initially cannot rely on any events and often implement very specific solutions for each application and the corresponding events they support. Systems that follow the screen recording approach can receive events from underlying presentation systems by implementing add-ins (Turban et al., 2007). Add-ins for PowerPoint have been implemented by systems such as Camtasia (TechSmith, 2007) or virtPresenter (Mertens et al., 2004). For the assignment of annotations to slides and the removal of annotations during slide transitions, there is a major difference between systems that support static and those that support dynamic content. Systems such as the TeleTeachingTool use key bindings (e.g. for the page up or down key) that are usually associated with the underlying actions (e.g. seeking to the next or previous slide) within a presentation system. Presenter contains a more proactive approach: an extension for Presenter is capable of loading and controlling PowerPoint presentations and can therefore handle events more easily while avoiding modification of PowerPoint (Turban et al., 2006). Moreover, a uniform way to handle any slide-based presentation has been presented in (Turban et al., 2007b); the corresponding reference implementation, called the Universal Presentation Controller, and the specific handling of PowerPoint are also discussed in the latter article. In contrast to the differences between both groups regarding the processing of content, features regarding the augmentation of content are usually implemented very similarly. Annotations are added on top of the content and remain visible until the user or dedicated events such as slide transitions force them to disappear. For annotation engines, there is obviously no difference in how easily those events are obtained. In both scenarios, annotations are typically not adapted (e.g. they are not automatically translated or scaled) to correspond to the underlying content. Associating annotations with the underlying content is rather difficult and may involve problems such as object recognition. A restricted solution is presented by Avaya (Kashi et al., 2003), which associates annotations with HTML web pages using the underlying DOM tree. Current presentation systems, however, especially those supporting dynamic content, do not focus on such associations. Therefore, annotations can be rendered independently of the type of the underlying content and can be implemented efficiently using layers that are well known from 2D graphics applications. Programming languages typically provide overlays similar to glass panes, and their processing is often hardware-accelerated.
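The glass-pane idea can be sketched in Java/Swing as follows. This is an illustration of the general overlay technique, not code from any surveyed system; the class name and stroke handling are illustrative, and a real annotation engine would also forward input events to the underlying components and manage multiple strokes, colors and erasing.

import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Point;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.util.ArrayList;
import java.util.List;
import javax.swing.JComponent;
import javax.swing.JFrame;

// Transparent glass pane that records mouse drags as an ink stroke
// and renders it on top of whatever content lies beneath.
public class InkGlassPane extends JComponent {
    private final List<Point> stroke = new ArrayList<>();

    public InkGlassPane() {
        setOpaque(false); // keep the underlying content visible
        MouseAdapter ink = new MouseAdapter() {
            @Override public void mouseDragged(MouseEvent e) {
                stroke.add(e.getPoint());
                repaint();
            }
        };
        addMouseListener(ink);
        addMouseMotionListener(ink);
    }

    @Override protected void paintComponent(Graphics g) {
        Graphics2D g2 = (Graphics2D) g;
        g2.setColor(Color.RED);
        g2.setStroke(new BasicStroke(3f));
        for (int i = 1; i < stroke.size(); i++) {
            Point a = stroke.get(i - 1), b = stroke.get(i);
            g2.drawLine(a.x, a.y, b.x, b.y);
        }
    }

    public static void install(JFrame frame) {
        InkGlassPane pane = new InkGlassPane();
        frame.setGlassPane(pane); // overlay above all frame content
        pane.setVisible(true);
    }
}

Because the overlay draws independently of what is rendered below it, the same annotation layer works for static slide images and for dynamic content alike, which is exactly the point made above.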
4.3
After
Systems such as TeraVision (Leigh et al., 2002), T-Cube (Ma et al., 2003) and ProjectorBox (Denoue et al., 2005) have been developed with recording purposes or the transmission of video signals in mind. Approaches that avoid installing software on the local system typically require processing the content after the lecture: they post-process images so that slide transitions can be detected or text can be extracted from slide images. There are even approaches that avoid any modification of the remote system but are still able to provide benefits while the presenter continues to operate his system locally. Systems that focus on such post-processing steps are described, for example, in (Erol et al., 2003), (Sumec, 2006) and (Repp et al., 2007). Systems that do not follow the screen recording approach and require converting the presentation also have the disadvantage that the lecturer has to keep track of the different versions of the original and converted files manually. Approaches that focus on supporting native formats avoid the described problems. Moreover, they support the reintegration of resources as described in (Turban et al., 2005) and ease their usage over different workflow iterations.
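The post-processing step of detecting slide transitions in recorded frames can be illustrated with a minimal sketch: consecutive frames are compared pixel by pixel, and a transition is reported when the fraction of changed pixels exceeds a threshold. This is a generic illustration of the idea, not the algorithm of any particular system discussed above; the threshold and sampling step are placeholder values.

import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

// Detects likely slide transitions by measuring how many pixels change
// between consecutive frames of a screen recording.
public class SlideTransitionDetector {
    private static final double CHANGE_THRESHOLD = 0.30; // placeholder: 30% changed pixels
    private static final int STEP = 4;                   // sample every 4th pixel for speed

    public static List<Integer> detect(List<BufferedImage> frames) {
        List<Integer> transitions = new ArrayList<>();
        for (int i = 1; i < frames.size(); i++) {
            if (changedFraction(frames.get(i - 1), frames.get(i)) > CHANGE_THRESHOLD) {
                transitions.add(i); // frame index where a new slide likely appears
            }
        }
        return transitions;
    }

    private static double changedFraction(BufferedImage a, BufferedImage b) {
        int w = Math.min(a.getWidth(), b.getWidth());
        int h = Math.min(a.getHeight(), b.getHeight());
        long changed = 0, total = 0;
        for (int y = 0; y < h; y += STEP) {
            for (int x = 0; x < w; x += STEP) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) changed++;
                total++;
            }
        }
        return total == 0 ? 0.0 : (double) changed / total;
    }
}

Such detected indices can then serve as navigational anchors for the recording, which is exactly the kind of information that systems supporting native formats obtain for free during the lecture rather than reconstructing it afterwards.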
5
CONCLUSION AND FUTURE TRENDS
We presented an analysis of educational presentation systems according to a typical workflow that also covers the phases before and after their usage in the lecture. In addition, the development of educational presentation systems was considered using this workflow. The key aspects of such systems and their technical details were discussed continuously along the workflow. Finally, we presented solutions and experiences for the identified aspects that should be considered when developing flexible educational presentation systems. The problems addressed within the last few years have often been very technical, and today's systems provide solutions to them. The next set of problems relates to the interpretation and analysis of content in order to semantically understand lecture recordings or automatically created lecture transcripts. In particular, approaches that reconstruct information during late phases of the workflow, instead of retaining that information from earlier phases, have become outdated. The combination of screen recording approaches with approaches that deliver meaningful navigational indices, as described in (Ziewer, 2004), exploits the advantages of two existing but competing approaches. Through this combination, the complexity of previously developed solutions is shifted from late phases to earlier phases, while increasing the quality of the content and reducing post-processing costs. The remaining phases face new challenges (Martin et al., 2005), such as analyzing handwriting, audio and video for applications and automatically generating lecture transcripts (Wald, 2008). Much work has been done in the field of speech recognition (Hürst, 2003). The recognition of freeform ink annotations has moved from the challenge of text recognition to object recognition within drawings and the interpretation of the relationships between different objects. Regarding the post-processing phases, the creation of recordings that are close to the live experience has been analyzed intensively. Nowadays, the creation of explorative and therefore much more interactive recordings, which can still be modified and extended after their distribution, has become important. Solutions for supporting cooperation and augmentation afterwards are still being extended and are replacing non-interactive replay of lectures. Altogether, with the complexity moving from the central phases to the early and late phases, the new possibilities for the late phases in particular will play an important role in research within the next years.
REFERENCES Abowd, G.D. (1999), “Classroom 2000: an experiment with the instrumentation of a living educational environment”, IBM Systems Journal, Vol. 38 No 4, pp. 508-530. Aldrich, D. (2007), Video Screencasting: A Recipe for Automation in the Educational Environment, Whitepaper, University of Washington Classroom Support Services, USA. Anderson, R., Anderson, R., McDowell, L. and Simon, B. (2005), “Use of classroom presenter in engineering courses”, Proceedings of the 35th Annual Conference on Frontiers in Education, Indianapolis, IN, USA, pp. Tzh: 13-18. Anderson, R., Anderson, R.E., Hoyer, C., Prince, C., Su, J., Videon, F. and Wolfman, S.A. (2005), “A study of diagrammatic ink in lecture”, Computers & Graphics, Vol. 29 No. 4, pp. 480-9. Anderson, R., Chen, J., Jie, L., Li, J., Li, Ning., Linnell, N., Razmov, V. and Videon, F. (2007) “Supporting an Interactive Classroom Environment in a Cross-Cultural Course”, Proceedings of the 37th Annual Conference on Frontiers in Education, Milwaukee, WI, USA, pp. F3D: 1-6. Anderson, S.T. (2006), “You can produce a video tutorial in under an hour – even your first!”, Proceedings of the 2006 ASCUE Conference, Myrtle Beach, SC, USA, pp. 10-20. Belenkaia, L., Mohamed, K.A. and Ottmann, T. (2004), “Creation, presentation, capture, and replay of freehand writings in e-lecture scenarios”, Proceedings of the AACE World Conference on Educational Multimedia, Hypermedia & Telecommunications, Lugano, Switzerland, pp. 791-6. Bell, T., Cockburn, A., Wingkvist, A. and Green, R. (2007), “Podcasts as a supplement in tertiary education: an experiment with two computer science courses”, Proceedings of the Mobile Learning Technologies and Applications Conference 2007, Auckland, New Zealand, pp. 70-71. Benbunan-Fich, R. (2002), “Improving education and training with IT”, Journal, Communications of the ACM, Vol. 45 No. 6, pp. 94-9.
Bulterman, D.C.A. (2004), “Creating peer-level video annotations for web-based multimedia”, Proceedings of the 7th Eurographics Workshop on Multimedia, Nanjing, China. Burdet, B., Bontron, C. and Burgi, P.-Y. (2007), “Lecture capture: what can be automated?”, Educause Quarterly, Vol. 30 No. 2, pp. 40-8. DeAntonio, M., Sandoval, L.M. and Arceo, R. (2006), “Work in progress: a quantitative study of the effectiveness of powerpoint in the classroom”, Proceedings of the 36th Annual Conference on Frontiers in Education, San Diego, CA, USA, pp. 22-3. Denoue, L., Hilbert, D., Adcock, J., Billsus, D. and Cooper, M. (2005), “ProjectorBox: seamless presentation capture for classrooms”, Proceedings of E-Learn 2005, Vancouver, Canada, pp.1986-1991. Enriquez, A. (2007), “Developing an interactive learning network using tablet PCs in sophomore-level engineering courses”, Proceedings of the 2007 American Society for Engineering Education Annual Conference, Honolulu, HI, USA. Erol, B., Hull, J.J. and Lee, D.-S. (2003), “Linking multimedia presentations with their symbolic source documents: algorithm and applications”, Proceedings of the ACM Multimedia conference 2003, Berkeley, CA, USA, pp. 498-507. Frolik, J. and Zurn, J.B. (2004), “Evaluation of tablet PCs for engineering content development and instruction”, Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition, Salt Lake City, UT, USA, pp. 101-105. Hesse, F.W. (2007), “e-teaching.org”, available at: www. e-teaching.org/technik/produkte/ (accessed 31 December 2007). Hilbert, D., Billsus, D. and Denoue, L. (2006), “Seamless capture and discovery for corporate memory”, Proceedings of the 15th International World Wide Web Conference, Edinburgh, UK. Hürst, W. (2003), “A qualitative study towards using large vocabulary automatic speech recognition to index recorded presentations for search and access over the web”, IADIS International Journal on WWW/Internet, Vol. I No. 1, pp. 43-58. Hürst, W., Mueller, R. and Ottmann, T. (2004), “The AOF method for production, use, and management of instructional media”, Proceedings of the International Conference on Computer in Education, Melbourne, Australia. Joukov, N. and Chiueh, T. (2003), “Lectern II: a multimedia lecture capturing and editing system”, Proceedings of the International Conference on Multimedia and Expo, Baltimore, MD, USA, pp. 681-684. Kashi, R. and Ramachandran, S. (2003), “An architecture for ink annotations on web documents”, Proceedings of the 7th International Conference on Document Analysis and Recognition, Edinburgh, Scotland, Vol. 1, pp. 256-260. Kennedy, G., Krause, K.-L., Judd, T., Churchward, A. and Gray, K. (2006), First Year Students’ Experiences with Technology: Are they really Digital Natives? University of Melbourne, Preliminary Report of Findings, September 2006. Knipping, L. (2005), “An Electronic chalkboard for classroom and distance teaching”, PhD thesis, Fachbereich Mathematik und Informatik, Freie Universität Berlin, Germany. Leigh, J., Girado, J., Singh, R., Johnson, A., Park, K. and DeFanti, T.A. (2002), TeraVision: a Platform and
Software Independent Solution for Real Time Display Distribution in Advanced Collaborative Environments, Electronic Visualization Laboratory, University of Illinois at Chicago, USA. Li, S.F., Spiteri, M., Bates, J. and Hopper, A. (2000), “Capturing and indexing computer-based activities with virtual network computing”, Proceedings of the 2000 ACM Symposium on Applied Computing, Como, Italy, pp. 601-603. Ma, M., Schillings, V., Chen, T. and Meinel, C. (2003), “TCube: a multimedia authoring system for eLearning”, Proceedings of E-Learn 2003, Phoenix, AZ, USA, pp. 22892296. Martin, T., Boucher, A. and Ogier, J.-M. (2005), “Multimodal analysis of recorded video for E-learning”, Proceedings of the ACM Multimedia Conference, Singapore, pp. 1043-1044. Mertens, R., Schneider, H., Müller, O. and Vornberger, O. (2004), “Hypermedia navigation concepts for lecture recordings”, Proceedings of E-Learn 2004, Washington, DC, USA, pp. 2480-2487. Microsoft Corporation (2007), “Microsoft product information center”, available at: www.microsoft.com (accessed 31 December 2007). Pharus53 software solutions GmbH (2007), moowinx, available at: www.moowinx.com (accessed 31 December 2007). Repp, S., Linckels, S. and Meinel, C. (2007), “Towards to an automatic semantic annotation for multimedia learning objects”, Proceedings of the 1st ACM Workshop on Educational Multimedia and Multimedia Education in conjunction with ACM Multimedia 2007, Augsburg, Bavaria, Germany, pp. 19-26. Robinson, A., Mittelholz, D. and Kohlruss, T. (2007), “Physics whiteboard tutorials delivered over the internet”, Proceedings of ED-MEDIA 2007, Vancouver, Canada, AACE Press. Schroeter, R., Hunter, J. and Kosovic, D. (2003), “Vannotea – a collaborative video indexing, annotation and discussion system for broadband networks”, Proceedings of Knowledge Markup and Semantic Annotation Workshop, K-CAP 2003, Sanibel, FL, USA. Simon, B., Anderson, R. and Wolfman, S. (2003), “Activating computer architecture with classroom presenter”, Proceedings of the Workshop on Computer Architecture Education, in conjunction with the 30th International Symposium on Computer Architecture, San Diego, CA, USA. Sumec, S. (2006), “Extracting additional information from lecture recordings”, CESNET technical report number 11/2006. TechSmith Corporation (2007), “Camtasia studio screen recording and presentation”, available at: www.techsmith. com (accessed 31 December 2007). Turban, G. and Mühlhäuser, M. (2005), “A category based concept for rapid development of ink-aware systems for computer-assisted education”, Proceedings of the 7th IEEE International Symposium on Multimedia, Irvine, CA, USA, pp. 449-57. Turban, G. and Mühlhäuser, M. (2007), “A framework for educational presentation systems and its application”, Proceedings of the 1st ACM Workshop on Educational Multimedia and Multimedia Education in conjunction with ACM Multimedia 2007, Augsburg, Bavaria, Germany, pp. 115-118. Turban, G. and Mühlhäuser, M. (2006), “An open architecture for face-to-face learning and its benefits”, Proceedings of the
8th IEEE International Symposium on Multimedia, San Diego, CA, USA, pp. 901-906. Turban, G. and Mühlhäuser, M. (2007), “A uniform way to handle any slide-based presentation: the universal presentation controller”, Advances in Multimedia Modeling, 13th International ACM Multimedia Modeling Conference, Singapore, pp. 741-750. Vogel, J. (2003), “Präsentation und Kollaboration in Televeranstaltungen mit dem multimedia lecture board”, Tagungsband der 17. DFN-Arbeitstagung über Kommunikationsnetze, LNI, GI, Düsseldorf, Germany, pp. 411-424. Wald, M. (2008), “Learning through multimedia: automatic speech recognition enhancing accessibility and
interaction”, Journal of Educational Multimedia and Hypermedia, Vol. 17 No. 2, pp.215-233. Weitz, R.R., Wachsmuth, B. and Mirliss, D. (2006), “The tablet PC for faculty: a pilot project”, Educational Technology & Society, Vol. 9 No. 2, pp. 68-83. Ziewer, P. (2004), “Navigational indices and full text search by automated analyses of screen recorded data”, Proceedings of E-Learn 2004, Washington, DC, USA, pp. 3055-3062. Ziewer, P. and Seidl, H. (2002), “Transparent teleteaching”, Proceedings of the Australasian Society for Computers in Learning in Tertiary Education Conference, Auckland, New Zealand, pp.749-758.