Multimedia Networking: Technology, Management and Applications
Syed Mahbubur Rahman
Minnesota State University, Mankato, USA
Idea Group Publishing
Information Science Publishing
Hershey • London • Melbourne • Singapore • Beijing
Acquisition Editor: Mehdi Khosrowpour
Managing Editor: Jan Travers
Development Editor: Michele Rossi
Copy Editor: Maria Boyer
Typesetter: LeAnn Whitcomb
Cover Design: Deb Andre
Printed at: Integrated Book Technology
Published in the United States of America by
Idea Group Publishing
1331 E. Chocolate Avenue
Hershey PA 17033-1117
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.idea-group.com

and in the United Kingdom by
Idea Group Publishing
3 Henrietta Street, Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 3313
Web site: http://www.eurospan.co.uk

Copyright © 2002 by Idea Group Publishing. All rights reserved. No part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Library of Congress Cataloging-in-Publication Data

Multimedia networking : technology, management, and applications / [edited by] Syed Mahbubur Rahman
p. cm.
Includes bibliographical references and index.
ISBN 1-930708-14-9
1. Multimedia systems. 2. Computer networks. I. Rahman, Syed Mahbubur, 1952-
QA76.575 .M8345 2001
006.7--dc21     2001039268
British Cataloguing in Publication Data A Cataloguing in Publication record for this book is available from the British Library.
NEW from Idea Group Publishing
• Data Mining: A Heuristic Approach, Hussein Aly Abbass, Ruhul Amin Sarker and Charles S. Newton/1-930708-25-4
• Managing Information Technology in Small Business: Challenges and Solutions, Stephen Burgess/1-930708-35-1
• Managing Web Usage in the Workplace: A Social, Ethical and Legal Perspective, Murugan Anandarajan and Claire Simmers/1-930708-18-1
• Challenges of Information Technology Education in the 21st Century, Eli Cohen/1-930708-34-3
• Social Responsibility in the Information Age: Issues and Controversies, Gurpreet Dhillon/1-930708-11-4
• Database Integrity: Challenges and Solutions, Jorge H. Doorn and Laura Rivero/1-930708-38-6
• Managing Virtual Web Organizations in the 21st Century: Issues and Challenges, Ulrich Franke/1-930708-24-6
• Managing Business with Electronic Commerce: Issues and Trends, Aryya Gangopadhyay/1-930708-12-2
• Electronic Government: Design, Applications and Management, Åke Grönlund/1-930708-19-X
• Knowledge Media in Health Care: Opportunities and Challenges, Rolf Grutter/1-930708-13-0
• Internet Management Issues: A Global Perspective, John D. Haynes/1-930708-21-1
• Enterprise Resource Planning: Global Opportunities and Challenges, Liaquat Hossain, Jon David Patrick and MA Rashid/1-930708-36-X
• The Design and Management of Effective Distance Learning Programs, Richard Discenza, Caroline Howard, and Karen Schenk/1-930708-20-3
• Multirate Systems: Design and Applications, Gordana Jovanovic-Dolecek/1-930708-30-0
• Managing IT/Community Partnerships in the 21st Century, Jonathan Lazar/1-930708-33-5
• Multimedia Networking: Technology, Management and Applications, Syed Mahbubur Rahman/1-930708-14-9
• Cases on Worldwide E-Commerce: Theory in Action, Mahesh Raisinghani/1-930708-27-0
• Designing Instruction for Technology-Enhanced Learning, Patricia L. Rogers/1-930708-28-9
• Heuristic and Optimization for Knowledge Discovery, Ruhul Amin Sarker, Hussein Aly Abbass and Charles Newton/1-930708-26-2
• Distributed Multimedia Databases: Techniques and Applications, Timothy K. Shih/1-930708-29-7
• Neural Networks in Business: Techniques and Applications, Kate Smith and Jatinder Gupta/1-930708-31-9
• Information Technology and Collective Obligations: Topics and Debate, Robert Skovira/1-930708-37-8
• Managing the Human Side of Information Technology: Challenges and Solutions, Edward Szewczak and Coral Snodgrass/1-930708-32-7
• Cases on Global IT Applications and Management: Successes and Pitfalls, Felix B. Tan/1-930708-16-5
• Enterprise Networking: Multilayer Switching and Applications, Vasilis Theoharakis and Dimitrios Serpanos/1-930708-17-3
• Measuring the Value of Information Technology, Han T.M. van der Zee/1-930708-08-4
• Business to Business Electronic Commerce: Challenges and Solutions, Merrill Warkentin/1-930708-09-2
Excellent additions to your library! Receive the Idea Group Publishing catalog with descriptions of these books by calling, toll free 1/800-345-4332 or visit the IGP Online Bookstore at: http://www.idea-group.com!
Multimedia Networking: Technology, Management and Applications
Table of Contents
Preface ............................................................................................................ vii

Chapter 1 ......................................................................................................... 1
Managing Real-Time Distributed Multimedia Applications
Vana Kalogeraki, Hewlett-Packard Laboratories, USA
Peter Michael Melliar-Smith, UC Santa Barbara, USA
Louise E. Moser, UC Santa Barbara, USA

Chapter 2 ....................................................................................................... 17
Building Internet Multimedia Applications: The Integrated Service Architecture and Media Frameworks
Zhonghua Yang, Nanyang Technological University, Singapore
Robert Gay, Nanyang Technological University, Singapore
Chengzheng Sun, Griffith University, Queensland, Australia
Chee Kheong Siew, Nanyang Technological University, Singapore

Chapter 3 ....................................................................................................... 54
The Design and Performance of a CORBA Audio/Video Streaming Service
Naga Surendran, Washington University-St. Louis, USA
Yamuna Krishamurthy, Washington University-St. Louis, USA
Douglas C. Schmidt, University of California, Irvine, USA

Chapter 4 ..................................................................................................... 102
MPEG-4 Facial Animation and its Application to a Videophone System for the Deaf
Nikolaos Sarris, Aristotle University of Thessaloniki, Greece
Michael G. Strintzis, Aristotle University of Thessaloniki, Greece

Chapter 5 ..................................................................................................... 126
News On Demand
Mark T. Maybury, The MITRE Corporation, USA
Chapter 6 ..................................................................................................... 134
A CSCW with Reduced Bandwidth Requirements Based on a Distributed Processing Discipline Enhanced for Medical Purposes
Iraklis Kamilatos, Informatics and Telematics Institute, Greece
Michael G. Strintzis, Informatics and Telematics Institute, Greece

Chapter 7 ..................................................................................................... 151
A Layered Multimedia Presentation Database for Distance Learning
Timothy K Shih, Tamkang University, Taiwan

Chapter 8 ..................................................................................................... 172
QoS-Aware Digital Video Retrieval Application
Tadeusz Czachorski, Polish Academy of Sciences, Poland
Stanislaw Jedrus, Polish Academy of Sciences, Poland
Maciej Zakrzewicz, Poznan University of Technology, Poland
Janusz Gozdecki, AGH University of Technology, Poland
Piotr Pacyna, AGH University of Technology, Poland
Zdzislaw Papir, AGH University of Technology, Poland

Chapter 9 ..................................................................................................... 186
Network Dimensioning for MPEG-2 Video Communications Using ATM
Bhumip Khasnabish, Verizon Labs, Inc, USA

Chapter 10 ................................................................................................... 222
VBR Traffic Shaping for Streaming of Multimedia Transmission
Ray-I Chang, Academia Sinica, Taiwan
Meng-Chang Chen, Academia Sinica, Taiwan
Ming-Tat Ko, Academia Sinica, Taiwan
Jan-Ming Ho, Academia Sinica, Taiwan

Chapter 11 ................................................................................................... 237
RAC: A Soft-QoS Framework for Supporting Continuous Media Applications
Wonjun Lee, Ewha Womans University, Seoul, Korea
Jaideep Srivastava, University of Minnesota, USA

Chapter 12 ................................................................................................... 255
A Model for Dynamic QoS Negotiation Applied to an MPEG4 Applications
Silvia Giordano, ICA Institute, Switzerland
Piergiorgio Cremonese, Wireless Architect, Italy
Jean-Yves Le Boudec, Laboratoire de Reseaux de Communications, Switzerland
M. Podesta, Whitehead Laboratory, Italy

Chapter 13 ................................................................................................... 269
Playout Control Mechanisms for Speech Transmission over the Internet: Algorithms and Performance Results
Marco Roccetti, Universita di Bologna, Italy

Chapter 14 ................................................................................................... 290
Collaboration and Virtual Early Prototyping Using The Distributed Building Site Metaphor
Fabien Costantini, CEDRIC, France
Christian Toinard, CEDRIC, France

Chapter 15 ................................................................................................... 333
Methods for Dealing with Dynamic Visual Data in Collaborative Applications–A Survey
Binh Pham, Queensland University of Technology, Australia

Chapter 16 ................................................................................................... 351
An Isochronous Approach to Multimedia Synchronization in Distributed Environments
Zhonghua Yang, Nanyang Technological University, Singapore
Robert Gay, Nanyang Technological University, Singapore
Chengzheng Sun, Griffith University, Queensland, Australia
Chee Kheong Siew, Nanyang Technological University, Singapore
Abdul Sattar, Griffith University, Queensland, Australia

Chapter 17 ................................................................................................... 369
Introduction To Multicast Technology
Gábor Hosszú, Budapest University of Technology & Economics, Hungary

Chapter 18 ................................................................................................... 412
IP Multicast: Inter Domain, Routing, Security and Address Allocation
Antonio F. Gómez-Skarmeta, University of Murcia, Spain
Pedro M. Ruiz, University Carlos III of Madrid, Spain
Angel L. Mateo-Martinez, University of Murcia, Spain

Chapter 19 ................................................................................................... 441
Mobile Multimedia over Wireless Network
Jürgen Stauder, Thomson Multimedia, France
Fazli Erbas, University of Hannover, Germany

About the Authors ..................................................................................... 473
Index ............................................................................................................ 482
Preface
We are witnessing an explosive growth in the use of multiple media forms (voice, data, images, video, etc.) in varied application areas including entertainment, communication, collaborative work, electronic commerce and university courses. Increasing computing power, integrated with multimedia and telecommunication technologies, is bringing into reality our dream of real-time, virtually face-to-face interaction with collaborators sitting far away from us. In the process of realizing our technological ambitions, we need to address a number of technology, management and design issues, and we need to be familiar with exciting current applications. It is impossible to track the magnitude and breadth of the changes that multimedia and communication technology brings to us daily in many different ways throughout the world. Consequently this book presents an overview of the expanding technology, beginning with application techniques that lead to management and design issues. Our goal is to highlight major multimedia networking issues, understanding and solution approaches, and networked multimedia applications design. Because we wanted to include diverse ideas from various locations, chapters from professionals and researchers from about thirteen countries working at the forefront of this technology are included. This book has nineteen chapters, which cover the following major multimedia networking areas:
• Development and management of real-time distributed multimedia applications
• Audio/video applications and streaming issues
• Protocols and technologies for building Internet multimedia applications
• QoS frameworks and implementation
• Collaborative applications
• Multimedia synchronization in distributed environments
• Multicasting technology and applications
• Use of mobile multimedia over wireless networks
The chapters in this book address the dynamic and efficient usage of resources, a fundamental aspect of multimedia networks and applications. The book also details current research, applications and future research directions. The following paragraphs draw together the abstracts of the chapters to provide an overview of the topics covered.
Development and management of real-time distributed multimedia applications
The first three chapters focus on the management of distributed multimedia networking along with streaming issues. Real-time distributed multimedia environments, characterized by timing constraints and end-to-end quality of service (QoS) requirements, pose new challenges for efficient management mechanisms that can respond to transient changes in the load or the availability of resources. Chapter one presents a real-time distributed multimedia framework, based on the Common Object Request Broker Architecture (CORBA), that provides resource management and Quality of Service (QoS) for CORBA
applications. Chapter two presents state-of-the-art coverage of the Internet integrated service architecture and two multimedia frameworks that support the development of real-time multimedia applications. The Internet integrated service architecture supports a variety of service models beyond the current best-effort model. A set of new real-time protocols that constitute the integrated service architecture is described in some detail. The new protocols covered are those for real-time media transport, media session setup and control, and resource reservation to offer the guaranteed service. Two emerging media frameworks that provide a high-level abstraction for developing real-time media applications over the Internet are also described: the CORBA Media Streaming Framework (MSF) and the Java Media Framework (JMF), both of which provide object-oriented multimedia middleware. Future trends are also discussed. Chapter three focuses on another important topic in ORB end-system research: the design and performance of the CORBA audio/video streaming service specification.
Protocols and technologies for building Internet multimedia applications
The next several chapters focus on the protocols and technological aspects of building networked multimedia applications. Chapter four introduces the potential contribution of the emerging MPEG-4 audio-visual representation standard to future multimedia systems. This is attempted through the case study of a particular example of such a system, 'LipTelephone', a special videoconferencing system. The objective of 'LipTelephone' is to serve as a videophone that enables lip readers to communicate over a standard telephone connection, or even over the Internet. The main objective of the chapter is to introduce students to these methods for the processing of multimedia material, to provide researchers with a reference to the state of the art in this area, and to urge engineers to use the present research methodologies in future consumer applications.
Recently, scientists have been focusing on a new class of application that promises on-demand access to multimedia information such as radio and broadcast news. Chapter five describes how the synergy of speech, language and image processing has enabled a new class of information-on-demand news systems. The chapter also discusses the ability to automatically process broadcast video 7x24 and serve it to the general public in individually tailored personal casts, and identifies some remaining challenging research areas. The next chapter presents another application, an open telecooperation architecture for medical teleconsultation with modern high-power workstations, implemented using a distributed computing system. The resulting medical Computer Supported Cooperative Work (CSCW) tool is evaluated experimentally. This tool also has the potential to be used in distance education environments. Chapter seven describes a five-layer multimedia database management system (MDBMS) with storage sharing and object reuse support, with application to an instruction-on-demand system that is used in the realization of several computer science related courses at Tamkang University.
More focus on QoS frameworks and implementation
In multimedia applications, media data such as audio and video are transmitted from servers to clients over the network according to transmission schedules. Unlike conventional data streams, end-to-end quality of service (QoS) is necessary for media transmission to provide jitter-free playback. The subsequent chapters, while dealing with protocol and technology aspects, also focus on QoS frameworks and implementation issues.
Because of its wide range of application areas, the delivery of high-quality video content to customers is now a driving force for the evolution of the Internet. Chapter eight presents an originally developed video retrieval application whose features include a flexible user interface based on an HTTP browser for content querying and browsing, support for both unicast and multicast addressing, and user-oriented control of the QoS of video streaming in Integrated Services IP networks. Part of the chapter is devoted to selected methods of modelling information systems, the prediction of system performance, and the influence of different control mechanisms on the quality of service perceived by end users.
Chapter nine discusses various issues related to the shaping of Motion Picture Experts Group (MPEG) video for generating constrained or controlled variable bit rate (VBR) data streams. The results presented in this chapter can be utilized not only for network and nodal (buffer) capacity engineering but also for delivering user-defined quality of service (QoS) to customers. The next chapter presents a novel traffic shaping approach to optimize both resource allocation and utilization for VBR media transmission. This idea is then extended to online transmission problems.
The emergence of high-speed networked multimedia systems provides opportunities to handle collections of real-time continuous media (CM) applications. Admission control in CM servers or video-on-demand systems restricts the number of applications supported on the resources. It is necessary to develop more intelligent mechanisms for efficient admission control, negotiation, resource allocation and resource scheduling, with the aim of optimizing total system utilization. In particular, there has been increased interest in I/O issues for multimedia or continuous media. Chapter eleven presents a dynamic and adaptive admission control strategy for providing fair disk bandwidth scheduling and better performance for video streaming. It also compares simulation results on the behavior of conventional greedy admission control mechanisms with those of the proposed admission control and scheduling algorithm.
The traffic generated by multimedia applications presents a great amount of burstiness, which can hardly be described by a static set of traffic parameters. For dynamic and efficient usage of the resources, the traffic specification should reflect the real traffic demand and at the same time optimize the resources requested. To achieve this goal, chapter twelve presents a model for dynamically renegotiating the traffic specification (RVBR) and shows how it can be integrated with the RSVP traffic reservation mechanism, demonstrating this through an example application that is able to adapt its traffic to manage QoS dynamically. The remainder of the chapter focuses on the technique used to implement RVBR, taking into account the problems deriving from delay during the renegotiation phase, and on the performance of the application with MPEG4 traffic.
Audio is frequently perceived as one of the most important components of multimedia communications. The very high transmission delay and transmission delay variance (known as jitter) experienced in the current architecture of the Internet impair real-time human conversations. One way to cope with this problem is to use adaptive control mechanisms. These mechanisms are based on the idea of using a voice reconstruction buffer at the receiver to add artificial delay to the audio stream and smooth out the jitter. Chapter thirteen describes three different control mechanisms that are able to dynamically adapt the audio application to the network conditions so as to minimize the impact of delay jitter (and packet loss). A set of performance results is reported from extensive experimentation with an Internet audio tool designed by the authors.
Collaborative applications
The next two chapters focus on collaborative applications and their design issues. Rapid prototyping within a virtual environment offers new possibilities of working, but tools to reduce the time to design a product and to examine different design alternatives are missing. The state of the art shows that current solutions offer only limited collaboration. Within the context of an extended team, the solutions do not address how to move easily from one style of working to another. They do not define how to manage the rapid design of a complex product. Moreover, the different propositions suffer mainly from a client-server approach that is inefficient in many ways and limits the openness of the system. Chapter fourteen presents a global methodology enabling different styles of work. It proposes new collaboration services that can be used to distribute a virtual scene between the designers. The solution, called the Distributed Building Site Metaphor, enables project management, meeting management, parallel working, disconnected work and meeting work, real-time validation, real-time modification, real-time conciliation, real-time awareness, easy motion between these styles of work, consistency, security and persistency.
Much work has been devoted to the development of distributed multimedia systems in various aspects: storage, retrieval, transmission, integration and synchronization of different types of data (text, images, video and audio). However, such efforts have concentrated mostly on passive multimedia material that has been generated or captured in advance. Yet many applications require active data, especially 3D graphics, images and animation that are generated by interactively executing programs during an ongoing session, particularly in collaborative multimedia applications. These applications incur extensive computational and communication costs that cannot be supported by current bandwidth. Thus, suitable techniques have to be devised to allow flexible sharing of dynamic visual data and activities in real time, especially for collaborative applications. Chapter fifteen discusses different types of collaborative modes and addresses major issues for collaborative applications that involve dynamic visual data from four perspectives: functionality, data, communication and scalability. Current approaches for dealing with these problems are also discussed, and pertinent issues for future research are identified.
Multimedia synchronization in distributed environments
Synchronization between various kinds of media data is a key issue for multimedia presentation. Chapter sixteen discusses temporal relationships and a multimedia synchronization mechanism to ensure a temporal ordering of events in a multimedia system.

Multicasting technology and applications
Chapters seventeen and eighteen focus on multicasting technologies. Multicasting increases the user's ability to communicate and collaborate, leveraging more value from the network investment. Typical multicasting applications are video and audio conferencing for remote meetings, updates on the latest election results, replication of databases and web site information, collaborative computing activities, and live network transmission of TV news or multimedia training. Multimedia multicasting would demand huge resources if not properly optimized. Although IP Multicast is considered a good solution for internetworking multimedia in many-to-many communications, there are issues that have not
been completely solved. Protocols are still evolving, and new protocols are constantly coming up to solve these issues, since that is the only way for multicast to become a true Internet service. Multimedia transport on the Internet and IP multicasting technology, including the routing and transport protocols, are described in chapter seventeen. The chapter also includes discussions of the popular Multicast Backbone (MBone) and presents different aspects of multicast application policy, detailing the main multicast application design principles, including lightweight sessions, tightly coupled sessions and virtual communication architectures on the Internet. Chapter eighteen continues by describing the evolution of IP multicast from the obsolete MBone (Multicast Backbone) and intra-domain multicast routing to the current inter-domain multicast routing scheme. Special attention is given to the challenges and problems that need to be solved, the problems that have been solved and the way they were solved. Readers get a complete picture of the state of the art, with the idea behind each protocol and the way all these protocols work together explained. Some of the topics discussed are broadly related to address allocation, security and authentication, scope control and so on. Results and recommendations are also included in this chapter.
Mobile multimedia over wireless networks
In recent years, increasing use of multimedia over the Internet has been experienced in most application areas. The next step in the information age is mobile access to multimedia applications: everything, everywhere, any time! The last chapter of this book is a tutorial chapter that addresses a key point of this development: data transmission for mobile multimedia applications in wireless cellular networks. The main concern of this chapter is the cooperation between multimedia services and wireless cellular global networks. For network developers, the question is: what constraints does multimedia transmission impose on wireless networks? For multimedia experts, the question is rather: which constraints do existing or foreseen wireless network standards impose on multimedia applications? This chapter follows the multimedia expert's view of the problem. Having studied this chapter, the reader should be able to answer several questions such as: Which networks will be capable of transmitting real-time video? Does rainfall interrupt my mobile satellite Internet connection? When will high-bandwidth wireless networks be operational? How can existing multimedia applications be tuned to be efficient in wireless networks?
Audiences
As is evident from the above discussion, many different audiences can make use of this book. Students and teachers can use it in courses related to multimedia networking. Professionals involved in the management and design of multimedia networks and applications will find many solutions to their questions and technological conundrums. Provocative ideas from the applications, case questions and research solutions included in this book will be useful to professionals, teachers and students in their search for design and development projects and ideas. The book will also benefit casual readers by providing them with a broader understanding of this technology.
Acknowledgments
Many people deserve credit for the successful publication of this book. I express sincere gratitude to each of the chapter authors, who contributed their ideas and expertise in bringing this book to fruition. Thanks to the many colleagues and authors who contributed invaluable suggestions in their thorough reviews of each chapter. Support from colleagues and staff in the Department of Computer and Information Sciences at Minnesota State University, Mankato, helped sustain my continued interest, and many also helped with reviews of the chapters. A further special note of thanks goes to all the staff at Idea Group Publishing, whose contributions throughout the whole process, from inception of the initial idea to final publication, have been invaluable. In particular, thanks to Mehdi Khosrowpour for his encouragement to continue with this project proposal, and to Jan Travers and Michele Rossi, who continuously prodded via e-mail to keep the project on schedule. I am grateful to my parents, my wife Sharifun and my son Tahin, who by their unconditional love have steered me to this point and given me constant support. They have sacrificed my company for extended periods of time while I edited this book.

Mahbubur Rahman Syed
Editor
Chapter I
Managing Real-Time Distributed Multimedia Applications Vana Kalogeraki Hewlett-Packard Laboratories, USA Peter Michael Melliar-Smith and Louise E. Moser UC Santa Barbara, USA
Distributed multimedia applications are characterized by timing constraints and endto-end quality of service (QoS) requirements, and therefore need efficient management mechanisms to respond to transient changes in the load or the availability of the resources. This chapter presents a real-time distributed multimedia framework, based on the Common Object Request Broker Architecture (CORBA), that provides resource management and Quality of Service for CORBA applications. The framework consists of multimedia components and resource management components. The multimedia components produce multimedia streams, and combine multimedia streams generated by individual sources into a single stream to be received by the users. The resource management components provide QoS guarantees during multimedia transmissions based on information obtained from monitoring the usage of the system’s resources.
INTRODUCTION
Real-time distributed multimedia environments have set forth new challenges in the management of processor and network resources. High-speed networks and powerful end-systems have enabled the integration of new types of multimedia applications, such as video-on-demand, teleconferencing, distance learning and collaborative services, into today's computer environments. Multimedia applications are variable in nature, as they handle a combination of continuous data (such as audio and video) and discrete data (such as text, images and control information) and impose strong requirements on data transmission, including fast transfer and substantial throughput.
The technical requirements necessary to achieve timeliness are obviously more difficult to satisfy in distributed systems, mostly because of the uncertain delays in the underlying communication subsystem. This difficulty is further exacerbated by the heterogeneity of today’s systems with respect to computing, storage and communication resources and the high levels of resource sharing that exist in distributed systems. Multimedia tasks may involve components located on several processors with limited processing and memory resources and with shared communication resources. Different transport mechanisms, such as TCP or UDP, can be used for data transfer within local- or wide-area networks. Distributed object computing (DOC) middleware is software built as an independent layer between the applications and the underlying operating system to enable the applications to communicate across heterogeneous platforms. At the heart of the middleware resides an object broker, such as the OMG’s Common Object Request Broker Architecture (CORBA), Microsoft’s Distributed Component Object Model (DCOM) or Sun’s Java Remote Method Invocation (RMI). Multimedia technologies can take advantage of the portability, location transparency and interoperability that middleware provides to enable efficient, flexible and scalable distributed multimedia applications. Developing a system that can provide end-to-end real-time and QoS support for multimedia applications in a distributed environment is a difficult task. Distributed multimedia applications are characterized by potentially variable data rates and sensitivity to losses due to the transmission of data between different locations in local- or wide-area networks and the concurrent scheduling of multiple activities with different timing constraints and Quality of Service (QoS) requirements. Several QoS architectures (Aurrecoechea, Campbell & Hauw, 1998) that incorporate QoS parameters (such as response time, jitter, bandwidth) and QoS-driven management mechanisms across architectural layers have emerged in the literature. Examples include the QoS Broker, COMET’s Extended Integrated Reference Mode (XRM), the Heidelberg QoS model and the MAESTRO QoS management framework. Providing end-to-end QoS guarantees to distributed multimedia applications requires careful orchestration of the processor resources, as multimedia interactions may lead to excessive utilization and poor quality of service, and multimedia applications can easily suffer quality degradation during a multimedia session caused by network saturation or host congestion. Efficient management of the underlying system resources is therefore essential to allow the system to maximize the utilization of the processors’ resources and to adapt to transient changes in the load or in the availability of the resources. The goals of this chapter are to present a distributed framework for coordinating and managing the delivery of real-time multimedia data. The framework manages the transmission of real-time multimedia data and uses current resource measurements to make efficient management decisions.
CORBA
The Common Object Request Broker Architecture (CORBA) (Object Management Group, 1999) developed by the Object Management Group (OMG) has become a widely accepted commercial standard for distributed object applications. CORBA provides an architecture and platform-independent programming interfaces for portable distributed object computing applications. The CORBA core includes an Object Request Broker (ORB), which acts as the message bus that provides the seamless interaction between client and server objects. CORBA
Interface Definition Language (IDL) describes the functional interface to the objects and the type signatures of the methods that the object embodies. IDL interfaces are mapped onto specific programming languages (e.g., Java, C/C++, etc.). From the IDL specifications, an IDL compiler generates stubs and skeletons that are used for the communication between the client and server objects. Both the implementation details and the location of the server object are kept hidden from the client objects. Interoperability is achieved using the General Inter-ORB Protocol (GIOP) and the TCP/IP-specific Internet Inter-ORB Protocol (IIOP). CORBA’s independence from programming languages, computing platforms and networking protocols makes it highly suitable for the development of distributed multimedia applications and their integration into existing distributed systems.
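To give a feel for what the generated stubs buy the client, the fragment below sketches a typical CORBA C++ client. The interface name (StreamControl), its start() operation and the stub header name are hypothetical placeholders rather than anything defined in this chapter; the ORB calls themselves (ORB_init, string_to_object, _narrow) belong to the standard C++ mapping, though details vary from ORB to ORB.

```cpp
#include <iostream>
// Hypothetical stub header generated by the IDL compiler for an assumed
// interface "StreamControl" with a start() operation; the file name follows
// TAO-style conventions and will differ with other ORBs such as VisiBroker.
#include "StreamControlC.h"

int main(int argc, char* argv[]) {
  if (argc < 2) { std::cerr << "usage: client <stringified-IOR>\n"; return 1; }
  try {
    // Initialize the ORB; the client never learns where the servant actually runs.
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

    // Turn a stringified object reference into a generic CORBA object reference.
    CORBA::Object_var obj = orb->string_to_object(argv[1]);

    // Narrow it to the IDL-defined interface; the stub marshals the invocation.
    StreamControl_var ctrl = StreamControl::_narrow(obj.in());

    // The request travels to the (possibly remote) servant via GIOP/IIOP.
    ctrl->start();

    orb->destroy();
  } catch (const CORBA::Exception&) {
    std::cerr << "CORBA exception during invocation\n";
    return 1;
  }
  return 0;
}
```

The point of the sketch is the transparency the text describes: the client manipulates only an object reference and an IDL-defined operation, never a host address, port or wire protocol.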
RELATED WORK
Because of its general applicability, the OMG streaming standard (Object Management Group, 1997) has provoked interest in areas such as telecommunications, biomedicine, entertainment and security. McGrath and Chapman (1997) have successfully demonstrated that the A/V streaming specification can be used for telecommunication applications. Mungee, Surendran and Schmidt (1999) have developed an audio/video streaming service based on the OMG's A/V Streams model.
Several researchers have focused on providing QoS support for distributed multimedia applications. Hong, Kim and Park (1999) have defined a generic QoS Management Information Base (MIB) which consists of information objects that represent a set of layered QoS parameters, organized into four logical groups: service, application, system and network. Le Tien, Villin and Bac (1999) have used m-QoS and resource managers responsible for the QoS mapping and monitoring of multimedia applications and for managing the QoS of the resources. Waddington and Coulson (1997) have developed a Distributed Multimedia Component Architecture (MCA) that extends the CORBA and DCOM models to provide additional mechanisms and abstractions for continuous networked multimedia services. MCA exercises the use of those foundation object models by using object abstractions in the form of interfaces to encapsulate and abstract the functionality of multimedia devices and processing entities. MCA presents a solution which incorporates support for real-time continuous media interactions; configuration, control and QoS management of distributed multimedia services; and dual control/stream interfaces and support for basic multimedia object services, including event handling, timer and callback services. Szentivanyi and Kourzanov (1999) have provided foundation objects that define different aspects of multimedia information management. Their approach is built on two notions: (1) a model that covers all related aspects of media management, such as storage, streaming, query, manipulation and presentation of information of several media types, and (2) a distributed architecture that provides distribution, migration and access for the object model and its realization in a seamless, configurable and scalable manner.
Several research efforts have concentrated on enhancing non-CORBA distributed multimedia environments with resource management capabilities. Alfano and Sigle (1996) discuss the problems they experienced at the host and network levels when executing multimedia applications with variable resource requirements. Nahrstedt and Steinmetz (1995) have employed resource management mechanisms, emphasizing host and network resources, to guarantee end-to-end delivery for multimedia data transmission and to adapt when system resource overloading occurs. Rajkumar, Juvva, Molano and Oikawa (1998)
have introduced resource kernels to manage real-time multimedia applications with different timing constraints over multiple resources.
OVERVIEW OF THE MULTIMEDIA FRAMEWORK

Desired Features
Our CORBA framework for managing real-time distributed multimedia applications is responsible for dynamic QoS monitoring and adaptation over changing processor and network conditions. End users receive multimedia streams from different sources without the need to know the exact location of the sources or to have specialized processors to capture the multimedia data. The framework satisfies QoS requirements expressed by the users through a combination of system design choices (e.g., assigning priority/importance metrics to the multimedia objects) and physical configuration choices (e.g., allocating memory and bandwidth). The framework has the following design objectives:
1. To reduce the cost and difficulty of developing multimedia applications. End users should be given a convenient way of expressing their QoS requirements without having to address low-level implementation details. Information such as the port number or the host address of the endpoints should be kept transparent to the users.
2. To satisfy the QoS requirements and to meet the timing constraints specified by the users. User QoS requirements are translated into application-level parameters and are mapped into requirements for system-level resources. Providing QoS-mapping mechanisms is beneficial to the system because it is more systematic and therefore can largely reduce potential user errors. Monitoring functions detect QoS violations that can cause quality degradation and lead to poor system performance.
3. To coordinate the transmission and synchronization of multimedia streams. Live synchronization requires both intra-stream and inter-stream synchronization (Biersack & Geyer, 1999). Intra-stream synchronization refers to maintaining continuity within a single multimedia stream, while inter-stream synchronization refers to preserving the temporal relationships between media units of related streams. Live synchronization requires the capture and playback/display of multimedia streams at run-time and, therefore, can tolerate end-to-end delay on the order of a few hundred milliseconds.
4. To balance the load on the resources and to minimize system overheads. Dynamic configuration management is essential to deal with complex, scalable and evolving multimedia environments. Multimedia objects must be distributed evenly across the processors with respect to their resource requirements and dependency constraints.
Structure of the Framework
The framework manages multimedia applications and the underlying system resources in an integrated manner. The framework consists of multimedia components for managing the transmission and delivery of multimedia data and resource management components for managing the multimedia components and monitoring the underlying system resources, as shown in Figure 1.
Figure 1: The architectural components of the framework
The multimedia components consist of Suppliers that produce streams of multimedia data, a Coordinator that receives multimedia streams from different sources and combines them into a single stream, and Consumers that receive a single stream and separate the different flows in the stream for individual playback or display. The resource management components consist of Profilers that measure the usage of the resources, Schedulers that schedule the tasks of the multimedia objects and a Resource Manager that allocates the multimedia objects to the processors and takes appropriate actions to satisfy resource requirements. The Resource Manager is implemented as a set of CORBA objects that are allocated to various processors across the distributed system and replicated to increase reliability; logically, however, there is only a single copy of the Resource Manager. The Resource Manager maintains a global view of the system and is responsible for allocating the multimedia objects to the processors. The Resource Manager works in concert with the Profilers and the Schedulers. The Profiler on each processor monitors the behavior of the multimedia objects and measures the current load on the processors’ resources. It supplies information to the Resource Manager, which adapts the allocations over changing processing and networking conditions. The Scheduler on each processor exploits the information collected from the Resource Manager to schedule the multimedia objects on the processor. The multimedia components of the framework are implemented as a set of CORBA objects. The Resource Manager decides the location of those objects across the system based on the utilizations of the processors’ resources, the users’ QoS requirements and the communication among the multimedia objects. The Coordinator uses a reliable, totally ordered group communication system to multicast the multimedia data to the Consumers to achieve synchronization of the streams.
QUALITY OF SERVICE FOR DISTRIBUTED MULTIMEDIA APPLICATIONS
Quality of Service (QoS) represents the set of those quantitative and qualitative characteristics that are needed to realize the level of service expected by the users (Vogel, Kerherve, Bochmann & Gecsei, 1995). There are typically many layers that determine the actual end-to-end level of service experienced by an application. User QoS parameters are translated into application-level parameters and are mapped into system-level (processor, network) parameters to control the system resources. The QoS mapping is still an open research issue, largely because there are numerous ways (Alfano & Sigle, 1996) to describe QoS for each layer. The QoS mapping is performed by the resource management components of the framework, which enables the user to specify QoS requirements without having to map the QoS parameters into parameters for the underlying layers. The QoS parameters are expressed as (name, value) pairs.
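As a rough illustration of the (name, value) representation just described, the sketch below keeps user-level QoS parameters in a simple map; the particular parameter names and values are illustrative assumptions, not taken from the framework itself.

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
  // Hypothetical user-level QoS parameters expressed as (name, value) pairs.
  std::map<std::string, std::string> user_qos = {
    {"media_type", "video"},
    {"level_of_service", "best_quality"},
    {"frame_rate", "25"},               // frames per second
    {"max_end_to_end_delay_ms", "200"},
    {"max_jitter_ms", "20"}
  };
  for (const auto& [name, value] : user_qos)
    std::cout << name << " = " << value << "\n";
  return 0;
}
```

Keeping the parameters in this uniform form is what lets the resource management components translate them layer by layer without the user ever touching processor- or network-level settings.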
User QoS Parameters
To enable users to express their QoS requirements in a simple and convenient manner, a graphical user interface is provided. User QoS parameters are specified in terms of a level of service (such as best effort or best quality) or properties that the user requires. The user must be prepared to pay a higher price when higher Quality of Service is desired. For example, a high-resolution video stream incurs a higher price in terms of increased delivery delay. User QoS requirements are expressed in terms of the media type (i.e., audio or video) and a set of media format parameters such as the color space or the data size (i.e., width and height) of an image, or the compression technique for the frames of a video stream. Users can also specify timing constraints such as start and end delivery times, the desired rate of transmission, the worst-case end-to-end delay and the maximum jitter. The QoS specified by the user includes media-specific parameters, if additional hardware or software constraints are imposed.
Application Layer
Application QoS parameters describe the characteristics of the media requested for transfer. Some of the user's parameters (e.g., end-to-end delay, rate of transmission) can be used directly as application QoS parameters, while others are translated into QoS parameters for the application. For example, for a video stream, the frame size is determined by the image height, width and color of an uncompressed image as specified by the user, and is computed as Frame_size = Width * Height * Color_resolution. A multimedia application has an associated level of service metric, which is explicitly defined by the user or is determined by the resource management components of the framework based on the user's QoS parameters and the other multimedia applications running in the system. In addition, priority metrics can be associated with the multimedia application as a whole or with individual frames. For example, in MPEG compression, video I-frames contain the most important information and, therefore, should have a higher priority than P-frames or B-frames. Application QoS parameters may also include media-specific information, such as the format of the video source (i.e., PAL or NTSC), the pixel data type, the compression pattern (i.e., IBP pattern for MPEG-1 compression), the bit rate and the number of images to be skipped between captures for a video transmission. The rate of
transmission can be derived from the IMAGE_SKIP parameter and the format of the video source. The maximum number of buffers determines the maximum number of images that a video card can store.
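To make the derivations above concrete, here is a small worked sketch. The frame-size formula is the one given in the text; the chosen resolution, colour depth and the exact way IMAGE_SKIP enters the rate calculation are assumptions made for illustration (a PAL source delivers 25 images per second).

```cpp
#include <iostream>

int main() {
  // Frame_size = Width * Height * Color_resolution (formula from the text).
  const long width = 352, height = 288;    // assumed CIF-sized image
  const long color_resolution = 3;         // assumed 24-bit colour, 3 bytes per pixel
  const long frame_size = width * height * color_resolution;   // 304,128 bytes

  // Assumed interpretation of IMAGE_SKIP: skipping k images between captures
  // reduces a 25 images/s PAL source to 25 / (k + 1) captured images per second.
  const double pal_rate = 25.0;
  const int image_skip = 4;
  const double transmission_rate = pal_rate / (image_skip + 1);  // 5 frames/s

  std::cout << "frame size: " << frame_size << " bytes, "
            << "rate: " << transmission_rate << " frames/s\n";
  return 0;
}
```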
System Layer
While perception of QoS can vary from user to user and from application to application, user and application QoS requirements must be translated into system parameters in order to monitor and control the system resources. The processor layer determines whether there are sufficient resources to accommodate the user and application requirements. Typical parameters of this layer are the utilization of the CPU, the size of the memory, the available disk space and the load imposed by special devices used for multimedia processing. This layer also encompasses the scheduling strategy used to schedule the multimedia objects on the processors' resources. The network layer determines the transport requirements for the multimedia application, including the transport protocol to be used for the delivery of packets and packet-related parameters such as packet size, priority, ordering, transfer rate, round-trip delay, packet error and loss rate. Different multimedia streams experience random delays in the delivery of multimedia data due to the heterogeneity of the underlying communication infrastructure. Ideally, the network would deliver the multimedia data as they are generated with minimal or bounded delay.
DEVELOPING A DISTRIBUTED MULTIMEDIA FRAMEWORK FOR CORBA

CORBA A/V Streaming Specification
The CORBA A/V streaming specification (Object Management Group, 1997) defines a basic set of interfaces for implementing a multimedia framework that leverages the portability and flexibility provided by the middleware. The principal components of the framework are:
1. Multimedia Device (MMDevice): A multimedia device abstracts one or more items of hardware. Typical multimedia devices can be physical devices, such as a microphone, a video camera or a speaker, or logical devices, such as a video clip. A MMDevice can potentially support a number of streams simultaneously. For each individual stream, a virtual device and a stream endpoint connection are created.
2. Virtual Device (VDev): A virtual multimedia device represents the device-specific aspects of a stream. Virtual devices have associated configuration parameters, such as the media format and the coding scheme of the transmitted stream. For instance, a video camera might be capable of transmitting both JPEG and MPEG formats in the data streams. A multimedia device can contain different virtual multimedia devices with different characteristics, and different virtual devices can refer to the same physical device.
3. Stream Endpoint (StreamEndPoint): A stream endpoint terminates a stream within a multimedia device and represents the transport-specific parameters of the stream. A stream endpoint specifies the transport protocol and the host name and port number of the transmitting endpoints.
4. Stream: A stream represents continuous media transfer between two or more virtual multimedia devices. Each stream is supported by the creation of a virtual multimedia device and a stream endpoint connection representing the device-specific and network-specific aspects of a stream endpoint. A stream may contain multiple flows, each flow carrying data in one or both directions.
5. Stream Controller (StreamCtrl): A stream controller abstracts continuous media transfer between virtual devices. It supports operations to control (start, stop) the stream as a whole or the individual flows within the stream. The StreamCtrl interface is used by the application programmer to set up and manage point-to-point or multipoint streams.
Our framework uses the components of the A/V streaming specification as building blocks for the multimedia components. The advantage is that the A/V streaming specification allows the framework to hide the underlying implementation details.
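The sketch below is only a conceptual illustration of how these roles relate to one another; it uses simplified stand-in classes rather than the actual IDL-defined interfaces of the A/V streaming specification, whose operations and signatures differ.

```cpp
#include <iostream>
#include <string>
#include <utility>

// Simplified stand-ins for the roles described above (not the OMG interfaces).
struct StreamEndPoint {            // transport-specific aspects of one stream
  std::string host;
  int port;
  std::string protocol;
};

struct VDev {                      // device-specific aspects of one stream
  std::string media_format;        // e.g. "MPEG-1"
};

struct MMDevice {                  // abstracts a physical or logical device
  std::string name;                // e.g. "video camera" or "video clip"
  // Setting up a stream yields a virtual device plus a stream endpoint.
  std::pair<VDev, StreamEndPoint> create_stream(const std::string& format,
                                                const std::string& host,
                                                int port) const {
    return {VDev{format}, StreamEndPoint{host, port, "TCP"}};
  }
};

struct StreamCtrl {                // controls the stream as a whole
  void start() { std::cout << "stream started\n"; }
  void stop()  { std::cout << "stream stopped\n"; }
};

int main() {
  MMDevice camera{"video camera"};
  MMDevice display{"image display"};
  auto supplier_side = camera.create_stream("MPEG-1", "supplier-host", 5000);
  auto consumer_side = display.create_stream("MPEG-1", "consumer-host", 5001);
  std::cout << supplier_side.second.host << " -> " << consumer_side.second.host << "\n";
  StreamCtrl ctrl;                 // a real StreamCtrl would bind the two endpoints
  ctrl.start();
  ctrl.stop();
  return 0;
}
```

The layering mirrors the text: one MMDevice can spawn several (VDev, StreamEndPoint) pairs, one per stream, while a StreamCtrl drives the stream as a whole.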
Multimedia Components of the Framework
Figure 2 shows the UML representation of the multimedia components of the framework. The multimedia components are based on a three-layered object structure. Multimedia suppliers and consumers are represented by the Supplier and Consumer objects, respectively. Multimedia streams that originate from different Suppliers are transmitted to the Coordinator object for multiplexing as a single stream before being forwarded to the Consumers. The Coordinator object is a key component of our framework. It is responsible for the synchronization of the streams that the user wishes to receive so that individual buffers at the endpoints are not required.
The Supplier
The Supplier (Figure 3) represents the stream endpoint from which the multimedia data are derived. The Supplier defines the media to be transferred using the MMDevice interface. Typical configuration parameters of the MMDevice object are the type (i.e., video camera, microphone) or the name (i.e., "Star Wars") of the media. The Supplier is implemented as a CORBA object and, therefore, can be located on any of the processors, but typically is associated with the physical location of the multimedia device. For example, to obtain live images from a camera or to listen to a live conversation, specific physical devices must be selected. On the other hand, to play back a video clip from a file, any of the processors can be chosen. The Supplier uses the virtual multimedia device (VDev) object to configure the specific flow transfer (i.e., by setting the video format for a video transfer) and the StreamEndPoint object to define the host address where the Supplier is located.
The Coordinator
The Coordinator (Figure 4) multiplexes different streams originating from different sources into a single stream to be transmitted to the Consumers. Specific transport parameters are associated with the Coordinator through the StreamEndPoint interface. These parameters define the host address where the Coordinator is located and the port number to which it listens. To accommodate a large number of consumers, different Coordinator objects can be configured to receive different multimedia streams. The Coordinator is an essential component of the framework and, therefore, is replicated for fault tolerance (Kalogeraki & Gunopulos, 2000).
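A rough sketch of the Coordinator's multiplexing role is given below. The chapter does not spell out the interleaving policy, so merging frames by capture timestamp is an assumption made purely for illustration; in the actual framework the merged stream is delivered to the Consumers over a reliable, totally ordered group communication system.

```cpp
#include <algorithm>
#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <vector>

// One media unit received from a Supplier.
struct Frame {
  std::string source;     // which Supplier produced it
  long timestamp_ms;      // capture time, used here to keep flows aligned
};

// Merge the per-Supplier queues into one stream ordered by capture time
// (an assumed policy that preserves inter-stream temporal relationships).
std::vector<Frame> multiplex(std::map<std::string, std::queue<Frame>>& inputs) {
  std::vector<Frame> merged;
  for (auto& [supplier, queue] : inputs) {
    while (!queue.empty()) {
      merged.push_back(queue.front());
      queue.pop();
    }
  }
  std::sort(merged.begin(), merged.end(),
            [](const Frame& a, const Frame& b) { return a.timestamp_ms < b.timestamp_ms; });
  return merged;          // in the framework, this single stream is multicast to all Consumers
}

int main() {
  std::map<std::string, std::queue<Frame>> inputs;
  inputs["camera1"].push({"camera1", 0});
  inputs["camera1"].push({"camera1", 40});
  inputs["camera2"].push({"camera2", 20});
  for (const auto& f : multiplex(inputs))
    std::cout << f.source << " @ " << f.timestamp_ms << " ms\n";
  return 0;
}
```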
Figure 2: UML representation of the multimedia components of the framework
Figure 3: The Supplier
Figure 4: The Coordinator
Figure 5: The Consumer
The Consumer
The Consumer (Figure 5) receives a single stream of multimedia data from the Coordinator and separates the flows that originate from different sources. These flows are subsequently supplied to video and audio buffers to be viewed or played, respectively. Compressed video images must be decompressed before they are displayed by the Consumer. The Consumer is associated with the MMDevice interface, where multiple VDev objects can be created to represent the various flows that the object is expected to receive. Typical parameters of the VDev object are image displays and speakers. The host address where the Consumer is located is defined using the StreamEndPoint interface.
RESOURCE MANAGEMENT
Multimedia applications have high resource requirements, and lack of resource management mechanisms can lead to transmission problems with multimedia objects competing for limited unmanaged resources. Pre-planned allocations are usually not
efficient because they can result in overloaded resources as the multimedia environment evolves over time. To provide efficient delivery of multimedia data, the framework employs resource management components that consist of a Profiler and a Scheduler for each processor and a global Resource Manager for the system (Kalogeraki, Moser & Melliar-Smith, 1999).
The Profilers
The Profiler for each processor measures the current load on the processor's resources (i.e., CPU, memory, disk) and the bandwidth being used on the communication links. The Profiler also monitors the messages exchanged between the objects and uses the information extracted from the messages to measure the execution and communication times of the objects and to compute the percentage of the resources used by the objects during execution. The Profilers supply their measurements as feedback to the Resource Manager. During operation, the Profilers may detect overloaded or insufficient resources to provide the Quality of Service required by the user. The QoS can change either because of an explicit request by the user (for example, when the user desires a higher level of service) or implicitly while the application executes. In both cases, the Profiler reports the monitored change of the QoS parameters to the Resource Manager, which can initiate negotiation with the user so that alternative QoS parameters can be selected.
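The fragment below sketches the kind of bookkeeping a Profiler might perform. The one-second measurement window, the per-object accounting scheme and the object names are assumptions made for illustration; in the framework, the resulting figures are what gets reported to the Resource Manager.

```cpp
#include <iostream>
#include <map>
#include <string>

// Accumulates execution time per multimedia object and turns it into a CPU share.
class Profiler {
public:
  void record(const std::string& object, double elapsed_ms) {
    busy_ms_[object] += elapsed_ms;          // derived from the monitored messages
  }
  void report() {                            // feedback destined for the Resource Manager
    for (const auto& [object, ms] : busy_ms_)
      std::cout << object << ": " << 100.0 * ms / window_ms_
                << "% of CPU in the last window\n";
    busy_ms_.clear();                        // start a new measurement window
  }
private:
  std::map<std::string, double> busy_ms_;
  double window_ms_ = 1000.0;                // assumed 1 s measurement window
};

int main() {
  Profiler profiler;
  profiler.record("Supplier", 120.0);        // hypothetical measurements
  profiler.record("Coordinator", 340.0);
  profiler.report();
  return 0;
}
```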
The Schedulers
The Scheduler on each processor specifies an ordered list (schedule) for the method invocations on the objects on that processor, which defines how access to the CPU resources is granted. The schedule is based on the least-laxity scheduling algorithm. In least-laxity scheduling, the laxity of a task is defined as: Laxity = Deadline - Remaining_Computation_Time, where Deadline is the interval within which the task must be completed and Remaining_Computation_Time is the estimated remaining time to complete the multimedia task. The Scheduler calculates the Deadline and the Remaining_Computation_Time for each task, thus deriving the task laxity. The task with the least laxity is assigned the highest real-time priority, and tasks are then scheduled according to the real-time priorities assigned by the Scheduler.
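A minimal sketch of this ordering rule is shown below; the task names and numbers are invented, and a real Scheduler would additionally map the resulting order onto operating-system real-time priorities.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Task {
  std::string name;
  double deadline_ms;                  // interval within which the task must complete
  double remaining_computation_ms;     // estimated remaining execution time
  // Laxity = Deadline - Remaining_Computation_Time, as defined in the text.
  double laxity() const { return deadline_ms - remaining_computation_ms; }
};

int main() {
  std::vector<Task> tasks = {
    {"decode_frame", 40.0, 15.0},      // laxity 25 ms
    {"mix_audio",    20.0, 12.0},      // laxity  8 ms -> highest priority
    {"update_stats", 500.0, 5.0},      // laxity 495 ms
  };
  // Least laxity first: the task with the smallest laxity gets the highest priority.
  std::sort(tasks.begin(), tasks.end(),
            [](const Task& a, const Task& b) { return a.laxity() < b.laxity(); });
  for (const auto& t : tasks)
    std::cout << t.name << " (laxity " << t.laxity() << " ms)\n";
  return 0;
}
```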
The Resource Manager
The Resource Manager is implemented as a set of CORBA objects that are allocated to various processors across the distributed system. The Resource Manager objects are replicated for fault tolerance; logically, however, there is only a single copy of the Resource Manager in the system. All the replicas of the Resource Manager perform the same operations in the same order, and therefore have the same state. The Resource Manager maintains a system profile that includes the physical configuration of the system (the various resources along with their specific characteristics) and the multimedia objects running on the processors. As new requests are introduced, the Resource Manager determines whether it can satisfy those requests by considering the available resources and the other multimedia applications running in the system. To make accurate decisions, the Resource Manager uses current system information, obtained from the Profilers, that increases the likelihood of meeting the QoS requirements specified by the user. For example, if a service requested by the user is available on more than one processor, the
Resource Manager selects the service from the most appropriate processor, e.g., the least-loaded processor or the processor located closest to the sender. The Resource Manager translates the QoS properties specified by the user into application QoS parameters and then allocates the necessary resources at the nodes along the path between the sender and the receiver. When insufficient resources remain to provide the required Quality of Service, the Resource Manager gradually degrades the Quality of Service for certain multimedia applications. The applications chosen are the ones with the least importance to the system, as defined by the user or determined by the Resource Manager. Alternatively, the Resource Manager attempts to reallocate the multimedia objects dynamically by migrating them to other processors. This reallocation may free some computing resources and enable the remaining objects to operate. Dynamic reallocation may also be required if a processor or other resource is lost because of a fault. If the quality of the multimedia applications continues to deteriorate, the Resource Manager can drop the least important multimedia applications so that the remaining applications can be accommodated at their desired level of service.
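As an illustration of the kind of placement decision described above (selecting the least-loaded processor that can still accommodate a request), the following sketch uses hypothetical data structures rather than the framework's actual CORBA interfaces:

```python
def place_object(required_cpu, processors):
    """processors maps a processor name to its current CPU load in [0, 1].
    Return the least-loaded processor that can absorb the new object, or
    None so the caller can degrade QoS or migrate other objects instead."""
    candidates = [(load, name) for name, load in processors.items()
                  if load + required_cpu <= 1.0]
    return min(candidates)[1] if candidates else None

print(place_object(0.2, {"node1": 0.7, "node2": 0.4, "node3": 0.9}))  # node2
```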
IMPLEMENTATION AND EXPERIMENTAL RESULTS
Prototype Implementation
The platform for our implementation and experiments consisted of six 167 MHz Sun UltraSPARCs running Solaris 2.5.1 with the VisiBroker 3.3 ORB over 100 Mbit/s Ethernet. For the implementation, we used three Supplier objects, two of which transmit live images captured from different cameras, while the third reads a video clip stored in a file. The first two Supplier objects are located on the processors equipped with the cameras, while the third Supplier object is located on a different processor. Each Supplier object transmits its data stream to a Coordinator object located on a different processor. The Coordinator waits until it receives all of the data streams sent by the Suppliers and merges them into a single stream, which it then transmits to the Consumer objects. The three Consumers receive data streams from the Coordinator and display each of the individual flows within the data stream on a separate display. Figure 6 shows the multimedia application at run-time. The implementation uses the XIL Imaging Library for image processing and compression. XIL provides an object-oriented architecture and supplies a rich set of basic image processing functions that serve as building blocks for developing complex image processing applications. The implementation also uses the SunVideo subsystem, a real-time video capture and compression subsystem for Sun SPARCstations. The SunVideo subsystem consists of a SunVideo card, which is a digital video card, and supporting software, which captures, digitizes and compresses unmodulated NTSC and PAL video signals from video sources. MPEG-1 video coding is used for image data compression. The compressed video streams are transmitted over the network, stored on disk, and decompressed and displayed in a window on a workstation.
Performance Measurements
In our experiments we measured the end-to-end delay experienced by the video frames, i.e., the delay from the time a video frame is captured from the camera at a Supplier until
Figure 6: Multimedia application at run-time
the time it is transferred to a Consumer and displayed to the user. Of particular interest was the jitter associated with the random delays of the individual frames. Ideally, frames should be displayed continuously with a fixed frame rate. Typically, the jitter is eliminated by employing a buffer at the receiver (Stone et al., 1995). For example, if the receiver knows a priori the maximum possible end-to-end delay experienced by a video frame, it can buffer the first frame for this maximum time before displaying it. In our framework, the end-to-end delay experienced by the video frames depends on the following factors: (1) the time required for the Suppliers to capture the frames from the camera devices and compress them into a compressed frame sequence, (2) the time to transmit the compressed frame sequence from the Suppliers to the Coordinator, (3) the time required for the Coordinator to collect the compressed frame sequences from the individual Suppliers and combine them into a single stream, (4) the time to transmit the single stream from the Coordinator to the Consumers, and (5) the time required for the Consumers to separate the compressed frame sequences from the different Suppliers and decompress them for display. To determine the end-to-end delay, we assumed that the compression/decompression time of frames of the same size is approximately the same; thus, the end-to-end delay is mainly a function of the delay in the transmission of compressed frame sequences from the Suppliers to the Consumers and the delays in the collection of compressed frame sequences from different sources. Our measurements indicate that the frames are captured and displayed at the same rate by both the Suppliers and the Consumers, as the introduction of the Coordinator did not result in any irregularity in the compressed frame sequence transmissions and did not introduce any additional delay in the transmissions. Figure 7 shows the delay in the transmission of frames as a function of time, with the load on the processor increasing over time. As the load on the processor increases, the delay becomes larger and more variable. System jitter refers to the variable delays arising within the end-system, and is generally caused by the varying system load and the compression/decompression load. Network jitter refers to the varying delays the stream packets experience on their way from the sender to
Figure 7: Delay (in ms) for successive frames as the load on the processor is increased
Figure 8: Jitter as a function of processor load
the receiver, and is introduced by buffering at the intermediate nodes. For our application, we measured both the system jitter and the network jitter. The jitter mainly depends on the load on the processors at which the Suppliers and the Consumers are located. To demonstrate the effect of the processor load on the jitter, we introduced a random increase in the load of the processor at which the Supplier was located and the frames were captured. We measured the jitter when both a single Supplier and two Suppliers were used to transmit live images to the Consumer. Figure 8 shows that the jitter is larger for a single Supplier than for two Suppliers and that it increases unacceptably when the load on the processor is increased above 0.5.
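The receiver-side buffering mentioned earlier (holding the first frame for the maximum expected end-to-end delay so that later frames are never displayed late) can be sketched as follows; the frame times below are illustrative, not measurements from our experiments:

```python
def playout_schedule(capture_times, arrival_times, max_delay):
    """Display each frame at capture time + max_delay, so any delivery delay
    up to max_delay is absorbed and the display rate stays constant."""
    schedule = []
    for captured, arrived in zip(capture_times, arrival_times):
        target = captured + max_delay
        schedule.append(max(target, arrived))  # frames delayed beyond the budget are shown late
    return schedule

# Frames captured every 40 ms, arriving with variable delay, smoothed with a 100 ms budget.
capture = [0, 40, 80, 120]
arrival = [60, 75, 170, 155]
print(playout_schedule(capture, arrival, max_delay=100))  # [100, 140, 180, 220]
```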
CONCLUSION
Distributed object computing is becoming a popular paradigm for next-generation multimedia technologies. Several efforts have demonstrated that CORBA provides a powerful platform for developing distributed multimedia applications. By leveraging the flexibility, portability and interoperability that CORBA provides, we can build real-time distributed multimedia applications more easily. We have designed a framework for managing real-time distributed multimedia applications based on CORBA. The multimedia components of the framework consist of Suppliers that produce streams of multimedia data, a Coordinator that receives multimedia streams generated by the individual Suppliers and combines them into a single stream, and Consumers that receive the single stream of multimedia data and separate the flows within the stream to be viewed or played individually. The resource management components of the framework consist of Profilers that monitor the usage of the resources and the behavior of the application objects, Schedulers that schedule the tasks of the multimedia objects, and a Resource Manager that allocates the multimedia objects to the processors, sharing the resources efficiently and adapting the allocations to changing processing and network conditions.
REFERENCES
Alfano, M. and Sigle, R. (1996). Controlling QoS in a collaborative multimedia environment. Proceedings of the Fifth IEEE International Symposium on High Performance Distributed Computing, 340-347, Syracuse, NY: IEEE Computer Society.
Aurrecoechea, C., Campbell, A. T. and Hauw, L. (1998). A survey of QoS architectures. Multimedia Systems, 6(3), 138-151.
Biersack, E. and Geyer, W. (1999). Synchronized delivery and playout of distributed stored multimedia streams. Journal of Multimedia Systems, 7, 70-90.
Hong, J. W. K., Kim, J. S. and Park, J. T. (1999). A CORBA-based quality of service management framework for distributed multimedia services and applications. IEEE Network, 13(2), 70-79.
Kalogeraki, V., Moser, L. E. and Melliar-Smith, P. M. (1999). Using multiple feedback loops for object profiling, scheduling and migration in soft real-time distributed object systems. Proceedings of the 2nd IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, 291-300, Saint Malo, France: IEEE Computer Society.
Kalogeraki, V. and Gunopulos, D. (2000). Managing multimedia streams in distributed environments using CORBA. Sixth International Workshop on Multimedia Information Systems, Chicago, IL, 114-123.
McGrath, D. and Chapman, M. (1997). A CORBA framework for multimedia streams. Proceedings of TINA’97-Global Convergence of Telecommunications and Distributed Object Computing, 239-243, Santiago, Chile: IEEE Computer Society.
Mungee, S., Surendran, N. and Schmidt, D. (1999). The design and performance of a CORBA audio/video streaming service. Proceedings of the 32nd Annual IEEE Hawaii International Conference on Systems Sciences, 14, Maui, HI: IEEE Computer Society.
Nahrstedt, K. and Steinmetz, R. (1995). Resource management in networked multimedia systems. Computer, 28(5), 52-63.
Object Management Group, Inc. (1997). Control and Management of Audio/Video Streams Specification, 1.0.
Object Management Group, Inc. (1999). The Common Object Request Broker: Architecture and Specification, 2.3.1.
Rajkumar, R., Juvva, K., Molano, A. and Oikawa, S. (1998). Resource kernels: A resource-centric approach to real-time and multimedia systems. Multimedia Computing and Networking, 150-164, San Jose, CA: SPIE-International Society for Optical Engineering.
Stone, D. L. and Jeffay, K. (1995). An empirical study of delay jitter management policies. Journal of Multimedia Systems, 2(6), 267-279.
Szentivanyi, G. and Kourzanov, P. (1999). A generic, distributed and scalable multimedia information management framework using CORBA. Proceedings of the 32nd Annual IEEE Hawaii International Conference on Systems Sciences, 15, Maui, HI: IEEE Computer Society.
Le Tien, D., Villin, O. and Bac, C. (1999). Resource managers for QoS in CORBA. Proceedings of the 2nd IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, 213-222, Saint Malo, France: IEEE Computer Society.
Vogel, A., Kerherve, B., Bochmann, G. V. and Gecsei, J. (1995). Distributed multimedia applications and quality of service: A survey. IEEE Multimedia, 2(2), 10-19.
Waddington, D. G. and Coulson, G. (1997). A distributed multimedia component architecture. Proceedings of the First International Enterprise Distributed Object Computing Workshop, 334-345, Gold Coast, Queensland, Australia: IEEE Computer Society.
Chapter II
Building Internet Multimedia Applications: The Integrated Service Architecture and Media Frameworks Zhonghua Yang, Robert Gay and Chee Kheong Siew Nanyang Technological University, Singapore Chengzheng Sun Griffith University, Australia
The Internet has become a ubiquitous service environment. This development provides tremendous opportunities for building real-time multimedia applications over the Internet. In this chapter, we present a state-of-the-art coverage of the Internet integrated service architecture and two multimedia frameworks that support the development of real-time multimedia applications. The Internet integrated service architecture supports a variety of service models beyond the current best-effort model. A set of new real-time protocols that constitute the integrated service architecture are described in some detail. The new protocols covered are those for real-time media transport, media session setup and control, and those for resource reservation in order to offer the guaranteed service. We then describe two emerging media frameworks that provide a high-level abstraction for developing real-time media applications over the Internet: the CORBA Media Streaming Framework (MSF) and the Java Media Framework (JMF), both of which provide an object-oriented multimedia middleware. Future trends are also discussed.
Copyright © 2002, Idea Group Publishing.
INTRODUCTION
The Internet has gone from near-invisibility to near-ubiquity and penetrated into every aspect of society in the past few years (Department of Commerce, 1998). The application scenarios have also changed dramatically and now demand a more sophisticated service model from the network. A service model consists of a set of service commitments; in other words, in response to a service request the network commits to deliver some service. Despite its tremendous growth, the Internet is still largely based on a very simple service model, best effort, providing no guarantee on the correct and timely delivery of data packets. Each request to send is honored by the network as best it can. This is the worst possible service: packets are forwarded by routers solely on the basis that there is any known route, irrespective of traffic conditions along that route. Routers that are overloaded are allowed to discard packets. This simplicity has probably been one of the main reasons for the success of IP technology. The best-effort service model, combined with an efficient transport layer protocol (TCP), is perfectly suited for a large class of applications, which tolerate variable delivery rates and delays. This class of applications is called elastic applications. Interactive burst communication (telnet), interactive bulk transfers (FTP) and asynchronous bulk transfers (electronic mail, fax) are all examples of such elastic applications. Elastic applications are insensitive to delay since the receiver can always wait for data that is late, and the sender can usually re-transmit any data that is lost or corrupted. However, for a real-time application, there are two problems with using this service model: if the sender and/or receiver are humans, they simply cannot tolerate arbitrary delays; on the other hand, if the rate at which video and audio arrive is too low, the signal becomes incomprehensible. To support real-time Internet applications, the service model must address those services that relate most directly to the time-of-delivery of data. Real-time applications like video and audio conferencing typically require stricter guarantees on throughput and delay. The essence of real-time service is the requirement for some service guarantees in terms of timing. There has been a great deal of effort since 1990 by the Internet Engineering Task Force (IETF) to add a broad range of services to the Internet service model, resulting in the Internet Integrated Service model (Braden, Clark and Shenker, 1994; Crowcroft, Handley and Wakeman, 1999). The Internet Integrated Services Model defines five classes of service which should satisfy the requirements of the vast majority of future applications:
1. Best Effort: As described above, this is the traditional service model of the Internet.
2. Fair: This is an enhancement of the traditional model, where there are no extra requests from the users, but the routers attempt to partition up network resources in some fair manner. This is typically implemented by adopting a random drop policy when encountering overload, possibly combined with some simple round robin serving of different sources.
3. Controlled load: This is an attempt to provide a degree of service guarantee so that a network appears to the user as if there is little other traffic, and it makes no other guarantees. Admission control is usually imposed so that the performance perceived is as if the network were over-engineered for those that are admitted.
4. Predictive service: This service is to give a delay bound which is as low as possible and, at the same time, stable enough that the receiver can estimate it.
5. Guaranteed service: This is where the delay perceived by a particular source or to a group is bounded within some absolute limit. This service model implies that resource reservation and admission control are key building blocks of the service.
The Internet that provides these integrated services is called the Integrated Service Internet. In order to use the integrated services such as multimedia applications, the user must have a workstation that is equipped with built-in multimedia hardware (audio codec and video frame grabbers). However, the realization of the Integrated Service Internet fundamentally depends upon the following enabling technologies:
High-Bandwidth Connection: Fast, high-bandwidth Internet access and connections are important. For example, an Internet user probably will not spend 46.3 minutes to transfer a 3.5-minute video clip (approximately the amount of video represented by a 10 megabyte file) using a 28.8 Kbps modem. But he would wait if it took only a few seconds to download the same file (8 seconds using a 10 Mbps cable modem). Obviously, the bandwidth of an Internet connection is a prime determinant of the Internet multimedia service. Telephone companies, satellite companies, cable service providers and Internet service providers are working to create faster Internet connections and expand the means by which users can access the Internet. New Internet access technologies such as ADSL (Asymmetric Digital Subscriber Line) enable copper telephone lines to send data at speeds up to 8 megabits per second (Mbps). For wireless access, 28.8 Kbps is widely available, and a bandwidth of 1.5 Mbps is offered by the Local Multipoint Distribution Service (LMDS) and the Multi-Channel Multipoint Distribution Service (MMDS). Internet access using cable modems can reach speeds of 1.2 to 27 Mbps.
Real-Time Protocols: Complementing the TCP/UDP/IP protocol stack, a new set of real-time protocols is required, which provide end-to-end delivery services for data with real-time characteristics (e.g., interactive audio and video).
IP-Multicasting: Most of the widely used traditional Internet applications, such as WWW browsers and email, operate between one sender and one receiver. In many emerging multimedia applications, such as Internet video conferencing, one sender will transmit to a group simultaneously. For these applications, IP-multicast is a requirement, not an option, if the Internet is going to scale.
Digital Multimedia Applications: Using IP-multicast and real-time protocols as the underlying support, sophisticated multimedia applications can be developed. There are two approaches to multimedia application development: directly by using network APIs (IP sockets) or by using media frameworks as middleware. The MBone (Internet Multicast Backbone) applications are developed using Internet socket APIs, while the Java Media Framework and the CORBA Media Streaming Framework provide middleware-like environments for multimedia application development.
In this chapter, we describe two aspects of the latest development of the Internet: the Internet multimedia protocol architecture and the media frameworks. The Internet multimedia protocol architecture supports the integrated service models beyond the current TCP/IP best-effort service model. These service models are presented, and a set of real-time protocols is described in some detail. The development of multimedia applications over the Internet is a complex task. The emerging media frameworks, the CORBA Media Streaming Framework and the Java Media Framework, provide an environment that hides the details of media capturing, rendering and processing. They also abstract away from the details of underlying network protocols. These two frameworks are described. Future trends are also discussed.
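The download-time comparison in the High-Bandwidth Connection paragraph above is simple arithmetic; a quick check, assuming 1 megabyte equals 8 megabits and ignoring protocol overhead:

```python
def transfer_time_seconds(size_megabytes, link_kbps):
    bits = size_megabytes * 8 * 1_000_000  # file size in bits (1 MB taken as 8,000,000 bits)
    return bits / (link_kbps * 1000)       # seconds to transfer

print(round(transfer_time_seconds(10, 28.8) / 60, 1))  # ~46.3 minutes over a 28.8 Kbps modem
print(round(transfer_time_seconds(10, 10_000)))         # ~8 seconds over a 10 Mbps cable modem
```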
THE INTERNET MULTIMEDIA PROTOCOL ARCHITECTURE
Since a large portion of Internet applications use the TCP/IP protocol stack, it is tempting to think of using TCP and other reliable transport protocols (for example, XTP) to deliver real-time multimedia data (audio and video). However, as Schulzrinne argued, TCP and other reliable transport protocols are not appropriate for real-time delivery of audio and video. The reasons include the following (Schulzrinne, 1995):
• TCP cannot support multicast, which is a fundamental requirement for large-scale multimedia applications (e.g., conferences).
• Real-time data is delay-sensitive. Real-time multimedia applications can tolerate data loss but will not accept delay. The TCP mechanism ensures reliability by re-transmitting lost packets and forcing the receiver to wait. In other words, reliable transmission as provided by TCP is neither appropriate nor desirable.
• Audio and video data are stream data with a natural rate. When network congestion is experienced, the congestion control for media data is to have the transmitter change the media encoding, video frame rate or video image size based on feedback from receivers. On the other hand, the TCP congestion-control mechanisms (slow start) decrease the congestion window when packet losses are detected. This sudden decrease of data rate would starve the receiver.
• The TCP (or XTP) protocol headers do not contain the necessary timestamp and encoding information needed by the receiving applications. In addition, the TCP (or XTP) headers are larger than a UDP header.
• Even in a LAN with no losses, TCP would suffer from the initial slow-start delay.
As described previously, the integrated services Internet offers a class of service models beyond TCP/IP’s best-effort service, and thus it imposes strict new requirements for a new generation of Internet protocols (Clark and Tennenhouse, 1990). In the Integrated Service Internet, a single end system will be expected to support applications that orchestrate a wide range of media (audio, video, graphics and text) and access patterns (interactive, bulk transfer and real-time rendering). The integrated services will generate new traffic patterns with performance considerations (such as delay and jitter tolerance) that are not addressed by the present Internet TCP/IP protocol architecture. The integrated service Internet is expected to have very high bandwidth and even operate at gigabit rates; at this speed, the current Internet protocols will present a bottleneck. Fast networks demand very low protocol overhead. Furthermore, the new generation of protocols for integrated services will operate over the range of coming network technologies, including Broadband ISDN, which is based on a small, fixed-size cell switching mode (ATM) different from classic packet switching. The set of Internet real-time protocols, which constitutes the Internet Multimedia Protocol Architecture, represents a new style of protocols. The new style of protocols follows the principles of application level framing and integrated layer processing proposed by Clark and Tennenhouse (1990). According to this principle, the real-time application is to have the option of dealing with a lost data unit, either by reconstituting it or by ignoring the error. The current Internet transport protocols such as TCP do not have this feature. To achieve this, losses are expressed in terms meaningful to the real-time application.
In other words, the application should break the data into suitable aggregates (frames), and lower levels should preserve these frame boundaries as they process the data. These aggregates are called Application Data Units (ADUs), which will take the place of the packet as the unit of manipulation. This principle is called Application Level Framing. From
an implementation perspective, the current implementation of layered protocol suites restricts the processing to a small subset of the suite’s layers. For example, the network layer processing is largely independent of upper layer protocols. A study shows that presentation can cost more than all other manipulations combined; therefore, it is necessary to keep the processing pipeline going. In other words, the protocols are so structured as to permit all the manipulation steps in one or two integrated processing loops, instead of performing them serially as is most often done traditionally. This engineering principle is called Integrated Layer Processing. In this approach to protocol architecture, the different functions are “next to” each other, not “on top of” each other as seen in the layered architecture. The integrated service Internet protocol architecture is shown in Figure 1. As shown, the overall multimedia data and control architecture currently incorporates a set of real-time protocols, which include the real-time transport protocol (RTP) for transporting real-time data and providing QoS feedback (Schulzrinne, Casner, Frederick and Jacobson, 2000), the real-time streaming protocol (RTSP) for controlling delivery of streaming media (Schulzrinne, Rao and Lanphier, 1998), the session announcement protocol (SAP) for advertising multimedia sessions via multicast (Handley, Perkins and Whelan, 1999) and the session description protocol (SDP) for describing multimedia sessions (Handley and Jacobson, 1998). In addition, it includes the session initiation protocol (SIP), which is used to invite the interested parties to join the session (Handley, Schulzrinne, Schooler and Rosenberg, 2000); the functionality and operation of SIP does not depend on any of these protocols. Furthermore, the Resource ReSerVation Protocol (RSVP) is designed for reserving network resources (Braden, Zhang, Berson and Herzog, 1997; Zhang, Deering, Estrin, Shenker and Zappala, 1993; Braden and Zhang, 1997). These protocols, together with IP-Multicast, are the underlying support for Internet multimedia applications. We will describe these protocols in some detail. IP-Multicast is described in great detail in Deering and Cheriton (1990); Deering (1991); Obraczka (1998); Floyd, Jacobson, Liu, McCanne and Zhang (1997); Paul, Sabnani, Lin and Bhattacharyya (1997); and Hofmann (1996). Note that in the context of multimedia communication, a session is defined as “a set of multimedia senders and receivers and the data streams flowing from senders to receivers. A multimedia conference is an example of a multimedia session.” (Handley and Jacobson, 1998). As defined, a callee can be invited several times, by different calls, to the same session. In the following, we describe the real-time Internet protocols that constitute the Internet multimedia protocol architecture.
Figure 1: Internet protocol architecture for multimedia (multimedia applications and MBone tools run over RTP/RTCP, reliable multicast and the session setup and control protocols RTSP, SDP/SAP and SIP, which are carried over UDP and TCP alongside RSVP, HTTP and SMTP, all on top of IP with IP multicast and integrated service forwarding (best-effort, guaranteed))
REAL-TIME TRANSPORT PROTOCOLS: RTP AND RTCP
We now describe the Internet protocols designed within the Internet Engineering Task Force (IETF) to support real-time multimedia conferencing. We first present the transport protocols, then describe the protocols for session control (session description, announcement and invitation). These protocols normally work in collaboration with IP-multicast, although they can also work with unicast protocols. A common real-time transport protocol (RTP) is primarily designed to satisfy the needs of multi-participant multimedia conferences (Schulzrinne, Casner, Frederick and Jacobson, 2000). Note that the name “transport protocol” is somewhat misleading, as RTP is currently mostly used together with UDP, which is a designated Internet transport protocol. Naming RTP a transport protocol emphasizes that RTP is an end-to-end protocol: it provides end-to-end delivery services for data with real-time characteristics, such as interactive audio and video. Those services include payload type identification, sequence numbering, timestamping and delivery monitoring. As discussed earlier, the design of RTP adopts the principle of Application Level Framing (ALF). That is, RTP is intended to be malleable to provide the information required by a particular application and will often be integrated into the application processing rather than being implemented as a separate layer. RTP is a protocol framework that is deliberately not complete; it specifies those functions expected to be common across all the applications for which RTP would be appropriate. Unlike conventional protocols in which additional functions might be accommodated by making the protocol more general or by adding an option mechanism that would require parsing, RTP is intended to be tailored through modifications and/or additions to the headers as needed. Therefore, a complete specification of RTP for a particular application will require one or more companion documents, typically a profile specification and a payload format specification. A profile defines a set of payload-type codes and their mapping to payload formats (e.g., media encodings). A profile may also define extensions or modifications to RTP that are specific to a particular class of applications. Typically an application will operate under only one profile. A profile for audio and video data is specified in Schulzrinne and Casner (1999). A payload format specification defines how a particular payload, such as an audio or video encoding, is to be carried in RTP. RTP is carried on top of IP and UDP (see Figure 1). RTP consists of two closely linked parts, a data part and a control part. Although named real-time, RTP does not provide any mechanism to ensure timely delivery or provide other quality of service guarantees; it is augmented by the control protocol, RTCP, that is used to monitor the quality of data distribution. RTCP also provides control and identification mechanisms for RTP transmissions. Continuous media data is carried in RTP data packets, while the flow control functionality uses the control packets. If quality of service is essential for a particular application, RTP can be used over a resource reservation protocol, RSVP, which will be described below. In summary:
• the real-time transport protocol (RTP) carries data that has real-time properties;
• the RTP control protocol (RTCP) monitors the quality of service and conveys information about the participants in an on-going session.
In RTP, the source of a media stream is called the synchronization source (SSRC). All packets from a synchronization source form part of the same timing and sequence number space, so a receiver can group packets by synchronization source for playback. Examples of
synchronization sources include the sender of a stream of packets derived from a single source such as a microphone or a camera. A synchronization source may change its data format, e.g., audio encoding, over time. The receiver has to be able to tell each source apart, so that those packets can be placed in the proper context and played at the appropriate time for each source. In the following, we describe RTP data transfer protocol and RTP control protocol in some detail. But first we have to discuss the network components that constitute an RTP media network.
RTP Network Configuration
The RTP media network typically consists of end systems and intermediate systems. The end system is the one that generates the continuous media stream and delivers it to the user. Every original RTP source is identified by a source identifier, and this identifier is carried in every packet as described below. In addition, RTP allows flows from several sources to be mixed or translated in intermediate systems, resulting in a single flow or a new flow with different characteristics. When several flows are mixed, each mixed packet contains the source IDs of all the contributing sources. Generally, the RTP intermediate system is an RTP relay within the network. The purpose of having RTP relays is that the media data can be transmitted on links of different bandwidth in different formats. There are two types of RTP relay: the mixer and the translator. To further illustrate why RTP relays (mixers and translators) are desirable network entities, let us consider the case where participants in one area are connected through a low-speed link to the majority of the conference participants who enjoy high-speed network access. Instead of forcing everyone to use a lower bandwidth, reduced-quality audio encoding, a mixer may be placed near the low-bandwidth area. This mixer resynchronizes incoming audio packets to reconstruct the constant 20 ms spacing generated by the sender, mixes these reconstructed audio streams into a single stream, translates the audio encoding to a lower bandwidth one and forwards the lower bandwidth packet stream across the low-speed link. To achieve this, the RTP header includes a means, the Contributing Source Identifier (CSRC) field, for mixers to identify the sources that contributed to a mixed packet so that a correct indication can be provided at the receivers. In general, a mixer receives RTP packets from one or more sources, possibly changes their data format, combines them in some manner and then forwards a new RTP packet. All data packets originating from a mixer will be identified as having the mixer as their synchronization source. In the other case, some of the intended participants in the audio conference may be connected with high bandwidth links but might not be directly reachable via IP multicast. For example, they might be behind an application-level firewall that will not let any IP packets pass. For these sites, mixing may not be necessary; instead another type of RTP-level relay called a translator may be used. Two translators are installed, one on either side of the firewall, with the outside one funneling all multicast packets received through a secure connection to the translator inside the firewall. The translator inside the firewall sends them again as multicast packets to a multicast group restricted to the site’s internal network. Using translators, a group of hosts speaking only IP/UDP can be connected to a group of hosts that understand only ST-II, or the packet-by-packet encoding of video streams can be translated from individual sources without resynchronization or mixing. A translator forwards RTP packets with their synchronization source intact. Mixers and translators may be designed for a variety of purposes. A collection of mixers and translators is shown in Figure 2 to illustrate their effect on SSRC and CSRC
Figure 2: The RTP network configuration with end systems (E1 through E7), mixers (M1, M3) and translators (T1, T2); each packet is labeled Source:SSRC (CSRC, ...)
Figure 3: RTP fixed header (first 32-bit word: V=2 | P | X | CC | M | PT | sequence number; followed by the 32-bit timestamp, the synchronization source (SSRC) identifier and the contributing source (CSRC) identifiers)
identifiers. In the figure, end systems are shown as rectangles (named E), translators as triangles (named T) and mixers as ovals (named M). The notation “M1: 48(1,17)” designates a packet originating from mixer M1, identified by M1’s (random) SSRC value of 48 and two CSRC identifiers, 1 and 17, copied from the SSRC identifiers of packets from E1 and E2.
RTP Data Transfer Protocol
Every RTP data packet consists of a header followed by the payload (e.g., a video frame or a sequence of audio samples). The RTP header is formatted as in Figure 3. The first 12 octets are present in every RTP packet, while the list of CSRC identifiers is present only when inserted by a mixer. As shown in Figure 3, the RTP header contains the following information:
1. version (V): 2 bits. This field identifies the version of RTP. The current version is two (2).
2. padding (P): 1 bit. If the padding bit is set, the packet contains one or more additional padding octets at the end which are not part of the payload.
3. extension (X): 1 bit. If the extension bit is set, the fixed header must be followed by exactly one header extension.
4. CSRC count (CC): 4 bits. The CSRC count contains the number of CSRC identifiers that follow the fixed header.
5. marker (M): 1 bit. The interpretation of the marker is defined by a profile. For video, it marks the end of a frame.
6. payload type (PT): 7 bits. This field identifies the format of the RTP payload and determines its interpretation by the application, for example JPEG video or GSM audio. A receiver must ignore packets with payload types that it does not understand.
7. sequence number: 16 bits. The sequence number increments by one for each RTP data packet sent, and may be used by the receiver to detect packet loss and to restore packet sequence within a series of packets with the same timestamp.
8. timestamp: 32 bits. The timestamp reflects the sampling instant of the first octet in the RTP data packet. The sampling instant must be derived from a clock that increments monotonically and linearly in time to allow synchronization and jitter calculations. The resolution of the clock must be sufficient for the desired synchronization accuracy (Yang, Sun, Sattar and Yang, 1999) and for measuring packet arrival jitter.
9. SSRC: 32 bits. The Synchronization Source (SSRC) field identifies the synchronization source. This identifier is chosen randomly, with the intent that no two synchronization sources within the same RTP session will have the same SSRC identifier.
10. CSRC list: 0 to 15 items, 32 bits each. The CSRC list identifies the contributing sources for the payload contained in this packet. The number of identifiers is given by the CC field.
Readers may note that RTP packets do not contain a length indication; the lower layer, therefore, has to take care of framing.
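A minimal parser for the 12-octet fixed header just described, following the field layout of Figure 3; header extensions and error handling are omitted, so this is a sketch rather than a complete RTP implementation:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Decode the fixed RTP header plus any CSRC list (RTP version 2)."""
    if len(packet) < 12:
        raise ValueError("packet shorter than the 12-octet fixed header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    header = {
        "version": b0 >> 6,            # V: top 2 bits of the first octet
        "padding": bool(b0 & 0x20),    # P
        "extension": bool(b0 & 0x10),  # X
        "csrc_count": b0 & 0x0F,       # CC
        "marker": bool(b1 & 0x80),     # M
        "payload_type": b1 & 0x7F,     # PT
        "sequence_number": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }
    # CSRC identifiers (0 to 15 items, 32 bits each) follow the fixed header.
    n = header["csrc_count"]
    header["csrc_list"] = list(struct.unpack(f"!{n}I", packet[12:12 + 4 * n])) if n else []
    return header
```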
Real-Time Control Protocol (RTCP)
RTCP occupies the place of a transport protocol in the protocol stack. However, RTCP does not transport application data but is rather an Internet control protocol, like the Internet Control Message Protocol (ICMP), the Internet Group Management Protocol (IGMP), or routing protocols. Thus, RTCP as the control protocol works in conjunction with RTP (Schulzrinne, Casner, Frederick and Jacobson, 2000). Real-Time Control Protocol (RTCP) packets supplement each RTP flow. RTCP control packets are periodically transmitted by each participant in an RTP session to all other participants. Feedback of information to the application can be used to control performance and for diagnostic purposes. RTCP performs the following four functions:
1. Provide feedback information to the application: The primary function is to provide information to an application regarding the quality of data distribution. Each RTCP packet contains sender and/or receiver reports that report statistics useful to the application. These statistics include the number of packets sent, the number of packets lost, inter-arrival jitter, etc. This reception quality feedback is useful for the sender, receivers and third-party monitors. For example, the sender may modify its transmissions based on the feedback; receivers can determine whether problems are local, regional or global; network managers may use information in the RTCP packets to evaluate the performance of their networks for multicast distribution. Note that each RTCP packet contains the NTP and RTP timestamps inserted by data senders, which help intermedia synchronization.
2. Identify RTP source: RTCP carries a transport-level identifier for an RTP source, called the canonical name (CNAME). This CNAME is used to keep track of the participants in an RTP session. Receivers use the CNAME to associate multiple data streams from a given participant in a set of related RTP sessions, e.g., to synchronize audio and video.
3. Control the RTCP transmission interval: To prevent control traffic from overwhelming network resources and to allow RTP to scale up to a large number of session participants, control traffic is limited to at most 5% of the overall session traffic. This limit is enforced by adjusting the rate at which RTCP packets are periodically transmitted as a function of the number of participants. Since each participant multicasts control packets to everyone else, each can keep track of the total number of participants and use this number to calculate the rate at which to send RTCP packets.
4. Convey minimal session control information: As an optional function, RTCP can be used as a convenient method for conveying a minimal amount of information to all session participants. For example, RTCP might carry a personal name to identify a participant on the user’s display. This function might be useful in loosely controlled sessions where participants informally enter and leave the session.
The RTP Control Protocol defines five RTCP packet types to carry a variety of control information: sender report (SR), receiver report (RR), source description (SDES), packet BYE to indicate end of participation and packet APP for application-specific functions. The control packets are distributed to all participants in the session using the same distribution mechanism as the data packets. The underlying protocol must provide multiplexing of the data and control packets, for example using separate port numbers with UDP. Since RTCP packets are sent periodically by each session member, a balance between the desire for up-to-date control information and the desire to limit control traffic to a small percentage of the data traffic (5%) must be struck. The RTCP protocol specification presents an algorithm to compute the RTCP transmission interval. RTP and RTCP packets are usually transmitted using the UDP/IP service. Figure 4 shows an RTP packet encapsulated in a UDP/IP packet. However, RTP is designed to be transport-independent and thus can be run on top of other transport protocols, even directly over AAL5/ATM.
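The 5% rule described above can be illustrated with a simplified interval calculation; the constants, and the absence of the sender/receiver weighting and randomization used by the RTCP specification, are simplifications for the sake of the sketch:

```python
def rtcp_interval(session_bandwidth_bps, members, avg_rtcp_packet_bits,
                  min_interval=5.0):
    """Scale the reporting interval so that all members together stay
    within 5% of the session bandwidth."""
    rtcp_bandwidth = 0.05 * session_bandwidth_bps
    interval = members * avg_rtcp_packet_bits / rtcp_bandwidth
    return max(interval, min_interval)

# 100 participants sharing a 256 kbps audio session with ~800-bit RTCP packets.
print(round(rtcp_interval(256_000, 100, 800), 2))  # seconds between reports (6.25)
```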
PROTOCOLS FOR MULTIMEDIA SESSION SETUP AND CONTROL
There are two basic forms of multimedia session setup mechanism: session advertisement and session invitation. Session advertisements are provided using a session directory, and session invitation (inviting a user to join a session) is provided using a session invitation protocol such as SIP (described below) (Handley, Schulzrinne, Schooler and Rosenberg, 2000) or the Packet-Based Multimedia Communication Systems standard H.323 (ITU, 1998). Before a session can be advertised, it must be described using the session description protocol (SDP). SDP describes the content and format of a multimedia session, and the session announcement protocol (SAP) is used to distribute it to all potential session recipients. SDP and SAP are described below.
Figure 4: RTP packets are encapsulated in an IP packet (IP header | UDP header | RTP header | RTP payload)
The Session Description Protocol (SDP)
The session description protocol is used for general real-time multimedia session description purposes. It assists the advertisement of conference sessions and communicates the relevant conference setup information to prospective participants. SDP is designed to convey such information to recipients. This protocol is purely a format for session description. In other words, it does not incorporate a transport protocol, and is intended to be carried by different transport protocols as appropriate, including the Session Announcement Protocol (SAP), the Session Initiation Protocol (SIP), the Real-time Streaming Protocol (RTSP), electronic mail using the MIME extensions and the Hypertext Transport Protocol (HTTP). SDP serves two primary purposes. It is a means to communicate the existence of a session, and it is a means to convey sufficient information to enable joining and participating in the session. In a unicast environment, only the latter purpose is likely to be relevant. A session description contains the following:
1. Session name and purpose.
2. The media comprising the session: the type of media (video, audio); the transport protocol (RTP/UDP/IP, H.320); the format of the media (H.261 video, MPEG video).
3. Time(s) the session is active: an arbitrary list of start and stop times bounding the session; for each bound, repeat times such as “every Wednesday at 10am for one hour.”
4. Information needed to receive those media (addresses, ports, formats and so on).
As resources necessary to participate in a session may be limited, some additional information may also be desirable:
1. Information about the bandwidth to be used by the conference.
2. Contact information for the person responsible for the session.
In general, SDP must convey sufficient information to be able to join a session (with the possible exception of encryption keys) and to announce the resources to be used to non-participants that may need to know. An SDP description is formatted using the following keys:
Session description
v= (protocol version)
o= (owner/creator, session identifier)
s= (session name)
i=* (session information)
u=* (URL of description)
e=* (email address)
p=* (phone number)
c=* (connection, IN: Internet, IP address/ttl)
b=* (bandwidth information)
z=* (time zone adjustments)
k=* (encryption key)
a=* (session attribute)
Time description
t= (time the session is active)
r=* (zero or more repeat times)
Media description
m= (media name and transport address)
i=* (media title)
c=* (connection information)
b=* (bandwidth information)
k=* (encryption key)
a=* (media attribute lines)
An example SDP description is:
v=0
o=yang 2890844526 2890842807 IN IP4 132.234.86.1
s=Internet Integrated Service Architecture
i=An overview of Internet service models and protocol architecture
u=http://www.cit.gu.edu.au/~yang/services.pdf
e=[email protected] (Zhonghua Yang)
p=+61 7 3875 3855
c=IN IP4 132.234.86.1/127
t=2873397496 2873404696
a=recvonly
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 31
The SDP description is announced using the Session Announcement Protocol (SAP), described next.
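Because every SDP line is a one-letter key, an equals sign and a value, a toy parser for descriptions like the example above takes only a few lines; this is a sketch, not a complete SDP implementation:

```python
def parse_sdp(text):
    """Split an SDP description into session-level fields and per-media sections."""
    session, media = {}, []
    for line in filter(None, (l.strip() for l in text.splitlines())):
        key, _, value = line.partition("=")
        if key == "m":                       # an m= line starts a new media description
            media.append({"m": [value]})
        elif media:
            media[-1].setdefault(key, []).append(value)
        else:
            session.setdefault(key, []).append(value)
    return session, media

example = ("v=0\ns=Demo session\nc=IN IP4 132.234.86.1/127\n"
           "m=audio 49170 RTP/AVP 0\nm=video 51372 RTP/AVP 31\na=recvonly")
session, media = parse_sdp(example)
print(session["s"], [m["m"] for m in media])  # ['Demo session'] [['audio ...'], ['video ...']]
```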
Session Announcement Protocol (SAP)
In order to assist the advertisement of multicast multimedia conferences and other multicast sessions, and to communicate the relevant session setup information to prospective participants, a distributed session directory may be used. An instance of such a session directory periodically multicasts packets which contain a description of the session, and these advertisements are received by potential participants who can use the session description to start the tools required to participate in the session. SAP defines an announcement protocol to be used by session directory clients (Handley, Perkins and Whelan, 1999). Sessions are described using the session description protocol (SDP) as described in the previous section. The session description is the payload of a SAP packet (Figure 5).
Session Announcement and Deletion
SAP defines no rendezvous mechanism. A SAP announcer periodically sends an announcement packet to a well-known multicast address and port. In other words, the SAP announcer is not aware of the presence or absence of any SAP listeners, and no additional reliability is provided over the standard best-effort UDP/IP semantics.
Figure 5: SAP packet format (fields: V=1, A, R, T, E, C flags; auth len; msg id hash; originating source (32 bits for IPv4 or 128 bits for IPv6); optional authentication data; optional payload type; payload)
A SAP announcement is multicast with the same scope as the session it is announcing, ensuring that the recipients of the announcement can also be potential recipients of the session being advertised. A SAP listener learns of the multicast scopes it is within and listens on the well-known SAP address and port for those scopes. Multicast addresses in the range 224.2.128.0 to 224.2.255.255 are used for IPv4 global scope sessions, with SAP announcements being sent to 224.2.127.254. For IPv4 administrative scope sessions, administratively scoped IP multicast is used (Mayer, 1998). The announcement interval, or the time period between periodic multicasts of an announcement to the group, is chosen such that the total bandwidth used by all announcements on a single SAP group remains below a pre-configured limit. The base interval between announcements is derived from the number of announcements being made in that group, the size of the announcement and the configured bandwidth limit. Sessions that have previously been announced may be deleted by implicit timeout or by explicit deletion using a session deletion packet. The session description payload contains timestamp information that specifies a start and end time for the session. If the current time is later than the end time for the session, then the session is deleted. The deletion packet specifies the version of the session to be deleted. The announcement and deletion packets are indicated by the message type field as described in the SAP packet format (Figure 5).
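The interval scaling rule just described can be sketched as follows; the bandwidth limit, announcement size and minimum interval below are illustrative values, not figures taken from the SAP specification:

```python
def sap_interval(num_announcements, announcement_bytes, bandwidth_limit_bps,
                 minimum_seconds=300.0):
    """Base interval between repeats of an announcement, chosen so that all
    announcements on one SAP group together stay under the bandwidth limit."""
    bits_per_cycle = num_announcements * announcement_bytes * 8
    return max(bits_per_cycle / bandwidth_limit_bps, minimum_seconds)

# 200 sessions of ~1000 bytes each sharing a 4000 bps announcement bandwidth budget.
print(sap_interval(200, 1000, 4000))  # 400.0 seconds between repeats
```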
The SAP Packet Format
The SAP packet contains the following information (Figure 5):
1. V: Version number. The version number field must be set to 1.
2. A: Address type. If the A bit is 0, the originating source field contains a 32-bit IPv4 address. If the A bit is 1, the originating source contains a 128-bit IPv6 address.
3. R: Reserved.
4. T: Message type. The T field is 0 for a session announcement packet and 1 for a session deletion packet.
5. E: Encryption bit. 1 if the payload of the SAP packet is encrypted; 0 if the packet is not encrypted.
6. C: Compressed bit. 1 if the payload is compressed using the zlib compression algorithm (Jacobson and Casner, 1998). If the payload is to be compressed and encrypted, the compression must be performed first.
7. auth len: Authentication Length. An 8-bit unsigned quantity giving the number of 32-bit words following the main SAP header that contain authentication data. If it is zero, no authentication header is present. The optional authentication data contains a digital signature of the packet, with length as specified by this authentication length header field.
8. msg id hash: Message Identifier Hash. A 16-bit quantity that, used in combination with the originating source, provides a globally unique identifier indicating the precise version of this announcement.
9. Originating source: This gives the IP address of the original source of the message. Whether it is an IPv4 address or an IPv6 address is indicated by the A field.
10. optional payload type: The payload type field is a MIME content type specifier, describing the format of the payload. This is a variable length ASCII text string, followed by a single zero byte (ASCII NUL). An example of the payload type is “application/sdp.”
11. payload: If the packet is an announcement packet, the payload contains a session description. If the packet is a session deletion packet, the payload contains a session deletion message. If the payload type is “application/sdp,” the deletion message is a single SDP line consisting of the origin field of the announcement to be deleted. If the E or C bits are set in the header, both the payload type and payload are encrypted and/or compressed.
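As a sketch of how the fields listed above fit together, the following packs an unauthenticated, uncompressed IPv4 announcement carrying an "application/sdp" payload; it follows the field order of Figure 5 but is not a complete SAP implementation:

```python
import socket
import struct

def build_sap_announcement(source_ipv4: str, msg_id_hash: int, sdp: str) -> bytes:
    flags = 0x20          # version 1 in the top three bits; A, R, T, E and C all zero
    auth_len = 0          # no authentication data follows the header
    header = struct.pack("!BBH", flags, auth_len, msg_id_hash)
    header += socket.inet_aton(source_ipv4)   # 32-bit originating source (IPv4)
    header += b"application/sdp\x00"          # optional payload type, NUL-terminated
    return header + sdp.encode("ascii")       # payload: the SDP session description

packet = build_sap_announcement("132.234.86.1", 0x1234, "v=0\ns=Demo\n")
print(len(packet), packet[:8].hex())
```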
Encrypted and Authenticated Announcements
As indicated in the SAP packet format, an announcement can be encrypted (by setting E=1). However, if many of the receivers do not have the encryption key, there is a considerable waste of bandwidth since those receivers cannot use the announcement they have received. For this reason, the use of encrypted SAP announcements is not generally recommended. The authentication header can be used for two purposes:
1. Verification that changes to a session description or deletion of a session are permitted.
2. Authentication of the identity of the session creator.
SAP is not tied to any single authentication mechanism. The precise format of authentication data in the packet depends on the authentication mechanism in use. The SAP protocol describes the use of Pretty Good Privacy (PGP) (Callas, Donnerhacke, Finney and Thayer, 1998) and the Cryptographic Message Syntax (CMS) (Housley, 1999) for SAP authentication.
Session Initiation Protocol (SIP)
Not all sessions are advertised, and even those that are advertised may require a mechanism to explicitly invite a user to join a session. The Session Initiation Protocol (SIP) is an application layer control protocol that can establish, modify and terminate multimedia sessions or calls (Handley, Schulzrinne, Schooler and Rosenberg, 2000). These multimedia sessions include multimedia conferences, distance learning, Internet telephony and similar applications. SIP can invite both persons and “robots,” such as a media storage service. SIP can invite parties to both unicast and multicast sessions; the initiator does not necessarily have to be a member of the session to which it is inviting. Media and participants can be added to an existing session. SIP can be used to initiate sessions as well as to invite members to sessions that have been advertised and established by other means. Sessions can be advertised using multicast protocols such as SAP, electronic mail, news groups, Web pages or directories (LDAP), among others. SIP does not care whether the session is already ongoing or is just being created, and it does not care whether the conference is a small, tightly coupled session or a huge broadcast. It merely conveys an invitation to a user in a timely manner, inviting them to participate, and provides enough information for them to be able to know what sort of session to expect. Thus although SIP can be used to make telephone-style calls, it is by no means restricted to that style of conference. SIP supports five facets of establishing and terminating multimedia communications:
1. User location: determination of the end system to be used for communication.
2. User capabilities: determination of the media and media parameters to be used.
3. User availability: determination of the willingness of the called party to engage in communications.
4. Call setup: “ringing,” establishment of call parameters at both called and calling party.
5. Call handling: including transfer and termination of calls.
Inviting a callee to participate in a single conference session or call involves one or more SIP request-response transactions. In the remainder of this section, we illustrate the SIP protocol operation.
SIP Protocol Operation
Callers and callees are identified by SIP addresses. When making a SIP call, a caller first locates the appropriate server and then sends a SIP request. The most common SIP operation is the invitation. Instead of directly reaching the intended callee, a SIP request may be redirected or may trigger a chain of new SIP requests by proxies. Users can register their location(s) with SIP servers. A SIP address, represented by a SIP URL, is used to locate users at hosts. The SIP URL takes a form similar to a mailto or telnet URL, i.e., user@host. The user part is a user name or a telephone number. The host part is either a domain name or a numeric network address. It can also contain information about the transport (UDP, TCP, SCTP) and user (e.g., user=phone), even the method of the SIP request (e.g., INVITE, ACK, BYE, CANCEL and REGISTER). Examples of SIP URLs are:
sip:[email protected]
sip:j.doe:[email protected];transport=tcp
sip:[email protected]?subject=project
sip:[email protected]
sip:[email protected]
sip:[email protected]
sip:[email protected];method=REGISTER
sip:alice
sip:+1-212-555-1212:[email protected];user=phone
When a client wishes to send a request, the client either sends it to a locally configured SIP proxy server (as in HTTP) or sends it to the IP address and port of the server. Once the host part has been resolved to a SIP server, the client sends one or more SIP requests to that server and receives one or more responses from the server. A request (and its retransmissions) together with the responses triggered by that request make up a SIP transaction. The methods of the SIP request include INVITE, ACK, BYE, CANCEL and REGISTER. The protocol steps involved in a successful SIP invitation are simple and straightforward. The SIP invitation consists of two requests, INVITE followed by ACK. The INVITE request asks the callee to join a particular conference or establish a two-party conversation. After the callee has agreed to participate in the call, the caller confirms that it has received that response by sending an ACK request. If the caller no longer wants to participate in the call, it sends a BYE request instead of an ACK. The SIP protocol specification describes more complicated initiations (Handley, Schulzrinne, Schooler and Rosenberg, 2000). The INVITE request typically contains a session description, for example, using SDP format, as described in the previous section, that provides the called party with enough information to join the session. For multicast sessions, the session description enumerates the media types and formats that are allowed to be distributed to that session. For a unicast session, the session description enumerates the media types and formats that the caller is willing to use and where it wishes the media data to be sent. In either case, if the callee wishes to accept the call, it responds to the invitation by returning a similar description listing the media it wishes to use. For a multicast session, the callee only returns a session description if it is unable to receive the media indicated in the caller’s description or wants to receive data via unicast. The protocol operations for the INVITE method are shown in Figure 6, using a proxy server as an example.
Figure 6: Example SIP protocol operations. The caller sends an INVITE to the proxy server (1); the proxy contacts the location server with the address (2); the location server returns a more precise location (3); the proxy issues an INVITE to the callee (4); the callee answers 200 OK (5), which the proxy relays to the caller (6); the caller confirms with an ACK (7), which is forwarded to the callee (8).
In this SIP transaction, the proxy server accepts the INVITE request (step 1), contacts the location service with all or part of the address (step 2) and obtains a more precise location (step 3). The proxy server then issues a SIP INVITE request to the address(es) returned by the location service (step 4). The user agent server alerts the user and returns a success indication to the proxy server (step 5). The proxy server then returns the success result to the original caller (step 6). The receipt of this message is confirmed by the caller using an ACK request, which is forwarded to the callee (steps 7 and 8). Note that an ACK can also be sent directly to the callee, bypassing the proxy. All requests and responses have the same Call-ID. Note that SIP does not offer conference control services such as floor control or voting and does not prescribe how a conference is to be managed, but SIP can be used to introduce conference control protocols. SIP does not allocate multicast addresses. SIP can invite users to sessions with and without resource reservation. SIP does not reserve resources, but can convey to the invited system the information necessary to do this. SIP as an application protocol makes minimal assumptions about the underlying transport and network-layer protocols. The lower layer can provide either a packet or a byte-stream service, with reliable or unreliable delivery. In an Internet context, SIP is able to utilize both UDP and TCP as transport protocols. SIP can also be used directly with protocols such as ATM AAL5, IPX, frame relay or X.25. In addition, SIP is a text-based protocol, and much of the SIP message syntax and many of its header fields are identical to HTTP's. SIP uses URLs for user and service addressing. However, SIP is not an extension of HTTP. As SIP is used for initiating multimedia conferences rather than delivering media data, it is believed that the additional overhead of using a text-based protocol is not significant.
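To make the two-request invitation concrete, the following sketch shows what a minimal INVITE/200 OK/ACK exchange might look like on the wire. The addresses, Call-ID and other header values are purely illustrative; the exact header set and syntax are defined in the SIP specification (Handley, Schulzrinne, Schooler and Rosenberg, 2000), and the SDP bodies are omitted here.
C->S: INVITE sip:bob@example.com SIP/2.0
      Via: SIP/2.0/UDP workstation.example.org
      From: Alice <sip:alice@example.org>
      To: Bob <sip:bob@example.com>
      Call-ID: 2987334@workstation.example.org
      CSeq: 1 INVITE
      Content-Type: application/sdp
      (caller's SDP session description follows)
S->C: SIP/2.0 200 OK
      Via: SIP/2.0/UDP workstation.example.org
      From: Alice <sip:alice@example.org>
      To: Bob <sip:bob@example.com>
      Call-ID: 2987334@workstation.example.org
      CSeq: 1 INVITE
      Content-Type: application/sdp
      (callee's SDP description follows)
C->S: ACK sip:bob@example.com SIP/2.0
      Call-ID: 2987334@workstation.example.org
      CSeq: 1 ACK
Note how all three messages share the same Call-ID and CSeq number, which is what ties them into a single transaction.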
Controlling Multimedia Servers: RTSP
A standard way to remotely control multimedia streams delivered, for example, via RTP is the Real-Time Streaming Protocol (RTSP) (Schulzrinne, Rao and Lanphier, 1998). Control includes absolute positioning within the media stream, recording and possibly
device control. RTSP is primarily aimed at Web-based media-on-demand services, but it is also well suited to providing VCR-like controls for audio and video streams, and to providing playback and recording of RTP data streams. A client can specify that an RTSP server play a recorded multimedia session into an existing multicast-based conference, or that the server join the conference and record it. RTSP acts as a network remote control for multimedia servers; it does not typically deliver the continuous streams itself. The protocol supports the following operations:
1. Retrieval of media from a media server: The client can request a presentation description via HTTP or some other method. If the presentation is being multicast, the presentation description contains the multicast addresses and ports to be used for the continuous media. If the presentation is to be sent only to the client via unicast, the client provides the destination for security reasons.
2. Invitation of a media server to a conference: A media server can be "invited" to join an existing conference, either to play back media into the presentation or to record all or a subset of the media in a presentation. This mode is useful for distributed teaching applications. Several parties in the conference may take turns "pushing the remote control buttons."
3. Addition of media to an existing presentation: The media server can tell the client about additional media becoming available. This feature is particularly useful for live presentations.
In order for RTSP to control the media presentation, each presentation and media stream is identified by an RTSP URL. The overall presentation and the properties of the media it is made up of are defined by a presentation description file. The presentation description file contains a description of the media streams making up the presentation, including their encodings, language and other parameters that enable the client to choose the most appropriate combination of media. In this presentation description, each media stream that is individually controllable by RTSP is identified by an RTSP URL, which points to the media server handling that particular media stream and names the stream stored on that server. Several media streams can be located on different servers; for example, audio and video streams can be split across servers for load sharing.
RTSP Protocol Operation
The syntax and operation of RTSP are intentionally similar to HTTP/1.1, so that extension mechanisms to HTTP can in most cases also be added to RTSP. As such, RTSP has some overlap in functionality with HTTP, and it may also interact with HTTP in that the initial contact with streaming content is often made through a Web page. Like HTTP, RTSP is an Internet application protocol using the request/response paradigm. A client sends the server a request that includes, within the first line of the message, the method to be applied to the resource, the identifier of the resource (an RTSP URL) and the protocol version in use. The main request methods defined in RTSP include:
1. SETUP: Causes the server to allocate resources for a stream; a client can also issue a SETUP request for a stream that is already playing to change transport parameters.
2. PLAY: The client tells the server to start sending data via the mechanism specified in SETUP.
3. PAUSE: The PAUSE request causes the stream delivery to be interrupted (halted) temporarily.
4. DESCRIBE: The client issues the DESCRIBE method to retrieve the description of a
presentation or media object identified by the request URL from a server.
5. ANNOUNCE: When sent from client to server, ANNOUNCE posts the description of a presentation or media object identified by the request URL to a server. When sent from server to client, ANNOUNCE updates the session description in real time.
The RTSP URL uses "rtsp:" or "rtspu:" as schemes to refer to network resources via the RTSP protocol. For example, the RTSP URL rtsp://media.example.com:554/twister/audiotrack identifies the audio stream within the presentation "twister," which can be controlled via RTSP requests issued over a TCP connection to port 554 of host media.example.com. Each request-response pair has a CSeq field for specifying the sequence number. After receiving and interpreting a request message (containing one of the methods above), a server responds with an RTSP response message that basically consists of the protocol version and a numeric status code followed by a reason phrase. RTSP adopts most of the HTTP/1.1 status codes, for example, "200 OK," "400 Bad Request," "403 Forbidden" and "404 Not Found." The following example client-server interaction uses the SETUP method:
C->S: SETUP rtsp://example.com/foo/bar/baz.rm RTSP/1.0
      CSeq: 302
      Transport: RTP/AVP;unicast;client_port=4588-4589
S->C: RTSP/1.0 200 OK
      CSeq: 302
      Date: 23 Jan 2001 15:35:06 GMT
      Session: 47112344
      Transport: RTP/AVP;unicast;client_port=4588-4589;server_port=6256-6257
In the following example, the server, as requested by the client, will first play seconds 10 through 15, then, immediately following, seconds 20 to 25, and finally seconds 30 through the end.
C->S: PLAY rtsp://audio.example.com/audio RTSP/1.0
      CSeq: 835
      Session: 12345678
      Range: npt=10-15
C->S: PLAY rtsp://audio.example.com/audio RTSP/1.0
      CSeq: 836
      Session: 12345678
      Range: npt=20-25
C->S: PLAY rtsp://audio.example.com/audio RTSP/1.0
      CSeq: 837
      Session: 12345678
      Range: npt=30-
It should be noted that, while RTSP is very similar to HTTP/1.1, it differs from HTTP in a number of important aspects:
1. RTSP introduces a number of new methods and has a different protocol identifier.
2. An RTSP server needs to maintain state by default in almost all cases, as opposed to the stateless nature of HTTP.
3. Both an RTSP server and client can issue requests.
4. Data is carried out-of-band by a different protocol (there is an exception to this).
RTSP is now an Internet proposed standard (RFC 2326).
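For completeness, a DESCRIBE exchange, the method a client typically uses before SETUP and PLAY, might look like the sketch below. The host name, header values and SDP body are illustrative only, and some headers are abbreviated; the full syntax is defined in RFC 2326 and the SDP specification.
C->S: DESCRIBE rtsp://media.example.com/twister RTSP/1.0
      CSeq: 312
      Accept: application/sdp
S->C: RTSP/1.0 200 OK
      CSeq: 312
      Content-Type: application/sdp
      v=0
      o=- 2890844526 2890842807 IN IP4 192.0.2.46
      s=Twister
      m=audio 0 RTP/AVP 0
      a=control:rtsp://media.example.com/twister/audiotrack
The a=control attribute in the returned description gives the per-stream RTSP URL that subsequent SETUP and PLAY requests would use.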
RESOURCE RESERVATION PROTOCOL (RSVP)
For flows that may take a significant fraction of the network resources, we need a more dynamic way of establishing multimedia sessions. In the short term this applies to many multimedia conferences, since at present the Internet is largely under-provisioned. The idea is that the resources necessary for a multimedia session are reserved, and if sufficient resources are not available the admission request is rejected. The Resource ReSerVation Protocol (RSVP) has been standardized for just this purpose (Braden, Zhang, Berson and Herzog, 1997; Braden and Zhang, 1997; Zhang, Deering, Estrin, Shenker and Zappala, 1993). The RSVP protocol is part of a larger effort to enhance the current Internet architecture with support for Quality of Service (QoS) flows. It provides flow identification and classification. The RSVP protocol is used by a host to request specific qualities of service from the network for particular application data streams or flows. RSVP is also used by routers to deliver quality-of-service requests to all nodes along the path(s) of the flows and to establish and maintain state to provide the requested service. RSVP requests will generally result in resources being reserved in each node along the data path. RSVP is a simplex protocol in that it makes reservations for unidirectional data flows. RSVP is receiver-oriented, i.e., the receiver of a data flow is responsible for the initiation and maintenance of the resource reservation used for that flow. The designers of the RSVP protocol argue that this design decision enables RSVP to accommodate heterogeneous receivers in a multicast group. In that environment, each receiver may reserve a different amount of resources, may receive different data streams sent to the same multicast group and may switch channels from time to time without changing its reservation (Zhang, Deering, Estrin, Shenker and Zappala, 1993). Normally, a host sends IGMP messages to join a host group and then sends RSVP messages to reserve resources along the delivery path(s) of that group. RSVP operates on top of IPv4 or IPv6, occupying the place of a transport protocol in the protocol stack. However, RSVP does not transport application data; it is rather an Internet control protocol, like ICMP, IGMP or the routing protocols. It uses the underlying routing protocols to determine where it should carry reservation requests. As routing paths change, RSVP adapts its reservations to the new paths if reservations are in place.
RSVP Reservation Model and Styles
An RSVP reservation is described by a flow descriptor, which is a pair consisting of a flowspec and a filter spec. The flowspec specifies a desired QoS. The filter spec, together with a session specification, defines the set of data packets (the "flow") to receive the QoS defined by the flowspec. This stream filtering allows the receiver to reserve network resources only for the subset of the data it is interested in receiving, and thus not to waste network resources. Stream data packets that do not match any of the filter specs are handled as best-effort traffic with no further QoS guarantee. If no filter is used, on
the other hand, then any data packets destined for the multicast group may use the reserved resources. The flowspec is used to set parameters in the node's packet scheduler or other link-layer mechanism, while the filter spec is used to set parameters in the packet classifier (see below). To support the different needs of various applications and to make the most efficient use of network resources, RSVP defines different reservation styles. RSVP reservation styles are actually a set of options included in a reservation request. Three styles (options) are defined:
1. Wildcard-Filter (WF) style, which creates a single reservation shared by flows from all upstream senders.
2. Fixed-Filter (FF) style, which creates a distinct reservation for data packets from a particular sender, not sharing it with other senders' packets for the same session.
3. Shared Explicit (SE) style, which creates a single reservation shared by selected upstream senders. Unlike the WF style, the SE style allows a receiver to explicitly specify the set of senders to be included.
These styles make it possible for intermediate switches (routers) to aggregate different reservations for the same multicast group, resulting in more efficient utilization of network resources.
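As an illustration, consider a receiver in a session with upstream senders S1 and S2, using the informal notation of the RSVP specification (Braden, Zhang, Berson and Herzog, 1997), where B denotes a unit of bandwidth in a flowspec; the particular figures below are hypothetical.
WF( * {4B} )            one shared 4B reservation used by whichever upstream senders are active
FF( S1{4B}, S2{3B} )    distinct reservations: 4B for packets from S1 and 3B for packets from S2
SE( (S1,S2){3B} )       one 3B reservation shared only by the explicitly listed senders S1 and S2
The WF style suits an audio conference where only one or two people speak at a time, FF suits sessions where every sender's traffic must be delivered concurrently, and SE falls between the two.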
RSVP Protocol Operations
We now focus on the RSVP protocol mechanisms and message types for making resource reservations. The RSVP protocol uses traffic control mechanisms to implement quality of service for a particular data flow. These mechanisms include:
1. A packet classifier that determines the QoS class (and perhaps the route) for each packet.
2. Admission control, which determines whether the node has sufficient available resources to supply the requested QoS, and policy control, which determines whether the user has administrative permission to make the reservation.
3. A packet scheduler or some other link-layer-dependent mechanism that determines when particular packets are forwarded, and thus achieves the promised QoS.
During reservation setup, a QoS request from a receiver host application is passed to the local RSVP process. The RSVP protocol then carries the request to all the nodes (routers and hosts) along the reverse data path(s) to the data source(s), but only as far as the router where the receiver's data path joins the multicast distribution tree. In this procedure, if the RSVP QoS request passes "admission control" (sufficient resources are available) and "policy control" (the reservation is permitted), the reservation is successful and parameters are set in the packet classifier and scheduler to obtain the desired QoS. If either check fails, the RSVP protocol returns an error notification to the application process that originated the request. The RSVP protocol defines two fundamental RSVP message types: Path and Resv. Each RSVP sender host transmits RSVP "Path" messages downstream along the uni-/multicast routes provided by the routing protocol(s); that is, Path messages follow the paths of the data. These Path messages store "path state" in each node along the way. The path state is used to route the Resv messages hop-by-hop in the reverse direction. "Resv"
messages are RSVP reservation requests that each receiver host sends upstream towards the senders. These messages must follow exactly the reverse of the path(s) the data packets will use upstream to all the sender hosts.
ALL WORKING TOGETHER
In this section, we illustrate how the real-time protocols described above work together to support Internet multimedia conferencing. As shown in Figure 7, User A at Site 1 first created a conference session, described it using the Session Description Protocol (SDP) and announced it to the world using the Session Announcement Protocol (SAP). At the start time given in the SDP description, User A started the session and sent out the media data using RTP, periodically sending sender reports using RTCP. User B at Site 2 received the conference announcement and noted the start time. He joined the conference using the Internet Group Management Protocol (IGMP). During the session, User B sent receiver reports using RTCP. To obtain some guarantee on the quality of service, User B made a resource reservation along the path from A using the RSVP protocol. Once the reservation succeeded, the quality of service improved. The RTP/RTCP session continues until User B leaves the group or User A stops the conference. As shown, the audio and video are each carried in separate RTP sessions, with RTCP packets controlling the quality of the session. Routers communicate via RSVP to set up and manage reserved-bandwidth sessions.
Figure 7: An illustration of all the protocols working together (SDP/SAP announcement from Site 1, IGMP join at Site 2, RTP media with RTCP sender and receiver reports, and RSVP Path/Resv messages along the data path)
THE MEDIA FRAMEWORKS FOR DEVELOPING INTERNET MEDIA APPLICATIONS
With the real-time protocols described above as foundations, we now present two emerging media frameworks that provide a high-level abstraction for developing real-time media applications over the Internet. The media frameworks we discuss are the CORBA Media Streaming Framework (MSF) (OMG, 1998) and the Java Media Framework (JMF) (Sun Microsystems, 1999), both of which provide object-oriented multimedia middleware. Internet media applications can be developed using low-level network APIs (as the MBone tools are, for example). They can also be implemented within middleware environments that abstract away the low-level details of the Internet. The CORBA Media Streaming Framework and the Java Media Framework are examples of such middleware for developing media applications. We first describe the CORBA Media Streaming Framework, followed by the Java Media Framework; both are based on distributed object technology.
CORBA MEDIA STREAMING FRAMEWORK
The Common Object Request Broker Architecture, also known as CORBA, is a distributed object architecture that allows objects to interoperate across networks (such as the Internet) regardless of the language in which they were written. CORBA is derived from the higher-level OMA architecture (Figure 8).
Figure 8: The OMG's Object Management Architecture (OMA). Application objects (not standardized by the OMG; their scope is a single application or vendor) and business objects sit alongside the CORBA domains (vertical facilities such as healthcare, finance and manufacturing, and telecommunications) and the CORBAfacilities (horizontal facilities such as compound documents, task management, desktop management and help facilities), all connected through the Object Request Broker (ORB), which relies on the CORBA services (lifecycle, externalization, events, security, naming, persistence, time, properties, transactions, query, concurrency and licensing).
The OMA divides the whole distributed object space into four areas: CORBA services, CORBA facilities, CORBA domains and application objects. Object implementations that provide capabilities required by all objects in the environment are known as CORBAservices. The growing set of object services adopted by the OMG handles lifecycle, naming, events and notification, persistence, transactions, relationships, externalization, security, trading and messaging. CORBAfacilities are object implementations that many applications will use. Specification of common facilities is the most recent aspect of the architecture. Facilities include support for compound documents and workflow, among others. For vertical markets and applications, such as health care, insurance, financial services, manufacturing and e-commerce, the architecture defines CORBAdomains interfaces, which provide the mechanism for software vendors to build components or to support the integration of new or existing (legacy) applications into a CORBA environment. The final component, Application Objects, refers to object implementations specific to end-user applications. Typically, these purpose-built objects comprise the systems that support a business process and thus are not standardized by the OMG. Within the OMA, CORBA specifies the infrastructure for inter-object communication, called the object request broker, or ORB. In essence, the ORB is the object bus, allowing distributed objects to communicate in heterogeneous environments. CORBA is often called distributed object middleware, because it mediates between components to allow them to work together, integrating them into a single, functional whole. The problem that CORBA addresses is how to define, construct and deploy the software elements that comprise complex systems. The CORBA architecture is based on the distribution of data and process as objects. This general approach is the result of industry experience with mainframe or host-centric systems and with the data server-centric technology in use over the last four decades. The goal of such distributed systems is to enable applications and data to reside on different platforms, yet to work together seamlessly and support various business processes. CORBA is a complete architecture for a vast variety of distributed applications in heterogeneous environments. In this section we focus on its support for real-time multimedia applications; more detailed information about CORBA can be found in OMG (1999), Schmidt (2000), Vinoski (1997), and Yang and Duddy (1996).
CORBA MSF Components
The CORBA Media Streaming Framework (CORBA MSF) takes an object-oriented approach to the control and management of multimedia, especially audio/video streams. The CORBA MSF specification (OMG, 1998) defines a set of object interfaces which implement a distributed media streaming framework. The principal components of the CORBA Media Streaming Framework are streams and stream endpoints, flows and flow endpoints, multimedia devices and virtual multimedia devices, and flow devices. These interfaces provide a high-level abstraction for developing multimedia applications in a CORBA environment and are described below.
1. Streams. A stream represents continuous media transfer, usually between two or more virtual multimedia devices. A stream interface is an aggregation of one or more source and sink flow endpoints associated with an object. Although any type of data could flow between objects, this CORBA framework focuses on applications dealing with
audio and video exchange with Quality of Service (QoS) constraints. The stream is represented by the IDL interface StreamCtrl, which abstracts a continuous media transfer between virtual devices. It supports operations to bind multimedia devices using a stream, as well as operations to start and stop a stream. An application can establish a stream between two devices by calling the operation bind_devs() on the StreamCtrl interface: boolean bind_devs(in MMDevice a_party, in MMDevice b_party, inout streamQoS the_qos, in flowSpec the_spec). When the application requests a stream between two or more multimedia devices, it can explicitly specify the quality of service (QoS) of the stream.
2. Stream endpoints. A stream endpoint terminates a stream and is represented by the StreamEndPoint interface. A stream endpoint and a virtual multimedia device will exist for each stream connection. There are two flavors of stream endpoint: an A-party, represented by the interface StreamEndPoint_A, and a B-party, represented by the interface StreamEndPoint_B. An A-party can contain producer flow endpoints as well as consumer flow endpoints, and similarly for a B-party. Thus, when an instance of a typed StreamEndPoint is created, the flows will be plumbed in the right direction (i.e., an audio consumer FlowEndPoint in a StreamEndPoint_A will correspond to an audio producer FlowEndPoint in a StreamEndPoint_B).
3. Flows and flow endpoints. A flow is a continuous sequence of frames in a clearly identified direction, and a flow endpoint terminates a flow. A flow endpoint may be either a source (producer) or a sink (consumer). A stream may contain multiple flows. For example, a videophone stream may contain four flows labeled video1, video2, audio1 and audio2. An operation on a stream (for example, stop or start) may be applied to all flows within the stream simultaneously or to just a subset of them. A stream endpoint may contain multiple flow endpoints. Flows and flow endpoints are represented by the FlowConnection and FlowEndPoint interfaces respectively. The framework supports two basic profiles for the streaming service: the full profile, in which flow endpoints and flow connections have accessible IDL interfaces, and the light profile, a subset of the full profile in which flow endpoints and flow connections do not expose IDL interfaces. In the light profile the FlowEndPoint objects are colocated with the StreamEndPoint objects and so do not expose IDL interfaces.
4. Multimedia devices. A multimedia device (as opposed to a virtual multimedia device) is the abstraction of one or more items of multimedia hardware and acts as a factory for virtual multimedia devices. A multimedia device can support more than one stream simultaneously; for example, a microphone device can stream audio to two speaker devices. For each stream connection requested, the multimedia device creates a stream endpoint and a virtual multimedia device. Multimedia devices are represented by the IDL interface MMDevice. A virtual device is represented by the VDev interface.
5. Flow devices. Flow devices are, for flows, exactly analogous to the multimedia devices for streams. A flow connection binds two flow devices in exactly the same manner as a stream connection (StreamCtrl) binds multimedia devices (MMDevice). Flow devices are represented by the FDev interface. An FDev creates a FlowEndPoint, whereas an MMDevice creates a StreamEndPoint and a VDev.
A simple stream between a microphone device (audio source or producer) and a speaker device (audio sink or consumer) is shown in Figure 9. To illustrate the interaction between the components of the framework, we show the process of establishing a stream by calling the bind_devs() operation on a StreamCtrl object (Figure 10).
Figure 9: A simple audio stream represented using the CORBA Media Streaming Framework (a StreamCtrl is associated with the two parties; each party contains a VDev and a StreamEndpoint, and the two StreamEndpoints are joined by the stream connection)
Figure 10: Establishing a stream: (1) bind_devs(aMMdev, bMMdev, someQoS) on the StreamCtrl; (2.1-2.4) create_A() and create_B() on the two MMDevices return the A and B endpoint references; (3) configure() between the VDevs; (4) connect(B_EndPoint, someQoS) on the A endpoint; (5) request_connection() between the endpoints
1. A StreamCtrl object may be created by the application to initiate a stream between two multimedia devices (aMMDev and bMMDev). For this purpose, a call is made to the bind_devs() operation on the StreamCtrl interface.
2. The StreamCtrl asks aMMDev and bMMDev for a StreamEndPoint and a VDev to support the stream by calling create_A(...,someQoS,...) on one and create_B(...,someQoS,...) on the other. The former call, for example, will return a StreamEndPoint_A object which is associated with a VDev. This is the point at which an MMDevice could decide that it can support no more connections and refuse to create the StreamEndPoint and VDev. A particular MMDevice might be specialized to only create A-endpoints for a type of stream (for example, a microphone endpoint and VDev) or only B-endpoints (for example, a speaker endpoint and VDev).
3. This step involves the aVDev calling configuration operations on the bVDev and vice versa. This is the device configuration phase. When two virtual devices are connected using a stream, they must ensure that they are both appropriately configured. For example, if the microphone device is using A-law encoding and a sampling frequency of 8 kHz, then it must ensure that the speaker device it is connected to is configured similarly. It does this by calling configuration operations such as set_format("audio1","MIME:audio/basic") and set_dev_params("audio1",myaudioProperties) on the VDev interface for the speaker device.
4. The actual stream is set up by calling connect() on the A_EndPoint with the B_EndPoint as a parameter. Stream endpoints contain a number of flow endpoint objects. A flow endpoint object can be used either to pull information from a multimedia device and send it over the network or vice versa. The A_EndPoint may choose to listen on transport addresses for a subset of the flows to be connected. It will then call request_connection(). Among the information passed across will be the transport addresses of the listening flows on the A-side.
5. The B_EndPoint will connect to the listening flows on the A-side and will listen on transport addresses for any remaining flows for which the A-side is not listening. Among the information passed back by the request_connection() operation will be the transport addresses of the flows listening on the B-side. The final stage is for the A_EndPoint to connect to the listening flows on the B_EndPoint.
The CORBA Media Streaming Framework has the following features:
1. Topologies for streams: allows one-to-one (unicast) and/or one-to-many (multicast) flow sources and sinks to be configured into the same stream binding.
2. Multiple flows: allows flows to be aggregated within a stream, such that joint operations can be performed.
3. Stream description and typing: allows a stream interface, in terms of its constituent flow endpoints, to be described and typed. Operations defined in flow endpoint control object IDL interfaces may be used to determine the characteristics of a flow endpoint. The subtyping of the flow endpoint control interfaces may be used to type the flow endpoints themselves.
4. Stream interface identification and references: allows a stream interface and its flow endpoints to be unambiguously named and identified by other objects and the ORB, using normal CORBA object references corresponding to IDL stream and flow endpoint control interfaces.
5. Stream setup and release: allows for the establishment and tear-down of bindings between stream interfaces and flow endpoints. Establishment may include the negotiation of QoS parameters (using references to QoS specification interfaces) and error-handling capabilities if QoS is not successfully negotiated.
6. Stream modification and termination: the IDL interfaces include operations for reconfiguring a stream connection during its lifetime, such as adding and removing endpoints from an existing stream connection. The framework allows flow data endpoints to be terminated either in hardware or in software.
7. Multiple protocols: the framework allows for multiple flow protocols and flow protocol endpoint addresses.
8. Quality of Service (QoS): allows QoS specification interfaces to be defined and passed via reference, so that the required QoS can be expressed. Interfaces can be specialized to allow for the monitoring of QoS characteristics.
9. Flow synchronization: limited support for synchronization is provided by interfaces.
10. Interoperability: because all operations are expressed in unmodified OMG CORBA IDL, normal CORBA interoperability may be used for all operations defined in this media framework. Any protocol conversions required for interworking of various flow protocols are attainable using normal application objects.
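To make the bind_devs() call above a little more concrete, the following Java sketch shows how an application holding references to a StreamCtrl and two MMDevices might request a stream. It assumes stubs generated from the A/V streaming IDL by an IDL-to-Java compiler; the package name AVStreams, the Holder class name and the QoS array type are assumptions that follow the standard IDL-to-Java mapping rather than names fixed by the specification.
import AVStreams.MMDevice;          // hypothetical stub package name
import AVStreams.StreamCtrl;
import AVStreams.QoS;
import AVStreams.streamQoSHolder;

public class StreamBinder {
    // Bind a producer device (e.g., a microphone) and a consumer device
    // (e.g., a speaker) into a single stream carrying the flow "audio1".
    public static boolean bindAudioStream(StreamCtrl streamCtrl,
                                          MMDevice producer,
                                          MMDevice consumer,
                                          QoS[] desiredQoS) {
        // streamQoS is an inout parameter in the IDL, so the Java mapping
        // wraps it in a Holder; the framework may adjust it while negotiating.
        streamQoSHolder qos = new streamQoSHolder(desiredQoS);
        // flowSpec is a sequence of flow names; an empty array would bind all flows.
        String[] flowSpec = { "audio1" };
        return streamCtrl.bind_devs(producer, consumer, qos, flowSpec);
    }
}
Behind this single call, the framework performs steps 2 through 5 described above: endpoint creation, device configuration, connect() and request_connection().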
Media Streaming Framework in CORBA Environments
The CORBA Media Streaming Framework operates in a general CORBA environment. We now discuss the major components of the CORBA stream architecture and their interaction in the CORBA environment, as shown in Figure 11. In the example CORBA stream architecture (Figure 11), there is a stream with a single flow between two stream endpoints, one acting as the source of the media data and the other as the sink. Each stream endpoint consists of three logical entities (see Figure 11):
1. A stream interface control object that provides IDL-defined interfaces (as server, 2b) for controlling and managing the stream (as well as, potentially, outside the scope of this specification, invoking operations as client, 2a, on other server objects). The stream interface control object uses a basic object adapter (BOA) or portable object adapter (POA) (OMG, 1999, Chapter 11) that transmits and receives control messages in a CORBA-compliant way.
2. A flow data source or sink object (at least one per stream endpoint, and possibly many in the multi-point case) that is the final destination of the data flow (3).
3. A stream adapter that transmits and receives a flow (a sequence of frames) over a network. The framework supports multiple transport protocols.
In Figure 11, the CORBA MSF provides definitions of the components that make up a stream and of the interface definitions onto stream control and management objects (denoted as interface 1a). The standardized interface onto stream interface control objects associated with individual stream endpoints is denoted as interface 2b. The CORBA MSF does not standardize the interfaces shown as dashed lines. That is, the CORBA MSF does not prescribe how the stream interface control object communicates with the source/sink object and, perhaps indirectly, with the stream adapter (interface 4), or how the source/sink object communicates with the stream adapter (interface 3). When a stream is terminated in hardware, the source/sink object and the stream adapter may not be visible as distinct entities. As indicated earlier, the CORBA MSF allows multiple transport protocols for transmitting and receiving the media data, so the media data is not necessarily transported by TCP. In many cases, the media data is transported by RTP/UDP, ATM/AAL5 or the Simple Flow Protocol defined in the CORBA MSF framework, which is described next.
CORBA's Simple Flow Protocol (SFP)
The CORBA Media Streaming Framework is designed to support three fundamental types of transport:
1. Connection-oriented transport like TCP: This is provided by transports like TCP and is required where completeness and reliability of data are essential.
2. Datagram-oriented transport like UDP: This is used by many popular Internet streaming applications. This type of transport is frequently more efficient and
Figure 11: The CORBA Media Streaming Framework in a CORBA environment. Each stream endpoint comprises a flow data endpoint (source or sink), a stream adapter carrying the data flow, and a stream interface control object sitting on a BOA/POA; stream control and management objects interact with the endpoints as clients and servers (interfaces 1a/1b and 2a/2b) through the ORB core and IIOP, while the flows themselves travel over SFP, RTP/IP multicast or AAL5/ATM.
lightweight if the application doesn't mind losing the occasional datagram and handling a degree of mis-sequencing. The framework must insert sequence numbers into the packets to ensure that mis-sequencing and packet loss are detected and handled by applications.
3. Unreliable connection-oriented transport: This is the type of service supplied by ATM AAL5. Messages are delivered in sequence to the endpoint, but they can be dropped or contain errors. If there are errors, this fact is reported to the application. There is no flow control unless provided by the underlying network.
In addition to supporting these transport types, the CORBA MSF allows structuring information to be transported "in band" with the stream. Such information can include sequence numbers, source indicators, timestamps and synchronization sources. The CORBA MSF argues that there is no single transport protocol that provides all the capabilities needed for streamed media. ATM AAL5 is good, but it lacks flow control, so a sender can overrun a receiver. Only RTP provides facilities for transporting the in-band information above, but RTP is Internet-centric, and it should not be assumed that a platform must support RTP in order to take advantage of streamed media. Furthermore, none of the transports provides a standard way of transporting IDL-typed information. In order to accommodate the various needs of multimedia transport over a multitude of transports, the CORBA MSF defines a simple specialized protocol which works on top of various transport protocols and provides architecture-independent transfer of flow content. This protocol is referred to as the Simple Flow Protocol (SFP). There are two important points to note about SFP:
1. It is not a transport protocol; it is a message-level protocol, like RTP, layered on top of the underlying transport. It is simple to implement.
2. It is not mandatory for a Media Streaming Framework implementation to support SFP. A flow endpoint which supports SFP can switch it off in order to communicate with a flow endpoint which does not support it. The use of SFP is negotiated at stream establishment. If the stream data is not IDL-typed (i.e., it is in an agreed byte layout, for
example, MPEG), then, by default, SFP will not be used. This allows octet stream flows to be transferred straight to a hardware device on the network. It should be emphasized, however, that the role of the SFP in the CORBA Media Streaming Framework is a very important one. In the CORBA context, it is of little use to standardize a set of IDL interfaces that manipulate audio/video streams if the flows themselves are not interoperable or do not contain the framing and in-band information. This information is necessary to provide timely, accurate delivery of multimedia data over a variety of common protocols, such as ATM AAL5 and UDP, which cannot do this by themselves. These goals are simply unattainable without the specification of SFP. The SFP is not "just another protocol." It is as fundamental to the media streaming framework as the General Inter-ORB Protocol (GIOP) is to the CORBA architecture (OMG, 1999, Chapter 15). In other words, just as CORBA was not complete without GIOP, so this standard would be of little value without SFP. The SFP message format, including the message types and message headers, is formally specified in OMG IDL (OMG, 1999, Chapter 3):
module flowProtocol{
  enum MsgType{
    // Messages in the forward direction
    Start, EndofStream, SimpleFrame, SequencedFrame, Frame, SpecialFrame,
    // Messages in the reverse direction
    StartReply, Credit
  };
  struct frameHeader{
    char magic_number[4];        // '=', 'S', 'F', 'P'
    octet flags;                 // bit 0 = byte order, 1 = fragments, 2-7 always 0
    octet message_type;
    unsigned long message_size;  // Size following this header
  };
  struct fragment{
    char magic_number[4];        // 'F', 'R', 'A', 'G'
    octet flags;                 // bit 1 = more fragments
    unsigned long frag_number;   // 1,..,n
    unsigned long sequence_num;
    unsigned long frag_sz;
    unsigned long source_id;     // Required for UDP multicast with multiple sources
  };
  struct Start{
    char magic_number[4];        // '=', 'S', 'T', 'A'
    octet major_version;
    octet minor_version;
    octet flags;                 // bit 0 = byte order
  };
  // Acknowledge successful processing of Start
  struct StartReply{
    char magic_number[4];        // '=', 'S', 'T', 'R'
    octet flags;                 // bit 0 = byte order, 1 = exception
  };
  // If the message_type in frameHeader is SequencedFrame,
  // the frameHeader will be followed by this
  struct sequencedFrame{
    unsigned long sequence_num;
  };
  // If the message_type is Frame, then the frameHeader is followed by this
  struct frame{
    unsigned long timestamp;
    unsigned long synchSource;
    sequence<unsigned long> source_ids;
    unsigned long sequence_num;
  };
  struct specialFrame{
    frameID context_id;
    sequence<octet> context_data;
  };
  struct credit{
    char magic_number[4];        // '=', 'C', 'R', 'E'
    unsigned long cred_num;
  };
}; // module
For a message type of simple frames (SimpleFrame), there is no subsequent header; the message is simply a stream of octets. The other message headers are frameHeader, frame and specialFrame, each followed by a stream of octets. When SFP is running over RTP, the sequencedFrame and frame structures are not used, since the values for timestamp, synchSource, source ids and sequence number are embedded directly into the corresponding RTP protocol fields. The typical structure of an SFP dialogue for a point-to-point flow is shown in Figure 12. The dialogue begins with a Start message (for its format, see the IDL definition above) being sent from source to sink. The source waits for a StartReply message. The message_size field in the frame header denotes the size of the message (including the headers) if no fragmentation is taking place.
Figure 12: A typical SFP dialogue. The source sends a Start message and waits for a StartReply; it then sends frame messages (here a fragmented frame of 65,536 octets whose FrameHeader flags indicate fragmentation, followed by the remaining fragments); the sink returns Credit messages to request more data, and the exchange ends with EndofStream.
If fragmentation is being used, then message_size indicates the size of the first fragment including headers. When fragmentation occurs, the first fragment, which begins with the FrameHeader, is implicitly fragment number 0. Subsequent fragments associated with this header are labeled fragment number 1 through N. As shown in Figure 12, the sink of a flow may send a credit message to the source of the flow to tell it to send more data. The cred_num field is incremented with each credit message sent. This facility may be used with protocols such as ATM/AAL5 or UDP that have no flow control. Note that the SFP dialogue is much simpler on a multicast flow, since no messages are sent in the reverse direction.
JAVA MEDIA FRAMEWORK (JMF)
Java has come to be regarded as the Internet programming language. Exploiting the advantages of the Java platform, Sun Microsystems, in collaboration with other companies, has designed the Java Media Framework (JMF) to provide a common cross-platform Java API for accessing underlying media frameworks and for incorporating time-based media into Java applications and applets (Sun Microsystems, 1999). The current version of JMF is 2.0. The JMF 2.0 API was jointly designed by Sun Microsystems and IBM Corporation; the earlier version, the JMF 1.0 API, was jointly developed by Sun Microsystems, Intel and Silicon Graphics. JMF 2.0 supports media capture and addresses the needs of application developers who want additional control over media processing and rendering; it also provides a plug-in architecture that gives direct access to media data and enables JMF to be more easily customized and extended. Furthermore, JMF provides support for RTP, which enables the
transmission and reception of real-time media streams across the Internet. The RTP APIs in JMF 2.0 support the reception and transmission of RTP streams and allow application developers to implement media streaming and conferencing applications. JMF is designed to support most standard media content types, such as AU, AVI, MIDI, GSM, MPEG and QuickTime. The JMF API consists mainly of interfaces that define the behavior and interaction of objects used to capture, process and present time-based media; implementations of these interfaces operate within the structure of the JMF framework.
1. Media Capture: A multimedia capture device can act as a source for multimedia data delivery. For example, a microphone can capture raw audio input, or a digital video capture board might deliver digital video from a camera. DataSources in JMF are abstractions of such capture devices. Some devices deliver multiple data streams; the corresponding DataSource can contain multiple SourceStream interfaces that model the data streams provided by the device. JMF data sources can be categorized as pull data-sources or push data-sources according to how data transfer is initiated:
a. Pull Data-Source: The client initiates the data transfer and controls the flow of data from pull data-sources. Established protocols for this type of data transfer include HTTP and FILE.
b. Push Data-Source: The server initiates the data transfer and controls the flow of data from a push data-source. Examples of this model include broadcast media and media-on-demand (VOD). The Internet protocols for this type of data include RTP, as discussed earlier.
2. Media Presentation: In JMF, the presentation process is modeled by the Controller interface. Controller defines the basic state and control mechanism for an object that controls, presents or captures time-based media. A Controller posts a variety of controller-specific MediaEvents to provide notification of changes in its status. JMF employs the Java event model to handle events. The JMF API defines two types of Controllers: Players and Processors. A Player or Processor is constructed for a particular data source. A Player performs the basic playback functions: it processes an input stream of media data and renders it at a precise time. A DataSource is used to deliver the input media stream to the Player. The rendering destination depends on the type of media being presented. A Processor is a specialized type of Player (inherited from Player) that provides control over what processing is performed on the input media stream. A Processor takes a DataSource as input, performs some user-defined processing on the media data and then outputs the processed media data. A Processor can send the output data to a presentation device or to a DataSource; in the latter case, the DataSource can be used as the input to another Player or Processor, or as the input to a DataSink. When the presentation destination is not a presentation device, a DataSink is used to read data from a DataSource and render the media to the specific destination. A DataSink may write media to a file (e.g., the media file writers) or transmit it over the network (for example, an RTP network streamer that transmits RTP data). The JMF Player model and Processor model are shown in Figure 13.
Figure 13: (a) The JMF Player model (a DataSource feeds a Player); (b) The JMF Processor model (a DataSource feeds a Processor, which outputs another DataSource)
Figure 14: The JMF class (interface) hierarchy. Time-based media requires a clock, so Clock (which has a TimeBase) is the base interface; presentation is modeled by Controller, which extends Clock; Player and Processor are the two kinds of Controller; a Player has a DataSource, which manages SourceStreams and is specialized into PullDataSource and PushDataSource; media capture is likewise modeled by data sources.
Figure 15: The RTP-based JMF architecture. Java applications, applets and beans sit on the Java presentation and processing API and the RTP APIs, which in turn rest on the JMF plug-in API, with demultiplexers, multiplexers, codecs, effects and renderers implemented in native code or pure Java.
3. Media Processing: In JMF, media processing is performed by a Processor. A Processor can be used as a programmable Player that lets the application control the decoding and rendering process; it can also be used as a capture processor that controls the encoding and multiplexing of the captured media data.
Figure 14 shows the major JMF interfaces and their relationships. Because JMF is designed to manipulate time-based media, the Clock interface is the base of the hierarchy. We now discuss how JMF 2.0 enables the playback and transmission of RTP streams through a set of APIs defined in the following Java media packages: javax.media.rtp, javax.media.rtp.rtcp and javax.media.rtp.event. The JMF RTP APIs are designed to work seamlessly with the capture, presentation and processing capabilities of JMF. The JMF RTP architecture is shown in Figure 15.
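As a small sketch of the presentation side of these APIs, the following program creates and starts a Player for whatever media locator is supplied on the command line; the locator string is the only assumption here, and rendering components (for video) would additionally have to be placed in a window by a real application.
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Player;

public class MediaPlayback {
    public static void main(String[] args) throws Exception {
        // A file or HTTP URL acts as a pull data source; an RTP locator such as
        // "rtp://224.2.3.4:22224/audio" acts as a push data source and hands
        // reception over to JMF's RTP session management.
        MediaLocator locator = new MediaLocator(args[0]);

        // createRealizedPlayer blocks until the Player has acquired the
        // resources and information it needs; the media can then be started.
        Player player = Manager.createRealizedPlayer(locator);
        player.start();
    }
}
The same few lines cover both local playback and RTP reception, which is precisely the uniformity the DataSource/Player abstractions are meant to provide.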
JMF RTP applications are often divided into RTP clients and RTP servers. RTP client applications are those that receive stream data from the network. Examples are multimedia conferencing applications, which need to receive a media stream from an RTP session and render it on the console. A telephone answering machine application is another example: it needs to receive a media stream from an RTP session and store it in a file. RTP server applications are those that transmit captured or stored media streams across the network. For example, in a conferencing application, a media stream might be captured from a video camera and sent out on one or more RTP sessions. The media streams might be encoded in multiple media formats and sent out on several RTP sessions for conferencing with heterogeneous receivers. Some applications are both RTP clients and servers. A session manager in JMF manages the RTP session. More specifically, the JMF RTP APIs:
1. Enable the development of media streaming and conferencing applications in Java.
2. Support media data reception and transmission using RTP and RTCP.
3. Support custom packetizer and depacketizer plug-ins through the JMF 2.0 plug-in architecture.
RTP media transmission and reception using JMF object components that implement the corresponding interfaces are shown in Figure 16 (a) and (b) respectively.
Figure 16: RTP transmission (a) and reception (b) data flow in JMF
As shown in Figure 16, a session manager in JMF is used to coordinate the reception and transmission of RTP streams from and to the network. In RTP, the association among a set of participants communicating with RTP constitutes an RTP media session. A session is defined by a network address plus a port pair for RTP and RTCP. In effect, a JMF session manager is a local representation of a distributed entity, i.e., the RTP session (see the previous discussion of RTP). Thus, the session manager also handles the RTCP control channel and supports RTCP for both senders and receivers. The JMF SessionManager interface defines methods that enable an application to initialize and start participating in a session, remove individual streams created by the application and close the entire session. Just like any other media content, Players and Processors are used to present and manipulate
RTP media streams (RTPStream) that have been captured using a capture DataSource or that have been stored to a file using a DataSink. The JMF RTP APIs also define several RTP-specific events to report on the state of the RTP session and streams. RTP events are handled using the standard Java event model.
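To illustrate the transmission path of Figure 16(a), the following sketch reads or captures media from a locator given on the command line, asks a Processor to produce RTP-ready output and hands that output to a DataSink addressed by an RTP locator. The destination address, port and media type in the rtp:// locator are placeholders, and the crude polling loop stands in for the ControllerListener-based state handling a production application would use.
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Processor;
import javax.media.DataSink;
import javax.media.protocol.ContentDescriptor;
import javax.media.protocol.DataSource;

public class RtpTransmitter {
    public static void main(String[] args) throws Exception {
        // Source to transmit: a capture locator or a media file URL.
        DataSource source = Manager.createDataSource(new MediaLocator(args[0]));

        // The Processor encodes and packetizes the media for RTP.
        Processor processor = Manager.createProcessor(source);
        processor.configure();
        waitForState(processor, Processor.Configured);
        processor.setContentDescriptor(new ContentDescriptor(ContentDescriptor.RAW_RTP));
        processor.realize();
        waitForState(processor, Processor.Realized);

        // The Processor's output DataSource is pushed onto the network by a
        // DataSink whose RTP locator names the destination session.
        DataSink sink = Manager.createDataSink(
                processor.getDataOutput(),
                new MediaLocator("rtp://224.2.3.4:22224/audio"));
        sink.open();
        sink.start();
        processor.start();
    }

    // Simplistic wait; real code listens for Controller transition events instead.
    private static void waitForState(Processor p, int state) throws InterruptedException {
        while (p.getState() < state) {
            Thread.sleep(100);
        }
    }
}
The reception side mirrors this flow: a session manager (or an rtp:// MediaLocator, as in the earlier playback sketch) delivers a DataSource that is handed to a Player for rendering or to a DataSink for storage.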
FUTURE TRENDS
The Internet will continue to be very dynamic, and it is somewhat dangerous to predict its future technological trends. In this section, we restrict ourselves to the enabling technologies that will affect the development and deployment of real-time multimedia applications, discussing high-bandwidth connections, Internet appliances, the new generation of protocols and high-level development environments. Fast Internet access and high-bandwidth connectivity will expand from business to residential communities. In the past few years, we have seen a rapid increase in the options for high-speed Internet access, and this trend will continue. These options include the penetration of ISDN and various Digital Subscriber Line (DSL) technologies. Cable TV infrastructure will be upgraded to enable shared-capacity Internet access at speeds of 1.2 Mbps to 27 Mbps. Wireless Internet access from terrestrial and satellite service providers is a new option for business and residential users. Internet appliances will evolve beyond today's primary device, the PC, to new types of appliances, such as Internet TVs and smart phones. These new Internet appliances are based on familiar consumer electronics that non-technical consumers will likely find less intimidating and easier to use than the PC. The new generation of Internet protocols for real-time multimedia, some of which were described in this chapter, will mostly become standardized or widely deployed. The Internet will evolve into an integrated-service Internet deployed at large scale. The development environments for real-time multimedia applications are in their infancy. Multimedia applications are often large and complex, and there has not been enough attention to systematic approaches for taming this complexity (McCanne, Brewer, Katz, Rowe et al., 1997). The most noticeable efforts at providing standard multimedia-networking middleware are the Java JMF and the CORBA MSF. These two frameworks provide higher-level programming interfaces (APIs) that hide the details and complexity of the underlying media network. They are immature, incomplete and still under development. It is expected that these frameworks will become promising, attractive environments in which new emerging Internet multimedia applications will be built in a standard and portable way.
CONCLUSION
The Internet has evolved from a provider of the simple TCP/IP best-effort service to an emerging integrated-service Internet. This development provides tremendous opportunities for building real-time multimedia applications over the Internet. In this chapter we have introduced the emerging Internet service models and presented the Internet Integrated Service architecture that supports these service models. The constituent real-time protocols of this architecture are the foundations and the critical support elements for building Internet real-time multimedia applications, and they have been described in some detail. Multimedia applications are often large and complex. The CORBA Media Streaming
Framework and the Java Media Framework are two emerging environments for implementing Internet multimedia. They provide applications with a set of APIs that hide the underlying details of the Internet real-time protocols. In this chapter, we provided a high-level overview of these two frameworks. Internet multimedia applications developed using them are expected to appear in the near future.
REFERENCES
Braden, R., Zhang, L., Berson, S. and Herzog, S. (1997). Resource ReSerVation Protocol (RSVP) - Version 1 Functional Specification. IETF, RFC 2205.
Braden, R., Clark, D. and Shenker, S. (1994). Integrated Services in the Internet Architecture: An Overview. IETF, RFC 1633, June.
Braden, R. and Zhang, L. (1997). Resource ReSerVation Protocol (RSVP) - Version 1 Message Processing Rules. IETF, RFC 2209, September.
Callas, J., Donnerhacke, L., Finney, H. and Thayer, R. (1998). OpenPGP Message Format. Internet Engineering Task Force, RFC 2440, November.
Clark, D. D. and Tennenhouse, D. L. (1990). Architectural considerations for a new generation of protocols. In SIGCOMM'90 Symposium, Computer Communications Review, ACM Press, 20(4), 200-208.
Crowcroft, J., Handley, M. and Wakeman, I. (1999). Internetworking Multimedia. Morgan Kaufmann Publishers.
Deering, S. E. (1991). Multicast Routing in a Datagram Internetwork. PhD thesis, Stanford University, December.
Deering, S. E. and Cheriton, D. R. (1990). Multicast routing in datagram Internetworks and extended LANs. ACM Transactions on Computer Systems, 8(5), 85-110.
Dept. of Commerce. (1998). The Emerging Digital Economy. United States, April.
Floyd, S., Jacobson, V., Liu, C., McCanne, S. and Zhang, L. (1997). A reliable multicast framework for light-weight sessions and application level framing. IEEE/ACM Transactions on Networking, 5(6), 784-803.
Handley, M. and Jacobson, V. (1998). SDP: Session Description Protocol. Internet Engineering Task Force, RFC 2327, April.
Handley, M., Perkins, C. and Whelan, E. (1999). Session Announcement Protocol. Internet Engineering Task Force, Internet-Draft.
Handley, M., Schulzrinne, H., Schooler, E. and Rosenberg, J. (2000). SIP: Session Initiation Protocol. Internet Engineering Task Force, Internet-Draft.
Hofmann, M. (1996). A generic concept for large-scale multicast. In Proceedings of the International Zurich Seminar on Digital Communications (IZS'96). Springer-Verlag, February.
Housley, R. (1999). Cryptographic Message Syntax. Internet Engineering Task Force, RFC 2630, June.
ITU. (1998). Packet-Based Multimedia Communication Systems, Recommendation H.323. Telecommunication Standardization Sector of ITU, Geneva, Switzerland, February.
Jacobson, V. and Casner, S. (1998). Compressing IP/UDP/RTP Headers for Low-Speed Serial Links. Internet Engineering Task Force, Internet-Draft, December.
Mayer, D. (1998). Administratively Scoped IP Multicast. Internet Engineering Task Force, RFC 2365, July.
McCanne, S., Brewer, E., Katz, R. and Rowe, L. (1997). Toward a common infrastructure
Building Internet Multimedia Applications 53
for multimedia-networking middleware. In Proceedings of the 7th Int’l Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSAV’97), At. Louis, Missouri, May. OMG. (1998). Control and management of audio/video streams. In CORBAtelecoms: Telecommunications Domain Specifications, Chapter 2, Object Management Group, June. OMG. (1999), The Common Object Request Broker: Architecture and Specification (Revision 2.3), Object Management Group (OMG), Framingham, MA., June. Obraczka, K. (1998). Multicast transport protocols: A survey and taxonomy. IEEE Communications Magazine, 36(1), January. Paul, S., Sabnani, K.K, Lin, J. and Bhattacharyya, S. (1997). Reliable multicast transport protocol (RMTP). IEEE JSAC, Special issue on Network Support for Multipoint Communication. Schmidt, D.(2000). Distributed Object Computing with CORBA Middleware. Retrieved on the World Wide Web: http://www.cs.wustl.edu/~schmidt/corba.html. Schulzrinne, H. (1995). Internet services: From electronic mail to real-time multimedia. In Proceedings of KIVS’95, 21-34, Chemnitz, Germany, February, Springer Verlag. Schulzrinne, H., Casner, S., Frederick, R. and Jacobson, J. (2000). RTP: A Transport Protocol for Real-Time Applications. Internet Engineering Task Force, Internet Draft, 14 July. Schulzrinne, H. and Casner, S. L. (1999). RTP Profile for Audio and Video Conferences with Minimum Control. Internet Engineering Task Force, Internet Draft, October 21. Schulzrinne, H., Rao, A. and Lanphier, R. (1998). Real-time Streaming Protocol (RTSP). Internet Engineering Task Force, Network Working Group, RFC 2326, April. Sun Microsystems. (1999). Java Media Framework API Guide, JMF 2.0 FCS, November 19. Vinoski, S. (1997). CORBA: Integrating diverse applications within distributed heterogeneous environments. IEEE Communications Magazine, 14(2), February. Yang, Z. and Duddy K. (1996). CORBA: A platform for distributed object computing. ACM Operating Systems Review, 30(2), 4-31, April. Yang, Z., Sun, Z., Sattar, A. and Yang, Y. (1999). On clock-based distributed multimedia synchronization. In Proceedings of the Sixth International Conference on Distributed Multimedia Systems (DMS’99), Aizu, Japan, 26-30, IEEE CS Press. Zhang, L., Deering, S., Estrin, D., Shenker, S. and Zappala, D. (1993). RSVP: A new Resource ReSerVation Protocol. IEEE Network, 7, 8-18, September.
Chapter III
The Design and Performance of a CORBA Audio/Video Streaming Service
Naga Surendran and Yamuna Krishamurthy
Washington University-St. Louis, USA
Douglas C. Schmidt
University of California, Irvine, USA
INTRODUCTION
Advances in network bandwidth and CPU processing power have enabled the emergence of multimedia applications, such as teleconferencing or streaming video, that exhibit significantly more diverse and stringent quality-of-service (QoS) requirements than traditional data-oriented applications, such as file transfer or email. For instance, popular Internet-based streaming mechanisms, such as Realvideo (RealNetworks, 1998) and Vxtreme (Vxtreme, 1998), allow suppliers to transmit continuous streams of audio and video packets to consumers. Likewise, non-continuous media applications, such as medical imaging servers (Hu et al., 1998) and network management agents (Schmidt and Suda, 1994), employ streaming to transfer bulk data efficiently from suppliers to consumers. However, many distributed multimedia applications rely on custom and/or proprietary low-level stream establishment and signaling mechanisms to manage and control the presentation of multimedia content. These types of applications run the risk of becoming obsolete as new protocols and services are developed (Huard and Lazar, 1998). Fortunately, there is a general trend to move from programming custom applications manually to integrating applications using reusable components based on open distributed object computing (DOC) middleware, such as CORBA (Object Management Group, 1999), DCOM (Box, 1997), and Java RMI (Wollrath et al., 1996).
Although DOC middleware is well-suited to handle request/response interactions among client/server applications, the stringent QoS requirements of multimedia applications have historically precluded DOC middleware from being used as their data transfer mechanism (Pyarali et al., 1996). For instance, inefficient CORBA Internet Inter-ORB Protocol (IIOP) (Gokhale and Schmidt, 1999) implementations perform excessive data copying and memory allocation per-request, which increases packet latency (Gokhale and Schmidt, 1998). Likewise, inefficient marshaling/demarshaling in DOC middleware decreases streaming data throughput (Gokhale and Schmidt, 1996).
As the performance of DOC middleware steadily improves, however, the stream establishment and control components of distributed multimedia applications can benefit greatly from the portability and flexibility provided by DOC middleware. Therefore, to facilitate the development of standards-based distributed multimedia applications, the Object Management Group (OMG) has defined the CORBA Audio/Video (A/V) Streaming Service specification (OMG, 1997a), which defines common interfaces and semantics necessary to control and manage A/V streams.
The CORBA A/V Streaming Service specification defines an architecture for implementing open distributed multimedia streaming applications. This architecture integrates (1) well-defined modules, interfaces and semantics for stream establishment and control with (2) efficient data transfer protocols for multimedia data transmission. In addition to defining standard stream establishment and control mechanisms, the CORBA A/V Streaming Service specification allows distributed multimedia applications to leverage the inherent portability and flexibility benefits provided by standards-based DOC middleware.
Our prior research on CORBA middleware has explored the efficiency, predictability and scalability aspects of ORB endsystem design, including static (Schmidt et al., 1998a) and dynamic (Gill et al., 2001) scheduling, I/O subsystem (Kuhns et al., 1999) and pluggable ORB transport protocol (O'Ryan et al., 2000) integration, synchronous (Schmidt et al., 2001) and asynchronous (Arulanthu et al., 2000) ORB Core architectures, event processing (Harrison et al., 1997), optimization principle patterns for ORB performance (Pyarali et al., 1999), and the performance of various commercial and research ORBs (Gokhale and Schmidt, 1996; Schmidt et al., 1998b) over high-speed ATM networks. This chapter focuses on another important topic in ORB endsystem research: the design and performance of the CORBA A/V Streaming Service specification.
The vehicle for our research on the CORBA A/V Streaming Service is TAO (Schmidt et al., 1998a). TAO is a high-performance, real-time Object Request Broker (ORB) endsystem targeted for applications with deterministic and statistical QoS requirements, as well as best-effort requirements. The TAO ORB endsystem contains the network interface, OS I/O subsystem, communication protocol and CORBA-compliant middleware components and services shown in Figure 1. Figure 1 also illustrates how TAO's A/V Streaming Service is built over the TAO ORB subsystem. TAO's real-time I/O (RIO) (Kuhns et al., 2001) subsystem runs in the OS kernel and sends/receives requests to/from clients across high-speed, QoS-enabled networks, such as ATM or IP Integrated (IETF, 2000b) and Differentiated (IETF, 2000a) Services.
TAO's ORB components, such as its ORB Core, Object Adapter, stubs/skeletons and servants, run in user-space and handle connection management, data transfer, endpoint and request demultiplexing, concurrency, (de)marshaling and application operation processing. TAO's A/V Streaming Service is implemented atop its user-space ORB components. At the heart of TAO's A/V Streaming Service is its pluggable A/V protocol framework. This framework provides the "glue" that integrates TAO's A/V Streaming Service with the underlying I/O subsystem protocols and network interfaces.
Figure 1: Layering of TAO's A/V Streaming Service atop the TAO ORB endsystem
The remainder of this chapter is organized as follows: first, we illustrate how we applied patterns to develop and optimize the CORBA A/V Streaming Service to support the standard OMG interfaces; second, we describe two case studies that illustrate how to develop distributed multimedia applications using TAO's A/V Streaming Service and its pluggable A/V protocol framework; third, we present the results of empirical benchmarks we conducted to illustrate the performance of TAO's A/V Streaming Service; fourth, we outline our plans for future work and finally present concluding remarks. For completeness, we include three appendices that outline the intents of all the patterns applied in TAO's A/V Streaming Service, summarize the CORBA reference model, and illustrate the various point-to-point and point-to-multipoint stream and flow endpoint bindings implemented in TAO's A/V Streaming Service.
THE DESIGN OF TAO'S AUDIO/VIDEO STREAMING SERVICE
This section first presents an overview of the key architectural components in the CORBA A/V Streaming Service. We then summarize the key design challenges faced when developing TAO's CORBA A/V Streaming Service and outline how we applied patterns (Gamma et al., 1995; Buschmann et al., 1996; Schmidt et al., 2000) to resolve these challenges. Finally, we describe the design and performance of the pluggable A/V protocol framework integrated into TAO's A/V Streaming Service.
Overview of the CORBA Audio/Video Streaming Service Specification
The CORBA Audio/Video (A/V) Streaming Service specification (OMG, 1997a) defines an architectural model and standard OMG IDL interfaces that can be used to build interoperable distributed multimedia streaming applications. Below, we outline the architectural components and goals of the CORBA A/V Streaming Service specification.
Synopsis of Components in the CORBA A/V Streaming Service
The CORBA A/V Streaming Service specification defines a flow as a continuous transfer of media between two multimedia devices. Each flow is terminated by a flow endpoint. A set of flows, such as an audio flow, a video flow and a data flow, constitutes a stream, which is terminated by a stream endpoint. A stream endpoint can have multiple flow endpoints. Figure 2 shows a multimedia stream, which is represented as a flow between two flow endpoints. One flow endpoint acts as a source of the data and the other acts as a sink. Note that the control and signaling operations pass through the GIOP/IIOP path of the ORB, demarcated by the dashed box. In contrast, the data stream uses out-of-band stream(s), which can be implemented using communication protocols that are more suitable for multimedia streaming than IIOP. Maintaining this separation of concerns is crucial to meeting end-to-end QoS requirements.
Figure 2: CORBA A/V Streaming Service architecture
Each stream endpoint consists of three logical entities: (1) a stream interface control object that exports an IDL interface, (2) a data source or sink and (3) a stream adaptor that is responsible for sending and receiving frames over a network. Control and Management objects are responsible for the establishment and control of streams. The CORBA A/V Streaming Service specification defines the interfaces and interactions of the Stream Interface Control Objects and the Control and Management objects. The section CORBA A/V Streaming Service Components describes the various components in Figure 2 in detail.
Synopsis of Goals for the CORBA A/V Streaming Service
The goals of the CORBA A/V Streaming Service include the following:
• Standardized stream establishment and control protocols. Using these protocols, consumers and suppliers can be developed independently, while still being able to establish streams with one another.
• Support for multiple data transfer protocols. The CORBA A/V Streaming Service architecture separates its stream establishment and control protocols from its data transfer protocols, such as TCP, UDP, RTP or ATM, thereby allowing applications to select the most suitable data transfer protocols for a particular network environment or set of application requirements.
• Provide interoperability of flows. A flow specification is passed between two stream endpoints to convey per-flow information, such as format, network host name and address, and flow protocol, required to bind or communicate between two multimedia devices.
• Support many types of sources and sinks. Common stream sources include video-on-demand servers, video cameras attached to a network or stock quote servers. Common sinks include video-on-demand clients, display devices attached to a network or stock quote clients.
Overview of Design Challenges and Resolutions
Below, we present an overview of the key challenges faced when we developed TAO's CORBA A/V Streaming Service and outline how we applied patterns (Gamma et al., 1995; Schmidt et al., 2000) to resolve these challenges. Later sections then examine these design and optimization pattern techniques in more depth. Appendix 1 outlines the intents of all the patterns applied in TAO's A/V Streaming Service.
Flexibility in stream endpoint creation strategies. The CORBA A/V Streaming Service specification defines the interfaces and roles of stream components. Many performance-sensitive multimedia applications require fine-grained control over the strategies governing the creation of their stream components. For instance, our past studies of Web server performance (Hu et al., 1997; Hu et al., 1998) motivate the need to support adaptive concurrency strategies to develop efficient and scalable streaming applications. In the context of our A/V Streaming Service, we determined that the supplier-side of our MPEG case-study application (described in the section Case Study 1: An MPEG A/V Streaming Application) required a process-based concurrency strategy to maximize stream throughput by allowing parallel processing of separate streams. Other types of applications required different implementations, however. For example, the consumer-side of our
MPEG application benefited from the creation of reactive (Schmidt, 1995) consumers that contain all related endpoints within a single process. To achieve a high degree of flexibility, therefore, TAO's A/V Streaming Service design decouples the behavior of stream components from the strategies governing their creation. We achieved this decoupling via the Factory Method and Abstract Factory patterns (Gamma et al., 1995).
Flexibility in data transfer protocol. A CORBA A/V Streaming Service implementation may need to select from a variety of transfer protocols. For instance, an Internet-based streaming application, such as Realvideo (RealNetworks, 1998), may use the UDP protocol, whereas a local intranet video-conferencing tool might prefer the QoS features offered by native high-speed ATM protocols. Likewise, RTP (Schulzrinne et al., 1994) is gaining acceptance as a transfer protocol for streaming audio and video data over the Internet. Thus, it is essential that an A/V Streaming Service support a range of data transfer protocols dynamically. The CORBA A/V Streaming Service defines a simple specialized protocol, called the Simple Flow Protocol (SFP), which makes no assumptions about the communication protocols used for data streaming and provides an architecture-independent flow content transfer. Consequently, the stream establishment components in TAO's A/V Streaming Service provide flexible mechanisms that allow applications to define and use multiple network programming APIs, such as sockets and TLI, and multiple communication protocols, such as TCP, UDP, RTP or ATM. Therefore, another design challenge we faced was to define stream establishment components that can work with a variety of data transfer protocols. To resolve this challenge, we applied the Strategy pattern (Gamma et al., 1995).
Providing a uniform API for different flow protocols. The CORBA A/V Streaming Service specification defines the flow specification syntax that can be used for connection establishment. It defines the protocol names and syntax for specifying the flow and data transfer protocol information, but it does not define any interfaces for protocol implementations. We resolved this omission with our pluggable A/V protocol framework (described in the section Overview of TAO's Pluggable A/V Protocol Framework) using design patterns, described in Appendix 1, such as Layers (Buschmann et al., 1996), Acceptor-Connector (Schmidt et al., 2000), Facade and Abstract Factory (Gamma et al., 1995). Moreover, TAO's A/V Streaming Service defines a uniform API for the different flow protocols, such as RTP and SFP, that can handle variations using the standard CORBA Policy framework.
Flexibility in stream control interfaces. A/V streaming middleware should provide flexible mechanisms that allow developers to define and use different operations for different streams. For instance, a video application typically supports a variety of operations, such as play, stop and rewind. Conversely, a stream in a stock quote application may support other operations, such as start and stop. Since the operations provided by the stream are application-defined, it is useful for the control logic component in streaming middleware to be flexible and adaptive. Therefore, another design challenge facing designers of CORBA A/V Streaming Services is to allow applications the flexibility to define their own stream control interfaces and access these interfaces in an extensible, type-safe manner.
In TAO's A/V Streaming Service implementation, we used the Extension Interface pattern (Schmidt et al., 2000) to resolve this challenge.
Flexibility in managing the states of stream suppliers and consumers. The data transfer component of a streaming application often must change behavior depending on the current state of the system.
For instance, invoking the play operation on the stream control interface of a video supplier may cause it to enter the PLAYING state. Likewise, sending it the stop operation may cause it to transition to the STOPPED state. More complex state machines can result from additional operations, such as rewind and fast-forward. Thus, an important design challenge for developers is designing flexible applications whose states can be extended. Moreover, in each state, the behavior of supplier/consumer applications, and of the A/V Streaming Service itself, must be well-defined. To address this issue, we applied the State pattern (Gamma et al., 1995).
Providing a uniform interface for full and light profiles. To allow developers and applications to control and manage flows and streams, the CORBA A/V Streaming Service specification exposes certain of their IDL interfaces. There are two levels of exposure defined by the CORBA A/V Service: (1) the light profile, where only the stream and stream endpoint interfaces are exposed and the flow interfaces are not exposed, and (2) the full profile, where flow interfaces are also exposed. This two-level design provides more flexibility and granularity of control to applications and developers since flow interfaces are CORBA interfaces and are not locality constrained. Therefore, the design challenge was to define a uniform interface for both the light and full profiles to make use of TAO's pluggable A/V protocol framework. We resolved this challenge by deriving the full and light profile endpoints from a base interface and by generating the flow specification using the Forward_FlowSpec_Entry and Reverse_FlowSpec_Entry classes.
Providing multipoint-to-multipoint bindings. Different multimedia applications require different stream endpoint bindings. For example, video-on-demand applications require point-to-point bindings between consumer and supplier endpoints, whereas video-conferencing applications require multipoint-to-multipoint bindings. The CORBA A/V specification defines a point-to-multipoint binding, but not a multipoint-to-multipoint binding, which is left as a responsibility of implementers. Thus, we faced the design challenge of providing multipoint-to-multipoint bindings for applications that use multicast protocols provided by the underlying network. We have provided a solution based on IP multicast and used the Adapter pattern (Gamma et al., 1995) to adapt it to ATM's multicast model. The Adapter pattern is used to allow multiple components to work together, even if they were not originally designed to work together. This adaptation was done by having TAO's A/V Streaming Service set source IDs for the flow producers so that the flow consumers can distinguish the sources. We added support in both SFP and RTP to allow them to be adapted for such bindings. Our implementation of Vic, described in the section Case Study 2: The Vic Video-Conferencing Application, uses TAO's A/V Streaming Service multipoint-to-multipoint binding and its RTP adapter.
CORBA A/V Streaming Service Components
The CORBA A/V Streaming Service specification defines a set of standard IDL interfaces that can be implemented to provide a reusable framework for distributed multimedia streaming applications. Figure 3 illustrates the key components of the CORBA A/V Streaming Service. This subsection describes the design of TAO's A/V Streaming Service components shown in Figure 3. The corresponding IDL interface name for each role is provided in brackets. In addition, we illustrate how TAO provides solutions to the design challenges outlined in the previous section.
Figure 3: A/V Streaming Service components
Multimedia Device Factory (MMDevice)
An MMDevice abstracts the behavior of a multimedia device. The actual device can be physical, such as a video microphone or speaker, or logical, such as a program that reads video clips from a file or a database that contains information about stock prices. There is typically one MMDevice per physical or logical device. For instance, a particular device might support MPEG-1 (ISO, 1993) compression or ULAW audio (SUN Microsystems, 1992). Such parameters are termed "properties" of the MMDevice. Properties can be associated with the MMDevice using the CORBA Property Service (OMG, 1996), as shown in Figure 4.
An MMDevice is also an endpoint factory that creates new endpoints for new stream connections. Each endpoint consists of a pair of objects: (1) a virtual device (VDev), which encapsulates the device-specific parameters of the connection and (2) the StreamEndpoint, which encapsulates the data transfer-specific parameters of the connection. The MMDevice component also encapsulates the implementation of strategies that govern the creation of the VDev and StreamEndpoint objects. For instance, the implementation of MMDevice in TAO's A/V Streaming Service provides the following two concurrency strategies:
• Process-based strategy. The process-based concurrency strategy creates new virtual devices and stream endpoints in a new process, as shown in Figure 5. This strategy is useful for applications that create a separate process to handle each new endpoint. For instance, the supplier in our MPEG player application described in the section Case Study 1: An MPEG A/V Streaming Application creates separate processes to stream the audio and video data to the consumer concurrently.
• Reactive strategy. In this strategy, endpoint objects for each new stream are created in the same process as the factory, as shown in Figure 6. Thus, a single process handles all the simultaneous connections reactively (Schmidt, 1995). This strategy is useful for applications that dedicate one process to control multiple streams. For instance, to minimize synchronization overhead, the consumer of the MPEG A/V player application uses this strategy to create the audio and video endpoints in the same process.
In TAO's A/V Streaming Service, the MMDevice uses the Abstract Factory pattern (Gamma et al., 1995) to decouple (1) the creation strategy of the stream endpoint and virtual device from (2) the concrete classes that define it. Thus, applications that use the MMDevice can subclass both the strategies described above, as well as the StreamEndpoint and the VDev that are created. The Abstract Factory pattern allows applications to customize the concurrency strategies to suit their needs. For instance, by default, the reactive strategy creates new stream endpoints using dynamic allocation, e.g., via the new operator in C++. Applications can override this behavior via subclassing so they can allocate stream endpoints using other allocation techniques, such as thread-specific storage (Schmidt et al., 2000) or special frame buffers.
Figure 4: Multimedia device factory
Figure 5: MMDevice process-based concurrency strategy
Figure 6: MMDevice reactive concurrency strategy
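To make this decoupling concrete, the sketch below models how an endpoint factory might delegate creation to a pluggable strategy. It is an illustrative stand-in rather than TAO's actual API: the class names Endpoint_Strategy, Reactive_Strategy and MMDevice_Factory, and their method signatures, are hypothetical and only mirror the roles (MMDevice, StreamEndpoint, VDev) described above.

#include <memory>

// Simplified stand-ins for the roles described in the text.
struct StreamEndpoint { /* data transfer-specific parameters of a stream */ };
struct VDev { /* device-specific parameters of a stream */ };

// Abstract factory: decides how and where endpoint objects are created.
class Endpoint_Strategy
{
public:
  virtual ~Endpoint_Strategy () = default;
  virtual std::unique_ptr<StreamEndpoint> create_endpoint () = 0;
  virtual std::unique_ptr<VDev> create_vdev () = 0;
};

// Reactive strategy: endpoints are created in the factory's own process,
// here via plain dynamic allocation (the default behavior described above).
class Reactive_Strategy : public Endpoint_Strategy
{
public:
  std::unique_ptr<StreamEndpoint> create_endpoint () override
  { return std::make_unique<StreamEndpoint> (); }

  std::unique_ptr<VDev> create_vdev () override
  { return std::make_unique<VDev> (); }
};

// An MMDevice-like factory delegates creation to whatever strategy it was
// configured with, so applications can subclass Endpoint_Strategy to use
// a separate process, thread-specific storage, special frame buffers, etc.
class MMDevice_Factory
{
public:
  explicit MMDevice_Factory (Endpoint_Strategy &strategy)
    : strategy_ (strategy) {}

  void create_endpoint_pair ()
  {
    auto endpoint = strategy_.create_endpoint ();
    auto vdev = strategy_.create_vdev ();
    // ... associate endpoint and vdev with the new stream ...
  }

private:
  Endpoint_Strategy &strategy_;
};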
Virtual Device (VDev)
The virtual device (VDev) component is created by the MMDevice factory in response to a request for a new stream connection. There is one VDev per stream. The VDev is used by an application to define its response to configure requests. For instance, if a consumer of a stream wants to use the MPEG video format, it can invoke the configure operation on the supplier VDev, as shown in Figure 7.
Stream establishment is a mechanism defined by the CORBA A/V Streaming Service specification to permit the negotiation of QoS parameters via properties. Properties are name-value pairs, i.e., they have a string name and a corresponding value. The properties used by the A/V Streaming Service are implemented using the CORBA Property Service (OMG, 1996). The CORBA A/V Streaming Service specification specifies the names of the common properties used by the VDev objects. For instance, the property currformat is a string that contains the current encoding format, e.g., "MPEG." During stream establishment, each VDev can use the get_property_value operation on its peer VDev to ensure that the peer uses the same encoding format. When a new pair of VDev objects is created, each VDev uses the configure operation on its peer to set the stream configuration parameters. If the negotiation fails, the stream can be torn down and its resources released immediately. The section Interaction Between Components in the CORBA Audio/Video Streaming Service Model describes the CORBA A/V Streaming Service stream establishment protocol in detail.
Figure 7: Virtual device
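The fragment below sketches this peer negotiation from the consumer side. It is not the CORBA A/V IDL: the VDev_Peer interface and its string-based signatures are simplified assumptions that stand in for the configure and get_property_value operations and the currformat property named above.

#include <stdexcept>
#include <string>

// Simplified stand-in for the peer VDev object reference.
class VDev_Peer
{
public:
  virtual ~VDev_Peer () = default;
  virtual std::string get_property_value (const std::string &name) = 0;
  virtual bool configure (const std::string &name, const std::string &value) = 0;
};

// Called when a new pair of VDev objects has been created: check the peer's
// current encoding format and try to reconfigure it if the formats differ.
void negotiate_format (VDev_Peer &peer, const std::string &required_format)
{
  if (peer.get_property_value ("currformat") != required_format
      && !peer.configure ("currformat", required_format))
    {
      // Negotiation failed: the stream can be torn down and its
      // resources released immediately, as described above.
      throw std::runtime_error ("stream configuration failed");
    }
}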
Media Controller (MediaCtrl)
The Media Controller (MediaCtrl) is an IDL interface that defines operations for controlling a stream. A MediaCtrl interface is not defined by the CORBA A/V Streaming Service specification. Instead, it is defined by multimedia application developers to support operations for a specific stream, such as the following IDL interface for a video service:

interface video_media_control
{
  void select_video (in string name_of_movie);
  void play ();
  void rewind (in short num_frames);
  void pause ();
  void stop ();
};

The CORBA A/V Streaming Service provides developers with the flexibility to associate an application-defined MediaCtrl interface with a stream. Thus, the A/V Streaming Service can be used with an extensible variety of streams, such as audio and video, as well as non-multimedia streams, such as a stream of stock quotes.
The VDev object represents device-specific parameters, such as compression format or frame rate. Likewise, the MediaCtrl interface is device-specific since different devices support different control interfaces. Therefore, the MediaCtrl is associated with the VDev object using the Property Service (OMG, 1996). There is typically one MediaCtrl per stream. In some cases, however, application developers may choose to control multiple streams using the same MediaCtrl. For instance, the video and audio streams for a movie might have a common MediaCtrl to enable a single CORBA operation, such as play, to start both audio and video playback simultaneously.
Stream Controller (StreamCtrl)
The Stream Controller (StreamCtrl) interface abstracts a continuous media transfer between virtual devices (VDevs). It supports operations to bind two MMDevice objects together using a stream. Thus, the StreamCtrl component binds the supplier and consumer of a stream, e.g., a video camera and a display. It is the key participant in the Stream Establishment protocol described in the section Interaction Between Components in the CORBA Audio/Video Streaming Service Model. In general, a StreamCtrl object is instantiated by an application developer. There is one StreamCtrl per stream, i.e., per consumer/supplier pair.
Stream Endpoint (StreamEndpoint)
The StreamEndpoint object is created by an MMDevice in response to a request for a new stream. There is one StreamEndpoint per stream. A StreamEndpoint encapsulates the data transfer-specific parameters of a stream. For instance, a stream that uses UDP as its data transfer protocol will identify its StreamEndpoint via a host name and port number.
In TAO's A/V Streaming Service, the StreamEndpoint implementation uses patterns, such as Double Dispatching and Template Method (Gamma et al., 1995), described in Appendix 2, to allow applications to define and exchange data transfer-level parameters flexibly. This interaction is shown in Figure 8 and occurs in the following steps:
1. An A/V streaming application can inherit from the StreamEndpoint class and override the operation handle_connection_requested in the new subclass TCP_StreamEndpoint.
2. When binding two MMDevices, the StreamCtrl invokes connect on one StreamEndpoint with the peer TCP_StreamEndpoint as a parameter.
3. The StreamEndpoint then requests the TCP_StreamEndpoint to establish the connection for this stream using the network addresses it is listening on.
4. The virtual handle_connection_requested operation of the TCP_StreamEndpoint is invoked and connects with the listening network address on the peer side.
Thus, by applying patterns, the StreamEndpoint design allows each application to configure its own data transfer protocol, while reusing the generic stream establishment control logic in TAO's A/V Streaming Service.
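The sketch below illustrates the double-dispatch idea behind steps 1-4: a generic endpoint drives connection setup, while a protocol-specific subclass supplies the handle_connection_requested hook. The class and method signatures are simplified assumptions modeled on the names used in the text, not TAO's exact interfaces.

#include <iostream>
#include <string>

// Generic endpoint: owns the protocol-independent stream establishment logic.
class StreamEndpoint_Base
{
public:
  virtual ~StreamEndpoint_Base () = default;

  // Invoked by the stream controller when binding two endpoints; the peer's
  // listening address is passed to the protocol-specific hook (Template Method).
  void connect (StreamEndpoint_Base &peer)
  {
    const std::string address = peer.listen_address ();
    this->handle_connection_requested (address);
  }

  virtual std::string listen_address () const = 0;

protected:
  // Data transfer-specific hook overridden by each protocol subclass.
  virtual void handle_connection_requested (const std::string &peer_address) = 0;
};

// A TCP-flavored endpoint supplies the data transfer-specific behavior.
class TCP_StreamEndpoint : public StreamEndpoint_Base
{
public:
  std::string listen_address () const override
  { return "192.168.1.10:9000"; }   // hypothetical listening endpoint

protected:
  void handle_connection_requested (const std::string &peer_address) override
  {
    // A real implementation would open a TCP connection to the peer here.
    std::cout << "connecting to " << peer_address << "\n";
  }
};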
Figure 8: Interaction between StreamEndpoint and a multimedia application
Interaction Between Components in the CORBA Audio/Video Streaming Service Model
The preceding discussion described the structure of components that constitute the CORBA A/V Streaming Service. Below, we describe how these components interact to provide two key A/V Streaming Service features: stream establishment and flexible stream control.
Stream Establishment
Stream establishment is the process of binding two peers who need to communicate via a stream. The CORBA A/V Streaming Service specification defines a standard protocol to establish a binding between streams. Several A/V Streaming Service components are involved in stream establishment.
A key motivation for providing an elaborate stream establishment protocol is to allow components to be configured independently. This design allows the stream establishment protocol to remain standard, while still providing sufficient hooks for multimedia application developers to customize the process for a specific set of requirements. For instance, an MMDevice can be configured to use one of several concurrency strategies to create stream endpoints. Thus, at each stage of the stream establishment process, individual components can be configured to implement desired policies.
The CORBA A/V Streaming Service specification identifies two peers in stream establishment, which are known as the "A" party and the "B" party. These terms define complementary relationships, i.e., a stream always has an A party at one end and a B party at the other. The A party may be the sink, i.e., the consumer, of a video stream, whereas the B party may be the source, i.e., the supplier, of a video stream, and vice versa. Note that the CORBA A/V Streaming Service specification defines two distinct IDL interfaces for the A and B party endpoints. Hence, for a given stream, there will be two distinct types for the supplier and the consumer. Thus, the CORBA A/V Streaming Service specification ensures that the complementary relationship between suppliers and consumers is type-safe. An exception will be raised if a supplier tries to establish a stream with another supplier accidentally.
Stream establishment in TAO's A/V Streaming Service occurs in several steps, as illustrated in Figure 9. This figure shows a stream controller (aStreamCtrl) binding the A party together with the B party of a stream. The stream controller need not be collocated with either end of a stream.
Figure 9: Stream establishment protocol in the A/V Streaming Service
To simplify the example, however, we assume that the controller is collocated with the A party, and is called the aStreamCtrl. Each step shown in Figure 9 is explained below:
1. The aStreamCtrl binds two Multimedia Device (MMDevice) objects together: Application developers invoke the bind_devs operation on aStreamCtrl. They provide the controller with the object references of two MMDevice objects. These objects are factories that create the two StreamEndpoints of the new stream.
2. Stream Endpoint creation: In this step, aStreamCtrl requests the MMDevice objects, i.e., aMMDevice and bMMDevice, to create the StreamEndpoints and VDev objects. The aStreamCtrl invokes create_A and create_B operations on the two MMDevice objects. These operations request them to create A_Endpoint and B_Endpoint endpoints, respectively.
3. VDev configuration: After the two peer VDev objects have been created, they can use the configure operation to exchange device-level configuration parameters. For instance, these parameters can be used to designate the video format and compression technique used for subsequent stream transfers.
4. Stream setup: In this step, aStreamCtrl invokes the connect operation on the A_Endpoint. This operation instructs the A_Endpoint to initiate a connection with its peer. The A_Endpoint initializes its data transfer endpoints in response to this operation. In TAO's A/V Streaming Service, applications can customize this behavior using the Double Dispatch (Gamma et al., 1995) pattern.
5. Stream Establishment: In this step, the A_Endpoint invokes the request_connection operation on its peer endpoint. The A_Endpoint passes its network endpoint parameters, e.g., hostname and port number, as parameters to this operation. When the B_Endpoint receives the request_connection operation, it initializes its end of the data transfer connection. It subsequently connects to the data transfer endpoint passed to it by the A_Endpoint.
After completing these five stream establishment protocol steps, a data transfer-level stream is established between the two endpoints of the stream. Later, we will describe how the Media Controller (MediaCtrl) can control an established stream, e.g., by starting or stopping the stream.
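From an application's perspective, the five steps above are triggered by a single call on the stream controller. The fragment below sketches that call; the declarations are minimal stand-ins for the StreamCtrl and MMDevice roles, and the real bind_devs operation also takes QoS and flow-specification arguments that are omitted here for brevity.

// Minimal stand-in for one multimedia device's endpoint factory.
class MMDevice { /* endpoint factory for one multimedia device */ };

// Minimal stand-in for the stream controller.
class StreamCtrl
{
public:
  // Kicks off endpoint creation, VDev configuration and stream setup
  // between the two devices (steps 1-5 above).
  bool bind_devs (MMDevice & /* a_party */, MMDevice & /* b_party */)
  {
    // ... create_A / create_B, configure, connect, request_connection ...
    return true;
  }
};

int main ()
{
  MMDevice consumer_dev;   // "A" party, e.g., the video sink
  MMDevice supplier_dev;   // "B" party, e.g., the video source

  StreamCtrl ctrl;
  return ctrl.bind_devs (consumer_dev, supplier_dev) ? 0 : 1;
}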
Stream Control
Each MMDevice endpoint factory can be configured with an application-defined MediaCtrl interface, as described above. Each stream has one MediaCtrl and every MediaCtrl controls one stream. Thus, if a particular movie has two streams, one for audio and the other for video, it will have two MediaCtrls. The MediaCtrl applies the Extension Interface pattern (Schmidt et al., 2000) outlined in Appendix 1.
After a stream has been established by the stream controller, applications can obtain object references to their MediaCtrls from their VDev. These object references control the flow of data through the stream. For instance, a video stream might support certain operations, such as play, rewind and stop, and be used as shown below:

// The Audio/Video Streaming Service invokes this application-defined
// operation to give the application a reference to the media
// controller for the stream.
void
Video_Client_VDev::set_media_ctrl (CORBA::Object_ptr media_ctrl,
                                   CORBA::Environment &env)
{
  // "Narrow" the CORBA::Object pointer into a media controller for the
  // video stream.
  this->video_control_ = Video_Control::_narrow (media_ctrl);
}

The video control interface can be used to control the stream, as follows:

// Select the video to watch.
this->video_control_->select_video ("gandhi");

// Start playing the video stream.
this->video_control_->play ();

// Stop the video.
this->video_control_->stop ();

// Rewind the video 100 frames.
this->video_control_->rewind (100);

When binding two multimedia devices, a flow specification is passed between the two StreamEndpoints to convey per-flow information. A flow specification represents key aspects of a flow, such as its name, format, the flow protocol being used, and the network name and address. A flow specification string is analogous to an interoperable object reference (IOR) in the CORBA object model. The syntax for the interoperable flow specifications is shown in Figure 10. Standardizing the flow specifications ensures that two different StreamEndpoints from two different implementations can interoperate.
There are two different flow specifications, depending on the direction in which the flowspec is traveling. If it is from the A party's StreamEndpoint to the B party's StreamEndpoint, then it is a "forward flowspec"; the opposite direction is the "reverse flowspec." TAO's CORBA A/V Streaming Service implementation defines two classes, Forward_FlowSpec_Entry and Reverse_FlowSpec_Entry, that allow multimedia applications to construct the flow specification string from their components without worrying about the syntactic details.
Figure 10: Flow specification
For example, the entry takes the address as both an INET_Addr and a string and provides convenient parsing utilities for strings.
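As a rough illustration of what such an entry class buys an application, the sketch below collects the per-flow fields named above (flow name, format, flow protocol, data transfer protocol and network address) and renders them as a single string. The class is hypothetical and the delimiter used here is arbitrary; the actual flow specification syntax is the one defined by the specification and shown in Figure 10.

#include <sstream>
#include <string>
#include <utility>

// Hypothetical helper modeled on the idea of a flow-spec entry: it stores the
// per-flow components and produces one string for transmission to the peer.
class FlowSpec_Entry_Sim
{
public:
  FlowSpec_Entry_Sim (std::string flow_name, std::string format,
                      std::string flow_protocol, std::string transport,
                      std::string host, unsigned short port)
    : flow_name_ (std::move (flow_name)), format_ (std::move (format)),
      flow_protocol_ (std::move (flow_protocol)),
      transport_ (std::move (transport)),
      host_ (std::move (host)), port_ (port) {}

  // NOTE: '\' is the delimiter chosen for this sketch only; the real syntax
  // is the one defined by the CORBA A/V specification.
  std::string to_string () const
  {
    std::ostringstream out;
    out << flow_name_ << '\\' << format_ << '\\' << flow_protocol_ << '\\'
        << transport_ << '=' << host_ << ':' << port_;
    return out.str ();
  }

private:
  std::string flow_name_, format_, flow_protocol_, transport_, host_;
  unsigned short port_;
};

// Example: an MPEG video flow carried by RTP over UDP.
//   FlowSpec_Entry_Sim entry ("video", "MPEG", "RTP", "UDP", "224.2.0.1", 8000);
//   std::string spec = entry.to_string ();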
The Design of a Pluggable A/V Protocol Framework for TAO's A/V Streaming Service
At the heart of TAO's A/V Streaming Service is its pluggable A/V protocol framework, which defines a common interface for various flow and data transfer protocols, such as TCP, UDP, RTP or ATM. This framework provides the "glue" that integrates TAO's A/V Streaming Service with the underlying I/O subsystem protocols and network interfaces. In this section, we describe the design of the pluggable A/V protocol framework provided in TAO's A/V Streaming Service and explain how we resolved key design challenges that arose when developing this framework.
Overview of TAO's Pluggable A/V Protocol Framework
The pluggable A/V protocol framework in TAO's A/V Streaming Service consists of the components shown in Figure 11. Each of these components is described below.
Figure 11: Pluggable A/V protocol components in TAO's A/V Streaming Service
AV_Core. This singleton (Gamma et al., 1995) component is a container for flow and data transfer protocol factories. An application using TAO's A/V implementation must initialize this singleton before using any of its A/V classes, such as StreamCtrl and MMDevice. During initialization, the AV_Core class loads all the flow protocol factories, control protocol factories and data transfer factories dynamically using the Component Configurator pattern (Schmidt et al., 2000) and creates default instances for each known protocol.
Data Transfer components. The components illustrated in Figure 12 and described below are required for each data transfer protocol:
• Acceptor and Connector: These classes are implementations of the Acceptor-Connector pattern (Schmidt et al., 2000), which are used to accept connections passively and establish connections actively, respectively.
• Transport_Factory: This class is an abstract factory (Gamma et al., 1995) that provides interfaces to create Acceptors and Connectors in accordance with the appropriate type of data transfer protocol.
• Flow_Handler: All data transfer handlers derive from the Flow_Handler class, whose methods can start and stop flows and provide flow-specific functionality for timeout upcalls to the Callback objects, which are described in the following paragraph.
Figure 12: TAO's A/V Streaming Service pluggable data transfer components
Callback interface. TAO's A/V Streaming Service uses this callback interface to deliver frames and to notify FlowEndPoints of start and stop events. Multimedia application developers subclass the Callback interface for each flow endpoint, i.e., there are producer and consumer callbacks. TAO's A/V Streaming Service dispatches timeout events automatically so that applications need not write event handling mechanisms. For example, all flow producers are automatically registered for timer events with a Reactor. The value for the timeout is obtained through the get_timeout hook method on the Callback interface. This hook method is called whenever a timeout occurs since multimedia applications typically have adaptive timeout values.
Flow protocol components. Flow protocols carry in-band information for each flow that a receiver can use to reproduce the source stream. The following components are required for each flow protocol supported by TAO's A/V Streaming Service:
• Flow_Protocol_Factory: This class is an abstract factory that creates flow protocol objects.
• Protocol_Object: This class defines flow protocol functionality. Applications use this class to send frames, and the Protocol_Object uses application-specified Callback objects to deliver frames.
Figure 13 illustrates the relationships among the flow protocol components in TAO's pluggable A/V protocol framework.
Figure 13: TAO's A/V Streaming Service pluggable A/V protocol components
AV_Connector and AV_Acceptor Registry. As mentioned above, different data transfer protocols require the creation of corresponding data transfer factories, acceptors and connectors. The AV_Core class creates the AV_Connector and AV_Acceptor registry classes to provide a facade that maintains and accesses the abstract flow and data transfer factories for both light and full profile objects. This design gives users a single interface that hides the complexity of creating and manipulating different data transfer factories.
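To show how an application plugs into these components, the sketch below defines a consumer-side callback in the spirit of the Callback interface described above: it receives frames from a Protocol_Object-like flow layer and supplies an adaptive timeout through a get_timeout hook. The base-class signatures here are simplified assumptions, not TAO's exact declarations.

#include <chrono>
#include <cstddef>
#include <iostream>

// Simplified stand-in for the callback interface described in the text.
class AV_Callback_Sim
{
public:
  virtual ~AV_Callback_Sim () = default;
  virtual void handle_start () {}
  virtual void handle_stop () {}
  virtual void receive_frame (const char *frame, std::size_t length) = 0;
  virtual std::chrono::milliseconds get_timeout () = 0;  // adaptive timeout hook
};

// Consumer-side callback: hands frames to a decoder and adapts its timer.
class Video_Consumer_Callback : public AV_Callback_Sim
{
public:
  void receive_frame (const char *frame, std::size_t length) override
  {
    // A real consumer would pass the frame to its decoder/renderer here.
    std::cout << "received frame of " << length << " bytes\n";
    (void) frame;
  }

  std::chrono::milliseconds get_timeout () override
  {
    // E.g., shrink or grow the timeout based on how smoothly frames arrive.
    return std::chrono::milliseconds (40);
  }
};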
Applying Patterns to Resolve Design Challenges for Pluggable A/V Protocol Frameworks
Below, we outline the key design challenges faced when developing TAO's pluggable A/V protocol framework and discuss how we resolved these challenges by applying various patterns (Gamma et al., 1995; Buschmann et al., 1996; Schmidt et al., 2000).
Adding New Data Transfer Protocols Transparently
• Context: Different multimedia applications often have different QoS requirements. For example, a video application over an intranet may want to take advantage of native ATM protocols to reserve bandwidth. An audio application in a video-conferencing application may want to use a reliable data transfer protocol, such as TCP, since loss of audio is more noticeable to users than loss of video and the bit-rate of audio flows is low (approximately 8 kbps using GSM compression). In contrast, a video application might not want the retransmission and slow-start congestion control overhead incurred by TCP (Stevens, 1993). Thus, it may want to use the facilities of an unreliable data transfer protocol, such as UDP, since losing a small number of frames may not affect perceived QoS.
• Problem: It should be possible to add new data transfer protocols to TAO's pluggable A/V protocol framework without modifying the rest of TAO's A/V Streaming Service. Thus, the framework must be open for extensions but closed to modifications, i.e., the Open/Closed principle (Meyer, 1989). Ideally, creating a new protocol and configuring it into TAO's pluggable A/V protocol framework should be all that is required.
• Solution: Use a registry to maintain a collection of abstract factories based on the Abstract Factory pattern (Gamma et al., 1995). In this pattern, a single class defines an interface for creating families of related objects, without specifying their concrete types. Subclasses of abstract factories are responsible for creating concrete classes that collaborate amongst themselves. In the context of pluggable A/V protocols, each abstract factory can create concrete Connector and Acceptor classes for a particular protocol.
• Applying this solution in TAO's A/V Streaming Service: In TAO's A/V Streaming Service, the Connector_Registry plays the role of the protocol registry. This registry is created by the AV_Core class. Figure 14 depicts the Connector_Registry and its relation to the abstract factories. These factories are accessed via a facade defined according to the Facade pattern (Gamma et al., 1995). This design hides the complexity of manipulating multiple factories behind a simpler interface; the Connector_Registry plays the facade role (see the sketch following Figure 14).
Figure 14: Connector registry
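The fragment below sketches the registry-of-factories idea in isolation: transport factories are registered under a protocol name and looked up when a connection must be established. Names such as Transport_Factory_Sim and register_factory are hypothetical; they illustrate the Abstract Factory plus Facade arrangement described above rather than the Connector_Registry's real interface.

#include <map>
#include <memory>
#include <string>

// Abstract factory for one data transfer protocol (creates acceptors/connectors).
class Transport_Factory_Sim
{
public:
  virtual ~Transport_Factory_Sim () = default;
  virtual void make_connector () = 0;   // simplified: real factories return objects
  virtual void make_acceptor () = 0;
};

class UDP_Factory : public Transport_Factory_Sim
{
public:
  void make_connector () override { /* create a UDP connector */ }
  void make_acceptor () override { /* create a UDP acceptor */ }
};

// Facade over the collection of factories: new protocols are added by
// registering another factory, without modifying the registry itself.
class Connector_Registry_Sim
{
public:
  void register_factory (const std::string &protocol,
                         std::unique_ptr<Transport_Factory_Sim> factory)
  { factories_[protocol] = std::move (factory); }

  Transport_Factory_Sim *find (const std::string &protocol)
  {
    auto it = factories_.find (protocol);
    return it == factories_.end () ? nullptr : it->second.get ();
  }

private:
  std::map<std::string, std::unique_ptr<Transport_Factory_Sim>> factories_;
};

// Usage: registry.register_factory ("UDP", std::make_unique<UDP_Factory> ());
//        if (auto *f = registry.find ("UDP")) f->make_connector ();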
Adding New A/V Protocols Transparently
• Context: Multimedia flows often require a flow protocol since most multimedia flows need to carry in-band information that the receiver can use to reproduce the source stream. For example, every frame may need a timestamp so that the receiver can play the frame at the right time. Moreover, sequence numbers will be needed if a connectionless protocol, such as UDP, is used, so that applications can do resequencing. In addition, multicast flows may require information, such as a source identification number, to demultiplex flows from different sources. SFP is a simple flow protocol defined by the CORBA A/V Streaming Service specification to transport in-band data. Likewise, the Real-time Transport Protocol (RTP) (Schulzrinne et al., 1994) defines facilities to transport in-band data. RTP is Internet-centric, however, and cannot carry CORBA IDL-typed flows directly. For example, RTP specifies that all header fields should be in network-byte order, whereas SFP uses CORBA's CDR encoding and carries the byte order in each header.
• Problem: Flow protocols should be able to run over different data transfer protocols. This configuration of a flow protocol over different data transfer protocols should be done easily and transparently to application developers and users.
• Solution: To solve the problem of a flow protocol running over different data transfer protocols, we applied the Layers pattern (Buschmann et al., 1996) described in Appendix 1. We structured the flow protocols and data transfer protocols as two different layers. The flow protocol layer creates the frames with the in-band flow information. The data transfer layer performs the connection establishment and sends the frames handed down from the flow protocol layer onto the network. The layered approach makes the flow and data transfer protocols independent of each other, so it is easy to tie different flow protocols to different data transfer protocols transparently.
• Applying this solution in TAO's A/V Streaming Service: TAO's A/V Streaming Service provides a uniform data transfer layer for a variety of data transfer protocols, including UDP unicast, UDP multicast and TCP. TAO's A/V Streaming Service provides a flow protocol layer using a Protocol_Object interface. Likewise, its AV_Core class maintains a registry of A/V protocol factories. The sketch below illustrates the layering.
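The following sketch shows the two layers as separate interfaces: the flow layer builds a frame carrying in-band information (here just a sequence number and timestamp) and hands it to whatever transport layer it was composed with. The types and the frame header layout are illustrative assumptions, not the SFP or RTP formats.

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Data transfer layer: only knows how to move opaque bytes.
class Transport_Layer
{
public:
  virtual ~Transport_Layer () = default;
  virtual void send (const std::vector<std::uint8_t> &packet) = 0;
};

// Flow protocol layer: adds the in-band header, then delegates to any transport.
class Flow_Protocol_Layer
{
public:
  explicit Flow_Protocol_Layer (Transport_Layer &transport)
    : transport_ (transport) {}

  void send_frame (const std::uint8_t *payload, std::size_t length,
                   std::uint32_t timestamp)
  {
    std::vector<std::uint8_t> packet (8 + length);
    std::uint32_t seq = sequence_number_++;
    std::memcpy (packet.data (), &seq, 4);            // sequence number
    std::memcpy (packet.data () + 4, &timestamp, 4);  // presentation timestamp
    std::memcpy (packet.data () + 8, payload, length);
    transport_.send (packet);                         // transport-independent
  }

private:
  Transport_Layer &transport_;
  std::uint32_t sequence_number_ = 0;
};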
Adding New Protocols Dynamically
• Context: When developing new pluggable A/V protocols, it is inconvenient to recompile TAO's A/V Streaming Service and applications just to validate a new protocol implementation. It is also often useful to experiment with different protocols to compare their performance, footprint size and QoS guarantees systematically. Moreover, in telecom systems with 24x7 availability requirements, it is important to configure protocols dynamically, even while the system is running. This level of flexibility helps simplify upgrades and protocol enhancements.
• Problem: The user would like to populate the registry dynamically with a set of factories at run-time and avoid the inconvenience of recompiling the A/V Streaming Service and its applications when different protocols are plugged in.
• Solution: We can solve this problem using the Component Configurator pattern (Schmidt et al., 2000), which decouples the implementation of a component from the point in time when it is configured into the application. By using this pattern, a pluggable A/V protocol framework can dynamically load the set of entries in a registry. For instance, a registry can simply parse a configuration script and dynamically link the services listed in it.
• Applying this solution in TAO's A/V Streaming Service: The AV_Core class maintains all parameters specified in a configuration script. Adding a new parameter to represent the list of protocols is straightforward, i.e., the default registry simply examines this list and links the services into the address space of the application, using the ACE Service Configurator implementation (Schmidt and Suda, 1994). ACE provides a rich set of reusable and efficient components for high-performance, real-time communication, and forms the portability layer of TAO's A/V Streaming Service. Figure 15 depicts the connector registry and its relation to the ACE Service Configurator framework, which is a C++ implementation of the Component Configurator pattern (Schmidt et al., 2000). A sketch of this style of dynamic loading follows the figure.
Figure 15: Acceptor-connector registry and service configurator
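A minimal way to picture this dynamic configuration on a POSIX system is shown below: a protocol factory is compiled into a shared library and linked into the running process by name, so new protocols can be added without recompiling the application. The library path and the factory entry-point symbol (make_transport_factory) are hypothetical; ACE's Service Configurator wraps this kind of mechanism in a portable, script-driven form.

#include <dlfcn.h>
#include <iostream>

// Hypothetical entry point each protocol library is assumed to export.
extern "C"
{
  typedef void *(*factory_creator_t) ();
}

void *load_protocol_factory (const char *library, const char *symbol)
{
  void *handle = dlopen (library, RTLD_NOW);   // link the service in at run-time
  if (handle == nullptr)
    {
      std::cerr << "dlopen failed: " << dlerror () << "\n";
      return nullptr;
    }

  // Casting the dlsym result to a function pointer is the usual POSIX idiom.
  auto create = reinterpret_cast<factory_creator_t> (dlsym (handle, symbol));
  if (create == nullptr)
    {
      std::cerr << "dlsym failed: " << dlerror () << "\n";
      return nullptr;
    }

  return create ();   // e.g., a new transport factory to add to the registry
}

// Usage (hypothetical library name and symbol):
//   void *factory = load_protocol_factory ("libmy_rtp_over_udp.so",
//                                          "make_transport_factory");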
Designing an Extensible Interface to Control Protocols
• Context: RTP has a control protocol, RTCP, associated with it. Every RTP participant must transmit RTCP frames that provide control information, such as the name of the participant and the tool being used. Moreover, RTCP sends reception reports for each of its sources.
• Problem: Certain flow protocols, such as SFP, use the A/V Streaming Service interfaces to exchange control information. Using RTP for a flow, in contrast, requires transmitting RTCP information as well, and RTCP extracts this control information from RTP packets. Therefore, TAO's A/V Streaming Service must provide an extensible interface for these control protocols, as well as a means for the data and control protocols to interact.
• Solution: The solution is to make the control protocol information part of the flow protocol. For example, RTP knows that RTCP is its control protocol. Therefore, to reuse pluggability features, it may be necessary to make the control protocol use the same interfaces as its data components.
• Applying this solution in TAO's A/V Streaming Service: During stream establishment, Registry objects will first check the flow factory for the configured flow protocol. After the listen or connect operation has been performed for a particular data flow, the Registry will check if the flow factory has a control factory. If so, it will perform the same processing for the control factory, except that the network endpoint port will be one value higher than the data endpoint. Since the CORBA A/V Streaming Service specification does not define a portable way to specify control flow endpoint information, we followed this approach as a temporary solution until the OMG comes up with a portable solution. The RTCP implementation in TAO's A/V Streaming Service uses the same interfaces that RTP does, including the Flow_Protocol_Factory and Protocol_Object classes. Thus, RTP will call the handle_control_input method on the RTCP Protocol_Object when an RTP frame is received. This method enables the RTCP object to extract the necessary control information, such as the sequence number of the last frame (see the sketch below).
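The interaction between a data protocol object and its control protocol object can be pictured as follows: when an RTP-like object receives a frame, it forwards the header information to its companion control object through a handle_control_input-style hook, which updates the reception statistics it later reports. The class shapes are illustrative assumptions patterned on the method name used in the text.

#include <cstdint>

// Control protocol object (RTCP-like): tracks what it learns from data frames.
class Control_Protocol_Object_Sim
{
public:
  void handle_control_input (std::uint16_t sequence_number)
  {
    last_sequence_ = sequence_number;   // used later in reception reports
    ++frames_seen_;
  }

private:
  std::uint16_t last_sequence_ = 0;
  std::uint32_t frames_seen_ = 0;
};

// Data protocol object (RTP-like): notifies its control object on every frame.
class Data_Protocol_Object_Sim
{
public:
  explicit Data_Protocol_Object_Sim (Control_Protocol_Object_Sim &control)
    : control_ (control) {}

  void frame_received (std::uint16_t sequence_number)
  {
    // ... deliver the payload to the application's Callback ...
    control_.handle_control_input (sequence_number);
  }

private:
  Control_Protocol_Object_Sim &control_;
};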
Uniform Interfaces that Hide Variations in Flow Protocols
• Context: Above, we explained how TAO's pluggable A/V protocol framework factors out different flow protocols and provides a uniform flow protocol interface. In certain cases, however, there are inherent variations in such protocols. For example, RTP must transmit the payload type, i.e., the format of the flow, in each frame, whereas SFP uses the control and management interface in TAO's A/V Streaming Service to set and get the format values for a flow. Similarly, the RTP control protocol, RTCP, periodically transmits participant information, such as the sender's name and email address, whereas SFP does not transmit such information. Such information does not change with every frame, however. For example, the name and email address of a participant in a conference will not change for a session. In addition, the properties of the transfer may need to be controlled by applications. For instance, a conferencing application may not want multicast loopback.
• Problem: An A/V Streaming Service should allow end users to set protocol-specific variations, while still providing a single interface for different flow protocols. Moreover, this interface should be open to changes as new flow protocols and data transfer protocols are added.
• Solution: The solution is to apply the CORBA Policy framework defined in the CORBA specification (Object Management Group, 1999). The CORBA Policy framework allows the protocol component developer to define policy objects that control the behavior of the protocol component. The policy object is derived from the CORBA Policy interface (Object Management Group, 1999), which stores the policy type (Object Management Group, 1999) and the associated values.
• Applying this solution in TAO's A/V Streaming Service: By defining a policy framework that is extensible and follows the CORBA Policy model, users have a shorter learning curve for the API and can add new flow protocols flexibly. We have defined different policy types used by different flow protocols that can be accessed by the specific transport and flow protocol components during frame creation and dispatching. For example, we have defined the TAO_AV_PAYLOAD_TYPE_POLICY, which allows the RTP protocol to specify the payload type (a sketch of this policy idea follows).
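The sketch below models the policy idea: a protocol-specific setting is wrapped in a small object tagged with a policy type, and the flow protocol component consults the list of policies it was given when it builds each frame. The enumerator and class names are illustrative; only the policy type name TAO_AV_PAYLOAD_TYPE_POLICY is taken from the text, and its representation here is an assumption.

#include <optional>
#include <vector>

// Policy types an application might set (illustrative subset; the loopback
// entry is a hypothetical name for the multicast-loopback case above).
enum class AV_Policy_Type
{
  TAO_AV_PAYLOAD_TYPE_POLICY,        // e.g., the RTP payload type for a flow
  TAO_AV_MULTICAST_LOOPBACK_POLICY
};

struct AV_Policy_Sim
{
  AV_Policy_Type type;
  int value;
};

// Helper a flow protocol component could use during frame creation.
std::optional<int> find_policy_value (const std::vector<AV_Policy_Sim> &policies,
                                      AV_Policy_Type wanted)
{
  for (const auto &policy : policies)
    if (policy.type == wanted)
      return policy.value;
  return std::nullopt;   // fall back to the protocol's default
}

// Usage: an RTP-like component asks for the payload type before sending.
//   auto payload_type = find_policy_value (policies,
//                         AV_Policy_Type::TAO_AV_PAYLOAD_TYPE_POLICY);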
CASE STUDIES OF MULTIMEDIA APPLICATIONS DEVELOPED USING TAO'S A/V STREAMING SERVICE
To evaluate the capabilities of the CORBA-based A/V Streaming Service, we have developed several multimedia applications that use the components and interfaces described in the previous section. In this section, we describe the design of two distributed multimedia applications that use TAO's A/V Streaming Service and pluggable A/V protocol framework to establish and control MPEG and interactive audio/video streams.
Case Study 1: An MPEG A/V Streaming Application
This application is an enhanced version of a non-CORBA MPEG player developed at the Oregon Graduate Institute (Chen et al., 1995). Our application plays movies using the MPEG-1 video format (ISO, 1993) and the Sun ULAW audio format (SUN Microsystems, 1992). Figure 16 shows the architecture of our A/V streaming application.
The MPEG player application uses a supplier/consumer design implemented using TAO. The consumer uses the CORBA Naming Service (OMG, 1997b) or the Trading Service (OMG, 1997b) to find suppliers that match its requirements. For instance, a consumer might want to locate a supplier that has a particular movie or a supplier with the least number of consumers currently connected to it.
Once a consumer obtains the supplier's MMDevice object reference, it requests the supplier to establish two streams, i.e., a video stream and an audio stream, for a particular movie. These streams are established using the CORBA A/V stream establishment protocol. The consumer then uses the MediaCtrl to control the stream. The supplier is responsible for sending A/V packets via UDP to the consumer. For each consumer, the supplier transmits two streams, one for the MPEG video packets and one for the Sun ULAW audio packets. The consumer decodes these streams and plays these packets in a viewer, as shown in Figure 17. This section describes the various components of the consumer and supplier.
The following table illustrates the number of lines of C++ source required to develop this system and application. Using the ORB and the A/V Streaming Service greatly reduced the amount of software that otherwise would have been written manually.

Component                                    Lines of Code
TAO CORBA ORB                                61,524
TAO Audio/Video (A/V) Streaming Service      3,208
TAO MPEG Video Application                   47,782
Supplier Architecture
The supplier in the A/V streaming application is responsible for streaming MPEG-1 video frames and ULAW audio samples to the consumer. The files can be stored in a filesystem accessible to the supplier process. Alternatively, the video frames and the audio packets can be sent by a live source, such as a video camera. Our experience with the supplier indicates that it can support approximately 10 concurrent consumers on a dual-CPU 187 MHz Sun UltraSPARC-II with 256 MB of RAM over a 155 Mbps ATM network.
Figure 16: Architecture of the MPEG A/V streaming application
Figure 17: A TAO-enabled audio/video player
Figure 18: TAO audio/video supplier architecture
The role of the supplier is to read audio and video frames from a file, encode them and transmit them to the consumer across the network. Figure 18 depicts the key components in the supplier architecture. The main supplier process contains an MMDevice endpoint factory. This MMDevice creates connection handlers in response to consumer connections, using a process-based concurrency strategy. Each connection triggers the creation of one audio process and one video process. These processes respond to multiple events. For instance, the video supplier process responds to CORBA operations, such as play and rewind, and sends video frames periodically in response to timer events. Each component in the supplier architecture is described below:
• The Media controller component. This component in the supplier process is a servant that implements the Media Controller interface (MediaCtrl). A MediaCtrl responds to CORBA operations from the consumer. The interface exported by the MediaCtrl component represents the various operations supported by the supplier, such as play, rewind and stop. At any point in time, the supplier can be in one of several states, such as PLAYING, REWINDING or STOPPED. Depending on the supplier’s state, its behavior may change in response to consumer operations. For instance, the supplier ignores a consumer’s play operation when the supplier is already in the PLAYING state. Conversely, when the supplier is in the STOPPED state, a consumer rewind operation transitions the supplier to the REWINDING state. The key design forces that must be resolved while implementing MediaCtrls for A/V streaming are (1) allowing the same object to respond differently based on its current state, (2) providing hooks to add new states and (3) providing extensible operations to change the current state. To provide a flexible design that meets these requirements, the control component of our MPEG player application is implemented using the State pattern (Gamma et al., 1995). This implementation is shown in Figure 19. The MediaCtrl has a state object pointer; the object pointed to by this state pointer represents the current state. For simplicity, the figure shows the Playing_State and the Stopped_State, which are subclasses of the Media_State abstract base class. Additional states, such as the Rewinding_State, can be added by subclassing from Media_State. The diagram lists three operations: play, rewind and stop. When the consumer invokes an operation on the MediaCtrl, this class delegates the operation to the state object. A state object implements the response to each operation in a particular state. For instance, the rewind operation in the Playing_State contains the response of the MediaCtrl to the rewind operation when it is in the PLAYING state. State transitions can be made by changing the object pointed to by the MediaCtrl’s state pointer. In response to consumer operations, the current state object instructs the data transfer component to modify the stream flow. For instance, when the consumer invokes the rewind operation on the MediaCtrl while in the STOPPED state, the rewind operation in the Stopped_State object instructs the data component to play frames in reverse chronological order.
Figure 19: State pattern implementation of the media controller
Figure 20: Reactive architecture of the video supplier
• The Data transfer component. This component is responsible for transferring data to the consumer. Our MPEG supplier application reads video frames from an MPEG-1 file and audio frames from a Sun ULAW audio file. It sends these frames to the consumer, fragmenting long frames if necessary. The current implementation of the data component uses the UDP protocol to send A/V frames. A key design challenge related to data transfer is to have the application respond to CORBA operations for the stream control objects, e.g., the MediaCtrl, as well as to data transfer events, e.g., video frame timer events. An effective way to do this is to use the Reactor pattern (Schmidt et al., 2000), as shown in Figure 20 and described in Appendix 1. The video supplier registers two event handlers with TAO’s ORB Reactor. One is a signal handler for the video frame timer events. The other is a UDP socket event handler for feedback events coming from the consumer. The frames sent by the data component correspond to the current state of the MediaCtrl object, as outlined above. Thus, in the PLAYING state, the data component plays the audio and video frames in chronological order. Future implementations of the data transfer component in our MPEG player application will support multiple encoding protocols via the simple flow protocol (SFP) (OMG, 1997a). SFP encoding encapsulates frames of various protocols within
an SFP frame. It provides standard framing and sequence numbering mechanisms. SFP uses the CORBA CDR encoding mechanism to encode frame headers and uses a simple credit-based flow control mechanism described in (OMG, 1997a).
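To make the MediaCtrl design described above concrete, the following is a minimal, self-contained sketch of the State pattern as it might be applied to the media controller. Data_Component and its operations are simplified stand-ins for the data transfer component, not TAO's actual classes.

    // Condensed sketch of the State pattern used for the MediaCtrl.
    struct Data_Component
    {
      void play_forward () { /* stream frames in chronological order */ }
      void play_reverse () { /* stream frames in reverse order */ }
      void halt ()         { /* stop sending frames */ }
    };

    struct Media_State
    {
      virtual ~Media_State () {}
      // Default behavior: ignore the request (e.g., play while already PLAYING).
      virtual void play (Data_Component &)   {}
      virtual void rewind (Data_Component &) {}
      virtual void stop (Data_Component &)   {}
    };

    struct Playing_State : Media_State
    {
      void rewind (Data_Component &dc) { dc.play_reverse (); }
      void stop (Data_Component &dc)   { dc.halt (); }
    };

    struct Stopped_State : Media_State
    {
      void play (Data_Component &dc)   { dc.play_forward (); }
      void rewind (Data_Component &dc) { dc.play_reverse (); }
    };

    class MediaCtrl_i     // servant for the MediaCtrl IDL interface
    {
    public:
      MediaCtrl_i (Media_State *initial) : state_ (initial) {}
      // Each CORBA upcall is delegated to the current state object; a state
      // transition simply re-points state_ at a different Media_State object.
      void play ()   { state_->play (data_); }
      void rewind () { state_->rewind (data_); }
      void stop ()   { state_->stop (data_); }
    private:
      Media_State *state_;
      Data_Component data_;
    };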
Consumer Architecture
The role of the consumer is to read audio and video frames off the network, decode them, and play them synchronously. The audio and video servers stream the frames separately; A/V frame synchronization is performed in the consumer. Figure 21 depicts the key components in the consumer architecture.
Figure 21: TAO audio/video consumer architecture
The original non-CORBA MPEG consumer (Chen et al., 1995) used a process-based concurrency architecture. Our CORBA-based consumer maintains this architecture to minimize changes to the code. Separate processes perform the buffering, decoding and playback, as explained below:
1. Video buffer. The video buffering process is responsible for reading UDP packets from the network and enqueueing them in shared memory. The video decoder process dequeues these packets and performs MPEG decoding operations on them.
2. Audio buffer. Similarly, the audio buffering process is responsible for reading UDP packets off the network and enqueueing them in shared memory. The control/audio playback process dequeues these packets and sends them to /dev/audio.
3. Video decoder. The video decoding process reads the raw packets sent to it by the video buffer process and decodes them according to the MPEG-1 video specification. The decoded packets are sent to the GUI/video process, which displays them.
4. GUI/video process. The GUI/video process is responsible for the following two tasks:
a. GUI: It provides a GUI to the user, where the user can select operations such as play, stop and rewind. These operations are sent to the control/audio process via a UNIX domain socket (Stevens, 1998).
b. Video: This component is responsible for displaying video frames to the user. The decoded video frames are stored in a shared memory queue.
5. Control/audio playback process. The control/audio process is responsible for the following tasks:
a. Control: This component receives control messages from the GUI process and sends the appropriate CORBA operation to the MediaCtrl servant in the supplier process.
b. Audio playback: The audio playback component is responsible for dequeueing audio packets from the audio buffer process and playing them back using the multimedia sound hardware. Decoding is unnecessary because the supplier uses the ULAW format; therefore, the data received can be written directly to the sound port, which is /dev/audio on Solaris.
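Since ULAW samples need no decoding, the playback path reduces to copying bytes to the audio device. A minimal sketch of that step, assuming the Solaris /dev/audio device in its default 8 kHz u-law configuration, is shown below.

    #include <cstddef>
    #include <fcntl.h>
    #include <unistd.h>

    // Write ULAW samples taken from the shared-memory audio buffer straight to
    // the sound port; no decoding is required for this format.
    void play_ulaw (const unsigned char *samples, std::size_t len)
    {
      int fd = ::open ("/dev/audio", O_WRONLY);
      if (fd < 0)
        return;                        // error handling elided
      ::write (fd, samples, len);      // device default is 8 kHz mono u-law
      ::close (fd);
    }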
Case Study 2: The Vic Video-Conferencing Application
Vic (McCanne and Jacobson, 1995) is a video-conferencing application developed at the University of California, Berkeley. We have adapted Vic to use TAO’s A/V Streaming Service components and its pluggable A/V protocol framework. The Vic implementation in TAO uses RTP/RTCP as its flow and data transfer protocols.
Overview of Vic
Vic provides video conferencing; audio conferencing is done with a companion tool, Vat (LBNL, 1995). The Vic family of tools synchronizes media streams using a conference bus mechanism, which is a localhost synchronization mechanism implemented via loopback sockets. The architecture of Vic is driven largely by its TclObject interface (McCanne and Jacobson, 1995), which allows operations on an object to be invoked from a Tcl script. By using Tcl, Vic allows rapid prototyping and reconfiguration of its encode/decode paths. One design challenge we faced while adapting Vic to use TAO’s A/V Streaming Service was to integrate the GUI and ORB event loops. This was solved using the Reactor pattern (Schmidt et al., 2000). In particular, we developed a Reactor that unified the GUI and ORB into a single event loop.
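The fragment below illustrates the idea of a unified event loop in its simplest (polling) form. It is only a sketch: TAO's actual solution registers the GUI's event sources with the ORB's Reactor so that a single handle_events() call dispatches both. The CORBA work_pending()/perform_work() calls and the Tcl_DoOneEvent()/Tcl_Sleep() functions are standard ORB and Tcl C APIs.

    #include <tcl.h>
    #include "tao/corba.h"

    // Illustrative polling loop that interleaves ORB and Tcl/Tk event dispatching.
    void run_unified_loop (CORBA::ORB_ptr orb)
    {
      for (;;)
        {
          if (orb->work_pending ())
            orb->perform_work ();                    // dispatch pending CORBA upcalls
          while (Tcl_DoOneEvent (TCL_DONT_WAIT) != 0)
            ;                                        // dispatch any ready GUI events
          Tcl_Sleep (10);                            // avoid spinning when both are idle
        }
    }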
Implementing Vic Using TAO’s A/V Streaming Service
Below, we discuss the steps we followed to adapt Vic to use TAO’s A/V Streaming Service.
1. Structuring of conferencing protocols. In this step, we decomposed the flow, control and data transfer protocols using TAO’s pluggable A/V protocol framework. The original Vic application was tightly coupled with RTP. For instance, its encoders and decoders were aware of the RTP headers. We decoupled the encoders/decoders from RTP-specific details by using the frame_info structure and TAO’s A/V Streaming Service Protocol_Object interface. The modified Vic still preserves the application-level framing (ALF) (Clark and Tennenhouse, 1990) model embodied in RTP. Moreover, Vic’s RTCP functionality was abstracted into TAO’s pluggable A/V protocol framework, so the framework automatically defines an RTCP flow for an RTP flow. The modified Vic is also independent of the network-specific details of opening connections and I/O handling, since it uses the pluggable A/V protocol framework provided by TAO’s A/V Streaming Service. Vic uses the multipoint-to-multipoint binding provided by TAO’s A/V Streaming Service, which is described in Appendix 3. Thus, our first step when integrating into TAO was to determine the proper abstraction for the conference device. A video-conferencing application like Vic serves as both a source and a sink; thus, we needed a source and a sink MMDevice. Moreover, to be extensible for future integration with Vat
and other multimedia tools, Vic uses flow interfaces, i.e., video is considered a flow within the conference stream. Since Vat runs in a separate address space, its flow interfaces must be exposed using TAO’s full profile flow interfaces, i.e., FDev, FlowProducer and FlowConsumer.
2. Define callback objects. In this step, we defined Callback objects for all the source and sink FlowEndPoints. The Source_Callback uses the timer functionality to schedule timer events to send the frames. Figure 22 illustrates the sequence of events that trigger the sending of frames. When input becomes ready on the video card, the grabber reads it and gives it to the transmitter. The transmitter then uses the Source_Callback object to schedule a timer to send the frames at the requested bit rate using a bit-rate buffer. On the sink side, when a packet arrives from the network, the receive_frame upcall is made on the Sink_Callback object which, using the frame_info structure, hands it to the right Source object, which in turn passes it to the right decoder. To implement RTCP functionality, Vic implements an RTCP_Callback to provide Vic-specific source objects.
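To sketch the sink-side callback flow, the fragment below shows the shape of such an object. It is a simplified, self-contained stand-in: in TAO the base class is TAO_AV_Callback and the receive_frame() signature differs, and Frame, FrameInfo, Source and Decoder here merely suggest Vic's internal classes.

    #include <cstddef>

    struct Frame     { const unsigned char *data; std::size_t len; };
    struct FrameInfo { unsigned int ssrc; int payload_type; };   // e.g., RTP header fields

    struct Decoder { virtual void recv (const Frame &) = 0; virtual ~Decoder () {} };
    struct Source  { Decoder *decoder; };

    class Sink_Callback
    {
    public:
      // Upcall made by the flow protocol when a packet arrives off the network.
      int receive_frame (const Frame &frame, const FrameInfo &info)
      {
        Source *src = this->lookup_source (info.ssrc);   // find the per-sender Source
        if (src != 0 && src->decoder != 0)
          src->decoder->recv (frame);                    // hand off to that sender's decoder
        return 0;
      }
    private:
      Source *lookup_source (unsigned int /* ssrc */)
      {
        return 0;   // real Vic keeps a table of Source objects keyed on the RTP SSRC
      }
    };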
Figure 22: Architecture of Vic using TAO’s A/V Streaming Service
3. Select a centralized or distributed conference configuration. In this step, we ensured that Vic can function both as a participant in a centralized conference and in a loosely coupled distributed conference. This flexibility is achieved by checking for a StreamCtrl object in the Naming Service and creating a new StreamCtrl if one is not found. Thus, by running a StreamCtrl control process that registers itself with the Naming Service, all Vic participants become part of a centralized conference, which can be controlled from the control process. Conversely, when no such process is run, Vic reverts to the loosely controlled model by creating its own StreamCtrl and transmitting on the multicast address.
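A hedged fragment of this resolve-or-create logic appears below; the name "StreamCtrl" under which the control process registers, and the create_local_stream_ctrl() helper, are assumptions of the sketch.

    // Check the Naming Service for a conference-wide StreamCtrl; fall back to a
    // locally created one when no control process has registered.
    AVStreams::StreamCtrl_var stream_ctrl;
    try
      {
        CosNaming::Name name;
        name.length (1);
        name[0].id = CORBA::string_dup ("StreamCtrl");            // assumed name
        CORBA::Object_var obj = naming->resolve (name);
        stream_ctrl = AVStreams::StreamCtrl::_narrow (obj.in ()); // centralized conference
      }
    catch (const CosNaming::NamingContext::NotFound &)
      {
        // Loosely coupled conference: create our own StreamCtrl and multicast.
        stream_ctrl = create_local_stream_ctrl ();                // hypothetical helper
      }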
PERFORMANCE RESULTS
This section describes the design and results of three performance experiments we conducted using TAO’s A/V Streaming Service.
CORBA/ATM Testbed
The experiments in this section were conducted using a FORE Systems ASX-1000 ATM switch connected to two dual-processor UltraSPARC-2s running Solaris 2.5.1. The ASX-1000 is a 96-port, OC-12 (622 Mbps per port) switch. Each UltraSPARC-2 contains a 300 MHz Super SPARC CPU with a 1 Megabyte cache per CPU. The Solaris 2.5.1 TCP/IP protocol stack is implemented using the STREAMS communication framework (Ritchie, 1984). Each UltraSPARC-2 has 256 Mbytes of RAM and an ENI-155s-MF ATM adaptor card, which supports 155 megabits per second (Mbps) SONET multimode fiber. The Maximum Transmission Unit (MTU) on the ENI ATM adaptor is 9,180 bytes. Each ENI card has 512 Kbytes of on-board memory. A maximum of 32 Kbytes is allotted per ATM virtual circuit connection for receiving and transmitting frames (for a total of 64 Kbytes). This allows up to eight switched virtual connections per card. The CORBA/ATM hardware platform is shown in Figure 23.
CPU Usage of the MPEG Decoder
The aim of this experiment is to determine the CPU overhead associated with decoding and playing MPEG-1 frames in software. To measure this, we used the MPEG/ULAW A/V player application described in the preceding section. We used the application to view two movies, one of size 128x96 pixels and the other of size 352x240 pixels, and measured the percentage of CPU usage at different frame rates. The frame rate is the number of video frames displayed by the viewer per second. The results are shown in Figure 24. These results indicate that for large frame sizes (352x240), MPEG decoding in software becomes expensive, and CPU usage reaches 100% while playing 12 frames per second or higher. However, for smaller frame sizes (128x96), MPEG decoding in software does not cause heavy CPU utilization. At 30 frames per second, CPU utilization is approximately 38%.
A/V Stream Throughput
The aim of this experiment is to illustrate that TAO’s A/V Streaming Service does not introduce appreciable overhead in transporting data. To demonstrate this, we wrote a TCP-based data streaming component and integrated it with TAO’s A/V service.
Figure 23: Hardware for the CORBA/ATM testbed
Figure 24: CPU usage of the MPEG decoder
The producer in this application establishes a stream with the consumer, using the CORBA A/V stream establishment mechanism. Once the stream is established, it streams data via TCP to the consumer. We measured the throughput, i.e., the number of bytes per second sent by the supplier to the consumer, obtained by this streaming application. We then compared this throughput with the following two configurations:
1. TCP transfer, i.e., by a pair of application processes that do not use the CORBA A/V Streaming Service stream establishment protocol. In this case, sockets and TCP were the network programming API and data transfer protocol, respectively. This is the “ideal” case since there is no additional ORB-related or presentation layer overhead.
2. ORB transfer, i.e., the throughput obtained by a stream that used an octet stream passed through the TAO (Schmidt et al., 1998a) CORBA ORB. In this case, the IIOP data path was the data transfer mechanism.
We measured the throughput obtained by varying the buffer size of the sender, i.e., the number of bytes written by the supplier in a single write system call. In each stream, the supplier sent 64 megabytes of data to the consumer.
Figure 25: Throughput results
The results shown in Figure 25 indicate that, as expected, the A/V Streaming Service does not introduce any appreciable overhead to streaming the data. In the case of using IIOP as the data transfer layer, the benchmark incurs additional performance overhead. This overhead arises from the dynamic memory allocation, data copying and marshaling/demarshaling performed by the ORB’s IIOP protocol engine (Gokhale and Schmidt, 1996). In general, however, a well-designed ORB can achieve performance equivalent to sockets for larger buffer sizes due to various optimizations, such as eliding (de)marshaling overhead for octet data (Gokhale and Schmidt, 1999). The largest disparity occurred for smaller buffer sizes, where the performance of the ORB was approximately half that of the TCP and A/V streaming implementations. As the buffer size increases, however, the ORB performance improves considerably and attains nearly the same throughput as TCP and A/V streaming. Clearly, there is a fixed amount of overhead in the ORB that is amortized and minimized as the size of the data payload increases.
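For reference, the supplier side of this benchmark amounts to a simple send loop whose buffer size is the independent variable; the sketch below assumes an already-connected TCP socket and elides error handling and timing.

    #include <cstddef>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <vector>

    // Stream 64 Mbytes to the consumer in writes of 'buffer_size' bytes each.
    void stream_payload (int sock, std::size_t buffer_size)
    {
      std::vector<char> buffer (buffer_size, 0);
      const std::size_t total = 64 * 1024 * 1024;
      std::size_t sent = 0;
      while (sent < total)
        {
          ssize_t n = ::send (sock, &buffer[0], buffer.size (), 0);
          if (n <= 0)
            break;                                 // error handling elided
          sent += static_cast<std::size_t> (n);
        }
    }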
Stream Establishment Latency
This experiment measures the time required to establish a stream using TAO’s implementation of the CORBA A/V stream establishment protocol described in the section Interaction Between Components in the CORBA Audio/Video Streaming Service Model. We measured the stream establishment latency for two concurrency strategies: process-based and reactive. The timer starts when the consumer gets the object reference for the supplier’s MMDevice servant from the Naming Service. The timer stops when the stream has been established, i.e., when a TCP connection has been established between the consumer and the supplier.
Figure 26: Stream establishment latency results
We measured the stream establishment time as the number of concurrent consumers establishing connections with the supplier increased from 1 to 10. The results are shown in Figure 26. When the supplier’s MMDevice is configured to use the process-based concurrency strategy, the time taken to establish the stream is higher, due to the overhead of process creation. For instance, when 10 concurrent consumers establish a stream with the producer simultaneously, the average latency observed is about 2.25 seconds with the process-based concurrency strategy. With the reactive concurrency strategy, the latency is only approximately 0.4 seconds. The process-based strategy is well-suited for supplier devices that have multiple streams, e.g., a video camera that broadcasts a live feed to many clients. In contrast, the reactive concurrency strategy is well-suited for consumer devices that have few streams, e.g., a display device that has only one or two streams.
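The measurement itself can be expressed as a simple timer around the stream establishment call, as in the hedged sketch below; it assumes ACE's high-resolution timer interface and omits the Naming Service lookup that precedes the timed region.

    #include "ace/High_Res_Timer.h"

    // Time stream establishment: the clock starts once the supplier's MMDevice
    // reference is in hand and stops when bind_devs() returns, i.e., when the
    // data transfer connection to the supplier has been set up.
    ACE_High_Res_Timer timer;
    timer.start ();
    stream_ctrl->bind_devs (consumer_dev.in (), supplier.in (), qos, flows);
    timer.stop ();

    ACE_Time_Value elapsed;
    timer.elapsed_time (elapsed);      // stream establishment latency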
FUTURE WORK
Our next steps are to enable end-to-end QoS for the flow protocols. TAO’s pluggable A/V protocol framework enables us to provide QoS guarantees using either IP-based QoS protocols (such as RSVP and Differentiated Services) or ATM. For example, if an application wants to use ATM QoS guarantees, it can choose the ATM AAL5 data transfer protocol. Conversely, if it wants to use RSVP or Differentiated Services QoS provisions, it can choose the TCP or UDP data transfer protocols. TAO’s pluggable A/V protocol framework helps applications choose flow and data transfer protocols dynamically, in accordance with the data they are streaming. For example, applications can use reliable TCP for audio transmission and unreliable UDP for video transmission.
The CORBA A/V Streaming Service specification has provisions for applications to specify their QoS requirements when setting up a connection. These QoS parameters can be specified for each flow through a sequence of name/value properties. The specification leaves it to the A/V Streaming Service implementation to translate application-level QoS parameters (such as video frame rate) into network-level QoS parameters (such as bandwidth). We are planning to build a QoS framework that the A/V Streaming Service can use to ensure end-to-end QoS for all its flows. We have identified three main components for this framework:
1. QoS mapping. This component translates QoS specifications between different levels, such as application and network, in order to reserve sufficient network resources at connection establishment. Good mapping rules are required to prevent reserving too many (or too few) resources (a simple mapping rule is sketched after this list).
2. QoS monitoring and adaptation. This component measures end-to-end QoS parameters over a finite time period and takes actions based on the measured QoS and the application requirements. It facilitates renegotiation of the QoS parameters between the sender and receiver.
3. QoS-based transport API. This component provides calls for provisioning control (renegotiation and violation notification) and for media transfer that enforces end-to-end network QoS. The ACE framework provides QoS APIs that offer these capabilities; they build on the GQoS and RAPI APIs to enforce end-to-end network QoS.
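As a sense of what a QoS mapping rule might look like, the function below converts a video flow's frame rate and mean encoded frame size into a bandwidth figure; the 20% headroom for packet headers and jitter is purely an illustrative assumption, not a value taken from TAO. Assuming, say, 6 Kbytes per frame at 30 frames per second, this maps to roughly 1.8 Mbps.

    // Map application-level QoS (frame rate, mean frame size) to a network-level
    // bandwidth reservation in bits per second.
    unsigned long
    video_bandwidth_bps (unsigned long frames_per_sec,
                         unsigned long avg_frame_bytes)
    {
      const double headroom = 1.2;   // illustrative allowance for headers and jitter
      return static_cast<unsigned long>
        (frames_per_sec * avg_frame_bytes * 8 * headroom);
    }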
CONCLUDING REMARKS
The demand for high-quality multimedia streaming is growing, both over the Internet and within intranets. Distributed object computing is also maturing at a rapid rate due to middleware technologies like CORBA. The flexibility and adaptability offered by CORBA make it attractive for use in streaming technologies, as long as the requirements of performance-sensitive multimedia applications can be met. This chapter illustrates an approach to building standards-based, flexible, adaptive multimedia streaming applications using CORBA.
There is a great deal of activity in the codec community to design new formats for audio and video transmission. Active research is also being done on new flow and data transfer protocols for multimedia. In such situations, a flexible framework that makes use of the A/V interfaces and also abstracts the network and protocol details is needed to adapt to new developments. In this chapter we present a pluggable A/V protocol framework that provides the capability to adapt rapidly to new flow and data transfer protocols. With the growing demand for real-time multimedia streaming and conferencing, driven by increasing network bandwidth and the spread of the Internet, TAO provides the first freely available, open-source implementation of the CORBA Audio/Video Streaming Service specification, i.e., flow interfaces, point-to-multipoint binding and multipoint-to-multipoint binding for conferencing applications. Our experience with TAO’s A/V implementation indicates that the standard CORBA specification defines an efficient and extensible model for developing flexible, high-performance multimedia streaming applications. While designing and implementing the CORBA A/V Streaming Service, we learned the following lessons:
1. We found that CORBA simplifies a number of common network programming tasks, such as parsing untyped data and performing byte-order conversions.
2. We found that using CORBA to define the operations supported by a supplier in an IDL interface made it much easier to express the capabilities of the application.
3. Our performance measurements revealed that while CORBA provides solutions to many recurring problems in network programming, using CORBA for data transfer in bandwidth-intensive applications is not as efficient as using lower level protocols like TCP, UDP or ATM directly. Thus, an important benefit of the TAO A/V Streaming Service is to give applications the advantages of using CORBA IIOP in their stream establishment and control modules, while allowing the use of more efficient data transfer protocols for multimedia streaming.
4. Enhancing an existing A/V streaming application to use CORBA was a key design challenge. By applying patterns, such as the State, Strategy (Gamma et al., 1995) and Reactor (Schmidt et al., 2000), we found it was much easier to address these design issues. Thus, the use of patterns helped us rework the architecture of an existing MPEG A/V player and make it more amenable to distributed object computing middleware, such as CORBA.
5. Building the CORBA A/V Streaming Service also helped us improve TAO, the CORBA ORB used to implement the service. An important feature added to TAO was support for nested upcalls. This feature allows a CORBA-enabled application to respond to incoming CORBA operations while it is invoking a CORBA operation on a remote object.
During the development of the A/V Streaming Service, we also applied many optimizations to TAO and its IDL compiler, particularly for sequences of octets and the CORBA::Any type.
All the C++ source code, documentation and benchmarks for TAO and its A/V Streaming Service are available at www.cs.wustl.edu/~schmidt/TAO.html.
APPENDIX 1. DESIGN PATTERNS USED IN THE TAO A/V STREAMING SERVICE
This section outlines the intents of all the patterns used in TAO’s A/V Streaming Service and its pluggable A/V protocol framework. The references explore each pattern in greater depth.
• Abstract Factory pattern [Gamma et al., 1995]: This pattern provides an interface for creating families of related or dependent objects without specifying their concrete classes.
• Acceptor-Connector pattern [Schmidt et al., 2000]: This pattern decouples the connection and initialization of cooperating peer services in a distributed system from the processing performed by these peer services once they are connected and initialized.
• Adapter pattern [Gamma et al., 1995]: This pattern allows two classes to collaborate that were not designed originally to work together.
• Component Configurator pattern [Schmidt et al., 2000]: This pattern decouples the implementation of services from the time when they are configured.
• Double Dispatching pattern [Gamma et al., 1995]: In this pattern, when a call is dispatched to a method on a target object from a parent object, the target object in turn makes method calls on the parent object to access certain attributes in the parent object.
• Extension Interface pattern [Schmidt et al., 2000]: This pattern prevents interface bloat and breaking of client code when developers add or modify functionality in existing components. Multiple extensions can be attached to the same component, each defining a contract between the component and its clients.
• Facade pattern [Gamma et al., 1995]: This pattern provides a unified higher-level interface to a set of interfaces in a subsystem that makes the subsystem easier to use.
• Factory Method pattern [Gamma et al., 1995]: This pattern defines an interface for creating objects, but lets subclasses decide which class to instantiate.
• Leader/Followers pattern [Schmidt et al., 2000]: This pattern provides a concurrency model in which multiple threads efficiently demultiplex events received on I/O handles shared by the threads and dispatch event handlers that process the events.
• Layers pattern [Buschmann et al., 1996]: This pattern helps to structure applications that can be decomposed into groups of subtasks in which each group of subtasks is at a particular level of abstraction.
• Reactor pattern [Schmidt et al., 2000]: This pattern demultiplexes and dispatches requests that are delivered concurrently to an application by one or more clients.
• State pattern [Gamma et al., 1995]: This pattern allows an object to alter its behavior when its internal state changes. The object will appear to change its class.
• Strategy pattern [Gamma et al., 1995]: This pattern defines and encapsulates a family of algorithms and makes them interchangeable.
• Template Method pattern [Gamma et al., 1995]: This pattern defines the skeleton of an algorithm in an operation, deferring certain steps to subclasses.
APPENDIX 2. OVERVIEW OF THE CORBA REFERENCE MODEL
CORBA Object Request Brokers (ORBs) [Object Management Group, 2000] allow clients to invoke operations on distributed objects without concern for the following issues:
• Object location: CORBA objects can either be collocated with the client or distributed on a remote server, without affecting their implementation or use.
• Programming language: The languages supported by CORBA include C, C++, Java, Ada95, COBOL, and Smalltalk, among others.
• OS platform: CORBA runs on many OS platforms, including Win32, UNIX, MVS, and real-time embedded systems like VxWorks, Chorus, and LynxOS.
• Communication protocols and interconnects: The communication protocols and interconnects that CORBA runs on include TCP/IP, IPX/SPX, FDDI, ATM, Ethernet, Fast Ethernet, embedded system backplanes, and shared memory.
• Hardware: CORBA shields applications from side effects stemming from differences in hardware, such as storage layout and data type sizes/ranges.
Figure 27 illustrates the components in the CORBA 2.x reference model, all of which collaborate to provide the portability, interoperability and transparency outlined above.
Figure 27: Components in the CORBA 2.x Reference Model
Each component in the CORBA reference model is outlined below:
• Client: A client is a role that obtains references to objects and invokes operations on them to perform application tasks. Objects can be remote or collocated relative to the client. Ideally, a client can access a remote object just like a local object, i.e., object->operation(args). Figure 27 shows how the underlying ORB components described below transmit remote operation requests transparently from client to object.
• Object: In CORBA, an object is an instance of an OMG Interface Definition Language (IDL) interface. Each object is identified by an object reference, which associates one or more paths through which a client can access an object on a server. An object ID associates an object with its implementation, called a servant, and is unique within the scope of an Object Adapter. Over its lifetime, an object has one or more servants associated with it that
implement its interface.
• Servant: This component implements the operations defined by an OMG IDL interface. In object-oriented (OO) languages, such as C++ and Java, servants are implemented using one or more class instances. In non-OO languages, such as C, servants are typically implemented using functions and structs. A client never interacts with servants directly, but always through objects identified by object references.
• ORB Core: When a client invokes an operation on an object, the ORB Core is responsible for delivering the request to the object and returning a response, if any, to the client. An ORB Core is implemented as a run-time library linked into client and server applications. For objects executing remotely, a CORBA-compliant ORB Core communicates via a version of the General Inter-ORB Protocol (GIOP), such as the Internet Inter-ORB Protocol (IIOP) that runs atop the TCP transport protocol. In addition, custom Environment-Specific Inter-ORB Protocols (ESIOPs) can also be defined.
• ORB Interface: An ORB is an abstraction that can be implemented in various ways, e.g., one or more processes or a set of libraries. To decouple applications from implementation details, the CORBA specification defines an interface to an ORB. This ORB interface provides standard operations to initialize and shut down the ORB, convert object references to strings and back, and create argument lists for requests made through the dynamic invocation interface (DII).
• OMG IDL Stubs and Skeletons: IDL stubs and skeletons serve as a “glue” between the client and servants, respectively, and the ORB. Stubs implement the Proxy pattern [Gamma et al., 1995] and provide a strongly-typed, static invocation interface (SII) that marshals application parameters into a common message-level representation. Conversely, skeletons implement the Adapter pattern [Gamma et al., 1995] and demarshal the message-level representation back into typed parameters that are meaningful to an application.
• IDL Compiler: An IDL compiler transforms OMG IDL definitions into stubs and skeletons that are generated automatically in an application programming language, such as C++ or Java. In addition to providing programming language transparency, IDL compilers eliminate common sources of network programming errors and provide opportunities for automated compiler optimizations [Eide et al., 1997].
• Dynamic Invocation Interface (DII): The DII allows clients to generate requests at runtime, which is useful when an application has no compile-time knowledge of the interface it accesses. The DII also allows clients to make deferred synchronous calls, which decouple the request and response portions of two-way operations to avoid blocking the client until the servant responds.
• Dynamic Skeleton Interface (DSI): The DSI is the server’s analogue to the client’s DII. The DSI allows an ORB to deliver requests to servants that have no compile-time knowledge of the IDL interface they implement. Clients making requests need not know whether the server ORB uses static skeletons or dynamic skeletons. Likewise, servers need not know whether clients use the DII or static stubs to invoke requests.
• Object Adapter: An Object Adapter is a composite component that associates servants with objects, creates object references, demultiplexes incoming requests to servants, and collaborates with the IDL skeleton to dispatch the appropriate operation upcall on a servant. Object Adapters enable ORBs to support various types of servants that possess similar requirements. This design results in a smaller and simpler ORB that can support a wide range of object granularities, lifetimes, policies, implementation styles, and other properties.
• Interface Repository: The Interface Repository provides run-time information about IDL
interfaces. Using this information, it is possible for a program to encounter an object whose interface was not known when the program was compiled, yet be able to determine what operations are valid on the object and make invocations on it using the DII. In addition, the Interface Repository provides a common location to store additional information associated with interfaces to CORBA objects, such as type libraries for stubs and skeletons.
• Implementation Repository: The Implementation Repository contains information that allows an ORB to activate servers to process servants. Most of the information in the Implementation Repository is specific to an ORB or OS environment. In addition, the Implementation Repository provides a common location to store information associated with servers, such as administrative control, resource allocation, security, and activation modes.
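The client-side mechanics described above reduce to a few lines of application code. The sketch below is illustrative only: Movie and its play() operation are a hypothetical IDL interface, and the stringified object reference is assumed to arrive via the command line.

    #include "MovieC.h"   // stub header generated by the IDL compiler (hypothetical)

    int main (int argc, char *argv[])
    {
      CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);           // bootstrap the ORB
      CORBA::Object_var obj = orb->string_to_object (argv[1]);     // stringified IOR
      Movie_var movie = Movie::_narrow (obj.in ());                // type-safe downcast
      movie->play ();      // the remote invocation looks like a local call
      orb->destroy ();
      return 0;
    }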
APPENDIX 3. SUPPORTING MULTIPLE ENDPOINT BINDING SEMANTICS IN TAO’S A/V STREAMING SERVICE
The CORBA A/V Streaming Service can construct different topologies for establishing streams between stream endpoints. For instance, one-to-one, one-to-many, many-to-one, and many-to-many sources and sinks may need to be configured in the same stream binding. The choice of stream endpoint binding is dictated by the multimedia application. For example, a video-on-demand application may require a point-to-point binding when sources and sinks are pre-selected. However, a video-conferencing application may require a multipoint-to-multipoint binding to receive from and transmit to various sources and sinks simultaneously. This section illustrates the various stream and flow endpoint bindings that have been implemented in TAO’s A/V Streaming Service and shows how stream endpoints are created and the connections are established. In TAO’s A/V Streaming Service, we have implemented the standard point-to-point and point-to-multipoint bindings of the stream endpoints. In addition, we have used these configurations as building blocks for multipoint-to-multipoint bindings.
Point-to-Point Binding
Below, we describe the sequence of steps during a point-to-point stream establishment, as defined by the CORBA A/V specification and implemented in TAO’s A/V Streaming Service. In our example, we consider the stream establishment in a video-on-demand (VoD) application that is similar to the MPEG player application described in the case-studies section. As shown in Figure 28, the VoD server and VoD client devices establish a stream with two flows, audio and video. The audio flow is carried over TCP and the video flow over UDP. The client must first locate the server MMDevice reference and then pass its own MMDevice as the A party and the server MMDevice as the B party to the StreamCtrl.
Figure 28: Video-on-Demand Consumer and Supplier
Step 1: Endpoint creation. At this point, the VDev and StreamEndpoint are created for this stream from the MMDevices. The client and server applications can choose either a Process_Strategy, where the endpoints are created in a separate process, or a Reactive_Strategy, where the endpoints are created in the same process. The pluggable A/V protocol framework in TAO’s A/V Streaming Service provides flexible Concurrency
Strategies [Mungee et al., 1999] to create the endpoints.
Step 2: Creation of flow endpoints. To create a full profile, an MMDevice can act as a container for FDevs. In this case, the MMDevice will create a FlowProducer or FlowConsumer from the FDev, depending on the direction of the flow specified in the flow specification parameter. The flow direction is always with respect to the A side. Thus, the direction “out” means that the flow goes from the A side to the B side, whereas “in” means that the flow goes from the B side to the A side. In the above case, the server is streaming the data to the client. Therefore, the direction of the flow for both audio and video will be “in” and the MMDevice will create a FlowProducer from the audio and video FDevs on the server and a FlowConsumer from the audio and video FDevs on the client. These FlowProducers and FlowConsumers are then added to the StreamEndpoint using the add_fep operation. The advantage of using the flow interfaces is that the FDevs can be shared across different applications. In our VoD server, for example, the audio and video could run as two separate processes containing only the flow objects, and a control process could add the FDevs from these two processes to the stream. Both flows can then be controlled through the same StreamCtrl interface. This configuration is a more scalable and extensible approach than the implementation of the MPEG player described in the case-studies section, where the audio and video were treated as two separate streams.
Step 3: VDev configuration. The StreamCtrl then calls set_peer on each of the VDevs with the other VDev. For light profiles, multimedia application developers are responsible for implementing the set_peer call to check whether all flows are compatible. For full profiles, the VDev interface is not used because the FlowEndPoints contain these configuration operations.
Step 4: Stream setup. During this step the actual connections for the different flows are established. For light profiles, the flows do not have any interfaces and the flow specification should contain the transfer information for each flow. For example, the following flow specs are typically passed to the bind_devs call from the VoD client:
"audio\in\MIME:audio/mpeg\\TCP=ace.cs.wustl.edu;10000"
"video\in\MIME:video/mpeg\\UDP=ace.cs.wustl.edu;8080"
In these flow specs, the client is offering to listen for a TCP connection and the server will connect to the client. This configuration might be useful if the server is behind a firewall.
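Expressed in code, the light-profile setup above amounts to passing these strings to bind_devs. The sketch below assumes the IDL-generated AVStreams types and that client_dev, server_dev and stream_ctrl already exist; note that each backslash in the flow spec must be escaped in a C++ string literal.

    // Light-profile stream setup for the VoD client using the flow specs above.
    AVStreams::streamQoS qos;             // QoS negotiation elided
    AVStreams::flowSpec flow_spec;
    flow_spec.length (2);
    flow_spec[0] = CORBA::string_dup
      ("audio\\in\\MIME:audio/mpeg\\\\TCP=ace.cs.wustl.edu;10000");
    flow_spec[1] = CORBA::string_dup
      ("video\\in\\MIME:video/mpeg\\\\UDP=ace.cs.wustl.edu;8080");

    stream_ctrl->bind_devs (client_dev.in (),    // A party: the VoD client's MMDevice
                            server_dev.in (),    // B party: the VoD server's MMDevice
                            qos, flow_spec);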
The StreamCtrl calls connect on one of the StreamEndpoints, passing the other StreamEndpoint, the QoS, and the flow spec.
Step 5: Stream QoS negotiation. The StreamEndpoint will first check whether the other StreamEndpoint has a negotiator property defined. If it does, the StreamEndpoint calls negotiate on the negotiator, and the client and server then negotiate the QoS. TAO’s A/V Streaming Service provides a default implementation that can be overridden by applications. The StreamEndpoint then queries the “AvailableProtocols” property on the other StreamEndpoint. If there is no common protocol, the stream setup will fail and the StreamOpDenied exception will be thrown.
Step 6: Light profile connection establishment. The A party StreamEndpoint will then try to set up the stream for all its flows. For light profiles, the following steps are performed for each flow:
1. The StreamEndpoint extracts the flow protocol and data transfer protocol information from the flow spec entry for this flow. If a network address is not specified, a default stream endpoint is picked.
2. The StreamEndpoint then performs the following actions:
a. It goes through the list of flow protocol factories in the AV_Core instance to find whether there is a matching flow protocol. If no flow protocol is specified, it passes the protocol name as the flow protocol string. TAO’s A/V Streaming Service provides “no-op” implementations for all data transfer protocols so that the layering of the architecture is preserved and a uniform API is presented to the application. These no-op flow protocols do not process the frames; they simply pass them to the underlying data transfer protocol.
b. If a flow protocol factory matches the specified flow protocol/data transfer protocol, the StreamEndpoint then checks for the data transfer protocol factory that matches the protocol specified for this flow.
c. After finding a matching data transfer protocol factory, it creates a one-shot acceptor for this flow, passing the flow protocol factory to the acceptor.
d. If the flow protocol factory has an associated control protocol factory, the StreamEndpoint then tries to match the data transfer factory for that, as well.
Figure 29: Acceptor Registry
Figure 29 illustrates the sequence of steps outlined above. In each step, the StreamEndpoint uses base interfaces, such as Protocol_Factory, Transport_Factory, and AV_Acceptor. Therefore, it can be extended easily to support new flow and data transfer protocols. In addition, the address information is opaque to the StreamEndpoint and is passed down to an Acceptor that knows how to interpret it. Moreover, since the flow and data transfer protocols can be linked dynamically via the ACE
Service Configurator mechanisms, applications can take advantage of these protocols by simply changing the name of the protocols in the flow spec.
After completing the preceding steps, the StreamEndpoint calls the request_connection operation on the B StreamEndpoint with the flowspec. This StreamEndpoint_B performs the following steps for each flow:
1. It extracts the flow and data transfer protocol information from the flow spec entry for this flow. If a network address is not specified, a default stream endpoint is picked.
2. The StreamEndpoint then performs the following actions.
3. It finds a flow protocol factory matching the flow protocol specified for this flow and, in the absence of a flow protocol, tries to match a null flow protocol for the specified data transfer protocol.
4. It finds a matching data transfer protocol factory and creates a connector for it. It then calls connect on the connector, passing it the flow protocol factory.
5. Upon establishing a data transfer connection, the connector creates a protocol object for this flow.
6. The flow protocol factory typically creates the application-level callback object and sets the protocol object on the Base_EndPoint interface passed to it.
7. If an address was not specified for this flow, the StreamEndpoint performs similar steps to listen for those flows, extracts the network endpoints, and inserts them into the flowspec to be sent back to the A StreamEndpoint.
After receiving the reverse flowspec, the A StreamEndpoint connects to all the flows for which the B StreamEndpoint is listening and also sets the peer address for connectionless protocols, such as UDP.
Step 7: Full profile connection establishment. In the full profile, the flow specification does not contain the data transfer information for each flow, since the flows are represented by flow interfaces and need not be collocated in the same process. A StreamCtrl can be used to control different flows, each of which could reside on a different machine. In this case, each FlowEndPoint will need to know the network address information. In the full profile stream setup, the bind operation is called on the StreamCtrl, passing the two StreamEndpoints. Figure 30 illustrates the sequence of steps performed for a full profile point-to-point stream setup.
Figure 30: Full Profile Point-to-Point Stream Establishment
Each of these steps is outlined below:
1. Flow endpoint matching. The StreamCtrl obtains the flow names in each StreamEndpoint by querying the “flows” property. For
each flow name, it then obtains the FlowEndPoint using the get_fep method on the StreamEndpoint. If the flowspec is empty, all the flows are considered; otherwise, only the specified flows are considered for stream establishment. It then goes through the list of FlowEndPoints trying to find a match between the FlowEndPoints on the A and B sides. Two FlowEndPoints are said to match if is_fep_compatible returns true. This call checks to make sure that the format and the protocols of the two FlowEndPoints match. Applications can override this behavior to do more complex checks, such as checking for semantic nuances of device parameters. For example, one FlowEndPoint may want only a French audio stream, whereas the other FlowEndPoint may support only English. These requested semantics can be checked by querying the property “devParams” and checking the value for “language.” The StreamEndpoint then tries to obtain a FlowConnection from the StreamCtrl. The application developer can set the FlowConnection object for each flow using the StreamCtrl. All operations on a stream are applied to the contained FlowConnections, so by setting specialized FlowConnections the user can customize the behavior of the stream operations. If the stream does not have a FlowConnection, a default FlowConnection is created and set for that flow. The StreamEndpoint then calls connect on the FlowConnection with the producer and consumer endpoints and the flow QoS.
2. Flow configuration. The FlowConnection calls set_peer on each of the FlowEndPoints during the connect operation, which lets the FlowEndPoints check and set the peer FlowEndPoint’s configuration. For example, a video consumer can check the ColourModel, ColourDepth, and VideoResolution, allocate a window for the specified resolution, and allocate other display resources, such as a colormap. In the case of audio, the quantization property can be used by the consumer to allocate appropriate decoder resources.
3. Flow connection establishment. In this step, the FlowConnection calls go_to_listen on one of the FlowEndPoints with the is_mcast parameter set to false and also passes the flow protocol that was set on the FlowConnection using the use_flow_protocol operation. The FlowEndPoint can raise a failedToListen exception, in which case the FlowConnection calls go_to_listen on the other FlowEndPoint. In TAO’s implementation, go_to_listen performs the sequence of operations shown in Figure 29 to accept on the selected flow protocol and data transfer protocol and also, if needed,
the control protocol for the flow. Since the FlowEndPoint also derives from Base_EndPoint, the Callback and Protocol_Object objects will be set on the endpoint. In the case of a FlowProducer, the get_timeout operation will be invoked on the Callback object to register for timeout events. The FlowConnection then calls connect_to_peer on the other FlowEndPoint with the address returned by the listening FlowEndPoint and also the flow name. In the case of connectionless protocols, such as UDP, the listening FlowEndPoint may need to know the reverse channel to send the data, in which case it can call the get_rev_channel operation to obtain it. When the FlowEndPoint calls connect_to_peer, the sequence of steps shown in Figure 31 occurs to connect to the listening endpoint. With the above sequence of steps, a stream is established in a point-to-point binding between two multimedia devices.
Figure 31: Connector Registry
Point-to-Multipoint Binding
TAO’s point-to-multipoint binding support is essential for handling broadcast/multicast streaming servers. With new technologies, such as Web caching [Fan et al., 1998], multicast updates of Web pages and streaming media files are becoming commonplace. In addition, it has become common for websites to broadcast live events using commercial-off-the-shelf (COTS) technologies, such as RealPlayer. For example, during the 1999 Cricket World Cup, millions of people listened to the live commentaries of the matches from the BBC website. In such cases, it would be ideal for the servers to use multicast technologies, such as IP multicast, to reduce server connections and load. TAO’s point-to-multipoint binding essentially provides such an interface for a source to multicast the flows to multiple sinks, as shown in Figure 32. TAO’s A/V Streaming Service implementation provides a binding based on IP multicast [Deering and Cheriton, 1990]. In this section we explain the sequence of steps that lead to a point-to-multipoint stream establishment in both the light and full profiles.
Figure 32: Point-to-Multipoint Binding
Step 1: Adding a multipoint source. A multipoint source MMDevice must be added before any sinks can be added to the stream. For example, the multicast server could add itself to the StreamCtrl and expose the StreamCtrl interface through a standard CORBA object location service, such as Naming or Trading. If the B party MMDevice parameter to bind_devs is nil, the source is assumed to be a multicast source. As with a point-to-point stream, the endpoints for the source device are created, i.e., the StreamEndpoint and VDev for light profiles, and the StreamEndpoint containing FlowProducers for the full profile. Unlike the point-to-point stream, however, there can only be FlowProducers in the MMDevice. Figure 33 shows the creation of endpoints in point-to-multipoint bindings.
Figure 33: Creation of Endpoints in the Point-to-Multipoint Binding
Step 2: Configure the multicast interface. In the case of a multipoint binding there can be numerous sinks. Therefore, the CORBA A/V Streaming Service specification provides
an MCastConfigIf interface, which is used instead of the point-to-point VDev configurations. Upon addition of a multipoint source, the StreamCtrl creates a new MCastConfigIf interface and sets it as the multicast peer of the source VDev. This design allows the stream binding to use multicasting technologies to distribute the stream configuration information instead of using numerous point-to-point configurations. The MCastConfigIf interface provides operations to set the initial configuration of the stream, e.g., via the set_initial_configuration operation. This operation can be called by the source VDev during the set_MCast_peer call. This information is conveyed to the multicast sink VDev during the set_peer call on the MCastConfigIf when a multicast sink is added. The MCastConfigIf performs the configuration operation using point-to-point invocations on all sink VDevs.
Step 3: Adding multicast sinks. When a sink wants to join a stream as a multicast sink, it can call bind_devs with a nil A party MMDevice. This call will create the endpoints for the multicast sink, i.e., the StreamEndpoint and the VDev. For full profiles, the StreamEndpoint will contain FlowConsumers. For light profiles, the VDev is added to the MCastConfigIf.
Step 4: Multicast connection establishment. The StreamCtrl then calls connect_leaf on the multicast source endpoint for the multicast sink endpoint. In TAO, the connect_leaf operation will throw the notSupported exception. The StreamCtrl will then try the IP multicast model using the multiconnect call on the source StreamEndpoint. The following steps occur when multiconnect is called on StreamEndpoint_A for each flow in the full profile:
1. The StreamEndpoint makes sure that the endpoint is indeed a FlowProducer.
2. It then checks to see if a FlowConnection interface exists for this flow in the StreamCtrl, which is obtained through the Related_StreamCtrl property.
3. In the absence of a FlowConnection, the StreamEndpoint_A will create a FlowConnection and set the multicast address to be used for this flow on the FlowConnection. An application can configure this address by passing it to the StreamEndpoint during its initialization. The A/V specification does not define how multicast addresses are allocated to flows. Thus, TAO’s StreamEndpoint uses a base multicast address, assigns different ports for the flows, and sets the FlowConnection on the StreamCtrl. We ultimately plan to strategize this allocation so applications can decide on the multicast addresses to use for each flow.
4. The StreamEndpoint then calls add_producer on the FlowConnection.
5. The call to add_producer will result in a connect_mcast on the FlowProducer, passing the multicast address with which to connect. The FlowProducer then returns the
address to which it will multicast the flow. If the return address is complete with a network address, then IP multicast is used. In contrast, if the return address specifies only the protocol name, an ATM-style multicast is used.
6. In addition, the FlowConnection creates an MCastConfigIf if one has not been created and sets it as the multicast peer on the FlowProducer. Since the same MCastConfigIf is used for both FlowEndPoint and VDev, the parameters to MCastConfigIf are passed as CORBA objects. It is the responsibility of MCastConfigIf to check whether the peer is a VDev or a FlowEndPoint.
7. The connect_mcast does the actual connection to the multicast address and results in the sequence of steps for multicast accept using the pluggable A/V protocols.
Figure 34: Connection Establishment in the Point-to-Multipoint Binding
Figure 34 illustrates these steps graphically. The steps described above occur for each multipoint sink that is added to the stream. TAO’s pluggable A/V protocol framework is configured with both full profile and light profile objects. It is also configured in the point-to-point and point-to-multipoint bindings. Thus, the control and management implementation objects can be closed for modification, yet new flow and data transfer protocols can be added flexibly to the framework without modifying these interface implementations. A similar set of steps occurs when multiconnect is called on the StreamEndpoint_B.
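In application code, the point-to-multipoint setup described above reduces to two bind_devs calls with a nil peer, as in the hedged sketch below; the MMDevice variables and the empty QoS and flow spec are assumptions of the example.

    AVStreams::streamQoS qos;
    AVStreams::flowSpec flows;

    // The broadcast source registers first, with a nil B party.
    stream_ctrl->bind_devs (source_dev.in (),
                            AVStreams::MMDevice::_nil (),   // nil B party => multipoint source
                            qos, flows);

    // Each sink that joins later passes a nil A party.
    stream_ctrl->bind_devs (AVStreams::MMDevice::_nil (),   // nil A party => multipoint sink
                            sink_dev.in (),
                            qos, flows);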
Multipoint-to-Multipoint Binding

The multipoint-to-multipoint binding is important for applications, such as videoconferencing, where there are multiple source and sink participants. The CORBA A/V Streaming Service specification does not mandate any particular protocol for multipoint-to-multipoint binding, leaving it to implementers to provide this feature. In TAO, we provide a multipoint-to-multipoint binding by extending the point-to-multipoint binding based on IP multicast. We apply a Leader/Follower pattern [Schmidt et al., 2000] for the sources, where the first source that is added to the stream becomes the leader for the multipoint-to-multipoint binding, and every other source becomes a follower. This design implies that all stream properties, such as format and codec, will be selected by the leader.
REFERENCES

Arulanthu, A. B., O'Ryan, C., Schmidt, D. C., Kircher, M., and Parsons, J. (2000). The design and performance of a scalable ORB architecture for CORBA asynchronous messaging. In Proceedings of the Middleware 2000 Conference. ACM/IFIP.
Box, D. (1997). Essential COM. Addison-Wesley, Reading, MA.
Buschmann, F., Meunier, R., Rohnert, H., Sommerlad, P., and Stal, M. (1996). Pattern-Oriented Software Architecture: A System of Patterns. Wiley and Sons.
Chen, S., Pu, C., Staehli, R., Cowan, C., and Walpole, J. (1995). A distributed real-time MPEG video audio player. In Fifth International Workshop on Network and Operating System Support for Digital Audio and Video.
Clark, D. D. and Tennenhouse, D. L. (1990). Architectural considerations for a new generation of protocols. In Proceedings of the Symposium on Communications Architectures and Protocols (SIGCOMM), 200-208, Philadelphia, PA. ACM.
Deering, S. E. and Cheriton, D. R. (1990). Multicast routing in datagram internetworks and extended LANs. ACM Transactions on Computer Systems, 8(2), 85-110, May.
Eide, E., Frei, K., Ford, B., Lepreau, J., and Lindstrom, G. (1997). Flick: A flexible, optimizing IDL compiler. In Proceedings of ACM SIGPLAN '97 Conference on Programming Language Design and Implementation (PLDI), Las Vegas, NV. ACM.
D. D. (1996). Vaudeville: A High Performance, Voice Activated Teleconferencing Application. Department of Computer Science, Technical Report WUCS-96-18, Washington University, St. Louis.
Fan, L., Cao, P., Almeida, J., and Broder, A. (1998). Summary cache: A scalable wide-area Web cache sharing protocol. In SIGCOMM 98, 254-265. SIGS.
Gamma, E., Helm, R., Johnson, R. and Vlissides, J. (1995). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA.
Gill, C. D., Levine, D. L., and Schmidt, D. C. (2001). The design and performance of a real-time CORBA scheduling service. Real-Time Systems, The International Journal of Time-Critical Computing Systems, special issue on Real-Time Middleware, 20(2).
Gokhale, A. and Schmidt, D. C. (1996). Measuring the performance of communication middleware on high-speed networks. In Proceedings of SIGCOMM '96, 306-317, Stanford, CA. ACM.
Gokhale, A. and Schmidt, D. C. (1998). Measuring and optimizing CORBA latency and scalability over high-speed networks. IEEE Transactions on Computers, 47(4).
Gokhale, A. and Schmidt, D. C. (1999). Optimizing a CORBA IIOP protocol engine for minimal footprint multimedia systems. IEEE Journal on Selected Areas in Communications, special issue on Service Enabling Platforms for Networked Multimedia Systems, 17(9).
Harrison, T. H., Levine, D. L. and Schmidt, D. C. (1997). The design and performance of a real-time CORBA event service. In Proceedings of OOPSLA '97, Atlanta, GA. ACM.
Hu, J., Mungee, S. and Schmidt, D. C. (1998). Principles for developing and measuring high-performance Web servers over ATM. In Proceedings of INFOCOM '98.
Hu, J., Pyarali, I. and Schmidt, D. C. (1997). Measuring the impact of event dispatching and concurrency models on Web server performance over high-speed networks. In Proceedings of the 2nd Global Internet Conference. IEEE.
Huard, J. F. and Lazar, A. (1998). A programmable transport architecture with QoS guarantees. IEEE Communications Magazine, 36(10), 54-62.
IETF. (2000a). Differentiated services (diffserv). Retrieved on the World Wide Web: http://www.ietf.org/html.charters/diffserv-charter.html.
IETF. (2000b). Integrated services (intserv). Retrieved on the World Wide Web: http://www.ietf.org/html.charters/intserv-charter.html.
ISO. (1993). Coding of Moving Pictures and Audio for Digital Storage Media at up to About 1.5 Mbit/s. International Organization for Standardization.
Kuhns, F., Schmidt, D. C. and Levine, D. L. (1999). The design and performance of a real-time I/O subsystem. In Proceedings of the 5th IEEE Real-Time Technology and Applications Symposium, 154-163, Vancouver, British Columbia, Canada. IEEE.
Kuhns, F., Schmidt, D. C., O'Ryan, C. and Levine, D. (2001). Supporting high-performance I/O in QoS-enabled ORB middleware. Cluster Computing: The Journal on Networks, Software, and Applications.
McCanne, S. and Jacobson, V. (1995). Vic: A flexible framework for packet video. In ACM Multimedia 95, 511-522, New York. ACM Press.
Meyer, B. (1989). Object-Oriented Software Construction. Prentice Hall, Englewood Cliffs, NJ.
Mungee, S., Surendran, N. and Schmidt, D. C. (1999). The design and performance of a CORBA audio/video streaming service. In Proceedings of the Hawaiian International Conference on System Sciences.
NRG, LBNL. (1995). LBNL Audio Conferencing Tool (vat). Retrieved on the World Wide Web: ftp://ftp.ee.lbl.gov/conferencing/vat/.
Object Management Group. (1999). The Common Object Request Broker: Architecture and Specification. Object Management Group, 2.3 edition.
Object Management Group. (2000). The Common Object Request Broker: Architecture and Specification. Object Management Group, 2.4 edition.
OMG. (1996). Property Service Specification. Object Management Group, 1.0 edition.
OMG. (1997a). Control and Management of A/V Streams Specification. Object Management Group, OMG Document telecom/97-05-07 edition.
OMG. (1997b). CORBAServices: Common Object Services Specification, Revised Edition. Object Management Group, 97-12-02 edition.
O'Ryan, C., Kuhns, F., Schmidt, D. C., Othman, O. and Parsons, J. (2000). The design and performance of a pluggable protocols framework for real-time distributed object computing middleware. In Proceedings of the Middleware 2000 Conference. ACM/IFIP.
Pyarali, I., Harrison, T. H. and Schmidt, D. C. (1996). Design and performance of an object-oriented framework for high-performance electronic medical imaging. USENIX Computing Systems, 9(4).
Pyarali, I., O'Ryan, C., Schmidt, D. C., Wang, N., Kachroo, V. and Gokhale, A. (1999). Applying optimization patterns to the design of real-time ORBs. In Proceedings of the 5th Conference on Object-Oriented Technologies and Systems, San Diego, CA. USENIX.
RealNetworks. (1998). RealVideo Player. Retrieved on the World Wide Web: http://www.real.com.
Ritchie, D. (1984). A stream input-output system. AT&T Bell Labs Technical Journal, 63(8), 311-324.
Schmidt, D. C. (1995). Reactor: An object behavioral pattern for concurrent event demultiplexing and event handler dispatching. In Coplien, J. O. and Schmidt, D. C. (Eds.), Pattern Languages of Program Design, 529-545. Addison-Wesley, Reading, MA.
Schmidt, D. C., Levine, D. L. and Mungee, S. (1998a). The design and performance of real-time object request brokers. Computer Communications, 21(4), 294-324.
Schmidt, D. C., Mungee, S., Flores-Gaitan, S. and Gokhale, A. (1998b). Alleviating priority inversion and non-determinism in real-time CORBA ORB core architectures. In Proceedings of the 4th IEEE Real-Time Technology and Applications Symposium, Denver, CO. IEEE.
Schmidt, D. C., Mungee, S., Flores-Gaitan, S. and Gokhale, A. (2001). Software architectures for reducing priority inversion and non-determinism in real-time object request brokers. Journal of Real-Time Systems, special issue on Real-Time Computing in the Age of the Web and the Internet.
Schmidt, D. C., Stal, M., Rohnert, H. and Buschmann, F. (2000). Pattern-Oriented Software Architecture, Volume 2: Patterns for Concurrent and Networked Objects. Wiley & Sons, New York, NY.
Schmidt, D. C. and Suda, T. (1994). An object-oriented framework for dynamically configuring extensible distributed communication systems. IEE/BCS Distributed Systems Engineering Journal (Special Issue on Configurable Distributed Systems), 2, 280-293.
Schulzrinne, H., Casner, S., Frederick, R. and Jacobson, V. (1994). RTP: A transport protocol for real-time applications. Internet-Draft.
Stevens, W. R. (1993). TCP/IP Illustrated, Volume 1. Addison-Wesley, Reading, MA.
Stevens, W. R. (1998). UNIX Network Programming, Volume 1: Networking APIs: Sockets and XTI, Second Edition. Prentice Hall, Englewood Cliffs, NJ.
Sun Microsystems, Inc. (1992). Sun Audio File Format. Sun Microsystems, Inc.
Vxtreme. (1998). Vxtreme Player. Retrieved on the World Wide Web: http://www.microsoft.com/netshow/vxtreme.
Wollrath, A., Riggs, R. and Waldo, J. (1996). A distributed object model for the Java system. USENIX Computing Systems, 9(4).
102 Sarris & Strintzis
Chapter IV
MPEG-4 Facial Animation and its Application to a Videophone System for the Deaf Nikolaos Sarris and Michael G. Strintzis Aristotle University of Thessaloniki, Greece
INTRODUCTION

This chapter aims to introduce the potential contribution of the emerging MPEG-4 audio-visual representation standard to future multimedia systems. This is attempted by the ‘case study’ of a particular example of such a system--‘LipTelephone’--which is a special videoconferencing system being developed in the framework of MPEG-4 (Sarris et al., 2000b). The objective of ‘LipTelephone’ is to serve as a videophone that will enable lip readers to communicate over a standard telephone connection. This will be accomplished by combining model-based with traditional video coding techniques in order to exploit the information redundancy in a scene of known content, while achieving high fidelity representation in the specific area of interest, which is the speaker’s mouth. Through this description, it is shown that the standard provides a wide framework for the incorporation of methods that had been the object of pure research even in recent years. Various such methods are referenced from the literature, and one is proposed and described in detail for every part of the system being studied. The main objective of the chapter is to introduce students to these methods for the processing of multimedia material, provide to researchers a reference to the state-of-the-art in this area and urge engineers to use the present research methodologies in future consumer applications.
CONVENTIONAL MULTIMEDIA CODING SCHEMES AND STANDARDS

The basic characteristic and drawback of digital video transmission is the vast amount of data that needs to be stored and transmitted over communication lines. For example, a typical black-and-white videophone image sequence with 10 image frames per second and dimensions 176 x 144 needs a transmission rate of about 2 Mbits/sec (8 bits/pixel x [176x144] pixels/frame x 10 frames/sec). This rate is extremely high even for state-of-the-art communication carriers and demands high compression, which usually results in image degradation.
In recent years many standards have emerged for the compression of moving images. In 1992 the Moving Picture Experts Group (MPEG) completed the ISO/IEC MPEG-1 video-coding standard, while in 1994 the MPEG-2 standard was also approved (MPEG, online). These standards contributed immensely to multimedia technological developments, as they both received Emmy Awards and made interactive video on CD-ROM and digital television possible (Chariglione, 1998). The ITU-T (formerly CCITT) organization established the H.261 standard in 1990 (ITU-T, 1990) and H.263 in 1995 (ITU-T, 1996), which were especially targeted to videophone communications. These have achieved successful videophone image sequence transmission at rates of approximately 64 Kbps (Kbits per second).
The techniques employed in these standards were mainly based on segmentation of the image into uniformly sized 8x8 rectangular blocks. The contents of the blocks were coded using the Discrete Cosine Transform (DCT), and their motion in consecutive frames was estimated so that only the differences in their positions had to be transmitted. In this way spatial and temporal correlation is exploited in local areas and great compression rates are achieved. The side effects of this approximation, however, are visual errors on the borders of the blocks (blocking effect) and regularly spaced dots on the reconstructed image (mosquito effect). These effects are more perceptible when higher compression rates are needed, as in videophone applications. In addition, these standards impose the use of the same technique on the whole image, making it impossible to distinguish the background or other areas of limited interest, even when they are completely still (Schafer & Sikora, 1995; IMSPTC, 1998).
These problems are easily tolerated during a usual videoconference session, where sound remains the basic means of communication, but the system is rendered useless to potential hearing-impaired users who exploit lip reading for the understanding of speech. Even for these users, however, image degradation could be tolerated in some areas of the image. For example, the background does not need to be continuously refreshed, and most areas of the head, apart from the mouth area, do not need to be coded with extreme accuracy. It is therefore obvious that multimedia technology at that time could benefit immensely from methods that utilize knowledge of the scene contents, as these could detect different objects in the scene and prioritize their coding appropriately (Benois et al., 1997).
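The arithmetic above is easy to verify; the short Python sketch below (ours, purely illustrative) reproduces the raw-rate calculation for the 176 x 144 example and the compression ratio implied by the roughly 64 Kbps channels targeted by H.261/H.263.

# Raw bit rate of the uncompressed black-and-white videophone sequence
# used as the example above: 8 bits/pixel, 176 x 144 pixels, 10 frames/sec.
bits_per_pixel = 8
width, height = 176, 144
frames_per_second = 10

raw_bps = bits_per_pixel * width * height * frames_per_second
print(f"raw rate: {raw_bps / 1e6:.2f} Mbit/s")                  # about 2 Mbit/s

# Compression ratio needed to fit a 64 Kbps videophone channel.
channel_bps = 64_000
print(f"required compression: {raw_bps / channel_bps:.0f}:1")   # roughly 32:1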
THE MPEG-4 STANDARD

In acknowledgment of the previously mentioned limitations of the existing standards, MPEG launched in 1993 a new standard called MPEG-4, which was approved in Version 1 in October 1998 and in Version 2 in December 1999. MPEG-4 is the first audio-visual representation standard to model a scene as a composition of objects with specific
104 Sarris & Strintzis
characteristics and behavior (MPEG, online; Koenen et al., 1997; Abrantes & Pereira, 1997; Doenges, 1998). Every such object may be real or synthetic and can be coded using a different technique, requiring a different quality in its reconstruction. A whole subgroup of MPEG-4 called Synthetic/Natural Hybrid Coding (SNHC) was formed to develop the framework for combining synthetic and natural content in the same scene (an example scene is shown in Figure 1).

Figure 1: A virtual scene example combining real and synthetic objects (Kompatsiaris, 2000)

In addition, the standard provides a detailed framework for the representation of the human face. This is accomplished by a 3D face object which is associated mainly with two sets of attributes: the Facial Definition Parameters (FDPs) (shown in Figure 2) and the Facial Animation Parameters (FAPs) (a representative subset of which is included in Table 1). The first describes a set of characteristic feature points on the face, while the second provides a set of facial deformation parameters, which have been found to be capable of describing any human expression. Both sets of parameters were built based on the physiology of the human face. In particular, FAPs are based on the study of minimal facial actions and are closely related to muscle actions. The units of measurement for the FAP values are relative to the face dimensions shown in Figure 3. The non-rigid motion dictated by any particular FAP may be uni- or bi-directional on one 3D axis, as shown in Table 1.

Figure 2: The MPEG-4 Facial Definition Points (ISO/IEC, 1998)

At this point it should be made clear that the standard does not provide any means of performing the necessary operations in order to segment the image and detect the positions of feature points. But, having detected the positions of these feature points and calculated the values of the animation parameters, MPEG-4 provides a model-based coding scheme, which multiplexes this information together with the representations of any other objects the scene may contain and transmits them over the communication channel. The MPEG-4 decoder on the receiver side demultiplexes the coded information and displays the reconstructed image.
Table 1: Some of the FAPs as described by MPEG-4 (ISO/IEC, 1998)

FAP name            | FAP description                                                                    | FAP units | Uni- or Bidir | Pos motion | FDP group num | FDP subgrp num
open_jaw            | Vertical jaw displacement (does not affect mouth opening)                         | MNS       | U             | down       | 2             | 1
lower_t_midlip      | Vertical top middle inner lip displacement                                        | MNS       | B             | down       | 2             | 2
raise_b_midlip      | Vertical bottom middle inner lip displacement                                     | MNS       | B             | up         | 2             | 3
stretch_l_cornerlip | Horizontal displacement of left inner lip corner                                  | MW        | B             | left       | 2             | 4
stretch_r_cornerlip | Horizontal displacement of right inner lip corner                                 | MW        | B             | right      | 2             | 5
lower_t_lip_lm      | Vertical displacement of midpoint between left corner and middle of top inner lip | MNS       | B             | down       | 2             | 6
Figure 3: The MPEG-4 Facial Animation Parameter Units (ISO/IEC, 1998)
The coding gain using this technique for the transmission of a human face image sequence is obvious, as the 3D model has to be transmitted only at the beginning of the session, and only the set of 68 FAPs (numbers) has to be transmitted for every frame. Experimental coding methods based only on the 3D face object have been reported to deliver videoconferencing scenes at bandwidths as low as 10 Kbps (Abrantes & Pereira, 1999).
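As a rough, back-of-the-envelope illustration of this coding gain, the sketch below estimates the bit rate of a FAP-only stream. The figure of 10 bits per quantized FAP value and the 10 frames/sec rate are our assumptions for illustration only; they are not mandated by MPEG-4, and real FAP encoders reduce the rate further with masking and arithmetic coding.

num_faps = 68            # full FAP set transmitted per frame
bits_per_fap = 10        # assumed quantization; not specified by the standard
frames_per_second = 10   # the videophone frame rate used earlier in the chapter

fap_bps = num_faps * bits_per_fap * frames_per_second
print(f"FAP-only stream: {fap_bps / 1000:.1f} Kbit/s")   # about 6.8 Kbit/s

Even this crude estimate lands in the same range as the experimental figures of around 10 Kbps quoted above.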
‘LIP-TELEPHONE’: A CASE STUDY

In previous sections the conventional multimedia coding standards were examined and their limitations with respect to demanding videoconferencing systems were pointed out. Furthermore, we highlighted some innovative features of the emerging MPEG-4 standard that may be utilized to overcome the aforementioned limitations. The present section describes in detail how the capabilities of MPEG-4 can be exploited in future videoconferencing systems to improve the quality of the reconstructed multimedia stream even in situations where the communication channel bandwidth is limited. These capabilities are demonstrated by a ‘case study’ of the ‘LipTelephone’ system, which is being developed in the Information Processing Lab under the Greek GSRT 98AMEA23 research project. LipTelephone will be a special videophone system that will enable the lip-reading deaf and hearing impaired to communicate over a standard telephone line. That will be accomplished by providing much better image quality and fidelity in the speaker’s mouth area than that provided by standard videophone systems. The development
106 Sarris & Strintzis
team aims to achieve this by prioritizing the need for image quality and fidelity in different parts of the image, as described in the following sections. It will be shown that the facilities of the MPEG-4 standard are necessary for the proposed system, in parallel with the development and use of special image processing algorithms, which will also be described in detail.
General Description of the System

This application is useful to lip readers because existing videophone systems lack the resolution and accuracy in the image that are necessary for the visual understanding of speech. We believe that these limitations can be overcome with the help of the recently emerged video coding standard MPEG-4. In particular, the purpose of this work is to transmit the area of the speaker’s mouth with high resolution and fidelity, while achieving a very high compression rate for the rest of the image, which is not critical for the visual understanding of speech. The MPEG-4 standard may greatly assist in this cause, as it has provided a framework which allows:
1. Compression, by use of conventional methods, of a whole image, or specially defined parts of the image (i.e., rectangular or even irregular areas of the whole frame).
2. Special coding of a human face, describing its position, shape, texture and expression, by means of two sets of arithmetic parameters: the Facial Definition Parameters (FDPs) and the Facial Animation Parameters (FAPs).
When using this second method, given a three-dimensional (3D) model of the speaker’s head, the only data that have to be transmitted are the values of the Facial Animation Parameters, which have been found to be capable of describing any facial expression. The 3D head model will be known to the receiver end, as it will be transmitted at the beginning of the conference session. Based on past reports on MPEG-4 systems (Sarris et al., 2000b), we estimate that, together with the transmission of the sound which will be needed to make the system usable both to deaf and hearing persons, the required bandwidth for satisfactory quality will lie around 15-20 Kbps, while excellent quality will be assured with 30-40 Kbps. Using the first method to code the area of the mouth, which requires high quality in the decoded image stream, it is expected that the extra bandwidth needed will not exceed 20 Kbps. Therefore, the complete bitstream combining both methods with satisfactory quality is not expected to exceed 48 Kbps, which can be transmitted through a simple ISDN line, widely available to the public at low cost. The design and implementation of this system on dedicated hardware is also in progress by INTRACOM SA, the largest private telecommunication company in Greece (which also participates in the aforementioned research project), to ensure that the final product will work robustly and in real time, providing the required image quality.
As explained in the previous paragraphs, MPEG-4 provides the framework for developing model-based coding schemes, but it is open to the particular techniques that may be used to achieve the image analysis and modeling within these schemes. Thus, for the present system, techniques need to be developed for the following tasks:
1. Three-dimensional modeling of the human face: A ‘neutral’ 3D facial model has to be created and properly labeled.
2. Segmentation of the videophone image and detection of the characteristics of the face: The face and some of its characteristic points have to be detected within the first image frame.
3. Adaptation of a 3D face model to the speaker’s face: The ‘neutral’ face model has to be adapted to the particular speaker according to the characteristic points detected.
4. 2D feature tracking: The characteristic points of the face have to be tracked in subsequent image frames.
5. Estimation of the 3D motion of the speaker’s head: The 3D motion of the whole head must be estimated based on the observed 2D motion of the feature points.
6. Development of the user interface and communication channel handling issues: A window-based user interface has to be developed and the issues of transmitting the encoded stream over the communication channel resolved.

Figure 4: LipTelephone block diagram: Initialization phase (above), tracking phase (below)

The above procedures are highlighted in the block diagram of the system shown in Figure 4. As seen, the system operation is divided into two phases:
1. The initialization phase, where the feature points are extracted from the first image frame and the ‘neutral’ face model is adapted to these characteristics. This process takes place only at the beginning of a conference session.
2. The tracking phase, where the feature points are tracked in subsequent image frames; from these the mouth area is detected and coded, and the 3D motion of the speaker’s head is estimated.
The high resolution coding of the detected mouth area and the creation of the MPEG-4 stream are issues handled fully by the standard. MPEG-4 provides a rate-controlled DCT-based coding algorithm, which is an improved version of the MPEG-2 codec with the added capability of defining areas in the image with specific coding characteristics. Therefore, having detected the position of the mouth area in every frame, this area will be coded with the provided algorithm so that, when reconstructed, it displays the area of the mouth with the desired quality and accuracy. Then, the speaker’s head and mouth objects will be multiplexed in a stream according to the MPEG-4 format. In the ensuing sections each of the above tasks is described in detail by analyzing the techniques being developed and comparing them to other methods described in the literature.
108 Sarris & Strintzis
Three-Dimensional Modeling

The creation of three-dimensional graphic models was initiated by the need to electronically describe real-world scenes in an accurate and realistic way. Since our world is three-dimensional, 3D models are essential for this purpose. These models are composed of vertices (points) in the 3D space and polygonal interconnections between these vertices. The surface formed by these adjacent polygons aims to approximate the real-world surface. The denser these vertices are (i.e., the smaller the polygons), the better the approximation. In the proposed system we must represent the three-dimensional form of a human speaker. Adequate detail in modeling the head, and particularly the face, is essential. Various methods are used for the creation of such models:
1. Purely synthetic creation of the model with the aid of special computer design software (example screenshots of such applications are shown in Figure 5). The modeling starts with an initial rough approximation of the human head (which may vary from a simple 3D ellipsoid to an ordinary head model), which is manually modified until the desired facial characteristics have been created.
2. By use of special hardware. Special laser scanners can produce accurate 3D models of small- to medium-sized artifacts, but the cost of these devices is still too high to integrate in a general-purpose product.
3. By processing of multiview images. Given a set of views (two or more) of a scene, special techniques may be employed to detect feature correspondences and construct a 3D model (Tzovaras et al., 1999). This technique, although cost efficient, as it requires no special hardware, has not yet proved to be robust enough for real-world applications.
4. Knowledge-based modeling. Based on the knowledge of the characteristics of the subject to be modeled (i.e., the speaker’s head), methods have been developed which adapt a similar ‘generic’ model by deforming it in a natural-looking way (Zhang, 1998; Terzopoulos & Waters, 1990; Magnenat-Thalmann et al., 1998; Escher & Magnenat-Thalmann, 1997).

Figure 5: Commercial applications for 3D modeling

Extensive research in the areas of 3D modeling has resulted in the development of a variety of formats for the representation of 3D scenes. Two of the most popular and easy to use are VRML (Virtual Reality Modeling Language) and the Open Inventor format from Silicon Graphics, Inc. Both formats provide a basic structure to describe a simple model, which consists of two lists: a list of nodes/vertices with their 3D coordinates and a list of polygons with the numbers of the nodes they interconnect. In addition to these two lists, many features are provided to render the model easier to use and more realistic. For example,
keywords are provided for the insertion of primitives such as circles, squares, etc.; transformations such as translation, rotation or scaling can be easily applied; and different colors can be defined for separate objects within the model. A simple example of a definition of a 3D model in VRML is shown in Figure 6. Moreover, a vast variety of 3D models are publicly available, including models of the human face of various characteristics (male, female, young, aged, etc.). Most of these models are built based on the knowledge of the basic anatomy of the human face, combining the use of special capturing devices with post-processing by state-of-the-art 3D modeling applications. The resulting model is made even more realistic by the process of texture mapping, which colors the surface of the model with the actual texture of a face provided by its photograph.
In the particular case of the ‘LipTelephone’ system, techniques are sought to detect the positions of the known face characteristics (eyes, nose, mouth, etc.) and deform an initial face model so that it adapts naturally to these features. A single model will be used for this purpose (the one shown in Figure 5), with every node labeled by a special tag showing the part of the face to which it belongs (e.g., mouth, left eye, etc.). This labeling has been achieved with the development of an interactive tool, which highlights one node at a time and requests the user to identify the face part. Obviously, that is an off-line process, which is needed every time a new ‘neutral’ model has to be incorporated into the system.

Figure 6: A simple 3D model in VRML

#VRML V1.0 ASCII
Separator {               # Separates different objects in the model
  Coordinate3 {           # Set of 3D nodes
    point [
      0.000000 0.029621 0.011785,   # node No0
      0.000000 0.025242 0.015459,   # node No1
      0.014062 0.026902 0.008709,   # ....
      0.007543 0.019862 0.012705,
      0.011035 0.018056 0.007652,
      0.012934 0.018587 0.007649,
      0.012934 0.018587 0.007649,
      0.011035 0.018056 0.007652,
      0.015393 0.016240 0.004451,
      0.000000 0.029621 0.011785
    ]
  }
  IndexedFaceSet {        # Set of polygons-triangles
    coordIndex [
      0, 1, 3, -1,
      3, 4, 5, -1,        # a triangle which connects nodes No3-No4-No5
      6, 7, 8, -1,
      6, 8, 9, -1,
      0, 1, 8, -1,
      1, 8, 9, -1
    ]
  }
}
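Because the format is so simple, the two lists can be recovered with a few lines of code. The Python sketch below is our own illustration, not part of any VRML toolkit; it assumes the minimal Coordinate3/IndexedFaceSet layout of Figure 6 and ignores every other VRML construct.

import re

def parse_simple_vrml(text):
    """Extract vertices and faces from a minimal VRML 1.0 model that uses
    the Coordinate3 / IndexedFaceSet layout shown in Figure 6."""
    # Drop '#' comments (the VRML 1.0 comment syntax).
    text = re.sub(r"#[^\n]*", "", text)

    # The vertex list: floats inside 'point [ ... ]', three per vertex.
    point_block = re.search(r"point\s*\[(.*?)\]", text, re.S).group(1)
    floats = [float(v) for v in re.findall(r"-?\d+\.\d+", point_block)]
    vertices = [tuple(floats[i:i + 3]) for i in range(0, len(floats), 3)]

    # The polygon list: integer indices inside 'coordIndex [ ... ]',
    # where -1 terminates each face.
    index_block = re.search(r"coordIndex\s*\[(.*?)\]", text, re.S).group(1)
    faces, face = [], []
    for idx in (int(v) for v in re.findall(r"-?\d+", index_block)):
        if idx == -1:
            faces.append(tuple(face))
            face = []
        else:
            face.append(idx)
    return vertices, faces

For the model of Figure 6 this yields 10 vertices and 6 triangles.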
110 Sarris & Strintzis
For the accurate calculation of the speaker’s face characteristics in 3D, two views at right angles (front and profile) will be needed, as shown in the later sections. This technique has been selected because it does not require the use of special equipment and was seen to be fast and reliable. Although the adaptation of the model needs to be performed only once at the beginning of a conference session, it must be fast enough to avoid annoying the user. To achieve even better performance, a set of previously adapted models will be stored by the system so that the procedure does not have to be repeated for someone who has previously participated in a conference session. The particular face model, shown in Figure 5, has neutral characteristics and medium density in number of vertices (the face consists of 240 vertices). These features make it easy to adapt to other faces and quick to handle through the various stages of the system.
Segmentation of the Image and Facial Feature Extraction

The segmentation of an image in general, and particularly the detection and extraction of facial features, are common problems in many image processing applications such as facial recognition, tracking and modeling. In the special case of videoconferencing images, the detection of the face becomes a simpler task, as the programmer is assisted by the knowledge of the scene content (i.e., it is known that the image scene contains one face approximately in the middle of the scene). In general face detection applications, techniques based on neural networks (NNs) (Lawrence et al., 1997; Sarris et al., 1999b; Sarris et al., 2000c), principal component analysis (PCA) (Craw et al., 1999; Turk & Pentland, 1991), or analysis of color distribution (Chai & Nghan, 1999; Sarris et al., 2000a; Sobottka & Pittas, 1998) have proved to be efficient and reliable. General NN and PCA methods are based on the degree of correlation of the given image with a type of template representation of a face. In this way they manage to detect the presence and position of a face and sometimes even recognize its identity by comparing it to a number of images in a given database. More complicated techniques based on deformable contours (Cootes et al., 1995), or on the use of deformable 2D and 3D models (DeCarlo & Metaxas, 1996), have also been reported to accurately detect the position and shape of a face. These are based on an initial contour approximation of a face (either 2D or 3D), which is deformed by the application of specific forces until the desired fitting to the given face has been achieved.
In the current application, however, the extraction of facial features is necessary for the adaptation of a generic face model. These features consist of the exact locations of particular points, such as the outer left eye (which means the outermost/leftmost point of the left eye), the inner left eye, the leftmost point of the mouth, etc. Therefore, a result in far greater detail is sought than that of detecting a rectangular or ellipsoid area, or even the irregular contour containing the face.
In summary, possible methods for extracting the face region from the rest of the scene can be based either on temporal or on spatial homogeneity. Methods using temporal criteria may separate the user from the background based on his homogeneous movement in the scene. However, such methods will not separate the face from the rest of the visible body (hair, neck, etc.) and will also not work when the user is standing still. Spatial criteria may be applied, as the region should generally be similarly colored, connected and have an ellipsoidal shape. However, difficulties arise in their implementation because dark areas, like eyes and facial hair, do not have the same color as the rest of the face. Moreover, the skin color may differ among users, and the proximity of the neck of the user, as well as objects in the background with color similar to that of the skin, can produce confusion.
In order to deal with these difficulties, a semi-automatic method was implemented to
train a neural network to recognize image blocks contained in the facial region (a more detailed analysis is given by Sarris et al., 2000c). This is achieved by directing the user to position his face in a highlighted area, as shown in Figure 7. The contents of this area are then used as positive examples for the training of a feed-forward neural network, while the contents of the rest of the image scene (outside the highlighted area) are used as negative examples for the training of the same neural network.

Figure 7: Selection of the neural network training area

To avoid dependencies on varying illumination of the scene, only the chrominance components Cb and Cr from the YCbCr color space are used for the description of the contents of the scene. Moreover, the feature vectors used for the training of the neural network are not built solely from the direct chrominance values, because the resulting neural network would then not be capable of separating skin pixels from similarly colored pixels in the background of the scene. Rather, both the facial and the background areas are broken into consecutive rectangular blocks, from which the Cb and Cr histograms are quantized and concatenated to form the training vectors.
The neural network, trained from the image shown in Figure 7, is used to determine facial blocks in subsequent frames captured from the camera. This is accomplished by dividing every such frame into rectangular blocks and constructing a feature vector from each block in the same way as in the training process (i.e., the Cb and Cr histograms are built, quantized and concatenated). The neural network is then consulted with each feature vector to decide whether the particular block belongs to the face of the user or not. Results are quite satisfactory (two samples are shown in Figure 8), although a small number of blocks are misclassified. To compensate for these errors, a number of post-processing operations, similar to the ones proposed in Chai and Nghan (1999), are performed on the output of the neural network.

Figure 8: (from left to right) Camera image, facial region as identified by the neural network, corrected facial region after post-processing operations

Finally, a connected component algorithm is employed to find all connected regions and accept only the one with the largest area,
112 Sarris & Strintzis
based on the knowledge that only one face should be present in the scene. Typical results from these post-processing operations are shown in Figure 8.
Having detected the facial region in the image, specific operators are implemented to locate the exact positions of the eyes, eyebrows, nose and mouth. To locate the position of the eyes robustly, a temporal criterion was employed by detecting the blinking of the eyes, as proposed by Bala et al. (1997). To detect the exact eye region within each window, the connected regions are sought within the window and the region with the greatest area is considered to represent the eye. The proposed algorithm proves to be robust in all situations where the user’s eyes are clearly visible in the scene. A sample result is shown in Figure 9. Having reliably located the positions of the eyes, we employ an iterative adaptive thresholding algorithm on the image luminance Y, within the facial area, until the connected thresholded areas are found at the expected positions. The thresholding operation is performed adaptively according to the mean luminance value observed within the detected facial area. False features are eliminated by exploiting geometric knowledge of the structure of the face.
Figure 9: Contour refinement and blink detection
Figure 10: The facial regions and features extracted by thresholding in the frontal and profile image views. In the profile view the rectangles show the search areas for the particular features.
Sample results for the frontal and profile views of a face are shown in Figure 10. The exact positions of the feature points are found as the edge points of the detected areas.
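To make the block-classification step described above concrete, the following Python sketch builds the quantized Cb/Cr histogram feature vectors and trains a small feed-forward network on them. It is a simplified illustration: the 16 x 16 block size and 16-bin histograms are our assumptions, and scikit-learn's MLPClassifier is used as a stand-in for the neural network described in the text.

import numpy as np
from sklearn.neural_network import MLPClassifier

def rgb_to_cbcr(block):
    """ITU-R BT.601 chrominance components of an RGB block (H x W x 3, uint8)."""
    r = block[..., 0].astype(float)
    g = block[..., 1].astype(float)
    b = block[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def block_feature(block, bins=16):
    """Quantized Cb and Cr histograms, concatenated into one feature vector."""
    cb, cr = rgb_to_cbcr(block)
    hist_cb, _ = np.histogram(cb, bins=bins, range=(0, 256))
    hist_cr, _ = np.histogram(cr, bins=bins, range=(0, 256))
    feat = np.concatenate([hist_cb, hist_cr]).astype(float)
    return feat / feat.sum()                      # normalize by the block size

def blocks(frame, size=16):
    """Yield the top-left corner and pixels of each size x size block."""
    h, w = frame.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield (y, x), frame[y:y + size, x:x + size]

def train_skin_classifier(frame, face_mask):
    """Blocks inside the highlighted area (face_mask True) are positive
    examples; all other blocks of the same frame are negative examples."""
    X, y = [], []
    for (py, px), blk in blocks(frame):
        X.append(block_feature(blk))
        y.append(int(face_mask[py, px]))
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    return clf.fit(np.array(X), np.array(y))

At run time the same block_feature function is applied to every block of a new frame, and the classifier's prediction marks each block as facial or background before the post-processing and connected-component steps described above.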
Three-Dimensional Facial Model Adaptation

This section describes a method for the adaptation of a generic three-dimensional face to the particular person’s face viewed by the camera. Several methods have been proposed in the literature for the deformation of 3D face models. In Lee et al. (1995) and Eisert and Girod (1998), the required geometry of the face is captured by a 3D laser scanner. In Lee et al. (1995) the generic geometric model is deformed by applying physical spring forces to predetermined nodes according to the positions of the corresponding features in the target face geometry. In Eisert and Girod (1998) the generic model is built using triangular B-spline patches, and the deformation involves the displacement of the spline control points which correspond to facial features. Lee et al. (1999) deformed a generic face model using a three-dimensional geometric transformation aiming to fit the two orthogonal views of the 3D model to the corresponding photographs taken from a target face. The transformation used, called Dirichlet FFD, was proposed by Moccozet and Magnenat-Thalmann (1997) and is a type of the Free Form Deformation (FFD) introduced by Sederberg and Parry (1986). Pighin et al. (1997) estimated the required 3D positions of the facial features from a series of captured image frames of the target face and transformed the generic model by applying an interpolation function based on radial basis functions. Information from one view of the target face is utilized by Zhang (1998) to measure the face, eye and mouth dimensions, which are used to adapt a simple 3D face model by rigidly transforming the whole face and locally correcting the position and orientation of the face, eyes and mouth. Geometric assumptions have to be made, however, as the 3D characteristics of the features cannot be totally deduced from only one view.
Our approach (detailed in Sarris & Strintzis, 2000) differs from those of all the above methods in that it treats the facial model as a collection of facial parts, which are allowed to be deformed according to separate affine transformations. This is a simple method and is effective for face characterization, because the physiological differences in characteristics between faces are based on precisely such local variations, i.e., a person may have a longer or shorter nose, narrower eyes, etc.

Figure 11: Local adaptation of the right eyebrow (upper part) and right eye (lower part)

The required face geometry is captured from two orthogonal views of the face acquired by the camera and segmented as described in the previous section. Having located the facial features as described in the previous section, their edge points (e.g., leftmost-eye, rightmost-eye, etc.) are defined as the feature points of the face. Having the positions of these feature points in 2D, standard geometrical calculations, assisted by least squares methods, provide
114 Sarris & Strintzis
the required positions of the feature points in the 3D space. Having calculated these required 3D positions for the facial feature points, the 3D facial model needs to be deformed so that its corresponding nodes are displaced towards these positions while maintaining the natural characteristics and symmetry of the human face.
The first part of this adaptation process involves a rigid transformation of the model. Thus, the model is rotated and translated to match the pose of the real face. The rotation and translation transformations are calculated by a slight modification of the ‘spatial resection’ problem in photogrammetry (Haralick & Shapiro, 1993). A non-linear minimization is performed so that the rotation and translation transformations bring the model feature nodes as close as possible to their required positions. The rigid transformation may align the generic model with the required face and scale it to meet the total face dimensions. The local physiology of the face, however, cannot be altered in this way. Thus, in the second step of the adaptation process, the model is transformed in a non-rigid way, aiming to further displace the feature nodes, bringing them as close as possible to their exact calculated positions while the facial parts retain their natural characteristics. To perform this adaptation, the model is split into face parts (left eye, right eye, mouth, etc.) and a rigid adaptation is performed on every part separately. This means that every 3D face part is rotated, translated and stretched so as to minimize the distances of the feature points belonging to that face part from their required positions. This is accomplished with the following transformations:
1. Centering at the origin: The center of the face part is found as the 3D center of the feature nodes contained, and the whole part is translated towards the origin so that this center is on the origin. The same is done with the set of required feature positions (i.e., their center is found and they are translated towards the origin in the same manner).
2. Alignment: The face part is rotated around the origin so that three lines connecting three pairs of feature nodes are aligned with the corresponding lines connecting the required feature positions. This is accomplished by minimizing the differences in the gradients of these lines.
3. Stretching: The face part is scaled around the origin with different scale factors for every axis so that the distances of the transformed nodes from their required positions are minimized.
4. Translation: After the stretching transformation, the face part is translated back towards its original position by adding the position vector of the face part center calculated and subtracted in Step 1.
Results of these steps for two face parts are shown in Figure 11. Between all face parts, one series of nodes is defined as border nodes (i.e., these nodes do not belong to any face part). The positions of these nodes after the deformation are found by linear interpolation of the neighboring nodes belonging to face parts. This is done to ensure that a smooth transition is made from one facial part to the other, and possible discontinuities are filtered out. Thus, the final deformed model adapts to the particular characteristics implied by the feature points (e.g., bigger nose or smaller eyes) while keeping the generic characteristics of a human face (e.g., smoothness of the skin and symmetry of the face).

Figure 12: Rigid (upper) and non-rigid (lower) adaptation of the face model shown on the front and profile views

Figure 12 shows the results of the rigid and non-rigid adaptation procedures.
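The per-part adaptation can be sketched in a few lines of NumPy. The sketch below is an approximation of the method described above, not a re-implementation: the alignment step uses a standard SVD-based least-squares rotation fit instead of the line-gradient matching, the stretching step simply matches the mean absolute spread per axis, and the part is re-centred on the required feature positions.

import numpy as np

def adapt_face_part(part_nodes, target_points):
    """Adapt one face part (N x 3 feature nodes) towards its required 3D
    feature positions: centering, rotation, per-axis stretching, translation."""
    # 1. Centering at the origin.
    c_nodes = part_nodes.mean(axis=0)
    c_target = target_points.mean(axis=0)
    p = part_nodes - c_nodes
    q = target_points - c_target

    # 2. Alignment: least-squares rotation (SVD / Kabsch), used here in place
    #    of the line-gradient alignment described in the text.
    u, _, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    p = p @ rot.T

    # 3. Stretching: a separate scale factor per axis so that the coordinate
    #    spreads of the part match those of the required positions.
    scale = (np.abs(q).sum(axis=0) + 1e-9) / (np.abs(p).sum(axis=0) + 1e-9)
    p = p * scale

    # 4. Translation: here the part is re-centred on the target positions.
    return p + c_target

Given the feature nodes of, say, the right-eye part and their required 3D positions, this returns the rotated, stretched and re-positioned node coordinates; border nodes between parts would then be interpolated from their neighbours, as described above.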
2D Feature Tracking and 3D Motion Estimation

This section presents a method that tackles the problem of tracking the rigid and non-rigid 3D motion of the face. Among the techniques that have been proposed for the estimation of facial motion (DeCarlo & Metaxas, 1996; Essa et al., 1994; Guenter et al., 1998; Pighin et al., 1999; Terzopoulos & Waters, 1993), most employ dense optical flow estimation algorithms and are not suitable where reliable real-time computation is required, because of their complexity. Feature-based tracking methods have generally been avoided, as they tend to give unreliable results. Successful attempts to simplify this task by using artificial markers or heavy make-up on the tracked faces (Guenter et al., 1998; Terzopoulos & Waters, 1993) proved the suitability of feature-based tracking; however, these techniques are not applicable to real, unrestricted systems. The minimization of an error function over the set of facial expressions and face positions spanned by the 3D model, by altering the values of facial position and expression parameters, is proposed by Pighin et al. (1999). A similar approach, where the facial expressions are represented by a set of eigenvectors, is proposed by Valente and Dugelay (2000). Although in theory these two methods can directly yield the required values of facial motion in the parametric space introduced, they are based on the assumptions that an extremely realistic rendering of the model is available and that lighting conditions can be perfectly compensated.
In the above methods, several parametric models of the face have been introduced for the interpretation of facial motion and expressions. However, the only standardized facial animation parameter space, recently introduced by MPEG-4, i.e., the FAPs, has not been sufficiently explored. Only one method (Eisert & Girod, 1998) has been proposed in the literature for the extraction of FAPs. This is based on a linearized dense optical flow motion estimation method that utilizes a spline-based 3D facial model for the extraction of FAPs. This model, however, can only be generated using specialized 3D scanning equipment, while the motion estimation method proposed suffers from the aforementioned complexity of dense optical flow.
The present section first proposes a technique to improve the reliability of feature-based tracking methods for 3D facial motion estimation. This is achieved by introducing
116 Sarris & Strintzis
a criterion for assessing the reliability of tracked features that correspond to 3D model nodes, which is combined with two other criteria of accuracy of a feature tracking algorithm, to provide a measure of confidence for the estimated position of every feature. Furthermore, the framework standardized by MPEG-4 is utilized to provide a non-rigid facial motion model, which guides the feature tracking algorithm. These techniques are integrated into a system, which, without the use of any face markers, estimates the 3D rigid and non-rigid motion of points of a human face and through these, the Facial Animation Parameters (FAPs).
Facial Feature Tracking

Various methods have been proposed for the tracking of motion in images, based on dense optic flow, block matching, Fourier transforms or pel recursive methods, as described by Dufaux and Moscheni (1995). In the present work a feature-based approach has been preferred, both because of its speed and because of its natural suitability to this knowledge-based node-point correspondence problem (Sarris et al., 1999a). In general, the features to be tracked may be corners, edges or points, and the selection of the particular set of features must be such that they may be tracked easily and reliably. A widely used feature-based algorithm is that proposed by Kanade, Lucas and Tomasi, often referred to as the KLT algorithm in the literature (Tomasi & Kanade, 1991; Shi & Tomasi, 1994). The KLT algorithm is based on the minimization of the sum of squared intensity differences between a past and a current feature window, which is performed using a Newton-Raphson minimization method.
Although the KLT algorithm has proved to yield satisfactory results on its own, in the present system it is very important to assess the results of tracking so that the optimum set of feature correspondences is used in the stage of the model adaptation. For this reason, the tracked feature points are sorted according to two criteria introduced by, and closely related to, the operation of the KLT algorithm, and a third criterion related to the nature of the 3D model to be adapted. These criteria are defined as follows:
Trackability: The ability of a 2D feature point to be tracked reliably, which is related to the roughness of the texture within its window (Tomasi & Kanade, 1991; Shi & Tomasi, 1994).
Dissimilarity: The sum of squared intensity differences within the feature window W, between the two consecutive image frames I and J. Dissimilarity indicates how well the feature has been tracked (Tomasi & Kanade, 1991; Shi & Tomasi, 1994).
Reliability: Every tracked feature point corresponds to a node (feature node) on the 3D face model, which has been adapted to the face located in the previous image frame. The reliability metric of a node is defined as cos θ, where θ is the angle formed by the optical axis and the normal vector to the surface at the node in question, as seen in Figure 13.

Figure 13: The tangent plane and normal vector of a node in relation with the optical axis

These three criteria are combined to provide a measure of confidence for a feature point that has been tracked. According to this measure we choose the best-tracked feature points for use in the next step, which is the estimation of facial motion.
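The three criteria translate almost directly into code. The NumPy sketch below is a simplified illustration (a fixed feature window and simple image gradients) rather than the exact KLT formulation, and the reliability term is computed as the cosine between the node normal and the direction from the node to the camera centre.

import numpy as np

def trackability(window):
    """Minimum eigenvalue of the 2x2 gradient (structure) matrix of a feature
    window: large values mean enough texture for the window to be tracked."""
    gy, gx = np.gradient(window.astype(float))
    g = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return np.linalg.eigvalsh(g)[0]

def dissimilarity(window_prev, window_curr):
    """Sum of squared intensity differences between the feature window in
    frames I and J: small values mean the feature was tracked well."""
    d = window_prev.astype(float) - window_curr.astype(float)
    return np.sum(d * d)

def reliability(node_pos, node_normal, camera_centre):
    """cos(theta) between the surface normal at the corresponding 3D model node
    and the viewing direction; negative values mean the node faces away."""
    view = camera_centre - node_pos
    view = view / np.linalg.norm(view)
    n = node_normal / np.linalg.norm(node_normal)
    return float(np.dot(n, view))

A combined confidence can then be formed, for example, by keeping only features whose trackability and reliability exceed thresholds and ranking the survivors by dissimilarity.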
Rigid 3D-Motion Estimation

Having found, according to these criteria, the set of the most reliably tracked positions of the feature points, we need to move the head model in such a way that its feature nodes project as closely as possible to these positions. This means that we need to compute the 3D motion parameters (R, T) which will minimize the distance of these projections from their positions tracked in the image. Many methods have been proposed in the literature for the solution of this problem (Aggarwal & Nandhakumar, 1988), and the accuracy of their solution depends highly on the reliability of the given feature correspondences. Having ensured in the previous step that the selected feature correspondences are suitable for the given tracking method, we employ the method proposed by Tsai and Huang (1984), improved by Weng et al. (1989), and enhanced to include the focal length of our camera, in order to estimate R and T up to a scaling factor. Further, using the 3D model coordinates in the previous frame, we compute this scaling factor and determine an absolute solution for R and T. Furthermore, this solution is optimized by an iterative minimization algorithm, as recommended by Weng et al. (1993). Having determined R and T, the new rigidly transformed 3D positions for all the model nodes are known.
As stressed in the previous section, the tracked feature points are ranked according to their trackability, dissimilarity and reliability, and only the ones with sufficiently high rank are used for the estimation of the 3D rigid motion.

Figure 14: Example of the assessment of tracked features, the rigid fitting of the 3D model and the relocation of the mistracked (rejected) features by projection of the corresponding transformed model nodes (rejected feature points are shown by an empty box, accepted by a full box and the corrected positions by crosses)

Thus, many features may
118 Sarris & Strintzis
not be accurately positioned by the tracking procedure in the current frame. The correct positions of these feature points are calculated from the estimated R and T matrices. In this way, these points are relocated in the image, as shown in Figure 14, so that they can be tracked in subsequent frames. Obviously, a reliable relocation always requires a reliable 3D motion estimation. Moreover, as the possibility always exists that the real features are not visible due to occlusions by head rotation, we check again the sign and magnitude of the reliability metric. If it is negative (θ > 90°), the surface at the node is oriented away from the camera and thus the feature point is invisible in the image frame. Therefore, this feature point is not taken into consideration for tracking in the next frame. Even if the reliability metric is positive, its magnitude has to be greater than a threshold, as there is a high possibility that the re-projected feature lies close to the border of the face. In that case, a small calculation error may cause the feature to be assigned to the background, which of course is undesirable. This method has proven to be reliable even under extreme rotation conditions where most other methods fail.

Figure 15: Examples of tracking along a line: Black boxes: rigid positions of feature points; White boxes: non-rigid positions of feature points

Figure 16: Results of rigid and non-rigid tracking

In Figure 16 the tracking results are shown for a moving head image sequence. There, it is evident that although the feature tracking module may fail in the correct estimation of the positions of some feature points, our method succeeds in selecting the most accurately tracked points and thus calculates the accurate 3D rigid motion of the head.
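Relocating a rejected feature amounts to applying the estimated rigid motion to its 3D model node and projecting the result back onto the image plane. The minimal pinhole-projection sketch below uses our own notation, with the focal length treated as a known calibration value.

import numpy as np

def relocate_feature(node_xyz, R, T, focal_length):
    """Apply the estimated rigid motion (R, T) to a 3D model node and project
    it with a simple pinhole model to get its corrected 2D image position."""
    p = np.asarray(R, dtype=float) @ np.asarray(node_xyz, dtype=float) + np.asarray(T, dtype=float)
    u = focal_length * p[0] / p[2]
    v = focal_length * p[1] / p[2]
    return np.array([u, v])

The relocated position then seeds the tracker in the next frame, subject to the reliability check on cos θ described above.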
Non-Rigid 3D Facial Motion Estimation

As explained in the beginning of the chapter, the MPEG-4 standard has defined a set of 68 parameters (the Facial Animation Parameters; FAPs) to fully describe the allowed rigid and non-rigid motions that can be observed on a human face with respect to a so-called ‘neutral pose.’ Three FAPs are used to describe the rotation of the head around the neck (FAPs 48-50: head pitch, yaw and roll, respectively). After estimating the rigid motion of the head as described in the previous section, these FAPs can be readily computed from the Euler angles of the rotation matrix R. Most of the other FAPs affect only one FDP, describing its non-rigid motion, but one FDP may be affected by more than one FAP. For each of these parameters, the direction of movement is specified by the MPEG-4 standard, while the FAP value describes the algebraic value of the non-rigid motion. Once the rigid part of the motion has been successfully recovered by the proposed approach, the local 2D motion estimates provided by the feature tracker can be used to determine the non-rigid motion of the corresponding feature point. Although only the 2D motion of each feature point is available, its 3D non-rigid motion can be recovered, since the possible axes of movement are specified by the MPEG-4 standard. Three cases may be identified for the evaluation of the possible number of degrees of freedom for a non-rigid feature point:
The feature point can move in any of three possible directions. This happens only for the eyes, tongue and jaw. However, the eyes can be handled as separate rigid objects, while the tongue and jaw are given three degrees of freedom only to support exaggerated movement (e.g., by cartoon characters). Thus, in our case of real human face image sequences, the possible number of degrees of freedom will always be smaller than three.
The feature point can only move in two directions (two degrees of freedom). For example, FDP 2.1 can be translated along the X-axis using FAP 15 and along the Z-axis using FAP 14. If (x, y, z) are the initial values of the corresponding wireframe node in the neutral pose (e.g., time 0), then (x+F15, y, z+F14) is the position of the same node at frame t if only non-rigid motion exists. However, under the presence of rigid motion, the resulting position of the 3D node will be:
\[
\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = R \begin{bmatrix} x + F_{15} \\ y \\ z + F_{14} \end{bmatrix} + T
\]
Using the projection equations we obtain a system of two equations in two unknowns, from which the unknown FAP values F15 and F14 can be determined.
• The feature point can only move in one direction (one degree of freedom). This case can be dealt with in a similar way, resulting in an over-determined system of two equations in one unknown.

Note that the KLT tracking procedure has two degrees of freedom. Although such a tracker is suitable for FDP nodes that are affected by two FAPs, it is not suitable for FDP nodes affected by only one FAP (one degree of freedom). For such nodes, it would be more suitable to design a 1D feature tracker, which determines the feature position along a 2D line. This line is determined from the above equation and the projection equation on the image plane. For this reason, we have constrained the KLT tracking algorithm to optionally search along a single 2D line, which
is the projection of the calculated 3D axis of permitted non-rigid motion. Examples of non-rigid tracking along a line are shown in Figure 15.

A moving head image sequence exhibiting both rigid and non-rigid motion was used to demonstrate the results of our method. The R and T matrices calculated by the rigid-motion estimation module were used to transform the head model adapted to the face in frame 1, which was then projected onto every subsequent frame to illustrate the accuracy of the proposed technique. The non-rigid motion estimation method produced an MPEG-4-compatible FAP file, which was given to an MPEG-4 FAP player to animate a synthetic face and illustrate the correspondence between the captured image sequence and the synthetically produced face animation. Both from the projection of the 3D model and from the observation of the rotation of the synthetic face, it is evident that both the rigid and the non-rigid motion of the head are estimated with adequate accuracy in every frame.
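To make the two-FAP case concrete, the following Python sketch solves for a pair of FAP values (such as F15 and F14 in the example above) from the tracked 2D position of a node, given the estimated rigid motion. It is a minimal sketch under assumed conventions: a simple perspective projection u = f·x'/z', v = f·y'/z', the permitted motion axes supplied as unit vectors, and hypothetical function and variable names; it is not the authors' implementation.

```python
import numpy as np

def solve_two_fap(node_neutral, dir1, dir2, R, T, u, v, f=1.0):
    """Solve for the two unknown FAP values of a node that may translate
    along two standard-defined axes (dir1, dir2), given its tracked 2D
    image position (u, v), the estimated rigid motion (R, T) and a simple
    perspective projection u = f*x'/z', v = f*y'/z'.  Illustrative sketch."""
    a = R @ node_neutral + T       # rigidly moved neutral position
    d1 = R @ dir1                  # the permitted motion axes rotate with the head
    d2 = R @ dir2
    # Multiplying the projection equations by z' makes the system linear
    # in the two FAP values:  u*z' = f*x'  and  v*z' = f*y'.
    A = np.array([[u * d1[2] - f * d1[0], u * d2[2] - f * d2[0]],
                  [v * d1[2] - f * d1[1], v * d2[2] - f * d2[1]]])
    b = np.array([f * a[0] - u * a[2],
                  f * a[1] - v * a[2]])
    fap1, fap2 = np.linalg.solve(A, b)
    return fap1, fap2
```

For a node with a single degree of freedom, the same construction yields two equations in one unknown, which can be solved in the least-squares sense (e.g., with np.linalg.lstsq).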
User-Interface and Networking Issues

The interface of the system has been designed to be user friendly and simple, providing the following operations:
• Establishment of a connection over a telephone line (standard or ISDN) or over the Internet. The users can dial a telephone number and establish a connection using a standard modem over a standard telephone line, or provide an IP address if an Internet connection exists.
• Real-time transmission and reception of video. The system is able to transmit and receive the video images of the speakers in real time.
• Real-time transmission and reception of audio. The system is able to transmit and receive audio in case one of the users is hearing. When this feature is disabled, the bandwidth gained is exploited to improve the video image.
• Real-time transmission and reception of text. A text window is provided where the users may exchange messages in real time.
• Real-time transmission and reception of graphics. A ‘whiteboard’ window is provided where users may draw or paste areas of their desktop. In this way, many concepts can be explained with a diagram or a sketch.

An initial Graphical User Interface (GUI), which has been developed to incorporate the menus and buttons needed for these operations, is shown in Figure 17. The leftmost button below the video image is used to place a call and establish a connection with another LipTelephone user. The next three buttons to the right launch the helper application windows (whiteboard, chat and file transfer), while the two on the right are used to enable or disable video and audio. The ‘File’ menu provides operations to capture still images or video and save them to the local disk. The ‘Options’ menu controls the settings for the provided video and audio. The ‘Drivers’ menu lets the user select the capture card and camera to be used (if more than one is available). The additional windows needed during operation are shown in Figure 18.

Figure 17: The main window of the Graphical User Interface

For the communication of data between the two parties, we have selected the Real-time Transport Protocol (RTP) (Schulzrinne, 1996) over an IP connection, which is established either over a telephone line (standard or ISDN) or over the Internet.
Figure 18: The helper application windows as provided by NetMeeting
RTP is the Internet-standard protocol for the transport of real-time data and has been preferred over TCP because it provides functionality specifically suited to carrying real-time content, such as timestamps and control mechanisms for synchronizing different streams with timing properties. RTP consists of a data part and a control part. The data part is a thin protocol providing support for applications with real-time properties such as continuous media (e.g., audio and video), including timing reconstruction, loss detection, security and content identification. The control part provides support for real-time conferencing of groups of any size within the Internet. This support includes source identification and support for gateways such as audio and video bridges as well as multicast-to-unicast translators. It also offers quality-of-service feedback from receivers to the multicast group and support for the synchronization of different media streams.
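As a rough illustration of what carrying such data over RTP involves, the sketch below packs the fixed 12-byte RTP header defined by the RTP specification and sends a single UDP packet. The payload type value, addresses and port are placeholders, and a real application (including the one described here) would rely on an RTP library together with the RTCP control part rather than hand-built packets.

```python
import socket
import struct

def build_rtp_header(payload_type, seq, timestamp, ssrc, marker=False):
    """Build a minimal 12-byte RTP fixed header (RFC 1889 / RFC 3550):
    version=2, no padding, no extension, no CSRC entries."""
    version = 2
    byte0 = version << 6                          # padding=0, extension=0, CC=0
    byte1 = ((1 if marker else 0) << 7) | (payload_type & 0x7F)
    return struct.pack('!BBHII', byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

# Example: one packet of coded FAP data sent over UDP (all values are placeholders).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
header = build_rtp_header(payload_type=96, seq=0, timestamp=0, ssrc=0x12345678)
fap_payload = b'...'                              # arithmetically coded FAP stream
sock.sendto(header + fap_payload, ('192.0.2.1', 5004))
```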
Results

Results from our initial experiments with the system have shown satisfactory image quality at a frame rate of 10 fps (frames per second). User tests with hearing-impaired subjects have shown that 10 fps is adequate for lip-reading, provided that the image is clear within the area of the lips. In a CIF-sized image (352x288 pixels), the bounding box of the lips occupies less than 100x50 pixels for the usual position of the speaker’s face (i.e., positioned so that the face covers almost the whole area of the image). Adequate image quality (25 dB minimum) was achieved in our experiments for the lip area at a coding rate of 0.5 bpp (bits per pixel). This means that the bandwidth required for the lip area is around 20 Kbps (Kbits per second) at 10 fps. This is the main bandwidth overhead of the system, as the sound can be coded at 2 Kbps with reasonable quality (11 KHz sampling rate), and for the rest of the head only six floating point numbers (240 bps) need to be transmitted to describe the rigid motion of the head, or all 68 FAPs (2.7 Kbps) to describe both the rigid and non-rigid motion of the head. Furthermore, these numbers can be arithmetically coded as proposed in the MPEG-4 standard, producing a bitstream that requires a maximum bandwidth of 2 Kbps. The whole system therefore requires a bandwidth of less than 25 Kbps when operating at 10 fps, which is achievable over a standard telephone line or over an Internet connection. The initialization of the system, which involves locating the facial features and adapting the face model, requires 5-10 seconds, but it can be performed in parallel with the establishment of the connection, thus making the delay unnoticeable to the user. During normal operation the system currently codes frames at a rate of 5-7 fps on a standard Pentium II PC at 450 MHz with 256 MB RAM, but work is under way to optimize the speed of the system, which is expected to reach 10 fps.
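The bandwidth budget quoted above can be checked with a few lines of arithmetic. In the snippet below the 80x50-pixel lip box is an assumed value consistent with the stated ‘less than 100x50 pixels’ bound and with the reported 20 Kbps figure; the remaining numbers are taken directly from the text.

```python
# Back-of-the-envelope check of the bandwidth figures reported above.
fps = 10
lip_pixels = 80 * 50                       # assumed lip box inside the < 100x50 bound
lip_rate = int(lip_pixels * 0.5 * fps)     # 0.5 bits per pixel -> 20000 bps
audio_rate = 2000                          # ~2 Kbps speech coding at 11 KHz
fap_rate = 2000                            # arithmetically coded FAP stream

total = lip_rate + audio_rate + fap_rate
print(total)                               # 24000 bps, i.e., below the 25 Kbps budget
```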
CONCLUSIONS

In this chapter an attempt has been made to introduce the capabilities that the MPEG-4 audiovisual standard provides for content-based video coding. It has been shown that MPEG-4 has standardized the format of content-based coded data, while most of the methods needed to perform the coding have been left open so that many different implementations can be proposed. Many such methods, which have been the object of past and present research, have been referenced from the literature, and a particular method has been proposed and described in detail for every part of the videoconferencing application. The purpose of this study has been to introduce students to such image processing techniques, to provide researchers with a reference to the state of the art in this area, and to urge engineers to use the present research methodologies in future consumer applications. The way to future multimedia applications, at least in the area of content-based coding, is now clearly visible with the help of the MPEG-4 standard. It is left to industry to elaborate and embed the developed methodologies in the applications to come.
REFERENCES

Abrantes, G. A. and Pereira, F. (1999). MPEG-4 facial animation technology: Survey, implementation and results. IEEE Transactions on Circuits and Systems for Video Technology, 9(2).
Aggarwal, J. K. and Nandhakumar, N. (1988). On the computation of motion from sequences of images: A review. Proceedings of the IEEE, 76(8), 917-935.
Bala, L. P., Talmi, K. and Liu, J. (1997). Automatic detection and tracking of faces and facial features in video sequences. Proceedings of the Picture Coding Symposium (PCS). Berlin, Germany.
Benois-Pineau, J., Sarris, N., Barba, D. and Strintzis, M. G. (1997). Video coding for wireless varying bit-rate communications based on area of interest and region representation. International Conference on Image Processing ICIP’97, 3, 555-558. Santa Barbara, CA, USA.
Chai, D. and Ngan, K. N. (1999). Face segmentation using skin color map in videophone applications. IEEE Transactions on Circuits and Systems for Video Technology, 9(4).
Chiariglione, L. (1998). Impact of MPEG standards on multimedia industry. Proceedings of the IEEE, 86(2).
Cootes, T. F., Di Mauro, E. C., Taylor, C. J. and Lanitis, A. (1995). Flexible 3D models from uncalibrated cameras. Proceedings of the British Machine Vision Conference, 147-156. BMVA Press.
Craw, I., Costen, N., Kato, T. and Akamatsu, A. (1999). How should we represent faces for automatic recognition? IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(8).
DeCarlo, D. and Metaxas, D. (1996). The integration of optical flow and deformable models with applications to human face shape and motion estimation. Proceedings of CVPR, 231-238.
Doenges, P. K. (1998). Synthetic/natural hybrid coding mixed media content in MPEG-4. MPEG-4 Seminar, Waseda University, Tokyo.
Dufaux, F. and Moscheni, F. (1995). Motion estimation techniques for digital TV: A review and a new contribution. Proceedings of the IEEE, 83(6), 858-876.
Eisert, P. and Girod, B. (1998). Analyzing facial expressions for virtual conferencing. IEEE Computer Graphics and Applications, 70-78.
Escher, M. and Magnenat-Thalmann, N. (1997). Automatic 3D cloning and real-time animation of a human face. Proceedings of Computer Animation. Geneva, Switzerland.
Essa, I., Darrell, T. and Pentland, A. (1994). Tracking facial motion. Proceedings of the IEEE Workshop on Non-Rigid and Articulate Motion. Austin, Texas.
Guenter, B., Grimm, C., Wood, D., Malvar, H. and Pighin, F. (1998). Making faces. Proceedings of SIGGRAPH, 55-66.
Haralick, R. M. and Shapiro, L. G. (1993). Computer and Robot Vision, Volume II, 125-150. Addison-Wesley.
ISO/IEC. (1998). MPEG video and SNHC. Text of ISO/IEC FDIS 14496-3: Audio. Doc. ISO/MPEG N2503, Atlantic City MPEG Meeting.
ITU-T. (1990). Video codec for audiovisual services at px64 kbit/s. ITU-T Rec. H.261. Geneva.
ITU-T. (1996). Video coding for narrow telecommunications channels at