IEEE COMMUNICATIONS MAGAZINE®
www.comsoc.org
January 2011, Vol. 49, No. 1
• Network Disaster Recovery
• Future Converged Services
A Publication of the IEEE Communications Society
IEEE COMMUNICATIONS MAGAZINE
January 2011, Vol. 49, No. 1

Director of Magazines
Andrzej Jajszczyk, AGH U. of Sci. & Tech. (Poland)

Editor-in-Chief
Steve Gorshe, PMC-Sierra, Inc. (USA)

Associate Editor-in-Chief
Sean Moore, Centripetal Networks (USA)

Senior Technical Editors
Tom Chen, Swansea University (UK); Nim Cheung, ASTRI (China); Nelson Fonseca, State Univ. of Campinas (Brazil); Torleiv Maseng, Norwegian Def. Res. Est. (Norway); Peter T. S. Yum, The Chinese U. Hong Kong (China)

Technical Editors
Sonia Aissa, Univ. of Quebec (Canada); Mohammed Atiquzzaman, U. of Oklahoma (USA); Paolo Bellavista, DEIS (Italy); Tee-Hiang Cheng, Nanyang Tech. U. (Rep. Singapore); Jacek Chrostowski, Scheelite Techn. LLC (USA); Sudhir S. Dixit, Nokia Siemens Networks (USA); Stefano Galli, Panasonic R&D Co. of America (USA); Joan Garcia-Haro, Poly. U. of Cartagena (Spain); Vimal K. Khanna, mCalibre Technologies (India); Janusz Konrad, Boston University (USA); Abbas Jamalipour, U. of Sydney (Australia); Deep Medhi, Univ. of Missouri-Kansas City (USA); Nader F. Mir, San Jose State Univ. (USA); Amitabh Mishra, Johns Hopkins University (USA); Sedat Ölçer, IBM (Switzerland); Glenn Parsons, Ericsson Canada (Canada); Harry Rudin, IBM Zurich Res. Lab. (Switzerland); Hady Salloum, Stevens Institute of Tech. (USA); Antonio Sánchez Esguevillas, Telefonica (Spain); Heinrich J. Stüttgen, NEC Europe Ltd. (Germany); Dan Keun Sung, Korea Adv. Inst. Sci. & Tech. (Korea); Danny Tsang, Hong Kong U. of Sci. & Tech. (Hong Kong)

Series Editors
Ad Hoc and Sensor Networks: Edoardo Biagioni, U. of Hawaii, Manoa (USA); Silvia Giordano, Univ. of App. Sci. (Switzerland)
Automotive Networking and Applications: Wai Chen, Telcordia Technologies, Inc. (USA); Luca Delgrossi, Mercedes-Benz R&D N.A. (USA); Timo Kosch, BMW Group (Germany); Tadao Saito, University of Tokyo (Japan)
Consumer Communications and Networking: Madjid Merabti, Liverpool John Moores U. (UK); Mario Kolberg, University of Stirling (UK); Stan Moyer, Telcordia (USA)
Design & Implementation: Sean Moore, Avaya (USA); Salvatore Loreto, Ericsson Research (Finland)
Integrated Circuits for Communications: Charles Chien (USA); Zhiwei Xu, SST Communication Inc. (USA); Stephen Molloy, Qualcomm (USA)
Network and Service Management: George Pavlou, U. of Surrey (UK); Aiko Pras, U. of Twente (The Netherlands)
Topics in Optical Communications: Hideo Kuwahara, Fujitsu Laboratories, Ltd. (Japan); Osman Gebizlioglu, Telcordia Technologies (USA); John Spencer, Optelian (USA); Vijay Jain, Verizon (USA)
Topics in Radio Communications: Joseph B. Evans, U. of Kansas (USA); Zoran Zvonar, MediaTek (USA)
Standards: Yoichi Maeda, NTT Adv. Tech. Corp. (Japan); Mostafa Hashem Sherif, AT&T (USA)

Columns
Book Reviews: Andrzej Jajszczyk, AGH U. of Sci. & Tech. (Poland)
History of Communications: Mischa Schwartz, Columbia U. (USA)
Regulatory and Policy Issues: J. Scott Marcus, WIK (Germany); Jon M. Peha, Carnegie Mellon U. (USA)
Technology Leaders' Forum: Steve Weinstein (USA)
Very Large Projects: Ken Young, Telcordia Technologies (USA)

Publications Staff
Joseph Milizzo, Assistant Publisher; Eric Levine, Associate Publisher; Susan Lange, Online Production Manager; Jennifer Porcello, Publications Specialist; Catherine Kemelmacher, Associate Editor; Devika Mittra, Publications Assistant
www.comsoc.org/~ci

NETWORK DISASTER RECOVERY
GUEST EDITORS: CHI-MING CHEN, ANIL MACWAN, AND JASON RUPE

26 GUEST EDITORIAL

28 RAPIDLY RECOVERING FROM THE CATASTROPHIC LOSS OF A MAJOR TELECOMMUNICATIONS OFFICE
AT&T has a mature network emergency management (NEM) and business continuity program that plans for and responds to events affecting the AT&T network and its support systems around the globe. Within that NEM plan, the AT&T Network Disaster Recovery (NDR) team is responsible for restoring a failed network office's services. The author describes NDR's mobile response process and how it fits within AT&T's overall response to a disaster.
KELLY T. MORRISON
36 DISASTERS WILL HAPPEN — ARE YOU READY?
The authors describe the important considerations and critical steps necessary to develop and maintain a credible disaster response capability, based on the experience of a major telecommunications services provider.
J. CHRIS OBERG, ANDREW G. WHITT, AND ROBERT M. MILLS
44 CONSIDERATIONS AND SUGGESTIONS ON IMPROVEMENT OF COMMUNICATION NETWORK DISASTER COUNTERMEASURES AFTER THE WENCHUAN EARTHQUAKE
Analyzing the damage the Wenchuan Earthquake caused to the communication network, and the priorities followed in restoring communications, the author argues that future emphasis should be placed on building emergency communication capability: providing priority service functions in the public switched telephone network and attaching greater importance to wireless communications.
YANG RAN
48 LACK OF EMERGENCY RECOVERY PLANNING IS A DISASTER WAITING TO HAPPEN
The author highlights the need for emergency preparedness, discusses several key actions needed to properly prepare for an emergency, and cites a recent example of industry's efforts to anticipate and prepare for an emergency situation.
RICHARD E. KROCK
52 HOW TO DELIVER YOUR MESSAGE FROM/TO A DISASTER AREA
The author sheds light on the communication needs of evacuees in shelters during a post-disaster period, and argues that it is essential to develop a completely new communication service that lets those in shelters maintain communication channels with those outside shelters as well as with those in other shelters.
KENICHI MASE
FUTURE CONVERGENT TELECOMMUNICATIONS SERVICES: CREATION, CONTEXT, P2P, QOS, AND CHARGING
GUEST EDITORS: ANTONIO SANCHEZ-ESGUEVILLAS, BELÉN CARRO-MARTINEZ, AND VISHY POOSALA
58 GUEST EDITORIAL

60 AN ONTOLOGY-BASED CONTEXT INFERENCE SERVICE FOR MOBILE APPLICATIONS IN NEXT-GENERATION NETWORKS
The author presents a telecom operator service that supplies mobile applications with context information to illustrate how context infrastructures can leverage NGN capabilities. He introduces an innovative context inference approach involving third-party applications within the inference process itself.
PHILIPP GUTHEIM
68 INTERPERSONAL CONTEXT-AWARE COMMUNICATION SERVICES
Context awareness is a large research area, encompassing issues ranging from physical measurement of a given situation to the question of social acceptance. It is one of the most promising technologies for evolving current communication services into fluid, flexible, automagic, intuitive communication means. The authors review the various applications of context awareness to convergent interpersonal communications, making clear the evolutionary potential introduced by these techniques.
FRANÇOIS TOUTAIN, AHMED BOUABDALLAH, RADIM ZEMEK, AND CLAUDE DALOZ
76 EMPLOYING COLLECTIVE INTELLIGENCE FOR USER-DRIVEN SERVICE CREATION
The authors present two kinds of collective intelligence for user-driven service creation: the user's own experiences in service composition, and activity knowledge from the web. These types of collective intelligence will aid end-user service composition by providing knowledge support in terms of user experiences and activity-aware functional semantics, and will ultimately accelerate the development of many kinds of converged applications.
YUCHUL JUNG, YOO-MI PARK, HYUN JOO BAE, BYUNG SUN LEE, AND JINSUL KIM

84 VITAL++ — A NEW COMMUNICATION PARADIGM: EMBEDDING P2P TECHNOLOGY IN NEXT GENERATION NETWORKS
The authors describe the major components, and their interactions, of a novel architecture called VITAL++ that combines the best features of two seemingly disparate worlds: peer-to-peer and NGN.
ATHANASIOS CHRISTAKIDIS, NIKOLAOS EFTHYMIOPOULOS, JENS FIEDLER, SHANE DEMPSEY, KONSTANTINOS KOUTSOPOULOS, SPYROS DENAZIS, SPYRIDON TOMBROS, STEPHEN GARVEY, AND ODYSSEAS KOUFOPAVLOU

92 A CONTEXT-AWARE SERVICE ARCHITECTURE FOR THE INTEGRATION OF BODY SENSOR NETWORKS AND SOCIAL NETWORKS THROUGH THE IP MULTIMEDIA SUBSYSTEM
The author proposes a new context-aware architecture for integrating body sensor networks and social networks through the IP Multimedia Subsystem. Its motivating application scenarios are described, and the benefits and main research challenges of the proposed architecture are outlined.
MARI CARMEN DOMINGO

102 VISUALIZATION OF DATA FOR AMBIENT ASSISTED LIVING SERVICES
The authors discuss visualization of data from the perspective of the needs of differing end-user groups, and discuss how algorithms are required to contextualize and convey information across location and time. To illustrate the issues, current work on night-time AAL services for people with dementia is described.
MAURICE MULVENNA, WILLIAM CARSWELL, PAUL MCCULLAGH, JUAN CARLOS AUGUSTO, HUIRU ZHENG, PAUL JEFFERS, HAIYING WANG, AND SUZANNE MARTIN

110 APPLICATION-BASED NGN QOE CONTROLLER
The authors present the specification and testbed implementation results of an application-based QoE controller, proposing a solution for objective and context-aware end-to-end QoE control in NGN networks. The proposed solution is based on standardized NGN service enabler operation principles that allow for efficient in-service QoE estimation and optimization.
JANEZ STERLE, MOJCA VOLK, URBAN SEDLAR, JANEZ BESTER, AND ANDREJ KOS

118 SERVICE CHARGING CHALLENGES IN CONVERGED NETWORKS
The charging model in telecommunications networks has evolved greatly. With the emergence of mobile and multimedia networks, it has become more complicated. The author discusses how these new charging models affect the technology of online charging systems.
MARC CHEBOLDAEFF

TOPICS IN OPTICAL COMMUNICATIONS: MEETING THE BANDWIDTH DEMAND CHALLENGE — TECHNOLOGIES AND NETWORK ARCHITECTURAL OPTIONS
SERIES EDITORS: OSMAN S. GEBIZLIOGLU, HIDEO KUWAHARA, VIJAY JAIN, AND JOHN SPENCER

124 SERIES EDITORIAL

126 TECHNOLOGY AND ARCHITECTURE TO ENABLE THE EXPLOSIVE GROWTH OF THE INTERNET
At current growth rates, Internet traffic will increase by a factor of one thousand in roughly 20 years. It will be challenging for transmission and routing/switching systems to keep pace with this level of growth without prohibitively large increases in network cost and power consumption. The authors present a high-level vision for addressing these challenges based on both technological and architectural advances.
ADEL A. M. SALEH AND JANE M. SIMMONS

134 MC-FIWIBAN: AN EMERGENCY-AWARE MISSION-CRITICAL FIBER-WIRELESS BROADBAND ACCESS NETWORK
The authors introduce MC-FiWiBAN, a new emergency-aware architecture that leverages layer 2 virtual private networks to support mission-critical services over the integrated fiber-wireless access network.
AHMAD R. DHAINI AND PIN-HAN HO

President's Page 6
Certification Corner 14
Conference Calendar 16
Global Communications Newsletter 21
Advertisers' Index 144

2011 Communications Society Elected Officers
Byeong Gi Lee, President
Vijay Bhargava, President-Elect
Mark Karol, VP-Technical Activities
Khaled B. Letaief, VP-Conferences
Sergio Benedetto, VP-Member Relations
Leonard Cimini, VP-Publications

Members-at-Large
Class of 2011: Robert Fish, Joseph Evans, Nelson Fonseca, Michele Zorzi
Class of 2012: Stefano Bregni, V. Chan, Iwao Sasase, Sarah K. Wilson
Class of 2013: Gerhard Fettweis, Stefano Galli, Robert Shapiro, Moe Win

2011 IEEE Officers
Moshe Kam, President
Gordon W. Day, President-Elect
Roger D. Pollard, Secretary
Harold L. Flescher, Treasurer
Pedro A. Ray, Past-President
E. James Prendergast, Executive Director
Nim Cheung, Director, Division III

IEEE COMMUNICATIONS MAGAZINE (ISSN 0163-6804) is published monthly by The Institute of Electrical and Electronics Engineers, Inc. Headquarters address: IEEE, 3 Park Avenue, 17th Floor, New York, NY 10016-5997, USA; tel: +1-212-705-8900; http://www.comsoc.org/ci. Responsibility for the contents rests upon authors of signed articles and not the IEEE or its members. Unless otherwise specified, the IEEE neither endorses nor sanctions any positions or actions espoused in IEEE Communications Magazine.

ANNUAL SUBSCRIPTION: $27 per year print subscription. $16 per year digital subscription. Non-member print subscription: $400. Single copy price is $25.

EDITORIAL CORRESPONDENCE: Address to: Editor-in-Chief, Steve Gorshe, PMC-Sierra, Inc., 10565 S.W. Nimbus Avenue, Portland, OR 97223; tel: +1-503-431-7440; e-mail: [email protected].

COPYRIGHT AND REPRINT PERMISSIONS: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. Copyright law for private use of patrons: those post-1977 articles that carry a code on the bottom of the first page provided the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For other copying, reprint, or republication permission, write to Director, Publishing Services, at IEEE Headquarters. All rights reserved. Copyright © 2011 by The Institute of Electrical and Electronics Engineers, Inc.

POSTMASTER: Send address changes to IEEE Communications Magazine, IEEE, 445 Hoes Lane, Piscataway, NJ 08855-1331. GST Registration No. 125634188. Printed in USA. Periodicals postage paid at New York, NY and at additional mailing offices. Canadian Post International Publications Mail (Canadian Distribution) Sales Agreement No. 40030962. Return undeliverable Canadian addresses to: Frontier, PO Box 1051, 1031 Helena Street, Fort Erie, ON L2A 6C7.

SUBSCRIPTIONS, orders, address changes — IEEE Service Center, 445 Hoes Lane, Piscataway, NJ 08855-1331, USA; tel: +1-732-981-0060; e-mail: [email protected].

ADVERTISING: Advertising is accepted at the discretion of the publisher. Address correspondence to: Advertising Manager, IEEE Communications Magazine, 3 Park Avenue, 17th Floor, New York, NY 10016.

SUBMISSIONS: The magazine welcomes tutorial or survey articles that span the breadth of communications. Submissions will normally be approximately 4500 words, with few mathematical formulas, accompanied by up to six figures and/or tables, with up to 10 carefully selected references. Electronic submissions are preferred and should be submitted through Manuscript Central: http://mc.manuscriptcentral.com/commag-ieee. Instructions can be found at: http://dl.comsoc.org/livepubs/ci1/info/sub_guidelines.html. For further information contact Sean Moore, Associate Editor-in-Chief ([email protected]). All submissions will be peer reviewed.
THE PRESIDENT’S PAGE
‘COMSOC 2020’ IN THE CONVERGED COMMUNICATIONS ERA
BYEONG GI LEE

A year has passed since I wrote my first President's Column, bringing me halfway through my presidential term. Looking back a year ago, I recollect that the view of the road ahead was hazy, obscured by the global economic downturn, but now our society's outlook is much brighter. Our financial status has significantly improved throughout the year, from the "deep red" of about a million dollars negative to the "light black" of over a hundred thousand dollars positive in 2010, with the promise of further improvement in 2011, possibly turning into the "deep black" of a sound surplus. Popular membership programs that suffered from severe budget cuts, e.g., Distinguished Lecture Tours (DLT), Student Travel Grants (STG), and Chapter funding, have all recovered. To my astonishment, ComSoc membership, which had been declining toward 40,000 during the past decade, made a marvelous increase in 2010, to over 48,000, approaching 50,000. We may now look toward a vibrant ComSoc going forward toward 2020.

Such a dramatic lift in 2010 was made possible thanks to the hard work of volunteer leaders and staff. Their individual- and team-level efforts, in particular a strong volunteer-staff partnership, enabled us to escape from economic hardship and resume our normal healthy state. They closely cooperated to reduce expenses wherever possible and actively developed new revenue sources as well. In particular, they created and implemented in partnership several new products and services, the most representative examples being the Virtual Intensive Course on Wireless Communications Engineering; membership reach-out through social networking based on Facebook, LinkedIn, and Twitter; the SmartGridComm conference; and the Corporate Patron Program (CPP). Going one step further, they collaborated to implement the vision of 'ComSoc's Golden Triangle.' This vision was a timely goal-setting framework that triggered innovations in ComSoc's operations in the direction of greater agility and efficiency. The recovery of financial stability and the striking increase in membership may be viewed as evidence of the innovational effects of the vision.

UPDATE ON 'GOLDEN TRIANGLE'

ComSoc's 'Golden Triangle' is the vision that I set at the start of 2010 for the transformation of ComSoc into a vigorous Society, with goals that acknowledged the progress of globalization and the evolution of a knowledge-based society. I thought that ComSoc's main goals of advancing communications engineering and arts, and promoting high professional standards, could be faithfully accomplished by cultivating an open, healthy, diversified, and balanced culture in ComSoc operations.

The three vertices of the 'Golden Triangle' — globalization, developing young leaders, and building relations with industry — are fundamental for ComSoc's renewal. Globalization means incorporating global culture and values in ComSoc operations, to change the way we have been thinking and behaving in ComSoc operations. Developing young leaders signifies securing open career paths for capable young members to grow into ComSoc leaders who will help continually recharge ComSoc with fresh energy. Building relations with industry implies bringing ComSoc closer to industry and creating multidimensional ComSoc-industry linkages through industry-oriented programs and services on one side, and giving industry the opportunity to contribute to ComSoc on the other. This will restore a balance between industry and academia within ComSoc and stimulate mutually beneficial industry-academia collaboration through ComSoc activities.

In 2010, we achieved much progress in setting foundations for these three key initiatives. We first generated consensus among volunteer leaders and responsible participants. Then we instituted policies and procedures relevant to globalization and young leaders, and formed ad hoc committees and new programs for industry, as described below.

Realistic Globalization

In order for ComSoc to embrace global culture and values, it is essential to recruit global communications leaders with diverse cultural backgrounds as part of the ComSoc leadership team. We call this 'level-2 globalization' or 'realistic globalization,' in contrast to 'level-1 globalization,' which concerns membership-level participation. Level-2 globalization brings the benefits of the cultural diversity of the world into the management of ComSoc.

Geographically balanced representation in ComSoc officer positions is a key to level-2 globalization. Among the ComSoc leadership team, appointed positions such as Directors and Committee Chairs can be geographically balanced, as they are appointed by the President. It is harder to realize geographical balance in elected positions, such as President-Elect, the four Vice Presidents, and the 12 Members-at-Large (MaLs) of the Board of Governors (BoG). Most important among them is the MaL group, which is large in number and has voting power, in contrast to the non-voting Directors and Committee Chairs.

An ad hoc Nominations and Elections Process Committee, chaired by Larry Greenstein, investigated the Society's process of nominating and electing officers. Through extensive studies and many discussions, the committee devised various ways of
balancing the number of MaLs in each of the four ComSoc Regions (Asia-Pacific; Europe, Middle East, and Africa; Latin America; and North America). It then proposed a new process of MaL nominations that is intended to gradually align the number of elected MaLs in proportion to the regional membership distribution. Although the first trial of this procedure did not turn out as intended, we expect that present and ongoing efforts for improvement will eventually bring regionally balanced representation to ComSoc's BoG.

Young Leaders

In order to encourage young ComSoc members to grow into volunteer leadership positions, it is essential to offer open and equal opportunity for their participation in technical and regional activities, including publication and conference activities. We have established an 'open call' system for editorial positions in journals and magazines as well as for technical program committee (TPC) members of conferences. 'Open call' can help ComSoc enrich its human resources by identifying 'hidden' young talents who are often overlooked by the conventional recommendation-based appointment system. Setting term limits for voluntary positions in publications and conferences (e.g., editors, area editors, editors-in-chief, steering committee members and chairs) is another important strategy that will stimulate a healthy circulation of young volunteers into responsible positions.

The publications volunteer leadership group, led by our VP-Publications, Len Cimini, endeavored to build consensus among the editorial board members on how to institute open calls. They have established an open call system in which any qualified ComSoc member who wants to be an editor (member of an editorial staff) of any ComSoc journal or magazine, and is ready to devote considerable time to this demanding duty, can apply using the on-line editor application form on the ComSoc website (http://www.comsoc.org/editor). They also specified the responsibilities of an editor. This open call policy has been announced on the ComSoc website, in IEEE Communications Magazine, and in IEEE Transactions on Communications.

Regarding fixed terms, the publications leadership group formalized the following policy: an editorial member of any ComSoc journal or magazine is appointed for a nominal three-year term with a possible extension for two additional years, after which he or she must leave the position for at least three years. The term limit is reset when an editor is promoted to a higher position within the editorial board. In some exceptional situations (e.g., a unique technical expertise), this 3-2-3 rule can be waived by the VP-Publications on recommendation of the Director in charge. In implementing the policy, each Editor-in-Chief will notify the editorial board what the term limit of each member is, and stop assigning papers to those whose terms are up or nearly up, requesting them to finish papers already assigned.

Conference volunteer leaders, led by our VP-Conferences, Khaled Letaief, have come up with an open call system that allows and encourages any volunteer who is interested in getting involved in one of our major conferences and events (e.g., as TPC member, symposium chair, tutorial chair, local arrangement chair, publicity chair, etc.) to fill out a web-based application form.
The form is available on the ComSoc website (http://www.comsoc.org/open-call-volunteers) or can be directly downloaded from http://www.comsoc.org/form/communications-society-conference-involvement-application-form. The application will then be reviewed by each pertinent Chair, and the applicant will be affirmatively notified if the applicant’s profile matches the needs of the particular event.
The conference leadership team also worked on formally regulating policies on membership and term limits for the steering committees of ComSoc's major conferences. Each conference must now create a charter, and/or update its charter if it has one, so as to include the following items: the purpose of the conference, the purpose of the Steering Committee (SC), the constitution of SC members, the term limits of the SC chair and members, provisions for replacement of the SC chair and members, voting procedures, frequency of meetings, and procedures for dissolution of the SC/conference. One typical example is that the term length of the SC chair is three years, with reappointment permissible for a maximum tenure of two terms.

Industry Relations

Our effort to bring ComSoc closer to industry and build multidimensional ComSoc-industry linkages was expressed last year in two activities. One was to develop packages of existing ComSoc products and services that industry would be most interested in, and the other was to develop new services solely targeted at industry and industry employees. The former activity was led by the newly created ad hoc Industry Promotion Committee (IPC), chaired by Adam Drobot, and the latter by the ad hoc Industry Services Committee (ISC), chaired by Harvey Freeman.

The IPC developed two industry programs. One is the Corporate Patron Program (CPP), a package of existing ComSoc products and services that can be specifically customized and bundled to meet the needs of each individual participating company. The other is the Industry Now Program (INP), which is designed to promote industry participation in ComSoc by offering companies and their employees opportunities to benefit from the values that ComSoc creates by working with professionals around the world.

The CPP was developed to provide an opportunity for industry leaders to reach out to the worldwide communications community, not only ComSoc members, through exposure across the ComSoc web site, publications, and conferences. It enables companies to leverage ComSoc products and services through package discounts, including IEEE Communications Magazine advertising, webinar sponsorship, conference patronage, discounts on wireless training and certification, on-line tutorial sponsorship, and more. Several levels of patronage opportunity exist, designated as Platinum, Gold, Silver, and Bronze, enabling a participating company to tailor the level of products and services to meet specific needs. In the initial trial of 2010, five companies joined the CPP: Samsung Electronics, LG Electronics, Cisco, KT (Korea Telecom), and SK Telecom. We expect many more companies will join in 2011, especially from China, Japan, India, Europe, and the U.S.

The INP is specifically tailored for industry in the fast-developing regions of the world and was first conceived during a visit with top communications companies in India during 2007-8. The program offers the option of customizing packages to address both geographic and company-specific needs. Packages utilize IEEE and ComSoc resources to provide the latest information on technology trends through tutorials, Distinguished Lecture Tours (DLTs), conference participation, the Wireless Communications Engineering Technologies (WCET) certification program, and participation in technical and standards committees. The INP packages were initially offered in India and are now being modified for China. Many companies and organizations have shown interest in the INP.
The second committee, the ISC, was launched in September 2010 to identify service items meeting the needs of individual industry-based ComSoc members and to propose fast-track implementations for the most promising items among them. Examples of the types of new services that the ISC could offer include serving as a professional anchor for members throughout their careers; filtering useful information for the practicing engineer; providing unbiased product information; providing business guidance; and assisting in job searches. These potential programs will be designed, tested, and evaluated, and may be especially valuable for those who are shifting careers (e.g., from 'engineer in a big company' to 'entrepreneur starting a small business').
NEW GROWTH ENGINES

The vision of ComSoc's 'Golden Triangle' presented at the start of 2010 has been vigorously implemented and is becoming stabilized. The next step is to turn our eyes outward, looking around the communications environment and toward the future.

The communications business was dominated by telephone service for over 100 years until it was challenged by the Internet in recent years. For the past three decades there were interactions, competitions, and confrontations arising from the tension between telephone-originated circuit-mode technology and computer-originated IP packet technology. This ended up with a convergence based largely on IP packet technology. This digital convergence, which has spread to broadcasting and mobile communications, has established a firm platform of IP technology and networking on which many new information service providers can join, collaborate, and/or compete with traditional communications service providers, who are themselves now major providers of this new IP packet platform. This technological and business transformation occurred in every sector, including devices, networks, services, and operators.

Based on the maturity of mobile networks and the emergence of smart phones served by open content markets, digital convergence has shifted gears to encompass a broader definition of communications, including Internet/computing, broadcasting/media, platform/OS/software, content/applications, and even print media and advertising. It has brought in new business models in converged communications and new dimensions of competition among different groups constituting "eco-clusters" in the new communications ecology.

We are aware that the environment we work in today is very different from that of 10 years ago, or even a couple of years ago. The major players today are innovators of media-oriented appliances, operating systems, and distributed applications, supported by cloud computing. We are all familiar with RIM's Blackberry device, which changed the way people conduct business and stay connected. Apple made a spectacular entry into the communications arena by introducing the iPhone and App Store. Likewise, Google introduced the open Android OS and an ads-mediated business model. Telecom operators provide services to support exciting applications based on these platforms. Along with such a precipitous environmental change, the content business has been emerging at the central point of the converged communications world, overwhelming the predominance of the traditional voice-and-data oriented communications business. And critically for our members, jobs have declined in the traditional communications world and been replaced by jobs in the new converged communications ecology.

In the midst of such dizzying environmental change, we have to stop and contemplate the future of the Society: Does ComSoc need to redefine itself as it moves forward? What strategies should ComSoc pursue to survive the paradigm change and, furthermore, find an opportunity for future growth? What will be the new areas that will serve us as future growth engines?

Content, including education content, is a very important area that ComSoc needs to migrate into to survive in the converged communications era, as well as to find new growth opportunities. In addition, industry, now including those companies that have newly joined the converged communications arena, is important for ComSoc to collaborate with in developing new products and services that meet its needs. Finally, standards, an urgent business necessity for new technologies, including those newly emerging in the convergence era, will play a pivotal role in serving industry as well as in fulfilling our responsibility to the communications community.

Content, Including Education Content

The main thrust of services has been shifting from communications services represented by voice and data toward content services that deliver on-line and mobile content. Users are switching from cellular phones to smart handheld devices accessing application stores, and they spend more time on content/applications than on phone calls. In order to follow this trend and stay current, ComSoc, too, should convert its existing products and services to be accessible in user-friendly formats over such devices. From now on, we must conceive and create new products and services from the perspective of smart device users. Together with this, we have to develop effective methods of advertising using on-line and mobile content, and develop a strategy for a smooth migration from paper-based advertising to on-line/mobile-based advertising. These are some of the mandatory actions we have to take to survive the megatrend of digital convergence and to offer timely mobile content services to our members. If we neglect or delay the transformation of our own delivery system, we will find it increasingly difficult to provide current services to members, and our products will become outdated as well. If we take proactive actions toward this transformation, however, it will bring us new opportunities for future growth. Content will surely serve us well as a future growth engine.

Among the various content types, education content is of the utmost importance for ComSoc. Though there has been some notable progress in the past, we have not realized the full potential that education and education content hold for ComSoc, its members, and the converged communications community. In recent years, we started an ambitious
initiative in certification and training programs as an extension of the education area. We made a major investment in developing the Wireless Communication Engineering Technologies (WCET) certification program, though it has not yet reached the financial break-even point. We recently developed an on-line 'Virtual Intensive Course' in association with the WCET, which was a solid success in its first trial. New educational programs on various topics in communications, accessible on-line and by mobile devices, are a tremendous opportunity for ComSoc.

Other actions for content delivery in the mobile-oriented, digitally converged communications era include an ad hoc Smartphonomics Committee, chaired by Fred Bauer and fully supported by James Hong, Director of On-Line Content; Alex Gelman, Chief Information Officer; David Alvarez, Manager of Information and Communications Technology; and his ICT staff team. The committee has been developing, in close volunteer-staff partnership, content for mobile device users. They worked in collaboration with colleagues at IEEE, which has been developing IEEE Technology News. As its first project, the team created a mechanism based on the open source Drupal platform for mobile content production. The team then set up the mobile web version of the ComSoc website so that smart phone users can access it from their devices. For more convenience, the team has developed a ComSoc App for accessing the ComSoc website, with the added capability of alarm services for new announcements on events and conferences. All major known mobile devices can now be recognized by the ComSoc web server and rerouted to the mobile content at m.comsoc.org. The team will later provide access to the Digital Library, conference registration, and membership renewal. This is an important beginning that will soon be expanded to mainstream development of a diverse range of mobile content.

Industry in Converged Communications

The megatrend of digital convergence, though started by the convergence of communications and the Internet, with broadcasting joining later, has spread to encompass a very large set of information-related areas. Apple and Google played pivotal roles in this process by jumping into the converging communications arena with 'esoteric weapons,' which include not only new devices and OS platforms, but also open content markets and new business models. Transformations began to occur at high speed and on a vast scale — from feature phones to smart devices, from operator-centric operation to user-centric operation, from voice-and-data services to content services, from 'walled-garden' content businesses to open-market content businesses, from two-party business models to three-party business models, and from individual-level competition to eco-cluster-level competition.

The open content market, in particular, had a big impact on the communications market. It freed content providers from operators and activated content creation activities on a large scale. A new business model, introduced by Apple, established a firm foundation for win-win coexistence of content providers and device providers. The business model soon consolidated into an eco-cluster, with network operators and other players, including traditional (e.g., print) media, joining. A three-party business model mediated by advertising was introduced by Google to consolidate advertisers into another eco-cluster.
The three-party model, in which advertisers pay the bills of users in compensation for users' watching their advertising, was originally invented by the publishing industry, taken up by the broadcasting business, and later adopted by the Internet business. However, it was a totally new model for the communications market, where the operator-user two-party model had existed for over 100 years. In this process, competition in the converged communications arena has turned into eco-cluster-to-eco-cluster competition.

Such eco-cluster-level competition has drawn in a diverse set of companies in the fields of networks, Internet, computers, devices, platforms, software, applications, content, broadcasting, music, movies, print media, and advertising. As a consequence, the communications field, while undergoing digital convergence, has broadly expanded to encompass all those fields under the name of 'converged communications,' and the communications industry in the digitally converged communications era has a vastly augmented scope, including all those businesses belonging to the eco-clusters. Therefore, ComSoc must accommodate, and provide services to, the converged communications industry. As a starting point, we must study how to expand the scope and operation of ComSoc and develop new programs, products, and services to serve all those newly joining eco-cluster members. As we redesigned the structure and operation of ComSoc to comply with the trend of globalization about two decades ago, we should now renovate it again to conform to the megatrend of communications convergence. In this context, the IPC and ISC programs should be refocused with a broader scope on the new converged communications industry. We must devise a strategy for effectively publicizing ComSoc to the non-conventional communications companies in the communications eco-clusters.

Standards Activities for Emerging Technologies

In reaction to the fierce competition in the converged communications arena, telecom operators have sought a winning edge by repositioning and diversifying their business strategies and service offerings. On one side, they try direct entry as platform operators by creating their own open content markets, such as the Wholesale Applications Community (WAC), a collaboration among telecom operators. On the other side, they are pursuing new businesses in information delivery, such as e-banking, e-meeting, e-health, e-learning, e-publishing, and smart work. Going one step further, they are venturing to develop new fields, such as green communications, cloud computing, smart grid communication, machine-to-machine communication, and application-enabled networks.

ComSoc should accommodate these new endeavors of telecom companies within our technical activities. These emerging technologies are a nice match with the activities of our Technical Committees and Standards Board. We can, as needed, create new Technical Committees or start new standards initiatives, with ComSoc's Emerging Technologies Committee playing a central role in their development. We can also create new conferences in support of these technical activities, and develop from them new magazines and journals as technologies mature. We have already started such a technical and standards activities development cycle in the smart grid communications area by creating an ad hoc Smart Grid Communications Committee, chaired by Stefano Galli. The committee has grouped experts in smart grid communications and held the highly successful first SmartGridComm conference in October 2010.
The committee also launched a ComSoc Smart Grid Communications Vision project designed to produce a very long-term vision for the smart grid communications area. The result of the project will take the form of a peer-reviewed publication by IEEE Press. This project is sponsored by the IEEE Standards Association. We are carefully examining other fields, including cloud computing/communications and life sciences, to initiate similar development cycles.

Standards activity is particularly important for ComSoc. Standardization facilitates concentrated study among interest groups and accelerates competitive technical development and implementation. Standardization has a big impact on industry, so by pursuing standardization more actively we can meet the needs of industry better. As the technology in each field matures through technical activities, supported by our conferences and publications, standards work comes in to expedite maturity and the practical implementation of the technology. For the smart grid communications field, for example, we sponsored two projects — one related to Broadband over Power Line and the other to Narrow Band Power Line Communications Standards for Smart Grid Applications.

ComSoc was late in joining IEEE standards work, so we lost many opportunities to contribute to industry's standardization activities. However, in recent years we have become very active and successful in developing standards in the power-line communications field, stimulating and leading the IEEE P1901 working group. This successful experience, achieved by the former ComSoc Director-Standards, Alex Gelman, the current Director, Curtis Siller, and numerous contributing volunteers, demonstrated a promising future for ComSoc's standards activity and motivated us to upgrade our standards officer from Director-Standards to VP-Standards. A bylaws change was completed in December 2010 in support of this organizational change, and a new VP-Standards will be elected in 2011. We will soon be fully fledged to fly in standards activities for the converged communications era, closely collaborating with teams within the IEEE Standards Association (SA).

Standards work contributes not only to the development of technology, but also to enhancing industry's participation in the technical activities of ComSoc. Though participation of industry members in ComSoc publications and conferences has decreased in recent years, standards work will boost their interest and participation. They can take part in ComSoc's standards activity directly and also benefit from standards-related research forums, symposia, conferences, and educational offerings. Standards work will provide an attractive opportunity for industry members to contribute technically and, thereby, will bring us a precious opportunity to increase membership and, possibly, revenue as well.
'COMSOC 2020' VISION

As the vision of ComSoc's 'Golden Triangle' was designed to bring innovation to ComSoc's operation, so is our developing vision of 'Growth Engines' intended to create new mechanisms for ComSoc's future growth, technical influence, and financial stability. The former vision was inward-looking, focusing on ComSoc's character and operations, while the latter looks outward and forward, focusing on technical and business trends.

Implementation of the 'Golden Triangle' vision is making good progress. Geographically balanced representation is being implemented by revising the nominations and elections process, the 'open call' system has been established in our publications and conferences areas, and industry relations are being promoted through the activities of two new ad hoc committees. However, the more recent vision of 'Growth Engines' is still in the beginning stage. We made a good start, though still on the learning curve, in developing content for users of smart devices and educational information, and we are well prepared for standards work, with our spirits lifted by our recent successes and the introduction of the VP-Standards position. However, in dealing with the converged communications industry, much preparatory groundwork still needs to be done, including the redesign of ComSoc's organization and operations to match the needs of this new industry alignment. We anticipate that the functions of the IPC and ISC will be continually refocused as communications convergence evolves.

For ComSoc to stay current throughout the rapid change of emerging technologies and evolving business environments, we should first bring innovation to our current areas of operation; second, create new areas; and, third, think anew, imagining and reaching for our future. With the first two elements addressed by the 'Golden Triangle' and 'Growth Engines' visions, we must now consider how to address the third element, which we may call the 'ComSoc 2020' vision. If we properly deal with this third element, ComSoc has the potential to become 'the No. 1 Society in the IEEE' in every aspect over the next ten years.

The 'ComSoc 2020' vision, which does not yet exist, can include goals that are both ambitious and achievable based on our current assets and actions. If we critically review the current status of ComSoc, we can affirm that ComSoc is healthy in many respects. We have creative and dedicated volunteers, a talented and hard-working staff, and, most of all, an exemplary volunteer-staff partnership. We have a well structured organization (although, like all of IEEE, it may be more complex and bureaucratic than necessary), reasonable governing regulations, and a relatively agile operation. Technically we are on the leading edge among all the Societies in the IEEE, with the communications field ranked first among IEEE publications in the top 20 Science Citation Index (SCI) journals in each field, and our conferences are also recognized as such. We are the most globalized among all Societies, with over 200 Chapters and 20 Sister Societies covering the globe. Our membership has resumed its growth curve and reached 48,000 in October 2010, which is only 8,000
fewer than the Computer Society (excluding Affiliate Members). Our finances have recovered their long-time healthy state, escaping from the deep tunnel of the global economic downturn with an improvement of more than $1 million over the past year. We may conclude that ComSoc is a vigorous Society, made even more vibrant by implementing the 'Golden Triangle' vision and preparing its new 'Growth Engines' vision.

We will soon start working on the 'ComSoc 2020' vision based on ComSoc organizational and operational strategic reports that were prepared in 2010 and will be combined and organized by our Strategic Planning Committee (chaired by Steve Weinstein). The first of these reports were a comprehensive overview and strategic assessment on Marketing, completed in September 2010 (by John Pape), and an Organizational Design Review, completed in December 2010 (by Jack Howell). Similar strategic reports will be submitted by January 2011 on Meetings & Conferences (by Bruce Worthman), Information & Communications Technology (by David Alvarez), and Education Programs & Content (by Stefano Bregni). In addition, ComSoc's Resources Review will be completed by June (by Sergio Benedetto). If a 'ComSoc 2020' Vision team is organized by February 2011, the team will be able to fully utilize those strategic reports in redesigning ComSoc for the converged communications era.

We may set an ambitious goal of becoming the No. 1 Society by 2020, growing to 80,000 in membership and 30 million dollars in budget. In the most recent ComSoc Board of Governors (BoG) meeting, held in Miami during the GLOBECOM 2010 conference, I announced an open call for volunteers who wish to join the ad hoc 'ComSoc 2020' Vision Committee, which will be supported by our Strategic Planning Committee. I invite young and experienced members with good ComSoc background and great vision to join the committee to redesign the future of our Society.

Our ComSoc, if we properly design and faithfully carry out the 'ComSoc 2020' vision, will become an invaluable asset not only to our members but also to the global communications community and to the world. ComSoc will stimulate and document technical advances, inspire engineers' devotion to technical progress, and support the communications industry as it develops systems and deploys networks, thereby facilitating the availability of affordable services to all people in all nations. Communications services, in a converged sense, will enable people to reach people and information sources without limits of time or geographical separation; inspire people in oppressed countries with the courage to acquire freedom; assist people in underdeveloped or developing countries to find competitive ways of economic development; and enable creative minds to create new values for human society. In this way, our ComSoc will continue enhancing the quality of life and serving humanity.
CERTIFICATION CORNER

REFLECTIONS
BY ROLF FRANTZ

The end of the year is often a time to look both forward and back. Since this column is being written in early December, the prevailing mood is one of reflection: on how the WCET certification program came into existence and how it has developed over the past several years. It has, after all, been four years since the first Practice Analysis Task Force convened to begin developing the description of practice (the Delineation) that underlies the whole WCET program.

For this writer, that was a new and different experience. A career in telecommunications focused on wireline, and particularly optical, communications was not the best preparation for working with a group of experts in wireless communications, many of whom knew each other from conferences, standards committees, or industry and professional activities. They accomplished an impressive amount in just a few days. They outlined a broad structure that organized the overall field into seven distinct technical areas of responsibility; they developed detailed lists of the tasks that a practicing engineer with three years of experience could be expected to perform in each area; and they generated lists of the knowledge that such a practitioner would require in order to perform those tasks. The thoroughness of their efforts was validated repeatedly as focus groups, independent reviewers, and survey respondents offered suggestions that fine-tuned the Delineation, while affirming both its overall structure and the details of its content.

When the decision was made to develop the WCET certification program, considerable emphasis was placed on doing it thoroughly, carefully, and with attention to detail, even if that meant the process would take some time. Evidence that ComSoc
adhered to that policy is found in the fact that it took almost a year to conduct the various reviews just described and to incorporate that feedback into the Delineation. ComSoc then invested another year before offering the certification exam for the first time, in the Fall of 2008. That year was spent finding industry experts to write and review exam questions, collecting those into a bank of items to be drawn on in compiling the first exam, then creating and validating that exam. The time was also used to develop resources such as the Candidate's Handbook and the bi-monthly Wireless Communications Professional e-newsletter. A team was assembled to start writing the overview book, Guide to the Wireless Engineering Body of Knowledge (the WEBOK). Arrangements were made to offer the exam at computer-based testing centers worldwide. And significant effort was invested in making companies and individuals in the industry aware of the program and of the value in seeking vendor-neutral, transnational WCET certification from IEEE.

Then suddenly the past two years seem to have flown by. The exam has been offered five times (Fall 2008, Spring and Fall 2009, Spring and Fall 2010). The fourth issue of the Candidate's Handbook was recently published, updated for 2011. The Delineation has been thoroughly reviewed and refreshed. A team has begun working on the second edition of the WEBOK. Industry experts continue to contribute new questions to keep the question bank up to date. A group of skilled practitioners invested considerable time this Fall going through those new questions and updating the exam for 2011 to reflect the changes that have occurred in the industry since the start of the WCET program. And ComSoc's Education Board has developed both live and web-based courses covering the topics addressed in the certification exam.

The WCET program in all its aspects is the result of many hundreds of hours of dedicated effort by a great number of volunteers. These wireless experts have understood the value that WCET certification can bring to individuals and companies in the industry. ComSoc is indebted to them for their commitment, and the arrival of 2011 offers the opportunity to wish them all a much-deserved Happy and Prosperous New Year!
CONFERENCE CALENDAR 2011

♦ Communications Society portfolio events are indicated with a diamond before the listing; • Communications Society technically co-sponsored conferences are indicated with a bullet before the listing. Individuals with information about upcoming conferences, calls for papers, meeting announcements, and meeting reports should send this information to: IEEE Communications Society, 3 Park Avenue, 17th Floor, New York, NY 10016; e-mail: [email protected]; fax: +1-212-705-8996. Items submitted for publication will be included on a space-available basis.

JANUARY
• COMSNETS 2011 - 3rd Int'l. Conference on Communication Systems and Networks, 4-8 Jan. Bangalore, India. http://www.comsnets.org/
♦ IEEE CCNC 2011 - IEEE Consumer Communications and Networking Conference, 9-12 Jan. Las Vegas, NV. http://www.ieee-ccnc.org/
• RWW 2011 - 2011 Radio and Wireless Week, 16-20 Jan. Phoenix, AZ. http://www.mttwireless.org/
• IEEE ISGT 2011 - IEEE PES Innovative Smart Grid Technologies Conference, 17-19 Jan. Anaheim, CA. http://www.isgt2011.com/site/
• WONS 2011 - 8th Int'l. Conference on Wireless On-Demand Network Systems and Services, 26-28 Jan. Bardonecchia, Italy. http://conferenze.dei.polimi.it/wons2011/
• NCC 2011 - 17th National Conference on Communications, 28-30 Jan. Bangalore, India. http://www.ncc.org.in/ncc2011/index.html

FEBRUARY
• NTMS 2011 - 4th Int'l. Conference on New Technologies, Mobility and Security, 7-10 Feb. Paris, France. http://www.ntms-conf.org/innovative-projects.htm
• ONDM 2011 - 15th Int'l. Conference on Optical Networking Design and Modeling, 8-10 Feb. Bologna, Italy. http://www.ondm2011.unibo.it/
• ICACT 2011 - 13th Int'l. Conference on Advanced Communication Technology, 13-16 Feb. Phoenix Park, Korea. http://www.icact.org/
♦ IEEE CogSIMA 2011 - IEEE Conference on Cognitive Methods in Situation Awareness and Decision Support, 22-24 Feb. Miami, FL. http://www.ieee-cogsima.org
• ISWPC 2011 - Int'l. Symposium on Wireless Pervasive Computing, 23-25 Feb. Hong Kong, China. http://www.iswpc.org/2011/
• WSA 2011 - Int'l. ITG Workshop on Smart Antennas, 24-25 Feb. Aachen, Germany. http://www.wsa2011.rwth-aachen.de/

MARCH
♦ OFC/NFOEC 2011 - Optical Fiber Communication Conference, 6-10 March. Los Angeles, CA. http://www.ofcnfoec.org/
♦ IEEE WCNC 2011 - IEEE Wireless Communications and Networking Conference, 28-31 March. Cancun, Mexico. http://www.ieee-wcnc.org/
• ICCIT 2011 - Int'l. Conference on Communications and Information Technology, 28-31 March. Aqaba, Jordan. http://iccit-conf.org/

APRIL
♦ IEEE ISPLC 2011 - 15th IEEE Int'l. Symposium on Power Line Communications and Its Applications, 3-6 April. Udine, Italy. http://www.ieee-isplc.org/
♦ IEEE INFOCOM 2011 - IEEE Conference on Computer Communications, 10-15 April. Shanghai, China. http://www.ieee-infocom.org
• IEEE RFID 2011 - IEEE Int'l. Conference on RFID 2011, 12-14 April. Orlando, FL. http://ewh.ieee.org/mu/rfid2011/
• WTS 2011 - Wireless Telecommunications Symposium 2011, 13-15 April. New York, NY. http://www.csupomona.edu/~wtsi/wts/index.htm

MAY
• IEEE SARNOFF - 34th Sarnoff Symposium 2011, 2-4 May. Princeton, NJ. http://sarnoff-symposium.ning.com/
• MC-SS 2011 - 8th Int'l. Workshop on Multi-Carrier Systems and Solutions, 3-4 May. Herrsching, Germany. http://www.mcss.dlr.de/
♦ IEEE DySPAN 2011 - IEEE Int'l. Symposium on Dynamic Spectrum Access Networks, 3-6 May. Aachen, Germany. http://www.ieee-dyspan.org/
• ICT 2011 - 18th Int'l. Conference on Telecommunications, 8-11 May. Ayia Napa, Cyprus. http://www.ict2011.org/
♦ IEEE CQR 2011 - 2011 Annual IEEE CQR Int'l. Workshop, 10-12 May. Naples, FL. http://committees.comsoc.org/cqr/
♦ IEEE ICSOS 2011 - IEEE Int'l. Conference on Space Optical Systems and Applications, 11-13 May. Santa Monica, CA. http://icsos2011.nict.go.jp/
♦ IEEE/IFIP IM 2011 - 12th IFIP/IEEE Int'l. Symposium on Integrated Network Management, 23-27 May. Dublin, Ireland. http://www.ieee-im.org/

JUNE
♦ IEEE ICC 2011 - IEEE Int'l. Conference on Communications, 5-9 June. Kyoto, Japan. http://www.ieee-icc.org/2011/
• IEEE POLICY 2011 - IEEE Int'l. Symposium on Policies for Distributed Systems and Networks, 6-8 June. Pisa, Italy. http://www.ieee-policy.org/
♦ IEEE CAMAD 2011 - IEEE Int'l. Workshop on Computer-Aided Modeling Analysis and Design of Communication Links and Networks 2011, 10-11 June. Kyoto, Japan. http://www.nprg.ncsu.edu/camad/
IEEE Communications Magazine • January 2011
LYT-CALENDAR-JAN
12/16/10
1:38 PM
Page 17
CONFERENCE CALENDAR IEEE HEALTHCOM 2011 - 13th IEEE Int’l. Conferece on e-Health Networking, Application & Services, 13-15 June Columbia, MO. http://www.ieee-healthcom.org/
• ConTEL 2011 - 11th Int’l. Conference on Telecommunications, 15-17 June
«Fiber meets RF»
Graz, Austria. http://www.contel.hr/
FTTA – Fiber To The Antenna
• ICUFN 2011 - 3rd Int’l. Conference on Ubiquitous and Future Networks
ODC – Leading standard solution for remote radio systems • Robust fiber optic solution based on proven N-type connector • EMI protection, salt-mist proof and IP67 • High shock, vibration and mechanical resistance • Broad temperature range • Easy, reliable and cost-effective installation Applications • WiMAX, W-CDMA, TD-SCDMA, CDMA2000, LTE • Fiber optic link between Remote Radio Head and base station
Dalian, China. http://www.icufn.org/main/
♦ IEEE CTW 2011 - IEEE Communication Theory Workshop, 20-22 June Sitges, Spain. http://www.ieee-ctw.org
♦ IEEE SECON 2011 - 8th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks, 27-30 June Salt Lake City, Utah. http://www.ieee-secon.org/2011/
• IEEE ITMC 2011 - IEEE Int’l. Technology Management Conference, 27-30 June San Jose, CA. http://www.ieee-itmc.org/
♦ IEEE ISCC 2011 - 16th IEEE Symposium on Computers and Communications, 28 June-1 July Kerkyra, Greece. http://www.ieee-iscc.org/2011/
JULY ♦ IEEE HPSR 2011 - 12th IEEE Int’l. Conference on High Performance Switching and Routing, 4-6 July Cartagena, Spain. http://www.ieee-hpsr.org/
♦ IEEE ICME 2011 - 2011 IEEE Int’l. Conference on Multimedia and Expo, 11-15 July Barcelona, Spain. http://www.icme2011.org/
AUGUST • ICADIWT 2011 - 4th Int’l. Conference on the Applications of Digital Information and Web Technologies, 46 Aug. Stevens Point, WI. http://www.dirf.org/DIWT/
(Continued on page 19)
IEEE Communications Magazine • January 2011
USA and Canada: Toll free 1866 HUBER SUHNER (1-866-482-3778) Fax 1-802-878-9880 HUBER+SUHNER AG 9100 Herisau Switzerland T +41 71 353 4111
[email protected] hubersuhner.com
17
LYT-CALENDAR-JAN
12/16/10
1:38 PM
Page 19
CONFERENCE CALENDAR (Continued from page 17) ♦ IEEE P2P 2011 - IEEE Int’l. Conference on Peer-to-Peer Computing, 31 Aug.-2 Sept. Tokyo, Japan. http://p2p11.org/
♦ IEEE EDOC 2011 - 15th IEEE Int’l. Enterprise Distributed Object Computing Conference, 31 Aug.-2 Sept. Helsinki, Finland. http://edoc2011.cs.helsinki.fi/edoc2011/
SEPTEMBER • ITC 23 2011 - 2011 Int’l. Teletraffic Congress, 6-8 Sept. San Francisco, CA. http://www.itc-conference.org/2011
♦ IEEE PIMRC 2011 - 22nd IEEE Int’l. Symposium on Personal, Indoor and Mobile Radio Communications, 11-14 Sept. Toronto, Canada. http://www.ieee-pimrc.org/2011/
• ICUWB 2011 - 2011 IEEE Int’l. Conference on Ultra-Wideband, 14-16 Sept.
• COMCAS 2011 - 2011 IEEE Int’l. Conference on Microwaves, Communications, Antennas and Electronic Systems, 7-9 Nov.
Bologna, Italy. http://www.icuwb2011.org/
Tel Aviv, Israel. http://www.comcas.org/
♦ IEEE GreenCom 2011 - Online Conference, 26-29 Sept.
♦ MILCOM 2011 - Military Communications Conference, 7-10 Nov.
Virtual. http://www.ieee-greencom.org/
Baltimore, MD. http://www.milcom.org/index.asp
OCTOBER
DECEMBER
• DRCN 2011 - 8th Int’l. Workshop on Design of Reliable Communication Networks, 10-12 Oct.
♦ IEEE GLOBECOM 2011 - 2011 IEEE Global Communications Conference, 5-9 Dec.
Krakow, Poland. http://www.drcn2011.net/index.html
Houston, TX. http://www.ieee-globecom.org/2011/
NOVEMBER
2012
• ISWCS 2011 - 8th Int’l. Symposium on Wireless Communication Systems
JANUARY
Aachen, Germany. http://www.ti.rwth-aachen.de/iswcs2011/
♦ IEEE CCNC 2012 - IEEE Consumver Communications and Networking Conference, 8-11 Jan. Las Vegas, NV. http://www.ieee-ccnc.org/
Global Communications Newsletter • January 2011
Five-Day Virtual Intensive Course on Wireless Communications Engineering
By Xavier Fernando, Ryerson University, Canada, on behalf of the IEEE Communications Society Education Board

Wireless Communication Engineering Technologies (WCET) certification for practicing wireless communications engineering professionals is gaining widespread popularity and recognition. It is internationally seen as a flagship qualification for wireless professionals, due mainly to its vendor neutrality and trans-national scope. Industry participation at all stages of the development process has ensured that the certification examination is focused on job-related knowledge and skills. WCET-qualified professionals are able to clearly demonstrate their practical knowledge and are increasingly getting better visibility, leading to career advancement.

Several training aids have been developed by ComSoc directed towards improving practitioners' knowledge in wireless communications. The book Guide to the Wireless Engineering Body of Knowledge (WEBOK) was written by industry experts to provide a comprehensive overview of current wireless engineering technology; its many references direct the reader to original sources of in-depth knowledge in the field. The authors and editors of the WEBOK were selected from a worldwide call after a thorough screening.

A three-day "in person" or five-day "virtual" (webcast) "boot camp" style intensive training course covering the breadth of wireless communications technology has been under development for some time. This intensive course was envisioned as a complement to other training and online tutorials that focus on specific technical areas and topics. The intended audience is composed of practicing engineers in various wireless engineering disciplines. The course was designed to be particularly suited for those who work in a specific, perhaps rather narrow, aspect of wireless communications, since one primary goal was to provide a comprehensive overview of overall wireless system design and implementation, and of the operation of wireless networks. Although intended as broad training in wireless communications for any and all practitioners, the course was also anticipated to be helpful for candidates considering seeking WCET certification, since it addresses all seven technical areas covered by the WCET examination. It is important to note that, although these intensive courses cover material likely to be tested in the WCET exam, this training was never intended to specifically prepare attendees for the exam itself. Keeping this in mind, those who worked on developing the course content (including WEBOK authors) and the instructors were purposely kept separate from any involvement in the creation of WCET exam questions or the construction of the exam.

The first online intensive course was successfully organized during September 20-24, 2010. This five-day course began with a review of Fundamental Knowledge in wireless communication technologies. Then a detailed discussion on RF Engineering, Propagation, and Antennas was presented. Subsequently, the topics of Wireless Access Technologies; Network and Service Architectures; Network Management and Security; Facilities Infrastructure; and Agreements, Standards, and Policies were also covered. Seventy-six wireless communications professionals from 15 countries on five continents registered for this course.

The course materials were developed after issuing a worldwide call for developers. The course content was partly drawn from the WEBOK and parallels the various chapters in the book. However, more than 400 detailed PowerPoint slides were created, and the instructors each developed their own notes for use in presenting the material. Participants had ample time to ask questions, which were broadcast over the web to all participants, along with the answers from the instructors. Attendees were encouraged to join the online WCET group formed on LinkedIn as a professional network, as a way to follow up with their questions and discussions. All registered students had access to the presented course material (PowerPoint slides and audio) for seven days after the live event. CEUs were offered to course participants who applied for them and who completed a short survey.

The response has been overwhelmingly positive, with attendees rating the course good or excellent, well organized, and definitely increasing their practical knowledge of wireless communications. Several participants suggested minor changes that they felt would make the course even more useful; these suggestions will be appropriately considered in forthcoming offerings of this course, both in person and online. Indeed, a second offering of the virtual intensive course is planned for the week of January 17-21, 2011. The ComSoc Training website, www.comsoc.org/training, has more details on this presentation, including a link to register. The September offering was actually oversubscribed, so wireless professionals who are interested in taking this course are encouraged to register early!

The author gratefully acknowledges the contribution of Rolf Frantz, who edited this article.
IEEE ComSoc Santa Clara Valley (SCV) Chapter Meeting on "40/100 Gigabit Ethernet – Market Needs, Applications, and Standards"
By Alan J. Weissberger, Chair of the ComSoc SCV Chapter, USA

At its October 13, 2010 meeting, IEEE ComSoc SCV was most fortunate to have three subject matter experts present and discuss 40G/100G Ethernet, the first dual-speed IEEE 802.3 Ethernet standard. The market drivers, targeted applications, architecture and overview of the recently ratified IEEE 802.3ba standard, and the important PHY layer were all explored in detail. A lively panel discussion followed the three presentations. In addition to pre-planned questions from the moderator (ComSoc SCV Emerging Applications Director Prasanta De), there were many relevant questions from the audience. Of the 74 meeting attendees, 52 were IEEE members. The presentation slides, together with the program of upcoming technical meetings and other information, are available online at the ComSoc SCV web site, www.comsocscv.org.
Presentation Highlights

Ethernet's Next Evolution – 40GbE and 100GbE, by John D'Ambrosia of Force10 Networks. The IEEE 802.3ba standard was ratified on June 17, 2010 after several years of hard work. What drove the market need for this standard? According to John D'Ambrosia, the "bandwidth explosion" has created bottlenecks everywhere. In particular, an increased number of users, faster access rates and methods, and new video-based services have created the need for higher speeds in the core network. Mr. D'Ambrosia stated, "IEEE 802.3ba standard for 40G/100G Ethernet will eliminate these bottlenecks by providing a robust, scalable architecture for meeting current bandwidth requirements and laying a solid foundation for future Ethernet speed increases." John sees 40G/100G Ethernet as an enabler of many new network architectures and high-bandwidth/low-latency applications. Three core networks were seen as likely candidates for higher-speed Ethernet penetration: campus/enterprise, data center, and service provider networks.

John showed many illustrative graphs that corroborated the need for higher speeds in each of these application areas. The "Many Roles and Options for Ethernet Interconnects (in the Data Center)," "Ethernet 802.3 Umbrella," and "Looking Ahead - Growing the 40GbE/100GbE Family" charts were especially enlightening. We were surprised to learn of the breadth and depth of the 40G/100G Ethernet standard, which can be used to reduce the number of links for: chip-to-chip/modules, backplane, twin-ax, twisted pair (data center), MMF, and SMF. This also improves energy efficiency, according to Mr. D'Ambrosia.

Looking beyond 100GbE, John noted that the industry is being challenged on two fronts: low-cost, high-density 100GbE and the next rate of Ethernet (?). To be sure, the IEEE 802.3ba Task Force co-operated with ITU-T Study Group 15 to ensure the new 40G/100G Ethernet rates are transportable over optical transport networks (i.e., the OTN). Mr. D'Ambrosia identified the key higher-speed market drivers as data centers, Internet exchanges, and carriers' optical backbone networks. The other two speakers also speculated about higher-speed Ethernet (see below).

The IEEE Std 802.3ba-2010 40Gb/s and 100Gb/s Architecture, by Ilango Ganga of Intel Corp. Mr. Ganga presented an overview of the IEEE 802.3ba standard, which has the following characteristics:
•Addresses the needs of computing, network aggregation, and core networking applications
•Uses a common architecture for both 40 Gb/s and 100 Gb/s Ethernet
•Uses the IEEE 802.3 Ethernet MAC frame format
•Is flexible and scalable
•Leverages existing 10 Gb/s technology where possible
•Defines physical layer technologies for backplane, copper cable assembly, and optical fiber media

Ilango identified two future standards efforts related to IEEE Std 802.3ba:
•The IEEE P802.3bg task force is developing a standard for a 40 Gb/s serial single-mode fiber PMD
•A Call For Interest on 100 Gb/s backplane and copper cable assemblies was scheduled for November 2010

Physical Layer (PCS/PMA) Overview, by Mark Gustlin of Cisco Systems. Mr. Gustlin explained the all-important PHY layer, which is the heart of the 802.3ba standard. The two key PHY sublayers are the PCS (Physical Coding Sublayer) and the PMA (Physical Medium Attachment).
•The PCS performs the following functions: it delineates Ethernet frames; supports the transport of fault information; provides the data transitions that are needed for clock recovery on SerDes and optical interfaces; bonds multiple lanes together through a striping/distribution mechanism; and supports data reassembly in the receive PCS, even in the face of significant parallel skew and with multiple multiplexing locations.
•The PMA performs the following functions: bit-level multiplexing from M lanes to N lanes; clock recovery, clock generation, and data drivers; and loopbacks as well as test pattern generation and detection.

Mark drilled down into the important multi-lane PHY functions of transmit data striping and receiver data alignment. These mechanisms are necessary because all 40G/100G Ethernet PMDs have multiple physical paths or "lanes." These are either multiple fibers, coax cables, wavelengths, or backplane traces. Module interfaces are also multiple lanes, and not always the same number of lanes as the PMD interface. Therefore the PCS must support a mechanism to distribute data to multiple lanes on the transmit side, and then reassemble the data in the face of skew on the receiver side before passing it up to the MAC sublayer.

Mark also touched on the topic of a higher speed for Ethernet. He speculated that the next higher speed might be 400 Gb/s, or even 1 Tb/s; Mr. Gustlin opined that it was too early to tell. He noted that the IEEE 802.3ba architecture is designed to be scalable. In the future, it can support higher data rates by increasing the bandwidth per PCS lane and the number of PCS lanes. He suggested that for 400 Gb/s, the architecture could be 16 lanes at 25 Gb/s, for example, with the same block distribution and alignment marker methodology. Mark summed up by reminding us that the 40G/100G Ethernet standard supports an evolution of optics and electrical interfaces (for example, a new single-mode PMD will not need a change to the PCS), and that the same architecture (sublayers and the interfaces between them) can support future faster Ethernet speeds.
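To make the striping and deskew mechanism concrete, here is a toy model in Python. It is a sketch, not the bit-exact 802.3ba PCS (real lanes carry 64b/66b-coded blocks, 100GBASE-R uses 20 PCS lanes, and a unique alignment marker is inserted on each lane every 16,383 blocks); all names and the shrunken constants are our own illustration.

import random

# Toy model of 802.3ba-style PCS lane striping and receive-side deskew.
# Constants are shrunk for readability: the standard uses 66-bit blocks,
# 4 or 20 PCS lanes, and an alignment marker every 16,383 blocks per lane.

N_LANES = 4      # PCS lane count (4 for 40GBASE-R, 20 for 100GBASE-R)
AM_PERIOD = 8    # data blocks between alignment markers (16,383 in the std)

def pcs_transmit(blocks):
    """Stripe blocks round-robin across PCS lanes; each lane periodically
    gets an alignment marker (AM) that also identifies the lane."""
    lanes = [[] for _ in range(N_LANES)]
    for i, blk in enumerate(blocks):
        lane = lanes[i % N_LANES]                # round-robin distribution
        if len(lane) % (AM_PERIOD + 1) == 0:     # start of a marker period
            lane.append(("AM", i % N_LANES))
        lane.append(("DATA", blk))
    return lanes

def channel(lanes, max_skew=5):
    """Model the physical paths: each lane sees a different delay (skew),
    and multiplexing stages may re-order which lane arrives where."""
    skewed = [[("IDLE", None)] * random.randint(0, max_skew) + lane
              for lane in lanes]
    random.shuffle(skewed)
    return skewed

def pcs_receive(skewed):
    """Locate each lane's first AM to identify and deskew it, strip the
    markers, then reassemble the stream by round-robin interleaving."""
    payload = {}
    for lane in skewed:
        first_am = next(i for i, b in enumerate(lane) if b[0] == "AM")
        lane_id = lane[first_am][1]              # the AM reveals lane identity
        payload[lane_id] = [b[1] for b in lane[first_am:] if b[0] == "DATA"]
    out = []
    for i in range(max(len(v) for v in payload.values())):
        for lane_id in range(N_LANES):
            if i < len(payload[lane_id]):
                out.append(payload[lane_id][i])
    return out

data = list(range(64))
assert pcs_receive(channel(pcs_transmit(data))) == data

Because each marker identifies its lane, the receive PCS tolerates both skew and lane re-ordering. That is what allows module interfaces with a different lane count than the PMD, and why the 16-lane, 25 Gb/s option Mr. Gustlin mentioned can reuse the same block distribution and alignment marker methodology.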
Panel Discussion/Audience Q and A Session

The ensuing panel session covered 40G/100G Ethernet market segments; applications (data center, Internet exchanges, WAN aggregation on the backbone, campus/enterprise, etc.); competing technologies (e.g., InfiniBand for the data center); and the timing of implementations (e.g., on servers, switches, and network controllers). There were also a few technical questions for clarification and research related to single-lane high-speed links.

It was noted by this author that almost 10 years after standardization, servers in the data center have only recently included 10G Ethernet port interfaces, while 10G Ethernet switches only now can switch multiple ports at wire-line rates. So how long will it take for 40G/100G Ethernet to be widely deployed in its targeted markets? The panelists concurred that more and more traffic is being aggregated onto 10G Ethernet links, and that will drive the need for 40G Ethernet in the data center. Mark Gustlin said, "100GE is needed today for uplinks in various layers of the network." But the timing is uncertain. Higher-speed uplinks on Ethernet switches, high-performance data centers (e.g., Google), Internet exchanges, wide area network aggregation, and box-to-box communications were seen as the first real markets for 40G/100G Ethernet. Each market segment/application area will evolve at its own pace, but for sure the 40G/100G Ethernet standard will be an enabler of all of them.

The final question was asked by former IEEE 802.3 Chair Geoff Thompson. Geoff first noted that the 40G/100G Ethernet standard and all the higher-speed Ethernet studies being worked in IEEE 802.3 are for the core enterprise or carrier backbone network. He then asked the panelists when there would be big enough technological advances in the access or edge network to enable higher speeds there, i.e., the on-ramps/off-ramps to the core network. The panelists could not answer this question, as it was too far from their areas of expertise. In particular, nothing was said about the very slow-to-improve telco wireline access network (DSL or fiber) and the need to build fiber out closer to business and residential customers to achieve higher access rates. Nonetheless, the audience was very pleased to learn that the 802.3ba architecture is scalable and seems to be future-proof for higher-speed Ethernet.
The IEEE Branch Office Relocates in Singapore
By Fanny Su and Ewell Tan, IEEE Singapore

Since 1995, the Communications Society has used the IEEE branch office in Singapore to support its members, customers, and volunteers in the Asia Pacific. Shortly after, the office was included in the Charter of the ComSoc Asia Pacific Board (APB), and its role expanded to provide support services to the APB Director and committee activities. On 1 November 2010, IEEE announced the relocation of its branch office to Solaris@Fusionopolis. A ribbon cutting and unveiling of the IEEE signage was organized for the afternoon event celebrating the occasion.

The relocation to Singapore's science and engineering research hub will enable IEEE to collaborate with the Agency for Science, Technology and Research (A*STAR) and the Singapore Economic Development Board (EDB) on world-class scientific research in biomedical sciences, physical sciences, and engineering in Singapore. A*STAR also took the opportunity to honor three of its technologists for winning IEEE awards: David Townsend for the 2010 IEEE Medal for Innovations in Healthcare Technology, Dim-Lee Kwong for the 2011 IEEE Frederik Philips Award, and Tony Quek for the IEEE GLOBECOM 2010 Best Paper and Gold Best Paper Awards.

Helene Fung, a long-time IEEE volunteer, joins the branch office in Singapore as Senior Strategic Planning Manager under Corporate Activities. IEEE staff Fanny Su and Ewell Tan will continue to provide support services to the Communications Society, the ComSoc APB Director and its committees, AP ComSoc Chapters, and members. The office continues to serve as the coordinating and information centre for ComSoc volunteers in the Asia Pacific, and to provide continuity during the transition period of handing over the responsibilities of the AP Board Directorship and its officers from one committee to the next.

Over the years, we have coordinated an increasing number of ComSoc Distinguished Lecturer Tours to the Asia Pacific region to stimulate more local Chapter activities. We also monitor Chapter activities and encourage Chapters to publicize them for member recruitment and retention. Technological advances in web applications in recent years have empowered our members, customers, and volunteers, enabling them to complete transactions, retrieve information, and network with their peers. We strive to support and complement these new online services. IEEE membership in the Asia Pacific continues to grow, and our office will be looking to leverage membership growth opportunities and to provide greater visibility for the Communications Society in 2011.

The new office is currently undergoing renovation and outfitting. It is expected to be fully operational by the end of January 2011. Do drop by to visit us at our new location: 1 Fusionopolis Walk, #04-07, South Tower, Solaris, Singapore 138628.

Ribbon cutting and celebration at the new IEEE AP office at Fusionopolis on 1 Nov. 2010. From left to right: Tony Quek, Principal Investigator and Senior Research Engineer of the Institute for Infocomm Research, A*STAR; Dim-Lee Kwong, Executive Director of the Institute of Microelectronics, A*STAR; Lim Chuan Poh, Chairman of A*STAR; Pedro Ray, IEEE President; James Prendergast, IEEE Executive Director; Low Teck Seng, Managing Director of A*STAR.
Activities of the Tomsk IEEE ComSoc Chapter in 2010
By Oleg Stukach, Tomsk Chapter Vice-Chair, Russia

The Tomsk IEEE Chapter is part of the IEEE Russia Siberia Section and IEEE Region 8 (Europe, Middle East, and Africa). The Chapter was initially established in Tomsk after approval by the Russia Section on January 1, 2000. The joint Chapter comprises the following Societies: Electron Devices (ED), Antennas and Propagation (AP), Communications (COM), Microwave Theory and Techniques (MTT), and Electromagnetic Compatibility (EMC).

Efforts to increase the level of IEEE activities in the Siberia and Far East regions of Russia are in progress, and there has been significant progress in the last several years. A good example is the International Siberian Conference on Control and Communications (SIBCON), which was technically co-sponsored by IEEE and was attended by 300 participants from 8 countries. The unique opportunity to meet and to hear the views of different participants was of great value, and the conference was an excellent forum for focusing on the many common technology issues.

In the first half of this year, the Tomsk Joint Chapter held two technical meetings, and we plan to hold another four meetings by the end of the year. In April, we presented the seminar "Join the ComSoc" to our students and plan to extend this direction of our activity. This experience motivates us to continue working hard for the ComSoc community. We really appreciate the opportunity of sharing ideas via the ComSoc hosting of our website, http://chapters.comsoc.org/tomsk. It will encourage us to increase the activity of Tomsk Chapter members, as well as of all Russian ComSoc members, and to expand our projects, especially the Web project "ComSoc in Russian," http://chapters.comsoc.org/tomsk/comsoc/comsoc.htm.

Currently, the Chapter programs are focused on three main aspects: an effort to increase membership through the participation of scientists and students in IEEE activities, the SIBINFO student paper contest, and co-operation in organizing international conferences covering various aspects of electronics, radar, communications, etc. We are proud that the Tomsk Chapter has been selected as one of the four winners of the 2010 Chapter Achievement Award. We will continue to make efforts to include our specialists in the scientific and educational process worldwide, and in this respect we count on the further precious assistance of IEEE ComSoc.

Members of the Tomsk Chapter having dinner at the IEEE ComSoc SIBCON conference (left to right: Igor Sychev, Irina Sycheva, Alexey Osovsky, Denis Kutuzov, Oleg Stukach).
2010 Celebrates the 30th Sister Society Agreement
By Roberto Saracco, Director - Sister Societies, IEEE Communications Society

In the last 10 years, ComSoc has consistently sought to establish ties with other Societies with a local footprint. The reason for that is still valid today: ComSoc is an international organization providing a global reach and a global perspective. It is to the benefit of all its members if this global reach can be complemented with a local focus. Indeed, many ComSoc members are also members of their local Society, because there they can find the desired ties with their territory. The goal of setting up agreements with local Societies is to extend this local view and perspective to engineers living in a different area. As the world shrinks, it gets more and more interesting to share experiences among different realities and to understand the local perspective. Additionally, ComSoc members are part of a mostly homogeneous community, whilst local Societies very often include different branches of engineering: civil engineering, electronic engineering, and so on. As communications becomes more and more pervasive, the possibility of being exposed to a varied world is increasingly important.

During 2010 we reached the 30 Sister Societies mark, an important figure that gives us broad coverage of the world. We are still missing some parts of the world, like central Africa, an area that is likely, and hopefully, to see significant evolution in the coming years, and Australia, but we are working on this. So expect some good news in 2011. In these last two years we have also managed to establish ties with organizations that are slightly different from the ones I have just presented, having a global (or regional) footprint or a complementary area of operation. Notably, we have established ties with EWI, the East West Institute, which addresses policy and political issues, and with FITCE, the European organization clustering the European telecommunications associations, which also deals with policy issues.
Having reached this milestone, we are now looking to exploit what has been built over the years rather than focusing on further expanding the number of agreements (with the exception of central Africa and Australia, as I previously mentioned). One concrete step that has been taken is the appointment of a liaison officer in Chapters co-located with each Sister Society, and a ComSoc officer for each global Society. This should ensure more effective communications and better exploitation of the relationship. Next year we will be establishing a metric to measure the effectiveness of each agreement in terms of joint activities: co-sponsored conferences, exchange of lecturers, presentations at each other's events, participation in each other's events, number of members of one organization joining the other, and publication of papers in each other's magazines and journals. For more information: http://host.comsoc.org/sistersocieties/index.html
Global Newsletter (www.comsoc.org/pubs/gcn)

STEFANO BREGNI, Editor
Politecnico di Milano - Dept. of Electronics and Information
Piazza Leonardo da Vinci 32, 20133 Milano MI, Italy
Ph.: +39-02-2399.3503 - Fax: +39-02-2399.3413
Email: [email protected], [email protected]

IEEE COMMUNICATIONS SOCIETY
KHALED B. LETAIEF, Vice-President, Conferences
SERGIO BENEDETTO, Vice-President, Member Relations
JOSÉ-DAVID CELY, Director of LA Region
GABE JAKOBSON, Director of NA Region
TARIQ DURRANI, Director of EAME Region
NAOAKI YAMANAKA, Director of AP Region
ROBERTO SARACCO, Director of Sister and Related Societies

Regional correspondents who contributed to this issue:
NICOLAE OACA, Romania ([email protected])

A publication of the IEEE Communications Society
GUEST EDITORIAL
NETWORK DISASTER RECOVERY
Chi-Ming Chen, Anil Macwan, and Jason Rupe

Telecommunication networks are known for high reliability. This is true for normal operations and for outages due to failures in the network associated with hardware, software, and so on. However, unexpected disasters (e.g., earthquakes, hurricanes, and terrorist attacks) do happen, and can have catastrophic impacts without excellent network contingency planning. This feature topic will address the various approaches to plan for and manage recovery following disasters, so that all concerned can mitigate these events effectively.

Network disaster recovery (NDR) has become timelier in the last few years, with catastrophic natural disasters occurring in different parts of the world due to various causes, with different responses necessary. There is no telling when and where the next disaster will occur, but disasters will occur. This topic will continue to gain attention until properly mitigated. Communications is critical to survival and safety, as well as to the health and security of the world. With growing penetration of the Internet from residential use, combined with commercial and otherwise significant traffic being carried over backbone networks, impacts from service interruptions continue to be critical to many segments of society. Most technical articles in this area cover emerging technologies, standards, protocols, and networking solutions, but say less about what to do when they fail, especially when such failures are numerous and catastrophic, and how to maintain critical communications. This feature topic of IEEE Communications Magazine aims to provide a comprehensive overview of the state of the art in technology, regulation, and standardization for NDR, and to present a view of research and industry challenges and opportunities in this area.

NDR is mainly about recovering communications after a disaster, supporting communications during disaster recovery operations, and rapidly bringing life back to normal for customers of the network. At times, you need to work toward all three at once! And because triggering events can happen without notice and in unpredictable ways, with unpredictable results, we must plan accordingly. Furthermore, the rarity of these events makes it difficult to obtain resources and assess risk.¹

Network recovery requires mobile replacements within short times, including rapid power backup and fast restoration of service. Network operators need to be prepared to rebuild any part of their networks extremely rapidly. In addition to communications networks, disasters will most likely damage other infrastructures and systems, and render them dysfunctional. Repair of the other infrastructures requires coordination, which requires effective communications. This need makes network recovery a high priority.

Disasters can cause severe disruption to people's lives. Communication with relatives and assistance resources outside the scope of the disaster is critical to recovery and human security. The importance of effective communications is clear, but equally clear is the vastness of the causes of disasters. Both natural and man-made disasters should be considered. It is important to plan for what you cannot prevent: strengthen your mitigation capabilities, and plan for the massive impacts of the worst case. Because disasters cannot be predicted well, it is crucial to plan creatively and robustly, even without concrete information or known risks. Network disaster recovery is insurance paid for through preparation, supplemental to an insurance policy, so that networks, and the businesses and end users the networks serve, can survive through and after a catastrophe.

Among the five articles of this NDR feature topic, two are on the best practices of major operators; one is on the lessons learned from the China earthquake in 2008; one is from a large product manufacturer on key actions necessary to properly prepare for an emergency; and one is from academia on how to deliver messages from and to a disaster area.

In "Rapidly Recovering from the Catastrophic Loss of a Major Telecommunications Office," Kelly Morrison describes the mobile response components and processes for service restoration of a failed network office. With the fast evolution of communications technology and various environmental conditions, a key step to the success of a
recovery is exercise, so that the recovery team is trained and ready. This article also shares recovery experiences from real disasters.

In "Disasters Will Happen — Are You Ready?" Chris Oberg et al. explain in great detail the planning necessary for a solid network disaster recovery plan, the pitfalls to avoid, and the many elements of the plan that you may not have considered. They suggest several elements of a good plan, and provide some guidelines and tools you can apply to your own disaster planning. Furthermore, they cover key elements to include in a plan, and even how to work with the government to coordinate successful recovery of network services, as well as how to support recovery efforts. Drawing on their real-world experience at Verizon, these disaster recovery and communication network veterans share their 99 combined years of experience on network disaster recovery.

Considering many lessons learned, Yang Ran provides "Considerations and Suggestions on Improvement of Communication Network Disaster Countermeasures after the Wenchuan Earthquake." In this article she analyzes what worked and did not work well in the recovery efforts after the Wenchuan earthquake, and in related events such as Hurricane Katrina, the 2005 London bombings, and the 9/11 attacks. From this analysis, the author proposes several improvements: priority service functions in the public switched telephone network (PSTN), strengthened wireless communications capabilities, and leveraging technologies to handle emergency high-volume traffic. In addition, Yang Ran provides suggestions for future research to enhance network disaster recovery capabilities.

Rick Krock, in "Lack of Emergency Recovery Planning Is a Disaster Waiting to Happen," highlights the need for emergency preparedness and discusses several key actions necessary to properly prepare for an emergency. He cites a 2006 study, "Availability and Robustness of Electronic Communications Infrastructures" (ARECI), which focused on reliability and security of networks in addition to emergency preparedness, and discusses specific recommendations that are applicable to disaster recovery. This article also cites a recent example of industry's efforts to anticipate and prepare for an emergency situation, related to the 2009 worldwide flu pandemic associated with the H1N1 virus strain. Cooperation among government and industry, including network operators, vendors, and other parties, as well as among countries, should continue and get stronger for better recovery planning in the future.

An important need for end users of networks is the ability to communicate during the post-disaster phase while they are in shelters. Kenichi Mase presents a novel communication system and service to accomplish this end in the article "How to Deliver Your Message from/to a Disaster Area." The system, called the Shelter Communication System (SCS), connects to the Internet, and accounts for the fact that networks may be severely damaged and traffic overload may occur. The SCS is designed
to provide message communication service between shelters, as well as between shelters and points outside the disaster area. An evaluation comparing a prototype of the SCS with other message services, cellular phone mail and facsimile, is presented. According to the author, the SCS can help maximize the role of the Internet as the social infrastructure that contributes to rapid disaster recovery.

¹ That is why network disaster recovery is such an important issue, and why we present these articles for your reference.
BIOGRAPHIES

CHI-MING CHEN [SM] ([email protected]) has been working in the R&D departments of major telecommunications service providers since 1985. He joined AT&T in 1995. His current responsibility is the operations support system (OSS) architecture of process automation, traffic management, incident management, change management, and network disaster recovery. He supports the architecture planning for the AT&T Global Network Operations Center (GNOC), which monitors and controls AT&T's worldwide network. Prior to joining AT&T, he was with Bell Communications Research (currently Telcordia) from 1985 to 1995. His responsibilities included specification of quality and reliability requirements for various networks and network elements, and supplier product testing and analyses. From 1975 to 1979 he was a faculty member at Tsing Hua University, Taiwan. He received his Ph.D. in computer and information science from the University of Pennsylvania, his M.S. in computer science from Pennsylvania State University, and his M.S. and B.S. in physics from Tsing Hua University, Taiwan. For 2010–2011 he is serving as the Advisory Board Chair of the IEEE Communications Society Technical Committee on Communications Quality & Reliability. He was/is the Business Forums Chair of IEEE GLOBECOM 2009 and 2010 and ICC 2011. He has also served as Technical Program Co-Chair of the IEEE Symposium on Computers and Communications (ISCC) 2006 and 2010. He is a senior member of ACM.

ANIL P. MACWAN [M'98] ([email protected]) has been working on reliability analysis, methods, and methodology in communication and other industries since 1992. He started his career in communications with Lucent Technologies in 1996. His responsibilities include root cause analysis, hardware and procedural reliability, quality improvement, modeling and analysis, data analysis, migration to next-generation networks, and IP transformation. He has participated in the Network Reliability and Interoperability Council and the QuEST Forum. Prior to joining the communications industry, he worked on reliability projects for various industries such as power, chemical, manufacturing, aviation and space, marine transportation, and public safety. He led risk and reliability analysis projects at Delft University of Technology, The Netherlands. He received his Ph.D. in reliability engineering from the University of Maryland, his M.S. from Iowa State University, his M.Tech. from the Indian Institute of Technology, and his B.E. from M.S. University, India. He is currently serving as the Chair-Elect of the IEEE Communications Society Technical Committee on Communications Quality & Reliability.

JASON W. RUPE [SM] ([email protected]) wants to make the world more reliable. He received his B.S. (1989) and M.S. (1991) degrees in industrial engineering from Iowa State University, and his Ph.D. (1995) from Texas A&M University. He worked on research contracts at Iowa State University for CECOM on the Command & Control Communication and Information Network Analysis Tool, and conducted research on large-scale systems and network modeling for reliability, availability, maintainability, and survivability (RAMS) at Texas A&M University. He has taught quality and reliability at these universities, published several papers in respected technical journals, reviewed books, and refereed publications and conference proceedings. He is a Senior Member of IIE. He has served as Associate Editor for IEEE Transactions on Reliability, and currently is its Managing Editor. He has served as Vice-Chair of RAMS, on the program committee for DRCN, on the advisory board for IIE Solutions magazine, and as an officer for the IIE QCRE division. He has worked at USWEST Advanced Technologies, and has held various titles at Qwest Communications Int'l. Inc., most recently as director of the Technology Modeling Team, Qwest's Network Modeling and Operations Research group for the CTO. He has always been those companies' reliability lead. Currently, he is an adjunct professor at Metro State College of Denver, Colorado, and director of operational engineering at Polar Star Consulting, where he helps government and private industry plan and build reliable network services. He holds two patents.
NETWORK DISASTER RECOVERY
Rapidly Recovering from the Catastrophic Loss of a Major Telecommunications Office
Kelly T. Morrison, AT&T
ABSTRACT

Ready access to communications networks has become a necessity for nearly every type of organization (business, non-profit, government) and for individual users. If a catastrophic disaster destroyed a city's central network office, telecommunications users in that region would lose their network connectivity — data, voice, cellular — until the office's capabilities could be restored. That isolation could put lives and livelihoods at risk as an affected city tried to respond to a large natural or man-made disaster. AT&T has a mature network emergency management (NEM) and business continuity program that plans for and responds to events that affect the AT&T network and its support systems around the globe. Within that NEM plan, the AT&T Network Disaster Recovery (NDR) team is responsible for the restoration of a failed network office's services. This article describes NDR's mobile response process and how it fits within AT&T's overall response to a disaster.
INTRODUCTION

A large telecom central office is the nexus for the communications services of an entire city or region; it manages special services (911, video, etc.), routes traffic within a community, and moves data on and off the global fiber network. A central network office contains equipment that supports a broad array of services — from simple twisted-pair residential dial tone to cellular traffic to high-speed, multi-spectrum IP data. Together, the various telecom technologies form a large, complex machine that typically spans several floors of a dedicated building. Each network office (central office) has a unique profile based on the equipment installed in the building and on the services that are provided to its region.

The central office buildings are robust. Oscillations from the 6.7 Mw Northridge, California, earthquake in 1994 damaged the walls of AT&T's Sherman Oaks office, but operations continued and the building was declared safe. Network offices in flood plains and along the hurricane coasts are elevated or use lower floors
for administrative space. All of the central offices have redundant power sources — commercial feeds from local utilities, back-up generators with dedicated fuel supplies, and large battery strings that provide the equipment with uninterrupted power.

AT&T's network architecture and network management practices minimize the potential for outages to affect its customers. Diverse network paths and automated rerouting provide nearly instantaneous responses to cable cuts or congestion. Network management becomes even more proactive in advance of known events, such as approaching hurricanes or large civic or sports events, when traffic is rerouted away from parts of the network that may be impacted. It's an effective system — AT&T's global network moves over 20 petabytes of data a day with a 99.99+ percent reliability rating.

But the "what if" question remains. What could be done if a large network office was completely destroyed? Traffic management and rerouting could do nothing to restore network access to the community and to the businesses directly served by that failed office. Normal communications services could not be reestablished until the capabilities of the central network office were restored. A traditional brick-and-mortar restoration could take months or longer — an unacceptably long period that would leave an area ill-equipped to recover from a disaster.
PLANNING FOR THE WORST: CATASTROPHIC DISASTER RESPONSE

AT&T's Network Disaster Recovery program was formed in 1991 to develop a way to respond to that "what if" scenario — the loss of an entire central office. The NDR solution combines network infrastructure and support trailers, recovery engineering software applications, and a response team with both full-time and volunteer members from AT&T. The trailers provide the physical components that carry the restored network traffic. The software platforms allow those components to take on the services of the failed building. The team members create, connect, turn up, and manage the recovery complex (Fig. 1).
Figure 1. NDR exercise in Washington, DC, in July 2009.

NDR RECOVERY TRAILER FLEET

Each NDR technology trailer contains a telecom infrastructure element that is present in a normal AT&T central office. Installing, configuring, and testing the equipment in advance eliminates the delay of having first-off-the-line equipment sent from vendors after an office has been lost, and it also eliminates the need to find a suitable replacement building in an area devastated by a disaster. The recovery trailers allow the equipment to arrive at a stricken city ready for service, in powered containers that, when interconnected, serve as a temporary network office (Fig. 2).

Figure 2. NDR technology recovery trailers.

The equipment is installed in bays/racks down each side of the trailer, leaving a working aisle down the center — like the layout of equipment bays in a permanent office. The equipment is powered by battery, through a rectifier, and the batteries are constantly charged from commercial power, from a large portable generator (e.g., 600 kW), or from a dedicated generator (usually built in to the front of each trailer). Fiber-optic and/or copper-T3 cabling connections to the equipment are made in internal and/or external cabling bulkheads on each trailer.

The technologies present in the trailers have evolved since the early 1990s, following the changes in the AT&T network. The trailers now include IP and OC-768 (40 Gb/s) fiber-optic transport systems that support the high volume of data that traverses the AT&T network. The IP trailers, when fully equipped, can scale up to a capacity of over 10,000 T3s (at 44.736 Mb/s per T3, roughly 450 Gb/s). The technology fleet includes ATM, Frame Relay, DMS, and 5E switch trailers, and lower-speed fiber transport and Digital Access Cross-Connect System (DACS) trailers to allow inter-city/intra-city services and interfaces between the network's copper and fiber-optic infrastructure.

AT&T's AGN network in the global markets is supported with disaster recovery trailers staged in both the United States and in Europe. Some of the AGN equipment is installed in fly-away containers that can be shipped by commercial air carrier. Following the earthquakes in Chile in February 2010, AGN recovery nodes were flown to Santiago and set up near a permanent telco office in case aftershocks made that building, and its equipment, unusable (Fig. 3).

Figure 3. NDR AGN recovery nodes in Santiago, Chile, March 2010.

NDR's recovery equipment is maintained in warehouses across the United States and in Europe — positioned strategically so the equipment can reach AT&T's network offices quickly. The equipment doesn't sit idle. It is kept powered up and on-network so updates can be installed and so it can be tested both on-site and remotely. NDR's warehouse teams are responsible for the health and welfare of the equipment inside the trailers and for the running gear that lets the equipment travel. It's a unique blend of skills that crosses the spectrum of tasks from machining parts to configuring core network routers. In 2010, the eighty-sixth recovery technology trailer was added to the fleet. NDR's total inventory includes over 350 pieces of equipment, including large power and support trailers, emergency communications vehicles, hazmat trailers, and escort vehicles.

NDR EMERGENCY COMMUNICATIONS

NDR's deployment plan includes an assumption that no normal communications channels will be available when the team arrives at a recovery site in a disaster area. The team establishes first-in communications capabilities using an emergency communications vehicle (ECV). The ECVs are four-wheel-drive vans or SUVs that use a satellite link to provide broadband LAN, Wi-Fi, and voice (VoIP) connectivity for the team at a recovery site. The ability to set up a small footprint of cellular coverage using a microcell was added to the ECVs in 2010. The ECVs can be set up rapidly (15-20 minutes) and are self-sufficient — they have dedicated generators and tow support supplies in a small trailer. The ECV data/voice feed is connected to a router and to a PBX to distribute LAN connectivity and dial tone to each trailer on the recovery site (Fig. 4).

Figure 4. Emergency communications vehicle and microcell at NDR's exercise in San Jose, California, July 2010.

Because of their rapid response capability, the ECVs have been frequently deployed to provide emergency communications capabilities for other responders or to provide service at intake centers for disaster victims. AT&T deployed four ECVs and a fly-away satellite unit during its Hurricane Katrina response in 2005, and used ECVs as recently as May 2010 to provide Wi-Fi service for flood victims in Nashville, Tennessee.

The team can also use 5.8 GHz microwave radio links to rapidly establish communications channels from a disaster area back to a functioning AT&T network office or point of presence (POP). NDR practices setting up these links during exercises and uses the capability to remotely monitor the trailers and to provide higher-capacity data service for the site. These
temporary microwave radio links were used to recover service in 2008 after damage from Hurricane Ike isolated some of the cell towers on Galveston Island. They were used for the same purpose after Nashville's flooding in May 2010.

Recovering a disaster area's cellular communications requires a functional central office and the ability to restore the capabilities provided by individual cell sites. A portable cell site — a cell on light truck (COLT) or cell on wheels (COW) — can be used to replace the service provided by a failed site. Cellular antennas are attached to a pneumatic mast on the COLT or COW and connected to the same backhaul network feed that served the permanent site. COLTs and COWs are also frequently used in business-as-usual conditions to augment existing service to an area during large-scale events, such as civic or sporting events. If backhaul facilities have also been destroyed or are not available, the data from the temporary cell site can be passed back to the AT&T network with a satellite link. NDR's satellite COLTs were used on Galveston Island to reestablish cellular communications for first responders after Hurricane Ike's landfall in September 2008. The satellite COLTs can be configured to provide controlled network access to specific devices, preventing oversubscription of a COLT's service. Like the ECVs, the satellite COLTs have been used frequently to provide emergency communications capabilities for first responders in areas that don't normally have cellular coverage. Two of NDR's satellite COLTs were deployed to Arkansas in June 2010 after a flash flood washed campers away near Caddo Gap; a satellite COLT provided additional cellular coverage in Montcoal, West Virginia, after the coal mine explosion in February 2010 (Fig. 5).

Figure 5. NDR Satellite COLT at an exercise in San Diego, CA, in October 2010.
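As a rough illustration of that controlled-access point, the fragment below sketches allow-list admission control sized to a satellite backhaul. The identifiers, capacity figures, and policy are hypothetical; the article does not describe AT&T's actual mechanism.

# Hypothetical admission control for a satellite-backhauled portable cell
# site: only pre-authorized devices attach, and admitted sessions are kept
# below the satellite link's capacity. Not AT&T's actual mechanism.

SAT_LINK_KBPS = 2000   # assumed usable satellite backhaul capacity
PER_CALL_KBPS = 12     # assumed per-call bearer allocation

class ColtAdmission:
    def __init__(self, authorized_ids):
        self.authorized = set(authorized_ids)  # e.g., first responders' IDs
        self.active = set()

    def admit(self, device_id):
        """Admit a device only if it is authorized and capacity remains."""
        if device_id not in self.authorized:
            return False                       # not on the allow-list
        if (len(self.active) + 1) * PER_CALL_KBPS > SAT_LINK_KBPS:
            return False                       # link would be oversubscribed
        self.active.add(device_id)
        return True

    def release(self, device_id):
        self.active.discard(device_id)

cell = ColtAdmission({"imsi-001", "imsi-002"})
assert cell.admit("imsi-001") and not cell.admit("imsi-999")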
RECOVERY ENGINEERING

The NDR strategy demands the ability to reengineer — quickly — a permanent office using the technology trailers as the puzzle pieces. When the team is activated, the NDR recovery application is initiated; it polls the network office records and assembles an inventory of the lost equipment, the services the failed office supported, and the configuration profiles of the office's systems. The NDR engineering team uses the data to build a list of technology trailers that will be necessary to recover the capability of the failed or destroyed office. This first step happens rapidly, allowing the NDR warehouse teams to get the correct set of technology trailers on the road to the recovery location.

The NDR engineering team will then begin creating a cabling plan that will allow the individual technology components in the trailers to work together as the recovered network office. The trailers pass data between one another through fiber-optic and/or electrical DS3 cables that are connected to particular ports in bulkhead panels on each trailer. A large office recovery complex could include over 10,000 individual T3 and optical bulkhead connections. The assignments for the connections are unique — allowing the NDR equipment to be configured to restore the services of the lost office. Those assignments are made using a software application developed for and by the team since the early 1990s. The latest release of the engineering platform, called RUBY, was issued in late 2009. The cabling plan is developed while the equipment is on the road and being set up at a recovery site. The plan is downloaded at the recovery location over temporary satellite or digital radio network links.
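The flow just described (inventory the failed office, derive a trailer list, then give every inter-trailer circuit a unique bulkhead assignment) can be sketched in a few lines. RUBY itself is proprietary, so the data model, capacities, and greedy selection below are hypothetical illustrations of the idea, not AT&T's actual logic.

# Hypothetical sketch of the recovery-engineering flow. Demand is derived
# from the failed office's records; the fleet and capacities are invented.

office_demand = {"IP": 2400, "DACS": 900, "5ESS": 300}   # circuits needed

# Available fleet: (technology, circuit capacity, bulkhead ports per trailer)
fleet = [
    ("IP", 1000, 128), ("IP", 1000, 128), ("IP", 1000, 128),
    ("DACS", 500, 96), ("DACS", 500, 96), ("5ESS", 400, 64),
]

def select_trailers(demand, fleet):
    """Greedy pick: dispatch trailers of each technology until demand is met."""
    chosen, remaining = [], dict(demand)
    for trailer in fleet:
        tech, capacity, _ = trailer
        if remaining.get(tech, 0) > 0:
            chosen.append(trailer)
            remaining[tech] -= capacity
    shortfall = {t: r for t, r in remaining.items() if r > 0}
    if shortfall:
        raise RuntimeError("fleet cannot cover demand: %s" % shortfall)
    return chosen

def cabling_plan(n_circuits, trailers):
    """Assign every circuit a unique (trailer, port) pair, consumed in order,
    so construction crews can cable from a printed plan."""
    ports = [(idx, port)
             for idx, (_, _, n_ports) in enumerate(trailers)
             for port in range(n_ports)]
    if n_circuits > len(ports):
        raise RuntimeError("not enough bulkhead ports for the cabling plan")
    return {"ckt-%05d" % i: ports[i] for i in range(n_circuits)}

trailers = select_trailers(office_demand, fleet)
plan = cabling_plan(480, trailers)      # e.g., first 480 T3 interconnects
print(len(trailers), "trailers;", plan["ckt-00000"])

A real plan must also respect which technologies can face each other across a cable, path diversity, and the difference between DS3 and optical bulkheads, which is why the production application works from the office's full configuration profile.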
RECOVERY OPERATIONS TEAM
NDR has a permanent staff assigned to day-to-day operations (equipment, engineering, incident management, emergency communications, hazmat) and a roster of other AT&T employees who volunteer to work on the team during training exercises and deployments. A subset of the team is trained as hazardous materials technicians who can work in areas that have been contaminated by CBRN hazards (see hazmat sidebar).

All Operations Team members receive training that prepares them to build a recovery site with NDR's specialized equipment and to perform that task in challenging conditions. The roles of most of the team members evolve throughout a disaster response. In the early phase of the effort, they are assigned to construction crews as the equipment is brought onto a recovery site, grounded, and connected with power and communications cables. They then regroup into technology teams, turning up the equipment trailers and preparing the recovery complex to begin acting in place of the failed office.

Each of NDR's three components — the equipment, the engineering system, and the trained team — is essential to performing a rapid network office recovery. AT&T has invested more than half a billion dollars in the program since 1991, but that investment would mean little if the capability went untested. From its inception, field testing has been an integral part of the NDR program.
Figure 6. NDR exercise in Dearborn, Michigan, in May 2010.
TESTING ON PAVEMENT (NOT JUST ON PAPER)
The NDR team held its first recovery exercise in early 1993 near Atlanta, Georgia, and completed its 59th field exercise in October 2010 in Philadelphia, Pennsylvania. The NDR drills are used to validate and refine the recovery processes, to train team members, and to work with new recovery equipment as it comes into the trailer fleet. The exercises also give the NDR members opportunities to work with local emergency management agencies.

An NDR exercise is usually held on an open parking lot, but exercises have been held on city streets (Phoenix, Arizona), vacant urban land (St. Louis, Missouri), and open fields (Arlington, Virginia). And the drills have been held in a variety of weather conditions: 100°F heat (Denver, Colorado), 95°F heat plus humidity (Arlington, Virginia, and Tampa, Florida), and 24 inches of snow (Salt Lake City, Utah). Working through real-world variables keeps the team prepared for a response in any location at any time of the year (Fig. 6).

One or two days before an exercise, NDR team members escort the recovery equipment convoys from the warehouses to a staging area near the recovery location. Over the course of the next two to five days, an empty lot is transformed into a modern telecommunications network office, replicating the same steps that would be used in an actual response.

Throughout each drill, metrics and issues are logged by team leaders and by the incident management team (see Incident Management sidebar). A standards team reviews issues after each drill or deployment and documents their resolution. This quality cycle has been a key feature in the program's development — eliminating predictable problems, highlighting training needs, and fine-tuning the recovery processes.
PUTTING THE PUZZLE TOGETHER
When the decision is made to deploy equipment, the trailers are powered down and prepared for travel while the semi-tractors are dispatched to the warehouse(s).
(For a large central office recovery, equipment could be deployed from two or three warehouses, putting thirty or forty trailers on the road.) The warehouse teams escort the equipment to the recovery site, protecting AT&T's investment while it is on the highway. Once on site, the escort crews take on roles on the recovery operations team.

An ideal recovery location would be adjacent to a failed office because it would allow the easiest access to the fiber-optic cables that served that office — the recovery complex will eventually be spliced into those cables. But local conditions may prohibit that convenience, and the recovery site may be some distance from the original office. (NDR's World Trade Center recovery site was established in New Jersey in September 2001, directly across the Hudson River from lower Manhattan; the trailer complex was spliced into the fiber-optic ring that had served the destroyed office.)

Once a suitable recovery site is secured, the NDR operations team begins its phase of the recovery process. The trailers are brought onto the site following an engineered plan that ensures all of the equipment can be joined together with both communications and power cables. The construction phase — trailer positioning, leveling, grounding, and cabling — can last several shifts, with sixty or more NDR members working in small teams. As cabling is completed, the trailers are powered up and NDR SMEs turn the equipment on — booting up the recovery complex.

After the equipment comes online, it is configured to take on as many of the capabilities of the lost office as possible. The recovery site will first be used to establish through-service on the network, and then services providing access to the community will be added. In the United States, telecommunications carriers must follow the recovery priorities set by the federal government's Telecommunications Service Priority program. As conditions in the impacted city normalize and the local AT&T workforce is prepared to return to work, responsibility for the recovery site is transferred to the local teams. The NDR equipment remains in place until the failed office's services are restored in permanent buildings.
NDR DEPLOYMENTS: SEPTEMBER 11, 2001
The Network Disaster Recovery team's largest deployment was in response to the World Trade Center disaster in September 2001. A small local network switching office in a building adjacent to the WTC was destroyed when the towers collapsed. NDR was deployed to support the recovery of that office's services and to provide emergency communications for the relief work in lower Manhattan.

Equipment began arriving at a staging location in New Jersey at 6:00 a.m. on September 12, 2001. A construction lot in Jersey City, across the Hudson River from the WTC site, was secured as the technology recovery site that afternoon. The team and its equipment were escorted to the site, and the initial site setup was completed shortly after midnight on September 13 (Fig. 7).
Figure 7. NDR WTC recovery site in Jersey City, New Jersey, in September 2001.
Predictable Management in Unpredictable Settings
NDR manages its events and exercises with a modified version of the Incident Management System (IMS) that is used by government and private sector emergency responders in the United States and around the globe. NDR's model, in use since 2000, was modified for telecommunications needs and to work with the team's recovery practices. IMS is a role-based management system that scales in size to match the complexity and risk level of the response. It allows the NDR team to work effectively within AT&T's larger incident management system and to work easily with other responders in a disaster area. The IMS model places an emphasis on protecting responders' safety and on personnel accountability — a critical element in high-risk areas. Information and assignments are tracked with a web-based incident management application that allows on-site and remote command team members to see and update real-time information.
Sidebar 1. Incident management.

The site was joined to the same fiber-optic ring that had served the destroyed office and began supporting through-service, in place of the lost office, on September 15. The site remained active until October 24, 2001, after permanent network offices in the area took over the services of the WTC office.

An ECV was deployed to the NYPD's command center in lower Manhattan on September 12 to provide emergency communications. The ECV was moved to the North Cove Yacht Harbor, adjacent to the WTC disaster site, on September 21. It provided free telephone service for the relief workers until October 3, 2001.
NDR DEPLOYMENTS: GALVESTON ISLAND, SEPTEMBER 2008
NDR's deployment to Galveston Island after Hurricane Ike's landfall on September 13, 2008, was the team's first response representing the new, larger AT&T (including the networks of the former AT&T, SBC, BellSouth, and Cingular).
While it was not a large activation like the 9/11 response seven years earlier, it did test and demonstrate the team's added responsibilities and capabilities (Fig. 8).

Team members arrived at Galveston's UTMB hospital on September 14 and established cell service with a satellite COLT. The COLT remained in use until September 20, when the permanent cell site on top of the hospital was repaired and brought back on the air. A second satellite COLT was deployed on September 14 to Ball High School in Galveston to provide cellular service for the Texas Department of Public Safety Emergency Operations Center. It was moved to a DPS checkpoint on I-45, north of Galveston Island, on September 21, and remained active until October 1, 2008.

The team set up a command location at an AT&T logistics staging compound in Pasadena, Texas, with five personnel, a small command center, and two ECVs. They supported the work being done by other AT&T responders and repair crews working in Ike's landfall area, helping to increase the speed and capabilities of the local response.
Figure 8. NDR Satellite COLT at UTMB Hospital on Galveston Island, Texas, September 2008.
Special Operations (Hazardous Materials Response)
NDR formed its Special Operations team in 2002 to provide AT&T with the ability to maintain the telecommunications and support equipment in an office that has been contaminated by chemical, biological, radiological, or nuclear (CBRN) hazards. The team includes over twenty-five NDR members who have completed the training to become hazardous materials technicians and who receive ongoing training each year to maintain their skills and certifications. Twelve members are Telecom Hazmat Specialists (North Carolina Occupational Safety and Health Education and Research Center) and were the first responders to receive this industry certification.

AT&T would activate its NDR Special Operations team to assess damage to an office that may have been compromised by hazardous materials (after it was released by governing authorities). The team would perform the initial reconnaissance of the office and then perform maintenance tasks until the contamination was contained or otherwise remediated. The team could also be called upon to salvage critical network infrastructure from a contaminated office. The team is equipped with self-contained breathing apparatus (SCBA) and a variety of protective suits, including Level A suits that fully encapsulate the responder.

NDR joins with AT&T's Environmental Health and Safety (EH&S) organization to assure that the team members are well trained and properly protected during building entries. The EH&S members act as the team's safety officers, write the event's health and safety plan, and are responsible for the ongoing hazard assessment (determining the level of risk and the level of protection the team will need to use in the contaminated hot zone).
Sidebar 2. Hazmat.

In total, the team managed the deployment of five satellite COLTs and one satellite COW from Sugar Land to Galveston Island. The team also provided 24 5.8 GHz microwave radio hops (links) to restore communications from isolated cell sites on Galveston Island and the Bolivar Peninsula to the east.

An additional NDR team was deployed on September 14 to recover the services of a small local network office on Galveston Island that had been inundated by Ike's tidal surge. A technology trailer, a command center, an ECV, and a tool/supply trailer arrived at the recovery site (adjacent to the network office) on September 16. Local technicians and network SMEs restored the damaged office's through-service early on September 17. NDR turned the recovery site over to the local workforce on September 18, 2008; the local AT&T staff managed the equipment until the office was fully restored. The team's Hurricane Ike response included elements from all parts of the new AT&T network — local, wireless, and core network services.
NDR WITHIN AT&T'S NETWORK EMERGENCY MANAGEMENT PLAN
NDR is only one part of AT&T's overall Network Emergency Management (NEM) plan. The NEM structure includes the AT&T Global Network Operations Center (GNOC), regional emergency operations centers (EOCs), local response centers (LRCs), other first-in teams (damage assessment, power/HVAC, etc.), and logistical, medical, and safety support organizations. All of the organizations report upward to the GNOC. The GNOC's leadership team is responsible for setting the strategic direction for all of the NEM organizations' activities during a network disaster response.

During a large deployment, NDR would work closely with LRCs and EOCs to obtain logistical support and access assistance from government agencies as the recovery team and its equipment move into a disaster area. For example, special access passes may be required to enter an area under the control of law enforcement agencies. The AT&T EOC staff would be in contact with their peers in the government EOCs activated for the response and would be aware of special conditions that could affect the AT&T responders.
CONCLUSION
AT&T's Network Disaster Recovery program provides the company with a predictable and proven way to respond to the catastrophic loss of a large network office. The capability is built upon a unique inventory of network technology trailers, custom recovery engineering software applications, and a well-practiced team of employees trained as telecom disaster responders. The mobilized equipment allows for rapid deployments that are tailored to a particular event's needs. NDR is one of the field teams in AT&T's overall Network Emergency Management program.
ACKNOWLEDGMENTS
The author would like to thank the AT&T NDR process managers and GNOC program leaders who participated in the review of this article: Tim Davis, Bob Desiato, Mark Francis, Sandy Lane, Kevan Parker, Steve Poupos, Joe Starnes, Ray Supple, and Gary Watko. The author would also like to thank Chi-Ming Chen, AT&T Labs, for his support of NDR and this article.
BIOGRAPHY
KELLY T. MORRISON ([email protected]) obtained his B.A. (English/history) from Westminster College in Salt Lake City, Utah, in 1983 and pursued graduate coursework in technical journalism at Colorado State University in Fort Collins. He joined AT&T in 1990 and worked in network documentation groups until becoming a full-time member of the AT&T Network Disaster Recovery program in 1998. He serves as public information officer on NDR's incident command team and supports the NDR and Global Network Operations Center web sites. He was deployed for and documented NDR's responses to the F5 tornado in Moore, Oklahoma, in 1999; the 9/11 attack in New York City; Louisiana/Mississippi following Hurricane Katrina's landfall in 2005; and Galveston Island, Texas, following Hurricane Ike's landfall in 2008.
AT&T’s Network Disaster Recovery program provides the company with a predictable and proven way to respond to the catastrophic loss of a large network office. The mobilized equipment allows for rapid deployments that are tailored to a particular event’s needs.
STAY CONNECTED THE WAY YOU LIKE IT!
facebook.com/IEEEComSoc twitter.com/comsoc linkedin.com/in/comsoc
Simply scan n the QR code with your smartp tphone to get connected.
youtube.com/IEEEComSoc facebook.com/IEEEWCET facebook.com/ComSocTraining
IENYCM2822.indd 1
IEEE Communications Magazine • January 2011
@ComSoc 12/16/10 6:02:13 PM
35
NETWORK DISASTER RECOVERY
Disasters Will Happen — Are You Ready?
J. Chris Oberg, Verizon Wireless; Andrew G. Whitt and Robert M. Mills, Verizon VSO
ABSTRACT
This article describes the important considerations and critical steps necessary to develop and maintain a credible disaster response capability, based on the experience of a major telecommunication services provider.
INTRODUCTION
Disasters can strike anywhere at any time. Disasters may strike your company or the community you serve. Disasters may be man-made, such as the 9/11 attacks, the Madrid transit bombings, or a nuclear-biological-chemical (NBC) attack like the one on the Tokyo subway. Disasters may be technological, such as the Northeast Power Grid Blackout, the Minneapolis-St. Paul interstate bridge collapse, or Three Mile Island. Disasters may be natural phenomena, such as hurricanes (Katrina, Ike, etc.), earthquakes (Haiti, Chile, etc.), tsunamis (Malaysia, Samoa, etc.), tornadoes (Kansas, Mississippi, etc.), floods (Tennessee, New Hampshire, etc.), wildfires or forest fires (southern France, Southern California, etc.), or even pandemics (the 1918–19 influenza, avian flu, swine flu, etc.). Disasters may affect hundreds of thousands of people, multiple communities, entire countries, or just one company.
IS YOUR COMPANY READY?
DISASTER PLANNING
There are numerous considerations during disaster planning; a few key elements that must be in your plan include Risk Analysis, Business Impact Analysis, Strategy and Plan Development, and Exercising the Plan.

Risk Analysis — During the assessment of your facilities, consider all risks, internal and external, to each facility itself, to the systems or services it houses, and to the people who work there. The risks to each facility may come from any of the disaster scenarios mentioned above, from the nature of the building itself, the building systems, its location, its neighbors, from the equipment it houses, or simply because humans occupy it. The risks to the systems or services can arise from the facility itself, from physical security threats, from logical security threats, from connectivity issues, from hardware or software issues, or simply because humans have access to them. Risks to the people can arise from any disaster or threat scenario, from injury to fatality, including loss of key personnel needed for business continuity and/or response and recovery efforts.
Be careful to avoid tunnel vision that focuses only on the obvious risks; take an all-hazards approach.

There are four ways to manage risk: you can accept it, transfer it, eliminate it, or mitigate it. Accepting risk means acknowledging the risk and deciding that, because it is sufficiently unlikely to occur or to have a significant impact, you need do nothing about it. Transferring risk generally means buying insurance to protect yourself against the risk occurring at all, or occurring with sufficient impact to hurt you. Eliminating risk means taking whatever steps are necessary to prevent the threat from materializing; actually eliminating a risk is highly unlikely, and comes with significant expense. Mitigating risk means acknowledging that it has some likelihood of occurring and deciding that there are acceptable measures to reduce that likelihood, or to reduce the impact to an acceptable level. This is where you prepare and plan for potential disaster scenarios. Mitigation plans are where backup power plants and generators come from, where diverse circuit routing and network resiliency come from, where geographic or component redundancy comes from, and where backup tapes or servers come from. Mitigation planning gives rise to hot sites, off-site archives, cross-training of people, and exercising of your response and recovery capabilities. Mitigation leads to Recovery Time Objectives (RTOs), Recovery Point Objectives (RPOs), Service Level Agreements (SLAs), and disaster plans. Mitigation is the necessity behind Cells On Wheels (COWs), Cells On Light Trucks (COLTs), Generators On A Truck (GOATs), and Switches On Wheels (SOWs).

Business Impact Analysis — Mitigation comes with significant costs in time, money, and human effort. These costs must be weighed against the potential impact to your business. A good Business Impact Analysis (BIA) must consider all functions your business units perform, and grade them for criticality across several impact categories; see Table 1 for an example. The BIA becomes a key tool in gaining your management's acknowledgment of the business risks, and their concurrence and support in mitigation efforts. The resulting criticality ranking will point out the functions that warrant the most attention and are most deserving of mitigation efforts, especially when funding is tight. It will also point out those functions that are in need of disaster recovery plans, and of exercises to test those plans.
Criticality level: Ranking is for management use to rate functions and focus resources, based on the criticality of the function. Criticality is determined based on the level of impact in the following areas: Financial, Stakeholder, Customer, Regulatory, and Dependencies.

High — Financial impact of disruption: > $$$/day. Stakeholder impact: high media attention, high brand impact. Customer impact: impacts customer voice/data service. Legal/regulatory impact: significant regulatory or contractual impact. Dependency impact: numerous critical functions affected. Vulnerability level: single point of failure, limited recovery capabilities. Time sensitivity: sensitive to interruptions; function can't afford any down time.

Medium — Financial impact of disruption: > $$/day. Stakeholder impact: medium media attention, medium brand impact. Customer impact: impacts customer, for other than voice/data service. Legal/regulatory impact: some regulatory or contractual impact. Dependency impact: some functions affected. Vulnerability level: recovery sites available, but not immediately. Time sensitivity: sensitive to interruptions; function may be able to afford up to 48 hours of down time.

Low — Financial impact of disruption: < $/day. Stakeholder impact: insignificant media attention or brand impact. Customer impact: insignificant customer impact. Legal/regulatory impact: insignificant regulatory or contractual impact. Dependency impact: minimal impact to other functions. Vulnerability level: auto-failover, geographically diverse, redundant capabilities. Time sensitivity: less sensitive to interruptions; function can afford more than 48 hours of down time.

Tier 1 functions typically scored High in at least one of these areas. Criticality tier levels for recovery: 1 = mission critical, 2 = important, 3 = less critical.

Table 1. Business impact analysis.
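One plausible reading of the tiering rule in Table 1's footnote (a function scoring High in any impact area is Tier 1, mission critical) can be written down directly. The following is an illustrative sketch only, not an actual BIA tool; the category names simply mirror Table 1's columns.

```python
# Illustrative criticality tiering based on Table 1's footnote.
# Scoring scheme and category names are assumptions for this sketch.
CATEGORIES = ["financial", "stakeholder", "customer",
              "regulatory", "dependency", "vulnerability", "time"]

def criticality_tier(scores):
    """scores: dict mapping category -> 'High' | 'Medium' | 'Low'."""
    levels = [scores.get(c, "Low") for c in CATEGORIES]
    if "High" in levels:
        return 1    # mission critical
    if "Medium" in levels:
        return 2    # important
    return 3        # less critical

# Example: a billing function with high financial and regulatory impact.
print(criticality_tier({"financial": "High", "regulatory": "High",
                        "customer": "Medium"}))    # -> 1
```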
Strategy and Plan Development — Armed with the Risk Analysis and the BIA, and in conjunction with management's direction and encouragement, you can develop specific strategies for each critical function. Writing disaster plans can become a line of business by itself, but few companies today have the luxury of allocating high-value talent to a dedicated disaster planning effort. More likely, a company will have a small number of individuals responsible for the business continuity portfolio, overseeing the company's preparedness and response program. They, in turn, will rely on the key leaders who own and operate the critical functions to perform the risk analyses and initial business impact analyses, and to offer the strategies that can be executed effectively to respond and recover. Senior management will complete the BIAs and decide which strategy to execute. Senior management must also determine who will lead the response and recovery effort.

The military have a saying: "The best plan will not survive first contact with the enemy." That being said, you MUST have a plan. You need to clearly define your goals and state your strategy for achieving them. At the outset of a disaster situation, you will be facing chaos. You will be getting too much information, or conflicting information, or no information at all.
You may have damaged facilities, injured employees, or failed systems. A good plan is the cornerstone for everything that follows, a rallying point for your employees, a level set from which response and recovery begin.

So how far down the road do you want to go with developing your plan? A large company will likely have many critical business functions. Do you write a plan for each function and for how to deal with each risk that it faces? You could. Or you might look at all hazards, all of your risks, and all of your critical functions, and decide that they all come down to a handful of circumstances and corresponding recovery strategies. These might include loss of a key facility housing critical functions, loss of the systems supporting critical functions, loss of a third party that provides critical functions, loss of key personnel, or an external disaster that doesn't directly affect you but does significantly impact your community or customers.

Your plan must, at a minimum, include guidelines for plan activation, an immediate incident management structure for the response phase, and the initial steps required to identify which strategy will be undertaken. You will need a tactical checklist to aid in the execution of your plan when the event occurs. Your plan should include call-out lists, notification lists, escalation lists, resource lists, location information, roles and responsibilities, vendor information, etc. Your plan should identify transition points between the response phase, the recovery phase, and the return to business as usual.
Your plan might go further, documenting everything from choices on how to implement a given strategy to procedures for recovering a specific system or service, but we don't recommend that.

One important consideration, particularly if you are a technology company with technicians who can repair systems and recover services, is when to invoke your disaster response plan. At what point does a problem grow in scope to become a disaster that would activate your response and recovery plan? If it is a large-scale disaster with significant damage, with many systems or services down, that determination would probably be easy to make. But if it is a smaller situation, particularly one that only directly impacts you, when do you stop your technicians from trying to fix whatever it is and turn to executing your emergency plan? At what point does replacement become more cost effective than continued efforts to repair? That is something your business impact analysis can help you understand, and exercising your response plan can help validate that determination. This is a critical decision your incident commander or incident management team must make.

Another important consideration is maintaining your plan. Your plan should be as tight and concise as you can make it. The more detailed a plan is, the greater the depth it goes into, and the more maintenance it will require, especially in technology companies. We all know how fast our technology grows and the speed at which it evolves. If your plan deals with details of the technology your critical functions use, you will have to establish a process for regular review and update of the plan to keep up with the changes. Do not, however, succumb to the temptation of skipping detailed procedures just because your folks know how to install, operate, and maintain those systems. In a disaster situation, it may not be your regular employees or subject matter experts who are called upon to execute the response and recovery. Those detailed procedures almost certainly exist, are maintained for normal use, and can be included by reference without being part of your plan.

Exercising your Plan — If you don't plan to exercise your plan, don't even write it! If you don't exercise it, how will you know whether it includes what you'll need in a disaster situation? Is it up to date? Does it provide adequate direction to your employees and their managers? Does it go into too much detail or not enough? If the most competent and capable people are not available when a disaster occurs, will your company be able to respond and recover? Do your employees have the resources they will need? Does your plan help you to achieve your objectives for response and recovery?

An exercise scenario should be realistic. You can use examples from actual disasters to add a reality factor; there are plenty to choose from in newspapers and trade publications. The key results from an exercise are what you learn from it. What worked? What didn't? What could be done better? Did we spend too much time and energy on one thing and not enough on another?
Did you have the required resources for the tasks at hand? If you could not achieve a particular objective within the desired time, do you need to reconsider that objective, or the resources applied to it? Do you need to reconsider the recovery strategy for that particular function? Do you need to improve training?

Exercises can take a number of forms. You should do a tabletop exercise for familiarization with the plan and the processes for the response and recovery phases. You should then progress to a walk-through exercise for hands-on familiarization and training. And then you should hold a no-notice exercise, with executive participation, to verify that all the pieces fit and work together, recording how close you come to achieving your objectives. If you have new personnel, personnel borrowed from another department or location, or equipment that is only used for emergency response, you need to train and exercise those resources, too.

If you have backup media for your critical applications and data, when was the last time you tried bringing a system up using those backups? If you have a hot site, when was the last time you recovered a critical function there? Has it been since you did the rev 4.2 update? If you have automatic fail-over from one location to another or from one machine to another, when was the last time you induced a fault condition to see whether the fail-over occurred as expected? Such a drill can be scripted; see the sketch below. Completely rebuilding hardware systems is easy compared to the thousands of man-hours it would take to recreate the applications and data that are critical to your business operations.
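As a deliberately generic illustration of the fail-over drill just described, a script might induce a fault on the primary, then poll the client-facing service endpoint until the standby answers, and compare the measured switchover time against the RTO. Every hostname, command, service name, and threshold below is hypothetical.

```python
# Hypothetical fail-over drill: stop the primary, then measure how long
# the service takes to answer again from the standby. Names are illustrative.
import socket
import subprocess
import time

SERVICE = ("service.example.internal", 443)   # address clients actually use
RTO_SECONDS = 300                             # recovery time objective under test

def service_up(addr, timeout=2):
    """Return True if a TCP connection to the service succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def run_drill():
    assert service_up(SERVICE), "service must be healthy before the drill"
    # Induce the fault on the (hypothetical) primary host.
    subprocess.run(["ssh", "primary.example.internal",
                    "systemctl stop myservice"], check=True)
    start = time.monotonic()
    while not service_up(SERVICE):            # wait for the standby to answer
        time.sleep(5)
    elapsed = time.monotonic() - start
    verdict = "within" if elapsed <= RTO_SECONDS else "EXCEEDS"
    print(f"fail-over completed in {elapsed:.0f}s ({verdict} RTO)")

if __name__ == "__main__":
    run_drill()
```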
COMMUNICATIONS ARE ALWAYS CRITICAL
An absolute requirement of any sort of disaster response or recovery plan is communications. You need to be able to communicate with your people to assess their condition and the scope of the situation you face. You need to be able to communicate with your suppliers and your customers, with regulators and government, and with your stakeholders. You need to plan for communicating with the media, decide what will be said, and identify who will do the talking. If nothing else, you need a communications plan. You need to document it, equip for it, and practice it.

Everyone, top to bottom in your organization, needs to understand it, understand what is expected of them, understand how to do what is expected of them, and know what their options are if the primary communications they normally rely on cannot be used for whatever reason. Further, you need to know how to communicate with people and organizations above you and below you in your own company, and in your supply chains. Further still, you need to be able to do these things quickly. Have those communications paths immediately available; you won't have time to hunt for the pieces and instructions and assemble them on the fly. As network service providers, we have all those needs ourselves, and we have our customers looking to us to satisfy those needs for them as well (Table 2).
EMERGENCY OPERATIONS CENTER
There must be some central point where information comes together to be analyzed and presented to those responsible for directing the response and recovery efforts: an Emergency Operations Center (EOC). An EOC function can exist at several levels in a company, and may exist at several levels at the same time in a large company. Where precisely it resides will depend on the scope and severity of the situation. An event affecting a single region might have an EOC operating at that regional level, while for an event affecting multiple regions the EOC would likely be at the headquarters (HQ) level. During the 2003 Northeast Power Grid Blackout, our eastern wireless Network Operations Center (NOC) took up the EOC role for the regions affected by the blackout; the western NOC continued normal monitoring of the western US and took on monitoring of those regions in the eastern US not affected by the blackout.

Your decision makers should be at the EOC. The EOC needs to be scalable to meet the needs of the situation. It may need additional rooms for break-out meetings or for functional area teams. A key role in EOC operations is a dedicated scribe or recorder to keep a log of activities and to keep track of information or requests coming in and going out. Status reports must be prepared and distributed both up and down the chain of command. Information must be given to other departments and to other stakeholders, both within the company and externally. Another key role for the EOC is fielding questions from many quarters so that the people on the ground dealing with the situation are not being interrupted. Yet another, at the end of the recovery, is to help pull together a comprehensive report of what happened, how it was responded to, the results, and the lessons learned from the entire process.

An EOC will benefit from a tool for collating and presenting information, maintaining current status information, tracking requests and directives, generating reports, and managing trouble tickets. There are a number of such tools available to choose from; several of our units use WebEOC.

Conference calling capabilities are a must. An EOC, whether at HQ or at an area or regional level, should have its own conferencing capability. Our NOCs maintain multiple dedicated conference bridges for normal use and for EOC emergency use. In any given response situation, there will be several bridges in use simultaneously. A typical setup for a large regional-scale response effort might see one bridge in use for the field technicians and their managers, another for the vendors supporting the response, another for the switch techs and their managers, and another set aside for management reporting and updates. Either the NOC or the controlling EOC will have the conference bridges up on speakers to monitor activity and jump in if necessary. Having your own conference bridges is strongly recommended, both for privacy and because commercially available bridges can become saturated in a large-scale event.
Traditional wireline (PSTN) telephone service; wireless telephone service; cable telephony; satellite telephony; two-way radio; pagers; VoIP; broadband Internet (wired or wireless); broadcast TV and radio; data; email; instant messaging; facsimile; video; telemetry; SCADA (supervisory control and data acquisition) systems; couriers.

Table 2. How to communicate?
Toll-free numbers are fine for business-as-usual activities. If there is a vendor (or anyone or anything else) that you must be able to reach in an emergency, make certain that you have the real 10-digit telephone number that will get you to them. Toll-free calling may not work in an emergency because of congestion in the networks or damage to the networks. Similarly, if you have toll-free numbers for people to reach you, make sure that the people you want to hear from have a real 10-digit number for you.

International dialing may or may not be something you do routinely, and your phone system or phone service may not be programmed to allow it. If you plan to use satellite phones to bypass the terrestrial network and reach an isolated location, you may need international dialing to call those phones. If you get tech support from a vendor overseas, but you normally call a gateway number in the US that routes you to them, you may need international dialing to reach them directly. All these things you need to know, plan for, equip for, and practice before something happens.
MONEY MATTERS
If your disaster includes widespread power and telephone service outages, you (and your employees) will probably find out pretty quickly that the banks are not working, nor are their ATMs. If you use company credit cards for small purchases, you are likely to find that credit card validation systems at retail locations are not working; nor, for that matter, are the retailers' cash register systems likely to be working. Where, and how, are you going to obtain and pay for fuel for your technicians' vehicles? If the outages persist, how are you going to pay your employees? How are they going to pay for the things their families need? Don't lose sight of that issue; you won't get far without your employees.

You will want to keep track of your disaster response expenditures. You may find yourself paying for repairs to your buildings, buying a lot of fuel for generators, buying replacement hardware, ordering new circuits, hiring clean-up workers, paying a lot of overtime to your employees, etc. You may have to arrange transport for employees with copies of your applications and data to a distant hot site, and you may have to pick up their expenses at that location for quite some time. You should ask your accounting people to establish particular accounts for all this well in advance of any disaster events, and work out with them how that will actually be done in the throes of a disaster response.

Talk to your insurers. Insurance is all about risk, and good insurance companies will have risk management programs that can help with risk analysis, business impact analysis, and recovery. They don't want a disaster to happen to you any more than you do, and will want to see you come back quickly and strongly. If you do suffer significant damage, make sure to do accurate damage assessments (pictures or video will be very helpful) and to track repair or replacement costs, labor costs, etc. Even big firms with high deductibles in their coverage can find themselves overrunning those deductibles in a hurry in a major disaster.
Agriculture/food; banking and finance; chemical; commercial facilities; critical manufacturing; dams; defense industrial base; energy (except nuclear power); transportation systems; emergency services; government facilities; healthcare and public health; information technology; national monuments and icons; nuclear reactors, materials, and waste; telecommunications; postal and shipping; water.

Table 3. Critical infrastructures.
PERSONNEL MATTERS
No response or recovery effort is going to succeed without people who can apply the necessary skills, resources, and energies to solving the problems. Your people are themselves a key resource, and must be cared for physically and psychologically. From the very beginning of a major disaster situation, you will need to account for your people in the affected area, determine their locations and the condition of those locations, and determine their condition and that of those around them, including their families. If employees are worried about their own health and well-being, or that of their families, they are not going to be entirely effective in helping with your response efforts. In Haiti, we saw the local wireless companies setting up tent camps on their own properties to provide shelter, food, water, and security for their employees' families so that their employees could go out and work on recovering their networks.
their employees’ families so that their employees could go out and work on recovering their networks. Early in a disaster situation, management will need to assess the situation, the damages incurred, and the resources available for response and recovery. The personnel situation needs to be part of the consideration. Do you have the right people available to respond with, and are those people actually available? How long is the response going to take? Do you need reinforcements in any particular skills or areas? Where do those reinforcements come from? How do they get to where they’re needed? How do you care for their needs once they get there? Can you work around the clock or only during daylight hours? How do you care for anyone who may be injured in the effort? Are you bringing in vendor personnel to assist? Answer all the same questions for them, too. Not just the workers, but managers too. Not just the folks out in the field, but also the supporting personnel in the engineering offices and the operations centers. You will need to make it clear to all concerned from the outset what your expectations with regard to personnel issues are. Your employees need to know what you expect of them, and their managers, in terms of care for their own safety and security, and job performance. You will get their best efforts if you and your managers make your best efforts to watch out for them.
THE ROLE OF GOVERNMENT
In any major disaster, government entities at the federal, state, and/or local levels will play an important role. Your response leaders should be trained in the National Incident Management System (NIMS) so that they can understand and effectively communicate with government emergency managers.

The US Government's National Response Framework (NRF) defines what it refers to as Critical Infrastructures (CIs), which are essential to the survival of the country (Table 3). Most of the key resources of the CIs belong to the private sector. Each of the CI sectors has an Information Sharing & Analysis Center (ISAC) where government and industry can come together to share information on preparedness and response issues. The NRF also describes Emergency Support Functions (ESFs), which must be maintained in order for the country to respond to and recover from a disaster (Table 4). The private sector is a key player in emergency response and recovery, and works alongside government in each of the ESFs. Links to information on the CIs, the ESFs, and the ISACs can all be found on the Department of Homeland Security (DHS) web site.

Communications is recognized in the NRF both as a critical infrastructure and as one of the 15 key emergency support functions. All of the other CIs and ESFs rely on communications in order to do their part in a disaster response and recovery.
ESF #1 – Transportation: Dept. of Transportation
ESF #2 – Communications: DHS (National Communications System)
ESF #3 – Public works and engineering: Dept. of Defense (Corps of Engineers)
ESF #4 – Firefighting: Dept. of Agriculture (U.S. Forest Service)
ESF #5 – Emergency management: DHS (FEMA)
ESF #6 – Mass care, emergency assistance, housing, and human services: DHS (FEMA)
ESF #7 – Logistics management and resource support: General Services Administration and DHS (FEMA)
ESF #8 – Public health and medical services: Dept. of Health & Human Services
ESF #9 – Search and rescue: DHS (FEMA)
ESF #10 – Oil and hazardous materials response: Environmental Protection Agency
ESF #11 – Agriculture and natural resources: Dept. of Agriculture
ESF #12 – Energy: Dept. of Energy
ESF #13 – Public safety and security: Dept. of Justice
ESF #14 – Long-term community recovery: DHS (FEMA)
ESF #15 – External affairs: DHS

Table 4. Emergency support functions (and their federal sponsor agencies).
Governments will have their own communications networks, but almost all of those rely upon systems and facilities provided by the private sector. To a large extent, the private sector also provides the interconnection and interoperability between those networks. The priority services that governments depend on for their communications in a disaster are largely provided by the private sector, and governments will rely heavily on the private sector to restore communications in the wake of a disaster.

In a disaster response, the private sector must depend on its own plans and its own resources. The only major things we ask the government for in an emergency are access, fuel, and security.

In an emergency, police (and perhaps military personnel) will control access to the affected area. If we are to assess any damage to our systems and services, we need to be able to get in and inspect them. There is not, in the US, a standard for access control that is recognized by law enforcement at all levels. Each US state is responsible for establishing those rules within its own boundaries, and even then authorities at the county or city level may take it upon themselves to establish their own criteria. Numerous schemes have been put forward by governments at different levels and by the private sector, but none is universally recognized. Yet we as communications providers nationwide must be able to get access, and are thus forced to work out recognition and access processes with each jurisdiction we serve.
Fuel is certainly a necessity at many levels of a response situation. Relief centers, hospitals, emergency operations centers, airports, helicopters, fire trucks, power crews, and so on all need fuel. All communications systems need power. Our core network systems typically run on –48 VDC, and we typically have sizable backup battery plants to support them. Those, in turn, are backed up with large generators that need large amounts of fuel. A major switching center will have one or more generators on the order of 1000 kVA, and fuel for 48–72 hours to support the systems and their cooling plants.

As the systems fan outward toward the edges of the network, you find less equipment, but more of it in more locations, and all of it requires power. Battery plants and generators, albeit smaller, will still be found, and again they will need fuel. The circuits connecting the edge of the network to the core route through equipment that needs power. And right at the edge, the customers' devices require power, whether for a network interface device or for the terminal equipment. Battery-powered laptops and wireless phones will need to be charged. Our technicians will be relying on their vehicles to get around to all these locations to repair or replace equipment to restore service, and they need fuel. Getting fuel to all these locations, as well as to all the other sectors that need it, becomes a major effort with which government can assist. If fuel supplies are not adequate to the demand, then government can step in to prioritize distribution and delivery.
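The 48–72 hour figure implies substantial fuel volumes. As a back-of-envelope check (the consumption rate below is a generic diesel-genset rule of thumb, not a Verizon figure), a 1000 kVA generator carrying a typical load needs on the order of 3000 gallons of diesel for 72 hours:

```python
# Back-of-envelope generator fuel sizing. The consumption rate is an
# assumed rule of thumb (~0.07 gal of diesel per kWh generated).
GAL_PER_KWH = 0.07

def fuel_needed_gallons(load_kw, hours):
    """Approximate diesel needed to carry an electrical load for `hours`."""
    return load_kw * hours * GAL_PER_KWH

# A 1000 kVA generator at an assumed 0.8 power factor, loaded to ~75%:
load_kw = 1000 * 0.8 * 0.75                  # = 600 kW
print(fuel_needed_gallons(load_kw, 72))      # ~3024 gallons for 72 hours
```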
Security becomes an ever greater concern as the impact of a disaster becomes broader and the duration of a state of emergency becomes longer. Security must be provided for medical supplies and personnel and the delivery of medical services, and for supplies of food and water and their delivery to where they're needed. Security must be provided to locations providing essential services, and to those engaged in delivering those services. In a major disaster situation, the whole of the affected population, and particularly all those involved in the response and recovery efforts, must have a sense of security so that they can do what needs to be done. This is clearly a key role for government.

There are some other things that government can do to help. The US Government, collectively, is a very big customer of the telecommunications industry. A couple of the tools we have developed for it have been made available for broader use in emergencies. These include Telecommunications Service Priority (TSP), the Government Emergency Telecommunications Service (GETS), and Wireless Priority Service (WPS). The government is currently pursuing initiatives aimed at extending those capabilities into next-generation networks; industry and government need to continue to work together to make that happen.

The government also provides a vast amount of information on emergency preparedness and response for private citizens, for businesses, and for government entities. The FEMA and DHS web sites include links to this type of information, and many other government agencies have disaster information on their web sites particular to their areas of responsibility.
CONCLUSION
Within the bounds of this article, we cannot itemize or describe all aspects of a disaster response or recovery plan. Our intent is to share our real-world experience and to raise the issues that are critical for you to consider as you develop your plan. A disaster can be compared to a "come as you are" party: when it begins, you can only bring whatever you have immediately at hand.
We'll conclude with the three components that you must have to succeed in a response and recovery situation:
• A well thought out and exercised recovery plan
• A well thought out and exercised communications plan
• Strong, competent, adaptable incident command leadership
BIOGRAPHIES
J. CHRIS OBERG ([email protected]) is a 20-year veteran of the U.S. Army Signal Corps and the White House Communications Agency, and a 20-year veteran of the wireless industry with Bell Atlantic Mobile and Verizon Wireless. He has served in installation, operations, maintenance, administration, compliance, and planning roles in both careers. He is currently responsible for network emergency preparedness, risk management, and physical security, and represents Verizon Wireless in various industry and government fora.

ANDREW G. WHITT has over 32 years of experience in the telecommunications industry, most of that time serving in a technical or operational support capacity. As head of Verizon Wireline's National Switching organization, his responsibilities include overall switching network reliability, service-impacting event management, and catastrophic outage response/restoration. His disaster recovery experience includes recovery from floods, tornadoes, hurricanes, the 9/11 terrorist attack, and various other large-scale outage events. He is the current chairman of the National Electronic Services Assistance Center (NESAC), a North American technical support leadership forum associated with the QuEST Forum.

ROBERT M. MILLS has held various engineering and operations positions in the telecommunications industry over 27 years. He began his career with New York Telephone/NYNEX in Buffalo, New York, as a network engineer, and then moved to MCI Communications in a network operations role. He rose through the ranks to director, and is now a member of Verizon. He held the position of Global Transport and Switching NOC director for over 10 years, responsible for one of the world's largest transport networks, spanning over 140 countries across the globe. He holds a B.S. degree in electronics engineering from the Ohio Institute of Technology and an M.S. degree in telecommunications and computing management from Polytechnic University, New York.
NETWORK DISASTER RECOVERY
Considerations and Suggestions on Improvement of Communication Network Disaster Countermeasures after the Wenchuan Earthquake
Yang Ran, China Academy of Telecom Research
ABSTRACT
By analyzing the damage the Wenchuan Earthquake caused to the communication network and the priorities of the communication restoration effort, this article argues that future emphasis should be placed on building emergency communication capability: providing priority service functions in the public switched telephone network and attaching greater importance to wireless communications. It also proposes priorities for post-disaster network planning and construction, as well as a study of future-oriented disaster countermeasures for networks and technologies.

An earthquake measuring 8 on the Richter scale struck Wenchuan County in the Aba Prefecture of Sichuan Province at 2:28:04 p.m. on May 12, 2008. The terrible catastrophe posed an all-round test of China's disaster management, a test not only of the emergency response capability of all government departments, but also of the ability of the whole society and ordinary citizens to deal with a catastrophe. It especially put the emergency and fast-response capability of the communication sector, the "lifeline," to a severe test.
RESPONSE OF THE COMMUNICATION SECTOR TO THE WENCHUAN EARTHQUAKE
Due to the earthquake, the eight counties in the worst-hit area suffered a complete outage of communication connections with the outside for some time, becoming an "isolated information island." The outage could be attributed to several failures:
• Destruction of the communication infrastructure: 3897 telecom offices were hit in Sichuan, Gansu, and Shaanxi Provinces. In addition, 28,714 mobile and PHS (personal handy-phone system, a kind of cordless telephone system) base stations were ruined, 28,765 sheath-kilometers of optical cable were damaged, and 142,078 telecom poles were broken during the strong earthquake and a number of subsequent aftershocks.
• Failure of timely power and fuel supply: Some undamaged networks also could not work because the earthquake caused power failures, and fuel could not be fed to the engines of the backup power systems due to traffic paralysis.
• Network congestion: After the earthquake, telecommunications traffic surged. According to statistics provided by China Mobile, the number of post-earthquake calls made by local Sichuan subscribers was 10 times that prior to the earthquake; long-distance telephone traffic to Sichuan from the rest of China was 5 to 6 times that of normal times; and the number of calls made by Beijing subscribers to Sichuan was 80 times that of normal times. No calls could be completed for some time due to network congestion.

The telecom sector conducted the earthquake relief effort in three phases. The first phase was from May 12 to May 16; the target was to restore public communications between the disaster-hit county seats and the outside. The second phase ended on May 22; the target was to guarantee communications at the county level while conducting urgent repair of the communication facilities at the township level, restoring communication for 109 towns and townships in the worst-hit areas. Then the third phase started: to restore public communications in the disaster-hit areas completely, and to begin the post-disaster reconstruction.

The communication sector of China developed three ways to approach the disaster-hit areas at all costs: where there was a road to the disaster-hit areas, they used emergency communication vehicles; where there was no road, they organized commando teams to travel on foot with emergency communications equipment; and where there was no ground access at all, they used army helicopters to airdrop personnel and equipment.
Technically, the communication sector implemented three schemes simultaneously, intermediate data rate (IDR) or very small aperture terminal (VSAT) satellite links, hand-held satellite phones, and optical cables, and worked day and night to restore communications as quickly as possible. These efforts proceeded despite various difficulties such as frequent aftershocks, traffic interruptions, and bad weather. Public communication between the disaster-hit county seats and the outside was restored on May 16, achieving the first-phase target of urgently establishing and guaranteeing command and emergency communications for the disaster relief.
The top priorities of the communication reconstruction were the Three Guarantees (guaranteeing the counties, guaranteeing the townships, and guaranteeing the disaster relief) and the Five Purposes, explained below. The guarantee of the counties was to increase the public communication capability of the county seats, expand capacity immediately, and deploy the network via diverse paths; the guarantee of the townships was to accelerate communication coverage of towns and townships and improve their overall resistance to destruction; the guarantee of the disaster relief was to provide communication guarantees for the priorities of the disaster relief work. Of the Five Purposes, the first was to provide the communication guarantee for troops conducting rescue operations in villages and households; the second was to provide the communication guarantee for the water resources departments to prevent and control secondary disasters in high-risk areas involving barrier lakes, dams, and the like; the third was to provide the communication guarantee for the health and agriculture departments to prevent the occurrence and spread of epidemics; the fourth was to provide services for the public according to the civil administration department's disaster victim relocation plan, to guarantee the rescue and transfer of disaster victims; and the fifth was to provide communication support and service to maintain social harmony and stability in the disaster-hit areas. The communication sector had already started reconstruction planning while carrying out the restoration of the communication network in the disaster-hit areas.
STRENGTHENING EMERGENCY COMMUNICATION CAPABILITIES
Emergency communication, as a special temporary communication mechanism, refers to the communication means and methods required to guarantee rescue, emergency aid, and other necessary communication through the comprehensive use of various communication resources in the event of a sudden natural or man-made emergency. The Wenchuan Earthquake has given us a deeper understanding of emergency communication. Drawing lessons from emergency communication experience in catastrophes around the world, we should strengthen the construction of emergency communication capabilities in the following three respects.
DEPLOYMENT OF PRIORITY SERVICE FUNCTIONS IN PSTN
It is the duty of the communication sector to guarantee communications for government departments, emergency command departments, and emergency rescue departments. In China, communication for the government and emergency departments is mainly provided via dedicated networks. However, given the advantages of the public network in terms of coverage and resources, in emergencies the government and emergency departments need operators to temporarily dispatch network resources and provide priority service to guarantee communication for important departments. As a matter of fact, in the US, UK, Canada, and Japan, the public network has long been the most important part of emergency and public security communication. Thanks to the excellent coverage and increasing security and reliability of the public network, its operators have provided priority communication for national security and emergency preparedness (NS/EP) since the 1990s, such as the US Government Emergency Telecommunications Service (GETS) and Wireless Priority Service (WPS). GETS offers cross-operator emergency access and priority treatment for NS/EP users in case of network congestion, while WPS preferentially provides spectrum resources so that NS/EP wireless calls obtain priority connections. In the 9/11 attacks, Hurricane Katrina, and the July 7, 2005 London bombings, these services played a key role in communication for government and emergency departments. China should likewise deploy priority service in the public communication network, to which government and emergency departments can subscribe, so that smooth communication for important departments is guaranteed by a prepared technical scheme in an emergency.
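To make the idea concrete, the sketch below shows one simple way a switch might give priority treatment during congestion: ordinary calls are blocked once utilization crosses a threshold, while registered priority subscribers can still seize reserved trunks. This is an illustrative model only, not the actual GETS/WPS implementation; the names and values (reserved_trunks, the example number, and so on) are assumptions.

```python
# Illustrative sketch of priority call admission under congestion.
# NOT the real GETS/WPS logic; names and thresholds are hypothetical.

class PriorityAdmission:
    def __init__(self, total_trunks: int, reserved_trunks: int, nsep_numbers: set):
        self.total_trunks = total_trunks        # all trunks on the route
        self.reserved_trunks = reserved_trunks  # trunks held back for NS/EP calls
        self.nsep_numbers = nsep_numbers        # subscribers with priority service
        self.busy = 0                           # trunks currently in use

    def try_admit(self, caller: str) -> bool:
        """Admit a call if capacity allows; NS/EP callers may use reserved trunks."""
        is_nsep = caller in self.nsep_numbers
        limit = self.total_trunks if is_nsep else self.total_trunks - self.reserved_trunks
        if self.busy < limit:
            self.busy += 1
            return True
        return False  # ordinary call blocked during congestion

    def release(self) -> None:
        self.busy = max(0, self.busy - 1)

switch = PriorityAdmission(total_trunks=100, reserved_trunks=10,
                           nsep_numbers={"+86-10-0000-0001"})  # hypothetical number
# Once 90 trunks are busy, only the registered NS/EP caller is still admitted.
```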
ATTACHING IMPORTANCE TO APPLICATIONS OF WIRELESS MEANS
Judging from the specific measures different countries take in emergencies, wireless communication has been the most important means of communicating during emergencies. Trunking communication, satellite communication, WiFi, and WiMAX have all played important roles in disaster relief, among which satellite communication is particularly convenient and rapid to deploy. Because satellite communication is characterized by wide coverage, rapid deployment, little reliance on the power system, and immunity from geographical conditions and ground disasters, it can guarantee smooth communications when primary communication networks are paralyzed. After the Wenchuan Earthquake, only satellite communication could function properly in some places because of blocked roads and bad weather. We must attach importance to satellite communication in dealing with emergencies, and support the idea of maintaining a satellite for a thousand days in order to use it for an hour; as China is geographically complicated, with vast territory, satellite communication
is especially indispensable. However, satellite communication is limited in that it is less reliable in narrow city streets and in rainy or snowy weather. The government and emergency response teams often need trunking systems to direct and organize work on site, and trunking functions such as group calling and call override cannot be offered by other networks. Currently, however, the trunking systems of the various emergency departments in China are independent, poorly interoperable, and operated in different ways, which will be a serious problem when coping with emergencies. As we know, in the wake of the September 11, 2001 terrorist attacks in the US, a serious problem arose from ineffective interoperation among the systems of different emergency teams (police, fire, coast guard, etc.); in 2004, a report released by the 9/11 Commission claimed that communication failures and insufficient coordination among the corresponding departments were significant contributors to the more than 2000 deaths in the 9/11 attacks. We should establish standards and build a platform at an early date to promote and guarantee interworking among different emergency trunking systems, so as not to suffer the same fate.
Wireless communication requires spectrum. During the earthquake, the national and provincial radio administration departments actively assisted the relevant departments in maintaining order over the airwaves and in the efficient use of the frequency spectrum. In the future, we will not only need a mechanism to quickly allocate and coordinate the necessary spectrum for the emergency departments, but also to reserve more resources for them in spectrum planning. The US has arranged more spectrum for emergency communications in the recent 700 MHz spectrum allotment.
MORE SCHEMES TO HANDLE HIGH TRAFFIC
After the earthquake, call demand was more than ten times that at normal times, leading to network congestion for some time. Telecom operators promptly took measures to ease the congestion, such as increasing network resources, adjusting network configuration, optimizing the service structure, and distributing telephone traffic through various means. In addition to allotting resources and distributing traffic while dealing with an emergency, some countries also adopt other means, such as half-rate coding and delaying SMS transmission at the peak. Half-rate coding is an effective way to double network capacity in case of a severe shortage of spectrum resources. In speech coding, the bit rate of full-rate coding is 13 kb/s, while the bit rate of half-rate coding is 6.5 kb/s; mobile operators can switch to half-rate coding when traffic suddenly rises to a peak. Half-rate coding has also been an effective means for foreign operators to cope with high telephone traffic in an emergency; for instance, mobile operators adopted such a scheme after the July 7, 2005 London bombings. We should likewise prepare contingency schemes involving various means to cope with telephone traffic congestion following an emergency.
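The capacity effect is easy to quantify: with the codec rates just cited, halving the per-call bit rate doubles the number of calls a fixed pool of radio capacity can carry. A minimal sketch, using the 13 kb/s and 6.5 kb/s figures from the text (the helper names are hypothetical):

```python
# Minimal sketch: capacity gain from switching full-rate (13 kb/s) speech
# coding to half-rate (6.5 kb/s). Helper names are hypothetical.

FULL_RATE_KBPS = 13.0
HALF_RATE_KBPS = 6.5

def voice_channels(pool_kbps: float, codec_kbps: float) -> int:
    """Number of simultaneous calls a fixed pool of radio capacity supports."""
    return int(pool_kbps // codec_kbps)

pool = FULL_RATE_KBPS * 100  # capacity that carries 100 full-rate calls
print(voice_channels(pool, FULL_RATE_KBPS))  # 100 calls at full rate
print(voice_channels(pool, HALF_RATE_KBPS))  # 200 calls at half rate (doubled)
```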
STUDYING FUTURE-ORIENTED DISASTER COUNTERMEASURES OF NETWORK AND TECHNOLOGY
As networks evolve toward their next generation, studies of future-oriented disaster countermeasures for networking technology have been initiated in Japan and Europe. The Ministry of Internal Affairs and Communications of Japan established a study team in February 2006 to study the future security technologies and measures necessary to realize a ubiquitously secure society, and has planned to design next-generation disaster countermeasures for communication networks with the following characteristics:
• Information exchange with disaster-hit personnel: promptly send warnings and messages to disaster-hit people on their mobile terminals.
• Backbone network: increase the resistance and availability of the network against disasters, and ensure security and reliability.
• Mobile communication: realize high-capacity transmission with the latest technology, and transmit images in the mobile environment.
• Disaster information collection: promptly send pictures obtained from helicopters, or information obtained by ground sensors, to the disaster control center so that it can act on the real disaster situation within a short time, even at night or in bad weather.
• Information processing and analysis: efficiently and automatically process and analyze the gathered information using new technologies such as ubiquitous network technologies, the 3D Geographic Information System (GIS), and the like.
They chose five core technologies as study priorities: broadband mobile communications systems for disaster sites; a dual terrestrial/satellite mobile phone system that enables reliable communications at disaster times; real-time image gathering by helicopters, airplanes, and observation satellites; high-precision observation of disaster conditions, together with high-precision observation and information analysis of abnormal weather phenomena; and improvement of the disaster resistance of disaster information transmission systems.
Developments in satellites, aircraft, and sensors have provided a good communication and surveillance platform, and some foreign study projects based on wireless communications and helicopters are well worth our attention and reference. In 2006 the EU funded a project titled the Mobile Autonomous Reactive Information System for Urgency Situations (MARIUS). The project aims to build an autonomous mini mobile network using helicopters, allowing rescue personnel to communicate via the helicopter. This technology is well suited to communication during the elimination of barrier-lake dangers after an earthquake: it can be deployed swiftly to guarantee connections among disaster relief personnel, and between relief personnel and the outside.
The US also initiated several study projects after the 9/11 attacks, covering software-defined radio (SDR), smart antennas, mesh networks, cognitive radio, and related technologies. In addition, it is very important to develop small, energy-saving emergency communication devices, since power efficiency and portability are critical during a disaster. After such a severe earthquake, China should also promptly initiate corresponding disaster prevention network and technology study projects at the national and industrial levels to cope with future disasters and crises. The good news is that China is considering strengthening research on wireless sensors, ad hoc networking, space communications, machine-to-machine (M2M) communication, and other new technologies for emergency communications, and then pushing to deploy emergency call processing capabilities in next-generation networks, satellite-based emergency calls, priority calls in the public switched telephone network (PSTN), and early warning systems.
CONCLUSION
An earthquake measuring 8 on the Richter scale struck Wenchuan County on May 12, 2008. Eight counties in the worst-hit area suffered a complete outage of communication with the outside world for some time and became an isolated information island.
We should enhance emergency communication capabilities by providing priority service functions in the PSTN, attaching importance to wireless communications, and preparing more schemes to handle high traffic. The Chinese government's consideration of strengthening research and development of new technologies for emergency communications will be very helpful in reaching this target.
BIOGRAPHY
YANG RAN ([email protected]) is the deputy chief engineer of the Teleinfo Institute, China Academy of Telecom Research (CATR). She received her B.E. degree in telecommunication engineering from Beijing University of Posts and Telecommunications in 1986. She joined CATR in 1986 and has engaged in research on emerging technologies, new services, and related regulatory and policy issues for more than 20 years. Now she is vice-chair of the Next Generation Network at CATR. She has recently done extensive research on emergency communications and cyber-security, and is a member of the Experts Group on Emergency Communications.
NETWORK DISASTER RECOVERY
Lack of Emergency Recovery Planning Is a Disaster Waiting to Happen Richard E. Krock, Alcatel-Lucent
ABSTRACT Disasters are rare but inevitable events. Because of the pivotal role communications plays in our private lives, business dealings, and national security, the ability of communications networks to quickly recover from disasters is imperative. This article will highlight the need for emergency preparedness, discuss several key actions needed to properly prepare for an emergency, and cite a recent example of industry’s efforts to anticipate and prepare for an emergency situation.
THE NEED FOR PREPARATION
THE NATURE OF DISASTERS
Disasters are, by definition, relatively rare events, and because of this there is a tendency to spend little time preparing for them, or worse, not to prepare for them at all. If the chance of something occurring is small, why not just hope it doesn't happen and simply deal with the consequences if it does? Such a philosophy works for some things, where the consequence will not have a major impact, but when the consequences of an event are significant, preparation is essential. An everyday example is automobile insurance. Many car insurance plans offer towing coverage; it is a convenience but hardly a necessity, since the expense of having a car towed does not put an undue financial burden on most people. Liability insurance, on the other hand, protects a person should their car cause serious harm to persons or property. Expenses associated with that type of occurrence could have a very significant impact on a person's financial well-being, and for this reason liability insurance is a must, and often required by law. In the communications business, a disaster usually has significant consequences. For the network operator, outages caused by disasters can mean a loss of revenue in the short term and quite possibly the loss of customers in the long term. For individual users of communications services, the impact can range from simple inconvenience all the way up to matters of life and death. For companies that depend on communications services to conduct their business, it can result in lost business, lost customers, and ultimately loss of the business.
Because governments generally depend on the communication services provided by private industry, loss of communications due to a disaster can limit a government's ability to provide services to its citizens, to coordinate services within the government, and to communicate between government branches. Loss of communications within the government may even pose a threat to national security should communications with the military be disrupted. The key role that communications plays in personal lives, commerce, and government makes the impact of a communications disaster far too significant to wait until disaster strikes to figure out how to deal with it.
WHY PREPARE FOR DISASTERS?
The obvious question is, "What is the value of preparation?" Why not wait until something happens, evaluate the situation, and then decide how to react? There is an old adage that says, "Practice makes perfect." That is what preparation is. In almost any field of endeavor where people seek to perform at their best, they practice first. In sports it is often said that practice cannot accurately simulate game conditions, but all good athletes continue to practice. Why? Because while practice can't simulate everything encountered in a game, it prepares people to react properly to the various things they may encounter. The same holds true for practicing for disasters. Yogi Berra, the New York Yankees catcher, is quoted as saying, "It's tough to make predictions, especially about the future." [1] No one can know what form the next disaster will take, but by practicing responses to a variety of disasters, individuals and organizations learn how to respond to whatever disaster occurs. General Dwight D. Eisenhower said, "In preparing for battle I have always found that plans are useless, but planning is indispensable." [2] The very act of planning is what enables an organization to quickly respond to a disaster and begin recovery. An unprepared organization stumbles through recovery and loses time determining how to react. The practiced organization quickly assesses the situation, and while its preparation may not have been for the exact situation that has occurred, it can quickly adapt that preparation to the specifics of the event at hand. It knows the steps to take and the contacts to make to begin the recovery effort.
With the stakes so high, no communications provider can afford to wait for a disaster to strike before formulating recovery plans. These plans have to be in place, practiced, and ready to go.
AVAILABILITY AND ROBUSTNESS OF ELECTRONIC COMMUNICATIONS INFRASTRUCTURES
THE ARECI STUDY
Realizing the vital role that communication plays in almost every facet of life, in 2006 the European Commission contracted Bell Laboratories to conduct a study on the "Availability and Robustness of Electronic Communications Infrastructures" [3]. This study analyzed the thoughts and opinions of over 150 key stakeholders from many European countries on topics related to reliability, security, and emergency preparedness. This information, gathered from workshops, personal interviews, and questionnaires, provides an insightful look at the state of Europe's communications networks. In-depth analysis of the data identified 100 key findings covering a wide variety of topics. These key findings led to 10 bold recommendations, six of which have applicability to disaster situations. In addition, 71 European Industry Best Practices were identified and confirmed [4]. Best Practices provide guidance from industry experts and, to be considered a Best Practice, must be implemented by at least one company. As opposed to standards or regulation, implementation of Best Practices is voluntary. This article highlights the two recommendations that specifically cover disaster recovery planning.
ARECI RECOMMENDATION 1: EMERGENCY PREPAREDNESS
The first ARECI Study recommendation calls for emergency exercises and pre-planning: "The Private Sector and Member State governments should jointly expand their use of emergency exercises and establish pre-arranged priority restoration procedures for critical service to better meet the challenges of inevitable emergency incidents" [5]. The ARECI study found that while almost all communications companies said they had a disaster recovery plan and practiced emergency preparedness, the completeness of the plans varied greatly, as did the amount of effort put into emergency preparedness. Some plans had great detail, were regularly updated, and were periodically exercised. Others consisted of nothing more than a vague idea, in the mind of the person interviewed, of whom to call in case of emergency; in essence, no plan at all. A key aspect of this recommendation also calls for having an established priority restoration scheme in place prior to a disaster. Having such a scheme saves valuable time during a disaster and restores service first to the parties that have been pre-determined to need it most. The recommendation also points out some of the consequences of not implementing a disaster recovery plan.
Delays in response result in lengthier outages and may result in more widespread outages. Unprepared personnel attempting to implement a recovery have a significantly greater chance of impeding restoration and may even extend the outage to previously unaffected areas. Relying on one or two specific individuals for emergency restoration can create a single point of failure; this becomes especially evident if those individuals are directly impacted by the disaster or unable to communicate because of it. Non-prioritized restoration may leave the most essential users out of service while well-intended but misguided effort is spent on restoring less critical customers. Each of these consequences can be prevented by proper disaster planning. Having an emergency plan is imperative, but owing to the nature of disasters, that plan, thankfully, is seldom used. How then can an organization know whether it has a good emergency plan? Yogi Berra, who also managed the New York Yankees, once said, "In theory there is no difference between theory and practice. In practice there is." [6] An emergency plan is just theory until it is actually put into practice. A disaster is a poor time to find out that an emergency plan has gaps or hasn't accounted for something. Emergency exercises are an opportunity to view the plan in practice, and to find and fill the gaps before the plan is needed in a real situation. Emergency exercises aren't free, and in times of budget pressure there is a tendency to eliminate this type of effort, especially if a disaster hasn't occurred recently. Having no emergency plan is an invitation to disaster; having an untested plan is only slightly better. It is no accident that the call for emergency preparedness is the first of the ten recommendations. When it comes to ensuring a robust and dependable communications infrastructure, nothing pays greater dividends than preparation.
ARECI RECOMMENDATION 3 — MUTUAL AID
The third ARECI Study recommendation calls for pre-established formal mutual aid agreements between communications providers: "The Private Sector should establish formal mutual aid agreements between industry stakeholders to enhance the robustness of Europe's networks by bringing to bear the full capabilities of the European communications community to respond to crises" [7]. John Donne wrote, "No man is an island entire of itself; every man is a piece of the continent; a part of the main …" [8]. The same could be said of communications companies. What happens to one communications company has a bearing on all those with which it connects, and in today's communications environment practically all networks are somehow connected. In times of crisis, the communications industry has an unmatched record of coming together and cooperating to restore service. Competition is put aside in the name of restoring service for the good of the people, enterprises, and governments that depend on it. While this is laudable, preparing for it before an incident occurs makes the joint response quicker and more efficient. The ARECI study found that while mutual aid existed, it was largely informal and dependent on personal contact between two people.
This arrangement may seem efficient, but it is subject to all the weaknesses associated with single points of failure. Should one of the persons be unavailable, or perhaps even caught in the disaster, the arrangement breaks down and critical time may be lost in re-establishing lines of communication. A far better solution is for network operators to establish formal lines of communication with other operators, to be used in times of crisis, that do not depend on one or two people. Sharing contact information for control centers or formal disaster recovery centers removes the single-point-of-failure vulnerability and formalizes the contact procedure. The nature of disasters, especially natural disasters, is that they are often geographically contained. Hurricanes, earthquakes, and floods hit a specific, albeit possibly widespread, area. Because of this, communications companies in the same area are all likely to be engaged in disaster recovery themselves and may not be in a position to assist one another. For this reason, companies should consider entering into mutual aid agreements with companies that do business outside their local area; these companies may not have been impacted by the event and may be in a better position to provide equipment or personnel to assist with the restoration. What, then, is a formal mutual aid agreement? Simply put, it is an expression of intent to work together, if both sides agree at the time, and it can spell out financial and legal arrangements established in a controlled negotiation rather than in the heat of the moment during a disaster. A key point is that a mutual aid agreement does not obligate either side to provide assistance or to accept it; it simply establishes the ground rules that will be used if both agree to invoke it. If the situation is such that one of the companies feels it cannot provide what is requested, it is under no obligation to do so. The National Coordinating Center for Telecommunications (NCC) has a mutual aid template that is used as the foundation for mutual aid agreements by many of the network operators in the US [9]. It identifies some of the types of aid that might be considered, including equipment, vehicles, network capacity, and personnel, and calls out many of the legal and financial considerations that might be addressed in a mutual aid agreement. Several follow-up ARECI workshops [10] have attempted to address the issue of mutual aid in Europe, which appears to be somewhat more complicated than in the US in that it may require crossing national boundaries and dealing with multiple equipment standards. Upcoming ARECI workshops will likely again attempt to deal with this issue. A documented, tested emergency plan, combined with one or more mutual aid agreements, offers the communications provider the best chance of withstanding the inevitable disaster, and of providing its customers and community with the reliable communications that are so imperative in times of crisis. During the past few years, network operators in the United States have experienced a number of well-known communications disaster situations, including Hurricanes Katrina and Rita, flooding in the Midwest, and wildfires in California.
In these, and in many other cases, disaster recovery plans and mutual aid agreements have demonstrated their value over and over again. There is a wealth of publicly available information on these disasters and industry's response to them, but we will now examine a disaster that didn't happen, to get a glimpse of industry emergency preparation in both the United States and Europe.
CASE STUDY: PANDEMIC PLANNING
In 2009 the possibility of a flu pandemic was a worldwide threat. A new strain of the H1N1 influenza virus, commonly referred to as swine flu, appeared in various countries [11]. From the communications industry perspective, there was concern with keeping employees healthy, and with the possibility of having large numbers of personnel in a particular area out sick, leaving their companies unable to perform installation and maintenance activities. There was also significant concern with the network's ability to handle the load that would be generated by many people working from home, either because they were sick or to avoid exposure to the illness in the workplace. When the concern was first raised, many network operators and equipment suppliers formulated their own business continuity plans. Included in these plans were strategies for stockpiling additional equipment, establishing travel restrictions, and developing tactics for backfilling technicians in affected areas with technicians from unaffected areas. This planning also included evaluating and, if necessary, expanding the capacity of their networks, ensuring that sufficient capacity existed on their own internal corporate networks to allow a greater number of employees to work from home. Early in 2009, industry members of the ATIS Network Reliability Steering Committee (NRSC) identified the need to share knowledge related to pandemic preparation. The effort began with teams of industry experts reviewing existing industry Best Practices [12] to identify those with specific applicability to a pandemic event. The teams then identified pandemic-related preparations that might not be covered by an existing Best Practice. The eight-ingredient model [13], which breaks communications infrastructures down into eight ingredients, was used to help focus each team on the various aspects of the communications infrastructure. The recommendations from each team were then reviewed by the NRSC as a whole. On August 31, 2009, ATIS issued a press release [14] announcing the availability of the Pandemic Planning Recommendations, which included 56 industry Best Practices: 43 existing and 13 newly identified. These Best Practices cover a wide range of topics, from providing personal protective equipment, to arranging for remote access, to working with government agencies to ensure physical access. The Pandemic Planning Recommendations [15] are publicly available and demonstrate the importance industry places on working together to jointly address disasters and potential disasters. Pandemic planning was not limited to the United States.
One of the action items that came out of the May 2009 ARECI follow-up workshop was an agreement for parties in Europe to meet in a series of conference calls to discuss the status of the pandemic and share information about their organizations' preparations [16]. These conference calls continued through the end of 2009, with both industry and government personnel participating and sharing details of their preparations and plans. While the specific preparation steps varied, each represented organization had a well-defined plan and considered enhancing it based on the discussions at these meetings. Although the flu pandemic never materialized to the extent predicted, the Best Practices that were identified and the individual company plans that were established can be used for future pandemics. These efforts clearly demonstrate the importance that industry places on disaster planning and preparation.
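As a toy illustration of how a review team might organize such a screening, the snippet below tags candidate Best Practices with the infrastructure ingredients they touch (after the eight-ingredient model of [13]) and filters the pandemic-relevant ones. All entries, identifiers, and field names here are invented for illustration; this is not the actual ATIS checklist or its data.

```python
# Toy illustration of screening Best Practices by infrastructure ingredient
# (after the eight-ingredient model of [13]). All entries are invented.
INGREDIENTS = {"environment", "power", "hardware", "software",
               "networks", "payload", "policy", "human"}

best_practices = [
    {"id": "BP-001", "text": "Stock personal protective equipment",
     "ingredients": {"human"}, "pandemic_relevant": True},
    {"id": "BP-002", "text": "Provide remote access for operations staff",
     "ingredients": {"networks", "human"}, "pandemic_relevant": True},
    {"id": "BP-003", "text": "Harden cable vaults against flooding",
     "ingredients": {"environment"}, "pandemic_relevant": False},
]

def pandemic_checklist(practices, focus: str):
    """Select pandemic-relevant practices touching one ingredient."""
    return [p["id"] for p in practices
            if p["pandemic_relevant"] and focus in p["ingredients"]]

print(pandemic_checklist(best_practices, "human"))  # ['BP-001', 'BP-002']
```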
SUMMARY
Communications is a foundation upon which society depends, especially in times of crisis. Having a documented and tested emergency plan should be a requirement for all providers of communications. Disasters are rare events, and while some may view preparation for them as unnecessary overhead or even paranoia, history has shown us that although the probability of a disaster striking is small, it is not zero. Disaster situations will occur, and just as when purchasing insurance, the amount of money, time, and effort spent to protect against a disaster should be directly related to the value placed upon what may be lost. The indisputable importance of communications in today's societies requires a strong commitment to timely restoration in times of disaster, and this must be considered when decisions on emergency preparation expenditures are made. Alexander Graham Bell, the inventor of the telephone, said, "Before anything, preparation is the key to success" [17]. He may well have been talking about planning for disasters. Too much is at stake to first consider the consequences once a disaster has occurred.
REFERENCES
[1] Famous Quotes and Quotations, "Yogi Berra Quotes"; http://www.famous-quotes-and-quotations.com/yogiberra-quotes.html.
[2] ThinkExist, "Dwight David Eisenhower Quotes"; http://thinkexist.com/quotes/dwight_david_eisenhower/.
[3] European Commission Information Society, "Availability and Robustness of Electronic Communications Infrastructures (ARECI): ARECI Study"; http://ec.europa.eu/information_society/policy/nis/strategy/activities/ciip/areci_study/index_en.htm.
[4] Alcatel-Lucent Bell Labs, "European Expert-Confirmed Discretionary Best Practices — Selector Tool"; http://www.bell-labs.com/EUROPE/bestpractices/.
[5] European Commission Information Society, "Availability and Robustness of Electronic Communications Infrastructures (ARECI): ARECI Study," Mar. 2007, p. 78.
[6] Brainy Quote, "Yogi Berra Quotes"; http://www.brainyquote.com/quotes/quotes/y/yogiberra141506.html.
[7] European Commission Information Society, "Availability and Robustness of Electronic Communications Infrastructures (ARECI): ARECI Study," Mar. 2007, p. 84.
[8] The Quotations Page, "John Donne Quotes"; http://www.quotationspage.com/quotes/John_Donne/.
[9] NCC; http://www.ncs.gov/ncc/resources.html.
[10] Alcatel-Lucent, "Update! ARECI Report"; http://www.bell-labs.com/ARECI.
[11] Wikipedia, "2009 Flu Pandemic"; http://en.wikipedia.org/wiki/2009_flu_pandemic.
[12] Alcatel-Lucent, "NRIC Best Practices — Selector Tool"; http://www.bell-labs.com/USA/NRICbestpractices/.
[13] K. Rauscher, R. Krock, and J. Runyon, "Eight Ingredients of Communications Infrastructure: A Systematic and Comprehensive Framework for Enhancing Network Reliability and Security," Bell Labs Tech. J., vol. 11, no. 3, 2006, pp. 73–81.
[14] ATIS, "ATIS Develops Pandemic Planning Recommendations"; http://www.atis.org/PRESS/pressreleases2009/083109.htm.
[15] ATIS, "NRSC Pandemic Checklist"; http://www.atis.org/nrsc/Docs/NRSC_Pandemic_Checklist_Final.pdf.
[16] Alcatel-Lucent, "ARECI Workshop"; http://www.alcatellucent.com/wps/portal/!ut/p/kcxml/04_Sj9SPykssy0xPLMnMz0vM0Y_QjzKLd4w39w3RL8h2VAQAGOJBYA!!?LMSG_CABINET=Bell_Labs&LMSG_CONTENT_FILE=News_Features/News_Feature_Detail_000554.xml.
[17] ThinkExist, "Alexander Graham Bell Quotes"; http://thinkexist.com/quotes/alexander_graham_bell/.
BIOGRAPHY
RICHARD E. KROCK ([email protected]) is a technical manager in the Quality Assurance and Compliance organization of Alcatel-Lucent. He has 39 years of experience in the communications industry, having previously worked for a network operator. During his career he has led various disaster recovery efforts, most notably the switching recovery and overall quality control effort associated with the Hinsdale Fire, and has led numerous workshops on emergency preparedness and disaster recovery in the United States and Europe. He was program co-chair of the IEEE Communications Quality and Reliability Workshop in 2007 and 2008, and is the current chair of the Network Reliability Steering Committee Best Practices subcommittee. He holds a B.S.E.E. from Valparaiso University and an M.B.A. in telecommunications from Illinois Institute of Technology. He is a licensed professional engineer.
NETWORK DISASTER RECOVERY
How to Deliver Your Message from/to a Disaster Area Kenichi Mase, Niigata University
ABSTRACT
This article sheds light on the communication needs of evacuees in shelters during the post-disaster period and argues that it is essential to develop a completely new communication service that lets those in shelters maintain communication channels with those outside as well as with those in other shelters. We then present the assumptions and requirements for developing such a service and show a service and system concept, termed the Shelter Communication System (SCS). The SCS is composed of a computer (termed the Shelter Server) connected to the Internet and a set of personal computers (termed Shelter PCs), one in each shelter, also connected to the Internet via an appropriate access connection such as High Speed Packet Access. The Shelter Server and the Shelter PCs cooperatively provide a message communication service between those in shelters and those outside. A prototype of the SCS is presented to demonstrate its feasibility. A simple evaluation shows that the SCS can potentially provide a message communication service for an enormous number of evacuees in shelters in the case of a large-scale disaster, and that it surpasses other message communication services such as cellular phone mail and facsimile.
INTRODUCTION
When a large-scale disaster occurs, many people who live in the stricken area may lose their homes and be obliged to evacuate and stay for days, weeks, or even months in shelters provided in public facilities such as school gymnasiums. Naturally, every effort must be made to supply food, water, blankets, and other daily necessities for evacuees. Electric power, possibly from emergency generators, must also be provided to shelters for lighting and other needs. Last but not least, communication means are essential for evacuees. However, telephone and cellular phone services may not work as usual for days, weeks, or even months after a disaster, depending on its scale, owing to damage to communication network facilities and traffic overload, resulting in a gap between the supply of and demand for communication infrastructure and facilities. Service providers and network operators may bring in and set up earth stations to establish satellite links in the disaster area, and provide free public telephone terminals for evacuees in shelters [1].
However, the number of telephone terminals is limited by the capacity of the satellite link as well as by human and other resources, and may not always be sufficient to meet the huge call demand. Operators may also provide portable cellular base stations using satellite links [1, 2]; again, the number and capacity of base stations may not always be sufficient. There is no doubt that those in shelters have indispensable and urgent communication needs beyond those of everyday life. It is also essential for them to maintain communication opportunities with those outside to mitigate the stress of abnormal and extreme conditions such as survival in shelters. The necessity of communication applies not only to evacuees but also to those outside the disaster area. Large-scale disasters may seriously damage many aspects of social and economic activity; to minimize the effect of a disaster on business activities, the Business Continuity Plan (BCP) has drawn attention, and how to maintain communication within enterprises and with customers in the post-disaster period is of serious concern. It is therefore essential that all evacuees in shelters be equally provided with at least a minimum-level emergency communication service that supports communication channels with those outside the disaster area and with those in other shelters until the original telecom services are restored. In general there are two approaches to reducing the gap between communication demand and supply during the post-disaster period: increasing supply or decreasing demand. The first approach is of course desirable, but owing to resource limitations at telecom companies it is not realistic to expect it to fully cover the enormous traffic demand that may occur during the post-disaster period. The second approach is more realistic, but if it is pursued by enforcing strong traffic regulation to suppress network congestion, it results in an unacceptable call success ratio during the post-disaster period; it becomes acceptable only when it is realized by methods other than regulation, keeping user satisfaction as high as possible. To this end, it is essential to develop a new application that fulfills the enormous demand without imposing the load on communication resources that legacy applications do. To demonstrate this approach, we propose a novel message communication service that is valuable for those in shelters during the post-disaster period. We discuss the technical challenges and mechanisms needed to realize such a message communication service,
the Shelter Communication System (SCS). We evaluate the feasibility of the SCS based on the performance of the developed prototype, and finally compare the SCS service with some conventional message communication services to demonstrate its advantages.
ASSUMPTIONS AND REQUIREMENTS
In designing the SCS, we make the following assumptions:
A1 — Electric power is restored and available in shelters, possibly from generators.
A2 — Those in shelters do not have personal devices such as cellular phones or personal computers (PCs), and/or the skill to use them.
A3 — At least one PC (termed a Shelter PC) is available in every shelter, set up and connected to the Internet, possibly over a temporary access line.
A4 — As the main component of the SCS, a computer (termed the Shelter Server) is set up and connected to the Internet. The Shelter Server may be operated by a national agency or by a private body such as a group of network operators. It is standing equipment, ready to operate whenever and wherever a disaster occurs.
To realize A3, we may use a high-speed data communication service such as High Speed Packet Access (HSPA). Such a service may be available either over existing telecom facilities that escaped serious damage, or over temporarily restored facilities such as portable cellular base stations connected to satellite earth stations. In either case, the network facilities may be severely congested and unable to provide data communication with acceptable quality of service (QoS). Note, however, that we need only a single data communication session at each shelter, serving everyone there. Based on social recognition of the SCS service, such public communication requests qualify for higher preference over general communication requests, in cooperation with telecom operators.
It is essential that all evacuees be treated fairly in the provision of living space, food, water, and daily necessities in the shelters; communication opportunities are no exception. When the number of free telephone terminals provided by telecom operators falls short, long queues may form in front of each terminal. Such queues cause great stress for evacuees and should be avoided. Finally, communication should reach anyone outside the disaster area as well as anyone within it, who may be staying in other shelters. Based on the observations above, we set three requirements for the SCS. It shall provide a service that:
R1 — Is available to everybody
R2 — Is available anytime, without a long wait
R3 — Supports communication with anyone, anywhere
It is inconvenient and impractical for those in a shelter to take turns using a single Shelter PC; moreover, not all of them can use a computer. How to realize an emergency message communication service that meets the requirements above is a major challenge.
SERVICE CONCEPTS AND PRINCIPLES
SERVICE CONCEPTS
While we assume A2 for those in shelters, e-mail-like message delivery services using the Internet and cellular phones are convenient and indispensable for those outside the disaster area to maintain communication with those in shelters. People can comfortably use only the applications they are accustomed to using in daily life, and may fail to use an application developed to be used only in case of disaster. The goal of the SCS is to bridge this information gap between the inside and outside of the disaster area. To this end, we introduce the following ideas:
• Those in shelters write messages by hand on specially formatted sheets of paper (message sheets). Batches of collected message sheets are input into the Shelter PC with a scanner, allowing a single PC to be shared by everybody.
• The Shelter PC creates a file from a number of input message sheets, termed the message file, and uploads it to the Shelter Server.
• The Shelter Server forwards each message to the cellular phone or PC of the receiver outside the disaster area in the form of an e-mail.
• Messages from outside the disaster area, and messages from shelter to shelter, received by the Shelter Server are classified by shelter and downloaded to each Shelter PC. Each message is printed on a sheet of paper by the Shelter PC and delivered to its receiver.
The SCS makes the best use of the advantages of message-based communication, summarized as follows:
• A message can be handwritten on a sheet of paper without special devices or skills.
• A message can be printed on a sheet of paper and read without special devices or skills, at any later time.
• A message can be sent and received anytime, regardless of the status or surroundings of the sender and receiver. Imagine many people talking on cellular phones in a densely populated shelter.
• Messages can be stored and aggregated before transmission, enabling effective use of the precious bandwidth of the Internet access link in an emergency.
• Message communication tools such as e-mail on cellular phones and PCs are nowadays quite popular.
KEY IDEAS TO REALIZE SERVICE CONCEPTS
To realize the service concepts above, we have come up with three key ideas, described in the following. We use telephone numbers, handwritten on the message sheet and recognized with optical character recognition (OCR), to identify the sender and receiver of a message. No other information, such as an e-mail address, needs to be input, even when the message will eventually be sent to a PC or cellular phone outside the disaster area in the form of e-mail. The reasons for using telephone numbers as sender and receiver IDs are threefold:
• Telephone numbers are more familiar than e-mail addresses.
• It is easier for a person to write telephone numbers on a message sheet than e-mail addresses.
• A recognition success ratio of nearly 100 percent can be expected with OCR for numerals, each written within a printed box.
We do not use OCR to recognize handwritten free text, which is simply treated as image data, since a satisfactory success ratio in machine reading cannot be expected; note that, by A2, we cannot rely on any device for those in shelters to input text. The image data is either attached directly to the e-mail sent to the receiver, or the receiver can access it by clicking a URL included in the received e-mail text.
When a message is to be sent to the receiver in the form of e-mail, the receiver's e-mail address must be resolved from the telephone number (address resolution). We rely on the active role of SCS users outside the disaster area to provide this mapping: when a person outside the disaster area sends a message to someone in a shelter using the SCS, he is requested to input his telephone number as well as his e-mail address. The Shelter Server records this telephone-number-to-e-mail-address mapping and may use it later for address resolution, when his telephone number appears as the receiver's telephone number on a message sent from a shelter. If the receiver has not accessed the SCS before, no mapping information is available in the Shelter Server; in this case, the Shelter Server may automatically place a phone call to the receiver's telephone number from the message sheet, to make him aware of waiting messages and to prompt him to access the SCS and input the mapping information.
DESIGN PRINCIPLES AND FUNCTIONALITIES
SCS DESIGN PRINCIPLES
An overview of the SCS is shown in Fig. 1 together with the related systems; the major components of the SCS are the Shelter Server and the Shelter PCs.
Figure 1. An overview of SCS and related systems (Shelter PCs in the disaster area reach the Internet over access lines; the Shelter Server provides message file uploading and downloading, e-mail sending, and web mail; a gateway connects the cellular phone network).
DESIGN PRINCIPLES AND FUNCTIONALITIES SCS DESIGN PRINCIPLES An overview of SCS is shown in Fig. 1 together with the related systems. The major components of SCS are the Shelter Server and Shelter PCs.
To begin the SCS service in a shelter, someone there activates the web browser of the Shelter PC and accesses the SCS web page to register the shelter ID. The shelter account is then created in the Shelter Server, and the SCS service is ready to be used. A Shelter PC is equipped with an automatic document feeder (ADF), scanner, and printer. Written message sheets are set in the ADF, scanned sheet by sheet, transformed into electronic data, and input to the Shelter PC. The Shelter PC aggregates and compresses the input data into one file (the message file) when a given time has elapsed since the last message sheet was input, or when the size of the input messages reaches a predetermined threshold, and uploads the message file to the Shelter Server. On the same occasion, and/or periodically, it downloads a message file composed of the messages received for its shelter from the Shelter Server, splits it, and prints each message.
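The two upload triggers just described (an idle timeout after the last scanned sheet, or a size threshold on the accumulated batch) can be made concrete with a short sketch. This is a minimal illustration, not the prototype's actual code; the constants and names such as IDLE_TIMEOUT_S and SIZE_THRESHOLD_BYTES are assumptions.

```python
# Minimal sketch of the Shelter PC upload trigger: aggregate scanned sheets
# and upload when either an idle timeout or a size threshold is reached.
# Names and threshold values are hypothetical, not from the prototype.
import time
import zlib

IDLE_TIMEOUT_S = 60             # upload if no new sheet for this long (assumed)
SIZE_THRESHOLD_BYTES = 512_000  # ...or once this much image data accumulates

class MessageFileBuilder:
    def __init__(self):
        self.sheets = []                     # scanned sheet images (bytes)
        self.last_input = time.monotonic()

    def add_sheet(self, image: bytes) -> None:
        self.sheets.append(image)
        self.last_input = time.monotonic()

    def should_upload(self) -> bool:
        idle = time.monotonic() - self.last_input >= IDLE_TIMEOUT_S
        big = sum(len(s) for s in self.sheets) >= SIZE_THRESHOLD_BYTES
        return bool(self.sheets) and (idle or big)

    def build_message_file(self) -> bytes:
        """Aggregate and compress all pending sheets into one message file."""
        blob = b"".join(len(s).to_bytes(4, "big") + s for s in self.sheets)
        self.sheets.clear()
        return zlib.compress(blob)
```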
SHELTER SERVER FUNCTIONALITIES
The Shelter Server maintains user accounts and message holders. A user account comprises a telephone number, a shelter ID, and an e-mail address; in principle, either the shelter ID or the e-mail address is filled in. User accounts are created and maintained automatically upon receipt of message communication requests from users; no pre-registration is required. The message holders consist of a common message holder and individual shelter message holders, one for each shelter. The common message holder is the place to temporarily store messages whose delivery address (shelter ID or e-mail address) is not yet available in the corresponding receiver's user account. A shelter message holder is the place to store messages whose receiver's user account contains that shelter's ID.
The Shelter Server performs the following major functions in collaboration with the Shelter PCs. It receives the message file uploaded by each Shelter PC and splits it into individual (electronic) messages. For each received message, it checks whether user accounts exist for the sender's and the receiver's telephone numbers, and creates them if they do not. For the sender's account, the shelter ID is set to that of the uploading Shelter PC and the e-mail address is left blank.
For a newly created receiver's account, both the shelter ID and the e-mail address are left blank. The Shelter Server acts as a proxy for each message sender in a shelter, forwarding each received message to its receiver. If a user account exists for the receiver's telephone number and the corresponding e-mail address or shelter ID is available, the Shelter Server sends the message in the form of an e-mail or stores it in the corresponding shelter message holder, respectively. If not, it stores the message in the common message holder; it will send the message as an e-mail, or move it from the common message holder to the holder of the corresponding shelter, when the receiver's user account is updated and its e-mail address or shelter ID becomes available. This procedure is illustrated in the processing sequence in Fig. 2.
The SCS provides a web mail interface for those outside the disaster area to send messages to those in shelters. When a person outside the disaster area accesses the web mail page of the SCS, he is requested to input his own as well as the receiver's telephone number. The Shelter Server checks whether a user account exists for him; if not, he is requested to input his Internet (fixed or mobile) e-mail address. The Shelter Server creates his user account from the received telephone number and e-mail address, which are used for address resolution as described earlier. It also checks whether a user account exists for the receiver's telephone number, and performs a process similar to that for messages sent from a shelter to deliver the message to the receiver in the shelter. This procedure is illustrated in the processing sequence in Fig. 3.
On receiving a message file download request from a Shelter PC, the Shelter Server collects the messages in the corresponding shelter message holder, aggregates them into one file, and sends the message file to the requesting Shelter PC.
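The delivery rule above (send immediately when a shelter ID or e-mail address is known, otherwise park the message in the common message holder and retry when the account is updated) can be summarized in a few lines of code. The sketch below is a simplification for illustration; the class and method names are hypothetical, and e-mail dispatch is stubbed out.

```python
# Simplified sketch of Shelter Server message routing (hypothetical names).
# A message is delivered if the receiver's account holds a shelter ID or
# e-mail address; otherwise it waits in the common message holder.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Message:
    sender_phone: str
    receiver_phone: str
    image: bytes  # scanned handwritten sheet

@dataclass
class Account:
    phone: str
    shelter_id: str = ""
    email: str = ""

@dataclass
class ShelterServer:
    accounts: dict = field(default_factory=dict)
    shelter_holders: dict = field(default_factory=lambda: defaultdict(list))
    common_holder: list = field(default_factory=list)

    def route(self, msg: Message) -> None:
        acct = self.accounts.get(msg.receiver_phone)
        if acct and acct.shelter_id:
            self.shelter_holders[acct.shelter_id].append(msg)  # print in shelter
        elif acct and acct.email:
            self.send_email(acct.email, msg)                   # forward as e-mail
        else:
            self.common_holder.append(msg)                     # wait for mapping

    def on_account_updated(self, phone: str) -> None:
        """Re-route parked messages once a shelter ID or e-mail becomes known."""
        pending = [m for m in self.common_holder if m.receiver_phone == phone]
        for m in pending:
            self.common_holder.remove(m)
            self.route(m)

    def send_email(self, address: str, msg: Message) -> None:
        pass  # stub: hand the scanned image to an SMTP gateway
```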
PROTOTYPE DEVELOPMENT AND EVALUATION
A prototype of the SCS has been developed based on the system design principles and functionalities given in the previous section [3]. We consider the message sheet uploading & processing time, defined as the time from the initiation of scanning a given number of message sheets set up in the ADF of the Shelter PC to the completion of processing the uploaded message file in the Shelter Server. We have conducted experiments to measure the details of the message sheet uploading & processing time for varying numbers of message sheets. In this experiment, one Shelter PC is directly connected to the Shelter Server with a link bandwidth of 32 kb/s. The results are given in Table 1. Using the results of Table 1, and assuming that message sheets are scanned and uploaded in batches of 40, we have derived an experimental equation for the total usage time of the Shelter Server (message sheet uploading time + Shelter Server processing time), Tu (sec), for a given number of message sheets per day, n. The number of uploads per shelter per day is n/40, neglecting fractions. The message file transmission time per batch is 258 * 32/B seconds, where B (kb/s) is the effective bandwidth between the Shelter PC and the Shelter Server.
Figure 2. Message sheet processing. [Flowchart, Shelter Server side: identify the message type and search/create accounts; if the receiver's address is not available, store the message in the common mail holder until the receiver's account is updated; once available, store the message in the shelter mail holder (shelter ID) or send e-mail (e-mail address), then exit.]
Figure 3. Web mail reception. [Flowchart: a PC/cellular phone user outside the disaster area accesses the web mail page and inputs the sender's ID (telephone number and e-mail address), the receiver's telephone number, and the message; the Shelter Server registers and searches/creates accounts; if the receiver's shelter ID is available, the message is stored in the shelter message holder, otherwise it is held in the common message holder until the receiver's account is updated.]
The total usage time of the Shelter Server, Tu (sec), is then given by

Tu = (206/B + 0.15) * n    (1)
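As an illustration (our own numbers, anticipating the 1 Mb/s access link assumed in the next subsection): with B = 1000 kb/s and n = 200 message sheets per day, Eq. 1 gives Tu = (206/1000 + 0.15) * 200 ≈ 71 s per shelter per day; doubling n to 400 doubles Tu to roughly 142 s.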
SERVICE CAPACITY ESTIMATION
In the experiment above, only a single Shelter PC was used. In actual disaster cases, a number of shelters may be open, with a Shelter PC set up in each. The web server of the Shelter Server thus needs to handle simultaneous sessions from multiple Shelter PCs.
No. of message sheets | Shelter PC processing time (scan / coding / compression) | Shelter Server processing time | Uploading time
10 | 8 / 8 / 7 | 2 | 63
20 | 16 / 16 / 7 | 4 | 120
30 | 24 / 24 / 9 | 5 | 170
40 | 32 / 32 / 12 | 6 | 258

Table 1. Message sheet uploading and processing time (sec).
Figure 4. The relation of the number of shelters and the total usage of the Shelter Server for message sheet uploading per day. [Plot: usage of the Shelter Server per day in hours (0–35) against the number of shelters (0–800), for M = 200 and M = 400, where M is the number of message sheets per day per shelter; a dotted line marks the available time of 12 hours.]
In this case, the Shelter Server usage time per day is estimated as Tu * S, where S is the number of Shelter PCs. Assuming that each shelter accommodates 100 evacuees and each evacuee sends 2 or 4 messages per day on average, the total number of messages per shelter per day is 200 or 400. Note that it is difficult to produce tens of messages per day in handwriting, busy persons typically spend long hours outside the shelter, and elderly people and children do not generate many messages. It is thus reasonable to assume that the average number of messages per day is relatively low. We further assume that the bandwidth of the Internet access link is 1 Mb/s per shelter. This bandwidth is available in today's high-speed cellular data communication services such as HSPA. The Shelter Server usage time per day is shown in Fig. 4 with regard to the number of shelters, S. The dotted line represents the 12-hour upper limit on total server usage time, assuming the remaining 12 hours are needed for message downloading. The figure shows that 600 or 300 shelters can be served when the total number of messages per shelter is 200 or 400, respectively. This calculation shows that a single Shelter Server of the developed SCS prototype can deal with a middle-class disaster. Assuming a mega-class disaster with 2 million evacuees and 100 evacuees per shelter, 20 thousand shelters are needed. If we use a desktop with capacity similar to that used in our prototype, load sharing with 30–60 Shelter Servers is required (roughly 20,000 shelters divided by the 300–600 shelters a single server can support). The number of Shelter Servers can be reduced by using higher-class computers. It is still necessary to verify the assumptions we use in this estimation, such as the communication
needs of the evacuees, to put the SCS into practice. Middle-class disasters happen more frequently than mega-class disasters; we could verify our assumptions by actually using the SCS in these middle-class disasters in preparation for large-scale ones.
SERVICE COMPARISON
We select two existing message communication services, cellular phone e-mail service and facsimile service, that may potentially be used in shelters for qualitative comparison with the SCS service, assuming that cellular phone terminals are owned by individual evacuees in shelters in the former case and a single facsimile terminal is provided to each shelter in the latter case. The results are given in Table 2. We first discuss three criteria that correspond to the requirements R1–R3 in developing the SCS. The first criterion, beneficiary, corresponds to R1 and means the number of prospective users. Cellular phone e-mail is not for everybody in shelters, since some evacuees do not have cellular phones and others may not have the skill to use cellular phone e-mail. On the other hand, the facsimile and SCS services can be used by almost anybody, since messages can be written by hand and received in print on sheets of paper. The second criterion, waiting for service, corresponds to R2. We choose the time required to complete message transmission processing (message transmission waiting time) for this criterion. To send a facsimile, the sender needs to dial. When many people wish to send facsimiles simultaneously using the single facsimile terminal in a shelter, they have to wait their turn in a long queue or with an order ticket to dial the facsimile number and send their message. In the case of cellular phone e-mail, such a problem does not occur, since senders use their own phone terminals. In the SCS, users do not have to wait their turn to use the Shelter PC, though it takes some time for the message sheets to be collected and set up in the scanner. The third criterion, reachability, corresponds to R3. The penetration ratio of facsimile terminals in ordinary homes is significantly lower than that of cellular phones and PCs. Cellular phone service may not be available to/from receivers in the disaster areas due to network damage and/or strong traffic regulation. In the SCS, on the other hand, messages can be delivered to those outside the disaster area with their cellular phones or PCs connected to the Internet as well as to those in other shelters, and received from those outside shelters through the Internet, assuming the priority access mentioned before with regard to A3, leading to the highest evaluation. We next discuss the other criteria. Privacy of correspondence is evaluated lower for the facsimile and SCS services, where messages written or printed on sheets of paper may be read by others, than for cellular phone e-mail; however, this may not be a serious defect, since unsealed message delivery services such as facsimile itself and postal cards have been generally accepted by users in daily life. With regard to misdelivering, human errors such as misdialing can occur with cellular phone and facsimile. Since in the SCS manual intervention is required to collect message sheets and input them into the Shelter PC, as well as to distribute printed message sheets to individual receivers, message sheet loss may happen due to human error. In addition, negligible OCR errors may occur in the SCS.
Service | Beneficiary | Waiting for service | Reachability | Privacy of correspondence | Misdelivering | Networking cost/user | Terminal cost/user

Cellular phone mail | Limited | Not necessary | Limited (service may not be available) | Satisfactory | Acceptable (dialing error) | High (a number of sessions required) | Not required (user's own terminal is used)

Facsimile service | Not limited | Necessary (dialing is required) | Poor (less popular) | Acceptable | Acceptable (dialing error) | Not required | Acceptable (one facsimile terminal per shelter)

SCS | Not limited | Not necessary | Satisfactory | Acceptable | Acceptable (message sheet loss/OCR error) | Acceptable (Shelter Server is required) | Acceptable (one Shelter PC per shelter)

Key factors: beneficiary, waiting for service, reachability. Other factors: privacy of correspondence, misdelivering, networking cost/user, terminal cost/user.

Table 2. A comparison of message communication services in shelters.
However, these happenings can be controlled within an acceptable range. With regard to networking cost/user, enormous communication demands may occur around shelters in the same time period in the cellular phone service, and many portable cellular phone base stations need to be provided to deal with the number of simultaneous sessions in order to satisfactorily meet the demands in the disaster areas, requiring high investment in such service recovery arrangements. In the facsimile and SCS services, only a single session is required per shelter in principle. In the SCS, the Shelter Server cost is additionally required, but the investment per user is minor, since a single desktop can deal with a number of shelters and evacuees, as shown in the previous section. As such, the networking cost/user is within an acceptable range for both facsimile and SCS. With regard to terminal cost, no new cost arises for cellular phones since users use their own terminals. In the facsimile and SCS services, a set of terminals is needed in each shelter, which is also within an acceptable range per user. In summary, the SCS is advantageous over cellular phone and facsimile in the three major criteria and does not have major defects in the other criteria either.
CONCLUSIONS
We considered the communication environments and needs of the evacuees accommodated in shelters in a post-disaster period and proposed a novel message communication service and system, termed the Shelter Communication System (SCS). The SCS is composed of a computer (termed the Shelter Server) connected to the Internet and a set of personal computers (termed Shelter PCs), one in each shelter, also connected to the Internet and communicating with the Shelter Server. Each Shelter PC needs only a single Internet access connection to deal with the whole communication needs of one shelter, which is efficient and promising in the post-disaster period, when the network may be severely damaged and traffic overload may occur. The high-speed data communication service of the cellular phone network may be used for each Shelter PC to access the Internet, possibly with high priority to keep QoS based on social recognition and consensus. The SCS provides message communication service between shelters as well as between shelters and the outside of the disaster area.
A prototype of the SCS was developed, and a simple evaluation showed that the SCS could provide service for a number of shelters and evacuees in the case of large-scale disasters. Finally, a qualitative evaluation of the SCS and other message services, cellular phone mail and facsimile, was presented, and the advantages of the SCS service were demonstrated. The prototype of the Shelter Server using a desktop computer could support middle-class disasters and may actually be used in real shelters to explore issues for further improvement. Use of multiple Shelter Servers is mandatory for load sharing and/or reliability to deal with large-scale disasters. The SCS is expected to maximize the role of the Internet as the social infrastructure contributing to rapid disaster recovery.
ACKNOWLEDGMENT This research was partly supported by JSPS Grants-in-Aid for Scientific Research and the Strategic Information and Communications R&D Promotion Programme, Ministry of Internal Affairs and Communications Japan.
REFERENCES [1] T. Kitaguchi and H. Hamada, “Telecommunications Service Continuity and Disaster Recovery,” IEEE Commun. Society Commun. Quality & Reliability Wksp., Apr. 2008. [2] N. Fukumoto, “Business Continuity and Disaster Recovery in KDDI,” IEEE Commun. Society Commun. Quality & Reliability Wksp., Apr. 2008. [3] K. Mase, H. Okada, and N. Azuma, “Development of an Emergency Communication System for Evacuees of Shelters,” IEEE WCNC, 2010.
BIOGRAPHY
KENICHI MASE [F] ([email protected]) received B.E., M.E., and Dr. Eng. degrees in electrical engineering from Waseda University, Tokyo, Japan in 1970, 1972, and 1983, respectively. He joined Musashino Electrical Communication Laboratories of NTT Public Corporation in 1972. He was executive manager of the Communications Quality Laboratory, NTT Telecommunications Networks Laboratories from 1994 to 1996, and of the Communications Assessment Laboratory, NTT Multimedia Networks Laboratories from 1996 to 1998. He moved to Niigata University in 1999 and is now a professor in the Graduate School of Science and Technology, Niigata University, Niigata, Japan. He received the IEICE best paper award for 1993 and the Telecommunications Advanced Foundation award in 1998. His research interests include communications network design and traffic control, quality of service, mobile ad hoc networks, and wireless mesh networks. He was President of IEICE-CS in 2008. He is an IEICE Fellow.
GUEST EDITORIAL
FUTURE CONVERGENT TELECOMMUNICATIONS SERVICES: CREATION, CONTEXT, P2P, QOS, AND CHARGING
Antonio Sánchez-Esguevillas, Belén Carro-Martínez, and Vishy Poosala
This series is an attempt to provide a holistic view of telecommunications services (having previously covered both the enterprise [1] and residential [2] segments). The current issue is the second part of the New Converged Telecommunications Applications for the End User feature topic. In the previous issue [3], two published articles described services that rely on IP Multimedia Subsystem (IMS) infrastructure (which is progressively being deployed by telcos), related to lookup services and vehicular technology. In this second part, a total of eight articles complete the selection of material fulfilling the call for papers' intention of providing an update and new insights into the applications and services a converged world is bringing, concentrating on application-only topics in the field of services for mass market users. The articles presented cover a variety of aspects including user-driven service creation, context awareness (four articles, including two related to eHealth), the combination of peer-to-peer (P2P) and next-generation networks (NGN), quality of experience (QoE), and service charging. We hope these articles will provide readers with an overview of how new infrastructures can facilitate the existence of new applications and services that will, in the end, make our lives easier. The articles composing this special issue are organized as follows. The first article is "An Ontology-Based Context Inference Service for Mobile Applications in Next Generation Networks" by Philipp Gutheim. As we know, context-enabled mobile applications are considered to provide a richer experience and enhance user interaction by acquiring information that allows the identification of the user's current situation. Modern context inference infrastructures can source, process, and deliver user information. However, commercialization of a context service has so far been prevented by the need for global service coverage and accurate context identification. This article demonstrates how a telecom operator can leverage the potential of converged networks by providing a context inference platform that enables users to enrich mobile applications with context information. Its motivation is to outline a service that requires an innovative implementation on the basis of NGN, is compliant with current standards, and is designed for large-scale deployment. The proposed implementation involves third-party
applications within the context inference process itself and identifies improvements to previous implementations: increased service adoption, higher accuracy of context, and increased robustness to errors. Context awareness is a very large research domain, encompassing many issues ranging from the physical measurement of a given situation right up to the question of social acceptance. It appears as one of the most promising technologies for evolving current communication services into fluid, flexible, intuitive communication means. In "Interpersonal Context-Aware Communication Services" by François Toutain et al., a review of the various applications of context awareness to convergent interpersonal communication services is presented, making clear the evolution potential introduced by these techniques, from the classical digital phone network to truly smart services, many of which remain to be invented. Continuing with user services, the next article is "Employing Collective Intelligence for User Driven Service Creation" by Yuchul Jung et al. Currently, cohesively connected web services/mashups cover users' needs for services. However, most service creation environments do not have enough knowledge (especially of available services and their functionality) to support users' service creation. The problem of knowledge scarcity means users have difficulty finding relevant open application programming interfaces (APIs) for a given situation, resulting in quite simple services. In this article, two kinds of collective intelligence for user-driven service creation are presented: user experiences of service composition and activity knowledge from the web. They will assist end users' service composition by reinforcing knowledge support in the aspects of user experiences and activity-aware functional semantics, and will ultimately accelerate the development of various kinds of converged applications. With the beneficial roles of collective intelligence as a key enabler of the future service creation environment, this article also shows the new potential of user-driven composite services for the next few years. From a more technological perspective, the next article discusses communications with P2P interoperating with NGN, as proposed in "VITAL++: A New Communication Paradigm Embedding P2P Technology in Next Generation Networks" by Nikolaos Efthymiopoulos et al. This article
describes a novel communication paradigm that fulfills the requirements of both users and operators, and proposes a new architecture for content distribution systems. It combines the best features of two apparently disparate worlds, P2P and NGN (in particular, IMS), used to support multimedia applications and content distribution services. To this end, P2P is enhanced with advanced authentication and digital rights management (DRM) mechanisms, while NGN becomes more scalable and reliable, and less centralized, by exploiting P2P self-organization properties. Novel P2P algorithms for optimizing network resources in order to efficiently distribute content (offering live streaming) among various users, without resorting to the laborious management operations required in NGN, are described. The innovation of the work lies in the methodology of how P2P can be embedded in an NGN architecture like IMS and vice versa, rather than the specifics of IMS. To this end, any future NGN architecture can be integrated with P2P in a similar manner. Another topic of utmost importance is the quality of service/experience (QoS/QoE) offered to the final user. The article "Application-Based NGN QoE Controller" by Janez Sterle et al. covers this aspect. There is significant research in efficient QoE assurance solutions appropriate for modern NGN service environments (still, QoS methodologies from legacy networks are predominant, and loosely related to NGN standards and technologies). In this article, an application-based QoE controller is presented, proposing a solution for objective and context-aware end-to-end QoE control in the NGN. The proposal introduces into the NGN service environment a novel value-added service enabler for the service of in-session QoE control, which is accomplished through context-based QoE modeling, whose role is to provide a detailed description of the circumstances under which the communication is established and by which the end user's QoE is affected. The service is available to end users who wish to benefit from QoE optimizations in the NGN environment while accessing other available services within their multimedia communication. The following two articles deal with how NGN can positively influence the development of health-related services and applications. In "A Context-Aware Service Architecture for the Integration of Body Sensor Networks and Social Networks through IP Multimedia Subsystem" by Mari Carmen Domingo, a new context-aware architecture is proposed for the integration of body sensor networks (BSNs) and social networks through IMS, enabling authorized social network members to monitor real-time BSN data of other users and publish it in a simple and efficient way. BSNs use IMS as a transport platform to access the web services of social networks, which use the services of IMS to run applications based on data of BSNs. The integration of the sensing capabilities of BSNs and social networks in the IMS facilitates more intelligent and autonomous applications that depend on or exploit knowledge about people, including their identities, updated vital signs, or current locations in the physical world. It provides new and personalized services to end users such as pervasive gaming, enhanced emergency services, and wireless healthcare applications. These context-aware applications are able to adapt to the changing communications environment and needs of the user.
"Visualization of Data for Ambient Assisted Living Services," by Maurice Mulvenna et al., identifies a new problem, emerging from the many different viewing options utilizing converged networks and the resulting explosion in data and information, for the new ambient assisted living (AAL) services struggling to convey meaningful information to different
groups of end users. Ambient assisted living services provide support for people to remain in their homes, providing additional information through location awareness, presence awareness, and context awareness capabilities. The article discusses visualization of data from the perspective of the needs of the differing end-user groups, and discusses how algorithms are required to contextualize and convey information across location and time. In order to illustrate the issues, current work on nighttime AAL services for people with dementia is described. Finally, let us take a look at service charging. The charging model in telecommunication networks has evolved considerably in recent years. The original model in legacy fixed telephony networks was in most cases quite simple: the price of the communication depended on the destination and duration of the call. With the introduction of mobile and multimedia networks, the charging model became more complicated. More and more often, a subscriber is able to choose between several options on top of his/her standard tariff. Each option is then applicable to a certain kind of traffic only. The article "Service Charging in Converged Networks" by Marc Cheboldaeff studies how these new charging models impact the technology of online charging systems (OCS). Last but not least, we would like to thank again all the stakeholders who have made this special issue possible: colleagues who spread the word advertising the issue to attract attention, authors who sent their article proposals, the team of reviewers who helped select the papers and gave very valuable comments for the improvement of the selected ones, Editor-in-Chief Steve Gorshe (as well as his predecessors and the members of the editorial board who provided comments on the proposal and Call for Papers) for his invaluable help and for hosting this special issue, and the ComSoc editorial staff who helped with the article processing and produced the final material. We hope that all the effort undertaken fully satisfies the magazine readership for whom this feature topic has been prepared.
REFERENCES
[1] A. Sánchez et al., "Applications and Support Technologies for Mobility and Enterprise Services," IEEE Wireless Commun., July 2009.
[2] A. Sánchez et al., "Digital Home Services," IEEE Network, Nov. 2009.
[3] A. Sánchez, B. Carro, and V. Poosala, "New Converged Telecommunications Applications for the End User," IEEE Commun. Mag., Nov. 2010.
BIOGRAPHIES
ANTONIO SÁNCHEZ-ESGUEVILLAS ([email protected]) [SM] has managed innovation at Telefónica (both Corporation and R&D), Spain. He is also an adjunct professor at the University of Valladolid. His research interests relate to services and applications. He is an Editorial Board member of IEEE Communications Magazine and IEEE Network, founder and Chairman of the IEEE Technology Management Council Chapter Spain, a guest editor for IEEE Wireless Communications and IEEE Network, and has recently been serving on the TPCs of ICC, GLOBECOM, PIMRC, WCNC, Healthcom, CCNC, and VTC.
BELÉN CARRO ([email protected]) is an associate professor at the University of Valladolid, where she is director of the Communication and Information Technologies (CIT) laboratory. Her research interests are in the areas of service engineering, IP broadband communications, NGN, voice over IP, and quality of service. She has extensive research publication experience as an author, reviewer, and editor.
VISHY POOSALA ([email protected]) is head of Bell Labs India, where he leads a world-class R&D team in delivering research breakthroughs and internal technology ventures, and growing the overall Alcatel-Lucent business. His current focus includes creating novel applications and networking technologies, especially for emerging markets. He has published over 20 technical papers, been granted 18 patents, and won the Bell Labs President's Gold Award three times.
NEW CONVERGED TELECOMMUNICATION APPLICATIONS FOR THE END USER
An Ontology-Based Context Inference Service for Mobile Applications in Next-Generation Networks Philipp Gutheim, University of California Berkeley
ABSTRACT
Context-enabled mobile applications are considered to provide a richer experience and to enhance the user interaction by acquiring information that allows the identification of the user's current situation. Modern context inference infrastructures can source, process, and deliver user information. However, commercialization of a context service has so far been prevented by the need for global service coverage and accurate context identification. With the advent of Next Generation Networks, telecom operators can leverage All-IP networks to design external service interfaces that integrate a diverse set of sources, and context inference processes that are easily scalable, extendable, and robust at the same time. This article presents a telecom operator service that supplies mobile applications with context information to illustrate how context infrastructures can leverage NGN capabilities. The article introduces an innovative context inference approach involving third-party applications within the inference process itself. This is done by structuring the ontology context model in layers of complexity and inferring particular context information via modules, which are designed in collaboration with third-party developers. Furthermore, the service is compliant with state-of-the-art IP Multimedia Subsystem infrastructures and provides an interface that uses HTTP/SOAP and HTTP/REST communication as well as the Session Initiation Protocol of the IMS. A first proof of concept indicates increased service adoption, higher accuracy of context, and increased robustness to errors.
INTRODUCTION
The notion of context-enabled mobile applications has been discussed by researchers and practitioners for several years. These applications are expected to enhance the interaction with the user and to improve the experience by collecting information that supports the identification of the user's current situation. For instance, a cell phone that is aware of the user's daily commute can send out a reminder when the train is about to
arrive and suggest buying a ticket automatically. However, the development of a context-aware application or service is challenging. This is because the context of a person is defined by various factors such as location, social environment, or time, and is inferred using complex and mostly implicit rules. This requires the development of systems designed to support context awareness. By offering context-aware applications, developers would relieve users of the burden of explicitly defining what information is currently relevant to them, and could offer enriched services that are specifically tailored to the user's situation. Hence, the complexity of interaction with mobile applications is reduced, which is of major importance especially in mobile settings where the same service is used within various environments. The need for a system design that supports context awareness has led to the development of several frameworks and architectures providing concepts and concrete examples of potential application designs. The goal of these architectures is to simplify the development of context-aware applications, to reduce redundant development effort through reusable components, and to provide an infrastructure that allows multiple applications to gain access to the system. The advent of next-generation networks (NGN) supports context inference in that it enables a service to acquire rich user information from both Internet and mobile spheres on the basis of common IP-based communication standards. This is realized by the IP Multimedia Subsystem (IMS) specification, which has been developed as a potential standard for NGN architectures. The standard specifies access to mobile services from several networks such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), wireless local area network (WLAN), or wireline broadband, and uses the asynchronous Session Initiation Protocol (SIP) for secure and reliable communication. However, a context service requires an interface that serves both web and IMS purposes and provides a context model that is simple, flexible, and expressive [1]. With respect to the inference of context, existing architectures source and process user information in an adequate manner.
Figure 1. Widget and blackboard context management models. [Diagram: in the widget model, applications access sensors through dedicated widgets, one widget per sensor; in the blackboard model, applications and sensor wrappers all communicate through a central blackboard.]
These infrastructures have been developed for both experimental usage and large-scale environments and distinguish executable functions such as information retrieval, storage, learning algorithms, and the inference itself. However, an NGN context service has to ensure service coverage as well as reliable and accurate inference of context to meet users' expectations. This requires an infrastructure that provides a scalable and extendable inference process and pays particular attention to handling complexity, ambiguity, and uncertainty of information [2]. This article demonstrates how a telecom operator can leverage the potential of converged networks by providing a context inference platform that enables users to enrich mobile applications with context information. Our motivation is to outline a service that requires an innovative implementation on the basis of NGN, is compliant with current standards, and is designed for large-scale deployment. We propose an implementation that involves third-party applications within the context inference process itself and identify improvements to previous implementations, namely increased service adoption, higher accuracy of context, and increased robustness to errors. For that reason, we initially review two general context management models and the common layer design architecture of current context inference systems. On that basis, we outline the telco service and derive an implementation that uses the proposed module-layer structure in combination with existing web and communication standards. Afterwards, we present a first evaluation of the implementation and finally analyze how the proposed model contributes to a future execution of context inference services in NGN.
BACKGROUND ON CONTEXT FRAMEWORKS
Most research in context-aware computing has focused on developing software architectures to allow an application to process and use context information. Two distinct approaches can be identified: architectures that follow a widget model and those that use a blackboard model [3]. In this section, both approaches are reviewed.
Afterwards, a more abstract model in the form of a layered middleware is presented.
CONTEXT MANAGEMENT MODELS
Widget architectures allow applications to gain access to context information through a direct request to a particular widget. Widgets are the intermediaries between sources and applications and provide an external interface. For instance, a location widget retrieves information from the GPS sensor, which provides the longitude and latitude position of the user, and establishes a public interface for applications. To access the data, an application posts a request to the widget and obtains the location information to use it internally. The widget design constitutes a process-oriented architecture model that uses asynchronous requests (publish/subscribe). An advantage of the model is that multiple sources can be combined to derive context information of higher complexity. In addition, the model enables the exchange of widgets providing the same type of information, such as a GPS widget and a WLAN widget for location information. However, the model is less flexible for applications to subscribe and unsubscribe to widgets when the user's context changes quickly. Furthermore, the architecture is less robust to failures of internal components [4] (Fig. 1). A more sophisticated approach is the blackboard architecture, which enables applications to retrieve context by means of a subscription to a centralized component, the blackboard. The blackboard manages the internal information retrieval and notifies applications when a predefined event occurs. To obtain specific context information, an application requests the blackboard or gets a notification about an event that occurred. The blackboard design is a data-centric architectural model that provides asynchronous notifications to inform subscribed applications. On the one hand, the advantage of the model is the ease of adding new sensors or exchanging existing ones, and the ability to provide a flexible configuration for dynamic changes. On the other hand, the model is less efficient in communication because requests are made via a centralized server (the blackboard), and synchronization flaws can occur since the order of inference operations is not predefined [5].
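To make the blackboard model concrete, the following is a minimal sketch of a topic-based blackboard with publish/subscribe notification; all names are illustrative and are not taken from any of the cited frameworks.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of the blackboard model: sensors publish values for a topic to a
// central blackboard, and applications subscribe to topics and are notified
// asynchronously when new values arrive.
public class Blackboard {
    private final Map<String, Object> facts = new HashMap<>();
    private final Map<String, List<Consumer<Object>>> subscribers = new HashMap<>();

    // An application registers interest in a topic such as "location".
    public synchronized void subscribe(String topic, Consumer<Object> callback) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(callback);
    }

    // A sensor (or its wrapper) posts a new value; all subscribers are notified.
    public synchronized void publish(String topic, Object value) {
        facts.put(topic, value);
        for (Consumer<Object> cb : subscribers.getOrDefault(topic, List.of())) {
            cb.accept(value);
        }
    }

    // Synchronous lookup, e.g., for inference components.
    public synchronized Object query(String topic) {
        return facts.get(topic);
    }
}

A GPS wrapper would call publish("location", fix) while an application subscribes to "location"; swapping the GPS wrapper for a WLAN-based one leaves subscribers untouched, which is the flexibility advantage noted above.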
Figure 2. Applications need to register in advance to be connected by the user to the context service. [Diagram: (1) an application applies to the context service; (2) the service confirms after review; (3) the user connects the application via the context service interface on the cell phone; (4) the application provides user information; (5) the service provides context information; (6) context-aware services are enabled.]
LAYER DESIGN FOR CONTEXT INFRASTRUCTURES
Although the context management models differ in their design approach, most of the context architectures apply a layered structure that splits the process of retrieving context information, processing the information, and providing it to the application into five sub steps [3]. The sensor layer detects and registers new sources of context-relevant information, which can originate from physical, software, or logical sensors. For instance, this can be a GPS system (physical), a browser sensor to identify current activities (software), or a combination of sensors to derive information by logical reasoning (logical). The raw data retrieval layer provides interfaces to retrieve sensor information using more abstract functions. For example, the function getPosition() can be provided by the layer to abstract data retrieval from GPS or WLAN sources, as sketched below. The preprocessing layer raises the available raw data to high-level context information by interpretation. This is done by quantification and extraction of raw data and the handling of conflicting, ambiguous, or uncertain information. For instance, when sensor data shows that it is Sunday evening, the user is at home, and the browser accesses a social network, the context of the user can be interpreted as leisure time. The storage/management layer stores context information, enables learning algorithms to enhance the inference, and provides public interfaces using both synchronous and asynchronous communication approaches. The application layer implements the application and is primarily concerned with the user interaction. Depending on the implementation, this layer can be detached from the infrastructure and can be integrated into third-party services.
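As an illustration of the raw data retrieval abstraction, here is a sketch of a getPosition() function that hides whether a fix comes from GPS or WLAN; the types and the confidence-based selection are our own assumptions, not part of the cited architectures.

import java.util.Comparator;
import java.util.List;

// Sketch: one PositionSource per physical sensor (GPS, WLAN, ...); the
// retrieval layer exposes a single getPosition() to the layers above.
interface PositionSource {
    Position currentPosition();
}

record Position(double latitude, double longitude, double confidence) {}

class RawDataRetrievalLayer {
    private final List<PositionSource> sources;

    RawDataRetrievalLayer(List<PositionSource> sources) {
        this.sources = sources;
    }

    // Returns the most confident fix among the registered sources, so that
    // callers never need to know which sensor produced it.
    Position getPosition() {
        return sources.stream()
                      .map(PositionSource::currentPosition)
                      .max(Comparator.comparingDouble(Position::confidence))
                      .orElseThrow();
    }
}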
SERVICE DESIGN AND ARCHITECTURE OVERVIEW
In this article, we propose a telecom operator service that allows users to enrich their mobile applications with context information and thus enable their cell phones to be context aware. A telco is particularly suitable to offer such a platform because it is an intermediary between the user, who gets access to the communication network, and software developers, who use the NGN to provide enriched software solutions beyond the boundary of the Internet. Leveraging this intermediary position is of major importance because a single application has less potential to aggregate all the information required to implement the full functionality of context inference and to include sensors in a uniform,
standardized manner. The service itself has a user interface on the mobile device that allows users to connect an application to the platform. Once connected, the application transmits user information to the platform and in turn gains access to inferred context information. This confirmation addresses potential privacy concerns of the user and is similar to the approach taken by social networks or mobile operating systems when installing new software (Fig. 2, steps 3–6). Unlike context inference services that have been proposed for large-scale deployment so far [6], the telco service involves mobile application developers within the context inference process itself. In other words, developers can suggest rules for how the information they provide has to be processed together with existing context information from the platform in order to infer additional context information of higher meaning. To realize this, the service requires developers to first apply to the telco for authorization (Fig. 2, steps 1 and 2). During this review process, the telco provides the current context model to developers. This can be a list of high-level context categories such as location, social environment, or activity. For each category, a set of predefined states and the information sources that support the particular inference is given. As an example, the category social environment could distinguish between business context and leisure context and list social networks, calendar, and GPS information as sensors to identify the state. Developers in turn can analyze how their information can contribute to the inference of a state, or can identify new, more complex states. Additionally, they indicate which rules have to be applied for the inference. These could be a set of statements of the form "if the state in category X is A, then the state in category Y is B," as sketched below. If an application is approved by the telco, users are able to connect it to the context service. In that case, the service informs the user about all information that is exchanged between the platform and the particular application. Even though the service design heavily involves application developers, the right to process, manage, and distribute context information still remains exclusively with the telco, which was authorized by the user in the first place.
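One way to represent such developer-suggested rules is as plain data that the telco can review before activating them. The class and its fields below are our own illustration, not the platform's API.

import java.util.Map;

// A rule of the form "if the state in category X is A,
// then the state in category Y is B".
public record InferenceRule(String ifCategory, String ifState,
                            String thenCategory, String thenState) {

    // True when the rule's precondition holds for the known category states.
    public boolean applies(Map<String, String> knownStates) {
        return ifState.equals(knownStates.get(ifCategory));
    }
}

A developer of a calendar sensor might, for example, submit new InferenceRule("location", "office", "socialEnvironment", "business") for review.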
BENEFITS TO THE USER, DEVELOPERS, AND TELCOS
When taking the user's perspective, the platform enables the cell phone and its applications to be smart and context-aware, as described initially. Two distinct improvements in mobile applications
can be distinguished: pre-selecting information that is pulled by the user, to reduce the user interaction, and proactively pushing information to the user, to enable new forms of interaction. For instance, a cell phone can preselect the business calendar or contacts when the user is at work, or proactively remind the user to buy items in a shop nearby when leaving the office to meet friends. Such services reduce the time and effort the user spends searching for relevant information, and enable a new set of innovative mobile applications. In consequence, telecom operators could consider a fixed-rate plan for the user market as an eligible option to build a business case. This would enable telcos to leverage their intermediary position and to bind their customers closer to them. Application developers, on the other side, get the opportunity to enhance their service offering and thus increase the potential for additional revenue. That allows a developer business case, which could take the form of a revenue sharing model. In general, the crux of the service design is to provide incentives for developers to contribute high-quality user information. A business model could charge every registered application a subscription fee. These fees would be redistributed among the developers according to how often their contributed information is used by other developers to infer a context. For instance, the shopping reminder uses location and social environment information to measure whether it is appropriate to push shopping information; the developers of these sensors would therefore earn a share of the fee. As a consequence, those developers who drive sales when using context information are willing to pay for it, and those who can provide valuable user information can generate revenue beyond the subscription fee. In this business model, telecom operators could set a service fee of several percent for providing the infrastructure and maintenance of the system. On the telecom operator side, the service design has particular benefits in comparison to other approaches that have been proposed [6]. First, less effort has to be spent to scale the service up and make it widely used. By providing incentives to register an application and to offer valuable user information, third-party developers relieve the telco of the burden of aggregating and preprocessing user information on its own. Second, more context information is available, which enables a more accurate and less error-prone context inference. This is because a large number of sensors allows compensating for sensor failures with complementary sensors and increases the reliability of inference. However, particular attention has to be paid to ambiguity or conflicts in inference rules and sensor information, as well as to complexity in the inference structure.
IMPLEMENTATION
The implementation of the service is embedded within the IMS infrastructure of the telco. It uses a state-of-the-art blackboard context management model and organizes the infrastructure in five functional layers: sensor, raw data retrieval, preprocessing, storage/management, and application. The interface implements common SIP,
HTTP/SOAP, or HTTP/REST protocols and uses an ontology-based context model in combination with XML. This section focuses on the interface design and the inference process, and presents a new approach to processing the context information within the preprocessing layer. The article makes a particular effort to illustrate how this suits the service design described earlier.
INTERFACE DESIGN TO DEVELOPERS
Regarding interfaces to developers, it is important to distinguish between interfaces to external sensors, to obtain user information, and interfaces to applications, to provide context information. External sensors can be integrated either by the Session Initiation Protocol (SIP) or by HTTP using SOAP- or REST-compliant communication. In particular, the interface uses the SUBSCRIBE/NOTIFY capabilities of the SIP Event Framework to implement asynchronous event handling and reliable session control. In this case, telcos will poll the information, external parties will upload the data, or event-based communication will be used. On the other hand, HTTP/SOAP and HTTP/REST represent common synchronous, client-triggered communication protocols for web services. This approach allows developers to use existing web standards and to implement the interface with less effort. Context retrieval, in contrast, implements only the SIP protocol, because changes in context are frequently sent between the server (platform service) and the client (application). An implementation using synchronous protocols such as HTTP would require the client to constantly poll the server for updates or to develop workarounds. Towards applications, the platform provides a SIP API for Java and major mobile operating systems such as Android (Google), iOS (iPhone/Apple), and webOS (Palm). The Java API can be implemented by J2ME-based MIDlets on mobile devices as well as servlets of web services. The SIP communication is realized using the SIP J2ME Specification (Java Specification Request [JSR] 180), which is based on Mobile Information Device Profile (MIDP) 2.0, and the SIP Servlet Specification (JSR 116) for web services. The SIP servlet interface has been developed by the Java Community Process to enable SIP services to resemble the common Java Servlet architecture. For applications based on Android OS, iOS, or webOS, a similar approach is taken. Even though a particular SIP API has not been released for these operating systems so far, SIP is highly suitable for proactive, push-based communication with low battery drain. The SIP API particularly provides listener and notification classes to enable asynchronous communication between the platform and the sensors, either to provide context information or to transmit user information. The HTTP/SOAP and HTTP/REST APIs allow establishing loosely coupled communication towards sensors, enabling developers to implement lightweight protocols on the basis of popular web standards and XML communication. This synchronous API can be used for providing user information to the platform (Fig. 3).
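To illustrate the synchronous HTTP/REST path, here is a sketch of how an external software sensor might push user information to the platform. The endpoint URL and the XML payload are assumptions; the article specifies only that REST/SOAP over HTTP with XML is used.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of a sensor-side REST client posting an observation to the platform.
public class RestSensorClient {
    private static final String ENDPOINT =   // hypothetical endpoint
            "https://context-platform.example.com/sensors/browser-activity";

    public static void main(String[] args) throws Exception {
        // Illustrative payload: domain, value, time stamp, confidence.
        String xml = "<observation domain=\"browserActivity\">"
                   + "<value>social-network</value>"
                   + "<timestamp>2011-01-17T09:30:00Z</timestamp>"
                   + "<confidence>0.8</confidence>"
                   + "</observation>";

        HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT))
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString(xml))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Platform answered: " + response.statusCode());
    }
}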
Figure 3. The service architecture. [Diagram: sensors on mobile devices, web services, workstations, and the urban environment reach the context service over SIP/HTTP through the communication infrastructure; the context service communicates with context-enabled mobile applications on the user's cell phone via SIP.]
COMMUNICATION LANGUAGE FOR CONTEXT
For the communication language, an ontology-based context model such as the Web Ontology Language (OWL), embedded in PIDF-specified XML-based communication, is used. The context model provides high-level ontologies for context information such as movement and position for the domain location. If the domain allows a more detailed specification, additional subdomains are provided. For example, position could be distinguished in more detail, namely at home, office, and city center. This structure forms an ontology-based hierarchical tree, a suitable model to provide third-party developers with a simple, flexible, and expressive context model [7]. To realize this approach, the service uses the Presence Information Data Format (PIDF) as an XML specification for exchanging context information (RFC 3863). Additionally, the Rich Presence Information Data Format (RPIDF) and its Timed Presence extension, both backwards compatible with PIDF, are used to include more complex and historic/future context information if available (RFC 4480 and RFC 4481).
This approach particularly ensures standard-compliant communication and allows applications and the platform to exchange required information such as the context domain, the time stamp of the information, and a confidence value to trade off between conflicting pieces of information.
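For illustration, here is a minimal PIDF document (RFC 3863 base schema) as the platform might exchange it, embedded as a Java constant to stay close to the platform's implementation language. The <context> extension element and its attributes are our own invention; the article does not publish the platform's extension schema.

// Illustrative PIDF payload carrying one context tuple.
public final class PidfExample {
    public static final String CONTEXT_NOTIFY =
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
        "<presence xmlns=\"urn:ietf:params:xml:ns:pidf\"\n" +
        "          entity=\"pres:user@operator.example.com\">\n" +
        "  <tuple id=\"ctx1\">\n" +
        "    <status><basic>open</basic></status>\n" +
        "    <!-- hypothetical extension: domain, state, confidence -->\n" +
        "    <context domain=\"location\" state=\"office\" confidence=\"0.9\"/>\n" +
        "    <timestamp>2011-01-17T09:30:00Z</timestamp>\n" +
        "  </tuple>\n" +
        "</presence>\n";
}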
MANAGEMENT AND PROCESSING OF CONTEXT ON THE TELECOM OPERATOR SIDE
We propose a context architecture that resembles a state-of-the-art context layer design (sensor, raw data retrieval, preprocessing, storage/management, and application layer) and is designed after the blackboard context management approach. In particular, the sensor layer is realized by a set of distributed sensors (hardware, software, or logical sensors), which register via SIP REGISTER messages or HTTP POST requests at the raw data retrieval layer (Fig. 4). Having registered with the platform, the sensors implement topic-based SIP PUBLISH/SUBSCRIBE communication or, again, HTTP to transmit the user information to the platform. Having obtained the sensor information, the platform categorizes the user information according to its context domain. This allows the raw data retrieval layer to provide
functions to the preprocessing layer that abstract from the type of sensor and instead focus on its context domain. The inference of context itself is done in the preprocessing layer using a module-layer design. This step is described in more detail in the next section. The storage/management layer handles the storage of information and the learning process based on previous (domain and context) information, as well as providing an interface to the applications. In particular, the storage of information is done by calling the functions of the raw data layer or requesting context information from the preprocessing layer and storing the information in the database. Additionally, learning algorithms are applied to raw data and high-level context information to derive patterns that contribute to the quality of context inference. The interface uses the SIP protocol and applies the PIDF or RPIDF specification for XML-based communication to provide context information to a set of distributed clients that are connected to the platform.
Figure 4. Processing of context information within the service infrastructure. [Diagram: sensors deliver user information to the raw data retrieval layer; domain information flows up to preprocessing/context inference; context information flows on to storage/management and from there to applications.]
A CONCEPT TO PROCESS CONTEXT
The inference of context in particular is a complex process, since information can be conflicting, ambiguous, or uncertain, and because context is mostly inferred by implicit rules. Therefore, we present a module-layer structure for the context manager, which is located in the preprocessing layer of the infrastructure. This approach delivers several benefits: it allows inferring context information more gradually, hence reducing complexity, and enables developers to extend and adapt the functionality easily at runtime. In addition, it incorporates a structure that counteracts synchronization issues and resembles the internal ontology-based context model. The design organizes three layers according to the degree of their complexity, starting from processing raw data up to high-level context information (Fig. 5). Furthermore, each layer can use the information that has been inferred within one of the layers before it. For instance, the low complexity layer requests location information (e.g., GPS) and the browser activity from the raw data layer. This information is used to infer which location domain (e.g., home, office, city center) and browser activity domain (e.g., social networks, company's intranet, and news) apply to the user's current situation. The medium complexity layer uses this context information to identify the user's activity domain. As an example, the user is at home but accesses the office computer at work via browser login; hence, the user's activity domain is work. The high complexity layer requests time and date from the raw data layer and the activity domain from the medium complexity layer to anticipate potential activities based on learning algorithms. For example, the user always works from home on a Monday morning but leaves at 1:30 p.m. to take public transportation. Consequently, the high complexity layer could infer a high probability for the upcoming context to be train for the movement domain. To translate lower-level information into higher-level information, each layer contains a set of modules. A module infers a particular aspect of the user's context by resembling the domains of the ontology-based context model.
In the example above, raw data information was used to infer the location domain and the browser activity. To realize this, the low complexity layer would contain two modules, one for the location domain and one for the browser activity. These modules infer the particular context domain and provide the information back to the low complexity layer, which in turn can provide the information to the medium or high complexity layer. Modules can be added or adapted at runtime, and sensors can be connected or disconnected since they can be substituted by others. Information conflicts, uncertainty, and ambiguity are also managed by the modules internally. This is done by requesting the required information, quantification of the information using statistical methods, fuzzification, which analyzes whether the quantified value lies within or outside a predefined range, and finally the application of rules to infer the context information [8, 9].
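The layer/module interplay of the running example can be sketched as follows; the interfaces, class names, and toy rules are illustrative, not the prototype's code.

import java.util.HashMap;
import java.util.Map;

// Sketch of the module-layer design: each module infers one context domain
// from lower-level information, and layers run in order of complexity, so
// every module can read what lower layers produced.
public class InferenceEngine {
    interface Module {
        String domain();                          // e.g., "location"
        String infer(Map<String, String> known);  // returns the domain state
    }

    // Low complexity: map raw browser data to a browser activity domain.
    static class BrowserActivityModule implements Module {
        public String domain() { return "browserActivity"; }
        public String infer(Map<String, String> known) {
            String url = known.getOrDefault("raw.browserUrl", "");
            return url.contains("intranet") ? "companyIntranet" : "socialNetwork";
        }
    }

    // Medium complexity: "if the user accesses the intranet, activity is work".
    static class ActivityModule implements Module {
        public String domain() { return "activity"; }
        public String infer(Map<String, String> known) {
            return "companyIntranet".equals(known.get("browserActivity"))
                    ? "work" : "leisure";
        }
    }

    public Map<String, String> run(Map<String, String> rawData) {
        Map<String, String> known = new HashMap<>(rawData);
        // Layers ordered by complexity; each module adds its domain state.
        Module[][] layers = {
            { new BrowserActivityModule() },  // low complexity layer
            { new ActivityModule() }          // medium complexity layer
        };
        for (Module[] layer : layers)
            for (Module m : layer)
                known.put(m.domain(), m.infer(known));
        return known;
    }
}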
FIRST EVALUATION AND LESSONS LEARNED
As a first proof of concept for the module-layer context inference design, a simplified version of the service has been implemented in the form of a Java servlet (platform) and an Android-based mobile client. Three sensors were developed, namely a mobile GPS (hardware) sensor, a social network (software) sensor, and a logical sensor, and implemented in the module-layer structure within the preprocessing layer. By using the concept of neural networks, particularly the Neuroph Java Neural Network Framework, the applicability
Figure 5. The inference model establishes a three-layer structure. [Diagram: within preprocessing/context inference, the low complexity layer (location domain and browser activity modules), the medium complexity layer (activity domain module), and the high complexity layer (movement domain module) sit between raw data retrieval and storage/management.]
Even though not fully implemented, several lessons learned could be derived from the test implementation. The approach supports the service design with an ontology-based context model and rule-based reasoning, which is particularly accessible to application developers since it mirrors the daily-life situations of users. Additionally, the layers provide a clear hierarchy, which breaks the context inference into several sub-steps, and the modules abstract from the actual sensors. This lowers the complexity of inference and makes the platform dynamically extendable and adaptable. However, the tests show that particular attention has to be paid to the governance of the platform, which requires telcos to maintain the logical order of modules within the layer structure and to monitor conflicting rules and occurring errors. Moreover, a fast-scaling service can cause excessive server-client communication, particularly for intermediaries such as SIP proxies. To address this issue, learning algorithms can be used to reduce context updates and to schedule requests to the server.
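As a rough illustration of that last point, the sketch below shows one simple way a client could suppress redundant context updates, pushing a new value to the server only when the inferred context has changed and a minimum interval has elapsed. This is a hypothetical sketch, not part of the described prototype.

    class ContextUpdateScheduler {
        private String lastSent = null;
        private long lastSentAt = 0;
        private final long minIntervalMs;

        ContextUpdateScheduler(long minIntervalMs) {
            this.minIntervalMs = minIntervalMs;
        }

        // Returns true if this context value should be pushed to the server now.
        boolean shouldSend(String context, long nowMs) {
            boolean changed = !context.equals(lastSent);
            boolean tooSoon = nowMs - lastSentAt < minIntervalMs;
            if (changed && !tooSoon) {
                lastSent = context;
                lastSentAt = nowMs;
                return true;
            }
            return false;
        }
    }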
CONCLUSION AND FUTURE WORK
Context awareness services have the potential to enhance user interaction on mobile devices if the context inference process is scalable, robust, and extendable, and if the interface design provides SIP- and HTTP-based communication. This article presented a service that allows users to enrich mobile applications with context information by connecting them to a telecom operator platform. Based on an overview of current models for processing context, the service design and its implementation were described. The article presented a new module-layer approach that mirrors context domains within a layered inference structure, and a first proof of concept was outlined. More specifically, the approach promises more accurate and at the same time less error-prone context inference.
Additionally, the service design is expected to be adopted more quickly by third-party developers, and therefore to scale up faster. Future research needs to investigate how the module-layer structure can be maintained and governed, and how the server-client communication can be reduced.
ACKNOWLEDGMENT
This research was funded by Vodafone R&D Group Munich, Germany. The author thanks Anas Al-Nuaimi, Petromil Petkov, and Julian Pye for their review and valuable input.
REFERENCES
[1] M. Ilyas, IP Multimedia Subsystem (IMS) Handbook, CRC Press, 2009.
[2] J. Krumm (Ed.), Ubiquitous Computing Fundamentals, CRC Press, 2010.
[3] M. Baldauf et al., "A Survey on Context-Aware Systems," Int'l. J. Ad Hoc Ubiquitous Comp., vol. 2, no. 4, 2007, pp. 263–77.
[4] A. K. Dey and G. D. Abowd, "A Conceptual Framework and a Toolkit for Supporting Rapid Prototyping of Context-Aware Applications," Human-Comp. Interactions J., vol. 16, no. 2, 2001, pp. 97–166.
[5] T. Gu, H. K. Pung, and D. Q. Zhang, "A Service-Oriented Middleware for Building Context-Aware Services," J. Net. Comp. Apps., vol. 28, no. 1, 2005, pp. 1–18.
[6] Y. Oh, J. Han, and W. Woo, "A Context Management Architecture for Large-Scale Smart Environments," IEEE Commun. Mag., vol. 48, no. 3, 2010.
[7] T. Strang and C. Linnhoff-Popien, "A Context Modeling Survey," Wksp. Advanced Context Modeling, Reasoning, Management, Ubiquitous Comp., 2004.
[8] L. A. Zadeh, "Fuzzy Sets as a Basis for a Theory of Possibility," Fuzzy Sets Sys., vol. 100, no. 1, 1999, pp. 9–34.
[9] S. Guillaume, "Designing Fuzzy Inference Systems from Data: An Interpretability-Oriented Review," IEEE Trans. Fuzzy Sys., vol. 9, no. 3, 2001.
BIOGRAPHY
PHILIPP GUTHEIM ([email protected]) is currently working toward his M.S. in information management and systems at the University of California, Berkeley. His research interests include new end-user services in NGN, IMS, context-aware systems, and ubiquitous computing. He received his B.S. degree from the Munich School of Management, Ludwig-Maximilians-University Munich, in 2009 and an honors degree in technology management from the Center for Digital Technology and Management in Munich.
NEW CONVERGED TELECOMMUNICATION APPLICATIONS FOR THE END USER
Interpersonal Context-Aware Communication Services François Toutain, Orange Lab Lannion Ahmed Bouabdallah, Telecom Bretagne Radim Zemek, Orange Lab Tokyo Claude Daloz, Orange Lab Lannion
ABSTRACT
Context awareness is a very large research domain, encompassing many issues ranging from physical measurement of a given situation to the question of social acceptance. It appears as one of the most promising technologies to evolve current communication services into fluid, flexible, automagic, intuitive communication means. We review in this article the various applications of context awareness to convergent interpersonal communication services, making clear the evolutionary potential introduced by these techniques, from the classical digital phone network to truly smart services, many of which remain to be invented.
INTRODUCTION
Voice and data services involving interpersonal communication represent a substantial part of the businesses of telecom operators as well as of many new emerging players. Interpersonal communication characterizes written and verbal exchange of information, which can take place in both one-to-one and group settings. However, the way people contact each other, the media, and the communicated information often depend on their own situation and what they know or assume about their contact's situation. While current communication services provide basic means to set up and manage conversations, they do not integrate knowledge about the parties' environments, capabilities, and availabilities. There is considerable room for improvement in this matter. A possible answer lies in the contextualization of communication services, which takes into account the situation of the parties to adapt the different steps of the communication (from initiation to termination) to users' needs. Given this ability, these new services can be better suited to users, providing more comfort, more relevant content, and more accurate, natural, or intuitive behaviors. They can also provide real-time information about their contacts; in short, empower users.
Several context-aware communication services are already deployed or have been prototyped in laboratories. However, the domain is still in its infancy, and we expect numerous innovations to emerge in the coming years. In this article we present some of the most significant work done so far and outline possible directions for future research efforts. This article is organized as follows. The first two sections provide definitions and examples of such context-aware services. We then present several significant achievements in the domain, from current industrial offers to research prototypes, and review the associated challenges posed by the various approaches. We describe the gap between research advances and large-scale industrial deployment, and provide a tentative timeline for the introduction of future context-aware services. The final section concludes the article.
SOME BASIC DEFINITIONS
CONTEXT DATA HANDLING
Context represents any information that can be used to characterize the situation of an entity, where an entity is a person, a place, or an object that is considered relevant [1]. Fundamental context data, called the primary/primitive types, encompass identity, location, activity/status, and time. Beyond that, a great diversity of data sources can be used as contextual information. Focusing on communication applications, we identify five different context dimensions:
• User (identity, activity, status, agenda, location, movement, mood, age, etc.)
• Network (access networks, identifier [e.g., cell ID], call records, received signal strength indicator [RSSI], round-trip time, etc.)
• Social (people around the user, relationships with them, intimacy level, whether the user is attending a public event, subject of a conversation, etc.)
• Physical (illumination level, ambient noise level, temperature, weather, etc.)
• Device (device capabilities, energy used, remaining energy level, installed applications, nearby devices, current volume setting, alert mode, etc.)
Different sources can provide the same data; however, their accuracy and credibility levels may differ. For example, location information derived from the user's schedule may differ from location information obtained via GPS or cell-ID positioning. This discrepancy can indicate several different situations: for example, that the user has forgotten about the appointment, that the appointment has been cancelled, or that the mobile phone has been stolen. Therefore, it is crucial for the success of context-aware services that context data accuracy and confidence be properly managed.
Much context data can be provided by way of automated processes working on raw data acquired by sensors:
• Physical sensors (e.g., accelerometers), perceiving a physical property and providing a continuous, discrete, or symbolic measure.
• Logical sensors, a device or computer process able to sense some logical state of a given process and provide a measure (e.g., computer activity).
• Aggregated sensors, a logical construction providing data derived from another process, typically by fusion from several sensors. An example is a virtual human presence sensor, which can be built with a movement detector + noise sensor + chair pressure sensor + computer activity sensor + … (a sketch of such a fused sensor follows at the end of this subsection).
In addition to sensors, a great variety of context data can be acquired by operators from their logs, regarding call activity, mobile phone location, service configuration, and so on.
Context data are generally aggregated into a model. According to [2], a well-designed model is a key to accessing context in any context-aware system. Current research aims at delivering generic context models, which allow sophisticated querying as well as reasoning algorithms that facilitate context sharing and interoperability of applications. The most promising approach uses semantic models based on ontologies [2]. The implementation of context data models is the subject of various storage and distribution strategies [3], giving rise to architectural alternatives between centralized and distributed systems, and between classical and innovative approaches (e.g., a micro-blogging platform used for context publishing).
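The aggregated sensor above can be pictured with a small Java sketch: a hypothetical presence sensor that fuses several normalized lower-level readings. The interface, the averaging fusion, and the threshold are illustrative assumptions rather than a prescribed design; as noted above, a real system would weight sources by their accuracy and credibility.

    import java.util.List;

    interface Sensor {
        // Normalized reading in [0, 1]; the source may be physical or logical.
        double read();
    }

    class PresenceSensor implements Sensor {
        // e.g., movement detector, noise sensor, chair pressure, computer activity
        private final List<Sensor> sources;

        PresenceSensor(List<Sensor> sources) {
            this.sources = sources;
        }

        @Override
        public double read() {
            // Simple fusion: average the normalized readings.
            return sources.stream().mapToDouble(Sensor::read).average().orElse(0.0);
        }

        boolean personPresent() {
            return read() > 0.6; // illustrative threshold
        }
    }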
CONTEXT-AWARE SERVICES
Context-aware processing generally involves adapting to context conditions or using context information. From a low-level operational point of view, there are two fundamental modes of operation for a context-aware process. The first mode is context-triggered action, as described in [4], whereby the transition to a given context state triggers an action.
In this mode a change in the context situation is thus the cause for performing an action, which may or may not impact the current context. An example application is a contextual reminder service, which is loaded with an IF-THEN rule describing the context conditions that give rise to a notification message. As soon as the context situation is similar (or close enough, depending on the rule characterization) to the one described, the context reminder service fires and delivers the associated message (a minimal sketch follows at the end of this subsection).
The second mode of operation can be described as context-dependent reaction. In this mode the process adapts its response to a stimulus (e.g., an event or a user action) according to the user's context data. More precisely, the application response depends on both the event and the current context. An example of a context-dependent reaction could be a phone service that, upon receiving the event of an incoming call, would use its context data telling it the user is at the movies to adapt its response and choose to deflect the call to a messaging service. In real-world situations, the response also depends on user preferences and service parameterization (which may be perceived as part of the user's context). Also, the event can be multifaceted (e.g., an incoming call from a given caller, with a given call priority, subject, etc.).
Context-aware services, using the aforementioned modes of operation, will allow operators to provide truly personalized services, taking into account the users' context as it evolves over time.
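A minimal sketch of the context-triggered action mode might look as follows: a contextual reminder loaded with an IF-THEN rule that fires on the transition into a matching context state. The class and the rule representation are hypothetical illustrations, not taken from [4].

    import java.util.Map;
    import java.util.function.Predicate;

    class ContextReminder {
        private final Predicate<Map<String, String>> condition; // the IF part
        private final String message;                            // the THEN part
        private boolean armed = true;

        ContextReminder(Predicate<Map<String, String>> condition, String message) {
            this.condition = condition;
            this.message = message;
        }

        // Called on every context change event.
        void onContextChange(Map<String, String> context) {
            if (condition.test(context)) {
                if (armed) {
                    armed = false; // fire once per matching transition
                    System.out.println("Reminder: " + message);
                }
            } else {
                armed = true; // re-arm when the context leaves the condition
            }
        }
    }

For example, new ContextReminder(c -> "supermarket".equals(c.get("location")), "Buy milk") would fire once each time the user's location context transitions to the supermarket.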
HOW CONTEXT AWARENESS COULD ENRICH INTERPERSONAL COMMUNICATION SERVICES
Context-aware services range from classical applications augmented with some degree of context awareness to a whole new class of innovative services specifically built around context. In the rest of this section we illustrate the potential of context-aware services with several scenarios, some features of which are inspired by known studies.
ADAPTIVE MOBILE PHONE SCENARIO
My mobile phone changes its operation mode depending on my situation. It recognizes situations such as when I am working, when I am driving, when I am socializing, and when I need privacy. For each mode the address book can present different contacts, sorted differently. The application menu also adapts so that, for instance, professional applications are in front only when I am working. The call handling system knows how to prioritize incoming calls and whether to present them or not in order to minimize disruption, depending on my current activity, my capabilities, and the caller's situation. In addition, my mobile phone evolves with me, by suggesting new functionalities when I may need them.
ADAPTIVE MESSAGING SERVICE
When I am watching TV, an incoming message is displayed on the TV screen rather than on my mobile. As soon as someone enters the living room, the display of the message is obfuscated or switched to my mobile phone in order to preserve my correspondence privacy.
The service knows which messages are relevant and which can be safely displayed in the presence of other people. The same messaging application is able to adapt its interface to the abilities of my five-year-old daughter. When I join her in front of the TV set, additional tools are displayed for my benefit in case I wish to help my daughter with some difficult operation.
The two scenarios above describe dynamic adaptation to changing conditions. Adaptation can concern interfaces, service behavior, dynamically changing priorities, content adaptation, and also call and message routing on various devices. The general benefits for users are improved relevance and comfort.
CONTEXT-AWARE ADDRESS BOOK
On my devices, I can consult the real-time situation of my contacts. Every contact is displayed with animations and data informing me clearly and without effort. At a glance, I can see that a given contact is unreachable/busy/currently at work/inside a movie theatre/commuting/with other people/currently standing in line, and so on. I can sort the contacts according to a given context dimension, using display strategies that give the best possible rendering (e.g., a map for location-based data, or a list sorted by decreasing availability, mood, etc.). I can see which contacts are in front of a TV set and what they are watching, who is currently listening to music, who is browsing the web and on which site, and so on. I can also review historical data and get some predictions about the future situation of any given contact. I can infer who will be the first available for a chat, who is nearby for a coffee break, at what time of the day I can reach someone, and so forth.
The above scenario describes an alternative way to make use of context knowledge: providing it to the user, thereby empowering her, rather than imposing on her some predefined adaptation, be it well received or not. Presenting the user with complex context data is an important issue. For instance, there are several ways by which a caller could be informed that the callee is at the movies before the call goes through. One is the status mechanism, which may be automatically filled for better user experience and service performance. Another is for the network to build up some aggregate context state, using the various context data it can gather about the callee's current situation. Yet another way is for the authorized caller to access the callee's device and get a taste of the callee's current situation.
CONTEXT-SENSITIVE DIALER
My device provides me with call suggestions. By analyzing availability, mutual interests, and the history of common calls and their subjects, it determines when I can best call a friend, and can even initiate the call if my configuration allows it. For instance, the device can suggest calling a contact in order to set up a coffee break, given our current locations and agendas, and the fact that we usually take the opportunity to share a break when possible. When I face a plumbing issue at home, the device suggests a given plumber based on location and ability to fix the issue quickly. My 15-year-old niece uses the same service with her best friend, so audio communication is automatically set up whenever one of them is engaged in some interesting activity. Thus, they share a strong, close relationship.
CONTEXT-SENSITIVE NOTIFIER
With this application, I can be automatically notified whenever some specified event occurs, concerning me or my contacts, with no manual update from either me or my contacts. For instance, I can set up a notification as soon as I come back home, or as soon as Alice is back at her office and available for a chat, or whenever I meet Bob out of the workplace.
HABIT-BASED AUTOMATION
My telecommunication service usage habits are monitored and analyzed in order to provide relevant automated enhancements. After detecting that I never answer a call when driving, the service offers to automate call transfer to voice mail, while providing me with a means to cancel it easily on my mobile phone. After detecting that I often call my wife when I visit a supermarket, the service asks my authorization, then offers to notify her whenever I go shopping, providing her with the opportunity to brief me about the grocery list before I leave the store.
The last three scenarios are modest attempts at describing applications specially designed to make use of context data about users. Indeed, the increased availability of context data is envisaged to foster the emergence of a whole class of innovative applications and services that are specifically built around context. These new services are nowadays largely unheard of.
FROM EXISTING SERVICES TO RESEARCH QUESTIONS
The domain of context-based services is an active area, integrating preliminary industrial achievements as well as academic studies focusing on the remaining tough questions. In this section we illustrate this problem area by first looking at some significant work, from which we infer the research issues.
KNOWING ABOUT HIGH-LEVEL CONTEXT IS COMPLICATED
CenceMe [5] classifies the user's current location, physical activity (e.g., walking, sitting, running, mobile), whether the user is in a conversation, and other aspects of the user's context, such as whether the user is at a party. CenceMe demonstrates how mobile sensors of multiple types (e.g., accelerometer, GPS, sound, camera) can process raw sensor data to produce a richer description of the user's situated context. All recognitions are eventually uploaded to a server and shared with the user's Facebook social network (Fig. 1).
SenSay (Sensing and Saying) [6] is a context-aware mobile phone that adapts to dynamically changing environmental and physiological states.
Figure 1. CenceMe user interface.
In addition to manipulating ringer volume, vibration, and phone alerts, SenSay can provide remote callers with the ability to communicate the urgency of their calls, make call suggestions when users are idle, and provide the caller with feedback on the current status of the SenSay user. A number of sensors, including accelerometers, light sensors, and microphones, provide data about the user's context. A decision module uses a set of rules to analyze the sensor data and manage a state machine composed of uninterruptible (busy), idle, active, and normal states (see the sketch at the end of this subsection). The phone alleviates the cognitive load on users by various methods, including detecting when the user is uninterruptible and automatically setting the ringing mode to silent ("manner") mode.
Although some efforts have already shown great results, as presented above, the general problem of knowing about high-level context is a very difficult issue. This problem is best described by considering the gap between the low-level raw data provided by the known context sensors and the high-level knowledge that can be required by smart applications involved in interactions with human users. Let us illustrate this with an application pondering whether to interrupt a user with an incoming event, considering that the user is sitting at her desk with no recent activity on her devices (mouse, keyboard, etc.). Is the user deep in thought, about to make a mathematical breakthrough (better not interrupt her in these cases), daydreaming, or maybe waiting for a call for lunch?
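The following compact sketch suggests how a SenSay-style decision module could map aggregated sensor readings onto the four phone states and derive a ringer policy. It is a loose reconstruction for illustration; the rules and thresholds are invented and are not those of SenSay [6].

    enum PhoneState { UNINTERRUPTIBLE, IDLE, ACTIVE, NORMAL }

    class DecisionModule {
        // Inputs are assumed to be aggregated over a sliding window.
        static PhoneState decide(boolean calendarBusy, double motionLevel,
                                 double ambientLight, long msSinceLastUse) {
            if (calendarBusy && ambientLight < 0.2 && motionLevel < 0.1) {
                return PhoneState.UNINTERRUPTIBLE; // dark, still, scheduled: likely a meeting
            }
            if (msSinceLastUse > 30 * 60 * 1000L && motionLevel < 0.1) {
                return PhoneState.IDLE; // phone untouched and user not moving
            }
            if (motionLevel > 0.5) {
                return PhoneState.ACTIVE; // user is walking or otherwise in motion
            }
            return PhoneState.NORMAL;
        }

        static String ringerPolicy(PhoneState s) {
            switch (s) {
                case UNINTERRUPTIBLE: return "silent (manner) mode, let callers flag urgency";
                case IDLE:            return "normal ring, surface call suggestions";
                default:              return "normal ring";
            }
        }
    }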
LEARNING FROM CONTEXT
Orange Lab Tokyo has been working on a context awareness enabler and prediction engine, tailored to enable user habit detection with respect to communication services. This approach allows the user to easily configure a standard profile with respect to two services: incoming call management and notification. This simple profile is then enriched by detecting habits related to the user's context. Contextual data is analyzed by the prediction engine to understand user preferences and communication habits. In the same way, the management of incoming calls has been addressed by Microsoft Research in their Bayesphone service. This client-server system predicts whether the user would like to be interrupted by a phone call during a meeting or have the system take a message instead [7].
The two previous approaches rely on single-user context-based learning techniques. The inherent limitations of such an approach are linked to the learning duration necessary before the system becomes effective, and its inability to adapt to a change in a user's habits. To address these shortcomings, a natural way consists of taking into account the collective behaviors of a large user population in order to determine some common behavior patterns. For instance, a rule obtained through this approach could state that users who are at a movie theatre usually turn off ringing. Applying such a technique (e.g., from ethnography) to context awareness can be a creative way to extract knowledge from a large set of users, which can prove very effective for some applications. The subject is described in some detail in [8], where the authors introduce a context-aware system aimed at predicting a user's preference using the past experiences of like-minded users. It remains to be assessed how and to what extent this principle can be applied to communication services.
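To suggest how a population-level rule such as "users at a movie theatre usually turn off ringing" might be extracted, the sketch below counts (context, action) observations across many users and emits a rule once support and confidence thresholds are met. This frequency-counting stand-in is far simpler than the collaborative filtering of [8]; all names and thresholds are hypothetical.

    import java.util.HashMap;
    import java.util.Map;

    class PopulationRuleMiner {
        // (context -> action) pair counts, and per-context totals, across the population.
        private final Map<String, Integer> pairCounts = new HashMap<>();
        private final Map<String, Integer> contextCounts = new HashMap<>();

        void observe(String context, String action) {
            pairCounts.merge(context + "->" + action, 1, Integer::sum);
            contextCounts.merge(context, 1, Integer::sum);
        }

        // Emit a rule only if the action dominates in this context across enough users.
        boolean supportsRule(String context, String action, int minSupport, double minConfidence) {
            int pair = pairCounts.getOrDefault(context + "->" + action, 0);
            int total = contextCounts.getOrDefault(context, 0);
            return total >= minSupport && (double) pair / total >= minConfidence;
        }
    }

For instance, after many calls to observe("movie_theatre", "ringer_off"), supportsRule("movie_theatre", "ringer_off", 100, 0.9) would confirm the rule.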
ANY BUSINESS HERE?
Because context awareness is a recent domain in the industry, such services are only beginning to emerge on the market. For example, Foursquare may well be one of the first companies monetizing context awareness activities. Foursquare offers location services delivered by way of embedded software available for Android phones and the iPhone. The startup is very innovative in defining fun and appealing functionalities, such as recognizing an individual as the mayor of a place where she regularly shows up. The service encourages commercial locations to provide special offers to their mayors, thus setting up friendly competitions and compelling users to visit the place and buy food/drinks or other services. An online forum dedicated to the business is also offered, allowing buzz discussions around it, with mutual benefits. Notably, Foursquare decided early on to provide context application programming interfaces (APIs) to third parties, and several services have already been built on top of these APIs (Fig. 2).
Figure 2. FourSquare user interface.
Despite this interesting approach, several experts note that users are often unwilling to pay for mobile context-aware services [9]. Prospective marketing work is still needed in this area. For instance, the various actors may investigate fulfilling specific roles in the context management ecosystem. A plausible evolution of the industry is to implement a federation model, with some actors being context providers, context aggregators, or context brokers. Multiparty agreements may well be found between these actors, allowing value creation and distribution. The emergence of a lucrative context information business is yet to be witnessed.
USER ACCEPTABILITY IS YET TO BE CONQUERED
Word of Mouth is an experimental communication service developed by Orange Lab Lannion, aimed at web surfers, providing them with some context information, in real time, about other people who happen to visit the same website they are visiting. The service is implemented as a plug-in for the Firefox browser, and provides a chat window for shared textual communication, along with access to the user's address book. By using an authenticated identity, the user can actually use her Orange address book. This enables users to be informed about the websites their contacts are currently visiting, to join them in a chat session by visiting the same website, or even to place a web phone call thanks to an embedded Flash phone application. Privacy is ensured by way of a single-click setting for private surfing. This rather basic approach is a first step toward allowing users to control their privacy (Fig. 3).
Concerns about privacy are a very current issue, with publicized incidents involving superstar web services such as Google Mail and Facebook. Nevertheless, the current trend leads to a trade-off between the perceived benefits of the service or product and the perceived (or assumed) consequences of allowing a third party to access some private data. Given this situation, privacy management applied to the context awareness domain is of paramount importance, and could well be the differentiating factor between future service successes and failures.
Some of the big questions to solve include:
• Which context data are more sensitive to privacy concerns?
• Are some application classes better perceived than others?
• Is a telecom operator a trusted party with respect to context data?
• How can consumers' trust be increased?
The more general issue is the acceptability of context-aware services, which includes, besides privacy issues, the questions of service accuracy and credibility, and of user understanding. Users will accept context-aware services only if they keep control over their data and services. This includes the ability to disable services, data portability between actors, the right of access, the right of inspection, and the right to oblivion. Context-aware services would be rejected if the input data and inference processes were not accurate, because they would lead to misunderstandings and inappropriate behaviors (missed calls, lost messages, etc.).
The user understanding issue is that context-aware computing does not typically give the user feedback on which context information is being used as input to control and decision making. Interface intelligibility of context-aware systems is a big research opportunity, as not much progress has been made here. On a more general level, context-aware systems can give rise to concerns related to state agencies accessing private data, subject to specific laws and regulations; these raise legal issues beyond the scope of this article.
EN ROUTE TO FULL CONTEXT-BASED SERVICES
Figure 4 depicts a timeline we envisage for the mass-market introduction of context-aware services. The first services to be deployed, including some applications already available, focus on the supply of context data to users, and let them make informed decisions regarding their communication needs. These services, based on low-level automation, are expected to become available within three to five years. Then a second wave of offerings will focus on higher-level automation, and provide applications capable of automatic service adaptation. From moderately complex to advanced applications, these services are expected to come into play within five to ten years.
Figure 3. Orange Lab's Word of Mouth user interface, as a plug-in within Firefox.
FROM RESEARCH PROTOTYPES TO FULL-SCALE CONTEXT MANAGEMENT SYSTEMS
A great amount of research effort related to context-aware systems has paved the way toward industrial context-aware systems, to be deployed by telcos as part of the new infrastructures allowing converged services (e.g., IP Multimedia Subsystem [IMS] and Long Term Evolution [LTE]). Although no large-scale advanced context-aware system has been implemented yet, research results indicate that the heart of a context-aware system lies in the way context data is structured and managed. A consensus architecture consists of three layers:
• A lower layer concerned with managing sensors and gathering context data from them.
• A middle layer embedding the data models and handling data abstraction through a semantic approach, which provides powerful tools such as inference and conflicting data handling.
• A higher layer exposing an API to client services, with generic primitives to harness the power of context data (e.g., primitives for notifying applications, or primitives based on the nature of context data, similar to geographical databases); see the sketch at the end of this subsection.
Implementing such a system within a telco infrastructure leads to a horizontal approach, introducing a new enabler dedicated to context data management. Much work is still necessary to define, in interoperable ways, the details of this enabler. Regarding standardization, such an enabler is yet to be described, for instance by the Open Mobile Alliance (OMA), along with the necessary interfaces and protocols to interoperate with existing enablers (e.g., the user profile). The issues of privacy and user control must be addressed, taking into account the existing telco databases and policies. The emergence of context-aware architectures is also tied to increasing capabilities at the device and sensor levels. Further effort is also required at the interpersonal communication service design stage, in order to harness the potential of contextual information.
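As a speculative illustration of what the higher layer could expose, consider the following Java interface. It is not an OMA or other standardized interface; the primitives merely echo the querying, notification, and nature-of-data capabilities described above.

    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // Speculative higher-layer API of a telco context enabler; not a standard.
    interface ContextEnabler {
        // Synchronous query against the semantic context model.
        Map<String, Object> query(String entityId, List<String> dimensions);

        // Subscribe to changes; the enabler notifies the application
        // (e.g., via a callback, or a SIP/HTTP event in a deployed system).
        String subscribe(String entityId, String dimension, Consumer<Object> onChange);

        void unsubscribe(String subscriptionId);

        // Nature-of-data primitive, similar to geographical databases.
        List<String> entitiesNear(double lat, double lon, double radiusMeters);
    }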
Figure 4. Deployment projection of context-aware services and applications (2010–2016): from empowering users by providing contacts' knowledge, to context-based services with automatic service adaptation, with increasing service automation, personalization, and context data variety. Tough questions along the way include acquiring and managing the large datasets required to develop the various service components; combining raw context data into consistent user context, involving modeling and machine learning techniques; the acceptability of context-aware services, largely related to the management of users' privacy; and interface intelligibility and user control.
CONCLUSION
In this article we have reviewed the various applications of context awareness, making clear the evolutionary potential introduced by these techniques, from the classical digital phone network to truly smart services, many of which remain to be invented. After a dozen years of research effort, many interesting results have been achieved, and many more questions have been raised, indicating a very promising field for future innovation. We have presented four broad questions, which have already been investigated for some time by the research community, but which we believe can still yield considerable advances:
• How to combine raw context data into high-level context knowledge
• How to leverage large context datasets to identify users' behaviors, and how to design adaptive services
• How to generate business from context services
• How to gain users' trust in and acceptance of these services
We have also summarized the next steps toward industrial-grade context management systems, which will become major assets of telcos' converged infrastructures. There is a growing awareness among IT specialists that context awareness is one of the key evolution tracks for future business and personal IT services. Using context-aware capabilities, communication services will adapt to human users, whereas before users had to adapt to the technology of communication services.
REFERENCES
[1] A. K. Dey and G. D. Abowd, "Toward a Better Understanding of Context and Context-Awareness," Proc. 1st Int'l. Symp. Handheld Ubiquitous Comp., LNCS, vol. 1707, 1999, pp. 304–7.
[2] T. Strang and C. Linnhoff-Popien, "A Context Modelling Survey," 6th Int'l. Conf. Ubiquitous Comp. '04, Wksp. Advanced Context Modeling, Reasoning, Mgmt., Sept. 2004.
[3] M. Baldauf, S. Dustdar, and F. Rosenberg, "A Survey on Context-Aware Systems," Int'l. J. Ad Hoc Ubiquitous Comp., vol. 2, no. 4, 2007, pp. 263–77.
[4] B. Schilit, D. Hilbert, and J. Trevor, "Context-Aware Communication," IEEE Wireless Commun., Oct. 2002, pp. 46–54.
[5] E. Miluzzo et al., "Sensing Meets Mobile Social Networks: The Design, Implementation, and Evaluation of the CenceMe Application," ACM SenSys, Raleigh, NC, 2008, pp. 337–50.
[6] D. Siewiorek et al., "SenSay: A Context-Aware Mobile Phone," Proc. 7th IEEE Int'l. Symp. Wearable Comp., White Plains, NY, Oct. 2005, pp. 248–49.
[7] E. Horvitz et al., "Bayesphone: Precomputation of Context-Sensitive Policies for Inquiry and Action in Mobile Devices," Proc. User Modeling, 2005.
[8] A. Chen, "Context-Aware Collaborative Filtering System: Predicting the User's Preference in the Ubiquitous Computing Environment," Proc. LoCA '05, LNCS, vol. 3479, 2005, pp. 244–53.
[9] M. de Reuver and T. Haaker, "Designing Viable Business Models for Context-Aware Mobile Services," Telematics and Informatics, vol. 26, 2009, pp. 240–48.
BIOGRAPHIES
FRANÇOIS TOUTAIN received his Ph.D. in computer science from the University of Rennes, France, in 1998. Prior to joining Orange Labs in 2004, he worked as the CTO of a startup company dedicated to web and telecom convergence. He is now a project manager within Orange Labs, with research interests focusing on interpersonal communication and context-aware services.
AHMED BOUABDALLAH ([email protected]) joined Telecom Bretagne in 1994 as an assistant professor in the Network, Security, and Multimedia Department. Prior to joining Telecom Bretagne, he received his Ph.D. in computer science from the University of Franche-Comté in 1993.
RADIM ZEMEK received his M.Sc. degree in electronics and informatics from Brno University of Technology, Czech Republic, in 2003. From April 2004 to March 2005 he was a research student at Osaka University, Japan. He then entered Osaka University to obtain a Ph.D. degree in information and communications technology, which he received in 2008. He joined Orange Labs in Tokyo in 2008, where he works on topics related to broadband access and services.
CLAUDE DALOZ received his M.Sc. in computer science from the University of Besançon in 1997 and his telecommunication engineering degree from the National Institute of Telecommunication at Evry in 1999. He is responsible for a research program on interpersonal communication at Orange Labs.
NEW CONVERGED TELECOMMUNICATION APPLICATIONS FOR THE END USER
Employing Collective Intelligence for User Driven Service Creation Yuchul Jung, Yoo-mi Park, Hyun Joo Bae, and Byung Sun Lee, Electronics and Telecommunications Research Institute Jinsul Kim, Korea Nazarene University
ABSTRACT
With advances in computing technologies and active user participation through smart devices such as the iPhone and Android, user needs are becoming varied and complex. It is quite natural, then, that a single Web service may not be sufficient to fully satisfy the diverse goals of users in their daily lives. A set of cohesively connected Web services/mashups may be able to address these goals. An increasing number of open APIs can facilitate various types of service compositions with users as the service creators. Recently, Internet, telecommunications, and third-party providers have opened their services to the public in the form of open APIs, a trend following the Web 2.0 paradigm. However, most service creation environments do not have sufficient knowledge (particularly, of the available services and their functionality) to support service creation by users. The problem with this knowledge scarcity is that users may have difficulty in finding relevant open APIs for a given situation, finally resulting in rather straightforward types of service. In this article we present two kinds of collective intelligence for user-driven service creation: the user's own experiences in service composition, and activity knowledge from the web. These collective intelligence types will aid in creating end-user service compositions by reinforcing knowledge support in terms of user experiences and activity-aware functional semantics, and will finally accelerate the development of various kinds of converged applications. Presenting the beneficial roles of collective intelligence as key enablers of future service creation environments, this article also shows a new potential for user-driven composite services within the next few years.
INTRODUCTION
With the advent of Web 2.0 and the efforts of service providers, there has been a growing number of open application programming interfaces (APIs) from Internet, telecommunications, and third-party services. Social networking companies such as Facebook and Twitter provide various types of open APIs for searches, communication, statistics, and so on. Telcos have changed from isolated, monolithic legacy stovepipes to a much more modular, Internet-style framework that has opened up telecommunication features (voice calls, presence, messaging, contact management, Session Initiation Protocol [SIP], and so on) to the public. Moreover, new IT service providers are offering more advanced web services that are more closely integrated with the plethora of information available on the web.
The availability of open APIs enables end users to generate new converged services more easily, without too much consideration of the underlying infrastructure and mechanisms required. Through a highly intuitive service creation environment (SCE), users can create a new service by composing previously existing open APIs in a more personalized manner. The Open Platform for User-Centric Service Creation and Execution (OPUCE) [1] provides the tools (e.g., web and mobile editors) necessary to allow everyone to build his/her own communication services without specific background knowledge in computing. More recently, SoCo [2], a Computer Supported Cooperative Work (CSCW) mashup creation environment, has aimed to support social-aware mashups for end users by providing social knowledge extraction and service recommendations. In addition, there is an ongoing effort toward mixing telecommunication features and web capabilities for a more open and user-centric environment. For example, some approaches have added voice capabilities to their web mashups, and several of Apple's iPhone APIs follow a form of mixed service creation (web + Telco).
In those SCEs, end users expect the open APIs to be discovered based on their pursued goals. This raises the initial concern of a knowledge base, which contains information about which actions (or functions) are required for goal completion, their composition sequence, and so on. However, currently available SCEs do not have sufficient knowledge to fully
support the requirements of end users in diverse situations. To reinforce the knowledge layer (or component) of SCEs, we herein suggest two kinds of collective intelligence. The first is experience with mashups (or service compositions), and the other is activity knowledge extracted from the web, which is useful for dealing with diverse service flows toward the same goal. Each type of intelligence is complementary to the other in that the latter replaces the former when there is no stored case for a given goal, or when users have more diversified sub-branches.
In a service composition, one of the best knowledge resources is experience itself. However, experience in a service composition requires the inclusion of a set of functional semantics, the doable actions or functions of the composed services, rather than the service type, service format, service address, and so on. This is because the service APIs available at a given time can change unexpectedly according to the user's situation, service availability, and network conditions. By interpreting and maintaining a set of functional semantics for each service composition experience, SCEs can deliver several open APIs that have the same functional semantics (a small sketch of such an ESC entry appears at the end of this section).
The other kind of collective intelligence is activity knowledge, which can be generated by mining human activities from the web. Promising knowledge sources include human-annotated web sites that maintain step-by-step how-to articles and blogs that describe human daily life activities. By mining such activities from web resources, we can obtain a significant knowledge base of service flows that are equivalent to human action sequences for achieving certain goals.
Collective intelligence is expected to play an important role in addressing the knowledge scarcity problem of SCEs. Even state-of-the-art SCEs match against a relatively small set of inference rules designed for understanding user intent and predicting user behaviors when recommending relevant service units. They often fail to deal with diverse situations that may arise in the real world due to a lack of knowledge in interpreting user activities. In addition, web mining techniques can cover new domains without requiring a tremendous amount of manual effort in constructing such knowledge.
In this article, we aim to explain user-driven service creation, which utilizes these two kinds of collective intelligence. To do so, we illustrate how collective intelligence can be seamlessly integrated to create a new composite service in a personalized manner, reveal the underlying knowledge resources, and discuss their extensibility with existing data sharing ontologies, such as SIOC and FOAF. Our objective is a novel presentation of collective intelligence in order to pave the way for a future service creation environment enabling user-driven service composition. In the following sections, after a review of related work on the current status of SCEs, we describe the details of the two kinds of collective intelligence. However, detailed matchmaking procedures that may arise during the recommendation of relevant open APIs are beyond the scope of this article.
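As an illustrative sketch of the first kind, an ESC entry might pair each composition step's functional semantics (an action and its object) with the concrete open API that realized it, so that an unavailable API can later be substituted by another with the same semantics. The Java types below (using records, JDK 16+) and the sample steps, drawn loosely from the shopping scenario later in this article, are hypothetical.

    import java.util.List;

    // One step of a stored composition: the functional semantics (action, object)
    // plus the concrete open API that realized it in this experience.
    record CompositionStep(String action, String object, String apiName) {}

    record ServiceCompositionExperience(String goal, List<CompositionStep> steps) {}

    class EscExample {
        static ServiceCompositionExperience shoppingExperience() {
            return new ServiceCompositionExperience("buy jogging shoes", List.of(
                new CompositionStep("search", "nearby shop", "yahoo.nearbyShop"),
                new CompositionStep("compare", "price", "bing.compare"),
                new CompositionStep("get", "opinions", "twitter.search")));
        }
    }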
BACKGROUND
CURRENT STATUS OF MASHUP/SERVICE COMPOSITION IN SCES
Currently, end users can develop their own applications by following mashup- or widget-based models. In the former case, end users combine existing services and web feeds from multiple sources into a single web-based application using a specialized mashup editor in an SCE. There are two major drawbacks to this approach: the modeling skills needed to understand the data flow between services, and a strong emphasis on data aggregation while giving less importance to functionality aggregation. While the mashup-based model is complex and lacks flexibility due to procedures defined in advance without consideration of user convenience, the widget-based model does not support any interaction between services offered by different service providers.
Meanwhile, an ongoing project called SOA4All [3] has overcome the drawbacks of both models. The approach used in SOA4All for opening up services to all users is based on the use of Web 2.0 principles and state-of-the-art techniques for semantically tagging, retrieving, and composing services. In particular, SOA4All Studio aims to facilitate the composition of web-service-based applications for novice users. This helps end users with the selection and placement of related web services through an intuitive user interface.
INFANT STAGE OF KNOWLEDGE SUPPORT IN SCE
In spite of increasing efforts to achieve advanced SCEs, we are currently still at the infant stage of knowledge support in this field. For example, if an end user wants to buy a phone, two types of services may be required: a find-a-phone service and a buy-a-phone service. Currently existing SCEs do not have sufficient knowledge to allow the end user to find suitable services in terms of functionality and relevancy. Thus, they have difficulty in assisting with the service composition processes of end users, such as automating the composition process or suggesting possible actions to the user.
A recently introduced approach toward the automatic composition of web services is to utilize a domain ontology as well as semantic descriptions of services using service description languages such as OWL-S, WSMO, and SAWSDL. In a semantic description, a user's request is represented by means of Inputs, Outputs, Preconditions, and Effects (IOPE) specifications, which are the basis for an automatic composition of web services. Considering that two services with identical IOPEs are functionally identical, a composite web service satisfying the IOPE of a user request can be chosen (a simplified sketch of this matching appears at the end of this subsection). This method may fail to generate composite web services suited to the user's intention, however, when services or user requests do not have a complete specification, for example, when there are missing preconditions or effects. To overcome this drawback, a web service composition based on a
Figure 1. Example of a converged service composition. The shopping goal proceeds through the steps ListShoppingMalls, NavigateOutdoor, SearchNearbyShop, NavigateIndoor, SuggestProduct, ComparePrice, GetOpinions/AskToFriends, and Payment, guided by the experience of service composition (ESC) and the activity knowledge base (AKB) and realized by open APIs such as google.map, yahoo.nearbyShop("Nike jogging shoes"), bing.compare("Nike 343_x_y"), Mall.suggest("Nike 343_x_y") and Mall.direction("Nike") (APIs provided by the department store), twitter.search and twitter.trend, and Telco web services (e.g., Parlay X) such as Telco.getLocation, Telco.getUserPresence, Telco.sendSMS/MMS, and Telco.chargeAmount.
hierarchical task network (HTN) was used to consider the functional semantics of services. However, it is almost impossible to ascertain the functionality of every service, or the composition or decomposition relations between all services. A work closer to our approach is described in [4], where semantic web and NLP techniques are combined to support web service discovery and selection, based on information extracted from an informal user input, which is processed via a network of semantic metadata derived from commonsense knowledge [5]. However, that project [4] focuses only on semantic service selection and does not try to synthesize a composed value-added service based on open APIs. Although a "learning from the web" idea that utilizes an NLP technique to learn new pieces of information by reading web pages was mentioned in the project as the authors' future work, it has not yet been realized.
In addition, context/situation-aware systems are still at their infant stage as far as knowledge support is concerned. Although they focus on recommending relevant items, services, and media based on their own knowledge bases, they have one or more of the following limitations: they are bound to a small set of predefined situations, they require users to proactively train the system before using it to receive recommendations, and they require a cold-start data generation process. More specifically, they do not have a large-scale, easily accessible knowledge base to provide better support for understanding context, and they suffer from such limitations as personal information acquisition and a lack of abundant, yet trustable, external resources.
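In principle, IOPE-based selection reduces to set matching: a service is applicable when its inputs and preconditions are covered, and two services with identical IOPEs are treated as functionally identical. The following simplified Java sketch (plain string sets instead of OWL-S/WSMO/SAWSDL concepts; records, JDK 16+) illustrates that idea and is not an implementation of any of these languages.

    import java.util.Set;

    // Simplified IOPE service signature.
    record Iope(Set<String> inputs, Set<String> outputs,
                Set<String> preconditions, Set<String> effects) {}

    class IopeMatcher {
        // A service is applicable if everything it requires is already available.
        static boolean applicable(Iope service, Set<String> availableData, Set<String> worldState) {
            return availableData.containsAll(service.inputs())
                && worldState.containsAll(service.preconditions());
        }

        // Two services with identical IOPEs are treated as functionally identical.
        static boolean functionallyIdentical(Iope a, Iope b) {
            return a.equals(b); // record equality compares all four sets
        }
    }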
EXAMPLE SCENARIO: COLLECTIVE-INTELLIGENCE-BASED SERVICE COMPOSITION
The following scenario (Fig. 1) highlights the challenges of a converged service composition in which end users generate their own services in the real world by referencing the above-mentioned collective intelligence types: the Experience of Service Composition (ESC) and the Activity Knowledge Base (AKB). It should be noted that the functional semantics in an ESC sometimes coincide with those in an AKB, because knowledge obtained from the web reflects the daily life and problem-solving patterns of the general public. Both the ESC and the AKB are used for recommending relevant open APIs that can accomplish the functionality of each step.
Figure 2. Roles of collective intelligence in a user-driven service creation environment. The SCE's service recommender and interactive service composer draw on two kinds of collective intelligence, the experience of service composition (ESC) and the activity knowledge base (AKB), to connect end users with a service repository of telecommunication features (voice, messaging, presence, etc.), web service APIs (by Google, Twitter, Yahoo, etc.), and service providers' APIs.
Mr. Kesla is a businessman living in LA, and has been in Boston on business since last weekend. He intends to buy cheap jogging shoes for tomorrow morning, so he asks the ESC how and where he can buy them. Referring to other users' behaviors in similar situations, the ESC can provide Mr. Kesla with a series of services to facilitate his shopping experience. First, with the help of open APIs supported by Google Maps, Mr. Kesla receives a list of shopping malls and their maps on his mobile phone. From the list, Mr. Kesla tries to find a nearby shopping mall where he can buy Nike jogging shoes. After he chooses a shopping mall, the map shows him the directions to the mall. As he enters the mall, an indoor navigator service provided by the mall guides him to the Nike shop. Next, a suggestion service provided by Nike is executed to help him buy cheap jogging shoes by providing a list of recommended models. After he chooses a pair of shoes, Nike 343_x_y, the AKB recommends a ComparePrice service based on the functional semantics maintained in the "buying something" domain of the activity knowledge base. Even though Mr. Kesla usually buys things without considering the prices at other stores or the opinions of his friends, the AKB can offer easy-to-follow procedures and a meaningful course of actions that were previously unknown to him. After learning the prices at other shopping malls, he may also want to know other users' opinions on the chosen model. If he wants to receive general opinions from the public, he can choose a GetOpinions service provided by social communication APIs. If he wants to ask friends who are currently online, he can invoke AskToFriends, a composite API that merges relevant Telco-provided open APIs. Upon his acceptance of the selected model, the Payment API appears, which he can use to choose a payment option: a pre-paid card, credit card, bank transfer, and so on.
The above description is an example of a service composition created using two kinds of collective intelligence: user experiences and user activities. It involves various open APIs offered by Internet service providers, Telcos, and domain-specific service providers. The description also reveals a telecommunications
feature — in this case, connecting friends/groups together in a multimedia session — as a web service, such that the service may be invoked by other web components.
A user-driven service composition is not a one-shot production process. It involves partial (or total) modifications during service execution to meet a user's immediate needs, which may deviate from his/her original goal. Two or more converged applications are expected to merge in other, more complex situations. In these kinds of dynamic and complex situations, the ESC and AKB intrinsically provide convincing clues for finding relevant open APIs in a service repository. In addition, social networking with friends (or business colleagues) will certainly provide possible answers for a given situation by enabling the sharing of the functional semantics of a service.
COLLECTIVE INTELLIGENCE FOR USER-DRIVEN SERVICE CREATION ENVIRONMENT
Within the concept of user-centric service creation, a paradigm shift in the use of SCEs is occurring, since service creators are no longer specialists or professionals. These service creators (i.e., end users) often have little knowledge of (or background in) not only computer technologies, but also the available open APIs. As traditional SCE interfaces are difficult to handle for the average user, the entire creation process needs to be supported via intuitive graphical wizards or assistants. In particular, these difficulties need to be minimized with the help of knowledge components that assist users in finding relevant open APIs that satisfy the input/output constraints with functional semantics. We firmly believe that SCEs will be equipped with context-aware service recommendation capabilities [6] regarding user situations in the near future. When a user's situation is recognized, recommendations for composing a service are made through an intuitive user interface on the user's device. For example, the user first chooses a goal among multiple service alternatives provided by the SCE. Upon receiving feedback from the user, i.e., the user's goal, the system starts a new service composition to achieve that goal.
Figure 3. A service composition case and its deduced functional semantics. For the goal 'Travel', the raw set of used open APIs (airline reservation service, weather forecast service, hotel reservation service, train reservation service) yields the deduced functional semantics (buy, airline ticket), (check, weather), (reserve, hotel), and (reserve, train ticket).
The system aids the user in his service composition behavior for the given context and obtains a new composite service specific to the user. The newly composed service is then stored as an experience, i.e., a user case, in a repository of service composition experiences. To effectively support user-driven service composition in an SCE, in terms of strengthening the knowledge component, we can employ two types of collective intelligence, as shown in Fig. 2. First, service composition experiences allow end users to start a service composition even if they do not have domain knowledge. Second, activity knowledge obtained from the web covers diverse user goals and their sub-branches. Activity knowledge complements the gaps between the dynamics of real-world cases and previously stored experiences because it covers almost every domain (e.g., health, travel, and art) of human life. Maintaining a significant amount of collective intelligence (composition experiences and activity knowledge) will satisfy the diverse and dynamic needs of end users during a service composition by providing meaningful clues for recommending relevant open APIs from Internet, Telecom, or third-party services. Once end users utilize this collective intelligence to compose their own services and achieve their goals, the newly composed services are stored in the repository, and these stored services are reused by friends/groups to assist in the completion of their tasks. Eventually, the collective intelligence will contribute toward the discovery of a killer application in next-generation services by allowing end users to build their own applications that are completely tailored to their personal needs.
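As a rough illustration (not the authors' implementation), such a repository can be pictured as a store of goal-keyed experiences from which the most frequently used open APIs are recommended; all class, field, and API names below are hypothetical:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Experience:
    """One confirmed composition case: a goal plus the open APIs used for it."""
    goal: str
    apis: list

class ESCRepository:
    """Hypothetical ESC store keyed by goal."""
    def __init__(self):
        self._by_goal = {}  # goal -> list of Experience

    def store(self, exp):
        self._by_goal.setdefault(exp.goal, []).append(exp)

    def recommend(self, goal, top_n=3):
        """Return the open APIs most often used by other users for this goal."""
        counts = Counter()
        for exp in self._by_goal.get(goal, []):
            counts.update(exp.apis)
        return [api for api, _ in counts.most_common(top_n)]

repo = ESCRepository()
repo.store(Experience("Travel", ["AirlineReservation", "WeatherForecast", "HotelReservation"]))
repo.store(Experience("Travel", ["AirlineReservation", "TrainReservation"]))
print(repo.recommend("Travel"))
# ['AirlineReservation', 'WeatherForecast', 'HotelReservation'] (ties broken by insertion order)
```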
COLLECTIVE INTELLIGENCE TYPE 1: EXPERIENCE OF SERVICE COMPOSITION
An Experience of Service Composition (ESC) collects newly created (or modified) service composition cases from users. Basically, it can be designed for both individuals and the general public. An individually confirmed service composition experience, a personal ESC, can be used as part of a user stereotype or model for more individualized recommendations in the future. On the other hand, by gathering confirmed service composition experiences from multiple users for a particular goal, we can enrich the public ESC.
With a large number of service composition experiences distilled and indexed in an ESC, it is feasible to provide solutions for diverse user goals. A large-scale ESC can be utilized to support both automated and user-driven service compositions. Basically, an ESC stores the names of the open APIs used to achieve particular goals. However, its use will be restricted if those open APIs become unavailable (e.g., due to a temporary pause of a web service or the deprecation of its APIs). To deal with unavailable services, we need to design an ESC that also includes the functional semantics of each open API. Shin et al. [7] used the functional semantics of a service, which define what the service actually does by representing its action and the objective of that action. Functional semantics imply doable action items, so they can be used effectively as decision clues for recommending relevant and available open APIs. Figure 3 describes a service composition example from the traveling domain, as well as the deduced functional semantics for each open API used. As the number of cases for a distinct goal in an ESC increases, the goal can acquire several types of service composition patterns in terms of service flows. These patterns may have different sub-branches, following either a sequential pattern, a non-sequential pattern, or a combination of the two.
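A minimal sketch of this fallback behavior, assuming a hypothetical registry that annotates each open API with its (action, object) pair:

```python
# Hypothetical registry: open API name -> (action, object) functional semantics.
FUNC_SEMANTICS = {
    "AirlineReservationV1": ("buy", "airline ticket"),
    "AirlineReservationV2": ("buy", "airline ticket"),
    "WeatherForecast": ("check", "weather"),
}

def substitute(unavailable_api, available_apis):
    """Find an available API whose functional semantics match the unavailable one."""
    target = FUNC_SEMANTICS.get(unavailable_api)
    if target is None:
        return None
    for api in available_apis:
        if api != unavailable_api and FUNC_SEMANTICS.get(api) == target:
            return api
    return None

# If AirlineReservationV1 is deprecated, V2 is recommended in its place.
print(substitute("AirlineReservationV1", ["AirlineReservationV2", "WeatherForecast"]))
```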
COLLECTIVE INTELLIGENCE TYPE 2: ACTIVITY KNOWLEDGE EXTRACTED FROM THE WEB
Unknown goals/activities, or diverse sub-branches of known goals, can be covered through information extraction from Web 2.0 resources. We assume that this information can be obtained mainly from two resources: how-to documents and blogs describing human activities. For example, the eHow site (http://ehow.com) currently contains more than one million how-to articles. Figure 4 (left) shows recent statistics for the 24 topic categories provided at eHow. An example based on the activity knowledge model is shown on the right. Each goal is connected to a topic category that can be found in the eHow hierarchy. Goals are connected when they share a particular action. Each goal has several action steps that are required to reach that goal. Such web resources collectively cover almost every domain of daily life, including business, cars, computers, education, health, travel, and weddings. An article from one of these resources can be converted into a sequence of functional semantics using state-of-the-art natural language processing (NLP) techniques, to assist with the various topic categories of a service composition.
Category                         # Content   Percentage
Health                           122,152     12.1%
Home & Garden                    102,843     10.2%
Food & Drink                     75,842      7.5%
Sports & Fitness                 74,930      7.4%
Hobbies, Games & Toys            74,216      7.3%
Arts & Entertainment             68,165      6.7%
Fashion, Style & Personal Care   49,270      4.9%
Computers                        47,450      4.7%
Personal Finance                 41,086      4.1%
Careers & Work                   39,291      3.9%
Business                         31,846      3.1%
Cars                             30,900      3.1%
Education                        30,677      3.0%
Pets                             30,017      3.0%
Travel                           29,359      2.9%
Culture & Society                26,508      2.6%
Relationships & Family           25,220      2.5%
Internet                         24,938      2.5%
Holidays & Celebrations          22,632      2.2%
Parenting                        19,427      1.9%
Electronics                      18,876      1.9%
Legal                            9,805       1.0%
Parties & Entertaining           8,874       0.9%
Weddings                         8,449       0.8%
Total                            1,012,773   100.0%

Activity knowledge examples (right side of the figure): in the travel domain, the goal "Enjoy vacation in Scandinavia" is linked (hasAction/hasNextAction) to the action sequence Book air passes, Check the weather, Book the room, Do tax-free shopping, Buy a ScanRail & Drive Pass; in the health domain, the goal "How to eat healthy to lose weight" is linked to the sequence Eat breakfast, Drink water, Eat fruit, Get exercise.

Figure 4. Statistics of eHow (left) and examples of activity knowledge (right).
Jung et al. [8] employed both a syntactic pattern-based method and a probabilistic machine-learning method to extract actions (in the form of verbs) and associated ingredients (time, location, objects, etc.) from how-to articles with high coverage and precision. This approach is versatile and scalable in terms of automatically extracting functional semantics from shared Web 2.0 articles that describe step-by-step instructions or facts related to human activities. Functional semantics are congruent with actions in the activity knowledge model, in that the latter specifies the actions to be taken in order to accomplish a particular goal. In addition to the availability of goals and actions that can be converted into functional semantics, the wide coverage of activity knowledge provides an added benefit, particularly when a service has to be specified using multiple operations due to its multiple functionalities, as in the examples shown on the right side of Fig. 4 (i.e., "enjoy vacation in Scandinavia" and "how to eat healthy to lose weight"). In particular, the availability of activity knowledge makes it possible to compose (semantic) web services comprising Input, Output, Precondition, and Effect (IOPE), even when the last two elements, Precondition and Effect, are not available. In addition, as the number of Web 2.0 resources increases, the coverage and depth of
functional semantics will extend further. This is also very useful for maturing context/situation-aware systems, which are still at an infant stage in terms of their knowledge support, as they suffer from one or more of the following limitations: being bound to a small set of predefined situations, requiring users to proactively train a system before using it as a source of recommendations, and requiring a cold-start data generation process.
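Jung et al. [8] use syntactic patterns plus probabilistic machine learning; the toy heuristic below only illustrates the idea, exploiting the imperative style of how-to steps (a step usually begins with the action verb, followed by its object):

```python
def extract_functional_semantics(steps):
    """Naive (verb, object) extraction from imperative how-to steps.

    Assumes each step starts with the action verb, e.g.,
    "Book air passes" -> ("book", "air passes"). Real systems would
    use POS tagging and learned patterns instead of this shortcut.
    """
    pairs = []
    for step in steps:
        words = step.strip().rstrip(".").split()
        if len(words) >= 2:
            pairs.append((words[0].lower(), " ".join(words[1:]).lower()))
    return pairs

steps = ["Book air passes", "Check the weather", "Book the room"]
print(extract_functional_semantics(steps))
# [('book', 'air passes'), ('check', 'the weather'), ('book', 'the room')]
```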
EXTENDING THE COLLECTIVE INTELLIGENCE
In addition to the knowledge support characteristics of a service creation environment, we also need to think about how to extend our collective intelligence through social networking. The sharing of personally created services, if successful, can increase user confidence in a system and speed up its proliferation across a user's social sphere to those individuals who may have an interest in using (or personalizing) it or who have a similar goal to achieve. For example, during a user-driven service composition, users may have difficulty finding relevant open APIs due to a lack of domain knowledge or composition experience regarding a new situation. In the social networking era, this problem can be effectively handled through the following actions: asking for colleagues' opinions, sharing other users' service composition experiences, and utilizing the collective intelligence on the web. The first one is already possible through social networking portals (e.g., Facebook and Twitter) or Telco features (e.g., SMS, voice calling, and presence).
Figure 5. Connection with other ontologies for sharing and linking of data: in the proposed data model, a user creates a service; the service uses APIs hosted by web sites and has a goal; each API has functional semantics; and users belong to user groups. These concepts are linked to the FOAF vocabulary and to SIOC items.
The two remaining actions are in somewhat premature stages, but they will soon become a reality considering the popularity of social networking and the increase in web resources related to human activities. A data model for service composition results can include the name of the user-created service, its goal, the set of open APIs that the service uses to achieve the given goal, and the functional semantics that correspond to each open API. To keep up with the semantic web area's ontologies in terms of social networking, other ontologies for the sharing and linking of data, such as Friend of a Friend (FOAF) [9] and Semantically Interlinked Online Communities (SIOC) [10], can be connected. FOAF specifies the most important features related to people active in online communities, such as a web site, web site creator, and user group. SIOC provides the main concepts and properties required to describe information from online communities (e.g., message boards, wikis, and blogs). Figure 5 illustrates the structure of our proposed data model and its relationships with FOAF and SIOC. To allow social networking within a group, our proposed data model can additionally include user and user-group attributes that SIOC and FOAF share. Based on the shared characteristics that the data model permits, we can expect more interactive participation and collaboration among end users (or groups).
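One plausible way to serialize such a data model alongside FOAF and SIOC is with an RDF library; the sketch below uses Python's rdflib, with a made-up vital: namespace standing in for the proposed concepts (the property and resource names are illustrative, not the authors' schema):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

# Made-up namespace for the proposed data model; SIOC is the published vocabulary.
VITAL = Namespace("http://example.org/vital#")
SIOC = Namespace("http://rdfs.org/sioc/ns#")

g = Graph()
g.bind("foaf", FOAF)
g.bind("sioc", SIOC)
g.bind("vital", VITAL)

user = URIRef("http://example.org/users/kesla")
service = URIRef("http://example.org/services/cheap-shoe-shopping")

g.add((user, RDF.type, FOAF.Person))                      # user described with FOAF
g.add((user, VITAL.creates, service))                     # user-created composite service
g.add((service, VITAL.hasGoal, Literal("buy cheap jogging shoes")))
g.add((service, VITAL.uses, URIRef("http://example.org/apis/ComparePrice")))
g.add((service, SIOC.has_creator, user))                  # link into SIOC community data

print(g.serialize(format="turtle"))
```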
DISCUSSION
With the help of the two forms of collective intelligence, the ESC and the AKB, end users can obtain the set of functional semantics required for a given goal. Such functional semantics are helpful for finding semantically relevant open APIs from the increasing number of Internet, Telecom, and third-party services. If the recommended services do not fit the current situation, the user can revise them through a user-friendly SCE in a personalized manner. In addition, user intervention during a service composition will induce more interactive social communication (e.g., simple Q&A with friends or business communication with colleagues) according to the user's pursued goals. Such interaction will contribute to a re-proliferation of collective intelligence. To realize an ESC and AKB in a user-driven service creation environment, however, several complex features, such as dynamic service discovery, different types of service invocations, and cooperation between standard and non-standard services, are also entwined. More specifically, when we merge different types of web services (e.g., web and Telecom APIs), we need to balance ad hoc service compositions considering their associated quality and API interface mismatches, such as between SOAP-based and Representational State Transfer (REST)-based services.
CONCLUDING REMARKS
Technology-driven societal change is a hallmark of our era, and the collective intelligence driven by the Web 2.0 paradigm is expected to solicit new changes in service composition in upcoming social networking environments. In particular, two forms of collective intelligence, experiences of service composition and activity knowledge from the web, will effectively support the service creation behaviors of end users in unplanned and unprecedented ways. For example, users can combine various kinds of web services from the growing numbers of Internet, Telecom, and third-party services to achieve their goals in dynamic and complicated situations. To realize these challenging changes, however, every stakeholder (service provider, web service portal, network operator, end user, and so on) should become more active in collaborating with the other stakeholders to create new business opportunities.
ACKNOWLEDGMENT
This work was partially supported by the IT R&D program of KCC/KORPA of South Korea [KI002076, Development of Customer Oriented Convergent Service Common Platform Technology Based on Network] and the R&D program of Korea Nazarene University (KNU) of South Korea.
REFERENCES
[1] J. Sienel et al., "OPUCE: A Telco-Driven Service Mash-Up Approach," Bell Labs Tech. J., vol. 14, no. 1, 2009, pp. 203–18.
[2] A. Maaradji et al., "Social Composer: A Social-Aware Mashup Creation Environment," ACM CSCW '10, 2010, pp. 549–50.
[3] SOA4All, accessed Apr. 1, 2010; http://www.soa4all.eu/.
[4] A. Faaborg, S. Chaiworawitkul, and H. Lieberman, "Using Common Sense Reasoning to Enable the Semantic Web," accessed Aug. 1, 2010; http://agents.media.mit.edu/projects/semanticweb_old/.
[5] MIT Media Lab, "OpenMind and ConceptNet"; http://agents.media.mit.edu/projects/commonsense/
[6] Z. Zhao, N. Laga, and N. Crespi, "The Incoming Trends of End-User Driven Service Creation," DigiBiz '09, LNICST, vol. 21, 2009, pp. 98–108.
[7] D.-H. Shin et al., "Automated Generation of Composite Web Services Based on Functional Semantics," J. Web Semantics, vol. 7, no. 4, 2009, pp. 332–43.
[8] Y. C. Jung et al., "Automatic Construction of a Large-Scale Situation Ontology by Mining How-to Instructions from the Web," J. Web Semantics, vol. 8, no. 2, 2010, pp. 110–24.
[9] D. Brickley and L. Miller, "FOAF Vocabulary Specification 0.97," 2010; http://xmlns.com/foaf/spec/.
[10] J. G. Breslin et al., "Towards Semantically-Interlinked Online Communities," ESWC '05, 2005, pp. 500–14.
BIOGRAPHIES
YUCHUL JUNG ([email protected]) received a B.S. degree in computer science from Ajou University, South Korea, in 2003, an M.S. degree in digital media from KAIST (Korea Advanced Institute of Science and Technology), Daejeon, South Korea, in 2005, and a Ph.D. degree in computer science from KAIST in 2011. Since 2009 he has been working as a researcher in the Service Convergence Research Team, Internet Research Division, Electronics and Telecommunications Research Institute (ETRI), Daejeon, Korea. His research interests include semantic service discovery, commonsense computing, context-aware computing, large-scale knowledge construction, and IR & NLP.
YOO-MI PARK ([email protected]) received a B.S. degree in computer science from Sookmyung Women's University, Seoul, South Korea, in 1991, and M.S. and Ph.D. degrees in computer engineering from Chungnam National University, Daejeon, South Korea, in 1997 and 2010, respectively. She is currently working as a principal member of engineering staff at ETRI, Daejeon, South Korea. Her research interests include context-aware mobile services and semantic web service composition.
HYUN JOO BAE ([email protected]) received B.S. and M.S. degrees in computer science from Pusan National University, South Korea, in 1988 and 1991, respectively. Since joining ETRI in 1991, she has been engaged in research on telecommunication services and converged service delivery platforms. She is currently a project leader in the Service Convergence Research Team, Internet Research Division, ETRI, Daejeon, Korea.
BYUNG SUN LEE ([email protected]) received a B.S. degree in mathematics from Sungkyunkwan University in 1980, an M.S. degree in computer science from Dongguk University in 1982, and a Ph.D. degree in computer science from KAIST in 2003. He has developed switching systems, including softswitch and IMS, as a project leader. He is currently the director of the Service Platform Research Group, ETRI. His research interests include verification of real-time software, software fault-tolerant computing, and context-aware service technology over ambient networks.
JINSUL KIM ([email protected]; corresponding author) received a B.S. degree in computer science from the University of Utah, Salt Lake City, Utah, USA, in 2001, and M.S. and Ph.D. degrees in digital media engineering, Department of Information and Communications, from KAIST, Daejeon, South Korea, in 2005 and 2008. He worked as a researcher in the IPTV Infrastructure Technology Research Laboratory, Broadcasting/Telecommunications Convergence Research Division, ETRI, Daejeon, Korea, from 2005 to 2008. Since 2009 he has been working as a professor at Korea Nazarene University, Cheonan, Korea. He has been an invited reviewer for IEEE Transactions on Multimedia since 2008 and a TPC (technical program committee) member for IWITMA (International Workshop on IPTV Technologies and Multidisciplinary Applications) 2009/2010. His research interests include QoS/QoE measurement and management, IPTV, mobile IPTV, ubiquitous networks, multimedia communication, and voice/video over IP.
NEW CONVERGED TELECOMMUNICATION APPLICATIONS FOR THE END USER
VITAL++, a New Communication Paradigm: Embedding P2P Technology in Next Generation Networks Athanasios Christakidis and Nikolaos Efthymiopoulos, University of Patras Jens Fiedler, Fraunhofer Fokus Shane Dempsey, Waterford Institute of Technology Konstantinos Koutsopoulos, Blue Chip Technologies S.A. Spyros Denazis and Spyridon Tombros, University of Patras Stephen Garvey, Waterford Institute of Technology Odysseas Koufopavlou, University of Patras
ABSTRACT
This article describes the major components, and their interactions, of a novel architecture called VITAL++, which combines the best features of two seemingly disparate worlds, peer-to-peer (P2P) and NGN, in particular IMS, and is then used to support multimedia applications and content distribution services. To this end, P2P is enhanced with advanced authentication and DRM mechanisms, while the NGN becomes more scalable, reliable, and less centralized by exploiting P2P self-organization properties. We describe novel P2P algorithms for optimizing network resources in order to efficiently distribute content among various users without resorting to the laborious management operations required in NGN.
INTRODUCTION
The continuing evolution of the Internet has elevated digital communications to higher levels and made audiovisual communications, such as content distribution, digital TV, and video-on-demand, mainstream among the Internet applications of today. These emerging types of applications, rich in user-created content, enabled by peer-to-peer (P2P) technology, and with high demands for network resources, are rapidly changing the landscape of network operations and requirements. Replacing traditional client-server operations with lightweight and highly distributed architectures for content delivery creates new challenges in network and service management, configuration, deployment, protocols, and so on.
Cisco predicts that by 2014 more than 90 percent of the traffic traversing the Internet will be video content, whether delivered using P2P networks or streamed from servers [1]. The deep-packet inspection vendor Ipoque has released a study [2] showing that P2P traffic dominates as a percentage of total Internet traffic. Much of this P2P traffic consists of BitTorrent file sharing and the Skype telecoms platform, which are viewed as a nuisance by many network operators, as they consume significant bandwidth, involve unlicensed content being distributed throughout the network, and, in the case of Skype, subtract from network operators' voice revenues. Recent advances in P2P streaming [3, 4] indicate the potential for optimum exploitation of the available user resources, e.g., up to 95 percent utilization of the total available upload bandwidth in the network. This, combined with the ease of dynamically introducing and integrating resources in P2P overlays wherever they are needed, without resorting to laborious and costly management operations, makes P2P a promising technology that can be integrated with existing telecommunication practices to efficiently distribute all content. Initiatives like ALTO [5] in the IETF recognize that consumer demand drives the distribution of content to the edge and access networks, where P2P may play an important role. Accordingly, Content Distribution Network (CDN) architectures may be created to support more efficient distribution of content by unburdening the network core. The International Telecommunication Union (ITU) study group on IPTV described CDN and P2P solutions for content distribution, which are compatible with the IPTV architectures proposed by ETSI and ITU standardization groups [6].
                              P2P         IMS
Scalability                   Very good   Difficult
Single points of failure      No          Yes
Users as content providers    Yes         No
DDoS vulnerable               No          Yes
Access                        Easy        Difficult
Security/AAA                  Bad         Good
Topology aware                Difficult   Yes
Standardized                  No          Yes
Quality of service            No          Yes
NAT client problem            Difficult   No
Service deployment            Difficult   Easy

Table 1. IMS vs. P2P comparative overview.
Their solution is primarily based on the extensive use of Content Distribution Servers. As CDN servers cannot be installed without limit, and the number of clients that can be served at any time depends on the power and number of CDN servers, a P2P system is required that increases scalability and allows for situations where "each end node can be both a media producer as well as a media consumer." The arguments above provide strong motivation for investigating and experimenting with ways of integrating P2P with telecommunication network architectures, standards, and practices for multimedia distribution services. This leads to a managed P2P architecture that incorporates features required by network operators and service providers, such as authentication, accounting, digital rights management, and configurable QoS, for content delivery applications such as IPTV. Next Generation Networks (NGN) represent the latest architectural evolution in core and access networks, aiming at creating converged IP communication platforms, with the IP Multimedia Subsystem (IMS) rapidly becoming one instance of an NGN control plane technology. IMS primarily addresses issues of heterogeneity of access technologies, addressing schemes, AAA, QoS, security, and mobility management from an operator's perspective. To this end, it becomes the ideal candidate for embedding desirable P2P functionality. However, these two seemingly competing technologies have thus far been deployed independently of each other, therefore failing to mutually exploit their strengths towards creating a new and more powerful paradigm. When comparing IMS and P2P, we compare two inherently different worlds (Table 1). IMS is a technology for controlling media flows, administering subscribers, and controlling access to services in a highly centralized system.
In contrast, P2P is a highly distributed system that has been designed to be scalable, adaptable, and failure resilient, mainly for the distribution of media (files, streams). This article describes the VITAL++ architecture [7], a communication paradigm that fulfils the requirements of both users and operators and proposes a new architecture for content distribution systems. More specifically, in the next section we identify the requirements to be taken into account and the VITAL++ architecture that meets these requirements. We note here that the innovation of our work lies in the methodology of how P2P can be embedded into an NGN architecture like IMS, and vice versa, rather than in the specifics of IMS. To this end, any future NGN architecture can be integrated with P2P in a similar manner. We then show how innovative P2P algorithms developed as part of VITAL++ are capable of optimally utilizing the available user resources, thus offering live streaming without additional management overhead. We then present our conclusions.
VITAL++ REQUIREMENTS AND ARCHITECTURE
With VITAL++, we propose a network operator and service provider managed P2P architecture, compatible with the ITU-specified P2P IPTV architecture. Our architecture is modular and can also deliver other applications, including file-based content distribution and video on demand. We organize the major functionalities that our system has to deliver into five categories that also form our sub-architectures (SAs). The first is the P2P Authentication (P2PA) SA, which is responsible for enabling clients (peers) to authenticate messages in order to ensure that received messages come only from authorized peers. The Content Index (CI) SA has several major functionalities: it allows the publishing of objects by users and/or content providers, enables queries for the objects that our system maintains and distributes, tracks which peers have which objects, and provides the initial insertion of a peer into the overlay that distributes the object it requests. The next major functionality of our system is the Overlay Management (OM) SA. This SA is responsible for the management of the Content Diffusion Overlay (CDO). The CDO is a graph that participating peers cooperatively form and maintain by dynamically selecting a small subset of peers that act as their neighbors. The purpose of the CDO is the distribution of the content, which users exchange with their neighbors in real time in the form of data (content) blocks. This graph determines the network paths that the system uses to distribute the content according to the user requests. The system creates and maintains one CDO for each media object that it distributes. The Content Security (CS) sub-architecture has been designed to enable content providers to control the distribution of the content using Digital Rights Management technology.
VITAL++'s DRM was designed to satisfy real content provider requirements, and so it reflects real-world business needs. More specifically, the requirements range from conditional access to streaming content, encryption of file-based and streamed content where appropriate, flexible rights expression, integration with accounting, respect for privacy and consumer rights, and assertion of fair use for purposes such as backup and education, to identity-based conditional access (providing a better alternative to Geo-IP blocking). The QoS SA manages the network resources in order to dynamically guarantee their availability for the distribution of every object. In pure peer-to-peer content distribution systems, the highest bit rate that can be delivered successfully to all participating peers is bounded by the average of their upload bandwidth. As a result, a pure peer-to-peer architecture cannot guarantee delivery at a specific bit rate, as the average value of the peers' upload bandwidth varies due to the dynamic behavior of the peers and the dynamic conditions of the underlying network. Our proposed solution is scalable, calculating accurately and dynamically the minimum amount of bandwidth resources that servers have to contribute towards this goal. Finally, the application-specific P2P Block Exchange Scheduling Algorithm (P2P-BESA) is a distributed scheduling algorithm that ensures the complete distribution of each object to every user on time. These SAs have been designed to meet the following requirements, which are necessary to fulfill the needs of users, service providers, and network providers. The first is scalability in terms of participating users. The functionalities that introduce high overhead to the system are the bandwidth provisioning for the distribution of every object, the management and dynamic adaptation of the CDO according to varying traffic conditions and peer behavior, the authentication between peers, and finally the block scheduling process. We implemented these functionalities using distributed architectures in order to ensure the scalability of our system. We ensure the high performance of the system through the use of a specific CDO graph structure that exploits all the available resources, an intelligent P2P-BESA that guarantees the diffusion of every object to every peer on time, and the QoS SA that controls the resources required for the distribution of each object. Finally, P2PA ensures that only authorized peers enter the system. The minimization of the network traffic that our system introduces to the underlying network is realized by a CDO in which each peer has neighbors close to it in the underlying network. The fault tolerance of the system is guaranteed by the continuous adaptation of the CDO to peer arrivals and departures and to link failures in the underlying network. Additionally, the flexibility that P2P-BESA introduces into the data flows between peers guarantees the on-time delivery of every data block before its playback deadline.
Under normal circumstances, a P2P network is not vulnerable to DDoS attacks, because an attacked node behaves as a single failure, which is subject to self-healing in the rest of the network. IMS is more vulnerable, and so P2P instills resilience to DDoS attacks in the system. The CI-SA enables participating users to act as content providers while simultaneously ensuring the manageability of the objects from a network operator and service provider perspective. User authentication is a basic requirement in order to enable secure P2P messaging and so avoid unauthorized access to the system. Anonymity is also preserved, as key distribution is performed by an application server only when a peer enters the system, and the only information that each peer reveals to other peers is that it is an authorized peer in the system. Accounting functionality maintains an audit trail of licenses granted for content, together with the network bandwidth and overlay statistics associated with delivering that content. Content protection provides digital rights management, whereby content providers can specify licensing rules and costs for individual users based on details such as their subscriber id, location, network provider, and service package (e.g., gold, silver, bronze). P2P is primarily an end-users' technology that fosters self-deployment and self-organization while achieving optimized resource utilization for the deployed applications and services. With the development of our QoS SA we succeeded where QoS mechanisms have previously failed to be deployed and operated at large scale. Finally, VITAL++, through the use of IMS, provides a well-defined, modular, and standards-based way to deploy applications, reusing supporting services such as security and accounting. This enhances P2P systems, where services are usually designed to meet one specific use case (e.g., file sharing). From these design considerations, it has been decided to position the IMS-sided functionalities of the VITAL++ architecture in various application servers, while the client-sided functionalities are located directly in the client, so that no additional components are necessary. Figure 1 illustrates an overview of the architecture and its functional blocks, which are explained in the remaining sections of this work. For the sake of simplicity, we consider the QoS SA part of content indexing, and we also consider the CDO and P2P-BESA part of overlay management.

Figure 1. VITAL++ abstract view of the overall architecture: client-side functions (P2P authentication, content indexing, content security, and overlay management) interact with the corresponding IMS/NGN functions over the network through message and media exchange.
P2PA-SA
The purpose of the P2P Authentication Sub-Architecture (P2PA-SA) is to enable clients (peers) to verify the authenticity of messages that have been sent to them directly by other clients, without passing through any operator-controlled entity. This underpins the security of services that are based on pure P2P message exchange, like the sharing of contacts or media. The P2P authentication sub-architecture works with certificates, which describe an entity and its properties. In VITAL++, three types of certificates are distinguished.
The root certificate is self-signed and pre-installed in every client and P2P authentication server module. The server certificate, signed by the root CA, is pre-installed in every P2P authentication server module, describes the identity of the server domain and its public key, and is acquired by each client during registration. The third is the client certificate, which is signed by a P2P authentication server on request and describes the identity of the client and its public key. Each client is thus equipped with these three certificates, which allow it to perform all authenticity transactions and checks, as explained below. The relation between the certificates and their use to enable authentic message exchange is depicted in Fig. 2. In every transaction, either a certificate or a signature is transported between the entities. For the initial certificate provision, the P2P authentication module in the VITAL++ application server (AS) processes the registration request and supplies the newly registered user with its server certificate, signed by the common root CA. For the client certificate authorization, the client generates its personal private-public key pair and creates an unsigned certificate with its identity and public key. This is then sent to the VITAL++ AS, which checks the identity and other fields of the certificate before signing it with its private server key. The signed certificate is then sent back to the client, which stores it as its own personal certificate. The client then owns a valid certificate verifiable by every instance that also knows the server certificate. As this process is vulnerable to a man-in-the-middle attack, it is advisable to encrypt this transaction. For that purpose, we suggest a Diffie-Hellman key agreement before the main transaction in order to establish a shared secret, which is then used to generate a symmetric key for message encryption. For client-to-client message authentication, the sender creates a text message, which it signs with the private key corresponding to its own client certificate. It then sends the text message, along with its client certificate and the message signature, to the receiver. The receiver can first check the authenticity of the client certificate using its server certificate, then check the message signature with the public key from the client certificate, and inform the user accordingly.
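The client-to-client step reduces to standard public-key signatures; the following sketch uses Python's cryptography package (not the VITAL++ code) to show a sender signing a message with the private key bound to its client certificate and a receiver verifying it:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# Stand-in for the key pair bound to Client 1's certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # in VITAL++ this would come from the client certificate

message = b"hello from client 1"
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Receiver side: verify the signature before trusting the message.
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("message is authentic")
except InvalidSignature:
    print("message rejected")
```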
CONTENT SECURITY CS-SA
The CS-SA provides the VITAL++ platform with Digital Rights Management (DRM) functionality. It provides block and stream encryption features so content can be distributed throughout overlays in encrypted form. A licensing feature enables users to obtain a decryption key in order to play the multimedia content. Content providers can create rules describing when, to whom, where, and how their content can be licensed, and how much it will cost to do so. The IMS subscriber identity of the licensing user is used to identify the user, meaning that over-simplistic access restrictions such as geo-blocking of IPs need not be applied. This addresses a requirement of national and regional broadcasters within several EU countries to provide free services to citizens of those countries, even if they are traveling abroad. The process of licensing a piece of content follows a Request/Response model and uses the SIP Instant Messaging conversation mechanism defined by the 3GPP. The content licensing process is orchestrated using a Licensing Conductor, implemented to the design specified by the Open Media Commons [8] group. The licensing process is described graphically as a web-service workflow, which orchestrates the flow of information between the services used by the licensing process: authentication, content transcoding, and accounting. Licensing rules are encoded in text-based policy statements (e.g., if <conditions> then <actions>), according to the specifications of the Drools rules engine [9]. The content provider registers licensing rules with the CS-SA. These rules can be parameterized and hence associated with individual users, user groups, content types, network context (e.g., user location), and billing scenarios (e.g., pre-pay, post-pay). The content provider, acting as a licensor, may programmatically allow content to be distributed under a fair-use license for education, criticism, or parody. The licensee must explicitly assert their intention to make fair use in their licensing request. For identity management, the CS-SA uses PKI to mutually authenticate content provider and content consumer. The CS-SA acts as a trusted intermediary, meaning that the content consumer and provider do not have to interact directly in the licensing process. This is necessary, as the content is super-distributed among peers in the overlay. Mutual authentication means that the content consumer can be confident that the licensed content is being licensed from the correct provider and has not been tampered with.
Figure 2. Relation between certificates and messages: Client 1 holds the pre-installed root certificate, the acquired server certificate, and its generated client certificate, and signs a text message with its private key; the message, the client certificate, and the signature are transferred over SIP to Client 2, which verifies the chain using its own pre-installed root certificate and acquired server certificate.
The content provider similarly benefits from IMS authentication and PKI being used to identify the consumer. A Certificate Authority (CA) is used to associate public-private key pairs with IMS identities. The Accounting subsystem facilitates micro-charging for content items. Charging Detail Records (CDRs) for individual content items are produced during the licensing process by the CS-SA. The Accounting subsystem aggregates these with the CDRs produced by the IMS network elements involved in the transport of the licensed content to the subscriber. The IMS charging data is obtained from an IMS Charging Gateway Function (CGF) using the Diameter protocol, in line with the relevant ETSI and 3GPP specifications. A Billing and Rating Function (BRF) generates a user bill showing the charge for the item, the associated network utilization charges, and an incentives-based discount arising from good overlay behavior, using statistics provided by the Overlay Manager. Arbitrarily complex charging schemes may be created using spreadsheets, which are then loaded into the system. An earlier version of this technology is described in [10] and further developed in [11].
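The parameterized rules described above are realized with the Drools rules engine [9] in the actual platform; as a rough illustration of first-match condition/action licensing rules, consider the following Python stand-in (all field names and prices are hypothetical):

```python
# Illustrative condition/action licensing rules; the platform itself uses Drools [9].
RULES = [
    # (condition over the licensing request, resulting price in cents)
    (lambda r: r["package"] == "gold", 0),                   # gold subscribers: free
    (lambda r: r["home_country"] == r["content_region"], 50),  # identity-based access,
                                                               # even when roaming abroad
    (lambda r: r.get("fair_use_asserted", False), 0),        # asserted fair use: free
    (lambda r: True, 200),                                   # default price
]

def license_content(request):
    """Return the charge for the first matching rule (first-match semantics)."""
    for condition, price in RULES:
        if condition(request):
            return price

req = {"package": "silver", "home_country": "IE", "content_region": "IE"}
print(license_content(req))  # 50: licensed by subscriber identity, not Geo-IP
```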
OVERLAY MANAGEMENT SA
The objective that we fulfill through the OM-SA is the creation and maintenance of a system that is scalable in terms of participating peers, by distributing its management and organization processes to the peers themselves. Additionally, we focus on adapting the system to dynamic peer arrivals and departures and on continuously reorganizing it accordingly. Special attention has been given to the adaptation of the Content Diffusion Overlay (CDO) to dynamic network conditions
and to the exploitation of network locality in each peer's selection of neighbors. Finally, innovative algorithms have been designed that run over the CDO and deploy a graph structure that maximizes the utilization of the upload bandwidth contributed by highly heterogeneous participating peers. The CDO graph structure (Fig. 3, left) consists of two interacting subgraphs. In the first graph we insert only peers whose upload bandwidth exceeds the service bit rate that our system has to sustain (class 1 peers), while in the second we insert the rest (class 2 peers). These two graphs are constructed in such a way that all nodes have an equal number of connections. The interconnection between the two graphs is made with connections that the class 1 peers create in order to provide the peers of class 2 with additional upload bandwidth resources. The number of these connections is proportional to the upload bandwidth of the class 1 peers, and the connections are assigned uniformly among the peers of class 2. In both graphs, all the peers periodically execute a distributed optimization and maintenance algorithm (DOMA) that reorganizes the neighborhoods of the CDO in order to keep the structure of the graph optimal for content delivery, even during peer arrivals and departures. The algorithm makes use of an energy function that captures the impact of specific parameters, e.g., network latency, between any two nodes in the overlay. DOMA is executed between two neighbors that we denote as initiators and their direct neighbors that we call satellites. Its purpose is to minimize the overall sum of the energy functions between initiators and satellites, under the constraints on the number of neighbors that the aforementioned graph structure implies. In Fig. 3 (right), the length of the arrows expresses the value of the energy function.
Figure 3. The graph structure of the CDO, showing the class 1 and class 2 peer graphs and their interconnections (left); execution of DOMA between initiators and their satellites (right).
One initiator in the figure has more bandwidth than the other. We observe that the execution of DOMA minimizes the sum of the energy functions while reassigning the numbers of neighbors according to the initiators' upload bandwidth resources. Every change in the underlying network or in the resources of a peer, every peer arrival or departure, and every execution of DOMA in neighboring nodes triggers new changes in the CDO, which always converges to the desired graph structure and to a minimized sum of energies [3].
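To make the idea concrete, a heavily simplified single DOMA exchange might look as follows, with measured latency standing in for the energy function; the real algorithm also enforces the degree constraints of the two-class graph structure and runs continuously between initiators and satellites [3] (names and values here are illustrative):

```python
def energy(latency_ms):
    """Toy energy function: here, simply the measured network latency."""
    return latency_ms

def doma_step(init_a, init_b, sat_a, sat_b, latency):
    """One local optimization: swap two satellites if it lowers total energy.

    latency[(x, y)] is the measured latency between peers x and y.
    Returns the (possibly swapped) satellite assignment for (init_a, init_b).
    """
    current = energy(latency[(init_a, sat_a)]) + energy(latency[(init_b, sat_b)])
    swapped = energy(latency[(init_a, sat_b)]) + energy(latency[(init_b, sat_a)])
    return (sat_b, sat_a) if swapped < current else (sat_a, sat_b)

latency = {("A", "s1"): 80, ("B", "s2"): 90, ("A", "s2"): 10, ("B", "s1"): 15}
print(doma_step("A", "B", "s1", "s2", latency))  # ('s2', 's1'): swap reduces 170 -> 25
```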
CONTENT INDEX SA
In our proposed system, content indexing is used not only to accelerate the content searching process, but also as a tool for content publication, together with content description information. The CI-SA is implemented using a SIP Instant Messaging mechanism and offers the following services. Content Publication can be used by IMS users interested in offering content; the service works by declaring content availability to the network, which may be fed to the users through the CI-SA. Content searching and download are possible for third parties by executing a network search on the basis of the content description information publicized along with the content. Content Searching allows users looking for particular content to browse other users' publicized content on the basis of certain criteria. These criteria are submitted to the CI-SA, and the result is fed to the requesting users as a list of descriptions of available content items. The list also contains matching criteria, which are used as filters for the relevance of the result for presentation to the user. Overlay Bootstrapping/Maintenance: whenever a VITAL++ user enters the system, they contact the CI-SA in order to be able to acquire content indexing information. For this purpose, once the user has selected a specific content item to be retrieved and reproduced locally, this has to be communicated to the CI-SA. In this case, the CI-SA interacts with the OM-SA in order to either create a new overlay or update an existing one. In any case, the outcome of the
OM-SA, which is a list of peers per overlay member, is sent either to a newly added member of an overlay or to an existing member for which the list of its peers has been updated.
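A minimal stand-in for the CI-SA's publication, search, and bootstrapping roles might look as follows; all class and method names are hypothetical, and the real OM-SA selects initial neighbors by network locality rather than arbitrarily:

```python
class ContentIndex:
    """Hypothetical CI-SA core: publication, search, and overlay bootstrap."""
    def __init__(self):
        self._items = {}      # content id -> description text
        self._overlays = {}   # content id -> set of participating peers

    def publish(self, content_id, description):
        self._items[content_id] = description
        self._overlays.setdefault(content_id, set())

    def search(self, keyword):
        return [cid for cid, desc in self._items.items() if keyword.lower() in desc.lower()]

    def bootstrap(self, content_id, new_peer, fanout=4):
        """Register a joining peer and hand it an initial neighbor list."""
        peers = self._overlays[content_id]
        neighbors = list(peers)[:fanout]   # real OM-SA picks neighbors by locality
        peers.add(new_peer)
        return neighbors

ci = ContentIndex()
ci.publish("movie42", "User-created documentary about Patras")
print(ci.search("patras"))                # ['movie42']
print(ci.bootstrap("movie42", "peerA"))   # [] for the first peer to join
```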
QOS-SA
The QoS-SA is responsible for dynamically provisioning the correct amount of bandwidth resources for the uninterrupted distribution of each object. It comprises two components: the monitoring component and the resource allocation component. The monitoring component is responsible for monitoring the resources of each CDO and for calculating the additional bandwidth needed for uninterrupted object delivery. A server periodically samples a small subset of the participating peers of a CDO and calculates the mean of their incoming flows in bytes and the mean of the time during which they were transmitting content in that period. The attributes of P2P-BESA allow a very good approximation of the average upload bandwidth of sets of peers in a CDO. Using these values, the monitoring component is able to calculate the average upload bandwidth of the peers, and thus the minimum additional bandwidth, if any is needed. The resource allocation component uses a set of bandwidth provisioning servers whose upload bandwidth it can control. These have been designed to comply with ETSI's Resource and Admission Control Sub-System (RACS) functionality, first defined in the 2006 release of the NGN specifications [12]. Taking the output of the monitoring component as input, the QoS-SA provisions the required excess bandwidth in such a way that each server is connected with peers that belong to the same ISP (if possible), in order to minimize the inter-ISP traffic.
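As a back-of-the-envelope illustration of the monitoring arithmetic (not the actual VITAL++ code), the shortfall between the stream rate and the estimated average peer upload bandwidth determines the server bandwidth to provision; all values below are made up:

```python
def required_server_bandwidth(sampled_upload_kbps, stream_rate_kbps, num_peers):
    """Estimate extra server upload needed for uninterrupted delivery.

    In a pure P2P system the sustainable rate is bounded by the peers'
    average upload bandwidth; provisioning servers must cover the shortfall.
    """
    avg_upload = sum(sampled_upload_kbps) / len(sampled_upload_kbps)
    shortfall_per_peer = max(0.0, stream_rate_kbps - avg_upload)
    return shortfall_per_peer * num_peers

# 1000 peers, a 900 kb/s stream, sampled peers averaging 850 kb/s upload:
print(required_server_bandwidth([800, 900, 850], 900, 1000))  # 50000.0 kb/s needed
```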
VITAL++ FUNCTIONALITY FOR LIVE STREAMING
When using the VITAL++ platform in a live-streaming scenario, each multimedia stream generated by individual users and/or content providers is divided into blocks and distributed by the overlay CDO.
Figure 4. CDF of the average network latency per node, comparing initial and final energy (left); CDF of the probability of successful block receptions (right).
Figure 4. CDF of the average network latency (left); CDF of the successful block receptions (right). by overlay CDO. A P2P Block Exchange Scheduling Algorithm (P2P-BESA) — also part of the VITAL++ client — ensures the distribution of each block to every user that requests the specific multimedia stream with low latency. This latency is known as setup time and it is defined as the time interval between the generation of each block from the stream producer and its delivery to every participating peer. An efficient P2P-BESA has to maximize the delivery rate of the multimedia stream with respect to the participating peers uploading capabilities while ensuring the reliable delivery of the stream in the presence of dynamic conditions such as batch peer arrivals and departures, dynamic network latencies and path bit-rates. Neighbors in the CDO periodically exchange the set of blocks that they have. Each receiver exploits this information and proactively requests blocks from its neighbors in the CDO in order to: avoid the duplicate block transmissions from two peers, eliminate starvation of blocks, and guarantee the diffusion of newly produced blocks and/or rare blocks in a neighborhood. In contrast, each sender every time that it is ready to transmit a new block examines the set of blocks that its neighbors have and using as criteria the most deprived neighbors (miss the largest number of blocks) and neighbors with high capabilities of upload bandwidth selects one of them and transmits to it a block. Graphs in Fig. 4 depict the performance of our system. In the left graph we demonstrate the CDF of average network latency with their neighbors (energy required for transmission) that peers have in a randomly formed overlay and one built by our CDO, noted as Liquid stream. We observe a reduction of energy by approximately 90 percent. In the right graph we depict the CDF with the percentage of the successful block transmissions that each peer that participates in our system has. We mention here that the video streaming rate is 95 percent of the average upload bandwidth of the participating peers and the latency between the generation of a video block and its distribution in every peer in the system is 4 seconds. Through these graphs we observe the optimal and stable delivery of a
90
video (right graph) while simultaneously our system minimizes the traffic that it introduces in the underlying network (left graph).
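The sender-side policy just described, preferring the most deprived neighbor and breaking ties by upload capability, can be sketched as follows; this is a simplification for illustration, not the actual P2P-BESA implementation:

```python
def pick_receiver(neighbors, my_blocks):
    """Choose which neighbor to send a block to, and which block.

    neighbors: dict name -> {"blocks": set of held block ids,
                             "upload_kbps": advertised upload bandwidth}
    Prefers the most deprived neighbor (missing the most blocks),
    breaking ties in favor of higher upload capability.
    """
    def key(name):
        info = neighbors[name]
        missing = len(my_blocks - info["blocks"])
        return (missing, info["upload_kbps"])

    target = max(neighbors, key=key)
    missing_blocks = my_blocks - neighbors[target]["blocks"]
    block = min(missing_blocks) if missing_blocks else None  # e.g., oldest block first
    return target, block

neighbors = {
    "p1": {"blocks": {1, 2, 3}, "upload_kbps": 500},
    "p2": {"blocks": {1}, "upload_kbps": 2000},
}
print(pick_receiver(neighbors, my_blocks={1, 2, 3, 4}))  # ('p2', 2)
```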
CONCLUSIONS
In this work we have described an overview of a novel architecture that combines the benefits and features of NGN technology, such as IMS, with P2P in order to deliver content in a secure and efficient way. To the best of our knowledge, this is the first attempt to provide a holistic operational solution. Introducing P2P allows for scalable, adaptable, and resource-optimal content distribution. When combined with NGN operations and procedures, it produces a system that is secure and reliable, properties that are currently lacking in common P2P systems. NGN in turn benefits from reduced resource management and signaling overhead when P2P optimization is incorporated within the data transport function. We are currently completing the implementation of the architecture and the VITAL++ client, and we are about to start large-scale experimentation with real users.
ACKNOWLEDGMENTS
This work is funded by the European project VITAL++ under Contract Number INFSO-ICT-224287.
REFERENCES
[1] CNET, "Streaming Video to Outpace P2P Growth" (referencing Cisco Visual Networking Index Forecast, 2009–2014); http://news.cnet.com/8301-30686_320006530-266.html.
[2] Ipoque, "Internet Study 2008/2009"; http://www.ipoque.com/resources/internet-studies/internet-study-2008_2009.
[3] N. Efthymiopoulos et al., "LiquidStream — Network Dependent Dynamic P2P Live Streaming," Peer-to-Peer Net. Apps., vol. 1, no. 1, Jan. 2011.
[4] D. Wu, Y. Liu, and K. W. Ross, "Queuing Network Models for Multi-Channel Live Streaming Systems," IEEE INFOCOM, Apr. 2009, pp. 73–81.
[5] IETF, "Application Layer Traffic Optimization (ALTO) Working Group"; http://datatracker.ietf.org/wg/alto/charter/.
[6] ITU-T, "IPTV Focus Group Proceedings," 2008.
[7] ICT-VITAL++ Project; http://www.ict-vitalpp.eu/.
[8] Sun Labs, "Open Media Commons"; http://www.openmediacommons.org.
[9] JBoss Community, "Drools Business Logic Integration Platform"; http://jboss.org/drools.
[10] J. Brazil et al., "Workbook Approach to Algorithm Design and Service Accounting in a Component Orientated Environment," IEEE Wksp. IP Ops. Mgmt., Dec. 2002, pp. 44–48.
[11] B. Jennings and P. Malone, "Flexible Charging for Multi-Provider Composed Services Using a Federated, Two-Phase Rating Process," IEEE/IFIP NOMS, 2006, New York, pp. 13–23.
[12] ETSI ES 282 003, v. 1.1.1, "Telecommunications and Internet Converged Services and Protocols for Advanced Networking (TISPAN); Resource and Admission Control Sub-System (RACS); Functional Architecture," June 2006.
ADDITIONAL READING
[1] J. Fiedler, T. Magedanz, and J. Mueller, "Extending an IMS Client with Peer-to-Peer Content Delivery," Social Informatics and Telecommun. Eng., LNCS, Springer, 2009, vol. 7, pp. 197–207.
BIOGRAPHIES
ATHANASIOS CHRISTAKIDIS ([email protected]) received his electrical engineer degree in 2004 and his Ph.D. in the field of distributed network systems in 2010 from the University of Patras, Greece. He has participated in various IST projects and has co-authored a number of papers since 2004 in the area of P2P systems. His current work involves further research in distributed video streaming, content distribution networks, and distributed storage systems, as well as the development of distributed algorithms towards the realization of an actual P2P client.
NIKOLAOS EFTHYMIOPOULOS ([email protected]) received his Ph.D. from the University of Patras, Greece, in 2010. Since 2007 he has been a lecturer in the Telecommunications Systems and Networks Department, and he now works as an assistant professor in the Informatics & MM Department in Greece. His main research interests are network optimization, distributed video streaming, distributed searching, and achieving QoS in content distribution networks. He has around 15 publications in these areas and has participated in various IST projects since 2004.
JENS FIEDLER ([email protected]) finished his diploma in computer science in October 2004 at the Technische Universität Berlin (TUB). Since May 2005 he has worked as a researcher at the Fraunhofer Institute for Open Communication Systems (FOKUS) in the competence center for next generation network infrastructures (NGNI). His expertise includes several programming languages (e.g., C/C++ and Java). His core competences are VoIP infrastructures; high availability, reliability, and scalability in VoIP infrastructures; peer-to-peer technologies; P2P integration; and general network protocols. He has worked in projects such as 6NET, SNOCER, and VITAL++ (FP7). He is currently involved in several industrial Evolved Packet Core related projects.
SHANE DEMPSEY ([email protected]) is a Next Generation Network (NGN) architect within the TSSG. He graduated with first class honors in industrial computing from WIT in 1999 and went on to obtain an M.Sc. in telecoms in 2001. His research interests include AAA for composed services, SOA tools for telecoms, and financial software. He has published papers in these areas, contributed to research commercialization initiatives, and served as a technical committee member of several conferences.
KONSTANTINOS KOUTSOPOULOS ([email protected]) received a degree in electrical engineering and his Ph.D. in the field of personal and mobile telecommunications from the National Technical University of Athens. He has participated in various IST projects since 1998. He has experience in mobile communications, security, networking, and software development. His research interests include networking, embedded systems, security, and software techniques. He has been working for BCT since March 2006.
SPYROS DENAZIS ([email protected]) received his B.Sc. in mathematics from the University of Ioannina, Greece, in 1987, and his Ph.D. in computer science from the University of Bradford, UK, in 1993. He worked in European industry for eight years, and he is now an assistant professor at the Department of Electrical and Computer Engineering, University of Patras, Greece. His current research includes P2P and the Future Internet. He works in the PII, VITAL++, and AutoI EU projects. He has co-authored more than 40 papers.
SPYRIDON TOMBROS ([email protected]), an electrical and electronics engineer, received his Ph.D. in broadband communications from the National Technical University of Athens and his Master's degree in the same field from the University of Patras. His research interests are in the field of protocols and physical communication systems design for mobile, wireless, and home networks. He has many years of working experience on network test floors and test tool manufacturing, and over 30 scientific publications and book contributions in the same area.
STEPHEN GARVEY ([email protected]) has a B.Sc. (Hons.) in software development and an H.Dip. in business systems analysis, and is currently completing an M.Sc. in distributed computing. He has over a decade of experience in the ICT sector, having worked for several multinational technology companies in a variety of roles such as information architect, software engineer, and technical lead. As well as running his own software development and consultancy business, he was also chief architect and engineering manager for Nubiq Ltd. He is now involved in NGN research within the TSSG.
ODYSSEAS KOUFOPAVLOU ([email protected]) is with the University of Patras, Greece. From 1990 to 1994 he was at the IBM Thomas J. Watson Research Center. He is currently a professor with the Department of Electrical and Computer Engineering, University of Patras. His research interests include computer networks, high performance communication subsystem architecture and implementation, and VLSI. He has published more than 200 technical papers and received patents and inventions in these areas. He has participated as coordinator or partner in many Greek and European R&D programs.
NEW CONVERGED TELECOMMUNICATION APPLICATIONS FOR THE END USER
Application-Based NGN QoE Controller Janez Sterle, Mojca Volk, Urban Sedlar, Janez Bester, and Andrej Kos, University of Ljubljana
ABSTRACT
In this article, the specification and testbed implementation results of an application-based QoE controller are presented, proposing a solution for objective and context-aware end-to-end QoE control in NGN networks. The proposed solution is based on standardized NGN service enabler operation principles that allow for efficient in-service QoE estimation and optimization. QoE control is accomplished through context-based QoE modeling, the principal role of which is to provide a detailed description of the circumstances under which the communication is established and by which the end user's QoE is affected. Implementation results and findings confirm the feasibility and efficient design of the proposed QoE controller, as well as its full compliance with the requirements for deployment in real-world NGN environments.
INTRODUCTION
Witnessing fierce competition from alternative and Internet-based service providers, incumbent operators increasingly focus their prospects on quality-assured and open service provisioning principles. In such environments, success is conditioned on the efficient exploitation of modern technologies that provide added value to multimedia communications by assuring their quality and security. Next Generation Networks (NGN) [1], built as a heterogeneous concatenation of various access, transport, control, and service solutions merged into a single multimedia-rich service provisioning environment, are said to be the de facto infrastructure of the future. One of the principal advantages of the NGN is a service-oriented approach providing transport-independent service provisioning; that is, various multimedia-rich communication services are provided within a unified service environment, while various broadband and narrowband transport technologies are exploited dynamically to complete the delivery [2]. This is achieved through dynamic service-oriented admission control using complex quality negotiation procedures [3]. Arbitrary functionalities are employed to mediate the differences between service and transport layers in terms of technological dependencies and the associated transport quality mechanisms and procedures [4]. Hereby, the service environment
becomes access agnostic, and end users access their services from any chosen access network. Today, a recognized NGN architecture comprises the IP Multimedia Subsystem (IMS), which provides unified signaling and service control to the overlay service environment, and the Resource and Admission Control Subsystem (RACS) and Network Attachment Subsystem (NASS) for the arbitrary mediation layer, as depicted in Fig. 2. However, such a service-oriented approach is relatively new and differs substantially from the principles of legacy telecommunications systems, which presents several quality-related challenges to operators and service providers. Foremost, such inherent heterogeneity may provoke diminished and inconsistent service operation compared to legacy telecommunications systems unless appropriate technologies and mechanisms are implemented. Service provisioning needs end-to-end attention and cross-layer consolidation to provide a uniform and sustainable experience. Another issue regards the selection of appropriate quality-related metrics to assess the achieved level of provisioned quality. Generally referred to as quality, two mainstream approaches are presently discussed: quality of service (QoS) and quality of experience (QoE) [3, 5, 6]. QoS represents an objective network-oriented measure of efficiency when providing a service, typically expressed with delay, jitter, lost information, and throughput parameters. On the other hand, QoE is a subjective and network-independent measure of service efficiency as perceived by the end user when consuming the service, as well as a measure of the ability of the system to meet the end user's expectations. Today, the reference metric is believed to be QoE rather than QoS. For both issues, approaches and interpretations differ substantially, and standardization is insufficient and weakly consolidated [5, 6]. Practical experiences are scarce, and implementations are mainly related to focused and specialized areas. As a consequence, aside from the arbitrary mediation layer, QoE assurance in the NGN is predominantly based on legacy QoS approaches. The challenge is even bigger due to the fact that thorough NGN quality assurance requires an end-to-end approach, aiming at enhanced end-user QoE rather than mere QoS provisioning.
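Reference [5] ties the two metrics together through an exponential interdependency (the IQX hypothesis). The following minimal Python sketch illustrates that shape; the coefficient values are our own illustrative assumptions, not values taken from [5]:

```python
import math

def qoe_from_qos(impairment: float, alpha: float = 3.0,
                 beta: float = 8.0, gamma: float = 1.5) -> float:
    """IQX-style mapping [5]: QoE = alpha * exp(-beta * x) + gamma.

    `impairment` is a normalized QoS disturbance (e.g., a packet loss
    ratio); alpha, beta, and gamma are illustrative fit coefficients.
    """
    return alpha * math.exp(-beta * impairment) + gamma

# A 2 percent loss ratio maps to about 4.06 on a 1-5 MOS-like scale.
print(round(qoe_from_qos(0.02), 2))
```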
[Figure 1. Converged dynamic quality-related profiles in the NGN and ingress QC parameterization: static (administrative) profiles (service subscription with end-user and service profile, content profile metadata, NASS transport profile) and dynamic (operative) profiles (SIP/SDP service request, RACS transport profile) are combined with other ingress information over standardized NGN information exchange into the ingress quality context (QC) parameters.]

Numerous research efforts and proposals aim at resolving these issues. According to results and findings in the scientific literature, the majority of existing approaches focus on the development of advanced network-level QoS mechanisms and end-to-end communication path models [7]. There are also various proposed quality assurance solutions focused on independent agent-based arbitrary and admission control functionalities, related to functional enhancements of the RACS subsystem [8]. To the best of our knowledge, existing proposals are prevalently proprietary and only loosely related to NGN standards and technologies. The key drawback of such network-level approaches is their technological dependency, their limitation to specific transport layer segments, and a potential increase in admission control complexity, which is not acceptable in terms of the service-oriented and technology-independent quality control concept required in the NGN [5, 6]. The predominance of network-oriented solutions has incited further efforts in the direction of end-to-end interconnection and consolidation standards, representing the grounds for NGN-based quality assurance frameworks. Some standardized frameworks are readily available, e.g., the ITU-T G.10x0 recommendation series resolving
parameterization and end-to-end service performance assessment, and non-invasive quality assessment models for multimedia services [9] (e.g., ITU P.NAMS and P.NBAMS). There are also numerous proposals on predictive differential media quality assessment methodologies using a combination of QoS metrics and additional parameters, e.g., codec effects, distortions, application effects, etc. [5, 6, 9]. However, these frameworks are again derived predominantly from existing QoS-based methodologies and, in addition, lack considerable practical experience [5, 6]. In our opinion, there is a significant research gap in efficient QoE assurance solutions appropriate for modern NGN service environments. Therefore, this article presents an application-based QoE controller proposal. The proposed solution introduces into the NGN service environment a novel value-added service enabler for the service of in-session QoE control. The service is available to end users who wish to benefit from QoE optimizations by the NGN environment while accessing other available services within their multimedia communication. In the following section, the QoE controller proposal is presented in detail.
[Figure 2. NGN network architecture and a successfully completed QoE controller service, hosted on a SIP Back-to-Back User Agent (B2BUA) application server: (0) service profile acquisition over Sh/Cx; (1) service request (INVITE); (2) service triggering (SPT match); (3) service request forwarded to the SIP AS; (4a) service logic execution (context to QoS); (4b) quality profile modification; (5) modified service request (INVITE); (6) service request forwarding and resource reservation request toward RACS and NASS; (X) service request, final configuration (PRACK).]

Service enabler design principles are discussed next, followed by a detailed overview of its construction and workflow. Finally, implementation in a laboratory NGN testbed is presented, and the respective validation results are given.
APPLICATION-BASED QOE CONTROLLER
Despite numerous research efforts in defining QoE-based frameworks, these have not yet been transferred into practice. Real-world solutions and any extensive practical experience are mainly network-oriented and based on QoS principles, which is not suitable for NGN service environments. In the latter, service provisioning aims at achieving an appropriate level of QoE through multimedia-rich services while employing various available transport networks for their delivery. For this reason, quality assurance requires end-to-end attention to achieve the target QoE, as well as appropriate cross-layer mediation to reflect the QoE in particular QoS configurations of the respective transport path. From such conditions, the following deduction applies: while available network-based solutions suffice for QoS control in separate transport networks, an independent overlay end-to-end QoE-based solution is required in the service environment. Their interrelationship can be inclusive, where the overlay QoE solution conducts the operation of the underlying QoS solutions to provide the QoE-QoS translations, or exclusive, where the QoE solution operates independently under the assumption that appropriate QoS assurance is provided on the transport layer at all times. Since the first
option corresponds to the RACS and NASS mediation layer already provided in the NGN, it is reasonable to consider the second option. Three more reasons favor this conclusion. First, the transport infrastructure employed by the NGN service environment typically comprises established QoS-assured networks, in which quality is provided by means of legacy QoS principles, while dynamic QoS adjustments are rather limited. This significantly restrains QoE from taking effect through cross-layer operation, resulting in ineffective QoE provisioning. Second, if the QoE assurance solution is limited to the service environment, its transport independence is ensured. And third, QoS and other transport-layer effects represent only a small portion of the entire range of effects determining the perceived end-user QoE. Even if the solution is limited to the service environment only, excluding QoS control, the target QoE can be handled efficiently based on the large volume of effects available in the NGN service environment [5], as discussed in the following section. The solution proposal in this article is given for the case of an NGN environment comprising a hybrid transport infrastructure, IMS, and an arbitrary mediation layer implementing RACS and NASS functionalities. Concepts and limitations regarding implementation, service triggering, and communication provisioning are considered and adopted from the respective recommendations [3, 7]. Nevertheless, the presented ideas are applicable to a much wider range of challenges, including present and future Internet systems, and should therefore be discussed further also in this respect.
[Figure 3. Proposed service enabler application logic architecture: SIP/SDP and Diameter protocol stacks with their message processors, end-user, administrative, and dedicated interfaces for other information acquisition, and a QoE-based decision logic with a QoE estimation algorithm whose result either denies the request, adjusts the SIP/SDP message, or adjusts the profiles.]
QUALITY CONTEXT
A thorough inspection of the NGN service environment shows that a large volume of information relating to the effects of QoS and QoE is available throughout various IMS, RACS, and NASS entities [5, 10]. First, various information on end users, services, and relevant conditions of the transport infrastructure is available, organized into several end-user and service profiles, i.e., User Profile Server Function (UPSF) or Home Subscriber Server (HSS) profiles, and NASS transport profiles. Second, when establishing a communication, the initial incoming communication request from the end user, in the form of a Session Initiation Protocol (SIP)/Session Description Protocol (SDP) request, and the respective profiles configure the provisioning chain and determine the provisioned QoS as well as the perceived QoE. The latter is, however, not optimized for many scenarios and requested services, and requires adjustments (e.g., unsuitable terminal equipment, diminished network capacities, etc.). In current NGN environments, however, service- and transport-layer profiles are provided statically. Their correlation to the dynamically provided real-time quality-related information is incomplete and lacks detailed interpretation and consolidation among the involved subsystems. Moreover, the SIP/SDP request message information is currently not interpreted as a profiling compound even though it carries vital quality information in a standardized form. Also, there is a variety of other, either derived or by other means provided, out-of-band information available within the NGN that has until
now neither been considered important nor exploited for the purposes of quality assurance, even though it clearly is of significance to the QoE. Given an optimal interpretation, such information could serve for enhancements of the existing profiling principles of the NGN to achieve converged and dynamic quality-related profiles. The proposal is presented in Fig. 1; further findings and design principles are available in [10]. As denoted in Fig. 1, it interprets existing service- and transport-layer profiles as static profiles (i.e., service subscription, content profile, NASS transport profile), while SIP/SDP and RACS-provided quality information represents real-time dynamic profiles (SIP/SDP service request, RACS transport profile). Furthermore, by applying the extended profiles to an appropriate end-to-end QoE-based communication model, a detailed description of service provisioning circumstances could be achieved, representing the so-called quality context (QC). Employing context-based principles in the QoE controller operation represents a novel approach to such NGN solutions. Hereafter, the QoE-based communication model is referred to as the QC model, and the extended profiles are collectively referred to as the ingress QC parameters.
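To make the profile convergence of Fig. 1 concrete, the sketch below models the ingress QC parameters as a merge of static (administrative) and dynamic (operative) profile data; the field names are hypothetical placeholders rather than standardized attribute names:

```python
from dataclasses import dataclass, field

@dataclass
class IngressQC:
    """Converged quality context: static plus dynamic profile data."""
    static: dict = field(default_factory=dict)   # subscription, content, NASS profile
    dynamic: dict = field(default_factory=dict)  # SIP/SDP request, RACS profile

    def merged(self) -> dict:
        # Real-time (dynamic) values override administrative (static) ones.
        return {**self.static, **self.dynamic}

qc = IngressQC(
    static={"service_priority": "gold", "content_codec": "G.711"},
    dynamic={"requested_datarate_kbps": 512, "media_types": ["audio", "video"]},
)
print(qc.merged())
```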
SERVICE ENABLER DESIGN
The following requirements and guidelines have been identified for this proposal. The aim of the solution is to provide QoE optimization to NGN end users who wish to receive such an enabling service while consuming other services. QoE control is achieved by means of the QC, the principal role of which is to
provide a detailed description of the circumstances under which the communication is established and the end user's QoE is affected. The QC model serves as the core component of the proposed QoE controller application logic, providing the final QoE estimation and optimizations. It comprises a comprehensive set of parameters and mappings among them to construct an end-to-end QoE model of a generalized NGN communication provisioning process. Aside from the ingress QC parameters, there is a comprehensive set of derived parameters, the values of which are defined by the ingress parameters and the respective mappings. A single egress parameter represents the resulting QoE estimation. This is in contrast to measuring QoE directly by relying on subjective end-user feedback (e.g., the MOS methodology). Based on an accurate QoE estimation, the QoE can be further optimized by means of appropriate ingress QC parameter adjustments. Although numerous approaches to objective and subjective QoE estimation are available, using a variety of methodologies and information, this proposal is limited to QoE estimation methods based on objective parameters. As this represents a novel approach, it is the subject of comprehensive research activities and requires detailed attention in a separate work. In the process of QoE estimation and optimization, the QoE controller service is required to operate objectively; that is, the solution should handle all end users and all compounds of any multimedia-rich communication equally throughout the process of QoE estimation. This means the QoE controller is required to provide deterministic service behavior for various end users at various times, and for various conditions, defined by the service environment and transport infrastructure, that affect the communication to which the QoE controller service is applied. This is evident also from the ingress QC parameter set construction in Fig. 1, clearly denoting end-user, service, and content profiles. Also, to assure objective QC operation, all QC parameters should be unambiguously definable through objective methods only. Due to the subjectivity and nonlinearity of the end user's perception, the QoE is required to be sustained at its maximum at all times. Therefore, the QoE controller service should operate proactively; that is, the service should take effect in the process of communication establishment, prior to the actual QoE taking effect on the end user. Also, for appropriate in-service operation, the solution should take effect in real time and should follow non-intrusive operation principles, i.e., minimum signaling complexity, optimized delay management, pre-processing, etc. The solution should avoid transport infrastructure dependencies and should be based on standardized technologies, protocols, and procedures to meet the NGN concept requirements. Taking into account the described design principles, it becomes apparent that the proposed solution can optimally be implemented in the NGN environment as a value-added service enabler within a standardized IMS-based application server (AS).
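As a rough illustration of the estimation step (the actual QC model described later comprises 134 parameters and is far richer), the following toy sketch maps a few hypothetical ingress QC parameters through one derived parameter to a single egress estimate on a 0-100 scale; all weights and mappings are invented for illustration:

```python
def estimate_qoe(p: dict) -> float:
    """Toy egress QoE estimate on a 0-100 scale from ingress parameters."""
    # Derived parameter: how well the requested rate fits the available capacity.
    fit = min(1.0, p["available_kbps"] / max(p["requested_kbps"], 1))
    codec_score = {"G.711": 0.8, "G.722": 0.9, "MPEG2": 0.7}.get(p["codec"], 0.5)
    priority_bonus = {"premium": 10, "gold": 5, "silver": 0}.get(p["profile"], 0)
    return min(100.0, 70.0 * fit + 20.0 * codec_score + priority_bonus)

print(estimate_qoe({"available_kbps": 3000, "requested_kbps": 4500,
                    "codec": "G.722", "profile": "gold"}))  # ~69.7
```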
Service enabler operation is thoroughly discussed in the next sections, followed by a detailed insight into QoE estimation and optimization principles.
SERVICE ENABLER WORKFLOW
The proposed QoE controller service aims to provide QoE optimization through appropriate ingress QC parameter adjustments, in real time, in the process of communication establishment, using standardized NGN service control mechanisms. Therefore, the NGN service control model is engaged as depicted in Fig. 2, placing the proposed QoE controller as the first service enabler in the service triggering chain. Prior to further service triggering, the request is forwarded to the QoE controller AS, where QoE optimization is performed according to the QoE estimation provided by the QC model. The procedure is as follows. The QoE controller service is available only to subscribed end users. The standardized NGN end-user and service profiles are taken for this proposal, and the service triggering procedure is standards-based, using initial filter criteria (iFC) within the Serving Call/Session Control Function (S-CSCF) entity. The iFC is provided with the profiles and represents instructions for service request processing, i.e., orders and commands for received SIP/SDP message processing and interpretation. Referring to Fig. 2, the QoE controller service point trigger (SPT) represents the incoming request in the form of a SIP/SDP INVITE message from or on behalf of the end user. Based on the incoming request (1. Service request), the S-CSCF executes initial filtering (2. Service triggering). If triggering is successful, the request is forwarded to the QoE controller AS via the IMS Service Control (ISC) reference point (3. Service request). The AS receives the incoming service triggering request and executes the application logic (4a. Service logic execution). For the purpose of application logic execution, the AS, aside from the received SIP/SDP request, retrieves and applies other information representing the ingress QC parameters. These are profiles acquired beforehand or in real time from the UPSF/HSS database via the Sh reference point (0. Service profile acquisition), as well as other information available locally or through the AS. Once acquired, the ingress QC parameters are fed to the QoE estimation logic, which determines the QoE estimation. Once determined, the QoE estimation logic attempts to optimize (increase) the QoE estimation by adjusting a precisely selected set of ingress QC parameters, that is, either through modifications in the received SIP/SDP message prior to its forwarding back to the IMS or through profile adjustments. Details on information retrieval, and on ingress QC parameter definition and adjustment, are discussed in the following section. After the QoE optimization process is completed, a (modified) SIP/SDP request is forwarded back to the respective S-CSCF, either:
• In the form of the original or adjusted SIP/SDP request message (5. Service request), with possibly adjusted media construction and parameter values, in the case of a successful QoE optimization, or
• In the form of a communication release message (typically the SIP BYE method, e.g., 5. Service termination), in the case of an unsuccessful QoE optimization.
In both cases, if changes have been made to the ingress QC parameters acquired from the profile databases, these are updated accordingly (4b. Quality profile modification). According to the routing information in the initial request message header, the modified SIP request/BYE message is forwarded back to the communicating S-CSCF. The procedures that follow are the standard service control procedures applied to the communication request within the IMS (6. Service request), and the RACS and NASS subsystems (6. Resource reservation request), respectively. The proposed service application logic requires implementation of a Back-to-Back User Agent (B2BUA) AS entity. A B2BUA implementation is suitable for complex application logic requiring third-party call control functionalities, i.e., enabling appropriate incoming session termination and outgoing session establishment, as well as message header and body modifications; a sketch of such an SDP body adjustment follows below. In this proposal, the incoming communication request processing and adjustment procedure is limited to the first incoming request, i.e., the SIP INVITE message in the case of an originating end user. However, as denoted in Fig. 2, QoE control could also be applied to the second communication establishment negotiation cycle, i.e., SIP PRACK messaging (X. Service request, final configuration). Inclusion of SIP PRACK and SIP UPDATE message handling should be further discussed in relation to the issues of end-to-end consolidation and technology-dependent cross-layer QoS and QoE parameter translation in the NGN. Also, support for terminating end-user scenarios and SPTs for other SIP request methods (e.g., SIP MESSAGE) is left for further work.
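The SDP body adjustment referenced above can be pictured as a rewrite of the offered bandwidth before the B2BUA forwards the INVITE; the fragment below sketches this on a raw SDP string, a simplification of what a real AS would do through a full SIP/SDP stack:

```python
def cap_sdp_bandwidth(sdp: str, max_kbps: int) -> str:
    """Cap the session-level bandwidth line (b=AS:...) in an SDP body."""
    lines = []
    for line in sdp.splitlines():
        if line.startswith("b=AS:") and int(line[5:]) > max_kbps:
            line = f"b=AS:{max_kbps}"  # enforce the adjusted datarate
        lines.append(line)
    return "\r\n".join(lines)

sdp = "v=0\r\nm=video 49170 RTP/AVP 96\r\nb=AS:4500\r\na=rtpmap:96 H263-2000/90000"
print(cap_sdp_bandwidth(sdp, 2500))
```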
QOE OPTIMIZATION
Figure 3 represents the detailed architecture of the QoE controller service enabler application logic. The application logic is modular. The principal component is the decision logic, which provides the QoE estimation as a result of applying the QoE estimation algorithm to the ingress QC parameters. The QC model serves as the guiding principle for the design of the QoE estimation algorithm. Therefore, the construction of the QC model requires careful application-based design, as follows. For ingress QC parameter acquisition, SIP and Diameter protocol stacks interconnected to ISC and Sh reference point implementations are required within the service enabler AS, as well as other dedicated interfaces toward other services and applications available in the respective NGN or in specialized systems, if considered (e.g., monitoring and measurement systems, billing systems, etc.). These provide the AS with further QoE-related information, e.g., presence
information, management status reports, etc. The reference points serve as standardized NGN mechanisms for correct SIP/SDP request message reception and for ingress information retrieval through appropriate protocol message reception and processing (i.e., for SIP/SDP, Diameter, and other dedicated protocols). Since AS deployment capabilities and the available reference points are respected, the organization and interpretation of the ingress QC parameters follow the means of their acquisition by the service enabler via the available reference points in the process of communication establishment. Referring to Figs. 1 and 3, in addition to the standardized UPSF/HSS-based profiles, two groups of ingress QC parameters are of principal importance in our proposal:
1. SIP/SDP information derived from the received request message
2. Any other objective QC information readily available in the UPSF/HSS, the AS, and optionally in other applications or dedicated systems
Further groups of ingress QC parameters are:
• Available network-related QoS parameters (IP packet delay, throughput, etc.) and application-layer parameters (buffering effects, Forward Error Correction (FEC) effects, etc.)
• Parameters relating to services (content media type and service delivery media type, service rating, coding technologies, etc.)
• End-user QoE-related parameters (e.g., age, subscription profile, personal content prioritization, usefulness, expectations, etc.), resulting in the egress QoE estimation
The trigger point for QoE estimation is the received request message (Figs. 2 and 3, Service request, INVITE forwarded), while information retrieval can be accomplished in advance, periodically, or on demand (Fig. 2, 0. Service profile acquisition, and Fig. 3, Other information acquisition). The information acquisition is subject to further research, primarily addressing the issue of application logic and the respective communication setup delay optimizations related to the frequency of the information retrieval processes. The QoE estimation is achieved by acquiring and applying the ingress QC parameters to the QC model. Based on the acquired QoE estimation, the decision logic either admits the incoming request with no modifications or enforces QoE optimization. The decision is made using a QoE scale that determines appropriate and inappropriate estimated QoE levels. If inappropriate, the QoE optimization enforces a range of adjustments on a precise selection of the ingress QC parameters using strict rules and limitations, followed by repeated QoE estimation. The procedure is repeated until an adequate QoE level is achieved using the respective adjustments, or until the maximum number of estimations is reached (obeying constraints on the maximum number of estimations or their duration). Afterwards, the standard NGN service control procedure is resumed. Additionally, the respective profiles affected by modifications in the ingress QC parameters are appropriately updated.
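The decision procedure just described reduces to a bounded estimate-adjust loop. The sketch below captures its control flow, assuming a [0,100] scale with an admittance threshold of 60 (as used in the implementation section) and a hypothetical list of permitted adjustments; the estimator and adjustment functions stand in for the real QC model:

```python
THRESHOLD, MAX_ROUNDS = 60.0, 5  # assumed [0,100] scale, admittance at 60

def optimize(params: dict, estimate, adjustments):
    """Bounded estimate-adjust loop; returning None means release the session."""
    score = estimate(params)
    for adjust in adjustments[:MAX_ROUNDS]:
        if score >= THRESHOLD:
            break                    # adequate QoE level reached
        params = adjust(params)      # e.g., lower the datarate, swap the codec
        score = estimate(params)
    return (params, score) if score >= THRESHOLD else None

# Toy usage: halving the requested rate lifts the estimate above the threshold.
result = optimize({"kbps": 4500}, lambda p: 100 - p["kbps"] / 60,
                  [lambda p: {**p, "kbps": p["kbps"] // 2}])
print(result)  # ({'kbps': 2250}, 62.5)
```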
[Figure 4. NGN testbed deployment: Fraunhofer FOKUS Open IMS Core with the FOKUS home subscriber server (FHoSS) and session control server (S/I/P-CSCF), a SIP AS in B2BUA mode with an XML-driven data repository, the UCT Policy Control Framework with policy and charging rules function (PCRF) and policy control and enforcement function (PCEF), UCT and Mercuro IMS clients, and a lab intranet (Ethernet) with SSL VPN (Cisco AnyConnect VPN ASA), IPSec VPN (Juniper Netscreen), UMTS HSDPA/Bluetooth, and mobile WiMAX access.]
Precise definitions of QC model construction, QoE scale levels, ingress QC parameter selection and modification rules, maximum number of estimations, etc., are sensitive decisions and should be made in relation to real-life implementations only. Specific details for the case of a laboratory implementation are available in the following section.
IMPLEMENTATION AND VALIDATION
The proposed QoE controller service enabler has been implemented and validated within a laboratory environment. The laboratory NGN testbed is designed to provide the basic functionalities of a real-world NGN deployment, comprising the following components (Fig. 4):
• Fraunhofer FOKUS Open IMS Core
• berliOS UCT Policy Control Framework
• UCT IMS Client and Mercuro Silver IMS Client
• Laboratory NGN infrastructure with the Iskratel SI3000 solution
• Ethernet, WiMAX, and UMTS/HSPA access network deployments
The deployed NGN testbed is standards compliant, which has been verified using reference NGN test procedures and the respective methodologies and metrics defined in ETSI TS 186.008-1/2/3, and by employing a selection of test tools, e.g., Wireshark, IMS Bench SIPp, and the RightMark CPU Clock Utility. The following aspects of the proposed QoE controller service enabler have been tested and validated.
First, standards compliance of the QoE controller service enabler has been validated through message flow and functional design observations. Full correspondence has been validated against the ETSI TS 186.008-1/2/3 test methodology. Protocol message composition and message flows have been captured and studied in detail using the SIPp and Wireshark tools, confirming full compliance of the QoE controller procedures with the NGN testbed implementation as defined in [1] (Figs. 2 and 5). A standards-based service enabler is established, allowing for proactive and in-service operation. For the QoE estimation algorithm, a QC model has been established comprising 56 ingress QC parameters and 134 QC model parameters in total. The model construction follows various objective parametric end-to-end QoE modeling recommendations and principles available in the research literature, and comprises 24 non-technical parameters, the values of which are objectively definable through mappings that model subjective end-user perception, leading to the QoE estimation (Figs. 6a and 6b). Examples of such parameters are Presence, QoE suggestion, Responsiveness, Usefulness, and End-user's age. Also, the QC model incorporates cross-layer translations between transport-independent service-layer parameters and transport-dependent network-layer parameters for typical network-based QoS mechanisms, following the available standards and recommendations. Examples are translations from Application class of service, derived from Service priority, to Network class of service, and from
Figure 5. SIP/SDP INVITE protocol request message construction analysis for ingress QC parameter availability confirmation (Wireshark capture). Media priority through Transport Service class to QoS class identifier after the ITU-T Y.1541 and Y.1221 recommendations. QoE controller ingress QC parameter availability has been fully confirmed for the implemented FOKUS Home Subscriber Server (FHoSS, Fig. 4), ISC and Sh reference points, and SIP/SDP INVITE message construction in UCT IMS Client and Mercuro Silver IMS Client. An example analysis of SIP/SDP INVITE request message construction for ingress QC parameters availability inspection is represented in Fig. 5. Next, the QoE estimation algorithm has been tested for 30 communication construction scenarios with varied ingress QC parameters: SDP Media types, SDP Requested datarate, Requested codec, Transport mode, Service Bandwidth, Content types (including Content media type, Content datarate, Content codec, Content priority, etc.). Test scenarios have been taken after principal NGN standards recommendations, referenced in [1]. A portion of results, displayed in Matlab programming environment, is given in Fig. 6c, displaying effects of SDP Media types and Content types on QoE estimation. QoE scale in the QoE estimation algorithm has been set to [0,100] with admittance threshold of 60 (Fig. 6a). Ingress QC parameter adjustments are allowed on SDP Media type, SDP Requested datarate, Requested codec, Content types, Media structure, Transport mode, and Service bandwidth parameters and a series of limitation are set to protect the algorithm from performing uncontrollable or malicious effect to the communication, the end-user or the service provider. Finally, service enabler workflow construction and efficiency have been thoroughly investigat-
Finally, the service enabler workflow construction and efficiency have been thoroughly investigated. Three principal factors have been identified:
1. AS implementation efficiency
2. Introduced service enabler signaling overhead
3. Application logic operation efficiency
Again, the ETSI TS 186.008-1/2/3 methodology has been applied using the Wireshark, IMS Bench SIPp, RightMark CPU Clock Utility, and JMeter tools. The respective IMS Bench SIPp test scenario comprised 20,000 end users with an average session duration of 30 seconds. The results are as follows. Ad (1), AS implementation efficiency greatly depends on the selected AS implementation that is to be included in the NGN system. For the case of a Tomcat AS implementation, the performance results are favorable. HTTP interface performance tests for 10 sessions have shown a capacity of 30 scenario attempts per second (SAPS) with CPU load at 98 percent. Similarly, SIP interface performance tests have shown 10 SAPS at the first inadequately handled scenario (IHS) and 60 SAPS at maximum. Ad (2), the introduced service enabler signaling overhead comprises two Diameter profile acquisition procedures, each consisting of a Cx/Sh-Pull and Cx/Sh-Pull-Resp message pair, separately for the S-CSCF and the AS; one SPT triggering with the corresponding iFC and SPT records in the FHoSS and S-CSCF databases; and one service request forwarding message pair comprising two SIP INVITE message exchanges over the ISC interface. This confirms minimal induced signaling overhead and a design limited to the basic service triggering and session control procedures required for service enabler operation in the NGN.
[Figure 6. QoE estimation algorithm test results (capture in the Matlab programming environment): (a) QoE as a function of Happiness (usefulness vs. expectations) and Costliness (user income vs. price range), with partial test scenario setup parameters: available network bandwidth 30,000 kb/s, access network technology 3GPP E-UTRAN-FDD, terminal-supported codecs G.711, G.722, MPEG1, and MPEG2; (b) QoE as a function of Realtime and Duration according to codec performance; (c) QoE as a function of the media structuring of the service and its bandwidth requirements.]

Ad (3), application logic operation efficiency is implementation-dependent and is subject to the selected implementation technologies as well as the efficiency of internal and external logic operation and interface signaling. In this respect, from the NGN service environment and end-user viewpoints, the principal metric is the delay induced by service enabler operation. As the latter is of great importance and affects the resulting QoE itself while the QoE optimization service is being provided, further attention is of vital importance. In further studies, attention will be placed on two key aspects: internal QoE enabler application logic performance optimization and ingress QC parameter acquisition efficiency. Herewith, the service enabler design and operation have been successfully validated for standards compliance, in-service operation, signaling overhead efficiency, and objective application-based QoE control. A basic laboratory NGN testbed implementation suffices and provides all required functionalities, mechanisms, and procedures. From the presented implementation results and findings, the conclusion has been drawn that the QoE controller proposal is designed in compliance with the requirements for deployment in real-world NGN environments.
CONCLUSIONS AND FUTURE WORK
In this article, a QoE controller service enabler solution has been presented. The proposal represents a novel application-based approach to in-service QoE control. The QC serves as the guiding principle for the design of the QoE estimation logic, enabling end-to-end QoE modeling and appropriate quality-related configuration adjustments for QoE optimization purposes. By employing a standards-compliant service enabler and the respective NGN service control and triggering principles, QoE control is provided as an enabling service, triggered in the communication establishment process. The testbed implementation results have successfully confirmed that the proposal is efficiently designed toward full standards compliance, favorable signaling overhead efficiency, and objective application-based QoE control principles, therewith presenting a non-intrusive proposal for in-service QoE control for any NGN-based service environment. There are many interesting research challenges that require further attention. First, the design of an appropriate QoE estimation algorithm and the respective QoE modeling, estimation, and
optimization are highly complex tasks that should be addressed in further work. Specific attention should be placed on end-to-end consolidation and transport-dependent cross-layer QoS and QoE parameter translation beyond the NGN service environment. Also, further work is required to validate the objective QoE estimation and optimization effects against selected methodologies involving end-user feedback in real-world implementations. This can be achieved through applied use cases in relation to other value-added services available to end users. Next, the presented service enabler proposal requires tight coupling with the respective NGN service environment. For this reason, any further enhancements and optimizations of this proposal are heavily dependent on the respective real-life NGN deployments and are subject to case-by-case design. Herein, business model effects on the service construction should be analyzed and discussed in detail, also addressing the potential of the proposal to serve as a value-added service. Nonetheless, the proposal carries potential as a cost-efficient application-based quality control solution in place of existing complex and dedicated admission control solutions in transient and next generation systems, which needs further attention.
ACKNOWLEDGMENTS
The research and development work was supported by the Slovenian Research Agency.
REFERENCES
[1] ITU-T Rec. Y.2011, "Next Generation Networks — Frameworks and Functional Architecture Models: General Principles and General Reference Model for Next Generation Networks," 2004.
[2] P. Reichl, "From 'Quality-of-Service' and 'Quality-of-Design' to 'Quality-of-Experience': A Holistic View of Future Interactive Telecommunication Services," Proc. 15th Int'l. Conf. Software, Telecommun. and Comp. Networks, 2007, pp. 1–6.
[3] A. Kos, M. Volk, and J. Bester, "Quality Assurance in the IMS-Based NGN Environment," Handbook of Research on Wireless Multimedia, N. Cranley and L. Murphy, Eds., Information Science Reference, 2009, pp. 240–57.
[4] ITU-T Rec. Y.2111, "Next Generation Networks — Quality of Service and Performance: Resource and Admission Control Functions in Next Generation Networks," 2006.
[5] M. Fiedler, T. Hossfeld, and P. Tran-Gia, "A Generic Quantitative Relationship between Quality of Experience and Quality of Service," IEEE Network, vol. 24, no. 2, Mar./Apr. 2010, pp. 36–41.
[6] P. Brooks and B. Hestnes, "User Measures of Quality of Experience: Why Being Objective and Quantitative is Important," IEEE Network, vol. 24, no. 2, Mar./Apr. 2010, pp. 8–13.
[7] S. R. Lima, P. Carvalho, and V. Freitas, "Admission Control in Multiservice IP Networks: Architectural Issues and Trends," IEEE Commun. Mag., vol. 45, no. 4, Apr. 2007, pp. 114–21.
[8] E. Z. Tragos et al., "Admission Control for QoS Support in Heterogeneous 4G Wireless Networks," IEEE Network, vol. 22, no. 3, May/June 2008, pp. 30–37.
[9] A. Takahashi, D. Hands, and V. Barriac, "Standardization Activities in the ITU for a QoE Assessment of IPTV," IEEE Commun. Mag., vol. 46, no. 2, Feb. 2008, pp. 78–84.
[10] M. Volk et al., "Quality-Assured Provisioning of IPTV Services within the NGN Environment," IEEE Commun. Mag., vol. 46, no. 5, May 2008, pp. 118–26.
BIOGRAPHIES
JANEZ STERLE ([email protected]) graduated in 2003 from the Faculty of Electrical Engineering, University of Ljubljana, where he is currently pursuing a postgraduate degree. Since 2002 he has been working in the Laboratory for Telecommunications. His educational, research, and development work is oriented toward the design and development of next generation networks and services. His current research areas include next generation Internet protocols, network security, traffic analysis, QoE modeling, QoS measurement, and the development and deployment of new integrated services in fixed and mobile networks.
MOJCA VOLK ([email protected]) was awarded her Ph.D. by the Faculty of Electrical Engineering, University of Ljubljana, in 2010. She is currently with the Laboratory for Telecommunications as a researcher. Her main educational and research interests are next generation networks, protocols and mechanisms, and the design and deployment of broadband IP multimedia services within service delivery platforms. Her current work focuses on admission control and quality assurance in next generation multimedia service environments.
URBAN SEDLAR ([email protected]) was awarded his Ph.D. by the Faculty of Electrical Engineering, University of Ljubljana, in 2010. He is currently working as a researcher at the Laboratory for Telecommunications. His educational, research, and development work focuses on Internet technologies and protocols, quality of service in fixed and wireless networks, web-based services and applications, and multimedia service architectures, as well as the development and realization of integrated multimedia services.
JANEZ BESTER ([email protected]) is a professor and head of the Laboratory for Telecommunications at the Faculty of Electrical Engineering, University of Ljubljana. His work focuses on the planning, realization, and management of telecommunications systems and services; the implementation and application of information technologies in education; and economic opportunities for knowledge-based societies.
ANDREJ KOS ([email protected]) is an assistant professor at the Faculty of Electrical Engineering, University of Ljubljana. He has extensive research and industrial experience in the analysis, modeling, and design of advanced telecommunications elements, systems, and services. His current work focuses on managed broadband packet switching and next generation intelligent converged services.
NEW CONVERGED TELECOMMUNICATION APPLICATIONS FOR THE END USER
A Context-Aware Service Architecture for the Integration of Body Sensor Networks and Social Networks through the IP Multimedia Subsystem Mari Carmen Domingo, Barcelona Tech University
ABSTRACT
In this article a new context-aware architecture is proposed for the integration of body sensor networks and social networks through the IP Multimedia Subsystem. Its motivating application scenarios are described, and the benefits and main research challenges for efficient communication using the proposed architecture are outlined.
INTRODUCTION
Next-generation networks (NGNs) are characterized by one common network core, which is independent of the physical layers and the access network technologies and is responsible for the session management operations. On top of this core layer, there is a unified service layer. In this article, the integration of body sensor networks (BSNs) and social networks through an NGN has been analyzed. A context-aware service architecture has been proposed so that authorized social network members can monitor real-time BSN data of other users. BSNs consist of a set of wearable or implanted sensors that monitor the vital signs or movements of the human body in a ubiquitous computing environment. Social networks are social structures made of individuals connected by one or more specific types of interdependency. The selected generic NGN platform for the integration of BSNs and social networks is the IP Multimedia Subsystem (IMS) [1]. IMS is a key component of third-generation (3G) networks. It consists of a horizontal control and service layer developed on top of the IP layer. Multimedia services are delivered to end users over any device and anywhere. IMS is a suitable platform for developing a highly scalable framework for access to information stemming from BSNs. IMS has some characteristics that make it well suited for selection as a transport
platform. All services offered to a particular customer can access the same subscriber database. Therefore, a consistent set of personalized services is offered by service providers to their customers, independent of the access media used. As a result, heterogeneous sensor networks using different access technologies can be integrated easily. It also allows service providers to roll out new applications and services faster by reusing well-defined common functions such as authorization, authentication, service provisioning, billing, group management, and presence. BSNs can use the IMS as a transport platform to access the web services of social networks. Social networks can also use the services of the IMS to run applications based on BSN data. The integration of the sensing capabilities of BSNs and social networks in the IMS facilitates more intelligent and autonomous applications that depend on or exploit knowledge about people, including their identities, updated vital signs, or current locations in the physical world. It provides new and personalized services to end users such as pervasive gaming, enhanced emergency services, and wireless healthcare applications. These context-aware applications are able to adapt to the changing communications environment and needs of the user. In [2] a framework to pass sensor data from a BSN to a social network is introduced. However, the proposed solution does not take advantage of a ubiquitous NGN for the integration. To the best of our knowledge, this is the first article that analyzes the integration of BSNs and social networks through the IMS. The article is structured as follows. In the following section we discuss the proposed context-aware service architecture. Next, its application scenarios are described. Later, the benefits and main research challenges of the proposed architecture are introduced. Finally, the article is concluded.
CONTEXT-AWARE SERVICE ARCHITECTURE
The proposed context-aware service architecture is shown in Fig. 1. In this architecture, multimedia services are accessed by the user from several wireless devices via an IP or cellular network, based on the vital signs monitored in a BSN. The underlying architecture can be divided into four layers, whose basic functionalities are summarized as follows:
Device layer: In BSNs the vital signs of a person are monitored and sent to a gateway located on the body surface, which forwards this data to the monitoring station. The monitoring station is a wireless device such as a cell phone, tablet PC, smartphone, or PDA that is IMS capable and able to connect to the IMS infrastructure using the access network. The sensors communicate with the gateway using ultra-wideband (UWB) or the new standard under development, IEEE 802.15.6. Bluetooth or ZigBee can be used to forward the data from the gateway to the monitoring station.
Access layer: This layer is responsible for the access of the monitoring stations to the radio channel using transmission media including WLANs, WiMAX, general packet radio service (GPRS), and wideband code-division multiple access (WCDMA).
Control layer: This layer controls the authentication, routing, and distribution of IMS traffic. At its core is the call session control function (CSCF), which refers to the Session Initiation Protocol (SIP) servers or proxies used to process signaling packets in the IMS. There are two different types of IMS databases: the home subscriber server (HSS) and the subscriber location function (SLF).
Service layer: Attached to the core network are the servers, which are used to store data, execute applications, or provide services based on the data.
Next, we describe in greater detail the components of each layer.
DEVICE LAYER
Context-aware services are designed to adapt to the current conditions of users and provide them the right services at the right time. Accurate context information should be provided to these services so that they work efficiently. Context-aware sensing in a BSN is essential in pervasive monitoring to interpret the physiological signals properly, taking into account the environmental factors and the evolution of the body state. Environmental factors such as season, meteorological conditions, and time of day help determine the context of the individual. Vital signs of the human body such as breathing are affected by several factors such as sex, age, and frequent exercising or lack thereof. The body state can also experience changes based on a person's current activity (e.g., the heart rate variability under stress experienced by soldiers on the battlefield and by firefighters). Heart rate variability is derived from the electrocardiogram (ECG) but cannot be interpreted reliably without context information related to the patient's activity, since changes in the heart rate may be due to motion rather than to an imminent seizure. Therefore, context-aware sensing is essential to relate sensor
[Figure 1. Proposed architecture: body sensor networks (sensor nodes and a gateway, connected via UWB or IEEE 802.15.6) forward data over Bluetooth or ZigBee to monitoring stations (cell phone, tablet PC, smart phone, PDA) in the device layer; the access layer comprises WLAN, WiMAX, and GPRS/WCDMA with an IP-SIP gateway; the control layer comprises the P-CSCF, I-CSCF, and S-CSCF with the HSS and SLF databases; and the service layer comprises presence, vital sign monitoring, and group list management application servers, web service databases, and a Web 2.0 gateway toward social networks (Facebook, PatientsLikeMe, etc.).]
readings to the circumstances and interpret health emergencies accurately. Orthostasis is a disorder characterized by a sudden drop in blood pressure in patients going from a sitting to a standing position. It can be detected using the accelerometer (change in position), ECG, beat-to-beat (continuous blood pressure), and pulse oximeter (blood oxygen level) sensors of BSNs. The context-aware sensed information is sent to the gateway located on the body's surface, which forwards it to the monitoring station. An algorithm is applied to this data to deduce context for accurate episode detection. Different approaches can be applied for this purpose [3]: artificial neural networks, Bayesian networks, and hidden Markov models.
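As a toy illustration of such context-aware episode detection (far simpler than the neural network, Bayesian, or hidden Markov approaches cited above), the rule below flags a possible orthostatic episode when a sit-to-stand posture change from the accelerometer coincides with a blood pressure drop; the thresholds are illustrative assumptions, not validated clinical criteria:

```python
def orthostasis_suspected(posture_before: str, posture_after: str,
                          systolic_drop_mmhg: float, window_s: float) -> bool:
    """Flag a possible orthostatic episode from fused BSN context.

    A sit-to-stand transition (accelerometer) must coincide with a
    blood pressure drop; the thresholds below are illustrative only.
    """
    stood_up = posture_before == "sitting" and posture_after == "standing"
    return stood_up and systolic_drop_mmhg >= 20.0 and window_s <= 180.0

print(orthostasis_suspected("sitting", "standing", 25.0, 60.0))  # True
```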
ACCESS LAYER
To avoid overloading the IMS core network, data should be aggregated, abstracted, and filtered beforehand [4]. Since the gateway of the BSN is not well suited to perform these functions (low processing capability, limited battery capacity), the monitoring station is selected for this task.
Figure 2. Monitoring station architecture.
In [5] the monitoring station architecture is described (Fig. 2). It consists of a connectivity layer and an abstraction layer. The connectivity layer has an interface for the BSN and another for the IMS core network. In the abstraction layer, sensor data should be processed, formatted, and sent to the application servers of the IMS core network. The IMS registration, authentication, and authorization procedures, which control security and privacy, are first executed at the monitoring station. Afterwards, the subscription authorization policies between the monitoring station and the application server must be established. These policies have been previously defined and stored in the policies repository of the monitoring station. They depend on the application and indicate which information should be published regularly and which should be published based on emergency events. The acquired data is processed at the monitoring station, where data aggregation and consistency checking techniques are applied. Afterwards, this data is stored in the information repository. Next, it is published based on the publication policies defined at the monitoring station, and it must be represented in an IMS-compatible format. RPID has been selected as an open XML-based standard for the storage and exchange of exhaustive presence information of the user (activities, mood, location, etc.) with the presence server. OpenXDF has been selected as an open XML-based standard for the digital storage and exchange of time-series physiological signals and metadata between the monitoring station and the vital sign monitoring server. Thus, the data is formatted according to these standards, consulting the ID mapping tables between sensors and IMS. The resulting XML documents are sent to their respective application servers to be published. The IMS SIP standard is selected for the communication between the monitoring station
and the application servers. In addition, the monitoring station can receive a publication trigger from the application servers asking for sensor data to be published.
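As a rough illustration of this publication step, the sketch below builds a simplified RPID/PIDF-style presence body such as the monitoring station could carry in a SIP PUBLISH request addressed to the presence server; the element names (namespaces omitted) and the SIP identity are assumptions rather than the full RPID schema.

```python
# Build a simplified presence document for publication; a real RPID body
# carries proper PIDF namespaces, which are omitted in this sketch.
import xml.etree.ElementTree as ET

def build_presence_document(entity, activity, mood):
    presence = ET.Element("presence", {"entity": entity})
    person = ET.SubElement(presence, "person")
    ET.SubElement(person, "activity").text = activity  # RPID-style activity
    ET.SubElement(person, "mood").text = mood          # RPID-style mood
    return ET.tostring(presence, encoding="unicode")

body = build_presence_document("sip:patient1@ims.example.com",
                               "sleeping", "calm")
# The monitoring station would place `body` in the payload of a SIP
# PUBLISH (Event: presence); emergency publications follow the same path
# but are triggered by the stored publication policies.
print(body)
```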
CONTROL LAYER
The control layer is a common horizontal layer that isolates the access network from the service layer (Fig. 1). At this layer, the CSCF is the SIP server responsible for processing SIP signaling in the IMS. The CSCFs can be divided into three categories: the proxy CSCF (P-CSCF), the interrogating CSCF (I-CSCF), and the serving CSCF (S-CSCF) [6]. The P-CSCF is the first point of contact between the monitoring station and the IMS network; it validates and forwards requests from the monitoring station, and then processes and forwards the responses to the monitoring station. The P-CSCF also authenticates the user and verifies the correctness of the SIP requests. The I-CSCF is located at the edge of the administrative domain. It retrieves user information from the HSS and SLF databases, and routes the SIP request to the appropriate destination (typically an S-CSCF). The S-CSCF is the central node of the control layer; it acts as a SIP server but also as a SIP registrar. It maintains a binding between the IP address of the user's terminal (the monitoring station) and the user's SIP address of record (public user identity). It also enforces the network operator's policies and determines whether the SIP signaling should traverse one or more application servers (to trigger the appropriate services) on its way to the final destination. The HSS database stores the subscriber profile [6]. The SLF database maps users' addresses to HSSs when the high number of users requires more than a single HSS.
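The registrar role of the S-CSCF can be pictured with a toy binding table; the sketch below is an illustration under invented identities, not 3GPP code.

```python
# Minimal registrar sketch: bind a public user identity (address of
# record) to the contact URI of the monitoring station at registration.
class Registrar:
    def __init__(self):
        self.bindings = {}  # address of record -> (contact URI, expiry)

    def register(self, address_of_record, contact, expires=3600):
        self.bindings[address_of_record] = (contact, expires)

    def route(self, request_uri):
        # Return the registered contact, or None so that the I-CSCF/HSS
        # lookup path handles identities unknown to this S-CSCF.
        binding = self.bindings.get(request_uri)
        return binding[0] if binding else None

scscf = Registrar()
scscf.register("sip:patient1@ims.example.com", "sip:192.0.2.10:5060")
print(scscf.route("sip:patient1@ims.example.com"))
```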
SERVICE LAYER
This is the top layer of the IMS service architecture. Many multimedia services can be offered at this layer. The services are run by application servers, which host and execute the services and
provide the interface to communicate with the control layer using the SIP protocol. The horizontal architecture of IMS provides a set of common functions, named service enablers, such as the IMS presence service and the IMS group list management service, which can be used by several applications from any device, anywhere. In this article, the SIP presence server and the SIP group list management server are used, and the novel SIP vital sign monitoring server is introduced (Fig. 1). The SIP presence server is responsible for collecting, managing, and distributing the real-time availability and location of a user. The recognized context information of BSN users is sent through IMS to the presence server for storage and exchange. Users can both publish their presence information and subscribe to the service in order to receive notifications of changes by other users. The members of a social network can subscribe to receive current presence information about another user in their circle of friends. A notification is received when the presence status, such as the real-time location, activity, and status of the individual, changes. This information is very useful to properly interpret changes in the physiological state of a person. For example, a rapid increase in a patient's heart rate can be symptomatic of a heart attack, or it can simply be related to a change in physical activity, for example, if the patient starts jogging. The SIP group list management server allows users or administrators to manage (i.e., create, modify, delete, and search) the network-based group definitions and the associated lists of members. The SIP vital sign monitoring server stores the vital signs of the people being monitored in a database. This server allows users to register and monitor the vital signs of another person who has authorized them. These users can be the registered members of a social network or only a selected group of them. It also allows registered authorized members (physicians, emergency services, family members, and caregivers) to receive emergency notifications if the vital signs of an individual show abnormal values. The IMS Web 2.0 gateway (Fig. 1) interconnects the IMS and Web 2.0 (social network) domains. Using representational state transfer (REST), it is possible to represent the sensory data by unique HTTP uniform resource identifiers (URIs) such as http://ims.provider.com/sensorreadings/sensor1. This data can be accessed by social networks using the HTTP GET method: the IMS Web 2.0 gateway translates the request into a SIP SUBSCRIBE message and forwards it to the vital sign monitoring server. The server replies with a SIP NOTIFY message containing the URI and information about the sensor readings. Similarly, services in the web domain advertise their resources using the HTTP POST method through the IMS Web 2.0 gateway, which translates them into SIP PUBLISH requests and sends them to the appropriate application server. Consider an example where user A wants to invite user B to join a group on Facebook. We propose a web service containing web methods that return a dataset after querying a web services database of the friends of user A.
As the dataset related to user B is returned, user A can send user B a request asking whether he/she would like to join the group. Facebook can also publish this and other services through the IMS architecture using HTTP POST messages. The IMS Web 2.0 gateway translates them into SIP PUBLISH requests and sends them to the corresponding application servers to be published.
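A compact sketch of the HTTP-to-SIP mapping performed by the IMS Web 2.0 gateway is shown below; the message formats are heavily simplified assumptions, and only the two method translations described above are modeled.

```python
# Gateway sketch: HTTP GET on a sensor URI -> SIP SUBSCRIBE toward the
# vital sign monitoring server; HTTP POST -> SIP PUBLISH. The request
# lines below are simplified and omit mandatory SIP headers.
def http_to_sip(method, http_uri, body=""):
    resource = http_uri.rsplit("/", 1)[-1]        # e.g. "sensor1"
    target = f"sip:{resource}@ims.provider.com"   # assumed addressing rule
    if method == "GET":    # web client wants to read sensor data
        return f"SUBSCRIBE {target} SIP/2.0\r\nEvent: presence\r\n\r\n"
    if method == "POST":   # web service advertises a resource
        return f"PUBLISH {target} SIP/2.0\r\nEvent: presence\r\n\r\n{body}"
    raise ValueError("unsupported method")

print(http_to_sip("GET", "http://ims.provider.com/sensorreadings/sensor1"))
```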
APPLICATION SCENARIOS
Next, several application scenarios that benefit from the integration of BSNs and social networks through the IMS are introduced.
HEALTHCARE SCENARIO
This scenario refers to the sensing of the physiological signals of a patient to monitor his/her health. We highlight the following application.
Patient Social Networks — In patient social networks, patients exchange information about their diseases. The patient network PatientsLikeMe [7] helps people find data concerning their diseases. Patients are able to compare symptoms and discover which treatments work best for other patients like them (Fig. 3). They are also able to track their progress with tools that analyze the effects of new treatments on their bodies, such as blood levels of substances, symptoms, side effects, etc. Currently, patients enter their symptoms and drug doses by hand. However, the information concerning the vital signs and drug doses of patients could be sent automatically by BSNs in real time. The use of BSNs is a more practical and accurate way to obtain such data. In this application, the SIP group list management server is used to register the social network members in the IMS framework. When a patient using a BSN is available, this information is conveyed from his/her monitoring station to the presence server in the IMS network, which publishes it. The published information is accessed by the vital sign monitoring server using SIP via subscription. In this way, the registered social network patients start receiving the vital signs of the patient being monitored.
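The subscription flow just described can be condensed into a few lines; the class, the SIP URIs, and the reading format below are invented for illustration, with a print call standing in for the SIP NOTIFY transaction.

```python
# Fan-out sketch: authorized group members subscribe to a patient's
# vital signs and are notified whenever the monitoring station publishes.
class VitalSignServer:
    def __init__(self):
        self.subscribers = {}  # patient URI -> set of authorized members

    def subscribe(self, patient, member):
        self.subscribers.setdefault(patient, set()).add(member)

    def publish(self, patient, reading):
        for member in self.subscribers.get(patient, ()):
            print(f"NOTIFY {member}: {patient} -> {reading}")  # SIP NOTIFY

server = VitalSignServer()
server.subscribe("sip:patient1@ims.example.com", "sip:doctor@ims.example.com")
server.publish("sip:patient1@ims.example.com", {"heart_rate": 96})
```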
EMERGENCY SCENARIO
This scenario refers to the monitoring of people exposed to critical environments or life-critical situations. The data is usually very sensitive, and disseminating it to a number of interested parties (emergency services, family members, etc.) in real time is crucial. We highlight the following application.
Disaster Relief Operation — In disaster relief operations (Fig. 4), coordination between care provider teams is essential to provide proper trauma care. The members of the distributed response team, such as the emergency management teams that arrive first at the disaster scene, paramedics, and hospital crews, will belong to the same social network, which is managed by the SIP group list management server. The registered professionals of the disaster relief social network will be able to log on to get real-time information about the disaster updated by the emergency management team, including details about
Figure 3. Statistics related to the use of the drug Riluzole by amyotrophic lateral sclerosis (ALS) patients. Source: PatientsLikeMe, March 2010.
victims and the types and severity of injuries. The bodies of injured people will be outfitted with BSNs to monitor their vital signs and transmit this information in real time to the registered social network users authorized by the SIP vital sign monitoring server in the IMS network. This way, an accurate perspective on the number and state of the casualties will be maintained, which will result in better coordination between the paramedics and the hospitals. Notification messages can also be sent to family members and friends of the injured persons belonging to the same traditional social networks (e.g., Facebook) to inform them of their condition.
ENTERTAINMENT SCENARIO
This scenario refers to the development of applications to amuse people. We highlight the following application.
Game Systems — Body motion is captured for video games via a BSN. The BSN is used to monitor the real-time body motion of the player to control his/her avatar. The gaming experience can be further improved by using force feedback actuators to determine the swiftness of the player's motion and replicate it in the avatar's motion. The integration of social networks and BSNs through IMS enables online game players to join a social network, interact with the avatars of other users, and play different online games together. The social network of players can be constituted only of friends (such as on Facebook) or bring together people from around the world. The IMS game server is used to register the players, and this information is managed by the group list management server. The data about the movements of an avatar, controlled by the readings of the BSN of a player, is sent from the monitoring station to the SIP presence server, which is responsible for publishing the availability of the player. The published information is accessed by the vital sign monitoring server using SIP via subscription. This way, the authorized social network members start receiving information about the avatar movements of other players and can play together. Gaming can also be therapeutic by improving a patient's physical health [8]. Motion sensor nodes in a BSN can be attached to a cap, the hands, or other body limbs to monitor head and limb movements. This information is sent to a monitoring station and finally to a middleware
component, which filters the data and converts it to 3D data serializations. The motion of the avatar is controlled using language optimizations. Such games can be used for the physical rehabilitation of patients experiencing transient disabilities. In this case, the game requires the patient to perform particular movements (e.g., to stretch his/her arms over the head to collect an object) to progress to the next level. Patients perform these movements for the purpose of entertainment while improving their physical condition. Their progress can be easily accessed by authorized social network members such as physicians. The data acquisition tool manager of the social network enables doctors to generate different reports and graphs, which are very useful for analyzing patients' physical performance.
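As an illustration of this middleware step (an assumed pipeline, not the system of [8]), the sketch below smooths noisy limb-sensor samples with a moving average and serializes them as simple 3D rotation frames for the avatar.

```python
# Filter raw pitch readings and emit one avatar animation frame per
# filtered sample; only a single joint is animated in this sketch.
import json

def moving_average(samples, window=3):
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def to_avatar_frames(pitch_degrees):
    return json.dumps([{"joint": "elbow", "rotation": [p, 0.0, 0.0]}
                       for p in pitch_degrees])

raw = [10.0, 42.0, 38.0, 90.0, 87.0]  # noisy elbow pitch, in degrees
print(to_avatar_frames(moving_average(raw)))
```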
SPORTS SCENARIO
This scenario refers to the development of applications related to sports. We highlight the following application.
Sports Training — The movements of sportsmen/sportswomen are monitored and analyzed for training purposes using different motion sensors (Fig. 5). The trainer(s), the team of physiotherapists, doctors, etc., are registered users of the sports training social network. They can analyze the sportsman/sportswoman's progress or monitor his/her health, depending on their profession. They can also analyze which muscles sportsmen/sportswomen work most and schedule appropriate exercises for future training sessions.
BENEFITS OF THE INTEGRATION
Next, the benefits of using IMS as a transport platform to integrate BSNs and social networks are discussed. Social networks exploit real-time vital sign monitoring information to add new value to existing social networking; this data is available through different access networks. Social networks benefit from this context-aware service architecture because service providers can easily reuse the IMS service resources. IMS service enablers can be used by social networks to create services only once rather than multiple times for different access types. As a result, the cost and deployment time of new services such as vital sign monitoring are reduced. Social networks can also benefit from
Figure 4. Disaster relief application.
the robust IMS security mechanisms to mitigate security risks, a fundamental issue when personal information of individuals is being recorded. BSN users benefit from this context-aware architecture because they can not only create but also access IMS services according to their needs. For example, in the case of an oncoming stroke, an automatic ambulance call is set up to the nearest public safety answering point (PSAP), which is essential in emergency scenarios. They also enjoy the benefits of pervasive computing, since social network services are available anytime and anywhere. Another important advantage is that users will be able to send data to several members of different social networks simultaneously, a feature not currently supported. In addition, many novel multimedia services for end users will be developed as a result of the integration of BSNs and social networks through IMS, and only one subscription is required to enjoy these services.
RESEARCH CHALLENGES
Next, the research challenges to be addressed are introduced. The acquisition of real-time context-aware data using BSNs is still a challenging issue due to the diversity of context-aware environments, the range of physiological conditions, and the dynamic nature of these networks. A difficult aspect is how to handle the mobility of BSN and social network users. Another important challenge is the implementation of secure service delivery, since sensor data in the BSN is updated continuously in a
dynamic environment, and the inability to securely store and transfer authentic data can prevent the patient from being treated effectively [9]. Privacy is another important issue. It must be strictly guaranteed that sensitive and private patient-related data is accessed only by authorized users [9]. Data should be authenticated before being used, to prevent false data injection and denial-of-service (DoS) attacks. A fine-grained access policy should be defined to specify different access privileges for different users. Since medical data shall not be disclosed to telecommunication operators and social network managers, it should be stored in an anonymous format. Data anonymization techniques include the obfuscation of the IP source address and anonymous authentication schemes. Scalability is important to guarantee that sensor data is available to a large number of subscribers in the social network. Various types of information (spatial, physiological, and environmental sensor data) should be accessible through IMS in the social network to detect the availability, location, and physiological state of a BSN user. Data aggregation techniques such as semantic fusion are required to efficiently eliminate redundant information in the IMS network. Another key challenge is the provision of reliability in this architecture. Emergency applications are delay- as well as loss-sensitive. If alert packets are lost or corrupted, the functioning of the system is seriously compromised. In addition, BAN devices have limited memory resources to store unacknowledged data. As a result, robust error detection and correction schemes and efficient acknowledgment and retransmission mechanisms have
to be defined to provide reliability [10]. It is necessary to design and implement fail-safe, fault-tolerant algorithms for emergency applications, and to analyze how IMS can guarantee the different quality of service parameters. Finally, evolving from a user-generated content network to a user-generated services network remains a difficult but interesting challenge. Currently, people join communities and use Web 2.0 to create content online, which is shared with other users according to their interests. User-centric service creation enables users not only to generate content but also to create services, which can be shared in social networks. BSN users can stimulate their creativity by generating innovative new services as well. Web 2.0, SIP, and peer-to-peer (P2P) technologies allow for easy integration of these new services through IMS. For example, BSN users can generate mashups, which are applications that interconnect smaller web services to create new ones. This way, innovative personalized context-aware services become available. The servers that provide mashup service components are linked as a P2P network. Online help and easy-to-use editing tools in a ubiquitous environment are also required to assist the user in the creation process [11].
Figure 5. Training application.
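For the privacy challenge above, one possible shape of the anonymization step is sketched below: the patient identity is replaced with a keyed pseudonym before records leave the trusted healthcare domain. The key management, field names, and truncation length are assumptions for illustration.

```python
# Pseudonymize a vital-sign record with an HMAC so that operators and
# social network managers see a stable token instead of the identity.
import hashlib
import hmac

SECRET_KEY = b"held-by-the-healthcare-provider"  # assumed key handling

def pseudonymize(record):
    token = hmac.new(SECRET_KEY, record["patient"].encode(),
                     hashlib.sha256).hexdigest()
    return {"patient": token[:16],  # stable pseudonym, identity stripped
            "heart_rate": record["heart_rate"],
            "timestamp": record["timestamp"]}

print(pseudonymize({"patient": "sip:patient1@ims.example.com",
                    "heart_rate": 96,
                    "timestamp": "2011-01-05T03:12:00Z"}))
```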
CONCLUSION
In this article a context-aware service architecture for the integration of BSNs and social networks through IMS has been introduced. The relevant application scenarios and the main benefits of this architecture have been described, and the research challenges have been surveyed. These research issues remain wide open for future investigation.
REFERENCES
[1] 3GPP Tech. Spec., "IP Multimedia Subsystem (IMS) (Release 7)," 2006; http://www.3gpp.org/ftp/Specs/html-info/23228.htm.
[2] M. A. Rahman et al., "A Framework to Bridge Social Network and Body Sensor Network: An E-Health Perspective," Proc. ICME '09, June 2009.
[3] B. T. Korel and S. G. M. Koo, "Addressing Context Awareness Techniques in Body Sensor Networks," Proc. AINAW '07, May 2007.
[4] A. Gluhak et al., "e-SENSE Reference Model for Sensor Network in B3G Mobile Communications Systems," Proc. IST Mobile Summit '06, June 2006.
[5] M. El Barachi et al., "The Design and Implementation of a Gateway for IP Multimedia Subsystem/Wireless Sensor Networks Interworking," Proc. VTC '09, Apr. 2009.
[6] G. Camarillo and M. A. García-Martín, The 3G IP Multimedia Subsystem (IMS): Merging the Internet and the Cellular Worlds, Wiley, 2004.
[7] PatientsLikeMe; http://www.patientslikeme.com.
[8] P. Fergus et al., "A Framework for Physical Health Improvement Using Wireless Sensor Networks and Gaming," Proc. Pervasive Health '09, Apr. 2009.
[9] M. Li, W. Lou, and K. Ren, "Data Security and Privacy in Wireless Body Area Networks," IEEE Wireless Commun., vol. 17, no. 1, Feb. 2010, pp. 51–58.
[10] M. Patel and J. Wang, "Applications, Challenges, and Prospective in Emerging Body Area Networking Technologies," IEEE Wireless Commun., vol. 17, no. 1, Feb. 2010, pp. 80–88.
[11] J. Sienel et al., "OPUCE: A Telco-Driven Service Mashup Approach," Bell Labs Tech. J., vol. 14, no. 1, 2009, pp. 203–18.
BIOGRAPHY
MARI CARMEN DOMINGO ([email protected]) received her Lic. degree and Ph.D., both in electrical engineering, from Barcelona Tech University, Spain, in 1999 and 2005, respectively. She currently works as an assistant professor in the Electrical Engineering Department of the same university. Her current research interests are in the areas of social networks, IMS-based services, and body sensor networks.
NEW CONVERGED TELECOMMUNICATION APPLICATIONS FOR THE END USER
Visualization of Data for Ambient Assisted Living Services Maurice Mulvenna, William Carswell, Paul McCullagh, Juan Carlos Augusto, and Huiru Zheng, University of Ulster Paul Jeffers, Fold Housing Association Haiying Wang and Suzanne Martin, University of Ulster
ABSTRACT
Ambient assisted living (AAL) services that provide support for people to remain in their homes are increasingly being used in healthcare systems around the world. Typically, these AAL services provide additional information through location-awareness, presence-awareness, and context-awareness capabilities, arising from the prolific use of telecommunications devices, including sensors and actuators, in the home of the person receiving care. In addition, there is a need to provide abstract information, in context, to local and remote stakeholders. There are many different viewing options utilizing converged networks, and the resulting explosion in data and information has created a new problem, as these new AAL services struggle to convey meaningful information to different groups of end users. The article discusses visualization of data from the perspective of the needs of the differing end user groups, and how algorithms are required to contextualize and convey information across location and time. In order to illustrate the issues, current work on night-time AAL services for people with dementia is described.
INTRODUCTION
Demographic ageing is the term given to the anticipated huge increase in the number of older people in our society over subsequent generations. It is accepted that our health services cannot continue to provide hospital-based care for this growing cohort of older people. Increasing numbers of people will experience a new, evolved health service provision in their later years, which will be technologically driven and will provide care services, or access to care services, in the person's own home. Such services could include,
for example, activity reminders for people with mild dementia, or medication reminders for people complying with complex medication regimes. These new ambient assisted living (AAL) services, underpinned by home-based assistive technologies, offer data streams that provide rich sources of useful knowledge about the behavior and wellbeing of people accessing care, either singly or in aggregated form. The deployed information and communications technology (ICT) also facilitates new modes of interaction between the person at home, their family, carers, and the other healthcare professionals involved. AAL services that provide support for people to remain in their homes are increasingly being used in healthcare systems around the world. Typically, these AAL services offer additional information through activity-sequence-awareness, location-awareness, presence-awareness, and context-awareness capabilities, arising from the pervasive use of telecommunications devices, including processors, sensors, and actuators, in the home of the person receiving care. There are many AAL endpoints now utilizing and feeding information to converged networks, and the resulting explosion in data and information has created a new problem: these new converged AAL services struggle to convey meaningful and timely information to divergent groups of end users with different information needs. In this article AAL technology and services are explained, before examples of the services in commercial systems and in academic research are explored. Visualization in AAL services is then described from the needs perspective of a person availing of the support — the care recipient — explaining how visualization must convey information across location and time. In order to illustrate the challenges for inclusive design, current research on night-time AAL services for people with dementia is then described in a case study.
Care recipient: User of the services; needs to be able to interact with the services to gain assistance and to provide feedback where needed.
Formal carers on-site: Interact with the services in tandem with the care recipient to assess progress in the care regime; interact to add information into the system, for example, to add medication reminders.
Formal carers off-site: As for formal carers on-site, but may also interact remotely to update information and interact with services that measure or provide care to more than one home.
Telecare/telehealth remote monitors off-site: Managing remote care provision; providing care triage in decision support for interventions; monitoring relatively large numbers of care recipients.
Informal carers on-site: Accessing information on the health and wellbeing of the care recipient; working with the care recipient to understand his/her health and wellbeing and communicate it to him/her.
Informal carers off-site: Remote access to monitor key information on the care recipient, for example, a family member concerned about the quality of life of the care recipient.
Technical maintenance: Deploying tools to assess the quality of the data being gathered, decision support, and reporting on metrics from the homes of care recipients; detection of errors.
Table 1. AAL service users and their roles.
DEFINING AAL TECHNOLOGY AND SERVICES
Assisted living is the term given to the provision of care to people either in their own homes or in supported housing, underpinned by technology. The provision of care, augmented by assisted living technologies, is growing because of increasing demand and also because of the maturing of many of the underlying technologies that make assisted living possible. In parallel with the development of assisted living, researchers in computing have been exploring the emerging area of ambient intelligence, which applies automated reasoning and other artificial intelligence techniques to the understanding of the behavior of people in their environments. Ambient intelligence has evolved at great pace over the past ten years, from the early beginnings, when the European Commission ISTAG group presented its vision for ambient intelligence [1], to research on its many applications. Ambient intelligence applications have become more complex and are now integrated into many other systems in the home, medical, and occupational environments. Ambient-intelligence-based systems provide feedback to users and carry out specific actions based on observed patterns and pre-programmed algorithms. Some systems are also aware of their surroundings and can function independently, offering capabilities including "sensitive, responsive, adaptive, transparent, ubiquitous, and intelligent" [2]. These two areas of assisted living and ambient intelligence are converging towards a new paradigm in social computing called ambient assisted living (AAL). AAL services have benefited from advances in sensor technology, hardware, software, and communication paradigms to such an extent that they have gained market penetration into the home, work, and health environments. AAL may be implemented to either replace or complement the care
provided by the carer. As technology becomes increasingly mobile, ubiquitous, and pervasive, it is likely that the wider population will become beneficiaries of AAL and may lead an increasingly technology-augmented lifestyle. AAL technologies have been described as technologies that may help extend the time that older people can live at home by "increasing their autonomy and assisting them in carrying out activities of daily life" [3]. The support that AAL technologies may provide includes functional, activity, cognitive, intellectual, and sensory support. Examples of functions that AAL technologies may provide include alarms to detect dangerous situations that are a threat to the user's health and safety, monitoring and continuously checking the health and wellbeing of the user, and the use of interactive and virtual services to help support the user. AAL technologies may also be used for communication, enabling the user to keep in touch with family, friends, and carers, and, for example, in support of reminiscing.
VISUALIZATION OF AAL SERVICE DATA: USERS, ROLES, AND EXAMPLES OF USE
AAL services have to support very different kinds of technologies encompassing sensors, actuators, communication hubs, and interfaces. There are a number of different classes of users of the services. In essence, AAL services utilize data and information from these devices using different protocols and orchestrate this information for the different users. The primary user of AAL services is the care recipient, but there are other important users, including on-site formal carers, remote carers or monitors of AAL services, informal carers (including family, neighbors, and friends), and those in charge of maintaining the quality of service in a technical
Figure 1. Illustration of a one-day activity profile recorded from a 72-year-old subject using the ActivPal wearable sensor (PAL Technologies, 2010). The recording covers his waking day of just over 14 hours.
1 http://www.paltechnologies.com/
2 http://www.tunstall.co.uk/
service provision. Each of these classes of users has a role or roles to play in AAL service provision and its use (Table 1). AAL services have evolved from relatively simple telecare services, such as emergency fall alarm provision, into more sophisticated telehealth services supporting people with long-term chronic health conditions, such as Alzheimer's disease, in the assessment of their symptoms. In this evolution, the data generated has increased in volume and complexity. The task of interpreting the meaning in the data relating to the wellbeing and health of the care recipient has become increasingly complex, and the diverse range of user types and their overlapping roles in AAL service provision and efficacy has increased markedly. In determining appropriate AAL services, understanding the behavior of care recipients in terms of their activities of daily living (ADLs) is of particular value. For example, probabilistic models have been used to manage the uncertainty and incompleteness of data as the ADLs are undertaken. ADLs comprise common activities such as making tea, using the telephone, etc. Conveying the information through visualization of such activities is much easier for a carer to interpret and comprehend than data-rich, lower-level data about movement within a room, for example. However, incorrect ADL identification has the potential to introduce more significant errors, which could be harmful or even life-threatening for the care recipient.
There is a need for AAL services to communicate vitally important information in an easy-to-understand manner to all users while maintaining the privacy of the care recipient and their informal carers. The manner in which these AAL services communicate information draws heavily on visualization techniques. The most important user, the recipient of AAL care services, is most likely someone who is not familiar with ICT or computers in general, and this is a major complicating factor in the successful provision of visualization of data and information in AAL services. Data collected in AAL services can include movement information, used to:
• Alert on an emergency incident such as a fall
• Alert on unexpected behaviors, such as an older person going out late at night
• Remind and assist with daily living activities, such as doing physical activities and taking medications
AAL services store personal activity profiles over periods of time. The trend of user behavior and the pattern of user activity events provide rich information in the analysis of care recipients' behavior patterns. Visualization of information for AAL systems needs adaptive interfaces that target the specific needs and preferences of the recipients of care services. We may refer to this as user-awareness, but overall this type of knowledge, which the system uses to deliver a more useful service, can be considered part of the more widely standard term context-awareness. For example, for the person being cared for, different reminders may be suitable in the morning (e.g., "time to take your medicine," "you did not have breakfast yet") than in the afternoon (e.g., "you did not have lunch yet"). Day and night require different types of content, given that the person the content is delivered to may have a different level of consciousness and alertness. It is also expected that messages delivered during the night will be focused on sustaining a healthy sleeping pattern, while during the day they may refer to many other daily activities. The place in the house where a reminder has to be delivered can make a difference as well (e.g., kitchen or living room displays may be assumed to be used with normal levels of lighting, while those in the bedroom may be mostly used with low lighting or in darkness).
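A minimal sketch of such a reminder rule is given below; the hours, rooms, and message wording are assumptions chosen to mirror the examples above.

```python
# Pick a reminder and a display mode from the time of day and the room
# where the message will be delivered.
def pick_reminder(hour, room, had_breakfast):
    if hour >= 22 or hour < 6:
        # Night content is limited to sustaining a healthy sleep pattern.
        return ("low-light display", "It is night time; try to rest.")
    if hour < 12 and not had_breakfast:
        message = "You did not have breakfast yet."
    elif hour >= 12:
        message = "You did not have lunch yet."
    else:
        message = "Time to take your medicine."
    # Kitchen/living room displays are assumed to run at normal lighting.
    bright = room in ("kitchen", "living room")
    return ("normal-light display" if bright else "low-light display",
            message)

print(pick_reminder(hour=9, room="kitchen", had_breakfast=False))
```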
COMMERCIAL EXAMPLES
There are several commercial products providing AAL services that include a visual element through text-based, colored tables and charts. Each vendor tends to differ in terms of how it visually represents data. While some systems are able to push pre-structured alerts to carers when certain events are triggered, the majority of visual representations of data are provided in response to user-driven queries from the carers rather than being streamed to carers automatically in real time. An example of an advanced system is the use of the ActivPal1 system in ambulatory monitoring, which has proved successful, in many respects due to its powerful visualization software. This provides a concise, easy-to-interpret
graphical representation of the motion activity of a person for a day, classified as sitting, standing, or walking. Meaningful feedback on the user profiles has proven to be an important tool in assessing users' behavior. Figure 1 illustrates a recording of a user's 14-hour activity using an ActivPal sensor. The profile displays the activity pattern during a day. It shows that the user was inactive most of the time and was only active at 9 a.m., 10 a.m., and 4 p.m. Using this daily profile, carers would be able to spot the periods where the care recipient could improve his/her activity level. Comparing the daily profile to the user's weekly profile, many questions can be answered or explored: for example, did the user increase/decrease his activity level compared to the previous days? Did the user change his lifestyle this day (for example, the user did not get up for lunch)? If yes, can we infer the reason?
Figure 2. Illustrating sensor activations per day split into time intervals [6].
Tunstall's ADLife2 service provides a user interface for representing client data, which is intended for use by healthcare professionals. The setup provides the user with information on individual care recipients in the form of a table listing the total activations of all the sensory equipment over a day, week, and month. Each cell is color coded to represent activity based on a standard deviation from the previous week's pattern, with green showing under-use and red over-use. Each cumulative activation number cell and date cell is hyperlinked to the complete data sets for that period, which are returned in the form of bar charts. These indicate periods of activations over time and the total number of activations for each sensor device. The QuietCare system3 caters to formal and informal carer needs. The home page is organized around activities rather than specific sensors, providing sensor information in a more meaningful, structured output. A simple traffic light indicator is used as the status display for
each activity. In addition, there is meaningful text related to each area. Contact information is also provided for alarm conditions. If users want to view specific trend details, they can select the view buttons, which provide information for each activity over the last week, along with a textual description. If users request more detailed data, they can select to view it from a sub-menu. While both these examples provide visual representation of data through imagery or well-structured text, there is variation in how service data is presented.
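The deviation-based color coding described for the ADLife-style table can be expressed in a few lines; the one-standard-deviation thresholds and the amber middle band are assumptions for illustration.

```python
# Color a day's activation count by its deviation from the previous
# week's baseline: green for under-use, red for over-use.
from statistics import mean, stdev

def activity_color(previous_week, today):
    mu, sigma = mean(previous_week), stdev(previous_week)
    z = (today - mu) / sigma if sigma else 0.0
    if z < -1.0:
        return "green"  # under-use relative to the baseline week
    if z > 1.0:
        return "red"    # over-use relative to the baseline week
    return "amber"      # assumed middle band: within the expected range

print(activity_color([120, 135, 128, 140, 131, 125, 138], 61))  # green
```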
ACADEMIC EXAMPLES
In academia, researchers have focused on lowering the cognitive processing required to interpret the data being visualized. For example, [4] developed an ambient facial interface used to provide visual feedback and confirmation to the user in a manner independent of age, language, culture, and mental alertness. Their work used animated or actual human faces to display emotional expressions easily recognizable by an elderly person as non-verbal feedback. The facial responses were controlled by measurements or interpreted data on the user's current state or the products and objects with which he or she was interacting. By combining these measurements into a single facial expression displayed to the user, the interface assisted the user in evaluating their task. Reference [5] commented on the growing area of application development. Their system moved application creation from the developer's perspective, whose focus was on devices and their interactions, to end users' interpretation of goals or tasks. Comic script scenarios depicting assistive aids were presented to participants to explain the applications in their own words. This study highlighted the separation between the ages and living conditions of the designers and the people using AAL services.
3 http://www.quietcaresystems.co.uk
User type | Data capture/viewing/analysis | Data abstraction | Visualization | Decision support
Care recipient | Data captured from sensors using the internal infrastructure network; user responses | Supply instruction; multimodal communication (e.g., audible) | Graphical (simple metaphor for the day, e.g., traffic lights) | Advice on activities of daily living; self care
Formal carers on-site | Real-time, archive | Abstracted text information on single care recipients | Graphical (summary day chart + weekly/monthly trend) | Support for assessments
Formal carers off-site (contact by networks) | Real-time, archive | Abstracted text information on groups of care recipients | Graphical (summary day chart + weekly/monthly trend) | Assessment inferences on care recipient
Telecare/telehealth monitors off-site (contact by networks) | Real-time, archive | Abstracted alarm dashboards for single and groups of care recipients | Graphical (summary day chart + weekly/monthly trend + statistical metrics) | Assessment inferences on care recipient(s); managing nuances of alarm escalation
Informal carers on-site | Contact by mobile networks | Non-medical information abstracted for informal carers | Graphical (simple metaphor for the day, e.g., traffic lights) | Providing information
Informal carers off-site (contact by mobile or broadband networks) | Contact by mobile networks | Non-medical information abstracted for informal carers | Graphical (simple metaphor for the day, e.g., traffic lights) | Communication of advice
Technical maintenance (network access for remote monitoring) | Anonymous access to data; time restricted; real-time analysis | | | Status reports
Table 2. How visualization supports decision support across different users and roles.
To overcome this disparity, end users could develop applications using a text-based interface. End users would drag and drop scenarios of daily living, using a list of predefined words, into a window and click a run button. The system would then parse the words and develop a specification for an application to meet the needs of the clients involved. Reference [6] modeled ADLs for care recipients and found that using these life patterns could provide enhanced home-based care (Fig. 2). The study found that changes in a busyness metric were visible and detectable even with irregular behavior. Reference [7] indicated that a graphical user interface significantly improved performance response times, error rates, and satisfaction ratings compared with an existing text-based interface for nurses using an information system. The study highlighted that staff found the
advent of a more functional, intuitive, and user-friendly GUI made it easier to learn the system, navigate through it, and prioritize management tasks.
CASE STUDY: NOCTURNAL AAL SERVICES FOR PEOPLE WITH DEMENTIA
The University of Ulster and the Fold Housing Group collaborate in the NOCTURNAL project on issues including data visualization for AAL services. The goal is to develop a solution that supports older people with mild dementia in their homes, specifically during the hours of darkness. This is a relatively new area of research and was identified as a key area of need for care recipients with dementia. It is also of interest
because of the negative impact that lack of sleep, and the consequent anxiety, causes for the informal carer in the home of the person with dementia. In their literature review on night-time care of people with dementia using AAL services, [8] found that only 7 percent of papers addressed night-time-specific issues, with a further 26 percent focusing on night and day activities together. Of the night-time-specific papers, only half involved any form of visual data representation, indicating that there is a need for research in this area. The focus of night-time AAL services centers on lighting and guidance, motion monitoring, and intervention decision support. In the case study, the intention is for the AAL services to provide reassurance, aid, and guidance for the general behavior of the care recipient and to support a stable circadian rhythm. This case study is appropriate to illustrate the issues in communication between AAL services and the different user types and roles — as identified in Table 1 — who are all potential actors. It also provides a clear example of the need for converged communication networks, as it uses a broad range of sensors, processors, and actuators. In this case, the following communication networks and protocols are used:
• Body sensor and personal area networks, relying on protocols such as Bluetooth, Zigbee, and ANT+
• Home networks, providing either wired or wireless connectivity, with specialist protocols such as X10 used to supplement standard 802.x networks
• Wider connectivity to remote stakeholders (carers, monitors, and healthcare professionals) via secure broadband services
• Enhanced cellular services (GPRS, 3G, UMTS, Long Term Evolution, and beyond), facilitating a sort of virtual presence for family members and occasional carers
The visual representation of the data is a key component of the work, and the visualization is designed based on the needs of the different users and their roles. Table 2 shows the results of an analysis of the users and their roles from the perspective of the translation of data into a mode where it supports decision making, and of how visualization plays a part in that process. The visualization of AAL data is accessible on computer interfaces after authenticated access, but as Table 2 shows, different modalities of access are also possible. For example, alerts can be pushed to mobile clients as they are triggered. Table 2 illustrates how the AAL services generate data, which is then abstracted before being made available for visualization in support of decision support. The three main decision support modes are: communication of advice, provision of information, and management of (nuances of) alarm escalation.
Figure 3. Chart illustration of sensor data: a single sensor cell (left); location vs. time display and previous 28 days cell (right).
Figure 3 shows the Tunstall-based interface for a technical maintenance user at the single sensor level as well as multi-sensor over a 28-day period. The need to visualize information temporally is clear in each illustration in Fig. 3, as is the requirement to show activities in different locations of the care recipient's home. This type of visualization of activities across space and time, displayed on two dimensions of the interface, as used in our project, is becoming the most common manner of displaying the wellbeing of the care recipient in many AAL services. However, this interface is not
appropriate for the care recipient or carers using AAL services. A metaphor interface, for example, as described by [9], would be more appropriate for the abstraction and appreciation of the ongoing AAL-supported care. In their research, the interface (Fig. 4) was on a mobile device used to monitor wellbeing, and it incorporated images such as flowers and butterflies that change as the person becomes more active in daily exercise regimes.
Figure 4. Garden mappings and two sample gardens [9].
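In the spirit of that metaphor interface, a toy mapping from daily activity to garden elements might look as follows; the rules and thresholds are invented here and are not the mapping used in [9].

```python
# Abstract an activity total into garden imagery instead of a chart.
def garden_state(active_minutes, goal=30):
    progress = min(active_minutes / goal, 1.0)
    flowers = int(progress * 8)                 # more activity, more flowers
    butterflies = 2 if progress >= 1.0 else 0   # goal met: butterflies appear
    return {"flowers": flowers, "butterflies": butterflies}

print(garden_state(active_minutes=24))  # {'flowers': 6, 'butterflies': 0}
```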
It is evident that there is currently a gap with regard to the supply of a fully functional, dependable, and appropriate visualization of relevant service data, particularly with regard to the different user types and roles of the people involved in the delivery of care. There is a need to design applications that display AAL service data in a meaningful, holistic, yet concise manner. Identifying the needs of each user group, and how these needs are met both physically and visually, has to be resolved if AAL services are to be utilized more fully in society. Gil et al. [6] suggested that the focus of visual representation should concentrate on living aspects that are regular (e.g., sleeping, eating, etc.) and have a relationship with wellbeing. A key requirement for AAL services is to minimize the cognitive overhead required to interpret information, by presenting normal behaviors and patterns in the most succinct form possible, and by highlighting abnormal behaviors and alarm states tailored to each end user group's needs. The work by Consolvo et al. [9] on visual metaphors that require minimal cognitive processing by the users is promising and indicates possible development pathways that communicate regular behaviors and useful feedback to care recipients and their informal carers.
DISCUSSION
In developed countries such as the UK, the increasing prevalence of chronic disease in the ageing population provides the context for more pervasive deployment of AAL services. Additionally, government policy, such as the citizen's right to high-speed connectivity by 2020 throughout the United Kingdom, as published in [10], provides the social and technological climate for such applications to succeed. However, there is a significant risk that, due to the large streams of data generated by these AAL services, poor visualization techniques in interfaces, as described, will become a key issue, leading to potential misunderstanding and misinterpretation of the information and data generated from AAL services. This article has described the systems that provide AAL services in support of assisted living. It has examined exemplars in both commercial and academic areas. The NOCTURNAL research project was used to highlight the visualization components required to serve the needs of the different user types. The article highlights that a key requirement for successful AAL service uptake is to address the needs of the different user groups and translate these needs into interface and data visualization components that communicate clearly the wellbeing and health of the care recipient(s).
ACKNOWLEDGMENT
We would like to acknowledge the support of the carers and people with dementia in Northern Ireland who contribute to the work of the NOCTURNAL project, which is supported by the United Kingdom Research Councils and the Technology Strategy Board's Assisted Living Innovation Platform under grant award TS/G002452/1.
REFERENCES
[1] K. Ducatel et al., Eds., "Scenarios for Ambient Intelligence in 2010," IPTS-ISTAG, EC: Luxembourg, 2001; www.cordis.lu/ist/istag.
[2] D. J. Cook, J. C. Augusto, and V. R. Jakkula, "Ambient Intelligence: Technologies, Applications, and Opportunities," Pervasive Mobile Comp., vol. 5, no. 4, 2009, pp. 277–98.
[3] M. Wojciechowski and J. Xiong, "A User Interface Level Context Model for Ambient Assisted Living," Smart Homes and Health Telematics, 2008, pp. 105–12.
[4] B. Takacs and D. Hanak, "A Mobile System for Assisted Living with Ambient Facial Interfaces," Int'l. J. Comp. Sci. Info. Sys., vol. 2, no. 2, 2007, pp. 33–50.
[5] K. Truong, E. Huang, and G. Abowd, "CAMP: A Magnetic Poetry Interface for End-User Programming of Capture Applications for the Home," Proc. Int'l. Conf. Ubiquitous Comp., Nottingham, UK, 2004, pp. 143–60.
[6] N. M. Gil et al., "Data Visualization and Data Mining Technology for Supporting Care for Older People," Proc. 9th ACM SIGACCESS Int'l. Conf. Comp. Accessibility, Tempe, AZ, 2007, pp. 139–46.
[7] N. Staggers and D. Kobus, "Comparing Response Time, Errors, and Satisfaction between Text-Based and Graphical User Interfaces during Nursing Order Tasks," J. American Medical Info. Assoc., vol. 7, 2000, pp. 164–76.
[8] W. Carswell et al., "A Review of the Role of Assistive Technology for People with Dementia in the Hours of Darkness," Tech. Healthcare, vol. 17, no. 4, 2009.
[9] S. Consolvo et al., "Flowers or a Robot Army?: Encouraging Awareness & Activity with Personal, Mobile Displays," Proc. 10th Int'l. Conf. Ubiquitous Comp., Seoul, Korea, Sept. 21–24, 2008.
[10] Digital Britain, "Final Report," Department for Culture, Media and Sport and Department for Business, Innovation, and Skills, TSO, June 2009.
BIOGRAPHIES
MAURICE MULVENNA [SM] ([email protected]) is a professor of computer science at the TRAIL living laboratory in the University of Ulster. He publishes in the area of pervasive computing in support of older and disabled people, and reviews for IEEE Pervasive Computing and IEEE Pervasive Computing and Applications.
WILLIAM CARSWELL is a researcher at the University of Ulster. He has experience in developing intelligent systems to collect and analyze sensory data and provide appropriate automated responses to cater for detected abnormal patterns. He is an IEEE affiliate member.
PAUL MCCULLAGH is a reader in computing and mathematics at the University of Ulster. He is on the Board of the European Society for Engineering and Medicine and BCS Health Northern Ireland. His interests include health informatics education, open health, interface design, and assisted living applications. Current research projects include EU FP7 BRAIN: BCIs with Rapid Automated Interfaces for Nonexperts, where he is ethics manager; and EPSRC SMART2: Self Management Supported by Assistive, Rehabilitation, and Telecare Technologies.
JUAN CARLOS AUGUSTO [M] focuses on the design and implementation of intelligent environments, to which he has contributed more than 100 publications, including the Handbook on Ambient Intelligence and Smart Environments. He is Editor-in-Chief of the Book Series on Ambient Intelligence and Smart Environments, Program Chair of
Pervasive Healthcare 2011 and IE '11, and Co-Editor-in-Chief of the Journal on Ambient Intelligence and Smart Environments.
HUIRU ZHENG is a lecturer in computer science at the University of Ulster. She is an active researcher in the broad area of biomedical informatics and has published over 90 scientific papers. Her main research interests include data mining, machine learning, decision support, pattern recognition, intelligent data analysis, gait analysis, and activity monitoring.
PAUL JEFFERS works for Fold Housing Association as a technical researcher. He has a special interest in developing solutions to help deal with the ever-increasing demands of providing services to people with dementia. He has experience in sensor and vision systems, and is currently working on the NOCTURNAL and VirtEx ALIP projects.
HAIYING WANG is currently a lecturer in the School of Computing and Mathematics at the University of Ulster. His research focuses on artificial intelligence, machine learning, pattern discovery, and visualization, and their applications in medical informatics and bioinformatics.
SUZANNE MARTIN is an occupational therapist and, as a reader, is a full-time member of academic staff at the University of Ulster. Her research investigates the use of new and emerging technologies in health and social care. She is interested primarily in research methods that promote end-user participation to explore complex interventions. She is also a contributor to the Cochrane Library, synthesizing the evidence base for a range of healthcare interventions.
NEW CONVERGED TELECOMMUNICATION APPLICATIONS FOR THE END USER
Service Charging Challenges in Converged Networks Marc Cheboldaeff, Alcatel-Lucent
ABSTRACT
The charging model in telecommunications networks has evolved greatly in recent years. The original model in legacy fixed telephony networks was in most cases quite simple: the price of a communication depended on the destination, time, and duration of the call. With the emergence of mobile and multimedia networks, the charging model became more complicated. More and more frequently, a subscriber is able to choose between a variety of options on top of the standard tariff, each option applicable only to a certain kind of traffic. The goal of this article is to study how these new charging models impact the technology of online charging systems.

INTRODUCTION

In the first generation of fast-food restaurants, the menus looked quite simple: one hamburger, one portion of fries, and one drink. Over time, the menus became more varied, and in some fast-food chains it is now even possible to choose every feature of your menu: which kind of bread, which kind of ham, which dressing, and so on. There are some analogies with the evolution of the telecommunications industry. In legacy public switched telephone networks (PSTNs), the subscriber was necessarily making a voice call from his/her home network at a certain time to a certain destination. The price structure was quite fixed, and the subscriber knew he/she had to pay €X per minute or second when making a call to a certain destination or country. With the proliferation of mobile multimedia networks, new kinds of traffic appeared, such as short message service (SMS) and Internet access. End subscribers do not all have the same consumption profile; therefore, new tariff structures arose. Of course, these new tariff structures have a technological impact. The goal of this article is to highlight some of the charging issues that arise as a consequence of enhanced rating capabilities in converged networks.

CHARGING BASICS

Let us have a look at the old charging models. The tariff rate, which was time-based for voice calls, depended on dynamic call data such as call time, originating zone, dialed digits, and so on. Based on these input parameters, the tariff plan of the subscriber gave the corresponding rate through a fixed mapping. As opposed to the contextual call data, let us call the subscriber data, like the tariff, static, even though the latter can change over time: for example, a subscriber can switch from one tariff to another. Such a tariff structure is represented in a simplified way in Fig. 1.

The tariff structure has become more and more complex over time. First, communication modes other than voice traffic appeared, making time-based charging only one charging option among others. For SMS, item-based charging is more appropriate than time-based charging; for Internet traffic, volume-based charging is more appropriate; and for multimedia message service (MMS), a combination of item-based and volume-based charging might be relevant. References [1, 2] describe these various charging schemes in detail, but even they do not fully cover the possibility for the end subscriber to combine the schemes freely into multiple tariff options.

Second, the mapping of dynamic call data to static subscriber data in order to obtain the final rate of a call or session has become more complex. More and more frequently, the subscriber has not merely a tariff ID providing the rate in all scenarios, but a variety of options on top of a default tariff. The applicability of these tariff options might depend not only on the dynamic call data, but also on other subscribed options, or on dynamic subscriber data such as the value of consumption counters in a given period. For example, an SMS weekend option might have a different meaning for subscribers in Tariff 1 than for subscribers in Tariff 2. A Mobile Internet option might define a more expensive rate once the monthly subscriber consumption reaches a certain threshold. The normal voice call rate might change when the subscriber is near expiry, and so on. This increased complexity is shown in Fig. 2.
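To make the mapping concrete, the sketch below shows one way a rating step of this kind could be structured. It is purely illustrative: the class names, traffic types, and the threshold-based Mobile Internet option are invented for the example, not taken from any particular product or standard.

```python
# Illustrative sketch of rate resolution under the new tariff scheme of
# Fig. 2: the applicable option depends on dynamic call data and on
# dynamic subscriber data (e.g., usage counters). All names are invented.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CallContext:                 # "dynamic" call data
    traffic_type: str              # e.g., "voice", "sms", "data"
    time: str                      # e.g., "weekday", "weekend"

@dataclass
class Subscriber:
    default_tariff: dict           # "static" data: traffic type -> rate
    options: list = field(default_factory=list)
    usage_counters: dict = field(default_factory=dict)   # "dynamic" data

@dataclass
class TariffOption:
    name: str
    applies: Callable[[CallContext, "Subscriber"], bool]
    rate: float                    # price per time unit, item, or volume unit

def resolve_rate(ctx: CallContext, sub: Subscriber) -> float:
    """First applicable option wins; otherwise fall back to the default tariff."""
    for opt in sub.options:
        if opt.applies(ctx, sub):
            return opt.rate
    return sub.default_tariff[ctx.traffic_type]

# A Mobile Internet option whose cheap rate no longer applies once the
# monthly data counter passes a threshold (all values invented):
cheap_data = TariffOption(
    "mobile-internet",
    lambda c, s: c.traffic_type == "data"
                 and s.usage_counters.get("data_mb", 0) < 500,
    rate=0.01)

sub = Subscriber({"voice": 0.20, "sms": 0.15, "data": 0.05},
                 options=[cheap_data], usage_counters={"data_mb": 120})
print(resolve_rate(CallContext("data", "weekday"), sub))   # 0.01: option applies
```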
The latest pricing models must take the dynamic status of the end subscriber into consideration, and must do so in real time for the purpose of online charging (i.e., not based on call detail records [CDRs]). The latter are reported after the call or session (except mid-session CDRs) and can be processed with a certain delay (i.e., post-processed) without affecting the accuracy of the rating. The goal of the present contribution is to show the influence of new tariff marketing concepts on the technology of online charging systems (OCSs). In short, it argues that the technological evolution is driven essentially by marketing, not by technology. For the technology itself, the reader might refer to the standards documents [3, 4] or the comprehensive tutorial in [5]. For this reason, engineering aspects such as performance optimization, system capacity, scalability, redundancy, high availability, call gapping, subscriber distribution, load balancing, CPU consumption, memory leakage, and so on are not discussed in this article; they are indeed critical issues, but they are not specific to the topic at hand. Instead, we focus on the application level, that is, the changes in the service logic related to rating and charging, based on recent discussions with customers. Charging that depends on the quality of service (QoS) of the transmission is not considered here either. Although this article mostly focuses on identifying design issues for an OCS, once an issue has been identified there is fundamentally no obstacle to developing the software to cope with it. Complex algorithms can cover all the scenarios presented, even if the programming effort is increased and more system capacity is required. The latter is indeed an important topic; however, the present contribution simply aims at making the reader aware of the potential design complexity at the application level.
BALANCE AND BUCKET

In the past, any communication service was charged against the money account of the subscriber, usually called a balance. In the case of prepaid service, this balance is positive and debited from an initial prepaid amount that the subscriber has loaded in advance. In the case of post-paid service, the balance is negative, and its value corresponds to outstanding charges, which are billed to the subscriber at the end of a billing cycle (e.g., at the end of the month).

Some years ago, the concept of the bucket (or bundle) appeared in telecommunication services charging. A bucket is a kind of repository put in reserve, similar to a prepaid balance, but the bucket unit is not necessarily money, and it is applicable only to a specific kind of traffic. For example, a subscriber can get a bucket of 20 SMS messages: when the subscriber sends an SM, the balance is not debited; the bucket is decreased instead. Basically, the bucket unit can be money, time, a number of items (as for SMS), or volume for data traffic. The subscriber may or may not pay a fee to get the bucket. In some cases the bucket can be granted as a reward for high consumption or at recharge.
Figure 1. Simplified view of the old tariff scheme.
Figure 2. Simplified view of the new tariff scheme.
Such a bucket becomes valid as soon as the usage threshold is reached or the recharge event happens, or it might be valid only in a future period, such as the following month. The bucket concept is particularly attractive for adapting offers to various marketing segments. For example, a heavy SMS user may buy a bucket of 100 SMS messages for €10, while a light SMS user may buy a bucket of 25 SMS messages for €5.

With the emergence of the bucket concept, it appears that end subscribers are no longer just loading money onto their mobile account, but are buying products instead. In Fig. 3, the left column applies to legacy PSTNs, while the right column applies to multimedia networks. Each product can be considered one component of the tariff offer, and all the components taken as a whole build the personalized offer. Base, a mobile virtual network operator (MVNO) of E-Plus, the German subsidiary of KPN, represents the tariff offer as a flower, where the middle of the flower stands for the standard tariff, around which a subscriber can choose various products as different petals. Vodafone customers are encouraged to set up their tailor-made tariff in the Czech Republic, or to choose their Flexi plan, which consists of buckets, in Qatar, to give just two examples.
Figure 3. Evolution of personalized tariff offers.
BUCKET PRIORITIES

It seems quite straightforward that a bucket should have higher priority than the standard subscriber balance for funding the traffic supported by the bucket. If a subscriber has a bucket of 25 SMS messages and sends an SM, he/she expects the SMS to be charged from the bucket, not from the standard balance.

Each bucket has its own applicability criteria. If more than one bucket is applicable to the same call/event/session, business rules should determine which bucket is applied first. For example, if a subscriber has a bucket applicable to data traffic in general, plus one bucket specific to BlackBerry traffic, the BlackBerry bucket might be applied first for a BlackBerry session, even though the latter is data traffic too. The BlackBerry bucket should likewise be debited if the subscriber owns, besides the BlackBerry bucket, a monetary bucket applicable to any kind of traffic. Determining the applicable bucket in such a scenario is quite straightforward because the criteria are mutually inclusive, like Russian nested dolls (Fig. 4). However, if a subscriber has a monetary bucket valid for any kind of traffic but only on weekends, and another bucket valid at any time but only for SMS, which bucket should be used when he/she sends an SMS over the weekend? This use case is more complex because the two buckets have different applicability criteria with an overlapping subset, as illustrated in Fig. 5. This example shows that business priority rules need to be defined so that the OCS knows from which bucket the call/event/session should be charged. The emergence of business rules in the OCS is mentioned in [6], which identifies a so-called business information module in the rating function.

It might also occur that two buckets with
exactly the same criteria are simultaneously applicable. For example, suppose a subscriber bought a first bucket of 10 Mbytes of data traffic on January 1, and a second bucket of 50 Mbytes for the same kind of traffic on January 10. For a data session on January 15, which bucket should be used first? This example shows that chronological priority rules (e.g., first in, first out) might need to be defined as well, as part of the business priority rules.
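As a rough illustration of such rules, the sketch below selects a bucket by specificity (the most narrowly scoped applicable bucket wins) and breaks ties chronologically. This is only one plausible policy, with invented names and values; real OCS business rules would be configurable per operator.

```python
# Hypothetical bucket selection: specificity first (a BlackBerry bucket
# beats a generic data bucket), then first-in-first-out as a tiebreak.
from dataclasses import dataclass

@dataclass
class Bucket:
    name: str
    traffic_types: frozenset   # kinds of traffic the bucket applies to
    purchased_on: int          # day number, used for FIFO ordering
    remaining_units: float

def applicable(bucket: Bucket, traffic_type: str) -> bool:
    return traffic_type in bucket.traffic_types and bucket.remaining_units > 0

def pick_bucket(buckets, traffic_type):
    """Among applicable buckets, prefer the most specific one
    (fewest covered traffic types); break ties by purchase date."""
    candidates = [b for b in buckets if applicable(b, traffic_type)]
    if not candidates:
        return None   # fall back to the monetary balance
    return min(candidates, key=lambda b: (len(b.traffic_types), b.purchased_on))

buckets = [
    Bucket("generic-data", frozenset({"data", "blackberry"}), 1, 10.0),
    Bucket("blackberry",   frozenset({"blackberry"}),        10, 50.0),
]
assert pick_bucket(buckets, "blackberry").name == "blackberry"
```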
HANDOVER FROM ONE BUCKET TO ANOTHER

Taking the previous example, if the first bucket is exhausted in the middle of a session, it would still be possible for the end subscriber to keep the session active while further data traffic is charged from the second bucket. While such a handover from one bucket to another is quite natural when the two buckets have exactly the same applicability criteria, the question arises as to what should happen when the two buckets do not share exactly the same applicability criteria. If a subscriber purchased a bucket of 100 minutes for international calls to Europe, should he/she be able to use a monetary bucket applicable to any calls once the international bucket is exhausted in the middle of a call? Possibly, the resulting price for international calls would be cheaper when charged from the international bucket than when charged from a global monetary bucket or from the subscriber's balance. This example shows that in the case of multiple monetary buckets, each of them specific to a certain use, the rate might change depending on whether the subscriber has a certain bucket or not. In this scenario buckets can no longer be considered purely as optional components sitting on top of a fixed tariff and decoupled from it. On the contrary, the set of options a subscriber owns can actually influence the tariff applicable to this subscriber, making the rating framework more complicated.

In the same way, if a subscriber purchases a volume bucket for data traffic next to a monetary bucket, should data traffic be charged from the monetary bucket when the data bucket is exhausted? Perhaps such a handover should be barred: if the rate per megabyte is cheaper in the data bucket than in the standard tariff, barring the handover avoids the subscriber paying the normal, non-discounted rate once the bucket is exhausted. In general, the handover from one bucket to another bucket or to the normal balance might be allowed in some scenarios and barred in others, depending on the origin and target bucket/balance.

In the context of network convergence, it may happen that a subscriber has not only multiple buckets, but multiple balances too; for example, one post-paid balance for professional calls and another, prepaid balance for private calls. Handover from one balance to another, or the priority of one balance over another depending on the call scenario, should be defined according to business rules, similarly as for buckets.
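One simple way to express such allow/bar rules is an explicit policy table keyed by origin and target, as in the hypothetical sketch below; the pairs and verdicts are invented for illustration only.

```python
# Hypothetical handover policy: may charging fall back from an exhausted
# origin bucket to a given target (another bucket or the balance)?
from typing import Optional

ALLOW_HANDOVER = {
    ("data-bucket",  "monetary-bucket"): False,  # barred: encourages repurchase
    ("data-bucket",  "balance"):         False,
    ("intl-minutes", "monetary-bucket"): True,   # possibly at a different rate
    ("intl-minutes", "balance"):         True,
}

def next_funding_source(origin: str, candidates: list) -> Optional[str]:
    """Return the first permitted fallback, or None (session is blocked)."""
    for target in candidates:
        if ALLOW_HANDOVER.get((origin, target), False):
            return target
    return None

print(next_funding_source("data-bucket",  ["monetary-bucket", "balance"]))  # None
print(next_funding_source("intl-minutes", ["monetary-bucket", "balance"]))  # monetary-bucket
```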
If, for example, the handover from a data bucket to a monetary bucket is barred, the subscriber is effectively encouraged to purchase the data bucket again to replenish it. This leads to the definition of another kind of bucket: the periodical bucket.
PERIODICAL BUCKET

As in the example above, for dedicated usage like data traffic, the subscription can be periodic. For example, every month a subscriber is willing to pay a certain amount, for which he/she is allowed to consume 50 Mbytes of data traffic. This way, the subscriber who pays a renewal fee is granted a certain usage each month at an attractive price.

Here again, the basic charging model — a fee per period — can evolve toward more complicated tariff schemes. The initial purchase fee might differ from subsequent renewal fees, corresponding to a kind of activation fee. Or, in some offers, the renewal fee might be discounted for an initial number of periods; for example, the monthly fee may be reduced for the first three months with the aim of attracting new customers. Conversely, a subscriber who did not have enough balance at the time of bucket renewal might have to pay a penalty or reactivation fee once the balance is sufficient again following a recharge. Between the failed bucket renewal and the next sufficient recharge, the subscriber may or may not continue to enjoy the advantages of the bucket, possibly depending on whether he/she is a high-revenue-generating or premium subscriber. In the latter case, the telco might grant him/her the advantage of the bucket while awaiting successful renewal. This kind of grace period might be shorter than the usual renewal period (e.g., just a few days after the failed renewal). If that is the case, the OCS needs to take both periods into account in order to know when to cancel the subscription.

Once the periodic amount granted by the bucket in the current period has been consumed, and if handover is allowed, the subscriber falls back to the normal rate, and further usage is charged from a monetary bucket or the standard balance. On the other hand, if the actual consumption does not reach the periodical limit, what should happen with the remaining units? A basic tariff framework would be that periodical units are valid during the period only, and expire at its end. In more evolved tariff schemes, the units of a period can be carried over to the next period. But they should probably be carried over once only (i.e., to the next period), or at least to a limited number of periods; otherwise, the amount of accumulated units could become substantial after many periods (e.g., after two years). All these aspects have to be taken into consideration when defining periodic buckets.
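The renewal life cycle described above could be sketched as follows. The fee levels, grace-period handling, and one-period carry-over are invented parameters, chosen only to mirror the cases discussed in the text.

```python
# Sketch of a periodical bucket's renewal decision: discounted initial
# periods, failed renewal with a grace period for premium subscribers,
# and limited carry-over of unused units. All parameters are invented.
def renewal_fee(period_index, base_fee=10.0, discounted_periods=3, discount=0.5):
    """Monthly fee, reduced for the first few periods to attract customers."""
    return base_fee * discount if period_index < discounted_periods else base_fee

def renew(balance, period_index, premium=False, grace_days_left=0):
    fee = renewal_fee(period_index)
    if balance >= fee:
        return balance - fee, "renewed"
    if premium and grace_days_left > 0:
        return balance, "grace"        # bucket stays usable awaiting recharge
    return balance, "suspended"        # advantages withheld until recharge

def carried_over(unused_units, max_carry_periods=1):
    """Carry unused units into at most one following period, so the
    remainder cannot accumulate indefinitely."""
    return unused_units if max_carry_periods >= 1 else 0.0

print(renew(20.0, 0))                                   # (15.0, 'renewed')
print(renew(2.0, 5, premium=True, grace_days_left=3))   # (2.0, 'grace')
```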
DISCOUNT

Until this point, we have examined tariff options (i.e., the components of the personalized tariff offer) only in terms of buckets or bundles, that is, a free amount of units applicable to a certain usage.
Figure 4. Mutually inclusive bucket criteria.
Figure 5. Overlapping bucket criteria.
It could be that a subscriber is willing to pay a periodical fee not to get a certain amount of free usage in advance, but simply to benefit from a cheaper rate than the normal one. For example, a mobile subscriber might be interested in paying €15 per month to get calls to fixed destinations free of charge. Or a heavy Universal Mobile Telecommunications System (UMTS) user might be willing to pay €10 per month to pay 50 percent less than the standard UMTS traffic rate.

Here again, the interaction between multiple discounts might become an issue. If a subscriber has, for example, a 20 percent discount applicable to voice calls only and a 50 percent discount applicable to any kind of traffic but only on weekends, he/she would be eligible for both a 20 percent and a 50 percent discount when making a voice call on the weekend. Should this result in a 70 percent discount? In other words, each of the discounts would be applied to the initial rate. Or should 50 percent be applied on top of 20 percent? In this case the
discounts would be applied in a cascade, and the final discount would not be 70 percent, but 60 percent, because 0.8 × 0.5 = 0.4 → 60 percent discount. On the other hand, a telco might decide as a business rule that some discounts cannot be cumulative. Otherwise, if, for example, a certain subscriber has a 50 percent discount on voice calls and a 50 percent discount on weekends, it would lead to voice calls completely free of charge on the weekends, which might not match the telco's expectations. Such cases of discount accumulation are frequent when a discount at the subscriber level is applicable on top of a more general discount. For example, all the subscribers in a certain category (e.g., teen) might get a 50 percent discount on SMS, while only some of them could enjoy an additional option granting a 50 percent discount on the weekends. Here again, business rules should dictate whether the subscriber-level discount applies together with the more general category-level discount, or simply replaces it, based on some priority rules. The latter might depend on the discount definitions (e.g., discount A always has higher priority than discount B) or on the subscriber profile (e.g., for subscriber X, subscriber-level discounts have the highest priority).

Besides, the applicability of some discount items might depend on criteria other than the subscriber's tariff and the call context. This is the case for so-called usage-based discounts: a subscriber can get a cheaper rate when he/she reaches a certain consumption threshold over a period. Alternatively, he/she can get a bucket as a reward for high consumption, for example, 20 free SMS messages when making more than four hours of voice calls in a month. This means that dynamic subscriber data, which are not strictly tariff-related, do influence the final rate.

Usage counters are not the only pieces of subscriber data that might influence the final rate. The same holds for other data such as the tenure of a subscriber: if you have been a subscriber of the telco for more than one year, you can get a 10 percent discount, while if you have been a subscriber for more than three years, you get a 20 percent discount. The life cycle state of a subscriber can be such a piece of data as well: a subscriber near expiry can no longer enjoy free calls on the weekend. Furthermore, other kinds of discounts can be defined, for example, a discount activated by the first event in a period: at the first call of a day, the subscriber pays a certain fee, but can then enjoy free calls or free Internet surfing for the remainder of the day or the next 24 hours. Whether a subscriber is eligible for such a discount might depend on his/her category (i.e., marketing segment), or the end user might have to opt in beforehand, on the web or by calling the telco's customer service. Of course, as discussed at the beginning of this section, all these various kinds of discounts can be cumulative, which increases the complexity of the OCS.
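The two accumulation rules can be checked with a few lines of arithmetic; the percentages are those from the example above.

```python
# Additive vs. cascaded discount accumulation.
def additive(discounts):
    """Each discount applies to the initial rate; capped at 100 percent."""
    return min(sum(discounts), 1.0)

def cascaded(discounts):
    """Each discount applies on top of the previous ones."""
    remaining = 1.0
    for d in discounts:
        remaining *= (1.0 - d)
    return 1.0 - remaining

print(additive([0.2, 0.5]))   # 0.7 -> 70 percent off
print(cascaded([0.2, 0.5]))   # 0.6 -> 60 percent off, since 0.8 x 0.5 = 0.4
```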
INTERACTION BETWEEN BUCKETS AND DISCOUNTS

Since a subscriber can get buckets as well as discounts, problems may arise when the subscriber holds buckets and discounts in parallel. Let us suppose that a subscriber has a bucket of 100 minutes for voice calls, but also an option giving him/her a 50 percent discount on voice calls. When he/she makes a call of 20 minutes, should the full 20 minutes be subtracted from the bucket, or should only 10 equivalent minutes be subtracted because of the discount?

Moreover, if a subscriber has a bucket of 100 minutes applicable to voice calls, but also a 100 percent discount applicable to the destination number of a friend, it seems obvious that the bucket should not be decreased when he/she calls the friend. In the same way, if a subscriber has a bucket of €10 for data traffic and a 100 percent discount applicable to a certain web portal, the bucket should not be decreased when he/she surfs on this web portal, even if the bucket would have been applicable. However, if the subscriber has a 50 percent discount instead of 100 percent, should the discount still be used for the charging, or should the bucket be used this time? And what happens if the subscriber has an 80 percent discount? Maybe the subscriber prefers to pay the remaining 20 percent of the rate, keeping the bucket unchanged for future use. And should the remaining 20 percent be deducted from the standard balance or from a monetary bucket?

All these examples give a sense of the challenges confronted in the context of service charging in converged networks. These scenarios can also be extended to domains other than telecommunications, such as smart metering. A smart meter device could transmit, in real time or near real time, information about a subscriber's consumption of electricity, gas, and/or water to a rating engine. Most probably, volume-based charging, as for Internet traffic, would be the most appropriate charging method here. Tariff options similar to those in telecommunications networks could be offered, for example, a bucket of X liters of water per month for a certain fee, or a discounted rate for electricity at night in order to encourage people to consume during off-peak hours.
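As one illustration of how these questions become concrete policy choices, the sketch below applies the discount first and debits only the discounted usage from the bucket. This is a hypothetical resolution order chosen for the example, not one prescribed by the article.

```python
# One possible bucket/discount interaction policy: discount first, then
# debit only the discounted (equivalent) usage from the bucket.
def charge(minutes, discount, bucket_minutes):
    """Return (bucket_minutes_left, minutes_billed_to_balance)."""
    effective = minutes * (1.0 - discount)   # e.g., 20 min at 50% -> 10 min
    from_bucket = min(effective, bucket_minutes)
    return bucket_minutes - from_bucket, effective - from_bucket

print(charge(20, 0.5, 100))   # (90.0, 0.0): only 10 equivalent minutes debited
print(charge(20, 1.0, 100))   # (100, 0.0): a free call leaves the bucket untouched
```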
CONCLUSION

This article has presented some of the typical issues that arise when designing payment software to charge for end-user applications in new converged telecommunication networks. New tariff structures, along with the emergence of new charging concepts like buckets and discounts, lead to increased complexity. This complexity is increased all the more because rating happens in real time, which is a necessity in the case of prepaid charging, and is also more and more often required in the case of
post-paid charging in the context of pre-/post-paid convergence. Implementing accurate charging is not only a technical challenge, but also a mandatory requirement with regard to the relevant legal authorities. Finally, the proliferation of new kinds of traffic, such as multimedia broadband traffic, leads more and more frequently to situations of concurrent charging, for example, when multiple end-user calls/events/sessions happen and are charged simultaneously. This makes real-time charging more complicated still.
ACKNOWLEDGMENTS

Much of the source material used for this article derives from work accomplished together with the technical teams of Alcatel-Lucent, E-Plus, HP, and Vodafone. The author would especially like to thank Justin Bayley, Jessica Han, Mohamed Kamal, Mourad Lahmadi, Hongwei Li, and Moataz Mostafa from Alcatel-Lucent; Jürgen Lessmann and Reimund Magon from E-Plus; Adrian Dilworth, Martijn de Jong, Rosita Schürmann, Angus White, and Michael Ziegelmann from Vodafone; and Barry Maybank and Rosalind Singleton from HP. The author would also like to thank Marianne Cave for her review.
REFERENCES

[1] Z. Ezziane, “Charging and Pricing Challenges for 3G Systems,” IEEE Commun. Surveys & Tutorials, vol. 7, no. 4, 4th qtr., 2005.
[2] V. G. Ozianyi and N. Ventura, “Efficient and Scalable Charging Systems for IP Multimedia Networks,” IEEE AFRICON, Nairobi, Kenya, 23–25 Sept. 2009.
[3] 3GPP TS 23.228, “IP Multimedia Subsystem (IMS),” Release 9, Mar. 2010.
[4] 3GPP TS 32.296, “Online Charging System (OCS): Applications and Interfaces,” Release 9, Dec. 2009.
[5] R. Kühne et al., “Charging in the IP Multimedia Subsystem: A Tutorial,” IEEE Commun. Mag., July 2007.
[6] H. Oumina and D. Ranc, “Specification of Rating Function of Online Charging System in 3GPP IP Multimedia System (IMS) Environment,” 2nd Int’l. Conf. New Tech., Mobility, Security, Tangier, Morocco, 5–7 Nov. 2008.
BIOGRAPHY

MARC CHEBOLDAEFF (
[email protected]) received a Master's degree in electrical engineering from Télécom ParisTech, Paris, in 1995, writing his thesis at the Technical University of Aachen (RWTH), Germany. He began to work in the field of intelligent networks and network applications at Alcatel and later at Ericsson in France. In 2001 he joined Lucent Technologies in Germany, now Alcatel-Lucent, focusing on payment applications, where he acts now as a product and solution manager for EMEA. He is a current member of the Alcatel-Lucent Technical Academy (ALTA). He has also published several papers at the International Conference on Networks (ICN), for which he is now a member of the Technical Program Committee.
SERIES EDITORIAL
MEETING THE BANDWIDTH DEMAND CHALLENGE: TECHNOLOGIES AND NETWORK ARCHITECTURAL OPTIONS
Osman S. Gebizlioglu, Hideo Kuwahara, Vijay Jain, John Spencer

In this month's Optical Communications Series (OCS), we have selected contributions addressing developments that are needed toward meeting the continuing growth in bandwidth demand. As the global economy's recovery continues, telecommunications service providers and suppliers compete to announce initiatives addressing the current pace of global demand growth for Internet, video, TV, and telecommunications services, along with forecasts for future needs. As described in our November 2010 editorial, we saw the introduction of fiber optic communications technologies and systems into global telecom networks in the late 1970s and early 1980s. Although the initial capacities were based on DS-3, we have witnessed rapid multiplication of transmission rates through the introduction of synchronous optical network (SONET) systems in the early 1990s, with the initial standardization of OC-48 transmission rates. With the advancements that resulted in the availability of OC-192 and dense wavelength-division multiplexing (DWDM) systems placing 40 and 80 OC-192 signals on the same fiber pair, the transmission capacity appeared to be unlimited. However, we know today that the explosive demand growth we have been seeing was not anticipated. As discussed in the report by Prof. Fabio Neri, the global effort to meet the bandwidth challenge was recently showcased at the 36th European Conference on Optical Communications (ECOC), held in Torino, Italy, 19–23 September 2010. This premier conference on optical communications presented ongoing work and developments addressing technology and architecture issues from the edges of the network — cell sites, businesses, and neighborhoods — into the core of the network, toward finding ways to more effectively manage bandwidth and bandwidth demand.

In parallel with the developments from ECOC 2010, we have selected two contributions to address the current patterns of bandwidth demand growth, and current and future needs in communications technology and architecture to meet the demand forecasts. In the first contribution, entitled “Technology and Architecture to Enable the Explosive Growth of the Internet,” Adel Saleh and Jane Simmons address the technology and architecture needs to meet the anticipated thousand-fold growth in Internet traffic over the next 20 years. At current growth rates, Internet traffic will increase by a factor of 1000, or three orders of magnitude, in roughly 20 years. It will be challenging for transmission and routing/switching systems to keep pace with this level of growth without requiring prohibitively large increases in network cost and power consumption. The authors present a high-level vision for addressing these challenges based on both technological and architectural advancements. At the current pace of growth, Internet traffic is doubling approximately every two years, leading to a factor of 1000 growth in the next two decades. They show that such staggering growth can indeed be supported, while keeping the network cost and power consumption in check. This requires advances in both technology and architecture, to increase the capacity of transmission and routing/switching systems, and to effectively reduce the network capacity requirements, respectively. Although this article deals with only the core/backbone portion of the network, access networks will need to scale as well, through a combination of advanced broadband fiber, cable, and wireless technologies. Of course, while the pace of Internet growth can be expected to slow at some point, eventually the thousand-fold growth figure will be exceeded, and this will require even deeper and further innovations.

The second contribution, “MC-FiWiBAN: An Emergency-Aware Mission-Critical Fiber-Wireless Broadband Access Network” by Ahmad Dhaini and Pin-Han Ho, addresses quality of service (QoS), security, and fault recovery issues in converging wireless and optical access networks as attractive economical broadband solutions. The emergency-aware architecture presented takes advantage of layer 2 virtual private networks (VPNs) to support mission-critical (MC) services. Each VPN is designed to
support a specific MC system requirements bundle that is stipulated in the service level agreement (SLA) and fulfilled via an effective resource management paradigm. Simulation results show that MC-FiWiBAN can commit to guaranteed QoS for emergency and non-emergency services. The MC coverage can be extended to improve public safety and disaster relief (PSDR) communications in rural areas and emerging MC multimedia services. A QoS-provisioning framework is also presented to address the resource management problem arising from the establishment of VPNs.
BIOGRAPHIES

OSMAN S. GEBIZLIOGLU [M] (
[email protected]) is a principal consultant at Telcordia Technologies. Since he joined Bellcore in 1987, he has been involved with the development of performance and reliability assurance requirements for optical communications components. In addition to his work to support the implementation of optical communications technologies in major service provider networks, he has been involved in reliability assurance and failure analysis efforts on aerospace communications networks. He holds B.Sc and M.Sc degrees in chemical engineering (Middle East Technical University, Ankara, Turkey) and a Ph.D in chemical engineering and polymer materials science and engineering (Princeton University, New Jersey). Before joining Telcordia (then Bellcore) in 1987, he held Monsanto and ExxonMobil postdoctoral fellowships and research scientist appointments in mechanical engineering (Mechanics of Materials Division), chemical engineering (Microstructural Engineering Division), and the Center for Materials Science & Engineering at Massachusetts Institute of Technology, Cambridge. He is an active member of the American Chemical Society, American Institute of Physics Society of Rheology, Materials Research Society, International Society for Optical Engineering, IEEE Lasers and ElectroOptics Society, and IEEE Communications Society. He has extensively published in various professional society journals, presented his work at international conferences, and delivered invited talks at conferences and university colloquia. He holds five U.S. patents and is the former chair of the Telecommunications Industry Association TR-42.13 Subcommittee on Passive Optical Devices and Fiber Optic Metrology. He also serves as a Series Editor of the IEEE Communications Magazine Optical Communications Series. HIDEO KUWAHARA [F] (
[email protected]) joined Fujitsu in 1974, and has been engaged for more than 30 years in R&D of optical communications technologies, including high-speed TDM systems, coherent optical transmission systems, EDFA, terrestrial and submarine WDM systems, and related optical components. His current responsibility is to lead photonics technology as a Fellow of Fujitsu Laboratories Ltd. in Japan. He stayed in the United
States from 2000 to 2003 as a senior vice president at Fujitsu Network Communications, Inc., and Fujitsu Laboratories of America, Richardson, Texas. He belongs to LEOS and ComSoc. He is a co-Series Editor of IEEE Communications Magazine’s Optical Communications Series. He is currently a member of the International Advisory Committee of the European Conference on Optical Communications, and chairs the Steering Committee of CLEO Pacific Rim. He is a Fellow of the Institute of Electronics, Information and Communications Engineers (IEICE) of Japan. He has co-chaired several conferences, including Optoelectronics and Communications Conference (OECC) 2007. He received an Achievement Award from IEICE of Japan in 1998 for the experimental realization of optical terabit transmission. He received the Sakurai Memorial Award from the Optoelectronic Industry and Technology Development Association of Japan in 1990 for research on coherent optical communication. VIJAY JAIN (
[email protected]) is general manager for Access Network Planning and Economics, India and South Asia at Bharti Airtel Limited, India. Prior to joining Airtel, he was program manager for FTTP and CO active and passive fiber optic components at Verizon, where he served as technical leader for risk analysis of FOC and CO components deployment into Verizon's network. He was involved in product identification, procurement, network planning, and field remediation. He has over 15 years of experience in the telecom industry and has worked in three countries (India, the United States, and Canada). Prior to Verizon, he worked as vice president and in management positions for telecom equipment manufacturer and test laboratories, which provided him 360-degree exposure to the overall telecom business and technologies. In the last 15 years he has worked in engineering, R&D, planning, strategic, and business development roles. Achievements include designing and testing of a GSM/CDMA-based wireless antenna, DSP-based VLSI chips, NMS for optical and wireless technologies, fiber optic components, and transport systems with up to OC-768 transmission rates. He holds two Master's degrees in telecom engineering, specializing in wireless technology from the Indian Institute of Technology, India, and in DSP technology from Concordia University, Canada. JOHN SPENCER [SM] (
[email protected]) is a telecom industry veteran with over 37 years of experience. He worked 29 years with BellSouth, 14 of those years as a member of technical staff in the Science & Technology Department. During that time he was involve,d in the introduction of SONET and Erbium doped fiber amplifiers (EDFAs), and had a team lead role in the introduction of DWDM technology in the BellSouth network. He worked four years as regional director, Product Marketing Engineering for Mahi Networks, Petaluma, California. He is currently a business and technology strategist for Optelian Access Networks, where he manages industry and customer direction to Optelian’s product line as well as playing a key role in Optelian’s AT&T account management. He was Conference Co-Chair for NFOEC in 1991 and 1998. He has served on the NFOEC Technical Program Committee for 10 years. He served as Secretary and Chairman of ANSI accredited committee T1X1, Digital Hierarchy and Synchronization, which developed the standards for SONET. He is a graduate of the Georgia Institute of Technology (B.E.E.) and is a registered Professional Engineer (PE) in the State of Alabama. He currently serves on the NFOEC/OFC Technical Program Committee.
TOPICS IN OPTICAL COMMUNICATIONS
Technology and Architecture to Enable the Explosive Growth of the Internet Adel A. M. Saleh, DARPA Jane M. Simmons, Monarch Network Architects
ABSTRACT

At current growth rates, Internet traffic will increase by a factor of one thousand in roughly 20 years. It will be challenging for transmission and routing/switching systems to keep pace with this level of growth without requiring prohibitively large increases in network cost and power consumption. We present a high-level vision for addressing these challenges based on both technological and architectural advancements.
INTRODUCTION
The views, opinions, and/or findings contained in this article are those of the author and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense.
By numerous accounts, Internet traffic continues to exhibit exponential growth. One of the leading sources monitoring Internet traffic levels indicates that the annual rate of growth is currently 40 to 50 percent [1]. A major IP router vendor has forecast that the Internet will grow at a compound rate of 35 percent from 2008 to 2013 [2]. As a related benchmark, a large telecommunications carrier has gauged its year-over-year IP traffic growth rate to be 45 percent [3]. If we assume a compound annual growth rate on the order of 40 percent, then Internet traffic will increase by a factor of 1000 in roughly 20 years, as shown in Fig. 1. Attaining this traffic threshold in this relatively short timeframe necessitates developing a high-level network evolution strategy today. Network capabilities, specifically transmission and routing/switching capabilities, will need to keep pace with the traffic in order for steady Internet growth to be sustained. While the capacity of fiber transmission systems has increased by a factor of roughly 100 over the past decade, attaining another three orders of magnitude of growth will be very challenging. Fiber capacity, which once seemed almost infinite compared to the traffic requirements, is now approaching its theoretical limit [4]. Furthermore, building large-scale electronic routers and switches is already a challenge, and this will only become more of an impediment in the future. Note that simply satisfying the thousand-fold growth requirement is not sufficient; it must be done in a manner that is cost effective and power efficient. A two-pronged approach will likely be needed to meet these challenges: technological advancements to increase the realizable capacity
of fiber and routers/switches and architectural enhancements that effectively decrease the traffic burden on the network. The next section presents technological and architectural techniques that can meet the thousand-fold traffic growth from a transmission perspective. We then address these aspects for IP routers (the focus is on routers as they are more challenging and costly to scale than switches). The analysis considers transmission and routing in the backbone network. As the discussion is looking 20 years into the future, the numbers presented should be interpreted as rough, round estimates as opposed to precise figures. The analysis expands on the work that was originally presented in [5].
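The compounding arithmetic behind the thousand-fold figure is easy to verify; the short check below is standard compound-growth math, not taken from the article itself.

```python
# Years needed to reach a 1000x traffic level at a given CAGR.
import math

def years_to_factor(factor, cagr):
    return math.log(factor) / math.log(1.0 + cagr)

for cagr in (0.30, 0.40, 0.50):
    print(f"{cagr:.0%}: {years_to_factor(1000, cagr):.1f} years")
# 30%: 26.3 years; 40%: 20.5 years; 50%: 17.0 years
```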
TRANSMISSION

Today’s state-of-the-art deployed technology is capable of supporting 80 wavelengths of 40 Gb/s each, in the C-band. The largest carrier backbone networks currently carry on the order of 4 Tb/s of total traffic. (While we are considering a thousand-fold growth in the Internet, we assume this will manifest itself as a thousand-fold growth in the size of carrier networks.) At this level of traffic, and for reasonable assumptions regarding network topology, traffic statistics, and protection implementation, a network equipped with an 80 × 40 Gb/s transmission system is about one-third full. Thus, to support a thousand-fold growth in traffic over today’s levels, a combination of technology and architectural enhancements needs to provide about a factor of 350 benefit with respect to transmission. Current systems typically support optical bypass, where traffic that is transiting a node can remain in the optical domain, thereby eliminating much of the required electronic terminating equipment. One requirement for optical bypass is an extended optical reach, which is the distance the optical signal can travel before the signal quality degrades to a level that necessitates regeneration. It has been shown that the sweet spot for optical reach in continental-scale networks is on the order of 2000 to 2500 km [6]. It is highly desirable to maintain an optical reach of this extent as, together with all-optical switching equipment, it has played a significant role over the last decade in reducing network cost and power consumption.
TECHNOLOGY

Spectral Efficiency — In the past, transmission systems have kept pace with traffic growth by both increasing the number of wavelengths supported on a fiber and increasing the bit-rate of each wavelength. In the mid-to-late 1990s, state-of-the-art transmission systems consisted of 16 wavelengths of 2.5 Gb/s each, representing a spectral efficiency of 0.01 b/s/Hz (spectral efficiency is defined as the ratio of the information bit rate to the total bandwidth consumed). As mentioned above, today’s most advanced deployed systems are 80 × 40 Gb/s, for a spectral efficiency of 0.8 b/s/Hz. Additionally, 80 × 100 Gb/s systems (spectral efficiency of 2.0 b/s/Hz) are on the horizon. This technological progress has been attained through more complex modulation schemes and more advanced electronic signal processing.

Increased capacity through increased spectral efficiency has provided favorable economies of scale. For example, the 10 Gb/s transponder cost is approximately twice that of a 2.5 Gb/s transponder, resulting in a halving of the cost per bit/s. Similarly, the power consumption and size of a 10 Gb/s transponder are less than those of four 2.5 Gb/s transponders, providing benefits in power and space per bit/s. It is expected that these trajectories will eventually apply to 40 Gb/s and 100 Gb/s equipment as these technologies mature.

Continuing the trend of increased spectral efficiency, however, will become increasingly difficult. The analysis of [4] indicates that for an optical reach of 2000 km, the theoretical limit on spectral efficiency is about 6 to 7 b/s/Hz per polarization. The theoretical limit is unlikely to be attainable in a practical system; thus, it is reasonable to consider the realizable spectral efficiency limit to be more on the order of 4 b/s/Hz per polarization. If it is assumed that future systems will be dual-polarization (as the planned 100 Gb/s systems are), the realizable system spectral efficiency will likely be on the order of 8 b/s/Hz. This could be realized, for example, with an 80 × 400 Gb/s system. As compared with today’s 40 Gb/s systems, 8 b/s/Hz represents a factor of 10 increase in transmission capacity, which is clearly insufficient to meet long-term projections of traffic growth. Reference [7] also noted the challenge of meeting future traffic growth solely via an increase in spectral efficiency.

Expanded Transmission Band — 80 × 40 Gb/s systems are accommodated in approximately 32 nm of spectrum in the C-band. With the increase in spectral efficiency described above, it is expected that 80 × 400 Gb/s systems would be supported in this band as well. However, expansion into other bands can be used to increase system capacity. For example, the L-band provides low fiber loss comparable to the C-band, making it the most likely choice for expansion. It is important that an expanded system require only a single amplifier across the spectrum, to avoid the cost of multiple band amplifiers. Also, the tunable transponders ideally need to tune across the whole utilized spectrum.
Figure 1. Level of traffic for various compound annual growth rates (CAGR). At 40 percent CAGR, the traffic level will increase by a factor of 1000 in roughly 20 years.

We assume that this will be feasible for ~65 nm of spectrum across the C- and L-bands (a current system already supports 54 nm across the C- and L-bands with a single amplifier [8]). Thus, we assume that expanding the transmission band will result in a factor of two increase in system capacity; e.g., a 160 × 400 Gb/s system.

Multicore Fiber — Given that the capacity limit of a single fiber is being approached, it is natural to consider carrying future traffic on multiple fibers. However, this would require that the number of deployed optical amplifiers and the port size of all-optical switching devices, such as reconfigurable optical add/drop multiplexers (ROADMs), scale by the same amount. Thus, this solution does not provide benefits in cost per bit/s and, equally important, power per bit/s (i.e., energy per bit). An alternative solution is to increase the number of cores supported in a fiber. While carrier fiber plant is typically composed of single-core, single-mode fiber, there have been recent advances in multicore fiber [9], where ideally the total fiber capacity increases in proportion to the number of cores. In order for the multicore solution to be effective, it must continue to demonstrate the benefits of a conventional single-fiber solution. For example, a single optical amplifier must be capable of amplifying each of the fiber cores, rather than requiring one amplifier per core. Operationally, a single connector must be capable of interconnecting multicore fibers, as opposed to requiring one connector per core. Multicore fiber presents many challenges, most notably cross-talk between the cores. It will also require that new fiber plant be deployed (although new low-loss fiber plant might be needed to achieve high spectral efficiencies anyway). Additionally, many of the multicore experiments that have been reported cover distances of only about 100 km. The number of cores per fiber in these experiments is typically 7 or 19, as these numbers are compatible with hexagonal packing. For purposes of our discussion, it is assumed that 7 cores will eventually be feasible for long-haul applications. Whether this is realistic given the required optical reach of 2000 to
Figure 2. a) Three unicast connections from A to C, F, and G; b) one multicast connection to the same three nodes.
2500 km is unknown at this time; the number of cores may need to be smaller to allow for greater inter-core distance and reduced cross-talk. Clearly, more research is warranted on multicore fiber, as its potential contribution to meeting future capacity requirements is significant. (Another solution being pursued is the use of multimode fiber with electronic multiple-input multiple-output processing, but this is likely to consume a significant amount of power.)
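Putting the three transmission levers together reproduces the capacity factor cited at the start of the next section; the snippet below is simple arithmetic on the numbers assumed in the text.

```python
# Back-of-the-envelope system capacity under the article's assumptions:
# higher spectral efficiency, an expanded C+L band, and multicore fiber.
def system_capacity_tbps(wavelengths, rate_gbps, cores=1):
    return wavelengths * rate_gbps * cores / 1000.0

today  = system_capacity_tbps(80, 40)        # 3.2 Tb/s: 80 x 40 Gb/s, one core
future = system_capacity_tbps(160, 400, 7)   # 448 Tb/s: 8 b/s/Hz, C+L, 7 cores
print(future / today)                        # 140.0, the factor cited below
```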
ARCHITECTURE

Increased spectral efficiency, expansion into part of the L-band, and the use of multicore fiber yield about a factor of 140 increase in the capacity of transmission systems (using aggressive assumptions). As this does not meet the target level of growth, networks will need to rely on architectural enhancements to reduce the effective traffic load.

IP Packing — IP traffic flows are typically much smaller than the data-rate of a wavelength such that many flows are carried on a single wavelength. Additionally, IP flows have traditionally been quite bursty, demonstrating a high peak-to-average data-rate ratio. To accommodate this burstiness, carriers do not tightly pack IP links (i.e., the IP-carrying wavelengths between routers); they leave headroom such that sudden bursts of traffic can be handled without excessive packet loss or delay, and to allow for rerouting under failure conditions. In 2005, it was noted that the average fill-rate of IP links in the US Internet was about 25 percent [10]. As indicated previously, the data-rate of a wavelength has steadily increased. The average data-rate of individual flows has not increased at the same rate, resulting in a larger number of flows being multiplexed onto a single wavelength. This in turn has had a smoothing effect on the aggregated traffic, lessening the need to drastically overprovision IP links [11, 12]. Using a multi-rate Erlang loss model to determine the maximum utilization that results in an acceptable blocking probability [13] yields utilizations on the order of 95 percent for 400 Gb/s wavelengths. Depending on the amount of carried best-effort IP traffic that can be scaled back under failure conditions (which affects the amount of overprovisioning needed for rerouting), overall utilizations of 65 percent or more are feasible. If we assume that the current average fill-rate is on the order of 30 to 35 percent (representing some improvement over the 2005
figure), then this represents a factor of two benefit. Essentially, roughly twice as much traffic can be packed onto the same number of wavelengths, thereby reducing the effective capacity burden. Of course, traffic that is already at the data-rate of a wavelength (i.e., optical wavelength services) will not realize this benefit. We estimate that no more than 20 percent of the traffic is likely to be composed of wavelength services, such that the factor of two benefit in capacity applies to at least 80 percent of the traffic. It should be noted that IP router developments could also play a role in increasing the utilization level of IP links. For example, flow routers are capable of controlling the rate and route of individual flows, such that quality of service can be attained through better control rather than through overprovisioning [10].

Multicasting, Asymmetric Traffic, and Improved Caching — Architectural benefits can also be achieved by better tuning the Internet to the changing nature of its traffic, for example, the growing amount of video distribution. First, we consider multicast as a replacement for multiple unicast connections between a single source and multiple destinations. This is illustrated in Fig. 2, where Node A is transmitting the same payload to Nodes C, F, and G. In Fig. 2a, three separate connections are established, whereas in Fig. 2b a single multicast tree is established. It can readily be seen that the capacity requirements will be smaller with multicast.

To investigate the benefits of multicast, a study was performed on the network shown in Fig. 3. This network represents a typical U.S. backbone network, with 60 nodes and an average nodal degree of 2.6. Five thousand multicast sets were generated, with one source node and D destination nodes, where D was uniformly distributed between 5 and 15. The destinations were chosen based on their traffic weightings. (A typical demand set for this network was used to obtain the weightings. However, there was little difference as compared to the case where the nodes were selected with equal likelihood.) The study compared routing D unicast connections vs. one multicast connection. The multicast routing heuristic illustrated in [14] was used to route the multicast trees. The results show that multicast provides a factor of roughly three benefit in capacity, where capacity was measured as the average number of wavelengths required on a link (approximately the same capacity benefits were obtained when capacity was measured in terms of bandwidth-distance, or in terms of the number of wavelengths needed on the most heavily utilized link). While the study specifically investigated multicast at the optical layer, IP multicast offers similar levels of capacity savings (although somewhat smaller due to the finer granularity of the IP layer).

In backbone networks today, connections are almost always bi-directional and symmetric; i.e., if a connection of rate R is established from Node A to Node Z, then a connection of rate R is also established from Node Z to Node A. With multicast distribution, the connections only
need to be unidirectional, from the source to the destinations. There are also other applications where the traffic may be very asymmetric in the two directions; for example, a 40 Gb/s connection may be needed in one direction, but only 2.5 Gb/s in the reverse direction. Future networks could take advantage of this asymmetry and only provision what is needed in each direction, to reduce the amount of utilized capacity. This will necessitate modifying carrier network management systems.

Caching is another architectural feature that can be used to reduce capacity requirements. As the Internet is used more as a repository of data and video, the amount of traffic that can be cached will likely increase. Furthermore, caching algorithms are likely to improve, to increase the probability that the desired data is stored at a nearby location. While the capacity benefit of multicast was estimated above to be a factor of three, it is more difficult to quantify the gains that can be attained through taking advantage of asymmetric connections and improved caching. We estimate that overall, these factors (including multicast) will produce capacity benefits on the order of a factor of four. Furthermore, we assume that these gains apply to roughly 20 percent of the traffic.

Dynamic Optical Networking — Networks today are mostly quasi-static, with connection setup typically requiring on-site manual involvement, and connections often remaining established for months or years. As a first move away from this relatively fixed environment, networks are becoming configurable, where connections can be established remotely through software control, assuming the necessary equipment is already deployed in the network. Configurable networks take advantage of flexible network elements such as ROADMs, which can be remotely reconfigured to add, drop, or bypass any wavelength without affecting existing network traffic, and tunable transponders, which can tune to any of the wavelengths supported on a fiber. The next step in this evolution is dynamic networking, where connections can be rapidly established and torn down without the involvement of operations personnel. Dynamic offerings are currently limited to sub-wavelength rates with setup times on the order of minutes [15]. However, research is underway to extend dynamism to wavelength services and to provide setup times on the order of 100 ms to 1 s [13]. Dynamic networking takes advantage of advancements such as very fast switching, the ability to manage optical transients, and a distributed control plane. This will likely be a push-pull evolution, where the need for on-demand services (e.g., cloud computing) drives the implementation of dynamic networks, and the availability of a dynamic infrastructure fuels the development of more services that can take advantage of it (e.g., distributed computing on large data sets, interactive visualization and collaboration, etc.).

Dynamic networking is advantageous because it delivers bandwidth where and when it is needed. Bandwidth does not need to be permanently
reserved for services that are active for only a small percentage of the time, in effect decreasing the network cost and capacity requirements. The benefits of dynamic networking for individual users can be significant.

Figure 3. A typical U.S. backbone network, with 60 nodes and an average nodal degree of 2.6.

A network study on dynamic traffic was performed using the network of Fig. 3. Demands were modeled as on/off services, with an average on-time of 10 percent. Connections were established and torn down as the demands toggled between the on and off states. This was compared to a scheme where connections were maintained for the duration of the service, regardless of whether the service was active. The more traffic that can take advantage of dynamism, the greater the capacity benefits of a dynamic network. For purposes of the study, it was assumed that in a 20-year timeframe, about 25 percent of the connections in a network would be dynamic (this refers to connections at the optical layer, where a connection can carry a single wavelength service or multiple subrate services). The study showed that with a thousand-fold growth in traffic and 25 percent of the connections carrying on/off traffic, dynamic networking reduces the capacity required for these services by a factor of five.
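To convey the intuition behind this result, the rough Monte Carlo sketch below (with assumed numbers, not the study's 60-node topology, routing, or wavelength granularity) compares always-on provisioning against allocating capacity only while a demand is active:

```python
# Rough sketch of the on/off-demand intuition; numbers are assumed for
# illustration only.

import random

random.seed(1)
n_demands = 1000      # on/off services
on_fraction = 0.10    # average on-time of 10 percent
trials = 1000         # independent snapshots of the network state

# Static provisioning holds a connection for every demand at all times.
static_capacity = n_demands

# Dynamic provisioning only needs the worst-case number of demands that
# are simultaneously active across the observed snapshots.
peak_active = max(
    sum(random.random() < on_fraction for _ in range(n_demands))
    for _ in range(trials)
)

print(f"static: {static_capacity} connections; dynamic peak: {peak_active}")
print(f"capacity reduction factor: {static_capacity / peak_active:.1f}")
```

The resulting factor is larger than the study's factor of five because a real network must also account for routing, wavelength quantization, and blocking; the sketch captures only the statistical multiplexing gain.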
SUMMARY OF TRANSMISSION FACTORS

Table 1 summarizes the various factors described above that will either increase the capacity supported on a transmission system or decrease the effective traffic load. The third column in the table indicates what percentage of traffic we estimate will be able to take advantage of a particular factor. When combined with the fact that current systems are only about one-third full, these technological and architectural advancements provide transmission support for a thousand-fold growth over today's traffic levels. The aggressive assumptions discussed above are indicative of the challenges faced.

As indicated in the table, the combination of architectural enhancements effectively reduces the total required capacity by a factor of roughly 2.5. This result depends on the assumptions regarding traffic-type percentages. For example, if 50 percent of the traffic is dynamic rather than 25 percent, the benefit factor of dynamism increases from 5 to 6, and the combined benefit of the architectural enhancements increases from 2.5 to about 3.5.
| Factor | Benefit factor | Percentage of traffic subject to benefit | Effective capacity multiplier |
|---|---|---|---|
| Available excess capacity in today's networks | 3 | 100% | 3 |
| Increased spectral efficiency | 10 | 100% | 10 |
| Expanded transmission band | 2 | 100% | 2 |
| Multicore single-amp fiber | 7 | 100% | 7 |
| More efficient IP packing | 2 | 80% | 1.7 |
| Multicast/asymmetric/caching | 4 | 20% | 1.2 |
| Dynamic networking | 5 | 25% | 1.3 |
| Total effective capacity multiplier | | | ~1100 |

Table 1. Summary of the factors affecting transmission.
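The table's arithmetic can be checked with a short script. The effective multiplier for a benefit factor f that applies to a fraction p of the traffic is 1/((1 − p) + p/f), since the untouched share of the traffic still requires full capacity; this formula reproduces the rounded per-factor entries (1.7, 1.2, 1.3) and a total near the table's ~1100:

```python
# A check on the arithmetic behind Table 1. A benefit factor f applied to a
# fraction p of the traffic leaves an effective load of (1 - p) + p / f, so
# the effective capacity multiplier is the reciprocal of that quantity.

def effective_multiplier(f: float, p: float) -> float:
    return 1.0 / ((1.0 - p) + p / f)

factors = [
    ("Available excess capacity",     3,  1.00),
    ("Increased spectral efficiency", 10, 1.00),
    ("Expanded transmission band",    2,  1.00),
    ("Multicore single-amp fiber",    7,  1.00),
    ("More efficient IP packing",     2,  0.80),   # -> ~1.7
    ("Multicast/asymmetric/caching",  4,  0.20),   # -> ~1.2
    ("Dynamic networking",            5,  0.25),   # -> ~1.25
]

total = 1.0
for name, f, p in factors:
    m = effective_multiplier(f, p)
    total *= m
    print(f"{name}: x{m:.2f}")

# ~1030 with unrounded multipliers; Table 1's rounded entries give ~1100.
print(f"Total effective capacity multiplier: ~{total:.0f}")
```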
IP ROUTERS

The previous section addressed traffic on a link; this section addresses traffic at a node. Nodal traffic may be handled at multiple network layers. The physical layer of most current networks includes switches, such as ROADMs, that process traffic at wavelength granularity in the optical domain. Because the switching is done optically rather than electronically, these elements have demonstrated good scalability in cost, size, and power consumption, and are likely to continue to do so. Such optical networking elements are fundamental to network scalability.

If traffic needs to be processed at a granularity finer than a wavelength, it is sent to an electronic switch or router. Electronic switching or routing can occur at Layer 1 (e.g., in an Optical Transport Network [OTN] switch), Layer 2 (e.g., in an Ethernet switch), Layer 3 (in an IP router), or some combination of these layers. The exact layered architecture depends on the carrier. In general, the higher the layer, the finer the granularity of operation, and thus the more challenging it is to scale. Therefore, this section discusses nodal processing in the context of IP routers. If a factor of 1000 benefit can be achieved at the IP layer, then it is assumed that architectures such as IP-over-OTN-over-WDM, where much of the traffic grooming is offloaded from the IP router to the OTN switch, are feasible as well.
TECHNOLOGY

Today's largest carriers have deployed core IP routers with a maximum size of about 3 Tb/s. The development of routers of size 160 Tb/s (full duplex) has already been announced [16], which represents more than a factor of 50 increase over today's deployed sizes. It is certainly conceivable that in 20 years even larger routers will be developed, such that close to a thousand-fold
increase may be achieved through technology advancements alone, e.g., improved integration density and the use of optical interconnects. However, the fact that a router of this size can be built does not mean it is practical with respect to cost, size, and power. For example, the power consumption of today's core routers is on the order of 10 W per Gb/s [17]. If we assume that in 20 years a 3000 Tb/s router will exist, then using current power figures, the total power consumption for a single router would be 30 megawatts. Of course, power improvements can be expected over time; using the estimate of 20 percent better power efficiency per year [17] over 20 years yields a single 3000 Tb/s router consuming roughly 350 kilowatts. For our purposes here, it is assumed that in the 20-year timeframe, feasible routers will be about five times larger than what has already been announced, i.e., ~750 Tb/s, which represents a factor of 250 increase over today's deployed sizes. Even if larger routers are possible, the operational challenges of deploying such a large device provide impetus to consider architectural innovations that can mitigate the need for them.
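The power projection above is straightforward to reproduce; the sketch below simply applies 20 percent per-year efficiency gains, compounded over 20 years, to a hypothetical 3000 Tb/s router at today's ~10 W per Gb/s:

```python
# Reproducing the router power projection: ~10 W per Gb/s today [17],
# with 20 percent per-year efficiency gains compounded over 20 years,
# applied to a hypothetical 3000 Tb/s router.

watts_per_gbps_today = 10.0
router_size_gbps = 3000e3            # 3000 Tb/s expressed in Gb/s
years, yearly_improvement = 20, 0.20

power_today = watts_per_gbps_today * router_size_gbps        # 30 MW
power_future = power_today * (1 - yearly_improvement) ** years

print(f"at today's efficiency: {power_today / 1e6:.0f} MW")
print(f"after {years} years: {power_future / 1e3:.0f} kW")   # ~346 kW, i.e. ~350 kW
```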
ARCHITECTURE

IP Packing — As discussed earlier, increased wavelength data-rates result in less bursty aggregated subrate traffic, allowing for higher fill rates. It was estimated that the average IP-link fill rate would increase by a factor of two, such that the number of wavelengths required to carry the subrate services would decrease by approximately the same factor. This translates into half as many ports needed on the IP routers (it is assumed that wavelength services do not enter the routers). Thus, tighter IP packing will result in approximately a factor of two decrease in the required size of IP routers.

Optical Aggregation at the Network Edge — As the level of network traffic grows while the number of network nodes remains approximately fixed, the average amount of traffic between node pairs increases. Thus, an
increasing amount of traffic can be efficiently packed into wavelengths at the edge of the network without requiring further packing in the backbone network (given that the wavelength data-rate does not increase in proportion to the traffic). This implies that efficient packing in the edge networks (i.e., regional and metro-core networks) can be used to offload much of the burden from the core IP routers, a paradigm that was discussed in [18] and is illustrated in Fig. 4. (This paradigm is also similar to optical flow switching [19].) Optical aggregation techniques, such as optical burst switching, can possibly be used at the edge to reduce the overall amount of electronic processing. Optical aggregation is typically more suitable for edge networks than for backbone networks, because these techniques require collision management and/or scheduling.

To explore the benefits of optical edge aggregation, a study was performed on the network of Fig. 3, with the level of traffic assumed in this article combined with 400 Gb/s wavelengths. It is unlikely that all traffic could take advantage of optical aggregation at the edge, as this requires specialized optical equipment that may not be installed in a particular metro network or may be too costly for some end customers. Thus, it was assumed that 50 percent of the subrate traffic was eligible to be optically aggregated at the edge, with the remaining 50 percent being processed by IP routers as usual. This resulted in the maximum required size of the backbone IP router decreasing by a factor of two as compared to a conventional architecture. (Reference [18] aggressively assumed 90 percent of the traffic was optically aggregated at the edge, yielding a factor of ten benefit.) Since the optically aggregated traffic does not take advantage of grooming inside the backbone network, the wavelengths are slightly less packed, but the resulting increase in overall capacity requirements is negligible.

Another approach to reducing the burden on IP routers is to support a range of wavelength data-rates on a fiber, for example, from 40 Gb/s to 400 Gb/s (where the 40 Gb/s wavelengths are more tightly spaced to maintain the same spectral efficiency). By better matching the wavelength data-rate to the service data-rate, less traffic grooming is needed in the IP routers, as discussed in [20]. However, supporting closely spaced wavelengths (e.g., 40 Gb/s wavelengths at 5 GHz spacing) will be challenging. While we do not include this technique in our assumptions here, it may be worthy of future research.
Figure 4. Optical aggregation is implemented in the edge network, delivering traffic to the backbone nodes; this traffic does not undergo further grooming in the backbone network. The edge/backbone interface could be all-optical or optical-electronic-optical, as discussed in more detail in [18].
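As a toy rendering of the offload argument in the study above (all numbers assumed for illustration, not taken from the study), the maximum router load scales with the fraction of subrate traffic that is not aggregated at the edge; the 90 percent case corresponds to the aggressive assumption of [18]:

```python
# Toy rendering of the edge-aggregation offload argument; the traffic
# figure is an assumption used only to show the scaling.

total_subrate_tbps = 1000.0    # assumed subrate traffic reaching a backbone node

for aggregated in (0.0, 0.5, 0.9):   # fraction optically aggregated at the edge
    router_load = total_subrate_tbps * (1 - aggregated)
    print(f"{aggregated:.0%} edge-aggregated -> router processes {router_load:.0f} Tb/s")
```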
Multicasting, Asymmetric Traffic, Improved Caching, and Dynamic Optical Networking — As discussed in earlier sections, taking advantage of multicasting, asymmetric traffic, improved caching, and dynamic optical networking will reduce routing requirements as well. However, some of the connections that benefit from these factors represent wavelength services, which we assume bypass the IP routers. Furthermore, we assume that optical edge aggregation is implemented, as discussed in the previous section, such that much of the subrate traffic that would benefit from these factors has already been offloaded from the IP routers. With these assumptions, the contribution of these factors to router-size reduction will be small, and hence it is not included in our estimates.
SUMMARY OF IP ROUTER FACTORS

Table 2 summarizes the various factors discussed above that relate to IP router size. Overall, the total benefit is a factor of 1000, which meets the target traffic growth level. Note that this discussion of routers has implicitly assumed that today's networks implement optical bypass, such that IP traffic does not have to enter a router at every node along its path. Optical bypass already yields a factor of two reduction in the maximum required size of a router in today's large networks [14].
| Factor | Benefit factor | Percentage of router capacity affected by factor | Effective capacity multiplier |
|---|---|---|---|
| Larger IP routers | 250 | 100% | 250 |
| More efficient IP packing | 2 | 100% | 2 |
| Optical edge aggregation | 2 | 100% | 2 |
| Total effective capacity multiplier | | | 1000 |

Table 2. Summary of the factors affecting maximum required IP router size.
CONCLUSION
At the current pace of growth, Internet traffic is doubling approximately every two years, leading to a factor of 1000 growth in the next two decades. We have shown that such staggering growth can indeed be supported, while keeping the network cost and power consumption in check. This requires advances in both technology, to increase the capacity of transmission and routing/switching systems, and architecture, to effectively reduce the capacity requirements. While we have addressed only the backbone portion of the network, access networks will need to scale as well, through a combination of advanced broadband fiber, cable, and wireless technologies. Of course, while the pace of Internet growth can be expected to slow at some point, eventually the thousand-fold growth figure will be exceeded as well, requiring even further innovations, yet to be invented!
REFERENCES

[1] A. Odlyzko, "Minnesota Internet Traffic Studies (MINTS)"; www.dtc.umn.edu/mints/home.html.
[2] Cisco, "Visual Networking Index: Forecast and Methodology, 2008–2013," White Paper, June 9, 2009.
[3] A. Colby, "AT&T, NEC, Corning Complete Record-Breaking Fiber Capacity Test," May 11, 2009; news.soft32.com/att-nec-corning-complete-recordbreaking-fiber-capacity-test_7372.html.
[4] R.-J. Essiambre et al., "Capacity Limits of Optical Fiber Networks," J. Lightwave Tech., vol. 28, no. 4, Feb. 15, 2010, pp. 662–701.
[5] A. A. M. Saleh, "Dynamic Optical Networking to Enable Scalability of the Future Internet," OFC/NFOEC Future Internet Symp., San Diego, CA, Feb. 24–28, 2008; www.monarchna.com/FutureInternet-OFCNFOEC-2008-Saleh.pdf.
[6] J. M. Simmons, "On Determining the Optimal Optical Reach for a Long-Haul Network," J. Lightwave Tech., vol. 23, no. 3, Mar. 2005, pp. 1039–48.
[7] A. R. Chraplyvy, "The Coming Capacity Crunch," ECOC Plenary Talk, Vienna, Austria, Sept. 20–24, 2009, Paper 1.0.2.
[8] D. A. Fishman, W. A. Thompson, and L. Vallone, "LambdaXtreme® Transport System: R&D of a High Capacity System for Low Cost, Ultra Long Haul DWDM Transport," Bell Labs Tech. J., vol. 11, no. 2, Summer 2006, pp. 27–53.
[9] K. Imamura et al., "Multi-Core Holey Fibers for the Long-Distance (>100 km) Ultra Large Capacity Transmission," OFC 2009, San Diego, CA, Mar. 22–26, 2009.
[10] L. Roberts, "Enabling Data-Intensive iGrid Applications with Advanced Network Technology," iGrid 2005, San Diego, CA, Sept. 26–29, 2005.
[11] L. Yao et al., "Long Range Dependence in Internet Backbone Traffic," IEEE ICC 2003, Anchorage, AK, May 11–15, 2003.
[12] Ciena, "Evolution to the 100G Transport Network," White Paper, Nov. 2007.
[13] A. Chiu et al., "Network Design and Architectures for Highly Dynamic Next-Generation IP-Over-Optical Long Distance Networks," J. Lightwave Tech., vol. 27, no. 12, June 15, 2009, pp. 1878–90.
[14] J. M. Simmons, Optical Network Design and Planning, Springer, 2008.
[15] AT&T Product Brief, "AT&T Optical Mesh Service — OMS," May 13, 2008; http://www.business.att.com/content/productbrochures/PB-OMS_16312_V01_05-13.pdf.
[16] Cisco, "Advanced Platform Designed to Deliver New Wave of Video, Mobile, and Data Center/Cloud Services," Press Release, Mar. 9, 2010.
[17] J. Baliga et al., "Energy Consumption in Optical IP Networks," J. Lightwave Tech., vol. 27, no. 13, July 1, 2009, pp. 2391–2403.
[18] A. A. M. Saleh and J. M. Simmons, "Evolution toward the Next-Generation Core Optical Network," J. Lightwave Tech., vol. 24, no. 9, Sept. 2006, pp. 3303–21.
[19] G. Weichenberg, V. Chan, and M. Medard, "Design and Analysis of Optical Flow-Switched Networks," IEEE/OSA J. Optical Commun. Net., vol. 1, no. 3, Aug. 2009, pp. B81–B97.
[20] J. Berthold et al., "Optical Networking: Past, Present, and Future," J. Lightwave Tech., vol. 26, no. 9, May 1, 2008, pp. 1104–18.
BIOGRAPHIES

ADEL A. M. SALEH [F] ([email protected]) has been a program manager at DARPA since 2005. From 2003 to 2004 he was a founding partner of Monarch Network Architects. From 2002 to 2003 he was chief scientist and vice president of Network Architecture at Kirana Networks. From 1999 to 2002 he was vice president and chief network architect of Corvis Corporation. Between 1991 and 1999 he was a department head at AT&T Bell Labs/AT&T Labs, conducting and leading research on the technologies, architectures, and applications of optical backbone and access networks. From 1970 to 1991 he was a member of technical staff at AT&T Bell Labs, Crawford Hill, New Jersey, conducting research on microwave, wireless, and optical communications systems. He led the AT&T effort on several cross-industry DARPA-funded consortia that pioneered the vision of all-optical networking in backbone, regional, metro, and access networks. He was a member, then Chair, of the OFC Networks Subcommittee, 1995–1998; Technical Program Co-Chair, OFC 1999; General Program Co-Chair, OFC 2001; and OFC Steering Committee member, 2001–2006. He has more than 100 publications and 25 patents. He holds Ph.D. and S.M. degrees from MIT, and a B.Sc. degree, First Class honors, from the University of Alexandria, Egypt, all in electrical engineering. He is a Fellow of the OSA.

JANE M. SIMMONS [F] ([email protected]) is a founding partner of Monarch Network Architects, which provides optical network architectural services and design tools. Prior to Monarch, she was executive director of Network Planning and Architecture at Kirana Networks. From 1999 to 2002 she worked at Corvis Corporation as the executive engineer of network architecture, and later as chief network architect. From 1993 to 1999 she worked at AT&T, where she conducted research on backbone, regional, and broadband access networks. She received a B.S.E., Summa Cum Laude, from Princeton University, and S.M. and Ph.D. degrees from MIT, all in electrical engineering. She was a member of the OFC Networks Subcommittee, 2001–2002, and was the Subcommittee Chair in 2003. She is currently a member of the OFC Steering Committee. From 2004 to 2009 she was an Associate Editor of the IEEE Journal on Selected Areas in Communications (Optical Communications and Networking Series). She is currently an Associate Editor for the Journal of Optical Communications and Networking. She teaches a course on optical network design at OFC, and is the author of the book Optical Network Design and Planning.
TOPICS IN OPTICAL COMMUNICATIONS
MC-FiWiBAN: An Emergency-Aware Mission-Critical Fiber-Wireless Broadband Access Network Ahmad R. Dhaini and Pin-Han Ho, University of Waterloo
ABSTRACT

The convergence of optical and wireless networks has recently been envisioned as an attractive and economical broadband access solution; yet its usage has been restricted to public users only. To serve mission-critical services in public safety and disaster relief communications using the fiber-wireless integration, key challenges such as fault recovery, security, and quality of service assurance must be addressed. This article introduces MC-FiWiBAN, a new emergency-aware architecture that leverages layer 2 virtual private networks to support mission-critical services over the integration. Each virtual private network corresponds to a specific bundle of mission-critical system requirements that is stipulated in the service level agreement and fulfilled via an effective resource management paradigm. Simulation results show that MC-FiWiBAN can commit guaranteed quality of service for emergency and non-emergency services.
INTRODUCTION

Mission-critical (MC) systems and organizations, such as healthcare, police, and firefighting, rely heavily on the underlying telecommunications infrastructure to conduct their daily tactical and emergency operations. In recent years, the MC sector has witnessed the proliferation of diverse multimedia applications such as remote patient monitoring, two-way video/audio conferencing, 3D geographical mapping and positioning, enhanced telemetry, and live surveillance video broadcasting. All these technological advancements are expected to force MC service providers (SPs) to migrate from simple data services to triple-play (data, voice, and video) or even quadruple-play (triple-play plus mobile) services on a single infrastructure in order to offer state-of-the-art MC service support.

Public safety and disaster relief (PSDR) networks are specifically designed to guarantee reliable, continuous, secure, and flexible MC service. Since 1997 the terrestrial trunked radio (TETRA) standard [1] (predominant in Europe, the Middle East and Africa, and Asia Pacific) and the ASTRO-25 standard (equally predominant in the Americas) have become the de facto
standards for PSDR communications, and have been employed by senior market players (e.g., Motorola and Nokia) in their deployments of MC networks in rural and urban areas. More interestingly, TETRA networks have successfully penetrated other large markets such as transportation, utilities, and oil/gas services (as recently announced in Qatar) [2]. The two current major TETRA releases (TETRA-1 and TETRA-2), developed by the European Telecommunications Standards Institute (ETSI), provide radio capabilities endorsing network-controlled services and direct mobile-to-mobile communications, with a wide range of functionalities such as group calls, fast call setup, encryption/decryption, real-time localization, and grade-of-service (GoS) connections [1]. However, the access capability offered by TETRA, which focuses on voice plus data (V+D) services only, is not sufficient to satisfy the bandwidth requirements of the emerging multimedia PSDR applications, whose transmission rates are considerably higher than TETRA's maximum data bit rate of 28.8 kb/s. In May 2000, Project MESA [3] was introduced to define a new platform and a set of MC requirements aimed at forming an advanced PSDR telecommunication network with an improved bit rate of 2 Mb/s. In 2007, Motorola announced MOTOA4 [4], a suite of proprietary network solutions leveraging the ubiquity of IEEE 802.11n to form a wireless outdoor mesh wide area network (MWAN). MOTOA4 operates at high data bit rates of 300 Mb/s and offers full ASTRO interoperability; however, its deployment has been restricted to urban areas.

In a similar timeline, public networks have been preceding the MC sector in terms of technological evolution and market deployments. Recently, the integration of Ethernet passive optical network (EPON) and WiMAX has been presented as the evolutionary next step toward the convergence to a high-speed wireless and cost-effective broadband access network [5–7]. More attractively, EPON and WiMAX match well in terms of capacity hierarchies and design premises. EPON, for instance, supports a total of 1 Gb/s of bandwidth in both the downstream and upstream directions, shared by typically N ≤ 32 remote optical network units (ONUs).
Figure 1. MC-FiWiBAN supporting MC VPNs and integrating with TETRA networks. In safety-critical places (e.g., hospitals [H]), a network bridge may be used to connect ONU and BS, such that the BS is placed a small distance away from its designated safety-critical site. (OLT: optical line terminal; ONU: optical network unit; BS: WiMAX base station; SS: subscriber station; Bridge: network bridge, optional. Note: the ONU and BS can be mounted in one box, a so-called ONU-BS.)
On average, each ONU accesses ≈70 Mb/s of bandwidth, which matches the total capacity offered by a WiMAX base station (BS) over a 20 MHz channel. In addition, the integration enables integrated resource allocation and packet scheduling paradigms that help better support the emerging quality of service (QoS) applications, as well as improve the overall network throughput. Furthermore, the integration can help realize fixed mobile convergence (FMC) by supporting mobility in broadband network access, thereby significantly reducing network design and operational costs [5].

This article aims at taking advantage of the public network evolution to support MC services over the fiber-wireless (FiWi) integration. In this context, we propose MC-FiWiBAN (Fig. 1), a new emergency-aware architecture that can help extend MC multimedia service coverage to rural areas, as well as improve PSDR user access to the IP-based network core. PSDR and MC services require high security, custom network control, and fast network access, in addition to a fine degree of QoS assurance and bandwidth guarantees. MC-FiWiBAN enforces these features via the construction of a layer 2 virtual private network (VPN), a technique that is well established in wired networks and is considered a killer application by modern Internet carriers [8, 9].

Figure 1 shows how MC-FiWiBAN can be easily integrated with the TETRA infrastructure (i.e., the TETRA switching and management infrastructure [SwMI]) using IP/Ethernet as the interface between the optical line terminal (OLT) and an Internet gateway (IGW) located at the TETRA SwMI. The main function of an IGW is
to hide the anomalies of MC-FiWiBAN from the SwMI, thereby achieving seamless integration with almost no SwMI changes [10]. A simpler strategy would be to directly embed the OLT as part of the SwMI infrastructure; however, a special interconnection agreement would then be required to preserve and/or lease the proper TETRA authorizations to the access network. A TETRA radio user is envisioned to operate as a dual-mode terminal capable of supporting both TETRA and WiMAX radio/air interfaces in order to provide full interoperability between MC units and systems across these heterogeneous networks.

With MC-FiWiBAN, this article attempts to provide an effective solution to the resource management problem — one of the key issues that arise in VPN construction over a shared network infrastructure. A resource allocation framework is presented to achieve a guaranteed QoS level for MC services as defined in the VPN service level agreement (SLA). To the best of our knowledge, this is the first work that considers the support of MC services over EPON-WiMAX.

The rest of the article is organized as follows. We first overview MC systems. We next present MC-FiWiBAN, highlighting the main key issues in the aspects of design and implementation. An illustrative simulation example is then presented, and we finally conclude the article.
MISSION-CRITICAL SYSTEMS

Different from public systems, an MC system requires a set of unique services and network requirements. In this section we briefly overview the different types of multimedia services
envisioned to extensively appear in such a system. We also highlight the main network requirements for ensuring robust MC service support.

Figure 2. a) Envisioned mission-critical applications and services; b) an illustration of a surveillance system over MC-FiWiBAN. (Panel a depicts applications such as home patient monitoring, remote consulting, robotic surgery, event coverage and tracking, and mobile medical units over a VPN-enabled EPON; panel b shows an in-home surveillance deployment.)
MISSION-CRITICAL SERVICES
With the emerging multimedia MC services, PSDR users can establish audio/video calls or share images and other miscellaneous MC information. These services (Fig. 2a) possess stringent QoS and security requirements that need to be fulfilled via an effective network solution. Typically, an MC service belongs to one of the following categories:

• Emergency (asynchronous): A one-way message (or flow of short length) triggered by an emergency event (e.g., a fire, a criminal pursuit, or an alarm by a health monitoring system) that requires immediate delivery/broadcast.

• Real-time (synchronous): This can be as simple as telephone calls or as complex as sophisticated virtual-reality robotic surgery. It can also be used for monitoring long-term care agents by establishing videoconferencing. A particular example of a real-time application is a live video surveillance system. The deployment of such a system over MC-FiWiBAN, as shown in Fig. 2b, can provide a secure and robust wireless connection that eliminates the wiring/rewiring overhead of conventional surveillance systems.

• Store-and-forward (asynchronous): This supports delivery of non-emergency MC data (images, bio-signals, etc.) to specialists for e-consultation, evaluation, or other purposes. It does not require interactive communications between the involved parties in real time; however, high data integrity and bandwidth guarantees are crucial.
MISSION-CRITICAL NETWORK REQUIREMENTS

In addition to security and reliability, an MC network must fulfill the following salient requirements:

• Operability: The ability to establish and sustain communications between different MC units during an MC event.

• Interoperability: The ability of all MC units (from all levels and disciplines) to communicate with each other as authorized and as needed, regardless of their network capabilities.

• Continuity of communications: The ability to maintain communications, even in the event of a network failure.

(In 2005 the U.S. Department of Homeland Security activated a communications program, Safecom (http://www.safecomprogram.gov), which aims at improving emergency responses through wireless communications; Safecom lists these same features as emergency requirements.)

It is therefore essential that any designed or adopted network solution sustain these requirements in order to meet the diverse QoS needs of MC services.
MC-FIWIBAN

As described above, the deployment of MC systems compels a set of features to be available in the underlying network. In this section we discern the building blocks of MC-FiWiBAN that embrace these distinctive attributes.
LAYER 2 VPNS OVER EPON-WIMAX

An MC system provisions a specific bundle of services with different QoS requirements to its users. To support these services over EPON-WiMAX, we propose to develop layer 2 VPNs over the EPON-WiMAX infrastructure; each VPN corresponds to a specific service requirement bundle issued by a registered system [9]. Such VPNs are referred to as layer 2 VPNs in the sense that they are built on layer 2 protocols. Compared to layer 3 VPNs, layer 2 VPNs can better resolve complications due to network dynamics and fast-changing wireless channel status [9].
Figure 3. VPN tunneling in MC-FiWiBAN. (The figure shows bidirectional inter-domain VPN tunnels across an optical ring connecting multiple OLTs, and intra-domain tunnels within point-to-multipoint and mesh EPON-WiMAX segments. PE: provider edge; CO: central office; CE: customer edge; OLT: optical line terminal; ONU-BS: optical network unit-base station; SS: subscriber station.)
In addition, layer 2 VPNs provide strong support for premium services with custom-designed control of layer 3 routing and IP addressing, diverse QoS requirements, and security assurance, as well as the features intrinsically exhibited in the medium access control (MAC) layer [8]. These attributes are vital to MC service support over the EPON-WiMAX integration. With the initiation of layer 2 VPNs directly on the integration, users of each registered system can dynamically configure their service requirements and possibly issue call preemption requests through a suite of service access points (SAPs) and interfaces that are maintained in the corresponding VPN package. This property is essential to an MC system where stringent bandwidth guarantees and call preemption requests are present [3]. Furthermore, such VPNs enable interoperability between fixed and mobile platforms. The layer 2 VPN requests issued by the users can be handled within the MC organization without the need to involve further overlay and additional VPN agents [8]. As a result, the classified nature of MC operations is preserved, and the constantly changing locations of MC units, especially in mobile tactical scenarios, are easily handled. Meanwhile, survivability of the end-user services can be enhanced by multihoming (i.e., a single host making use of several connections associated with various interconnected networks), which can easily be manipulated in the layer 2 VPNs [8]. The VPN applications can determine whether a user should be associated with multiple ONU-BSs through separate air interfaces, so as to enable fast fault recovery, which is vital to the MC scenario.
VPN TUNNELING

As depicted in Fig. 3, we classify an MC-FiWiBAN VPN tunnel as either inter-domain or intra-domain:

• Intra-domain: A tunnel established between customer edges (CEs) that belong to the same architectural domain (e.g., between two WiMAX subscriber stations [SSs]).

• Inter-domain: A tunnel established between CEs that belong to different architectural domains (e.g., between the OLT and an SS).

To achieve operability, a VPN tunnel is established such that sufficient resources are allocated in MC-FiWiBAN (in both the uplink/upstream and downlink/downstream directions) and in the optical ring. Downlink resource allocation is deterministic, because transmission is broadcast in nature under EPON-WiMAX [5, 7]. Hence, this article focuses on the uplink/upstream resource management problem in MC-FiWiBAN.
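The classification above reduces to a one-line predicate over the endpoints' domains; a minimal sketch follows, in which the domain labels are illustrative rather than part of the architecture's naming:

```python
# Minimal sketch of the tunnel classification rule.

def classify_tunnel(ce_a_domain: str, ce_b_domain: str) -> str:
    """A tunnel is intra-domain iff both customer edges share a domain."""
    return "intra-domain" if ce_a_domain == ce_b_domain else "inter-domain"

print(classify_tunnel("wimax", "wimax"))  # e.g., SS to SS -> intra-domain
print(classify_tunnel("epon", "wimax"))   # e.g., OLT to SS -> inter-domain
```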
FAULT TOLERANCE AND ERROR RECOVERY

To maintain service continuity, fast recovery from any unexpected failure is required. In MC-FiWiBAN a VPN tunnel may fail for the following reasons:

• Damage to the wireless link between the ONU-BS and the wireless user (an intra-domain VPN error)

• A fiber cut between the ONU-BS and the splitter (a hybrid inter-/intra-domain VPN fault)

• A fiber cut between the splitter and the OLT (a hybrid inter-/intra-domain VPN fault)

• A fiber cut on the metro ring, damaging established inter-domain VPNs

Enabled via multihoming, fast fault recovery can be attained through the following strategies. If the wireless channel between the ONU-BS and the SS deteriorates, the VPN tunnel can be rerouted to another ONU-BS in range using a direct connection (in the case of a point-to-multipoint [PMP] topology), or through a multihop route (in the case of a mesh topology where the ONU-BS is not in range) [11]. Similarly, if a fiber cut occurs between an ONU-BS and the splitter, or between the OLT and the splitter, the failure can be averted by establishing communication with an alternative ONU-BS of a neighboring EPON-WiMAX network, reaching its network boundary by means of the bidirectional metro ring, which is itself fault-tolerant by nature [12]. Such a strategy cannot be performed in traditional PONs and/or WiMAX networks [6]. These routing stratagems can easily be accommodated in layer 2 through specialized routing agents (located at each network boundary) to maintain service continuity of the VPNs. Nonetheless, other layer 3 FiWi routing protocols [11] may alternatively be adopted to serve the same purpose.

| Service type | Application | Data bit rate | One-way end-to-end delay | Jitter | IEEE 802.16 CoS |
|---|---|---|---|---|---|
| Emergency call/event | Device/human trigger | 2–5 kb/s | ≤10 ms | N/A | EGS |
| Audio | Diagnostic (interactive) | 4–25 kb/s | ≤1 s | ≤1 ms | UGS |
| Audio | Conferencing (real-time) | 5–70 kb/s | ≤150 ms | ≤1 ms | UGS |
| Video | Video streaming | 1–6 Mb/s | ≤10 s | N/A | nrtPS/rtPS |
| Video | Conferencing (real-time) | 20 kb/s–1 Mb/s | ≤250 ms | N/A | rtPS |
| Monitoring services | Enhanced telemetry (real-time) | 20–50 kb/s | ≤250 ms | N/A | rtPS |
| Monitoring services | MC data transfer | 20–50 kb/s | ≤250 ms | N/A | nrtPS |
| Multimedia transfer | Bulk data transfer | 50 kb/s–few Mb/s | ≤10 s | N/A | nrtPS |
| Multimedia transfer | Routine activities | 100 kb/s–few Mb/s | ≤10 s | N/A | nrtPS |
| Best effort (BE) | Email/web browsing | Minimum throughput | N/A | N/A | BE |

Table 1. Envisaged multimedia applications/service classification and CoS mapping.
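For concreteness, Table 1's mapping could be encoded as a simple lookup table; this is a hedged sketch in which the key names are ours, while the CoS assignments and delay bounds come from the table:

```python
# Sketch encoding Table 1's service-to-CoS mapping; delay bounds are the
# one-way end-to-end requirements from the table (None = no bound).

QOS_MAP = {
    # application:           (CoS,     max one-way delay, seconds)
    "emergency_event":       ("EGS",   0.010),
    "audio_diagnostic":      ("UGS",   1.0),
    "audio_conferencing":    ("UGS",   0.150),
    "video_streaming":       ("rtPS",  10.0),   # or nrtPS, per Table 1
    "video_conferencing":    ("rtPS",  0.250),
    "enhanced_telemetry":    ("rtPS",  0.250),
    "mc_data_transfer":      ("nrtPS", 0.250),
    "bulk_data_transfer":    ("nrtPS", 10.0),
    "multimedia_transfer":   ("nrtPS", 10.0),
    "web_browsing":          ("BE",    None),   # best effort: throughput only
}

cos, bound = QOS_MAP["emergency_event"]
assert cos == "EGS" and bound <= 0.010   # emergencies get the tightest bound
```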
EMERGENCY-AWARE QOS SUPPORT
As mentioned, QoS requirements are specified in the SLA of each VPN. To support QoS in the MAC, the IEEE 802.16 standard defines five classes of service (CoSs): unsolicited grant service (UGS), real-time polling service (rtPS), extended real-time polling service (ertPS) (defined in IEEE 802.16e), non-real-time polling service (nrtPS), and best effort (BE) [13]. An MC service plan typically comprises two types of data:

• Regular periodic traffic (i.e., real-time or store-and-forward) that is transmitted frequently

• Emergency messages that are highly erratic but extremely delay-sensitive

As shown in Table 1, QoS support for regular MC traffic can be achieved by mapping each service to one of the WiMAX CoSs based on its type and QoS requirements. On the other hand, emergency traffic requires QoS protection and immediate transfer; therefore, it should not be mixed with other types of services. For this reason, we define a new IEEE 802.16 CoS: emergency grant service (EGS). EGS has the highest priority over all other services and possesses stringent QoS requirements. Moreover, due to its erratic nature and sporadic occurrence (e.g., once in several weeks or months), EGS traffic is reported to the ONU-BS whenever an emergency event occurs, and is placed in a separate buffering queue. EGS not only reduces the control overhead caused by reporting bandwidth needs in each polling interval (PI), but also eliminates the waste of bandwidth caused by reserving resources for emergency services in every PI, as would happen if the UGS approach were applied (in WiMAX, UGS traffic is not reported by the user; instead, a fixed share of bandwidth is allocated in every PI for each admitted UGS flow) [13]. With a separate buffer, EGS also facilitates the implementation of the service preemption mechanism required in emergency situations with congested networks.
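One way to picture the proposed EGS class is as a strict-priority scheduler over per-CoS buffers, with EGS ranked above UGS. This is an illustrative sketch, not the paper's scheduler; the priority ordering below EGS is our assumption:

```python
# Sketch of per-CoS queueing at an ONU-BS with EGS given strict priority.

from collections import deque

# Lower number = higher scheduling priority; EGS sits above all others.
PRIORITY = {"EGS": 0, "UGS": 1, "ertPS": 2, "rtPS": 3, "nrtPS": 4, "BE": 5}
queues = {cos: deque() for cos in PRIORITY}   # one separate buffer per CoS

def enqueue(packet, cos):
    # EGS uses its own buffer and is reported only when an emergency occurs.
    queues[cos].append(packet)

def dequeue_next():
    """Serve the highest-priority non-empty queue (strict priority)."""
    for cos in sorted(PRIORITY, key=PRIORITY.get):
        if queues[cos]:
            return cos, queues[cos].popleft()
    return None

enqueue("fire-alarm", "EGS")
enqueue("video-frame", "rtPS")
print(dequeue_next())   # ('EGS', 'fire-alarm'): the emergency is served first
```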
SERVICE CLASSIFICATION AND COS MAPPING

The design of a MAC protocol that arbitrates the transmission of MC users over the shared MC-FiWiBAN resources requires an MC service classification at both the end user/SS and the ONU-BS.

At the SS — Table 1 shows the mapping of the envisaged MC users' applications to the IEEE 802.16 CoSs at each SS. This classification makes MC-FiWiBAN scalable in the sense that supporting more users will require no changes, particularly to the adopted resource allocation schemes, since the VPN-to-MAC mapping, via the predefined SAPs, will implicitly satisfy their requirements without compromising the QoS level of existing services.

At the ONU-BS — According to the IEEE 802.3ah standard, an ONU is allowed to support and report up to eight queues [5]. To preserve the standard, we install a total of six queues at the ONU-BS and then perform a simple one-to-one CoS mapping.

Figure 4. MC-FiWiBAN QoS-provisioning paradigm with K = 4 VPNs, all supported by ONU-BSs 1, 5, 8, and 10. These ONU-BSs provision V4 services through SSs 3, 4, 6, and 9. In turn, SS3 accommodates V4 requests. EGS43 means the EGS bandwidth belongs to SS3 and V4.
VPN RESOURCE MANAGEMENT

Our previous work in [9] introduced the first framework to address the resource management problem of layer 2 VPNs over EPON-WiMAX. The proposed framework, WiMAX-VPON, consists of a QoS-provisioning paradigm and a joint VPN-based access control (AC) and uplink/
upstream dynamic bandwidth allocation (DBA) mechanism. WiMAX-VPON ensures bandwidth guarantees for each VPN service and protects its QoS. In this work we extend WiMAX-VPON to support MC services and requests.

VPN-Based QoS Provisioning — With the new framework, the effective upstream VPON cycle T_eff^VPON, which is the upstream optical PI length minus the control overhead caused by the polling and requesting signaling, is divided into two subcycles. As illustrated in Fig. 4, the first subcycle, β T_eff^VPON, is shared among all K VPNs, whereas the second subcycle, (1 − β) T_eff^VPON, is shared among non-VPN services (to support legacy users as well). Note that T_eff^VPON can be obtained either via simulations, by measuring the signaling overhead rate, or analytically [14].
Figure 5. One-way end-to-end average packet delay with and without our AC framework (90 percent confidence interval): a) EGS flow; b) UGS flow; c) rtPS flow; d) nrtPS flow. (Each panel plots average packet delay in seconds versus the number of VPN users, 0–240, for the AC-FTR, AC-MTR, NO-AC-FTR, and NO-AC-MTR schemes, with the system saturation point and the relevant delay bounds marked.)
Let B_min^k be the bandwidth reserved for VPN k (denoted V_k) in each PI, and let R_N be the transmission speed of the PON in megabits per second. In addition, let each V_k be given a weight w_k that determines its agreed-upon bandwidth share. B_min^k (in bytes, hence the division by 8) is then computed as

$$B_{\min}^{k} = \frac{\beta \, T_{\mathrm{eff}}^{\mathrm{VPON}} \times R_N \times w_k}{8}, \qquad \text{where } \sum_{k=1}^{K} w_k = 1.$$
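As a quick numerical illustration of this formula, the sketch below uses the simulation section's values (R_N = 1 Gb/s, β = 1, K = 4 equally weighted VPNs); the PI length is an assumption of ours, not a value given in the article:

```python
# Numerical illustration of the B_min^k formula under assumed parameters.

R_N = 1_000                  # PON upstream rate, Mb/s (1 Gb/s)
T_eff_vpon = 2e-3            # effective upstream cycle, seconds (assumed 2 ms)
beta = 1.0                   # share of the cycle given to the K VPNs
weights = [0.25, 0.25, 0.25, 0.25]   # w_k for K = 4 equally weighted VPNs
assert abs(sum(weights) - 1.0) < 1e-9   # the weights must sum to 1

for k, w_k in enumerate(weights, start=1):
    bits = beta * T_eff_vpon * (R_N * 1e6) * w_k     # bits reserved per PI
    print(f"VPN {k}: B_min = {bits / 8:,.0f} bytes per PI")   # 62,500 bytes
```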
To guarantee a minimum per-VPN SLA-based throughput for BE traffic, and in order to eliminate starvation (in the case of the admission of a large number of EGS or real-time flows, which have higher priority), we reserve a portion α_k B_min^k for BE traffic. As a result, all the V_k real-time and emergency flows share the remaining (1 − α_k) B_min^k of bandwidth.

A Joint VPN-Based AC and DBA Mechanism — To guarantee per-VPN QoS assurance in the uplink/upstream direction, a joint VPN-based AC and DBA scheme is proposed, complementing the proposed QoS provisioning framework [9]. With MC-FiWiBAN, VPN-AC admits all EGS and BE flows. Each admitted BE flow of VPN k shares the reserved portion α_k B_min^k with all other
BE flows belonging to V_k, while an EGS flow is always satisfied. Conversely, a constant bit rate (CBR, e.g., UGS) flow is admitted only if its mean rate can be accommodated in both the total wireless capacity and the VPN bandwidth share. For variable bit rate (VBR, e.g., rtPS and nrtPS) traffic, a guaranteed rate (with specific delay requirements) is extracted from the arrival process by passing it through a dual-token leaky bucket (DTLB) situated at the MAC buffer entrance. Consequently, a VBR flow is admitted if its guaranteed rate can be accommodated in the network.

The proposed VPN-DBA is installed at the OLT and at each ONU-BS, and divides each upstream/uplink cycle/frame into two subcycles. The first subcycle is used to allocate guaranteed bandwidth to admitted EGS and real-time traffic, whereas the second subcycle is used to allocate BE traffic per VPN. Moreover, in addition to the MAC requests, VPN-DBA takes into consideration the physical layer (PHY) burst profile associated with each wireless link, such that a statistical QoS guarantee is achieved [9]. However, unlike in [9], where EGS was not supported, if the uplink channel is overloaded and an emergency event occurs, a call preemption mechanism borrows enough bandwidth from the BE quota to serve the
incoming EGS flow(s). The borrowed bandwidth is then re-allocated back to BE traffic once all emergency flows are served and satisfied.
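A minimal sketch of this bandwidth-borrowing idea follows; the class and method names are hypothetical, and the quota numbers are illustrative only:

```python
# Sketch of EGS call preemption: borrow from the per-VPN BE quota when the
# real-time/emergency quota is exhausted, and return the loan afterward.

class VpnQuota:
    def __init__(self, b_min, alpha):
        self.be = alpha * b_min          # quota reserved for BE traffic
        self.rt = (1 - alpha) * b_min    # quota for EGS + real-time traffic
        self.borrowed = 0.0

    def admit_egs(self, demand):
        """EGS is always admitted; borrow from BE if the RT quota falls short."""
        if demand > self.rt:
            loan = min(demand - self.rt, self.be)   # cannot exceed the BE quota
            self.be -= loan
            self.borrowed += loan
        return True

    def release_egs(self):
        """Re-allocate the borrowed bandwidth back to BE once EGS flows finish."""
        self.be += self.borrowed
        self.borrowed = 0.0

q = VpnQuota(b_min=62_500, alpha=0.1)
q.admit_egs(60_000)      # overload: part of the BE quota is loaned out
q.release_egs()          # BE throughput recovers when the emergency ends
```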
SIMULATION RESULTS

In our illustrative simulation example, we set the WiMAX-VPON parameters as follows: number of ONU-BSs N = 16; α_k = 0.1 for all k; β = 1; K = 4; and R_N = 1 Gb/s. The total WiMAX bandwidth is 20 MHz. V_k is randomly generated for each connecting SS. Without loss of generality, we assume that all MC systems (i.e., VPNs) have equal weights w_k. We also consider various adaptive modulation and coding (AMC) modes for different SSs (i.e., binary phase shift keying [BPSK], quaternary phase shift keying [QPSK], and M-quadrature amplitude modulation [M-QAM]). To stress test our framework, we assume that each SS has five flows (i.e., EGS, UGS, rtPS, nrtPS, and BE). We inject each EGS flow at a rate of 5 kb/s. Every UGS flow is generated with a guaranteed rate of 64 kb/s [9]. Each rtPS flow is generated at a guaranteed rate of 5 Mb/s (the average bit rate of DVD-quality video [9]), and each nrtPS flow is generated with a guaranteed rate of 500 kb/s [9]. Each self-similar BE flow is generated at a mean rate of 1 Mb/s. The number of VPN users increments by |N| over time, reflecting the attempt to connect a new MC unit to each ONU-BS simultaneously. Our framework is evaluated against the MC QoS requirements listed in Table 1. We consider two simulation scenarios: fixed transmission rate (FTR), where all SSs have the same fixed transmission rate, and multiple transmission rates (MTR), where a random AMC mode is selected for every SS.

Figures 5 and 6 show the end-to-end average packet delay for EGS and real-time flows of a tagged SS, and the total per-VPN BE traffic throughput, respectively. Although able to meet the QoS requirements of some services (e.g., UGS and nrtPS), typical resource management schemes (with no AC) are unable to maintain the QoS requirements of all MC VPN services, especially after system saturation. In contrast, the application of WiMAX-VPON (with AC) maintains the QoS requirements for all types of services (in terms of delay and throughput). This demonstrates the effectiveness of our framework in maintaining network stability as well as ensuring and protecting the QoS level of all types of traffic. Figure 6 also shows how the bandwidth borrowing mechanism may affect the overall BE throughput. In our simulations, flows never leave the network; therefore, by giving up some of its bandwidth quota, the total V3 BE traffic witnesses a permanent throughput degradation.

Figure 6. Total per-VPN BE throughput. (The figure plots total BE throughput in Mb/s for VPN IDs 1–4 under the No AC-MTR, AC-MTR, No AC-FTR, and AC-FTR schemes; with α = 0.1 the BE quota is ≈24.5 Mb/s, and the borrowed bandwidth is visible for VPN 3.)
CONCLUSION AND FUTURE WORK

CONCLUSION

This article presents MC-FiWiBAN, a new emergency-aware fiber-wireless (FiWi) access network that leverages layer 2 VPNs over the EPON-WiMAX integrated network to provide state-of-the-art mission-critical service support. MC-FiWiBAN extends MC coverage to rural
areas and improves PSDR communications by supporting the emerging MC multimedia services. A QoS provisioning framework was also presented to address the resource management problem arising from the establishment of VPNs. Our simulation results demonstrated the effectiveness of MC-FiWiBAN in guaranteeing the QoS requirements of different types of services.
FUTURE WORK

While the presented simulation results demonstrated the effectiveness of MC-FiWiBAN, real deployment scenarios and experiments would be of great added value in further highlighting its advantages. Such experiments need to consider large cities and areas where TETRA-based and MC-FiWiBAN-based systems with dual-mode users are deployed. These users would then be configured to intercommunicate through the proposed architecture. Consequently, mobility management will emerge as a theme that needs to be handled in the new architecture. Not only will seamless handoff techniques be required, but scenarios where PSDR users roam between TETRA and MC-FiWiBAN coverage zones will require special handover protocols that consider the physical layer heterogeneity of both networks. Furthermore, other VPN-related issues, such as VPN gateway design and configuration, as well as advanced VPN resource management paradigms, may also be addressed.

As a final note, the key advantages of MC-FiWiBAN are summarized as follows:

• Robustness: By constructing layer 2 VPNs and performing the proper error recovery and resource management strategies, MC-FiWiBAN can be more robust than legacy wired and wireless networks (e.g., traditional PONs, WiMAX, and WiFi), which also possess limited reach.

• Centralized operation and management: MC-FiWiBAN not only provides high-speed mobile access in rural areas, but also enables centralized control over the network resources (bandwidth and channel) at the OLT using its operation, administration, and management (OAM) unit.
• VPN interoperability: By deploying layer 2 VPNs, MC-FiWiBAN provides an integrated VPN service suite that enables interoperability between fixed and mobile platforms.

• Scalability: The installation of WiMAX networks in rural areas enables seamless support of more users and services, such that no modifications are required to the already installed equipment and layer 2 protocols.

• Improved TETRA services: Due to the increased bit rates of MC-FiWiBAN, the limitations of TETRA (V+D) services (e.g., call setup delays) can be considerably alleviated. Furthermore, the emerging MC multimedia applications can be supported seamlessly.
ACKNOWLEDGMENTS

The authors would like to thank Prof. Abdallah Shami (UWO, Canada) and Dr. James She (Cambridge, United Kingdom) for their insightful feedback on this work.
REFERENCES

[1] ETSI EN 300 392-2 V2.5.2, "Terrestrial Trunked Radio (TETRA); Voice plus Data (V+D); Part 2: Air Interface (AI)," May 2006.
[2] TETRA; http://www.tetramou.com/.
[3] Project MESA; http://www.projectmesa.org.
[4] MOTOA4; http://business.motorola.com/MOTOA4/index.html.
[5] G. Shen, R. S. Tucker, and C.-J. Chae, "Fixed Mobile Convergence Architectures for Broadband Access: Integration of EPON and WiMAX," IEEE Commun. Mag., vol. 45, no. 8, Aug. 2007, pp. 44–50.
[6] S. Sarkar, S. Dixit, and B. Mukherjee, "Hybrid Wireless-Optical Broadband-Access Network (WOBAN): A Review of Relevant Challenges," IEEE/OSA J. Lightwave Tech., vol. 25, no. 11, Nov. 2007, pp. 3329–40.
[7] J. She and P.-H. Ho, "Cooperative Coded Video Multicast for IPTV Services under EPON-WiMAX Integration," IEEE Commun. Mag., vol. 46, no. 8, Aug. 2008, pp. 104–10.
[8] W. Luo et al., Layer-2 VPN Architecture, Cisco Press, 2005.
[9] A. R. Dhaini, P.-H. Ho, and X. Jiang, "WiMAX-VPON: A Framework of Layer-2 VPNs for Next-Generation Access Networks," IEEE/OSA J. Optical Commun. Net., vol. 2, no. 7, July 2010, pp. 400–14.
[10] A. K. Salkintzis, "Evolving Public Safety Communication Systems by Integrating WLAN and TETRA Networks," IEEE Commun. Mag., vol. 44, no. 1, Jan. 2006, pp. 38–46.
[11] S. Sarkar et al., "A Novel Delay-Aware Routing Algorithm (DARA) for a Hybrid Wireless-Optical Broadband Access Network (WOBAN)," IEEE Network, vol. 22, no. 3, May 2008, pp. 20–28.
[12] S. Spadaro et al., "Positioning of the RPR Standard in Contemporary Operator Environments," IEEE Network, vol. 18, no. 2, Mar. 2004, pp. 35–40.
[13] IEEE, "IEEE Standard for Local and Metropolitan Area Networks Part 16: Air Interface for Fixed Broadband Wireless Access Systems," 2004.
[14] A. R. Dhaini, P.-H. Ho, and X. Jiang, "Performance Analysis of QoS-Aware Layer-2 VPNs over Fiber-Wireless (FiWi) Networks," Proc. IEEE GLOBECOM '10, Miami, FL, Dec. 2010.
BIOGRAPHIES

AHMAD R. DHAINI ([email protected]) received his B.Sc. in computer science from the American University of Beirut (AUB) in 2004, and his M.App.Sc. in electrical and computer engineering from Concordia University, Montreal, Canada, with a best thesis award nomination, in 2006. He worked as a software consultant at TEKSystems, Montreal, in 2006–2007, and as a software designer at Ericsson, Montreal, in 2007–2008. He is currently working toward his Ph.D. degree in electrical and computer engineering at the University of Waterloo. His research interests focus on optical/wireless broadband access networks.

PIN-HAN HO ([email protected]) received his Ph.D. degree from Queen's University, Kingston, Ontario, Canada, in 2002. He is now an associate professor in the Department of Electrical and Computer Engineering, University of Waterloo, Canada. His current research interests cover a wide range of topics in broadband wired and wireless communication networks, including survivable network design, wireless metropolitan area networks such as IEEE 802.16 networks, fiber-wireless network integration, and network security.