Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2981
Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo
Christian Müller-Schloer Theo Ungerer Bernhard Bauer (Eds.)
Organic and Pervasive Computing – ARCS 2004 International Conference on Architecture of Computing Systems Augsburg, Germany, March 23-26, 2004 Proceedings
Series Editors Gerhard Goos, Karlsruhe University, Germany Juris Hartmanis, Cornell University, NY, USA Jan van Leeuwen, Utrecht University, The Netherlands Volume Editors Christian Müller-Schloer University of Hannover Institute of Systems Engineering, System and Computer Architecture - SRA Appelstr. 4, 30167 Hannover, Germany E-mail:
[email protected] Theo Ungerer University of Augsburg Institute of Informatics, 86159 Augsburg, Germany E-mail:
[email protected] Bernhard Bauer University of Augsburg Department of Software Engineering and Programming Languages 86159 Augsburg, Germany E-mail:
[email protected] Cataloging-in-Publication Data applied for A catalog record for this book is available from the Library of Congress. Bibliographic information published by Die Deutsche Bibliothek Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available in the Internet at .
CR Subject Classification (1998): C.2, C.5.3, D.4, D.2.11, H.3.5, H.4, H.5.2 ISSN 0302-9743 ISBN 3-540-21238-8 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag is a part of Springer Science+Business Media springeronline.com c Springer-Verlag Berlin Heidelberg 2004 Printed in Germany Typesetting: Camera-ready by author, data conversion by PTP-Berlin, Protago-TeX-Production GmbH Printed on acid-free paper SPIN: 10990674 06/3142 543210
Preface
Where is system architecture heading? The special interest group on Computer and Systems Architecture (Fachausschuss Rechner- und Systemarchitektur) of the German computer and information technology associations GI and ITG asked this question and discussed it during two Future Workshops in 2002. The result in a nutshell: Everything will change but everything else will remain. Future systems technologies will build on a mature basis of silicon and IC technology, on well-understood programming languages and software engineering techniques, and on well-established operating systems and middleware concepts. Newer and still exotic but exciting technologies like quantum computing and DNA processing are to be watched closely, but they will not be mainstream in the next decade. Although there will be considerable progress in these basic technologies, is there any major trend which unifies these diverse developments?

There is a common denominator – according to the result of the two Future Workshops – which marks a new quality. The challenge for future systems technologies lies in the mastering of complexity. Rigid and inflexible systems, built under a strict top-down regime, have reached the limits of manageable complexity, as has become obvious by the recent failure of several large-scale projects. Nature is the most complex system we know, and she has solved the problem somehow. We just haven't understood exactly how nature does it. But it is clear that systems designed by nature, like an anthill or a beehive or a swarm of birds or a city, are different from today's technical systems that have been designed by engineers and computer scientists. Natural systems are flexible, adaptive, and robust. They are in permanent exchange with their environment, respond to changes adequately, and are very successful in staying alive.

It seems that the traditional basic technologies have also recognized this trend. Hardware is becoming reconfigurable, software now updates itself to fulfill new requirements or replace buggy components, and small portable systems form ad hoc communities. Technical systems of this kind are called Organic Computer systems. The key challenge here will be to understand and harness self-organization and emergence. Organic Computing investigates the design and implementation of self-managing systems that are self-configuring, self-optimizing, self-healing, self-protecting, context aware, and anticipatory.

ARCS 2004 continued the biennial series of German Conferences on Architecture of Computing Systems. This seventeenth conference in the series served as a forum to present current work on all aspects of computer and systems architecture. The program committee of ARCS 2004 decided to devote this year's conference to the trends in organic and pervasive computing. ARCS 2004 emphasized the design, realization, and analysis of the emerging organic and pervasive systems and their scientific, engineering, and commercial applications. The conference focused on system aspects of organic and pervasive computing in software and hardware. In particular, the system integration and
self-management of hardware, software, and networking aspects of up-to-now unconnected devices is a challenging research topic. Besides its main focus, the conference was open to more general and interdisciplinary themes in operating systems, networking, and computer architecture.

The program reflected the main topics of the conference. The invited talk of Andreas Maier (IBM) presented the Autonomic Computing Initiative sparked by IBM, which has objectives similar to but not identical with Organic Computing. Erik Norden's (Infineon) presentation discussed multithreading techniques in modern microprocessors. The program committee selected 22 out of 50 submitted papers. We were especially pleased by the wide range of countries represented at the conference. The submitted paper sessions covered the areas Organic Computing, peer-to-peer computing, reconfigurable hardware, hardware, wireless architectures and networking, and applications.

The conference would not have been possible without the support of a large number of people involved in the local conference organization in Augsburg and the program preparation in Hannover. We want to extend our special thanks to the local organization at the University of Augsburg, Faruk Bagci, Jan Petzold, Matthias Pfeffer, Wolfgang Trumler, Sascha Uhrig, Brigitte Waimer-Eichenauer, and Petra Zettl, and in particular to Fabian Rochner of the University of Hannover, who managed and coordinated the work of the program committee with admirable endurance and great patience.
February 2004
Christian Müller-Schloer, Theo Ungerer, Bernhard Bauer
Organization
Executive Committee

General Chair: Theo Ungerer, University of Augsburg
General Co-chair: Bernhard Bauer, University of Augsburg
Program Chair: Christian Müller-Schloer, University of Hannover
Workshop and Tutorial Chair: Uwe Brinkschulte, University of Karlsruhe (TH)
Program Committee

Dimiter Avresky, Northeastern University, Boston, USA
Nader Bagherzadeh, University of California Irvine, USA
Bernhard Bauer, University of Augsburg, Germany
Jürgen Becker, University of Karlsruhe, Germany
Michael Beigl, TecO, Karlsruhe, Germany
Frank Bellosa, University of Erlangen, Germany
Arndt Bode, Technical University of München, Germany
Gaetano Borriello, University of Washington, USA
Uwe Brinkschulte, University of Karlsruhe, Germany
Francois Dolivo, IBM, Switzerland
Kemal Ebcioglu, IBM T.J. Watson, Yorktown Heights, USA
Reinhold Eberhart, DaimlerChrysler Research, Ulm, Germany
Werner Erhard, Friedrich Schiller University of Jena, Germany
Hans Eveking, TU Darmstadt, Germany
Hans-W. Gellersen, University of Lancaster, UK
Werner Grass, University of Passau, Germany
Wolfgang Karl, University of Karlsruhe, Germany
Jürgen Kleinöder, University of Erlangen-Nürnberg, Germany
Rudolf Kober, Siemens AG, München, Germany
Erik Maehle, University of Lübeck, Germany
Christian Müller-Schloer, University of Hannover, Germany
Jörg Nolte, TU Cottbus, Germany
Wolfgang Rosenstiel, University of Tübingen, Germany
Burghardt Schallenberger, Siemens AG, München, Germany
Alexander Schill, Technical University of Dresden, Germany
Hartmut Schmeck, University of Karlsruhe, Germany
Albrecht Schmidt, LMU, München, Germany
Karsten Schwan, Georgia Tech, USA
Rainer G. Spallek, TU Dresden, Germany
Peter Steenkiste, Carnegie-Mellon University, USA
Djamshid Tavangarian, University of Rostock, Germany
Rich Uhlig, Intel Microprocessor Research Lab, USA
Theo Ungerer, University of Augsburg, Germany
Klaus Waldschmidt, University of Frankfurt, Germany
Lars Wolf, University of Braunschweig, Germany
Hans Christoph Zeidler, University Fed. Armed Forces, Germany
Martina Zitterbart, University of Karlsruhe, Germany
Additional Reviewers

Klaus Robert Müller, University of Potsdam
Christian Grimm, University of Hannover
Local Organization

Bernhard Bauer, University of Augsburg
Faruk Bagci, University of Augsburg
Jan Petzold, University of Augsburg
Matthias Pfeffer, University of Augsburg
Wolfgang Trumler, University of Augsburg
Sascha Uhrig, University of Augsburg
Theo Ungerer, University of Augsburg
Brigitte Waimer-Eichenauer, University of Augsburg
Petra Zettl, University of Augsburg
Program Organization

Fabian Rochner, University of Hannover
Supporting/Sponsoring Societies

The conference was organized by the special interest group on Computer and Systems Architecture of the GI (Gesellschaft für Informatik – German Informatics Society) and the ITG (Informationstechnische Gesellschaft – Information Technology Society), supported by CEPIS and EUREL, and held in cooperation with IFIP, ACM, and IEEE (German section).
Table of Contents
Invited Program

Keynote: Autonomic Computing Initiative . . . . . 3
  Andreas Maier

Keynote: Multithreading for Low-Cost, Low-Power Applications . . . . . 4
  Erik Norden

I Organic Computing

The SDVM: A Self Distributing Virtual Machine for Computer Clusters . . . . . 9
  Jan Haase, Frank Eschmann, Bernd Klauer, Klaus Waldschmidt

Heterogenous Data Fusion via a Probabilistic Latent-Variable Model . . . . . 20
  Kai Yu, Volker Tresp

Self-Stabilizing Microprocessor (Analyzing and Overcoming Soft-Errors) . . . . . 31
  Shlomi Dolev, Yinnon A. Haviv

Enforcement of Architectural Safety Guards to Deter Malicious Code Attacks through Buffer Overflow Vulnerabilities . . . . . 47
  Lynn Choi, Yong Shin

II Peer-to-Peer

Latent Semantic Indexing in Peer-to-Peer Networks . . . . . 63
  Xuezheng Liu, Ming Chen, Guangwen Yang

A Taxonomy for Resource Discovery . . . . . 78
  Koen Vanthournout, Geert Deconinck, Ronnie Belmans

Oasis: An Architecture for Simplified Data Management and Disconnected Operation . . . . . 92
  Anthony LaMarca, Maya Rodrig

Towards a General Approach to Mobile Profile Based Distributed Grouping . . . . . 107
  Christian Seitz, Michael Berger
III Reconfigurable Hardware

A Dynamic Scheduling and Placement Algorithm for Reconfigurable Hardware . . . . . 125
  Ali Ahmadinia, Christophe Bobda, Jürgen Teich

Definition of a Configurable Architecture for Implementation of Global Cellular Automaton . . . . . 140
  Christian Wiegand, Christian Siemers, Harald Richter

RECAST: An Evaluation Framework for Coarse-Grain Reconfigurable Architectures . . . . . 156
  Jens Braunes, Steffen Köhler, Rainer G. Spallek

IV Hardware

Component-Based Hardware-Software Co-design . . . . . 169
  Péter Arató, Zoltán Ádám Mann, András Orbán

Cryptonite – A Programmable Crypto Processor Architecture for High-Bandwidth Applications . . . . . 184
  Rainer Buchty, Nevin Heintze, Dino Oliva

STAFF: State Transition Applied Fast Flash Translation Layer . . . . . 199
  Tae-Sun Chung, Stein Park, Myung-Jin Jung, Bumsoo Kim

Simultaneously Exploiting Dynamic Voltage Scaling, Execution Time Variations, and Multiple Methods in Energy-Aware Hard Real-Time Scheduling . . . . . 213
  Markus Ramsauer

V Wireless Architectures and Networking

Application Characterization for Wireless Network Power Management . . . . . 231
  Andreas Weissel, Matthias Faerber, Frank Bellosa

Frame of Interest Approach on Quality of Prediction for Agent-Based Network Monitoring . . . . . 246
  Stefan Schulz, Michael Schulz, Andreas Tanner

Bluetooth Scatternet Formation – State of the Art and a New Approach . . . . . 260
  Markus Augel, Rudi Knorr
A Note on Certificate Path Verification in Next Generation Mobile Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 Matthias Enzmann, Elli Giessler, Michael Haisch, Brian Hunter, Mohammad Ilyas, Markus Schneider
VI Applications

The Value of Handhelds in Smart Environments . . . . . 291
  Frank Siegemund, Christian Floerkemeier, Harald Vogt

Extending the MVC Design Pattern towards a Task-Oriented Development Approach for Pervasive Computing Applications . . . . . 309
  Patrick Sauter, Gabriel Vögler, Günther Specht, Thomas Flor

Adaptive Workload Balancing for Storage Management Applications in Multi Node Environments . . . . . 322
  Jens-Peter Akelbein, Ute Schröfel
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Keynote: Autonomic Computing Initiative

Andreas Maier
IBM Lab Böblingen
[email protected] Abstract. Autonomic computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives. Self-managing environments can perform such activities based on situations they observe or sense in the IT environment, rather than requiring IT professionals to initiate the tasks. Autonomic computing is important today because the cost of technology continues to decrease yet overall IT costs do not. With the expense challenges that many companies face, IT managers are looking for ways to improve the return on investment of IT by reducing total cost of ownership, improving quality of service, accelerating time to value and managing IT complexity. The presentation will outline where IBM comes from with its autonomic computing initiative and what has been achieved to date.
Keynote Multithreading for Low-Cost, Low-Power Applications Erik Norden Senior Architect Infineon Technologies, Balanstrasse 73, 81541 Munich, Germany
[email protected]

Abstract. Innovative architectural design may be the best route to create an economical and efficient balance between memory and logic elements in cost- and power-sensitive embedded solutions. When system prices are measured in a few euros instead of a few hundred, the large, power-intensive and costly memory hierarchy solutions typically used in computer and communications applications are impractical. A multithreading extension to the microprocessor core is a more effective approach, especially for deeply embedded systems. Infineon has developed a new processor solution: TriCore 2. It is the second generation of the TriCore Unified Processor architecture. TriCore 2 contains, among other features, a block multithreading solution, which responds to the blocking code memory latency in one thread by executing the instructions of a second thread. In this way, the execution pipelines of the processor can be almost fully utilized. From the user programming model, each thread can be seen as one virtual processor. A typical scenario is a cell phone. Here, generally external 16-bit flash memories with a speed of 40 MHz are used, while today's performance requirements expect processors with a clock speed of 300-400 MHz. Because of this discrepancy, up to 80% of the performance can be lost, despite caches. Larger cache sizes and multi-level memory solutions are not applicable for cost reasons. Block multithreading allows system designers to use comparatively smaller instruction caches and slow external memory while still getting the same overall performance. The performance degradation in the cell phone example can be almost eliminated. Even the clock frequency can be reduced. Block multithreading is very efficient for a general CPU-based application residing in cache memory and an algorithmic application in the local on-chip memory. This is a characteristic which many deeply embedded processor applications have. Effectively, a separate DSP and CPU can be replaced by a multithreaded hybrid to reduce chip area, tool costs etc. The block multithreading solution also supports a fast interrupt response, required for most deeply embedded applications. The additional costs for this multithreading solution are small. The implementation in TriCore 2 requires a chip area of only 0.3 mm² in 0.13 micron technology. The most obvious costs are caused by the duplicated register files to eliminate the penalty for task switching. Instruction cache
and fetch unit need to support multithreading, but the overhead is low. The same applies to the other affected areas: traps and interrupt handling, virtual memory, and debug/trace. Apart from multithreading, TriCore 2 also has other highlights. Advanced pipeline technology allows high instruction-per-cycle (IPC) performance while reaching higher frequencies (400-600 MHz typical) and complying with demanding automotive requirements. The center of the processor's hierarchical memory subsystem is an open, scalable crossbar architecture, which provides a method for efficient parallel communication to code and data memory, including multiprocessor capability. This presentation will describe the root problems in low-cost, low-power embedded systems that require a multithreaded processor solution. Since the core architecture is well deployed in the demanding automotive market, the first implementation is specified for these requirements, such as quality and determinism. Working silicon with this implementation is expected for the first quarter of 2004 and will be used as a demonstrator.
The SDVM: A Self Distributing Virtual Machine for Computer Clusters Jan Haase, Frank Eschmann, Bernd Klauer, and Klaus Waldschmidt J.W.Goethe-University, Technical Computer Sc. Dep., Box 11 19 32, D-60054 Frankfurt, Germany {haase|eschmann|klauer|waldsch}@ti.informatik.uni-frankfurt.de
Abstract. Computer systems of the future will consist more and more of autonomous and cooperative system parts and will behave in a self-organizing way. Self-organization is mainly characterized by adaptive and context-oriented behaviour. The Self Distributing Virtual Machine (SDVM) is an adaptive, self-configuring and self-distributing virtual machine for clusters of heterogeneous, dynamic computer resources. In this paper the concept and features of the implemented prototype are presented.
1 Introduction
State-of-the-art computer systems are mostly used to run sequential programs, though client-server systems and parallelizing compilers are increasingly popular. So far, many parallel programs run on dedicated parallel computing machines. Unfortunately, these multi-processor systems cannot be adapted to all needs of problems or algorithms to turn the inherent or explicit parallelism into efficiency. Therefore large clusters of PCs become important for parallel applications. These clusters show huge dynamics and heterogeneity, with a structure that is usually unknown at the compile time of the application. Furthermore, the CPUs of the cluster have strongly changing loads, nodes are more or less specific, and the network is spontaneous, with vanishing and appearing resources. Besides, the reuse of existing hardware would be more economic and cost-efficient. Thus, mechanisms have to be developed to perform the (efficient) distribution of code and data. They should be executable on arbitrary machines – regardless of different processors and/or operating systems.

Computers that spend part of their machine time on the parallel calculations need applications running silently in the background without user interaction – like a UNIX daemon. These computers should not be burdened too much by the background process, to keep the foreground processes running smoothly. Full machine time must be available on demand for foreground processes, so the background process must support a shutdown at any time – or at least a "nice" feature to calm down its machine time consumption. Similarly the background
Parts of this work have been supported by the Deutsche Forschungsgemeinschaft (DFG).
process should be started by the user expecting idle times for a while. So it has to reintegrate into the cluster without the need for a restart of the parallel application. The cluster would then be self-configuring regarding its composition. Some batch processing systems, such as Condor [1], feature cycle harvesting – a method to identify idle times of the participating sites. These systems are based on a client-server structure, in which the server decides centrally about the distribution and execution of the scheduled jobs. As all sites have to report to the server, the communication channel tends to be the bottleneck of the system – especially in a loosely coupled cluster. The SDVM works without a client-server structure on a peer-to-peer basis, so neighborhood relations are automatically strengthened. Due to the size of large computer clusters, the probability of hardware failures needs to be considered. Large clusters are hardly usable unless a concept for self-healing of the cluster is available, making the parallel execution immune to hardware failures. To address these problems, the Self Distributing Virtual Machine (SDVM) has been developed and prototypically implemented in C++ by the authors.
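The "nice"-style throttling mentioned above can be achieved with standard POSIX means; the following minimal C++ sketch (an illustration only, not part of the SDVM code) lowers the scheduling priority of the background daemon so that the machine owner's foreground processes keep running smoothly:

#include <sys/resource.h>
#include <cstdio>

int main() {
    // Run the cluster daemon at the lowest priority (nice value 19), so that
    // the machine owner's foreground processes are barely affected.
    if (setpriority(PRIO_PROCESS, 0 /* this process */, 19) != 0) {
        std::perror("setpriority");
        return 1;
    }
    // ... enter the daemon's work loop here (join the cluster, execute work, ...)
    return 0;
}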
1.1 Goals
The SDVM has been developed to introduce the following features into standard computer clusters.

– self configuration: The system should adapt itself to different environments. Signing in and signing off at runtime should be available for all machines in the cluster.
– adaptivity: The cluster should cope with heterogeneous computer systems, so computers with different speeds and even different operating systems can participate in an SDVM-controlled cluster.
– self distribution: Intelligent distributed scheduling and migration schemata should ensure that data and instruction code are distributed among the cluster engines automatically at a minimum cost of machine time.
– self healing: Crashes should be detected, handled, and corrected by the SDVM automatically.
– automatic parallelization: Hidden parallelism in programs should be detected and exploited.
– self protection and protection of others: For a cluster working within an open network it is important to consider safety aspects. The authenticity of users and data has to be ensured, and sabotage, spying, and corruption have to be prevented.
– accounting: Mechanisms and cost functions for accounting of provided and used computing time should be offered.

The first three of these goals are addressed in this first paper about the SDVM.
1.2 Possible Fields of Application
The SDVM can in the first instance be seen as a cheap opportunity to shorten the runtime of a program by parallelization. Due to self configuration (here: configuration of the computing cluster) it can be decided at run time to enlarge the cluster to get more performance or to downsize the cluster if participants are needed elsewhere. For example the use of a company's workstation cluster at lunch time or at night is imaginable. Applied to the internet, the SDVM can be used to solve complex problems by collaboration of many computers, such as Seti@Home [2]. In this way computers which are currently on the night hemisphere of the earth can join an SDVM cluster and sign off in the morning. In contrast to the internet solution, a monolithic version can be implemented. In this case the SDVM can be seen as a concept for self-distributing scheduling of calculations on a given multiprocessor system.
1.3 VMs
Generally, a virtual machine is a software layer which emulates the functionality of a certain machine or processor on another machine. Usually the user does not notice the emulation. Virtual machines are used, for example, to encapsulate program execution for security and supervision reasons, so that the application gets access only to virtual resources, and the access is not forwarded to the real resources unless it has been checked and granted. The meanwhile widely used Parallel Virtual Machine (PVM) [3] is essentially a collection of procedures and functions to simplify the usage of heterogeneous computer clusters. On each participating computer a daemon (pvmd) has to be started, which performs communication with other pvmd's on different cluster computers via special functions. To run a PVM, a "host pool" has to be configured before run time, with a constant total of machines and a fixed communication infrastructure. The SDVM, in contrast, allows signing in and out of computers at run time, without the need to know before run time which computers will participate.
2 Concepts

2.1 Microframes and Microthreads
The SDVM is based on the concept of the Cache Only Memory Architectures (COMAs) [4]. In a COMA architecture a processor which needs specific data, looks for it in its local memory. The local memory is active and checks whether the data is locally available. In case of a cache hit it returns the data to the processor. In case of a cache miss it autonomously connects to another computer in the COMA cluster, and asks for the data. The answer of this query will be written into the local memory and propagated to the processor. Thus the data access on COMA clusters is done transparently for the application.
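The transparent access just described can be sketched as follows; this is only an illustration of the COMA idea (the class and function names are invented), not SDVM code:

#include <cstdint>
#include <optional>
#include <unordered_map>

using Address = std::uint64_t;
using Word    = std::uint64_t;

// Illustrative attraction-memory wrapper: a hit is served locally, a miss
// triggers a request to another cluster node; the answer is written into the
// local memory before being propagated to the processor.
class LocalMemory {
public:
    Word read(Address a) {
        if (auto hit = lookup(a))      // cache hit: data is already local
            return *hit;
        Word w = fetch_remote(a);      // cache miss: ask another COMA node
        store_[a] = w;                 // write the answer into local memory
        return w;                      // ... and hand it to the processor
    }

private:
    std::optional<Word> lookup(Address a) const {
        auto it = store_.find(a);
        if (it == store_.end()) return std::nullopt;
        return it->second;
    }
    Word fetch_remote(Address /*a*/) { return 0; }   // placeholder for the network request
    std::unordered_map<Address, Word> store_;
};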
[Figure: structure of a MicroFrame — an ID, input parameters, and target addresses — together with an example C routine (romberg) from which microthreads are generated.]

[...]

... if (a > 0) then call func_a(a) else func_b(a); We call all other forms of live data corruption data-dependent data corruption. This may not change the flow of control directly, but it modifies live data which will be referenced by the program in the future. This may appear as a normal program execution but with an erroneous result or state. In addition, the data corruption may propagate to other data locations, which can eventually lead to control-dependent data corruption or control data corruption. Note that all the above three cases of live data corruption may lead to an abnormal termination. The reference of the corrupted data item is susceptible to data reference exceptions or execution-related errors caused by invalid data operands. For example, the load from a corrupted data address may result in a TLB miss, a page miss, or an access violation. Even if the corrupted data item is successfully referenced, it may cause an execution-related exception such as an overflow or floating-point exception when it is used by a later arithmetic operation. When the corrupted data item is used by a conditional branch as a branch condition or by an indirect branch as a branch target, it modifies the execution control flow. Specifically, an instruction is fetched from a wrong target address. We call this control corruption, which implies that the illegal control flow is made by the buffer overflow attack. This may lead to a malicious code execution if the branch target is modified to point to the worm code brought in by the external input. We call this kind of control corruption external code control corruption. This is the most serious consequence of buffer overflow attacks since the malicious code can replicate and propagate to other vulnerable hosts. Also, this is the most common form of buffer overflow attacks. The other form of control corruption is when the modified branch target points to legitimate code in the text region. We call this internal code control corruption, which has been reported in a few cases of buffer overflow incidents [6].
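A textbook illustration of the vulnerability class discussed above (not an example taken from the paper) is a stack buffer filled from external input without a bounds check; depending on which neighboring stack contents the overflow reaches, the effect falls into one of the corruption categories named here:

#include <cstdio>
#include <cstring>

// Hypothetical vulnerable routine; 'input' is attacker-controlled external data.
void vulnerable(const char *input) {
    char buffer[64];             // local buffer in the current stack frame
    std::strcpy(buffer, input);  // no bounds check: a long input first overwrites
                                 // adjacent local variables (live data corruption)
                                 // and eventually the saved frame pointer and the
                                 // return address (control data corruption), which
                                 // redirects execution on return (control corruption).
    std::printf("%s\n", buffer);
}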
3.2 Safety Guards: Detection of Buffer Overflow Exploits

Fortunately, the program under attack exhibits abnormal symptoms during its execution, specifically during its data and instruction references. For example, a stack smashing attack modifies the return address and often copies its accompanying malicious code into the stack area outside the current stack frame, neither of which is possible during a normal program execution. Furthermore, when it launches the malicious code, the attacked program fetches instructions from the stack region. Except for a few rare cases, such as the implementation of Linux "signal" or gcc "trampolines" functions that require fetching instructions from the current stack frame, it is not usual to fetch instructions from the stack area. Moreover, it is impossible to fetch instructions from the non-local stack area during a normal program execution. Both the abnormal instruction reference and the abnormal data reference can easily be detected by the hardware at runtime with a simple safety check on the referenced address. Figure 5 shows how the processor can protect the system against possible data or control corruptions made by buffer overflow attacks in the processor pipeline. First, during the instruction fetch stage the value of the program counter can be inspected as shown in Figure 5(a). If the program counter points to a location either in the program text region or in the current stack frame, it is assumed to be safe. For other instruction references, we can enforce the following integrity checking to block unsafe instruction references in x86 processors, as shown in (1). We generally call this kind of safety checking during the execution of an instruction a safety guard for the instruction, and call this specific case of instruction reference safety checking an instruction reference safety guard. If NOT ((PC ∈ text region) OR (EBP ...

... |Gi| ∧ γ(Gi, Gj) > |Gj| ... γ(Gi, Gj) = true ...

Definition 8. Decentralized Group: Let N_v^k be all the neighbors of a node v. A decentralized group is obtained if the local group of the node v is combined with the local group of each element in N_v^k.
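Read operationally, Definition 8 amounts to folding the neighbors' local groups into the node's own local group. The following sketch is purely illustrative — the similarity criterion γ of Definition 7 is only partially preserved above, so membership filtering is left out:

#include <set>
#include <vector>

using HostId     = int;
using LocalGroup = std::set<HostId>;

// Definition 8, read operationally: the decentralized group of a node v is its own
// local group combined with the local group of every neighbor in N_v^k. Whether two
// groups may be merged at all is governed by the similarity criterion gamma of
// Definition 7, which is omitted here.
LocalGroup decentralized_group(const LocalGroup &own,
                               const std::vector<LocalGroup> &neighbor_groups) {
    LocalGroup result = own;
    for (const LocalGroup &g : neighbor_groups)
        result.insert(g.begin(), g.end());
    return result;
}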
4 MPDG Application Architecture
In this section the architecture of a MPDG application is presented, which is depicted in figure 2. A MPDG application consists of three essential parts: the middleware, the MPDG unit and the Domain Description unit. The middleware establishes the basis for a MPDG application. It is in charge of detecting other hosts in the mobile environment and provides a mechanism for sending and receiving messages to other hosts, which are within transmission range. The central element of a MPDG application is the MPDG unit. It is made up of a MPDG algorithm entity and a MPDG Description entity. The MPDG algorithms entity distinguishes between Local Grouping and Decentralized Grouping. In order to obtain a decentralized group a two-tiered process is started. At first, each host selects from its neighbor hosts, the subset of hosts which ought to be in the group. This set of hosts is called a Local Group. After each host has determined its local group, these groups are exchanged in a second
[Fig. 2 (sketch): a Domain Description Unit (Domain Definition, Group Definition, Profile Definition) sits on top of the MPDG Unit (MPDG Meta Descriptions with Meta Group and Meta Profile Description, and MPDG Algorithms for Local and Decentralized Grouping), which in turn rests on the middleware (agent platform, JXTA, LEAP, etc.).]
Fig. 2. Components of a MPDG application
step and a unique decentralized group is acquired. As the MPDG unit is totally domain independent, the structure of a profile or a group definition must be defined. This is done by the Meta Profile Description and the Meta Group Description, which are elements of the MPDG Description entity. The Meta Group and Meta Profile Descriptions define what a group or profile definition consists of and specify optional and mandatory elements of a group or profile definition. In both meta descriptions there is a mandatory general part defining the name of the profile or group. The meta profile description encompasses a set of tags for profile entry definitions and specifies the structure of rules that can be declared in the profile definition in the Domain Description Unit. The meta group description comprises abstract tag definitions for the size of a group, how a group is defined, and the properties required to become a group member. On top of the MPDG unit, the Domain Description unit is located. This unit adjusts the MPDG unit to a specific application domain. In the Domain Definition entity, domain-dependent knowledge is described, and in the profile and group definition entities the domain-specific profiles and group properties are defined. The Profile Definition specifies the structure of the profile for the domain, in accordance with the Meta Profile Description. The term group varies from application to application. A group can be a few people with similar properties (the same hobby, profession, age, etc.) but also a set of machines with totally different capabilities. For that reason the Group Definition comprises the characteristics of the group that ought to be found. In this paper, we concentrate on describing the MPDG algorithm entity and refer to the description entities only where unavoidable.
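To make the split between meta description and domain description more concrete, a domain-specific definition could look roughly like the following sketch; all field names are invented for illustration and are not taken from the paper:

#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical instantiation of the Meta Profile Description for one domain.
struct ProfileDefinition {
    std::string name;                                // mandatory general part: profile name
    std::map<std::string, std::string> entries;      // domain-specific profile entries
    std::vector<std::string> rules;                  // rules, in the structure fixed by the meta description
};

// Hypothetical instantiation of the Meta Group Description.
struct GroupDefinition {
    std::string name;                                // mandatory general part: group name
    std::size_t size;                                // size of the group that ought to be found
    std::vector<std::string> memberProperties;       // properties required to become a member
};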
5 MPDG Algorithms
In this section the architecture of the algorithm entity is presented, the used network model is defined and the algorithms for each layer are shown. The algorithm entity has a layered architecture and encompasses algorithms for initiator determination, virtual topology creation, local grouping and decentralized grouping.
Finally, we show some simulation results, in order to indicate how stable the generated groups are.
[Fig. 3 (sketch): the MPDG Algorithm Entity consists of a Grouping Layer (Local Grouping, Decentralized Grouping) on top of a Virtual Topology Layer and an Initiator Determination Layer, with a Communication Monitoring Agent alongside; the whole entity rests on the ad hoc middleware.]
Fig. 3. Architecture of the MPDG Algorithm Entity
5.1 MPDG Algorithm Entity
The most important part of a MPDG application is the Algorithm Entity (AE). The design of this essential entity is shown in figure 3. The basis for the algorithm entity is an ad hoc middleware. The middleware is needed by the AE in order to send and receive messages in the dynamic environment. Furthermore, the middleware has to provide a lookup service to find new communication partners. The lowest layer of the AE is the Initiator Detection Layer, which assigns the initiator role to some hosts. An initiator is needed in order to guarantee that the algorithm of the next layer is not started by every host of the network. This layer does not determine one single initiator for the whole ad hoc network; it is sufficient if the number of initiator nodes is only reduced. The Virtual Topology Layer is responsible for covering the graph G with another topology, e. g. a tree or a logical ring. This virtual topology is necessary to reduce the number of messages that are sent by the mobile hosts. First experiences show that a tree is the most suitable virtual topology, and therefore we will only address the tree approach in this paper. The next layer is the most important one, the Grouping Layer, which accomplishes both the local grouping and the decentralized grouping. Local grouping comprises the selection of hosts which are taken into account for global grouping. Decentralized grouping encompasses the exchange of the local groups with the goal to achieve a well-defined global group (see definitions in section 4). Furthermore, a Communication Monitoring Agent (CMA) exists. For each message that is sent or received, this agent registers the sender, the receiver, and the associated time. With these communication lists it becomes possible to make assumptions about whether a mobile host is still in transmission range or not. Note that the mobility of the nodes is no problem when the absolute values of the velocities and the directions are roughly the same. Thus, if host A is able to communicate five
minutes with host B, these two hosts must have a very low differential velocity. The results provided by the CMA are not totally correct, because if a mobile device is shut down by its user this can be neither recognized nor predicted by the CMA. If we proceed on the assumption that users shut down their devices only rarely, the CMA's results are solid.
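One possible realisation of the CMA bookkeeping is a per-neighbor log of message timestamps, from which a simple "still in range" estimate is derived; the heuristic and its thresholds below are assumptions for illustration, not taken from the paper:

#include <chrono>
#include <map>

using Clock  = std::chrono::steady_clock;
using HostId = int;

class CommunicationMonitor {
public:
    // Record one sent or received message exchanged with neighbor 'peer'.
    void log(HostId peer, Clock::time_point t = Clock::now()) {
        Entry &e = entries_[peer];
        if (e.count == 0) e.first = t;
        e.last = t;
        ++e.count;
    }

    // Heuristic: a peer we have communicated with over a long interval and heard
    // from recently probably has a low differential velocity and is assumed to
    // still be within transmission range.
    bool probablyInRange(HostId peer, Clock::time_point now = Clock::now()) const {
        auto it = entries_.find(peer);
        if (it == entries_.end()) return false;
        const auto knownFor = it->second.last - it->second.first;
        const auto silence  = now - it->second.last;
        return knownFor > std::chrono::seconds(60) && silence < std::chrono::seconds(5);
    }

private:
    struct Entry { Clock::time_point first{}, last{}; unsigned count = 0; };
    std::map<HostId, Entry> entries_;
};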
5.2 Initiator Determination
Before the spanning tree is created, the initiators must be determined who are allowed to send the first creation messages. Without initiators all hosts would start sending messages randomly, with the result that a tree would never be created. We are not in search of one single initiator; we only want to guarantee that not all hosts start the initiation. There are two ways to determine the initiator, an active and a passive one. The active approach starts an election algorithm (see Malpani et al. [12]). These algorithms are rather complex, i. e. a lot of messages are sent, which is very time consuming. They guarantee that only one initiator is elected and, in case of link failures, that another host takes the initiator role. Such a procedure is not appropriate and not necessary for MPDG, because the initiator is only needed once and it matters little if more than one initiator is present. Therefore, we decided for the passive determination method, which is similar to Gafni and Bertsekas [8]. When applying the passive method, no message is sent in the beginning to determine an initiator. Since each host has an ID and knows all neighbor IDs, we only allow a host to become an initiator if its ID is larger than all IDs of its neighbors. The initiator is in charge of starting the virtual topology algorithm described in the next section. Unfortunately, fairness of the initiator determination process is not guaranteed. Thus, it could happen that a mobile device never takes the initiator role. At this time it cannot be said whether it is an advantage to be the initiator or not.
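The passive rule reduces to a single comparison against the neighbor list (a sketch; the names are illustrative):

#include <algorithm>
#include <vector>

// A host assumes the initiator role only if its own ID is larger than every neighbor ID.
bool isInitiator(int ownId, const std::vector<int> &neighborIds) {
    return std::all_of(neighborIds.begin(), neighborIds.end(),
                       [ownId](int id) { return ownId > id; });
}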
5.3 Virtual Topology Creation
Having confined the number of initiators, graph G0 can be covered with a virtual topology (VT). Simulations showed that a spanning tree is a promising approach for a VT and therefore we will only describe the spanning tree VT in this paper. A spanning tree spT(G) is a connected, acyclic subgraph containing all the vertices of the graph G. Graph theory (e. g. Diestel [5]) guarantees that for every G a spT(G) exists. Figure 1b shows a graph with one possible spanning tree.

The Algorithm. Each host keeps a spanning tree sender list (STSL). The STSL contains the subset of a host's neighbors belonging to the spanning tree. The initiator, determined in the previous section, sends a create-message furnished with its ID to all its neighbors. If a neighbor receives a create-message for the first time, this message is forwarded to all neighbors except for the sender of the create-message. The host adds each receiver to the STSL. If a host receives a message
from a host which is already in the STSL, it is removed from the list. The pseudocode notation of this algorithm is shown in figure 4.

Receipt of a CREATEMESSAGE from host p:
  if( not sent )
    root = p; STSL += p;
    if( ++visitedHops < HOPS )
      send CREATEMESSAGE to NEIGHBORS;
      sent = true;
    fi;
  fi;
  else STSL -= p;

Start (initiator node):
  initiator := true;
  send CREATEMESSAGE to NEIGHBORS;

Initialization for a node:
  STSL = null; initiator = false; sent = false; root = null; visitedHops = 0;

Fig. 4. PseudoCode of the spanning tree creation algorithm

To identify a tree, the ID of the initiator is always added to each message. It may occur that a host already belongs to another tree. Under these circumstances the message is not forwarded any more and the corresponding host belongs to two (more are also possible) trees. In order to limit the tree size, a hop counter ch is enclosed in each message and is decremented each time the message is forwarded. If the counter is equal to zero, the forwarding process stops. Note that with an increasing ch the time for building a group also increases, because ch is equivalent to half the diameter dG of the graph G (the diameter d is the maximum of the distances between all possible pairs of vertices of a graph G). By using a hop counter it may occur that a single host does not belong to any spanning tree, because all trees around are large enough, i. e. ch is reached. The affiliation of that host is not possible, because tree nodes do not send messages in case the hop counter's value is zero. When time elapses and a node notices that it still does not belong to a tree, an initiator determination is started by this host. Two cases must be distinguished. In the first one the host is surrounded only by tree nodes; in the other case a group of isolated hosts exists. In both cases, the isolated host contacts all its neighbors by sending an init-message, and if a neighbor node already belongs to a tree it answers with a join-message. If no non-tree node is around, the single node chooses arbitrarily one of the neighbors and joins the tree by sending a join-agree-message; to the other hosts a join-refuse-message is sent. If another isolated host gets the init-message, an init-agree-message is returned and the host sending the init-message becomes the initiator and starts creating a new tree.

Evaluation. The main reason for creating a virtual spanning tree upon the given topology is the reduction of messages needed to reach an agreement. Let n be the number of vertices and e be the number of edges in a graph G. Then, there are
2e − n + 1 messages necessary to create the spanning tree. If no tree were built and a host received a message, this message would have to be forwarded to all its neighbors, which again results in 2e − n + 1 messages for distributing the message through the graph. Overlaying a graph with a virtual spanning tree, the number of forwarded messages is reduced to n − 1. Determining the factor at which a tree becomes more profitable leads us to A = (2e − n + 1) / (2(e − n + 1)). If on the average e = 2n, the amortization A results in (3n + 1)/(2n + 2), which converges to 1.5 for the amortization factor A with increasing n. In the equation above, the tree maintenance costs are not taken into account. If a new host comes into transmission range or another host goes away, this is recognized by the ad hoc agent platform and the host is added to or removed from the neighbor list. If the neighbor list has changed, the tree has to be updated. A vanishing host is worse than an appearing one and could have more negative effects on the tree. If a host is added to the tree, attention should be paid to cycles, which could appear. Therefore, the host is added only once into the STSL, to guarantee that no cycle is formed. If a host leaves the ad hoc network, all its neighbors are affected and the vanished host must be deleted from each neighbor host's STSL.
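A quick numeric check of the amortization factor, using the formula above (plain arithmetic; only the average density e = 2n from the text is assumed):

#include <cstdio>

int main() {
    // Tree construction costs 2e - n + 1 messages; each broadcast over the tree
    // saves (2e - n + 1) - (n - 1) = 2(e - n + 1) messages compared to flooding.
    for (int n : {10, 100, 1000}) {
        const int e = 2 * n;                                    // average density assumed in the text
        const double A = (2.0 * e - n + 1) / (2.0 * (e - n + 1));
        std::printf("n = %4d  A = %.3f\n", n, A);               // approaches 1.5 with growing n
    }
    return 0;
}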
5.4 Local Grouping – Optimizing the Local View
In this section algorithms are presented that determine the subset of neighbor hosts which initially belong to a host's group, called a local group. In order to guarantee that groups are not formed arbitrarily but bring a benefit to their members, a Group Profit Function is defined.

Definition 9. Group Profit Function fGP: Let P(V) be the power set of the nodes of a graph G(V, E) and G a local group. The group profit function fGP(G): P(V) → R assigns a value to a group G ∈ P(V). This value reflects the benefit which emerges from group formation. If a new node vi is added to the group G, fGP must increase: fGP(G ∪ vi) > fGP(G), in order to justify the addition of vi.

The algorithm adds exactly one new local group member in each step. Initially, a host scans all known profiles and adds the one with the smallest distance to it, provided the group of two points brings a greater benefit. The group now has two members. Only one point may be added per step, because otherwise the shape of a group gets beyond control: a host A can add another host B in exactly the opposite direction to a host D that is added by host C. If more than one point should be added, coordination is needed. The points that already belong to the group form a broken line, because at all times only one point is added. In order to sustain this kind of line, we only allow the two endpoints to add new points. To coordinate these two points, the endpoints of the line may add new points alternately. If the right end has added a new point in step n, in step (n+1) it is the left side's turn to add a point. The alternating procedure stops when one side is not able to find a new point. In such a case, only the other side continues to add points, until no new point is found. If a host is allowed to add a point and there is also one to add, it is not added automatically: the new point must bring a benefit, according to Definition 9.
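What such a profit function looks like is domain specific. For the destination-point example used below, one plausible — purely illustrative — choice rewards group size and penalises the spread of the members' destinations:

#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Illustrative group profit: reward each member, penalise the mean pairwise distance
// of the destinations. Adding a nearby point then increases the profit, adding a
// far-away point decreases it, in the spirit of Definition 9.
double groupProfit(const std::vector<Point> &group) {
    if (group.size() < 2) return static_cast<double>(group.size());
    double dist = 0.0;
    std::size_t pairs = 0;
    for (std::size_t i = 0; i < group.size(); ++i)
        for (std::size_t j = i + 1; j < group.size(); ++j) {
            dist += std::hypot(group[i].x - group[j].x, group[i].y - group[j].y);
            ++pairs;
        }
    const double weight = 0.1;   // assumed trade-off parameter between size and spread
    return static_cast<double>(group.size()) - weight * dist / static_cast<double>(pairs);
}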
point. The alternating procedure stops, when one side is not able to find a new point. In such a case, only the other side continues to add points, until no new point is found. If a host is allowed to add a point and there is also one to add, it is not added automatically. The new point must bring a benefit, according to definition 9. y
y y
x
x x
a )
c )
b )
y y
y
x x
x d )
e )
f)
Fig. 5. The local grouping algorithm
Figure 5 illustrates the process of finding a local group, and figure 6 contains the pseudocode of the algorithm. The points in the coordinate system in figure 5 represent the destinations of users, and the dark black point is the point for which the local view is to be obtained. In image a) the grouping starts. In b) and d) new points are added on the right side of the line, and in c) and d) on the left side. In e) the new point is also added at the left side, although the right side would have been next in turn; but there are no points within range, so the left side has to find one. The last picture f) represents the complete group.
5.5 Decentralized Grouping – Achieving the Global View
In the previous section each host has identified the neighbor hosts that belong to its local group gi. These local groups must be exchanged in order to achieve a global group. The algorithm presupposes no special initiator role. Each host may start the algorithm, and it can even be initiated by more than one host contemporaneously. The core of the used algorithm is an echo algorithm, see [4]. Initially, an arbitrary host sends an EXPLORER-message, with its local-group information enclosed, to its neighbors which are elements of the spanning tree (the STSL, see section 5.3). If a message arrives, the enclosed local group is taken and merged with the current local view of the host to get a new local view.
firstReferenceNode := currentPoint;
secondReferenceNode := currentPoint;
nextPoint := null;
localGroup := currentPoint;
currentProfit = profit_function( localGroup );
while((nextPoint := getNearestProfilePoint(firstReferenceNode)) != null)
  futureProfit := profit_function( localGroup + nextPoint );
  if( futureProfit > currentProfit ) then
    localGroup += nextPoint;
    neighbors -= nextPoint;
    currentProfit = futureProfit;
    firstReferenceNode := secondReferenceNode;
    secondReferenceNode := nextPoint;
  fi;
elihw;

Fig. 6. PseudoCode of the Local Grouping Algorithm
The merging function tries to maximize the group profit function, i. e. if two groups are merged, those members from each group become members of the new group which together draw more profit than each single group. The new local view is forwarded to all neighbors except for the sender of the received message. If a node has no other outgoing edges and the algorithm has not terminated, the message is sent back to the sender. If more than one host initiates the algorithm and a host receives several EXPLORER-messages, then only the EXPLORER-messages from the host with the higher ID are forwarded (message extinction). The pseudocode of the group distribution is shown in figure 7. But even if the algorithm in figure 7 has terminated, it is still not guaranteed that each node has the same global view. In the worst case only the initiator node has a global view. For that reason, the echo algorithm has to be executed once more. In order to save messages, in the second run the echo messages need not be sent, because no further information gain is achieved. A critical point is to determine the termination of the grouping process. The algorithm terminates in at most 2 · dG = 4 · ch steps, and because the echo messages in the second run are not sent, this is reduced to 3 · ch. If a host receives this amount of messages, the grouping is finished. But, due to the mobility, nodes come and go. Currently, the algorithm stops if a node gets the same local group information from all its neighbors. This local group information is supposed to be the global group information. To make sure that all group members have the same global view, the corresponding hosts check this with additional confirmation-messages. But currently this part is considered to be optional.
Start (only if not ENGAGED):
  initiator := true;
  ENGAGED := true;
  N := 0;
  localGroup = getLocalGroup();
  EXPLORER.add( localGroup );
  send EXPLORER to STSL;

An EXPLORER message from host p is received:
  if ( not ENGAGED ) then
    ENGAGED := true;
    N := 0;
    PRED := p;
    localGroup := getLocalGroup();
    localGroup := merge(localGroup, EXPLORER.getLocalGroup());
    EXPLORER.setLocalGroup(localGroup);
    send EXPLORER to STSL - PRED;
  fi;
  N := N + 1;
  if ( N = |STSL| ) then
    ENGAGED := false;
    if( initiator ) then finish;
    else
      localGroup := merge(localGroup, EXPLORER.getLocalGroup());
      ECHO.setLocalGroup(localGroup);
      send ECHO to PRED;
    fi;
  fi;

Receipt of an ECHO message:
  N := N + 1;
  localGroup := merge( localGroup, ECHO.getLocalGroup() );
  if N = |STSL| then
    ENGAGED := false;
    if( initiator ) then finish;
    else
      ECHO.setLocalGroup( localGroup );
      send ECHO to PRED;
    fi;
  fi;
Fig. 7. PseudoCode of the Echo Algorithm including the group merging process
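The paper does not spell out merge() beyond the requirement that it maximizes the group profit function; one possible greedy realisation — only a sketch under that assumption — starts from the more profitable of the two local views and absorbs members of the other one while each addition still raises the profit:

#include <set>

using HostId = int;
using Group  = std::set<HostId>;

// Placeholder profit function (cf. Definition 9); here larger groups simply count as better.
inline double groupProfit(const Group &g) { return static_cast<double>(g.size()); }

// Greedy merge of two local views.
Group merge(const Group &a, const Group &b) {
    const Group &base  = groupProfit(a) >= groupProfit(b) ? a : b;
    const Group &other = (&base == &a) ? b : a;
    Group result = base;
    for (HostId h : other) {
        Group candidate = result;
        candidate.insert(h);
        if (groupProfit(candidate) > groupProfit(result))   // keep only profitable additions
            result = candidate;
    }
    return result;
}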
5.6 Group Stability
In this subsection the stability of the groups is evaluated. By stability we mean the time during which a group does not change, i. e. no other host is added and no group member leaves the group. This stability time is very important for our algorithms, because the group formation must be finished within this time. We developed a simulation tool to test how long the groups are stable. The velocity of the mobile hosts is uniformly distributed in the interval [0; 5.2] km/h; the average velocity of the mobile hosts is 2.6 km/h. This speed seems to be the prevailing speed in pedestrian areas. Some people do not walk at all (they look into shop windows etc.), other people hurry from one shop to the other and therefore walk faster. Moreover, we assume a radio transmission radius of 50 meters. The left picture in figure 8 shows this dependency. The picture shows that the time a group is stable decreases rapidly. A group with 2 people exists on average for 30 seconds, whereas a group with 5 people is only stable for 9 seconds. Nevertheless, a group that is stable for 9 seconds is still sufficient for our algorithms. The stated times for groups are the times for the worst case, i. e. no group member leaves the group and no other joins the group. But for the algorithms it does not matter if a group member leaves during the execution of the algorithm; the only problem is that this person cannot be informed about its potential group membership. In case a person joins the group, the profile information of this point must reach every other point in the group, which of course must also occur in the time the group is stable. Unfortunately, we do not have simulation results for groups with 10 to 20 members.
The group stability is furthermore affected by the speed of the mobile hosts. The faster the mobile hosts are, the more rapid they cross the communication range. In our simulation environment we analyzed the stability of the groups in dependency of the speed, which is shown in the right picture of figure 8. In this simulation, we investigated how long a group of three people is stable, when different velocities are prevailing. For the simulation again the chain algorithm is used and the transmission radius is 50 meters. The right picture of figure 8 shows that the group is stable for more than 40 seconds of the members have a speed of 1 km/h. The situation changes when the speed increases. If all members walk fast (speed 5 km/h) the group is only stable for approximately 10 seconds. Up to now we do not know why at first it decreases rapidly and with a speed of 2.8 km/h the decrease is stemmed.
6
Conclusion
In this paper we presented a kind of ad hoc applications called Mobile Profile based Distributed Grouping (MPDG). Each mobile host is endowed with its user’s profile and while the user walks around clusters are to be found, which are composed of hosts with a similar profile. The architecture of a MPDG application is shown, which basically is made up of a MPDG description entity, that makes the MPDG unit domain independent and a algorithm entity, which is responsible for local grouping and distributed grouping. At first, each host has to find its local group, which consists of all neighbor hosts with similar profiles. Finally, the local groups are exchanged and a global group is achieved. Simulation results show that the groups are stable long enough to run the algorithms. We simulated a first MPDG applications, which is a taxi-sharing scenario, where potential passenger with similar destinations form a group [14]. For the future we will apply the MPDG idea to different domains, e. g. the manufacturing or lifestyle area.
Towards a General Approach to Mobile Profile Based Distributed Grouping
121
References 1. N. Badache, M. Hurfun, and R. Macedo. Solving the consensus problem in a mobile environment. Technical report, IRISA, Rennes, 1997. 2. S. Banerjee and S. Khuller. A clustering scheme for hierarchical control in multihop wireless networks. Technical report, University of Maryland, 2000. 3. S. Basagni. Distributed clustering for ad hoc networks. In Proceedings of the IEEE International Symposium on Parallel Architectures, Algorithms, and Networks (ISPAN), Perth., pages 310–315, 1999. 4. E. J. H. Chang. Echo algorithms: Depth parallel operations on general graphs. IEEE Transactions on Software Engineering, SE-8(4):391–401, July 1982. 5. R. Diestel. Graph Theory, volume 173 of Graduate Texts in Mathematics. SpringerVerlag, New York, 2nd edition, February 2000. 6. D. Fasulo. An analysis of recent work on clustering algorithms. Technical report, University of Washington, 1999. 7. C. Fraley and A. E. Raftery. How many clusters ? Which clustering method ? Answers via model-based cluster analysis. The Computer Journal, 41(8), 1998. 8. E. M. Gafni and D. P. Bertsekas. Distributed algorithms for generating loopfree routes in networks with frequently changing topology. IEEE Transactions on Communications, COM-29(1):11–18, January 1981. 9. K. P. Hatzis, G. P. Pentaris, P. G. Spirakis, V. T. Tampakas, and R. B. Tan. Fundamental control algorithms in mobile networks. In ACM Symposium on Parallel Algorithms and Architectures, pages 251–260, 1999. 10. E. Kolatch. Clustering algorithms for spatial databases: A survey. Technical report, Department of Computer Science, University of Maryland, College Park, 2001. 11. R. Maitra. Clustering massive datasets. In statistical computing at the 1998 joint statistical meetings., 1998. 12. N. Malpani, J. Welch, and N. Vaidya. Leader election algorithms for mobile ad hoc networks. In Proceedings of the Fourth International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, 2000. 13. G.-C. Roman, Q. Huang, and A. Hazemi. Consistent group membership in ad hoc networks. In International Conference on Software Engineering, 2001. 14. C. Seitz and M. Berger. Towards an approach for mobile profile based distributed clustering. In Proceedings of the International Conference on Parallel and Distributed Computing (Euro-Par), Klagenfurt, Austria, August 2003.
A Dynamic Scheduling and Placement Algorithm for Reconfigurable Hardware Ali Ahmadinia, Christophe Bobda, and Jürgen Teich Department of Computer Science 12, Hardware-Software-Co-Design, University of Erlangen-Nuremberg, Am Weichselgarten 3, 91058 Erlangen, Germany {ahmadinia, bobda, teich}@cs.fau.de http://www12.informatik.uni-erlangen.de
Abstract. Recent generations of FPGAs allow run-time partial reconfiguration. To increase the efficacy of reconfigurable computing, multitasking on FPGAs is proposed. One of the challenging problems in multitasking systems is online template placement. In this paper, we describe how existing algorithms work and propose a new free space manager, which is one main part of the placement algorithm. The decision where to place a new module depends on its finishing-time mobility; therefore the proposed algorithm is a combination of scheduling and placement. The simulation results show better performance than existing methods.
1 Introduction
A reconfigurable computing system is usually composed of a host processor and a reconfigurable device such as an SRAM-based Field-Programmable Gate Array (FPGA) [4]. The host processor can map a code as an executable circuit on the FPGA, which is denoted as a hardware task. With the ability of partial reconfiguration in the new generation of FPGAs, multiple tasks can be configured separately and executed simultaneously. This multitasking and partial reconfiguration of FPGAs increases the device utilization, but it also necessitates well-thought-out dynamic task placement and scheduling algorithms [5]. Such algorithms strive to use the device area as efficiently as possible as well as to reduce total task configuration and running time. However, the existing algorithms do not achieve high performance [1]. Such efficient methods have been developed and perfected in a way such that the hardware tasks are placed on the reconfigurable hardware in a fast manner and are furthermore tightly packed to use the available area efficiently. However, most such algorithms are static in nature in the sense that the same placement and scheduling rules apply to every single arriving task and that the entire reconfigurable area is available for the placement of every task. The scope of the present paper hence consists of developing a dynamic task scheduling and placement method on a device divided into slots. More precisely, the FPGA is divided into separate slots, then each
of these slots will accommodate only those tasks that end their execution at “nearly the same time”. This 1-D FPGA partitioning as well as the similarity of end times are two parameters that are dynamically varied during runtime. These parameters must then be controlled by an appropriate function in order to reduce the total execution time and the number of rejected tasks. Finally, relevant statistics are collected and the performance of this newly developed algorithm is then compared experimentally to that of existing ones. In the subsequent sections previously existing methods and algorithms will be briefly described, the motivation behind the proposed scheduling and 1-D partitioning approach will be explained and the developed algorithm will be described in detail. Finally, comparative results will be presented and analyzed.
2 Online Placement
The problem of packing modules on a chip is similar to the well-studied problem of two-dimensional bin-packing, which is an extension of classical one-dimensional bin-packing [7][8]. The one-dimensional bin-packing problem is similar to placing modules in rows of configurable logic, as done in the standard cell architecture. The two-dimensional bin-packing problem can be used when the operations to be loaded on the modules are rectangles which can be placed anywhere on the chip [1]. In the context of online task placement on a reconfigurable device, the nature of the operations and hence the flow of the program are not known in advance. The configuration of hardware tasks on the FPGA must be done on the fly. To describe the placement problem clearly, we should define our task model:
Definition 1 (Task Characteristics). Given a set of tasks T = {t_1, t_2, ..., t_r} such that
∀ t_k ∈ T: t_k = (a_k, e_k, d_k, w_k, h_k), where
a_k = arrival time of task t_k,
e_k = execution time of task t_k,
d_k = deadline time of task t_k,
w_k = width of task t_k,
h_k = height of task t_k.
This set of tasks must be mapped to a fixed-size FPGA, according to the time and area constraints of the tasks. In fact, each task will be mapped to a module which is a partial bitstream. This partial bitstream occupies a determined amount of logic blocks on the device and it has a rectangular shape. Placement algorithms are therefore developed that must determine the manner in which each arriving task is configured. These algorithms must be perfected to, on the one hand, use the available free placement areas efficiently and, on the other hand, execute in a fast manner. However, there most often exists a trade-off between these two requirements, as fast placement
algorithms are usually low-quality ones and those that use the chip area very efficiently compute slowly. In an online scenario, hardware tasks arrive, are placed on the hardware, and end execution at any possible time. This situation leads to a complex space allocation on the FPGA. In order to determine where new tasks can be placed, the state of the FPGA, i.e. the free area, must be managed. This free space management aims to reduce the number of possible locations for newly arriving tasks and to increase placement efficiency as well. Two such free space management algorithms have been developed in [1] and will be compared to our approach here. Free space management is the first main part of online placement algorithms. The second part involves fitting the new tasks inside the empty rectangles. Once the free area is managed and the possible locations for the placement of the new task are determined, a choice has to be made at which one of these locations the task will be configured. Multiple such fitting heuristics have been developed in [1]: First Fit, Best Fit and Bottom Left.
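As a concrete illustration of the task model from Definition 1, the following C sketch shows one possible way to represent a task record; the structure and helper names are illustrative assumptions and not taken from the paper.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical C rendering of the task tuple t_k = (a_k, e_k, d_k, w_k, h_k). */
typedef struct {
    int arrival;    /* a_k: arrival time          */
    int exec_time;  /* e_k: execution time        */
    int deadline;   /* d_k: deadline time         */
    int width;      /* w_k: module width  in CLBs */
    int height;     /* h_k: module height in CLBs */
} Task;

/* A task is only placeable at all if its rectangle fits the device. */
static bool fits_device(const Task *t, int dev_w, int dev_h) {
    return t->width <= dev_w && t->height <= dev_h;
}

int main(void) {
    Task t = { 5, 12, 30, 20, 15 };                 /* a, e, d, w, h */
    printf("fits 80x120 device: %d\n", fits_device(&t, 80, 120));
    return 0;
}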
2.1 Free Space Management
The KAMER (Keeping All Maximum Empty Rectangles) method has the highest quality of placement compared to other ones [1]. It is therefore used as the baseline for comparison against other algorithms, in terms of how much placement quality is lost for the amount of speed-up that is gained. The KAMER algorithm is hence described here in order to understand why it has such a high placement quality and also why it requires high computation times. In order to decide where a newly arriving task should be placed, the KAMER algorithm partitions the empty area on the reconfigurable hardware by keeping a list of empty rectangles. Moreover, these are Maximal Empty Rectangles (MERs), meaning that they are not contained within any other empty rectangle. The arriving task is then placed at the bottom left corner of one of the existing MERs; the choice of the MER depends on the fitting heuristic that is being used. Figure 1 illustrates the case where the empty free space is partitioned into four MERs; their bottom left corners are denoted by an X. An alternative to the KAMER free space manager is a method that keeps non-overlapping free rectangles. These empty rectangles are not necessarily maximal and hence some placement quality is lost. The advantage, though, is that this algorithm executes faster and is more easily implemented. An example of a non-overlapping partitioning of the empty region is shown in Figure 2. It should be self-evident that in this case of free space management, the empty area can be partitioned in more than one way. Different heuristics can be used to choose between the different possible non-overlapping rectangles.
Fig. 1. A free space partition into maximal empty rectangles.
Fig. 2. A free space partition into non-overlapping empty rectangles.
2.2 Quality
The KAMER placement algorithm is indeed the highest-quality method to partition the free space, since the rectangles kept in the list and checked for placing the arriving task are maximal and therefore offer the largest possible area where new tasks can be accommodated [2]. Keeping all maximum empty rectangles clearly avoids a
high fragmentation of the empty space that can lead to the situation where a new task cannot be placed even though there is sufficient free area available. The reason for the quality loss in the method keeping non-overlapping rectangles is that each empty rectangle is contained within one MER. Accordingly, if a task can be placed inside one of these empty rectangles, it can also be placed inside the MER that contains it. The reverse is obviously not true. Therefore, this second method for free space management results in a higher fragmentation of the free space and some placement quality is lost.
2.3 Complexity
The KAMER algorithm has to be executed every time a new task is placed on the FPGA as well as every time a task ends its execution and is removed. More precisely, at the moment of a new task's placement, all those MERs that overlap with it must be divided into smaller MERs, and at the moment of a task's removal, the overlapping MERs must be merged into larger ones. As an example, Figure 3 illustrates the partitioning of the free space into five distinct MERs whose bottom left corners are identified by A, B, C, D and E. As the newly arriving task, shown in shaded color, is placed inside MER D, it overlaps with 4 of the 5 existing MERs: B, C, D and E. Each of the latter must then be split into smaller ones. Figure 4 illustrates how MER B is divided into 4 smaller maximal empty rectangles. In the same manner, MER D is split into 2, and MERs C and E are both split into 3 smaller maximum empty rectangles. In this case, the total number of MERs after insertion of the new task increased from 5 to 13. This hence indicates that, in the KAMER algorithm, many MERs must be checked for overlap with the new task and furthermore many of them must be divided into smaller MERs. In a similar fashion, after the deletion of a task, a considerable number of MERs must be merged into a few larger ones. Thus, in addition to the increased running time, there is a quadratic space requirement for keeping the empty rectangles in a list; this method has to manage O(n²) rectangles for n placed tasks. It is obvious that the KAMER algorithm, although offering high-quality placement, necessitates an important amount of computation and memory, and hence slows down the overall program operation. Consequently, one of the aims of our integrated scheduling and placement algorithm is to execute faster than KAMER, while maintaining a certain quality of placement as well. In the second free space management method, since the empty rectangles are non-overlapping, only the rectangle where the new task is placed has to be split into two smaller ones. Therefore, we have O(n) complexity; the number of empty rectangles considered for placing each hardware task is linear in the number of running tasks on the FPGA.
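To make the O(n) behaviour of the non-overlapping method concrete, the following C sketch shows one possible way to split a free rectangle into two disjoint rectangles after a task has been placed at its bottom-left corner. The split heuristic and all names are illustrative assumptions, not the exact procedure evaluated in [1].

#include <stdio.h>

/* Axis-aligned rectangle on the CLB grid (x, y = bottom-left corner). */
typedef struct { int x, y, w, h; } Rect;

/*
 * After placing a task_w x task_h module at the bottom-left corner of
 * free_r, the remaining free area is cut into at most two disjoint
 * rectangles.  Cutting along the longer leftover edge first is just one
 * possible heuristic.
 */
static int split_free_rect(Rect free_r, int task_w, int task_h, Rect out[2]) {
    int n = 0;
    int right_w = free_r.w - task_w;   /* strip to the right of the task */
    int top_h   = free_r.h - task_h;   /* strip above the task           */

    if (right_w >= top_h) {            /* vertical cut first */
        if (right_w > 0) out[n++] = (Rect){ free_r.x + task_w, free_r.y, right_w, free_r.h };
        if (top_h  > 0)  out[n++] = (Rect){ free_r.x, free_r.y + task_h, task_w, top_h };
    } else {                           /* horizontal cut first */
        if (top_h  > 0)  out[n++] = (Rect){ free_r.x, free_r.y + task_h, free_r.w, top_h };
        if (right_w > 0) out[n++] = (Rect){ free_r.x + task_w, free_r.y, right_w, task_h };
    }
    return n;                          /* number of new free rectangles (0..2) */
}

int main(void) {
    Rect out[2];
    int n = split_free_rect((Rect){ 0, 0, 40, 30 }, 10, 20, out);
    for (int i = 0; i < n; i++)
        printf("free rect: x=%d y=%d w=%d h=%d\n", out[i].x, out[i].y, out[i].w, out[i].h);
    return 0;
}

Since only the rectangle that receives the new task is split, a placement step touches a single list entry, which is where the linear memory and time behaviour comes from.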
Fig. 3. Placement of an arriving task at the bottom left corner of one of the MERs.
Fig. 4. Changes which are needed in MER B after placing the new module in the bottom left corner of MER D.
A Dynamic Scheduling and Placement Algorithm for Reconfigurable Hardware
131
3 An Integrated Scheduling and Placement Algorithm
The aim of this work is to develop an integrated task scheduling and placement algorithm including a 1-D partitioning of the reconfigurable array. In fact, a new data structure for the management of free space for online placement is developed. Accordingly, the FPGA is divided into slots and the arriving tasks are placed inside one of the slots depending on their execution end time. Moreover, the width of the slots is varied during runtime in order to improve the overall quality of placement. There are two main parameters in this algorithm: the first one determines the closeness of end times for tasks to be put in one slot, and the other one defines the width of the area partitioning. A proper function has to be implemented to govern each of these parameters in order to maximize the quality of task placement. The implemented algorithm will be described in detail and shown to require less memory and computation time than its KAMER counterpart.
3.1 A New Free Space Manager
Unlike the KAMER algorithm, which has a quadratic memory requirement, our placement algorithm requires linear memory. Instead of maintaining a list of empty rectangles where the arriving task can be placed, we maintain exactly two horizontal lines, i.e. one above and one below the placed running tasks, as depicted in Figure 5. For storing the information of each horizontal line, we use a separate linked list. In online placement, all of the fitting strategies proposed so far [1][2] place a newly arriving task adjacent to the already placed modules, so as to minimize fragmentation. Therefore these two horizontal lines can be determined. As we place new tasks above horizontal line_1 or below horizontal line_2, there should not be any considerable free space between these two lines, in order to use the area as efficiently as possible. For example, as shown in figure 5, if module 6 is removed earlier than modules 2, 3, 10 and 11, the area occupied by module 6 will be wasted. To avoid these cases, we suggest placing those tasks beside each other that will finish their execution nearly simultaneously. This task clustering and scheduling will be detailed in the next section. The placement algorithm is implemented in such a way that arriving tasks are placed above the currently running tasks as long as there is free space. Once there are no empty spaces found above the running tasks, the new ones start to be placed below them, and so on. As already mentioned, this implementation requires linear memory. Furthermore, the addition and deletion of tasks involves updating and searching through lists, which is a much faster operation than looking for, merging and dividing maximum empty rectangles. Also, placing the arriving tasks alternately above and then below the running tasks ensures an efficient use of the available area.
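A strongly simplified sketch of this idea is given below, assuming that only the upper contour (horizontal line_1) is modelled and that it is stored as one height value per column instead of the linked list described above; function names and device dimensions are illustrative.

#include <stdio.h>

#define DEV_W 80          /* device width  in CLB columns */
#define DEV_H 120         /* device height in CLB rows    */

static int line1[DEV_W];  /* current height of the placed area per column */

/* Try to place a w x h module on top of line_1; returns the x position or -1. */
static int place_above(int w, int h) {
    for (int x = 0; x + w <= DEV_W; x++) {
        int top = 0;                           /* highest column under the module */
        for (int i = x; i < x + w; i++)
            if (line1[i] > top) top = line1[i];
        if (top + h <= DEV_H) {                /* module fits above the contour   */
            for (int i = x; i < x + w; i++)
                line1[i] = top + h;            /* raise the contour               */
            return x;
        }
    }
    return -1;                                 /* no space above line_1           */
}

int main(void) {
    printf("module A at x=%d\n", place_above(20, 30));
    printf("module B at x=%d\n", place_above(50, 40));
    return 0;
}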
Fig. 5. Using horizontal lines to manage free space.
3.2 Task Scheduling
As mentioned before, we need a task clustering to have less fragmentation between the two horizontal lines. To explain this clustering, we first define the required specifications of real-time tasks. Each arriving task t_k ∈ T is, amongst other parameters, defined by its arrival time a_k and its execution time e_k (definition 1). Hence, if a particular task can be placed on the chip at the time of its arrival, it will end its execution at time a_k + e_k. Each task also has a deadline time d_k assigned to it, which is greater than a_k + e_k and sets a limit on how long the task can reside inside the running process. Next we define a mobility interval for each task according to end times. The mobility interval is defined as mobility = [ASAP_end, ALAP_end], where ASAP_end = a_k + e_k is the as-soon-as-possible task end time and ALAP_end = d_k is the as-late-as-possible task end time. Therefore, each task, once placed on the array, will finish its execution at a time belonging to its mobility interval. For clustering tasks, we first define clusters:
Definition 2 (Clusters or Slots). An FPGA consists of a two-dimensional CLB (Configurable Logic Block) array with m rows and n columns. The columns are partitioned into contiguous regions, where each region is called a cluster or slot.
The number of slots can be chosen differently, but in our case, according to the FPGA size and the size of the tasks, we have divided the area into three slots. Each task's mobility interval is then used to determine in which cluster the task should be placed. Accordingly, we compute successive end time intervals denoted end_time1,
end_time2, end_time3, ... . The details of their computation will be explained in the pseudo code of the scheduling. If a task's mobility interval overlaps with the end_time1 interval, as shown in Figure 6, then this task will be placed inside the first slot; if it overlaps with end_time2 it will be placed inside the second slot, with end_time3 inside the third slot, with end_time4 inside the first slot, and so on. This situation is illustrated in Figure 7. The motivation behind this clustering method, as can be observed in Figure 7, is to have all tasks with similar end times placed next to each other (the number in each module shows which end_time interval the module belongs to). In this way, as tasks belonging to the same end time interval end their execution, a large empty space will be created at a precise location. This newly created empty space will then be able to accommodate future, perhaps larger tasks.
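A minimal sketch of this slot selection is given below, assuming the end_time intervals have already been computed; the interval values and names used here are made up for illustration.

#include <stdio.h>

#define N_SLOTS 3                          /* number of slots used in the paper */

typedef struct { int lo, hi; } Interval;   /* closed interval [lo, hi] */

/* Two intervals overlap iff neither lies completely before the other. */
static int overlaps(Interval a, Interval b) {
    return a.lo <= b.hi && b.lo <= a.hi;
}

/*
 * The task's mobility interval [ASAP_end, ALAP_end] is tested against the
 * successive end_time intervals; interval index m is mapped to slot
 * (m mod N_SLOTS), as described in the text.
 */
static int choose_slot(Interval mobility, const Interval *end_time, int n_intervals) {
    for (int m = 0; m < n_intervals; m++)
        if (overlaps(mobility, end_time[m]))
            return m % N_SLOTS;
    return -1;                              /* no matching interval: task must wait */
}

int main(void) {
    Interval end_time[4] = { {0, 10}, {11, 20}, {21, 30}, {31, 40} };
    Interval mobility = { 17, 25 };         /* ASAP_end = a_k + e_k, ALAP_end = d_k */
    printf("task goes to slot %d\n", choose_slot(mobility, end_time, 4));
    return 0;
}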
Fig. 6. The tasks with mobility intervals overlapping with the same end time interval.
3.3 Optimizing Scheduling
In order to optimize the quality of task placement, or in other words, to reduce the number of rejected tasks, benchmarking had to be performed on how big the successive end time intervals should be. As shown in the pseudo code for computing the end_time intervals, we divide the Total_Interval into three equal ranges. Moreover, an additional function had to be implemented to vary the ratio of the end_time intervals to the Total_Interval during runtime. The idea is that, when an excessive number of tasks are being placed inside a single cluster, the length of the corresponding end time intervals should be reduced so that tasks can continue being placed inside the remaining two clusters. Consequently, we define an input rate for each of the clusters as follows:
Fig. 7. Placement of tasks inside clusters according to their mobility intervals.
Input_rate = (# of Tasks) / T    (1)
where # of Tasks is the number of tasks placed inside the corresponding cluster during the period T. Now, as the input rate of one of the clusters becomes higher than some predetermined threshold value, the length of the corresponding end time intervals is reduced and kept at that value during some time t. This process was simulated and repeated for a vast range of values for the period T, the threshold and the hold time t. The steps of this scheduling and its optimization are presented in the following pseudo code, where N is the number of slots on the device:

  i = 0;                      // number of arrived tasks
  k = 1;
  Max_k = 0;
Task_Arriving:
  i = i + 1;
  Min = ASAP_end_i;  Max = ALAP_end_i;
  for j = 1 to i {
    Min = min(Min, ASAP_end_j);
    Max = max(Max, ALAP_end_j);
  }
  Min = max(Min, Max_k);
  Total_Interval = Max - Min;
  for s = 0 to N-1
    end_time(k+s) = ( Min + (Total_Interval / N) * s , (Total_Interval / N) * (s+1) );
  if (t mod T == 0) {          // T: period of time, t: current time
    nov = 0;                   // number of overloaded slots
    for s = 1 to N {
      Tasks = { t_i | t_i ∈ end_time(M) and M mod N = s };
      Input_rate(s) = n(Tasks) / T;
      if (Input_rate(s) > Threshold) { a_s = 1; nov = nov + 1; }
      else a_s = 2;
    }
    for s = 0 to N-1
      end_time(k+s) = ( Min + (Total_Interval / (2N - nov)) * Sum_{i=1..s} a_i ,
                        (Total_Interval / (2N - nov)) * Sum_{i=1..s+1} a_i );
  }
  if (Min > t) {               // t: current time
    k = k + 3;
    Max_k = Max;
  }
  go to Task_Arriving
3.4 Optimizing Partitioning
As a new task arrives, its mobility interval is computed, the overlapping end time interval is determined and the task is assigned to the corresponding cluster. The situation might and will arise where that cluster is full and the task will have to be queued until some tasks within that same cluster end their execution so that the queued task can be placed. However, there might be enough free space in the remaining two
clusters to accommodate that queued task. Hence, to improve the quality of the overall algorithm, the cluster widths are made dynamic and can increase and decrease during runtime when needed. This situation is illustrated in Figure 8. A proper function had to be found to govern this 1-D partitioning of the reconfigurable hardware. For this purpose, we counted, for all the queued tasks waiting to be configured on the device, how many of them are assigned to each cluster. Hence, three variables (queue1, queue2, queue3) kept track of the number of tasks in the queue for each of the three clusters. The width of the clusters was then set proportionally to the values of these three variables. For example, if during runtime we have the situation where queue1=4, queue2=2, queue3=2, the width of the first cluster should be set to half and the widths of the second and third clusters should both be set to one quarter of the entire array width. The widths are, however, not changed instantly but rather gradually, by some predetermined value at each time unit, as sketched below. This method for the 1-D space partitioning indeed proved to be the best one in reducing the overall number of rejected tasks.
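A possible rendering of this gradual width adjustment is the following C sketch; the step size and variable names are assumptions and not the exact values used in the simulations.

#include <stdio.h>

#define N_SLOTS 3
#define DEV_W   80        /* total device width in CLB columns      */
#define STEP    2         /* columns moved per time unit (assumed)  */

/*
 * Target widths are proportional to the number of queued tasks per slot, and
 * the actual widths move towards the targets by at most STEP columns per time
 * unit, mirroring the gradual adjustment described above.
 */
static void adjust_widths(int width[N_SLOTS], const int queued[N_SLOTS]) {
    int total = 0;
    for (int s = 0; s < N_SLOTS; s++) total += queued[s];
    if (total == 0) return;                       /* nothing queued: keep widths */

    for (int s = 0; s < N_SLOTS; s++) {
        int target = DEV_W * queued[s] / total;   /* proportional share */
        if (width[s] < target)
            width[s] += (target - width[s] > STEP) ? STEP : target - width[s];
        else if (width[s] > target)
            width[s] -= (width[s] - target > STEP) ? STEP : width[s] - target;
    }
}

int main(void) {
    int width[N_SLOTS]  = { 27, 27, 26 };         /* roughly equal initial split */
    int queued[N_SLOTS] = { 4, 2, 2 };            /* example from the text       */
    for (int t = 0; t < 10; t++) adjust_widths(width, queued);
    printf("widths after 10 steps: %d %d %d\n", width[0], width[1], width[2]);
    return 0;
}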
Fig. 8. Dynamic 1-D array space partitioning.
4 Experimental Results
Our cluster-based algorithm was compared to the KAMER algorithm in terms of how fast they execute on the one hand, and of how many tasks get rejected on the other.
Since the main idea was to compare these two performance parameters, the generation of tasks was kept as simple as possible. Late arriving tasks were not taken into account, only one new task arrives at each clock cycle, and once placed, a task's execution cannot be aborted. Also, at every time unit or clock cycle, the algorithm tries to place the newly arriving task on the device, checks for tasks that ended their execution so they can be removed, and finally all queued tasks are checked for placement. All simulations were performed for a chip of size 80x120, a 2-dimensional CLB array, corresponding to the Xilinx XCV2000E device. In order to evaluate the improvement in the overall computation time of our algorithm as compared to KAMER, we simulated the placement of 1000 tasks and measured the time in milliseconds both programs took to execute. This was done for different task sizes and shapes, precisely for tasks with width and height uniformly distributed in the intervals [10, 25], [15, 20] and [5, 35]. For each task size range, the simulation was repeated 50 times and the overall average of the execution times was computed. The obtained results are summarized by the graph in Figure 9. For the different task sizes we observe an improvement of 15 to 20 percent compared to the execution time of the KAMER algorithm. This can be observed in Figure 10, where our algorithm's execution time is presented as a fraction of the time the KAMER algorithm takes to execute.
Fig. 9. Algorithm execution times as an average of 50 measurements.
For optimizing the scheduling, the conclusion was that, by varying the width of the end time intervals, slightly fewer tasks were rejected than in the case where the width was held constant at a single value. In fact, the best performance was observed when the length of the end time intervals was set to be distributed uniformly in the mobility interval range. The percentage of rejected tasks was 15.5% for KAMER and 16.2% for our cluster-based method. This is because in the KAMER algorithm, where the entire chip area is available for all tasks to be placed, tasks with similar end times are most often separated from each other. Once these tasks end their execution, small empty spaces that are distant from each other are created, and although there might be enough total free space to accommodate a new task, it might be rejected since it cannot be placed in any single one of those free locations.
Fig. 10. Fraction of execution time of the KAMER algorithm.
5 Conclusion
In this paper we have discussed existing online placement techniques for reconfigurable FPGAs. We suggested a new dynamic task scheduling and placement method. We have conducted experiments to evaluate our algorithm and a previous one. We reported on simulations that show an improvement of up to 20% in placement performance compared to [1]. Also, the quality of placement of this method is comparable to the KAMER method, with nearly the same percentage of rejected tasks. Concerning further work, we plan to develop an online scheduling algorithm to minimize task rejections and to take the dependencies between tasks into consideration. Also
we intend to investigate the online placement scenario further and make a competitive analysis against the optimal offline version of placement [6].
References
1. Kiarash Bazargan, Ryan Kastner, and Majid Sarrafzadeh. Fast Template Placement for Reconfigurable Computing Systems. In IEEE Design and Test of Computers, volume 17, pages 68–83, 2000.
2. Ali Ahmadinia and Jürgen Teich. Speeding up Online Placement for XILINX FPGAs by Reducing Configuration Overhead. To appear in Proceedings of the 12th IFIP VLSI-SOC, December 2003.
3. Herbert Walder, Christoph Steiger, and Marco Platzner. Fast Online Task Placement on FPGAs: Free Space Partitioning and 2-D Hashing. In Proceedings of the 17th International Parallel and Distributed Processing Symposium (IPDPS) / Reconfigurable Architectures Workshop (RAW). IEEE-CS Press, April 2003.
4. Grant Wigley and David Kearney. Research Issues in Operating Systems for Reconfigurable Computing. In Proceedings of the 2nd International Conference on Engineering of Reconfigurable Systems and Architectures (ERSA). CSREA Press, Las Vegas, USA, June 2002.
5. Oliver Diessel and Hossam ElGindy. On scheduling dynamic FPGA reconfigurations. In Kenneth A. Hawick and Heath A. James, eds., Proceedings of the Fifth Australasian Conference on Parallel and Real-Time Systems (PART'98), pp. 191–200, Singapore, 1998. Springer-Verlag.
6. Sándor Fekete, Ekkehard Köhler, and Jürgen Teich. Optimal FPGA Module Placement with Temporal Precedence Constraints. In Proc. of Design Automation and Test in Europe, IEEE-CS Press, Munich, Germany, 2001, pp. 658–665.
7. E. G. Coffman, M. R. Garey, and D. S. Johnson. Approximation algorithms for bin packing: a survey. In D. Hochbaum, editor, Approximation Algorithms for NP-hard Problems, pages 46–93. PWS Publishing, Boston, 1996.
8. E. G. Coffman Jr. and P. W. Shor. Packings in Two Dimensions: Asymptotic Average-Case Analysis of Algorithms. Algorithmica, 9(3):253–277, March 1993.
Definition of a Configurable Architecture for Implementation of Global Cellular Automaton Christian Wiegand, Christian Siemers, and Harald Richter Technical University of Clausthal, Institute for Computer Science, Julius-Albert-Str. 4, 38678 Clausthal-Zellerfeld, Germany {wiegand|siemers|richter}@informatik.tu-clausthal.de
Abstract. The realisation of a Global Cellular Automaton (GCA) using a comparatively high number of communicating finite state machines (FSM) leads to a high communication effort. Inside configurable architectures, fixed numbers of FSMs and fixed bus widths result in a granularity that makes the mapping of larger GCA to these architectures even more difficult. This approach presents a configurable architecture that supports mapping a GCA into a single Boolean network, in order to avoid the increasing communication effort and to achieve scalability as well as high efficiency.
1 Introduction
A Cellular Automaton (CA) is defined as a finite set of cells with additional characteristics. The finite set is structured as an n-dimensional array with well-defined coordinates of each cell and with a neighbourhood relation. Each cell is capable of reading and utilising the state of its neighbouring cells. As the cells implement a (synchronised) finite state machine, all cells change their states with each clock cycle, and all computations are performed in parallel. Data and/or states from non-neighbouring cells are transported stepwise from cell to cell when needed. Useful applications to be implemented within CA consist of problems with high degrees of data locality. Mapping a CA to real hardware – whether configurable or fixed – shows linear growth of communication lines with the number of cells. These links are fixed and of short length, resulting in a limited communication effort to implement a CA. If the complexity of the functionality is defined by the RAM capacity needed to realise this function inside memory – this is normally the case inside configurable, look-up table based structures – the upper bound of the complexity will grow exponentially with the number of inputs and linearly with the number of outputs. The number of input variables is the sum of all bits needed to code the states of all neighbouring cells as well as the own state. Cell complexity is therefore dominated by the number of communication lines. The concept of the Global Cellular Automaton (GCA) [1] overcomes the limitations of the CA by providing connections of a cell not only to neighbouring cells but to any cell in the array. The topology of a GCA is therefore no longer fixed; GCA enable application-specific communication topologies, even with runtime reconfiguration. The number of communication lines per cell might be fixed by an upper limit.
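The difference between the fixed neighbourhood links of a CA and the arbitrary links of a GCA can be illustrated by a toy update routine; the cell rule, sizes and names below are arbitrary examples and not part of the GCA definition in [1].

#include <stdio.h>
#include <string.h>

#define N_CELLS 8

/*
 * Every cell computes its next state from two referenced cells.  In the CA
 * case the references are the fixed neighbours i-1 and i+1; in the GCA case
 * the two link indices per cell are arbitrary and could be reconfigured at
 * runtime.  The XOR rule is an arbitrary example function.
 */
static void step(const int state[N_CELLS], int next[N_CELLS],
                 const int link[N_CELLS][2]) {
    for (int i = 0; i < N_CELLS; i++)
        next[i] = state[link[i][0]] ^ state[link[i][1]];
}

int main(void) {
    int state[N_CELLS] = { 1, 0, 0, 1, 0, 1, 1, 0 }, next[N_CELLS];
    int link[N_CELLS][2];

    for (int i = 0; i < N_CELLS; i++) {           /* CA: fixed neighbourhood */
        link[i][0] = (i + N_CELLS - 1) % N_CELLS;
        link[i][1] = (i + 1) % N_CELLS;
    }
    step(state, next, link);
    memcpy(state, next, sizeof state);

    link[0][0] = 5; link[0][1] = 7;               /* GCA: cell 0 now reads two distant cells */
    step(state, next, link);
    printf("cell 0 next state: %d\n", next[0]);
    return 0;
}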
As the number of possible links between k cells grows with k², the number of realised communication lines per cell will also grow with order 2. The complexity of a single cell and the Boolean function inside will depend on the number of communication inputs, as discussed in the case of the cellular automaton. If a GCA is mapped to a reconfigurable architecture like an FPGA, each cell must be capable of realising any Boolean function with maximum complexity. If the cells are mapped to a reconfigurable array of ALUs, each with local memory, each cell may integrate any complex functionality. The communication effort grows with the square of the number of cells, and the granularity of the circuit is defined by the number of cells and the bit width of the communication links between them. This architecture is well suited to realise a GCA when the number of cells and the bit width fit well, because even complex computations can be performed inside one ALU. The disadvantage of this approach is that the cycle time is deterministic but not bounded, because any algorithm could be realised within one ALU but might need several cycles to perform. Even worse, mapping of GCA with non-fitting characteristics will be difficult if not impossible. Mapping the GCA to another type of reconfigurable cell array, each cell with programmable Boolean functionality, results in cells capable of computing data from every other cell including their own state. This means that all binary coded states from all cells might form the input vector of this function, while the output vector must be capable of coding the state of the cell. Consequently, the complexity of the single cell will grow exponentially with the input vector size, while the communication effort will grow in polynomial order. The approach in this paper presents an architecture capable of realising a GCA as a single Boolean network, where the output vector at time t_n forms part of the input vector for the next state at t_{n+1}. This avoids the complexity caused by the communication lines, which is important for any reconfigurable architecture. Even more important, this architecture makes no assumption about the granularity; only the resulting complexity of the GCA is limited. The remainder of the work contains the definition of the architecture in chapter 2. The introduced structure is capable of containing Boolean functions with a large number of input and output lines. Chapter 3 discusses the mapping of GCA to this architecture and presents an example of realising an algorithm as a GCA and mapping it onto the introduced architecture. Chapter 4 finally gives an outlook on future work.
2 A Reconfigurable Boolean Network for Large Functions
To design a reconfigurable Boolean network, one of two basic architectures is normally used; both are discussed below. The function to be implemented may be defined completely by storing all values inside a RAM memory. The input vector forms the address bit vector and addresses a well-defined memory cell for every combination. The content of this memory cell defines the function at this point, and the data bus lines form the output vector. This is known as the look-up table (LUT) architecture. The most important advantages of this architecture are its simplicity, the simple reconfigurability, the high density of memory circuits and the fixed (and fast) timing.
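A minimal software analogue of the LUT architecture is sketched below, assuming an arbitrary example function with 8 inputs and 4 outputs; sizes and names are illustrative only.

#include <stdint.h>
#include <stdio.h>

#define N_IN 8

/* One 4-bit output value is stored for each of the 2^8 possible addresses. */
static uint8_t lut[1 << N_IN];

static void configure_lut(void) {
    for (int addr = 0; addr < (1 << N_IN); addr++)
        lut[addr] = (addr & 0x0F) & ((addr >> 4) & 0x0F);   /* example function */
}

int main(void) {
    configure_lut();
    uint8_t input = 0xB6;                 /* the input vector forms the address */
    printf("f(0x%02X) = 0x%X\n", input, lut[input]);         /* 0x6 & 0xB = 0x2 */
    return 0;
}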
The number of memory cells, which grows exponentially with the input vector size, is of course disadvantageous, limiting the practical use of LUT structures to small functions. The second possibility to implement any Boolean function inside a reconfigurable architecture is to use a configurable network consisting of 2 or k stages. This mostly minimises the number of gates needed to implement the functionality, and theory has developed representations (e.g. Reed-Muller logic) as well as algorithms for minimising logic and partitioning it over several stages. The advantage of this approach is that a minimised number of gates is used to implement the function. Especially fixed implementations are well supported, but for reconfigurable architectures the effort again grows exponentially with the input vector size (although at different rates, compared to the LUT-based architecture).
2.1 Introducing the New Architecture
To combine the advantages of the first architecture – high degree of integration, simplicity of the circuit – with the reduced number of gates of the second approach, the following approach is considered in this paper. The basic idea consists of the balanced combination of storing functionality inside RAM-based memory and introducing 3 stages inside the architecture to reduce memory size and complexity.
2.1.1 Three Stage Approach
First Stage
The input vector of the Boolean function is represented by the input lines to the complete network. The first stage consists of several memory arrays (or ICs) in parallel, addressed by the input lines. The input vector is partitioned and the parts are each mapped to one corresponding memory array. The so-called minterms of the application, derived from logic minimisation e.g. using Quine-McCluskey or Espresso [2], are stored inside these memory arrays of the first stage. Each part of the array stores a well-defined representation of the (partial) input vector with the actual values (true, false, don't care) and defines a representing code for this minterm, the so-called minterm-code. Each memory array of the first stage is addressed by a subset of the input lines and compares this address to each of its 3-valued partial minterms. If a match is found, the minterm-code is sent via the data bus lines to the second stage. If an address doesn't match any partial minterm of one memory array, no data is returned. After processing a complete input vector, the first stage returns a bit pattern that represents all minterms of the Boolean function which correspond to the input vector.
Second Stage
The minterm-code addresses the memory of the second stage. The memory cells hold the corresponding bit pattern of the minterm-codes and the output vectors. If any input vector of this stage matches one of the stored codes, the stored output information is read out via the data lines and given to stage three for further computation.
The addressing scheme uses again three-valued information, but this time the output consists only of two-valued information. The address information is compared to all stored information in parallel, and a matching hit results in presenting all stored data on the data bus of this second-stage memory array. If no matching hit occurs, the corresponding memory array returns '0'.
Third Stage
The third stage combines all output values from stage 2 via the OR-function.
2.1.2 Detailed Implementation
Figure 1 shows the complete implementation of a 3-stage network, using an example with 12 input lines, 10 minterms and 8 output lines.
Minterms (part a of Fig. 1; each minterm is split into three 4-digit, 3-valued partial minterms covering bits 0–3, 4–7 and 8–11):
1.  0-11 ---- 1100
2.  1101 11-- 0000
3.  -0-1 0001 ----
4.  ---- ---- --10
5.  0100 0001 ----
6.  0100 11-- 0000
7.  ---- 0001 0000
8.  ---- 0001 ----
9.  ---- ---- 0000
10. -0-1 0001 1100
Fig. 1. 3-stage reconfigurable Boolean network: a) Representing minterm table b) Circuit structure
The complete input vector is partitioned into 3 parts, each containing four 3-valued digits. The minterm table in fig. 1 a) shows similarities to the open-PLA representation [3]. The stored value for the corresponding combination is the minterm code, and the resulting minterm code vector, formed by simple concatenation of the responses, is the complete code for the actual input vector.
There might be several matching minterm codes for one input. The input vector contains binary information, but 3-valued information is stored, and the '-' value for don't care matches both '0' and '1' by definition. Figure 1 shows as an example that the minterms 3, 4 and 8 match e.g. "0011 0001 0010", and for the minterms 4, 5 and 8 matching input vectors are possible too. This means that, for correct computation, the system must be capable of computing more than one minterm code vector. The resulting minterm code vectors are used by the second stage of the circuit, where the corresponding output vectors are read. The responses of the second stage have to be combined using the OR-gate of stage three. This is the final result of the operation. One choice for storing the minterms could be an architecture similar to a fully associative cache memory, as shown in figure 2. The minterms are stored as TAG information, the data field holds the corresponding minterm code. A positive comparison, called a compare hit, is marked in the Hit-field.
Each entry of such a memory holds a 3-valued TAG (the partial minterm), the associated 3-valued minterm code as data, and a Hit flag.
Fig. 2. Structure of fully associative memory cells, used as stage-1-memory
A difficulty arises if the minterms contain unbounded variables, coded as 'don't care' (DC). The comparison must be performed for all stored minterms, and all compare hits must be marked. In summary, there might be several hits per partial minterm memory (equivalent to a row in figure 1a), and all partial hits must be compared again to extract all total hits. It is advantageous to use normally structured RAM arrays as minterm memory. Each of these RAMs is addressed by a partial input vector, and for all DC parts of a minterm, the minterm code is stored at all matching addresses. If the data bus of the RAM array exceeds the bit width necessary for storing the code, additional bits coding the context or other information, e.g. invalid conditions, might be stored. A normal RAM architecture is not capable of communicating a data miss; therefore a specialised minterm code must be used to provide stage 2 with the information that the minterm is not stored inside. This is necessary to achieve completeness of the minterm coding. Figure 3 shows partial minterms, first mapped to a Tag-RAM (b) with don't-care comparison, then mapped to a conventional RAM (c). Please note that there is no need to store partial minterms consisting only of don't-cares (DC, '-') in the Tag-RAM, because these partial minterms match any bit pattern of the input vector. The partial minterms no. 3 and 10 as well as no. 5 and 6 lead to the same bit pattern
and need to be stored only once. Therefore code 2 in figure 3 b) indicates that the first part of the input vector matches minterms 3 and 10, and the code 3 indicates the minterms 5 and 6.
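One straightforward way to perform this mapping of 3-valued partial minterms to a conventional RAM is sketched below in C; the pattern set corresponds to figure 3, while the function names and the fixed codes used here are illustrative assumptions.

#include <stdio.h>

#define ADDR_BITS 4
#define NO_MATCH  8            /* code used in the example for "no minterm matches" */

static int ram[1 << ADDR_BITS];

/*
 * Store a minterm code at every RAM address whose binary representation
 * matches a 3-valued pattern such as "-0-1" ('-' = don't care).  This mirrors
 * how figure 3 c) is derived from figure 3 b).
 */
static void store_pattern(const char *pattern, int code) {
    for (int addr = 0; addr < (1 << ADDR_BITS); addr++) {
        int match = 1;
        for (int b = 0; b < ADDR_BITS; b++) {
            int bit = (addr >> (ADDR_BITS - 1 - b)) & 1;   /* pattern[0] is the MSB */
            if (pattern[b] != '-' && pattern[b] - '0' != bit) { match = 0; break; }
        }
        if (match) ram[addr] = code;
    }
}

int main(void) {
    for (int a = 0; a < (1 << ADDR_BITS); a++) ram[a] = NO_MATCH;
    store_pattern("0-11", 0);                 /* partial minterms from figure 3 b) */
    store_pattern("1101", 1);
    store_pattern("-0-1", 2);
    store_pattern("0100", 3);
    ram[0x3] = 4;                             /* 0011 matches minterms 1 and 3 -> combined code 4 */
    printf("ram[0011] = %d, ram[0111] = %d\n", ram[0x3], ram[0x7]);
    return 0;
}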
a) Partial minterms (bits 0–3):
1. 0-11   2. 1101   3. -0-1   4. ----   5. 0100
6. 0100   7. ----   8. ----   9. ----   10. -0-1
b) Mapping to a Tag-RAM with don't-care comparison:
Tag    Code
0-11   0
1101   1
-0-1   2
0100   3
c) Mapping to a conventional RAM (16x3):
Address  Code    Address  Code
0000     8       1000     8
0001     2       1001     2
0010     8       1010     8
0011     4       1011     2
0100     3       1100     8
0101     8       1101     1
0110     8       1110     8
0111     0
Fig. 3. Mapping partial minterms to RAM a) partial minterms b) Mapping to Tag-RAM c) Mapping to conventional RAM
When the tags are mapped to a conventional RAM, every address of this memory whose binary representation matches the bit pattern of a 3-valued tag stores the appropriate minterm code. The address "0011" of the RAM matches the tags of the minterms 1 and 3, so the new code 4 is stored here to indicate the occurrence of these partial minterms. The codes 5–8 represent no matching minterm and may be used for context information, e.g. an illegal input vector. The RAM structure in stage 2 has to use DC coding equivalent to stage 1. This implies that the comparison has to cover this case as well. Again, using conventional RAM means that the DC codes are decoded to all addresses storing the corresponding data inside stage 2. The RAM of this stage is addressed by the minterm combinations of stage 1. If a Tag-RAM is used in stage 2, every 3-valued tag represents a combination of minterms which are present at the input lines and detected by stage 1. The OR-combined output vectors of all minterms which are represented by one combination must be stored as the appropriate value. Again, several addresses might match the same minterm combination. To provide the parallel capacity, the memory of stage 2 is pipelined (figure 4): the resulting minterm code of stage 1 addresses the first pipeline stage of the stage-2 RAM, called combination-RAM. This RAM contains an index for every valid minterm code vector, and this index addresses the second pipeline stage RAM storing the output variables for the minterm combination. This RAM is called output-vector-RAM, and the scheme enables the mapping of different minterm combinations, resulting from DC digits, to the same output value.
Fig. 4. Pipelining of combination-RAM and output-vector-RAM in stage 2
As any single RAM stores only a single index, the output-vector-RAM must hold all output vector values, combined by a logical OR. This implies that the RAM must hold all possible values of the function. As this results in exponential growth of the RAM size, the combination-RAM and output-vector-RAM are segmented into parallel parts, and the results are combined using the OR-operation of stage 3. If different minterm combinations create the same output vector, these combinations are mapped to the same output value too. Note that in the shown example the code "111111" belongs to no minterm combination; the appropriate index leads to the output vector "000000000000". The indices 5–7 are not used here. To use as much capacity of the segmented RAM as possible, a configurable crossbar switch is used to connect stage 1 and stage 2. This switch maps the minterm code vector to the address inputs of the combination-RAM. Unused data lines might be used for additional information such as context, visualisation of invalid codes etc., as already mentioned. The third stage contains the OR-operation, as discussed before. To allow the inverted version of the sum-of-products structure to be used, a configurable exclusive-OR operation (XOR) is included in this stage. This last operation uses the contents of a stored bit vector to invert the corresponding output bits. This results in the sample architecture in figure 5. The sample architecture is shown for 12 input and 12 output lines. To store a complete Boolean function with 12 input and output variables in RAM, 12 bits of data must be stored for all 4096 possible input combinations; therefore an amount of 6144 bytes is necessary for complete storage in a LUT architecture. The amount of memory needed to configure the sample architecture sums up to:
3x minterm-RAM 16x4:             24 Bytes
3x combination-RAM 64x4:         96 Bytes
3x output-vector-RAM 16x12:      72 Bytes
Crossbar-Switch configuration:   18 Bytes
Inverting register 12x1:         1.5 Bytes
Sum:                            211.5 Bytes
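The read path of the architecture can be sketched in software as follows; this is a strongly simplified, single-segment model with toy sizes and contents (it omits the crossbar configuration and the OR over several output-vector-RAM segments) and is not the configuration of figure 5.

#include <stdint.h>
#include <stdio.h>

#define PARTS     3            /* input vector split into 3 partial vectors */
#define PART_BITS 4
#define CODE_BITS 2            /* minterm code width per stage-1 RAM        */
#define OUT_BITS  12

static uint8_t  minterm_ram[PARTS][1 << PART_BITS];        /* stage 1            */
static uint8_t  combination_ram[1 << (PARTS * CODE_BITS)]; /* stage 2, part one  */
static uint16_t output_ram[16];                             /* stage 2, part two  */
static uint16_t invert_mask = 0x000;                        /* stage 3 XOR config */

static uint16_t evaluate(uint16_t input) {
    unsigned addr = 0;
    for (int p = 0; p < PARTS; p++) {                       /* stage 1 lookups    */
        unsigned part = (input >> (p * PART_BITS)) & ((1 << PART_BITS) - 1);
        addr |= (unsigned)minterm_ram[p][part] << (p * CODE_BITS);
    }
    uint8_t index = combination_ram[addr];                  /* stage 2            */
    return (uint16_t)((output_ram[index] ^ invert_mask) & ((1 << OUT_BITS) - 1));
}

int main(void) {
    /* Toy configuration: input 0x0F1 is made to produce the output 0x0A5. */
    minterm_ram[0][0x1] = 1; minterm_ram[1][0xF] = 2; minterm_ram[2][0x0] = 3;
    combination_ram[1 | (2 << 2) | (3 << 4)] = 7;
    output_ram[7] = 0x0A5;
    printf("f(0x0F1) = 0x%03X\n", evaluate(0x0F1));
    return 0;
}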
Fig. 5. Sample architecture for a (0,1)^12 → (0,1)^12 function
Of course this architecture is not capable of containing all functions. The impact of the minimising, partitioning and mapping algorithms on the results and the usability will be significant and is subject to future work. At this point, the architecture is introduced as one possible architecture to implement GCA.
3 Mapping of a GCA on This Architecture
3.1 Sample Implementation of the Architecture
It is assumed for the discussion in this chapter that all binary coded states of a GCA at t_{n-1} are the input for the next cycle t_n, in which the GCA computes the next state. In this case, the GCA might be mapped to the introduced architecture. To store the
actual state, an additional set of binary-valued registers must be added to the architecture. These registers decouple the actual state from the new computation. In most cases, edge-sensitive registers will be used. The unused data lines originating from stage 1 may be used as context signals. If the architecture provides more than one layer of RAM in the crossbar switch, the stage-2 RAM and the inverting register, a context switch can be performed within one cycle. This leads the way to multi-context implementations of a GCA. Figure 6 shows an implementation for realising GCA with 64 state bits.
Fig. 6. Sample architecture for implementing GCA
The computation will use one cycle per step in normal cases. As the capacity of the circuit is limited by the combination-RAM, the application might be too large for implementation in the architecture. If all methods of minimising and mapping fail, the computation can be split up into partial steps. This results in more than one cycle per step to obtain the states of the GCA. The memory demand per memory layer for this implementation is given by the following table:
8x minterm-RAM 256x8:               2 KBytes
8x combination-RAM 64Kx8:         512 KBytes
8x output-vector-RAM 256x64:       16 KBytes
2x Crossbar-Switch configuration:   1 KByte
Inverting register 64x1:            8 Bytes
Sum:                              531 KBytes
3.2 One Example: Mapping a 4-Bit-CPU on the Implementation
This chapter introduces a simple 4-Bit-CPU, implemented as a GCA and mapped on the new hardware architecture. The CPU has a simplified instruction set and consists of only a few internal registers.
Internal registers:
- Address, 8-Bit: the memory-address register. The content of this register is directly mapped to the address lines of the processor and vice versa.
- Data, 4-Bit: the memory-data register. The content of this register is mapped to the data lines of the processor and vice versa.
- Accu, 4-Bit: internal register where all calculations occur.
- Code, 4-Bit: instruction register. This register holds the opcode of the current instruction during execution time.
- PC, 4-Bit: program counter. The content of this register represents the memory address of the current instruction.
Instruction set:
bne Address   Branch not equal. Jump if the content of Accu is not 0.
beq Address   Branch equal. Jump if the content of Accu is 0.
lda Address   Load Accu with the data from Address.
sta Address   Store the content of Accu at the given Address.
and Data      Calculate Accu AND Data and store the result in Accu.
or Data       Calculate Accu OR Data and store the result in Accu.
add Address   Calculate Accu + data from Address and store the result in Accu.
sub Address   Calculate Accu - data from Address and store the result in Accu.
An unconditional jump can be achieved by a bne- followed by a beq-instruction. There is neither stack processing nor subroutines nor a carry flag. The instruction set consists of 8 instructions and can be coded in a 3-bit instruction code. Two operand formats are used: 8-bit addresses and 4-bit data.
3.2.1 Mapping the CPU to a GCA
The GCA consists of 7 named cells. The cells 'Address', 'Data', 'Accu' and 'Code' correspond directly to the registers. The additional cells 'RW', 'Reg0' and 'Reg1' are needed to realise the program flow of the CPU. The cell 'Address' has 256 states, the cell 'RW' two, the cell 'Code' eight, and all other cells have 16 different states. The states of 'Address', 'RW' and 'Data' correspond to the I/O lines of the CPU, and if these lines change, the states of the corresponding cells change too. Thus the states of all cells of the GCA are coded in 28 bits. The instructions are processed in several phases. Each phase needs another configuration of the GCA. This includes different functionality of the single cells as well as different communication links between the cells. These reconfigurations are directed by the context lines, which select different levels of memory to change the behaviour of the GCA. The following phases are used:
(The figure shows the seven GCA cells with their widths – Address 8-bit, Data 4-bit, RW 1-bit, Reg0 4-bit, Reg1 4-bit, Accu 4-bit, Code 3-bit – and the I/O lines of the CPU.)
Fig. 7. GCA to realise the 4-bit CPU
Instructions  OpCodes   Phases
bne, beq      000, 001  Fetch OpCode; Fetch Addr#1; Set Address regarding to Accu
lda           010       Fetch OpCode; Fetch Addr#1; Save&Set Address; Set Accu and restore Address
sta           011       Fetch OpCode; Fetch Addr#1; Save&Set Address and store Accu; Restore Address
and, or       100, 101  Fetch OpCode; Calculate Accu
add, sub      110, 111  Fetch OpCode; Fetch Addr#1; Save&Set Address; Calculate Accu and restore Address
Every instruction starts with the 'Fetch OpCode' phase. After completing the instruction, the next phase is 'Fetch OpCode' again. The sequence of phases, determined by the OpCode, is coded in the cyclic change of the context lines, which is determined by the content of the memory arrays of the first stage of the architecture and by the configuration of the crossbar switch between stage one and two. All context states are referred to by the name of the corresponding phase in the remainder of the text. The last phase of every sequence must leave the context lines in the state that selects the configuration of the 'Fetch OpCode' phase again. To allow the same configurations to be used by different instructions and in different sequences of configurations, the OpCode is stored in the cell 'Code'. This cell is not part of the registers of the CPU. Some of the phases used to process the instructions are described in detail below:
Fetch OpCode (Fig. 8a)
During this phase the state of the cell 'Address' represents the current Program Counter (PC), and the content of the cell 'Data' represents the current OpCode read from RAM. The cell 'RW' is in state 'read', and the state of the cell 'Accu' represents the current content of the register 'Accu'. The Program Counter is increased by one and the OpCode is stored in the cell 'Code'. The context lines change to context 'Calculate Accu' or to context 'Fetch Addr#1', according to the OpCode.
Fig. 8. Instruction execution phases a) Fetch OpCode b) Fetch Addr#1 c) Save&Set Address d) Save&Set Address and Store Accu
Fetch Addr#1 (Fig. 8b)
The cell 'Data', linked to the state of the data lines of the CPU, contains the first part of an address used by the current instruction. This partial address is stored in the state of cell 'Reg0'. The Program Counter is increased by one. According to the state of cell 'Code', the next context is 'Save&Set Address' or 'Save&Set Address and Store Accu'.
Save&Set Address (Fig. 8c)
This phase follows directly after the phase 'Fetch Addr#1'. The cell 'Address' represents the current PC, the cell 'Reg0' the first part of the address to be read, and the cell 'Data' represents the second part of this 8-bit address. In this phase, the state of the cell 'Address' is copied to Reg0 and Reg1, while the states of the cells 'Reg0' and 'Data' give the new state of 'Address'. According to the state of cell 'Code', the next context is 'Calculate Accu and restore Address' for each of the instructions 'lda', 'add' and 'sub'.
Save&Set Address and Store Accu (Fig. 8d)
As in the phase 'Save&Set Address', this phase saves the current PC in the cells 'Reg0' and 'Reg1' and sets the new Address. Unlike in the last phase, the state of the cell 'Accu' is copied to the cell 'Data' and the cell 'RW' changes its state from 'read' to 'write'. In this way, the content of the CPU register 'Accu' is stored to the given address. The next phase is 'Restore Address'.
3.2.2 Mapping the GCA on the Circuit
The states of all cells of the GCA require 28 bits for coding. This architecture is mapped on a circuit consisting of 8 16x4-RAMs as minterm memory in the first stage and 4 8x8-RAMs as combination memory as well as 256x32-RAMs as output memory in the second stage. The cells of the GCA are mapped to the following bit positions in the state vector:
Accu:    bits 4..7
Code:    bits 8..10
RW:      bit 11
Reg0:    bits 12..15
Reg1:    bits 16..19
Data:    bits 20..23
Address: bits 24..31
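A small C sketch of this state-vector packing is given below; the helper functions and example values are illustrative and not part of the paper.

#include <stdint.h>
#include <stdio.h>

/* Bit positions from the table above (bits 0..3 are unused in that mapping). */
#define ACCU_SHIFT   4
#define CODE_SHIFT   8
#define RW_SHIFT    11
#define REG0_SHIFT  12
#define REG1_SHIFT  16
#define DATA_SHIFT  20
#define ADDR_SHIFT  24

static unsigned get_field(uint32_t state, int shift, int bits) {
    return (state >> shift) & ((1u << bits) - 1);
}

static uint32_t set_field(uint32_t state, int shift, int bits, unsigned val) {
    uint32_t mask = ((1u << bits) - 1) << shift;
    return (state & ~mask) | ((uint32_t)val << shift);
}

int main(void) {
    uint32_t state = 0;
    state = set_field(state, ADDR_SHIFT, 8, 0x42);   /* cell 'Address' = PC     */
    state = set_field(state, CODE_SHIFT, 3, 0x3);    /* cell 'Code'    = 'sta'  */
    state = set_field(state, RW_SHIFT,   1, 0);      /* cell 'RW'      = 'read' */
    printf("Address=%02X Code=%u RW=%u\n",
           get_field(state, ADDR_SHIFT, 8),
           get_field(state, CODE_SHIFT, 3),
           get_field(state, RW_SHIFT, 1));
    return 0;
}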
The phases of the instruction processing consist of 6 different Boolean functions, which have to be mapped on the architecture in different combinations. These are:
− increment of an 8-bit value, used to increment the PC;
− copy one bit, used to copy registers. This function is used in parallel up to 20 times to copy the PC, the Accu and the new Address;
− 4x calculation of a 4-bit value from two 4-bit values, for the operations 'add', 'sub', 'and', 'or'.
All these Boolean functions may be executed in parallel as long as they set different output bits. The phases 'Fetch OpCode' and 'Save&Set Address and Store Accu' are now explained in detail.
Fetch OpCode, Context '0000'
This phase combines the increment function with copying 4 bits. The context lines are set to 'Calculate Accu' or 'Fetch Addr#1', according to the current OpCode. Figure 9 shows the mapping of the cell states to the minterm memories MIN0 to MIN7. Only the RAMs MIN0, MIN1, MIN2, MIN7, COMB0, OUT0, COMB1 and OUT1 are used. MIN0 and MIN1 are linked to COMB0, which is completely filled, while bits 0..2 of MIN2 are linked to COMB1. The data lines of MIN7 are linked directly to the context lines of the circuit. The data path from COMB0 leads to the output memory OUT0, where the next address bits are generated and where the read-write state is set to 'read'. The data value of bits 0..2 from MIN2, representing the OpCode, is stored via OUT1 into the GCA as the state of the cell 'Code'. The context lines are set to '0001', coding the context 'Fetch Addr#1', for any instruction except 'and' and 'or'. If the OpCode is 'and', the context is set to '0010' for 'Calculate And', and for 'or' it is set to '0011' for 'Calculate Or'. After execution of 'Fetch OpCode', the cell 'Address' keeps the new PC, the cell 'Code' stores the OpCode, the cell 'RW' is in the state 'read' and the context is either 'Fetch Addr#1', 'Calculate And' or 'Calculate Or'.
Save Address and Store Accu, Context '0110'
This phase is only used when the instruction 'sta' is processed. The cell 'Address' stores the current PC, the cell 'Data' stores the second part of the storage address (Addr#2), the cell 'Accu' stores the current content of the accumulator register and the cell 'RW' is in state 'read'. This phase uses the RAMs MEM0-MEM5, COMB0, COMB1, OUT0 and OUT1 completely and COMB2 and OUT2 partially. The cell 'RW' is set to state 'write', the cell 'Data' stores the content of the accumulator, and the cell 'Address' stores the address where the content of 'Data' must be written. The context lines are not set; the next context value is '0000', the 'Fetch OpCode' phase.
Fig. 9. Fetch OpCode
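To make the two-stage lookup more concrete, the following sketch is our own illustration, not taken from the paper: the table sizes follow the description in Sect. 3.2.2, but the table contents realise only one of the simple "copy" functions listed above, namely copying the 8-bit PC held in 'Address' (bits 24..31) into 'Reg0'/'Reg1' (bits 12..19).

```python
# Identity minterm RAMs, a pass-through combination RAM and an output RAM that
# places the byte at the target bit positions; all contents are example values.
MIN_HI = list(range(16))                 # 16x4 minterm RAM for Address bits 28..31
MIN_LO = list(range(16))                 # 16x4 minterm RAM for Address bits 24..27
COMB   = list(range(256))                # combination RAM: concatenated nibbles pass through
OUT    = [v << 12 for v in range(256)]   # 256x32 output RAM: byte goes to bits 12..19

def next_state_contribution(state):
    addr = (state >> 24) & 0xFF                        # cell 'Address'
    hi, lo = MIN_HI[addr >> 4], MIN_LO[addr & 0xF]     # first stage
    return OUT[COMB[(hi << 4) | lo]]                   # second stage

state = 0x2A << 24                                     # PC = 0x2A, all other cells 0
assert (next_state_contribution(state) >> 12) & 0xFF == 0x2A
```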
4 Conclusion and Outlook
Starting from the problem of mapping a global cellular automaton onto a physical circuit, where the cells may (although this is unlikely) have the full complexity and be linked to every other cell, this paper has introduced a new concept for realising Boolean functions with many input and output variables. A GCA mapped to one Boolean function avoids the costs of the communication between the separate cells; these costs are absorbed into the complexity of the single resulting Boolean function. This approach shifts the problem of communication to the theory of minimising Boolean functions and to the design of algorithms. If a Boolean function is still too complex to fit on the architecture, the processing can be divided into several independent steps with less complex functions. The example of the CPU, mapped to the new architecture approach, visualises the possibilities of this way of realising global cellular automata. At the same time it shows the limitations: the majority of the Boolean functions of the CPU, like copying or incrementing the address, are completely defined functions where every possible input combination and every possible output combination of variables may occur. For this
reason, no simple way could be found to map these functions onto the architecture and to save memory and complexity at the same time.

Fig. 10. 'Save Address and Store Accu'
Our considerations so far show that a lot of improvement is possible through special algorithms and software. This will be the topic of further research. Another topic should be the exploration and improvement of the architecture itself. Because only part of the first- and second-stage RAM was used, it could be advantageous to provide a circuit for an automaton with a wider status register and more status bits, compared to the input lines the first stage can handle. Another way to improve the circuit could be the introduction of a special context-RAM that handles context sequences to be processed. In summary, it can be concluded that this kind of approach offers new possibilities and chances worth considering in future work.
RECAST: An Evaluation Framework for Coarse-Grain Reconfigurable Architectures
Jens Braunes, Steffen Köhler, and Rainer G. Spallek
Institute of Computer Engineering, Dresden University of Technology, D-01062 Dresden, Germany
{braunes,stk,rgs}@ite.inf.tu-dresden.de
Abstract. Coarse-grain reconfigurable processors are becoming more and more an alternative to FPGA-based fine-grain reconfigurable devices due to their reduced configuration overhead. This provides a higher degree of flexibility for the design of dynamically reconfigurable systems. But to make them more interesting for industrial applications, suitable frameworks supporting design space exploration as well as the automatic generation of dedicated design tools are still missing. In our paper we present a runtime-reconfigurable VLIW processor which combines hardwired and reconfigurable functional units in one template. For design space exploration, we discuss a framework, called RECAST (Reconfiguration-Enabled Compiler And Simulation Toolset), based on an architecture description language which is extended by a model of coarse-grain runtime-reconfigurable units. The framework comprises a retargetable compiler based on the SUIF compiler kit, a profiler-driven hardware/software partitioner and a retargetable simulator. To evaluate the framework we performed some experiments on an instance of the architecture template. The results show an increase in performance but also a lot of potential for further improvements.
1 Introduction
Reconfigurable architectures have been a subject of academic research for some years and are now also moving towards industrial applications. With respect to rising design and mask costs, they are a very promising alternative to Application-Specific Integrated Circuits (ASICs). As a result of the availability of highly flexible FPGAs we recognize the migration from specialized fixed hardware to reconfigurable devices in many cases. On the other hand this high flexibility causes additional costs. The interconnect network consumes a lot of area on the die and in many cases a not negligible number of logic blocks cannot be used because of a lack of routing resources needed by complex algorithms. Due to the bandwidth-limited interfaces, which connect the configurable device and the memory from where the configuration bit-file is loaded, reconfiguration is a time-consuming process. Depending on the device size this can require thousands of cycles. For this reason reconfiguration at run-time must be able to cope
with high latencies and is not profitable in many cases. Partially reconfigurable devices or multi-context FPGAs try to overcome this penalty. However, a large number of applications from the digital signal processing domain or multimedia algorithms do not need the high flexibility of FPGAs. In general, the algorithms have similar characteristics. The instruction stream is of a regular structure. Typically, the most time-critical and computation-intensive code blocks are executed repeatedly and have a high degree of parallelism inside. The operands are rather word-sized instead of bit-level. Coarse-grain reconfigurable devices with a data bit-width much greater than one bit meet the demands of these algorithms sufficiently and are more efficient in terms of area, routing and configuration time. Furthermore the parallelism inside the algorithm can be mapped efficiently onto the reconfigurable device to improve the overall performance. Meanwhile a large number of coarse-grain architectures have been proposed and have shown their advantages compared to fine-grain architectures [1]. Despite all the effort put into developing new, highly sophisticated architectures, there is a lack of tools which support the design process. The tools have to provide Design Space Exploration (DSE) to find the most convenient architecture for a particular application as well as the automatic generation of dedicated design tools such as a compiler and a simulator. In this paper we present a framework for design space exploration which supports a template-based design process of a run-time reconfigurable VLIW processor. The framework comprises a retargetable compiler, a profiler-driven hardware/software partitioner and a retargetable simulator. We discuss an Architecture Description Language (ADL) driven approach for reconfigurable hardware modeling. The paper is organized as follows. Section 2 outlines typical coarse-grain reconfigurable architectures and the associated programming and mapping tools. Section 3 covers related work on models for reconfigurable devices. Section 4 presents our framework for DSE and compilation. Section 5 discusses our experimental results. Finally, section 6 concludes the paper.
2 Coarse-Grain Reconfigurable Processors
Because of the lower efficiency in terms of routing area overhead and the poor routability of FPGA-based fine-grain reconfigurable devices, coarse-grain reconfigurable arrays are becoming more and more an alternative for bridging the gap between ASICs and general-purpose processors. The more regular structure of the Processing Elements (PEs) with their wider data bit-width (typically complete functional units, e.g. ALUs and multipliers, are implemented) and the regularity of the interconnect network between the PEs involve a massive reduction of configuration data and configuration time. In the following we want to outline four typical examples of coarse-grain reconfigurable architectures. In particular we are interested in the architecture of the reconfigurable array and the availability of programming tools for these devices.
The PipeRench [2] device is composed of several pipeline stages, called stripes. Each stripe contains an interconnection network connecting a number of PEs, each of which contains an ALU and a register file. Through the interconnect the PEs have local access to operands inside the stripe as well as global access to operands from the previous stripe. Via four global buses the configuration data and the state of each stripe as well as the operands for the computations can be exchanged with the host. A stripe can be reconfigured within one cycle while the other stripes are in execution. A compiler maps the source of an application written in a Dataflow Intermediate Language (DIL) to the PipeRench device. The compilation consists of several phases: module inlining, loop unrolling and the generation of a straight-line, Single Assignment Program (SIP). A place-and-route algorithm tries to fit the SIP into the stripes of the reconfigurable device.

REMARC [3] is a reconfigurable accelerator tightly coupled to a MIPS-II RISC processor. The reconfigurable device consists of an 8×8 array of PEs or nano processors. A nano processor has its own instruction RAM, a 16-bit ALU, a 16-entry data RAM, and 13 registers. It can communicate directly with its four neighbors and, via a horizontal and a vertical bus, with the nano processors in the same row and the same column. Different configurations of a single nano processor are held in the instruction memory in terms of instructions. A global program counter (the same for all nano processors) is used as an index to a particular instruction inside the instruction memory. For programming the MIPS-II / REMARC system a C source is extended by REMARC assembler instructions and compiled with the GCC compiler into MIPS assembler. The embedded REMARC instructions are then translated into binary code for the nano processors using a special assembler.

The PACT XPP architecture [4] is a hierarchical array of coarse-grain Processing Array Elements (PAEs). These are grouped into a single or multiple Processing Array Clusters (PACs). Each PAC has its own configuration manager which can reconfigure the associated PAEs while neighboring PAEs are processing data. A structural language called Native Mapping Language (NML) is used to define the configurations as a structure of interconnections and PAE operations. The XPP C compiler translates the most time-consuming parts of a C source to NML. The remaining C parts are extended with interface commands for the XPP device and compiled by a standard C compiler for running on the host processor.

MorphoSys [5] combines a RISC processor with a reconfigurable cell array consisting of 8×8 (subdivided into 4 quadrants) identical 16-bit PEs. Each PE, or Reconfigurable Cell (RC), has an ALU-multiplier and a register file and is configured through a 32-bit context word. The interconnection network comprises three hierarchical levels where nearest-neighbor, intra-quadrant, and inter-quadrant connections are possible.
The partitioning of C source code between the reconfigurable cell array and the RISC processor is done manually by adding prefixes to functions. The configuration contexts can be generated via a graphical user interface or manually from assembler.
3 Models for Reconfigurable Devices
For application designers it is not an easy task to exploit the capabilities of run-time reconfigurable systems. In many cases, the hardware/software partitioning and the application mapping have to be done manually. For this reason the designer has to have detailed knowledge about the particular system. To allow the designer to think of run-time reconfiguration at a higher, algorithmic level, dedicated tools, based on suitable models, must take over the partitioning and mapping. In the following we want to outline two models which have been proposed to abstract from the real hardware and provide a more general view.

GRECOM: In [6] the authors proposed a General Reconfiguration Model (GRECOM) to bridge the semantic gap between the algorithm and the actual hardware. It covers a wide range of reconfigurable devices by means of abstraction from their real hardware structure. Each reconfigurable device consists of a number of PEs linked together by an interconnection network. Both the functionality of each PE and the topology of the interconnection network can be configured. The model is specified by four basic parameters:
1. Granularity of the processing elements
2. Topology of the interconnection network
3. Method of reconfiguration
4. Reconfiguration time
By varying these parameters a Reconfigurable Mesh model and an FPGA model were derived from the GRECOM. Both models fit into the class of fine-grain reconfigurable devices: the PEs perform operations on one-bit operands.

DRAA: Related to our approach (cf. section 4), Jong-eun Lee et al. [7] proposed a generic template for a wide range of coarse-grain reconfigurable architectures called Dynamically Reconfigurable ALU Array (DRAA). The DRAA consists of identical PEs in a 2D array or plane with a regular interconnection network between them. Vertical and horizontal lines provide the interconnections between the PEs of the same line (diagonal connections are not possible) as well as the access to the main memory. The microarchitecture of the PEs is described using the EXPRESSION ADL [8]. A three-level mapping algorithm is used to generate loop pipelines that fit into the DRAA. First, on the PE level, microoperation trees (expression trees with microoperations as nodes) are mapped to single PEs without the need of reconfiguration. Then the PE-level mappings are grouped together on the line level in such a way that the number of required memory accesses does not exceed the capacity
Fig. 1. VAMPIRE architecture template
of the memory interface belonging to the line. On the plane-level, the line-level mappings are put into the 2D array. If there are unused rows remaining, the generated pipeline can be replicated in the free space.
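A minimal data model of the four GRECOM parameters may help to compare the models discussed in this section; the field names and example values below are our own illustration and are not taken from [6] or [7].

```python
from dataclasses import dataclass

@dataclass
class ReconfigurationModel:
    granularity_bits: int            # operand width of the processing elements
    interconnect_topology: str       # e.g. "mesh", "island-style", "switched matrix"
    reconfiguration_method: str      # e.g. "full", "partial", "multi-context"
    reconfiguration_time_cycles: int

# Example instances: two fine-grain models and a coarse-grain RFU-like device.
reconfigurable_mesh = ReconfigurationModel(1, "mesh", "full", 1)
fpga_model          = ReconfigurationModel(1, "island-style", "full", 100_000)
coarse_grain_rfu    = ReconfigurationModel(8, "switched matrix", "multi-context", 1)
```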
4 Design Space Exploration and Compilation
The availability of efficient DSE tools and automatic tool generators for reconfigurable processors is crucial for the success of these architectures in commercial applications. For the conventional process of hardware/software co-design of SoCs, tools like EXPRESSION are used to find the architecture meeting the requirements of an application best. We want to introduce our architecture template as a starting point for DSE, the framework which combines DSE and tool generation, and an approach to integrate run-time reconfiguration into an ADL-defined architecture model.
4.1 Reconfigurable VLIW Architecture Template
In the following, we want to introduce our template based architecture concept called VAMPIRE (VLIW Architecture Maximizing Parallelism by Instruction REconfiguration.) We extend a common VLIW processor with a single or multiple tightly-coupled Reconfigurable Functional Units (RFUs). The architecture is parameterizable in such a way that we can alter the number and the types of functional units, the register architecture as well as the connectivity between the units, the register files and the memory. Figure 1 shows the architecture template schematically.
Fig. 2. RFU microarchitecture
The RFUs are composed of coarse-grain PEs which process 8-bit operands. A switched matrix provides the configurable datapaths between the PEs (Fig. 2). The RFUs are fully embedded within the processor pipeline. Considering the programming model, every RFU is assigned to a slot within the very long instruction word, which can be filled with a reconfigurable instruction. From the compiler's point of view, code generation and scheduling are a much easier task than for loosely coupled devices. Like the processor architecture, the RFU microarchitecture is also parameterizable. During the DSE the number of PEs in each row and column can be adapted to the demands of the particular application. Every PE consists of a configurable ALU. The results from the ALU can be routed either directly or across a shifter to the interconnection network of the RFU. For synchronization purposes a pipeline register can be used to hold the results for the next stage.
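The following behavioural sketch is our own simplification, not the VAMPIRE implementation: it captures the PE datapath just described, a configurable ALU whose 8-bit result is routed either directly or through a shifter and optionally held in a pipeline register for the next stage.

```python
from typing import Callable, Dict, Optional

ALU_OPS: Dict[str, Callable[[int, int], int]] = {
    "add": lambda a, b: (a + b) & 0xFF,
    "sub": lambda a, b: (a - b) & 0xFF,
    "and": lambda a, b: a & b,
    "or":  lambda a, b: a | b,
}

class PE:
    """One coarse-grain processing element: ALU, optional shifter, optional pipeline register."""
    def __init__(self, op: str, shift: int = 0, registered: bool = False):
        self.op, self.shift, self.registered = op, shift, registered
        self.reg: Optional[int] = None           # pipeline register contents

    def step(self, a: int, b: int) -> int:
        result = ALU_OPS[self.op](a, b)
        if self.shift:                           # route the ALU result across the shifter
            result = (result << self.shift) & 0xFF
        if self.registered:                      # hold the result for the next pipeline stage
            previous, self.reg = self.reg, result
            return previous if previous is not None else 0
        return result

pe = PE("add", shift=1)
assert pe.step(3, 4) == 14                       # (3 + 4) << 1, routed directly to the network
```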
4.2 Framework for DSE and Compilation
Based on the SUIF compiler toolkit [9], we are developing a framework for evaluation of the VAMPIRE architecture. The framework, called RECAST
Fig. 3. DSE and compilation framework
(Reconfiguration-Enabled Compiler And Simulation Toolset), consists of a profiler, a retargetable compiler, a mapping module, and a simulator (Fig. 3).

Frontend: For the processing of the C source, we use the frontend that comes with the SUIF compiler kit. After standard analysis and some architecture-independent transformation and optimization stages, the algorithm is represented by the SUIF Intermediate Representation (IR) in terms of an abstract syntax tree.

Candidate Identifier: To find the most time-consuming parts of the application which might be accelerated by execution within the RFUs, a profiler stage is included to estimate the run-time behavior of the application. In contrast to other profiler-driven concepts, early profiling is performed on the intermediate representation instead of requiring fully compiled object code.

Synthesis of Reconfigurable Instructions: The hardware/software partitioning of the algorithm takes the profiling data into consideration. In the present implementation, the mapping module generates a VHDL description for subtrees of the IR. The mapping results can be influenced by a parameter set. This includes the maximal pipeline depth, the minimal clock frequency and the maximal area consumption. For synthesis, a set of predefined, parameterizable VHDL modules is used. These modules were evaluated previously in terms of
implementation costs for common FPGAs. Besides the VHDL code, a behavioral description and code generation rules for every mapped subtree are generated. In some cases, when the results meet the demands, more than one mapping will be generated. Such a set of mappings (or candidates) for a particular subtree is then forwarded to the code selection phase. Each candidate is annotated with synthesis data that is used to estimate the cost and performance gain.

Code Selection: Based on the estimated run-time behavior as provided by the early profiler, the candidates for reconfigurable execution can be evaluated and selected so that high speedups are achieved or the number of run-time reconfigurations is minimized. The design parameters annotated to the reconfigurable instruction candidates are used to ensure that resource constraints and design requirements like the clock rate are met by the generated code. The code generation is performed by a tree matching and dynamic programming algorithm. The rules to transform the IR into object code are specified by hand for the hardwired instructions and are generated automatically for the reconfigurable instructions as mentioned above. Finally, the scheduling phase combines the selected hardwired instructions as well as the reconfigurable instructions into the VLIW object code as an input for the parametrizable simulation model.
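As a simplified illustration of the selection step (the actual code selector uses tree matching and dynamic programming; the greedy heuristic, the candidate names and the numbers below are purely our own), candidates annotated with profiling and synthesis data could be chosen under an area budget as follows:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    name: str
    saved_cycles: int      # estimated from the early profiling data
    area: int              # from the synthesis annotation

def select(candidates: List[Candidate], area_budget: int) -> List[Candidate]:
    chosen, used = [], 0
    # Greedy by saved cycles per unit area; an exact DP/ILP formulation would find the optimum.
    for c in sorted(candidates, key=lambda c: c.saved_cycles / c.area, reverse=True):
        if used + c.area <= area_budget:
            chosen.append(c)
            used += c.area
    return chosen

cands = [Candidate("fft_butterfly", 1200, 6), Candidate("viterbi_acs", 900, 4),
         Candidate("bilinear_interp", 300, 3)]
print([c.name for c in select(cands, area_budget=8)])   # ['viterbi_acs', 'bilinear_interp']
```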
4.3 Architecture Description
In our approach an architecture description acts as an interface between the compiler’s point of view and the real hardware of our architecture template (Fig. 3). At present, there exists only a simple subset of such an ADL in our framework, which provides a behavioral description of the instruction set as well as the rules for code generation. We mentioned this before. As an essential improvement of our DSE framework we are now utilizing the concepts behind Architecture Description Languages like EXPRESSION. EXPRESSION [8] is designed to support a wide range of processor architectures (RISC, DSP, ASIP, and VLIW). From the combined structural and behavioral description of the processor, an ILP compiler and a cycle-accurate simulator can be automatically generated. The execution pipelines of the functional units are described explicitly as an ordering of units which comprise the pipeline stages, the timing of multi-cycled units, and the datapaths connecting the units. For ILP compilers reservation tables are automatically generated to avoid resource conflicts. As another key feature, EXPRESSION supports the specification of novel memory organizations and hierarchies. Apart from the specification of the hardwired part of our processor template, containing the pipeline structure, the memory hierarchy, and the instruction set, we can also describe the microarchitecture of the RFUs including the PEs and the interconnect network. Every PE as such is functionally described using atomic operations identical to the SUIF intermediate instructions, which might be combined to less complex trees. By this means also the granularity of the processing elements is specified. Possible inner-PE configuration has to be represented by
Fig. 4. Comparison of benchmarks (FFT, Viterbi Decoder, Bilinear Filter). 100% corresponds to a processor without reconfigurable units; the bars distinguish instructions excluded from RFU execution, RFU instructions not selected for RFU execution, RFU instructions selected for RFU execution, and saved cycles compared to the hardwired processor.
a predefined set of these trees. From the mapping point of view, a hierarchical algorithm comparable to the DRAA three-level mapping generates the particular configurations and thereby the candidates for reconfigurable execution as well as the code generation rules for them. For instruction and configuration scheduling, reconfiguring the RFU at run-time can in general be compared to a data access through a memory hierarchy. Multi-cycle reconfiguration causes latencies that have to be hidden by the compiler. To support this compiler task, different configuration management concepts have been proposed: configuration prefetch [10], configuration caching [11] or multiple contexts for fast context switching (cf. REMARC's instruction memory). According to the GRECOM, the method of reconfiguration has to be specified for the RFU. This is achieved by a set of predefined behavioral descriptions which also contains the reconfiguration time as an additional parameter. Encapsulated into a particular resource, comparable with EXPRESSION's memory components, it is possible to generate reservation tables to avoid scheduling conflicts.
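A minimal sketch of the reservation-table idea follows; it is our own illustration, the resource names and the reconfiguration time are placeholders, and in the framework these values would come from the ADL description.

```python
RECONFIG_TIME = 4                     # cycles; would come from the ADL description

class ReservationTable:
    def __init__(self):
        self.busy_until = {}          # resource name -> first cycle in which it is free again

    def can_issue(self, resource, cycle):
        return cycle >= self.busy_until.get(resource, 0)

    def reserve(self, resource, cycle, duration):
        self.busy_until[resource] = cycle + duration

rt = ReservationTable()
rt.reserve("RFU0", cycle=0, duration=RECONFIG_TIME)   # reconfiguration issued at cycle 0
assert not rt.can_issue("RFU0", 2)                    # an RFU instruction at cycle 2 would conflict
assert rt.can_issue("RFU0", 4)                        # by cycle 4 the latency has been hidden
```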
5 Experiments and Analysis
To validate our framework, we have compiled and simulated benchmark algorithms which have their origin in the DSP domain. We have evaluated the utilization of the reconfigurable units for a 512-point two-dimensional in-place FFT, a Viterbi decoder for a 1/3 rate convolutional code and a Bilinear Filter.
Due to the development state of our DSE framework we had to make some modifications to the experimental environment. The benchmark algorithms passed through the following phases:
1. Following the frontend pass of the SUIF compiler, we had to perform some transformations on the IR. We had to dismantle for-loops and special array instructions, which are provided by the SUIF IR, because our compiler backend does not support them yet.
2. The Early Profiler estimated the run-time behavior of the algorithms on a fixed set of input samples.
3. The synthesis module generated mappings for the RFUs based on a fixed set of 24 predefined VHDL modules which correspond to all SUIF intermediate instructions that can be mapped into hardware directly. Memory operations as well as control flow operations are excluded from mapping and have to be executed by the hardwired units.
4. Due to the development state of our architecture description, we could not define the microarchitecture of the RFUs more flexibly. Furthermore, the complexity of the RFUs is much lower than that of other reconfigurable devices (cf. section 2). As a consequence, the mapped IR subtrees were of order less than 4.
5. The code generation pass transformed the IR into VLIW object code using hardwired instructions as well as newly generated reconfigurable instructions.
6. A simple behavioral description was used to describe the functionality of the instruction set for the simulator.
We apply our measurements to the ratio of executed cycles with and without the availability of reconfigurable hardware on the same processor architecture. Figure 4 shows the results in detail. The increase in performance (decrease in cycles) is in the range of 10 to 38%, when we focus mainly on a maximal clock frequency and a small pipeline depth. The results for the FFT and the Viterbi decoder fall short of our expectations. We identified the following reasons for the relatively low increase in performance. Firstly, the very small RFUs cause a suboptimal mapping of subtrees and possibly frequent reconfiguration. Because of the resulting higher costs, a large portion of the candidates is not selected for RFU execution. Furthermore, there are also a lot of memory transfers and control flow operations inside the algorithms, which have to be executed in the hardwired part of the processor and do not contribute to the increase in performance. If we allow the RFU to access the memory directly, we can overcome this penalty. Furthermore, with the utilization of optimizing scheduling techniques like basic block expansion and software pipelining, we are convinced that the results can be improved considerably.
6 Conclusion
In our paper we have presented the RECAST framework for design space exploration for the parameterizable coarse-grain reconfigurable architecture template
VAMPIRE. We have outlined the particular components of this framework, including a profiler based on the SUIF intermediate representation, a module for the synthesis of reconfigurable instructions and the code selector. Furthermore we have analyzed the results obtained from our experiments. We have seen a lot of potential for further improvements. At present we are extending our framework with a more powerful architecture description that includes a flexible model for coarse-grain run-time reconfigurable units.
References
1. Hartenstein, R.: A Decade of Reconfigurable Computing: a Visionary Retrospective. In: Proceedings of the Conference on Design Automation and Testing in Europe (DATE'01), ACM Press (2001)
2. Goldstein, S.C., Schmit, H., Moe, M., Budiu, M., Cadambi, S., Taylor, R.R., Laufer, R.: PipeRench: A Coprocessor for Streaming Multimedia Acceleration. In: ISCA. (1999) 28–39
3. Miyamori, T., Olukotun, K.: REMARC: Reconfigurable Multimedia Array Coprocessor. In: Proceedings of the ACM/SIGDA Sixth International Symposium on Field Programmable Gate Arrays (FPGA'98), ACM Press (1998)
4. Baumgarte, V., May, F., Nückel, A., Vorbach, M., Weinhardt, M.: PACT XPP - A Self-Reconfigurable Data Processing Architecture. In: Proceedings of the International Conference on Engineering of Reconfigurable Systems and Algorithms (ERSA'2001). (2001)
5. Singh, H., Lee, M.H., Lu, G., Kurdahi, F., Bagherzadeh, N., Filho, F.E.C.: MorphoSys: An Integrated Reconfigurable System for Data-Parallel and Computation-Intensive Applications. IEEE Transactions on Computers 49 (2000)
6. Sidhu, R.P., Bondalapati, K., Choi, S., Prasanna, V.K.: Computation Models for Reconfigurable Machines. In: International Symposium on Field-Programmable Gate Arrays. (1997)
7. Lee, J., Choi, K., Dutt, N.D.: Compilation Approach for Coarse-Grained Reconfigurable Architectures. IEEE Design and Test of Computers, Special Issue on Application Specific Processors 20 (2003)
8. Halambi, A., Grun, P., Ganesh, V., Khare, A., Dutt, N., Nicolau, A.: EXPRESSION: A Language for Architecture Exploration through Compiler/Simulator Retargetability. In: Proceedings of the European Conference on Design, Automation and Test (DATE 99). (1999) 485–490
9. Wilson, R.P., French, R.S., Wilson, C.S., Amarasinghe, S.P., Anderson, J.A.M., Tjiang, S.W.K., Liao, S.W., Tseng, C.W., Hall, M.W., Lam, M.S., Hennessy, J.L.: SUIF: An Infrastructure for Research on Parallelizing and Optimizing Compilers. SIGPLAN Notices 29 (1994) 31–37
10. Hauck, S.: Configuration Prefetch for Single Context Reconfigurable Coprocessors. In: Proceedings of the Sixth ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA '98), ACM Press (1998) 65–74
11. Sudhir, S., Nath, S., Goldstein, S.C.: Configuration Caching and Swapping. In Brebner, G., Woods, R., eds.: Proceedings of the 11th International Conference on Field Programmable Logic (FPL 2001). Volume 2147 of Lecture Notes in Computer Science, Springer Verlag (2001) 192–202
Component-Based Hardware-Software Co-design
Péter Arató, Zoltán Ádám Mann, and András Orbán
Budapest University of Technology and Economics
Department of Control Engineering and Information Technology
H-1117 Budapest, Magyar tudósok körútja 2, Hungary
Phone: +36 14632487, Fax: +36 14632204
[email protected], {Zoltan.Mann,Andras.Orban}@cs.bme.hu
Abstract. The unbelievable growth in the complexity of computer systems poses a difficult challenge on system design. To cope with these problems, new methodologies are needed that allow the reuse of existing designs in a hierarchical manner, and at the same time let the designer work on the highest possible abstraction level. Such reusable building blocks are called components in the software world and IP (intellectual property) blocks in the hardware world. Based on the similarity between these two notions the authors propose a new system-level design methodology, called component-based hardware-software co-design, which allows rapid prototyping and functional simulation of complex hardware-software systems. Moreover, a tool is presented supporting the new design methodology and a case study is shown to demonstrate the applicability of the concepts.
1 Introduction
The requirements towards today's computer systems are tougher than ever. Parallel to the growth in complexity of the systems to be designed, the time-to-market pressure is also increasing. In most applications, it is not enough for the product to be functionally correct, but it has to be cheap, fast, and reliable as well. With the wide spread of mobile systems and the advent of ubiquitous computing, size, heat dissipation and energy consumption [1] are also becoming crucial aspects for a wide range of computer systems, especially embedded systems. Taking into account all of these aspects in the design process is becoming next to impossible. According to the International Technology Roadmap for Semiconductors [2], the most crucially challenged branch of the computer industry is system design. The Roadmap clearly declares that Moore's law can hold for the next decades only if innovative new ways of system design are proposed to handle the growing complexity.
This work has been supported by the European Union as part of the EASYCOMP project (IST-1999-14151) and by OTKA T 043329.
Embedded systems have become a part of our lives in the form of consumer electronics, cell phones, smart cards, car electronics etc. These computer systems consist of both hardware and software; they together determine the operation of the system. The differences between hardware and software and their interaction contribute significantly to the above-mentioned huge complexity of systems. On the other hand, the similarities between hardware and software design open many possibilities for their optimized, synergetic co-design. This is the motivation for hardware-software co-design (HSCD) [3]. To address the above problems, different, but in many ways similar solutions have been developed in the software and hardware world.
1.1 Solutions in the Software World
Traditionally, the focus of software engineering has been on flexibility, code readability and modifiability, maintainability etc. This has led to the notions of separation of concerns, information hiding, decoupling, and object-orientation. In recent years though, as a result of the growing needs, the reuse of existing pieces of design or even code has received substantial attention. Examples of such efforts include design and analysis patterns, aspect-oriented programming, software product lines, and component-based software engineering [4]. Unfortunately, the definition of a component is not perfectly clear. There are several different component models, such as the CORBA component model or the COM component model. Each of these component models defines the notion of a component slightly differently. However, these definitions have much in common: a component is a piece of adaptable and reusable code that has a well-defined functionality and a well-defined interface, and can be composed with other components to form an application. Components are often sold by third-party vendors, in which case we talk about COTS (commercial off-the-shelf) components. Each component model defines a way for the components, which might differ greatly in programming language, platform or vendor, to interact with each other. The component models are also often supported by middleware, which provides the components with frequently needed services such as support for distribution, naming and trading services, transactions, and persistence. As a result, the middleware can provide transparency (location transparency, programming language transparency, platform transparency etc.), which facilitates the development of distributed component-based software systems enormously.
1.2 Solutions in the Hardware World
Since the construction of hardware is much more costly and time-consuming than that of software, the idea of reusing existing units and creating the new applications out of the existing building blocks is definitely more adopted in the hardware world. This process has led from simple transistors to gates, then to simple circuits like flip-flops and registers, and then to more and more complex
building blocks like microprocessors. Today's building blocks perform complex tasks and are largely adaptable. These building blocks are called IP (intellectual property) units [5,6,7,8]. They clearly resemble software components; however, IPs are even less standardized than software components. There are no widely accepted component models, comparable to CORBA or EJB, in the hardware world. Another consequence of the high cost of hardware production is that hardware must be carefully tested before it is actually synthesized. Therefore, testing solutions are more mature in the hardware world: e.g. design for testability (DFT) and built-in self test (BIST) are common features of hardware design. Moreover, it is common to use simulation of the real hardware for design and test purposes.
1.3 Convergence
The production costs of hardware units depend very much on the production volume. It is by orders of magnitude cheaper to use general-purpose, adaptable hardware elements, which are produced in large volumes, than special-purpose units. The general-purpose units (e.g. Field Programmable Gate Arrays or microprocessors) have to be programmed to perform the given task. Therefore, when using general-purpose hardware units to solve a given problem, one actually uses software. Conversely, when creating a software solution, one actually uses general-purpose hardware. Consequently, the boundary between adaptable hardware units and software is not very sharp. As already mentioned, hardware is usually simulated from the early phases of the design process. This means that its functionality is first implemented by software. Moreover, there are now tools, for instance the PICO (Program In, Chip Out [9]) system, that can transform software to hardware. Motivated by the above facts this paper introduces a new system-level design methodology which handles both software and hardware units at a high abstraction level and promotes the concept of reuse by assembling the system from hardware and software building blocks. Note that it is not the intention of this paper to address each system-level synthesis problem emerging during HSCD; our goal is only to highlight the concept of a new system-design approach and to deal with problems special to the new methodology. The paper is organized as follows. Section 2 introduces the proposed new methodology and some related problems. The tool supporting the new concepts is demonstrated in Section 3 and a case study is presented in Section 4. Finally, Section 5 concludes the paper.
2 A New HSCD Methodology
Based on the growing needs towards system design, as well as both the software and hardware industry’s commitment to emphasize reuse as the remedy for
design complexity, we propose a novel HSCD methodology we call component-based hardware-software co-design (CBHSCD). CBHSCD is an important contribution to the Easycomp (Easy Composition in Future Generation Component Systems1) project of the European Union. The main goal of CBHSCD is to assemble the system from existing pre-verified building blocks, allowing the designer rapid prototyping [10,11] at a very high level of abstraction. At this abstraction level components do not know any implementation details of each other, not even whether the other is implemented as hardware or as software. The behavior of this prototype system can be simulated and verified at an early stage of the design process. CBHSCD supports hierarchical design: the generalized notion of components makes it possible to reuse complex hardware-software systems as components in later designs. (See also Section 2.6.) The main steps of CBHSCD are shown in Fig 1. In the following, each subtask is detailed except for the issues related to synthesis, which are beyond the scope of CBHSCD.
Fig. 1. The process of CBHSCD (problem specification, component selection, composition, simulation and validation, partitioning, consistency check, synthesis)
2.1 Component Selection
The process starts by selecting the appropriate components2 from a component repository based on the problem specification (of course, the selection of an appropriate component is a challenge of its own [5,12], but addressing it is beyond the scope of this paper). From the perspective of CBHSCD it does not matter how the components are implemented: CBHSCD does not aim at replacing or reinventing specific hardware design and synthesis methods or software development methods. Instead, it relies on existing methodologies and best practices, and only complements them with co-design aspects. The used
1 www.easycomp.org
2 We use the term component to refer to a reusable building block, which might be hardware, software, or the combination of both in hierarchical HSCD.
components might include pure software and pure hardware components, but mixed components are also allowed, as well as components which exist in both hardware and software. In the latter case the designer does not have to decide in advance which version to use (only the functionality is considered), but this will be subject to optimization in the partitioning phase (see Section 2.4).
2.2 Composition
After the components are selected, they are composed to form a prototype system. Each component provides an interface for the outside world. The specification of this interface is either delivered with the component or, if the component model provides a sufficient level of reflection, it can be generated automatically. One of the important contributions of CBHSCD is that the composition of components is based on remote method calls between components supported by the underlying middleware. To handle all components, including the hardware components, uniformly, a wrapper should be designed around the device driver communicating directly with the hardware. This wrapper has the task of producing a software-like interface for the hardware component, delegating the calls and the parameters to the device driver, and triggering an event when a hardware interrupt occurs. The device driver and the wrapper together hide all hardware-specific details including port reads/writes, direct memory access (DMA) etc.: these are all done inside the wrapper and the device driver, transparently for other components. As a consequence hardware components can also participate in remote method calls, both as initiator and as acceptor. Composition is supported by a visual tool that provides an intuitive graphical user interface (GUI) as well as an easy-to-use interconnection wizard. This ease of use helps to overcome problems related to the learning curve, since traditionally system designers have had to possess professional knowledge on hardware, software and architectural issues; thus, the lack of qualified system designers has been a critical problem. The simple composition also allows for easy rapid prototyping of complex hardware-software systems.
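The following sketch illustrates such a wrapper in a few lines; the class and method names are our own and do not reflect the actual CWB/VCF interfaces.

```python
class HardwareWrapper:
    """Software-like facade for a hardware component (illustrative names only)."""
    def __init__(self, driver):
        self.driver = driver                     # object talking to the real device
        self.listeners = []

    def add_listener(self, callback):            # other components register for events
        self.listeners.append(callback)

    def call(self, method, *args):               # delegate a 'method call' to the driver
        return self.driver.execute(method, *args)

    def on_interrupt(self, irq_data):            # invoked by the driver on a hardware interrupt
        for callback in self.listeners:
            callback("hw_event", irq_data)

class FakeDriver:                                # stand-in for a real device driver
    def execute(self, method, *args):
        return "%s%r done" % (method, args)

wrapper = HardwareWrapper(FakeDriver())
wrapper.add_listener(lambda event, data: print(event, data))
print(wrapper.call("start_computation", 42))
wrapper.on_interrupt({"status": "ready"})
```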
2.3 Simulation and Validation
Since the application has been composed of tested and verified components, only the correctness of the composition has to be validated by simulation. The individual units are handled as black-box components in this phase and only functional simulation is carried out. For instance, if a calculation is required from a hardware component, one would only monitor the final result passed back to the initiator component and not the individual steps taken inside the hardware. If problems are detected, the component selection and/or composition steps can be reviewed. It is even possible to simulate parts of the system, so that problems can be detected before the whole system is composed. It is important to note that components are live and fully operable at composition time (e.g. a button can be pressed and it generates events); hence the
application can be tried out by simply triggering an event or sending a start signal to a component. This helps validate the system enormously. Since the design is only in an early prototyping phase, it is possible that the (expensive) hardware components are not available at this stage3. If the hardware component is already available and it has been decided that the component will be realized in hardware, it can be used already in the simulation phase. However, it is possible that we want to synthesize or buy the hardware component only if it is surely needed. In this case, we need software simulation. If a software equivalent of the hardware component is available (e.g. if the hardware is synthesized from a software description, which is often the case, or if the hardware performs a well-known algorithm that is also implemented in software), then this software equivalent can be used for simulation. Even if a complete software equivalent is not available, there might be at least an interface-equivalent piece of software, e.g. if the IP vendor provides C code to specify the interface of its product. Also, if the description of the hardware is available in a hardware description language such as VHDL, a commercial hardware simulator can be used. However, we can assume that sooner or later all IP vendors will provide some kind of formal description of their products which is suitable for functional simulation [5]. Related work includes the embedded code deployment and simulation possibilities of Matlab (http://www.mathworks.com) and the Ptolemy project (http://ptolemy.eecs.berkeley.edu/).
2.4 Partitioning
After the designer is convinced that the system is functionally correct, the system has to be partitioned, i.e. the components have to be mapped to software and hardware. (There can be components which only exist in hardware or in software, so that their mapping is trivial.) This is an important optimization problem, in which the optimal trade-off between cost and performance has to be found. Traditionally, this has been the task of the system designer, but manual partitioning is very time-consuming and often yields sub-optimal solutions. CBHSCD on the other hand makes it possible to design the system at a very high level, only concentrating on functionality. This frees the designer from dealing with low-level implementation issues. Partitioning is automated based on a declarative requirements specification. We defined a graph-theoretic model for the partitioning problem [13,14] and there are other partitioning algorithms in the literature, see e.g. [15,16,17] and references therein. The partitioning algorithm has to take into account the software running times, hardware costs (price, area, heat dissipation, energy consumption etc.), communication costs between components as well as possible constraints defined by the user (including soft and hard real-time constraints, area constraints etc.). This is very helpful for the design of embedded systems, especially real-time systems. When limiting the running time, partitioning aims at minimizing costs, which are largely the hardware 3
3 Before partitioning it is not even known of each component whether it will be realized in software or hardware.
costs. Similarly, when costs are limited, the running time is minimized, which is essentially the running time of the software plus the communication overhead. It is also possible to constrain both running time and costs, in which case it has to be decided whether there is a system that fulfills all these constraints, and in the case of a positive answer, such a partition has to be found. Generating all the input data for the partitioning algorithm is rather challenging. In the case of hardware costs, it is assumed that the characteristic values of the components are provided with the component itself by the vendor. Communication costs are estimated based on the amount of exchanged data and the communication protocol, for which there might be several possibilities with different cost and performance. Concerning the running times, a worst-case (if hard real-time constraints are specified) or average-case running time is either provided with the component or extracted by some profiling technique. An independent research field deals with the measurement or estimation of these values, see e.g. [18,19]. The time and cost constraints must be specified explicitly by the designer via use-cases (see Section 3 for more details).
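To illustrate the optimization problem behind partitioning (this is not the ILP algorithm of [13]; the component data, the cost model and the exhaustive search below are purely illustrative), a toy partitioner could look like this:

```python
from itertools import product

# (software time, hardware time, hardware cost, data transfers if mapped to hardware)
COMPONENTS = {"filter": (100, 10, 8, 2), "fft": (300, 25, 15, 1), "ui": (20, 20, 50, 0)}
COMM_COST_PER_TRANSFER = 5

def partition(time_limit):
    best = None
    for mapping in product(("SW", "HW"), repeat=len(COMPONENTS)):
        time = cost = 0
        for (sw, hw, hw_cost, transfers), where in zip(COMPONENTS.values(), mapping):
            if where == "HW":
                time += hw + transfers * COMM_COST_PER_TRANSFER
                cost += hw_cost
            else:
                time += sw
        # keep the cheapest hardware/software mapping that meets the time constraint
        if time <= time_limit and (best is None or cost < best[0]):
            best = (cost, dict(zip(COMPONENTS, mapping)))
    return best

print(partition(time_limit=120))   # cheapest mapping meeting the time constraint, if any
```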
2.5 Consistency
One of the main motivations of CBHSCD is to raise the abstraction level high enough that the boundary between hardware and software vanishes. Since components interacting with each other are not aware of the context of the other (only the interface is known), the change of implementation should be transparent to them. This implies two consistency problems special to partitioning in CBHSCD.

Interface consistency. The components subject to partitioning are available both in software and in hardware. There is an interface associated with each such pair, which describes the necessary methods and attributes the implementations should provide in order to allow a transparent change between them. It must be checked whether both implementations realize this interface. (For related work see e.g. [20].)

State consistency. The prototype system can be repartitioned several times during the design process. Each time, to realize a transparent swap between implementations, the new implementation should be set to exactly the same state as the current one. (In the case of a long-lasting simulation it may not be feasible to restart the simulation after each swap.) This is not straightforward, because the components are handled as black boxes, and it is not possible to access all the state variables from outside. A number of component models explicitly forbid stateful components to avoid these problems. Our proposed solution to achieve the desired state is to repeat on the new implementation all the method calls that have affected the state of the current implementation since the last swap. (See Section 3 for more details.)

At the end, the system is synthesized, which involves generation of glue code and adapters for the real interconnection of the system, and also the generation
of a test environment and test vectors for real-world testing. However, our main objective was to improve the design process, which is in our opinion the real bottleneck, so the last phase, which involves implementation steps, is beyond the scope of this paper.
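A minimal sketch of the call-replay approach to state consistency described above follows; the code is our own and assumes that both implementations expose the same interface.

```python
class SwappableComponent:
    """Pair of implementations behind one interface; state-changing calls are replayed on swap."""
    def __init__(self, sw_impl, hw_impl):
        self.impls = {"SW": sw_impl, "HW": hw_impl}
        self.active = "SW"
        self.log = []                              # state-changing calls since the last swap

    def call(self, method, *args, mutates=False):
        if mutates:
            self.log.append((method, args))
        return getattr(self.impls[self.active], method)(*args)

    def swap(self, target):
        for method, args in self.log:              # bring the new implementation into the same state
            getattr(self.impls[target], method)(*args)
        self.active, self.log = target, []

class Counter:                                     # same interface in both 'implementations'
    def __init__(self): self.value = 0
    def add(self, n): self.value += n
    def get(self): return self.value

component = SwappableComponent(Counter(), Counter())
component.call("add", 5, mutates=True)
component.swap("HW")                               # repartitioning moves the component to hardware
assert component.call("get") == 5                  # the state survived the swap
```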
2.6 Hierarchical Design
Hierarchical design [21,22] is an integral part of CBHSCD. It helps in coping with design complexity using a divide-and-conquer approach, and also in enhancing testability. Namely, the system can be composed of well-tested components, and only the composition itself has to be tested, which compresses the test space enormously. Hierarchical design in CBHSCD can be interpreted either in a bottom-up or a top-down fashion. Bottom-up hierarchical design means that a system that has been composed of hardware, software and mixed components4 using CBHSCD methodology can later be used as a component on its own for building even more complex systems. Top-down hierarchical design means that a complex problem is divided into sub-problems, and this decomposition is refined until we get manageable pieces. The identified components can then be realized either based on existing components using CBHSCD methodology or using a traditional methodology if the component has to be implemented from scratch. As a simple example of such a hierarchical design, consider a computation-intensive image-processing application, which consists of a set of algorithms. In order to guarantee some time constraints, one of the algorithms has to be performed by a very fast component. So the resulting system might consist of a general-purpose computer and an attached acceleration board. However, the acceleration board itself might include both non-programmable accelerator (NPA) logic and a very long instruction word (VLIW) processor [9], which performs the less performance-critical operations of the algorithm in software, as the result of a similar design step.
2.7 Communication
Communication between the components is facilitated through a middleware layer, which consists of the wrappers for the respective component types, as well as support for the naming of components, the conversion of data types and the delivery of events and method calls. This way we can achieve hardware-software transparency much in the same way as middleware systems for distributed software systems achieve location and implementation transparency. Consequently, the communication between hardware and software becomes very much like remote procedure calls (RPC) in distributed systems. The resulting architecture is shown in Fig 2.
4 Clearly, pure hardware and pure software components are just the two extremes of the general component notion. Generally, components can realize different cost/performance trade-offs ranging from the cheapest but slowest solution (pure software) to the most expensive but fastest solution (pure hardware).
Fig. 2. Communication between a COTS software component (COM component in this example) and a hardware unit. The dotted line indicates the virtual communication, the full line the real communication.
The drawback of this approach is the large communication overhead introduced by the wrappers and the middleware layer in general. However, this is only problematic if the communication between hardware and software involves many calls, which is not typical. Most often, a hardware unit is given an amount of data on which it performs computation-intensive calculations and then it returns the results. In such cases, if the amount of computation is sufficiently large, the communication overhead can be reduced. However, the flexible but complicated wrapper structure is only used in the design phase, and it is replaced by a simpler, faster, but less flexible communication infrastructure in the synthesis phase. There are standard methodologies for that task, see e.g. [23,7].
3 CWB-X: A Tool for CBHSCD
Our tool to support CBHSCD is an extension of a component-based software engineering tool called Component Workbench (CWB), which has been developed at the Vienna Technical University in the Easycomp project [24]. CWB is a graphical design tool implemented in Java for the easy composition of applications from COTS software components. The main contribution of CWB is the support for composition of components from different component models, like COM, CORBA, EJB etc. To achieve this, CWB uses a generic component model called Vienna Composition Framework (VCF) which handles all existing component models similarly. This generic model offers a very flexible way to represent components; hence all existing software component models can be transformed to this one by means of wrappers. In the philosophy of CWB, each component is associated with a set of features. A feature is anything a component can provide. A component can declare the features it supports and new features can also be added to the CWB. The most typical features are the following.
Property. The properties (attributes) provided by the component.
Method. The methods of the component.
Eventset. The set of events the component can emit.
Lifecycle. If a component has this feature, it can be created and destroyed, activated or deactivated.
GUI. The graphical interface of the component.
Each component model is implemented as a plug-in in the CWB (see Fig. 3). The plug-in class only provides information about the features the component can provide; the real functionality is hidden in the classes implementing the features. As the name suggests, new plug-ins can be added to the CWB, that is, new component models can be implemented. To do that, a new plug-in class and classes representing the required features have to be implemented. These classes realize the wrapper between the general component model of VCF and the specific component model.
Fig. 3. The architecture of the CWB.
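As an illustration of the plug-in concept described above (not the actual VCF/CWB API; all type names below are invented), a plug-in class might merely enumerate the features available for a wrapped component, while separate feature classes implement the model-specific access:

// Schematic sketch with invented names: a plug-in announces which features a
// wrapped component offers; the feature classes hide the model-specific details.
import java.util.Arrays;
import java.util.List;

interface Feature {
    String name();
}

class MethodFeature implements Feature {
    public String name() { return "Method"; }
    // invocation of a wrapped component's methods would be implemented here
}

class PropertyFeature implements Feature {
    public String name() { return "Property"; }
    // get/set access to the wrapped component's attributes would be implemented here
}

interface ComponentPlugin {
    List<Feature> supportedFeatures(Object wrappedComponent);
}

// A plug-in for some hypothetical component model "X".
class XModelPlugin implements ComponentPlugin {
    public List<Feature> supportedFeatures(Object wrappedComponent) {
        // The plug-in only reports what is available; the feature objects
        // realize the actual functionality.
        return Arrays.asList(new MethodFeature(), new PropertyFeature());
    }
}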
For the communication between the components, CWB offers multiple communication styles. One of the most important communication styles supported by CWB is event-to-method communication, i.e., a component triggers an event which induces a method call in all registered components. The registration mechanism and the remote method call are supported by Java. A wizard helps the user to set up a proper connection. New communication styles can also be added to the CWB. The components are already operable at composition time, which is very advantageous because the system can be simulated and evaluated already in the early phases of the design process. Also, the user can invoke methods of the components, so use-cases or call sequences can be tested without any programming effort.
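A small sketch of the event-to-method style just described (invented names; plain Java listeners stand in for the wizard-generated glue of the real tool):

// Illustration only: an event raised by one component triggers a method call
// on every registered component.
import java.util.ArrayList;
import java.util.List;

interface ValueListener {
    void valueChanged(int newValue);   // the "method" side of the connection
}

class EventSource {
    private final List<ValueListener> listeners = new ArrayList<>();

    void register(ValueListener l) { listeners.add(l); }   // set up by a wizard

    void fire(int value) {                                  // the "event" side
        for (ValueListener l : listeners) {
            l.valueChanged(value);
        }
    }
}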
3.1 Extension of CWB to Support CBHSCD
CWB offers a good starting point for a hardware-software co-design tool because of its flexibility and extensibility. We extended CWB to support CBHSCD principles. In CWB-X (CWB eXtended), the designer of a hardware-software
application may select both software and hardware and so-called partitionable components from a repository. The latter specify two implementations for the same behavior. These components can originate from different vendors and different component models, including hardware and software. The selected component is put on the working canvas. In the case of pure software components, the operable component itself (possibly with a GUI) can appear, but in the case of hardware components the component itself might not be available, so some kind of simulation is used. The designer can choose between different simulation levels, as already discussed. To enable the integration of hardware components in CWB-X, new component models are added to the CWB as plug-ins. Similarly to the software side, there is a need for several hardware component models according to the different ways the actual hardware might be connected to the computer. This goal is complicated by the lack of widely accepted industry standards for IP interface and communication specification. Since the implementation details of a component should be transparent to the other components, the hardware components should provide similar features as the software ones. Therefore we define the Method, Property, and Eventset features for hardware components as well, and map methods to operations of the underlying hardware, properties to status information and initial parameters, and events to hardware interrupts. To identify the features a hardware component can provide [5], reflection is necessary, i.e., information about the interface of the component. Today's IP vendors do not offer a standardized way to do this; often only a simple text description is attached to the IP. In our model we require a hardware component to provide a description of its features (Properties, Methods, Events). The composition of components is supported by wizards; the wizard parses the component's features and allows the connection according to the selected communication style. Since, due to the wrappers, hardware components act the same way as software ones, the wizards of the CWB can be used. When the architecture of the designed application is ready, partitioning is performed. We have integrated a partitioning algorithm [13] based on integer linear programming (ILP). This is not an approximation algorithm: it finds the exact optimum. The approach can handle systems with several hundred components in acceptable time. For the automatic partitioning process, the various cost parameters and the time constraints must be specified. Time constraints are defined on the basis of use-cases. Each use-case corresponds to a specific usage of the system, typically initiated by an entity outside the system. A use-case involves some components of the system in a given order. A component can also participate multiple times in a use-case. The designer defines a use-case by specifying the sequence of components affected in it and gives a time constraint for the sum of the execution times of the concerned components, including communication. The constraints for all use-cases are simultaneously taken into account during partitioning. The measurement of running time and
communication cost parameters is at an initial stage in our tool; currently we expect this data to be given explicitly by the designer. CWB-X is able to check both interface and state consistency. To each partitionable component a Java-like interface is attached which describes the required features of the implementations. The tool checks whether the associated implementations are appropriate. Furthermore, to each method in this interface description file an attribute is assigned, which describes the behavior of this method in the state consistency check. The values and meanings of the attribute are the following:
NO_SIDE_EFFECT: the corresponding method has no effect on the state of the component, thus it need not be repeated after repartitioning.
REPEAT_AT_REPARTITION: the corresponding method affects the state but has no side effect, thus it should be repeated after repartitioning.
REPEAT_AT_REPARTITION_ONCE: the same as the previous one, but in a sequence of calls to this method only the last one should be repeated. An example is setting a property to a value.
SIDE_EFFECT: the corresponding method affects the state and also has some side effect (e.g. sends 100 pages to the printer) or takes too long to repeat.
The system logs every method call and property change since the last implementation swap. If all of these belong to the first three categories, the correct state will be set automatically after the change of implementations by repeating the appropriate function calls. If there is at least one call with SIDE_EFFECT, the system shows a warning and asks the designer to decide which method calls to repeat. The designer is supported by a detailed log in this decision.
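The following sketch illustrates, under simplifying assumptions, how such a log could be filtered to determine which calls to replay after an implementation swap. It is not the CWB-X implementation; the class and method names are invented, and the ordering between the two replayed groups of calls is simplified.

// Sketch of the logging/replay idea for state consistency (invented names).
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

enum StateAttribute { NO_SIDE_EFFECT, REPEAT_AT_REPARTITION, REPEAT_AT_REPARTITION_ONCE, SIDE_EFFECT }

class LoggedCall {
    final String method;
    final Object[] args;
    final StateAttribute attribute;
    LoggedCall(String method, Object[] args, StateAttribute attribute) {
        this.method = method; this.args = args; this.attribute = attribute;
    }
}

class ConsistencyLog {
    private final List<LoggedCall> log = new ArrayList<>();

    void record(LoggedCall call) { log.add(call); }

    /** Returns the calls to replay, or null if a SIDE_EFFECT call forces a designer decision. */
    List<LoggedCall> callsToReplay() {
        List<LoggedCall> replay = new ArrayList<>();
        Map<String, LoggedCall> lastOfKind = new LinkedHashMap<>();
        for (LoggedCall c : log) {
            switch (c.attribute) {
                case SIDE_EFFECT:                return null;        // warn and ask the designer
                case NO_SIDE_EFFECT:             break;              // nothing to repeat
                case REPEAT_AT_REPARTITION:      replay.add(c); break;
                case REPEAT_AT_REPARTITION_ONCE: lastOfKind.put(c.method, c); break;
            }
        }
        replay.addAll(lastOfKind.values());   // only the last call of each such method is repeated
        return replay;
    }
}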
4 Case Study
In this section the CBHSCD methodology is demonstrated step by step on a small example application. In this example the frequency of an unknown source signal has to be measured. This task might appear in several real-world applications, e.g. in mobile phone technology, hence this system can be used as a building block in later designs. The architecture of the example can be seen in Fig. 4. The frequency measurer (FM) measures the signal of the generator and sends the measured value periodically to the PC through the serial port. The PC, on the one hand, displays the current frequency value and plots how the value changes over time and, on the other hand, controls the measurer through start and stop signals. There are two implementations available for the FM: the first is a programmable PIC 16F876 microcontroller, regarded as the software implementation, and the second is an FPGA on a Xilinx Virtex-II XC2V1000 card, the hardware implementation. The two implementations behave exactly the same way, but their performance (and cost) differs. The microcontroller is able to precisely measure frequencies up to 25 kHz (taking a sample lasts 40 µs). The FPGA, on the other hand, can take a sample in 50 ns, thus it can measure up to 20 MHz without any problem.
Fig. 4. The architecture of the example application
There are five components in this example: two JavaBeans buttons (start and stop), a TextField and a chart component for display, and the FM declared as a partitionable component with the two implementations detailed above (the signal generator is regarded as an outside source and hence is not part of the system). Both implementations belong to the component model whose device driver is able to communicate with the devices through the serial port. For consistency purposes the interface in Fig. 5 is provided with the component. The device driver is wrapped by a CWB wrapper providing a software-like interface. The tool checks whether the interfaces of the wrappers match the requirements.
package frequency;

public interface FrequencyEstimatorInterface {
    SIDE_EFFECT public void start();
    SIDE_EFFECT public void stop();
    NO_SIDE_EFFECT public void takeOneSample();
    NO_SIDE_EFFECT public String getMeasuredFrequencyString();
    NO_SIDE_EFFECT public Integer getMeasuredFrequency();
    REPEAT_AT_REPARTITION_ONCE public void setCountEveryEdge(boolean b);
    NO_SIDE_EFFECT public boolean getCountEveryEdge();
}

Fig. 5. Part of the required interface with state consistency attributes of the partitionable frequency measurer (FM) component
In the composition phase the start and the stop button should be mapped, with the aid of the mentioned wizard, to the start and stop methods of the FM, respectively. The FM sends an interrupt whenever a new measured value arrives. This interrupt appears as an event in CWB-X; this event triggers the setText function of the TextField and the addValue function of the chart. The system can be simulated immediately without any further effort: after pressing
the start button, the current implementation of the FM starts measuring the signal of the generator and the PC displays the measured values. The task of partitioning is to decide which implementation to use according to the time requirements of the system. The designer defines a use-case which declares a time limit for the takeOneSample function of the FM. In this simple case the optimal partition is trivial (in general, the partitioning problem is NP-hard): if the time limit is under 40 µs, the FPGA should be used, otherwise the microcontroller (here we assume that programming the microcontroller is cheaper than producing the FPGA). The partitioner finds this solution and changes the implementation if necessary. The new implementation will be transformed to the same state as the current one according to the steps detailed in Section 3.
5 Conclusion
In this paper, we have described a new methodology for hardware-software co-design, which emphasizes reuse, a high abstraction level, design automation, and hierarchical design. The new methodology, called component-based hardware-software co-design (CBHSCD), unifies component-based software engineering and IP-based hardware engineering practices. It supports rapid prototyping of complex systems consisting of both hardware and software, and helps in the design of embedded and real-time systems. The concepts of CBHSCD, as well as partitioning, enable advanced tool support for the system-level design process. Our tool CWB-X is based on the Component Workbench (CWB), a visual tool for the composition of software components of different component models. CWB-X extends the CWB with new component models for hardware components as well as partitioning and consistency checking functionality. We presented a case study to demonstrate the applicability of our concepts and the usefulness of our tool. We believe that the notion of CBHSCD unifies the advantages of hardware and software design into a synergetic system-level design methodology, which can help in designing complex, reliable, and cheap computer systems rapidly.
References

1. H. Lekatsas, W. Wolf, and J. Henkel. Arithmetic coding for low power embedded system design. In Data Compression Conference, pages 430–439, 2000.
2. A. Allan, D. Edenfeld, W. H. Joyner Jr., A. B. Kahng, M. Rodgers, and Y. Zorian. 2001 Technology Roadmap for Semiconductors. IEEE Computer, 35(1), 2002.
3. R. Niemann. Hardware/Software Co-Design for Data Flow Dominated Embedded Systems. Kluwer Academic Publishers, 1998.
4. George T. Heineman and William T. Councill. Component Based Software Engineering: Putting the Pieces Together. Addison-Wesley, 2001.
5. G. Martin, R. Seepold, T. Zhang, L. Benini, and G. De Micheli. Component selection and matching for IP-based design. In Proceedings of DATE 2001, Design, Automation and Test in Europe. IEEE Press, 2001.
6. Ph. Coussy, A. Baganne, and E. Martin. A design methodology for integrating IP into SoC systems. In Conférence Internationale IEEE CICC, 2002.
7. P. Chou, R. Ortega, K. Hines, K. Partridge, and G. Borriello. IPChinook: an integrated IP-based design framework for distributed embedded systems. In Design Automation Conference, pages 44–49, 1999.
8. F. Pogodalla, R. Hersemeule, and P. Coulomb. Fast prototyping: a system design flow for fast design, prototyping and efficient IP reuse. In CODES, 1999.
9. V. Kathail, S. Aditya, R. Schreiber, B. R. Rau, D. C. Cronquist, and M. Sivaraman. PICO: automatically designing custom computers. IEEE Computer, 2002.
10. G. Spivey, S. S. Bhattacharyya, and Kazuo Nakajima. Logic Foundry: A rapid prototyping tool for FPGA-based DSP systems. Technical report, Department of Computer Science, University of Maryland, 2002.
11. Klaus Buchenrieder. Embedded system prototyping. In Tenth IEEE International Workshop on Rapid System Prototyping, 1999.
12. P. Roop and A. Sowmya. Automatic component matching using forced simulation. In 13th International Conference on VLSI Design. IEEE Press, 2000.
13. Z. Á. Mann and A. Orbán. Optimization problems in system-level synthesis. 3rd Hungarian-Japanese Symposium on Discrete Mathematics and Its Applications, 2003.
14. P. Arató, S. Juhász, Z. Á. Mann, A. Orbán, and D. Papp. Hardware/software partitioning in embedded system design. In Proceedings of the IEEE International Symposium on Intelligent Signal Processing, 2003.
15. N. N. Binh, M. Imai, A. Shiomi, and N. Hikichi. A hardware/software partitioning algorithm for designing pipelined ASIPs with least gate counts. In Proceedings of the 33rd Design Automation Conference, 1996.
16. B. Mei, P. Schaumont, and S. Vernalde. A hardware/software partitioning and scheduling algorithm for dynamically reconfigurable embedded systems. In Proceedings of ProRISC, 2000.
17. T. F. Abdelzaher and K. G. Shin. Period-based load partitioning and assignment for large real-time applications. IEEE Transactions on Computers, 49(1), 2000.
18. X. Hu, T. Zhou, and E. Sha. Estimating probabilistic timing performance for real-time embedded systems. IEEE Transactions on VLSI Systems, 9(6), 2001.
19. S. L. Graham, P. B. Kessler, and M. K. McKusick. An execution profiler for modular programs. Software Practice & Experience, 13:671–685, 1983.
20. A. Speck, E. Pulvermüller, M. Jerger, and B. Franczyk. Component composition validation. International Journal of Applied Mathematics and Computer Science, pages 581–589, 2002.
21. G. Quan, X. Hu, and G. Greenwood. Preference-driven hierarchical hardware/software partitioning. In Proceedings of the IEEE/ACM International Conference on Computer Design, 1999.
22. R. P. Dick and N. K. Jha. MOGAC: A multiobjective genetic algorithm for hardware-software co-synthesis of hierarchical heterogeneous distributed embedded systems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 17(10):920–935, 1998.
23. A. Basu, R. Mitra, and P. Marwedel. Interface synthesis for embedded applications in a co-design environment. In 11th IEEE International Conference on VLSI Design, pages 85–90, 1998.
24. Johann Oberleitner and Thomas Gschwind. Composing distributed components with the Component Workbench. In Proceedings of the Software Engineering and Middleware Workshop (SEM 2002). Springer Verlag, 2002.
Cryptonite – A Programmable Crypto Processor Architecture for High-Bandwidth Applications Rainer Buchty , Nevin Heintze, and Dino Oliva Agere Systems 101 Crawfords Corner Rd Holmdel, NJ 07733, USA {buchty|nch|oliva}@agere.com
Abstract. Cryptographic methods are widely used within networking and digital rights management. Numerous algorithms exist, e.g. spanning VPNs or distributing sensitive data over a shared network infrastructure. While these algorithms can be run with moderate performance on general purpose processors, such processors do not meet typical embedded systems requirements (e.g. area, cost and power consumption). Instead, specialized cores dedicated to one or a combination of algorithms are typically used. These cores provide very high bandwidth data transmission and meet the needs of embedded systems. However, with such cores changing the algorithm is not possible without replacing the hardware. This paper describes a fully programmable processor architecture which has been tailored for the needs of a spectrum of cryptographic algorithms and has been explicitly designed to run at high clock rates while maintaining a significantly better performance/area/power tradeoff than general purpose processors. Both the architecture and instruction set have been developed to achieve a bits-per-clock rate of greater than one, even with complex algorithms. This performance will be demonstrated with standard cryptographic algorithms (AES and DES) and a widely used hash algorithm (MD5).
1 Introduction and Motivation
Hardware ASIC blocks are still the only available commercial solution for high-bandwidth cryptography. They are able to meet functionality and performance requirements at comparably low costs and, importantly for embedded systems applications, low power consumption. Their chief limitation is their fixed functionality: they are limited to the algorithm(s) for which they have been designed. In contrast, a general purpose processor is a much more flexible approach and can be used to implement any algorithm. The current generation of these processors has sufficient computing power to provide moderate levels of cryptographic performance. For example, a high-end Pentium PC can provide encryption rates of hundreds of MBits/sec. However, general purpose processors are more than
Rainer Buchty is now a member of the University of Karlsruhe, Institute for Computer Design and Fault Tolerance, Chair for Computer Architecture and Parallel Processing. You can reach him via [email protected].
100x larger and consume over 100x more power than dedicated hardware with comparable performance. A general purpose processor is simply too expensive and too power hungry to be used in embedded applications. Our goal in this paper is to provide a better tradeoff between flexibility and performance/area/power in the context of embedded systems, especially networking systems. Our approach is to develop a programmable architecture dedicated to cryptographic applications. While this architecture may not be as flexible as a general purpose processor, it provides a substantially better performance/area/power tradeoff. This approach is not new. An early example is the PLD001 processor [17]. While this processor is specifically designed for the IDEA and RSA algorithms, it is in fact a microprogrammable processor and could in principle be used for variations of these algorithms. A more recent approach to building a fully programmable security processor is CryptoManiac [28]. The CryptoManiac architecture is based on a 4-way VLIW processor with a standard 32-bit instruction set. This instruction set is enhanced for cryptographic processing [7] by the addition of crypto instructions that combine arithmetic and memory operations with logical operations such as XOR, and embedded RAM for table-based permutation (called the SBOX Cache). These crypto instructions take one to three cycles. The instruction set enhancement is based on an analysis of cryptographic applications. However, the analysis made assumptions about key generation which are not suitable for embedded environments. CryptoManiac is very flexible since it is built on a general purpose RISC-like instruction set. It provides a moderate level of performance over a wide variety of algorithms, largely due to the special crypto instructions. However, the CryptoManiac architecture puts enormous pressure on the register file, which is a centralized resource shared by all functional units. The register file provides 3 source operands and one result operand per functional unit [28], giving a total of at least 16-ports on a 32x32-bit register array, running at 360 MHz. This is still a challenge for a custom design using today’s 0.13µm technology. It would be even more difficult using the 0.13µm library-based ASIC tool flows typical for embedded system chips. Unlike CryptoManiac, Cryptonite was designed from the ground up, and not based on a pre-existing instruction set. Our starting point was an in-depth application analysis in which we decomposed standard cryptographic algorithms down to their core functionality as described in [6]. Cryptonite was then designed to directly implement this core functionality. The result is a simple light-weight processor with a smaller instruction set, a distributed register file with significantly less register port pressure, and a two-cluster architecture to further reduce routing constraints. It natively supports both 64-bit and 32-bit computation. All instructions execute in a single cycle. Memory access and arithmetic functions are strictly separated. Another major difference from CryptoManiac is in the design of specialized instructions for cryptographic operations. CryptoManiac provides instructions that combine up to three standard logic, arithmetic, and memory operations. In contrast, Cryptonite supports several instructions that are much more closely tailored to cryptographic algorithms such as parallel 8-way permutation lookups, parameterized 64-bit/32-bit rotation, and a set of XOR-based fold operations. 
Cryptonite was designed to minimize implementation complexity, including register ports, and the size, number, and length of the internal data paths. A companion paper [10] focuses on AES and the software for implementing AES on Cryptonite. It describes novel techniques for AES implementation, the AES-relevant aspects of the Cryptonite architecture, and how AES influenced the design of Cryptonite. In this paper we focus on the overall architecture and design methodology as well as give details of those aspects of the architecture influenced by DES and hashing algorithms such as MD5, including the distributed XOR unit and the DES unit of Cryptonite.
2 Key Design Ideas
Cryptonite was explicitly designed for high throughput. Our approach combines single-cycle instruction execution with a three-stage pipeline consisting of simple stages that can be clocked at a high rate. Our architecture is tailored for cryptographic algorithms, since the more closely an architecture reflects the structure of an application, the more efficiently the application can be run on that architecture. The architecture also addresses system issues. For example, we have designed Cryptonite to generate the round keys needed for encryption and decryption within embedded systems. To achieve these goals while keeping implementation complexity to a minimum, Cryptonite employs a number of architectural concepts which will be discussed in this section. These concepts arose from an in-depth analysis of several cryptographic algorithms described in [6], namely DES/3DES [4], AES/Rijndael [9,8], RC6 [21], IDEA [22], and several hash algorithms (MD4 [19,18], MD5 [20], and SHA-1 [5]).

2.1 Two-Cluster Architecture

Most other work on implementing cryptographic algorithms on a programmable processor focuses solely on the core encryption algorithm and does not include round key generation (i.e. the round keys have to be precomputed). For embedded system solutions, however, on-the-fly round key generation is vital because storing/retrieving the round keys for thousands or millions of connections is not feasible. Coarse-grain parallelism can be exploited here: round key calculation is usually independent of the core cryptographic operation. For example, in DES [4], the round key generation is completely independent of encryption or decryption; the only loose coupling is the point where the round key is fed into the encryption process. Certain coarse-grained parallelism also exists within hash algorithms like MD5 [20] or SHA [5]: these algorithms consist of the application of a non-linear function (NLF) followed by further additions of table-based values. In particular, the hash function's NLF can be calculated in parallel with summing up the table-based values. Our analysis revealed that many algorithms show a similar structure and would benefit from an architecture providing two independent computing clusters. Algorithm analysis further indicated that two clusters are a reasonable compromise between algorithm support and chip complexity. Adding further clusters would rarely result in speeding up computation but would increase silicon area.
2.2 XOR Unit
XOR is a critical operation in cryptography: XOR is self-invertible, so we can XOR data and key to generate cipher text and then XOR the cipher text with the key to recover the data. Several algorithms employ more than one XOR operation per computation round or combine more than two input values. Therefore, using common two-input XOR functions introduces unnecessary sequentiality; a multi-input XOR function avoids this. Such a unit is easy to realize as the XOR function is fast, simple, and cheap in terms of die size and complexity. Thus, Cryptonite employs a 6-input XOR unit which can take any number of inputs from one to six. These inputs are the four ALU registers, data coming from the memory unit, result data linked from the sibling ALU, and an immediate value. As the XOR unit can additionally complement its result, it turns into a negation unit when only one input is selected. Signal routing becomes an issue with bigger data sizes. For this reason, instead of routing the ALU registers individually into the XOR unit, the XOR function has been partially embedded into the data path from the registers to the XOR unit. Thus only one intermediate result instead of four source values has to be routed across the chip, and routing pressure is taken off the overall design. This reduces die size and increases speed of operation.
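As a behavioral illustration (not RTL and not the actual Cryptonite design files), the 6-input XOR with optional complement could be modeled as follows:

// Behavioral sketch of a multi-input XOR unit with optional result complement;
// up to six 64-bit operands are combined in a single step.
final class XorUnit {
    /** XORs all inputs (1..6 of them) and optionally complements the result. */
    static long xorCombine(boolean complement, long... inputs) {
        if (inputs.length < 1 || inputs.length > 6) {
            throw new IllegalArgumentException("1 to 6 inputs expected");
        }
        long result = 0L;
        for (long in : inputs) {
            result ^= in;
        }
        return complement ? ~result : result;   // with one input this acts as a negation unit
    }
}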
2.3 Parameterizable Permutation Engine
Another basic operation of cryptographic algorithms is permutation, commonly implemented as a table lookup. In typical hardware designs of DES, these lookup tables are hardwired. However, for a programmable architecture, hardwired permutation tables are not feasible as they would limit the architecture to the provided tables. Supporting several algorithms would require separate tables and hence increase die size.

Fig. 1. Vectored Memory Access
Instead, a reconfigurable permutation engine is necessary. Cryptonite employs a novel vector memory unit as its reconfigurable permutation engine. Algorithm analysis showed that permutation lookups are mostly done on a per-byte or smaller basis (e.g. DES: 6-bit address, 4-bit output; AES: permutation based on an 8-bit multiplication table with 256 entries). Depending on the input data size and algorithm, up to 8 parallel lookups are performed. In Cryptonite, the vector memory unit receives a vector of indexes and a scalar base address. This is used to address a vector of memories (i.e. n independent memory lookups are performed). The result is a data vector. This differs from a typical vector memory unit which, when given a scalar address, returns a data vector (i.e. the n data elements are sequentially stored at the specified address). In non-vector addressing mode, the memory address used is the sum of a base address (from a local address register) and an optional index value. Each memory in the vector of memories receives the same address, and the results are concatenated to return a 64-bit scalar. The vector addressing mode is a slight modification to this scheme: we mask out the lower 8 bits of the base address provided by a local address register (LAR) and the 64 bits of the index vector are interpreted as eight 8-bit offsets from the base address as illustrated by Figure 1. Cryptonite’s vector memory unit is built from eight 8-bit memory banks.
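A behavioral sketch of the vectored (S-Box) addressing mode just described follows; the bank size and mask handling are simplifying assumptions made for the sketch, not the exact hardware encoding.

// Eight 8-bit banks, a common base page from the LAR, and eight individual
// 8-bit indices taken from a 64-bit index vector.
final class VectorMemorySketch {
    private final byte[][] banks = new byte[8][256 * 256];   // 8 independent 8-bit memory banks

    /** Eight parallel lookups; the result bytes are concatenated into one 64-bit value. */
    long sboxLookup(int lar, long indexVector) {
        int basePage = lar & ~0xFF;            // mask out the lower 8 bits of the base address
        long result = 0L;
        for (int i = 0; i < 8; i++) {
            int offset = (int) ((indexVector >>> (8 * i)) & 0xFF);   // i-th 8-bit index
            int value = banks[i][(basePage + offset) & 0xFFFF] & 0xFF;
            result |= ((long) value) << (8 * i);
        }
        return result;
    }
}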
2.4 AES-Supporting Functions

The vector memory unit described above is important for AES performance. Cryptonite also provides some additional support instructions for AES. Eight supporting functions are listed in Table 1. Unlike the typical DES functions, the AES-supporting functions implement relatively general fold, rotate, and interleave functionality and should be applicable to other crypto algorithms.

Table 1. AES-supporting ALU functions
swap(x32, y32): f(x, y) = y | x
upper(x64, y64): f(x, y) = x7 | x3 | y7 | y3 | x6 | x2 | y6 | y2 (indices denote bytes)
lower(x64, y64): f(x, y) = x5 | x1 | y5 | y1 | x4 | x0 | y4 | y0 (indices denote bytes)
rbl_m(x64): f(x) = (x63..32 ≪ (m ∗ 8)) | (x31..0 ≪ ((m + 1) ∗ 8)) (indices denote bits)
rbr_m(x64): f(x) = (x63..32 ≫ (m ∗ 8)) | (x31..0 ≫ ((m + 1) ∗ 8)) (indices denote bits)
xor_rbl_m(x64, y64): f(x, y) = rbl_m(x ⊕ y)
fold(x64, y64): f(x, y) = (x1 ⊕ y0) | (x0 ⊕ y1 ⊕ y0) (indices denote 32-bit words)
ifold(x64, y64): f(x, y) = (x0 ⊕ y1) | (y1 ⊕ y0) (indices denote 32-bit words)
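For illustration, two of the Table 1 functions can be modeled directly from their definitions. This is a behavioral sketch, not the hardware implementation, and it assumes the word/byte index conventions stated in the table (x1 is the upper 32-bit word, x0 the lower; "|" denotes concatenation).

final class AesSupportSketch {

    /** fold(x, y) = (x1 ^ y0) | (x0 ^ y1 ^ y0), word indices as in Table 1. */
    static long fold(long x, long y) {
        int x1 = (int) (x >>> 32), x0 = (int) x;
        int y1 = (int) (y >>> 32), y0 = (int) y;
        int hi = x1 ^ y0;
        int lo = x0 ^ y1 ^ y0;
        return ((long) hi << 32) | (lo & 0xFFFFFFFFL);
    }

    /** rbl_m(x): rotate the upper word left by m bytes and the lower word by m+1 bytes. */
    static long rbl(long x, int m) {
        int hi = Integer.rotateLeft((int) (x >>> 32), (m * 8) % 32);
        int lo = Integer.rotateLeft((int) x, ((m + 1) * 8) % 32);
        return ((long) hi << 32) | (lo & 0xFFFFFFFFL);
    }
}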
With these functions, it is possible to implement an AES decryption routine with 81 cycles (8 cycles per round, 6 cycles setup, 3 cycles post-processing) and encryption
We note that the need for such functions mainly arises from AES round key generation.
with just 70 cycles (7 cycles per round, 1 cycle setup, 6 cycles post-processing); the asymmetry arises from the fact that AES, although symmetric in terms of cryptography, is asymmetric in terms of computation, as decryption needs a higher number of table lookups. Both routines include on-the-fly round-key generation. [10] elaborates on the AES analysis and on how the supporting functions were determined. It also presents the AES implementation on the Cryptonite architecture.
Fig. 2. Overview of the Cryptonite architecture
3 The Cryptonite Architecture

A high-level view of Cryptonite is pictured in Figure 2. As mentioned previously, application analysis led to the two-cluster architecture. Each cluster consists of an ALU and its accompanying data I/O unit (DIO) managing accesses to the cluster's local data memory. A crosslinking mechanism enables data exchange between the ALUs of both clusters. The overall system is controlled by the control unit (CU), which parses the instruction stream and generates control signals for all other units. A simple external access unit (EAU) provides an easy method to access or update the contents stored in both local data memories: on external access, the CU puts all other units on hold and grants the EAU access to the internal data paths. The CU also supplies a set of 16 registers for looping and conditional branching. Twelve of these are 8-bit counter registers; the remaining four are virtual registers reflecting the two ALUs' states: we use these registers to realize conditional branches on ALU results such as zero result return (BEQ/BNE or JZ/JNZ) or carry overflow/borrow (BCC/BCS or JC/JNC). The Cryptonite CU is depicted in Figure 3.

Fig. 3. The Cryptonite Control Unit

The use of special-purpose looping registers reduces register port pressure and routing issues. In addition, from our application analysis it was clear that most cryptographic algorithms have relatively small static loop bounds. In fact, data-dependent branching is rare (IDEA being one exception). Finally, the use of special-purpose registers in conjunction with a post-decrement loop counter strategy allows us to reduce the branch penalty to 1 cycle.

3.1 The Cryptonite ALU

Much effort was put into the development of the Cryptonite ALU. Our target clock frequency was 400 MHz in TSMC's 0.13 µm process. To reach this goal, we had to carefully balance the ALU's features (as required for the crypto algorithms) and its complexity (as dictated by technology constraints). One result of this tradeoff is that the number of 64-bit ALU registers in each cluster was limited to four. Based on our application analysis, this was judged sufficient. To compensate for the low register count, each register can be used either as one 64-bit quantity or as two individually addressable 32-bit quantities. The use of a 64-bit architecture was motivated by the requirements of both DES and AES as well as parameterizable algorithms like RC6. To enable data exchange between both clusters, the first register of each ALU is crosslinked with the first register in the other cluster. This crosslink eases register pressure as it allows cooperative computation on both ALUs (this is critical for AES and MD5). To further reduce register pressure, each ALU employs an accumulator for storing intermediate results. The ALU itself consists of the arithmetic unit (AU) and a dedicated XOR unit (XU). The AU provides conventional arithmetic and boolean operations but also specialized functions supporting certain algorithms. It follows the common 3-address model with two source registers and one destination register. These registers are limited to the ALU's accumulator, the four ALU registers, the data input and output registers of the associated memory unit, and an immediate value provided by the CU. The XU may choose any number of up to 6 source operands. In addition, the XU may optionally complement the output of the XOR. Thus, with only one source operand, the XU can act like a negation unit.
Fig. 4. Overview of Cryptonite's ALU
The 64-bit results of AU and XU operations are placed on separate result buses. From these buses, either the upper 32-bits, lower 32-bits or the entire 64-bit value can be selected and stored in the assigned register (or register half). It is not possible to combine two individual 32-bit results from both result buses into one 64-bit register. Results may also be forwarded to the data unit. Figure 4 illustrates the Cryptonite ALU with its sub-units.
3.2 The Cryptonite Memory Unit

Access to local data memory is handled by the memory unit. It is composed of an address generation unit (AGU) and a data I/O unit (DIO). The address generation unit is depicted in Figure 5. It generates the address for local memory access using the local address registers (LAR). The AGU contains a small add/sub/and ALU for address arithmetic. This supports a number of addressing modes such as indexed, auto-increment, and wraparound table access, as listed in Table 2. Furthermore, the SBox addressing mode performs eight parallel lookups to the 64-bit memory with 8-bit indices individual to each lookup. For a detailed description of this addressing mode please refer to Section 2.3. The DIO, shown in Figure 6, contains two buffer registers, the data input and data output registers (DIR and DOR). They buffer data from and to local memory. The DOR can also be used as an auxiliary register by the ALU. The DIR also serves as the SBox index to the AGU.
Fig. 5. The Address Generation Unit

Table 2. Addressing modes supported by Cryptonite's AGU
direct: addr = LAR (LAR unchanged)
direct, w/ register modulo: addr = LARx; LARx = LARx % LARy
direct, w/ immediate modulo: addr = LARx; LARx = LARx % idx
S-Box: ∀ 0 ≤ i ≤ 7: addr_i = (LAR ∧ 0x7f00) ∨ idx_i (LAR unchanged)
immediate-indexed: addr = LAR; LAR = LAR + idx
immediate-indexed, w/ register modulo: addr = LARx; LARx = (LARx + idx) % LARy
register-indexed: addr = LARx; LARx = LARx + LARy
register-indexed, w/ immediate modulo: addr = LARx; LARx = (LARx + LARy) % idx
Note: some of these addressing modes are based on architectural side effects and have not been designed in on purpose.

Fig. 6. Data I/O from embedded SRAM
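To illustrate the post-updating addressing modes of Table 2, the following behavioral sketch models two of them; the number of local address registers and the calling convention are assumptions made only for this sketch.

final class AguSketch {
    int[] lar = new int[4];   // local address registers (the actual count is an assumption)

    /** "register-indexed, w/ immediate modulo": addr = LARx; LARx = (LARx + LARy) % idx */
    int registerIndexedModulo(int x, int y, int idx) {
        int addr = lar[x];
        lar[x] = (lar[x] + lar[y]) % idx;   // post-update keeps the pointer inside a wraparound table
        return addr;
    }

    /** "immediate-indexed": addr = LAR; LAR = LAR + idx (post-increment by a constant) */
    int immediateIndexed(int x, int idx) {
        int addr = lar[x];
        lar[x] = lar[x] + idx;
        return addr;
    }
}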
The DIO also contains a specialized DES unit. Fast DES execution not only requires highly specialized operations but also SBox access to memory. Hence the DES support instructions are integrated into the memory unit rather than into the ALU.

3.3 The Cryptonite DES Unit

As mentioned in Section 3.2, the DES unit has been implemented in the memory unit instead of the ALU. The reason for doing so was to avoid bloating the ALU with functions which are purely DES-specific and cannot be reused for other algorithms. Even the most primitive permutation, compression, and expansion functions required for DES computation are clearly algorithm-specific, as discussed in detail in [6]. In addition to these bit-shuffling functions, DES computation is based on a table-based transposition realized through an SBox lookup and therefore needs access to the data memory. For this reason, the DES unit has been placed directly into the memory unit rather than incorporated into the ALU. This way, no unnecessary complexity (i.e. die size and signal delay) is added to the ALU. Furthermore, penalty cycles resulting from data transfers from memory to the ALU and back are avoided.
Fig. 7. Cryptonite's DES Unit
The DES unit is pictured in Figure 7. It consists of data (L and R) and key (K) registers, a round counter and a constant memory of 16x2 bits providing the round constant for key shifting. The computation circuitry provides two selectable, monolithic functions performing the following operations:
1. expand Ri−1, shift and compress the key, and XOR the results;
2. permute the S-Box result using P-Box shuffling and XOR this result with Li−1; forward Ri−1 to Li.

These functions can be selected independently to enable either the initial computation (function 1) or the final computation (function 2) needed for the first and last round, or back-to-back execution (function 2 followed by function 1) for the inner rounds of computation.
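The data flow of an inner round, as realized by these two functions, can be sketched as follows. The permutation helpers are placeholders only (the real bit-level tables are fixed by the DES standard [4]), so this shows the structure, not a working cipher.

final class DesUnitSketch {

    static long[] innerRound(long l, long r, long key, int roundShift) {
        // Function 1: expand R, shift & compress the key, XOR the results.
        long expandedR = expand(r);                               // 32 -> 48 bits
        long roundKey  = compressKey(rotateKey(key, roundShift)); // 56 -> 48 bits
        long sboxInput = expandedR ^ roundKey;

        // S-Box lookup is performed via the memory unit (vectored access), stubbed here.
        long sboxOutput = sbox(sboxInput);                        // 48 -> 32 bits

        // Function 2: P-Box shuffle, XOR with L, forward old R to new L.
        long newR = l ^ pbox(sboxOutput);
        long newL = r;
        return new long[] { newL, newR };
    }

    // Placeholder permutations; the real tables are defined by the DES standard.
    private static long expand(long r)           { return r; }
    private static long compressKey(long k)      { return k; }
    private static long rotateKey(long k, int s) { return k; }
    private static long sbox(long in)            { return in; }
    private static long pbox(long in)            { return in; }
}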
4 Results
Several algorithms were investigated and implemented on a custom architecture simulator. Based on the simulation results, the architecture was fine-tuned to provide a minimum cycle count while maintaining maximum flexibility. In particular, the decision to incorporate the DES support instructions within the memory unit instead of the ALU (see Section 3.2) was directly motivated by simulation results. In this section we present the simulation results for a set of algorithm implementations run on our architecture simulator.
4.1 DES and 3DES
As Cryptonite employs a dedicated DES unit, the results for the DES [4] and 3DES implementations were not surprising. Cryptonite reaches a throughput of 732 MBit/s for DES and 244 MBit/s for 3DES. In contrast, the programmable CryptoManiac processor [28] achieves a performance of 68 MBit/s for 3DES. To quantify the tradeoff between programmability and performance, we give some performance numbers for DES hardware implementations. Hifn's range of cores ([16]: 7711 [11], 7751 [12], 7811 [13], and 790x [14,15]) achieves a performance of 143–245 MBit/s for DES and 78–252 MBit/s for 3DES. The OpenCores implementation of DES [27] achieves a performance of 629 MBit/s. Arguably the state-of-the-art DES hardware implementation is by SecuCore [26]. SecuCore's high-performance DES implementation (SecuCore DES/3DES Core [24]) achieves 2 GBit/s, just a factor of 2.73 better than Cryptonite. These results are summarized in Figure 8.
Fig. 8. DES and 3DES Performance Comparison

4.2 Advanced Encryption Standard (AES)

Figure 9 compares the AES performance of Cryptonite against a set of hardware implementations from Amphion [1], Hifn, and SecuCore, as well as the programmable CryptoManiac. Cryptonite running at 400 MHz outperforms a number of hardware implementations by a factor of 1.25 to 2.6. Compared with CryptoManiac, Cryptonite shows an almost two times better performance. This result justifies our decision to go for simple ALUs providing more specialized functionality. High-performance hardware AES implementations provided by Amphion (CS5210-40 High Performance AES Encryption Cores [2] and CS5250-80 High Performance AES Decryption Cores [3]) and SecuCore (SecuCore AES/Rijndael Core [23]) are able to outperform Cryptonite by a factor of 2.64. In addition, an extremely fast implementation from Amphion even reaches 25.6 GBit/s. This performance, however, is paid for with an enormous gate count (10x bigger than the other hardware solutions), which is why this version has not been included in the comparison chart shown in Figure 9.

4.3 MD5 Hashing Algorithm
Cryptonite performance on MD5 was 421 MBit/s at 400 MHz clock speed. It outperforms the Hifn hardware cores (7711 [11], 7751 [12], 7811 [13], and 790x [14,15]) 4
We remark that the CryptoManiac results appear to exclude round key generation whereas Cryptonite includes round key generation. In the RC4 discussion, [28] mentions impact from writing back into the key table. A similar note is missing for the AES implementation which suggests that only the main encryption algorithm (i.e. excluding round key generation) was coded. The cycle count of just 9 cycles per round without significant AES instruction support seems consistent with this assumption.
by factors of 1.12 to 7.02. SecuCore's high-performance MD5 core (SecuCore SHA-1/MD5/HMAC Core [25]) is a factor of 2.97 faster than Cryptonite, highlighting the programmability tradeoff. A comparison with CryptoManiac is omitted because the performance of MD5 is not reported in [28]. Figure 10 summarizes the results for MD5.

Fig. 9. AES-128/128 Performance Comparison

Fig. 10. MD5 Performance Comparison
5 Summary
We have presented Cryptonite, a programmable processor architecture targeting cryptographic algorithms. The starting point of this architecture was an in-depth application analysis in which we decomposed standard cryptographic algorithms down to their core functionality. The Cryptonite architecture has several novel features, including a distributed multi-input XOR unit and a parameterizable permutation unit built using a new form of vector-memory block. A central design constraint was simple implementation,
and many aspects of the architecture seek to reduce the port counts on register files, the number and width of internal buses, and the number and size of registers. In contrast, CryptoManiac has a number of implementation challenges, including a heavyweight 16-port register file. We expect the Cryptonite die size to be significantly smaller than that of CryptoManiac. A number of algorithms (including AES, DES, and MD5) were implemented on the architecture simulator with promising results. Cryptonite was able to outperform numerous hardware cores. It outperformed the programmable CryptoManiac processor by factors of between two and three at comparable clock speeds. To determine the tradeoff between programmability and dedicated high-performance hardware cores, Cryptonite was compared to cores from Amphion and SecuCore: these outperform Cryptonite by about a factor of 3.
References

1. Amphion Semiconductor Ltd. Corporate Web Site. 2001. http://www.amphion.com.
2. Amphion Semiconductor Ltd. CS5210-40 High Performance AES Encryption Cores Product Information. 2001. http://www.amphion.com/acrobat/DS5210-40.pdf.
3. Amphion Semiconductor Ltd. CS5250-80 High Performance AES Decryption Cores Product Information. 2002. http://www.amphion.com/acrobat/DS5250-80.pdf.
4. Ronald H. Brown, Mary L. Good, and Arati Prabhakar. Data Encryption Standard (DES) (FIPS 46-2). Federal Information Processing Standards Publication (FIPS), Dec 1993. http://www.itl.nist.gov/fipspubs/fip46-2.html (initial version from Jan 15, 1977).
5. Ronald H. Brown and Arati Prabhakar. FIPS 180-1: Secure Hash Standard (SHA). Federal Information Processing Standards Publication (FIPS), May 1993. http://www.itl.nist.gov/fipspubs/fip180-1.htm.
6. Rainer Buchty. Cryptonite – A Programmable Crypto Processor Architecture for High-Bandwidth Applications. PhD thesis, Technische Universität München, LRR, September 2002. http://tumb1.biblio.tu-muenchen.de/publ/diss/in/2002/buchty.pdf.
7. Jerome Burke, John McDonald, and Todd Austin. Architectural support for fast symmetric-key cryptography. In Proceedings of the 9th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2000), November 2000.
8. J. Daemen and V. Rijmen. The block cipher Rijndael, 2000. LNCS 1820, Eds: J.-J. Quisquater and B. Schneier.
9. J. Daemen and V. Rijmen. Advanced Encryption Standard (AES) (FIPS 197). Technical report, Katholieke Universiteit Leuven / ESAT, Nov 2001. http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf.
10. Dino Oliva, Rainer Buchty, and Nevin Heintze. AES and the Cryptonite Crypto Processor. CASES'03 Conference Proceedings, pages 198–209, October 2003.
11. Hifn Inc. 7711 Encryption Processor Data Sheet. 2002. http://www.hifn.com/docs/a/DS-0001-04-7711.pdf.
12. Hifn Inc. 7751 Encryption Processor Data Sheet. 2002. http://www.hifn.com/docs/a/DS-0013-03-7751.pdf.
13. Hifn Inc. 7811 Network Security Processor Data Sheet. 2002. http://www.hifn.com/docs/a/DS-0018-02-7811.pdf.
14. Hifn Inc. 7901 Network Security Processor Data Sheet. 2002. http://www.hifn.com/docs/a/DS-0023-01-7901.pdf.
15. Hifn Inc. 7902 Network Security Processor Data Sheet. 2002. http://www.hifn.com/docs/a/DS-0040-00-7902.pdf.
16. Hifn Inc. Corporate Web Site. 2002. http://www.hifn.com.
17. Jüri Pöldre. Cryptoprocessor PLD001 (Master's thesis). June 1998.
18. R. Rivest. RFC 1186: The MD4 Message-Digest Algorithm. October 1990. http://www.ietf.org/rfc/rfc1186.txt.
19. R. Rivest. The MD4 message digest algorithm. Advances in Cryptology – CRYPTO '90 Proceedings, pages 303–311, 1991.
20. R. Rivest. RFC 1321: The MD5 Message-Digest Algorithm, April 1992. http://www.ietf.org/rfc/rfc1321.txt.
21. Ronald L. Rivest, M. J. B. Robshaw, R. Sidney, and Y. L. Yin. The RC6 Block Cipher. August 1998. http://www.rsasecurity.com/rsalabs/rc6/.
22. Bruce Schneier. 13.9: IDEA. Angewandte Kryptographie: Protokolle, Algorithmen und Sourcecode in C, pages 370–377, 1996. ISBN 3-89319-854-7.
23. SecuCore Consulting Services. SecuCore AES/Rijndael Core. 2001. http://www.secucore.com/secucore_aes.pdf.
24. SecuCore Consulting Services. SecuCore DES/3DES Core. 2001. http://www.secucore.com/secucore_des.pdf.
25. SecuCore Consulting Services. SecuCore SHA-1/MD5/HMAC Core. 2001. http://www.secucore.com/secucore_hmac.pdf.
26. SecuCore Consulting Services. Corporate Web Site. 2002. http://www.secucore.com/.
27. Rudolf Usselmann. OpenCores DES Core. Sep 2001. http://www.opencores.org/projects/des/.
28. Lisa Wu, Chris Weaver, and Todd Austin. CryptoManiac: A fast flexible architecture for secure communication. In 28th Annual International Symposium on Computer Architecture (ISCA 2001), June 2001.
STAFF: State Transition Applied Fast Flash Translation Layer Tae-Sun Chung, Stein Park, Myung-Jin Jung, and Bumsoo Kim Software Center, Samsung Electronics, Co., Ltd., Seoul 135-893, KOREA {ts.chung,steinpark,m.jung,bumsoo}@samsung.com
Abstract. Recently, flash memory is widely used in embedded applications since it has strong points: non-volatility, fast access speed, shock resistance, and low power consumption. However, due to its hardware characteristics, it requires a software layer called FTL (flash translation layer). The main functionality of FTL is to convert logical addresses from the host to physical addresses of flash memory. We present a new FTL algorithm called STAFF (State Transition Applied Fast Flash Translation Layer). Compared to the previous FTL algorithms, STAFF shows higher performance and requires less memory. We provide performance results based on our implementation of STAFF and previous FTL algorithms.
1 Introduction
Flash memory has strong points: non-volatility, fast access speed, shock resistance, and low power consumption. Therefore, it is widely used in embedded applications, mobile devices, and so on. However, due to its hardware characteristics, it requires specific software support. The basic hardware characteristic of flash memory is its erase-before-write architecture [4]: in order to update data on flash memory, if the physical location was previously written, it has to be erased before the new data can be rewritten. Moreover, the size of the memory portion for erasing is not the same as the size of the memory portion for reading or writing [4], which is the main source of performance degradation in the overall flash memory system. Thus, system software called an FTL (Flash Translation Layer) [2,3,5,7,8,9] is required. The basic scheme of an FTL is as follows: using a logical-to-physical address mapping table, if the physical location corresponding to a logical address has previously been written, the input data is written to another physical location that has not been written yet, and the mapping table is updated. In applying an FTL algorithm to real embedded applications, there are two major concerns: storage performance and the memory requirement. In
Flash memory produced by Hitachi has different characteristics: there, the size of the memory portion for erasing is the same as the size of the memory portion for reading or writing.
terms of storage performance, since flash memory has the special hardware characteristics mentioned above, the overall system performance is mainly affected by the write performance. In particular, as the erase cost is much higher than the write and read costs, the number of erase operations needs to be minimized. Additionally, the memory requirement for the mapping information is important in real embedded applications. That is, if an FTL algorithm requires a large amount of memory for its mapping information, its cost cannot be accepted in embedded applications. In this paper, we propose a high-speed FTL algorithm called STAFF (State Transition Applied Fast FTL) for flash memory systems. Compared to the previous FTL algorithms, our solution shows higher performance and requires less memory. This paper is organized as follows. The problem definition and previous work are described in Section 2. Section 3 presents our FTL algorithm and Section 4 presents performance results. Finally, Section 5 concludes.
2 Problem Definition and Previous Work

2.1 Problem Definition
In this paper, we assume that the sector is the unit of read and write operations, and the block is the unit of the erase operation on flash memory. The size of a block is some multiple of the size of a sector. Figure 1 shows the software architecture of a flash file system; we consider the FTL layer in this architecture. The file system layer issues a series of read and write commands, each with a logical sector number, to read data from, and write data to, specific addresses of flash memory. The given logical sector number is converted to a real physical sector number of flash memory by the mapping algorithm provided by the FTL layer.
Fig. 1. Software architecture of flash memory system
Thus, the problem definition of the FTL is as follows. We assume that flash memory is composed of n physical sectors and that the file system regards flash memory as a set of m logical sectors, where m is less than or equal to n.
Definition 1. Flash memory is composed of blocks, and each block is composed of sectors. Flash memory has the following characteristic: if a physical sector location on flash memory was previously written, it has to be erased at block granularity before new data can be written there. The task of the FTL algorithm is to produce the physical sector number in flash memory from the logical sector number given by the file system.
2.2 Previous FTL Algorithms
Sector Mapping. The first, intuitive algorithm is sector mapping [2]. In sector mapping, if there are m logical sectors seen by the file system, the raw size of the logical-to-physical mapping table is m. Figure 2 shows an example of sector mapping. In the example, a block is composed of 4 sectors and there are 16 physical sectors. If we assume that there are 16 logical sectors, the raw size of the mapping table is 16. When the file system issues the command "write some data to LSN (Logical Sector Number) 9", the FTL algorithm writes the data to PSN (Physical Sector Number) 3 according to the mapping table. If that physical sector location was previously written, the FTL algorithm looks for another sector that has not been written yet. If it finds one, it writes the data to that sector and changes the mapping table. If it cannot find one, a block has to be erased, the corresponding sectors have to be backed up, and the mapping table has to be changed.
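As a rough illustration of this lookup-and-relocate scheme, the following C sketch shows what a sector-mapping write path might look like; the table layout, the helper functions (find_free_sector, erase_block_and_relocate), and the flash primitives are hypothetical and not taken from the paper.

#include <stdint.h>

#define NUM_LSN     16           /* logical sectors, as in the example of Figure 2 */
#define INVALID_PSN 0xFFFF

static uint16_t map[NUM_LSN];    /* logical-to-physical sector table; assumed to be
                                    rebuilt from flash metadata at mount time       */

/* Hypothetical flash primitives and helpers (not taken from the paper). */
extern int      flash_sector_is_written(uint16_t psn);
extern void     flash_write_sector(uint16_t psn, const void *data);
extern uint16_t find_free_sector(void);                  /* INVALID_PSN if none     */
extern uint16_t erase_block_and_relocate(uint16_t lsn);  /* erase + back up sectors */

/* Sector-mapping write: use the mapped sector if it is still unwritten,
   otherwise relocate the data to a free sector and update the table.     */
void ftl_write_sector_mapping(uint16_t lsn, const void *data)
{
    uint16_t psn = map[lsn];

    if (psn == INVALID_PSN || flash_sector_is_written(psn)) {
        uint16_t free_psn = find_free_sector();
        if (free_psn == INVALID_PSN)
            free_psn = erase_block_and_relocate(lsn);
        psn = free_psn;
        map[lsn] = psn;          /* change the mapping table */
    }
    flash_write_sector(psn, data);
}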
Fig. 2. Sector mapping
Block Mapping. Since the sector mapping algorithm requires a large amount of mapping information, the block mapping FTL algorithm [3,5,8] was proposed. The
basic idea is that the logical sector offset within a logical block corresponds to the physical sector offset within the physical block. In the block mapping method, if there are m logical blocks seen by the file system, the raw size of the logical-to-physical mapping table is m. Figure 3 shows an example of the block mapping algorithm. If we assume that there are 4 logical blocks, the raw size of the mapping table is 4. When the file system issues the command "write some data to LSN (Logical Sector Number) 9", the FTL algorithm calculates the logical block number (9/4=2) and the sector offset (9%4=1), and then it finds the physical block number (1) according to the mapping table. Since the physical sector offset is identical to the logical sector offset (1), the physical sector location can be determined. Although the block mapping algorithm requires only a small amount of mapping information, if the file system issues write commands to the same sector frequently, the performance of the system degrades, since all sectors of the block have to be copied to another block.
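A minimal C fragment illustrating the address arithmetic just described (logical block number = lsn / sectors per block, sector offset = lsn % sectors per block); the table and constant names are illustrative assumptions.

#define SECTORS_PER_BLOCK  4     /* as in the example of Figure 3 */
#define NUM_LOGICAL_BLOCKS 4

static unsigned block_map[NUM_LOGICAL_BLOCKS];  /* logical -> physical block number */

/* Block mapping keeps the sector offset identical in both blocks,
   so only the block number has to be translated.                   */
unsigned lsn_to_psn(unsigned lsn)
{
    unsigned lbn    = lsn / SECTORS_PER_BLOCK;   /* e.g. 9 / 4 = 2 */
    unsigned offset = lsn % SECTORS_PER_BLOCK;   /* e.g. 9 % 4 = 1 */
    unsigned pbn    = block_map[lbn];            /* e.g. 1, from the mapping table */

    return pbn * SECTORS_PER_BLOCK + offset;
}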
Fig. 3. Block mapping
Hybrid Mapping. Since both sector and block mapping have disadvantages, the hybrid technique [7,9] was proposed. The hybrid technique first uses block mapping to find the physical block corresponding to the logical block, and then sector mapping is used to find an available sector within that physical block. Figure 4 shows an example of the hybrid technique. When the file system issues the command "write some data to LSN (Logical Sector Number) 9", the FTL algorithm calculates the logical block number (9/4=2) and then finds the physical block number (1) according to the mapping table. After finding the block number, any available sector can be chosen for writing the input data. In the example, since the first sector in block 1 is empty, the data is written to the
first sector. In this case, since the logical sector offset within the logical block is not identical to the physical sector offset within the physical block, the logical sector number (9) has to be stored in the sector as well. When reading data from flash memory, the FTL algorithm first finds the physical block number from the logical block number according to the mapping table, and then, by scanning the logical sector numbers stored within the physical block, it can read the requested data.
Fig. 4. Hybrid mapping
Comparison. We can compare the previous FTL algorithms from two points of view: the read/write performance and the memory requirement for mapping information. The read/write performance of the system can be measured by the number of flash I/O operations, since the read/write performance is I/O bound. If we assume that the access cost of the mapping table in each FTL algorithm presented in the previous section is zero, since it resides in RAM, the read/write cost can be measured by the following equations:

Cread = x Tr    (1)
Cwrite = p (Tr + Tw) + (1 − p) (Tr + (Te + Tw) + Tc)    (2)
Here, Cread and Cwrite are the read and write costs, respectively, and Tr, Tw, and Te are the flash read, write, and erase costs. Tc is the copy cost needed to move the sectors of a block to another free block before erasing and to copy them back after erasing. p is the probability that the write command does not require an erase operation. We assume that the input logical sector within the logical block is mapped to one physical sector within one physical block.
In the sector and block mapping methods, the variable x in equation (1) is 1, because the sector location to be read can be found directly from the mapping table. However, in the hybrid technique, the value of x lies in 1 ≤ x ≤ n, where n is the number of sectors within a block. This is because the requested data can be read only after scanning the logical sector numbers stored in flash memory. Thus, hybrid mapping has a higher read cost than sector and block mapping. In the write case, we assume that a read operation is needed before writing in order to see whether the corresponding sector can be written; therefore Tr appears in equation (2). Since Te and Tc are high costs compared to Tr and Tw, the variable p is the key factor in the write performance. Sector mapping has the smallest probability of requiring an erase operation and block mapping has the largest. The other comparison criterion is the memory requirement for mapping information, shown in Table 1. Here, we assume a 128MB flash device that is composed of 8192 blocks, each block consisting of 32 sectors [4]. In sector mapping, 3 bytes are needed to address all sectors; in block mapping, 2 bytes are sufficient. In hybrid mapping, 2 bytes are needed for block mapping and 1 byte for sector mapping within a block. Table 1 shows that block mapping is superior to the others.

Table 1. Memory requirement for mapping information

Mapping          Addressing (B: Byte)   Total
Sector mapping   3B                     3B * 8192 * 32 = 768KB
Block mapping    2B                     2B * 8192 = 16KB
Hybrid mapping   2B + 1B                2B * 8192 + 1B * 32 * 8192 = 272KB
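The small program below evaluates equations (1) and (2) and reproduces the totals of Table 1 for the 128MB device (8192 blocks of 32 sectors); the concrete cost values for Tr, Tw, Te, and Tc are placeholders chosen only for illustration (using the 1:7:63 ratio quoted later in Section 4.2), not measurements.

#include <stdio.h>

/* Cost model of Section 2.2:
 * Cread  = x * Tr,
 * Cwrite = p * (Tr + Tw) + (1 - p) * (Tr + Te + Tw + Tc).          */
static double c_read(double x, double Tr) { return x * Tr; }

static double c_write(double p, double Tr, double Tw, double Te, double Tc)
{
    return p * (Tr + Tw) + (1.0 - p) * (Tr + Te + Tw + Tc);
}

int main(void)
{
    /* Assumed relative costs; Tc as block copy cost is a placeholder. */
    const double Tr = 1.0, Tw = 7.0, Te = 63.0, Tc = 4 * (Tr + Tw);

    printf("Cread  (x = 1):   %.1f\n", c_read(1.0, Tr));
    printf("Cwrite (p = 0.9): %.1f\n", c_write(0.9, Tr, Tw, Te, Tc));

    /* Table 1: mapping memory for 8192 blocks of 32 sectors each. */
    long sector_map = 3L * 8192 * 32;              /* 768 KB */
    long block_map  = 2L * 8192;                   /*  16 KB */
    long hybrid_map = 2L * 8192 + 1L * 32 * 8192;  /* 272 KB */
    printf("sector %ldKB, block %ldKB, hybrid %ldKB\n",
           sector_map / 1024, block_map / 1024, hybrid_map / 1024);
    return 0;
}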
3 STAFF (State Transition Applied Fast FTL)
STAFF is our FTL algorithm for flash memory. The purpose of STAFF is to provide a device driver for flash memory with maximum performance and a small memory requirement.
3.1 Block State Machine
Compared to previous work, we introduce states for each block. A block in STAFF has the following states.
– F state: An F state block is in the free state. That is, the block is erased and has not been written.
– O state: An O state block is in the old state. The old state means that the contents of the block are not valid any more.
– M state: The M state block is in-order and not yet full. It is the first state reached from a free block, and its data is written in place; that is, the logical sector offset within the logical block is identical to the physical sector offset within the physical block.
– S state: The S state block is in-order and full. It is created by the swap merging operation, which will be described in Section 3.2.
– N state: The N state block is out-of-order; it is converted from an M state block.
Fig. 5. Block state machine
We have constructed a state machine according to the states defined above and the various events that occur during FTL operations. The state machine is formally defined as follows, using the automata notation of [6]. An automaton is denoted by a five-tuple (Q, Σ, δ, q0, F), and the meanings of the components are as follows.
– Q is a finite set of states, namely Q = {F, O, M, S, N}.
– Σ is a finite input alphabet; in our definition, it corresponds to the set of the various events during FTL operations.
– δ is the transition function which maps Q × Σ to Q.
– q0 is the start state, that is, the free state.
– F is the set of all final states.
Figure 5 shows the block state machine. The initial block state is the F state. When an F state block gets the first write request, it is converted to an M state block. The M state block can be converted to the two states S and N, according to specific events during FTL operations. The S and N state blocks are converted to O state blocks by the events e4 and e5, and the O state block is converted to the F state block by the event e6. A detailed description of the events is presented in Section 3.2.
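A compact C rendering of this state machine might look as follows; the mapping of the events e1–e6 to concrete transitions follows the description above and Figure 5, but the enum encoding and the assignment of e3 and e4 are assumptions made for illustration.

/* Block states of STAFF and the events e1-e6 of Figure 5. */
typedef enum { ST_F, ST_M, ST_S, ST_N, ST_O } block_state_t;
typedef enum { E1_FIRST_WRITE, E2_SWAP_MERGE, E3_OUT_OF_PLACE_WRITE,
               E4_REPLACED_BY_MERGE, E5_SMART_MERGE, E6_ERASE } event_t;

/* Transition function delta: Q x Sigma -> Q; events that are not
   defined for a state leave the state unchanged.                   */
block_state_t next_state(block_state_t q, event_t e)
{
    switch (q) {
    case ST_F: return (e == E1_FIRST_WRITE)       ? ST_M : q;    /* F -> M */
    case ST_M: if (e == E2_SWAP_MERGE)            return ST_S;   /* M -> S */
               if (e == E3_OUT_OF_PLACE_WRITE)    return ST_N;   /* M -> N */
               return q;
    case ST_S: return (e == E4_REPLACED_BY_MERGE) ? ST_O : q;    /* S -> O */
    case ST_N: return (e == E5_SMART_MERGE)       ? ST_O : q;    /* N -> O */
    case ST_O: return (e == E6_ERASE)             ? ST_F : q;    /* O -> F */
    }
    return q;
}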
3.2 FTL Operation
The basic FTL operations are read and write operations.
Algorithm 1 Write algorithm
1:  Input: Logical sector number (lsn), data to be written
2:  Output: None
3:  Procedure FTL write(lsn, data)
4:  if merging operation is needed then
5:      do merging operation;
6:  end if
7:  if the logical block corresponding to the lsn has an M or N state block then
8:      if the corresponding sector is empty then
9:          write the input data to the M or N state block;
10:     else
11:         if the block is the M state block then
12:             the block is converted to the N state block;
13:         end if
14:         write the input data to the N state block;
15:     end if
16: else
17:     get an F state block;
18:     the F state block is converted to the M state block;
19:     write the input data to the M state block;
20: end if
Write Algorithm. Algorithm 1 shows the write algorithm of STAFF. The input of the algorithm is the logical sector number and the data to be written. The first step of the algorithm is to check whether a merging operation is needed. We have two kinds of merging operation: swap merging and smart merging. The swap merging operation occurs when a write operation is requested to an M state block which has no more space. Figure 6-(a) shows how the swap merging operation is performed; here, we assume that a block is composed of 4 sectors. The M state block is converted to an S state block and one logical block is now mapped to two physical blocks. In Figure 5, the event e2 corresponds to the swap merging operation.
Fig. 6. Various write scenarios
The smart merging operation is illustrated in Figure 6-(c). It occurs when a write operation is requested to an N state block which has no more space. In the smart merging operation, a new F state block is allocated and the valid data in the N state block is copied to the F state block, which then becomes an M state block. In Figure 6-(c), since only the data corresponding to lsn 0 is valid, it is copied to the newly allocated block. The smart merging operation is related to the events e5 and e1 in Figure 5. In line 7 of Algorithm 1, if the logical block corresponding to the input logical sector number does not have an M or N state block, the logical block has not been written yet. Thus, a new F state block is allocated and the input data is written to it, turning it into an M state block (lines 17-19). If the logical block corresponding to the input logical sector number has an M state block or an N state block, the write algorithm checks whether the sector corresponding to the lsn is empty. If the sector is empty, the data corresponding to the lsn is written to it. Otherwise, the data is written to the N state block. In our write algorithm, a logical block can be mapped to at most two physical blocks. Thus, when there is no more space, one block is converted to an O state block. Figure 6-(b) shows this scenario. When allocating an F state block (line 17), if there is no free block available, the merging operation is performed explicitly and erase operations may also be needed.
Read Algorithm. Algorithm 2 shows the read algorithm of STAFF. The input of the algorithm is the logical sector number and the data buffer to read into. The data to be read may be stored in an M, N, or S state block. If a logical block is mapped to two physical blocks, the two physical blocks are either an S and an M state block or an S and an N state block. In this case, if the input lsn corresponds to both the S state block and the N or M state block, the data in the N or M state block is the valid one. If the M or N state block has no data corresponding to the lsn, the data may be stored in an S state block. Thus, the data is read from the S state block, or an error message is printed. When reading data from an N state block, since the data may be stored at a physical sector offset which is not identical to the logical sector offset, the valid data has to be located. The valid data can be determined according to the write algorithm. The detailed algorithm is omitted because it is trivial.
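The following hedged C fragment sketches the two merging operations described in this subsection (swap merging for a full M state block, smart merging for a full N state block); the structure fields and helper functions are assumptions introduced only to make the control flow concrete.

#include <stdbool.h>

typedef enum { STATE_F, STATE_M, STATE_S, STATE_N, STATE_O } blk_state_t;

struct phys_block { blk_state_t state; /* plus sector data and per-sector lsn info */ };

/* Hypothetical helpers, not part of the paper. */
extern bool block_is_full(const struct phys_block *b);
extern struct phys_block *allocate_f_block(void);   /* may merge/erase O blocks */
extern void copy_valid_sectors(struct phys_block *dst, const struct phys_block *src);

/* Performed at the start of FTL write (lines 4-6 of Algorithm 1). */
void merge_if_needed(struct phys_block *b)
{
    if (!block_is_full(b))
        return;

    if (b->state == STATE_M) {
        /* Swap merging: the full M block becomes an S block; the logical
           block will then be mapped to this S block and to a new M block. */
        b->state = STATE_S;
    } else if (b->state == STATE_N) {
        /* Smart merging: copy the valid data to a fresh F block, which
           becomes the new M block; the old N block is invalidated (O).   */
        struct phys_block *fresh = allocate_f_block();
        copy_valid_sectors(fresh, b);
        fresh->state = STATE_M;
        b->state = STATE_O;
    }
}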
4 Experimental Evaluation
4.1 Cost Estimation
As mentioned earlier, we can compare FTL algorithms from two points of view: the memory requirement for mapping information and the flash I/O performance. Since STAFF is based on block mapping, it requires little memory for mapping information, as presented in Section 2.2. Compared with the 1:1 block
Algorithm 2 Read algorithm
1:  Input: Logical sector number (lsn), data buffer to read
2:  Output: None
3:  Procedure FTL read(lsn, data buffer)
4:  if the logical block corresponding to the lsn has an M or N state block then
5:      if the block is the M state block then
6:          if the corresponding sector is set then
7:              read from the M state block;
8:          else
9:              if the logical block has an S state block then
10:                 read from the S state block;
11:             else
12:                 print: "the logical sector has not been written";
13:             end if
14:         end if
15:     else {the block is the N state block}
16:         if the sector is in the N state block then
17:             read from the N state block;
18:         else
19:             if the logical block has an S state block then
20:                 read from the S state block;
21:             else
22:                 print: "the logical sector has not been written";
23:             end if
24:         end if
25:     end if
26: else
27:     if the logical block has an S state block then
28:         read from the S state block;
29:     else
30:         print: "the logical sector has not been written";
31:     end if
32: end if
mapping technique presented in Section 2.2, STAFF is a hybrid of 1:1 and 1:2 block mapping. Additionally, the N state block has to keep sector mapping information. Regarding the flash I/O performance, the read/write cost can be measured by the following equations:
Cread = pM Tr + pN k1 Tr + pS Tr    (where pM + pN + pS = 1)    (3)

Cwrite = pfirst (Tf + Tw) + (1 − pfirst) [pmerge {Tm + pe1 Tw + (1 − pe1)(k2 Tr + Tw)} + (1 − pmerge) {pe2 (Tr + Tw) + (1 − pe2)(k3 Tr + Tw) + Tr + pMN Tw}]    (4)
where 1 ≤ k1, k2, k3 ≤ n. Here, n is the number of sectors within a block. In equation (3), pM, pN, and pS are the probabilities that the data is stored in an M, N, or S state block, respectively. In equation (4), pfirst is the probability that the write command is the first write operation to the input logical block and pmerge is the probability that the write command requires the merging operation. pe1 and pe2 are the probabilities that the input logical sector can be written to its in-place location with and without the merging operation, respectively. Tf is the cost of allocating a free block; it may require the merging and erase operations. Tm is the cost of the merging operation. Finally, pMN is the probability that the write operation converts the M state block to the N state block. When the write operation converts the M state block to the N state block, a flash write operation is needed to mark the state change. The cost function shows that read and write operations to the N state block require a few more flash read operations than those to the M or S state block. However, in flash memory the read cost is very low compared to the write and erase costs. Since Tf and Tm may require the flash erase operation, they are the dominant factors in the overall system performance. STAFF is designed to minimize Tf and Tm, which require the erase operation.
4.2 Experimental Results
Within the overall flash system architecture presented in Figure 1, we implemented various FTL algorithms and compared them. The physical flash memory layer is simulated by a flash emulator [1] which has the same characteristics as real flash memory. We compared three FTL algorithms: the Mitsubishi FTL [8], SSR [9], and STAFF. The Mitsubishi FTL algorithm is based on the block mapping algorithm presented in Section 2.2 and the SSR algorithm is based on hybrid mapping. We did not implement the sector mapping algorithm; it is not a realistic FTL algorithm since it requires too much memory. The FAT file system is widely used in embedded systems. Thus, we recorded the access patterns that the FAT file system on the Symbian operating system [10] issues to the block device driver when it receives a 1MB file write request. These access patterns are very similar to the real workload of embedded applications. Figure 7 shows the total elapsed time. The x axis is the test count and the y axis is the total elapsed time in milliseconds. At first, flash memory is empty, and it fills up as the iteration count increases. The result shows that STAFF has performance similar to hybrid mapping and much better performance than block mapping. Since STAFF requires much less memory than the hybrid mapping technique, it can be used efficiently in embedded applications. Figure 8 shows the erase count. The result is similar to that of the total elapsed time, because the erase count is a dominant factor in the overall system performance. In particular, when flash memory is empty, STAFF shows better performance. This is because STAFF delays the erase operation. That is,
Fig. 7. The total elapsed time (elapsed time in ms over the test count, for ssr, mitsubishi, and staff)
Fig. 8. Erase counts (erase count over the test count, for ssr, mitsubishi, and staff)
by using the O state block, STAFF delays the erase operation until there is no more space available. If the system provides concurrent operation, the O state blocks can be converted to F state blocks by another process in STAFF. In addition, STAFF shows consistent performance even when flash memory is fully occupied. Figure 9 and Figure 10 show the read and write counts, respectively. STAFF shows reasonable read counts and the best write counts. According to [4], the running time ratio of read (1 sector), write (1 sector), and erase (1 block) is approximately 1:7:63. Thus, STAFF is a reasonable FTL algorithm.
Fig. 9. Read counts (read count over the test count, for ssr, mitsubishi, and staff)
Fig. 10. Write counts (write count over the test count, for ssr, mitsubishi, and staff)
5 Conclusion
In this paper, we proposed a novel FTL algorithm called STAFF. The key idea of STAFF is to minimize the number of erase operations by introducing the concept of state transitions for the erase blocks of flash memory. That is, according to the input patterns, the state of an erase block is converted to the appropriate state, which minimizes the erase operations. Additionally, we provided low-cost merging operations: swap and smart merging. Compared to previous work, our cost function and experimental results show that STAFF has reasonable performance and requires fewer resources.
For future work, we plan to generate intensive workloads from real embedded applications, and we can customize our algorithm according to these real workloads.
Acknowledgments. The authors wish to thank Jae Sung Jung and Seon Taek Kim for their FTL implementations. We are also grateful to the rest of the Embedded Subsystem Storage group for enabling this research.
References
1. Sunghwan Bae. SONA Programmer's guide. Technical report, Samsung Electronics, Co., Ltd., 2003.
2. Amir Ban. Flash file system, 1995. United States Patent, no. 5,404,485.
3. Amir Ban. Flash file system optimized for page-mode flash technologies, 1999. United States Patent, no. 5,937,425.
4. Samsung Electronics. Nand flash memory & smartmedia data book, 2002.
5. Petro Estakhri and Berhanu Iman. Moving sequential sectors within a block of information in a flash memory mass storage architecture, 1999. United States Patent, no. 5,930,815.
6. John E. Hopcroft and Jeffrey D. Ullman. Introduction to automata theory, languages, and computation. Addison-Wesley Publishing Company, 1979.
7. Jesung Kim, Jong Min Kim, Sam H. Noh, Sang Lyul Min, and Yookun Cho. A space-efficient flash translation layer for compactflash systems. IEEE Transactions on Consumer Electronics, 48(2), 2002.
8. Takayuki Shinohara. Flash memory card with block memory address arrangement, 1999. United States Patent, no. 5,905,993.
9. Bum soo Kim and Gui young Lee. Method of driving remapping in flash memory and flash memory architecture suitable therefore, 2002. United States Patent, no. 6,381,176.
10. Symbian. http://www.symbian.com, 2003.
Simultaneously Exploiting Dynamic Voltage Scaling, Execution Time Variations, and Multiple Methods in Energy-Aware Hard Real-Time Scheduling Markus Ramsauer Chair of Computer Architecture (Prof. Dr.-Ing. Werner Grass) University of Passau, Innstrasse 33, 94032 Passau, Germany
[email protected] Abstract. In this paper we present a novel energy-aware scheduling algorithm that simultaneously exploits three effects to yield energy savings. Savings are achieved by using dynamic voltage scaling (DVS), by exploiting the flexibility provided by slack time, and by dynamically selecting for each task one of several alternative methods that can be used to implement the task. The algorithm is split into two parts. The first part is an off-line optimizer that prepares a conditional scheduling precedence graph with timing conditions defining, for any decision point in time, which branch should be taken depending on the elapsed time. The second part is an efficient runtime dispatcher that evaluates the timing conditions. This separation of optimization complexity and runtime efficiency allows our algorithm to be used on mobile devices that have only small energy resources and are driven to their limits by the applications running on them, e.g., creating a video on a mobile phone. We show that our approach typically yields more energy savings than worst-case execution time based approaches while it still guarantees all real-time constraints. Our application model includes periodic non-preemptive tasks with release times, hard deadlines, and data dependencies. Multiple methods having different execution times and energy demands can be specified for each task, and an arbitrary number of processor speeds is supported.
1 Introduction
Today, there is an increasing number of mobile, battery-operated devices, and the applications being run on them are becoming more complex. A typical scenario is the transmission of live video with a multimedia-capable mobile phone. The phone may use an embedded processor that provides a choice between two speed modes which trade processor speed for energy consumption. This allows for an economically sound usage of the restricted energy provided by the phone's small battery. We provide a scheduling algorithm that exploits the limited processor speed to execute processor-intensive applications while it ensures that all hard real-time constraints are met and as little energy as
possible is used. Mobile devices are usually equipped with small, low-capacity batteries to keep the device itself small and lightweight. Therefore great attention must be paid to the energy consumption of the device to ensure long periods of operation without recharging the battery. Today, many processors can be configured to run at different speeds by changing the processor's supply voltage. The processor's speed approximately doubles if the voltage is doubled, while the power consumption quadruples. As modern processors are not fully utilized by some real-time applications, this effect can be used to save energy by reducing the supply voltage whenever possible. For example, if there is enough time to finish a task in half-speed mode, we need only half the energy that would be needed in full-speed mode: the processor needs twice the time to complete the task, but it consumes only one fourth of the power it would consume at full speed. Of course, preserving energy is not the user's main concern. He also wants to fully exploit the mobile device's computational power to run very demanding real-time applications such as live video transmission. These applications include tasks with deadlines and release times, and therefore it is not sufficient to simply execute tasks as fast as possible. Tasks must be scheduled at the right time; e.g., an image cannot be encoded before it is provided by the camera hardware, and it has to be sent before the next image arrives to avoid stuttering. Although these applications may demand the full processor speed in the worst case, they usually do not need the full processor speed all the time in the average case. Slack time of preceding tasks can be used to reduce the speed for succeeding tasks, and thus energy can be saved. Additional energy can be saved by dynamically choosing between methods that implement a task. We allow different implementations to be specified for a task, and during runtime we select the implementation that should be executed. Thereby, energy savings can be achieved even if the processor does not support multiple speed modes, because implementations differ not only in their worst-case execution time but also in their average execution time. We schedule the implementation with the lower expected execution time to save energy if possible, and we use the implementation with the lower worst-case execution time to meet deadlines if timing constraints are tight. Our scheduling algorithm is customized to schedule demanding real-time applications with release times and hard deadlines on energy-restricted mobile devices. It simultaneously exploits the flexibility provided by the processor speed modes, it uses an explicit model of the tasks' slack times, and it allows different implementations to be specified for tasks. We statically calculate an optimized data structure that we interpret dynamically to select the best task, the best processor speed, and the best implementation to schedule. The paper is organized as follows: First, we give some references to related work. Then we introduce our underlying model before we describe our scheduling algorithm. In Section 5, we present benchmark results and we finish with a conclusion and an outlook in Section 6.
2 Related Work
Energy-aware scheduling is getting increasing attention. An introduction to processor power consumption and processor features can be found in Wolf [1]. Concrete power and energy specifications of processors from Intel and Transmeta are available, e.g., in [2,3]. Unsal and Koren [4] give a survey of current power- and energy-related research. They advocate energy-aware scheduling techniques to cope with several problems in modern system design. Our work addresses several properties of real-time systems they state: real-time systems are "typically battery-operated and therefore have a limited energy budget", they are "relatively more time-constrained compared to general-purpose systems", and they are "typically over-designed to ensure that the temporal deadline guarantees are still met even if all tasks take up their worst-case execution time (WCET) to finish". Additionally, they propose exploiting the differences in the execution time distributions of different implementations (multiple methods) instead of relying on WCET-based analysis. Our design-to-time algorithm [5], which is the basis for the algorithm presented here, has been specifically designed to do so. We will show the potential of exploiting execution time variations and multiple methods to save energy by adapting and applying our algorithm to energy-aware scheduling. Melhem et al. [6] give an overview of, and a classification for, power-aware scheduling algorithms. They propose a concept of power management points (PMPs) to reduce the energy consumption of applications on systems with variable processor speeds. Simulated annealing has become a well-known technique for very complex combinatorial optimization problems [7]. It has successfully been used for scheduling [5,8,9,10] and it is applied to energy-aware scheduling in this work. We use it to optimize applications that are too complex to be handled within a sufficiently short time by our exhaustive optimizer. Our design-to-time algorithm matches the hard real-time system constraints given above: it exploits DVS-based (Dynamic Voltage Scaling) energy savings, utilizes slack time, and considers differences in execution time histograms to select the energy-optimal implementation dynamically. Our dispatcher algorithm fits very well into the PMP concept: we define that a PMP is reached every time a task finishes, and the processor speed/voltage can be adjusted at each PMP.
3 Application Model
An application (see Figure 1) is made up of periodic tasks. A task instance may have a relative release time as well as a relative hard deadline, both of which are measured from the beginning of the period of the task instance. Additionally, a task can be data-dependent on one or more tasks having the same period duration. Every task can be implemented by alternative methods, and only one of them has to be executed to fulfill the task. The execution time of a method may vary for different task instances. As the processor can run at different speeds, we do not specify a method's execution time. Instead, we specify the number of
Fig. 1. Live Video Application (the tasks resize, insert title, encode, and send, all with period 40ms and individual release times and deadlines; each task is implemented by one or more methods (bilinear, paste lines, jpg-1, jpg-2, transmit) with discrete instruction-unit distributions; the CPU offers the speed modes full (c=1.0) and half (c=0.5))
instruction units (IUs) that a method needs, and we define an instruction unit to be a certain number of CPU clock cycles (e.g., compare [6]).
Tasks. In our model, a periodic task Ti with period pTi represents an abstract piece of work, e.g., encode, that has to be done regularly. There may be real-time constraints such as a release time RT(Ti) and a hard deadline DL(Ti), both being relative to the beginning of every task instance's period. In our example in Figure 1, the task send has a deadline of 40ms to ensure the transmission of 25 images per second. A task may be data-dependent on other tasks, e.g., there is a data dependency DD(encode, send). The meaning of a task Ti being data-dependent on a task Tj is that the first instance of Ti is data-dependent on the first instance of Tj, the second instance of Ti on the second instance of Tj, etc.
Methods. A method is an implementation of an algorithm that is able to perform a task. The number of instruction units needed by a method to complete varies for several reasons. Variation can be caused by user input during runtime, or because the complexity of the calculation depends on the input data. To reflect these variations, we do not restrict our model to the specification of worst-case values; instead, every method's number of instruction units is specified by means of a discrete probability distribution. Figure 2 shows the probability distribution of a jpg encoder (jpg encoder 1) which needs 6 instruction units or less in 90 percent of its executions and 7 to 12 instruction units in the remaining cases. Methods differ in the number of instruction units they need to complete. Moreover, the number of instruction units that a method needs depends on the input data. E.g., see Figure 2: the first method has a lower average number of instruction units to complete than the second one, but the
Fig. 2. Probability Distributions of two Methods (probability over IUs for jpg encoder 1, average 6.6 IUs, and jpg encoder 2, average 8.4 IUs)
first method's worst-case number of instruction units is higher than the second one's. Although we can expect a lower energy consumption for the first method, we have to use the second method to guarantee that the deadline is met if the number of instruction units remaining before the deadline is less than 12. Of course, if the number of remaining instruction units is 12 or more, we will prefer the first method to save energy.
Processor. The processor is responsible for the execution of methods. It can have several speed modes s = (c(s), ei(s), eb(s)) which differ in speed c(s) and in energy consumption per time unit during idle time ei(s) and busy time eb(s). Speed is the number of instruction units that are executed per time unit. Due to Pdynamic = CL · NSW · VDD² · f (see [11]), the dynamic power consumption Pdynamic quadruples if the supply voltage VDD is doubled. As the processor's speed (clock frequency) changes linearly with the supply voltage, the power is proportional to the square of the speed. Thus executing a method at a lower speed saves energy although the method needs more time. We derive the actual execution time et of a method m in mode s as et = (IUa / c(s)) · timeunit, where IUa is the actual number of instruction units that m needs to complete. Our example processor in Figure 1 has two speed modes: full speed (1, 0.4, 4) is a mode that executes one instruction unit per time unit with an energy consumption of 0.4 energy units per time unit of idle time and 4 energy units per time unit of busy time. The mode half speed (0.5, 0.1, 1) executes half an instruction unit per time unit with an energy consumption of 0.1 energy units per time unit of idle time and 1 energy unit per time unit of busy time. Thus, in the first mode one instruction unit needs one unit of time and 4 units of energy, and in the second mode one instruction unit needs two units of time and 2 units of energy. This means that if we have enough time to perform an instruction unit in the slower mode, we can reduce the energy consumption by 2 energy units. In our example we define one instruction unit as the number of CPU clock cycles performed in one millisecond if the CPU is set to its maximum speed, and we define one time unit to be 1ms.
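To make the speed-mode model concrete, the following C sketch computes the execution time and the busy energy of a method under the two example modes, full speed (1, 0.4, 4) and half speed (0.5, 0.1, 1); the structure and function names are assumptions, and the instruction-unit count used is the expected value of jpg encoder 1 from Figure 2.

#include <stdio.h>

/* A speed mode s = (c, e_idle, e_busy): IUs per time unit, energy per idle
   time unit, energy per busy time unit (one time unit = 1 ms).             */
struct mode { double c, e_idle, e_busy; };

/* Execution time and busy energy of a method needing `ius` instruction units. */
double exec_time(double ius, struct mode s)   { return ius / s.c; }
double busy_energy(double ius, struct mode s) { return exec_time(ius, s) * s.e_busy; }

int main(void)
{
    struct mode full = { 1.0, 0.4, 4.0 };   /* full speed from the example */
    struct mode half = { 0.5, 0.1, 1.0 };   /* half speed from the example */

    /* Expected IUs of jpg encoder 1: 0.9*6 + 0.1*12 = 6.6 (cf. Figure 2). */
    double ius = 0.9 * 6 + 0.1 * 12;

    printf("full speed: %.1f ms, %.1f energy units\n",
           exec_time(ius, full), busy_energy(ius, full));
    printf("half speed: %.1f ms, %.1f energy units\n",
           exec_time(ius, half), busy_energy(ius, half));
    return 0;   /* 6.6 ms / 26.4 units at full speed vs. 13.2 ms / 13.2 units */
}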
Energy. Our aim is to minimize the expected energy consumption E per hyperperiod.¹ To calculate E we sum up the expected energy consumptions of all task instances within one hyperperiod, which are in fact the energy consumptions of the methods used to perform the task instances. A method's energy consumption depends on the number of instruction units it executes, and on the energy consumption and speed of the current processor mode, which determines the actual time needed to complete the method.² Thus, we compute the expected energy consumption E(Ti) of the task instance Ti being performed in processor mode si with method m(Ti) as:

E(Ti) = idle(Ti) · ei(si) + (iu(m(Ti)) / c(si)) · eb(si),    (1)

where iu(m(Ti)) is the average number of instruction units method m(Ti) needs and idle(Ti) is the time during which the processor is idle. This may be the time between the finishing time of the preceding task instance and the beginning of Ti, or before the beginning of the next hyperperiod. The first may happen, e.g., if the release time of Ti is later than the finishing time of the preceding task instance. The second is the case if the last task instance of a hyperperiod does not need all the time left before the next hyperperiod starts.
Example Application. In our accompanying hypothetical application (Figure 1), the processor is an embedded two-speed-step processor in a multimedia-capable mobile phone that is equipped with a camera. The user wants to send a video of himself to a friend's mobile phone, and he can insert text information, e.g. date and time, in real time by pressing buttons on the phone. The application is modeled by four tasks: resize (the pictures taken by the camera are too big for transmission), insert text (date and time code), encode (compress each image to save bandwidth), and send. Resize is implemented as a bilinear scaling algorithm that completes in a constant number of instruction units. Insert text is implemented by copying the date and/or time strings into the image, and it needs three instruction units per string and one unit for overhead operations. As the user changes the information that has to be inserted during runtime, the method has a non-deterministic
¹ We do not consider the energy consumption of the device's memory. The memory subsystems can dissipate as much as 50% of total power in embedded systems [12], but the data-memory size is not changed by using multiple methods. Clearly, storing multiple methods increases the size of the instruction memory, but power dissipation does not grow if alternative methods for the same task are stored on different memory banks. Only the memory bank for the chosen method gets activated by the dispatcher. All other memory banks can be put to sleep mode and do not consume additional energy if only dynamic power is considered [13].
² We do not consider the effects of cache misses, memory stall cycles, or main memory energy consumption. We assume averaged power consumptions for the specified speed modes.
demand of instruction units. JPG encoding is the most complex task. It has been implemented by two methods which both need a non-deterministic number of instruction units to complete, depending on the input data. The first algorithm has a lower expected but a higher worst-case number of instruction units than the second one. Thus, it is preferable to use the first method to save energy if enough time is available. The number of instruction units needed by the transmit method depends on the amount of data that has to be sent and the actual transmission bandwidth. Therefore, this method also needs a non-deterministic number of instruction units to complete. As we want to transmit 25 frames per second to deliver fluent video, the hard deadline for the send task is 40ms, which in this example is also the period duration of all tasks. Therefore, the hyperperiod is 40ms, too, and for every task exactly one instance has to be scheduled per hyperperiod. The release time of the resize task is zero ms, because a period starts when a picture has been taken. Starting with this release time and deadline, we derive the effective release times and deadlines shown in Figure 1. The probability distributions of the methods' numbers of instruction units are also shown, and the processor executes one instruction unit per millisecond in full-speed mode and half an instruction unit per millisecond in half-speed mode. The energy consumptions per millisecond for the two modes are 4 energy units in full-speed mode and 1 energy unit in half-speed mode.
4 Scheduling Algorithm
Our scheduling algorithm is a two-part approach. First, we calculate an optimal CDAG (conditional directed acyclic graph). The CDAG is a conditional precedence graph. The second part is a very efficient dispatcher that executes task instances according to the information stored in the CDAG. Each node of the CDAG specifies which task instance and method to schedule at which speed for a given condition. A condition is described by the set of instances that still have to be scheduled and the maximum time that is allowed to be consumed by the predecessors. Therefore, the root node of a CDAG contains the task instance, method, and speed to start scheduling with under the initial condition (maximum elapsed time 0, ALL INSTANCES). If a method has finished at a certain time t, the dispatcher branches to the current node's son whose time condition specifies the smallest value that is bigger than t. During optimization the time conditions of a node's sons are calculated by adding the possible execution times of the specified method to the maximum of the node's time condition and the specified task's release time. Figure 3 shows a possible energy-optimal CDAG for the live video application. The root node of the CDAG has starting time 0, and all task instances {resize, insert title, encode, send} have to be scheduled. The third row in the root node tells the dispatcher to start scheduling with task resize using method bilinear and to set the processor to speed mode half. The last row shows the expected energy consumption of the sub-CDAG of which the node is the root. This value is used during optimization only (see formula 2 at the end of this section). After the method
Fig. 3. Energy-optimal CDAG for the live video application (root node: condition time 0 with instances {resize, insert title, encode, send}; schedule resize with method bilinear at half speed; expected energy E: 34.89)

Fig. 8. Fig. 8 shows sample JSP code according to the JSR 168 (version 1.0) specification (a render URL carrying the portlet:param action="expand" behind an "Expand Search Mask" link).
In general, this section points out that the separation of task state and view state affects the implementation of Web applications in two ways: both the dynamic information about the current state (which in this case is represented by JavaBeans objects) and the static state transition information (which we suggested storing in XML files) have to be subdivided into their task state and view state components.
5 Additional Benefits of the Separation between Task State and View State
The previous section describes a general solution for designing task-based pervasive computing applications. We now discuss some additional benefits of our approach, which provides a solution to some typical pervasive computing issues.
Changing Devices "on the Fly". The strict separation of task state and view state allows the implementation of a rather advanced feature that moves the behavior of the application closer to the vision of pervasive and ubiquitous computing, namely that applications should seamlessly move from one device to another, tracking the user [6, 7]. Consider the situation that a user of a Web application submits a query to the employee search engine of his company on his PDA and alters the view state of the result list in a way that is particular to this gadget. As he enters his office, he doesn't want to enter the query again on his desktop PC, so he logs in, and in case the task state of the previous session and its value beans have been left unchanged, he is shown the result list in a desktop PC browser format, and not the search mask. This behavior could obviously not be realized if the two devices had different task models, e.g. different JSP file names, because even if the state of the application was handed over to the other device, it would not be guaranteed that this state (e.g. "somePdaSpecificResultList.jsp") is still defined for the new device (e.g. if the desktop browser only features a single "resultlist.jsp" file). The separation of task state and view state is the key to supporting such behavior. When using the method described in this paper, it is assured that the two devices have their task state models in common. So, a dynamic device change could be implemented by simply handing over the
"old" TaskStateBean to the "new" dispatcher class of the device. More precisely, the login procedure of the application would have to accomplish this task, and the "old" ViewStateBean could either be destructed or retained as a separate object to possibly support the use of a single Web application with multiple devices in succession. That is, the implementation effort would merely be limited to the provision of centralized management of TaskStateBeans (and value beans), but still the underlying application and business logic architecture would not have to be changed at all.
Using the XML Task Definition for Code Generation. Now, all the information required for both task state and view state transitions is completely accessible in the XML files. This allows partial automation of the development process as defined in Section 3. One way to utilize this information is to manually create one JSP file for each task state as specified in the mappings.xml file (e.g. including typical HTML or WML headers) and to create one subsection inside the appropriate JSP file for each view state based on the information in the viewmappings.xml file. (This is essentially the functionality of steps 2 and 6 in the algorithm of Section 3.) For small projects, manual execution of the algorithm might be sufficient. However, the task model included in the XML files described above is a good starting point for code generation, which could be helpful for larger projects with changing requirements. Furthermore, since the task state transition information is also included in the mappings.xml file, the development environment could offer the developer the appropriate set of actions that should be used in the respective JSP file. For example, the development environment might, based on the information given in the mappings.xml file, suggest that the searchmask.jsp file contains a link or a button with the action "startSearch". The viewmappings.xml data could also be used in a similar way. In the example of the finite view state automaton of the search mask (cf. Figure 2), the development environment could automatically generate navigation elements (e.g. a navigation bar) inside the search mask. This navigation element would offer the user the functionality to switch from one view to another, e.g. from the reduced to the expanded state. The main benefit of this approach is, on the one hand, less implementation effort and, on the other hand, assistance in maintaining a consistent user interface, because changes made to the XML files will be cascaded down to the JSP files.
6 Conclusions
This paper describes a development approach for Web applications that support multiple devices in a J2EE environment. The separation of task state and view state allows a more structured development approach and advanced run-time characteristics. In particular, device changes on the fly can be supported with little effort by retaining the device-independent task state information. The cost of adding support for another device to a given Web application involves merely the presentation and view state
layer. No changes have to be made to the task state transition model, while developers are still offered full control of design and usability (e.g. grouping and ordering) issues. However, this approach does not work for applications with device-specific task flow, e.g. if an internet shop requires a cellphone customer to sign another agreement before an order can be legally accepted. Furthermore, the XML file schemas used in the example are only suggestions and have not yet been formally specified.
References
1. E. Gamma, R. Helm, R. Johnson, J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.
2. G. Voegler, T. Flor: Die integrierte Client-Entwicklung für PC und Mobile Endgeräte – am Beispiel der Portaltechnologie. In: Klaus Dittrich, Wolfgang König, Andreas Oberweis, Kai Rannenberg, Wolfgang Wahlster (eds.): Informatik 2003 – Innovative Informatikanwendungen (Band 1). GI-Edition – Lecture Notes in Informatics (LNI), P-34. Köllen Verlag, Bonn, 2003, 207-211.
3. Sun Microsystems. Core J2EE Patterns. 2002. http://java.sun.com/blueprints/corej2eepatterns/Patterns/ServiceToWorker.html
4. G. Banavar, J. Beck, E. Gluzberg, J. Munson, J. Sussman, D. Zukowski. Challenges: An Application Model for Pervasive Computing. Proc. 6th ACM MOBICOM, Boston, MA, August 2000.
5. J. Burkhardt, H. Henn, S. Hepper, K. Rindtorff, T. Schäck. Pervasive Computing: Technology and Architecture of Mobile Internet Applications. Addison-Wesley, 2002.
6. U. Hansmann, L. Merk, M. Nicklous, T. Stober. Pervasive Computing Handbook. Springer, 2001.
7. G. Banavar, A. Bernstein. Software Infrastructure and Design Challenges for Ubiquitous Computing Applications. Communications of the ACM, December 2002.
8. F. Giannetti. Device Independence Web Application Framework (DIWAF). HP Labs. W3C Workshop on Device Independent Authoring Techniques, 25-26 September 2002, SAP University, St. Leon-Rot, Germany.
9. F. Paternò, C. Santoro. One Model, Many Interfaces. Proceedings Fourth International Conference on Computer-Aided Design of User Interfaces, pp. 143-154, Kluwer Academics Publishers, Valenciennes, May 2002.
10. L. Bergman, T. Kichkaylo, G. Banavar, J. Sussman. Pervasive Application Development and the WYSIWYG Pitfall. Lecture Notes in Computer Science, vol. 2254, pp. 157-172, May 2001.
Adaptive Workload Balancing for Storage Management Applications in Multi Node Environments Jens-Peter Akelbein and Ute Schröfel IBM Deutschland Speichersysteme GmbH, Hechtsheimer Str. 2, 55131 Mainz, Germany
[email protected] Abstract. Cluster file systems provide access for multiple computing nodes to the same data via a storage area network with the same performance as a local drive. Such clusters allow applications distributed over multiple nodes to access storage via parallel data paths. Storage management applications can also take advantage of this parallelism by distributing the data transfers over multiple nodes. This paper presents an approach for client/server-oriented backup solutions in which the workload is distributed adaptively over multiple client nodes for successive backups to a storage repository server. An outlook is given on how the method can also be applied to distribute the workload of several clients to multiple servers.
1 Introduction
For decades the capacities of hard disks have been growing fast, up to doubling each year. Currently, no end to this trend is foreseeable, and the amount of data being stored in some computing centers is growing even faster. By integrating multiple hard disks in RAID arrays or in enterprise storage servers, applications are enabled to store dozens of Terabytes or more. Striping data over multiple hard disks provides a high bandwidth for read and write accesses of up to several Gigabytes per second. Furthermore, storage area networks and cluster file systems like IBM GPFS, SGI XCFS, or Sun SAMFS give multiple computing nodes access to the same data. Applications distributed over several cluster nodes benefit from the parallelism in the underlying layers by a high overall performance. Growing storage capacity usually leads to a larger number of files and an increased average size of data objects (files and directories). The amount of time needed to access a single data object is related to the average seek time to position the heads. During the past two decades the duration of a single seek was reduced from 70 to a few milliseconds, while the capacity of hard disk drives was multiplied by four orders of magnitude (20MByte to 200GByte). Therefore, the time for scanning through a whole file system has grown over the years because of the number of objects it contains. This trend affects the scalability of incremental backup solutions like progressive incremental backup which need to scan each object for changes [1]. If the scan of a file system alone takes several hours because millions of objects have to be processed, the backup window extends unacceptably. The amount of data being backed up
also grows constantly. Combining both trends leads to the conclusion that traditional incremental backups of large file systems require unacceptable processing time. File servers storing millions of files already demonstrate such limitations.
1.1 Backup Methods for Large Amounts of Data
Several methods are known to address the scalability problem of incremental backups. Backing up images of logical volumes avoids scanning individual objects; nevertheless, all used data blocks of a logical volume need to be transferred. An incremental image backup removes the need to transfer all data. To provide such functionality, the file system or a component below this layer needs to record which blocks have changed since the last backup. Current implementations of the block-oriented layers of storage devices or software do not provide such functionality. Another approach is called journal-based backup. All file system activities are monitored to create a journal containing a list of all changed objects, so no scan is needed anymore to back up the incremental changes. This facility has to be integrated into the operating system, i.e. by integrating it into the file system code or by intercepting all file system activities. Many file systems already provide a journal which can also be used for backup purposes. Snapshot and mirroring facilities allow saving a state of a file system which will not change anymore. The original data remains online while a backup can be taken from the snapshot. Snapshot facilities reduce the offline time of the data to a few seconds. Nevertheless, a snapshot does not reduce the period of time needed for incremental backups. Parallelism in hardware and software can also be used to reduce the time of a backup by splitting the single task into parallel activities using multiple data paths. Current data protection products can use multiple processes or threads to back up data into storage repositories. For instance, scanning the file system, reading data, and writing them to tape are performed concurrently, each by its own thread. Furthermore, multiple instances of the same application can be started. If multiple applications are started on different nodes of a cluster, each node can back up another part of the storage by using parallel data paths. In such a configuration a higher total throughput can be achieved compared to a single-node configuration, depending on the parallelism of the underlying storage system and the number of disk drives it contains. All described methods are commonly known and used to back up data. This paper focuses on backups performed in parallel on multiple nodes (see Fig. 1). Such configurations are not easy to administer because the backup application on each node needs to be configured separately. Furthermore, no current data protection product provides autonomous workload balancing between multiple nodes. In the worst case, the configuration on all nodes has to be adjusted again by the system administrator to achieve a new workload distribution. Unfortunately, the amount of workload cannot be determined easily and a distribution is not easy to calculate. Therefore, a solution is required which auto-configures the workload distribution in a cluster and adapts the configuration to deviations of the
Fig. 1. Multiple backup application instances running on different nodes in a cluster allow copying different parts of a cluster file system via parallel data paths to backup media.
workload afterwards automatically by self-optimization. Section 2 gives an overview of what needs to be considered for workload balancing of backups in a cluster and how it can be integrated into existing technology. Section 3 presents a prototype solution and implementation details. The next section presents some results gained with the prototype and describes environments where performance improvements can be achieved. Section 5 gives an outlook on other application areas of adaptive workload balancing for data management applications in multi-node environments.
2 Workload Balancing of Parallel Backups in a Cluster

If parallelism is to be applied to backups of large amounts of shared data, the workload has to be divided into a number of subtasks. All subtasks are assigned to a set of nodes within the cluster. Therefore, a distribution of the workload has to be determined so that all nodes back up a similar amount of data. If a separation into subtasks without intersections can be achieved, n nodes can back up the whole set of data in 1/n of the time. Subtasks can be defined in different ways. This paper presents an approach that uses the hierarchy of file systems and the data objects they contain to define parts of it as subtasks. Whole file systems, directories, files, or even file stripes can define subtasks that are assigned to different nodes.

2.1 Separating the Backup Workload Adaptively

Compared to methods for dynamic workload balancing already used in network or parallel computing systems [3][4], a system that balances the workload for backups has to fulfill some other requirements. Online scheduling algorithms like the round-robin strategy assign discrete amounts of workload immediately. Backup events, in contrast, occur at discrete points in time: the backup itself lasts over a period of time and should fit into a given backup window. No data transfer occurs between two backups, so this time frame can be used for scheduling without any impact on the backups themselves. Therefore,
performing backups can be compared to scheduling a set of batch jobs on multiprocessor machines, where offline scheduling algorithms can be used. In contrast to scheduling batch jobs with a fixed amount of computing time, the scope of backup tasks and their duration can be varied by the algorithm which distributes the workload.

A static approach of separating the workload into subtasks distributes all subtasks only once, so a new assignment will not take place automatically. Typically, the configuration C0 is set up by a system administrator who identifies file systems or parts of them as subtasks. For all backup events E the workload of the subtasks is distributed over all nodes by the same assignment C0. If the workload deviates over time, a new configuration C1 has to be determined by the system administrator to balance the workload once again. With an increasing number of participating client nodes the configuration becomes quite complex and hard to administer.

Instead of configuring all client tasks manually, a new configuration can be determined by a scheduling algorithm. An offline scheduling approach allows the configuration C to be modified adaptively, while online scheduling leads to changing C dynamically during the backup. Adaptive workload balancing means that a new configuration Cn+1 is computed after each backup event En. In most cases it is not necessary to apply a new configuration after each backup. Instead, many reconfigurations can be avoided by specifying limits for an allowed deviation; a new configuration is only needed when a significant change of the workload or its distribution occurs.

Computing a new configuration needs the information about how much workload was generated by each subtask during past backups. Progressive incremental backup solutions using a database already provide most of this information by storing the size of each data object and whether it was backed up or not. Nevertheless, adaptive workload balancing does not depend on how this information is stored. It is taken as a set of statistical data available for each backup event Ei to compute new configurations without requiring a special format.

Adaptive methods change the configuration C between two backup events Ei-1 and Ei. In contrast, balancing the workload dynamically changes the configuration while the backup Ei itself is ongoing. Such an approach is very similar to workload balancing already known in other application areas. Therefore, three different approaches can be distinguished by the sets of statistical data of backup events they consider (see fig. 2):

1. Ei: dynamic workload balancing without considering historical data
2. E0, …, Ei: dynamic workload balancing considering historical data
3. E0, …, Ei-1: adaptive workload balancing based on historical data only
An algorithm cannot compute an ideal configuration if only the workload of the current backup Ei is considered, as in case 1, because the workload is only partly known before the backup is completed. The workload needs to be distributed dynamically while the backup takes place, leading to communication between the different nodes to coordinate the activities.
The communication overhead increases significantly as more nodes are added.
Fig. 2. Sequence of backup events E and the different workload balancing approaches with the scope of data they consider to compute a new workload distribution
Case 3 is based on historical data only, so the computation of a new configuration can take place between two backups. An ideal configuration for previous backups can be computed, which might also fit subsequent backups. No additional communication is needed within the backup window. On the other hand, the dynamic approach can try to balance the workload if it deviates significantly from previous backups; an adaptive solution like case 3 is not able to handle such situations appropriately. Therefore, case 3 is a very handy way to determine an appropriate configuration for environments where the workload of future backup events correlates well with that of previous backups. Case 2 combines the potential of both approaches. Solutions considering Ei are known from scheduling algorithms for CPU usage, network traffic workload balancing, and other application areas as well [5]. As no previous work was known in the literature about an approach applied to storage management for case 3, a prototype solution was built for adaptive workload balancing to evaluate its potential and drawbacks in typical customer environments [6].

2.2 Prototyping Based on Existing Technology

Backups have been performed for decades, and no one will leave data unprotected once it has been identified as important or mission critical. Many backup applications on the market are commonly known and have been in use for years. Therefore, one important goal for a new feature is that it can be integrated easily into existing technology. The prototype presented in this paper uses the IBM Tivoli Storage Manager (TSM) as an existing client/server-oriented storage management solution to perform the backups [2][7]. The TSM server manages all storage devices as a repository where data objects can be stored.
A database contains the relation between a data object and its location on the storage media. A TSM client resides on the machine which needs to be protected by creating a backup. Data objects are sent by the client to the server via a LAN.

A variety of historical data can be used as the input for a workload balancing algorithm. Typical information available for backups of file systems or directories includes:

• the number of objects contained in a file system or in each directory, NDirAll
• the number of objects backed up in each directory (new and modified objects), NDirDelta
• the total number of bytes stored in all objects, NBAll
• the number of bytes transferred during the backup, NBXfer
• the number of objects deleted since the last backup, NDirDel
• the number of subdirectories in a directory, NDirSub

The presented approach treats directories as the lowest level which can be assigned independently to a single client as a subtask. Files contained in a directory cannot be distinguished by the separation algorithm. One reason to limit the historical information to the directory level is the amount of data which would be created if every file had to be tracked. In addition, large environments tend to spread over thousands of directories or more. Therefore, a granularity at the directory level with a large number of individual workload measurements guarantees enough possibilities for separating the workload into equal parts. The following section shows how the functionality of existing data protection products needs to be extended to allow adaptive workload balancing. Furthermore, a modification of an existing algorithm is described which implements a method to distribute the backup operations.
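For illustration, the per-directory statistics listed above can be kept in a simple record. The following Python sketch uses field names that mirror the notation above (NDirAll, NDirDelta, and so on); the class itself is only an illustration and not part of the TSM prototype.

from dataclasses import dataclass

@dataclass
class DirStats:
    """Historical backup statistics kept per directory for one backup event."""
    path: str
    n_dir_all: int     # NDirAll: objects contained in the directory
    n_dir_delta: int   # NDirDelta: new and modified objects backed up
    nb_all: int        # NBAll: total bytes stored in all objects
    nb_xfer: int       # NBXfer: bytes transferred during the backup
    n_dir_del: int     # NDirDel: objects deleted since the last backup
    n_dir_sub: int     # NDirSub: number of subdirectories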
3 Functionality of the Prototype

Today's data protection products with a client/server architecture focus on backing up data objects from a single client node. For instance, a TSM server assigns a node name to a client based on its host name. All data from a client node can be found in file spaces associated with that node name. If the client node backing up the data changes over time, the server needs to be able to detect that the files being stored remain the same. The TSM client supports this by allowing another node name to be specified instead of the host name; all clients involved in a backup of shared data then use the same node name. Furthermore, single-node backup clients define their backup domain in their local configuration. If the backup domain is reconfigured, the backup copies of all objects no longer residing in the backup domain are expired on the TSM server and deleted later on. This must not happen for shared objects being backed up by another node in a multi-node environment. In addition, the path names of the shared file systems have to be the same on all client nodes so that a single object can be identified as the same one.
3.1 Separation of a File System Tree

Backup solutions typically offer options for including and excluding file spaces, directories, and files, defining the scope of a backup domain. If the backup workload is to be distributed among a set of clients on different nodes in a cluster, a similar mechanism is required to separate a shared backup domain into different parts. Therefore, separation points are defined which divide the shared domain into subtasks (see fig. 3). The subtasks are mutually exclusive, so no object is backed up twice. All subtasks assigned to a client define its local backup domain, and all local backup domains together sum up to the shared backup domain.

A client scans its backup domain by parsing the hierarchical tree structure of file systems and directories in linear order. The separation points intercept this scan so that only the local backup domain is parsed. Two additional options, CONTINUE <path name> and PRUNE <path name>, are required to define a separation point for the TSM client. A subtask is assigned to a client using the CONTINUE option with the path name of the root directory of the subtask. If the traversal through the file system reaches the next subtask, the scan has to be stopped by specifying PRUNE and the path name of that directory. All clients have to prune their scan at a separation point, while only one client continues its traversal. Therefore, each instance of the client gets its own separation list (see fig. 4 for an example of separation lists derived from the subtasks shown in fig. 3). The TSM client used by the prototype supports separation lists by reading a file containing the options.

A distribution of the workload W generated by a backup of a shared backup domain can address different goals. The workload can be balanced so that all clients need the same amount of time to back up their local domain. Another goal can be to transfer the same number of bytes for each client. Both approaches lead to equal workloads Wi = W / n for each of n client instances, regardless of whether the duration or the number of bytes is the base measurement W. By using a unitless workload value the same algorithm can achieve either goal through a transformation of the historical data. To balance the number of transferred bytes, this value is taken as the workload, W = NBXfer. The backup duration for a directory TDir can be measured directly by the client. If the duration itself cannot be determined, an approximation has to be made. For instance, a single TSM backup client allows using multiple sessions to the server for backing up a number of objects simultaneously. Such embedded parallelism within the client reduces the total backup duration, while the time used for a single object stays the same or even increases. Therefore, the prototype does not use real-time measurements, but uses the historical information to compute a workload value. One assumption is that scanning a single object requires a fixed amount of time TScan. In addition, the network bandwidth BCS for all transfers between the client instance and the server is assumed to be constant, i.e. no other application uses the same LAN connection at the same time. With these simplifications, the backup duration TDir can be estimated as

TDir = NDirAll * TScan + NBXfer / BCS
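As an illustration, this estimate can be written as a small Python function. The values assumed below for TScan and BCS are placeholders, not measured constants from the prototype.

def estimate_backup_duration(n_dir_all: int, nb_xfer: int,
                             t_scan: float = 0.001, bandwidth: float = 50e6) -> float:
    """Approximate the backup duration of one directory in seconds.

    Implements TDir = NDirAll * TScan + NBXfer / BCS from the text.
    t_scan (seconds per scanned object) and bandwidth (bytes per second
    between client and server) are illustrative values.
    """
    return n_dir_all * t_scan + nb_xfer / bandwidth

# Example: 20,000 objects scanned and 1 GB transferred gives roughly
# 20 s of scanning plus 20 s of transfer.
print(estimate_backup_duration(20_000, 1_000_000_000))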
Fig. 3. A shared backup domain divided into subtasks assigned to the local domain of three clients
Fig. 4. Separation lists for the three clients, derived from the subtasks shown in fig. 3
The transformation results in a single workload value WDir = TDir for each backup. If more than one backup event is considered by the workload balancing algorithm, a second transformation is needed to determine a single workload value WDir. The prototype allows computing the mean value over all backup events, or, by specifying a multiple of the standard deviation, a transformation using the Gaussian error distribution curve. The latter eliminates single peak-workload events from the list of values being considered. After these computations are performed, all directories have a single workload value WDir associated with them, so algorithms for separating a weighted tree can be used.
The modified TSM prototype client writes out backup protocols into files containing the statistical data needed for computing the workload value.
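A minimal sketch of such a transformation is given below. The exact filtering rule of the prototype is not spelled out in the text, so discarding events that lie more than a chosen multiple of the standard deviation away from the mean is one plausible reading; the function name and parameters are illustrative.

import statistics

def workload_value(durations, sigma_factor=None):
    """Collapse the per-backup durations TDir of one directory into WDir.

    With sigma_factor=None the plain mean is used. Otherwise, events whose
    duration lies more than sigma_factor standard deviations away from the
    mean are discarded first, which removes single peak-workload events.
    """
    if sigma_factor is None or len(durations) < 2:
        return statistics.mean(durations)
    mu = statistics.mean(durations)
    sigma = statistics.pstdev(durations)
    kept = [d for d in durations if abs(d - mu) <= sigma_factor * sigma]
    return statistics.mean(kept) if kept else mu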
Fig. 5. The components of the workload dispatcher prototype and their inputs and outputs relate to the model of an autonomic manager with the four different phases of monitoring, analyzing, planning, and executing [12]
3.2 Selecting the Algorithm to Separate the Workload

Known algorithms for computing the optimal separation of weighted trees have non-polynomial complexity and run time. The time available to compute a workload distribution is limited by the time between two backup windows. Large numbers of file systems and directories lead to a high number of subtasks, so a non-polynomial complexity results in a run time much longer than the time frame between two backup windows. In most cases an approximation of the optimal distribution will also fulfill the requirements of a balanced separation [8]. Therefore, another class of algorithms can be chosen which guarantees polynomial complexity under all circumstances. A similar task has to be performed for job scheduling solutions [9]. There, a list containing the workload of each job and the number of computing nodes represents the input. The goal of such job scheduling is to assign each workload to one of the computing nodes in order to minimize the maximum workload of a node [10]. In [11] the offline greedy algorithms are introduced as a typical class of job scheduling algorithms; they work as follows:

1. The list is sorted by the workload of each job in decreasing order.
2. The job with the biggest workload is removed from the list and assigned to the computing node with the least workload at this moment.
3. Step 2 is repeated until the list is empty.
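In Python, the offline greedy strategy can be sketched as follows; the function name and the node bookkeeping are illustrative.

import heapq

def offline_greedy(workloads, n_nodes):
    """Offline greedy (largest workload first) assignment.

    Returns, for each node, the indices of the workloads assigned to it.
    Step 1: sort jobs by decreasing workload; steps 2-3: repeatedly give the
    largest remaining job to the currently least loaded node.
    """
    heap = [(0.0, node) for node in range(n_nodes)]  # (current load, node id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_nodes)]
    for job in sorted(range(len(workloads)), key=lambda i: workloads[i], reverse=True):
        load, node = heapq.heappop(heap)
        assignment[node].append(job)
        heapq.heappush(heap, (load + workloads[job], node))
    return assignment

# Example: five subtasks distributed over two backup clients
print(offline_greedy([7.0, 5.0, 4.0, 3.0, 1.0], 2))  # -> [[0, 3], [1, 2, 4]]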
As [11] shows, the worst case is Tapprox / Toptimal = 4/3 - 1/(3n), where n is the number of clients, Tapprox is the amount of time needed by the approximation algorithm, and Toptimal the time of an optimal distribution. So for n >> 1 the worst case needs about 33% more time than the optimum. Nevertheless, the worst case shows up only for special subtask sizes that are not to be expected on any file server. Looking at this example for job scheduling, the separation problem can be derived from the job scheduling problem. In both application areas the algorithm has to assign multiple workloads to a number of instances. The differences to the separation of backup workloads are:

1. The workload represents computing time for job scheduling problems, while for separating directory trees the value is derived from the previous backup workload history.
2. The instance of the separation problem is a tree, while the instance of the job scheduling problem is a list.
3. The result of the job scheduling problem is the lowest possible for the given set of workloads; it does not take splitting one task into subtasks into consideration.
The first point does not make a difference to the algorithm itself because generic numbers are used in both cases. The other two points can be addressed in the following way. A list is created from the tree by using the workloads of all file systems WFSi as elements, where WFSi represents the sum of the workloads of all directories contained in file system i. By running the offline greedy algorithm on this initial list, an assignment of all file systems to client nodes is computed. In a second step it has to be verified that the resulting separation keeps the workload of every client below the maximum amount of time or bytes to be transferred. For instance, a backup window specifies a time period which should not be exceeded. The prototype divides the whole workload by the number of clients to compute the average workload to be assigned to each client. After assigning all subtasks, no client should have more than a given percentage of workload above the average. If no client exceeds this limit, the separation is taken as the final result. If one or more clients exceed the upper bound, one workload WFSi is replaced in the list by the workload values WDiri of the subdirectories of file system i, and the next iteration is performed by running the offline greedy algorithm on the modified list. Further steps may also split directories into their subdirectories. This leads to the following iterative approximation algorithm solving the separation problem:

1. Copy the file system workloads WFSi contained in the tree into a list.
2. Perform the offline greedy algorithm.
3. Test whether all client workloads are less than a specified bound.
4. If a client's workload is above the upper bound, split one workload into the workloads of its direct subdirectories.
5. Perform step 2 again.
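A compact sketch of this iteration is given below. It reuses the offline_greedy function sketched above; the tree representation, the tolerance bound, and the choice to always split the largest splittable subtree of the most loaded client are assumptions made for illustration, not the prototype's actual code. Workload values are assumed to be subtree sums, so the workloads of the direct subdirectories cover the workload of the directory they replace.

def separate(roots, children, workload, n_clients, tolerance=0.2, max_rounds=100):
    """Iterative approximation of the separation problem (steps 1-5 above).

    roots: paths of the file system roots; children: path -> list of direct
    subdirectory paths; workload: path -> aggregated workload of its subtree.
    Returns one list of paths (the local backup domain) per client.
    """
    subtasks = list(roots)
    assignment = [subtasks]  # fallback if max_rounds is exhausted early
    for _ in range(max_rounds):
        idx_assignment = offline_greedy([workload[p] for p in subtasks], n_clients)
        assignment = [[subtasks[i] for i in idxs] for idxs in idx_assignment]
        loads = [sum(workload[p] for p in paths) for paths in assignment]
        limit = (1 + tolerance) * sum(loads) / n_clients
        if all(load <= limit for load in loads):
            return assignment  # every client is within the allowed bound
        # Split the largest splittable subtree of the most loaded client
        # into its direct subdirectories (one of several possible choices).
        worst = max(range(n_clients), key=loads.__getitem__)
        splittable = [p for p in assignment[worst] if children.get(p)]
        if not splittable:
            return assignment  # nothing left to split; accept as final result
        victim = max(splittable, key=workload.__getitem__)
        subtasks.remove(victim)
        subtasks.extend(children[victim])
    return assignment

# Example: two file systems, /data being large enough to require splitting
children = {"/data": ["/data/a", "/data/b"], "/home": []}
workload = {"/data": 8.0, "/data/a": 5.0, "/data/b": 3.0, "/home": 2.0}
print(separate(["/data", "/home"], children, workload, n_clients=2))
# -> [['/data/a'], ['/data/b', '/home']]

The resulting lists would then be turned into per-client separation lists, i.e. CONTINUE options for the assigned subtrees and PRUNE options at the separation points of the other clients.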
Step 4 of the algorithm does not describe in detail which element has to be split. There are several alternatives for choosing an element, e.g.:

1. Remove the element that was added last to the client's workload and try to split it. If this element cannot be split, the solution will not improve.
2. Remove the element that was added first to the client's workload and try to split it. If this element cannot be split, the solution will not improve.

The algorithm described above provides a solution for the separation problem in polynomial time. The complexity of the offline greedy algorithm itself is linear, while the presented algorithm shows a worst-case complexity of O(N²), where N is the number of file systems and directories in the file system tree [6]. For common environments found on large file servers, the approximation provides reasonable workload distributions nearly equal to the optimal solution. The offline greedy algorithm was chosen because it was easy to adapt to the problem described above. It would be worth adapting other load balancing algorithms to the problem to analyze whether they provide better results.

The separation algorithm is implemented as a component called workload dispatcher (see fig. 5) as part of the prototype solution. It uses previous backup protocols of all clients instead of the TSM server database as its input and generates separation lists as output, avoiding any changes in the server code. It can be configured to use different transformations of the historical data, depending on the requirements of the environment. Most of the effort to create the prototype implementing adaptive workload balancing was spent on developing the workload dispatcher component. Only a few lines of code needed to be changed in the existing TSM client to integrate it into the prototype solution, while the server was used without modifications. The small integration effort shows that adaptive workload balancing can easily be integrated into existing products, fulfilling the requirement described in section 2.2. The next section presents some results generated by using the prototype in a simulated environment.
4 Results

To analyze the results of distributing a backup workload, five TSM clients backed up a shared GPFS file system. Different scenarios were run, as in the system verification tests of TSM clients, which represent typical customer environments [6].

4.1 Scenario One – Balanced Workload

After each backup about 5% of the files are modified or newly created. All changes are randomly distributed over the file system. The file sizes also vary randomly, following a typical distribution found on file servers [13].
Fig. 6. Backup durations of a file system parsed by one and by five clients in scenario 1
To show the influence of peak workloads, 25% of the files are changed after the 15th backup. Fig. 6 shows the backup durations of all backups. By using five clients in parallel, the backup takes no more than 30% of the time needed by one client; most of the backups take slightly more than 20%. Because the changes are distributed randomly over the whole file system, a peak workload like the one in the 16th backup results in a longer backup time. Nevertheless, the backup is still performed in about 30% of the time needed by a single client. The only exception is the first backup: as no statistical data is available at this point in time, one client parses the whole file system to gain this information. So the first backup can be seen as an initial configuration procedure. Further reconfigurations are not needed in this scenario. For a progressive incremental method the first backup is the only full backup, and dedicated time should be reserved to perform it.

4.2 Scenario Two – Unbalanced Workload

Scenario one demonstrated that the workload dispatcher created only one list of separation points after the first backup: the workload changes over time, but the distribution of changes over the different subtasks remains the same. A second scenario shows how the configuration is adapted for successive backups. Again, after each backup 5% of the files are modified or created. In addition, a large directory containing subdirectories and files with about 25% of the whole data is added after the second backup, and two other large directories are added after the twelfth backup.
Fig. 7 shows the backup duration of each of the five TSM clients. Because the large directory has to be backed up by client 1 in backup 3, this client needs more than double the time compared to the other clients. A similar situation occurs in the thirteenth backup event. So if a large amount of data is added at a single place within the file system, increasing the number of clients will not reduce the backup duration of the single client which has to back up significantly more data, as in backups three and thirteen.
Fig. 7. Comparison of the backup duration of five clients running in parallel as described in scenario 2 without showing the first full backup
After the backup took place, the workload dispatcher distributes the workload of the additional data between all clients, so their backups align to the same period of time again for the fourth and 14th backup. Based on experience with a large variety of different production environments, such peak workloads tend to become less probable with an increasing amount of data, as most of the data is stored only for reference and does not change anymore. One way to avoid the influence of peak workloads is to run a dedicated backup, apart from the normal schedule, for only the large set of new or modified data. In [6], other experiments show further analyses of other situations which might occur in customer environments.

4.3 Introducing a Correlation Factor

If a quantitative statement about the suitability of the presented adaptive workload algorithm in a specific environment is needed for a backup event Ei, the actual workload Wi,k of each client k needs to be compared with the forecasted workload WFi,k = Wi-1,k. Because the total workload Wi of all clients might also deviate, as seen in scenario one, the normalized workload ratio Wi,k/Wi must be considered instead of Wi,k itself. By normalizing the forecast as well, the correlation factor ci,k = 1 - | WFi,k / WFi - Wi,k / Wi | represents the deviation of the workload of client k from the expected ratio of the total workload Wi. Therefore, ci = MIN ( ci,1, …, ci,n ) defines the worst ratio over all clients for a backup Ei.
In a balanced workload distribution like in scenario one, the overall correlation factor c = MIN ( c0, …, ci ) for all backup events E0, …, Ei is close to one (in scenario one c = 0.98). If c is significantly lower than one, an unbalanced workload situation is indicated (in scenario two c = 0.86).
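A small sketch of this computation is shown below; the function and variable names are illustrative.

def correlation_factor(forecast, actual):
    """Correlation factor c_i for one backup event E_i.

    forecast holds the forecasted workloads WF_{i,k} (the previous actual
    workloads W_{i-1,k}) and actual the measured workloads W_{i,k}, one
    entry per client k. Both are normalized by their totals before the
    per-client deviations c_{i,k} = 1 - |WF_{i,k}/WF_i - W_{i,k}/W_i| are
    taken; the worst (smallest) per-client value is returned.
    """
    wf_total, w_total = sum(forecast), sum(actual)
    per_client = [1 - abs(wf / wf_total - w / w_total)
                  for wf, w in zip(forecast, actual)]
    return min(per_client)

# Example: client 4 received noticeably more workload than forecast
print(correlation_factor([10, 10, 10, 10, 10], [9, 9, 9, 14, 9]))  # -> 0.92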
5 Outlook

After the prototype was developed and the chosen approach had demonstrated its strengths and drawbacks, considering other areas in storage management seems worthwhile. The prototype assigns storage management clients to data objects like files or directories adaptively [14]. It only considers backups, but restoring data can also be carried out in parallel by multiple client nodes. The scope of a restore is specified when the activity is started, and the amount of data to be transferred cannot be forecast from any previous event. Therefore, workload distributions can be computed only for predefined scopes. Such an approach is feasible for situations like a full restore.
Fig. 8. Adaptive assignment of client nodes and their workloads to server nodes by an adaptive assignment table realized as a proxy
On the server side, too, an adaptive assignment of clients to different servers can lead to workload balancing. Current distributed storage management applications assign a client statically to a server. If a client node and its data are to be moved from one server to another, they need to be exported and imported manually. Managing hundreds or even thousands of clients can be very time consuming if the workload is to be assigned and balanced between several servers. A self-optimizing solution can be achieved by an adaptive approach equivalent to the one shown in section 2.1. If the backup workload generated by each client is known, the same algorithm is able to generate a new assignment table between clients and servers. Such an assignment table is the equivalent of the separation lists used in the prototype. The client nodes and their data have to be moved between the servers according to the assignment table (see fig. 8). Therefore, it makes sense to modify the algorithm to achieve a new assignment with a minimum of client node movements.
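One possible way to bias such a recomputation toward few client movements is sketched below. The concrete policy (a client keeps its current server as long as that server stays within a headroom above the average workload) is an assumption for illustration, not an algorithm taken from the paper.

def reassign_clients(client_workload, current_server, n_servers, headroom=0.1):
    """Recompute the client-to-server assignment table.

    client_workload: client name -> workload value; current_server: client
    name -> current server id. Clients are processed in decreasing workload
    order; a client is only moved to the least loaded server when keeping
    its current server would exceed the headroom above the average load.
    """
    avg = sum(client_workload.values()) / n_servers
    limit = (1 + headroom) * avg
    load = [0.0] * n_servers
    table = {}
    for client in sorted(client_workload, key=client_workload.get, reverse=True):
        preferred = current_server.get(client, 0)
        if load[preferred] + client_workload[client] <= limit:
            target = preferred                      # stay: no movement needed
        else:
            target = min(range(n_servers), key=load.__getitem__)
        load[target] += client_workload[client]
        table[client] = target
    return table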
A client needs to know which server should be used for backing up its data, so a proxy stores the assignment table. When a client contacts its server, the proxy can tell it the appropriate IP address. If the data of all client nodes is shared on the managed storage devices in a SAN, no movement of data objects is needed, reducing the data traffic significantly. Sharing a distributed database between all server instances removes any metadata movements at all. A client node does not need to be assigned entirely to a single server: if the proxy also tracks file spaces, this level of granularity can be used to distribute the whole workload as well. The method of adaptively assigning data objects to data management applications does not distinguish between such levels; only the number of subtasks to be assigned by the algorithm varies.

Adaptively assigning data objects to data management applications can automate many of the tasks needed to configure complex and distributed storage management applications. Typical TSM customers rarely optimize their environment because of the effort needed to perform such tasks. Furthermore, it takes time to collect the workload information, and a good workload distribution cannot be achieved easily by configuring the whole environment manually. This paper demonstrates that backup windows can be reduced significantly by parallel data transfers of multiple client nodes sharing the same data. By approximating a good workload distribution, the prototype gains this benefit automatically. If only historical information is considered, the design of existing storage management applications does not need to be changed significantly, leading to an easy integration of the concept of adaptive workload balancing. Peak workloads might lead to longer backup durations, so an analysis of the correlation factor c should be made for an environment to determine whether such events can take place.
References

1. Kaczmarski, M., Jiang, T., Pease, D.A.: Beyond backup toward storage management. IBM Systems Journal, vol. 42, no. 2, pp. 223, (2003)
2. Tivoli Storage Manager for UNIX: Backup-Archive Client Installation and User Guide, Version 5 Release 2, fourth Edition, (2003)
3. Shen, K., Yang, T., Chu, L.: Cluster Load Balancing for Fine-grain Network Services, IEEE Proc. of the Int. Parallel and Distributed Processing Symposium IPDPS'02, pp. 51b, (2002)
4. Thomé, V., Vianna, D., Costa, R., Plastino, A., Filho, O.T.: Exploring Load Balancing in a Scientific SPMD Parallel Application, IEEE Proc. of the Int. Conf. on Parallel Processing Workshops ICPPW'02, pp. 419, (2002)
5. Tanenbaum, A.: Operating Systems – Design and Implementation, Prentice Hall, (1992)
6. Schröfel, U.: Separation eines Dateisystembaums zum dynamischen Lastenausgleich, master thesis, University of Paderborn, Germany, (2002)
7. Tivoli Storage Manager for AIX: Administrator's Guide, Version 5 Release 2, second Edition, (2003)
8. Wanka, R.: Approximationsalgorithmen, http://www.upd.de/cs/agmadh/vorl/Approx02, version 2.1, (2002)
9. Borodin, A., El-Yaniv, R.: Online Computation and Competitive Analysis, Cambridge University Press, (1998)
10. Graham, R.L., Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.: Optimization and approximation in deterministic sequencing and scheduling: a survey, Annals of Discrete Mathematics 5, pp. 287, (1979)
11. Graham, R.L.: Bounds on multiprocessing timing anomalies, SIAM J. Appl. Math. 17, pp. 263, (1969)
12. Autonomic Computing Overview, IBM Research, http://www.research.ibm.com/autonomic/overview/faqs.html, (2003)
13. Akelbein, J.-P.: Ein Massenspeicher mit höherer logischer Schnittstelle – Design und Analyse, PhD thesis, University of the Federal Armed Forces, Hamburg, Shaker Verlag, (1998)
14. Akelbein, J.-P., Schröfel, U.: A method for adaptively assigning of data management applications to data objects, patent application 02019559.0, European Patent Office, (2002)
Author Index
Ahmadinia, Ali 125
Akelbein, Jens-Peter 322
Arató, Péter 169
Augel, Markus 260
Bellosa, Frank 231
Belmans, Ronnie 78
Berger, Michael 107
Bobda, Christophe 125
Braunes, Jens 156
Buchty, Rainer 184
Chen, Ming 63
Choi, Lynn 47
Chung, Tae-Sun 199
Deconinck, Geert 78
Dolev, Shlomi 31
Enzmann, Matthias 273
Eschmann, Frank 9
Faerber, Matthias 231
Floerkemeier, Christian 291
Flor, Thomas 309
Giessler, Elli 273
Haase, Jan 9
Haisch, Michael 273
Haviv, Yinnon A. 31
Heintze, Nevin 184
Hunter, Brian 273
Ilyas, Mohammad 273
Jung, Myung-Jin 199
Kim, Bumsoo 199
Klauer, Bernd 9
Knorr, Rudi 260
Köhler, Steffen 156
LaMarca, Anthony 92
Liu, Xuezheng 63
Maier, Andreas 3
Mann, Zoltán Ádám 169
Norden, Erik 4
Oliva, Dino 184
Orbán, András 169
Park, Stein 199
Ramsauer, Markus 213
Richter, Harald 140
Rodrig, Maya 92
Sauter, Patrick 309
Schneider, Markus 273
Schröfel, Ute 322
Schulz, Michael 246
Schulz, Stefan 246
Seitz, Christian 107
Shin, Yong 47
Siegemund, Frank 291
Siemers, Christian 140
Spallek, Rainer G. 156
Specht, Günther 309
Tanner, Andreas 246
Teich, Jürgen 125
Tresp, Volker 20
Vanthournout, Koen 78
Vögler, Gabriel 309
Vogt, Harald 291
Waldschmidt, Klaus 9
Weissel, Andreas 231
Wiegand, Christian 140
Yang, Guangwen 63
Yu, Kai 20